chore: update docs/dyn, add static discovery files to discovery_cache/documents (#1111)
This PR was generated using Autosynth. :rainbow:
Synth log will be available here:
https://source.cloud.google.com/results/invocations/78f53313-0c78-4a29-8841-f031665a4c6a/targets
- [ ] To automatically regenerate this PR, check this box.
Source-Link: https://github.com/googleapis/synthtool/commit/c2de32114ec484aa708d32012d1fa8d75232daf5
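Note: the regenerated docs below only reorder keyword parameters and field listings; callers that pass arguments by keyword are unaffected. A minimal sketch of calling the reordered `verifyExternalSyncSettings` method via google-api-python-client (project and instance names are placeholders, and application-default credentials are assumed):

```python
from googleapiclient import discovery

# Build the Cloud SQL Admin client (uses application-default credentials).
service = discovery.build("sqladmin", "v1beta4")

# Keyword arguments are unaffected by the syncMode/verifyConnectionOnly reordering.
request = service.projects().instances().verifyExternalSyncSettings(
    project="my-project",      # placeholder project ID
    instance="my-instance",    # placeholder Cloud SQL instance ID
    syncMode="ONLINE",
    verifyConnectionOnly=True,
)
response = request.execute()

# The response is the migration-settings error list documented below.
for error in response.get("errors", []):
    print(error.get("kind"), error.get("detail"))
```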
diff --git a/docs/dyn/sqladmin_v1beta4.projects.instances.html b/docs/dyn/sqladmin_v1beta4.projects.instances.html
index 83b1741..b64e163 100644
--- a/docs/dyn/sqladmin_v1beta4.projects.instances.html
+++ b/docs/dyn/sqladmin_v1beta4.projects.instances.html
@@ -84,7 +84,7 @@
<code><a href="#startExternalSync">startExternalSync(project, instance, syncMode=None, x__xgafv=None)</a></code></p>
<p class="firstline">Start External primary instance migration.</p>
<p class="toc_element">
- <code><a href="#verifyExternalSyncSettings">verifyExternalSyncSettings(project, instance, verifyConnectionOnly=None, syncMode=None, x__xgafv=None)</a></code></p>
+ <code><a href="#verifyExternalSyncSettings">verifyExternalSyncSettings(project, instance, syncMode=None, verifyConnectionOnly=None, x__xgafv=None)</a></code></p>
<p class="firstline">Verify External primary instance external sync settings.</p>
<h3>Method Details</h3>
<div class="method">
@@ -104,8 +104,8 @@
{ # Reschedule options for maintenance windows.
"reschedule": { # Required. The type of the reschedule the user wants.
- "rescheduleType": "A String", # Required. The type of the reschedule.
"scheduleTime": "A String", # Optional. Timestamp when the maintenance shall be rescheduled to if reschedule_type=SPECIFIC_TIME, in RFC 3339 format, for example *2012-11-15T16:19:00.094Z*.
+ "rescheduleType": "A String", # Required. The type of the reschedule.
},
}
@@ -118,73 +118,73 @@
An object of the form:
{ # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource. Next field: 18
- "user": "A String", # The email address of the user who initiated this operation.
+ "targetProject": "A String", # The project ID of the target instance related to this operation.
"selfLink": "A String", # The URI of this resource.
- "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
- "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
- "errors": [ # The list of errors encountered while processing this operation.
- { # Database instance operation error.
- "code": "A String", # Identifies the specific error that occurred.
- "message": "A String", # Additional information about the error encountered.
- "kind": "A String", # This is always *sql#operationError*.
- },
- ],
- "kind": "A String", # This is always *sql#operationErrors*.
- },
"importContext": { # Database instance import context. # The context for import operation, if applicable.
+ "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
+ "uri": "A String", # Path to the import file in Cloud Storage, in the form *gs: //bucketName/fileName*. Compressed gzip files (.gz) are supported // when *fileType* is *SQL*. The instance must have // write permissions to the bucket and read access to the file.
+ "database": "A String", # The target database for the import. If *fileType* is *SQL*, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If *fileType* is *CSV*, one database must be specified.
"bakImportOptions": { # Import parameters specific to SQL Server .BAK files
"encryptionOptions": {
- "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form *gs://bucketName/fileName*. The instance must have write permissions to the bucket and read access to the file.
- "pvkPassword": "A String", # Password that encrypts the private key
"pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form *gs://bucketName/fileName*. The instance must have write permissions to the bucket and read access to the file.
+ "pvkPassword": "A String", # Password that encrypts the private key
+ "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form *gs://bucketName/fileName*. The instance must have write permissions to the bucket and read access to the file.
},
},
"kind": "A String", # This is always *sql#importContext*.
- "uri": "A String", # Path to the import file in Cloud Storage, in the form *gs: //bucketName/fileName*. Compressed gzip files (.gz) are supported // when *fileType* is *SQL*. The instance must have // write permissions to the bucket and read access to the file.
- "database": "A String", # The target database for the import. If *fileType* is *SQL*, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If *fileType* is *CSV*, one database must be specified.
+ "fileType": "A String", # The file type for the specified uri. *SQL*: The file contains SQL statements. *CSV*: The file contains CSV data.
"csvImportOptions": { # Options for importing data as CSV.
"table": "A String", # The table to which CSV data is imported.
"columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
"A String",
],
},
- "fileType": "A String", # The file type for the specified uri. *SQL*: The file contains SQL statements. *CSV*: The file contains CSV data.
- "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
+ },
+ "targetId": "A String", # Name of the database instance related to this operation.
+ "operationType": "A String", # The type of the operation. Valid values are: *CREATE* *DELETE* *UPDATE* *RESTART* *IMPORT* *EXPORT* *BACKUP_VOLUME* *RESTORE_VOLUME* *CREATE_USER* *DELETE_USER* *CREATE_DATABASE* *DELETE_DATABASE*
+ "insertTime": "A String", # The time this operation was enqueued in UTC timezone in RFC 3339 format, for example *2012-11-15T16:19:00.094Z*.
+ "user": "A String", # The email address of the user who initiated this operation.
+ "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
+ "kind": "A String", # This is always *sql#operationErrors*.
+ "errors": [ # The list of errors encountered while processing this operation.
+ { # Database instance operation error.
+ "kind": "A String", # This is always *sql#operationError*.
+ "message": "A String", # Additional information about the error encountered.
+ "code": "A String", # Identifies the specific error that occurred.
+ },
+ ],
},
"status": "A String", # The status of an operation. Valid values are: *PENDING* *RUNNING* *DONE* *SQL_OPERATION_STATUS_UNSPECIFIED*
- "targetLink": "A String",
- "targetId": "A String", # Name of the database instance related to this operation.
+ "startTime": "A String", # The time this operation actually started in UTC timezone in RFC 3339 format, for example *2012-11-15T16:19:00.094Z*.
+ "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
"endTime": "A String", # The time this operation finished in UTC timezone in RFC 3339 format, for example *2012-11-15T16:19:00.094Z*.
- "operationType": "A String", # The type of the operation. Valid values are: *CREATE* *DELETE* *UPDATE* *RESTART* *IMPORT* *EXPORT* *BACKUP_VOLUME* *RESTORE_VOLUME* *CREATE_USER* *DELETE_USER* *CREATE_DATABASE* *DELETE_DATABASE*
"kind": "A String", # This is always *sql#operation*.
- "backupContext": { # Backup context. # The context for backup operation, if applicable.
- "backupId": "A String", # The identifier of the backup.
- "kind": "A String", # This is always *sql#backupContext*.
- },
"exportContext": { # Database instance export context. # The context for export operation, if applicable.
- "sqlExportOptions": { # Options for exporting data as SQL statements.
- "schemaOnly": True or False, # Export only schemas.
- "mysqlExportOptions": { # Options for exporting from MySQL.
- "masterData": 42, # Option to include SQL statement required to set up replication. If set to *1*, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to *2*, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than *1*, --set-gtid-purged is set to OFF.
- },
- "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
- "A String",
- ],
- },
- "kind": "A String", # This is always *sql#exportContext*.
- "fileType": "A String", # The file type for the specified uri. *SQL*: The file contains SQL statements. *CSV*: The file contains CSV data. *BAK*: The file contains backup data for a SQL Server instance.
+ "databases": [ # Databases to be exported. *MySQL instances:* If *fileType* is *SQL* and no database is specified, all databases are exported, except for the *mysql* system database. If *fileType* is *CSV*, you can specify one database, either by using this property or by using the *csvExportOptions.selectQuery* property, which takes precedence over this property. *PostgreSQL instances:* You must specify one database to be exported. If *fileType* is *CSV*, this database must match the one specified in the *csvExportOptions.selectQuery* property.
+ "A String",
+ ],
+ "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form *gs: //bucketName/fileName*. If the file already exists, the requests // succeeds, but the operation fails. If *fileType* is // *SQL* and the filename ends with .gz, the contents are // compressed.
"csvExportOptions": { # Options for exporting data as CSV. *MySQL* and *PostgreSQL* instances only.
"selectQuery": "A String", # The select query used to extract the data.
},
"offload": True or False, # Option for export offload.
- "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form *gs: //bucketName/fileName*. If the file already exists, the requests // succeeds, but the operation fails. If *fileType* is // *SQL* and the filename ends with .gz, the contents are // compressed.
- "databases": [ # Databases to be exported. *MySQL instances:* If *fileType* is *SQL* and no database is specified, all databases are exported, except for the *mysql* system database. If *fileType* is *CSV*, you can specify one database, either by using this property or by using the *csvExportOptions.selectQuery* property, which takes precedence over this property. *PostgreSQL instances:* You must specify one database to be exported. If *fileType* is *CSV*, this database must match the one specified in the *csvExportOptions.selectQuery* property.
- "A String",
- ],
+ "fileType": "A String", # The file type for the specified uri. *SQL*: The file contains SQL statements. *CSV*: The file contains CSV data. *BAK*: The file contains backup data for a SQL Server instance.
+ "kind": "A String", # This is always *sql#exportContext*.
+ "sqlExportOptions": { # Options for exporting data as SQL statements.
+ "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
+ "A String",
+ ],
+ "schemaOnly": True or False, # Export only schemas.
+ "mysqlExportOptions": { # Options for exporting from MySQL.
+ "masterData": 42, # Option to include SQL statement required to set up replication. If set to *1*, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to *2*, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than *1*, --set-gtid-purged is set to OFF.
+ },
+ },
},
- "startTime": "A String", # The time this operation actually started in UTC timezone in RFC 3339 format, for example *2012-11-15T16:19:00.094Z*.
- "insertTime": "A String", # The time this operation was enqueued in UTC timezone in RFC 3339 format, for example *2012-11-15T16:19:00.094Z*.
- "targetProject": "A String", # The project ID of the target instance related to this operation.
+ "targetLink": "A String",
+ "backupContext": { # Backup context. # The context for backup operation, if applicable.
+ "kind": "A String", # This is always *sql#backupContext*.
+ "backupId": "A String", # The identifier of the backup.
+ },
}</pre>
</div>
@@ -209,89 +209,89 @@
An object of the form:
{ # An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource. Next field: 18
- "user": "A String", # The email address of the user who initiated this operation.
+ "targetProject": "A String", # The project ID of the target instance related to this operation.
"selfLink": "A String", # The URI of this resource.
- "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
- "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
- "errors": [ # The list of errors encountered while processing this operation.
- { # Database instance operation error.
- "code": "A String", # Identifies the specific error that occurred.
- "message": "A String", # Additional information about the error encountered.
- "kind": "A String", # This is always *sql#operationError*.
- },
- ],
- "kind": "A String", # This is always *sql#operationErrors*.
- },
"importContext": { # Database instance import context. # The context for import operation, if applicable.
+ "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
+ "uri": "A String", # Path to the import file in Cloud Storage, in the form *gs: //bucketName/fileName*. Compressed gzip files (.gz) are supported // when *fileType* is *SQL*. The instance must have // write permissions to the bucket and read access to the file.
+ "database": "A String", # The target database for the import. If *fileType* is *SQL*, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If *fileType* is *CSV*, one database must be specified.
"bakImportOptions": { # Import parameters specific to SQL Server .BAK files
"encryptionOptions": {
- "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form *gs://bucketName/fileName*. The instance must have write permissions to the bucket and read access to the file.
- "pvkPassword": "A String", # Password that encrypts the private key
"pvkPath": "A String", # Path to the Certificate Private Key (.pvk) in Cloud Storage, in the form *gs://bucketName/fileName*. The instance must have write permissions to the bucket and read access to the file.
+ "pvkPassword": "A String", # Password that encrypts the private key
+ "certPath": "A String", # Path to the Certificate (.cer) in Cloud Storage, in the form *gs://bucketName/fileName*. The instance must have write permissions to the bucket and read access to the file.
},
},
"kind": "A String", # This is always *sql#importContext*.
- "uri": "A String", # Path to the import file in Cloud Storage, in the form *gs: //bucketName/fileName*. Compressed gzip files (.gz) are supported // when *fileType* is *SQL*. The instance must have // write permissions to the bucket and read access to the file.
- "database": "A String", # The target database for the import. If *fileType* is *SQL*, this field is required only if the import file does not specify a database, and is overridden by any database specification in the import file. If *fileType* is *CSV*, one database must be specified.
+ "fileType": "A String", # The file type for the specified uri. *SQL*: The file contains SQL statements. *CSV*: The file contains CSV data.
"csvImportOptions": { # Options for importing data as CSV.
"table": "A String", # The table to which CSV data is imported.
"columns": [ # The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data.
"A String",
],
},
- "fileType": "A String", # The file type for the specified uri. *SQL*: The file contains SQL statements. *CSV*: The file contains CSV data.
- "importUser": "A String", # The PostgreSQL user for this import operation. PostgreSQL instances only.
+ },
+ "targetId": "A String", # Name of the database instance related to this operation.
+ "operationType": "A String", # The type of the operation. Valid values are: *CREATE* *DELETE* *UPDATE* *RESTART* *IMPORT* *EXPORT* *BACKUP_VOLUME* *RESTORE_VOLUME* *CREATE_USER* *DELETE_USER* *CREATE_DATABASE* *DELETE_DATABASE*
+ "insertTime": "A String", # The time this operation was enqueued in UTC timezone in RFC 3339 format, for example *2012-11-15T16:19:00.094Z*.
+ "user": "A String", # The email address of the user who initiated this operation.
+ "error": { # Database instance operation errors list wrapper. # If errors occurred during processing of this operation, this field will be populated.
+ "kind": "A String", # This is always *sql#operationErrors*.
+ "errors": [ # The list of errors encountered while processing this operation.
+ { # Database instance operation error.
+ "kind": "A String", # This is always *sql#operationError*.
+ "message": "A String", # Additional information about the error encountered.
+ "code": "A String", # Identifies the specific error that occurred.
+ },
+ ],
},
"status": "A String", # The status of an operation. Valid values are: *PENDING* *RUNNING* *DONE* *SQL_OPERATION_STATUS_UNSPECIFIED*
- "targetLink": "A String",
- "targetId": "A String", # Name of the database instance related to this operation.
+ "startTime": "A String", # The time this operation actually started in UTC timezone in RFC 3339 format, for example *2012-11-15T16:19:00.094Z*.
+ "name": "A String", # An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation.
"endTime": "A String", # The time this operation finished in UTC timezone in RFC 3339 format, for example *2012-11-15T16:19:00.094Z*.
- "operationType": "A String", # The type of the operation. Valid values are: *CREATE* *DELETE* *UPDATE* *RESTART* *IMPORT* *EXPORT* *BACKUP_VOLUME* *RESTORE_VOLUME* *CREATE_USER* *DELETE_USER* *CREATE_DATABASE* *DELETE_DATABASE*
"kind": "A String", # This is always *sql#operation*.
- "backupContext": { # Backup context. # The context for backup operation, if applicable.
- "backupId": "A String", # The identifier of the backup.
- "kind": "A String", # This is always *sql#backupContext*.
- },
"exportContext": { # Database instance export context. # The context for export operation, if applicable.
- "sqlExportOptions": { # Options for exporting data as SQL statements.
- "schemaOnly": True or False, # Export only schemas.
- "mysqlExportOptions": { # Options for exporting from MySQL.
- "masterData": 42, # Option to include SQL statement required to set up replication. If set to *1*, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to *2*, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than *1*, --set-gtid-purged is set to OFF.
- },
- "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
- "A String",
- ],
- },
- "kind": "A String", # This is always *sql#exportContext*.
- "fileType": "A String", # The file type for the specified uri. *SQL*: The file contains SQL statements. *CSV*: The file contains CSV data. *BAK*: The file contains backup data for a SQL Server instance.
+ "databases": [ # Databases to be exported. *MySQL instances:* If *fileType* is *SQL* and no database is specified, all databases are exported, except for the *mysql* system database. If *fileType* is *CSV*, you can specify one database, either by using this property or by using the *csvExportOptions.selectQuery* property, which takes precedence over this property. *PostgreSQL instances:* You must specify one database to be exported. If *fileType* is *CSV*, this database must match the one specified in the *csvExportOptions.selectQuery* property.
+ "A String",
+ ],
+ "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form *gs: //bucketName/fileName*. If the file already exists, the requests // succeeds, but the operation fails. If *fileType* is // *SQL* and the filename ends with .gz, the contents are // compressed.
"csvExportOptions": { # Options for exporting data as CSV. *MySQL* and *PostgreSQL* instances only.
"selectQuery": "A String", # The select query used to extract the data.
},
"offload": True or False, # Option for export offload.
- "uri": "A String", # The path to the file in Google Cloud Storage where the export will be stored. The URI is in the form *gs: //bucketName/fileName*. If the file already exists, the requests // succeeds, but the operation fails. If *fileType* is // *SQL* and the filename ends with .gz, the contents are // compressed.
- "databases": [ # Databases to be exported. *MySQL instances:* If *fileType* is *SQL* and no database is specified, all databases are exported, except for the *mysql* system database. If *fileType* is *CSV*, you can specify one database, either by using this property or by using the *csvExportOptions.selectQuery* property, which takes precedence over this property. *PostgreSQL instances:* You must specify one database to be exported. If *fileType* is *CSV*, this database must match the one specified in the *csvExportOptions.selectQuery* property.
- "A String",
- ],
+ "fileType": "A String", # The file type for the specified uri. *SQL*: The file contains SQL statements. *CSV*: The file contains CSV data. *BAK*: The file contains backup data for a SQL Server instance.
+ "kind": "A String", # This is always *sql#exportContext*.
+ "sqlExportOptions": { # Options for exporting data as SQL statements.
+ "tables": [ # Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table.
+ "A String",
+ ],
+ "schemaOnly": True or False, # Export only schemas.
+ "mysqlExportOptions": { # Options for exporting from MySQL.
+ "masterData": 42, # Option to include SQL statement required to set up replication. If set to *1*, the dump file includes a CHANGE MASTER TO statement with the binary log coordinates, and --set-gtid-purged is set to ON. If set to *2*, the CHANGE MASTER TO statement is written as a SQL comment and has no effect. If set to any value other than *1*, --set-gtid-purged is set to OFF.
+ },
+ },
},
- "startTime": "A String", # The time this operation actually started in UTC timezone in RFC 3339 format, for example *2012-11-15T16:19:00.094Z*.
- "insertTime": "A String", # The time this operation was enqueued in UTC timezone in RFC 3339 format, for example *2012-11-15T16:19:00.094Z*.
- "targetProject": "A String", # The project ID of the target instance related to this operation.
+ "targetLink": "A String",
+ "backupContext": { # Backup context. # The context for backup operation, if applicable.
+ "kind": "A String", # This is always *sql#backupContext*.
+ "backupId": "A String", # The identifier of the backup.
+ },
}</pre>
</div>
<div class="method">
- <code class="details" id="verifyExternalSyncSettings">verifyExternalSyncSettings(project, instance, verifyConnectionOnly=None, syncMode=None, x__xgafv=None)</code>
+ <code class="details" id="verifyExternalSyncSettings">verifyExternalSyncSettings(project, instance, syncMode=None, verifyConnectionOnly=None, x__xgafv=None)</code>
<pre>Verify External primary instance external sync settings.
Args:
project: string, Project ID of the project that contains the instance. (required)
instance: string, Cloud SQL instance ID. This does not include the project ID. (required)
- verifyConnectionOnly: boolean, Flag to enable verifying connection only
syncMode: string, External sync mode
Allowed values
EXTERNAL_SYNC_MODE_UNSPECIFIED - Unknown external sync mode, will be defaulted to ONLINE mode
ONLINE - Online external sync will set up replication after initial data external sync
OFFLINE - Offline external sync only dumps and loads a one-time snapshot of the primary instance's data
+ verifyConnectionOnly: boolean, Flag to enable verifying connection only
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
@@ -301,6 +301,7 @@
An object of the form:
{ # Instance verify external sync settings response.
+ "kind": "A String", # This is always *sql#migrationSettingErrorList*.
"errors": [ # List of migration violations.
{ # External primary instance migration setting error.
"detail": "A String", # Additional information about the error encountered.
@@ -308,7 +309,6 @@
"kind": "A String", # This is always *sql#migrationSettingError*.
},
],
- "kind": "A String", # This is always *sql#migrationSettingErrorList*.
}</pre>
</div>
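The remaining reordered blocks are the shared Operation resource returned by the methods on this page (for example startExternalSync); only the field ordering in the generated docs changed. A hedged sketch of reading such a response by field name, assuming `request` is one of the method calls above:

```python
# Field names come from the generated docs above; only their ordering changed.
operation = request.execute()

print(operation["name"], operation["status"])  # e.g. PENDING / RUNNING / DONE

# If errors occurred while processing the operation, the "error" wrapper is populated.
for err in operation.get("error", {}).get("errors", []):
    print(err["code"], err["message"])
```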