docs: docs update (#911)

Thank you for opening a Pull Request! Before submitting your PR, there are a few things you can do to make sure it goes smoothly:
- [ ] Make sure to open an issue as a [bug/issue](https://github.com/googleapis/google-api-python-client/issues/new/choose) before writing your code! That way we can discuss the change, evaluate designs, and agree on the general idea.
- [ ] Ensure the tests and linter pass
- [ ] Ensure code coverage does not decrease (if any source code was changed)
- [ ] Update the appropriate docs (if necessary)

Fixes #<issue_number_goes_here> 🦕
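
For reviewers who want to sanity-check the regenerated surface, here is a minimal, illustrative sketch of paging through sessions with the discovery-based client. It assumes application default credentials are configured, and the project/instance/database IDs are placeholders; the keyword order matches the regenerated `list(database, filter=None, pageToken=None, pageSize=None, x__xgafv=None)` signature shown in the diff below.

```python
from googleapiclient.discovery import build

# Build the discovery-based Spanner client (credentials are assumed
# to come from the application default environment).
service = build("spanner", "v1")
database = "projects/my-project/instances/my-instance/databases/my-db"

# Keyword arguments follow the regenerated signature:
# list(database, filter=None, pageToken=None, pageSize=None, x__xgafv=None)
request = service.projects().instances().databases().sessions().list(
    database=database, pageSize=10
)
while request is not None:
    response = request.execute()
    for session in response.get("sessions", []):
        print(session["name"])
    # list_next(previous_request, previous_response) handles pageToken.
    request = service.projects().instances().databases().sessions().list_next(
        previous_request=request, previous_response=response
    )
```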
diff --git a/docs/dyn/spanner_v1.projects.instances.databases.sessions.html b/docs/dyn/spanner_v1.projects.instances.databases.sessions.html
index 59ece63..1632d41 100644
--- a/docs/dyn/spanner_v1.projects.instances.databases.sessions.html
+++ b/docs/dyn/spanner_v1.projects.instances.databases.sessions.html
@@ -102,7 +102,7 @@
   <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
 <p class="firstline">Gets a session. Returns `NOT_FOUND` if the session does not exist.</p>
 <p class="toc_element">
-  <code><a href="#list">list(database, pageSize=None, filter=None, pageToken=None, x__xgafv=None)</a></code></p>
+  <code><a href="#list">list(database, filter=None, pageToken=None, pageSize=None, x__xgafv=None)</a></code></p>
 <p class="firstline">Lists all sessions in a given database.</p>
 <p class="toc_element">
   <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
@@ -136,8 +136,13 @@
     The object takes the form of:
 
 { # The request for BatchCreateSessions.
-    "sessionTemplate": { # A session in the Cloud Spanner API. # Parameters to be applied to each created session.
-      "labels": { # The labels for the session.
+    &quot;sessionTemplate&quot;: { # A session in the Cloud Spanner API. # Parameters to be applied to each created session.
+      &quot;approximateLastUseTime&quot;: &quot;A String&quot;, # Output only. The approximate timestamp when the session is last used. It is
+          # typically earlier than the actual last use time.
+      &quot;name&quot;: &quot;A String&quot;, # The name of the session. This is always system-assigned; values provided
+          # when creating a session are ignored.
+      &quot;createTime&quot;: &quot;A String&quot;, # Output only. The timestamp when the session is created.
+      &quot;labels&quot;: { # The labels for the session.
           #
           #  * Label keys must be between 1 and 63 characters long and must conform to
           #    the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
@@ -146,15 +151,10 @@
           #  * No more than 64 labels can be associated with a given session.
           #
           # See https://goo.gl/xmQnxf for more information on and examples of labels.
-        "a_key": "A String",
+        &quot;a_key&quot;: &quot;A String&quot;,
       },
-      "name": "A String", # The name of the session. This is always system-assigned; values provided
-          # when creating a session are ignored.
-      "approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session is last used. It is
-          # typically earlier than the actual last use time.
-      "createTime": "A String", # Output only. The timestamp when the session is created.
     },
-    "sessionCount": 42, # Required. The number of sessions to be created in this batch call.
+    &quot;sessionCount&quot;: 42, # Required. The number of sessions to be created in this batch call.
         # The API may return fewer than the requested number of sessions. If a
         # specific number of sessions are desired, the client can make additional
         # calls to BatchCreateSessions (adjusting
@@ -170,9 +170,14 @@
   An object of the form:
 
     { # The response for BatchCreateSessions.
-    "session": [ # The freshly created sessions.
+    &quot;session&quot;: [ # The freshly created sessions.
       { # A session in the Cloud Spanner API.
-        "labels": { # The labels for the session.
+        &quot;approximateLastUseTime&quot;: &quot;A String&quot;, # Output only. The approximate timestamp when the session is last used. It is
+            # typically earlier than the actual last use time.
+        &quot;name&quot;: &quot;A String&quot;, # The name of the session. This is always system-assigned; values provided
+            # when creating a session are ignored.
+        &quot;createTime&quot;: &quot;A String&quot;, # Output only. The timestamp when the session is created.
+        &quot;labels&quot;: { # The labels for the session.
             #
             #  * Label keys must be between 1 and 63 characters long and must conform to
             #    the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
@@ -181,13 +186,8 @@
             #  * No more than 64 labels can be associated with a given session.
             #
             # See https://goo.gl/xmQnxf for more information on and examples of labels.
-          "a_key": "A String",
+          &quot;a_key&quot;: &quot;A String&quot;,
         },
-        "name": "A String", # The name of the session. This is always system-assigned; values provided
-            # when creating a session are ignored.
-        "approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session is last used. It is
-            # typically earlier than the actual last use time.
-        "createTime": "A String", # Output only. The timestamp when the session is created.
       },
     ],
   }</pre>
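
The hunks above reorder the documented `BatchCreateSessions` request and response fields without changing their shape. A hedged sketch of the call, reusing `service` and `database` from the first sketch (the session count and label value are illustrative):

```python
# Request body follows the BatchCreateSessions shape documented above:
# sessionCount plus an optional sessionTemplate applied to each session.
response = (
    service.projects().instances().databases().sessions()
    .batchCreate(
        database=database,
        body={
            "sessionCount": 5,
            "sessionTemplate": {"labels": {"env": "dev"}},
        },
    )
    .execute()
)
# The response lists the freshly created sessions under "session".
for s in response.get("session", []):
    print(s["name"])
```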
@@ -206,7 +206,7 @@
     The object takes the form of:
 
 { # The request for BeginTransaction.
-    "options": { # # Transactions # Required. Options for the new transaction.
+    &quot;options&quot;: { # # Transactions # Required. Options for the new transaction.
         #
         #
         # Each session can have at most one active transaction at a time. After the
@@ -261,7 +261,7 @@
         # Commit or
         # Rollback.  Long periods of
         # inactivity at the client may cause Cloud Spanner to release a
-        # transaction's locks and abort it.
+        # transaction&#x27;s locks and abort it.
         #
         # Conceptually, a read-write transaction consists of zero or more
         # reads or SQL statements followed by
@@ -279,7 +279,7 @@
         # that the transaction has not modified any user data in Cloud Spanner.
         #
         # Unless the transaction commits, Cloud Spanner makes no guarantees about
-        # how long the transaction's locks were held for. It is an error to
+        # how long the transaction&#x27;s locks were held for. It is an error to
         # use Cloud Spanner locks for any sort of mutual exclusion other than
         # between Cloud Spanner transactions themselves.
         #
@@ -288,7 +288,7 @@
         # When a transaction aborts, the application can choose to retry the
         # whole transaction again. To maximize the chances of successfully
         # committing the retry, the client should execute the retry in the
-        # same session as the original attempt. The original session's lock
+        # same session as the original attempt. The original session&#x27;s lock
         # priority increases with each consecutive abort, meaning that each
         # attempt has a slightly better chance of success than the previous.
         #
@@ -304,7 +304,7 @@
         # A transaction is considered idle if it has no outstanding reads or
         # SQL queries and has not started a read or SQL query within the last 10
         # seconds. Idle transactions can be aborted by Cloud Spanner so that they
-        # don't hold on to locks indefinitely. In that case, the commit will
+        # don&#x27;t hold on to locks indefinitely. In that case, the commit will
         # fail with error `ABORTED`.
         #
         # If this behavior is undesirable, periodically executing a simple
@@ -379,7 +379,7 @@
         # The timestamp can either be expressed as an absolute Cloud Spanner commit
         # timestamp or a staleness relative to the current time.
         #
-        # These modes do not require a "negotiation phase" to pick a
+        # These modes do not require a &quot;negotiation phase&quot; to pick a
         # timestamp. As a result, they execute slightly faster than the
         # equivalent boundedly stale concurrency modes. On the other hand,
         # boundedly stale reads usually return fresher results.
@@ -421,7 +421,7 @@
         #
         # Cloud Spanner continuously garbage collects deleted and overwritten data
         # in the background to reclaim storage space. This process is known
-        # as "version GC". By default, version GC reclaims versions after they
+        # as &quot;version GC&quot;. By default, version GC reclaims versions after they
         # are one hour old. Because of this, Cloud Spanner cannot perform reads
         # at read timestamps more than one hour in the past. This
         # restriction also applies to in-progress reads and/or SQL queries whose
@@ -483,19 +483,18 @@
         # Given the above, Partitioned DML is good fit for large, database-wide,
         # operations that are idempotent, such as deleting old rows from a very large
         # table.
-      "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+      &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
           #
-          # Authorization to begin a read-write transaction requires
-          # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+          # Authorization to begin a Partitioned DML transaction requires
+          # `spanner.databases.beginPartitionedDmlTransaction` permission
           # on the `session` resource.
-          # transaction type has no options.
       },
-      "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
+      &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
           #
           # Authorization to begin a read-only transaction requires
           # `spanner.databases.beginReadOnlyTransaction` permission
           # on the `session` resource.
-        "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
+        &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
             #
             # This is useful for requesting fresher data than some previous
             # read, or data that is fresh enough to observe the effects of some
@@ -503,25 +502,25 @@
             #
             # Note that this option can only be used in single-use transactions.
             #
-            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-            # Example: `"2014-10-02T15:01:23.045123456Z"`.
-        "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
+            # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+            # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+        &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
             # reads at a specific timestamp are repeatable; the same read at
             # the same timestamp always returns the same data. If the
             # timestamp is in the future, the read will block until the
-            # specified timestamp, modulo the read's deadline.
+            # specified timestamp, modulo the read&#x27;s deadline.
             #
             # Useful for large scale consistent reads such as mapreduces, or
             # for coordinating many reads against a consistent snapshot of the
             # data.
             #
-            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-            # Example: `"2014-10-02T15:01:23.045123456Z"`.
-        "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
+            # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+            # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+        &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
             # seconds. Guarantees that all writes that have committed more
             # than the specified number of seconds ago are visible. Because
             # Cloud Spanner chooses the exact timestamp, this mode works even if
-            # the client's local clock is substantially skewed from Cloud Spanner
+            # the client&#x27;s local clock is substantially skewed from Cloud Spanner
             # commit timestamps.
             #
             # Useful for reading the freshest data available at a nearby
@@ -530,27 +529,28 @@
             #
             # Note that this option can only be used in single-use
             # transactions.
-        "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
+        &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
+            # the Transaction message that describes the transaction.
+        &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
             # old. The timestamp is chosen soon after the read is started.
             #
             # Guarantees that all writes that have committed more than the
             # specified number of seconds ago are visible. Because Cloud Spanner
-            # chooses the exact timestamp, this mode works even if the client's
+            # chooses the exact timestamp, this mode works even if the client&#x27;s
             # local clock is substantially skewed from Cloud Spanner commit
             # timestamps.
             #
             # Useful for reading at nearby replicas without the distributed
             # timestamp negotiation overhead of `max_staleness`.
-        "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
-            # the Transaction message that describes the transaction.
-        "strong": True or False, # Read at a timestamp where all previously committed transactions
+        &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
             # are visible.
       },
-      "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
+      &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
           #
-          # Authorization to begin a Partitioned DML transaction requires
-          # `spanner.databases.beginPartitionedDmlTransaction` permission
+          # Authorization to begin a read-write transaction requires
+          # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
           # on the `session` resource.
+          # transaction type has no options.
       },
     },
   }
@@ -564,13 +564,7 @@
   An object of the form:
 
     { # A transaction.
-    "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
-        # for the transaction. Not returned by default: see
-        # TransactionOptions.ReadOnly.return_read_timestamp.
-        #
-        # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-        # Example: `"2014-10-02T15:01:23.045123456Z"`.
-    "id": "A String", # `id` may be used to identify the transaction in subsequent
+    &quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
         # Read,
         # ExecuteSql,
         # Commit, or
@@ -578,6 +572,12 @@
         #
         # Single-use read-only transactions do not have IDs, because
         # single-use transactions do not support multiple requests.
+    &quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
+        # for the transaction. Not returned by default: see
+        # TransactionOptions.ReadOnly.return_read_timestamp.
+        #
+        # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+        # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
   }</pre>
 </div>
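
The `TransactionOptions` keys above are reordered, not changed. As an illustrative sketch of `beginTransaction` with the documented read-only options (the session name is a placeholder, and `service`/`database` come from the first sketch):

```python
session = database + "/sessions/my-session"  # placeholder session name

body = {
    "options": {
        "readOnly": {
            # Strong reads see all previously committed transactions.
            "strong": True,
            # Ask Cloud Spanner to report the read timestamp it selected.
            "returnReadTimestamp": True,
        }
    }
}
txn = (
    service.projects().instances().databases().sessions()
    .beginTransaction(session=session, body=body)
    .execute()
)
# `id` identifies the transaction; `readTimestamp` is only present
# because returnReadTimestamp was set above.
print(txn["id"], txn.get("readTimestamp"))
```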
 
@@ -598,38 +598,13 @@
     The object takes the form of:
 
 { # The request for Commit.
-    "transactionId": "A String", # Commit a previously-started transaction.
-    "mutations": [ # The mutations to be executed when this transaction commits. All
+    &quot;mutations&quot;: [ # The mutations to be executed when this transaction commits. All
         # mutations are applied atomically, in the order they appear in
         # this list.
       { # A modification to one or more Cloud Spanner rows.  Mutations can be
           # applied to a Cloud Spanner database by sending them in a
           # Commit call.
-        "insert": { # Arguments to insert, update, insert_or_update, and # Insert new rows in a table. If any of the rows already exist,
-            # the write or transaction fails with error `ALREADY_EXISTS`.
-            # replace operations.
-          "table": "A String", # Required. The table whose rows will be written.
-          "values": [ # The values to be written. `values` can contain more than one
-              # list of values. If it does, then multiple rows are written, one
-              # for each entry in `values`. Each list in `values` must have
-              # exactly as many entries as there are entries in columns
-              # above. Sending multiple lists is equivalent to sending multiple
-              # `Mutation`s, each containing one `values` entry and repeating
-              # table and columns. Individual values in each list are
-              # encoded as described here.
-            [
-              "",
-            ],
-          ],
-          "columns": [ # The names of the columns in table to be written.
-              #
-              # The list of columns must contain enough columns to allow
-              # Cloud Spanner to derive values for all primary key columns in the
-              # row(s) to be modified.
-            "A String",
-          ],
-        },
-        "replace": { # Arguments to insert, update, insert_or_update, and # Like insert, except that if the row already exists, it is
+        &quot;replace&quot;: { # Arguments to insert, update, insert_or_update, and # Like insert, except that if the row already exists, it is
             # deleted, and the column values provided are inserted
             # instead. Unlike insert_or_update, this means any values not
             # explicitly written become `NULL`.
@@ -639,8 +614,8 @@
             # also deletes the child rows. Otherwise, you must delete the
             # child rows before you replace the parent row.
             # replace operations.
-          "table": "A String", # Required. The table whose rows will be written.
-          "values": [ # The values to be written. `values` can contain more than one
+          &quot;table&quot;: &quot;A String&quot;, # Required. The table whose rows will be written.
+          &quot;values&quot;: [ # The values to be written. `values` can contain more than one
               # list of values. If it does, then multiple rows are written, one
               # for each entry in `values`. Each list in `values` must have
               # exactly as many entries as there are entries in columns
@@ -649,76 +624,22 @@
               # table and columns. Individual values in each list are
               # encoded as described here.
             [
-              "",
+              &quot;&quot;,
             ],
           ],
-          "columns": [ # The names of the columns in table to be written.
+          &quot;columns&quot;: [ # The names of the columns in table to be written.
               #
               # The list of columns must contain enough columns to allow
               # Cloud Spanner to derive values for all primary key columns in the
               # row(s) to be modified.
-            "A String",
+            &quot;A String&quot;,
           ],
         },
-        "insertOrUpdate": { # Arguments to insert, update, insert_or_update, and # Like insert, except that if the row already exists, then
-            # its column values are overwritten with the ones provided. Any
-            # column values not explicitly written are preserved.
-            #
-            # When using insert_or_update, just as when using insert, all `NOT
-            # NULL` columns in the table must be given a value. This holds true
-            # even when the row already exists and will therefore actually be updated.
-            # replace operations.
-          "table": "A String", # Required. The table whose rows will be written.
-          "values": [ # The values to be written. `values` can contain more than one
-              # list of values. If it does, then multiple rows are written, one
-              # for each entry in `values`. Each list in `values` must have
-              # exactly as many entries as there are entries in columns
-              # above. Sending multiple lists is equivalent to sending multiple
-              # `Mutation`s, each containing one `values` entry and repeating
-              # table and columns. Individual values in each list are
-              # encoded as described here.
-            [
-              "",
-            ],
-          ],
-          "columns": [ # The names of the columns in table to be written.
-              #
-              # The list of columns must contain enough columns to allow
-              # Cloud Spanner to derive values for all primary key columns in the
-              # row(s) to be modified.
-            "A String",
-          ],
-        },
-        "update": { # Arguments to insert, update, insert_or_update, and # Update existing rows in a table. If any of the rows does not
-            # already exist, the transaction fails with error `NOT_FOUND`.
-            # replace operations.
-          "table": "A String", # Required. The table whose rows will be written.
-          "values": [ # The values to be written. `values` can contain more than one
-              # list of values. If it does, then multiple rows are written, one
-              # for each entry in `values`. Each list in `values` must have
-              # exactly as many entries as there are entries in columns
-              # above. Sending multiple lists is equivalent to sending multiple
-              # `Mutation`s, each containing one `values` entry and repeating
-              # table and columns. Individual values in each list are
-              # encoded as described here.
-            [
-              "",
-            ],
-          ],
-          "columns": [ # The names of the columns in table to be written.
-              #
-              # The list of columns must contain enough columns to allow
-              # Cloud Spanner to derive values for all primary key columns in the
-              # row(s) to be modified.
-            "A String",
-          ],
-        },
-        "delete": { # Arguments to delete operations. # Delete rows from a table. Succeeds whether or not the named
+        &quot;delete&quot;: { # Arguments to delete operations. # Delete rows from a table. Succeeds whether or not the named
             # rows were present.
-          "table": "A String", # Required. The table whose rows will be deleted.
-          "keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. The primary keys of the rows within table to delete.  The
+          &quot;keySet&quot;: { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. The primary keys of the rows within table to delete.  The
               # primary keys must be specified in the order in which they appear in the
-              # `PRIMARY KEY()` clause of the table's equivalent DDL statement (the DDL
+              # `PRIMARY KEY()` clause of the table&#x27;s equivalent DDL statement (the DDL
               # statement used to create the table).
               # Delete is idempotent. The transaction will succeed even if some or all
               # rows do not exist.
@@ -728,7 +649,7 @@
               # If the same key is specified multiple times in the set (for example
               # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
               # behaves as if the key were only specified once.
-            "ranges": [ # A list of key ranges. See KeyRange for more information about
+            &quot;ranges&quot;: [ # A list of key ranges. See KeyRange for more information about
                 # key range specifications.
               { # KeyRange represents a range of rows in a table or index.
                   #
@@ -749,19 +670,19 @@
                   #
                   # The following keys name rows in this table:
                   #
-                  #     "Bob", "2014-09-23"
+                  #     &quot;Bob&quot;, &quot;2014-09-23&quot;
                   #
-                  # Since the `UserEvents` table's `PRIMARY KEY` clause names two
+                  # Since the `UserEvents` table&#x27;s `PRIMARY KEY` clause names two
                   # columns, each `UserEvents` key has two elements; the first is the
                   # `UserName`, and the second is the `EventDate`.
                   #
                   # Key ranges with multiple components are interpreted
-                  # lexicographically by component using the table or index key's declared
+                  # lexicographically by component using the table or index key&#x27;s declared
                   # sort order. For example, the following range returns all events for
-                  # user `"Bob"` that occurred in the year 2015:
+                  # user `&quot;Bob&quot;` that occurred in the year 2015:
                   #
-                  #     "start_closed": ["Bob", "2015-01-01"]
-                  #     "end_closed": ["Bob", "2015-12-31"]
+                  #     &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2015-01-01&quot;]
+                  #     &quot;end_closed&quot;: [&quot;Bob&quot;, &quot;2015-12-31&quot;]
                   #
                   # Start and end keys can omit trailing key components. This affects the
                   # inclusion and exclusion of rows that exactly match the provided key
@@ -769,37 +690,37 @@
                   # provided components are included; if the key is open, then rows
                   # that exactly match are not included.
                   #
-                  # For example, the following range includes all events for `"Bob"` that
+                  # For example, the following range includes all events for `&quot;Bob&quot;` that
                   # occurred during and after the year 2000:
                   #
-                  #     "start_closed": ["Bob", "2000-01-01"]
-                  #     "end_closed": ["Bob"]
+                  #     &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
+                  #     &quot;end_closed&quot;: [&quot;Bob&quot;]
                   #
-                  # The next example retrieves all events for `"Bob"`:
+                  # The next example retrieves all events for `&quot;Bob&quot;`:
                   #
-                  #     "start_closed": ["Bob"]
-                  #     "end_closed": ["Bob"]
+                  #     &quot;start_closed&quot;: [&quot;Bob&quot;]
+                  #     &quot;end_closed&quot;: [&quot;Bob&quot;]
                   #
                   # To retrieve events before the year 2000:
                   #
-                  #     "start_closed": ["Bob"]
-                  #     "end_open": ["Bob", "2000-01-01"]
+                  #     &quot;start_closed&quot;: [&quot;Bob&quot;]
+                  #     &quot;end_open&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
                   #
                   # The following range includes all rows in the table:
                   #
-                  #     "start_closed": []
-                  #     "end_closed": []
+                  #     &quot;start_closed&quot;: []
+                  #     &quot;end_closed&quot;: []
                   #
                   # This range returns all users whose `UserName` begins with any
                   # character from A to C:
                   #
-                  #     "start_closed": ["A"]
-                  #     "end_open": ["D"]
+                  #     &quot;start_closed&quot;: [&quot;A&quot;]
+                  #     &quot;end_open&quot;: [&quot;D&quot;]
                   #
                   # This range returns all users whose `UserName` begins with B:
                   #
-                  #     "start_closed": ["B"]
-                  #     "end_open": ["C"]
+                  #     &quot;start_closed&quot;: [&quot;B&quot;]
+                  #     &quot;end_open&quot;: [&quot;C&quot;]
                   #
                   # Key ranges honor column sort order. For example, suppose a table is
                   # defined as follows:
@@ -812,45 +733,123 @@
                   # The following range retrieves all rows with key values between 1
                   # and 100 inclusive:
                   #
-                  #     "start_closed": ["100"]
-                  #     "end_closed": ["1"]
+                  #     &quot;start_closed&quot;: [&quot;100&quot;]
+                  #     &quot;end_closed&quot;: [&quot;1&quot;]
                   #
                   # Note that 100 is passed as the start, and 1 is passed as the end,
                   # because `Key` is a descending column in the schema.
-                "endOpen": [ # If the end is open, then the range excludes rows whose first
+                &quot;endOpen&quot;: [ # If the end is open, then the range excludes rows whose first
                     # `len(end_open)` key columns exactly match `end_open`.
-                  "",
+                  &quot;&quot;,
                 ],
-                "startOpen": [ # If the start is open, then the range excludes rows whose first
-                    # `len(start_open)` key columns exactly match `start_open`.
-                  "",
-                ],
-                "endClosed": [ # If the end is closed, then the range includes all rows whose
+                &quot;endClosed&quot;: [ # If the end is closed, then the range includes all rows whose
                     # first `len(end_closed)` key columns exactly match `end_closed`.
-                  "",
+                  &quot;&quot;,
                 ],
-                "startClosed": [ # If the start is closed, then the range includes all rows whose
+                &quot;startOpen&quot;: [ # If the start is open, then the range excludes rows whose first
+                    # `len(start_open)` key columns exactly match `start_open`.
+                  &quot;&quot;,
+                ],
+                &quot;startClosed&quot;: [ # If the start is closed, then the range includes all rows whose
                     # first `len(start_closed)` key columns exactly match `start_closed`.
-                  "",
+                  &quot;&quot;,
                 ],
               },
             ],
-            "keys": [ # A list of specific keys. Entries in `keys` should have exactly as
+            &quot;all&quot;: True or False, # For convenience `all` can be set to `true` to indicate that this
+                # `KeySet` matches all keys in the table or index. Note that any keys
+                # specified in `keys` or `ranges` are only yielded once.
+            &quot;keys&quot;: [ # A list of specific keys. Entries in `keys` should have exactly as
                 # many elements as there are columns in the primary or index key
                 # with which this `KeySet` is used.  Individual key values are
                 # encoded as described here.
               [
-                "",
+                &quot;&quot;,
               ],
             ],
-            "all": True or False, # For convenience `all` can be set to `true` to indicate that this
-                # `KeySet` matches all keys in the table or index. Note that any keys
-                # specified in `keys` or `ranges` are only yielded once.
           },
+          &quot;table&quot;: &quot;A String&quot;, # Required. The table whose rows will be deleted.
+        },
+        &quot;insertOrUpdate&quot;: { # Arguments to insert, update, insert_or_update, and # Like insert, except that if the row already exists, then
+            # its column values are overwritten with the ones provided. Any
+            # column values not explicitly written are preserved.
+            #
+            # When using insert_or_update, just as when using insert, all `NOT
+            # NULL` columns in the table must be given a value. This holds true
+            # even when the row already exists and will therefore actually be updated.
+            # replace operations.
+          &quot;table&quot;: &quot;A String&quot;, # Required. The table whose rows will be written.
+          &quot;values&quot;: [ # The values to be written. `values` can contain more than one
+              # list of values. If it does, then multiple rows are written, one
+              # for each entry in `values`. Each list in `values` must have
+              # exactly as many entries as there are entries in columns
+              # above. Sending multiple lists is equivalent to sending multiple
+              # `Mutation`s, each containing one `values` entry and repeating
+              # table and columns. Individual values in each list are
+              # encoded as described here.
+            [
+              &quot;&quot;,
+            ],
+          ],
+          &quot;columns&quot;: [ # The names of the columns in table to be written.
+              #
+              # The list of columns must contain enough columns to allow
+              # Cloud Spanner to derive values for all primary key columns in the
+              # row(s) to be modified.
+            &quot;A String&quot;,
+          ],
+        },
+        &quot;insert&quot;: { # Arguments to insert, update, insert_or_update, and # Insert new rows in a table. If any of the rows already exist,
+            # the write or transaction fails with error `ALREADY_EXISTS`.
+            # replace operations.
+          &quot;table&quot;: &quot;A String&quot;, # Required. The table whose rows will be written.
+          &quot;values&quot;: [ # The values to be written. `values` can contain more than one
+              # list of values. If it does, then multiple rows are written, one
+              # for each entry in `values`. Each list in `values` must have
+              # exactly as many entries as there are entries in columns
+              # above. Sending multiple lists is equivalent to sending multiple
+              # `Mutation`s, each containing one `values` entry and repeating
+              # table and columns. Individual values in each list are
+              # encoded as described here.
+            [
+              &quot;&quot;,
+            ],
+          ],
+          &quot;columns&quot;: [ # The names of the columns in table to be written.
+              #
+              # The list of columns must contain enough columns to allow
+              # Cloud Spanner to derive values for all primary key columns in the
+              # row(s) to be modified.
+            &quot;A String&quot;,
+          ],
+        },
+        &quot;update&quot;: { # Arguments to insert, update, insert_or_update, and # Update existing rows in a table. If any of the rows does not
+            # already exist, the transaction fails with error `NOT_FOUND`.
+            # replace operations.
+          &quot;table&quot;: &quot;A String&quot;, # Required. The table whose rows will be written.
+          &quot;values&quot;: [ # The values to be written. `values` can contain more than one
+              # list of values. If it does, then multiple rows are written, one
+              # for each entry in `values`. Each list in `values` must have
+              # exactly as many entries as there are entries in columns
+              # above. Sending multiple lists is equivalent to sending multiple
+              # `Mutation`s, each containing one `values` entry and repeating
+              # table and columns. Individual values in each list are
+              # encoded as described here.
+            [
+              &quot;&quot;,
+            ],
+          ],
+          &quot;columns&quot;: [ # The names of the columns in table to be written.
+              #
+              # The list of columns must contain enough columns to allow
+              # Cloud Spanner to derive values for all primary key columns in the
+              # row(s) to be modified.
+            &quot;A String&quot;,
+          ],
         },
       },
     ],
-    "singleUseTransaction": { # # Transactions # Execute mutations in a temporary transaction. Note that unlike
+    &quot;singleUseTransaction&quot;: { # # Transactions # Execute mutations in a temporary transaction. Note that unlike
         # commit of a previously-started transaction, commit with a
         # temporary transaction is non-idempotent. That is, if the
         # `CommitRequest` is sent to Cloud Spanner more than once (for
@@ -913,7 +912,7 @@
         # Commit or
         # Rollback.  Long periods of
         # inactivity at the client may cause Cloud Spanner to release a
-        # transaction's locks and abort it.
+        # transaction&#x27;s locks and abort it.
         #
         # Conceptually, a read-write transaction consists of zero or more
         # reads or SQL statements followed by
@@ -931,7 +930,7 @@
         # that the transaction has not modified any user data in Cloud Spanner.
         #
         # Unless the transaction commits, Cloud Spanner makes no guarantees about
-        # how long the transaction's locks were held for. It is an error to
+        # how long the transaction&#x27;s locks were held for. It is an error to
         # use Cloud Spanner locks for any sort of mutual exclusion other than
         # between Cloud Spanner transactions themselves.
         #
@@ -940,7 +939,7 @@
         # When a transaction aborts, the application can choose to retry the
         # whole transaction again. To maximize the chances of successfully
         # committing the retry, the client should execute the retry in the
-        # same session as the original attempt. The original session's lock
+        # same session as the original attempt. The original session&#x27;s lock
         # priority increases with each consecutive abort, meaning that each
         # attempt has a slightly better chance of success than the previous.
         #
@@ -956,7 +955,7 @@
         # A transaction is considered idle if it has no outstanding reads or
         # SQL queries and has not started a read or SQL query within the last 10
         # seconds. Idle transactions can be aborted by Cloud Spanner so that they
-        # don't hold on to locks indefinitely. In that case, the commit will
+        # don&#x27;t hold on to locks indefinitely. In that case, the commit will
         # fail with error `ABORTED`.
         #
         # If this behavior is undesirable, periodically executing a simple
@@ -1031,7 +1030,7 @@
         # The timestamp can either be expressed as an absolute Cloud Spanner commit
         # timestamp or a staleness relative to the current time.
         #
-        # These modes do not require a "negotiation phase" to pick a
+        # These modes do not require a &quot;negotiation phase&quot; to pick a
         # timestamp. As a result, they execute slightly faster than the
         # equivalent boundedly stale concurrency modes. On the other hand,
         # boundedly stale reads usually return fresher results.
@@ -1073,7 +1072,7 @@
         #
         # Cloud Spanner continuously garbage collects deleted and overwritten data
         # in the background to reclaim storage space. This process is known
-        # as "version GC". By default, version GC reclaims versions after they
+        # as &quot;version GC&quot;. By default, version GC reclaims versions after they
         # are one hour old. Because of this, Cloud Spanner cannot perform reads
         # at read timestamps more than one hour in the past. This
         # restriction also applies to in-progress reads and/or SQL queries whose
@@ -1135,19 +1134,18 @@
         # Given the above, Partitioned DML is good fit for large, database-wide,
         # operations that are idempotent, such as deleting old rows from a very large
         # table.
-      "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+      &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
           #
-          # Authorization to begin a read-write transaction requires
-          # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+          # Authorization to begin a Partitioned DML transaction requires
+          # `spanner.databases.beginPartitionedDmlTransaction` permission
           # on the `session` resource.
-          # transaction type has no options.
       },
-      "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
+      &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
           #
           # Authorization to begin a read-only transaction requires
           # `spanner.databases.beginReadOnlyTransaction` permission
           # on the `session` resource.
-        "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
+        &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
             #
             # This is useful for requesting fresher data than some previous
             # read, or data that is fresh enough to observe the effects of some
@@ -1155,25 +1153,25 @@
             #
             # Note that this option can only be used in single-use transactions.
             #
-            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-            # Example: `"2014-10-02T15:01:23.045123456Z"`.
-        "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
+            # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+            # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+        &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
             # reads at a specific timestamp are repeatable; the same read at
             # the same timestamp always returns the same data. If the
             # timestamp is in the future, the read will block until the
-            # specified timestamp, modulo the read's deadline.
+            # specified timestamp, modulo the read&#x27;s deadline.
             #
             # Useful for large scale consistent reads such as mapreduces, or
             # for coordinating many reads against a consistent snapshot of the
             # data.
             #
-            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-            # Example: `"2014-10-02T15:01:23.045123456Z"`.
-        "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
+            # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+            # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+        &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
             # seconds. Guarantees that all writes that have committed more
             # than the specified number of seconds ago are visible. Because
             # Cloud Spanner chooses the exact timestamp, this mode works even if
-            # the client's local clock is substantially skewed from Cloud Spanner
+            # the client&#x27;s local clock is substantially skewed from Cloud Spanner
             # commit timestamps.
             #
             # Useful for reading the freshest data available at a nearby
@@ -1182,29 +1180,31 @@
             #
             # Note that this option can only be used in single-use
             # transactions.
-        "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
+        &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
+            # the Transaction message that describes the transaction.
+        &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
             # old. The timestamp is chosen soon after the read is started.
             #
             # Guarantees that all writes that have committed more than the
             # specified number of seconds ago are visible. Because Cloud Spanner
-            # chooses the exact timestamp, this mode works even if the client's
+            # chooses the exact timestamp, this mode works even if the client&#x27;s
             # local clock is substantially skewed from Cloud Spanner commit
             # timestamps.
             #
             # Useful for reading at nearby replicas without the distributed
             # timestamp negotiation overhead of `max_staleness`.
-        "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
-            # the Transaction message that describes the transaction.
-        "strong": True or False, # Read at a timestamp where all previously committed transactions
+        &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
             # are visible.
       },
-      "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
+      &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
           #
-          # Authorization to begin a Partitioned DML transaction requires
-          # `spanner.databases.beginPartitionedDmlTransaction` permission
+          # Authorization to begin a read-write transaction requires
+          # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
           # on the `session` resource.
+          # transaction type has no options.
       },
     },
+    &quot;transactionId&quot;: &quot;A String&quot;, # Commit a previously-started transaction.
   }
 
   x__xgafv: string, V1 error format.
@@ -1216,7 +1216,7 @@
   An object of the form:
 
     { # The response for Commit.
-    "commitTimestamp": "A String", # The Cloud Spanner timestamp at which the transaction committed.
+    &quot;commitTimestamp&quot;: &quot;A String&quot;, # The Cloud Spanner timestamp at which the transaction committed.
   }</pre>
 </div>
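
A hedged sketch of `commit` using the documented `singleUseTransaction` shape; the table, columns, and values are illustrative only (INT64 key values are passed as strings per the Spanner JSON encoding):

```python
body = {
    # Execute the mutations in a temporary, single-use read-write
    # transaction, as documented above (note this path is non-idempotent).
    "singleUseTransaction": {"readWrite": {}},
    "mutations": [
        {
            "insert": {
                "table": "Singers",
                "columns": ["SingerId", "FirstName"],
                "values": [["1", "Marc"]],
            }
        }
    ],
}
result = (
    service.projects().instances().databases().sessions()
    .commit(session=session, body=body)
    .execute()
)
print(result["commitTimestamp"])
```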
 
@@ -1240,7 +1240,7 @@
 requests to it return `NOT_FOUND`.
 
 Idle sessions can be kept alive by sending a trivial SQL query
-periodically, e.g., `"SELECT 1"`.
+periodically, e.g., `&quot;SELECT 1&quot;`.
 
 Args:
   database: string, Required. The database in which the new session is created. (required)
@@ -1248,8 +1248,13 @@
     The object takes the form of:
 
 { # The request for CreateSession.
-    "session": { # A session in the Cloud Spanner API. # The session to create.
-      "labels": { # The labels for the session.
+    &quot;session&quot;: { # A session in the Cloud Spanner API. # The session to create.
+      &quot;approximateLastUseTime&quot;: &quot;A String&quot;, # Output only. The approximate timestamp when the session is last used. It is
+          # typically earlier than the actual last use time.
+      &quot;name&quot;: &quot;A String&quot;, # The name of the session. This is always system-assigned; values provided
+          # when creating a session are ignored.
+      &quot;createTime&quot;: &quot;A String&quot;, # Output only. The timestamp when the session is created.
+      &quot;labels&quot;: { # The labels for the session.
           #
           #  * Label keys must be between 1 and 63 characters long and must conform to
           #    the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
@@ -1258,13 +1263,8 @@
           #  * No more than 64 labels can be associated with a given session.
           #
           # See https://goo.gl/xmQnxf for more information on and examples of labels.
-        "a_key": "A String",
+        &quot;a_key&quot;: &quot;A String&quot;,
       },
-      "name": "A String", # The name of the session. This is always system-assigned; values provided
-          # when creating a session are ignored.
-      "approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session is last used. It is
-          # typically earlier than the actual last use time.
-      "createTime": "A String", # Output only. The timestamp when the session is created.
     },
   }
 
@@ -1277,7 +1277,12 @@
   An object of the form:
 
     { # A session in the Cloud Spanner API.
-    "labels": { # The labels for the session.
+    &quot;approximateLastUseTime&quot;: &quot;A String&quot;, # Output only. The approximate timestamp when the session is last used. It is
+        # typically earlier than the actual last use time.
+    &quot;name&quot;: &quot;A String&quot;, # The name of the session. This is always system-assigned; values provided
+        # when creating a session are ignored.
+    &quot;createTime&quot;: &quot;A String&quot;, # Output only. The timestamp when the session is created.
+    &quot;labels&quot;: { # The labels for the session.
         #
         #  * Label keys must be between 1 and 63 characters long and must conform to
         #    the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
@@ -1286,13 +1291,8 @@
         #  * No more than 64 labels can be associated with a given session.
         #
         # See https://goo.gl/xmQnxf for more information on and examples of labels.
-      "a_key": "A String",
+      &quot;a_key&quot;: &quot;A String&quot;,
     },
-    "name": "A String", # The name of the session. This is always system-assigned; values provided
-        # when creating a session are ignored.
-    "approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session is last used. It is
-        # typically earlier than the actual last use time.
-    "createTime": "A String", # Output only. The timestamp when the session is created.
   }</pre>
 </div>
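
The `create` docs above note that idle sessions can be kept alive with a trivial query such as `"SELECT 1"`. An illustrative sketch of both, assuming the `service` and `database` objects from the first sketch (the label is a placeholder):

```python
# Create a session; the name is system-assigned, so only labels matter here.
session = (
    service.projects().instances().databases().sessions()
    .create(database=database, body={"session": {"labels": {"env": "dev"}}})
    .execute()
)

# Keep an otherwise-idle session alive with a trivial query, as suggested
# in the create() documentation above.
service.projects().instances().databases().sessions().executeSql(
    session=session["name"], body={"sql": "SELECT 1"}
).execute()
```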
 
@@ -1344,15 +1344,7 @@
     The object takes the form of:
 
 { # The request for ExecuteBatchDml.
-    "seqno": "A String", # Required. A per-transaction sequence number used to identify this request. This field
-        # makes each request idempotent such that if the request is received multiple
-        # times, at most one will succeed.
-        # 
-        # The sequence number must be monotonically increasing within the
-        # transaction. If a request arrives for the first time with an out-of-order
-        # sequence number, the transaction may be aborted. Replays of previously
-        # handled requests will yield the same response as the first execution.
-    "transaction": { # This message is used to select the transaction in which a # Required. The transaction to use. Must be a read-write transaction.
+    &quot;transaction&quot;: { # This message is used to select the transaction in which a # Required. The transaction to use. Must be a read-write transaction.
         # 
         # To protect against replays, single-use transactions are not supported. The
         # caller must either supply an existing transaction ID or begin a new
@@ -1361,356 +1353,7 @@
         # ExecuteSql call runs.
         #
         # See TransactionOptions for more information about transactions.
-      "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
-          # it. The transaction ID of the new transaction is returned in
-          # ResultSetMetadata.transaction, which is a Transaction.
-          #
-          #
-          # Each session can have at most one active transaction at a time. After the
-          # active transaction is completed, the session can immediately be
-          # re-used for the next transaction. It is not necessary to create a
-          # new session for each transaction.
-          #
-          # # Transaction Modes
-          #
-          # Cloud Spanner supports three transaction modes:
-          #
-          #   1. Locking read-write. This type of transaction is the only way
-          #      to write data into Cloud Spanner. These transactions rely on
-          #      pessimistic locking and, if necessary, two-phase commit.
-          #      Locking read-write transactions may abort, requiring the
-          #      application to retry.
-          #
-          #   2. Snapshot read-only. This transaction type provides guaranteed
-          #      consistency across several reads, but does not allow
-          #      writes. Snapshot read-only transactions can be configured to
-          #      read at timestamps in the past. Snapshot read-only
-          #      transactions do not need to be committed.
-          #
-          #   3. Partitioned DML. This type of transaction is used to execute
-          #      a single Partitioned DML statement. Partitioned DML partitions
-          #      the key space and runs the DML statement over each partition
-          #      in parallel using separate, internal transactions that commit
-          #      independently. Partitioned DML transactions do not need to be
-          #      committed.
-          #
-          # For transactions that only read, snapshot read-only transactions
-          # provide simpler semantics and are almost always faster. In
-          # particular, read-only transactions do not take locks, so they do
-          # not conflict with read-write transactions. As a consequence of not
-          # taking locks, they also do not abort, so retry loops are not needed.
-          #
-          # Transactions may only read/write data in a single database. They
-          # may, however, read/write data in different tables within that
-          # database.
-          #
-          # ## Locking Read-Write Transactions
-          #
-          # Locking transactions may be used to atomically read-modify-write
-          # data anywhere in a database. This type of transaction is externally
-          # consistent.
-          #
-          # Clients should attempt to minimize the amount of time a transaction
-          # is active. Faster transactions commit with higher probability
-          # and cause less contention. Cloud Spanner attempts to keep read locks
-          # active as long as the transaction continues to do reads, and the
-          # transaction has not been terminated by
-          # Commit or
-          # Rollback.  Long periods of
-          # inactivity at the client may cause Cloud Spanner to release a
-          # transaction's locks and abort it.
-          #
-          # Conceptually, a read-write transaction consists of zero or more
-          # reads or SQL statements followed by
-          # Commit. At any time before
-          # Commit, the client can send a
-          # Rollback request to abort the
-          # transaction.
-          #
-          # ### Semantics
-          #
-          # Cloud Spanner can commit the transaction if all read locks it acquired
-          # are still valid at commit time, and it is able to acquire write
-          # locks for all writes. Cloud Spanner can abort the transaction for any
-          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
-          # that the transaction has not modified any user data in Cloud Spanner.
-          #
-          # Unless the transaction commits, Cloud Spanner makes no guarantees about
-          # how long the transaction's locks were held for. It is an error to
-          # use Cloud Spanner locks for any sort of mutual exclusion other than
-          # between Cloud Spanner transactions themselves.
-          #
-          # ### Retrying Aborted Transactions
-          #
-          # When a transaction aborts, the application can choose to retry the
-          # whole transaction again. To maximize the chances of successfully
-          # committing the retry, the client should execute the retry in the
-          # same session as the original attempt. The original session's lock
-          # priority increases with each consecutive abort, meaning that each
-          # attempt has a slightly better chance of success than the previous.
-          #
-          # Under some circumstances (e.g., many transactions attempting to
-          # modify the same row(s)), a transaction can abort many times in a
-          # short period before successfully committing. Thus, it is not a good
-          # idea to cap the number of retries a transaction can attempt;
-          # instead, it is better to limit the total amount of wall time spent
-          # retrying.
-          #
-          # ### Idle Transactions
-          #
-          # A transaction is considered idle if it has no outstanding reads or
-          # SQL queries and has not started a read or SQL query within the last 10
-          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
-          # don't hold on to locks indefinitely. In that case, the commit will
-          # fail with error `ABORTED`.
-          #
-          # If this behavior is undesirable, periodically executing a simple
-          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
-          # transaction from becoming idle.
-          #
-          # ## Snapshot Read-Only Transactions
-          #
-          # Snapshot read-only transactions provides a simpler method than
-          # locking read-write transactions for doing several consistent
-          # reads. However, this type of transaction does not support writes.
-          #
-          # Snapshot transactions do not take locks. Instead, they work by
-          # choosing a Cloud Spanner timestamp, then executing all reads at that
-          # timestamp. Since they do not acquire locks, they do not block
-          # concurrent read-write transactions.
-          #
-          # Unlike locking read-write transactions, snapshot read-only
-          # transactions never abort. They can fail if the chosen read
-          # timestamp is garbage collected; however, the default garbage
-          # collection policy is generous enough that most applications do not
-          # need to worry about this in practice.
-          #
-          # Snapshot read-only transactions do not need to call
-          # Commit or
-          # Rollback (and in fact are not
-          # permitted to do so).
-          #
-          # To execute a snapshot transaction, the client specifies a timestamp
-          # bound, which tells Cloud Spanner how to choose a read timestamp.
-          #
-          # The types of timestamp bound are:
-          #
-          #   - Strong (the default).
-          #   - Bounded staleness.
-          #   - Exact staleness.
-          #
-          # If the Cloud Spanner database to be read is geographically distributed,
-          # stale read-only transactions can execute more quickly than strong
-          # or read-write transaction, because they are able to execute far
-          # from the leader replica.
-          #
-          # Each type of timestamp bound is discussed in detail below.
-          #
-          # ### Strong
-          #
-          # Strong reads are guaranteed to see the effects of all transactions
-          # that have committed before the start of the read. Furthermore, all
-          # rows yielded by a single read are consistent with each other -- if
-          # any part of the read observes a transaction, all parts of the read
-          # see the transaction.
-          #
-          # Strong reads are not repeatable: two consecutive strong read-only
-          # transactions might return inconsistent results if there are
-          # concurrent writes. If consistency across reads is required, the
-          # reads should be executed within a transaction or at an exact read
-          # timestamp.
-          #
-          # See TransactionOptions.ReadOnly.strong.
-          #
-          # ### Exact Staleness
-          #
-          # These timestamp bounds execute reads at a user-specified
-          # timestamp. Reads at a timestamp are guaranteed to see a consistent
-          # prefix of the global transaction history: they observe
-          # modifications done by all transactions with a commit timestamp &lt;=
-          # the read timestamp, and observe none of the modifications done by
-          # transactions with a larger commit timestamp. They will block until
-          # all conflicting transactions that may be assigned commit timestamps
-          # &lt;= the read timestamp have finished.
-          #
-          # The timestamp can either be expressed as an absolute Cloud Spanner commit
-          # timestamp or a staleness relative to the current time.
-          #
-          # These modes do not require a "negotiation phase" to pick a
-          # timestamp. As a result, they execute slightly faster than the
-          # equivalent boundedly stale concurrency modes. On the other hand,
-          # boundedly stale reads usually return fresher results.
-          #
-          # See TransactionOptions.ReadOnly.read_timestamp and
-          # TransactionOptions.ReadOnly.exact_staleness.
-          #
-          # ### Bounded Staleness
-          #
-          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
-          # subject to a user-provided staleness bound. Cloud Spanner chooses the
-          # newest timestamp within the staleness bound that allows execution
-          # of the reads at the closest available replica without blocking.
-          #
-          # All rows yielded are consistent with each other -- if any part of
-          # the read observes a transaction, all parts of the read see the
-          # transaction. Boundedly stale reads are not repeatable: two stale
-          # reads, even if they use the same staleness bound, can execute at
-          # different timestamps and thus return inconsistent results.
-          #
-          # Boundedly stale reads execute in two phases: the first phase
-          # negotiates a timestamp among all replicas needed to serve the
-          # read. In the second phase, reads are executed at the negotiated
-          # timestamp.
-          #
-          # As a result of the two phase execution, bounded staleness reads are
-          # usually a little slower than comparable exact staleness
-          # reads. However, they are typically able to return fresher
-          # results, and are more likely to execute at the closest replica.
-          #
-          # Because the timestamp negotiation requires up-front knowledge of
-          # which rows will be read, it can only be used with single-use
-          # read-only transactions.
-          #
-          # See TransactionOptions.ReadOnly.max_staleness and
-          # TransactionOptions.ReadOnly.min_read_timestamp.
-          #
-          # ### Old Read Timestamps and Garbage Collection
-          #
-          # Cloud Spanner continuously garbage collects deleted and overwritten data
-          # in the background to reclaim storage space. This process is known
-          # as "version GC". By default, version GC reclaims versions after they
-          # are one hour old. Because of this, Cloud Spanner cannot perform reads
-          # at read timestamps more than one hour in the past. This
-          # restriction also applies to in-progress reads and/or SQL queries whose
-          # timestamp become too old while executing. Reads and SQL queries with
-          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
-          #
-          # ## Partitioned DML Transactions
-          #
-          # Partitioned DML transactions are used to execute DML statements with a
-          # different execution strategy that provides different, and often better,
-          # scalability properties for large, table-wide operations than DML in a
-          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
-          # should prefer using ReadWrite transactions.
-          #
-          # Partitioned DML partitions the keyspace and runs the DML statement on each
-          # partition in separate, internal transactions. These transactions commit
-          # automatically when complete, and run independently from one another.
-          #
-          # To reduce lock contention, this execution strategy only acquires read locks
-          # on rows that match the WHERE clause of the statement. Additionally, the
-          # smaller per-partition transactions hold locks for less time.
-          #
-          # That said, Partitioned DML is not a drop-in replacement for standard DML used
-          # in ReadWrite transactions.
-          #
-          #  - The DML statement must be fully-partitionable. Specifically, the statement
-          #    must be expressible as the union of many statements which each access only
-          #    a single row of the table.
-          #
-          #  - The statement is not applied atomically to all rows of the table. Rather,
-          #    the statement is applied atomically to partitions of the table, in
-          #    independent transactions. Secondary index rows are updated atomically
-          #    with the base table rows.
-          #
-          #  - Partitioned DML does not guarantee exactly-once execution semantics
-          #    against a partition. The statement will be applied at least once to each
-          #    partition. It is strongly recommended that the DML statement should be
-          #    idempotent to avoid unexpected results. For instance, it is potentially
-          #    dangerous to run a statement such as
-          #    `UPDATE table SET column = column + 1` as it could be run multiple times
-          #    against some rows.
-          #
-          #  - The partitions are committed automatically - there is no support for
-          #    Commit or Rollback. If the call returns an error, or if the client issuing
-          #    the ExecuteSql call dies, it is possible that some rows had the statement
-          #    executed on them successfully. It is also possible that statement was
-          #    never executed against other rows.
-          #
-          #  - Partitioned DML transactions may only contain the execution of a single
-          #    DML statement via ExecuteSql or ExecuteStreamingSql.
-          #
-          #  - If any error is encountered during the execution of the partitioned DML
-          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
-          #    value that cannot be stored due to schema constraints), then the
-          #    operation is stopped at that point and an error is returned. It is
-          #    possible that at this point, some partitions have been committed (or even
-          #    committed multiple times), and other partitions have not been run at all.
-          #
-          # Given the above, Partitioned DML is good fit for large, database-wide,
-          # operations that are idempotent, such as deleting old rows from a very large
-          # table.
-        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
-            #
-            # Authorization to begin a read-write transaction requires
-            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
-            # on the `session` resource.
-            # transaction type has no options.
-        },
-        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
-            #
-            # Authorization to begin a read-only transaction requires
-            # `spanner.databases.beginReadOnlyTransaction` permission
-            # on the `session` resource.
-          "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
-              #
-              # This is useful for requesting fresher data than some previous
-              # read, or data that is fresh enough to observe the effects of some
-              # previously committed transaction whose timestamp is known.
-              #
-              # Note that this option can only be used in single-use transactions.
-              #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
-              # reads at a specific timestamp are repeatable; the same read at
-              # the same timestamp always returns the same data. If the
-              # timestamp is in the future, the read will block until the
-              # specified timestamp, modulo the read's deadline.
-              #
-              # Useful for large scale consistent reads such as mapreduces, or
-              # for coordinating many reads against a consistent snapshot of the
-              # data.
-              #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
-              # seconds. Guarantees that all writes that have committed more
-              # than the specified number of seconds ago are visible. Because
-              # Cloud Spanner chooses the exact timestamp, this mode works even if
-              # the client's local clock is substantially skewed from Cloud Spanner
-              # commit timestamps.
-              #
-              # Useful for reading the freshest data available at a nearby
-              # replica, while bounding the possible staleness if the local
-              # replica has fallen behind.
-              #
-              # Note that this option can only be used in single-use
-              # transactions.
-          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
-              # old. The timestamp is chosen soon after the read is started.
-              #
-              # Guarantees that all writes that have committed more than the
-              # specified number of seconds ago are visible. Because Cloud Spanner
-              # chooses the exact timestamp, this mode works even if the client's
-              # local clock is substantially skewed from Cloud Spanner commit
-              # timestamps.
-              #
-              # Useful for reading at nearby replicas without the distributed
-              # timestamp negotiation overhead of `max_staleness`.
-          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
-              # the Transaction message that describes the transaction.
-          "strong": True or False, # Read at a timestamp where all previously committed transactions
-              # are visible.
-        },
-        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
-            #
-            # Authorization to begin a Partitioned DML transaction requires
-            # `spanner.databases.beginPartitionedDmlTransaction` permission
-            # on the `session` resource.
-        },
-      },
-      "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
+      &quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
           # This is the most efficient way to execute a transaction that
           # consists of a single SQL query.
           #
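As a hedged illustration of this single-use option (the project path, session name, and query below are placeholders, and `build` is assumed to point at the public `spanner` v1 discovery document), a one-shot strong read could look like:

```python
from googleapiclient.discovery import build

sessions = build('spanner', 'v1').projects().instances().databases().sessions()
session = 'projects/p/instances/i/databases/d/sessions/s'  # hypothetical name

# Temporary transaction: begun, used, and cleaned up in one round trip.
result = sessions.executeSql(
    session=session,
    body={
        'sql': 'SELECT 1',
        'transaction': {'singleUse': {'readOnly': {'strong': True}}},
    },
).execute()
```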
@@ -1767,7 +1410,7 @@
           # Commit or
           # Rollback.  Long periods of
           # inactivity at the client may cause Cloud Spanner to release a
-          # transaction's locks and abort it.
+          # transaction&#x27;s locks and abort it.
           #
           # Conceptually, a read-write transaction consists of zero or more
           # reads or SQL statements followed by
@@ -1785,7 +1428,7 @@
           # that the transaction has not modified any user data in Cloud Spanner.
           #
           # Unless the transaction commits, Cloud Spanner makes no guarantees about
-          # how long the transaction's locks were held for. It is an error to
+          # how long the transaction&#x27;s locks were held for. It is an error to
           # use Cloud Spanner locks for any sort of mutual exclusion other than
           # between Cloud Spanner transactions themselves.
           #
@@ -1794,7 +1437,7 @@
           # When a transaction aborts, the application can choose to retry the
           # whole transaction again. To maximize the chances of successfully
           # committing the retry, the client should execute the retry in the
-          # same session as the original attempt. The original session's lock
+          # same session as the original attempt. The original session&#x27;s lock
           # priority increases with each consecutive abort, meaning that each
           # attempt has a slightly better chance of success than the previous.
           #
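A minimal retry sketch under those guidelines, reusing the `sessions` collection and `session` name from the sketch above and assuming that `ABORTED` reaches this HTTP client as an `HttpError` with status 409:

```python
import time
from googleapiclient.errors import HttpError

DEADLINE_SECS = 60  # bound the wall time spent retrying, not the retry count
start = time.time()
while True:
    try:
        txn = sessions.beginTransaction(
            session=session, body={'options': {'readWrite': {}}}).execute()
        # ... reads and DML against txn['id'] go here ...
        sessions.commit(
            session=session,
            body={'transactionId': txn['id'], 'mutations': []}).execute()
        break
    except HttpError as err:
        # Retrying in the same session raises the lock priority each time.
        if err.resp.status != 409 or time.time() - start > DEADLINE_SECS:
            raise
```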
@@ -1810,7 +1453,7 @@
           # A transaction is considered idle if it has no outstanding reads or
           # SQL queries and has not started a read or SQL query within the last 10
           # seconds. Idle transactions can be aborted by Cloud Spanner so that they
-          # don't hold on to locks indefinitely. In that case, the commit will
+          # don&#x27;t hold on to locks indefinitely. In that case, the commit will
           # fail with error `ABORTED`.
           #
           # If this behavior is undesirable, periodically executing a simple
@@ -1885,7 +1528,7 @@
           # The timestamp can either be expressed as an absolute Cloud Spanner commit
           # timestamp or a staleness relative to the current time.
           #
-          # These modes do not require a "negotiation phase" to pick a
+          # These modes do not require a &quot;negotiation phase&quot; to pick a
           # timestamp. As a result, they execute slightly faster than the
           # equivalent boundedly stale concurrency modes. On the other hand,
           # boundedly stale reads usually return fresher results.
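For instance, a single-use read pinned 15 seconds in the past (the staleness value and SQL are illustrative; `sessions` and `session` as in the first sketch):

```python
result = sessions.executeSql(
    session=session,
    body={
        'sql': 'SELECT 1',
        # Every read in this transaction observes the database as of
        # exactly NOW - 15s.
        'transaction': {'singleUse': {'readOnly': {'exactStaleness': '15s'}}},
    },
).execute()
```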
@@ -1927,7 +1570,7 @@
           #
           # Cloud Spanner continuously garbage collects deleted and overwritten data
           # in the background to reclaim storage space. This process is known
-          # as "version GC". By default, version GC reclaims versions after they
+          # as &quot;version GC&quot;. By default, version GC reclaims versions after they
           # are one hour old. Because of this, Cloud Spanner cannot perform reads
           # at read timestamps more than one hour in the past. This
           # restriction also applies to in-progress reads and/or SQL queries whose
@@ -1989,19 +1632,18 @@
          # Given the above, Partitioned DML is a good fit for large, database-wide
           # operations that are idempotent, such as deleting old rows from a very large
           # table.
-        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+        &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
             #
-            # Authorization to begin a read-write transaction requires
-            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # Authorization to begin a Partitioned DML transaction requires
+            # `spanner.databases.beginPartitionedDmlTransaction` permission
             # on the `session` resource.
-            # transaction type has no options.
         },
-        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
+        &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
             #
             # Authorization to begin a read-only transaction requires
             # `spanner.databases.beginReadOnlyTransaction` permission
             # on the `session` resource.
-          "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
+          &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
               #
               # This is useful for requesting fresher data than some previous
               # read, or data that is fresh enough to observe the effects of some
@@ -2009,25 +1651,25 @@
               #
               # Note that this option can only be used in single-use transactions.
               #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
               # reads at a specific timestamp are repeatable; the same read at
               # the same timestamp always returns the same data. If the
               # timestamp is in the future, the read will block until the
-              # specified timestamp, modulo the read's deadline.
+              # specified timestamp, modulo the read&#x27;s deadline.
               #
               # Useful for large scale consistent reads such as mapreduces, or
               # for coordinating many reads against a consistent snapshot of the
               # data.
               #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
               # seconds. Guarantees that all writes that have committed more
               # than the specified number of seconds ago are visible. Because
               # Cloud Spanner chooses the exact timestamp, this mode works even if
-              # the client's local clock is substantially skewed from Cloud Spanner
+              # the client&#x27;s local clock is substantially skewed from Cloud Spanner
               # commit timestamps.
               #
               # Useful for reading the freshest data available at a nearby
@@ -2036,39 +1678,397 @@
               #
               # Note that this option can only be used in single-use
               # transactions.
-          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
+          &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
+              # the Transaction message that describes the transaction.
+          &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
               # old. The timestamp is chosen soon after the read is started.
               #
               # Guarantees that all writes that have committed more than the
               # specified number of seconds ago are visible. Because Cloud Spanner
-              # chooses the exact timestamp, this mode works even if the client's
+              # chooses the exact timestamp, this mode works even if the client&#x27;s
               # local clock is substantially skewed from Cloud Spanner commit
               # timestamps.
               #
               # Useful for reading at nearby replicas without the distributed
               # timestamp negotiation overhead of `max_staleness`.
-          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
-              # the Transaction message that describes the transaction.
-          "strong": True or False, # Read at a timestamp where all previously committed transactions
+          &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
               # are visible.
         },
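A comparable bounded-staleness sketch, again with placeholder SQL and the earlier `sessions`/`session` names; `returnReadTimestamp` exposes the timestamp Cloud Spanner actually chose:

```python
result = sessions.executeSql(
    session=session,
    body={
        'sql': 'SELECT 1',
        'transaction': {
            # Spanner picks the newest timestamp within the last 10 seconds
            # that a nearby replica can serve without blocking.
            'singleUse': {'readOnly': {'maxStaleness': '10s',
                                       'returnReadTimestamp': True}},
        },
    },
).execute()
```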
-        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
+        &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+            #
+            # Authorization to begin a read-write transaction requires
+            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # on the `session` resource.
+            # transaction type has no options.
+        },
+      },
+      &quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
+          # it. The transaction ID of the new transaction is returned in
+          # ResultSetMetadata.transaction, which is a Transaction.
+          #
+          #
+          # Each session can have at most one active transaction at a time. After the
+          # active transaction is completed, the session can immediately be
+          # re-used for the next transaction. It is not necessary to create a
+          # new session for each transaction.
+          #
+          # # Transaction Modes
+          #
+          # Cloud Spanner supports three transaction modes:
+          #
+          #   1. Locking read-write. This type of transaction is the only way
+          #      to write data into Cloud Spanner. These transactions rely on
+          #      pessimistic locking and, if necessary, two-phase commit.
+          #      Locking read-write transactions may abort, requiring the
+          #      application to retry.
+          #
+          #   2. Snapshot read-only. This transaction type provides guaranteed
+          #      consistency across several reads, but does not allow
+          #      writes. Snapshot read-only transactions can be configured to
+          #      read at timestamps in the past. Snapshot read-only
+          #      transactions do not need to be committed.
+          #
+          #   3. Partitioned DML. This type of transaction is used to execute
+          #      a single Partitioned DML statement. Partitioned DML partitions
+          #      the key space and runs the DML statement over each partition
+          #      in parallel using separate, internal transactions that commit
+          #      independently. Partitioned DML transactions do not need to be
+          #      committed.
+          #
+          # For transactions that only read, snapshot read-only transactions
+          # provide simpler semantics and are almost always faster. In
+          # particular, read-only transactions do not take locks, so they do
+          # not conflict with read-write transactions. As a consequence of not
+          # taking locks, they also do not abort, so retry loops are not needed.
+          #
+          # Transactions may only read/write data in a single database. They
+          # may, however, read/write data in different tables within that
+          # database.
+          #
+          # ## Locking Read-Write Transactions
+          #
+          # Locking transactions may be used to atomically read-modify-write
+          # data anywhere in a database. This type of transaction is externally
+          # consistent.
+          #
+          # Clients should attempt to minimize the amount of time a transaction
+          # is active. Faster transactions commit with higher probability
+          # and cause less contention. Cloud Spanner attempts to keep read locks
+          # active as long as the transaction continues to do reads, and the
+          # transaction has not been terminated by
+          # Commit or
+          # Rollback.  Long periods of
+          # inactivity at the client may cause Cloud Spanner to release a
+          # transaction&#x27;s locks and abort it.
+          #
+          # Conceptually, a read-write transaction consists of zero or more
+          # reads or SQL statements followed by
+          # Commit. At any time before
+          # Commit, the client can send a
+          # Rollback request to abort the
+          # transaction.
+          #
+          # ### Semantics
+          #
+          # Cloud Spanner can commit the transaction if all read locks it acquired
+          # are still valid at commit time, and it is able to acquire write
+          # locks for all writes. Cloud Spanner can abort the transaction for any
+          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
+          # that the transaction has not modified any user data in Cloud Spanner.
+          #
+          # Unless the transaction commits, Cloud Spanner makes no guarantees about
+          # how long the transaction&#x27;s locks were held for. It is an error to
+          # use Cloud Spanner locks for any sort of mutual exclusion other than
+          # between Cloud Spanner transactions themselves.
+          #
+          # ### Retrying Aborted Transactions
+          #
+          # When a transaction aborts, the application can choose to retry the
+          # whole transaction again. To maximize the chances of successfully
+          # committing the retry, the client should execute the retry in the
+          # same session as the original attempt. The original session&#x27;s lock
+          # priority increases with each consecutive abort, meaning that each
+          # attempt has a slightly better chance of success than the previous.
+          #
+          # Under some circumstances (e.g., many transactions attempting to
+          # modify the same row(s)), a transaction can abort many times in a
+          # short period before successfully committing. Thus, it is not a good
+          # idea to cap the number of retries a transaction can attempt;
+          # instead, it is better to limit the total amount of wall time spent
+          # retrying.
+          #
+          # ### Idle Transactions
+          #
+          # A transaction is considered idle if it has no outstanding reads or
+          # SQL queries and has not started a read or SQL query within the last 10
+          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
+          # don&#x27;t hold on to locks indefinitely. In that case, the commit will
+          # fail with error `ABORTED`.
+          #
+          # If this behavior is undesirable, periodically executing a simple
+          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
+          # transaction from becoming idle.
+          #
+          # ## Snapshot Read-Only Transactions
+          #
+          # Snapshot read-only transactions provide a simpler method than
+          # locking read-write transactions for doing several consistent
+          # reads. However, this type of transaction does not support writes.
+          #
+          # Snapshot transactions do not take locks. Instead, they work by
+          # choosing a Cloud Spanner timestamp, then executing all reads at that
+          # timestamp. Since they do not acquire locks, they do not block
+          # concurrent read-write transactions.
+          #
+          # Unlike locking read-write transactions, snapshot read-only
+          # transactions never abort. They can fail if the chosen read
+          # timestamp is garbage collected; however, the default garbage
+          # collection policy is generous enough that most applications do not
+          # need to worry about this in practice.
+          #
+          # Snapshot read-only transactions do not need to call
+          # Commit or
+          # Rollback (and in fact are not
+          # permitted to do so).
+          #
+          # To execute a snapshot transaction, the client specifies a timestamp
+          # bound, which tells Cloud Spanner how to choose a read timestamp.
+          #
+          # The types of timestamp bound are:
+          #
+          #   - Strong (the default).
+          #   - Bounded staleness.
+          #   - Exact staleness.
+          #
+          # If the Cloud Spanner database to be read is geographically distributed,
+          # stale read-only transactions can execute more quickly than strong
+          # or read-write transactions, because they are able to execute far
+          # from the leader replica.
+          #
+          # Each type of timestamp bound is discussed in detail below.
+          #
+          # ### Strong
+          #
+          # Strong reads are guaranteed to see the effects of all transactions
+          # that have committed before the start of the read. Furthermore, all
+          # rows yielded by a single read are consistent with each other -- if
+          # any part of the read observes a transaction, all parts of the read
+          # see the transaction.
+          #
+          # Strong reads are not repeatable: two consecutive strong read-only
+          # transactions might return inconsistent results if there are
+          # concurrent writes. If consistency across reads is required, the
+          # reads should be executed within a transaction or at an exact read
+          # timestamp.
+          #
+          # See TransactionOptions.ReadOnly.strong.
+          #
+          # ### Exact Staleness
+          #
+          # These timestamp bounds execute reads at a user-specified
+          # timestamp. Reads at a timestamp are guaranteed to see a consistent
+          # prefix of the global transaction history: they observe
+          # modifications done by all transactions with a commit timestamp &lt;=
+          # the read timestamp, and observe none of the modifications done by
+          # transactions with a larger commit timestamp. They will block until
+          # all conflicting transactions that may be assigned commit timestamps
+          # &lt;= the read timestamp have finished.
+          #
+          # The timestamp can either be expressed as an absolute Cloud Spanner commit
+          # timestamp or a staleness relative to the current time.
+          #
+          # These modes do not require a &quot;negotiation phase&quot; to pick a
+          # timestamp. As a result, they execute slightly faster than the
+          # equivalent boundedly stale concurrency modes. On the other hand,
+          # boundedly stale reads usually return fresher results.
+          #
+          # See TransactionOptions.ReadOnly.read_timestamp and
+          # TransactionOptions.ReadOnly.exact_staleness.
+          #
+          # ### Bounded Staleness
+          #
+          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
+          # subject to a user-provided staleness bound. Cloud Spanner chooses the
+          # newest timestamp within the staleness bound that allows execution
+          # of the reads at the closest available replica without blocking.
+          #
+          # All rows yielded are consistent with each other -- if any part of
+          # the read observes a transaction, all parts of the read see the
+          # transaction. Boundedly stale reads are not repeatable: two stale
+          # reads, even if they use the same staleness bound, can execute at
+          # different timestamps and thus return inconsistent results.
+          #
+          # Boundedly stale reads execute in two phases: the first phase
+          # negotiates a timestamp among all replicas needed to serve the
+          # read. In the second phase, reads are executed at the negotiated
+          # timestamp.
+          #
+          # As a result of the two-phase execution, bounded staleness reads are
+          # usually a little slower than comparable exact staleness
+          # reads. However, they are typically able to return fresher
+          # results, and are more likely to execute at the closest replica.
+          #
+          # Because the timestamp negotiation requires up-front knowledge of
+          # which rows will be read, it can only be used with single-use
+          # read-only transactions.
+          #
+          # See TransactionOptions.ReadOnly.max_staleness and
+          # TransactionOptions.ReadOnly.min_read_timestamp.
+          #
+          # ### Old Read Timestamps and Garbage Collection
+          #
+          # Cloud Spanner continuously garbage collects deleted and overwritten data
+          # in the background to reclaim storage space. This process is known
+          # as &quot;version GC&quot;. By default, version GC reclaims versions after they
+          # are one hour old. Because of this, Cloud Spanner cannot perform reads
+          # at read timestamps more than one hour in the past. This
+          # restriction also applies to in-progress reads and/or SQL queries whose
+          # timestamps become too old while executing. Reads and SQL queries with
+          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
+          #
+          # ## Partitioned DML Transactions
+          #
+          # Partitioned DML transactions are used to execute DML statements with a
+          # different execution strategy that provides different, and often better,
+          # scalability properties for large, table-wide operations than DML in a
+          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
+          # should prefer using ReadWrite transactions.
+          #
+          # Partitioned DML partitions the keyspace and runs the DML statement on each
+          # partition in separate, internal transactions. These transactions commit
+          # automatically when complete, and run independently from one another.
+          #
+          # To reduce lock contention, this execution strategy only acquires read locks
+          # on rows that match the WHERE clause of the statement. Additionally, the
+          # smaller per-partition transactions hold locks for less time.
+          #
+          # That said, Partitioned DML is not a drop-in replacement for standard DML used
+          # in ReadWrite transactions.
+          #
+          #  - The DML statement must be fully-partitionable. Specifically, the statement
+          #    must be expressible as the union of many statements which each access only
+          #    a single row of the table.
+          #
+          #  - The statement is not applied atomically to all rows of the table. Rather,
+          #    the statement is applied atomically to partitions of the table, in
+          #    independent transactions. Secondary index rows are updated atomically
+          #    with the base table rows.
+          #
+          #  - Partitioned DML does not guarantee exactly-once execution semantics
+          #    against a partition. The statement will be applied at least once to each
+          #    partition. It is strongly recommended that the DML statement be
+          #    idempotent to avoid unexpected results. For instance, it is potentially
+          #    dangerous to run a statement such as
+          #    `UPDATE table SET column = column + 1` as it could be run multiple times
+          #    against some rows.
+          #
+          #  - The partitions are committed automatically - there is no support for
+          #    Commit or Rollback. If the call returns an error, or if the client issuing
+          #    the ExecuteSql call dies, it is possible that some rows had the statement
+          #    executed on them successfully. It is also possible that the statement was
+          #    never executed against other rows.
+          #
+          #  - Partitioned DML transactions may only contain the execution of a single
+          #    DML statement via ExecuteSql or ExecuteStreamingSql.
+          #
+          #  - If any error is encountered during the execution of the partitioned DML
+          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
+          #    value that cannot be stored due to schema constraints), then the
+          #    operation is stopped at that point and an error is returned. It is
+          #    possible that at this point, some partitions have been committed (or even
+          #    committed multiple times), and other partitions have not been run at all.
+          #
+          # Given the above, Partitioned DML is a good fit for large, database-wide
+          # operations that are idempotent, such as deleting old rows from a very large
+          # table.
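A hedged end-to-end sketch of that flow (the table name and predicate are invented; `sessions` and `session` as in the first sketch):

```python
# Begin a dedicated Partitioned DML transaction, then run exactly one
# idempotent DML statement in it.
txn = sessions.beginTransaction(
    session=session, body={'options': {'partitionedDml': {}}}).execute()
result = sessions.executeSql(
    session=session,
    body={
        'sql': "DELETE FROM Events WHERE CreateTime < '2019-01-01'",
        'transaction': {'id': txn['id']},
        'seqno': '1',  # DML statements require a sequence number
    },
).execute()
# Partitioned DML reports a lower bound, not an exact count.
print(result['stats']['rowCountLowerBound'])
```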
+        &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
             #
             # Authorization to begin a Partitioned DML transaction requires
             # `spanner.databases.beginPartitionedDmlTransaction` permission
             # on the `session` resource.
         },
+        &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
+            #
+            # Authorization to begin a read-only transaction requires
+            # `spanner.databases.beginReadOnlyTransaction` permission
+            # on the `session` resource.
+          &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
+              #
+              # This is useful for requesting fresher data than some previous
+              # read, or data that is fresh enough to observe the effects of some
+              # previously committed transaction whose timestamp is known.
+              #
+              # Note that this option can only be used in single-use transactions.
+              #
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
+              # reads at a specific timestamp are repeatable; the same read at
+              # the same timestamp always returns the same data. If the
+              # timestamp is in the future, the read will block until the
+              # specified timestamp, modulo the read&#x27;s deadline.
+              #
+              # Useful for large scale consistent reads such as mapreduces, or
+              # for coordinating many reads against a consistent snapshot of the
+              # data.
+              #
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
+              # seconds. Guarantees that all writes that have committed more
+              # than the specified number of seconds ago are visible. Because
+              # Cloud Spanner chooses the exact timestamp, this mode works even if
+              # the client&#x27;s local clock is substantially skewed from Cloud Spanner
+              # commit timestamps.
+              #
+              # Useful for reading the freshest data available at a nearby
+              # replica, while bounding the possible staleness if the local
+              # replica has fallen behind.
+              #
+              # Note that this option can only be used in single-use
+              # transactions.
+          &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
+              # the Transaction message that describes the transaction.
+          &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
+              # old. The timestamp is chosen soon after the read is started.
+              #
+              # Guarantees that all writes that have committed more than the
+              # specified number of seconds ago are visible. Because Cloud Spanner
+              # chooses the exact timestamp, this mode works even if the client&#x27;s
+              # local clock is substantially skewed from Cloud Spanner commit
+              # timestamps.
+              #
+              # Useful for reading at nearby replicas without the distributed
+              # timestamp negotiation overhead of `max_staleness`.
+          &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
+              # are visible.
+        },
+        &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+            #
+            # Authorization to begin a read-write transaction requires
+            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # on the `session` resource.
+            # transaction type has no options.
+        },
       },
-      "id": "A String", # Execute the read or SQL query in a previously-started transaction.
+      &quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
     },
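To make the selector choices concrete, here is an illustrative sequence (names reused from the first sketch) that starts a transaction with `begin`, reads its id back from ResultSetMetadata.transaction, and reuses it via `id`:

```python
first = sessions.executeSql(
    session=session,
    body={'sql': 'SELECT 1',
          'transaction': {'begin': {'readOnly': {'strong': True}}}},
).execute()
# The new transaction's id is returned in the result metadata.
txn_id = first['metadata']['transaction']['id']
# Later requests in the same transaction select it by id.
sessions.executeSql(
    session=session,
    body={'sql': 'SELECT 2', 'transaction': {'id': txn_id}},
).execute()
```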
-    "statements": [ # Required. The list of statements to execute in this batch. Statements are executed
+    &quot;seqno&quot;: &quot;A String&quot;, # Required. A per-transaction sequence number used to identify this request. This field
+        # makes each request idempotent such that if the request is received multiple
+        # times, at most one will succeed.
+        # 
+        # The sequence number must be monotonically increasing within the
+        # transaction. If a request arrives for the first time with an out-of-order
+        # sequence number, the transaction may be aborted. Replays of previously
+        # handled requests will yield the same response as the first execution.
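One hedged way to honor that contract is a counter scoped to the transaction (the DML text is a placeholder; `txn_id` as in the previous sketch):

```python
seqno = 0
for sql in ('UPDATE T SET C = 1 WHERE TRUE',):  # hypothetical DML
    seqno += 1  # strictly increasing within the transaction
    sessions.executeSql(
        session=session,
        body={'sql': sql, 'transaction': {'id': txn_id},
              'seqno': str(seqno)},
    ).execute()
```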
+    &quot;statements&quot;: [ # Required. The list of statements to execute in this batch. Statements are executed
         # serially, such that the effects of statement `i` are visible to statement
         # `i+1`. Each statement must be a DML statement. Execution stops at the
         # first failed statement; the remaining statements are not executed.
         # 
         # Callers must provide at least one statement.
       { # A single DML statement.
-        "paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
+        &quot;paramTypes&quot;: { # It is not always possible for Cloud Spanner to infer the right SQL type
             # from a JSON value.  For example, values of type `BYTES` and values
             # of type `STRING` both appear in params as JSON strings.
             #
@@ -2076,34 +2076,35 @@
             # SQL type for some or all of the SQL statement parameters. See the
             # definition of Type for more information
             # about SQL types.
-          "a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
+          &quot;a_key&quot;: { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
               # table cell or returned from an SQL query.
-            "structType": { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
-                # provides type information for the struct's fields.
-              "fields": [ # The list of fields that make up this struct. Order is
+            &quot;arrayElementType&quot;: # Object with schema name: Type # If code == ARRAY, then `array_element_type`
+                # is the type of the array elements.
+            &quot;code&quot;: &quot;A String&quot;, # Required. The TypeCode for this type.
+            &quot;structType&quot;: { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
+                # provides type information for the struct&#x27;s fields.
+              &quot;fields&quot;: [ # The list of fields that make up this struct. Order is
                   # significant, because values of this struct type are represented as
                   # lists, where the order of field values matches the order of
                   # fields in the StructType. In turn, the order of fields
                   # matches the order of columns in a read request, or the order of
                   # fields in the `SELECT` clause of a query.
                 { # Message representing a single field of a struct.
-                  "type": # Object with schema name: Type # The type of the field.
-                  "name": "A String", # The name of the field. For reads, this is the column name. For
-                      # SQL queries, it is the column alias (e.g., `"Word"` in the
-                      # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
-                      # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
-                      # columns might have an empty name (e.g., !"SELECT
-                      # UPPER(ColName)"`). Note that a query result can contain
+                  &quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
+                      # SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
+                      # query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
+                      # `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
+                      # columns might have an empty name (e.g., `&quot;SELECT
+                      # UPPER(ColName)&quot;`). Note that a query result can contain
                       # multiple fields with the same name.
+                  &quot;type&quot;: # Object with schema name: Type # The type of the field.
                 },
               ],
             },
-            "code": "A String", # Required. The TypeCode for this type.
-            "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
-                # is the type of the array elements.
           },
         },
-        "params": { # Parameter names and values that bind to placeholders in the DML string.
+        &quot;sql&quot;: &quot;A String&quot;, # Required. The DML string.
+        &quot;params&quot;: { # Parameter names and values that bind to placeholders in the DML string.
             #
             # A parameter placeholder consists of the `@` character followed by the
             # parameter name (for example, `@firstName`). Parameter names can contain
@@ -2112,12 +2113,11 @@
             # Parameters can appear anywhere that a literal value is expected.  The
             # same parameter name can be used more than once, for example:
             #
-            # `"WHERE id &gt; @msg_id AND id &lt; @msg_id + 100"`
+            # `&quot;WHERE id &gt; @msg_id AND id &lt; @msg_id + 100&quot;`
             #
             # It is an error to execute a SQL statement with unbound parameters.
-          "a_key": "", # Properties of the object.
+          &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
         },
-        "sql": "A String", # Required. The DML string.
       },
     ],
   }
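Putting the request body together, a hedged end-to-end call might look like this (project path, table, columns, and values are all illustrative):

```python
from googleapiclient.discovery import build

sessions = build('spanner', 'v1').projects().instances().databases().sessions()
session = 'projects/p/instances/i/databases/d/sessions/s'  # hypothetical

response = sessions.executeBatchDml(
    session=session,
    body={
        'transaction': {'begin': {'readWrite': {}}},
        'seqno': '1',
        'statements': [{
            'sql': 'UPDATE Singers SET FirstName = @name WHERE SingerId = @id',
            'params': {'name': 'Marc', 'id': '1'},
            # INT64 values travel as JSON strings, hence the explicit type.
            'paramTypes': {'id': {'code': 'INT64'}},
        }],
    },
).execute()
```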
@@ -2154,7 +2154,7 @@
       # * Response: 2 ResultSet messages, and a syntax error (`INVALID_ARGUMENT`)
       #   status. The number of ResultSet messages indicates that the third
       #   statement failed, and the fourth and fifth statements were not executed.
-    "status": { # The `Status` type defines a logical error model that is suitable for # If all DML statements are executed successfully, the status is `OK`.
+    &quot;status&quot;: { # The `Status` type defines a logical error model that is suitable for # If all DML statements are executed successfully, the status is `OK`.
         # Otherwise, the error status of the first failed statement.
         # different programming environments, including REST APIs and RPC APIs. It is
         # used by [gRPC](https://github.com/grpc). Each `Status` message contains
@@ -2162,18 +2162,18 @@
         #
         # You can find out more about this error model and how to work with it in the
         # [API Design Guide](https://cloud.google.com/apis/design/errors).
-      "message": "A String", # A developer-facing error message, which should be in English. Any
+      &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
+      &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
           # user-facing error message should be localized and sent in the
           # google.rpc.Status.details field, or localized by the client.
-      "code": 42, # The status code, which should be an enum value of google.rpc.Code.
-      "details": [ # A list of messages that carry the error details.  There is a common set of
+      &quot;details&quot;: [ # A list of messages that carry the error details.  There is a common set of
           # message types for APIs to use.
         {
-          "a_key": "", # Properties of the object. Contains field @type with type URL.
+          &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
         },
       ],
     },
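Since `resultSets` holds one entry per statement that ran successfully, the index of the failing statement can be recovered from the response of the previous sketch; a hedged check:

```python
status = response.get('status', {})
if status.get('code', 0) != 0:  # 0 is google.rpc.Code OK
    # Statements before this index succeeded; this one failed.
    failed_index = len(response.get('resultSets', []))
    raise RuntimeError('statement %d failed: %s'
                       % (failed_index, status.get('message')))
```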
-    "resultSets": [ # One ResultSet for each statement in the request that ran successfully,
+    &quot;resultSets&quot;: [ # One ResultSet for each statement in the request that ran successfully,
         # in the same order as the statements in the request. Each ResultSet does
         # not contain any rows. The ResultSetStats in each ResultSet contain
         # the number of rows modified by the statement.
@@ -2182,17 +2182,7 @@
         # ResultSetMetadata.
       { # Results from Read or
           # ExecuteSql.
-        "rows": [ # Each element in `rows` is a row whose format is defined by
-            # metadata.row_type. The ith element
-            # in each row matches the ith field in
-            # metadata.row_type. Elements are
-            # encoded based on type as described
-            # here.
-          [
-            "",
-          ],
-        ],
-        "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
+        &quot;stats&quot;: { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
             # produced this result set. These can be requested by setting
             # ExecuteSqlRequest.query_mode.
             # DML statements always produce stats containing the number of rows
@@ -2200,31 +2190,63 @@
             # ExecuteSqlRequest.QueryMode.PLAN ExecuteSqlRequest.query_mode.
             # Other fields may or may not be populated, based on the
             # ExecuteSqlRequest.query_mode.
-          "rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
+          &quot;rowCountExact&quot;: &quot;A String&quot;, # Standard DML returns an exact count of rows that were modified.
+          &quot;queryStats&quot;: { # Aggregated statistics from the execution of the query. Only present when
+              # the query is profiled. For example, a query could return the statistics as
+              # follows:
+              #
+              #     {
+              #       &quot;rows_returned&quot;: &quot;3&quot;,
+              #       &quot;elapsed_time&quot;: &quot;1.22 secs&quot;,
+              #       &quot;cpu_time&quot;: &quot;1.19 secs&quot;
+              #     }
+            &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+          },
+          &quot;rowCountLowerBound&quot;: &quot;A String&quot;, # Partitioned DML does not offer exactly-once semantics, so it
               # returns a lower bound of the rows modified.
-          "rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
-          "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
-            "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
-                # with the plan root. Each PlanNode's `id` corresponds to its index in
+          &quot;queryPlan&quot;: { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
+            &quot;planNodes&quot;: [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
+                # with the plan root. Each PlanNode&#x27;s `id` corresponds to its index in
                 # `plan_nodes`.
               { # Node information for nodes appearing in a QueryPlan.plan_nodes.
-                "index": 42, # The `PlanNode`'s index in node list.
-                "kind": "A String", # Used to determine the type of node. May be needed for visualizing
+                &quot;metadata&quot;: { # Attributes relevant to the node contained in a group of key-value pairs.
+                    # For example, a Parameter Reference node could have the following
+                    # information in its metadata:
+                    #
+                    #     {
+                    #       &quot;parameter_reference&quot;: &quot;param1&quot;,
+                    #       &quot;parameter_type&quot;: &quot;array&quot;
+                    #     }
+                  &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+                },
+                &quot;executionStats&quot;: { # The execution statistics associated with the node, contained in a group of
+                    # key-value pairs. Only present if the plan was returned as a result of a
+                    # profile query. For example, number of executions, number of rows/time per
+                    # execution etc.
+                  &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+                },
+                &quot;shortRepresentation&quot;: { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
+                    # `SCALAR` PlanNode(s).
+                  &quot;description&quot;: &quot;A String&quot;, # A string representation of the expression subtree rooted at this node.
+                  &quot;subqueries&quot;: { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
+                      # where the `description` string of this node references a `SCALAR`
+                      # subquery contained in the expression subtree rooted at this node. The
+                      # referenced `SCALAR` subquery may not necessarily be a direct child of
+                      # this node.
+                    &quot;a_key&quot;: 42,
+                  },
+                },
+                &quot;index&quot;: 42, # The `PlanNode`&#x27;s index in node list.
+                &quot;kind&quot;: &quot;A String&quot;, # Used to determine the type of node. May be needed for visualizing
                     # different kinds of nodes differently. For example, if the node is a
                     # SCALAR node, it will have a condensed representation
                     # which can be used to directly embed a description of the node in its
                     # parent.
-                "displayName": "A String", # The display name for the node.
-                "executionStats": { # The execution statistics associated with the node, contained in a group of
-                    # key-value pairs. Only present if the plan was returned as a result of a
-                    # profile query. For example, number of executions, number of rows/time per
-                    # execution etc.
-                  "a_key": "", # Properties of the object.
-                },
-                "childLinks": [ # List of child node `index`es and their relationship to this parent.
+                &quot;displayName&quot;: &quot;A String&quot;, # The display name for the node.
+                &quot;childLinks&quot;: [ # List of child node `index`es and their relationship to this parent.
                   { # Metadata associated with a parent-child relationship appearing in a
                       # PlanNode.
-                    "variable": "A String", # Only present if the child node is SCALAR and corresponds
+                    &quot;variable&quot;: &quot;A String&quot;, # Only present if the child node is SCALAR and corresponds
                         # to an output variable of the parent node. The field carries the name of
                         # the output variable.
                         # For example, a `TableScan` operator that reads rows from a table will
@@ -2232,85 +2254,57 @@
                         # created for each column that is read by the operator. The corresponding
                         # `variable` fields will be set to the variable names assigned to the
                         # columns.
-                    "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
+                    &quot;childIndex&quot;: 42, # The node to which the link points.
+                    &quot;type&quot;: &quot;A String&quot;, # The type of the link. For example, in Hash Joins this could be used to
                         # distinguish between the build child and the probe child, or in the case
                         # of the child being an output variable, to represent the tag associated
                         # with the output variable.
-                    "childIndex": 42, # The node to which the link points.
                   },
                 ],
-                "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
-                    # `SCALAR` PlanNode(s).
-                  "subqueries": { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
-                      # where the `description` string of this node references a `SCALAR`
-                      # subquery contained in the expression subtree rooted at this node. The
-                      # referenced `SCALAR` subquery may not necessarily be a direct child of
-                      # this node.
-                    "a_key": 42,
-                  },
-                  "description": "A String", # A string representation of the expression subtree rooted at this node.
-                },
-                "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
-                    # For example, a Parameter Reference node could have the following
-                    # information in its metadata:
-                    #
-                    #     {
-                    #       "parameter_reference": "param1",
-                    #       "parameter_type": "array"
-                    #     }
-                  "a_key": "", # Properties of the object.
-                },
               },
             ],
           },
-          "queryStats": { # Aggregated statistics from the execution of the query. Only present when
-              # the query is profiled. For example, a query could return the statistics as
-              # follows:
-              #
-              #     {
-              #       "rows_returned": "3",
-              #       "elapsed_time": "1.22 secs",
-              #       "cpu_time": "1.19 secs"
-              #     }
-            "a_key": "", # Properties of the object.
-          },
         },
-        "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
-          "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
-              # set.  For example, a SQL query like `"SELECT UserId, UserName FROM
-              # Users"` could return a `row_type` value like:
+        &quot;rows&quot;: [ # Each element in `rows` is a row whose format is defined by
+            # metadata.row_type. The ith element
+            # in each row matches the ith field in
+            # metadata.row_type. Elements are
+            # encoded based on type as described
+            # here.
+          [
+            &quot;&quot;,
+          ],
+        ],
+        &quot;metadata&quot;: { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
+          &quot;rowType&quot;: { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
+              # set.  For example, a SQL query like `&quot;SELECT UserId, UserName FROM
+              # Users&quot;` could return a `row_type` value like:
               #
-              #     "fields": [
-              #       { "name": "UserId", "type": { "code": "INT64" } },
-              #       { "name": "UserName", "type": { "code": "STRING" } },
+              #     &quot;fields&quot;: [
+              #       { &quot;name&quot;: &quot;UserId&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;INT64&quot; } },
+              #       { &quot;name&quot;: &quot;UserName&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;STRING&quot; } },
               #     ]
-            "fields": [ # The list of fields that make up this struct. Order is
+            &quot;fields&quot;: [ # The list of fields that make up this struct. Order is
                 # significant, because values of this struct type are represented as
                 # lists, where the order of field values matches the order of
                 # fields in the StructType. In turn, the order of fields
                 # matches the order of columns in a read request, or the order of
                 # fields in the `SELECT` clause of a query.
               { # Message representing a single field of a struct.
-                "type": # Object with schema name: Type # The type of the field.
-                "name": "A String", # The name of the field. For reads, this is the column name. For
-                    # SQL queries, it is the column alias (e.g., `"Word"` in the
-                    # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
-                    # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
-                    # columns might have an empty name (e.g., !"SELECT
-                    # UPPER(ColName)"`). Note that a query result can contain
+                &quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
+                    # SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
+                    # query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
+                    # `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
+                    # columns might have an empty name (e.g., `&quot;SELECT
+                    # UPPER(ColName)&quot;`). Note that a query result can contain
                     # multiple fields with the same name.
+                &quot;type&quot;: # Object with schema name: Type # The type of the field.
               },
             ],
           },
-          "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
+          &quot;transaction&quot;: { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
               # information about the new transaction is yielded here.
-            "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
-                # for the transaction. Not returned by default: see
-                # TransactionOptions.ReadOnly.return_read_timestamp.
-                #
-                # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-                # Example: `"2014-10-02T15:01:23.045123456Z"`.
-            "id": "A String", # `id` may be used to identify the transaction in subsequent
+            &quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
                 # Read,
                 # ExecuteSql,
                 # Commit, or
@@ -2318,6 +2312,12 @@
                 #
                 # Single-use read-only transactions do not have IDs, because
                 # single-use transactions do not support multiple requests.
+            &quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
+                # for the transaction. Not returned by default: see
+                # TransactionOptions.ReadOnly.return_read_timestamp.
+                #
+                # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+                # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
           },
         },
       },
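
As a quick orientation aid, a sketch of pulling the side-effect transaction out of one deserialized `ResultSet` of the shape above (`result_set` is an assumed input):

```python
def transaction_info(result_set):
    """Return (id, readTimestamp) from a ResultSet's metadata, if any."""
    txn = result_set.get("metadata", {}).get("transaction", {})
    # Single-use transactions carry no id, and readTimestamp is returned
    # only when TransactionOptions.ReadOnly.return_read_timestamp is set.
    return txn.get("id"), txn.get("readTimestamp")
```
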
@@ -2346,7 +2346,10 @@
 
 { # The request for ExecuteSql and
       # ExecuteStreamingSql.
-    "transaction": { # This message is used to select the transaction in which a # The transaction to use.
+    &quot;queryMode&quot;: &quot;A String&quot;, # Used to control the amount of debugging information returned in
+        # ResultSetStats. If partition_token is set, query_mode can only
+        # be set to QueryMode.NORMAL.
+    &quot;transaction&quot;: { # This message is used to select the transaction in which a # The transaction to use.
         # 
         # For queries, if none is provided, the default is a temporary read-only
         # transaction with strong concurrency.
@@ -2360,356 +2363,7 @@
         # ExecuteSql call runs.
         #
         # See TransactionOptions for more information about transactions.
-      "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
-          # it. The transaction ID of the new transaction is returned in
-          # ResultSetMetadata.transaction, which is a Transaction.
-          #
-          #
-          # Each session can have at most one active transaction at a time. After the
-          # active transaction is completed, the session can immediately be
-          # re-used for the next transaction. It is not necessary to create a
-          # new session for each transaction.
-          #
-          # # Transaction Modes
-          #
-          # Cloud Spanner supports three transaction modes:
-          #
-          #   1. Locking read-write. This type of transaction is the only way
-          #      to write data into Cloud Spanner. These transactions rely on
-          #      pessimistic locking and, if necessary, two-phase commit.
-          #      Locking read-write transactions may abort, requiring the
-          #      application to retry.
-          #
-          #   2. Snapshot read-only. This transaction type provides guaranteed
-          #      consistency across several reads, but does not allow
-          #      writes. Snapshot read-only transactions can be configured to
-          #      read at timestamps in the past. Snapshot read-only
-          #      transactions do not need to be committed.
-          #
-          #   3. Partitioned DML. This type of transaction is used to execute
-          #      a single Partitioned DML statement. Partitioned DML partitions
-          #      the key space and runs the DML statement over each partition
-          #      in parallel using separate, internal transactions that commit
-          #      independently. Partitioned DML transactions do not need to be
-          #      committed.
-          #
-          # For transactions that only read, snapshot read-only transactions
-          # provide simpler semantics and are almost always faster. In
-          # particular, read-only transactions do not take locks, so they do
-          # not conflict with read-write transactions. As a consequence of not
-          # taking locks, they also do not abort, so retry loops are not needed.
-          #
-          # Transactions may only read/write data in a single database. They
-          # may, however, read/write data in different tables within that
-          # database.
-          #
-          # ## Locking Read-Write Transactions
-          #
-          # Locking transactions may be used to atomically read-modify-write
-          # data anywhere in a database. This type of transaction is externally
-          # consistent.
-          #
-          # Clients should attempt to minimize the amount of time a transaction
-          # is active. Faster transactions commit with higher probability
-          # and cause less contention. Cloud Spanner attempts to keep read locks
-          # active as long as the transaction continues to do reads, and the
-          # transaction has not been terminated by
-          # Commit or
-          # Rollback.  Long periods of
-          # inactivity at the client may cause Cloud Spanner to release a
-          # transaction's locks and abort it.
-          #
-          # Conceptually, a read-write transaction consists of zero or more
-          # reads or SQL statements followed by
-          # Commit. At any time before
-          # Commit, the client can send a
-          # Rollback request to abort the
-          # transaction.
-          #
-          # ### Semantics
-          #
-          # Cloud Spanner can commit the transaction if all read locks it acquired
-          # are still valid at commit time, and it is able to acquire write
-          # locks for all writes. Cloud Spanner can abort the transaction for any
-          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
-          # that the transaction has not modified any user data in Cloud Spanner.
-          #
-          # Unless the transaction commits, Cloud Spanner makes no guarantees about
-          # how long the transaction's locks were held for. It is an error to
-          # use Cloud Spanner locks for any sort of mutual exclusion other than
-          # between Cloud Spanner transactions themselves.
-          #
-          # ### Retrying Aborted Transactions
-          #
-          # When a transaction aborts, the application can choose to retry the
-          # whole transaction again. To maximize the chances of successfully
-          # committing the retry, the client should execute the retry in the
-          # same session as the original attempt. The original session's lock
-          # priority increases with each consecutive abort, meaning that each
-          # attempt has a slightly better chance of success than the previous.
-          #
-          # Under some circumstances (e.g., many transactions attempting to
-          # modify the same row(s)), a transaction can abort many times in a
-          # short period before successfully committing. Thus, it is not a good
-          # idea to cap the number of retries a transaction can attempt;
-          # instead, it is better to limit the total amount of wall time spent
-          # retrying.
-          #
-          # ### Idle Transactions
-          #
-          # A transaction is considered idle if it has no outstanding reads or
-          # SQL queries and has not started a read or SQL query within the last 10
-          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
-          # don't hold on to locks indefinitely. In that case, the commit will
-          # fail with error `ABORTED`.
-          #
-          # If this behavior is undesirable, periodically executing a simple
-          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
-          # transaction from becoming idle.
-          #
-          # ## Snapshot Read-Only Transactions
-          #
-          # Snapshot read-only transactions provides a simpler method than
-          # locking read-write transactions for doing several consistent
-          # reads. However, this type of transaction does not support writes.
-          #
-          # Snapshot transactions do not take locks. Instead, they work by
-          # choosing a Cloud Spanner timestamp, then executing all reads at that
-          # timestamp. Since they do not acquire locks, they do not block
-          # concurrent read-write transactions.
-          #
-          # Unlike locking read-write transactions, snapshot read-only
-          # transactions never abort. They can fail if the chosen read
-          # timestamp is garbage collected; however, the default garbage
-          # collection policy is generous enough that most applications do not
-          # need to worry about this in practice.
-          #
-          # Snapshot read-only transactions do not need to call
-          # Commit or
-          # Rollback (and in fact are not
-          # permitted to do so).
-          #
-          # To execute a snapshot transaction, the client specifies a timestamp
-          # bound, which tells Cloud Spanner how to choose a read timestamp.
-          #
-          # The types of timestamp bound are:
-          #
-          #   - Strong (the default).
-          #   - Bounded staleness.
-          #   - Exact staleness.
-          #
-          # If the Cloud Spanner database to be read is geographically distributed,
-          # stale read-only transactions can execute more quickly than strong
-          # or read-write transaction, because they are able to execute far
-          # from the leader replica.
-          #
-          # Each type of timestamp bound is discussed in detail below.
-          #
-          # ### Strong
-          #
-          # Strong reads are guaranteed to see the effects of all transactions
-          # that have committed before the start of the read. Furthermore, all
-          # rows yielded by a single read are consistent with each other -- if
-          # any part of the read observes a transaction, all parts of the read
-          # see the transaction.
-          #
-          # Strong reads are not repeatable: two consecutive strong read-only
-          # transactions might return inconsistent results if there are
-          # concurrent writes. If consistency across reads is required, the
-          # reads should be executed within a transaction or at an exact read
-          # timestamp.
-          #
-          # See TransactionOptions.ReadOnly.strong.
-          #
-          # ### Exact Staleness
-          #
-          # These timestamp bounds execute reads at a user-specified
-          # timestamp. Reads at a timestamp are guaranteed to see a consistent
-          # prefix of the global transaction history: they observe
-          # modifications done by all transactions with a commit timestamp &lt;=
-          # the read timestamp, and observe none of the modifications done by
-          # transactions with a larger commit timestamp. They will block until
-          # all conflicting transactions that may be assigned commit timestamps
-          # &lt;= the read timestamp have finished.
-          #
-          # The timestamp can either be expressed as an absolute Cloud Spanner commit
-          # timestamp or a staleness relative to the current time.
-          #
-          # These modes do not require a "negotiation phase" to pick a
-          # timestamp. As a result, they execute slightly faster than the
-          # equivalent boundedly stale concurrency modes. On the other hand,
-          # boundedly stale reads usually return fresher results.
-          #
-          # See TransactionOptions.ReadOnly.read_timestamp and
-          # TransactionOptions.ReadOnly.exact_staleness.
-          #
-          # ### Bounded Staleness
-          #
-          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
-          # subject to a user-provided staleness bound. Cloud Spanner chooses the
-          # newest timestamp within the staleness bound that allows execution
-          # of the reads at the closest available replica without blocking.
-          #
-          # All rows yielded are consistent with each other -- if any part of
-          # the read observes a transaction, all parts of the read see the
-          # transaction. Boundedly stale reads are not repeatable: two stale
-          # reads, even if they use the same staleness bound, can execute at
-          # different timestamps and thus return inconsistent results.
-          #
-          # Boundedly stale reads execute in two phases: the first phase
-          # negotiates a timestamp among all replicas needed to serve the
-          # read. In the second phase, reads are executed at the negotiated
-          # timestamp.
-          #
-          # As a result of the two phase execution, bounded staleness reads are
-          # usually a little slower than comparable exact staleness
-          # reads. However, they are typically able to return fresher
-          # results, and are more likely to execute at the closest replica.
-          #
-          # Because the timestamp negotiation requires up-front knowledge of
-          # which rows will be read, it can only be used with single-use
-          # read-only transactions.
-          #
-          # See TransactionOptions.ReadOnly.max_staleness and
-          # TransactionOptions.ReadOnly.min_read_timestamp.
-          #
-          # ### Old Read Timestamps and Garbage Collection
-          #
-          # Cloud Spanner continuously garbage collects deleted and overwritten data
-          # in the background to reclaim storage space. This process is known
-          # as "version GC". By default, version GC reclaims versions after they
-          # are one hour old. Because of this, Cloud Spanner cannot perform reads
-          # at read timestamps more than one hour in the past. This
-          # restriction also applies to in-progress reads and/or SQL queries whose
-          # timestamp become too old while executing. Reads and SQL queries with
-          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
-          #
-          # ## Partitioned DML Transactions
-          #
-          # Partitioned DML transactions are used to execute DML statements with a
-          # different execution strategy that provides different, and often better,
-          # scalability properties for large, table-wide operations than DML in a
-          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
-          # should prefer using ReadWrite transactions.
-          #
-          # Partitioned DML partitions the keyspace and runs the DML statement on each
-          # partition in separate, internal transactions. These transactions commit
-          # automatically when complete, and run independently from one another.
-          #
-          # To reduce lock contention, this execution strategy only acquires read locks
-          # on rows that match the WHERE clause of the statement. Additionally, the
-          # smaller per-partition transactions hold locks for less time.
-          #
-          # That said, Partitioned DML is not a drop-in replacement for standard DML used
-          # in ReadWrite transactions.
-          #
-          #  - The DML statement must be fully-partitionable. Specifically, the statement
-          #    must be expressible as the union of many statements which each access only
-          #    a single row of the table.
-          #
-          #  - The statement is not applied atomically to all rows of the table. Rather,
-          #    the statement is applied atomically to partitions of the table, in
-          #    independent transactions. Secondary index rows are updated atomically
-          #    with the base table rows.
-          #
-          #  - Partitioned DML does not guarantee exactly-once execution semantics
-          #    against a partition. The statement will be applied at least once to each
-          #    partition. It is strongly recommended that the DML statement should be
-          #    idempotent to avoid unexpected results. For instance, it is potentially
-          #    dangerous to run a statement such as
-          #    `UPDATE table SET column = column + 1` as it could be run multiple times
-          #    against some rows.
-          #
-          #  - The partitions are committed automatically - there is no support for
-          #    Commit or Rollback. If the call returns an error, or if the client issuing
-          #    the ExecuteSql call dies, it is possible that some rows had the statement
-          #    executed on them successfully. It is also possible that statement was
-          #    never executed against other rows.
-          #
-          #  - Partitioned DML transactions may only contain the execution of a single
-          #    DML statement via ExecuteSql or ExecuteStreamingSql.
-          #
-          #  - If any error is encountered during the execution of the partitioned DML
-          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
-          #    value that cannot be stored due to schema constraints), then the
-          #    operation is stopped at that point and an error is returned. It is
-          #    possible that at this point, some partitions have been committed (or even
-          #    committed multiple times), and other partitions have not been run at all.
-          #
-          # Given the above, Partitioned DML is good fit for large, database-wide,
-          # operations that are idempotent, such as deleting old rows from a very large
-          # table.
-        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
-            #
-            # Authorization to begin a read-write transaction requires
-            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
-            # on the `session` resource.
-            # transaction type has no options.
-        },
-        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
-            #
-            # Authorization to begin a read-only transaction requires
-            # `spanner.databases.beginReadOnlyTransaction` permission
-            # on the `session` resource.
-          "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
-              #
-              # This is useful for requesting fresher data than some previous
-              # read, or data that is fresh enough to observe the effects of some
-              # previously committed transaction whose timestamp is known.
-              #
-              # Note that this option can only be used in single-use transactions.
-              #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
-              # reads at a specific timestamp are repeatable; the same read at
-              # the same timestamp always returns the same data. If the
-              # timestamp is in the future, the read will block until the
-              # specified timestamp, modulo the read's deadline.
-              #
-              # Useful for large scale consistent reads such as mapreduces, or
-              # for coordinating many reads against a consistent snapshot of the
-              # data.
-              #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
-              # seconds. Guarantees that all writes that have committed more
-              # than the specified number of seconds ago are visible. Because
-              # Cloud Spanner chooses the exact timestamp, this mode works even if
-              # the client's local clock is substantially skewed from Cloud Spanner
-              # commit timestamps.
-              #
-              # Useful for reading the freshest data available at a nearby
-              # replica, while bounding the possible staleness if the local
-              # replica has fallen behind.
-              #
-              # Note that this option can only be used in single-use
-              # transactions.
-          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
-              # old. The timestamp is chosen soon after the read is started.
-              #
-              # Guarantees that all writes that have committed more than the
-              # specified number of seconds ago are visible. Because Cloud Spanner
-              # chooses the exact timestamp, this mode works even if the client's
-              # local clock is substantially skewed from Cloud Spanner commit
-              # timestamps.
-              #
-              # Useful for reading at nearby replicas without the distributed
-              # timestamp negotiation overhead of `max_staleness`.
-          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
-              # the Transaction message that describes the transaction.
-          "strong": True or False, # Read at a timestamp where all previously committed transactions
-              # are visible.
-        },
-        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
-            #
-            # Authorization to begin a Partitioned DML transaction requires
-            # `spanner.databases.beginPartitionedDmlTransaction` permission
-            # on the `session` resource.
-        },
-      },
-      "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
+      &quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
           # This is the most efficient way to execute a transaction that
           # consists of a single SQL query.
           #
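
A minimal sketch of that single-use form, here with a strong read-only bound (a request-body fragment under stated assumptions, not a complete call):

```python
# One-shot query in a temporary single-use transaction.
body = {
    "sql": "SELECT UserId, UserName FROM Users",
    "transaction": {
        "singleUse": {
            "readOnly": {"strong": True},
        },
    },
}
```
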
@@ -2766,7 +2420,7 @@
           # Commit or
           # Rollback.  Long periods of
           # inactivity at the client may cause Cloud Spanner to release a
-          # transaction's locks and abort it.
+          # transaction&#x27;s locks and abort it.
           #
           # Conceptually, a read-write transaction consists of zero or more
           # reads or SQL statements followed by
@@ -2784,7 +2438,7 @@
           # that the transaction has not modified any user data in Cloud Spanner.
           #
           # Unless the transaction commits, Cloud Spanner makes no guarantees about
-          # how long the transaction's locks were held for. It is an error to
+          # how long the transaction&#x27;s locks were held for. It is an error to
           # use Cloud Spanner locks for any sort of mutual exclusion other than
           # between Cloud Spanner transactions themselves.
           #
@@ -2793,7 +2447,7 @@
           # When a transaction aborts, the application can choose to retry the
           # whole transaction again. To maximize the chances of successfully
           # committing the retry, the client should execute the retry in the
-          # same session as the original attempt. The original session's lock
+          # same session as the original attempt. The original session&#x27;s lock
           # priority increases with each consecutive abort, meaning that each
           # attempt has a slightly better chance of success than the previous.
           #
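
A sketch of the retry discipline described above, bounding total wall time rather than the number of attempts; `AbortedError` and `run_transaction` are hypothetical stand-ins for the caller's ABORTED error type and transaction body:

```python
import time

class AbortedError(Exception):
    """Placeholder for the client's ABORTED commit error."""

def commit_with_retry(run_transaction, deadline_secs=60.0):
    # Retry in the same session so lock priority accrues across attempts.
    start = time.monotonic()
    while True:
        try:
            return run_transaction()
        except AbortedError:
            if time.monotonic() - start > deadline_secs:
                raise
```
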
@@ -2809,7 +2463,7 @@
           # A transaction is considered idle if it has no outstanding reads or
           # SQL queries and has not started a read or SQL query within the last 10
           # seconds. Idle transactions can be aborted by Cloud Spanner so that they
-          # don't hold on to locks indefinitely. In that case, the commit will
+          # don&#x27;t hold on to locks indefinitely. In that case, the commit will
           # fail with error `ABORTED`.
           #
           # If this behavior is undesirable, periodically executing a simple
@@ -2884,7 +2538,7 @@
           # The timestamp can either be expressed as an absolute Cloud Spanner commit
           # timestamp or a staleness relative to the current time.
           #
-          # These modes do not require a "negotiation phase" to pick a
+          # These modes do not require a &quot;negotiation phase&quot; to pick a
           # timestamp. As a result, they execute slightly faster than the
           # equivalent boundedly stale concurrency modes. On the other hand,
           # boundedly stale reads usually return fresher results.
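
For illustration, the two exact-staleness spellings above as they would appear in a transaction selector (values are examples only):

```python
# Absolute commit timestamp, RFC3339 UTC "Zulu" format.
read_at_timestamp = {"readTimestamp": "2014-10-02T15:01:23.045123456Z"}
# Staleness relative to now, encoded as a Duration string.
read_stale = {"exactStaleness": "10s"}

transaction_selector = {"singleUse": {"readOnly": read_stale}}
```
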
@@ -2926,7 +2580,7 @@
           #
           # Cloud Spanner continuously garbage collects deleted and overwritten data
           # in the background to reclaim storage space. This process is known
-          # as "version GC". By default, version GC reclaims versions after they
+          # as &quot;version GC&quot;. By default, version GC reclaims versions after they
           # are one hour old. Because of this, Cloud Spanner cannot perform reads
           # at read timestamps more than one hour in the past. This
           # restriction also applies to in-progress reads and/or SQL queries whose
@@ -2988,19 +2642,18 @@
           # Given the above, Partitioned DML is a good fit for large, database-wide
           # operations that are idempotent, such as deleting old rows from a very large
           # table.
-        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+        &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
             #
-            # Authorization to begin a read-write transaction requires
-            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # Authorization to begin a Partitioned DML transaction requires
+            # `spanner.databases.beginPartitionedDmlTransaction` permission
             # on the `session` resource.
-            # transaction type has no options.
         },
-        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
+        &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
             #
             # Authorization to begin a read-only transaction requires
             # `spanner.databases.beginReadOnlyTransaction` permission
             # on the `session` resource.
-          "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
+          &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
               #
               # This is useful for requesting fresher data than some previous
               # read, or data that is fresh enough to observe the effects of some
@@ -3008,25 +2661,25 @@
               #
               # Note that this option can only be used in single-use transactions.
               #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
               # reads at a specific timestamp are repeatable; the same read at
               # the same timestamp always returns the same data. If the
               # timestamp is in the future, the read will block until the
-              # specified timestamp, modulo the read's deadline.
+              # specified timestamp, modulo the read&#x27;s deadline.
               #
               # Useful for large scale consistent reads such as mapreduces, or
               # for coordinating many reads against a consistent snapshot of the
               # data.
               #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
               # seconds. Guarantees that all writes that have committed more
               # than the specified number of seconds ago are visible. Because
               # Cloud Spanner chooses the exact timestamp, this mode works even if
-              # the client's local clock is substantially skewed from Cloud Spanner
+              # the client&#x27;s local clock is substantially skewed from Cloud Spanner
               # commit timestamps.
               #
               # Useful for reading the freshest data available at a nearby
@@ -3035,32 +2688,392 @@
               #
               # Note that this option can only be used in single-use
               # transactions.
-          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
+          &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
+              # the Transaction message that describes the transaction.
+          &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
               # old. The timestamp is chosen soon after the read is started.
               #
               # Guarantees that all writes that have committed more than the
               # specified number of seconds ago are visible. Because Cloud Spanner
-              # chooses the exact timestamp, this mode works even if the client's
+              # chooses the exact timestamp, this mode works even if the client&#x27;s
               # local clock is substantially skewed from Cloud Spanner commit
               # timestamps.
               #
               # Useful for reading at nearby replicas without the distributed
               # timestamp negotiation overhead of `max_staleness`.
-          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
-              # the Transaction message that describes the transaction.
-          "strong": True or False, # Read at a timestamp where all previously committed transactions
+          &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
               # are visible.
         },
-        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
+        &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+            #
+            # Authorization to begin a read-write transaction requires
+            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # on the `session` resource.
+            # transaction type has no options.
+        },
+      },
+      &quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
+          # it. The transaction ID of the new transaction is returned in
+          # ResultSetMetadata.transaction, which is a Transaction.
+          #
+          #
+          # Each session can have at most one active transaction at a time. After the
+          # active transaction is completed, the session can immediately be
+          # re-used for the next transaction. It is not necessary to create a
+          # new session for each transaction.
+          #
+          # # Transaction Modes
+          #
+          # Cloud Spanner supports three transaction modes:
+          #
+          #   1. Locking read-write. This type of transaction is the only way
+          #      to write data into Cloud Spanner. These transactions rely on
+          #      pessimistic locking and, if necessary, two-phase commit.
+          #      Locking read-write transactions may abort, requiring the
+          #      application to retry.
+          #
+          #   2. Snapshot read-only. This transaction type provides guaranteed
+          #      consistency across several reads, but does not allow
+          #      writes. Snapshot read-only transactions can be configured to
+          #      read at timestamps in the past. Snapshot read-only
+          #      transactions do not need to be committed.
+          #
+          #   3. Partitioned DML. This type of transaction is used to execute
+          #      a single Partitioned DML statement. Partitioned DML partitions
+          #      the key space and runs the DML statement over each partition
+          #      in parallel using separate, internal transactions that commit
+          #      independently. Partitioned DML transactions do not need to be
+          #      committed.
+          #
+          # For transactions that only read, snapshot read-only transactions
+          # provide simpler semantics and are almost always faster. In
+          # particular, read-only transactions do not take locks, so they do
+          # not conflict with read-write transactions. As a consequence of not
+          # taking locks, they also do not abort, so retry loops are not needed.
+          #
+          # Transactions may only read/write data in a single database. They
+          # may, however, read/write data in different tables within that
+          # database.
+          #
+          # ## Locking Read-Write Transactions
+          #
+          # Locking transactions may be used to atomically read-modify-write
+          # data anywhere in a database. This type of transaction is externally
+          # consistent.
+          #
+          # Clients should attempt to minimize the amount of time a transaction
+          # is active. Faster transactions commit with higher probability
+          # and cause less contention. Cloud Spanner attempts to keep read locks
+          # active as long as the transaction continues to do reads, and the
+          # transaction has not been terminated by
+          # Commit or
+          # Rollback.  Long periods of
+          # inactivity at the client may cause Cloud Spanner to release a
+          # transaction&#x27;s locks and abort it.
+          #
+          # Conceptually, a read-write transaction consists of zero or more
+          # reads or SQL statements followed by
+          # Commit. At any time before
+          # Commit, the client can send a
+          # Rollback request to abort the
+          # transaction.
+          #
+          # ### Semantics
+          #
+          # Cloud Spanner can commit the transaction if all read locks it acquired
+          # are still valid at commit time, and it is able to acquire write
+          # locks for all writes. Cloud Spanner can abort the transaction for any
+          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
+          # that the transaction has not modified any user data in Cloud Spanner.
+          #
+          # Unless the transaction commits, Cloud Spanner makes no guarantees about
+          # how long the transaction&#x27;s locks were held for. It is an error to
+          # use Cloud Spanner locks for any sort of mutual exclusion other than
+          # between Cloud Spanner transactions themselves.
+          #
+          # ### Retrying Aborted Transactions
+          #
+          # When a transaction aborts, the application can choose to retry the
+          # whole transaction again. To maximize the chances of successfully
+          # committing the retry, the client should execute the retry in the
+          # same session as the original attempt. The original session&#x27;s lock
+          # priority increases with each consecutive abort, meaning that each
+          # attempt has a slightly better chance of success than the previous.
+          #
+          # Under some circumstances (e.g., many transactions attempting to
+          # modify the same row(s)), a transaction can abort many times in a
+          # short period before successfully committing. Thus, it is not a good
+          # idea to cap the number of retries a transaction can attempt;
+          # instead, it is better to limit the total amount of wall time spent
+          # retrying.
+          #
+          # ### Idle Transactions
+          #
+          # A transaction is considered idle if it has no outstanding reads or
+          # SQL queries and has not started a read or SQL query within the last 10
+          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
+          # don&#x27;t hold on to locks indefinitely. In that case, the commit will
+          # fail with error `ABORTED`.
+          #
+          # If this behavior is undesirable, periodically executing a simple
+          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
+          # transaction from becoming idle.
+          #
+          # ## Snapshot Read-Only Transactions
+          #
+          # Snapshot read-only transactions provide a simpler method than
+          # locking read-write transactions for doing several consistent
+          # reads. However, this type of transaction does not support writes.
+          #
+          # Snapshot transactions do not take locks. Instead, they work by
+          # choosing a Cloud Spanner timestamp, then executing all reads at that
+          # timestamp. Since they do not acquire locks, they do not block
+          # concurrent read-write transactions.
+          #
+          # Unlike locking read-write transactions, snapshot read-only
+          # transactions never abort. They can fail if the chosen read
+          # timestamp is garbage collected; however, the default garbage
+          # collection policy is generous enough that most applications do not
+          # need to worry about this in practice.
+          #
+          # Snapshot read-only transactions do not need to call
+          # Commit or
+          # Rollback (and in fact are not
+          # permitted to do so).
+          #
+          # To execute a snapshot transaction, the client specifies a timestamp
+          # bound, which tells Cloud Spanner how to choose a read timestamp.
+          #
+          # The types of timestamp bound are:
+          #
+          #   - Strong (the default).
+          #   - Bounded staleness.
+          #   - Exact staleness.
+          #
+          # If the Cloud Spanner database to be read is geographically distributed,
+          # stale read-only transactions can execute more quickly than strong
+          # or read-write transactions, because they are able to execute far
+          # from the leader replica.
+          #
+          # Each type of timestamp bound is discussed in detail below.
+          #
+          # ### Strong
+          #
+          # Strong reads are guaranteed to see the effects of all transactions
+          # that have committed before the start of the read. Furthermore, all
+          # rows yielded by a single read are consistent with each other -- if
+          # any part of the read observes a transaction, all parts of the read
+          # see the transaction.
+          #
+          # Strong reads are not repeatable: two consecutive strong read-only
+          # transactions might return inconsistent results if there are
+          # concurrent writes. If consistency across reads is required, the
+          # reads should be executed within a transaction or at an exact read
+          # timestamp.
+          #
+          # See TransactionOptions.ReadOnly.strong.
+          #
+          # ### Exact Staleness
+          #
+          # These timestamp bounds execute reads at a user-specified
+          # timestamp. Reads at a timestamp are guaranteed to see a consistent
+          # prefix of the global transaction history: they observe
+          # modifications done by all transactions with a commit timestamp &lt;=
+          # the read timestamp, and observe none of the modifications done by
+          # transactions with a larger commit timestamp. They will block until
+          # all conflicting transactions that may be assigned commit timestamps
+          # &lt;= the read timestamp have finished.
+          #
+          # The timestamp can either be expressed as an absolute Cloud Spanner commit
+          # timestamp or a staleness relative to the current time.
+          #
+          # These modes do not require a &quot;negotiation phase&quot; to pick a
+          # timestamp. As a result, they execute slightly faster than the
+          # equivalent boundedly stale concurrency modes. On the other hand,
+          # boundedly stale reads usually return fresher results.
+          #
+          # See TransactionOptions.ReadOnly.read_timestamp and
+          # TransactionOptions.ReadOnly.exact_staleness.
+          #
+          # ### Bounded Staleness
+          #
+          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
+          # subject to a user-provided staleness bound. Cloud Spanner chooses the
+          # newest timestamp within the staleness bound that allows execution
+          # of the reads at the closest available replica without blocking.
+          #
+          # All rows yielded are consistent with each other -- if any part of
+          # the read observes a transaction, all parts of the read see the
+          # transaction. Boundedly stale reads are not repeatable: two stale
+          # reads, even if they use the same staleness bound, can execute at
+          # different timestamps and thus return inconsistent results.
+          #
+          # Boundedly stale reads execute in two phases: the first phase
+          # negotiates a timestamp among all replicas needed to serve the
+          # read. In the second phase, reads are executed at the negotiated
+          # timestamp.
+          #
+          # As a result of the two phase execution, bounded staleness reads are
+          # usually a little slower than comparable exact staleness
+          # reads. However, they are typically able to return fresher
+          # results, and are more likely to execute at the closest replica.
+          #
+          # Because the timestamp negotiation requires up-front knowledge of
+          # which rows will be read, it can only be used with single-use
+          # read-only transactions.
+          #
+          # See TransactionOptions.ReadOnly.max_staleness and
+          # TransactionOptions.ReadOnly.min_read_timestamp.
+          #
+          # ### Old Read Timestamps and Garbage Collection
+          #
+          # Cloud Spanner continuously garbage collects deleted and overwritten data
+          # in the background to reclaim storage space. This process is known
+          # as &quot;version GC&quot;. By default, version GC reclaims versions after they
+          # are one hour old. Because of this, Cloud Spanner cannot perform reads
+          # at read timestamps more than one hour in the past. This
+          # restriction also applies to in-progress reads and/or SQL queries whose
+          # timestamps become too old while executing. Reads and SQL queries with
+          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
+          #
+          # ## Partitioned DML Transactions
+          #
+          # Partitioned DML transactions are used to execute DML statements with a
+          # different execution strategy that provides different, and often better,
+          # scalability properties for large, table-wide operations than DML in a
+          # ReadWrite transaction. Statements with a smaller scope, such as those in
+          # an OLTP workload, should use ReadWrite transactions.
+          #
+          # Partitioned DML partitions the keyspace and runs the DML statement on each
+          # partition in separate, internal transactions. These transactions commit
+          # automatically when complete, and run independently from one another.
+          #
+          # To reduce lock contention, this execution strategy only acquires read locks
+          # on rows that match the WHERE clause of the statement. Additionally, the
+          # smaller per-partition transactions hold locks for less time.
+          #
+          # That said, Partitioned DML is not a drop-in replacement for standard DML used
+          # in ReadWrite transactions.
+          #
+          #  - The DML statement must be fully-partitionable. Specifically, the statement
+          #    must be expressible as the union of many statements which each access only
+          #    a single row of the table.
+          #
+          #  - The statement is not applied atomically to all rows of the table. Rather,
+          #    the statement is applied atomically to partitions of the table, in
+          #    independent transactions. Secondary index rows are updated atomically
+          #    with the base table rows.
+          #
+          #  - Partitioned DML does not guarantee exactly-once execution semantics
+          #    against a partition. The statement will be applied at least once to each
+          #    partition. It is strongly recommended that the DML statement be
+          #    idempotent to avoid unexpected results. For instance, it is potentially
+          #    dangerous to run a statement such as
+          #    `UPDATE table SET column = column + 1` as it could be run multiple times
+          #    against some rows.
+          #
+          #  - The partitions are committed automatically; there is no support for
+          #    Commit or Rollback. If the call returns an error, or if the client issuing
+          #    the ExecuteSql call dies, it is possible that some rows had the statement
+          #    executed on them successfully. It is also possible that the statement
+          #    was never executed against other rows.
+          #
+          #  - Partitioned DML transactions may only contain the execution of a single
+          #    DML statement via ExecuteSql or ExecuteStreamingSql.
+          #
+          #  - If any error is encountered during the execution of the partitioned DML
+          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
+          #    value that cannot be stored due to schema constraints), then the
+          #    operation is stopped at that point and an error is returned. It is
+          #    possible that at this point, some partitions have been committed (or even
+          #    committed multiple times), and other partitions have not been run at all.
+          #
+          # Given the above, Partitioned DML is a good fit for large, database-wide
+          # operations that are idempotent, such as deleting old rows from a very large
+          # table.
+        &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
             #
             # Authorization to begin a Partitioned DML transaction requires
             # `spanner.databases.beginPartitionedDmlTransaction` permission
             # on the `session` resource.
         },
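For illustration, here is a minimal Python sketch of the Partitioned DML flow with this client: begin a transaction with `partitionedDml` options, then execute a single idempotent DML statement in it. The resource names, table, and cutoff value are placeholders, and application default credentials are assumed.

```python
from googleapiclient.discovery import build

service = build('spanner', 'v1')  # application default credentials assumed
session = ('projects/my-project/instances/my-instance'
           '/databases/my-db/sessions/my-session')
sessions = service.projects().instances().databases().sessions()

# Begin a Partitioned DML transaction, then run exactly one
# fully-partitionable, idempotent DML statement in it.
txn = sessions.beginTransaction(
    session=session, body={'options': {'partitionedDml': {}}}).execute()
result = sessions.executeSql(
    session=session,
    body={
        'transaction': {'id': txn['id']},
        'sql': 'DELETE FROM Events WHERE CreatedAt < @cutoff',
        'params': {'cutoff': '2019-01-01T00:00:00Z'},
        'paramTypes': {'cutoff': {'code': 'TIMESTAMP'}},
        'seqno': '1',  # required for DML statements
    }).execute()
# Execution is at-least-once per partition, so only a lower bound is returned.
print(result['stats'].get('rowCountLowerBound'))
```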
+        &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
+            #
+            # Authorization to begin a read-only transaction requires
+            # `spanner.databases.beginReadOnlyTransaction` permission
+            # on the `session` resource.
+          &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
+              #
+              # This is useful for requesting fresher data than some previous
+              # read, or data that is fresh enough to observe the effects of some
+              # previously committed transaction whose timestamp is known.
+              #
+              # Note that this option can only be used in single-use transactions.
+              #
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
+              # reads at a specific timestamp are repeatable; the same read at
+              # the same timestamp always returns the same data. If the
+              # timestamp is in the future, the read will block until the
+              # specified timestamp, modulo the read&#x27;s deadline.
+              #
+              # Useful for large scale consistent reads such as mapreduces, or
+              # for coordinating many reads against a consistent snapshot of the
+              # data.
+              #
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
+              # seconds. Guarantees that all writes that have committed more
+              # than the specified number of seconds ago are visible. Because
+              # Cloud Spanner chooses the exact timestamp, this mode works even if
+              # the client&#x27;s local clock is substantially skewed from Cloud Spanner
+              # commit timestamps.
+              #
+              # Useful for reading the freshest data available at a nearby
+              # replica, while bounding the possible staleness if the local
+              # replica has fallen behind.
+              #
+              # Note that this option can only be used in single-use
+              # transactions.
+          &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
+              # the Transaction message that describes the transaction.
+          &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
+              # old. The timestamp is chosen soon after the read is started.
+              #
+              # Guarantees that all writes that have committed more than the
+              # specified number of seconds ago are visible. Because Cloud Spanner
+              # chooses the exact timestamp, this mode works even if the client&#x27;s
+              # local clock is substantially skewed from Cloud Spanner commit
+              # timestamps.
+              #
+              # Useful for reading at nearby replicas without the distributed
+              # timestamp negotiation overhead of `max_staleness`.
+          &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
+              # are visible.
+        },
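As a compact reference, the timestamp-bound fields above are mutually exclusive: a `readOnly` message sets exactly one of them, plus optionally `returnReadTimestamp`. A sketch with illustrative values:

```python
# Staleness bounds are Duration strings; timestamps are RFC3339 UTC "Zulu".
strong = {'strong': True, 'returnReadTimestamp': True}
exact_stale = {'exactStaleness': '10s'}
bounded_stale = {'maxStaleness': '15s'}  # single-use transactions only
at_timestamp = {'readTimestamp': '2014-10-02T15:01:23.045123456Z'}

options = {'readOnly': exact_stale}  # usable in beginTransaction or singleUse
```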
+        &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+            #
+            # Authorization to begin a read-write transaction requires
+            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # on the `session` resource.
+            # transaction type has no options.
+        },
       },
-      "id": "A String", # Execute the read or SQL query in a previously-started transaction.
+      &quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
     },
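Putting the selector together, it has three mutually exclusive shapes. A sketch, with `txn_id` standing in for an ID returned by an earlier beginTransaction call:

```python
txn_id = '...'  # placeholder: taken from a prior beginTransaction response

# Run in a fresh temporary transaction (most efficient for one-shot reads):
single_use = {'singleUse': {'readOnly': {'strong': True}}}
# Run in a previously started transaction:
existing = {'id': txn_id}
# Begin a new transaction and run this request in it; the new transaction's
# ID comes back in ResultSetMetadata.transaction:
begin_new = {'begin': {'readWrite': {}}}
```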
-    "seqno": "A String", # A per-transaction sequence number used to identify this request. This field
+    &quot;resumeToken&quot;: &quot;A String&quot;, # If this request is resuming a previously interrupted SQL statement
+        # execution, `resume_token` should be copied from the last
+        # PartialResultSet yielded before the interruption. Doing this
+        # enables the new SQL statement execution to resume where the last one left
+        # off. The rest of the request parameters must exactly match the
+        # request that yielded this token.
+    &quot;partitionToken&quot;: &quot;A String&quot;, # If present, results will be restricted to the specified partition
+        # previously created using PartitionQuery().  There must be an exact
+        # match for the values of fields common to this message and the
+        # PartitionQueryRequest message used to create this partition_token.
+    &quot;seqno&quot;: &quot;A String&quot;, # A per-transaction sequence number used to identify this request. This field
         # makes each request idempotent such that if the request is received multiple
         # times, at most one will succeed.
         # 
@@ -3070,17 +3083,7 @@
         # handled requests will yield the same response as the first execution.
         # 
         # Required for DML statements. Ignored for queries.
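Because `seqno` deduplicates replayed requests, a client can resend the same DML body on transient failures without risking double execution. A sketch, reusing the `sessions`, `session`, and `txn_id` placeholders from the sketches above:

```python
from googleapiclient.errors import HttpError

body = {
    'transaction': {'id': txn_id},
    'sql': 'UPDATE Users SET Active = FALSE WHERE LastSeen < @cutoff',
    'params': {'cutoff': '2019-01-01T00:00:00Z'},
    'paramTypes': {'cutoff': {'code': 'TIMESTAMP'}},
    'seqno': '7',  # keep constant across retries of this one statement
}
for attempt in range(3):
    try:
        result = sessions.executeSql(session=session, body=body).execute()
        break  # at most one of the delivered attempts actually executed
    except HttpError:
        continue
```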
-    "resumeToken": "A String", # If this request is resuming a previously interrupted SQL statement
-        # execution, `resume_token` should be copied from the last
-        # PartialResultSet yielded before the interruption. Doing this
-        # enables the new SQL statement execution to resume where the last one left
-        # off. The rest of the request parameters must exactly match the
-        # request that yielded this token.
-    "partitionToken": "A String", # If present, results will be restricted to the specified partition
-        # previously created using PartitionQuery().  There must be an exact
-        # match for the values of fields common to this message and the
-        # PartitionQueryRequest message used to create this partition_token.
-    "paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
+    &quot;paramTypes&quot;: { # It is not always possible for Cloud Spanner to infer the right SQL type
         # from a JSON value.  For example, values of type `BYTES` and values
         # of type `STRING` both appear in params as JSON strings.
         # 
@@ -3088,40 +3091,55 @@
         # SQL type for some or all of the SQL statement parameters. See the
         # definition of Type for more information
         # about SQL types.
-      "a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
+      &quot;a_key&quot;: { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
           # table cell or returned from an SQL query.
-        "structType": { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
-            # provides type information for the struct's fields.
-          "fields": [ # The list of fields that make up this struct. Order is
+        &quot;arrayElementType&quot;: # Object with schema name: Type # If code == ARRAY, then `array_element_type`
+            # is the type of the array elements.
+        &quot;code&quot;: &quot;A String&quot;, # Required. The TypeCode for this type.
+        &quot;structType&quot;: { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
+            # provides type information for the struct&#x27;s fields.
+          &quot;fields&quot;: [ # The list of fields that make up this struct. Order is
               # significant, because values of this struct type are represented as
               # lists, where the order of field values matches the order of
               # fields in the StructType. In turn, the order of fields
               # matches the order of columns in a read request, or the order of
               # fields in the `SELECT` clause of a query.
             { # Message representing a single field of a struct.
-              "type": # Object with schema name: Type # The type of the field.
-              "name": "A String", # The name of the field. For reads, this is the column name. For
-                  # SQL queries, it is the column alias (e.g., `"Word"` in the
-                  # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
-                  # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
-                  # columns might have an empty name (e.g., !"SELECT
-                  # UPPER(ColName)"`). Note that a query result can contain
+              &quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
+                  # SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
+                  # query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
+                  # `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
+                  # columns might have an empty name (e.g., `&quot;SELECT
+                  # UPPER(ColName)&quot;`). Note that a query result can contain
                   # multiple fields with the same name.
+              &quot;type&quot;: # Object with schema name: Type # The type of the field.
             },
           ],
         },
-        "code": "A String", # Required. The TypeCode for this type.
-        "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
-            # is the type of the array elements.
       },
     },
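For example, since `BYTES` and `STRING` values both arrive as JSON strings, a caller might pin down an ambiguous scalar and an array parameter like this (parameter names are illustrative):

```python
param_types = {
    'blob': {'code': 'BYTES'},  # JSON string, base64-encoded on the wire
    'ids': {
        'code': 'ARRAY',
        'arrayElementType': {'code': 'INT64'},  # INT64 values travel as strings
    },
}
```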
-    "queryOptions": { # Query optimizer configuration. # Query optimizer configuration to use for the given query.
-      "optimizerVersion": "A String", # An option to control the selection of optimizer version.
+    &quot;sql&quot;: &quot;A String&quot;, # Required. The SQL string.
+    &quot;params&quot;: { # Parameter names and values that bind to placeholders in the SQL string.
+        # 
+        # A parameter placeholder consists of the `@` character followed by the
+        # parameter name (for example, `@firstName`). Parameter names can contain
+        # letters, numbers, and underscores.
+        # 
+        # Parameters can appear anywhere that a literal value is expected.  The same
+        # parameter name can be used more than once, for example:
+        # 
+        # `&quot;WHERE id &gt; @msg_id AND id &lt; @msg_id + 100&quot;`
+        # 
+        # It is an error to execute a SQL statement with unbound parameters.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+    },
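A short sketch of binding a placeholder that appears twice in the statement, per the rules above:

```python
body = {
    'sql': 'SELECT Id, Text FROM Messages '
           'WHERE Id > @msg_id AND Id < @msg_id + 100',
    'params': {'msg_id': '1000'},                 # INT64 encoded as a string
    'paramTypes': {'msg_id': {'code': 'INT64'}},  # optional but unambiguous
}
```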
+    &quot;queryOptions&quot;: { # Query optimizer configuration. # Query optimizer configuration to use for the given query.
+      &quot;optimizerVersion&quot;: &quot;A String&quot;, # An option to control the selection of optimizer version.
           #
           # This parameter allows individual queries to pick different query
           # optimizer versions.
           #
-          # Specifying "latest" as a value instructs Cloud Spanner to use the
+          # Specifying &quot;latest&quot; as a value instructs Cloud Spanner to use the
           # latest supported query optimizer version. If not specified, Cloud Spanner
          # uses the optimizer version set in the database-level options. Any other
           # positive integer (from the list of supported optimizer versions)
@@ -3133,24 +3151,6 @@
           #
           # The `optimizer_version` statement hint has precedence over this setting.
     },
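In practice this is a one-line setting on the request body; both forms below are sketches:

```python
query_options = {'optimizerVersion': 'latest'}  # track the newest version
# query_options = {'optimizerVersion': '2'}     # or pin a specific version
```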
-    "params": { # Parameter names and values that bind to placeholders in the SQL string.
-        # 
-        # A parameter placeholder consists of the `@` character followed by the
-        # parameter name (for example, `@firstName`). Parameter names can contain
-        # letters, numbers, and underscores.
-        # 
-        # Parameters can appear anywhere that a literal value is expected.  The same
-        # parameter name can be used more than once, for example:
-        # 
-        # `"WHERE id &gt; @msg_id AND id &lt; @msg_id + 100"`
-        # 
-        # It is an error to execute a SQL statement with unbound parameters.
-      "a_key": "", # Properties of the object.
-    },
-    "sql": "A String", # Required. The SQL string.
-    "queryMode": "A String", # Used to control the amount of debugging information returned in
-        # ResultSetStats. If partition_token is set, query_mode can only
-        # be set to QueryMode.NORMAL.
   }
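Tying the request body together, a minimal end-to-end sketch of calling this method with the discovery-based client (placeholder resource names, application default credentials assumed):

```python
from googleapiclient.discovery import build

service = build('spanner', 'v1')
session = ('projects/my-project/instances/my-instance'
           '/databases/my-db/sessions/my-session')
response = service.projects().instances().databases().sessions().executeSql(
    session=session,
    body={
        # No transaction selector: defaults to a temporary strong
        # read-only transaction.
        'sql': 'SELECT UserId, UserName FROM Users WHERE UserId = @uid',
        'params': {'uid': '42'},
        'paramTypes': {'uid': {'code': 'INT64'}},
    }).execute()
```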
 
   x__xgafv: string, V1 error format.
@@ -3163,17 +3163,7 @@
 
     { # Results from Read or
       # ExecuteSql.
-    "rows": [ # Each element in `rows` is a row whose format is defined by
-        # metadata.row_type. The ith element
-        # in each row matches the ith field in
-        # metadata.row_type. Elements are
-        # encoded based on type as described
-        # here.
-      [
-        "",
-      ],
-    ],
-    "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
+    &quot;stats&quot;: { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
         # produced this result set. These can be requested by setting
         # ExecuteSqlRequest.query_mode.
         # DML statements always produce stats containing the number of rows
@@ -3181,31 +3171,63 @@
         # ExecuteSqlRequest.QueryMode.PLAN ExecuteSqlRequest.query_mode.
         # Other fields may or may not be populated, based on the
         # ExecuteSqlRequest.query_mode.
-      "rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
+      &quot;rowCountExact&quot;: &quot;A String&quot;, # Standard DML returns an exact count of rows that were modified.
+      &quot;queryStats&quot;: { # Aggregated statistics from the execution of the query. Only present when
+          # the query is profiled. For example, a query could return the statistics as
+          # follows:
+          #
+          #     {
+          #       &quot;rows_returned&quot;: &quot;3&quot;,
+          #       &quot;elapsed_time&quot;: &quot;1.22 secs&quot;,
+          #       &quot;cpu_time&quot;: &quot;1.19 secs&quot;
+          #     }
+        &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+      },
+      &quot;rowCountLowerBound&quot;: &quot;A String&quot;, # Partitioned DML does not offer exactly-once semantics, so it
           # returns a lower bound of the rows modified.
-      "rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
-      "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
-        "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
-            # with the plan root. Each PlanNode's `id` corresponds to its index in
+      &quot;queryPlan&quot;: { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
+        &quot;planNodes&quot;: [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
+            # with the plan root. Each PlanNode&#x27;s `id` corresponds to its index in
             # `plan_nodes`.
           { # Node information for nodes appearing in a QueryPlan.plan_nodes.
-            "index": 42, # The `PlanNode`'s index in node list.
-            "kind": "A String", # Used to determine the type of node. May be needed for visualizing
+            &quot;metadata&quot;: { # Attributes relevant to the node contained in a group of key-value pairs.
+                # For example, a Parameter Reference node could have the following
+                # information in its metadata:
+                #
+                #     {
+                #       &quot;parameter_reference&quot;: &quot;param1&quot;,
+                #       &quot;parameter_type&quot;: &quot;array&quot;
+                #     }
+              &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+            },
+            &quot;executionStats&quot;: { # The execution statistics associated with the node, contained in a group of
+                # key-value pairs. Only present if the plan was returned as a result of a
+                # profile query. For example, number of executions, number of rows/time per
+                # execution etc.
+              &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+            },
+            &quot;shortRepresentation&quot;: { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
+                # `SCALAR` PlanNode(s).
+              &quot;description&quot;: &quot;A String&quot;, # A string representation of the expression subtree rooted at this node.
+              &quot;subqueries&quot;: { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
+                  # where the `description` string of this node references a `SCALAR`
+                  # subquery contained in the expression subtree rooted at this node. The
+                  # referenced `SCALAR` subquery may not necessarily be a direct child of
+                  # this node.
+                &quot;a_key&quot;: 42,
+              },
+            },
+            &quot;index&quot;: 42, # The `PlanNode`&#x27;s index in node list.
+            &quot;kind&quot;: &quot;A String&quot;, # Used to determine the type of node. May be needed for visualizing
                # different kinds of nodes differently. For example, if the node is a
                 # SCALAR node, it will have a condensed representation
                 # which can be used to directly embed a description of the node in its
                 # parent.
-            "displayName": "A String", # The display name for the node.
-            "executionStats": { # The execution statistics associated with the node, contained in a group of
-                # key-value pairs. Only present if the plan was returned as a result of a
-                # profile query. For example, number of executions, number of rows/time per
-                # execution etc.
-              "a_key": "", # Properties of the object.
-            },
-            "childLinks": [ # List of child node `index`es and their relationship to this parent.
+            &quot;displayName&quot;: &quot;A String&quot;, # The display name for the node.
+            &quot;childLinks&quot;: [ # List of child node `index`es and their relationship to this parent.
               { # Metadata associated with a parent-child relationship appearing in a
                   # PlanNode.
-                "variable": "A String", # Only present if the child node is SCALAR and corresponds
+                &quot;variable&quot;: &quot;A String&quot;, # Only present if the child node is SCALAR and corresponds
                     # to an output variable of the parent node. The field carries the name of
                     # the output variable.
                     # For example, a `TableScan` operator that reads rows from a table will
@@ -3213,85 +3235,57 @@
                     # created for each column that is read by the operator. The corresponding
                     # `variable` fields will be set to the variable names assigned to the
                     # columns.
-                "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
+                &quot;childIndex&quot;: 42, # The node to which the link points.
+                &quot;type&quot;: &quot;A String&quot;, # The type of the link. For example, in Hash Joins this could be used to
                     # distinguish between the build child and the probe child, or in the case
                     # of the child being an output variable, to represent the tag associated
                     # with the output variable.
-                "childIndex": 42, # The node to which the link points.
               },
             ],
-            "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
-                # `SCALAR` PlanNode(s).
-              "subqueries": { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
-                  # where the `description` string of this node references a `SCALAR`
-                  # subquery contained in the expression subtree rooted at this node. The
-                  # referenced `SCALAR` subquery may not necessarily be a direct child of
-                  # this node.
-                "a_key": 42,
-              },
-              "description": "A String", # A string representation of the expression subtree rooted at this node.
-            },
-            "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
-                # For example, a Parameter Reference node could have the following
-                # information in its metadata:
-                #
-                #     {
-                #       "parameter_reference": "param1",
-                #       "parameter_type": "array"
-                #     }
-              "a_key": "", # Properties of the object.
-            },
           },
         ],
       },
-      "queryStats": { # Aggregated statistics from the execution of the query. Only present when
-          # the query is profiled. For example, a query could return the statistics as
-          # follows:
-          #
-          #     {
-          #       "rows_returned": "3",
-          #       "elapsed_time": "1.22 secs",
-          #       "cpu_time": "1.19 secs"
-          #     }
-        "a_key": "", # Properties of the object.
-      },
     },
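To actually receive these statistics, the request must opt in via `queryMode`. A sketch, reusing the `sessions` and `session` placeholders from the earlier sketches:

```python
profiled = sessions.executeSql(
    session=session,
    body={'sql': 'SELECT 1', 'queryMode': 'PROFILE'}).execute()
stats = profiled.get('stats', {})
print(stats.get('queryStats', {}))  # e.g. rows_returned, elapsed_time, cpu_time
for node in stats.get('queryPlan', {}).get('planNodes', []):
    print(node['index'], node.get('kind'), node.get('displayName'))
```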
-    "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
-      "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
-          # set.  For example, a SQL query like `"SELECT UserId, UserName FROM
-          # Users"` could return a `row_type` value like:
+    &quot;rows&quot;: [ # Each element in `rows` is a row whose format is defined by
+        # metadata.row_type. The ith element
+        # in each row matches the ith field in
+        # metadata.row_type. Elements are
+        # encoded based on type as described
+        # here.
+      [
+        &quot;&quot;,
+      ],
+    ],
+    &quot;metadata&quot;: { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
+      &quot;rowType&quot;: { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
+          # set.  For example, a SQL query like `&quot;SELECT UserId, UserName FROM
+          # Users&quot;` could return a `row_type` value like:
           #
-          #     "fields": [
-          #       { "name": "UserId", "type": { "code": "INT64" } },
-          #       { "name": "UserName", "type": { "code": "STRING" } },
+          #     &quot;fields&quot;: [
+          #       { &quot;name&quot;: &quot;UserId&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;INT64&quot; } },
+          #       { &quot;name&quot;: &quot;UserName&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;STRING&quot; } },
           #     ]
-        "fields": [ # The list of fields that make up this struct. Order is
+        &quot;fields&quot;: [ # The list of fields that make up this struct. Order is
             # significant, because values of this struct type are represented as
             # lists, where the order of field values matches the order of
             # fields in the StructType. In turn, the order of fields
             # matches the order of columns in a read request, or the order of
             # fields in the `SELECT` clause of a query.
           { # Message representing a single field of a struct.
-            "type": # Object with schema name: Type # The type of the field.
-            "name": "A String", # The name of the field. For reads, this is the column name. For
-                # SQL queries, it is the column alias (e.g., `"Word"` in the
-                # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
-                # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
-                # columns might have an empty name (e.g., !"SELECT
-                # UPPER(ColName)"`). Note that a query result can contain
+            &quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
+                # SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
+                # query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
+                # `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
+                # columns might have an empty name (e.g., `&quot;SELECT
+                # UPPER(ColName)&quot;`). Note that a query result can contain
                 # multiple fields with the same name.
+            &quot;type&quot;: # Object with schema name: Type # The type of the field.
           },
         ],
       },
-      "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
+      &quot;transaction&quot;: { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
           # information about the new transaction is yielded here.
-        "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
-            # for the transaction. Not returned by default: see
-            # TransactionOptions.ReadOnly.return_read_timestamp.
-            #
-            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-            # Example: `"2014-10-02T15:01:23.045123456Z"`.
-        "id": "A String", # `id` may be used to identify the transaction in subsequent
+        &quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
             # Read,
             # ExecuteSql,
             # Commit, or
@@ -3299,6 +3293,12 @@
             #
             # Single-use read-only transactions do not have IDs, because
             # single-use transactions do not support multiple requests.
+        &quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
+            # for the transaction. Not returned by default: see
+            # TransactionOptions.ReadOnly.return_read_timestamp.
+            #
+            # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+            # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
       },
     },
   }</pre>
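Since `rows` is positional, pairing each row with `metadata.rowType.fields` recovers column names. A sketch, reusing `response` from the executeSql sketch above (note that empty or duplicate field names would collide in the dict):

```python
fields = response['metadata']['rowType']['fields']
names = [f.get('name', '') for f in fields]
for row in response.get('rows', []):
    print(dict(zip(names, row)))  # ith value pairs with ith field
```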
@@ -3319,7 +3319,10 @@
 
 { # The request for ExecuteSql and
       # ExecuteStreamingSql.
-    "transaction": { # This message is used to select the transaction in which a # The transaction to use.
+    &quot;queryMode&quot;: &quot;A String&quot;, # Used to control the amount of debugging information returned in
+        # ResultSetStats. If partition_token is set, query_mode can only
+        # be set to QueryMode.NORMAL.
+    &quot;transaction&quot;: { # This message is used to select the transaction in which a # The transaction to use.
         # 
         # For queries, if none is provided, the default is a temporary read-only
         # transaction with strong concurrency.
@@ -3333,356 +3336,7 @@
         # ExecuteSql call runs.
         #
         # See TransactionOptions for more information about transactions.
-      "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
-          # it. The transaction ID of the new transaction is returned in
-          # ResultSetMetadata.transaction, which is a Transaction.
-          #
-          #
-          # Each session can have at most one active transaction at a time. After the
-          # active transaction is completed, the session can immediately be
-          # re-used for the next transaction. It is not necessary to create a
-          # new session for each transaction.
-          #
-          # # Transaction Modes
-          #
-          # Cloud Spanner supports three transaction modes:
-          #
-          #   1. Locking read-write. This type of transaction is the only way
-          #      to write data into Cloud Spanner. These transactions rely on
-          #      pessimistic locking and, if necessary, two-phase commit.
-          #      Locking read-write transactions may abort, requiring the
-          #      application to retry.
-          #
-          #   2. Snapshot read-only. This transaction type provides guaranteed
-          #      consistency across several reads, but does not allow
-          #      writes. Snapshot read-only transactions can be configured to
-          #      read at timestamps in the past. Snapshot read-only
-          #      transactions do not need to be committed.
-          #
-          #   3. Partitioned DML. This type of transaction is used to execute
-          #      a single Partitioned DML statement. Partitioned DML partitions
-          #      the key space and runs the DML statement over each partition
-          #      in parallel using separate, internal transactions that commit
-          #      independently. Partitioned DML transactions do not need to be
-          #      committed.
-          #
-          # For transactions that only read, snapshot read-only transactions
-          # provide simpler semantics and are almost always faster. In
-          # particular, read-only transactions do not take locks, so they do
-          # not conflict with read-write transactions. As a consequence of not
-          # taking locks, they also do not abort, so retry loops are not needed.
-          #
-          # Transactions may only read/write data in a single database. They
-          # may, however, read/write data in different tables within that
-          # database.
-          #
-          # ## Locking Read-Write Transactions
-          #
-          # Locking transactions may be used to atomically read-modify-write
-          # data anywhere in a database. This type of transaction is externally
-          # consistent.
-          #
-          # Clients should attempt to minimize the amount of time a transaction
-          # is active. Faster transactions commit with higher probability
-          # and cause less contention. Cloud Spanner attempts to keep read locks
-          # active as long as the transaction continues to do reads, and the
-          # transaction has not been terminated by
-          # Commit or
-          # Rollback.  Long periods of
-          # inactivity at the client may cause Cloud Spanner to release a
-          # transaction's locks and abort it.
-          #
-          # Conceptually, a read-write transaction consists of zero or more
-          # reads or SQL statements followed by
-          # Commit. At any time before
-          # Commit, the client can send a
-          # Rollback request to abort the
-          # transaction.
-          #
-          # ### Semantics
-          #
-          # Cloud Spanner can commit the transaction if all read locks it acquired
-          # are still valid at commit time, and it is able to acquire write
-          # locks for all writes. Cloud Spanner can abort the transaction for any
-          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
-          # that the transaction has not modified any user data in Cloud Spanner.
-          #
-          # Unless the transaction commits, Cloud Spanner makes no guarantees about
-          # how long the transaction's locks were held for. It is an error to
-          # use Cloud Spanner locks for any sort of mutual exclusion other than
-          # between Cloud Spanner transactions themselves.
-          #
-          # ### Retrying Aborted Transactions
-          #
-          # When a transaction aborts, the application can choose to retry the
-          # whole transaction again. To maximize the chances of successfully
-          # committing the retry, the client should execute the retry in the
-          # same session as the original attempt. The original session's lock
-          # priority increases with each consecutive abort, meaning that each
-          # attempt has a slightly better chance of success than the previous.
-          #
-          # Under some circumstances (e.g., many transactions attempting to
-          # modify the same row(s)), a transaction can abort many times in a
-          # short period before successfully committing. Thus, it is not a good
-          # idea to cap the number of retries a transaction can attempt;
-          # instead, it is better to limit the total amount of wall time spent
-          # retrying.
-          #
-          # ### Idle Transactions
-          #
-          # A transaction is considered idle if it has no outstanding reads or
-          # SQL queries and has not started a read or SQL query within the last 10
-          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
-          # don't hold on to locks indefinitely. In that case, the commit will
-          # fail with error `ABORTED`.
-          #
-          # If this behavior is undesirable, periodically executing a simple
-          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
-          # transaction from becoming idle.
-          #
-          # ## Snapshot Read-Only Transactions
-          #
-          # Snapshot read-only transactions provides a simpler method than
-          # locking read-write transactions for doing several consistent
-          # reads. However, this type of transaction does not support writes.
-          #
-          # Snapshot transactions do not take locks. Instead, they work by
-          # choosing a Cloud Spanner timestamp, then executing all reads at that
-          # timestamp. Since they do not acquire locks, they do not block
-          # concurrent read-write transactions.
-          #
-          # Unlike locking read-write transactions, snapshot read-only
-          # transactions never abort. They can fail if the chosen read
-          # timestamp is garbage collected; however, the default garbage
-          # collection policy is generous enough that most applications do not
-          # need to worry about this in practice.
-          #
-          # Snapshot read-only transactions do not need to call
-          # Commit or
-          # Rollback (and in fact are not
-          # permitted to do so).
-          #
-          # To execute a snapshot transaction, the client specifies a timestamp
-          # bound, which tells Cloud Spanner how to choose a read timestamp.
-          #
-          # The types of timestamp bound are:
-          #
-          #   - Strong (the default).
-          #   - Bounded staleness.
-          #   - Exact staleness.
-          #
-          # If the Cloud Spanner database to be read is geographically distributed,
-          # stale read-only transactions can execute more quickly than strong
-          # or read-write transaction, because they are able to execute far
-          # from the leader replica.
-          #
-          # Each type of timestamp bound is discussed in detail below.
-          #
-          # ### Strong
-          #
-          # Strong reads are guaranteed to see the effects of all transactions
-          # that have committed before the start of the read. Furthermore, all
-          # rows yielded by a single read are consistent with each other -- if
-          # any part of the read observes a transaction, all parts of the read
-          # see the transaction.
-          #
-          # Strong reads are not repeatable: two consecutive strong read-only
-          # transactions might return inconsistent results if there are
-          # concurrent writes. If consistency across reads is required, the
-          # reads should be executed within a transaction or at an exact read
-          # timestamp.
-          #
-          # See TransactionOptions.ReadOnly.strong.
-          #
-          # ### Exact Staleness
-          #
-          # These timestamp bounds execute reads at a user-specified
-          # timestamp. Reads at a timestamp are guaranteed to see a consistent
-          # prefix of the global transaction history: they observe
-          # modifications done by all transactions with a commit timestamp &lt;=
-          # the read timestamp, and observe none of the modifications done by
-          # transactions with a larger commit timestamp. They will block until
-          # all conflicting transactions that may be assigned commit timestamps
-          # &lt;= the read timestamp have finished.
-          #
-          # The timestamp can either be expressed as an absolute Cloud Spanner commit
-          # timestamp or a staleness relative to the current time.
-          #
-          # These modes do not require a "negotiation phase" to pick a
-          # timestamp. As a result, they execute slightly faster than the
-          # equivalent boundedly stale concurrency modes. On the other hand,
-          # boundedly stale reads usually return fresher results.
-          #
-          # See TransactionOptions.ReadOnly.read_timestamp and
-          # TransactionOptions.ReadOnly.exact_staleness.
-          #
-          # ### Bounded Staleness
-          #
-          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
-          # subject to a user-provided staleness bound. Cloud Spanner chooses the
-          # newest timestamp within the staleness bound that allows execution
-          # of the reads at the closest available replica without blocking.
-          #
-          # All rows yielded are consistent with each other -- if any part of
-          # the read observes a transaction, all parts of the read see the
-          # transaction. Boundedly stale reads are not repeatable: two stale
-          # reads, even if they use the same staleness bound, can execute at
-          # different timestamps and thus return inconsistent results.
-          #
-          # Boundedly stale reads execute in two phases: the first phase
-          # negotiates a timestamp among all replicas needed to serve the
-          # read. In the second phase, reads are executed at the negotiated
-          # timestamp.
-          #
-          # As a result of the two phase execution, bounded staleness reads are
-          # usually a little slower than comparable exact staleness
-          # reads. However, they are typically able to return fresher
-          # results, and are more likely to execute at the closest replica.
-          #
-          # Because the timestamp negotiation requires up-front knowledge of
-          # which rows will be read, it can only be used with single-use
-          # read-only transactions.
-          #
-          # See TransactionOptions.ReadOnly.max_staleness and
-          # TransactionOptions.ReadOnly.min_read_timestamp.
-          #
-          # ### Old Read Timestamps and Garbage Collection
-          #
-          # Cloud Spanner continuously garbage collects deleted and overwritten data
-          # in the background to reclaim storage space. This process is known
-          # as "version GC". By default, version GC reclaims versions after they
-          # are one hour old. Because of this, Cloud Spanner cannot perform reads
-          # at read timestamps more than one hour in the past. This
-          # restriction also applies to in-progress reads and/or SQL queries whose
-          # timestamp become too old while executing. Reads and SQL queries with
-          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
-          #
-          # ## Partitioned DML Transactions
-          #
-          # Partitioned DML transactions are used to execute DML statements with a
-          # different execution strategy that provides different, and often better,
-          # scalability properties for large, table-wide operations than DML in a
-          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
-          # should prefer using ReadWrite transactions.
-          #
-          # Partitioned DML partitions the keyspace and runs the DML statement on each
-          # partition in separate, internal transactions. These transactions commit
-          # automatically when complete, and run independently from one another.
-          #
-          # To reduce lock contention, this execution strategy only acquires read locks
-          # on rows that match the WHERE clause of the statement. Additionally, the
-          # smaller per-partition transactions hold locks for less time.
-          #
-          # That said, Partitioned DML is not a drop-in replacement for standard DML used
-          # in ReadWrite transactions.
-          #
-          #  - The DML statement must be fully-partitionable. Specifically, the statement
-          #    must be expressible as the union of many statements which each access only
-          #    a single row of the table.
-          #
-          #  - The statement is not applied atomically to all rows of the table. Rather,
-          #    the statement is applied atomically to partitions of the table, in
-          #    independent transactions. Secondary index rows are updated atomically
-          #    with the base table rows.
-          #
-          #  - Partitioned DML does not guarantee exactly-once execution semantics
-          #    against a partition. The statement will be applied at least once to each
-          #    partition. It is strongly recommended that the DML statement should be
-          #    idempotent to avoid unexpected results. For instance, it is potentially
-          #    dangerous to run a statement such as
-          #    `UPDATE table SET column = column + 1` as it could be run multiple times
-          #    against some rows.
-          #
-          #  - The partitions are committed automatically - there is no support for
-          #    Commit or Rollback. If the call returns an error, or if the client issuing
-          #    the ExecuteSql call dies, it is possible that some rows had the statement
-          #    executed on them successfully. It is also possible that statement was
-          #    never executed against other rows.
-          #
-          #  - Partitioned DML transactions may only contain the execution of a single
-          #    DML statement via ExecuteSql or ExecuteStreamingSql.
-          #
-          #  - If any error is encountered during the execution of the partitioned DML
-          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
-          #    value that cannot be stored due to schema constraints), then the
-          #    operation is stopped at that point and an error is returned. It is
-          #    possible that at this point, some partitions have been committed (or even
-          #    committed multiple times), and other partitions have not been run at all.
-          #
-          # Given the above, Partitioned DML is good fit for large, database-wide,
-          # operations that are idempotent, such as deleting old rows from a very large
-          # table.
-        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
-            #
-            # Authorization to begin a read-write transaction requires
-            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
-            # on the `session` resource.
-            # transaction type has no options.
-        },
-        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
-            #
-            # Authorization to begin a read-only transaction requires
-            # `spanner.databases.beginReadOnlyTransaction` permission
-            # on the `session` resource.
-          "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
-              #
-              # This is useful for requesting fresher data than some previous
-              # read, or data that is fresh enough to observe the effects of some
-              # previously committed transaction whose timestamp is known.
-              #
-              # Note that this option can only be used in single-use transactions.
-              #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
-              # reads at a specific timestamp are repeatable; the same read at
-              # the same timestamp always returns the same data. If the
-              # timestamp is in the future, the read will block until the
-              # specified timestamp, modulo the read's deadline.
-              #
-              # Useful for large scale consistent reads such as mapreduces, or
-              # for coordinating many reads against a consistent snapshot of the
-              # data.
-              #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
-              # seconds. Guarantees that all writes that have committed more
-              # than the specified number of seconds ago are visible. Because
-              # Cloud Spanner chooses the exact timestamp, this mode works even if
-              # the client's local clock is substantially skewed from Cloud Spanner
-              # commit timestamps.
-              #
-              # Useful for reading the freshest data available at a nearby
-              # replica, while bounding the possible staleness if the local
-              # replica has fallen behind.
-              #
-              # Note that this option can only be used in single-use
-              # transactions.
-          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
-              # old. The timestamp is chosen soon after the read is started.
-              #
-              # Guarantees that all writes that have committed more than the
-              # specified number of seconds ago are visible. Because Cloud Spanner
-              # chooses the exact timestamp, this mode works even if the client's
-              # local clock is substantially skewed from Cloud Spanner commit
-              # timestamps.
-              #
-              # Useful for reading at nearby replicas without the distributed
-              # timestamp negotiation overhead of `max_staleness`.
-          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
-              # the Transaction message that describes the transaction.
-          "strong": True or False, # Read at a timestamp where all previously committed transactions
-              # are visible.
-        },
-        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
-            #
-            # Authorization to begin a Partitioned DML transaction requires
-            # `spanner.databases.beginPartitionedDmlTransaction` permission
-            # on the `session` resource.
-        },
-      },
-      "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
+      &quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
           # This is the most efficient way to execute a transaction that
           # consists of a single SQL query.
           #
@@ -3739,7 +3393,7 @@
           # Commit or
           # Rollback.  Long periods of
           # inactivity at the client may cause Cloud Spanner to release a
-          # transaction's locks and abort it.
+          # transaction&#x27;s locks and abort it.
           #
           # Conceptually, a read-write transaction consists of zero or more
           # reads or SQL statements followed by
@@ -3757,7 +3411,7 @@
           # that the transaction has not modified any user data in Cloud Spanner.
           #
           # Unless the transaction commits, Cloud Spanner makes no guarantees about
-          # how long the transaction's locks were held for. It is an error to
+          # how long the transaction&#x27;s locks were held for. It is an error to
           # use Cloud Spanner locks for any sort of mutual exclusion other than
           # between Cloud Spanner transactions themselves.
           #
@@ -3766,7 +3420,7 @@
           # When a transaction aborts, the application can choose to retry the
           # whole transaction again. To maximize the chances of successfully
           # committing the retry, the client should execute the retry in the
-          # same session as the original attempt. The original session's lock
+          # same session as the original attempt. The original session&#x27;s lock
           # priority increases with each consecutive abort, meaning that each
           # attempt has a slightly better chance of success than the previous.
           #
@@ -3782,7 +3436,7 @@
           # A transaction is considered idle if it has no outstanding reads or
           # SQL queries and has not started a read or SQL query within the last 10
           # seconds. Idle transactions can be aborted by Cloud Spanner so that they
-          # don't hold on to locks indefinitely. In that case, the commit will
+          # don&#x27;t hold on to locks indefinitely. In that case, the commit will
           # fail with error `ABORTED`.
           #
           # If this behavior is undesirable, periodically executing a simple
@@ -3857,7 +3511,7 @@
           # The timestamp can either be expressed as an absolute Cloud Spanner commit
           # timestamp or a staleness relative to the current time.
           #
-          # These modes do not require a "negotiation phase" to pick a
+          # These modes do not require a &quot;negotiation phase&quot; to pick a
           # timestamp. As a result, they execute slightly faster than the
           # equivalent boundedly stale concurrency modes. On the other hand,
           # boundedly stale reads usually return fresher results.
@@ -3899,7 +3553,7 @@
           #
           # Cloud Spanner continuously garbage collects deleted and overwritten data
           # in the background to reclaim storage space. This process is known
-          # as "version GC". By default, version GC reclaims versions after they
+          # as &quot;version GC&quot;. By default, version GC reclaims versions after they
           # are one hour old. Because of this, Cloud Spanner cannot perform reads
           # at read timestamps more than one hour in the past. This
           # restriction also applies to in-progress reads and/or SQL queries whose
@@ -3961,19 +3615,18 @@
          # Given the above, Partitioned DML is a good fit for large, database-wide
          # operations that are idempotent, such as deleting old rows from a very large
           # table.
-        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+        &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
             #
-            # Authorization to begin a read-write transaction requires
-            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # Authorization to begin a Partitioned DML transaction requires
+            # `spanner.databases.beginPartitionedDmlTransaction` permission
             # on the `session` resource.
-            # transaction type has no options.
         },
-        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
+        &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
             #
             # Authorization to begin a read-only transaction requires
             # `spanner.databases.beginReadOnlyTransaction` permission
             # on the `session` resource.
-          "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
+          &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
               #
               # This is useful for requesting fresher data than some previous
               # read, or data that is fresh enough to observe the effects of some
@@ -3981,25 +3634,25 @@
               #
               # Note that this option can only be used in single-use transactions.
               #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
               # reads at a specific timestamp are repeatable; the same read at
               # the same timestamp always returns the same data. If the
               # timestamp is in the future, the read will block until the
-              # specified timestamp, modulo the read's deadline.
+              # specified timestamp, modulo the read&#x27;s deadline.
               #
               # Useful for large-scale consistent reads such as mapreduces, or
               # for coordinating many reads against a consistent snapshot of the
               # data.
               #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
               # seconds. Guarantees that all writes that have committed more
               # than the specified number of seconds ago are visible. Because
               # Cloud Spanner chooses the exact timestamp, this mode works even if
-              # the client's local clock is substantially skewed from Cloud Spanner
+              # the client&#x27;s local clock is substantially skewed from Cloud Spanner
               # commit timestamps.
               #
               # Useful for reading the freshest data available at a nearby
@@ -4008,32 +3661,392 @@
               #
               # Note that this option can only be used in single-use
               # transactions.
-          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
+          &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
+              # the Transaction message that describes the transaction.
+          &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
               # old. The timestamp is chosen soon after the read is started.
               #
               # Guarantees that all writes that have committed more than the
               # specified number of seconds ago are visible. Because Cloud Spanner
-              # chooses the exact timestamp, this mode works even if the client's
+              # chooses the exact timestamp, this mode works even if the client&#x27;s
               # local clock is substantially skewed from Cloud Spanner commit
               # timestamps.
               #
               # Useful for reading at nearby replicas without the distributed
               # timestamp negotiation overhead of `max_staleness`.
-          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
-              # the Transaction message that describes the transaction.
-          "strong": True or False, # Read at a timestamp where all previously committed transactions
+          &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
               # are visible.
         },
-        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
+        &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+            #
+            # Authorization to begin a read-write transaction requires
+            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # on the `session` resource.
+            # transaction type has no options.
+        },
+      },
+      &quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
+          # it. The transaction ID of the new transaction is returned in
+          # ResultSetMetadata.transaction, which is a Transaction.
+          #
+          #
+          # Each session can have at most one active transaction at a time. After the
+          # active transaction is completed, the session can immediately be
+          # re-used for the next transaction. It is not necessary to create a
+          # new session for each transaction.
+          #
+          # # Transaction Modes
+          #
+          # Cloud Spanner supports three transaction modes:
+          #
+          #   1. Locking read-write. This type of transaction is the only way
+          #      to write data into Cloud Spanner. These transactions rely on
+          #      pessimistic locking and, if necessary, two-phase commit.
+          #      Locking read-write transactions may abort, requiring the
+          #      application to retry.
+          #
+          #   2. Snapshot read-only. This transaction type provides guaranteed
+          #      consistency across several reads, but does not allow
+          #      writes. Snapshot read-only transactions can be configured to
+          #      read at timestamps in the past. Snapshot read-only
+          #      transactions do not need to be committed.
+          #
+          #   3. Partitioned DML. This type of transaction is used to execute
+          #      a single Partitioned DML statement. Partitioned DML partitions
+          #      the key space and runs the DML statement over each partition
+          #      in parallel using separate, internal transactions that commit
+          #      independently. Partitioned DML transactions do not need to be
+          #      committed.
+          #
+          # For transactions that only read, snapshot read-only transactions
+          # provide simpler semantics and are almost always faster. In
+          # particular, read-only transactions do not take locks, so they do
+          # not conflict with read-write transactions. As a consequence of not
+          # taking locks, they also do not abort, so retry loops are not needed.
+          #
+          # Transactions may only read/write data in a single database. They
+          # may, however, read/write data in different tables within that
+          # database.
+          #
+          # ## Locking Read-Write Transactions
+          #
+          # Locking transactions may be used to atomically read-modify-write
+          # data anywhere in a database. This type of transaction is externally
+          # consistent.
+          #
+          # Clients should attempt to minimize the amount of time a transaction
+          # is active. Faster transactions commit with higher probability
+          # and cause less contention. Cloud Spanner attempts to keep read locks
+          # active as long as the transaction continues to do reads, and the
+          # transaction has not been terminated by
+          # Commit or
+          # Rollback.  Long periods of
+          # inactivity at the client may cause Cloud Spanner to release a
+          # transaction&#x27;s locks and abort it.
+          #
+          # Conceptually, a read-write transaction consists of zero or more
+          # reads or SQL statements followed by
+          # Commit. At any time before
+          # Commit, the client can send a
+          # Rollback request to abort the
+          # transaction.
+          #
+          # ### Semantics
+          #
+          # Cloud Spanner can commit the transaction if all read locks it acquired
+          # are still valid at commit time, and it is able to acquire write
+          # locks for all writes. Cloud Spanner can abort the transaction for any
+          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
+          # that the transaction has not modified any user data in Cloud Spanner.
+          #
+          # Unless the transaction commits, Cloud Spanner makes no guarantees about
+          # how long the transaction&#x27;s locks were held for. It is an error to
+          # use Cloud Spanner locks for any sort of mutual exclusion other than
+          # between Cloud Spanner transactions themselves.
+          #
+          # ### Retrying Aborted Transactions
+          #
+          # When a transaction aborts, the application can choose to retry the
+          # whole transaction again. To maximize the chances of successfully
+          # committing the retry, the client should execute the retry in the
+          # same session as the original attempt. The original session&#x27;s lock
+          # priority increases with each consecutive abort, meaning that each
+          # attempt has a slightly better chance of success than the previous.
+          #
+          # Under some circumstances (e.g., many transactions attempting to
+          # modify the same row(s)), a transaction can abort many times in a
+          # short period before successfully committing. Thus, it is not a good
+          # idea to cap the number of retries a transaction can attempt;
+          # instead, it is better to limit the total amount of wall time spent
+          # retrying.
+          #
+          # ### Idle Transactions
+          #
+          # A transaction is considered idle if it has no outstanding reads or
+          # SQL queries and has not started a read or SQL query within the last 10
+          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
+          # don&#x27;t hold on to locks indefinitely. In that case, the commit will
+          # fail with error `ABORTED`.
+          #
+          # If this behavior is undesirable, periodically executing a simple
+          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
+          # transaction from becoming idle.
+          #
+          # ## Snapshot Read-Only Transactions
+          #
+          # Snapshot read-only transactions provide a simpler method than
+          # locking read-write transactions for doing several consistent
+          # reads. However, this type of transaction does not support writes.
+          #
+          # Snapshot transactions do not take locks. Instead, they work by
+          # choosing a Cloud Spanner timestamp, then executing all reads at that
+          # timestamp. Since they do not acquire locks, they do not block
+          # concurrent read-write transactions.
+          #
+          # Unlike locking read-write transactions, snapshot read-only
+          # transactions never abort. They can fail if the chosen read
+          # timestamp is garbage collected; however, the default garbage
+          # collection policy is generous enough that most applications do not
+          # need to worry about this in practice.
+          #
+          # Snapshot read-only transactions do not need to call
+          # Commit or
+          # Rollback (and in fact are not
+          # permitted to do so).
+          #
+          # To execute a snapshot transaction, the client specifies a timestamp
+          # bound, which tells Cloud Spanner how to choose a read timestamp.
+          #
+          # The types of timestamp bound are:
+          #
+          #   - Strong (the default).
+          #   - Bounded staleness.
+          #   - Exact staleness.
+          #
+          # If the Cloud Spanner database to be read is geographically distributed,
+          # stale read-only transactions can execute more quickly than strong
+          # or read-write transactions, because they are able to execute far
+          # from the leader replica.
+          #
+          # Each type of timestamp bound is discussed in detail below.
+          #
+          # ### Strong
+          #
+          # Strong reads are guaranteed to see the effects of all transactions
+          # that have committed before the start of the read. Furthermore, all
+          # rows yielded by a single read are consistent with each other -- if
+          # any part of the read observes a transaction, all parts of the read
+          # see the transaction.
+          #
+          # Strong reads are not repeatable: two consecutive strong read-only
+          # transactions might return inconsistent results if there are
+          # concurrent writes. If consistency across reads is required, the
+          # reads should be executed within a transaction or at an exact read
+          # timestamp.
+          #
+          # See TransactionOptions.ReadOnly.strong.
+          #
+          # ### Exact Staleness
+          #
+          # These timestamp bounds execute reads at a user-specified
+          # timestamp. Reads at a timestamp are guaranteed to see a consistent
+          # prefix of the global transaction history: they observe
+          # modifications done by all transactions with a commit timestamp &lt;=
+          # the read timestamp, and observe none of the modifications done by
+          # transactions with a larger commit timestamp. They will block until
+          # all conflicting transactions that may be assigned commit timestamps
+          # &lt;= the read timestamp have finished.
+          #
+          # The timestamp can either be expressed as an absolute Cloud Spanner commit
+          # timestamp or a staleness relative to the current time.
+          #
+          # These modes do not require a &quot;negotiation phase&quot; to pick a
+          # timestamp. As a result, they execute slightly faster than the
+          # equivalent boundedly stale concurrency modes. On the other hand,
+          # boundedly stale reads usually return fresher results.
+          #
+          # See TransactionOptions.ReadOnly.read_timestamp and
+          # TransactionOptions.ReadOnly.exact_staleness.
+          #
+          # ### Bounded Staleness
+          #
+          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
+          # subject to a user-provided staleness bound. Cloud Spanner chooses the
+          # newest timestamp within the staleness bound that allows execution
+          # of the reads at the closest available replica without blocking.
+          #
+          # All rows yielded are consistent with each other -- if any part of
+          # the read observes a transaction, all parts of the read see the
+          # transaction. Boundedly stale reads are not repeatable: two stale
+          # reads, even if they use the same staleness bound, can execute at
+          # different timestamps and thus return inconsistent results.
+          #
+          # Boundedly stale reads execute in two phases: the first phase
+          # negotiates a timestamp among all replicas needed to serve the
+          # read. In the second phase, reads are executed at the negotiated
+          # timestamp.
+          #
+          # As a result of the two-phase execution, bounded staleness reads are
+          # usually a little slower than comparable exact staleness
+          # reads. However, they are typically able to return fresher
+          # results, and are more likely to execute at the closest replica.
+          #
+          # Because the timestamp negotiation requires up-front knowledge of
+          # which rows will be read, it can only be used with single-use
+          # read-only transactions.
+          #
+          # See TransactionOptions.ReadOnly.max_staleness and
+          # TransactionOptions.ReadOnly.min_read_timestamp.
+          #
+          # ### Old Read Timestamps and Garbage Collection
+          #
+          # Cloud Spanner continuously garbage collects deleted and overwritten data
+          # in the background to reclaim storage space. This process is known
+          # as &quot;version GC&quot;. By default, version GC reclaims versions after they
+          # are one hour old. Because of this, Cloud Spanner cannot perform reads
+          # at read timestamps more than one hour in the past. This
+          # restriction also applies to in-progress reads and/or SQL queries whose
+          # timestamps become too old while executing. Reads and SQL queries with
+          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
+          #
+          # ## Partitioned DML Transactions
+          #
+          # Partitioned DML transactions are used to execute DML statements with a
+          # different execution strategy that provides different, and often better,
+          # scalability properties for large, table-wide operations than DML in a
+          # ReadWrite transaction. Smaller-scoped statements, such as those in an
+          # OLTP workload, should prefer ReadWrite transactions.
+          #
+          # Partitioned DML partitions the keyspace and runs the DML statement on each
+          # partition in separate, internal transactions. These transactions commit
+          # automatically when complete, and run independently from one another.
+          #
+          # To reduce lock contention, this execution strategy only acquires read locks
+          # on rows that match the WHERE clause of the statement. Additionally, the
+          # smaller per-partition transactions hold locks for less time.
+          #
+          # That said, Partitioned DML is not a drop-in replacement for standard DML used
+          # in ReadWrite transactions.
+          #
+          #  - The DML statement must be fully-partitionable. Specifically, the statement
+          #    must be expressible as the union of many statements which each access only
+          #    a single row of the table.
+          #
+          #  - The statement is not applied atomically to all rows of the table. Rather,
+          #    the statement is applied atomically to partitions of the table, in
+          #    independent transactions. Secondary index rows are updated atomically
+          #    with the base table rows.
+          #
+          #  - Partitioned DML does not guarantee exactly-once execution semantics
+          #    against a partition. The statement will be applied at least once to each
+          #    partition. It is strongly recommended that the DML statement be
+          #    idempotent to avoid unexpected results. For instance, it is potentially
+          #    dangerous to run a statement such as
+          #    `UPDATE table SET column = column + 1` as it could be run multiple times
+          #    against some rows.
+          #
+          #  - The partitions are committed automatically - there is no support for
+          #    Commit or Rollback. If the call returns an error, or if the client issuing
+          #    the ExecuteSql call dies, it is possible that some rows had the statement
+          #    executed on them successfully. It is also possible that the statement was
+          #    never executed against other rows.
+          #
+          #  - Partitioned DML transactions may only contain the execution of a single
+          #    DML statement via ExecuteSql or ExecuteStreamingSql.
+          #
+          #  - If any error is encountered during the execution of the partitioned DML
+          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
+          #    value that cannot be stored due to schema constraints), then the
+          #    operation is stopped at that point and an error is returned. It is
+          #    possible that at this point, some partitions have been committed (or even
+          #    committed multiple times), and other partitions have not been run at all.
+          #
+          # Given the above, Partitioned DML is a good fit for large, database-wide
+          # operations that are idempotent, such as deleting old rows from a very large
+          # table.
+        &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
             #
             # Authorization to begin a Partitioned DML transaction requires
             # `spanner.databases.beginPartitionedDmlTransaction` permission
             # on the `session` resource.
         },
+        &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
+            #
+            # Authorization to begin a read-only transaction requires
+            # `spanner.databases.beginReadOnlyTransaction` permission
+            # on the `session` resource.
+          &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
+              #
+              # This is useful for requesting fresher data than some previous
+              # read, or data that is fresh enough to observe the effects of some
+              # previously committed transaction whose timestamp is known.
+              #
+              # Note that this option can only be used in single-use transactions.
+              #
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
+              # reads at a specific timestamp are repeatable; the same read at
+              # the same timestamp always returns the same data. If the
+              # timestamp is in the future, the read will block until the
+              # specified timestamp, modulo the read&#x27;s deadline.
+              #
+              # Useful for large-scale consistent reads such as mapreduces, or
+              # for coordinating many reads against a consistent snapshot of the
+              # data.
+              #
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
+              # seconds. Guarantees that all writes that have committed more
+              # than the specified number of seconds ago are visible. Because
+              # Cloud Spanner chooses the exact timestamp, this mode works even if
+              # the client&#x27;s local clock is substantially skewed from Cloud Spanner
+              # commit timestamps.
+              #
+              # Useful for reading the freshest data available at a nearby
+              # replica, while bounding the possible staleness if the local
+              # replica has fallen behind.
+              #
+              # Note that this option can only be used in single-use
+              # transactions.
+          &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
+              # the Transaction message that describes the transaction.
+          &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
+              # old. The timestamp is chosen soon after the read is started.
+              #
+              # Guarantees that all writes that have committed more than the
+              # specified number of seconds ago are visible. Because Cloud Spanner
+              # chooses the exact timestamp, this mode works even if the client&#x27;s
+              # local clock is substantially skewed from Cloud Spanner commit
+              # timestamps.
+              #
+              # Useful for reading at nearby replicas without the distributed
+              # timestamp negotiation overhead of `max_staleness`.
+          &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
+              # are visible.
+        },
+        &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+            #
+            # Authorization to begin a read-write transaction requires
+            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # on the `session` resource.
+            # transaction type has no options.
+        },
       },
-      "id": "A String", # Execute the read or SQL query in a previously-started transaction.
+      &quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
     },
-    "seqno": "A String", # A per-transaction sequence number used to identify this request. This field
+    &quot;resumeToken&quot;: &quot;A String&quot;, # If this request is resuming a previously interrupted SQL statement
+        # execution, `resume_token` should be copied from the last
+        # PartialResultSet yielded before the interruption. Doing this
+        # enables the new SQL statement execution to resume where the last one left
+        # off. The rest of the request parameters must exactly match the
+        # request that yielded this token.
+    &quot;partitionToken&quot;: &quot;A String&quot;, # If present, results will be restricted to the specified partition
+        # previously created using PartitionQuery().  There must be an exact
+        # match for the values of fields common to this message and the
+        # PartitionQueryRequest message used to create this partition_token.
+    &quot;seqno&quot;: &quot;A String&quot;, # A per-transaction sequence number used to identify this request. This field
         # makes each request idempotent such that if the request is received multiple
         # times, at most one will succeed.
         # 
@@ -4043,17 +4056,7 @@
         # handled requests will yield the same response as the first execution.
         # 
         # Required for DML statements. Ignored for queries.
-    "resumeToken": "A String", # If this request is resuming a previously interrupted SQL statement
-        # execution, `resume_token` should be copied from the last
-        # PartialResultSet yielded before the interruption. Doing this
-        # enables the new SQL statement execution to resume where the last one left
-        # off. The rest of the request parameters must exactly match the
-        # request that yielded this token.
-    "partitionToken": "A String", # If present, results will be restricted to the specified partition
-        # previously created using PartitionQuery().  There must be an exact
-        # match for the values of fields common to this message and the
-        # PartitionQueryRequest message used to create this partition_token.
-    "paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
+    &quot;paramTypes&quot;: { # It is not always possible for Cloud Spanner to infer the right SQL type
         # from a JSON value.  For example, values of type `BYTES` and values
         # of type `STRING` both appear in params as JSON strings.
         # 
@@ -4061,40 +4064,55 @@
         # SQL type for some or all of the SQL statement parameters. See the
         # definition of Type for more information
         # about SQL types.
-      "a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
+      &quot;a_key&quot;: { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
           # table cell or returned from an SQL query.
-        "structType": { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
-            # provides type information for the struct's fields.
-          "fields": [ # The list of fields that make up this struct. Order is
+        &quot;arrayElementType&quot;: # Object with schema name: Type # If code == ARRAY, then `array_element_type`
+            # is the type of the array elements.
+        &quot;code&quot;: &quot;A String&quot;, # Required. The TypeCode for this type.
+        &quot;structType&quot;: { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
+            # provides type information for the struct&#x27;s fields.
+          &quot;fields&quot;: [ # The list of fields that make up this struct. Order is
               # significant, because values of this struct type are represented as
               # lists, where the order of field values matches the order of
               # fields in the StructType. In turn, the order of fields
               # matches the order of columns in a read request, or the order of
               # fields in the `SELECT` clause of a query.
             { # Message representing a single field of a struct.
-              "type": # Object with schema name: Type # The type of the field.
-              "name": "A String", # The name of the field. For reads, this is the column name. For
-                  # SQL queries, it is the column alias (e.g., `"Word"` in the
-                  # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
-                  # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
-                  # columns might have an empty name (e.g., !"SELECT
-                  # UPPER(ColName)"`). Note that a query result can contain
+              &quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
+                  # SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
+                  # query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
+                  # `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
+                  # columns might have an empty name (e.g., `&quot;SELECT
+                  # UPPER(ColName)&quot;`). Note that a query result can contain
                   # multiple fields with the same name.
+              &quot;type&quot;: # Object with schema name: Type # The type of the field.
             },
           ],
         },
-        "code": "A String", # Required. The TypeCode for this type.
-        "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
-            # is the type of the array elements.
       },
     },
-    "queryOptions": { # Query optimizer configuration. # Query optimizer configuration to use for the given query.
-      "optimizerVersion": "A String", # An option to control the selection of optimizer version.
+    &quot;sql&quot;: &quot;A String&quot;, # Required. The SQL string.
+    &quot;params&quot;: { # Parameter names and values that bind to placeholders in the SQL string.
+        # 
+        # A parameter placeholder consists of the `@` character followed by the
+        # parameter name (for example, `@firstName`). Parameter names can contain
+        # letters, numbers, and underscores.
+        # 
+        # Parameters can appear anywhere that a literal value is expected.  The same
+        # parameter name can be used more than once, for example:
+        # 
+        # `&quot;WHERE id &gt; @msg_id AND id &lt; @msg_id + 100&quot;`
+        # 
+        # It is an error to execute a SQL statement with unbound parameters.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+    },
+    &quot;queryOptions&quot;: { # Query optimizer configuration. # Query optimizer configuration to use for the given query.
+      &quot;optimizerVersion&quot;: &quot;A String&quot;, # An option to control the selection of optimizer version.
           #
           # This parameter allows individual queries to pick different query
           # optimizer versions.
           #
-          # Specifying "latest" as a value instructs Cloud Spanner to use the
+          # Specifying &quot;latest&quot; as a value instructs Cloud Spanner to use the
           # latest supported query optimizer version. If not specified, Cloud Spanner
           # uses optimizer version set at the database level options. Any other
           # positive integer (from the list of supported optimizer versions)
@@ -4106,24 +4124,6 @@
           #
           # The `optimizer_version` statement hint has precedence over this setting.
     },
-    "params": { # Parameter names and values that bind to placeholders in the SQL string.
-        # 
-        # A parameter placeholder consists of the `@` character followed by the
-        # parameter name (for example, `@firstName`). Parameter names can contain
-        # letters, numbers, and underscores.
-        # 
-        # Parameters can appear anywhere that a literal value is expected.  The same
-        # parameter name can be used more than once, for example:
-        # 
-        # `"WHERE id &gt; @msg_id AND id &lt; @msg_id + 100"`
-        # 
-        # It is an error to execute a SQL statement with unbound parameters.
-      "a_key": "", # Properties of the object.
-    },
-    "sql": "A String", # Required. The SQL string.
-    "queryMode": "A String", # Used to control the amount of debugging information returned in
-        # ResultSetStats. If partition_token is set, query_mode can only
-        # be set to QueryMode.NORMAL.
   }
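To make the parameter-binding rules above concrete, a hedged sketch of a
request body that binds parameters (the statement and values are illustrative;
`paramTypes` is only needed where the SQL type cannot be inferred from the
JSON value, as described above):

    body = {
        'sql': 'SELECT id, name FROM Users WHERE id > @min_id AND active = @flag',
        'params': {
            'min_id': '42',  # INT64 values travel as JSON strings
            'flag': True,
        },
        'paramTypes': {
            # Disambiguates the JSON string '42' as INT64 rather than STRING.
            'min_id': {'code': 'INT64'},
        },
    }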
 
   x__xgafv: string, V1 error format.
@@ -4137,15 +4137,7 @@
     { # Partial results from a streaming read or SQL query. Streaming reads and
       # SQL queries better tolerate large result sets, large rows, and large
       # values, but are a little trickier to consume.
-    "resumeToken": "A String", # Streaming calls might be interrupted for a variety of reasons, such
-        # as TCP connection loss. If this occurs, the stream of results can
-        # be resumed by re-sending the original request and including
-        # `resume_token`. Note that executing any other transaction in the
-        # same session invalidates the token.
-    "chunkedValue": True or False, # If true, then the final value in values is chunked, and must
-        # be combined with more values from subsequent `PartialResultSet`s
-        # to obtain a complete field value.
-    "values": [ # A streamed result set consists of a stream of values, which might
+    &quot;values&quot;: [ # A streamed result set consists of a stream of values, which might
         # be split into many `PartialResultSet` messages to accommodate
         # large rows and/or large values. Every N complete values defines a
         # row, where N is equal to the number of entries in
@@ -4154,7 +4146,7 @@
         # Most values are encoded based on type as described
         # here.
         #
-        # It is possible that the last value in values is "chunked",
+        # It is possible that the last value in values is &quot;chunked&quot;,
         # meaning that the rest of the value is sent in subsequent
         # `PartialResultSet`(s). This is denoted by the chunked_value
         # field. Two or more chunked values can be merged to form a
@@ -4172,172 +4164,85 @@
         # Some examples of merging:
         #
         #     # Strings are concatenated.
-        #     "foo", "bar" =&gt; "foobar"
+        #     &quot;foo&quot;, &quot;bar&quot; =&gt; &quot;foobar&quot;
         #
         #     # Lists of non-strings are concatenated.
         #     [2, 3], [4] =&gt; [2, 3, 4]
         #
         #     # Lists are concatenated, but the last and first elements are merged
         #     # because they are strings.
-        #     ["a", "b"], ["c", "d"] =&gt; ["a", "bc", "d"]
+        #     [&quot;a&quot;, &quot;b&quot;], [&quot;c&quot;, &quot;d&quot;] =&gt; [&quot;a&quot;, &quot;bc&quot;, &quot;d&quot;]
         #
         #     # Lists are concatenated, but the last and first elements are merged
         #     # because they are lists. Recursively, the last and first elements
         #     # of the inner lists are merged because they are strings.
-        #     ["a", ["b", "c"]], [["d"], "e"] =&gt; ["a", ["b", "cd"], "e"]
+        #     [&quot;a&quot;, [&quot;b&quot;, &quot;c&quot;]], [[&quot;d&quot;], &quot;e&quot;] =&gt; [&quot;a&quot;, [&quot;b&quot;, &quot;cd&quot;], &quot;e&quot;]
         #
         #     # Non-overlapping object fields are combined.
-        #     {"a": "1"}, {"b": "2"} =&gt; {"a": "1", "b": 2"}
+        #     {&quot;a&quot;: &quot;1&quot;}, {&quot;b&quot;: &quot;2&quot;} =&gt; {&quot;a&quot;: &quot;1&quot;, &quot;b&quot;: &quot;2&quot;}
         #
         #     # Overlapping object fields are merged.
-        #     {"a": "1"}, {"a": "2"} =&gt; {"a": "12"}
+        #     {&quot;a&quot;: &quot;1&quot;}, {&quot;a&quot;: &quot;2&quot;} =&gt; {&quot;a&quot;: &quot;12&quot;}
         #
         #     # Examples of merging objects containing lists of strings.
-        #     {"a": ["1"]}, {"a": ["2"]} =&gt; {"a": ["12"]}
+        #     {&quot;a&quot;: [&quot;1&quot;]}, {&quot;a&quot;: [&quot;2&quot;]} =&gt; {&quot;a&quot;: [&quot;12&quot;]}
         #
         # For a more complete example, suppose a streaming SQL query is
         # yielding a result set whose rows contain a single string
         # field. The following `PartialResultSet`s might be yielded:
         #
         #     {
-        #       "metadata": { ... }
-        #       "values": ["Hello", "W"]
-        #       "chunked_value": true
-        #       "resume_token": "Af65..."
+        #       &quot;metadata&quot;: { ... }
+        #       &quot;values&quot;: [&quot;Hello&quot;, &quot;W&quot;]
+        #       &quot;chunked_value&quot;: true
+        #       &quot;resume_token&quot;: &quot;Af65...&quot;
         #     }
         #     {
-        #       "values": ["orl"]
-        #       "chunked_value": true
-        #       "resume_token": "Bqp2..."
+        #       &quot;values&quot;: [&quot;orl&quot;]
+        #       &quot;chunked_value&quot;: true
+        #       &quot;resume_token&quot;: &quot;Bqp2...&quot;
         #     }
         #     {
-        #       "values": ["d"]
-        #       "resume_token": "Zx1B..."
+        #       &quot;values&quot;: [&quot;d&quot;]
+        #       &quot;resume_token&quot;: &quot;Zx1B...&quot;
         #     }
         #
         # This sequence of `PartialResultSet`s encodes two rows, one
-        # containing the field value `"Hello"`, and a second containing the
-        # field value `"World" = "W" + "orl" + "d"`.
-      "",
+        # containing the field value `&quot;Hello&quot;`, and a second containing the
+        # field value `&quot;World&quot; = &quot;W&quot; + &quot;orl&quot; + &quot;d&quot;`.
+      &quot;&quot;,
     ],
-    "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the statement that produced this
-        # streaming result set. These can be requested by setting
-        # ExecuteSqlRequest.query_mode and are sent
-        # only once with the last response in the stream.
-        # This field will also be present in the last response for DML
-        # statements.
-      "rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
-          # returns a lower bound of the rows modified.
-      "rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
-      "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
-        "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
-            # with the plan root. Each PlanNode's `id` corresponds to its index in
-            # `plan_nodes`.
-          { # Node information for nodes appearing in a QueryPlan.plan_nodes.
-            "index": 42, # The `PlanNode`'s index in node list.
-            "kind": "A String", # Used to determine the type of node. May be needed for visualizing
-                # different kinds of nodes differently. For example, If the node is a
-                # SCALAR node, it will have a condensed representation
-                # which can be used to directly embed a description of the node in its
-                # parent.
-            "displayName": "A String", # The display name for the node.
-            "executionStats": { # The execution statistics associated with the node, contained in a group of
-                # key-value pairs. Only present if the plan was returned as a result of a
-                # profile query. For example, number of executions, number of rows/time per
-                # execution etc.
-              "a_key": "", # Properties of the object.
-            },
-            "childLinks": [ # List of child node `index`es and their relationship to this parent.
-              { # Metadata associated with a parent-child relationship appearing in a
-                  # PlanNode.
-                "variable": "A String", # Only present if the child node is SCALAR and corresponds
-                    # to an output variable of the parent node. The field carries the name of
-                    # the output variable.
-                    # For example, a `TableScan` operator that reads rows from a table will
-                    # have child links to the `SCALAR` nodes representing the output variables
-                    # created for each column that is read by the operator. The corresponding
-                    # `variable` fields will be set to the variable names assigned to the
-                    # columns.
-                "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
-                    # distinguish between the build child and the probe child, or in the case
-                    # of the child being an output variable, to represent the tag associated
-                    # with the output variable.
-                "childIndex": 42, # The node to which the link points.
-              },
-            ],
-            "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
-                # `SCALAR` PlanNode(s).
-              "subqueries": { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
-                  # where the `description` string of this node references a `SCALAR`
-                  # subquery contained in the expression subtree rooted at this node. The
-                  # referenced `SCALAR` subquery may not necessarily be a direct child of
-                  # this node.
-                "a_key": 42,
-              },
-              "description": "A String", # A string representation of the expression subtree rooted at this node.
-            },
-            "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
-                # For example, a Parameter Reference node could have the following
-                # information in its metadata:
-                #
-                #     {
-                #       "parameter_reference": "param1",
-                #       "parameter_type": "array"
-                #     }
-              "a_key": "", # Properties of the object.
-            },
-          },
-        ],
-      },
-      "queryStats": { # Aggregated statistics from the execution of the query. Only present when
-          # the query is profiled. For example, a query could return the statistics as
-          # follows:
-          #
-          #     {
-          #       "rows_returned": "3",
-          #       "elapsed_time": "1.22 secs",
-          #       "cpu_time": "1.19 secs"
-          #     }
-        "a_key": "", # Properties of the object.
-      },
-    },
-    "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
+    &quot;metadata&quot;: { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
         # Only present in the first response.
-      "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
-          # set.  For example, a SQL query like `"SELECT UserId, UserName FROM
-          # Users"` could return a `row_type` value like:
+      &quot;rowType&quot;: { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
+          # set.  For example, a SQL query like `&quot;SELECT UserId, UserName FROM
+          # Users&quot;` could return a `row_type` value like:
           #
-          #     "fields": [
-          #       { "name": "UserId", "type": { "code": "INT64" } },
-          #       { "name": "UserName", "type": { "code": "STRING" } },
+          #     &quot;fields&quot;: [
+          #       { &quot;name&quot;: &quot;UserId&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;INT64&quot; } },
+          #       { &quot;name&quot;: &quot;UserName&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;STRING&quot; } },
           #     ]
-        "fields": [ # The list of fields that make up this struct. Order is
+        &quot;fields&quot;: [ # The list of fields that make up this struct. Order is
             # significant, because values of this struct type are represented as
             # lists, where the order of field values matches the order of
             # fields in the StructType. In turn, the order of fields
             # matches the order of columns in a read request, or the order of
             # fields in the `SELECT` clause of a query.
           { # Message representing a single field of a struct.
-            "type": # Object with schema name: Type # The type of the field.
-            "name": "A String", # The name of the field. For reads, this is the column name. For
-                # SQL queries, it is the column alias (e.g., `"Word"` in the
-                # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
-                # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
-                # columns might have an empty name (e.g., !"SELECT
-                # UPPER(ColName)"`). Note that a query result can contain
+            &quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
+                # SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
+                # query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
+                # `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
+                # columns might have an empty name (e.g., `&quot;SELECT
+                # UPPER(ColName)&quot;`). Note that a query result can contain
                 # multiple fields with the same name.
+            &quot;type&quot;: # Object with schema name: Type # The type of the field.
           },
         ],
       },
-      "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
+      &quot;transaction&quot;: { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
           # information about the new transaction is yielded here.
-        "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
-            # for the transaction. Not returned by default: see
-            # TransactionOptions.ReadOnly.return_read_timestamp.
-            #
-            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-            # Example: `"2014-10-02T15:01:23.045123456Z"`.
-        "id": "A String", # `id` may be used to identify the transaction in subsequent
+        &quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
             # Read,
             # ExecuteSql,
             # Commit, or
@@ -4345,8 +4250,103 @@
             #
             # Single-use read-only transactions do not have IDs, because
             # single-use transactions do not support multiple requests.
+        &quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
+            # for the transaction. Not returned by default: see
+            # TransactionOptions.ReadOnly.return_read_timestamp.
+            #
+            # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+            # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
       },
     },
+    &quot;resumeToken&quot;: &quot;A String&quot;, # Streaming calls might be interrupted for a variety of reasons, such
+        # as TCP connection loss. If this occurs, the stream of results can
+        # be resumed by re-sending the original request and including
+        # `resume_token`. Note that executing any other transaction in the
+        # same session invalidates the token.
+    &quot;stats&quot;: { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the statement that produced this
+        # streaming result set. These can be requested by setting
+        # ExecuteSqlRequest.query_mode and are sent
+        # only once with the last response in the stream.
+        # This field will also be present in the last response for DML
+        # statements.
+      &quot;rowCountExact&quot;: &quot;A String&quot;, # Standard DML returns an exact count of rows that were modified.
+      &quot;queryStats&quot;: { # Aggregated statistics from the execution of the query. Only present when
+          # the query is profiled. For example, a query could return the statistics as
+          # follows:
+          #
+          #     {
+          #       &quot;rows_returned&quot;: &quot;3&quot;,
+          #       &quot;elapsed_time&quot;: &quot;1.22 secs&quot;,
+          #       &quot;cpu_time&quot;: &quot;1.19 secs&quot;
+          #     }
+        &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+      },
+      &quot;rowCountLowerBound&quot;: &quot;A String&quot;, # Partitioned DML does not offer exactly-once semantics, so it
+          # returns a lower bound of the rows modified.
+      &quot;queryPlan&quot;: { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
+        &quot;planNodes&quot;: [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
+            # with the plan root. Each PlanNode&#x27;s `id` corresponds to its index in
+            # `plan_nodes`.
+          { # Node information for nodes appearing in a QueryPlan.plan_nodes.
+            &quot;metadata&quot;: { # Attributes relevant to the node contained in a group of key-value pairs.
+                # For example, a Parameter Reference node could have the following
+                # information in its metadata:
+                #
+                #     {
+                #       &quot;parameter_reference&quot;: &quot;param1&quot;,
+                #       &quot;parameter_type&quot;: &quot;array&quot;
+                #     }
+              &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+            },
+            &quot;executionStats&quot;: { # The execution statistics associated with the node, contained in a group of
+                # key-value pairs. Only present if the plan was returned as a result of a
+                # profile query. For example, number of executions, number of rows/time per
+                # execution etc.
+              &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+            },
+            &quot;shortRepresentation&quot;: { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
+                # `SCALAR` PlanNode(s).
+              &quot;description&quot;: &quot;A String&quot;, # A string representation of the expression subtree rooted at this node.
+              &quot;subqueries&quot;: { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
+                  # where the `description` string of this node references a `SCALAR`
+                  # subquery contained in the expression subtree rooted at this node. The
+                  # referenced `SCALAR` subquery may not necessarily be a direct child of
+                  # this node.
+                &quot;a_key&quot;: 42,
+              },
+            },
+            &quot;index&quot;: 42, # The `PlanNode`&#x27;s index in the node list.
+            &quot;kind&quot;: &quot;A String&quot;, # Used to determine the type of node. May be needed for visualizing
+                # different kinds of nodes differently. For example, if the node is a
+                # SCALAR node, it will have a condensed representation
+                # which can be used to directly embed a description of the node in its
+                # parent.
+            &quot;displayName&quot;: &quot;A String&quot;, # The display name for the node.
+            &quot;childLinks&quot;: [ # List of child node `index`es and their relationship to this parent.
+              { # Metadata associated with a parent-child relationship appearing in a
+                  # PlanNode.
+                &quot;variable&quot;: &quot;A String&quot;, # Only present if the child node is SCALAR and corresponds
+                    # to an output variable of the parent node. The field carries the name of
+                    # the output variable.
+                    # For example, a `TableScan` operator that reads rows from a table will
+                    # have child links to the `SCALAR` nodes representing the output variables
+                    # created for each column that is read by the operator. The corresponding
+                    # `variable` fields will be set to the variable names assigned to the
+                    # columns.
+                &quot;childIndex&quot;: 42, # The node to which the link points.
+                &quot;type&quot;: &quot;A String&quot;, # The type of the link. For example, in Hash Joins this could be used to
+                    # distinguish between the build child and the probe child, or in the case
+                    # of the child being an output variable, to represent the tag associated
+                    # with the output variable.
+              },
+            ],
+          },
+        ],
+      },
+    },
+    &quot;chunkedValue&quot;: True or False, # If true, then the final value in values is chunked, and must
+        # be combined with more values from subsequent `PartialResultSet`s
+        # to obtain a complete field value.
   }</pre>
 </div>
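Taken together, the streaming fields above suggest the following consumption
pattern. A minimal sketch that merges a sequence of PartialResultSet dicts,
however they were obtained from the stream; it implements only the
string-concatenation merge rule shown above, which covers the worked
"Hello World" example (other value types would need the list/object rules):

    def merge_partial_result_sets(partials):
        """Merge PartialResultSets into complete string values.

        Returns (values, last_resume_token); the token can be sent with a
        re-issued request to resume after an interruption.
        """
        values, pending, resume_token = [], None, None
        for part in partials:
            resume_token = part.get('resumeToken', resume_token)
            chunk = list(part.get('values', []))
            if pending is not None and chunk:
                chunk[0] = pending + chunk[0]  # strings are concatenated
                pending = None
            if part.get('chunkedValue') and chunk:
                pending = chunk.pop()  # final value continues in the next set
            values.extend(chunk)
        return values, resume_token

Fed the three PartialResultSets from the example above, this yields
['Hello', 'World'] and the token 'Zx1B...'.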
 
@@ -4367,7 +4367,12 @@
   An object of the form:
 
     { # A session in the Cloud Spanner API.
-    "labels": { # The labels for the session.
+    &quot;approximateLastUseTime&quot;: &quot;A String&quot;, # Output only. The approximate timestamp when the session is last used. It is
+        # typically earlier than the actual last use time.
+    &quot;name&quot;: &quot;A String&quot;, # The name of the session. This is always system-assigned; values provided
+        # when creating a session are ignored.
+    &quot;createTime&quot;: &quot;A String&quot;, # Output only. The timestamp when the session is created.
+    &quot;labels&quot;: { # The labels for the session.
         #
         #  * Label keys must be between 1 and 63 characters long and must conform to
         #    the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
@@ -4376,24 +4381,17 @@
         #  * No more than 64 labels can be associated with a given session.
         #
         # See https://goo.gl/xmQnxf for more information on and examples of labels.
-      "a_key": "A String",
+      &quot;a_key&quot;: &quot;A String&quot;,
     },
-    "name": "A String", # The name of the session. This is always system-assigned; values provided
-        # when creating a session are ignored.
-    "approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session is last used. It is
-        # typically earlier than the actual last use time.
-    "createTime": "A String", # Output only. The timestamp when the session is created.
   }</pre>
 </div>
 
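A short usage sketch for `get` (the resource name is a placeholder; assumes Application Default Credentials are available to the discovery client):

```python
from googleapiclient.discovery import build

spanner = build("spanner", "v1")

# Placeholder resource name; sessions are system-named at creation.
name = ("projects/my-project/instances/my-instance/"
        "databases/my-db/sessions/my-session")
session = spanner.projects().instances().databases().sessions().get(
    name=name).execute()

# approximateLastUseTime may be earlier than the actual last use,
# per the field docs above.
print(session.get("createTime"), session.get("approximateLastUseTime"))
```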
 <div class="method">
-    <code class="details" id="list">list(database, pageSize=None, filter=None, pageToken=None, x__xgafv=None)</code>
+    <code class="details" id="list">list(database, filter=None, pageToken=None, pageSize=None, x__xgafv=None)</code>
   <pre>Lists all sessions in a given database.
 
 Args:
   database: string, Required. The database in which to list sessions. (required)
-  pageSize: integer, Number of sessions to be returned in the response. If 0 or less, defaults
-to the server's maximum allowed page size.
   filter: string, An expression for filtering the results of the request. Filter rules are
 case insensitive. The fields eligible for filtering are:
 
@@ -4401,12 +4399,14 @@
 
 Some examples of using filters are:
 
-  * `labels.env:*` --&gt; The session has the label "env".
-  * `labels.env:dev` --&gt; The session has the label "env" and the value of
-                       the label contains the string "dev".
+  * `labels.env:*` --&gt; The session has the label &quot;env&quot;.
+  * `labels.env:dev` --&gt; The session has the label &quot;env&quot; and the value of
+                       the label contains the string &quot;dev&quot;.
   pageToken: string, If non-empty, `page_token` should contain a
 next_page_token from a previous
 ListSessionsResponse.
+  pageSize: integer, Number of sessions to be returned in the response. If 0 or less, defaults
+to the server&#x27;s maximum allowed page size.
   x__xgafv: string, V1 error format.
     Allowed values
       1 - v1 error format
@@ -4416,12 +4416,14 @@
   An object of the form:
 
     { # The response for ListSessions.
-    "nextPageToken": "A String", # `next_page_token` can be sent in a subsequent
-        # ListSessions call to fetch more of the matching
-        # sessions.
-    "sessions": [ # The list of requested sessions.
+    &quot;sessions&quot;: [ # The list of requested sessions.
       { # A session in the Cloud Spanner API.
-        "labels": { # The labels for the session.
+        &quot;approximateLastUseTime&quot;: &quot;A String&quot;, # Output only. The approximate timestamp when the session is last used. It is
+            # typically earlier than the actual last use time.
+        &quot;name&quot;: &quot;A String&quot;, # The name of the session. This is always system-assigned; values provided
+            # when creating a session are ignored.
+        &quot;createTime&quot;: &quot;A String&quot;, # Output only. The timestamp when the session is created.
+        &quot;labels&quot;: { # The labels for the session.
             #
             #  * Label keys must be between 1 and 63 characters long and must conform to
             #    the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
@@ -4430,15 +4432,13 @@
             #  * No more than 64 labels can be associated with a given session.
             #
             # See https://goo.gl/xmQnxf for more information on and examples of labels.
-          "a_key": "A String",
+          &quot;a_key&quot;: &quot;A String&quot;,
         },
-        "name": "A String", # The name of the session. This is always system-assigned; values provided
-            # when creating a session are ignored.
-        "approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session is last used. It is
-            # typically earlier than the actual last use time.
-        "createTime": "A String", # Output only. The timestamp when the session is created.
       },
     ],
+    &quot;nextPageToken&quot;: &quot;A String&quot;, # `next_page_token` can be sent in a subsequent
+        # ListSessions call to fetch more of the matching
+        # sessions.
   }</pre>
 </div>
 
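For the `list` parameters above, a sketch using the label filter syntax from the docs (project, instance, and database IDs are placeholders):

```python
from googleapiclient.discovery import build

spanner = build("spanner", "v1")
database = "projects/my-project/instances/my-instance/databases/my-db"

# Filter per the examples above: sessions whose "env" label value
# contains "dev". A pageSize of 0 or less defers to the server maximum.
response = spanner.projects().instances().databases().sessions().list(
    database=database, filter="labels.env:dev", pageSize=100).execute()

for s in response.get("sessions", []):
    print(s["name"], s.get("labels", {}))
```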
@@ -4451,7 +4451,7 @@
   previous_response: The response from the request for the previous page. (required)
 
 Returns:
-  A request object that you can call 'execute()' on to request the next
+  A request object that you can call &#x27;execute()&#x27; on to request the next
   page. Returns None if there are no more items in the collection.
     </pre>
 </div>
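`list` and `list_next` pair up for pagination, as documented above; a sketch with placeholder IDs:

```python
from googleapiclient.discovery import build

spanner = build("spanner", "v1")
database = "projects/my-project/instances/my-instance/databases/my-db"

sessions_api = spanner.projects().instances().databases().sessions()
request = sessions_api.list(database=database, pageSize=100)
while request is not None:
    response = request.execute()
    for s in response.get("sessions", []):
        print(s["name"])
    # Consumes nextPageToken; returns None when no more pages remain.
    request = sessions_api.list_next(request, response)
```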
@@ -4476,44 +4476,25 @@
     The object takes the form of:
 
 { # The request for PartitionQuery
-    "paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
-        # from a JSON value.  For example, values of type `BYTES` and values
-        # of type `STRING` both appear in params as JSON strings.
+    &quot;sql&quot;: &quot;A String&quot;, # Required. The query request to generate partitions for. The request will fail if
+        # the query is not root partitionable. The query plan of a root
+        # partitionable query has a single distributed union operator. A distributed
+        # union operator conceptually divides one or more tables into multiple
+        # splits, remotely evaluates a subquery independently on each split, and
+        # then unions all results.
         # 
-        # In these cases, `param_types` can be used to specify the exact
-        # SQL type for some or all of the SQL query parameters. See the
-        # definition of Type for more information
-        # about SQL types.
-      "a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
-          # table cell or returned from an SQL query.
-        "structType": { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
-            # provides type information for the struct's fields.
-          "fields": [ # The list of fields that make up this struct. Order is
-              # significant, because values of this struct type are represented as
-              # lists, where the order of field values matches the order of
-              # fields in the StructType. In turn, the order of fields
-              # matches the order of columns in a read request, or the order of
-              # fields in the `SELECT` clause of a query.
-            { # Message representing a single field of a struct.
-              "type": # Object with schema name: Type # The type of the field.
-              "name": "A String", # The name of the field. For reads, this is the column name. For
-                  # SQL queries, it is the column alias (e.g., `"Word"` in the
-                  # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
-                  # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
-                  # columns might have an empty name (e.g., !"SELECT
-                  # UPPER(ColName)"`). Note that a query result can contain
-                  # multiple fields with the same name.
-            },
-          ],
-        },
-        "code": "A String", # Required. The TypeCode for this type.
-        "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
-            # is the type of the array elements.
-      },
-    },
-    "partitionOptions": { # Options for a PartitionQueryRequest and # Additional options that affect how many partitions are created.
+        # This must not contain DML commands, such as INSERT, UPDATE, or
+        # DELETE. Use ExecuteStreamingSql with a
+        # PartitionedDml transaction for large, partition-friendly DML operations.
+    &quot;partitionOptions&quot;: { # Options for a PartitionQueryRequest and # Additional options that affect how many partitions are created.
         # PartitionReadRequest.
-      "maxPartitions": "A String", # **Note:** This hint is currently ignored by PartitionQuery and
+      &quot;partitionSizeBytes&quot;: &quot;A String&quot;, # **Note:** This hint is currently ignored by PartitionQuery and
+          # PartitionRead requests.
+          #
+          # The desired data size for each partition generated.  The default for this
+          # option is currently 1 GiB.  This is only a hint. The actual size of each
+          # partition may be smaller or larger than this size request.
+      &quot;maxPartitions&quot;: &quot;A String&quot;, # **Note:** This hint is currently ignored by PartitionQuery and
           # PartitionRead requests.
           #
           # The desired maximum number of partitions to return.  For example, this may
@@ -4521,369 +4502,14 @@
           # is currently 10,000. The maximum value is currently 200,000.  This is only
           # a hint.  The actual number of partitions returned may be smaller or larger
           # than this maximum count request.
-      "partitionSizeBytes": "A String", # **Note:** This hint is currently ignored by PartitionQuery and
-          # PartitionRead requests.
-          #
-          # The desired data size for each partition generated.  The default for this
-          # option is currently 1 GiB.  This is only a hint. The actual size of each
-          # partition may be smaller or larger than this size request.
     },
-    "transaction": { # This message is used to select the transaction in which a # Read only snapshot transactions are supported, read/write and single use
+    &quot;transaction&quot;: { # This message is used to select the transaction in which a # Read only snapshot transactions are supported, read/write and single use
         # transactions are not.
         # Read or
         # ExecuteSql call runs.
         #
         # See TransactionOptions for more information about transactions.
-      "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
-          # it. The transaction ID of the new transaction is returned in
-          # ResultSetMetadata.transaction, which is a Transaction.
-          #
-          #
-          # Each session can have at most one active transaction at a time. After the
-          # active transaction is completed, the session can immediately be
-          # re-used for the next transaction. It is not necessary to create a
-          # new session for each transaction.
-          #
-          # # Transaction Modes
-          #
-          # Cloud Spanner supports three transaction modes:
-          #
-          #   1. Locking read-write. This type of transaction is the only way
-          #      to write data into Cloud Spanner. These transactions rely on
-          #      pessimistic locking and, if necessary, two-phase commit.
-          #      Locking read-write transactions may abort, requiring the
-          #      application to retry.
-          #
-          #   2. Snapshot read-only. This transaction type provides guaranteed
-          #      consistency across several reads, but does not allow
-          #      writes. Snapshot read-only transactions can be configured to
-          #      read at timestamps in the past. Snapshot read-only
-          #      transactions do not need to be committed.
-          #
-          #   3. Partitioned DML. This type of transaction is used to execute
-          #      a single Partitioned DML statement. Partitioned DML partitions
-          #      the key space and runs the DML statement over each partition
-          #      in parallel using separate, internal transactions that commit
-          #      independently. Partitioned DML transactions do not need to be
-          #      committed.
-          #
-          # For transactions that only read, snapshot read-only transactions
-          # provide simpler semantics and are almost always faster. In
-          # particular, read-only transactions do not take locks, so they do
-          # not conflict with read-write transactions. As a consequence of not
-          # taking locks, they also do not abort, so retry loops are not needed.
-          #
-          # Transactions may only read/write data in a single database. They
-          # may, however, read/write data in different tables within that
-          # database.
-          #
-          # ## Locking Read-Write Transactions
-          #
-          # Locking transactions may be used to atomically read-modify-write
-          # data anywhere in a database. This type of transaction is externally
-          # consistent.
-          #
-          # Clients should attempt to minimize the amount of time a transaction
-          # is active. Faster transactions commit with higher probability
-          # and cause less contention. Cloud Spanner attempts to keep read locks
-          # active as long as the transaction continues to do reads, and the
-          # transaction has not been terminated by
-          # Commit or
-          # Rollback.  Long periods of
-          # inactivity at the client may cause Cloud Spanner to release a
-          # transaction's locks and abort it.
-          #
-          # Conceptually, a read-write transaction consists of zero or more
-          # reads or SQL statements followed by
-          # Commit. At any time before
-          # Commit, the client can send a
-          # Rollback request to abort the
-          # transaction.
-          #
-          # ### Semantics
-          #
-          # Cloud Spanner can commit the transaction if all read locks it acquired
-          # are still valid at commit time, and it is able to acquire write
-          # locks for all writes. Cloud Spanner can abort the transaction for any
-          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
-          # that the transaction has not modified any user data in Cloud Spanner.
-          #
-          # Unless the transaction commits, Cloud Spanner makes no guarantees about
-          # how long the transaction's locks were held for. It is an error to
-          # use Cloud Spanner locks for any sort of mutual exclusion other than
-          # between Cloud Spanner transactions themselves.
-          #
-          # ### Retrying Aborted Transactions
-          #
-          # When a transaction aborts, the application can choose to retry the
-          # whole transaction again. To maximize the chances of successfully
-          # committing the retry, the client should execute the retry in the
-          # same session as the original attempt. The original session's lock
-          # priority increases with each consecutive abort, meaning that each
-          # attempt has a slightly better chance of success than the previous.
-          #
-          # Under some circumstances (e.g., many transactions attempting to
-          # modify the same row(s)), a transaction can abort many times in a
-          # short period before successfully committing. Thus, it is not a good
-          # idea to cap the number of retries a transaction can attempt;
-          # instead, it is better to limit the total amount of wall time spent
-          # retrying.
-          #
-          # ### Idle Transactions
-          #
-          # A transaction is considered idle if it has no outstanding reads or
-          # SQL queries and has not started a read or SQL query within the last 10
-          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
-          # don't hold on to locks indefinitely. In that case, the commit will
-          # fail with error `ABORTED`.
-          #
-          # If this behavior is undesirable, periodically executing a simple
-          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
-          # transaction from becoming idle.
-          #
-          # ## Snapshot Read-Only Transactions
-          #
-          # Snapshot read-only transactions provides a simpler method than
-          # locking read-write transactions for doing several consistent
-          # reads. However, this type of transaction does not support writes.
-          #
-          # Snapshot transactions do not take locks. Instead, they work by
-          # choosing a Cloud Spanner timestamp, then executing all reads at that
-          # timestamp. Since they do not acquire locks, they do not block
-          # concurrent read-write transactions.
-          #
-          # Unlike locking read-write transactions, snapshot read-only
-          # transactions never abort. They can fail if the chosen read
-          # timestamp is garbage collected; however, the default garbage
-          # collection policy is generous enough that most applications do not
-          # need to worry about this in practice.
-          #
-          # Snapshot read-only transactions do not need to call
-          # Commit or
-          # Rollback (and in fact are not
-          # permitted to do so).
-          #
-          # To execute a snapshot transaction, the client specifies a timestamp
-          # bound, which tells Cloud Spanner how to choose a read timestamp.
-          #
-          # The types of timestamp bound are:
-          #
-          #   - Strong (the default).
-          #   - Bounded staleness.
-          #   - Exact staleness.
-          #
-          # If the Cloud Spanner database to be read is geographically distributed,
-          # stale read-only transactions can execute more quickly than strong
-          # or read-write transaction, because they are able to execute far
-          # from the leader replica.
-          #
-          # Each type of timestamp bound is discussed in detail below.
-          #
-          # ### Strong
-          #
-          # Strong reads are guaranteed to see the effects of all transactions
-          # that have committed before the start of the read. Furthermore, all
-          # rows yielded by a single read are consistent with each other -- if
-          # any part of the read observes a transaction, all parts of the read
-          # see the transaction.
-          #
-          # Strong reads are not repeatable: two consecutive strong read-only
-          # transactions might return inconsistent results if there are
-          # concurrent writes. If consistency across reads is required, the
-          # reads should be executed within a transaction or at an exact read
-          # timestamp.
-          #
-          # See TransactionOptions.ReadOnly.strong.
-          #
-          # ### Exact Staleness
-          #
-          # These timestamp bounds execute reads at a user-specified
-          # timestamp. Reads at a timestamp are guaranteed to see a consistent
-          # prefix of the global transaction history: they observe
-          # modifications done by all transactions with a commit timestamp &lt;=
-          # the read timestamp, and observe none of the modifications done by
-          # transactions with a larger commit timestamp. They will block until
-          # all conflicting transactions that may be assigned commit timestamps
-          # &lt;= the read timestamp have finished.
-          #
-          # The timestamp can either be expressed as an absolute Cloud Spanner commit
-          # timestamp or a staleness relative to the current time.
-          #
-          # These modes do not require a "negotiation phase" to pick a
-          # timestamp. As a result, they execute slightly faster than the
-          # equivalent boundedly stale concurrency modes. On the other hand,
-          # boundedly stale reads usually return fresher results.
-          #
-          # See TransactionOptions.ReadOnly.read_timestamp and
-          # TransactionOptions.ReadOnly.exact_staleness.
-          #
-          # ### Bounded Staleness
-          #
-          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
-          # subject to a user-provided staleness bound. Cloud Spanner chooses the
-          # newest timestamp within the staleness bound that allows execution
-          # of the reads at the closest available replica without blocking.
-          #
-          # All rows yielded are consistent with each other -- if any part of
-          # the read observes a transaction, all parts of the read see the
-          # transaction. Boundedly stale reads are not repeatable: two stale
-          # reads, even if they use the same staleness bound, can execute at
-          # different timestamps and thus return inconsistent results.
-          #
-          # Boundedly stale reads execute in two phases: the first phase
-          # negotiates a timestamp among all replicas needed to serve the
-          # read. In the second phase, reads are executed at the negotiated
-          # timestamp.
-          #
-          # As a result of the two phase execution, bounded staleness reads are
-          # usually a little slower than comparable exact staleness
-          # reads. However, they are typically able to return fresher
-          # results, and are more likely to execute at the closest replica.
-          #
-          # Because the timestamp negotiation requires up-front knowledge of
-          # which rows will be read, it can only be used with single-use
-          # read-only transactions.
-          #
-          # See TransactionOptions.ReadOnly.max_staleness and
-          # TransactionOptions.ReadOnly.min_read_timestamp.
-          #
-          # ### Old Read Timestamps and Garbage Collection
-          #
-          # Cloud Spanner continuously garbage collects deleted and overwritten data
-          # in the background to reclaim storage space. This process is known
-          # as "version GC". By default, version GC reclaims versions after they
-          # are one hour old. Because of this, Cloud Spanner cannot perform reads
-          # at read timestamps more than one hour in the past. This
-          # restriction also applies to in-progress reads and/or SQL queries whose
-          # timestamp become too old while executing. Reads and SQL queries with
-          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
-          #
-          # ## Partitioned DML Transactions
-          #
-          # Partitioned DML transactions are used to execute DML statements with a
-          # different execution strategy that provides different, and often better,
-          # scalability properties for large, table-wide operations than DML in a
-          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
-          # should prefer using ReadWrite transactions.
-          #
-          # Partitioned DML partitions the keyspace and runs the DML statement on each
-          # partition in separate, internal transactions. These transactions commit
-          # automatically when complete, and run independently from one another.
-          #
-          # To reduce lock contention, this execution strategy only acquires read locks
-          # on rows that match the WHERE clause of the statement. Additionally, the
-          # smaller per-partition transactions hold locks for less time.
-          #
-          # That said, Partitioned DML is not a drop-in replacement for standard DML used
-          # in ReadWrite transactions.
-          #
-          #  - The DML statement must be fully-partitionable. Specifically, the statement
-          #    must be expressible as the union of many statements which each access only
-          #    a single row of the table.
-          #
-          #  - The statement is not applied atomically to all rows of the table. Rather,
-          #    the statement is applied atomically to partitions of the table, in
-          #    independent transactions. Secondary index rows are updated atomically
-          #    with the base table rows.
-          #
-          #  - Partitioned DML does not guarantee exactly-once execution semantics
-          #    against a partition. The statement will be applied at least once to each
-          #    partition. It is strongly recommended that the DML statement should be
-          #    idempotent to avoid unexpected results. For instance, it is potentially
-          #    dangerous to run a statement such as
-          #    `UPDATE table SET column = column + 1` as it could be run multiple times
-          #    against some rows.
-          #
-          #  - The partitions are committed automatically - there is no support for
-          #    Commit or Rollback. If the call returns an error, or if the client issuing
-          #    the ExecuteSql call dies, it is possible that some rows had the statement
-          #    executed on them successfully. It is also possible that statement was
-          #    never executed against other rows.
-          #
-          #  - Partitioned DML transactions may only contain the execution of a single
-          #    DML statement via ExecuteSql or ExecuteStreamingSql.
-          #
-          #  - If any error is encountered during the execution of the partitioned DML
-          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
-          #    value that cannot be stored due to schema constraints), then the
-          #    operation is stopped at that point and an error is returned. It is
-          #    possible that at this point, some partitions have been committed (or even
-          #    committed multiple times), and other partitions have not been run at all.
-          #
-          # Given the above, Partitioned DML is good fit for large, database-wide,
-          # operations that are idempotent, such as deleting old rows from a very large
-          # table.
-        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
-            #
-            # Authorization to begin a read-write transaction requires
-            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
-            # on the `session` resource.
-            # transaction type has no options.
-        },
-        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
-            #
-            # Authorization to begin a read-only transaction requires
-            # `spanner.databases.beginReadOnlyTransaction` permission
-            # on the `session` resource.
-          "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
-              #
-              # This is useful for requesting fresher data than some previous
-              # read, or data that is fresh enough to observe the effects of some
-              # previously committed transaction whose timestamp is known.
-              #
-              # Note that this option can only be used in single-use transactions.
-              #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
-              # reads at a specific timestamp are repeatable; the same read at
-              # the same timestamp always returns the same data. If the
-              # timestamp is in the future, the read will block until the
-              # specified timestamp, modulo the read's deadline.
-              #
-              # Useful for large scale consistent reads such as mapreduces, or
-              # for coordinating many reads against a consistent snapshot of the
-              # data.
-              #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
-              # seconds. Guarantees that all writes that have committed more
-              # than the specified number of seconds ago are visible. Because
-              # Cloud Spanner chooses the exact timestamp, this mode works even if
-              # the client's local clock is substantially skewed from Cloud Spanner
-              # commit timestamps.
-              #
-              # Useful for reading the freshest data available at a nearby
-              # replica, while bounding the possible staleness if the local
-              # replica has fallen behind.
-              #
-              # Note that this option can only be used in single-use
-              # transactions.
-          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
-              # old. The timestamp is chosen soon after the read is started.
-              #
-              # Guarantees that all writes that have committed more than the
-              # specified number of seconds ago are visible. Because Cloud Spanner
-              # chooses the exact timestamp, this mode works even if the client's
-              # local clock is substantially skewed from Cloud Spanner commit
-              # timestamps.
-              #
-              # Useful for reading at nearby replicas without the distributed
-              # timestamp negotiation overhead of `max_staleness`.
-          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
-              # the Transaction message that describes the transaction.
-          "strong": True or False, # Read at a timestamp where all previously committed transactions
-              # are visible.
-        },
-        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
-            #
-            # Authorization to begin a Partitioned DML transaction requires
-            # `spanner.databases.beginPartitionedDmlTransaction` permission
-            # on the `session` resource.
-        },
-      },
-      "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
+      &quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
           # This is the most efficient way to execute a transaction that
           # consists of a single SQL query.
           #
@@ -4940,7 +4566,7 @@
           # Commit or
           # Rollback.  Long periods of
           # inactivity at the client may cause Cloud Spanner to release a
-          # transaction's locks and abort it.
+          # transaction&#x27;s locks and abort it.
           #
           # Conceptually, a read-write transaction consists of zero or more
           # reads or SQL statements followed by
@@ -4958,7 +4584,7 @@
           # that the transaction has not modified any user data in Cloud Spanner.
           #
           # Unless the transaction commits, Cloud Spanner makes no guarantees about
-          # how long the transaction's locks were held for. It is an error to
+          # how long the transaction&#x27;s locks were held for. It is an error to
           # use Cloud Spanner locks for any sort of mutual exclusion other than
           # between Cloud Spanner transactions themselves.
           #
@@ -4967,7 +4593,7 @@
           # When a transaction aborts, the application can choose to retry the
           # whole transaction again. To maximize the chances of successfully
           # committing the retry, the client should execute the retry in the
-          # same session as the original attempt. The original session's lock
+          # same session as the original attempt. The original session&#x27;s lock
           # priority increases with each consecutive abort, meaning that each
           # attempt has a slightly better chance of success than the previous.
           #
@@ -4983,7 +4609,7 @@
           # A transaction is considered idle if it has no outstanding reads or
           # SQL queries and has not started a read or SQL query within the last 10
           # seconds. Idle transactions can be aborted by Cloud Spanner so that they
-          # don't hold on to locks indefinitely. In that case, the commit will
+          # don&#x27;t hold on to locks indefinitely. In that case, the commit will
           # fail with error `ABORTED`.
           #
           # If this behavior is undesirable, periodically executing a simple
@@ -5058,7 +4684,7 @@
           # The timestamp can either be expressed as an absolute Cloud Spanner commit
           # timestamp or a staleness relative to the current time.
           #
-          # These modes do not require a "negotiation phase" to pick a
+          # These modes do not require a &quot;negotiation phase&quot; to pick a
           # timestamp. As a result, they execute slightly faster than the
           # equivalent boundedly stale concurrency modes. On the other hand,
           # boundedly stale reads usually return fresher results.
@@ -5100,7 +4726,7 @@
           #
           # Cloud Spanner continuously garbage collects deleted and overwritten data
           # in the background to reclaim storage space. This process is known
-          # as "version GC". By default, version GC reclaims versions after they
+          # as &quot;version GC&quot;. By default, version GC reclaims versions after they
           # are one hour old. Because of this, Cloud Spanner cannot perform reads
           # at read timestamps more than one hour in the past. This
           # restriction also applies to in-progress reads and/or SQL queries whose
@@ -5162,19 +4788,18 @@
           # Given the above, Partitioned DML is good fit for large, database-wide,
           # operations that are idempotent, such as deleting old rows from a very large
           # table.
-        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+        &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
             #
-            # Authorization to begin a read-write transaction requires
-            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # Authorization to begin a Partitioned DML transaction requires
+            # `spanner.databases.beginPartitionedDmlTransaction` permission
             # on the `session` resource.
-            # transaction type has no options.
         },
-        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
+        &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
             #
             # Authorization to begin a read-only transaction requires
             # `spanner.databases.beginReadOnlyTransaction` permission
             # on the `session` resource.
-          "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
+          &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
               #
               # This is useful for requesting fresher data than some previous
               # read, or data that is fresh enough to observe the effects of some
@@ -5182,25 +4807,25 @@
               #
               # Note that this option can only be used in single-use transactions.
               #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
               # reads at a specific timestamp are repeatable; the same read at
               # the same timestamp always returns the same data. If the
               # timestamp is in the future, the read will block until the
-              # specified timestamp, modulo the read's deadline.
+              # specified timestamp, modulo the read&#x27;s deadline.
               #
               # Useful for large scale consistent reads such as mapreduces, or
               # for coordinating many reads against a consistent snapshot of the
               # data.
               #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
               # seconds. Guarantees that all writes that have committed more
               # than the specified number of seconds ago are visible. Because
               # Cloud Spanner chooses the exact timestamp, this mode works even if
-              # the client's local clock is substantially skewed from Cloud Spanner
+              # the client&#x27;s local clock is substantially skewed from Cloud Spanner
               # commit timestamps.
               #
               # Useful for reading the freshest data available at a nearby
@@ -5209,32 +4834,382 @@
               #
               # Note that this option can only be used in single-use
               # transactions.
-          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
+          &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
+              # the Transaction message that describes the transaction.
+          &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
               # old. The timestamp is chosen soon after the read is started.
               #
               # Guarantees that all writes that have committed more than the
               # specified number of seconds ago are visible. Because Cloud Spanner
-              # chooses the exact timestamp, this mode works even if the client's
+              # chooses the exact timestamp, this mode works even if the client&#x27;s
               # local clock is substantially skewed from Cloud Spanner commit
               # timestamps.
               #
               # Useful for reading at nearby replicas without the distributed
               # timestamp negotiation overhead of `max_staleness`.
-          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
-              # the Transaction message that describes the transaction.
-          "strong": True or False, # Read at a timestamp where all previously committed transactions
+          &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
               # are visible.
         },
-        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
+        &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+            #
+            # Authorization to begin a read-write transaction requires
+            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # on the `session` resource.
+            # transaction type has no options.
+        },
+      },
+      &quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
+          # it. The transaction ID of the new transaction is returned in
+          # ResultSetMetadata.transaction, which is a Transaction.
+          #
+          #
+          # Each session can have at most one active transaction at a time. After the
+          # active transaction is completed, the session can immediately be
+          # re-used for the next transaction. It is not necessary to create a
+          # new session for each transaction.
+          #
+          # # Transaction Modes
+          #
+          # Cloud Spanner supports three transaction modes:
+          #
+          #   1. Locking read-write. This type of transaction is the only way
+          #      to write data into Cloud Spanner. These transactions rely on
+          #      pessimistic locking and, if necessary, two-phase commit.
+          #      Locking read-write transactions may abort, requiring the
+          #      application to retry.
+          #
+          #   2. Snapshot read-only. This transaction type provides guaranteed
+          #      consistency across several reads, but does not allow
+          #      writes. Snapshot read-only transactions can be configured to
+          #      read at timestamps in the past. Snapshot read-only
+          #      transactions do not need to be committed.
+          #
+          #   3. Partitioned DML. This type of transaction is used to execute
+          #      a single Partitioned DML statement. Partitioned DML partitions
+          #      the key space and runs the DML statement over each partition
+          #      in parallel using separate, internal transactions that commit
+          #      independently. Partitioned DML transactions do not need to be
+          #      committed.
+          #
+          # For transactions that only read, snapshot read-only transactions
+          # provide simpler semantics and are almost always faster. In
+          # particular, read-only transactions do not take locks, so they do
+          # not conflict with read-write transactions. As a consequence of not
+          # taking locks, they also do not abort, so retry loops are not needed.
+          #
+          # Transactions may only read/write data in a single database. They
+          # may, however, read/write data in different tables within that
+          # database.
+          #
+          # ## Locking Read-Write Transactions
+          #
+          # Locking transactions may be used to atomically read-modify-write
+          # data anywhere in a database. This type of transaction is externally
+          # consistent.
+          #
+          # Clients should attempt to minimize the amount of time a transaction
+          # is active. Faster transactions commit with higher probability
+          # and cause less contention. Cloud Spanner attempts to keep read locks
+          # active as long as the transaction continues to do reads, and the
+          # transaction has not been terminated by
+          # Commit or
+          # Rollback.  Long periods of
+          # inactivity at the client may cause Cloud Spanner to release a
+          # transaction&#x27;s locks and abort it.
+          #
+          # Conceptually, a read-write transaction consists of zero or more
+          # reads or SQL statements followed by
+          # Commit. At any time before
+          # Commit, the client can send a
+          # Rollback request to abort the
+          # transaction.
+          #
+          # ### Semantics
+          #
+          # Cloud Spanner can commit the transaction if all read locks it acquired
+          # are still valid at commit time, and it is able to acquire write
+          # locks for all writes. Cloud Spanner can abort the transaction for any
+          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
+          # that the transaction has not modified any user data in Cloud Spanner.
+          #
+          # Unless the transaction commits, Cloud Spanner makes no guarantees about
+          # how long the transaction&#x27;s locks were held for. It is an error to
+          # use Cloud Spanner locks for any sort of mutual exclusion other than
+          # between Cloud Spanner transactions themselves.
+          #
+          # ### Retrying Aborted Transactions
+          #
+          # When a transaction aborts, the application can choose to retry the
+          # whole transaction again. To maximize the chances of successfully
+          # committing the retry, the client should execute the retry in the
+          # same session as the original attempt. The original session&#x27;s lock
+          # priority increases with each consecutive abort, meaning that each
+          # attempt has a slightly better chance of success than the previous.
+          #
+          # Under some circumstances (e.g., many transactions attempting to
+          # modify the same row(s)), a transaction can abort many times in a
+          # short period before successfully committing. Thus, it is not a good
+          # idea to cap the number of retries a transaction can attempt;
+          # instead, it is better to limit the total amount of wall time spent
+          # retrying.
+          #
+          # ### Idle Transactions
+          #
+          # A transaction is considered idle if it has no outstanding reads or
+          # SQL queries and has not started a read or SQL query within the last 10
+          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
+          # don&#x27;t hold on to locks indefinitely. In that case, the commit will
+          # fail with error `ABORTED`.
+          #
+          # If this behavior is undesirable, periodically executing a simple
+          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
+          # transaction from becoming idle.
+          #
+          # ## Snapshot Read-Only Transactions
+          #
+          # Snapshot read-only transactions provide a simpler method than
+          # locking read-write transactions for doing several consistent
+          # reads. However, this type of transaction does not support writes.
+          #
+          # Snapshot transactions do not take locks. Instead, they work by
+          # choosing a Cloud Spanner timestamp, then executing all reads at that
+          # timestamp. Since they do not acquire locks, they do not block
+          # concurrent read-write transactions.
+          #
+          # Unlike locking read-write transactions, snapshot read-only
+          # transactions never abort. They can fail if the chosen read
+          # timestamp is garbage collected; however, the default garbage
+          # collection policy is generous enough that most applications do not
+          # need to worry about this in practice.
+          #
+          # Snapshot read-only transactions do not need to call
+          # Commit or
+          # Rollback (and in fact are not
+          # permitted to do so).
+          #
+          # To execute a snapshot transaction, the client specifies a timestamp
+          # bound, which tells Cloud Spanner how to choose a read timestamp.
+          #
+          # The types of timestamp bound are:
+          #
+          #   - Strong (the default).
+          #   - Bounded staleness.
+          #   - Exact staleness.
+          #
+          # If the Cloud Spanner database to be read is geographically distributed,
+          # stale read-only transactions can execute more quickly than strong
+          # or read-write transactions, because they are able to execute far
+          # from the leader replica.
+          #
+          # Each type of timestamp bound is discussed in detail below.
+          #
+          # ### Strong
+          #
+          # Strong reads are guaranteed to see the effects of all transactions
+          # that have committed before the start of the read. Furthermore, all
+          # rows yielded by a single read are consistent with each other -- if
+          # any part of the read observes a transaction, all parts of the read
+          # see the transaction.
+          #
+          # Strong reads are not repeatable: two consecutive strong read-only
+          # transactions might return inconsistent results if there are
+          # concurrent writes. If consistency across reads is required, the
+          # reads should be executed within a transaction or at an exact read
+          # timestamp.
+          #
+          # See TransactionOptions.ReadOnly.strong.
+          #
+          # ### Exact Staleness
+          #
+          # These timestamp bounds execute reads at a user-specified
+          # timestamp. Reads at a timestamp are guaranteed to see a consistent
+          # prefix of the global transaction history: they observe
+          # modifications done by all transactions with a commit timestamp &lt;=
+          # the read timestamp, and observe none of the modifications done by
+          # transactions with a larger commit timestamp. They will block until
+          # all conflicting transactions that may be assigned commit timestamps
+          # &lt;= the read timestamp have finished.
+          #
+          # The timestamp can either be expressed as an absolute Cloud Spanner commit
+          # timestamp or a staleness relative to the current time.
+          #
+          # These modes do not require a &quot;negotiation phase&quot; to pick a
+          # timestamp. As a result, they execute slightly faster than the
+          # equivalent boundedly stale concurrency modes. On the other hand,
+          # boundedly stale reads usually return fresher results.
+          #
+          # See TransactionOptions.ReadOnly.read_timestamp and
+          # TransactionOptions.ReadOnly.exact_staleness.
+          #
+          # ### Bounded Staleness
+          #
+          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
+          # subject to a user-provided staleness bound. Cloud Spanner chooses the
+          # newest timestamp within the staleness bound that allows execution
+          # of the reads at the closest available replica without blocking.
+          #
+          # All rows yielded are consistent with each other -- if any part of
+          # the read observes a transaction, all parts of the read see the
+          # transaction. Boundedly stale reads are not repeatable: two stale
+          # reads, even if they use the same staleness bound, can execute at
+          # different timestamps and thus return inconsistent results.
+          #
+          # Boundedly stale reads execute in two phases: the first phase
+          # negotiates a timestamp among all replicas needed to serve the
+          # read. In the second phase, reads are executed at the negotiated
+          # timestamp.
+          #
+          # As a result of the two-phase execution, bounded staleness reads are
+          # usually a little slower than comparable exact staleness
+          # reads. However, they are typically able to return fresher
+          # results, and are more likely to execute at the closest replica.
+          #
+          # Because the timestamp negotiation requires up-front knowledge of
+          # which rows will be read, it can only be used with single-use
+          # read-only transactions.
+          #
+          # See TransactionOptions.ReadOnly.max_staleness and
+          # TransactionOptions.ReadOnly.min_read_timestamp.
+          #
+          # ### Old Read Timestamps and Garbage Collection
+          #
+          # Cloud Spanner continuously garbage collects deleted and overwritten data
+          # in the background to reclaim storage space. This process is known
+          # as &quot;version GC&quot;. By default, version GC reclaims versions after they
+          # are one hour old. Because of this, Cloud Spanner cannot perform reads
+          # at read timestamps more than one hour in the past. This
+          # restriction also applies to in-progress reads and/or SQL queries whose
+          # timestamps become too old while executing. Reads and SQL queries with
+          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
+          #
+          # ## Partitioned DML Transactions
+          #
+          # Partitioned DML transactions are used to execute DML statements with a
+          # different execution strategy that provides different, and often better,
+          # scalability properties for large, table-wide operations than DML in a
+          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
+          # should prefer using ReadWrite transactions.
+          #
+          # Partitioned DML partitions the keyspace and runs the DML statement on each
+          # partition in separate, internal transactions. These transactions commit
+          # automatically when complete, and run independently from one another.
+          #
+          # To reduce lock contention, this execution strategy only acquires read locks
+          # on rows that match the WHERE clause of the statement. Additionally, the
+          # smaller per-partition transactions hold locks for less time.
+          #
+          # That said, Partitioned DML is not a drop-in replacement for standard DML used
+          # in ReadWrite transactions.
+          #
+          #  - The DML statement must be fully-partitionable. Specifically, the statement
+          #    must be expressible as the union of many statements which each access only
+          #    a single row of the table.
+          #
+          #  - The statement is not applied atomically to all rows of the table. Rather,
+          #    the statement is applied atomically to partitions of the table, in
+          #    independent transactions. Secondary index rows are updated atomically
+          #    with the base table rows.
+          #
+          #  - Partitioned DML does not guarantee exactly-once execution semantics
+          #    against a partition. The statement will be applied at least once to each
+          #    partition. It is strongly recommended that the DML statement be
+          #    idempotent to avoid unexpected results. For instance, it is potentially
+          #    dangerous to run a statement such as
+          #    `UPDATE table SET column = column + 1` as it could be run multiple times
+          #    against some rows.
+          #
+          #  - The partitions are committed automatically - there is no support for
+          #    Commit or Rollback. If the call returns an error, or if the client issuing
+          #    the ExecuteSql call dies, it is possible that some rows had the statement
+          #    executed on them successfully. It is also possible that the statement was
+          #    never executed against other rows.
+          #
+          #  - Partitioned DML transactions may only contain the execution of a single
+          #    DML statement via ExecuteSql or ExecuteStreamingSql.
+          #
+          #  - If any error is encountered during the execution of the partitioned DML
+          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
+          #    value that cannot be stored due to schema constraints), then the
+          #    operation is stopped at that point and an error is returned. It is
+          #    possible that at this point, some partitions have been committed (or even
+          #    committed multiple times), and other partitions have not been run at all.
+          #
+          # Given the above, Partitioned DML is a good fit for large, database-wide
+          # operations that are idempotent, such as deleting old rows from a very large
+          # table.
+        &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
             #
             # Authorization to begin a Partitioned DML transaction requires
             # `spanner.databases.beginPartitionedDmlTransaction` permission
             # on the `session` resource.
         },
+        &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
+            #
+            # Authorization to begin a read-only transaction requires
+            # `spanner.databases.beginReadOnlyTransaction` permission
+            # on the `session` resource.
+          &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
+              #
+              # This is useful for requesting fresher data than some previous
+              # read, or data that is fresh enough to observe the effects of some
+              # previously committed transaction whose timestamp is known.
+              #
+              # Note that this option can only be used in single-use transactions.
+              #
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
+              # reads at a specific timestamp are repeatable; the same read at
+              # the same timestamp always returns the same data. If the
+              # timestamp is in the future, the read will block until the
+              # specified timestamp, modulo the read&#x27;s deadline.
+              #
+              # Useful for large-scale consistent reads such as mapreduces, or
+              # for coordinating many reads against a consistent snapshot of the
+              # data.
+              #
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
+              # seconds. Guarantees that all writes that have committed more
+              # than the specified number of seconds ago are visible. Because
+              # Cloud Spanner chooses the exact timestamp, this mode works even if
+              # the client&#x27;s local clock is substantially skewed from Cloud Spanner
+              # commit timestamps.
+              #
+              # Useful for reading the freshest data available at a nearby
+              # replica, while bounding the possible staleness if the local
+              # replica has fallen behind.
+              #
+              # Note that this option can only be used in single-use
+              # transactions.
+          &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
+              # the Transaction message that describes the transaction.
+          &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
+              # old. The timestamp is chosen soon after the read is started.
+              #
+              # Guarantees that all writes that have committed more than the
+              # specified number of seconds ago are visible. Because Cloud Spanner
+              # chooses the exact timestamp, this mode works even if the client&#x27;s
+              # local clock is substantially skewed from Cloud Spanner commit
+              # timestamps.
+              #
+              # Useful for reading at nearby replicas without the distributed
+              # timestamp negotiation overhead of `max_staleness`.
+          &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
+              # are visible.
+        },
+        &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+            # transaction type has no options.
+            #
+            # Authorization to begin a read-write transaction requires
+            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # on the `session` resource.
+        },
       },
-      "id": "A String", # Execute the read or SQL query in a previously-started transaction.
+      &quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
     },
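
For orientation, each of the mutually exclusive timestamp bounds documented in the `readOnly` message above can be written as a small request dictionary; a sketch with illustrative values:

```python
# One bound may be set per read-only transaction; values are illustrative.
strong = {"strong": True}            # observe all previously committed writes
exact = {"exactStaleness": "10s"}    # read exactly 10 seconds in the past
bounded = {"maxStaleness": "15s"}    # at most 15s stale; single-use only
at_time = {"readTimestamp": "2014-10-02T15:01:23.045123456Z"}
fresh = {"minReadTimestamp": "2014-10-02T15:01:23.045123456Z"}  # single-use only

# returnReadTimestamp may be combined with any bound:
options = {"readOnly": {**exact, "returnReadTimestamp": True}}
```
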
-    "params": { # Parameter names and values that bind to placeholders in the SQL string.
+    &quot;params&quot;: { # Parameter names and values that bind to placeholders in the SQL string.
         # 
         # A parameter placeholder consists of the `@` character followed by the
         # parameter name (for example, `@firstName`). Parameter names can contain
@@ -5243,21 +5218,46 @@
         # Parameters can appear anywhere that a literal value is expected.  The same
         # parameter name can be used more than once, for example:
         # 
-        # `"WHERE id &gt; @msg_id AND id &lt; @msg_id + 100"`
+        # `&quot;WHERE id &gt; @msg_id AND id &lt; @msg_id + 100&quot;`
         # 
         # It is an error to execute a SQL statement with unbound parameters.
-      "a_key": "", # Properties of the object.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
     },
-    "sql": "A String", # Required. The query request to generate partitions for. The request will fail if
-        # the query is not root partitionable. The query plan of a root
-        # partitionable query has a single distributed union operator. A distributed
-        # union operator conceptually divides one or more tables into multiple
-        # splits, remotely evaluates a subquery independently on each split, and
-        # then unions all results.
+    &quot;paramTypes&quot;: { # It is not always possible for Cloud Spanner to infer the right SQL type
+        # from a JSON value.  For example, values of type `BYTES` and values
+        # of type `STRING` both appear in params as JSON strings.
         # 
-        # This must not contain DML commands, such as INSERT, UPDATE, or
-        # DELETE. Use ExecuteStreamingSql with a
-        # PartitionedDml transaction for large, partition-friendly DML operations.
+        # In these cases, `param_types` can be used to specify the exact
+        # SQL type for some or all of the SQL query parameters. See the
+        # definition of Type for more information
+        # about SQL types.
+      &quot;a_key&quot;: { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
+          # table cell or returned from an SQL query.
+        &quot;arrayElementType&quot;: # Object with schema name: Type # If code == ARRAY, then `array_element_type`
+            # is the type of the array elements.
+        &quot;code&quot;: &quot;A String&quot;, # Required. The TypeCode for this type.
+        &quot;structType&quot;: { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
+            # provides type information for the struct&#x27;s fields.
+          &quot;fields&quot;: [ # The list of fields that make up this struct. Order is
+              # significant, because values of this struct type are represented as
+              # lists, where the order of field values matches the order of
+              # fields in the StructType. In turn, the order of fields
+              # matches the order of columns in a read request, or the order of
+              # fields in the `SELECT` clause of a query.
+            { # Message representing a single field of a struct.
+              &quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
+                  # SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
+                  # query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
+                  # `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
+                  #    columns might have an empty name (e.g., `&quot;SELECT
+                  # UPPER(ColName)&quot;`). Note that a query result can contain
+                  # multiple fields with the same name.
+              &quot;type&quot;: # Object with schema name: Type # The type of the field.
+            },
+          ],
+        },
+      },
+    },
   }
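
To make the `paramTypes` field above concrete: a hedged sketch of a request body that pins an ambiguous parameter to `BYTES` (table and column names are hypothetical):

```python
import base64

# BYTES values travel as base64-encoded JSON strings, so without
# `paramTypes` this parameter would be indistinguishable from a STRING.
body = {
    "sql": "SELECT * FROM Users WHERE Token = @token",
    "params": {"token": base64.b64encode(b"\x00\x01").decode("ascii")},
    "paramTypes": {"token": {"code": "BYTES"}},
}
```
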
 
   x__xgafv: string, V1 error format.
@@ -5270,14 +5270,8 @@
 
     { # The response for PartitionQuery
       # or PartitionRead
-    "transaction": { # A transaction. # Transaction created by this request.
-      "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
-          # for the transaction. Not returned by default: see
-          # TransactionOptions.ReadOnly.return_read_timestamp.
-          #
-          # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-          # Example: `"2014-10-02T15:01:23.045123456Z"`.
-      "id": "A String", # `id` may be used to identify the transaction in subsequent
+    &quot;transaction&quot;: { # A transaction. # Transaction created by this request.
+      &quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
           # Read,
           # ExecuteSql,
           # Commit, or
@@ -5285,11 +5279,17 @@
           #
           # Single-use read-only transactions do not have IDs, because
           # single-use transactions do not support multiple requests.
+      &quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
+          # for the transaction. Not returned by default: see
+          # TransactionOptions.ReadOnly.return_read_timestamp.
+          #
+          # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+          # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
     },
-    "partitions": [ # Partitions created by this request.
+    &quot;partitions&quot;: [ # Partitions created by this request.
       { # Information returned for each partition returned in a
           # PartitionResponse.
-        "partitionToken": "A String", # This token can be passed to Read, StreamingRead, ExecuteSql, or
+        &quot;partitionToken&quot;: &quot;A String&quot;, # This token can be passed to Read, StreamingRead, ExecuteSql, or
             # ExecuteStreamingSql requests to restrict the results to those identified by
             # this partition token.
       },
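
A sketch of consuming such a response: each partition token is handed, together with the same transaction and statement, to a streaming call, typically from independent workers (`sessions`, `session`, `response`, and the SQL text are hypothetical placeholders):

```python
# The SQL must match the statement used to generate the partitions,
# and the transaction must be the one returned in the response.
for partition in response["partitions"]:
    sessions.executeStreamingSql(
        session=session,
        body={
            "transaction": {"id": response["transaction"]["id"]},
            "sql": "SELECT UserId, Name FROM Users",
            "partitionToken": partition["partitionToken"],
        },
    ).execute()
```
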
@@ -5319,365 +5319,17 @@
     The object takes the form of:
 
 { # The request for PartitionRead
-    "index": "A String", # If non-empty, the name of an index on table. This index is
-        # used instead of the table primary key when interpreting key_set
-        # and sorting result rows. See key_set for further information.
-    "transaction": { # This message is used to select the transaction in which a # Read only snapshot transactions are supported, read/write and single use
+    &quot;columns&quot;: [ # The columns of table to be returned for each row matching
+        # this request.
+      &quot;A String&quot;,
+    ],
+    &quot;transaction&quot;: { # This message is used to select the transaction in which a # Read only snapshot transactions are supported, read/write and single use
         # transactions are not.
         # Read or
         # ExecuteSql call runs.
         #
         # See TransactionOptions for more information about transactions.
-      "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
-          # it. The transaction ID of the new transaction is returned in
-          # ResultSetMetadata.transaction, which is a Transaction.
-          #
-          #
-          # Each session can have at most one active transaction at a time. After the
-          # active transaction is completed, the session can immediately be
-          # re-used for the next transaction. It is not necessary to create a
-          # new session for each transaction.
-          #
-          # # Transaction Modes
-          #
-          # Cloud Spanner supports three transaction modes:
-          #
-          #   1. Locking read-write. This type of transaction is the only way
-          #      to write data into Cloud Spanner. These transactions rely on
-          #      pessimistic locking and, if necessary, two-phase commit.
-          #      Locking read-write transactions may abort, requiring the
-          #      application to retry.
-          #
-          #   2. Snapshot read-only. This transaction type provides guaranteed
-          #      consistency across several reads, but does not allow
-          #      writes. Snapshot read-only transactions can be configured to
-          #      read at timestamps in the past. Snapshot read-only
-          #      transactions do not need to be committed.
-          #
-          #   3. Partitioned DML. This type of transaction is used to execute
-          #      a single Partitioned DML statement. Partitioned DML partitions
-          #      the key space and runs the DML statement over each partition
-          #      in parallel using separate, internal transactions that commit
-          #      independently. Partitioned DML transactions do not need to be
-          #      committed.
-          #
-          # For transactions that only read, snapshot read-only transactions
-          # provide simpler semantics and are almost always faster. In
-          # particular, read-only transactions do not take locks, so they do
-          # not conflict with read-write transactions. As a consequence of not
-          # taking locks, they also do not abort, so retry loops are not needed.
-          #
-          # Transactions may only read/write data in a single database. They
-          # may, however, read/write data in different tables within that
-          # database.
-          #
-          # ## Locking Read-Write Transactions
-          #
-          # Locking transactions may be used to atomically read-modify-write
-          # data anywhere in a database. This type of transaction is externally
-          # consistent.
-          #
-          # Clients should attempt to minimize the amount of time a transaction
-          # is active. Faster transactions commit with higher probability
-          # and cause less contention. Cloud Spanner attempts to keep read locks
-          # active as long as the transaction continues to do reads, and the
-          # transaction has not been terminated by
-          # Commit or
-          # Rollback.  Long periods of
-          # inactivity at the client may cause Cloud Spanner to release a
-          # transaction's locks and abort it.
-          #
-          # Conceptually, a read-write transaction consists of zero or more
-          # reads or SQL statements followed by
-          # Commit. At any time before
-          # Commit, the client can send a
-          # Rollback request to abort the
-          # transaction.
-          #
-          # ### Semantics
-          #
-          # Cloud Spanner can commit the transaction if all read locks it acquired
-          # are still valid at commit time, and it is able to acquire write
-          # locks for all writes. Cloud Spanner can abort the transaction for any
-          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
-          # that the transaction has not modified any user data in Cloud Spanner.
-          #
-          # Unless the transaction commits, Cloud Spanner makes no guarantees about
-          # how long the transaction's locks were held for. It is an error to
-          # use Cloud Spanner locks for any sort of mutual exclusion other than
-          # between Cloud Spanner transactions themselves.
-          #
-          # ### Retrying Aborted Transactions
-          #
-          # When a transaction aborts, the application can choose to retry the
-          # whole transaction again. To maximize the chances of successfully
-          # committing the retry, the client should execute the retry in the
-          # same session as the original attempt. The original session's lock
-          # priority increases with each consecutive abort, meaning that each
-          # attempt has a slightly better chance of success than the previous.
-          #
-          # Under some circumstances (e.g., many transactions attempting to
-          # modify the same row(s)), a transaction can abort many times in a
-          # short period before successfully committing. Thus, it is not a good
-          # idea to cap the number of retries a transaction can attempt;
-          # instead, it is better to limit the total amount of wall time spent
-          # retrying.
-          #
-          # ### Idle Transactions
-          #
-          # A transaction is considered idle if it has no outstanding reads or
-          # SQL queries and has not started a read or SQL query within the last 10
-          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
-          # don't hold on to locks indefinitely. In that case, the commit will
-          # fail with error `ABORTED`.
-          #
-          # If this behavior is undesirable, periodically executing a simple
-          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
-          # transaction from becoming idle.
-          #
-          # ## Snapshot Read-Only Transactions
-          #
-          # Snapshot read-only transactions provides a simpler method than
-          # locking read-write transactions for doing several consistent
-          # reads. However, this type of transaction does not support writes.
-          #
-          # Snapshot transactions do not take locks. Instead, they work by
-          # choosing a Cloud Spanner timestamp, then executing all reads at that
-          # timestamp. Since they do not acquire locks, they do not block
-          # concurrent read-write transactions.
-          #
-          # Unlike locking read-write transactions, snapshot read-only
-          # transactions never abort. They can fail if the chosen read
-          # timestamp is garbage collected; however, the default garbage
-          # collection policy is generous enough that most applications do not
-          # need to worry about this in practice.
-          #
-          # Snapshot read-only transactions do not need to call
-          # Commit or
-          # Rollback (and in fact are not
-          # permitted to do so).
-          #
-          # To execute a snapshot transaction, the client specifies a timestamp
-          # bound, which tells Cloud Spanner how to choose a read timestamp.
-          #
-          # The types of timestamp bound are:
-          #
-          #   - Strong (the default).
-          #   - Bounded staleness.
-          #   - Exact staleness.
-          #
-          # If the Cloud Spanner database to be read is geographically distributed,
-          # stale read-only transactions can execute more quickly than strong
-          # or read-write transaction, because they are able to execute far
-          # from the leader replica.
-          #
-          # Each type of timestamp bound is discussed in detail below.
-          #
-          # ### Strong
-          #
-          # Strong reads are guaranteed to see the effects of all transactions
-          # that have committed before the start of the read. Furthermore, all
-          # rows yielded by a single read are consistent with each other -- if
-          # any part of the read observes a transaction, all parts of the read
-          # see the transaction.
-          #
-          # Strong reads are not repeatable: two consecutive strong read-only
-          # transactions might return inconsistent results if there are
-          # concurrent writes. If consistency across reads is required, the
-          # reads should be executed within a transaction or at an exact read
-          # timestamp.
-          #
-          # See TransactionOptions.ReadOnly.strong.
-          #
-          # ### Exact Staleness
-          #
-          # These timestamp bounds execute reads at a user-specified
-          # timestamp. Reads at a timestamp are guaranteed to see a consistent
-          # prefix of the global transaction history: they observe
-          # modifications done by all transactions with a commit timestamp &lt;=
-          # the read timestamp, and observe none of the modifications done by
-          # transactions with a larger commit timestamp. They will block until
-          # all conflicting transactions that may be assigned commit timestamps
-          # &lt;= the read timestamp have finished.
-          #
-          # The timestamp can either be expressed as an absolute Cloud Spanner commit
-          # timestamp or a staleness relative to the current time.
-          #
-          # These modes do not require a "negotiation phase" to pick a
-          # timestamp. As a result, they execute slightly faster than the
-          # equivalent boundedly stale concurrency modes. On the other hand,
-          # boundedly stale reads usually return fresher results.
-          #
-          # See TransactionOptions.ReadOnly.read_timestamp and
-          # TransactionOptions.ReadOnly.exact_staleness.
-          #
-          # ### Bounded Staleness
-          #
-          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
-          # subject to a user-provided staleness bound. Cloud Spanner chooses the
-          # newest timestamp within the staleness bound that allows execution
-          # of the reads at the closest available replica without blocking.
-          #
-          # All rows yielded are consistent with each other -- if any part of
-          # the read observes a transaction, all parts of the read see the
-          # transaction. Boundedly stale reads are not repeatable: two stale
-          # reads, even if they use the same staleness bound, can execute at
-          # different timestamps and thus return inconsistent results.
-          #
-          # Boundedly stale reads execute in two phases: the first phase
-          # negotiates a timestamp among all replicas needed to serve the
-          # read. In the second phase, reads are executed at the negotiated
-          # timestamp.
-          #
-          # As a result of the two phase execution, bounded staleness reads are
-          # usually a little slower than comparable exact staleness
-          # reads. However, they are typically able to return fresher
-          # results, and are more likely to execute at the closest replica.
-          #
-          # Because the timestamp negotiation requires up-front knowledge of
-          # which rows will be read, it can only be used with single-use
-          # read-only transactions.
-          #
-          # See TransactionOptions.ReadOnly.max_staleness and
-          # TransactionOptions.ReadOnly.min_read_timestamp.
-          #
-          # ### Old Read Timestamps and Garbage Collection
-          #
-          # Cloud Spanner continuously garbage collects deleted and overwritten data
-          # in the background to reclaim storage space. This process is known
-          # as "version GC". By default, version GC reclaims versions after they
-          # are one hour old. Because of this, Cloud Spanner cannot perform reads
-          # at read timestamps more than one hour in the past. This
-          # restriction also applies to in-progress reads and/or SQL queries whose
-          # timestamp become too old while executing. Reads and SQL queries with
-          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
-          #
-          # ## Partitioned DML Transactions
-          #
-          # Partitioned DML transactions are used to execute DML statements with a
-          # different execution strategy that provides different, and often better,
-          # scalability properties for large, table-wide operations than DML in a
-          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
-          # should prefer using ReadWrite transactions.
-          #
-          # Partitioned DML partitions the keyspace and runs the DML statement on each
-          # partition in separate, internal transactions. These transactions commit
-          # automatically when complete, and run independently from one another.
-          #
-          # To reduce lock contention, this execution strategy only acquires read locks
-          # on rows that match the WHERE clause of the statement. Additionally, the
-          # smaller per-partition transactions hold locks for less time.
-          #
-          # That said, Partitioned DML is not a drop-in replacement for standard DML used
-          # in ReadWrite transactions.
-          #
-          #  - The DML statement must be fully-partitionable. Specifically, the statement
-          #    must be expressible as the union of many statements which each access only
-          #    a single row of the table.
-          #
-          #  - The statement is not applied atomically to all rows of the table. Rather,
-          #    the statement is applied atomically to partitions of the table, in
-          #    independent transactions. Secondary index rows are updated atomically
-          #    with the base table rows.
-          #
-          #  - Partitioned DML does not guarantee exactly-once execution semantics
-          #    against a partition. The statement will be applied at least once to each
-          #    partition. It is strongly recommended that the DML statement should be
-          #    idempotent to avoid unexpected results. For instance, it is potentially
-          #    dangerous to run a statement such as
-          #    `UPDATE table SET column = column + 1` as it could be run multiple times
-          #    against some rows.
-          #
-          #  - The partitions are committed automatically - there is no support for
-          #    Commit or Rollback. If the call returns an error, or if the client issuing
-          #    the ExecuteSql call dies, it is possible that some rows had the statement
-          #    executed on them successfully. It is also possible that statement was
-          #    never executed against other rows.
-          #
-          #  - Partitioned DML transactions may only contain the execution of a single
-          #    DML statement via ExecuteSql or ExecuteStreamingSql.
-          #
-          #  - If any error is encountered during the execution of the partitioned DML
-          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
-          #    value that cannot be stored due to schema constraints), then the
-          #    operation is stopped at that point and an error is returned. It is
-          #    possible that at this point, some partitions have been committed (or even
-          #    committed multiple times), and other partitions have not been run at all.
-          #
-          # Given the above, Partitioned DML is good fit for large, database-wide,
-          # operations that are idempotent, such as deleting old rows from a very large
-          # table.
-        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
-            #
-            # Authorization to begin a read-write transaction requires
-            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
-            # on the `session` resource.
-            # transaction type has no options.
-        },
-        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
-            #
-            # Authorization to begin a read-only transaction requires
-            # `spanner.databases.beginReadOnlyTransaction` permission
-            # on the `session` resource.
-          "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
-              #
-              # This is useful for requesting fresher data than some previous
-              # read, or data that is fresh enough to observe the effects of some
-              # previously committed transaction whose timestamp is known.
-              #
-              # Note that this option can only be used in single-use transactions.
-              #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
-              # reads at a specific timestamp are repeatable; the same read at
-              # the same timestamp always returns the same data. If the
-              # timestamp is in the future, the read will block until the
-              # specified timestamp, modulo the read's deadline.
-              #
-              # Useful for large scale consistent reads such as mapreduces, or
-              # for coordinating many reads against a consistent snapshot of the
-              # data.
-              #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
-              # seconds. Guarantees that all writes that have committed more
-              # than the specified number of seconds ago are visible. Because
-              # Cloud Spanner chooses the exact timestamp, this mode works even if
-              # the client's local clock is substantially skewed from Cloud Spanner
-              # commit timestamps.
-              #
-              # Useful for reading the freshest data available at a nearby
-              # replica, while bounding the possible staleness if the local
-              # replica has fallen behind.
-              #
-              # Note that this option can only be used in single-use
-              # transactions.
-          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
-              # old. The timestamp is chosen soon after the read is started.
-              #
-              # Guarantees that all writes that have committed more than the
-              # specified number of seconds ago are visible. Because Cloud Spanner
-              # chooses the exact timestamp, this mode works even if the client's
-              # local clock is substantially skewed from Cloud Spanner commit
-              # timestamps.
-              #
-              # Useful for reading at nearby replicas without the distributed
-              # timestamp negotiation overhead of `max_staleness`.
-          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
-              # the Transaction message that describes the transaction.
-          "strong": True or False, # Read at a timestamp where all previously committed transactions
-              # are visible.
-        },
-        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
-            #
-            # Authorization to begin a Partitioned DML transaction requires
-            # `spanner.databases.beginPartitionedDmlTransaction` permission
-            # on the `session` resource.
-        },
-      },
-      "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
+      &quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
           # This is the most efficient way to execute a transaction that
           # consists of a single SQL query.
           #
@@ -5734,7 +5386,7 @@
           # Commit or
           # Rollback.  Long periods of
           # inactivity at the client may cause Cloud Spanner to release a
-          # transaction's locks and abort it.
+          # transaction&#x27;s locks and abort it.
           #
           # Conceptually, a read-write transaction consists of zero or more
           # reads or SQL statements followed by
@@ -5752,7 +5404,7 @@
           # that the transaction has not modified any user data in Cloud Spanner.
           #
           # Unless the transaction commits, Cloud Spanner makes no guarantees about
-          # how long the transaction's locks were held for. It is an error to
+          # how long the transaction&#x27;s locks were held for. It is an error to
           # use Cloud Spanner locks for any sort of mutual exclusion other than
           # between Cloud Spanner transactions themselves.
           #
@@ -5761,7 +5413,7 @@
           # When a transaction aborts, the application can choose to retry the
           # whole transaction again. To maximize the chances of successfully
           # committing the retry, the client should execute the retry in the
-          # same session as the original attempt. The original session's lock
+          # same session as the original attempt. The original session&#x27;s lock
           # priority increases with each consecutive abort, meaning that each
           # attempt has a slightly better chance of success than the previous.
           #
@@ -5777,7 +5429,7 @@
           # A transaction is considered idle if it has no outstanding reads or
           # SQL queries and has not started a read or SQL query within the last 10
           # seconds. Idle transactions can be aborted by Cloud Spanner so that they
-          # don't hold on to locks indefinitely. In that case, the commit will
+          # don&#x27;t hold on to locks indefinitely. In that case, the commit will
           # fail with error `ABORTED`.
           #
           # If this behavior is undesirable, periodically executing a simple
@@ -5852,7 +5504,7 @@
           # The timestamp can either be expressed as an absolute Cloud Spanner commit
           # timestamp or a staleness relative to the current time.
           #
-          # These modes do not require a "negotiation phase" to pick a
+          # These modes do not require a &quot;negotiation phase&quot; to pick a
           # timestamp. As a result, they execute slightly faster than the
           # equivalent boundedly stale concurrency modes. On the other hand,
           # boundedly stale reads usually return fresher results.
@@ -5894,7 +5546,7 @@
           #
           # Cloud Spanner continuously garbage collects deleted and overwritten data
           # in the background to reclaim storage space. This process is known
-          # as "version GC". By default, version GC reclaims versions after they
+          # as &quot;version GC&quot;. By default, version GC reclaims versions after they
           # are one hour old. Because of this, Cloud Spanner cannot perform reads
           # at read timestamps more than one hour in the past. This
           # restriction also applies to in-progress reads and/or SQL queries whose
@@ -5956,19 +5608,18 @@
          # Given the above, Partitioned DML is a good fit for large, database-wide
           # operations that are idempotent, such as deleting old rows from a very large
           # table.
-        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+        &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
             #
-            # Authorization to begin a read-write transaction requires
-            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # Authorization to begin a Partitioned DML transaction requires
+            # `spanner.databases.beginPartitionedDmlTransaction` permission
             # on the `session` resource.
-            # transaction type has no options.
         },
-        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
+        &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
             #
             # Authorization to begin a read-only transaction requires
             # `spanner.databases.beginReadOnlyTransaction` permission
             # on the `session` resource.
-          "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
+          &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
               #
               # This is useful for requesting fresher data than some previous
               # read, or data that is fresh enough to observe the effects of some
@@ -5976,25 +5627,25 @@
               #
               # Note that this option can only be used in single-use transactions.
               #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
               # reads at a specific timestamp are repeatable; the same read at
               # the same timestamp always returns the same data. If the
               # timestamp is in the future, the read will block until the
-              # specified timestamp, modulo the read's deadline.
+              # specified timestamp, modulo the read&#x27;s deadline.
               #
              # Useful for large-scale consistent reads such as mapreduces, or
               # for coordinating many reads against a consistent snapshot of the
               # data.
               #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
               # seconds. Guarantees that all writes that have committed more
               # than the specified number of seconds ago are visible. Because
               # Cloud Spanner chooses the exact timestamp, this mode works even if
-              # the client's local clock is substantially skewed from Cloud Spanner
+              # the client&#x27;s local clock is substantially skewed from Cloud Spanner
               # commit timestamps.
               #
               # Useful for reading the freshest data available at a nearby
@@ -6003,32 +5654,403 @@
               #
               # Note that this option can only be used in single-use
               # transactions.
-          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
+          &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
+              # the Transaction message that describes the transaction.
+          &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
               # old. The timestamp is chosen soon after the read is started.
               #
               # Guarantees that all writes that have committed more than the
               # specified number of seconds ago are visible. Because Cloud Spanner
-              # chooses the exact timestamp, this mode works even if the client's
+              # chooses the exact timestamp, this mode works even if the client&#x27;s
               # local clock is substantially skewed from Cloud Spanner commit
               # timestamps.
               #
               # Useful for reading at nearby replicas without the distributed
               # timestamp negotiation overhead of `max_staleness`.
-          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
-              # the Transaction message that describes the transaction.
-          "strong": True or False, # Read at a timestamp where all previously committed transactions
+          &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
               # are visible.
         },
-        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
+        &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+            # transaction type has no options.
+            #
+            # Authorization to begin a read-write transaction requires
+            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # on the `session` resource.
+        },
+      },
+      &quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
+          # it. The transaction ID of the new transaction is returned in
+          # ResultSetMetadata.transaction, which is a Transaction.
+          #
+          #
+          # Each session can have at most one active transaction at a time. After the
+          # active transaction is completed, the session can immediately be
+          # re-used for the next transaction. It is not necessary to create a
+          # new session for each transaction.
+          #
+          # # Transaction Modes
+          #
+          # Cloud Spanner supports three transaction modes:
+          #
+          #   1. Locking read-write. This type of transaction is the only way
+          #      to write data into Cloud Spanner. These transactions rely on
+          #      pessimistic locking and, if necessary, two-phase commit.
+          #      Locking read-write transactions may abort, requiring the
+          #      application to retry.
+          #
+          #   2. Snapshot read-only. This transaction type provides guaranteed
+          #      consistency across several reads, but does not allow
+          #      writes. Snapshot read-only transactions can be configured to
+          #      read at timestamps in the past. Snapshot read-only
+          #      transactions do not need to be committed.
+          #
+          #   3. Partitioned DML. This type of transaction is used to execute
+          #      a single Partitioned DML statement. Partitioned DML partitions
+          #      the key space and runs the DML statement over each partition
+          #      in parallel using separate, internal transactions that commit
+          #      independently. Partitioned DML transactions do not need to be
+          #      committed.
+          #
+          # For transactions that only read, snapshot read-only transactions
+          # provide simpler semantics and are almost always faster. In
+          # particular, read-only transactions do not take locks, so they do
+          # not conflict with read-write transactions. As a consequence of not
+          # taking locks, they also do not abort, so retry loops are not needed.
+          #
+          # Transactions may only read/write data in a single database. They
+          # may, however, read/write data in different tables within that
+          # database.
+          #
+          # ## Locking Read-Write Transactions
+          #
+          # Locking transactions may be used to atomically read-modify-write
+          # data anywhere in a database. This type of transaction is externally
+          # consistent.
+          #
+          # Clients should attempt to minimize the amount of time a transaction
+          # is active. Faster transactions commit with higher probability
+          # and cause less contention. Cloud Spanner attempts to keep read locks
+          # active as long as the transaction continues to do reads, and the
+          # transaction has not been terminated by
+          # Commit or
+          # Rollback.  Long periods of
+          # inactivity at the client may cause Cloud Spanner to release a
+          # transaction&#x27;s locks and abort it.
+          #
+          # Conceptually, a read-write transaction consists of zero or more
+          # reads or SQL statements followed by
+          # Commit. At any time before
+          # Commit, the client can send a
+          # Rollback request to abort the
+          # transaction.
+          #
+          # ### Semantics
+          #
+          # Cloud Spanner can commit the transaction if all read locks it acquired
+          # are still valid at commit time, and it is able to acquire write
+          # locks for all writes. Cloud Spanner can abort the transaction for any
+          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
+          # that the transaction has not modified any user data in Cloud Spanner.
+          #
+          # Unless the transaction commits, Cloud Spanner makes no guarantees about
+          # how long the transaction&#x27;s locks were held for. It is an error to
+          # use Cloud Spanner locks for any sort of mutual exclusion other than
+          # between Cloud Spanner transactions themselves.
+          #
+          # ### Retrying Aborted Transactions
+          #
+          # When a transaction aborts, the application can choose to retry the
+          # whole transaction again. To maximize the chances of successfully
+          # committing the retry, the client should execute the retry in the
+          # same session as the original attempt. The original session&#x27;s lock
+          # priority increases with each consecutive abort, meaning that each
+          # attempt has a slightly better chance of success than the previous.
+          #
+          # Under some circumstances (e.g., many transactions attempting to
+          # modify the same row(s)), a transaction can abort many times in a
+          # short period before successfully committing. Thus, it is not a good
+          # idea to cap the number of retries a transaction can attempt;
+          # instead, it is better to limit the total amount of wall time spent
+          # retrying.
+          #
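
A sketch of the retry pattern described above, bounding total wall time rather than attempt count (`run_transaction` is a hypothetical callable wrapping one full begin/work/commit cycle; mapping `ABORTED` to HTTP 409 is an assumption about the REST error mapping):

```python
import time

from googleapiclient.errors import HttpError

# Retry in the same session so lock priority increases on each attempt.
deadline = time.time() + 60  # total retry budget, in seconds
while True:
    try:
        run_transaction()  # hypothetical: begin, read/write, commit
        break
    except HttpError as err:
        # gRPC ABORTED is assumed to surface as HTTP 409 here.
        if err.resp.status != 409 or time.time() > deadline:
            raise
```
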
+          # ### Idle Transactions
+          #
+          # A transaction is considered idle if it has no outstanding reads or
+          # SQL queries and has not started a read or SQL query within the last 10
+          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
+          # don&#x27;t hold on to locks indefinitely. In that case, the commit will
+          # fail with error `ABORTED`.
+          #
+          # If this behavior is undesirable, periodically executing a simple
+          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
+          # transaction from becoming idle.
+          #
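
A sketch of that keep-alive, issued every few seconds while the transaction would otherwise sit idle (`sessions`, `session`, and `txn_id` are hypothetical placeholders):

```python
# A trivial query resets the 10-second idle clock on the transaction.
sessions.executeSql(
    session=session,
    body={"transaction": {"id": txn_id}, "sql": "SELECT 1"},
).execute()
```
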
+          # ## Snapshot Read-Only Transactions
+          #
+          # Snapshot read-only transactions provide a simpler method than
+          # locking read-write transactions for doing several consistent
+          # reads. However, this type of transaction does not support writes.
+          #
+          # Snapshot transactions do not take locks. Instead, they work by
+          # choosing a Cloud Spanner timestamp, then executing all reads at that
+          # timestamp. Since they do not acquire locks, they do not block
+          # concurrent read-write transactions.
+          #
+          # Unlike locking read-write transactions, snapshot read-only
+          # transactions never abort. They can fail if the chosen read
+          # timestamp is garbage collected; however, the default garbage
+          # collection policy is generous enough that most applications do not
+          # need to worry about this in practice.
+          #
+          # Snapshot read-only transactions do not need to call
+          # Commit or
+          # Rollback (and in fact are not
+          # permitted to do so).
+          #
+          # To execute a snapshot transaction, the client specifies a timestamp
+          # bound, which tells Cloud Spanner how to choose a read timestamp.
+          #
+          # The types of timestamp bound are:
+          #
+          #   - Strong (the default).
+          #   - Bounded staleness.
+          #   - Exact staleness.
+          #
+          # If the Cloud Spanner database to be read is geographically distributed,
+          # stale read-only transactions can execute more quickly than strong
+          # or read-write transactions, because they are able to execute far
+          # from the leader replica.
+          #
+          # Each type of timestamp bound is discussed in detail below.
+          #
+          # ### Strong
+          #
+          # Strong reads are guaranteed to see the effects of all transactions
+          # that have committed before the start of the read. Furthermore, all
+          # rows yielded by a single read are consistent with each other -- if
+          # any part of the read observes a transaction, all parts of the read
+          # see the transaction.
+          #
+          # Strong reads are not repeatable: two consecutive strong read-only
+          # transactions might return inconsistent results if there are
+          # concurrent writes. If consistency across reads is required, the
+          # reads should be executed within a transaction or at an exact read
+          # timestamp.
+          #
+          # See TransactionOptions.ReadOnly.strong.
+          #
+          # ### Exact Staleness
+          #
+          # These timestamp bounds execute reads at a user-specified
+          # timestamp. Reads at a timestamp are guaranteed to see a consistent
+          # prefix of the global transaction history: they observe
+          # modifications done by all transactions with a commit timestamp &lt;=
+          # the read timestamp, and observe none of the modifications done by
+          # transactions with a larger commit timestamp. They will block until
+          # all conflicting transactions that may be assigned commit timestamps
+          # &lt;= the read timestamp have finished.
+          #
+          # The timestamp can either be expressed as an absolute Cloud Spanner commit
+          # timestamp or a staleness relative to the current time.
+          #
+          # These modes do not require a &quot;negotiation phase&quot; to pick a
+          # timestamp. As a result, they execute slightly faster than the
+          # equivalent boundedly stale concurrency modes. On the other hand,
+          # boundedly stale reads usually return fresher results.
+          #
+          # See TransactionOptions.ReadOnly.read_timestamp and
+          # TransactionOptions.ReadOnly.exact_staleness.
+          #
+          # ### Bounded Staleness
+          #
+          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
+          # subject to a user-provided staleness bound. Cloud Spanner chooses the
+          # newest timestamp within the staleness bound that allows execution
+          # of the reads at the closest available replica without blocking.
+          #
+          # All rows yielded are consistent with each other -- if any part of
+          # the read observes a transaction, all parts of the read see the
+          # transaction. Boundedly stale reads are not repeatable: two stale
+          # reads, even if they use the same staleness bound, can execute at
+          # different timestamps and thus return inconsistent results.
+          #
+          # Boundedly stale reads execute in two phases: the first phase
+          # negotiates a timestamp among all replicas needed to serve the
+          # read. In the second phase, reads are executed at the negotiated
+          # timestamp.
+          #
+          # As a result of the two-phase execution, bounded staleness reads are
+          # usually a little slower than comparable exact staleness
+          # reads. However, they are typically able to return fresher
+          # results, and are more likely to execute at the closest replica.
+          #
+          # Because the timestamp negotiation requires up-front knowledge of
+          # which rows will be read, it can only be used with single-use
+          # read-only transactions.
+          #
+          # See TransactionOptions.ReadOnly.max_staleness and
+          # TransactionOptions.ReadOnly.min_read_timestamp.
+          #
+          # ### Old Read Timestamps and Garbage Collection
+          #
+          # Cloud Spanner continuously garbage collects deleted and overwritten data
+          # in the background to reclaim storage space. This process is known
+          # as &quot;version GC&quot;. By default, version GC reclaims versions after they
+          # are one hour old. Because of this, Cloud Spanner cannot perform reads
+          # at read timestamps more than one hour in the past. This
+          # restriction also applies to in-progress reads and/or SQL queries whose
+          # timestamps become too old while executing. Reads and SQL queries with
+          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
+          #
+          # ## Partitioned DML Transactions
+          #
+          # Partitioned DML transactions are used to execute DML statements with a
+          # different execution strategy that provides different, and often better,
+          # scalability properties for large, table-wide operations than DML in a
+          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
+          # should prefer using ReadWrite transactions.
+          #
+          # Partitioned DML partitions the keyspace and runs the DML statement on each
+          # partition in separate, internal transactions. These transactions commit
+          # automatically when complete, and run independently from one another.
+          #
+          # To reduce lock contention, this execution strategy only acquires read locks
+          # on rows that match the WHERE clause of the statement. Additionally, the
+          # smaller per-partition transactions hold locks for less time.
+          #
+          # That said, Partitioned DML is not a drop-in replacement for standard DML used
+          # in ReadWrite transactions.
+          #
+          #  - The DML statement must be fully-partitionable. Specifically, the statement
+          #    must be expressible as the union of many statements which each access only
+          #    a single row of the table.
+          #
+          #  - The statement is not applied atomically to all rows of the table. Rather,
+          #    the statement is applied atomically to partitions of the table, in
+          #    independent transactions. Secondary index rows are updated atomically
+          #    with the base table rows.
+          #
+          #  - Partitioned DML does not guarantee exactly-once execution semantics
+          #    against a partition. The statement will be applied at least once to each
+          #    partition. It is strongly recommended that the DML statement be
+          #    idempotent to avoid unexpected results. For instance, it is potentially
+          #    dangerous to run a statement such as
+          #    `UPDATE table SET column = column + 1` as it could be run multiple times
+          #    against some rows.
+          #
+          #  - The partitions are committed automatically - there is no support for
+          #    Commit or Rollback. If the call returns an error, or if the client issuing
+          #    the ExecuteSql call dies, it is possible that some rows had the statement
+          #    executed on them successfully. It is also possible that the statement was
+          #    never executed against other rows.
+          #
+          #  - Partitioned DML transactions may only contain the execution of a single
+          #    DML statement via ExecuteSql or ExecuteStreamingSql.
+          #
+          #  - If any error is encountered during the execution of the partitioned DML
+          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
+          #    value that cannot be stored due to schema constraints), then the
+          #    operation is stopped at that point and an error is returned. It is
+          #    possible that at this point, some partitions have been committed (or even
+          #    committed multiple times), and other partitions have not been run at all.
+          #
+          # Given the above, Partitioned DML is a good fit for large, database-wide
+          # operations that are idempotent, such as deleting old rows from a very large
+          # table.
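+          #
+          # A minimal usage sketch (an illustration, not part of this request
+          # body; `svc` is assumed to be a built spanner_v1 client and
+          # `session` an existing session name). One Partitioned DML statement
+          # is begun and executed:
+          #
+          #     sessions = svc.projects().instances().databases().sessions()
+          #     txn = sessions.beginTransaction(
+          #         session=session,
+          #         body={'options': {'partitionedDml': {}}}).execute()
+          #     sessions.executeSql(
+          #         session=session,
+          #         body={'transaction': {'id': txn['id']},
+          #               'seqno': '1',  # required for DML statements
+          #               'sql': "DELETE FROM Events "
+          #                      "WHERE EventDate < '2000-01-01'"}).execute()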
+        &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
             #
             # Authorization to begin a Partitioned DML transaction requires
             # `spanner.databases.beginPartitionedDmlTransaction` permission
             # on the `session` resource.
         },
+        &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
+            #
+            # Authorization to begin a read-only transaction requires
+            # `spanner.databases.beginReadOnlyTransaction` permission
+            # on the `session` resource.
+          &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
+              #
+              # This is useful for requesting fresher data than some previous
+              # read, or data that is fresh enough to observe the effects of some
+              # previously committed transaction whose timestamp is known.
+              #
+              # Note that this option can only be used in single-use transactions.
+              #
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
+              # reads at a specific timestamp are repeatable; the same read at
+              # the same timestamp always returns the same data. If the
+              # timestamp is in the future, the read will block until the
+              # specified timestamp, modulo the read&#x27;s deadline.
+              #
+              # Useful for large scale consistent reads such as mapreduces, or
+              # for coordinating many reads against a consistent snapshot of the
+              # data.
+              #
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
+              # seconds. Guarantees that all writes that have committed more
+              # than the specified number of seconds ago are visible. Because
+              # Cloud Spanner chooses the exact timestamp, this mode works even if
+              # the client&#x27;s local clock is substantially skewed from Cloud Spanner
+              # commit timestamps.
+              #
+              # Useful for reading the freshest data available at a nearby
+              # replica, while bounding the possible staleness if the local
+              # replica has fallen behind.
+              #
+              # Note that this option can only be used in single-use
+              # transactions.
+          &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
+              # the Transaction message that describes the transaction.
+          &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
+              # old. The timestamp is chosen soon after the read is started.
+              #
+              # Guarantees that all writes that have committed more than the
+              # specified number of seconds ago are visible. Because Cloud Spanner
+              # chooses the exact timestamp, this mode works even if the client&#x27;s
+              # local clock is substantially skewed from Cloud Spanner commit
+              # timestamps.
+              #
+              # Useful for reading at nearby replicas without the distributed
+              # timestamp negotiation overhead of `max_staleness`.
+          &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
+              # are visible.
+        },
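+        # An illustrative value for the `readOnly` message above (a sketch):
+        # bound staleness to at most 10 seconds and return the read timestamp
+        # that Cloud Spanner chose:
+        #
+        #     {'maxStaleness': '10s', 'returnReadTimestamp': True}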
+        &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+            #
+            # Authorization to begin a read-write transaction requires
+            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # on the `session` resource.
+            # transaction type has no options.
+        },
       },
-      "id": "A String", # Execute the read or SQL query in a previously-started transaction.
+      &quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
     },
-    "keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
+    &quot;table&quot;: &quot;A String&quot;, # Required. The name of the table in the database to be read.
+    &quot;partitionOptions&quot;: { # Options for a PartitionQueryRequest and # Additional options that affect how many partitions are created.
+        # PartitionReadRequest.
+      &quot;partitionSizeBytes&quot;: &quot;A String&quot;, # **Note:** This hint is currently ignored by PartitionQuery and
+          # PartitionRead requests.
+          #
+          # The desired data size for each partition generated.  The default for this
+          # option is currently 1 GiB.  This is only a hint. The actual size of each
+          # partition may be smaller or larger than the requested size.
+      &quot;maxPartitions&quot;: &quot;A String&quot;, # **Note:** This hint is currently ignored by PartitionQuery and
+          # PartitionRead requests.
+          #
+          # The desired maximum number of partitions to return.  For example, this may
+          # be set to the number of workers available.  The default for this option
+          # is currently 10,000. The maximum value is currently 200,000.  This is only
+          # a hint.  The actual number of partitions returned may be smaller or larger
+          # than the requested maximum.
+    },
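+    # An illustrative `partitionOptions` value (both fields are hints, and
+    # are currently ignored, as noted above):
+    #
+    #     {'maxPartitions': '64', 'partitionSizeBytes': '1073741824'}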
+    &quot;index&quot;: &quot;A String&quot;, # If non-empty, the name of an index on table. This index is
+        # used instead of the table primary key when interpreting key_set
+        # and sorting result rows. See key_set for further information.
+    &quot;keySet&quot;: { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
         # primary keys of the rows in table to be yielded, unless index
         # is present. If index is present, then key_set instead names
         # index keys in index.
@@ -6041,7 +6063,7 @@
         # If the same key is specified multiple times in the set (for example
         # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
         # behaves as if the key were only specified once.
-      "ranges": [ # A list of key ranges. See KeyRange for more information about
+      &quot;ranges&quot;: [ # A list of key ranges. See KeyRange for more information about
           # key range specifications.
         { # KeyRange represents a range of rows in a table or index.
             #
@@ -6062,19 +6084,19 @@
             #
             # The following keys name rows in this table:
             #
-            #     "Bob", "2014-09-23"
+            #     &quot;Bob&quot;, &quot;2014-09-23&quot;
             #
-            # Since the `UserEvents` table's `PRIMARY KEY` clause names two
+            # Since the `UserEvents` table&#x27;s `PRIMARY KEY` clause names two
             # columns, each `UserEvents` key has two elements; the first is the
             # `UserName`, and the second is the `EventDate`.
             #
             # Key ranges with multiple components are interpreted
-            # lexicographically by component using the table or index key's declared
+            # lexicographically by component using the table or index key&#x27;s declared
             # sort order. For example, the following range returns all events for
-            # user `"Bob"` that occurred in the year 2015:
+            # user `&quot;Bob&quot;` that occurred in the year 2015:
             #
-            #     "start_closed": ["Bob", "2015-01-01"]
-            #     "end_closed": ["Bob", "2015-12-31"]
+            #     &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2015-01-01&quot;]
+            #     &quot;end_closed&quot;: [&quot;Bob&quot;, &quot;2015-12-31&quot;]
             #
             # Start and end keys can omit trailing key components. This affects the
             # inclusion and exclusion of rows that exactly match the provided key
@@ -6082,37 +6104,37 @@
             # provided components are included; if the key is open, then rows
             # that exactly match are not included.
             #
-            # For example, the following range includes all events for `"Bob"` that
+            # For example, the following range includes all events for `&quot;Bob&quot;` that
             # occurred during and after the year 2000:
             #
-            #     "start_closed": ["Bob", "2000-01-01"]
-            #     "end_closed": ["Bob"]
+            #     &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
+            #     &quot;end_closed&quot;: [&quot;Bob&quot;]
             #
-            # The next example retrieves all events for `"Bob"`:
+            # The next example retrieves all events for `&quot;Bob&quot;`:
             #
-            #     "start_closed": ["Bob"]
-            #     "end_closed": ["Bob"]
+            #     &quot;start_closed&quot;: [&quot;Bob&quot;]
+            #     &quot;end_closed&quot;: [&quot;Bob&quot;]
             #
             # To retrieve events before the year 2000:
             #
-            #     "start_closed": ["Bob"]
-            #     "end_open": ["Bob", "2000-01-01"]
+            #     &quot;start_closed&quot;: [&quot;Bob&quot;]
+            #     &quot;end_open&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
             #
             # The following range includes all rows in the table:
             #
-            #     "start_closed": []
-            #     "end_closed": []
+            #     &quot;start_closed&quot;: []
+            #     &quot;end_closed&quot;: []
             #
             # This range returns all users whose `UserName` begins with any
             # character from A to C:
             #
-            #     "start_closed": ["A"]
-            #     "end_open": ["D"]
+            #     &quot;start_closed&quot;: [&quot;A&quot;]
+            #     &quot;end_open&quot;: [&quot;D&quot;]
             #
             # This range returns all users whose `UserName` begins with B:
             #
-            #     "start_closed": ["B"]
-            #     "end_open": ["C"]
+            #     &quot;start_closed&quot;: [&quot;B&quot;]
+            #     &quot;end_open&quot;: [&quot;C&quot;]
             #
             # Key ranges honor column sort order. For example, suppose a table is
             # defined as follows:
@@ -6125,63 +6147,41 @@
             # The following range retrieves all rows with key values between 1
             # and 100 inclusive:
             #
-            #     "start_closed": ["100"]
-            #     "end_closed": ["1"]
+            #     &quot;start_closed&quot;: [&quot;100&quot;]
+            #     &quot;end_closed&quot;: [&quot;1&quot;]
             #
             # Note that 100 is passed as the start, and 1 is passed as the end,
             # because `Key` is a descending column in the schema.
-          "endOpen": [ # If the end is open, then the range excludes rows whose first
+          &quot;endOpen&quot;: [ # If the end is open, then the range excludes rows whose first
               # `len(end_open)` key columns exactly match `end_open`.
-            "",
+            &quot;&quot;,
           ],
-          "startOpen": [ # If the start is open, then the range excludes rows whose first
-              # `len(start_open)` key columns exactly match `start_open`.
-            "",
-          ],
-          "endClosed": [ # If the end is closed, then the range includes all rows whose
+          &quot;endClosed&quot;: [ # If the end is closed, then the range includes all rows whose
               # first `len(end_closed)` key columns exactly match `end_closed`.
-            "",
+            &quot;&quot;,
           ],
-          "startClosed": [ # If the start is closed, then the range includes all rows whose
+          &quot;startOpen&quot;: [ # If the start is open, then the range excludes rows whose first
+              # `len(start_open)` key columns exactly match `start_open`.
+            &quot;&quot;,
+          ],
+          &quot;startClosed&quot;: [ # If the start is closed, then the range includes all rows whose
               # first `len(start_closed)` key columns exactly match `start_closed`.
-            "",
+            &quot;&quot;,
           ],
         },
       ],
-      "keys": [ # A list of specific keys. Entries in `keys` should have exactly as
+      &quot;all&quot;: True or False, # For convenience `all` can be set to `true` to indicate that this
+          # `KeySet` matches all keys in the table or index. Note that any keys
+          # specified in `keys` or `ranges` are only yielded once.
+      &quot;keys&quot;: [ # A list of specific keys. Entries in `keys` should have exactly as
           # many elements as there are columns in the primary or index key
           # with which this `KeySet` is used.  Individual key values are
           # encoded as described here.
         [
-          "",
+          &quot;&quot;,
         ],
       ],
-      "all": True or False, # For convenience `all` can be set to `true` to indicate that this
-          # `KeySet` matches all keys in the table or index. Note that any keys
-          # specified in `keys` or `ranges` are only yielded once.
     },
-    "partitionOptions": { # Options for a PartitionQueryRequest and # Additional options that affect how many partitions are created.
-        # PartitionReadRequest.
-      "maxPartitions": "A String", # **Note:** This hint is currently ignored by PartitionQuery and
-          # PartitionRead requests.
-          #
-          # The desired maximum number of partitions to return.  For example, this may
-          # be set to the number of workers available.  The default for this option
-          # is currently 10,000. The maximum value is currently 200,000.  This is only
-          # a hint.  The actual number of partitions returned may be smaller or larger
-          # than this maximum count request.
-      "partitionSizeBytes": "A String", # **Note:** This hint is currently ignored by PartitionQuery and
-          # PartitionRead requests.
-          #
-          # The desired data size for each partition generated.  The default for this
-          # option is currently 1 GiB.  This is only a hint. The actual size of each
-          # partition may be smaller or larger than this size request.
-    },
-    "table": "A String", # Required. The name of the table in the database to be read.
-    "columns": [ # The columns of table to be returned for each row matching
-        # this request.
-      "A String",
-    ],
   }
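+
+  For example, a hedged sketch of issuing this request with the Python
+  client (`svc` and `session` are assumptions, not part of this API
+  surface); the read is partitioned within a fresh read-only transaction:
+
+    body = {
+        'table': 'UserEvents',
+        'columns': ['UserName', 'EventDate'],
+        'keySet': {'all': True},
+        'partitionOptions': {'maxPartitions': '64'},  # hint only
+        'transaction': {'begin': {'readOnly': {'strong': True}}},
+    }
+    resp = svc.projects().instances().databases().sessions().partitionRead(
+        session=session, body=body).execute()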
 
   x__xgafv: string, V1 error format.
@@ -6194,14 +6194,8 @@
 
     { # The response for PartitionQuery
       # or PartitionRead
-    "transaction": { # A transaction. # Transaction created by this request.
-      "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
-          # for the transaction. Not returned by default: see
-          # TransactionOptions.ReadOnly.return_read_timestamp.
-          #
-          # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-          # Example: `"2014-10-02T15:01:23.045123456Z"`.
-      "id": "A String", # `id` may be used to identify the transaction in subsequent
+    &quot;transaction&quot;: { # A transaction. # Transaction created by this request.
+      &quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
           # Read,
           # ExecuteSql,
           # Commit, or
@@ -6209,11 +6203,17 @@
           #
           # Single-use read-only transactions do not have IDs, because
           # single-use transactions do not support multiple requests.
+      &quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
+          # for the transaction. Not returned by default: see
+          # TransactionOptions.ReadOnly.return_read_timestamp.
+          #
+          # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+          # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
     },
-    "partitions": [ # Partitions created by this request.
+    &quot;partitions&quot;: [ # Partitions created by this request.
       { # Information returned for each partition returned in a
           # PartitionResponse.
-        "partitionToken": "A String", # This token can be passed to Read, StreamingRead, ExecuteSql, or
+        &quot;partitionToken&quot;: &quot;A String&quot;, # This token can be passed to Read, StreamingRead, ExecuteSql, or
             # ExecuteStreamingSql requests to restrict the results to those identified by
             # this partition token.
       },
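+      # Each `partitionToken` can then be handed to a separate worker and
+      # replayed through `read` or `streamingRead`, together with the
+      # transaction created by this request and the same table, columns, and
+      # key_set (a sketch, assuming `resp` is the PartitionResponse above):
+      #
+      #     for part in resp['partitions']:
+      #         rows = svc.projects().instances().databases().sessions().read(
+      #             session=session,
+      #             body={'table': 'UserEvents',
+      #                   'columns': ['UserName', 'EventDate'],
+      #                   'keySet': {'all': True},
+      #                   'transaction': {'id': resp['transaction']['id']},
+      #                   'partitionToken': part['partitionToken']}).execute()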
@@ -6244,365 +6244,157 @@
 
 { # The request for Read and
       # StreamingRead.
-    "index": "A String", # If non-empty, the name of an index on table. This index is
+    &quot;index&quot;: &quot;A String&quot;, # If non-empty, the name of an index on table. This index is
         # used instead of the table primary key when interpreting key_set
         # and sorting result rows. See key_set for further information.
-    "transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
+    &quot;keySet&quot;: { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
+        # primary keys of the rows in table to be yielded, unless index
+        # is present. If index is present, then key_set instead names
+        # index keys in index.
+        # 
+        # If the partition_token field is empty, rows are yielded
+        # in table primary key order (if index is empty) or index key order
+        # (if index is non-empty).  If the partition_token field is not
+        # empty, rows will be yielded in an unspecified order.
+        # 
+        # It is not an error for the `key_set` to name rows that do not
+        # exist in the database. Read yields nothing for nonexistent rows.
+        # the keys are expected to be in the same table or index. The keys need
+        # not be sorted in any particular way.
+        #
+        # If the same key is specified multiple times in the set (for example
+        # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
+        # behaves as if the key were only specified once.
+      &quot;ranges&quot;: [ # A list of key ranges. See KeyRange for more information about
+          # key range specifications.
+        { # KeyRange represents a range of rows in a table or index.
+            #
+            # A range has a start key and an end key. These keys can be open or
+            # closed, indicating if the range includes rows with that key.
+            #
+            # Keys are represented by lists, where the ith value in the list
+            # corresponds to the ith component of the table or index primary key.
+            # Individual values are encoded as described
+            # here.
+            #
+            # For example, consider the following table definition:
+            #
+            #     CREATE TABLE UserEvents (
+            #       UserName STRING(MAX),
+            #       EventDate STRING(10)
+            #     ) PRIMARY KEY(UserName, EventDate);
+            #
+            # The following keys name rows in this table:
+            #
+            #     &quot;Bob&quot;, &quot;2014-09-23&quot;
+            #
+            # Since the `UserEvents` table&#x27;s `PRIMARY KEY` clause names two
+            # columns, each `UserEvents` key has two elements; the first is the
+            # `UserName`, and the second is the `EventDate`.
+            #
+            # Key ranges with multiple components are interpreted
+            # lexicographically by component using the table or index key&#x27;s declared
+            # sort order. For example, the following range returns all events for
+            # user `&quot;Bob&quot;` that occurred in the year 2015:
+            #
+            #     &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2015-01-01&quot;]
+            #     &quot;end_closed&quot;: [&quot;Bob&quot;, &quot;2015-12-31&quot;]
+            #
+            # Start and end keys can omit trailing key components. This affects the
+            # inclusion and exclusion of rows that exactly match the provided key
+            # components: if the key is closed, then rows that exactly match the
+            # provided components are included; if the key is open, then rows
+            # that exactly match are not included.
+            #
+            # For example, the following range includes all events for `&quot;Bob&quot;` that
+            # occurred during and after the year 2000:
+            #
+            #     &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
+            #     &quot;end_closed&quot;: [&quot;Bob&quot;]
+            #
+            # The next example retrieves all events for `&quot;Bob&quot;`:
+            #
+            #     &quot;start_closed&quot;: [&quot;Bob&quot;]
+            #     &quot;end_closed&quot;: [&quot;Bob&quot;]
+            #
+            # To retrieve events before the year 2000:
+            #
+            #     &quot;start_closed&quot;: [&quot;Bob&quot;]
+            #     &quot;end_open&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
+            #
+            # The following range includes all rows in the table:
+            #
+            #     &quot;start_closed&quot;: []
+            #     &quot;end_closed&quot;: []
+            #
+            # This range returns all users whose `UserName` begins with any
+            # character from A to C:
+            #
+            #     &quot;start_closed&quot;: [&quot;A&quot;]
+            #     &quot;end_open&quot;: [&quot;D&quot;]
+            #
+            # This range returns all users whose `UserName` begins with B:
+            #
+            #     &quot;start_closed&quot;: [&quot;B&quot;]
+            #     &quot;end_open&quot;: [&quot;C&quot;]
+            #
+            # Key ranges honor column sort order. For example, suppose a table is
+            # defined as follows:
+            #
+            #     CREATE TABLE DescendingSortedTable (
+            #       Key INT64,
+            #       ...
+            #     ) PRIMARY KEY(Key DESC);
+            #
+            # The following range retrieves all rows with key values between 1
+            # and 100 inclusive:
+            #
+            #     &quot;start_closed&quot;: [&quot;100&quot;]
+            #     &quot;end_closed&quot;: [&quot;1&quot;]
+            #
+            # Note that 100 is passed as the start, and 1 is passed as the end,
+            # because `Key` is a descending column in the schema.
+          &quot;endOpen&quot;: [ # If the end is open, then the range excludes rows whose first
+              # `len(end_open)` key columns exactly match `end_open`.
+            &quot;&quot;,
+          ],
+          &quot;endClosed&quot;: [ # If the end is closed, then the range includes all rows whose
+              # first `len(end_closed)` key columns exactly match `end_closed`.
+            &quot;&quot;,
+          ],
+          &quot;startOpen&quot;: [ # If the start is open, then the range excludes rows whose first
+              # `len(start_open)` key columns exactly match `start_open`.
+            &quot;&quot;,
+          ],
+          &quot;startClosed&quot;: [ # If the start is closed, then the range includes all rows whose
+              # first `len(start_closed)` key columns exactly match `start_closed`.
+            &quot;&quot;,
+          ],
+        },
+      ],
+      &quot;all&quot;: True or False, # For convenience `all` can be set to `true` to indicate that this
+          # `KeySet` matches all keys in the table or index. Note that any keys
+          # specified in `keys` or `ranges` are only yielded once.
+      &quot;keys&quot;: [ # A list of specific keys. Entries in `keys` should have exactly as
+          # many elements as there are columns in the primary or index key
+          # with which this `KeySet` is used.  Individual key values are
+          # encoded as described here.
+        [
+          &quot;&quot;,
+        ],
+      ],
+    },
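+    # An illustrative `keySet` value (a sketch) matching the `UserEvents`
+    # example above: one fully-specified key plus a half-open range of names:
+    #
+    #     {'keys': [['Bob', '2014-09-23']],
+    #      'ranges': [{'startClosed': ['A'], 'endOpen': ['D']}]}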
+    &quot;columns&quot;: [ # Required. The columns of table to be returned for each row matching
+        # this request.
+      &quot;A String&quot;,
+    ],
+    &quot;transaction&quot;: { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
         # temporary read-only transaction with strong concurrency.
         # Read or
         # ExecuteSql call runs.
         #
         # See TransactionOptions for more information about transactions.
-      "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
-          # it. The transaction ID of the new transaction is returned in
-          # ResultSetMetadata.transaction, which is a Transaction.
-          #
-          #
-          # Each session can have at most one active transaction at a time. After the
-          # active transaction is completed, the session can immediately be
-          # re-used for the next transaction. It is not necessary to create a
-          # new session for each transaction.
-          #
-          # # Transaction Modes
-          #
-          # Cloud Spanner supports three transaction modes:
-          #
-          #   1. Locking read-write. This type of transaction is the only way
-          #      to write data into Cloud Spanner. These transactions rely on
-          #      pessimistic locking and, if necessary, two-phase commit.
-          #      Locking read-write transactions may abort, requiring the
-          #      application to retry.
-          #
-          #   2. Snapshot read-only. This transaction type provides guaranteed
-          #      consistency across several reads, but does not allow
-          #      writes. Snapshot read-only transactions can be configured to
-          #      read at timestamps in the past. Snapshot read-only
-          #      transactions do not need to be committed.
-          #
-          #   3. Partitioned DML. This type of transaction is used to execute
-          #      a single Partitioned DML statement. Partitioned DML partitions
-          #      the key space and runs the DML statement over each partition
-          #      in parallel using separate, internal transactions that commit
-          #      independently. Partitioned DML transactions do not need to be
-          #      committed.
-          #
-          # For transactions that only read, snapshot read-only transactions
-          # provide simpler semantics and are almost always faster. In
-          # particular, read-only transactions do not take locks, so they do
-          # not conflict with read-write transactions. As a consequence of not
-          # taking locks, they also do not abort, so retry loops are not needed.
-          #
-          # Transactions may only read/write data in a single database. They
-          # may, however, read/write data in different tables within that
-          # database.
-          #
-          # ## Locking Read-Write Transactions
-          #
-          # Locking transactions may be used to atomically read-modify-write
-          # data anywhere in a database. This type of transaction is externally
-          # consistent.
-          #
-          # Clients should attempt to minimize the amount of time a transaction
-          # is active. Faster transactions commit with higher probability
-          # and cause less contention. Cloud Spanner attempts to keep read locks
-          # active as long as the transaction continues to do reads, and the
-          # transaction has not been terminated by
-          # Commit or
-          # Rollback.  Long periods of
-          # inactivity at the client may cause Cloud Spanner to release a
-          # transaction's locks and abort it.
-          #
-          # Conceptually, a read-write transaction consists of zero or more
-          # reads or SQL statements followed by
-          # Commit. At any time before
-          # Commit, the client can send a
-          # Rollback request to abort the
-          # transaction.
-          #
-          # ### Semantics
-          #
-          # Cloud Spanner can commit the transaction if all read locks it acquired
-          # are still valid at commit time, and it is able to acquire write
-          # locks for all writes. Cloud Spanner can abort the transaction for any
-          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
-          # that the transaction has not modified any user data in Cloud Spanner.
-          #
-          # Unless the transaction commits, Cloud Spanner makes no guarantees about
-          # how long the transaction's locks were held for. It is an error to
-          # use Cloud Spanner locks for any sort of mutual exclusion other than
-          # between Cloud Spanner transactions themselves.
-          #
-          # ### Retrying Aborted Transactions
-          #
-          # When a transaction aborts, the application can choose to retry the
-          # whole transaction again. To maximize the chances of successfully
-          # committing the retry, the client should execute the retry in the
-          # same session as the original attempt. The original session's lock
-          # priority increases with each consecutive abort, meaning that each
-          # attempt has a slightly better chance of success than the previous.
-          #
-          # Under some circumstances (e.g., many transactions attempting to
-          # modify the same row(s)), a transaction can abort many times in a
-          # short period before successfully committing. Thus, it is not a good
-          # idea to cap the number of retries a transaction can attempt;
-          # instead, it is better to limit the total amount of wall time spent
-          # retrying.
-          #
-          # ### Idle Transactions
-          #
-          # A transaction is considered idle if it has no outstanding reads or
-          # SQL queries and has not started a read or SQL query within the last 10
-          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
-          # don't hold on to locks indefinitely. In that case, the commit will
-          # fail with error `ABORTED`.
-          #
-          # If this behavior is undesirable, periodically executing a simple
-          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
-          # transaction from becoming idle.
-          #
-          # ## Snapshot Read-Only Transactions
-          #
-          # Snapshot read-only transactions provides a simpler method than
-          # locking read-write transactions for doing several consistent
-          # reads. However, this type of transaction does not support writes.
-          #
-          # Snapshot transactions do not take locks. Instead, they work by
-          # choosing a Cloud Spanner timestamp, then executing all reads at that
-          # timestamp. Since they do not acquire locks, they do not block
-          # concurrent read-write transactions.
-          #
-          # Unlike locking read-write transactions, snapshot read-only
-          # transactions never abort. They can fail if the chosen read
-          # timestamp is garbage collected; however, the default garbage
-          # collection policy is generous enough that most applications do not
-          # need to worry about this in practice.
-          #
-          # Snapshot read-only transactions do not need to call
-          # Commit or
-          # Rollback (and in fact are not
-          # permitted to do so).
-          #
-          # To execute a snapshot transaction, the client specifies a timestamp
-          # bound, which tells Cloud Spanner how to choose a read timestamp.
-          #
-          # The types of timestamp bound are:
-          #
-          #   - Strong (the default).
-          #   - Bounded staleness.
-          #   - Exact staleness.
-          #
-          # If the Cloud Spanner database to be read is geographically distributed,
-          # stale read-only transactions can execute more quickly than strong
-          # or read-write transaction, because they are able to execute far
-          # from the leader replica.
-          #
-          # Each type of timestamp bound is discussed in detail below.
-          #
-          # ### Strong
-          #
-          # Strong reads are guaranteed to see the effects of all transactions
-          # that have committed before the start of the read. Furthermore, all
-          # rows yielded by a single read are consistent with each other -- if
-          # any part of the read observes a transaction, all parts of the read
-          # see the transaction.
-          #
-          # Strong reads are not repeatable: two consecutive strong read-only
-          # transactions might return inconsistent results if there are
-          # concurrent writes. If consistency across reads is required, the
-          # reads should be executed within a transaction or at an exact read
-          # timestamp.
-          #
-          # See TransactionOptions.ReadOnly.strong.
-          #
-          # ### Exact Staleness
-          #
-          # These timestamp bounds execute reads at a user-specified
-          # timestamp. Reads at a timestamp are guaranteed to see a consistent
-          # prefix of the global transaction history: they observe
-          # modifications done by all transactions with a commit timestamp &lt;=
-          # the read timestamp, and observe none of the modifications done by
-          # transactions with a larger commit timestamp. They will block until
-          # all conflicting transactions that may be assigned commit timestamps
-          # &lt;= the read timestamp have finished.
-          #
-          # The timestamp can either be expressed as an absolute Cloud Spanner commit
-          # timestamp or a staleness relative to the current time.
-          #
-          # These modes do not require a "negotiation phase" to pick a
-          # timestamp. As a result, they execute slightly faster than the
-          # equivalent boundedly stale concurrency modes. On the other hand,
-          # boundedly stale reads usually return fresher results.
-          #
-          # See TransactionOptions.ReadOnly.read_timestamp and
-          # TransactionOptions.ReadOnly.exact_staleness.
-          #
-          # ### Bounded Staleness
-          #
-          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
-          # subject to a user-provided staleness bound. Cloud Spanner chooses the
-          # newest timestamp within the staleness bound that allows execution
-          # of the reads at the closest available replica without blocking.
-          #
-          # All rows yielded are consistent with each other -- if any part of
-          # the read observes a transaction, all parts of the read see the
-          # transaction. Boundedly stale reads are not repeatable: two stale
-          # reads, even if they use the same staleness bound, can execute at
-          # different timestamps and thus return inconsistent results.
-          #
-          # Boundedly stale reads execute in two phases: the first phase
-          # negotiates a timestamp among all replicas needed to serve the
-          # read. In the second phase, reads are executed at the negotiated
-          # timestamp.
-          #
-          # As a result of the two phase execution, bounded staleness reads are
-          # usually a little slower than comparable exact staleness
-          # reads. However, they are typically able to return fresher
-          # results, and are more likely to execute at the closest replica.
-          #
-          # Because the timestamp negotiation requires up-front knowledge of
-          # which rows will be read, it can only be used with single-use
-          # read-only transactions.
-          #
-          # See TransactionOptions.ReadOnly.max_staleness and
-          # TransactionOptions.ReadOnly.min_read_timestamp.
-          #
-          # ### Old Read Timestamps and Garbage Collection
-          #
-          # Cloud Spanner continuously garbage collects deleted and overwritten data
-          # in the background to reclaim storage space. This process is known
-          # as "version GC". By default, version GC reclaims versions after they
-          # are one hour old. Because of this, Cloud Spanner cannot perform reads
-          # at read timestamps more than one hour in the past. This
-          # restriction also applies to in-progress reads and/or SQL queries whose
-          # timestamp become too old while executing. Reads and SQL queries with
-          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
-          #
-          # ## Partitioned DML Transactions
-          #
-          # Partitioned DML transactions are used to execute DML statements with a
-          # different execution strategy that provides different, and often better,
-          # scalability properties for large, table-wide operations than DML in a
-          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
-          # should prefer using ReadWrite transactions.
-          #
-          # Partitioned DML partitions the keyspace and runs the DML statement on each
-          # partition in separate, internal transactions. These transactions commit
-          # automatically when complete, and run independently from one another.
-          #
-          # To reduce lock contention, this execution strategy only acquires read locks
-          # on rows that match the WHERE clause of the statement. Additionally, the
-          # smaller per-partition transactions hold locks for less time.
-          #
-          # That said, Partitioned DML is not a drop-in replacement for standard DML used
-          # in ReadWrite transactions.
-          #
-          #  - The DML statement must be fully-partitionable. Specifically, the statement
-          #    must be expressible as the union of many statements which each access only
-          #    a single row of the table.
-          #
-          #  - The statement is not applied atomically to all rows of the table. Rather,
-          #    the statement is applied atomically to partitions of the table, in
-          #    independent transactions. Secondary index rows are updated atomically
-          #    with the base table rows.
-          #
-          #  - Partitioned DML does not guarantee exactly-once execution semantics
-          #    against a partition. The statement will be applied at least once to each
-          #    partition. It is strongly recommended that the DML statement should be
-          #    idempotent to avoid unexpected results. For instance, it is potentially
-          #    dangerous to run a statement such as
-          #    `UPDATE table SET column = column + 1` as it could be run multiple times
-          #    against some rows.
-          #
-          #  - The partitions are committed automatically - there is no support for
-          #    Commit or Rollback. If the call returns an error, or if the client issuing
-          #    the ExecuteSql call dies, it is possible that some rows had the statement
-          #    executed on them successfully. It is also possible that statement was
-          #    never executed against other rows.
-          #
-          #  - Partitioned DML transactions may only contain the execution of a single
-          #    DML statement via ExecuteSql or ExecuteStreamingSql.
-          #
-          #  - If any error is encountered during the execution of the partitioned DML
-          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
-          #    value that cannot be stored due to schema constraints), then the
-          #    operation is stopped at that point and an error is returned. It is
-          #    possible that at this point, some partitions have been committed (or even
-          #    committed multiple times), and other partitions have not been run at all.
-          #
-          # Given the above, Partitioned DML is good fit for large, database-wide,
-          # operations that are idempotent, such as deleting old rows from a very large
-          # table.
-        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
-            #
-            # Authorization to begin a read-write transaction requires
-            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
-            # on the `session` resource.
-            # transaction type has no options.
-        },
-        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
-            #
-            # Authorization to begin a read-only transaction requires
-            # `spanner.databases.beginReadOnlyTransaction` permission
-            # on the `session` resource.
-          "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
-              #
-              # This is useful for requesting fresher data than some previous
-              # read, or data that is fresh enough to observe the effects of some
-              # previously committed transaction whose timestamp is known.
-              #
-              # Note that this option can only be used in single-use transactions.
-              #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
-              # reads at a specific timestamp are repeatable; the same read at
-              # the same timestamp always returns the same data. If the
-              # timestamp is in the future, the read will block until the
-              # specified timestamp, modulo the read's deadline.
-              #
-              # Useful for large scale consistent reads such as mapreduces, or
-              # for coordinating many reads against a consistent snapshot of the
-              # data.
-              #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
-              # seconds. Guarantees that all writes that have committed more
-              # than the specified number of seconds ago are visible. Because
-              # Cloud Spanner chooses the exact timestamp, this mode works even if
-              # the client's local clock is substantially skewed from Cloud Spanner
-              # commit timestamps.
-              #
-              # Useful for reading the freshest data available at a nearby
-              # replica, while bounding the possible staleness if the local
-              # replica has fallen behind.
-              #
-              # Note that this option can only be used in single-use
-              # transactions.
-          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
-              # old. The timestamp is chosen soon after the read is started.
-              #
-              # Guarantees that all writes that have committed more than the
-              # specified number of seconds ago are visible. Because Cloud Spanner
-              # chooses the exact timestamp, this mode works even if the client's
-              # local clock is substantially skewed from Cloud Spanner commit
-              # timestamps.
-              #
-              # Useful for reading at nearby replicas without the distributed
-              # timestamp negotiation overhead of `max_staleness`.
-          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
-              # the Transaction message that describes the transaction.
-          "strong": True or False, # Read at a timestamp where all previously committed transactions
-              # are visible.
-        },
-        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
-            #
-            # Authorization to begin a Partitioned DML transaction requires
-            # `spanner.databases.beginPartitionedDmlTransaction` permission
-            # on the `session` resource.
-        },
-      },
-      "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
+      &quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
           # This is the most efficient way to execute a transaction that
           # consists of a single SQL query.
           #
@@ -6659,7 +6451,7 @@
           # Commit or
           # Rollback.  Long periods of
           # inactivity at the client may cause Cloud Spanner to release a
-          # transaction's locks and abort it.
+          # transaction&#x27;s locks and abort it.
           #
           # Conceptually, a read-write transaction consists of zero or more
           # reads or SQL statements followed by
@@ -6677,7 +6469,7 @@
           # that the transaction has not modified any user data in Cloud Spanner.
           #
           # Unless the transaction commits, Cloud Spanner makes no guarantees about
-          # how long the transaction's locks were held for. It is an error to
+          # how long the transaction&#x27;s locks were held for. It is an error to
           # use Cloud Spanner locks for any sort of mutual exclusion other than
           # between Cloud Spanner transactions themselves.
           #
@@ -6686,7 +6478,7 @@
           # When a transaction aborts, the application can choose to retry the
           # whole transaction again. To maximize the chances of successfully
           # committing the retry, the client should execute the retry in the
-          # same session as the original attempt. The original session's lock
+          # same session as the original attempt. The original session&#x27;s lock
           # priority increases with each consecutive abort, meaning that each
           # attempt has a slightly better chance of success than the previous.
           #
@@ -6702,7 +6494,7 @@
           # A transaction is considered idle if it has no outstanding reads or
           # SQL queries and has not started a read or SQL query within the last 10
           # seconds. Idle transactions can be aborted by Cloud Spanner so that they
-          # don't hold on to locks indefinitely. In that case, the commit will
+          # don&#x27;t hold on to locks indefinitely. In that case, the commit will
           # fail with error `ABORTED`.
           #
           # If this behavior is undesirable, periodically executing a simple
@@ -6777,7 +6569,7 @@
           # The timestamp can either be expressed as an absolute Cloud Spanner commit
           # timestamp or a staleness relative to the current time.
           #
-          # These modes do not require a "negotiation phase" to pick a
+          # These modes do not require a &quot;negotiation phase&quot; to pick a
           # timestamp. As a result, they execute slightly faster than the
           # equivalent boundedly stale concurrency modes. On the other hand,
           # boundedly stale reads usually return fresher results.
@@ -6819,7 +6611,7 @@
           #
           # Cloud Spanner continuously garbage collects deleted and overwritten data
           # in the background to reclaim storage space. This process is known
-          # as "version GC". By default, version GC reclaims versions after they
+          # as &quot;version GC&quot;. By default, version GC reclaims versions after they
           # are one hour old. Because of this, Cloud Spanner cannot perform reads
           # at read timestamps more than one hour in the past. This
           # restriction also applies to in-progress reads and/or SQL queries whose
@@ -6881,19 +6673,18 @@
           # Given the above, Partitioned DML is a good fit for large, database-wide
           # operations that are idempotent, such as deleting old rows from a very large
           # table.
-        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+        &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
             #
-            # Authorization to begin a read-write transaction requires
-            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # Authorization to begin a Partitioned DML transaction requires
+            # `spanner.databases.beginPartitionedDmlTransaction` permission
             # on the `session` resource.
-            # transaction type has no options.
         },
-        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
+        &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
             #
             # Authorization to begin a read-only transaction requires
             # `spanner.databases.beginReadOnlyTransaction` permission
             # on the `session` resource.
-          "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
+          &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
               #
               # This is useful for requesting fresher data than some previous
               # read, or data that is fresh enough to observe the effects of some
@@ -6901,25 +6692,25 @@
               #
               # Note that this option can only be used in single-use transactions.
               #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
               # reads at a specific timestamp are repeatable; the same read at
               # the same timestamp always returns the same data. If the
               # timestamp is in the future, the read will block until the
-              # specified timestamp, modulo the read's deadline.
+              # specified timestamp, modulo the read&#x27;s deadline.
               #
               # Useful for large scale consistent reads such as mapreduces, or
               # for coordinating many reads against a consistent snapshot of the
               # data.
               #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
               # seconds. Guarantees that all writes that have committed more
               # than the specified number of seconds ago are visible. Because
               # Cloud Spanner chooses the exact timestamp, this mode works even if
-              # the client's local clock is substantially skewed from Cloud Spanner
+              # the client&#x27;s local clock is substantially skewed from Cloud Spanner
               # commit timestamps.
               #
               # Useful for reading the freshest data available at a nearby
@@ -6928,186 +6719,395 @@
               #
               # Note that this option can only be used in single-use
               # transactions.
-          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
+          &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
+              # the Transaction message that describes the transaction.
+          &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
               # old. The timestamp is chosen soon after the read is started.
               #
               # Guarantees that all writes that have committed more than the
               # specified number of seconds ago are visible. Because Cloud Spanner
-              # chooses the exact timestamp, this mode works even if the client's
+              # chooses the exact timestamp, this mode works even if the client&#x27;s
               # local clock is substantially skewed from Cloud Spanner commit
               # timestamps.
               #
               # Useful for reading at nearby replicas without the distributed
               # timestamp negotiation overhead of `max_staleness`.
-          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
-              # the Transaction message that describes the transaction.
-          "strong": True or False, # Read at a timestamp where all previously committed transactions
+          &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
               # are visible.
         },
-        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
+        &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+            #
+            # Authorization to begin a read-write transaction requires
+            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # on the `session` resource.
+            # transaction type has no options.
+        },
+      },
+      &quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
+          # it. The transaction ID of the new transaction is returned in
+          # ResultSetMetadata.transaction, which is a Transaction.
+          #
+          #
+          # Each session can have at most one active transaction at a time. After the
+          # active transaction is completed, the session can immediately be
+          # re-used for the next transaction. It is not necessary to create a
+          # new session for each transaction.
+          #
+          # # Transaction Modes
+          #
+          # Cloud Spanner supports three transaction modes:
+          #
+          #   1. Locking read-write. This type of transaction is the only way
+          #      to write data into Cloud Spanner. These transactions rely on
+          #      pessimistic locking and, if necessary, two-phase commit.
+          #      Locking read-write transactions may abort, requiring the
+          #      application to retry.
+          #
+          #   2. Snapshot read-only. This transaction type provides guaranteed
+          #      consistency across several reads, but does not allow
+          #      writes. Snapshot read-only transactions can be configured to
+          #      read at timestamps in the past. Snapshot read-only
+          #      transactions do not need to be committed.
+          #
+          #   3. Partitioned DML. This type of transaction is used to execute
+          #      a single Partitioned DML statement. Partitioned DML partitions
+          #      the key space and runs the DML statement over each partition
+          #      in parallel using separate, internal transactions that commit
+          #      independently. Partitioned DML transactions do not need to be
+          #      committed.
+          #
+          # For transactions that only read, snapshot read-only transactions
+          # provide simpler semantics and are almost always faster. In
+          # particular, read-only transactions do not take locks, so they do
+          # not conflict with read-write transactions. As a consequence of not
+          # taking locks, they also do not abort, so retry loops are not needed.
+          #
+          # Transactions may only read/write data in a single database. They
+          # may, however, read/write data in different tables within that
+          # database.
+          #
+          # ## Locking Read-Write Transactions
+          #
+          # Locking transactions may be used to atomically read-modify-write
+          # data anywhere in a database. This type of transaction is externally
+          # consistent.
+          #
+          # Clients should attempt to minimize the amount of time a transaction
+          # is active. Faster transactions commit with higher probability
+          # and cause less contention. Cloud Spanner attempts to keep read locks
+          # active as long as the transaction continues to do reads, and the
+          # transaction has not been terminated by
+          # Commit or
+          # Rollback.  Long periods of
+          # inactivity at the client may cause Cloud Spanner to release a
+          # transaction&#x27;s locks and abort it.
+          #
+          # Conceptually, a read-write transaction consists of zero or more
+          # reads or SQL statements followed by
+          # Commit. At any time before
+          # Commit, the client can send a
+          # Rollback request to abort the
+          # transaction.
+          #
+          # ### Semantics
+          #
+          # Cloud Spanner can commit the transaction if all read locks it acquired
+          # are still valid at commit time, and it is able to acquire write
+          # locks for all writes. Cloud Spanner can abort the transaction for any
+          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
+          # that the transaction has not modified any user data in Cloud Spanner.
+          #
+          # Unless the transaction commits, Cloud Spanner makes no guarantees about
+          # how long the transaction&#x27;s locks were held for. It is an error to
+          # use Cloud Spanner locks for any sort of mutual exclusion other than
+          # between Cloud Spanner transactions themselves.
+          #
+          # ### Retrying Aborted Transactions
+          #
+          # When a transaction aborts, the application can choose to retry the
+          # whole transaction again. To maximize the chances of successfully
+          # committing the retry, the client should execute the retry in the
+          # same session as the original attempt. The original session&#x27;s lock
+          # priority increases with each consecutive abort, meaning that each
+          # attempt has a slightly better chance of success than the previous.
+          #
+          # Under some circumstances (e.g., many transactions attempting to
+          # modify the same row(s)), a transaction can abort many times in a
+          # short period before successfully committing. Thus, it is not a good
+          # idea to cap the number of retries a transaction can attempt;
+          # instead, it is better to limit the total amount of wall time spent
+          # retrying.
+          #
+          # ### Idle Transactions
+          #
+          # A transaction is considered idle if it has no outstanding reads or
+          # SQL queries and has not started a read or SQL query within the last 10
+          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
+          # don&#x27;t hold on to locks indefinitely. In that case, the commit will
+          # fail with error `ABORTED`.
+          #
+          # If this behavior is undesirable, periodically executing a simple
+          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
+          # transaction from becoming idle.
+          #
+          # ## Snapshot Read-Only Transactions
+          #
+          # Snapshot read-only transactions provide a simpler method than
+          # locking read-write transactions for doing several consistent
+          # reads. However, this type of transaction does not support writes.
+          #
+          # Snapshot transactions do not take locks. Instead, they work by
+          # choosing a Cloud Spanner timestamp, then executing all reads at that
+          # timestamp. Since they do not acquire locks, they do not block
+          # concurrent read-write transactions.
+          #
+          # Unlike locking read-write transactions, snapshot read-only
+          # transactions never abort. They can fail if the chosen read
+          # timestamp is garbage collected; however, the default garbage
+          # collection policy is generous enough that most applications do not
+          # need to worry about this in practice.
+          #
+          # Snapshot read-only transactions do not need to call
+          # Commit or
+          # Rollback (and in fact are not
+          # permitted to do so).
+          #
+          # To execute a snapshot transaction, the client specifies a timestamp
+          # bound, which tells Cloud Spanner how to choose a read timestamp.
+          #
+          # The types of timestamp bound are:
+          #
+          #   - Strong (the default).
+          #   - Bounded staleness.
+          #   - Exact staleness.
+          #
+          # If the Cloud Spanner database to be read is geographically distributed,
+          # stale read-only transactions can execute more quickly than strong
+          # or read-write transactions, because they are able to execute far
+          # from the leader replica.
+          #
+          # Each type of timestamp bound is discussed in detail below.
+          #
+          # ### Strong
+          #
+          # Strong reads are guaranteed to see the effects of all transactions
+          # that have committed before the start of the read. Furthermore, all
+          # rows yielded by a single read are consistent with each other -- if
+          # any part of the read observes a transaction, all parts of the read
+          # see the transaction.
+          #
+          # Strong reads are not repeatable: two consecutive strong read-only
+          # transactions might return inconsistent results if there are
+          # concurrent writes. If consistency across reads is required, the
+          # reads should be executed within a transaction or at an exact read
+          # timestamp.
+          #
+          # See TransactionOptions.ReadOnly.strong.
+          #
+          # ### Exact Staleness
+          #
+          # These timestamp bounds execute reads at a user-specified
+          # timestamp. Reads at a timestamp are guaranteed to see a consistent
+          # prefix of the global transaction history: they observe
+          # modifications done by all transactions with a commit timestamp &lt;=
+          # the read timestamp, and observe none of the modifications done by
+          # transactions with a larger commit timestamp. They will block until
+          # all conflicting transactions that may be assigned commit timestamps
+          # &lt;= the read timestamp have finished.
+          #
+          # The timestamp can either be expressed as an absolute Cloud Spanner commit
+          # timestamp or a staleness relative to the current time.
+          #
+          # These modes do not require a &quot;negotiation phase&quot; to pick a
+          # timestamp. As a result, they execute slightly faster than the
+          # equivalent boundedly stale concurrency modes. On the other hand,
+          # boundedly stale reads usually return fresher results.
+          #
+          # See TransactionOptions.ReadOnly.read_timestamp and
+          # TransactionOptions.ReadOnly.exact_staleness.
+          #
+          # ### Bounded Staleness
+          #
+          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
+          # subject to a user-provided staleness bound. Cloud Spanner chooses the
+          # newest timestamp within the staleness bound that allows execution
+          # of the reads at the closest available replica without blocking.
+          #
+          # All rows yielded are consistent with each other -- if any part of
+          # the read observes a transaction, all parts of the read see the
+          # transaction. Boundedly stale reads are not repeatable: two stale
+          # reads, even if they use the same staleness bound, can execute at
+          # different timestamps and thus return inconsistent results.
+          #
+          # Boundedly stale reads execute in two phases: the first phase
+          # negotiates a timestamp among all replicas needed to serve the
+          # read. In the second phase, reads are executed at the negotiated
+          # timestamp.
+          #
+          # As a result of the two-phase execution, bounded staleness reads are
+          # usually a little slower than comparable exact staleness
+          # reads. However, they are typically able to return fresher
+          # results, and are more likely to execute at the closest replica.
+          #
+          # Because the timestamp negotiation requires up-front knowledge of
+          # which rows will be read, it can only be used with single-use
+          # read-only transactions.
+          #
+          # See TransactionOptions.ReadOnly.max_staleness and
+          # TransactionOptions.ReadOnly.min_read_timestamp.
+          #
+          # ### Old Read Timestamps and Garbage Collection
+          #
+          # Cloud Spanner continuously garbage collects deleted and overwritten data
+          # in the background to reclaim storage space. This process is known
+          # as &quot;version GC&quot;. By default, version GC reclaims versions after they
+          # are one hour old. Because of this, Cloud Spanner cannot perform reads
+          # at read timestamps more than one hour in the past. This
+          # restriction also applies to in-progress reads and/or SQL queries whose
+          # timestamps become too old while executing. Reads and SQL queries with
+          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
+          #
+          # ## Partitioned DML Transactions
+          #
+          # Partitioned DML transactions are used to execute DML statements with a
+          # different execution strategy that provides different, and often better,
+          # scalability properties for large, table-wide operations than DML in a
+          # ReadWrite transaction. Smaller-scoped statements, such as those in an
+          # OLTP workload, should use ReadWrite transactions instead.
+          #
+          # Partitioned DML partitions the keyspace and runs the DML statement on each
+          # partition in separate, internal transactions. These transactions commit
+          # automatically when complete, and run independently from one another.
+          #
+          # To reduce lock contention, this execution strategy only acquires read locks
+          # on rows that match the WHERE clause of the statement. Additionally, the
+          # smaller per-partition transactions hold locks for less time.
+          #
+          # That said, Partitioned DML is not a drop-in replacement for standard DML used
+          # in ReadWrite transactions.
+          #
+          #  - The DML statement must be fully-partitionable. Specifically, the statement
+          #    must be expressible as the union of many statements which each access only
+          #    a single row of the table.
+          #
+          #  - The statement is not applied atomically to all rows of the table. Rather,
+          #    the statement is applied atomically to partitions of the table, in
+          #    independent transactions. Secondary index rows are updated atomically
+          #    with the base table rows.
+          #
+          #  - Partitioned DML does not guarantee exactly-once execution semantics
+          #    against a partition. The statement will be applied at least once to each
+          #    partition. It is strongly recommended that the DML statement be
+          #    idempotent to avoid unexpected results. For instance, it is potentially
+          #    dangerous to run a statement such as
+          #    `UPDATE table SET column = column + 1` as it could be run multiple times
+          #    against some rows.
+          #
+          #  - The partitions are committed automatically - there is no support for
+          #    Commit or Rollback. If the call returns an error, or if the client issuing
+          #    the ExecuteSql call dies, it is possible that some rows had the statement
+          #    executed on them successfully. It is also possible that the statement was
+          #    never executed against other rows.
+          #
+          #  - Partitioned DML transactions may only contain the execution of a single
+          #    DML statement via ExecuteSql or ExecuteStreamingSql.
+          #
+          #  - If any error is encountered during the execution of the partitioned DML
+          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
+          #    value that cannot be stored due to schema constraints), then the
+          #    operation is stopped at that point and an error is returned. It is
+          #    possible that at this point, some partitions have been committed (or even
+          #    committed multiple times), and other partitions have not been run at all.
+          #
+          # Given the above, Partitioned DML is a good fit for large, database-wide
+          # operations that are idempotent, such as deleting old rows from a very large
+          # table.
+        &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
             #
             # Authorization to begin a Partitioned DML transaction requires
             # `spanner.databases.beginPartitionedDmlTransaction` permission
             # on the `session` resource.
         },
+        &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
+            #
+            # Authorization to begin a read-only transaction requires
+            # `spanner.databases.beginReadOnlyTransaction` permission
+            # on the `session` resource.
+          &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
+              #
+              # This is useful for requesting fresher data than some previous
+              # read, or data that is fresh enough to observe the effects of some
+              # previously committed transaction whose timestamp is known.
+              #
+              # Note that this option can only be used in single-use transactions.
+              #
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
+              # reads at a specific timestamp are repeatable; the same read at
+              # the same timestamp always returns the same data. If the
+              # timestamp is in the future, the read will block until the
+              # specified timestamp, modulo the read&#x27;s deadline.
+              #
+              # Useful for large scale consistent reads such as mapreduces, or
+              # for coordinating many reads against a consistent snapshot of the
+              # data.
+              #
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
+              # seconds. Guarantees that all writes that have committed more
+              # than the specified number of seconds ago are visible. Because
+              # Cloud Spanner chooses the exact timestamp, this mode works even if
+              # the client&#x27;s local clock is substantially skewed from Cloud Spanner
+              # commit timestamps.
+              #
+              # Useful for reading the freshest data available at a nearby
+              # replica, while bounding the possible staleness if the local
+              # replica has fallen behind.
+              #
+              # Note that this option can only be used in single-use
+              # transactions.
+          &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
+              # the Transaction message that describes the transaction.
+          &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
+              # old. The timestamp is chosen soon after the read is started.
+              #
+              # Guarantees that all writes that have committed more than the
+              # specified number of seconds ago are visible. Because Cloud Spanner
+              # chooses the exact timestamp, this mode works even if the client&#x27;s
+              # local clock is substantially skewed from Cloud Spanner commit
+              # timestamps.
+              #
+              # Useful for reading at nearby replicas without the distributed
+              # timestamp negotiation overhead of `max_staleness`.
+          &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
+              # are visible.
+        },
+        &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. # Transaction may write.
+            #
+            # Authorization to begin a read-write transaction requires
+            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # on the `session` resource.
+            #
+            # Currently this transaction type has no options.
+        },
       },
-      "id": "A String", # Execute the read or SQL query in a previously-started transaction.
+      &quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
     },
-    "resumeToken": "A String", # If this request is resuming a previously interrupted read,
+    &quot;resumeToken&quot;: &quot;A String&quot;, # If this request is resuming a previously interrupted read,
         # `resume_token` should be copied from the last
         # PartialResultSet yielded before the interruption. Doing this
         # enables the new read to resume where the last read left off. The
         # rest of the request parameters must exactly match the request
         # that yielded this token.
-    "partitionToken": "A String", # If present, results will be restricted to the specified partition
+    &quot;partitionToken&quot;: &quot;A String&quot;, # If present, results will be restricted to the specified partition
         # previously created using PartitionRead().    There must be an exact
         # match for the values of fields common to this message and the
         # PartitionReadRequest message used to create this partition_token.
-    "keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
-        # primary keys of the rows in table to be yielded, unless index
-        # is present. If index is present, then key_set instead names
-        # index keys in index.
-        # 
-        # If the partition_token field is empty, rows are yielded
-        # in table primary key order (if index is empty) or index key order
-        # (if index is non-empty).  If the partition_token field is not
-        # empty, rows will be yielded in an unspecified order.
-        # 
-        # It is not an error for the `key_set` to name rows that do not
-        # exist in the database. Read yields nothing for nonexistent rows.
-        # the keys are expected to be in the same table or index. The keys need
-        # not be sorted in any particular way.
-        #
-        # If the same key is specified multiple times in the set (for example
-        # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
-        # behaves as if the key were only specified once.
-      "ranges": [ # A list of key ranges. See KeyRange for more information about
-          # key range specifications.
-        { # KeyRange represents a range of rows in a table or index.
-            #
-            # A range has a start key and an end key. These keys can be open or
-            # closed, indicating if the range includes rows with that key.
-            #
-            # Keys are represented by lists, where the ith value in the list
-            # corresponds to the ith component of the table or index primary key.
-            # Individual values are encoded as described
-            # here.
-            #
-            # For example, consider the following table definition:
-            #
-            #     CREATE TABLE UserEvents (
-            #       UserName STRING(MAX),
-            #       EventDate STRING(10)
-            #     ) PRIMARY KEY(UserName, EventDate);
-            #
-            # The following keys name rows in this table:
-            #
-            #     "Bob", "2014-09-23"
-            #
-            # Since the `UserEvents` table's `PRIMARY KEY` clause names two
-            # columns, each `UserEvents` key has two elements; the first is the
-            # `UserName`, and the second is the `EventDate`.
-            #
-            # Key ranges with multiple components are interpreted
-            # lexicographically by component using the table or index key's declared
-            # sort order. For example, the following range returns all events for
-            # user `"Bob"` that occurred in the year 2015:
-            #
-            #     "start_closed": ["Bob", "2015-01-01"]
-            #     "end_closed": ["Bob", "2015-12-31"]
-            #
-            # Start and end keys can omit trailing key components. This affects the
-            # inclusion and exclusion of rows that exactly match the provided key
-            # components: if the key is closed, then rows that exactly match the
-            # provided components are included; if the key is open, then rows
-            # that exactly match are not included.
-            #
-            # For example, the following range includes all events for `"Bob"` that
-            # occurred during and after the year 2000:
-            #
-            #     "start_closed": ["Bob", "2000-01-01"]
-            #     "end_closed": ["Bob"]
-            #
-            # The next example retrieves all events for `"Bob"`:
-            #
-            #     "start_closed": ["Bob"]
-            #     "end_closed": ["Bob"]
-            #
-            # To retrieve events before the year 2000:
-            #
-            #     "start_closed": ["Bob"]
-            #     "end_open": ["Bob", "2000-01-01"]
-            #
-            # The following range includes all rows in the table:
-            #
-            #     "start_closed": []
-            #     "end_closed": []
-            #
-            # This range returns all users whose `UserName` begins with any
-            # character from A to C:
-            #
-            #     "start_closed": ["A"]
-            #     "end_open": ["D"]
-            #
-            # This range returns all users whose `UserName` begins with B:
-            #
-            #     "start_closed": ["B"]
-            #     "end_open": ["C"]
-            #
-            # Key ranges honor column sort order. For example, suppose a table is
-            # defined as follows:
-            #
-            #     CREATE TABLE DescendingSortedTable {
-            #       Key INT64,
-            #       ...
-            #     ) PRIMARY KEY(Key DESC);
-            #
-            # The following range retrieves all rows with key values between 1
-            # and 100 inclusive:
-            #
-            #     "start_closed": ["100"]
-            #     "end_closed": ["1"]
-            #
-            # Note that 100 is passed as the start, and 1 is passed as the end,
-            # because `Key` is a descending column in the schema.
-          "endOpen": [ # If the end is open, then the range excludes rows whose first
-              # `len(end_open)` key columns exactly match `end_open`.
-            "",
-          ],
-          "startOpen": [ # If the start is open, then the range excludes rows whose first
-              # `len(start_open)` key columns exactly match `start_open`.
-            "",
-          ],
-          "endClosed": [ # If the end is closed, then the range includes all rows whose
-              # first `len(end_closed)` key columns exactly match `end_closed`.
-            "",
-          ],
-          "startClosed": [ # If the start is closed, then the range includes all rows whose
-              # first `len(start_closed)` key columns exactly match `start_closed`.
-            "",
-          ],
-        },
-      ],
-      "keys": [ # A list of specific keys. Entries in `keys` should have exactly as
-          # many elements as there are columns in the primary or index key
-          # with which this `KeySet` is used.  Individual key values are
-          # encoded as described here.
-        [
-          "",
-        ],
-      ],
-      "all": True or False, # For convenience `all` can be set to `true` to indicate that this
-          # `KeySet` matches all keys in the table or index. Note that any keys
-          # specified in `keys` or `ranges` are only yielded once.
-    },
-    "limit": "A String", # If greater than zero, only the first `limit` rows are yielded. If `limit`
+    &quot;table&quot;: &quot;A String&quot;, # Required. The name of the table in the database to be read.
+    &quot;limit&quot;: &quot;A String&quot;, # If greater than zero, only the first `limit` rows are yielded. If `limit`
         # is zero, the default is no limit. A limit cannot be specified if
         # `partition_token` is set.
-    "table": "A String", # Required. The name of the table in the database to be read.
-    "columns": [ # Required. The columns of table to be returned for each row matching
-        # this request.
-      "A String",
-    ],
   }
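
For orientation, a minimal sketch of issuing the read request documented above with the generated Python client follows. This is an illustration, not the canonical usage: it assumes application default credentials, and the project, instance, database, session, table, and column names are hypothetical placeholders; error handling is omitted.

```python
# A minimal sketch; all resource and table names below are hypothetical.
from googleapiclient.discovery import build

spanner = build("spanner", "v1")  # uses application default credentials

session = (  # hypothetical session resource name
    "projects/my-project/instances/my-instance/"
    "databases/my-database/sessions/my-session"
)

body = {
    # Single-use read-only transaction, at most 10 seconds stale (bounded
    # staleness is only valid for single-use transactions, per the notes
    # above).
    "transaction": {"singleUse": {"readOnly": {"maxStaleness": "10s"}}},
    "table": "Users",                  # hypothetical table
    "columns": ["UserId", "UserName"],
    "keySet": {"all": True},           # yield every row
    "limit": "100",                    # int64 fields are JSON strings
}

result_set = (
    spanner.projects()
    .instances()
    .databases()
    .sessions()
    .read(session=session, body=body)
    .execute()
)
```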
 
   x__xgafv: string, V1 error format.
@@ -7120,17 +7120,7 @@
 
     { # Results from Read or
       # ExecuteSql.
-    "rows": [ # Each element in `rows` is a row whose format is defined by
-        # metadata.row_type. The ith element
-        # in each row matches the ith field in
-        # metadata.row_type. Elements are
-        # encoded based on type as described
-        # here.
-      [
-        "",
-      ],
-    ],
-    "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
+    &quot;stats&quot;: { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
         # produced this result set. These can be requested by setting
         # ExecuteSqlRequest.query_mode.
         # DML statements always produce stats containing the number of rows
@@ -7138,31 +7128,63 @@
         # ExecuteSqlRequest.QueryMode.PLAN ExecuteSqlRequest.query_mode.
         # Other fields may or may not be populated, based on the
         # ExecuteSqlRequest.query_mode.
-      "rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
+      &quot;rowCountExact&quot;: &quot;A String&quot;, # Standard DML returns an exact count of rows that were modified.
+      &quot;queryStats&quot;: { # Aggregated statistics from the execution of the query. Only present when
+          # the query is profiled. For example, a query could return the statistics as
+          # follows:
+          #
+          #     {
+          #       &quot;rows_returned&quot;: &quot;3&quot;,
+          #       &quot;elapsed_time&quot;: &quot;1.22 secs&quot;,
+          #       &quot;cpu_time&quot;: &quot;1.19 secs&quot;
+          #     }
+        &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+      },
+      &quot;rowCountLowerBound&quot;: &quot;A String&quot;, # Partitioned DML does not offer exactly-once semantics, so it
           # returns a lower bound of the rows modified.
-      "rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
-      "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
-        "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
-            # with the plan root. Each PlanNode's `id` corresponds to its index in
+      &quot;queryPlan&quot;: { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
+        &quot;planNodes&quot;: [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
+            # with the plan root. Each PlanNode&#x27;s `id` corresponds to its index in
             # `plan_nodes`.
           { # Node information for nodes appearing in a QueryPlan.plan_nodes.
-            "index": 42, # The `PlanNode`'s index in node list.
-            "kind": "A String", # Used to determine the type of node. May be needed for visualizing
+            &quot;metadata&quot;: { # Attributes relevant to the node contained in a group of key-value pairs.
+                # For example, a Parameter Reference node could have the following
+                # information in its metadata:
+                #
+                #     {
+                #       &quot;parameter_reference&quot;: &quot;param1&quot;,
+                #       &quot;parameter_type&quot;: &quot;array&quot;
+                #     }
+              &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+            },
+            &quot;executionStats&quot;: { # The execution statistics associated with the node, contained in a group of
+                # key-value pairs. Only present if the plan was returned as a result of a
+                # profile query. For example, number of executions, number of rows/time per
+                # execution etc.
+              &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+            },
+            &quot;shortRepresentation&quot;: { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
+                # `SCALAR` PlanNode(s).
+              &quot;description&quot;: &quot;A String&quot;, # A string representation of the expression subtree rooted at this node.
+              &quot;subqueries&quot;: { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
+                  # where the `description` string of this node references a `SCALAR`
+                  # subquery contained in the expression subtree rooted at this node. The
+                  # referenced `SCALAR` subquery may not necessarily be a direct child of
+                  # this node.
+                &quot;a_key&quot;: 42,
+              },
+            },
+            &quot;index&quot;: 42, # The `PlanNode`&#x27;s index in the node list.
+            &quot;kind&quot;: &quot;A String&quot;, # Used to determine the type of node. May be needed for visualizing
                 # different kinds of nodes differently. For example, If the node is a
                 # SCALAR node, it will have a condensed representation
                 # which can be used to directly embed a description of the node in its
                 # parent.
-            "displayName": "A String", # The display name for the node.
-            "executionStats": { # The execution statistics associated with the node, contained in a group of
-                # key-value pairs. Only present if the plan was returned as a result of a
-                # profile query. For example, number of executions, number of rows/time per
-                # execution etc.
-              "a_key": "", # Properties of the object.
-            },
-            "childLinks": [ # List of child node `index`es and their relationship to this parent.
+            &quot;displayName&quot;: &quot;A String&quot;, # The display name for the node.
+            &quot;childLinks&quot;: [ # List of child node `index`es and their relationship to this parent.
               { # Metadata associated with a parent-child relationship appearing in a
                   # PlanNode.
-                "variable": "A String", # Only present if the child node is SCALAR and corresponds
+                &quot;variable&quot;: &quot;A String&quot;, # Only present if the child node is SCALAR and corresponds
                     # to an output variable of the parent node. The field carries the name of
                     # the output variable.
                     # For example, a `TableScan` operator that reads rows from a table will
@@ -7170,85 +7192,57 @@
                     # created for each column that is read by the operator. The corresponding
                     # `variable` fields will be set to the variable names assigned to the
                     # columns.
-                "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
+                &quot;childIndex&quot;: 42, # The node to which the link points.
+                &quot;type&quot;: &quot;A String&quot;, # The type of the link. For example, in Hash Joins this could be used to
                     # distinguish between the build child and the probe child, or in the case
                     # of the child being an output variable, to represent the tag associated
                     # with the output variable.
-                "childIndex": 42, # The node to which the link points.
               },
             ],
-            "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
-                # `SCALAR` PlanNode(s).
-              "subqueries": { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
-                  # where the `description` string of this node references a `SCALAR`
-                  # subquery contained in the expression subtree rooted at this node. The
-                  # referenced `SCALAR` subquery may not necessarily be a direct child of
-                  # this node.
-                "a_key": 42,
-              },
-              "description": "A String", # A string representation of the expression subtree rooted at this node.
-            },
-            "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
-                # For example, a Parameter Reference node could have the following
-                # information in its metadata:
-                #
-                #     {
-                #       "parameter_reference": "param1",
-                #       "parameter_type": "array"
-                #     }
-              "a_key": "", # Properties of the object.
-            },
           },
         ],
       },
-      "queryStats": { # Aggregated statistics from the execution of the query. Only present when
-          # the query is profiled. For example, a query could return the statistics as
-          # follows:
-          #
-          #     {
-          #       "rows_returned": "3",
-          #       "elapsed_time": "1.22 secs",
-          #       "cpu_time": "1.19 secs"
-          #     }
-        "a_key": "", # Properties of the object.
-      },
     },
-    "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
-      "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
-          # set.  For example, a SQL query like `"SELECT UserId, UserName FROM
-          # Users"` could return a `row_type` value like:
+    &quot;rows&quot;: [ # Each element in `rows` is a row whose format is defined by
+        # metadata.row_type. The ith element
+        # in each row matches the ith field in
+        # metadata.row_type. Elements are
+        # encoded based on type as described
+        # here.
+      [
+        &quot;&quot;,
+      ],
+    ],
+    &quot;metadata&quot;: { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
+      &quot;rowType&quot;: { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
+          # set.  For example, a SQL query like `&quot;SELECT UserId, UserName FROM
+          # Users&quot;` could return a `row_type` value like:
           #
-          #     "fields": [
-          #       { "name": "UserId", "type": { "code": "INT64" } },
-          #       { "name": "UserName", "type": { "code": "STRING" } },
+          #     &quot;fields&quot;: [
+          #       { &quot;name&quot;: &quot;UserId&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;INT64&quot; } },
+          #       { &quot;name&quot;: &quot;UserName&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;STRING&quot; } },
           #     ]
-        "fields": [ # The list of fields that make up this struct. Order is
+        &quot;fields&quot;: [ # The list of fields that make up this struct. Order is
             # significant, because values of this struct type are represented as
             # lists, where the order of field values matches the order of
             # fields in the StructType. In turn, the order of fields
             # matches the order of columns in a read request, or the order of
             # fields in the `SELECT` clause of a query.
           { # Message representing a single field of a struct.
-            "type": # Object with schema name: Type # The type of the field.
-            "name": "A String", # The name of the field. For reads, this is the column name. For
-                # SQL queries, it is the column alias (e.g., `"Word"` in the
-                # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
-                # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
-                # columns might have an empty name (e.g., !"SELECT
-                # UPPER(ColName)"`). Note that a query result can contain
+            &quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
+                # SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
+                # query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
+                # `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
+                # columns might have an empty name (e.g., `&quot;SELECT
+                # UPPER(ColName)&quot;`). Note that a query result can contain
                 # multiple fields with the same name.
+            &quot;type&quot;: # Object with schema name: Type # The type of the field.
           },
         ],
       },
-      "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
+      &quot;transaction&quot;: { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
           # information about the new transaction is yielded here.
-        "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
-            # for the transaction. Not returned by default: see
-            # TransactionOptions.ReadOnly.return_read_timestamp.
-            #
-            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-            # Example: `"2014-10-02T15:01:23.045123456Z"`.
-        "id": "A String", # `id` may be used to identify the transaction in subsequent
+        &quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
             # Read,
             # ExecuteSql,
             # Commit, or
@@ -7256,6 +7250,12 @@
             #
             # Single-use read-only transactions do not have IDs, because
             # single-use transactions do not support multiple requests.
+        &quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
+            # for the transaction. Not returned by default: see
+            # TransactionOptions.ReadOnly.return_read_timestamp.
+            #
+            # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+            # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
       },
     },
   }</pre>
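
The ResultSet documented above is returned as a plain dict. A hedged sketch of decoding it follows, assuming the `result_set` from the earlier read sketch: the ith element of each row corresponds to the ith field in `metadata.rowType.fields`.

```python
# A sketch of consuming the ResultSet dict; assumes `result_set` from the
# read sketch above.
fields = result_set["metadata"]["rowType"]["fields"]
names = [field.get("name", "") for field in fields]

for row in result_set.get("rows", []):
    # Pair each row element with its field name from the row type.
    print(dict(zip(names, row)))

# When the query was profiled, aggregated statistics are present as well.
query_stats = result_set.get("stats", {}).get("queryStats")
if query_stats:
    print(query_stats)
```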
@@ -7278,7 +7278,7 @@
     The object takes the form of:
 
 { # The request for Rollback.
-    "transactionId": "A String", # Required. The transaction to roll back.
+    &quot;transactionId&quot;: &quot;A String&quot;, # Required. The transaction to roll back.
   }
 
   x__xgafv: string, V1 error format.
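
The Rollback body above has a single required field. As a companion, here is a hedged sketch of the retry discipline the transaction notes recommend: retry `ABORTED` commits in the same session and bound total wall time rather than attempt count. `sessions`, `SESSION`, and `attempt_txn` are assumptions for illustration, not part of the generated API surface; `attempt_txn` stands for a callable that begins a read-write transaction, performs its work, and commits.

```python
# A sketch only: `sessions`, `SESSION`, and `attempt_txn` are hypothetical.
import time

from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

spanner = build("spanner", "v1")
sessions = spanner.projects().instances().databases().sessions()

SESSION = (  # hypothetical session resource name
    "projects/my-project/instances/my-instance/"
    "databases/my-database/sessions/my-session"
)

def rollback(txn_id):
    # The Rollback request body shown above has one required field.
    sessions.rollback(session=SESSION, body={"transactionId": txn_id}).execute()

def run_with_retries(attempt_txn, deadline_seconds=60):
    # Bound total wall time, not attempt count, and reuse the same session
    # so lock priority increases with each consecutive abort.
    start = time.monotonic()
    while True:
        try:
            return attempt_txn()  # begins, works, and commits; may raise
        except HttpError as err:
            expired = time.monotonic() - start > deadline_seconds
            if err.resp.status != 409 or expired:  # ABORTED maps to HTTP 409
                raise
            time.sleep(0.5)  # brief backoff before retrying
```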
@@ -7316,365 +7316,157 @@
 
 { # The request for Read and
       # StreamingRead.
-    "index": "A String", # If non-empty, the name of an index on table. This index is
+    &quot;index&quot;: &quot;A String&quot;, # If non-empty, the name of an index on table. This index is
         # used instead of the table primary key when interpreting key_set
         # and sorting result rows. See key_set for further information.
-    "transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
+    &quot;keySet&quot;: { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. # Required. `key_set` identifies the rows to be yielded. `key_set` names the
+        # primary keys of the rows in table to be yielded, unless index
+        # is present. If index is present, then key_set instead names
+        # index keys in index.
+        # 
+        # If the partition_token field is empty, rows are yielded
+        # in table primary key order (if index is empty) or index key order
+        # (if index is non-empty).  If the partition_token field is not
+        # empty, rows will be yielded in an unspecified order.
+        # 
+        # It is not an error for the `key_set` to name rows that do not
+        # exist in the database. Read yields nothing for nonexistent rows.
+        #
+        # All the keys are expected to be in the same table or index. The keys need
+        # not be sorted in any particular way.
+        #
+        # If the same key is specified multiple times in the set (for example
+        # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
+        # behaves as if the key were only specified once.
+      &quot;ranges&quot;: [ # A list of key ranges. See KeyRange for more information about
+          # key range specifications.
+        { # KeyRange represents a range of rows in a table or index.
+            #
+            # A range has a start key and an end key. These keys can be open or
+            # closed, indicating if the range includes rows with that key.
+            #
+            # Keys are represented by lists, where the ith value in the list
+            # corresponds to the ith component of the table or index primary key.
+            # Individual values are encoded as described
+            # here.
+            #
+            # For example, consider the following table definition:
+            #
+            #     CREATE TABLE UserEvents (
+            #       UserName STRING(MAX),
+            #       EventDate STRING(10)
+            #     ) PRIMARY KEY(UserName, EventDate);
+            #
+            # The following keys name rows in this table:
+            #
+            #     &quot;Bob&quot;, &quot;2014-09-23&quot;
+            #
+            # Since the `UserEvents` table&#x27;s `PRIMARY KEY` clause names two
+            # columns, each `UserEvents` key has two elements; the first is the
+            # `UserName`, and the second is the `EventDate`.
+            #
+            # Key ranges with multiple components are interpreted
+            # lexicographically by component using the table or index key&#x27;s declared
+            # sort order. For example, the following range returns all events for
+            # user `&quot;Bob&quot;` that occurred in the year 2015:
+            #
+            #     &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2015-01-01&quot;]
+            #     &quot;end_closed&quot;: [&quot;Bob&quot;, &quot;2015-12-31&quot;]
+            #
+            # Start and end keys can omit trailing key components. This affects the
+            # inclusion and exclusion of rows that exactly match the provided key
+            # components: if the key is closed, then rows that exactly match the
+            # provided components are included; if the key is open, then rows
+            # that exactly match are not included.
+            #
+            # For example, the following range includes all events for `&quot;Bob&quot;` that
+            # occurred during and after the year 2000:
+            #
+            #     &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
+            #     &quot;end_closed&quot;: [&quot;Bob&quot;]
+            #
+            # The next example retrieves all events for `&quot;Bob&quot;`:
+            #
+            #     &quot;start_closed&quot;: [&quot;Bob&quot;]
+            #     &quot;end_closed&quot;: [&quot;Bob&quot;]
+            #
+            # To retrieve events before the year 2000:
+            #
+            #     &quot;start_closed&quot;: [&quot;Bob&quot;]
+            #     &quot;end_open&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
+            #
+            # The following range includes all rows in the table:
+            #
+            #     &quot;start_closed&quot;: []
+            #     &quot;end_closed&quot;: []
+            #
+            # This range returns all users whose `UserName` begins with any
+            # character from A to C:
+            #
+            #     &quot;start_closed&quot;: [&quot;A&quot;]
+            #     &quot;end_open&quot;: [&quot;D&quot;]
+            #
+            # This range returns all users whose `UserName` begins with B:
+            #
+            #     &quot;start_closed&quot;: [&quot;B&quot;]
+            #     &quot;end_open&quot;: [&quot;C&quot;]
+            #
+            # Key ranges honor column sort order. For example, suppose a table is
+            # defined as follows:
+            #
+            #     CREATE TABLE DescendingSortedTable (
+            #       Key INT64,
+            #       ...
+            #     ) PRIMARY KEY(Key DESC);
+            #
+            # The following range retrieves all rows with key values between 1
+            # and 100 inclusive:
+            #
+            #     &quot;start_closed&quot;: [&quot;100&quot;]
+            #     &quot;end_closed&quot;: [&quot;1&quot;]
+            #
+            # Note that 100 is passed as the start, and 1 is passed as the end,
+            # because `Key` is a descending column in the schema.
+          &quot;endOpen&quot;: [ # If the end is open, then the range excludes rows whose first
+              # `len(end_open)` key columns exactly match `end_open`.
+            &quot;&quot;,
+          ],
+          &quot;endClosed&quot;: [ # If the end is closed, then the range includes all rows whose
+              # first `len(end_closed)` key columns exactly match `end_closed`.
+            &quot;&quot;,
+          ],
+          &quot;startOpen&quot;: [ # If the start is open, then the range excludes rows whose first
+              # `len(start_open)` key columns exactly match `start_open`.
+            &quot;&quot;,
+          ],
+          &quot;startClosed&quot;: [ # If the start is closed, then the range includes all rows whose
+              # first `len(start_closed)` key columns exactly match `start_closed`.
+            &quot;&quot;,
+          ],
+        },
+      ],
+      &quot;all&quot;: True or False, # For convenience `all` can be set to `true` to indicate that this
+          # `KeySet` matches all keys in the table or index. Note that any keys
+          # specified in `keys` or `ranges` are only yielded once.
+      &quot;keys&quot;: [ # A list of specific keys. Entries in `keys` should have exactly as
+          # many elements as there are columns in the primary or index key
+          # with which this `KeySet` is used.  Individual key values are
+          # encoded as described here.
+        [
+          &quot;&quot;,
+        ],
+      ],
+    },
+    &quot;columns&quot;: [ # Required. The columns of table to be returned for each row matching
+        # this request.
+      &quot;A String&quot;,
+    ],
+    &quot;transaction&quot;: { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
         # temporary read-only transaction with strong concurrency.
         # Read or
         # ExecuteSql call runs.
         #
         # See TransactionOptions for more information about transactions.
-      "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
-          # it. The transaction ID of the new transaction is returned in
-          # ResultSetMetadata.transaction, which is a Transaction.
-          #
-          #
-          # Each session can have at most one active transaction at a time. After the
-          # active transaction is completed, the session can immediately be
-          # re-used for the next transaction. It is not necessary to create a
-          # new session for each transaction.
-          #
-          # # Transaction Modes
-          #
-          # Cloud Spanner supports three transaction modes:
-          #
-          #   1. Locking read-write. This type of transaction is the only way
-          #      to write data into Cloud Spanner. These transactions rely on
-          #      pessimistic locking and, if necessary, two-phase commit.
-          #      Locking read-write transactions may abort, requiring the
-          #      application to retry.
-          #
-          #   2. Snapshot read-only. This transaction type provides guaranteed
-          #      consistency across several reads, but does not allow
-          #      writes. Snapshot read-only transactions can be configured to
-          #      read at timestamps in the past. Snapshot read-only
-          #      transactions do not need to be committed.
-          #
-          #   3. Partitioned DML. This type of transaction is used to execute
-          #      a single Partitioned DML statement. Partitioned DML partitions
-          #      the key space and runs the DML statement over each partition
-          #      in parallel using separate, internal transactions that commit
-          #      independently. Partitioned DML transactions do not need to be
-          #      committed.
-          #
-          # For transactions that only read, snapshot read-only transactions
-          # provide simpler semantics and are almost always faster. In
-          # particular, read-only transactions do not take locks, so they do
-          # not conflict with read-write transactions. As a consequence of not
-          # taking locks, they also do not abort, so retry loops are not needed.
-          #
-          # Transactions may only read/write data in a single database. They
-          # may, however, read/write data in different tables within that
-          # database.
-          #
-          # ## Locking Read-Write Transactions
-          #
-          # Locking transactions may be used to atomically read-modify-write
-          # data anywhere in a database. This type of transaction is externally
-          # consistent.
-          #
-          # Clients should attempt to minimize the amount of time a transaction
-          # is active. Faster transactions commit with higher probability
-          # and cause less contention. Cloud Spanner attempts to keep read locks
-          # active as long as the transaction continues to do reads, and the
-          # transaction has not been terminated by
-          # Commit or
-          # Rollback.  Long periods of
-          # inactivity at the client may cause Cloud Spanner to release a
-          # transaction's locks and abort it.
-          #
-          # Conceptually, a read-write transaction consists of zero or more
-          # reads or SQL statements followed by
-          # Commit. At any time before
-          # Commit, the client can send a
-          # Rollback request to abort the
-          # transaction.
-          #
-          # ### Semantics
-          #
-          # Cloud Spanner can commit the transaction if all read locks it acquired
-          # are still valid at commit time, and it is able to acquire write
-          # locks for all writes. Cloud Spanner can abort the transaction for any
-          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
-          # that the transaction has not modified any user data in Cloud Spanner.
-          #
-          # Unless the transaction commits, Cloud Spanner makes no guarantees about
-          # how long the transaction's locks were held for. It is an error to
-          # use Cloud Spanner locks for any sort of mutual exclusion other than
-          # between Cloud Spanner transactions themselves.
-          #
-          # ### Retrying Aborted Transactions
-          #
-          # When a transaction aborts, the application can choose to retry the
-          # whole transaction again. To maximize the chances of successfully
-          # committing the retry, the client should execute the retry in the
-          # same session as the original attempt. The original session's lock
-          # priority increases with each consecutive abort, meaning that each
-          # attempt has a slightly better chance of success than the previous.
-          #
-          # Under some circumstances (e.g., many transactions attempting to
-          # modify the same row(s)), a transaction can abort many times in a
-          # short period before successfully committing. Thus, it is not a good
-          # idea to cap the number of retries a transaction can attempt;
-          # instead, it is better to limit the total amount of wall time spent
-          # retrying.
-          #
-          # ### Idle Transactions
-          #
-          # A transaction is considered idle if it has no outstanding reads or
-          # SQL queries and has not started a read or SQL query within the last 10
-          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
-          # don't hold on to locks indefinitely. In that case, the commit will
-          # fail with error `ABORTED`.
-          #
-          # If this behavior is undesirable, periodically executing a simple
-          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
-          # transaction from becoming idle.
-          #
-          # ## Snapshot Read-Only Transactions
-          #
-          # Snapshot read-only transactions provides a simpler method than
-          # locking read-write transactions for doing several consistent
-          # reads. However, this type of transaction does not support writes.
-          #
-          # Snapshot transactions do not take locks. Instead, they work by
-          # choosing a Cloud Spanner timestamp, then executing all reads at that
-          # timestamp. Since they do not acquire locks, they do not block
-          # concurrent read-write transactions.
-          #
-          # Unlike locking read-write transactions, snapshot read-only
-          # transactions never abort. They can fail if the chosen read
-          # timestamp is garbage collected; however, the default garbage
-          # collection policy is generous enough that most applications do not
-          # need to worry about this in practice.
-          #
-          # Snapshot read-only transactions do not need to call
-          # Commit or
-          # Rollback (and in fact are not
-          # permitted to do so).
-          #
-          # To execute a snapshot transaction, the client specifies a timestamp
-          # bound, which tells Cloud Spanner how to choose a read timestamp.
-          #
-          # The types of timestamp bound are:
-          #
-          #   - Strong (the default).
-          #   - Bounded staleness.
-          #   - Exact staleness.
-          #
-          # If the Cloud Spanner database to be read is geographically distributed,
-          # stale read-only transactions can execute more quickly than strong
-          # or read-write transaction, because they are able to execute far
-          # from the leader replica.
-          #
-          # Each type of timestamp bound is discussed in detail below.
-          #
-          # ### Strong
-          #
-          # Strong reads are guaranteed to see the effects of all transactions
-          # that have committed before the start of the read. Furthermore, all
-          # rows yielded by a single read are consistent with each other -- if
-          # any part of the read observes a transaction, all parts of the read
-          # see the transaction.
-          #
-          # Strong reads are not repeatable: two consecutive strong read-only
-          # transactions might return inconsistent results if there are
-          # concurrent writes. If consistency across reads is required, the
-          # reads should be executed within a transaction or at an exact read
-          # timestamp.
-          #
-          # See TransactionOptions.ReadOnly.strong.
-          #
-          # ### Exact Staleness
-          #
-          # These timestamp bounds execute reads at a user-specified
-          # timestamp. Reads at a timestamp are guaranteed to see a consistent
-          # prefix of the global transaction history: they observe
-          # modifications done by all transactions with a commit timestamp &lt;=
-          # the read timestamp, and observe none of the modifications done by
-          # transactions with a larger commit timestamp. They will block until
-          # all conflicting transactions that may be assigned commit timestamps
-          # &lt;= the read timestamp have finished.
-          #
-          # The timestamp can either be expressed as an absolute Cloud Spanner commit
-          # timestamp or a staleness relative to the current time.
-          #
-          # These modes do not require a "negotiation phase" to pick a
-          # timestamp. As a result, they execute slightly faster than the
-          # equivalent boundedly stale concurrency modes. On the other hand,
-          # boundedly stale reads usually return fresher results.
-          #
-          # See TransactionOptions.ReadOnly.read_timestamp and
-          # TransactionOptions.ReadOnly.exact_staleness.
-          #
-          # ### Bounded Staleness
-          #
-          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
-          # subject to a user-provided staleness bound. Cloud Spanner chooses the
-          # newest timestamp within the staleness bound that allows execution
-          # of the reads at the closest available replica without blocking.
-          #
-          # All rows yielded are consistent with each other -- if any part of
-          # the read observes a transaction, all parts of the read see the
-          # transaction. Boundedly stale reads are not repeatable: two stale
-          # reads, even if they use the same staleness bound, can execute at
-          # different timestamps and thus return inconsistent results.
-          #
-          # Boundedly stale reads execute in two phases: the first phase
-          # negotiates a timestamp among all replicas needed to serve the
-          # read. In the second phase, reads are executed at the negotiated
-          # timestamp.
-          #
-          # As a result of the two phase execution, bounded staleness reads are
-          # usually a little slower than comparable exact staleness
-          # reads. However, they are typically able to return fresher
-          # results, and are more likely to execute at the closest replica.
-          #
-          # Because the timestamp negotiation requires up-front knowledge of
-          # which rows will be read, it can only be used with single-use
-          # read-only transactions.
-          #
-          # See TransactionOptions.ReadOnly.max_staleness and
-          # TransactionOptions.ReadOnly.min_read_timestamp.
-          #
-          # ### Old Read Timestamps and Garbage Collection
-          #
-          # Cloud Spanner continuously garbage collects deleted and overwritten data
-          # in the background to reclaim storage space. This process is known
-          # as "version GC". By default, version GC reclaims versions after they
-          # are one hour old. Because of this, Cloud Spanner cannot perform reads
-          # at read timestamps more than one hour in the past. This
-          # restriction also applies to in-progress reads and/or SQL queries whose
-          # timestamp become too old while executing. Reads and SQL queries with
-          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
-          #
-          # ## Partitioned DML Transactions
-          #
-          # Partitioned DML transactions are used to execute DML statements with a
-          # different execution strategy that provides different, and often better,
-          # scalability properties for large, table-wide operations than DML in a
-          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
-          # should prefer using ReadWrite transactions.
-          #
-          # Partitioned DML partitions the keyspace and runs the DML statement on each
-          # partition in separate, internal transactions. These transactions commit
-          # automatically when complete, and run independently from one another.
-          #
-          # To reduce lock contention, this execution strategy only acquires read locks
-          # on rows that match the WHERE clause of the statement. Additionally, the
-          # smaller per-partition transactions hold locks for less time.
-          #
-          # That said, Partitioned DML is not a drop-in replacement for standard DML used
-          # in ReadWrite transactions.
-          #
-          #  - The DML statement must be fully-partitionable. Specifically, the statement
-          #    must be expressible as the union of many statements which each access only
-          #    a single row of the table.
-          #
-          #  - The statement is not applied atomically to all rows of the table. Rather,
-          #    the statement is applied atomically to partitions of the table, in
-          #    independent transactions. Secondary index rows are updated atomically
-          #    with the base table rows.
-          #
-          #  - Partitioned DML does not guarantee exactly-once execution semantics
-          #    against a partition. The statement will be applied at least once to each
-          #    partition. It is strongly recommended that the DML statement should be
-          #    idempotent to avoid unexpected results. For instance, it is potentially
-          #    dangerous to run a statement such as
-          #    `UPDATE table SET column = column + 1` as it could be run multiple times
-          #    against some rows.
-          #
-          #  - The partitions are committed automatically - there is no support for
-          #    Commit or Rollback. If the call returns an error, or if the client issuing
-          #    the ExecuteSql call dies, it is possible that some rows had the statement
-          #    executed on them successfully. It is also possible that statement was
-          #    never executed against other rows.
-          #
-          #  - Partitioned DML transactions may only contain the execution of a single
-          #    DML statement via ExecuteSql or ExecuteStreamingSql.
-          #
-          #  - If any error is encountered during the execution of the partitioned DML
-          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
-          #    value that cannot be stored due to schema constraints), then the
-          #    operation is stopped at that point and an error is returned. It is
-          #    possible that at this point, some partitions have been committed (or even
-          #    committed multiple times), and other partitions have not been run at all.
-          #
-          # Given the above, Partitioned DML is good fit for large, database-wide,
-          # operations that are idempotent, such as deleting old rows from a very large
-          # table.
-        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
-            #
-            # Authorization to begin a read-write transaction requires
-            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
-            # on the `session` resource.
-            # transaction type has no options.
-        },
-        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
-            #
-            # Authorization to begin a read-only transaction requires
-            # `spanner.databases.beginReadOnlyTransaction` permission
-            # on the `session` resource.
-          "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
-              #
-              # This is useful for requesting fresher data than some previous
-              # read, or data that is fresh enough to observe the effects of some
-              # previously committed transaction whose timestamp is known.
-              #
-              # Note that this option can only be used in single-use transactions.
-              #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
-              # reads at a specific timestamp are repeatable; the same read at
-              # the same timestamp always returns the same data. If the
-              # timestamp is in the future, the read will block until the
-              # specified timestamp, modulo the read's deadline.
-              #
-              # Useful for large scale consistent reads such as mapreduces, or
-              # for coordinating many reads against a consistent snapshot of the
-              # data.
-              #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
-              # seconds. Guarantees that all writes that have committed more
-              # than the specified number of seconds ago are visible. Because
-              # Cloud Spanner chooses the exact timestamp, this mode works even if
-              # the client's local clock is substantially skewed from Cloud Spanner
-              # commit timestamps.
-              #
-              # Useful for reading the freshest data available at a nearby
-              # replica, while bounding the possible staleness if the local
-              # replica has fallen behind.
-              #
-              # Note that this option can only be used in single-use
-              # transactions.
-          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
-              # old. The timestamp is chosen soon after the read is started.
-              #
-              # Guarantees that all writes that have committed more than the
-              # specified number of seconds ago are visible. Because Cloud Spanner
-              # chooses the exact timestamp, this mode works even if the client's
-              # local clock is substantially skewed from Cloud Spanner commit
-              # timestamps.
-              #
-              # Useful for reading at nearby replicas without the distributed
-              # timestamp negotiation overhead of `max_staleness`.
-          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
-              # the Transaction message that describes the transaction.
-          "strong": True or False, # Read at a timestamp where all previously committed transactions
-              # are visible.
-        },
-        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
-            #
-            # Authorization to begin a Partitioned DML transaction requires
-            # `spanner.databases.beginPartitionedDmlTransaction` permission
-            # on the `session` resource.
-        },
-      },
-      "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
+      &quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
           # This is the most efficient way to execute a transaction that
           # consists of a single SQL query.
           #
@@ -7731,7 +7523,7 @@
           # Commit or
           # Rollback.  Long periods of
           # inactivity at the client may cause Cloud Spanner to release a
-          # transaction's locks and abort it.
+          # transaction&#x27;s locks and abort it.
           #
           # Conceptually, a read-write transaction consists of zero or more
           # reads or SQL statements followed by
@@ -7749,7 +7541,7 @@
           # that the transaction has not modified any user data in Cloud Spanner.
           #
           # Unless the transaction commits, Cloud Spanner makes no guarantees about
-          # how long the transaction's locks were held for. It is an error to
+          # how long the transaction&#x27;s locks were held for. It is an error to
           # use Cloud Spanner locks for any sort of mutual exclusion other than
           # between Cloud Spanner transactions themselves.
           #
@@ -7758,7 +7550,7 @@
           # When a transaction aborts, the application can choose to retry the
           # whole transaction again. To maximize the chances of successfully
           # committing the retry, the client should execute the retry in the
-          # same session as the original attempt. The original session's lock
+          # same session as the original attempt. The original session&#x27;s lock
           # priority increases with each consecutive abort, meaning that each
           # attempt has a slightly better chance of success than the previous.
           #
@@ -7774,7 +7566,7 @@
           # A transaction is considered idle if it has no outstanding reads or
           # SQL queries and has not started a read or SQL query within the last 10
           # seconds. Idle transactions can be aborted by Cloud Spanner so that they
-          # don't hold on to locks indefinitely. In that case, the commit will
+          # don&#x27;t hold on to locks indefinitely. In that case, the commit will
           # fail with error `ABORTED`.
           #
           # If this behavior is undesirable, periodically executing a simple
@@ -7849,7 +7641,7 @@
           # The timestamp can either be expressed as an absolute Cloud Spanner commit
           # timestamp or a staleness relative to the current time.
           #
-          # These modes do not require a "negotiation phase" to pick a
+          # These modes do not require a &quot;negotiation phase&quot; to pick a
           # timestamp. As a result, they execute slightly faster than the
           # equivalent boundedly stale concurrency modes. On the other hand,
           # boundedly stale reads usually return fresher results.
@@ -7891,7 +7683,7 @@
           #
           # Cloud Spanner continuously garbage collects deleted and overwritten data
           # in the background to reclaim storage space. This process is known
-          # as "version GC". By default, version GC reclaims versions after they
+          # as &quot;version GC&quot;. By default, version GC reclaims versions after they
           # are one hour old. Because of this, Cloud Spanner cannot perform reads
           # at read timestamps more than one hour in the past. This
           # restriction also applies to in-progress reads and/or SQL queries whose
@@ -7953,19 +7745,18 @@
           # Given the above, Partitioned DML is a good fit for large, database-wide
           # operations that are idempotent, such as deleting old rows from a very large
           # table.
-        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+        &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
             #
-            # Authorization to begin a read-write transaction requires
-            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # Authorization to begin a Partitioned DML transaction requires
+            # `spanner.databases.beginPartitionedDmlTransaction` permission
             # on the `session` resource.
-            # transaction type has no options.
         },
-        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
+        &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
             #
             # Authorization to begin a read-only transaction requires
             # `spanner.databases.beginReadOnlyTransaction` permission
             # on the `session` resource.
-          "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
+          &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
               #
               # This is useful for requesting fresher data than some previous
               # read, or data that is fresh enough to observe the effects of some
@@ -7973,25 +7764,25 @@
               #
               # Note that this option can only be used in single-use transactions.
               #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
               # reads at a specific timestamp are repeatable; the same read at
               # the same timestamp always returns the same data. If the
               # timestamp is in the future, the read will block until the
-              # specified timestamp, modulo the read's deadline.
+              # specified timestamp, modulo the read&#x27;s deadline.
               #
               # Useful for large scale consistent reads such as mapreduces, or
               # for coordinating many reads against a consistent snapshot of the
               # data.
               #
-              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-              # Example: `"2014-10-02T15:01:23.045123456Z"`.
-          "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
               # seconds. Guarantees that all writes that have committed more
               # than the specified number of seconds ago are visible. Because
               # Cloud Spanner chooses the exact timestamp, this mode works even if
-              # the client's local clock is substantially skewed from Cloud Spanner
+              # the client&#x27;s local clock is substantially skewed from Cloud Spanner
               # commit timestamps.
               #
               # Useful for reading the freshest data available at a nearby
@@ -8000,186 +7791,395 @@
               #
               # Note that this option can only be used in single-use
               # transactions.
-          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
+          &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
+              # the Transaction message that describes the transaction.
+          &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
               # old. The timestamp is chosen soon after the read is started.
               #
               # Guarantees that all writes that have committed more than the
               # specified number of seconds ago are visible. Because Cloud Spanner
-              # chooses the exact timestamp, this mode works even if the client's
+              # chooses the exact timestamp, this mode works even if the client&#x27;s
               # local clock is substantially skewed from Cloud Spanner commit
               # timestamps.
               #
               # Useful for reading at nearby replicas without the distributed
               # timestamp negotiation overhead of `max_staleness`.
-          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
-              # the Transaction message that describes the transaction.
-          "strong": True or False, # Read at a timestamp where all previously committed transactions
+          &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
               # are visible.
         },
-        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
+        &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+            #
+            # Authorization to begin a read-write transaction requires
+            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # on the `session` resource.
+            # transaction type has no options.
+        },
+      },
+      &quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
+          # it. The transaction ID of the new transaction is returned in
+          # ResultSetMetadata.transaction, which is a Transaction.
+          #
+          #
+          # Each session can have at most one active transaction at a time. After the
+          # active transaction is completed, the session can immediately be
+          # re-used for the next transaction. It is not necessary to create a
+          # new session for each transaction.
+          #
+          # # Transaction Modes
+          #
+          # Cloud Spanner supports three transaction modes:
+          #
+          #   1. Locking read-write. This type of transaction is the only way
+          #      to write data into Cloud Spanner. These transactions rely on
+          #      pessimistic locking and, if necessary, two-phase commit.
+          #      Locking read-write transactions may abort, requiring the
+          #      application to retry.
+          #
+          #   2. Snapshot read-only. This transaction type provides guaranteed
+          #      consistency across several reads, but does not allow
+          #      writes. Snapshot read-only transactions can be configured to
+          #      read at timestamps in the past. Snapshot read-only
+          #      transactions do not need to be committed.
+          #
+          #   3. Partitioned DML. This type of transaction is used to execute
+          #      a single Partitioned DML statement. Partitioned DML partitions
+          #      the key space and runs the DML statement over each partition
+          #      in parallel using separate, internal transactions that commit
+          #      independently. Partitioned DML transactions do not need to be
+          #      committed.
+          #
+          # For transactions that only read, snapshot read-only transactions
+          # provide simpler semantics and are almost always faster. In
+          # particular, read-only transactions do not take locks, so they do
+          # not conflict with read-write transactions. As a consequence of not
+          # taking locks, they also do not abort, so retry loops are not needed.
+          #
+          # Transactions may only read/write data in a single database. They
+          # may, however, read/write data in different tables within that
+          # database.
+          #
+          # ## Locking Read-Write Transactions
+          #
+          # Locking transactions may be used to atomically read-modify-write
+          # data anywhere in a database. This type of transaction is externally
+          # consistent.
+          #
+          # Clients should attempt to minimize the amount of time a transaction
+          # is active. Faster transactions commit with higher probability
+          # and cause less contention. Cloud Spanner attempts to keep read locks
+          # active as long as the transaction continues to do reads, and the
+          # transaction has not been terminated by
+          # Commit or
+          # Rollback.  Long periods of
+          # inactivity at the client may cause Cloud Spanner to release a
+          # transaction&#x27;s locks and abort it.
+          #
+          # Conceptually, a read-write transaction consists of zero or more
+          # reads or SQL statements followed by
+          # Commit. At any time before
+          # Commit, the client can send a
+          # Rollback request to abort the
+          # transaction.
+          #
+          # ### Semantics
+          #
+          # Cloud Spanner can commit the transaction if all read locks it acquired
+          # are still valid at commit time, and it is able to acquire write
+          # locks for all writes. Cloud Spanner can abort the transaction for any
+          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
+          # that the transaction has not modified any user data in Cloud Spanner.
+          #
+          # Unless the transaction commits, Cloud Spanner makes no guarantees about
+          # how long the transaction&#x27;s locks were held for. It is an error to
+          # use Cloud Spanner locks for any sort of mutual exclusion other than
+          # between Cloud Spanner transactions themselves.
+          #
+          # ### Retrying Aborted Transactions
+          #
+          # When a transaction aborts, the application can choose to retry the
+          # whole transaction again. To maximize the chances of successfully
+          # committing the retry, the client should execute the retry in the
+          # same session as the original attempt. The original session&#x27;s lock
+          # priority increases with each consecutive abort, meaning that each
+          # attempt has a slightly better chance of success than the previous.
+          #
+          # Under some circumstances (e.g., many transactions attempting to
+          # modify the same row(s)), a transaction can abort many times in a
+          # short period before successfully committing. Thus, it is not a good
+          # idea to cap the number of retries a transaction can attempt;
+          # instead, it is better to limit the total amount of wall time spent
+          # retrying.
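A wall-time-bounded retry loop might look like the sketch below. It assumes the `sessions` resource and `session_name` from the earlier sketch, and that an `ABORTED` commit generally surfaces as HTTP 409 through this REST client; the backoff and the 60-second cap are illustrative:

```python
import time

from googleapiclient.errors import HttpError


def commit_with_retry(sessions, session_name, mutations, max_wall_seconds=60):
    """Retry an aborted read-write transaction, capping total wall time.

    Retrying in the same session lets its lock priority rise with each
    consecutive abort, as described above.
    """
    deadline = time.monotonic() + max_wall_seconds
    while True:
        txn = sessions.beginTransaction(
            session=session_name,
            body={'options': {'readWrite': {}}},
        ).execute()
        try:
            # ... reads and writes against txn['id'] would happen here ...
            return sessions.commit(
                session=session_name,
                body={'transactionId': txn['id'], 'mutations': mutations},
            ).execute()
        except HttpError as err:
            # ABORTED is assumed to surface as HTTP 409 over REST.
            if err.resp.status != 409 or time.monotonic() >= deadline:
                raise
            time.sleep(0.1)  # brief pause before retrying the whole txn
```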
+          #
+          # ### Idle Transactions
+          #
+          # A transaction is considered idle if it has no outstanding reads or
+          # SQL queries and has not started a read or SQL query within the last 10
+          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
+          # don&#x27;t hold on to locks indefinitely. In that case, the commit will
+          # fail with error `ABORTED`.
+          #
+          # If this behavior is undesirable, periodically executing a simple
+          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
+          # transaction from becoming idle.
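A small keep-alive helper along these lines (the function name and parameters are illustrative; `sessions` is the sessions resource from the first sketch) could reset the idle clock:

```python
def keep_transaction_alive(sessions, session_name, transaction_id):
    """Reset the ~10-second idle clock on an open read-write transaction.

    Any trivial statement works; `SELECT 1` is the conventional choice.
    """
    sessions.executeSql(
        session=session_name,
        body={'sql': 'SELECT 1', 'transaction': {'id': transaction_id}},
    ).execute()
```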
+          #
+          # ## Snapshot Read-Only Transactions
+          #
+          # Snapshot read-only transactions provide a simpler method than
+          # locking read-write transactions for doing several consistent
+          # reads. However, this type of transaction does not support writes.
+          #
+          # Snapshot transactions do not take locks. Instead, they work by
+          # choosing a Cloud Spanner timestamp, then executing all reads at that
+          # timestamp. Since they do not acquire locks, they do not block
+          # concurrent read-write transactions.
+          #
+          # Unlike locking read-write transactions, snapshot read-only
+          # transactions never abort. They can fail if the chosen read
+          # timestamp is garbage collected; however, the default garbage
+          # collection policy is generous enough that most applications do not
+          # need to worry about this in practice.
+          #
+          # Snapshot read-only transactions do not need to call
+          # Commit or
+          # Rollback (and in fact are not
+          # permitted to do so).
+          #
+          # To execute a snapshot transaction, the client specifies a timestamp
+          # bound, which tells Cloud Spanner how to choose a read timestamp.
+          #
+          # The types of timestamp bound are:
+          #
+          #   - Strong (the default).
+          #   - Bounded staleness.
+          #   - Exact staleness.
+          #
+          # If the Cloud Spanner database to be read is geographically distributed,
+          # stale read-only transactions can execute more quickly than strong
+          # or read-write transactions, because they are able to execute far
+          # from the leader replica.
+          #
+          # Each type of timestamp bound is discussed in detail below.
+          #
+          # ### Strong
+          #
+          # Strong reads are guaranteed to see the effects of all transactions
+          # that have committed before the start of the read. Furthermore, all
+          # rows yielded by a single read are consistent with each other -- if
+          # any part of the read observes a transaction, all parts of the read
+          # see the transaction.
+          #
+          # Strong reads are not repeatable: two consecutive strong read-only
+          # transactions might return inconsistent results if there are
+          # concurrent writes. If consistency across reads is required, the
+          # reads should be executed within a transaction or at an exact read
+          # timestamp.
+          #
+          # See TransactionOptions.ReadOnly.strong.
+          #
+          # ### Exact Staleness
+          #
+          # These timestamp bounds execute reads at a user-specified
+          # timestamp. Reads at a timestamp are guaranteed to see a consistent
+          # prefix of the global transaction history: they observe
+          # modifications done by all transactions with a commit timestamp &lt;=
+          # the read timestamp, and observe none of the modifications done by
+          # transactions with a larger commit timestamp. They will block until
+          # all conflicting transactions that may be assigned commit timestamps
+          # &lt;= the read timestamp have finished.
+          #
+          # The timestamp can either be expressed as an absolute Cloud Spanner commit
+          # timestamp or a staleness relative to the current time.
+          #
+          # These modes do not require a &quot;negotiation phase&quot; to pick a
+          # timestamp. As a result, they execute slightly faster than the
+          # equivalent boundedly stale concurrency modes. On the other hand,
+          # boundedly stale reads usually return fresher results.
+          #
+          # See TransactionOptions.ReadOnly.read_timestamp and
+          # TransactionOptions.ReadOnly.exact_staleness.
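As a sketch, either form of the bound can be expressed through the `readOnly` options documented above; the `15s` staleness, the timestamp literal, and the reuse of `sessions`/`session_name` from the first sketch are illustrative:

```python
# Exact staleness expressed as a duration relative to now...
exact_staleness_txn = {
    'singleUse': {'readOnly': {'exactStaleness': '15s'}},
}
# ...or as an absolute Cloud Spanner commit timestamp.
read_timestamp_txn = {
    'singleUse': {'readOnly': {
        'readTimestamp': '2014-10-02T15:01:23.045123456Z',
    }},
}

result = sessions.executeSql(
    session=session_name,
    body={'sql': 'SELECT 1', 'transaction': exact_staleness_txn},
).execute()
```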
+          #
+          # ### Bounded Staleness
+          #
+          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
+          # subject to a user-provided staleness bound. Cloud Spanner chooses the
+          # newest timestamp within the staleness bound that allows execution
+          # of the reads at the closest available replica without blocking.
+          #
+          # All rows yielded are consistent with each other -- if any part of
+          # the read observes a transaction, all parts of the read see the
+          # transaction. Boundedly stale reads are not repeatable: two stale
+          # reads, even if they use the same staleness bound, can execute at
+          # different timestamps and thus return inconsistent results.
+          #
+          # Boundedly stale reads execute in two phases: the first phase
+          # negotiates a timestamp among all replicas needed to serve the
+          # read. In the second phase, reads are executed at the negotiated
+          # timestamp.
+          #
+          # As a result of the two-phase execution, bounded staleness reads are
+          # usually a little slower than comparable exact staleness
+          # reads. However, they are typically able to return fresher
+          # results, and are more likely to execute at the closest replica.
+          #
+          # Because the timestamp negotiation requires up-front knowledge of
+          # which rows will be read, it can only be used with single-use
+          # read-only transactions.
+          #
+          # See TransactionOptions.ReadOnly.max_staleness and
+          # TransactionOptions.ReadOnly.min_read_timestamp.
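A hedged example of a bounded-staleness selector (the `10s` bound is illustrative). Because of the up-front timestamp negotiation, it must be sent as `singleUse`:

```python
# Bounded staleness is only valid in single-use read-only transactions.
bounded_staleness_txn = {
    'singleUse': {
        'readOnly': {
            'maxStaleness': '10s',        # illustrative staleness bound
            'returnReadTimestamp': True,  # surface the chosen timestamp
        },
    },
}
```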
+          #
+          # ### Old Read Timestamps and Garbage Collection
+          #
+          # Cloud Spanner continuously garbage collects deleted and overwritten data
+          # in the background to reclaim storage space. This process is known
+          # as &quot;version GC&quot;. By default, version GC reclaims versions after they
+          # are one hour old. Because of this, Cloud Spanner cannot perform reads
+          # at read timestamps more than one hour in the past. This
+          # restriction also applies to in-progress reads and/or SQL queries whose
+          # timestamps become too old while executing. Reads and SQL queries with
+          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
+          #
+          # ## Partitioned DML Transactions
+          #
+          # Partitioned DML transactions are used to execute DML statements with a
+          # different execution strategy that provides different, and often better,
+          # scalability properties for large, table-wide operations than DML in a
+          # ReadWrite transaction. Smaller-scoped statements, such as those in an
+          # OLTP workload, should use ReadWrite transactions.
+          #
+          # Partitioned DML partitions the keyspace and runs the DML statement on each
+          # partition in separate, internal transactions. These transactions commit
+          # automatically when complete, and run independently from one another.
+          #
+          # To reduce lock contention, this execution strategy only acquires read locks
+          # on rows that match the WHERE clause of the statement. Additionally, the
+          # smaller per-partition transactions hold locks for less time.
+          #
+          # That said, Partitioned DML is not a drop-in replacement for standard DML used
+          # in ReadWrite transactions.
+          #
+          #  - The DML statement must be fully-partitionable. Specifically, the statement
+          #    must be expressible as the union of many statements which each access only
+          #    a single row of the table.
+          #
+          #  - The statement is not applied atomically to all rows of the table. Rather,
+          #    the statement is applied atomically to partitions of the table, in
+          #    independent transactions. Secondary index rows are updated atomically
+          #    with the base table rows.
+          #
+          #  - Partitioned DML does not guarantee exactly-once execution semantics
+          #    against a partition. The statement will be applied at least once to each
+          #    partition. It is strongly recommended that the DML statement be
+          #    idempotent to avoid unexpected results. For instance, it is potentially
+          #    dangerous to run a statement such as
+          #    `UPDATE table SET column = column + 1` as it could be run multiple times
+          #    against some rows.
+          #
+          #  - The partitions are committed automatically - there is no support for
+          #    Commit or Rollback. If the call returns an error, or if the client issuing
+          #    the ExecuteSql call dies, it is possible that some rows had the statement
+          #    executed on them successfully. It is also possible that the statement was
+          #    never executed against other rows.
+          #
+          #  - Partitioned DML transactions may only contain the execution of a single
+          #    DML statement via ExecuteSql or ExecuteStreamingSql.
+          #
+          #  - If any error is encountered during the execution of the partitioned DML
+          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
+          #    value that cannot be stored due to schema constraints), then the
+          #    operation is stopped at that point and an error is returned. It is
+          #    possible that at this point, some partitions have been committed (or even
+          #    committed multiple times), and other partitions have not been run at all.
+          #
+          # Given the above, Partitioned DML is a good fit for large, database-wide
+          # operations that are idempotent, such as deleting old rows from a very large
+          # table.
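A sketch of that flow with this client: begin a `partitionedDml` transaction, then execute one idempotent statement against its id. There is no commit step, since each internal per-partition transaction commits automatically. The helper name and the commented-out `DELETE` statement are hypothetical:

```python
def run_partitioned_dml(sessions, session_name, sql):
    """Execute one idempotent DML statement as Partitioned DML."""
    txn = sessions.beginTransaction(
        session=session_name,
        body={'options': {'partitionedDml': {}}},
    ).execute()
    # Partitioned DML transactions may carry exactly one DML statement.
    return sessions.executeSql(
        session=session_name,
        body={'sql': sql, 'transaction': {'id': txn['id']}},
    ).execute()

# An idempotent, fully partitionable statement (hypothetical table/column):
# run_partitioned_dml(sessions, session_name,
#                     "DELETE FROM Events WHERE CreatedAt < '2019-01-01'")
```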
+        &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
             #
             # Authorization to begin a Partitioned DML transaction requires
             # `spanner.databases.beginPartitionedDmlTransaction` permission
             # on the `session` resource.
         },
+        &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
+            #
+            # Authorization to begin a read-only transaction requires
+            # `spanner.databases.beginReadOnlyTransaction` permission
+            # on the `session` resource.
+          &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
+              #
+              # This is useful for requesting fresher data than some previous
+              # read, or data that is fresh enough to observe the effects of some
+              # previously committed transaction whose timestamp is known.
+              #
+              # Note that this option can only be used in single-use transactions.
+              #
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
+              # reads at a specific timestamp are repeatable; the same read at
+              # the same timestamp always returns the same data. If the
+              # timestamp is in the future, the read will block until the
+              # specified timestamp, modulo the read&#x27;s deadline.
+              #
+              # Useful for large scale consistent reads such as mapreduces, or
+              # for coordinating many reads against a consistent snapshot of the
+              # data.
+              #
+              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
+          &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
+              # seconds. Guarantees that all writes that have committed more
+              # than the specified number of seconds ago are visible. Because
+              # Cloud Spanner chooses the exact timestamp, this mode works even if
+              # the client&#x27;s local clock is substantially skewed from Cloud Spanner
+              # commit timestamps.
+              #
+              # Useful for reading the freshest data available at a nearby
+              # replica, while bounding the possible staleness if the local
+              # replica has fallen behind.
+              #
+              # Note that this option can only be used in single-use
+              # transactions.
+          &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
+              # the Transaction message that describes the transaction.
+          &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
+              # old. The timestamp is chosen soon after the read is started.
+              #
+              # Guarantees that all writes that have committed more than the
+              # specified number of seconds ago are visible. Because Cloud Spanner
+              # chooses the exact timestamp, this mode works even if the client&#x27;s
+              # local clock is substantially skewed from Cloud Spanner commit
+              # timestamps.
+              #
+              # Useful for reading at nearby replicas without the distributed
+              # timestamp negotiation overhead of `max_staleness`.
+          &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
+              # are visible.
+        },
+        &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
+            #
+            # Authorization to begin a read-write transaction requires
+            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
+            # on the `session` resource.
+            # transaction type has no options.
+        },
       },
-      "id": "A String", # Execute the read or SQL query in a previously-started transaction.
+      &quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
     },
-    "resumeToken": "A String", # If this request is resuming a previously interrupted read,
+    &quot;resumeToken&quot;: &quot;A String&quot;, # If this request is resuming a previously interrupted read,
         # `resume_token` should be copied from the last
         # PartialResultSet yielded before the interruption. Doing this
         # enables the new read to resume where the last read left off. The
         # rest of the request parameters must exactly match the request
         # that yielded this token.
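A minimal resume helper under those rules (names are illustrative; `sessions` is the sessions resource from the first sketch):

```python
def resume_read(sessions, session_name, original_body, last_resume_token):
    """Re-issue an interrupted read from the last yielded resume_token.

    Every other field must exactly match the request that produced the
    token, so the retry body is a copy with only resumeToken added.
    """
    retry_body = dict(original_body, resumeToken=last_resume_token)
    return sessions.streamingRead(
        session=session_name, body=retry_body).execute()
```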
-    "partitionToken": "A String", # If present, results will be restricted to the specified partition
+    &quot;partitionToken&quot;: &quot;A String&quot;, # If present, results will be restricted to the specified partition
         # previously created using PartitionRead().    There must be an exact
         # match for the values of fields common to this message and the
         # PartitionReadRequest message used to create this partition_token.
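For illustration, a sketch that creates partition tokens with `partitionRead` and then reads each partition. The table, columns, and helper name are hypothetical, and the handling of the `PartitionResponse` shape is assumed from the schemas described in this file:

```python
def read_in_partitions(sessions, session_name, table, columns):
    """Partition a read, then issue one read per partition token.

    The per-partition requests repeat the fields used to create the
    tokens, plus the snapshot transaction id returned alongside them.
    """
    common = {'table': table, 'columns': columns, 'keySet': {'all': True}}
    parts = sessions.partitionRead(
        session=session_name,
        body=dict(common,
                  transaction={'begin': {'readOnly': {'strong': True}}}),
    ).execute()
    txn_id = parts['transaction']['id']
    results = []
    for part in parts.get('partitions', []):
        results.append(sessions.streamingRead(
            session=session_name,
            body=dict(common,
                      transaction={'id': txn_id},
                      partitionToken=part['partitionToken']),
        ).execute())
    return results
```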
-    "keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
-        # primary keys of the rows in table to be yielded, unless index
-        # is present. If index is present, then key_set instead names
-        # index keys in index.
-        # 
-        # If the partition_token field is empty, rows are yielded
-        # in table primary key order (if index is empty) or index key order
-        # (if index is non-empty).  If the partition_token field is not
-        # empty, rows will be yielded in an unspecified order.
-        # 
-        # It is not an error for the `key_set` to name rows that do not
-        # exist in the database. Read yields nothing for nonexistent rows.
-        # the keys are expected to be in the same table or index. The keys need
-        # not be sorted in any particular way.
-        #
-        # If the same key is specified multiple times in the set (for example
-        # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
-        # behaves as if the key were only specified once.
-      "ranges": [ # A list of key ranges. See KeyRange for more information about
-          # key range specifications.
-        { # KeyRange represents a range of rows in a table or index.
-            #
-            # A range has a start key and an end key. These keys can be open or
-            # closed, indicating if the range includes rows with that key.
-            #
-            # Keys are represented by lists, where the ith value in the list
-            # corresponds to the ith component of the table or index primary key.
-            # Individual values are encoded as described
-            # here.
-            #
-            # For example, consider the following table definition:
-            #
-            #     CREATE TABLE UserEvents (
-            #       UserName STRING(MAX),
-            #       EventDate STRING(10)
-            #     ) PRIMARY KEY(UserName, EventDate);
-            #
-            # The following keys name rows in this table:
-            #
-            #     "Bob", "2014-09-23"
-            #
-            # Since the `UserEvents` table's `PRIMARY KEY` clause names two
-            # columns, each `UserEvents` key has two elements; the first is the
-            # `UserName`, and the second is the `EventDate`.
-            #
-            # Key ranges with multiple components are interpreted
-            # lexicographically by component using the table or index key's declared
-            # sort order. For example, the following range returns all events for
-            # user `"Bob"` that occurred in the year 2015:
-            #
-            #     "start_closed": ["Bob", "2015-01-01"]
-            #     "end_closed": ["Bob", "2015-12-31"]
-            #
-            # Start and end keys can omit trailing key components. This affects the
-            # inclusion and exclusion of rows that exactly match the provided key
-            # components: if the key is closed, then rows that exactly match the
-            # provided components are included; if the key is open, then rows
-            # that exactly match are not included.
-            #
-            # For example, the following range includes all events for `"Bob"` that
-            # occurred during and after the year 2000:
-            #
-            #     "start_closed": ["Bob", "2000-01-01"]
-            #     "end_closed": ["Bob"]
-            #
-            # The next example retrieves all events for `"Bob"`:
-            #
-            #     "start_closed": ["Bob"]
-            #     "end_closed": ["Bob"]
-            #
-            # To retrieve events before the year 2000:
-            #
-            #     "start_closed": ["Bob"]
-            #     "end_open": ["Bob", "2000-01-01"]
-            #
-            # The following range includes all rows in the table:
-            #
-            #     "start_closed": []
-            #     "end_closed": []
-            #
-            # This range returns all users whose `UserName` begins with any
-            # character from A to C:
-            #
-            #     "start_closed": ["A"]
-            #     "end_open": ["D"]
-            #
-            # This range returns all users whose `UserName` begins with B:
-            #
-            #     "start_closed": ["B"]
-            #     "end_open": ["C"]
-            #
-            # Key ranges honor column sort order. For example, suppose a table is
-            # defined as follows:
-            #
-            #     CREATE TABLE DescendingSortedTable {
-            #       Key INT64,
-            #       ...
-            #     ) PRIMARY KEY(Key DESC);
-            #
-            # The following range retrieves all rows with key values between 1
-            # and 100 inclusive:
-            #
-            #     "start_closed": ["100"]
-            #     "end_closed": ["1"]
-            #
-            # Note that 100 is passed as the start, and 1 is passed as the end,
-            # because `Key` is a descending column in the schema.
-          "endOpen": [ # If the end is open, then the range excludes rows whose first
-              # `len(end_open)` key columns exactly match `end_open`.
-            "",
-          ],
-          "startOpen": [ # If the start is open, then the range excludes rows whose first
-              # `len(start_open)` key columns exactly match `start_open`.
-            "",
-          ],
-          "endClosed": [ # If the end is closed, then the range includes all rows whose
-              # first `len(end_closed)` key columns exactly match `end_closed`.
-            "",
-          ],
-          "startClosed": [ # If the start is closed, then the range includes all rows whose
-              # first `len(start_closed)` key columns exactly match `start_closed`.
-            "",
-          ],
-        },
-      ],
-      "keys": [ # A list of specific keys. Entries in `keys` should have exactly as
-          # many elements as there are columns in the primary or index key
-          # with which this `KeySet` is used.  Individual key values are
-          # encoded as described here.
-        [
-          "",
-        ],
-      ],
-      "all": True or False, # For convenience `all` can be set to `true` to indicate that this
-          # `KeySet` matches all keys in the table or index. Note that any keys
-          # specified in `keys` or `ranges` are only yielded once.
-    },
-    "limit": "A String", # If greater than zero, only the first `limit` rows are yielded. If `limit`
+    &quot;table&quot;: &quot;A String&quot;, # Required. The name of the table in the database to be read.
+    &quot;limit&quot;: &quot;A String&quot;, # If greater than zero, only the first `limit` rows are yielded. If `limit`
         # is zero, the default is no limit. A limit cannot be specified if
         # `partition_token` is set.
-    "table": "A String", # Required. The name of the table in the database to be read.
-    "columns": [ # Required. The columns of table to be returned for each row matching
-        # this request.
-      "A String",
-    ],
   }
 
   x__xgafv: string, V1 error format.
@@ -8193,15 +8193,7 @@
     { # Partial results from a streaming read or SQL query. Streaming reads and
       # SQL queries better tolerate large result sets, large rows, and large
       # values, but are a little trickier to consume.
-    "resumeToken": "A String", # Streaming calls might be interrupted for a variety of reasons, such
-        # as TCP connection loss. If this occurs, the stream of results can
-        # be resumed by re-sending the original request and including
-        # `resume_token`. Note that executing any other transaction in the
-        # same session invalidates the token.
-    "chunkedValue": True or False, # If true, then the final value in values is chunked, and must
-        # be combined with more values from subsequent `PartialResultSet`s
-        # to obtain a complete field value.
-    "values": [ # A streamed result set consists of a stream of values, which might
+    &quot;values&quot;: [ # A streamed result set consists of a stream of values, which might
         # be split into many `PartialResultSet` messages to accommodate
         # large rows and/or large values. Every N complete values defines a
         # row, where N is equal to the number of entries in
@@ -8210,7 +8202,7 @@
         # Most values are encoded based on type as described
         # here.
         #
-        # It is possible that the last value in values is "chunked",
+        # It is possible that the last value in values is &quot;chunked&quot;,
         # meaning that the rest of the value is sent in subsequent
         # `PartialResultSet`(s). This is denoted by the chunked_value
         # field. Two or more chunked values can be merged to form a
@@ -8228,172 +8220,85 @@
         # Some examples of merging:
         #
         #     # Strings are concatenated.
-        #     "foo", "bar" =&gt; "foobar"
+        #     &quot;foo&quot;, &quot;bar&quot; =&gt; &quot;foobar&quot;
         #
         #     # Lists of non-strings are concatenated.
         #     [2, 3], [4] =&gt; [2, 3, 4]
         #
         #     # Lists are concatenated, but the last and first elements are merged
         #     # because they are strings.
-        #     ["a", "b"], ["c", "d"] =&gt; ["a", "bc", "d"]
+        #     [&quot;a&quot;, &quot;b&quot;], [&quot;c&quot;, &quot;d&quot;] =&gt; [&quot;a&quot;, &quot;bc&quot;, &quot;d&quot;]
         #
         #     # Lists are concatenated, but the last and first elements are merged
         #     # because they are lists. Recursively, the last and first elements
         #     # of the inner lists are merged because they are strings.
-        #     ["a", ["b", "c"]], [["d"], "e"] =&gt; ["a", ["b", "cd"], "e"]
+        #     [&quot;a&quot;, [&quot;b&quot;, &quot;c&quot;]], [[&quot;d&quot;], &quot;e&quot;] =&gt; [&quot;a&quot;, [&quot;b&quot;, &quot;cd&quot;], &quot;e&quot;]
         #
         #     # Non-overlapping object fields are combined.
-        #     {"a": "1"}, {"b": "2"} =&gt; {"a": "1", "b": 2"}
+        #     {&quot;a&quot;: &quot;1&quot;}, {&quot;b&quot;: &quot;2&quot;} =&gt; {&quot;a&quot;: &quot;1&quot;, &quot;b&quot;: &quot;2&quot;}
         #
         #     # Overlapping object fields are merged.
-        #     {"a": "1"}, {"a": "2"} =&gt; {"a": "12"}
+        #     {&quot;a&quot;: &quot;1&quot;}, {&quot;a&quot;: &quot;2&quot;} =&gt; {&quot;a&quot;: &quot;12&quot;}
         #
         #     # Examples of merging objects containing lists of strings.
-        #     {"a": ["1"]}, {"a": ["2"]} =&gt; {"a": ["12"]}
+        #     {&quot;a&quot;: [&quot;1&quot;]}, {&quot;a&quot;: [&quot;2&quot;]} =&gt; {&quot;a&quot;: [&quot;12&quot;]}
         #
         # For a more complete example, suppose a streaming SQL query is
         # yielding a result set whose rows contain a single string
         # field. The following `PartialResultSet`s might be yielded:
         #
         #     {
-        #       "metadata": { ... }
-        #       "values": ["Hello", "W"]
-        #       "chunked_value": true
-        #       "resume_token": "Af65..."
+        #       &quot;metadata&quot;: { ... }
+        #       &quot;values&quot;: [&quot;Hello&quot;, &quot;W&quot;]
+        #       &quot;chunked_value&quot;: true
+        #       &quot;resume_token&quot;: &quot;Af65...&quot;
         #     }
         #     {
-        #       "values": ["orl"]
-        #       "chunked_value": true
-        #       "resume_token": "Bqp2..."
+        #       &quot;values&quot;: [&quot;orl&quot;]
+        #       &quot;chunked_value&quot;: true
+        #       &quot;resume_token&quot;: &quot;Bqp2...&quot;
         #     }
         #     {
-        #       "values": ["d"]
-        #       "resume_token": "Zx1B..."
+        #       &quot;values&quot;: [&quot;d&quot;]
+        #       &quot;resume_token&quot;: &quot;Zx1B...&quot;
         #     }
         #
         # This sequence of `PartialResultSet`s encodes two rows, one
-        # containing the field value `"Hello"`, and a second containing the
-        # field value `"World" = "W" + "orl" + "d"`.
-      "",
+        # containing the field value `&quot;Hello&quot;`, and a second containing the
+        # field value `&quot;World&quot; = &quot;W&quot; + &quot;orl&quot; + &quot;d&quot;`.
+      &quot;&quot;,
     ],
-    "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the statement that produced this
-        # streaming result set. These can be requested by setting
-        # ExecuteSqlRequest.query_mode and are sent
-        # only once with the last response in the stream.
-        # This field will also be present in the last response for DML
-        # statements.
-      "rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
-          # returns a lower bound of the rows modified.
-      "rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
-      "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
-        "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
-            # with the plan root. Each PlanNode's `id` corresponds to its index in
-            # `plan_nodes`.
-          { # Node information for nodes appearing in a QueryPlan.plan_nodes.
-            "index": 42, # The `PlanNode`'s index in node list.
-            "kind": "A String", # Used to determine the type of node. May be needed for visualizing
-                # different kinds of nodes differently. For example, If the node is a
-                # SCALAR node, it will have a condensed representation
-                # which can be used to directly embed a description of the node in its
-                # parent.
-            "displayName": "A String", # The display name for the node.
-            "executionStats": { # The execution statistics associated with the node, contained in a group of
-                # key-value pairs. Only present if the plan was returned as a result of a
-                # profile query. For example, number of executions, number of rows/time per
-                # execution etc.
-              "a_key": "", # Properties of the object.
-            },
-            "childLinks": [ # List of child node `index`es and their relationship to this parent.
-              { # Metadata associated with a parent-child relationship appearing in a
-                  # PlanNode.
-                "variable": "A String", # Only present if the child node is SCALAR and corresponds
-                    # to an output variable of the parent node. The field carries the name of
-                    # the output variable.
-                    # For example, a `TableScan` operator that reads rows from a table will
-                    # have child links to the `SCALAR` nodes representing the output variables
-                    # created for each column that is read by the operator. The corresponding
-                    # `variable` fields will be set to the variable names assigned to the
-                    # columns.
-                "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
-                    # distinguish between the build child and the probe child, or in the case
-                    # of the child being an output variable, to represent the tag associated
-                    # with the output variable.
-                "childIndex": 42, # The node to which the link points.
-              },
-            ],
-            "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
-                # `SCALAR` PlanNode(s).
-              "subqueries": { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
-                  # where the `description` string of this node references a `SCALAR`
-                  # subquery contained in the expression subtree rooted at this node. The
-                  # referenced `SCALAR` subquery may not necessarily be a direct child of
-                  # this node.
-                "a_key": 42,
-              },
-              "description": "A String", # A string representation of the expression subtree rooted at this node.
-            },
-            "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
-                # For example, a Parameter Reference node could have the following
-                # information in its metadata:
-                #
-                #     {
-                #       "parameter_reference": "param1",
-                #       "parameter_type": "array"
-                #     }
-              "a_key": "", # Properties of the object.
-            },
-          },
-        ],
-      },
-      "queryStats": { # Aggregated statistics from the execution of the query. Only present when
-          # the query is profiled. For example, a query could return the statistics as
-          # follows:
-          #
-          #     {
-          #       "rows_returned": "3",
-          #       "elapsed_time": "1.22 secs",
-          #       "cpu_time": "1.19 secs"
-          #     }
-        "a_key": "", # Properties of the object.
-      },
-    },
-    "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
+    &quot;metadata&quot;: { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
         # Only present in the first response.
-      "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
-          # set.  For example, a SQL query like `"SELECT UserId, UserName FROM
-          # Users"` could return a `row_type` value like:
+      &quot;rowType&quot;: { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
+          # set.  For example, a SQL query like `&quot;SELECT UserId, UserName FROM
+          # Users&quot;` could return a `row_type` value like:
           #
-          #     "fields": [
-          #       { "name": "UserId", "type": { "code": "INT64" } },
-          #       { "name": "UserName", "type": { "code": "STRING" } },
+          #     &quot;fields&quot;: [
+          #       { &quot;name&quot;: &quot;UserId&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;INT64&quot; } },
+          #       { &quot;name&quot;: &quot;UserName&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;STRING&quot; } },
           #     ]
-        "fields": [ # The list of fields that make up this struct. Order is
+        &quot;fields&quot;: [ # The list of fields that make up this struct. Order is
             # significant, because values of this struct type are represented as
             # lists, where the order of field values matches the order of
             # fields in the StructType. In turn, the order of fields
             # matches the order of columns in a read request, or the order of
             # fields in the `SELECT` clause of a query.
           { # Message representing a single field of a struct.
-            "type": # Object with schema name: Type # The type of the field.
-            "name": "A String", # The name of the field. For reads, this is the column name. For
-                # SQL queries, it is the column alias (e.g., `"Word"` in the
-                # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
-                # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
-                # columns might have an empty name (e.g., !"SELECT
-                # UPPER(ColName)"`). Note that a query result can contain
+            &quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
+                # SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
+                # query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
+                # `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
+                # columns might have an empty name (e.g., `&quot;SELECT
+                # UPPER(ColName)&quot;`). Note that a query result can contain
                 # multiple fields with the same name.
+            &quot;type&quot;: # Object with schema name: Type # The type of the field.
           },
         ],
       },
-      "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
+      &quot;transaction&quot;: { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
           # information about the new transaction is yielded here.
-        "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
-            # for the transaction. Not returned by default: see
-            # TransactionOptions.ReadOnly.return_read_timestamp.
-            #
-            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
-            # Example: `"2014-10-02T15:01:23.045123456Z"`.
-        "id": "A String", # `id` may be used to identify the transaction in subsequent
+        &quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
             # Read,
             # ExecuteSql,
             # Commit, or
@@ -8401,8 +8306,103 @@
             #
             # Single-use read-only transactions do not have IDs, because
             # single-use transactions do not support multiple requests.
+        &quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
+            # for the transaction. Not returned by default: see
+            # TransactionOptions.ReadOnly.return_read_timestamp.
+            #
+            # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
+            # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
       },
     },
+    &quot;resumeToken&quot;: &quot;A String&quot;, # Streaming calls might be interrupted for a variety of reasons, such
+        # as TCP connection loss. If this occurs, the stream of results can
+        # be resumed by re-sending the original request and including
+        # `resume_token`. Note that executing any other transaction in the
+        # same session invalidates the token.
+    &quot;stats&quot;: { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the statement that produced this
+        # streaming result set. These can be requested by setting
+        # ExecuteSqlRequest.query_mode and are sent
+        # only once with the last response in the stream.
+        # This field will also be present in the last response for DML
+        # statements.
+      &quot;rowCountExact&quot;: &quot;A String&quot;, # Standard DML returns an exact count of rows that were modified.
+      &quot;queryStats&quot;: { # Aggregated statistics from the execution of the query. Only present when
+          # the query is profiled. For example, a query could return the statistics as
+          # follows:
+          #
+          #     {
+          #       &quot;rows_returned&quot;: &quot;3&quot;,
+          #       &quot;elapsed_time&quot;: &quot;1.22 secs&quot;,
+          #       &quot;cpu_time&quot;: &quot;1.19 secs&quot;
+          #     }
+        &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+      },
+      &quot;rowCountLowerBound&quot;: &quot;A String&quot;, # Partitioned DML does not offer exactly-once semantics, so it
+          # returns a lower bound of the rows modified.
+      &quot;queryPlan&quot;: { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
+        &quot;planNodes&quot;: [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
+            # with the plan root. Each PlanNode&#x27;s `id` corresponds to its index in
+            # `plan_nodes`.
+          { # Node information for nodes appearing in a QueryPlan.plan_nodes.
+            &quot;metadata&quot;: { # Attributes relevant to the node contained in a group of key-value pairs.
+                # For example, a Parameter Reference node could have the following
+                # information in its metadata:
+                #
+                #     {
+                #       &quot;parameter_reference&quot;: &quot;param1&quot;,
+                #       &quot;parameter_type&quot;: &quot;array&quot;
+                #     }
+              &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+            },
+            &quot;executionStats&quot;: { # The execution statistics associated with the node, contained in a group of
+                # key-value pairs. Only present if the plan was returned as a result of a
+                # profile query. For example, number of executions, number of rows/time per
+                # execution, etc.
+              &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+            },
+            &quot;shortRepresentation&quot;: { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
+                # `SCALAR` PlanNode(s).
+              &quot;description&quot;: &quot;A String&quot;, # A string representation of the expression subtree rooted at this node.
+              &quot;subqueries&quot;: { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
+                  # where the `description` string of this node references a `SCALAR`
+                  # subquery contained in the expression subtree rooted at this node. The
+                  # referenced `SCALAR` subquery may not necessarily be a direct child of
+                  # this node.
+                &quot;a_key&quot;: 42,
+              },
+            },
+            &quot;index&quot;: 42, # The `PlanNode`&#x27;s index in node list.
+            &quot;kind&quot;: &quot;A String&quot;, # Used to determine the type of node. May be needed for visualizing
+                # different kinds of nodes differently. For example, if the node is a
+                # SCALAR node, it will have a condensed representation
+                # which can be used to directly embed a description of the node in its
+                # parent.
+            &quot;displayName&quot;: &quot;A String&quot;, # The display name for the node.
+            &quot;childLinks&quot;: [ # List of child node `index`es and their relationship to this parent.
+              { # Metadata associated with a parent-child relationship appearing in a
+                  # PlanNode.
+                &quot;variable&quot;: &quot;A String&quot;, # Only present if the child node is SCALAR and corresponds
+                    # to an output variable of the parent node. The field carries the name of
+                    # the output variable.
+                    # For example, a `TableScan` operator that reads rows from a table will
+                    # have child links to the `SCALAR` nodes representing the output variables
+                    # created for each column that is read by the operator. The corresponding
+                    # `variable` fields will be set to the variable names assigned to the
+                    # columns.
+                &quot;childIndex&quot;: 42, # The node to which the link points.
+                &quot;type&quot;: &quot;A String&quot;, # The type of the link. For example, in Hash Joins this could be used to
+                    # distinguish between the build child and the probe child, or in the case
+                    # of the child being an output variable, to represent the tag associated
+                    # with the output variable.
+              },
+            ],
+          },
+        ],
+      },
+    },
+    &quot;chunkedValue&quot;: True or False, # If true, then the final value in values is chunked, and must
+        # be combined with more values from subsequent `PartialResultSet`s
+        # to obtain a complete field value.
   }</pre>
 </div>
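
For reviewers unfamiliar with the chunked-value semantics documented above, here is a minimal sketch of the merge rules the generated docs describe. This is illustrative only — it is not part of this PR and not the client library's internal merge logic, and `merge_chunked` is a name invented here:

```python
def merge_chunked(prev, cur):
    """Merge two adjacent chunked values from consecutive PartialResultSets,
    following the documented examples: strings concatenate; lists concatenate,
    recursively merging the boundary elements when both are strings or both
    are lists; object fields combine, with overlapping fields merged."""
    if isinstance(prev, str) and isinstance(cur, str):
        return prev + cur                      # "foo", "bar" => "foobar"
    if isinstance(prev, list) and isinstance(cur, list):
        if prev and cur and (
            (isinstance(prev[-1], str) and isinstance(cur[0], str))
            or (isinstance(prev[-1], list) and isinstance(cur[0], list))
        ):
            # ["a", "b"], ["c", "d"] => ["a", "bc", "d"]
            return prev[:-1] + [merge_chunked(prev[-1], cur[0])] + cur[1:]
        return prev + cur                      # [2, 3], [4] => [2, 3, 4]
    if isinstance(prev, dict) and isinstance(cur, dict):
        merged = dict(prev)                    # {"a": "1"}, {"b": "2"} combine
        for key, value in cur.items():
            # Overlapping fields merge: {"a": "1"}, {"a": "2"} => {"a": "12"}
            merged[key] = merge_chunked(merged[key], value) if key in merged else value
        return merged
    raise ValueError("cannot merge values of mismatched types")


# Reassembling the "Hello" / "World" stream from the docs above:
stream = [
    {"values": ["Hello", "W"], "chunkedValue": True},
    {"values": ["orl"], "chunkedValue": True},
    {"values": ["d"]},
]
values = []
pending = False
for prs in stream:
    vals = list(prs["values"])
    if pending and values:
        vals[0] = merge_chunked(values.pop(), vals[0])
    values.extend(vals)
    pending = prs.get("chunkedValue", False)
assert values == ["Hello", "World"]
```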
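The docs also note that every N complete values defines a row, where N is the number of entries in `metadata.rowType.fields`. A small sketch of that grouping, again illustrative only and using names invented here:

```python
def rows_from_values(metadata, values):
    """Group a fully merged, flat `values` list into rows of N fields,
    where N = len(metadata["rowType"]["fields"])."""
    n = len(metadata["rowType"]["fields"])
    if n == 0:
        return []
    if len(values) % n:
        raise ValueError("stream ended mid-row, or values are still chunked")
    return [values[i:i + n] for i in range(0, len(values), n)]


metadata = {"rowType": {"fields": [
    {"name": "UserId", "type": {"code": "INT64"}},
    {"name": "UserName", "type": {"code": "STRING"}},
]}}
# INT64 values arrive JSON-encoded as decimal strings.
assert rows_from_values(metadata, ["1", "alice", "2", "bob"]) == [
    ["1", "alice"],
    ["2", "bob"],
]
```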
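Finally, `resumeToken` (reordered in this hunk) lets an interrupted stream be resumed by re-sending the original request with the last observed token. A hedged sketch: `execute_streaming_sql` below is a hypothetical callable standing in for whatever wrapper issues the ExecuteStreamingSql call; it is assumed to yield `PartialResultSet` dicts and raise `ConnectionError` on transient failures such as TCP connection loss.

```python
def stream_with_resume(execute_streaming_sql, session, body, max_attempts=5):
    """Yield PartialResultSets, transparently resuming after transient breaks.

    Per the caveat in the docs above, executing any other transaction in the
    same session invalidates the token, so the session must be left alone
    between the original request and any resumption.
    """
    resume_token = None
    for attempt in range(max_attempts):
        request = dict(body)
        if resume_token is not None:
            # Re-send the original request, adding the last token we saw.
            request["resumeToken"] = resume_token
        try:
            for prs in execute_streaming_sql(session, request):
                # Only some responses carry a token; remember the latest one.
                if prs.get("resumeToken"):
                    resume_token = prs["resumeToken"]
                yield prs
            return  # stream completed normally
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
```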