<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="spanner_v1.html">Cloud Spanner API</a> . <a href="spanner_v1.projects.html">projects</a> . <a href="spanner_v1.projects.instances.html">instances</a> . <a href="spanner_v1.projects.instances.databases.html">databases</a> . <a href="spanner_v1.projects.instances.databases.sessions.html">sessions</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#batchCreate">batchCreate(database, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates multiple new sessions.</p>
<p class="toc_element">
  <code><a href="#beginTransaction">beginTransaction(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Begins a new transaction.</p>
<p class="toc_element">
  <code><a href="#commit">commit(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Commits a transaction. The request includes the mutations to be applied to rows in the database.</p>
<p class="toc_element">
  <code><a href="#create">create(database, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a new session.</p>
<p class="toc_element">
  <code><a href="#delete">delete(name, x__xgafv=None)</a></code></p>
<p class="firstline">Ends a session, releasing server resources associated with it.</p>
<p class="toc_element">
  <code><a href="#executeBatchDml">executeBatchDml(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Executes a batch of SQL DML statements.</p>
<p class="toc_element">
  <code><a href="#executeSql">executeSql(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Executes an SQL statement, returning all results in a single reply.</p>
<p class="toc_element">
  <code><a href="#executeStreamingSql">executeStreamingSql(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Like ExecuteSql, except returns the result set as a stream.</p>
<p class="toc_element">
  <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets a session. Returns `NOT_FOUND` if the session does not exist.</p>
<p class="toc_element">
  <code><a href="#list">list(database, filter=None, pageToken=None, pageSize=None, x__xgafv=None)</a></code></p>
<p class="firstline">Lists all sessions in a given database.</p>
<p class="toc_element">
  <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<p class="toc_element">
  <code><a href="#partitionQuery">partitionQuery(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a set of partition tokens that can be used to execute a query.</p>
<p class="toc_element">
  <code><a href="#partitionRead">partitionRead(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a set of partition tokens that can be used to execute a read.</p>
<p class="toc_element">
  <code><a href="#read">read(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Reads rows from the database using key lookups and scans.</p>
<p class="toc_element">
  <code><a href="#rollback">rollback(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Rolls back a transaction, releasing any locks it holds.</p>
<p class="toc_element">
  <code><a href="#streamingRead">streamingRead(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Like Read, except returns the result set as a stream.</p>
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="batchCreate">batchCreate(database, body=None, x__xgafv=None)</code>
  <pre>Creates multiple new sessions.

This API can be used to initialize a session cache on the clients.
See https://goo.gl/TgSFN2 for best practices on session cache management.

Args:
  database: string, Required. The database in which the new sessions are created. (required)
  body: object, The request body.
    The object takes the form of:

{ # The request for BatchCreateSessions.
    &quot;sessionTemplate&quot;: { # A session in the Cloud Spanner API. # Parameters to be applied to each created session.
      &quot;createTime&quot;: &quot;A String&quot;, # Output only. The timestamp when the session is created.
      &quot;name&quot;: &quot;A String&quot;, # Output only. The name of the session. This is always system-assigned.
      &quot;approximateLastUseTime&quot;: &quot;A String&quot;, # Output only. The approximate timestamp when the session is last used. It is
          # typically earlier than the actual last use time.
      &quot;labels&quot;: { # The labels for the session.
          #
          # * Label keys must be between 1 and 63 characters long and must conform to
          #   the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
          # * Label values must be between 0 and 63 characters long and must conform
          #   to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
          # * No more than 64 labels can be associated with a given session.
          #
          # See https://goo.gl/xmQnxf for more information on and examples of labels.
        &quot;a_key&quot;: &quot;A String&quot;,
      },
    },
    &quot;sessionCount&quot;: 42, # Required. The number of sessions to be created in this batch call.
        # The API may return fewer than the requested number of sessions. If a
        # specific number of sessions are desired, the client can make additional
        # calls to BatchCreateSessions (adjusting
        # session_count as necessary).
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The response for BatchCreateSessions.
    &quot;session&quot;: [ # The freshly created sessions.
      { # A session in the Cloud Spanner API.
        &quot;createTime&quot;: &quot;A String&quot;, # Output only. The timestamp when the session is created.
        &quot;name&quot;: &quot;A String&quot;, # Output only. The name of the session. This is always system-assigned.
        &quot;approximateLastUseTime&quot;: &quot;A String&quot;, # Output only. The approximate timestamp when the session is last used. It is
            # typically earlier than the actual last use time.
        &quot;labels&quot;: { # The labels for the session.
            #
            # * Label keys must be between 1 and 63 characters long and must conform to
            #   the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
            # * Label values must be between 0 and 63 characters long and must conform
            #   to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
            # * No more than 64 labels can be associated with a given session.
            #
            # See https://goo.gl/xmQnxf for more information on and examples of labels.
          &quot;a_key&quot;: &quot;A String&quot;,
        },
      },
    ],
  }</pre>
</div>
193
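The `sessionCount` and label rules above are plain data constraints, so they can be checked client-side before the request is sent. The sketch below is a minimal, hypothetical helper (`batch_create_body` is not part of the client library) that builds a BatchCreateSessions request body; the regular expressions are the ones quoted in the `labels` documentation above.

```python
import re

# Regexes quoted from the session `labels` documentation above.
LABEL_KEY_RE = re.compile(r"[a-z]([-a-z0-9]*[a-z0-9])?\Z")
LABEL_VALUE_RE = re.compile(r"([a-z]([-a-z0-9]*[a-z0-9])?)?\Z")


def batch_create_body(session_count, labels=None):
    """Build a BatchCreateSessions request body, validating documented limits."""
    labels = dict(labels or {})
    if session_count < 1:
        raise ValueError("sessionCount must be at least 1")
    if len(labels) > 64:
        raise ValueError("no more than 64 labels can be associated with a session")
    for key, value in labels.items():
        if not (1 <= len(key) <= 63 and LABEL_KEY_RE.match(key)):
            raise ValueError("invalid label key: %r" % key)
        if not (len(value) <= 63 and LABEL_VALUE_RE.match(value)):
            raise ValueError("invalid label value: %r" % value)
    return {
        "sessionCount": session_count,
        "sessionTemplate": {"labels": labels},
    }
```

The resulting dict can then be passed as `body=` to this method. Note that the API may still return fewer sessions than requested, so callers should inspect the returned `session` list and issue follow-up calls if a specific count is needed.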
<div class="method">
    <code class="details" id="beginTransaction">beginTransaction(session, body=None, x__xgafv=None)</code>
  <pre>Begins a new transaction. This step can often be skipped:
Read, ExecuteSql and
Commit can begin a new transaction as a
side-effect.

Args:
  session: string, Required. The session in which the transaction runs. (required)
  body: object, The request body.
    The object takes the form of:

{ # The request for BeginTransaction.
    &quot;options&quot;: { # # Transactions # Required. Options for the new transaction.
        #
        #
        # Each session can have at most one active transaction at a time (note that
        # standalone reads and queries use a transaction internally and do count
        # towards the one transaction limit). After the active transaction is
        # completed, the session can immediately be re-used for the next transaction.
        # It is not necessary to create a new session for each transaction.
        #
        # # Transaction Modes
        #
        # Cloud Spanner supports three transaction modes:
        #
        # 1. Locking read-write. This type of transaction is the only way
        #    to write data into Cloud Spanner. These transactions rely on
        #    pessimistic locking and, if necessary, two-phase commit.
        #    Locking read-write transactions may abort, requiring the
        #    application to retry.
        #
        # 2. Snapshot read-only. This transaction type provides guaranteed
        #    consistency across several reads, but does not allow
        #    writes. Snapshot read-only transactions can be configured to
        #    read at timestamps in the past. Snapshot read-only
        #    transactions do not need to be committed.
        #
        # 3. Partitioned DML. This type of transaction is used to execute
        #    a single Partitioned DML statement. Partitioned DML partitions
        #    the key space and runs the DML statement over each partition
        #    in parallel using separate, internal transactions that commit
        #    independently. Partitioned DML transactions do not need to be
        #    committed.
        #
        # For transactions that only read, snapshot read-only transactions
        # provide simpler semantics and are almost always faster. In
        # particular, read-only transactions do not take locks, so they do
        # not conflict with read-write transactions. As a consequence of not
        # taking locks, they also do not abort, so retry loops are not needed.
        #
        # Transactions may only read/write data in a single database. They
        # may, however, read/write data in different tables within that
        # database.
        #
        # ## Locking Read-Write Transactions
        #
        # Locking transactions may be used to atomically read-modify-write
        # data anywhere in a database. This type of transaction is externally
        # consistent.
        #
        # Clients should attempt to minimize the amount of time a transaction
        # is active. Faster transactions commit with higher probability
        # and cause less contention. Cloud Spanner attempts to keep read locks
        # active as long as the transaction continues to do reads, and the
        # transaction has not been terminated by
        # Commit or
        # Rollback. Long periods of
        # inactivity at the client may cause Cloud Spanner to release a
        # transaction&#x27;s locks and abort it.
        #
        # Conceptually, a read-write transaction consists of zero or more
        # reads or SQL statements followed by
        # Commit. At any time before
        # Commit, the client can send a
        # Rollback request to abort the
        # transaction.
        #
        # ### Semantics
        #
        # Cloud Spanner can commit the transaction if all read locks it acquired
        # are still valid at commit time, and it is able to acquire write
        # locks for all writes. Cloud Spanner can abort the transaction for any
        # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
        # that the transaction has not modified any user data in Cloud Spanner.
        #
        # Unless the transaction commits, Cloud Spanner makes no guarantees about
        # how long the transaction&#x27;s locks were held for. It is an error to
        # use Cloud Spanner locks for any sort of mutual exclusion other than
        # between Cloud Spanner transactions themselves.
        #
        # ### Retrying Aborted Transactions
        #
        # When a transaction aborts, the application can choose to retry the
        # whole transaction again. To maximize the chances of successfully
        # committing the retry, the client should execute the retry in the
        # same session as the original attempt. The original session&#x27;s lock
        # priority increases with each consecutive abort, meaning that each
        # attempt has a slightly better chance of success than the previous.
        #
        # Under some circumstances (e.g., many transactions attempting to
        # modify the same row(s)), a transaction can abort many times in a
        # short period before successfully committing. Thus, it is not a good
        # idea to cap the number of retries a transaction can attempt;
        # instead, it is better to limit the total amount of wall time spent
        # retrying.
        #
        # ### Idle Transactions
        #
        # A transaction is considered idle if it has no outstanding reads or
        # SQL queries and has not started a read or SQL query within the last 10
        # seconds. Idle transactions can be aborted by Cloud Spanner so that they
        # don&#x27;t hold on to locks indefinitely. In that case, the commit will
        # fail with error `ABORTED`.
        #
        # If this behavior is undesirable, periodically executing a simple
        # SQL query in the transaction (e.g., `SELECT 1`) prevents the
        # transaction from becoming idle.
        #
        # ## Snapshot Read-Only Transactions
        #
        # Snapshot read-only transactions provide a simpler method than
        # locking read-write transactions for doing several consistent
        # reads. However, this type of transaction does not support writes.
        #
        # Snapshot transactions do not take locks. Instead, they work by
        # choosing a Cloud Spanner timestamp, then executing all reads at that
        # timestamp. Since they do not acquire locks, they do not block
        # concurrent read-write transactions.
        #
        # Unlike locking read-write transactions, snapshot read-only
        # transactions never abort. They can fail if the chosen read
        # timestamp is garbage collected; however, the default garbage
        # collection policy is generous enough that most applications do not
        # need to worry about this in practice.
        #
        # Snapshot read-only transactions do not need to call
        # Commit or
        # Rollback (and in fact are not
        # permitted to do so).
        #
        # To execute a snapshot transaction, the client specifies a timestamp
        # bound, which tells Cloud Spanner how to choose a read timestamp.
        #
        # The types of timestamp bound are:
        #
        #   - Strong (the default).
        #   - Bounded staleness.
        #   - Exact staleness.
        #
        # If the Cloud Spanner database to be read is geographically distributed,
        # stale read-only transactions can execute more quickly than strong
        # or read-write transactions, because they are able to execute far
        # from the leader replica.
        #
        # Each type of timestamp bound is discussed in detail below.
        #
        # ### Strong
        #
        # Strong reads are guaranteed to see the effects of all transactions
        # that have committed before the start of the read. Furthermore, all
        # rows yielded by a single read are consistent with each other -- if
        # any part of the read observes a transaction, all parts of the read
        # see the transaction.
        #
        # Strong reads are not repeatable: two consecutive strong read-only
        # transactions might return inconsistent results if there are
        # concurrent writes. If consistency across reads is required, the
        # reads should be executed within a transaction or at an exact read
        # timestamp.
        #
        # See TransactionOptions.ReadOnly.strong.
        #
        # ### Exact Staleness
        #
        # These timestamp bounds execute reads at a user-specified
        # timestamp. Reads at a timestamp are guaranteed to see a consistent
        # prefix of the global transaction history: they observe
        # modifications done by all transactions with a commit timestamp &lt;=
        # the read timestamp, and observe none of the modifications done by
        # transactions with a larger commit timestamp. They will block until
        # all conflicting transactions that may be assigned commit timestamps
        # &lt;= the read timestamp have finished.
        #
        # The timestamp can either be expressed as an absolute Cloud Spanner commit
        # timestamp or a staleness relative to the current time.
        #
        # These modes do not require a &quot;negotiation phase&quot; to pick a
        # timestamp. As a result, they execute slightly faster than the
        # equivalent boundedly stale concurrency modes. On the other hand,
        # boundedly stale reads usually return fresher results.
        #
        # See TransactionOptions.ReadOnly.read_timestamp and
        # TransactionOptions.ReadOnly.exact_staleness.
        #
        # ### Bounded Staleness
        #
        # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
        # subject to a user-provided staleness bound. Cloud Spanner chooses the
        # newest timestamp within the staleness bound that allows execution
        # of the reads at the closest available replica without blocking.
        #
        # All rows yielded are consistent with each other -- if any part of
        # the read observes a transaction, all parts of the read see the
        # transaction. Boundedly stale reads are not repeatable: two stale
        # reads, even if they use the same staleness bound, can execute at
        # different timestamps and thus return inconsistent results.
        #
        # Boundedly stale reads execute in two phases: the first phase
        # negotiates a timestamp among all replicas needed to serve the
        # read. In the second phase, reads are executed at the negotiated
        # timestamp.
        #
        # As a result of the two-phase execution, bounded staleness reads are
        # usually a little slower than comparable exact staleness
        # reads. However, they are typically able to return fresher
        # results, and are more likely to execute at the closest replica.
        #
        # Because the timestamp negotiation requires up-front knowledge of
        # which rows will be read, it can only be used with single-use
        # read-only transactions.
        #
        # See TransactionOptions.ReadOnly.max_staleness and
        # TransactionOptions.ReadOnly.min_read_timestamp.
        #
        # ### Old Read Timestamps and Garbage Collection
        #
        # Cloud Spanner continuously garbage collects deleted and overwritten data
        # in the background to reclaim storage space. This process is known
        # as &quot;version GC&quot;. By default, version GC reclaims versions after they
        # are one hour old. Because of this, Cloud Spanner cannot perform reads
        # at read timestamps more than one hour in the past. This
        # restriction also applies to in-progress reads and/or SQL queries whose
        # timestamps become too old while executing. Reads and SQL queries with
        # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
        #
        # ## Partitioned DML Transactions
        #
        # Partitioned DML transactions are used to execute DML statements with a
        # different execution strategy that provides different, and often better,
        # scalability properties for large, table-wide operations than DML in a
        # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
        # should prefer using ReadWrite transactions.
        #
        # Partitioned DML partitions the keyspace and runs the DML statement on each
        # partition in separate, internal transactions. These transactions commit
        # automatically when complete, and run independently from one another.
        #
        # To reduce lock contention, this execution strategy only acquires read locks
        # on rows that match the WHERE clause of the statement. Additionally, the
        # smaller per-partition transactions hold locks for less time.
        #
        # That said, Partitioned DML is not a drop-in replacement for standard DML used
        # in ReadWrite transactions.
        #
        # - The DML statement must be fully-partitionable. Specifically, the statement
        #   must be expressible as the union of many statements which each access only
        #   a single row of the table.
        #
        # - The statement is not applied atomically to all rows of the table. Rather,
        #   the statement is applied atomically to partitions of the table, in
        #   independent transactions. Secondary index rows are updated atomically
        #   with the base table rows.
        #
        # - Partitioned DML does not guarantee exactly-once execution semantics
        #   against a partition. The statement will be applied at least once to each
        #   partition. It is strongly recommended that the DML statement should be
        #   idempotent to avoid unexpected results. For instance, it is potentially
        #   dangerous to run a statement such as
        #   `UPDATE table SET column = column + 1` as it could be run multiple times
        #   against some rows.
        #
        # - The partitions are committed automatically - there is no support for
        #   Commit or Rollback. If the call returns an error, or if the client issuing
        #   the ExecuteSql call dies, it is possible that some rows had the statement
        #   executed on them successfully. It is also possible that the statement was
        #   never executed against other rows.
        #
        # - Partitioned DML transactions may only contain the execution of a single
        #   DML statement via ExecuteSql or ExecuteStreamingSql.
        #
        # - If any error is encountered during the execution of the partitioned DML
        #   operation (for instance, a UNIQUE INDEX violation, division by zero, or a
        #   value that cannot be stored due to schema constraints), then the
        #   operation is stopped at that point and an error is returned. It is
        #   possible that at this point, some partitions have been committed (or even
        #   committed multiple times), and other partitions have not been run at all.
        #
        # Given the above, Partitioned DML is a good fit for large, database-wide
        # operations that are idempotent, such as deleting old rows from a very large
        # table.
      &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
          # transaction type has no options.
          #
          # Authorization to begin a read-write transaction requires
          # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
          # on the `session` resource.
      },
      &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
          #
          # Authorization to begin a read-only transaction requires
          # `spanner.databases.beginReadOnlyTransaction` permission
          # on the `session` resource.
        &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
            # reads at a specific timestamp are repeatable; the same read at
            # the same timestamp always returns the same data. If the
            # timestamp is in the future, the read will block until the
            # specified timestamp, modulo the read&#x27;s deadline.
            #
            # Useful for large scale consistent reads such as mapreduces, or
            # for coordinating many reads against a consistent snapshot of the
            # data.
            #
            # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
            # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
        &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
            #
            # This is useful for requesting fresher data than some previous
            # read, or data that is fresh enough to observe the effects of some
            # previously committed transaction whose timestamp is known.
            #
            # Note that this option can only be used in single-use transactions.
            #
            # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
            # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
        &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
            # old. The timestamp is chosen soon after the read is started.
            #
            # Guarantees that all writes that have committed more than the
            # specified number of seconds ago are visible. Because Cloud Spanner
            # chooses the exact timestamp, this mode works even if the client&#x27;s
            # local clock is substantially skewed from Cloud Spanner commit
            # timestamps.
            #
            # Useful for reading at nearby replicas without the distributed
            # timestamp negotiation overhead of `max_staleness`.
        &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
            # seconds. Guarantees that all writes that have committed more
            # than the specified number of seconds ago are visible. Because
            # Cloud Spanner chooses the exact timestamp, this mode works even if
            # the client&#x27;s local clock is substantially skewed from Cloud Spanner
            # commit timestamps.
            #
            # Useful for reading the freshest data available at a nearby
            # replica, while bounding the possible staleness if the local
            # replica has fallen behind.
            #
            # Note that this option can only be used in single-use
            # transactions.
        &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
            # the Transaction message that describes the transaction.
        &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
            # are visible.
      },
      &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
          #
          # Authorization to begin a Partitioned DML transaction requires
          # `spanner.databases.beginPartitionedDmlTransaction` permission
          # on the `session` resource.
      },
    },
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A transaction.
    &quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
        # for the transaction. Not returned by default: see
        # TransactionOptions.ReadOnly.return_read_timestamp.
        #
        # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
        # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
    &quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
        # Read,
        # ExecuteSql,
        # Commit, or
        # Rollback calls.
        #
        # Single-use read-only transactions do not have IDs, because
        # single-use transactions do not support multiple requests.
  }</pre>
</div>
582
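The `options` object above selects one transaction mode and, for read-only transactions, one timestamp bound. As a minimal sketch, the hypothetical helper below (not part of the client library) builds a BeginTransaction request body for a snapshot read-only transaction, using the field names documented above.

```python
def read_only_options(exact_staleness=None, max_staleness=None,
                      return_read_timestamp=False):
    """Build a BeginTransaction body for a snapshot read-only transaction.

    At most one staleness bound may be given; with none given, the
    strong bound (the default) is used.
    """
    if exact_staleness is not None and max_staleness is not None:
        raise ValueError("choose a single timestamp bound")
    read_only = {}
    if exact_staleness is not None:
        # Duration string, e.g. "10s": read at a timestamp exactly that old.
        read_only["exactStaleness"] = exact_staleness
    elif max_staleness is not None:
        # Note: max_staleness can only be used in single-use transactions.
        read_only["maxStaleness"] = max_staleness
    else:
        # Strong reads see all transactions committed before the read starts.
        read_only["strong"] = True
    if return_read_timestamp:
        # Ask Cloud Spanner to report the read timestamp it selected.
        read_only["returnReadTimestamp"] = True
    return {"options": {"readOnly": read_only}}
```

The returned dict can be passed as `body=` to this method; the response is the Transaction object described above, whose `id` is then supplied to subsequent Read, ExecuteSql, Commit, or Rollback calls.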
<div class="method">
    <code class="details" id="commit">commit(session, body=None, x__xgafv=None)</code>
  <pre>Commits a transaction. The request includes the mutations to be
applied to rows in the database.

`Commit` might return an `ABORTED` error. This can occur at any time;
commonly, the cause is conflicts with concurrent
transactions. However, it can also happen for a variety of other
reasons. If `Commit` returns `ABORTED`, the caller should re-attempt
the transaction from the beginning, re-using the same session.

On very rare occasions, `Commit` might return `UNKNOWN`. This can happen,
for example, if the client job experiences a 1+ hour networking failure.
At that point, Cloud Spanner has lost track of the transaction outcome and
we recommend that you perform another read from the database to see the
state of things as they are now.

Args:
  session: string, Required. The session in which the transaction to be committed is running. (required)
  body: object, The request body.
    The object takes the form of:

{ # The request for Commit.
    &quot;singleUseTransaction&quot;: { # # Transactions # Execute mutations in a temporary transaction. Note that unlike
        # commit of a previously-started transaction, commit with a
        # temporary transaction is non-idempotent. That is, if the
        # `CommitRequest` is sent to Cloud Spanner more than once (for
        # instance, due to retries in the application, or in the
        # transport library), it is possible that the mutations are
        # executed more than once. If this is undesirable, use
        # BeginTransaction and
        # Commit instead.
        #
        #
        # Each session can have at most one active transaction at a time (note that
        # standalone reads and queries use a transaction internally and do count
        # towards the one transaction limit). After the active transaction is
        # completed, the session can immediately be re-used for the next transaction.
        # It is not necessary to create a new session for each transaction.
        #
        # # Transaction Modes
        #
        # Cloud Spanner supports three transaction modes:
        #
        # 1. Locking read-write. This type of transaction is the only way
628 # to write data into Cloud Spanner. These transactions rely on
629 # pessimistic locking and, if necessary, two-phase commit.
630 # Locking read-write transactions may abort, requiring the
631 # application to retry.
632 #
633 # 2. Snapshot read-only. This transaction type provides guaranteed
634 # consistency across several reads, but does not allow
635 # writes. Snapshot read-only transactions can be configured to
636 # read at timestamps in the past. Snapshot read-only
637 # transactions do not need to be committed.
638 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700639 # 3. Partitioned DML. This type of transaction is used to execute
640 # a single Partitioned DML statement. Partitioned DML partitions
641 # the key space and runs the DML statement over each partition
642 # in parallel using separate, internal transactions that commit
643 # independently. Partitioned DML transactions do not need to be
644 # committed.
645 #
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400646 # For transactions that only read, snapshot read-only transactions
647 # provide simpler semantics and are almost always faster. In
648 # particular, read-only transactions do not take locks, so they do
649 # not conflict with read-write transactions. As a consequence of not
650 # taking locks, they also do not abort, so retry loops are not needed.
651 #
652 # Transactions may only read/write data in a single database. They
653 # may, however, read/write data in different tables within that
654 # database.
655 #
656 # ## Locking Read-Write Transactions
657 #
658 # Locking transactions may be used to atomically read-modify-write
659 # data anywhere in a database. This type of transaction is externally
660 # consistent.
661 #
662 # Clients should attempt to minimize the amount of time a transaction
663 # is active. Faster transactions commit with higher probability
664 # and cause less contention. Cloud Spanner attempts to keep read locks
665 # active as long as the transaction continues to do reads, and the
666 # transaction has not been terminated by
667 # Commit or
668 # Rollback. Long periods of
669 # inactivity at the client may cause Cloud Spanner to release a
Bu Sun Kim65020912020-05-20 12:08:20 -0700670 # transaction&#x27;s locks and abort it.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400671 #
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400672 # Conceptually, a read-write transaction consists of zero or more
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700673 # reads or SQL statements followed by
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400674 # Commit. At any time before
675 # Commit, the client can send a
676 # Rollback request to abort the
677 # transaction.
678 #
679 # ### Semantics
680 #
681 # Cloud Spanner can commit the transaction if all read locks it acquired
682 # are still valid at commit time, and it is able to acquire write
683 # locks for all writes. Cloud Spanner can abort the transaction for any
684 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
685 # that the transaction has not modified any user data in Cloud Spanner.
686 #
687 # Unless the transaction commits, Cloud Spanner makes no guarantees about
Bu Sun Kim65020912020-05-20 12:08:20 -0700688 # how long the transaction&#x27;s locks were held for. It is an error to
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400689 # use Cloud Spanner locks for any sort of mutual exclusion other than
690 # between Cloud Spanner transactions themselves.
691 #
692 # ### Retrying Aborted Transactions
693 #
694 # When a transaction aborts, the application can choose to retry the
695 # whole transaction again. To maximize the chances of successfully
696 # committing the retry, the client should execute the retry in the
Bu Sun Kim65020912020-05-20 12:08:20 -0700697 # same session as the original attempt. The original session&#x27;s lock
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400698 # priority increases with each consecutive abort, meaning that each
699 # attempt has a slightly better chance of success than the previous.
700 #
701 # Under some circumstances (e.g., many transactions attempting to
702 # modify the same row(s)), a transaction can abort many times in a
703 # short period before successfully committing. Thus, it is not a good
704 # idea to cap the number of retries a transaction can attempt;
705 # instead, it is better to limit the total amount of wall time spent
706 # retrying.
707 #
708 # ### Idle Transactions
709 #
710 # A transaction is considered idle if it has no outstanding reads or
711 # SQL queries and has not started a read or SQL query within the last 10
712 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
Bu Sun Kim65020912020-05-20 12:08:20 -0700713 # don&#x27;t hold on to locks indefinitely. In that case, the commit will
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400714 # fail with error `ABORTED`.
715 #
716 # If this behavior is undesirable, periodically executing a simple
717 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
718 # transaction from becoming idle.
719 #
720 # ## Snapshot Read-Only Transactions
721 #
722 # Snapshot read-only transactions provides a simpler method than
723 # locking read-write transactions for doing several consistent
724 # reads. However, this type of transaction does not support writes.
725 #
726 # Snapshot transactions do not take locks. Instead, they work by
727 # choosing a Cloud Spanner timestamp, then executing all reads at that
728 # timestamp. Since they do not acquire locks, they do not block
729 # concurrent read-write transactions.
730 #
731 # Unlike locking read-write transactions, snapshot read-only
732 # transactions never abort. They can fail if the chosen read
733 # timestamp is garbage collected; however, the default garbage
734 # collection policy is generous enough that most applications do not
735 # need to worry about this in practice.
736 #
737 # Snapshot read-only transactions do not need to call
738 # Commit or
739 # Rollback (and in fact are not
740 # permitted to do so).
      #
      # To execute a snapshot transaction, the client specifies a timestamp
      # bound, which tells Cloud Spanner how to choose a read timestamp.
      #
      # The types of timestamp bound are:
      #
      # - Strong (the default).
      # - Bounded staleness.
      # - Exact staleness.
      #
      # If the Cloud Spanner database to be read is geographically distributed,
      # stale read-only transactions can execute more quickly than strong
      # or read-write transactions, because they are able to execute far
      # from the leader replica.
      #
      # Each type of timestamp bound is discussed in detail below.
      #
      # ### Strong
      #
      # Strong reads are guaranteed to see the effects of all transactions
      # that have committed before the start of the read. Furthermore, all
      # rows yielded by a single read are consistent with each other -- if
      # any part of the read observes a transaction, all parts of the read
      # see the transaction.
      #
      # Strong reads are not repeatable: two consecutive strong read-only
      # transactions might return inconsistent results if there are
      # concurrent writes. If consistency across reads is required, the
      # reads should be executed within a transaction or at an exact read
      # timestamp.
      #
      # See TransactionOptions.ReadOnly.strong.
      #
      # ### Exact Staleness
      #
      # These timestamp bounds execute reads at a user-specified
      # timestamp. Reads at a timestamp are guaranteed to see a consistent
      # prefix of the global transaction history: they observe
      # modifications done by all transactions with a commit timestamp &lt;=
      # the read timestamp, and observe none of the modifications done by
      # transactions with a larger commit timestamp. They will block until
      # all conflicting transactions that may be assigned commit timestamps
      # &lt;= the read timestamp have finished.
      #
      # The timestamp can either be expressed as an absolute Cloud Spanner commit
      # timestamp or a staleness relative to the current time.
      #
      # These modes do not require a &quot;negotiation phase&quot; to pick a
      # timestamp. As a result, they execute slightly faster than the
      # equivalent boundedly stale concurrency modes. On the other hand,
      # boundedly stale reads usually return fresher results.
      #
      # See TransactionOptions.ReadOnly.read_timestamp and
      # TransactionOptions.ReadOnly.exact_staleness.
      #
      # ### Bounded Staleness
      #
      # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
      # subject to a user-provided staleness bound. Cloud Spanner chooses the
      # newest timestamp within the staleness bound that allows execution
      # of the reads at the closest available replica without blocking.
      #
      # All rows yielded are consistent with each other -- if any part of
      # the read observes a transaction, all parts of the read see the
      # transaction. Boundedly stale reads are not repeatable: two stale
      # reads, even if they use the same staleness bound, can execute at
      # different timestamps and thus return inconsistent results.
      #
      # Boundedly stale reads execute in two phases: the first phase
      # negotiates a timestamp among all replicas needed to serve the
      # read. In the second phase, reads are executed at the negotiated
      # timestamp.
      #
      # As a result of the two phase execution, bounded staleness reads are
      # usually a little slower than comparable exact staleness
      # reads. However, they are typically able to return fresher
      # results, and are more likely to execute at the closest replica.
      #
      # Because the timestamp negotiation requires up-front knowledge of
      # which rows will be read, it can only be used with single-use
      # read-only transactions.
      #
      # See TransactionOptions.ReadOnly.max_staleness and
      # TransactionOptions.ReadOnly.min_read_timestamp.
      #
      # ### Old Read Timestamps and Garbage Collection
      #
      # Cloud Spanner continuously garbage collects deleted and overwritten data
      # in the background to reclaim storage space. This process is known
      # as &quot;version GC&quot;. By default, version GC reclaims versions after they
      # are one hour old. Because of this, Cloud Spanner cannot perform reads
      # at read timestamps more than one hour in the past. This
      # restriction also applies to in-progress reads and/or SQL queries whose
      # timestamps become too old while executing. Reads and SQL queries with
      # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
      #
      # ## Partitioned DML Transactions
      #
      # Partitioned DML transactions are used to execute DML statements with a
      # different execution strategy that provides different, and often better,
      # scalability properties for large, table-wide operations than DML in a
      # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
      # should prefer using ReadWrite transactions.
      #
      # Partitioned DML partitions the keyspace and runs the DML statement on each
      # partition in separate, internal transactions. These transactions commit
      # automatically when complete, and run independently from one another.
      #
      # To reduce lock contention, this execution strategy only acquires read locks
      # on rows that match the WHERE clause of the statement. Additionally, the
      # smaller per-partition transactions hold locks for less time.
      #
      # That said, Partitioned DML is not a drop-in replacement for standard DML used
      # in ReadWrite transactions.
      #
      # - The DML statement must be fully-partitionable. Specifically, the statement
      # must be expressible as the union of many statements which each access only
      # a single row of the table.
      #
      # - The statement is not applied atomically to all rows of the table. Rather,
      # the statement is applied atomically to partitions of the table, in
      # independent transactions. Secondary index rows are updated atomically
      # with the base table rows.
      #
      # - Partitioned DML does not guarantee exactly-once execution semantics
      # against a partition. The statement will be applied at least once to each
      # partition. It is strongly recommended that the DML statement should be
      # idempotent to avoid unexpected results. For instance, it is potentially
      # dangerous to run a statement such as
      # `UPDATE table SET column = column + 1` as it could be run multiple times
      # against some rows.
      #
      # - The partitions are committed automatically - there is no support for
      # Commit or Rollback. If the call returns an error, or if the client issuing
      # the ExecuteSql call dies, it is possible that some rows had the statement
      # executed on them successfully. It is also possible that the statement was
      # never executed against other rows.
      #
      # - Partitioned DML transactions may only contain the execution of a single
      # DML statement via ExecuteSql or ExecuteStreamingSql.
      #
      # - If any error is encountered during the execution of the partitioned DML
      # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
      # value that cannot be stored due to schema constraints), then the
      # operation is stopped at that point and an error is returned. It is
      # possible that at this point, some partitions have been committed (or even
      # committed multiple times), and other partitions have not been run at all.
      #
      # Given the above, Partitioned DML is a good fit for large, database-wide
      # operations that are idempotent, such as deleting old rows from a very large
      # table.
    &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
        #
        # Authorization to begin a read-write transaction requires
        # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
        # on the `session` resource.
        # transaction type has no options.
    },
    &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
        #
        # Authorization to begin a read-only transaction requires
        # `spanner.databases.beginReadOnlyTransaction` permission
        # on the `session` resource.
      &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
          # reads at a specific timestamp are repeatable; the same read at
          # the same timestamp always returns the same data. If the
          # timestamp is in the future, the read will block until the
          # specified timestamp, modulo the read&#x27;s deadline.
          #
          # Useful for large scale consistent reads such as mapreduces, or
          # for coordinating many reads against a consistent snapshot of the
          # data.
          #
          # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
          # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
      &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
          #
          # This is useful for requesting fresher data than some previous
          # read, or data that is fresh enough to observe the effects of some
          # previously committed transaction whose timestamp is known.
          #
          # Note that this option can only be used in single-use transactions.
          #
          # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
          # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
      &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
          # old. The timestamp is chosen soon after the read is started.
          #
          # Guarantees that all writes that have committed more than the
          # specified number of seconds ago are visible. Because Cloud Spanner
          # chooses the exact timestamp, this mode works even if the client&#x27;s
          # local clock is substantially skewed from Cloud Spanner commit
          # timestamps.
          #
          # Useful for reading at nearby replicas without the distributed
          # timestamp negotiation overhead of `max_staleness`.
      &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
          # seconds. Guarantees that all writes that have committed more
          # than the specified number of seconds ago are visible. Because
          # Cloud Spanner chooses the exact timestamp, this mode works even if
          # the client&#x27;s local clock is substantially skewed from Cloud Spanner
          # commit timestamps.
          #
          # Useful for reading the freshest data available at a nearby
          # replica, while bounding the possible staleness if the local
          # replica has fallen behind.
          #
          # Note that this option can only be used in single-use
          # transactions.
      &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
          # the Transaction message that describes the transaction.
      &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
          # are visible.
    },
    &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
        #
        # Authorization to begin a Partitioned DML transaction requires
        # `spanner.databases.beginPartitionedDmlTransaction` permission
        # on the `session` resource.
    },
  },
  &quot;transactionId&quot;: &quot;A String&quot;, # Commit a previously-started transaction.
  &quot;mutations&quot;: [ # The mutations to be executed when this transaction commits. All
      # mutations are applied atomically, in the order they appear in
      # this list.
    { # A modification to one or more Cloud Spanner rows. Mutations can be
        # applied to a Cloud Spanner database by sending them in a
        # Commit call.
      &quot;delete&quot;: { # Arguments to delete operations. # Delete rows from a table. Succeeds whether or not the named
          # rows were present.
        &quot;keySet&quot;: { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. The primary keys of the rows within table to delete. The
            # primary keys must be specified in the order in which they appear in the
            # `PRIMARY KEY()` clause of the table&#x27;s equivalent DDL statement (the DDL
            # statement used to create the table).
            # Delete is idempotent. The transaction will succeed even if some or all
            # rows do not exist.
            # the keys are expected to be in the same table or index. The keys need
            # not be sorted in any particular way.
            #
            # If the same key is specified multiple times in the set (for example
            # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
            # behaves as if the key were only specified once.
          &quot;ranges&quot;: [ # A list of key ranges. See KeyRange for more information about
              # key range specifications.
            { # KeyRange represents a range of rows in a table or index.
                #
                # A range has a start key and an end key. These keys can be open or
                # closed, indicating if the range includes rows with that key.
                #
                # Keys are represented by lists, where the ith value in the list
                # corresponds to the ith component of the table or index primary key.
                # Individual values are encoded as described
                # here.
                #
                # For example, consider the following table definition:
                #
                #     CREATE TABLE UserEvents (
                #       UserName STRING(MAX),
                #       EventDate STRING(10)
                #     ) PRIMARY KEY(UserName, EventDate);
                #
                # The following keys name rows in this table:
                #
                #     &quot;Bob&quot;, &quot;2014-09-23&quot;
                #
                # Since the `UserEvents` table&#x27;s `PRIMARY KEY` clause names two
                # columns, each `UserEvents` key has two elements; the first is the
                # `UserName`, and the second is the `EventDate`.
                #
                # Key ranges with multiple components are interpreted
                # lexicographically by component using the table or index key&#x27;s declared
                # sort order. For example, the following range returns all events for
                # user `&quot;Bob&quot;` that occurred in the year 2015:
                #
                #     &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2015-01-01&quot;]
                #     &quot;end_closed&quot;: [&quot;Bob&quot;, &quot;2015-12-31&quot;]
                #
                # Start and end keys can omit trailing key components. This affects the
                # inclusion and exclusion of rows that exactly match the provided key
                # components: if the key is closed, then rows that exactly match the
                # provided components are included; if the key is open, then rows
                # that exactly match are not included.
                #
                # For example, the following range includes all events for `&quot;Bob&quot;` that
                # occurred during and after the year 2000:
                #
                #     &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
                #     &quot;end_closed&quot;: [&quot;Bob&quot;]
                #
                # The next example retrieves all events for `&quot;Bob&quot;`:
                #
                #     &quot;start_closed&quot;: [&quot;Bob&quot;]
                #     &quot;end_closed&quot;: [&quot;Bob&quot;]
                #
                # To retrieve events before the year 2000:
                #
                #     &quot;start_closed&quot;: [&quot;Bob&quot;]
                #     &quot;end_open&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
                #
                # The following range includes all rows in the table:
                #
                #     &quot;start_closed&quot;: []
                #     &quot;end_closed&quot;: []
                #
                # This range returns all users whose `UserName` begins with any
                # character from A to C:
                #
                #     &quot;start_closed&quot;: [&quot;A&quot;]
                #     &quot;end_open&quot;: [&quot;D&quot;]
                #
                # This range returns all users whose `UserName` begins with B:
                #
                #     &quot;start_closed&quot;: [&quot;B&quot;]
                #     &quot;end_open&quot;: [&quot;C&quot;]
                #
                # Key ranges honor column sort order. For example, suppose a table is
                # defined as follows:
                #
                #     CREATE TABLE DescendingSortedTable (
                #       Key INT64,
                #       ...
                #     ) PRIMARY KEY(Key DESC);
                #
                # The following range retrieves all rows with key values between 1
                # and 100 inclusive:
                #
                #     &quot;start_closed&quot;: [&quot;100&quot;]
                #     &quot;end_closed&quot;: [&quot;1&quot;]
                #
                # Note that 100 is passed as the start, and 1 is passed as the end,
                # because `Key` is a descending column in the schema.
              &quot;endClosed&quot;: [ # If the end is closed, then the range includes all rows whose
                  # first `len(end_closed)` key columns exactly match `end_closed`.
                &quot;&quot;,
              ],
              &quot;startClosed&quot;: [ # If the start is closed, then the range includes all rows whose
                  # first `len(start_closed)` key columns exactly match `start_closed`.
                &quot;&quot;,
              ],
              &quot;startOpen&quot;: [ # If the start is open, then the range excludes rows whose first
                  # `len(start_open)` key columns exactly match `start_open`.
                &quot;&quot;,
              ],
              &quot;endOpen&quot;: [ # If the end is open, then the range excludes rows whose first
                  # `len(end_open)` key columns exactly match `end_open`.
                &quot;&quot;,
              ],
            },
          ],
          &quot;keys&quot;: [ # A list of specific keys. Entries in `keys` should have exactly as
              # many elements as there are columns in the primary or index key
              # with which this `KeySet` is used. Individual key values are
              # encoded as described here.
            [
              &quot;&quot;,
            ],
          ],
          &quot;all&quot;: True or False, # For convenience `all` can be set to `true` to indicate that this
              # `KeySet` matches all keys in the table or index. Note that any keys
              # specified in `keys` or `ranges` are only yielded once.
        },
        &quot;table&quot;: &quot;A String&quot;, # Required. The table whose rows will be deleted.
      },
1104 &quot;replace&quot;: { # Arguments to insert, update, insert_or_update, and # Like insert, except that if the row already exists, it is
1105 # deleted, and the column values provided are inserted
1106 # instead. Unlike insert_or_update, this means any values not
1107 # explicitly written become `NULL`.
1108 #
1109 # In an interleaved table, if you create the child table with the
1110 # `ON DELETE CASCADE` annotation, then replacing a parent row
1111 # also deletes the child rows. Otherwise, you must delete the
1112 # child rows before you replace the parent row.
1113 # replace operations.
1114 &quot;table&quot;: &quot;A String&quot;, # Required. The table whose rows will be written.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001115 &quot;columns&quot;: [ # The names of the columns in table to be written.
1116 #
1117 # The list of columns must contain enough columns to allow
1118 # Cloud Spanner to derive values for all primary key columns in the
1119 # row(s) to be modified.
1120 &quot;A String&quot;,
1121 ],
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001122 &quot;values&quot;: [ # The values to be written. `values` can contain more than one
1123 # list of values. If it does, then multiple rows are written, one
1124 # for each entry in `values`. Each list in `values` must have
1125 # exactly as many entries as there are entries in columns
1126 # above. Sending multiple lists is equivalent to sending multiple
1127 # `Mutation`s, each containing one `values` entry and repeating
1128 # table and columns. Individual values in each list are
1129 # encoded as described here.
1130 [
1131 &quot;&quot;,
1132 ],
1133 ],
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001134 },
1135 &quot;insert&quot;: { # Arguments to insert, update, insert_or_update, and # Insert new rows in a table. If any of the rows already exist,
1136 # the write or transaction fails with error `ALREADY_EXISTS`.
1137 # replace operations.
1138 &quot;table&quot;: &quot;A String&quot;, # Required. The table whose rows will be written.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001139 &quot;columns&quot;: [ # The names of the columns in table to be written.
1140 #
1141 # The list of columns must contain enough columns to allow
1142 # Cloud Spanner to derive values for all primary key columns in the
1143 # row(s) to be modified.
1144 &quot;A String&quot;,
1145 ],
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001146 &quot;values&quot;: [ # The values to be written. `values` can contain more than one
1147 # list of values. If it does, then multiple rows are written, one
1148 # for each entry in `values`. Each list in `values` must have
1149 # exactly as many entries as there are entries in columns
1150 # above. Sending multiple lists is equivalent to sending multiple
1151 # `Mutation`s, each containing one `values` entry and repeating
1152 # table and columns. Individual values in each list are
1153 # encoded as described here.
1154 [
1155 &quot;&quot;,
1156 ],
1157 ],
1158 },
1159 &quot;update&quot;: { # Arguments to insert, update, insert_or_update, and # Update existing rows in a table. If any of the rows does not
1160 # already exist, the transaction fails with error `NOT_FOUND`.
1161 # replace operations.
1162 &quot;table&quot;: &quot;A String&quot;, # Required. The table whose rows will be written.
1163 &quot;columns&quot;: [ # The names of the columns in table to be written.
1164 #
1165 # The list of columns must contain enough columns to allow
1166 # Cloud Spanner to derive values for all primary key columns in the
1167 # row(s) to be modified.
1168 &quot;A String&quot;,
1169 ],
1170 &quot;values&quot;: [ # The values to be written. `values` can contain more than one
1171 # list of values. If it does, then multiple rows are written, one
1172 # for each entry in `values`. Each list in `values` must have
1173 # exactly as many entries as there are entries in columns
1174 # above. Sending multiple lists is equivalent to sending multiple
1175 # `Mutation`s, each containing one `values` entry and repeating
1176 # table and columns. Individual values in each list are
1177 # encoded as described here.
1178 [
1179 &quot;&quot;,
1180 ],
1181 ],
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001182 },
        &quot;insertOrUpdate&quot;: { # Like insert, except that if the row already exists, then
            # its column values are overwritten with the ones provided. Any
            # column values not explicitly written are preserved.
            #
            # When using insert_or_update, just as when using insert, all `NOT
            # NULL` columns in the table must be given a value. This holds true
            # even when the row already exists and will therefore actually be updated.
            # Shares the arguments of the insert, update, insert_or_update, and
            # replace operations.
          &quot;table&quot;: &quot;A String&quot;, # Required. The table whose rows will be written.
          &quot;columns&quot;: [ # The names of the columns in table to be written.
              #
              # The list of columns must contain enough columns to allow
              # Cloud Spanner to derive values for all primary key columns in the
              # row(s) to be modified.
            &quot;A String&quot;,
          ],
          &quot;values&quot;: [ # The values to be written. `values` can contain more than one
              # list of values. If it does, then multiple rows are written, one
              # for each entry in `values`. Each list in `values` must have
              # exactly as many entries as there are entries in columns
              # above. Sending multiple lists is equivalent to sending multiple
              # `Mutation`s, each containing one `values` entry and repeating
              # table and columns. Individual values in each list are
              # encoded as described here.
            [
              &quot;&quot;,
            ],
          ],
        },
      },
    ],
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The response for Commit.
    &quot;commitTimestamp&quot;: &quot;A String&quot;, # The Cloud Spanner timestamp at which the transaction committed.
  }</pre>
</div>
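The `values` lists in each mutation must match the `columns` list in length, as the schema above notes. A minimal sketch of building a Commit request body client-side that enforces this rule; the table and column names are hypothetical, not part of this API:

```python
def insert_or_update_mutation(table, columns, rows):
    """Build one insertOrUpdate Mutation dict for a Commit request.

    Each entry in `rows` must have exactly as many values as `columns`,
    mirroring the values/columns arity rule in the schema above.
    """
    for row in rows:
        if len(row) != len(columns):
            raise ValueError("each values list must match columns in length")
    return {
        "insertOrUpdate": {
            "table": table,
            "columns": list(columns),
            # Multiple lists here are equivalent to sending multiple
            # Mutations that repeat the same table and columns.
            "values": [list(row) for row in rows],
        }
    }

# Hypothetical table and columns, for illustration only.
body = {
    "mutations": [
        insert_or_update_mutation(
            "Singers",
            ["SingerId", "FirstName"],
            [["1", "Marc"], ["2", "Catalina"]],
        )
    ]
}
```

The resulting `body` dict can be passed as the `body` argument of the commit call once a transaction has been selected.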

<div class="method">
    <code class="details" id="create">create(database, body=None, x__xgafv=None)</code>
  <pre>Creates a new session. A session can be used to perform
transactions that read and/or modify data in a Cloud Spanner database.
Sessions are meant to be reused for many consecutive
transactions.

Sessions can only execute one transaction at a time. To execute
multiple concurrent read-write/write-only transactions, create
multiple sessions. Note that standalone reads and queries use a
transaction internally, and count toward the one transaction
limit.

Active sessions use additional server resources, so it is a good idea to
delete idle and unneeded sessions.
Aside from explicit deletes, Cloud Spanner may delete sessions for which no
operations are sent for more than an hour. If a session is deleted,
requests to it return `NOT_FOUND`.

Idle sessions can be kept alive by sending a trivial SQL query
periodically, e.g., `&quot;SELECT 1&quot;`.

Args:
  database: string, Required. The database in which the new session is created. (required)
  body: object, The request body.
    The object takes the form of:

{ # The request for CreateSession.
    &quot;session&quot;: { # A session in the Cloud Spanner API. # Required. The session to create.
      &quot;createTime&quot;: &quot;A String&quot;, # Output only. The timestamp when the session is created.
      &quot;name&quot;: &quot;A String&quot;, # Output only. The name of the session. This is always system-assigned.
      &quot;approximateLastUseTime&quot;: &quot;A String&quot;, # Output only. The approximate timestamp when the session is last used. It is
          # typically earlier than the actual last use time.
      &quot;labels&quot;: { # The labels for the session.
          #
          # * Label keys must be between 1 and 63 characters long and must conform to
          #   the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
          # * Label values must be between 0 and 63 characters long and must conform
          #   to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
          # * No more than 64 labels can be associated with a given session.
          #
          # See https://goo.gl/xmQnxf for more information on and examples of labels.
        &quot;a_key&quot;: &quot;A String&quot;,
      },
    },
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A session in the Cloud Spanner API.
    &quot;createTime&quot;: &quot;A String&quot;, # Output only. The timestamp when the session is created.
    &quot;name&quot;: &quot;A String&quot;, # Output only. The name of the session. This is always system-assigned.
    &quot;approximateLastUseTime&quot;: &quot;A String&quot;, # Output only. The approximate timestamp when the session is last used. It is
        # typically earlier than the actual last use time.
    &quot;labels&quot;: { # The labels for the session.
        #
        # * Label keys must be between 1 and 63 characters long and must conform to
        #   the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
        # * Label values must be between 0 and 63 characters long and must conform
        #   to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
        # * No more than 64 labels can be associated with a given session.
        #
        # See https://goo.gl/xmQnxf for more information on and examples of labels.
      &quot;a_key&quot;: &quot;A String&quot;,
    },
  }</pre>
</div>
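The label constraints quoted in the schema above can be checked client-side before issuing the create call. A sketch under stated assumptions: the validation helper below is not part of this API, and the commented-out request assumes `googleapiclient` is installed, application default credentials are configured, and uses a placeholder database path:

```python
import re

# Regular expressions quoted verbatim from the session labels schema above.
_KEY_RE = re.compile(r"^[a-z]([-a-z0-9]*[a-z0-9])?$")
_VALUE_RE = re.compile(r"^([a-z]([-a-z0-9]*[a-z0-9])?)?$")

def valid_session_labels(labels):
    """Return True if `labels` satisfies the documented constraints."""
    if len(labels) > 64:  # no more than 64 labels per session
        return False
    return all(
        1 <= len(k) <= 63 and _KEY_RE.match(k)
        and len(v) <= 63 and _VALUE_RE.match(v)
        for k, v in labels.items()
    )

labels = {"env": "dev"}
assert valid_session_labels(labels)

# Uncomment to issue the actual request (placeholder database path):
# from googleapiclient import discovery
# spanner = discovery.build("spanner", "v1")
# session = spanner.projects().instances().databases().sessions().create(
#     database="projects/p/instances/i/databases/d",
#     body={"session": {"labels": labels}}).execute()
```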

<div class="method">
    <code class="details" id="delete">delete(name, x__xgafv=None)</code>
  <pre>Ends a session, releasing server resources associated with it. This will
asynchronously trigger cancellation of any operations that are running with
this session.

Args:
  name: string, Required. The name of the session to delete. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated
      # empty messages in your APIs. A typical example is to use it as the request
      # or the response type of an API method. For instance:
      #
      #     service Foo {
      #       rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
      #     }
      #
      # The JSON representation for `Empty` is empty JSON object `{}`.
  }</pre>
</div>
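The executeBatchDml method documented below takes a per-transaction `seqno` that must increase monotonically within the transaction. A minimal client-side sketch of generating such request bodies; the class name and its fields are hypothetical, not part of this API:

```python
import itertools

class TransactionContext:
    """Tracks the next seqno for requests issued within one transaction."""

    def __init__(self, transaction_id):
        self.transaction_id = transaction_id
        self._seqno = itertools.count(1)  # yields 1, 2, 3, ...

    def next_request(self, statements):
        """Build an ExecuteBatchDml request body with the next seqno."""
        return {
            # Select the existing read-write transaction by its ID.
            "transaction": {"id": self.transaction_id},
            "statements": statements,
            # Replays of a previously handled seqno yield the first
            # execution's response; out-of-order seqnos may abort.
            "seqno": str(next(self._seqno)),
        }

txn = TransactionContext("txn-id-bytes")
first = txn.next_request([{"sql": "UPDATE T SET c = c WHERE FALSE"}])
second = txn.next_request([{"sql": "UPDATE T SET c = c WHERE FALSE"}])
```

Each successive call produces a strictly larger `seqno`, so retried requests stay idempotent within the transaction.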

<div class="method">
    <code class="details" id="executeBatchDml">executeBatchDml(session, body=None, x__xgafv=None)</code>
  <pre>Executes a batch of SQL DML statements. This method allows many statements
to be run with lower latency than submitting them sequentially with
ExecuteSql.

Statements are executed in sequential order. A request can succeed even if
a statement fails. The ExecuteBatchDmlResponse.status field in the
response provides information about the statement that failed. Clients must
inspect this field to determine whether an error occurred.

Execution stops after the first failed statement; the remaining statements
are not executed.

Args:
  session: string, Required. The session in which the DML statements should be performed. (required)
  body: object, The request body.
    The object takes the form of:

{ # The request for ExecuteBatchDml.
    &quot;statements&quot;: [ # Required. The list of statements to execute in this batch. Statements are executed
        # serially, such that the effects of statement `i` are visible to statement
        # `i+1`. Each statement must be a DML statement. Execution stops at the
        # first failed statement; the remaining statements are not executed.
        #
        # Callers must provide at least one statement.
      { # A single DML statement.
        &quot;sql&quot;: &quot;A String&quot;, # Required. The DML string.
        &quot;params&quot;: { # Parameter names and values that bind to placeholders in the DML string.
            #
            # A parameter placeholder consists of the `@` character followed by the
            # parameter name (for example, `@firstName`). Parameter names can contain
            # letters, numbers, and underscores.
            #
            # Parameters can appear anywhere that a literal value is expected. The
            # same parameter name can be used more than once, for example:
            #
            # `&quot;WHERE id &gt; @msg_id AND id &lt; @msg_id + 100&quot;`
            #
            # It is an error to execute a SQL statement with unbound parameters.
          &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
        },
        &quot;paramTypes&quot;: { # It is not always possible for Cloud Spanner to infer the right SQL type
            # from a JSON value. For example, values of type `BYTES` and values
            # of type `STRING` both appear in params as JSON strings.
            #
            # In these cases, `param_types` can be used to specify the exact
            # SQL type for some or all of the SQL statement parameters. See the
            # definition of Type for more information
            # about SQL types.
          &quot;a_key&quot;: { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
              # table cell or returned from an SQL query.
            &quot;code&quot;: &quot;A String&quot;, # Required. The TypeCode for this type.
            &quot;arrayElementType&quot;: # Object with schema name: Type # If code == ARRAY, then `array_element_type`
                # is the type of the array elements.
            &quot;structType&quot;: { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
                # provides type information for the struct&#x27;s fields.
              &quot;fields&quot;: [ # The list of fields that make up this struct. Order is
                  # significant, because values of this struct type are represented as
                  # lists, where the order of field values matches the order of
                  # fields in the StructType. In turn, the order of fields
                  # matches the order of columns in a read request, or the order of
                  # fields in the `SELECT` clause of a query.
                { # Message representing a single field of a struct.
                  &quot;type&quot;: # Object with schema name: Type # The type of the field.
                  &quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
                      # SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
                      # query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
                      # `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
                      # columns might have an empty name (e.g., `&quot;SELECT
                      # UPPER(ColName)&quot;`). Note that a query result can contain
                      # multiple fields with the same name.
                },
              ],
            },
          },
        },
      },
    ],
    &quot;seqno&quot;: &quot;A String&quot;, # Required. A per-transaction sequence number used to identify this request. This field
        # makes each request idempotent such that if the request is received multiple
        # times, at most one will succeed.
        #
        # The sequence number must be monotonically increasing within the
        # transaction. If a request arrives for the first time with an out-of-order
        # sequence number, the transaction may be aborted. Replays of previously
        # handled requests will yield the same response as the first execution.
    &quot;transaction&quot;: { # Required. The transaction to use. Must be a read-write transaction.
        # This message selects the transaction in which a Read or
        # ExecuteSql call runs.
        #
        # To protect against replays, single-use transactions are not supported. The
        # caller must either supply an existing transaction ID or begin a new
        # transaction.
        #
        # See TransactionOptions for more information about transactions.
      &quot;singleUse&quot;: { # Execute the read or SQL query in a temporary transaction.
          # This is the most efficient way to execute a transaction that
          # consists of a single SQL query.
          #
          # # Transactions
          #
          # Each session can have at most one active transaction at a time (note that
          # standalone reads and queries use a transaction internally and do count
          # towards the one transaction limit). After the active transaction is
          # completed, the session can immediately be re-used for the next transaction.
          # It is not necessary to create a new session for each transaction.
          #
          # # Transaction Modes
          #
          # Cloud Spanner supports three transaction modes:
          #
          #   1. Locking read-write. This type of transaction is the only way
          #      to write data into Cloud Spanner. These transactions rely on
          #      pessimistic locking and, if necessary, two-phase commit.
          #      Locking read-write transactions may abort, requiring the
          #      application to retry.
          #
          #   2. Snapshot read-only. This transaction type provides guaranteed
          #      consistency across several reads, but does not allow
          #      writes. Snapshot read-only transactions can be configured to
          #      read at timestamps in the past. Snapshot read-only
          #      transactions do not need to be committed.
          #
          #   3. Partitioned DML. This type of transaction is used to execute
          #      a single Partitioned DML statement. Partitioned DML partitions
          #      the key space and runs the DML statement over each partition
          #      in parallel using separate, internal transactions that commit
          #      independently. Partitioned DML transactions do not need to be
          #      committed.
          #
          # For transactions that only read, snapshot read-only transactions
          # provide simpler semantics and are almost always faster. In
          # particular, read-only transactions do not take locks, so they do
          # not conflict with read-write transactions. As a consequence of not
          # taking locks, they also do not abort, so retry loops are not needed.
          #
          # Transactions may only read/write data in a single database. They
          # may, however, read/write data in different tables within that
          # database.
          #
          # ## Locking Read-Write Transactions
          #
          # Locking transactions may be used to atomically read-modify-write
          # data anywhere in a database. This type of transaction is externally
          # consistent.
          #
          # Clients should attempt to minimize the amount of time a transaction
          # is active. Faster transactions commit with higher probability
          # and cause less contention. Cloud Spanner attempts to keep read locks
          # active as long as the transaction continues to do reads, and the
          # transaction has not been terminated by
          # Commit or
          # Rollback. Long periods of
          # inactivity at the client may cause Cloud Spanner to release a
          # transaction&#x27;s locks and abort it.
          #
          # Conceptually, a read-write transaction consists of zero or more
          # reads or SQL statements followed by
          # Commit. At any time before
          # Commit, the client can send a
          # Rollback request to abort the
          # transaction.
          #
          # ### Semantics
          #
          # Cloud Spanner can commit the transaction if all read locks it acquired
          # are still valid at commit time, and it is able to acquire write
          # locks for all writes. Cloud Spanner can abort the transaction for any
          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
          # that the transaction has not modified any user data in Cloud Spanner.
          #
          # Unless the transaction commits, Cloud Spanner makes no guarantees about
          # how long the transaction&#x27;s locks were held for. It is an error to
          # use Cloud Spanner locks for any sort of mutual exclusion other than
          # between Cloud Spanner transactions themselves.
          #
          # ### Retrying Aborted Transactions
          #
          # When a transaction aborts, the application can choose to retry the
          # whole transaction again. To maximize the chances of successfully
          # committing the retry, the client should execute the retry in the
          # same session as the original attempt. The original session&#x27;s lock
          # priority increases with each consecutive abort, meaning that each
          # attempt has a slightly better chance of success than the previous.
          #
          # Under some circumstances (e.g., many transactions attempting to
          # modify the same row(s)), a transaction can abort many times in a
          # short period before successfully committing. Thus, it is not a good
          # idea to cap the number of retries a transaction can attempt;
          # instead, it is better to limit the total amount of wall time spent
          # retrying.
          #
          # ### Idle Transactions
          #
          # A transaction is considered idle if it has no outstanding reads or
          # SQL queries and has not started a read or SQL query within the last 10
          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
          # don&#x27;t hold on to locks indefinitely. In that case, the commit will
          # fail with error `ABORTED`.
          #
          # If this behavior is undesirable, periodically executing a simple
          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
          # transaction from becoming idle.
          #
          # ## Snapshot Read-Only Transactions
          #
          # Snapshot read-only transactions provide a simpler method than
          # locking read-write transactions for doing several consistent
          # reads. However, this type of transaction does not support writes.
          #
          # Snapshot transactions do not take locks. Instead, they work by
          # choosing a Cloud Spanner timestamp, then executing all reads at that
          # timestamp. Since they do not acquire locks, they do not block
          # concurrent read-write transactions.
          #
          # Unlike locking read-write transactions, snapshot read-only
          # transactions never abort. They can fail if the chosen read
          # timestamp is garbage collected; however, the default garbage
          # collection policy is generous enough that most applications do not
          # need to worry about this in practice.
          #
          # Snapshot read-only transactions do not need to call
          # Commit or
          # Rollback (and in fact are not
          # permitted to do so).
          #
          # To execute a snapshot transaction, the client specifies a timestamp
          # bound, which tells Cloud Spanner how to choose a read timestamp.
          #
          # The types of timestamp bound are:
          #
          #   - Strong (the default).
          #   - Bounded staleness.
          #   - Exact staleness.
          #
          # If the Cloud Spanner database to be read is geographically distributed,
          # stale read-only transactions can execute more quickly than strong
          # or read-write transactions, because they are able to execute far
          # from the leader replica.
          #
          # Each type of timestamp bound is discussed in detail below.
          #
          # ### Strong
          #
          # Strong reads are guaranteed to see the effects of all transactions
          # that have committed before the start of the read. Furthermore, all
          # rows yielded by a single read are consistent with each other -- if
          # any part of the read observes a transaction, all parts of the read
          # see the transaction.
          #
          # Strong reads are not repeatable: two consecutive strong read-only
          # transactions might return inconsistent results if there are
          # concurrent writes. If consistency across reads is required, the
          # reads should be executed within a transaction or at an exact read
          # timestamp.
          #
          # See TransactionOptions.ReadOnly.strong.
          #
          # ### Exact Staleness
          #
          # These timestamp bounds execute reads at a user-specified
          # timestamp. Reads at a timestamp are guaranteed to see a consistent
          # prefix of the global transaction history: they observe
          # modifications done by all transactions with a commit timestamp &lt;=
          # the read timestamp, and observe none of the modifications done by
          # transactions with a larger commit timestamp. They will block until
          # all conflicting transactions that may be assigned commit timestamps
          # &lt;= the read timestamp have finished.
          #
          # The timestamp can either be expressed as an absolute Cloud Spanner commit
          # timestamp or a staleness relative to the current time.
          #
          # These modes do not require a &quot;negotiation phase&quot; to pick a
          # timestamp. As a result, they execute slightly faster than the
          # equivalent boundedly stale concurrency modes. On the other hand,
          # boundedly stale reads usually return fresher results.
          #
          # See TransactionOptions.ReadOnly.read_timestamp and
          # TransactionOptions.ReadOnly.exact_staleness.
          #
          # ### Bounded Staleness
          #
          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
          # subject to a user-provided staleness bound. Cloud Spanner chooses the
          # newest timestamp within the staleness bound that allows execution
          # of the reads at the closest available replica without blocking.
          #
          # All rows yielded are consistent with each other -- if any part of
          # the read observes a transaction, all parts of the read see the
          # transaction. Boundedly stale reads are not repeatable: two stale
          # reads, even if they use the same staleness bound, can execute at
          # different timestamps and thus return inconsistent results.
          #
          # Boundedly stale reads execute in two phases: the first phase
          # negotiates a timestamp among all replicas needed to serve the
          # read. In the second phase, reads are executed at the negotiated
          # timestamp.
          #
          # As a result of the two phase execution, bounded staleness reads are
          # usually a little slower than comparable exact staleness
          # reads. However, they are typically able to return fresher
          # results, and are more likely to execute at the closest replica.
          #
          # Because the timestamp negotiation requires up-front knowledge of
          # which rows will be read, it can only be used with single-use
          # read-only transactions.
          #
          # See TransactionOptions.ReadOnly.max_staleness and
          # TransactionOptions.ReadOnly.min_read_timestamp.
          #
          # ### Old Read Timestamps and Garbage Collection
          #
          # Cloud Spanner continuously garbage collects deleted and overwritten data
          # in the background to reclaim storage space. This process is known
          # as &quot;version GC&quot;. By default, version GC reclaims versions after they
          # are one hour old. Because of this, Cloud Spanner cannot perform reads
          # at read timestamps more than one hour in the past. This
          # restriction also applies to in-progress reads and/or SQL queries whose
          # timestamps become too old while executing. Reads and SQL queries with
          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
          #
          # ## Partitioned DML Transactions
          #
          # Partitioned DML transactions are used to execute DML statements with a
          # different execution strategy that provides different, and often better,
          # scalability properties for large, table-wide operations than DML in a
          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
          # should prefer using ReadWrite transactions.
          #
          # Partitioned DML partitions the keyspace and runs the DML statement on each
          # partition in separate, internal transactions. These transactions commit
          # automatically when complete, and run independently from one another.
          #
          # To reduce lock contention, this execution strategy only acquires read locks
          # on rows that match the WHERE clause of the statement. Additionally, the
          # smaller per-partition transactions hold locks for less time.
          #
          # That said, Partitioned DML is not a drop-in replacement for standard DML used
          # in ReadWrite transactions.
          #
          #  - The DML statement must be fully-partitionable. Specifically, the statement
          #    must be expressible as the union of many statements which each access only
          #    a single row of the table.
          #
          #  - The statement is not applied atomically to all rows of the table. Rather,
          #    the statement is applied atomically to partitions of the table, in
          #    independent transactions. Secondary index rows are updated atomically
          #    with the base table rows.
          #
          #  - Partitioned DML does not guarantee exactly-once execution semantics
          #    against a partition. The statement will be applied at least once to each
          #    partition. It is strongly recommended that the DML statement should be
          #    idempotent to avoid unexpected results. For instance, it is potentially
          #    dangerous to run a statement such as
          #    `UPDATE table SET column = column + 1` as it could be run multiple times
          #    against some rows.
          #
          #  - The partitions are committed automatically - there is no support for
          #    Commit or Rollback. If the call returns an error, or if the client issuing
          #    the ExecuteSql call dies, it is possible that some rows had the statement
          #    executed on them successfully. It is also possible that the statement was
          #    never executed against other rows.
          #
          #  - Partitioned DML transactions may only contain the execution of a single
          #    DML statement via ExecuteSql or ExecuteStreamingSql.
          #
          #  - If any error is encountered during the execution of the partitioned DML
          #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
          #    value that cannot be stored due to schema constraints), then the
          #    operation is stopped at that point and an error is returned. It is
          #    possible that at this point, some partitions have been committed (or even
          #    committed multiple times), and other partitions have not been run at all.
          #
          # Given the above, Partitioned DML is a good fit for large, database-wide
          # operations that are idempotent, such as deleting old rows from a very
          # large table.
        &quot;readWrite&quot;: { # Transaction may write. Message type to initiate a read-write
            # transaction. Currently this transaction type has no options.
            #
            # Authorization to begin a read-write transaction requires
            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
            # on the `session` resource.
        },
        &quot;readOnly&quot;: { # Transaction will not write. Message type to initiate a read-only
            # transaction.
            #
            # Authorization to begin a read-only transaction requires
            # `spanner.databases.beginReadOnlyTransaction` permission
            # on the `session` resource.
          &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
              # reads at a specific timestamp are repeatable; the same read at
              # the same timestamp always returns the same data. If the
              # timestamp is in the future, the read will block until the
              # specified timestamp, modulo the read&#x27;s deadline.
              #
              # Useful for large scale consistent reads such as mapreduces, or
              # for coordinating many reads against a consistent snapshot of the
              # data.
              #
              # A timestamp in RFC3339 UTC &quot;Zulu&quot; format, accurate to nanoseconds.
              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
          &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
              #
              # This is useful for requesting fresher data than some previous
              # read, or data that is fresh enough to observe the effects of some
              # previously committed transaction whose timestamp is known.
              #
              # Note that this option can only be used in single-use transactions.
              #
              # A timestamp in RFC3339 UTC &quot;Zulu&quot; format, accurate to nanoseconds.
              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
          &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
              # old. The timestamp is chosen soon after the read is started.
              #
              # Guarantees that all writes that have committed more than the
              # specified number of seconds ago are visible. Because Cloud Spanner
              # chooses the exact timestamp, this mode works even if the client&#x27;s
              # local clock is substantially skewed from Cloud Spanner commit
              # timestamps.
              #
              # Useful for reading at nearby replicas without the distributed
              # timestamp negotiation overhead of `max_staleness`.
          &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
              # seconds. Guarantees that all writes that have committed more
              # than the specified number of seconds ago are visible. Because
              # Cloud Spanner chooses the exact timestamp, this mode works even if
              # the client&#x27;s local clock is substantially skewed from Cloud Spanner
              # commit timestamps.
              #
              # Useful for reading the freshest data available at a nearby
              # replica, while bounding the possible staleness if the local
              # replica has fallen behind.
              #
              # Note that this option can only be used in single-use
              # transactions.
          &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
              # the Transaction message that describes the transaction.
          &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
              # are visible.
        },
        &quot;partitionedDml&quot;: { # Partitioned DML transaction. Message type to initiate a
            # Partitioned DML transaction.
            #
            # Authorization to begin a Partitioned DML transaction requires
            # `spanner.databases.beginPartitionedDmlTransaction` permission
            # on the `session` resource.
        },
      },
1777 &quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
1778 # it. The transaction ID of the new transaction is returned in
1779 # ResultSetMetadata.transaction, which is a Transaction.
1780 #
1781 #
1782 # Each session can have at most one active transaction at a time (note that
1783 # standalone reads and queries use a transaction internally and do count
1784 # towards the one transaction limit). After the active transaction is
1785 # completed, the session can immediately be re-used for the next transaction.
1786 # It is not necessary to create a new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction&#x27;s locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction&#x27;s locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session&#x27;s lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don&#x27;t hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp &lt;=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# &lt;= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a &quot;negotiation phase&quot; to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as &quot;version GC&quot;. By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
# should prefer using ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement should be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
&quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
&quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
&quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read&#x27;s deadline.
#
# Useful for large scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client&#x27;s
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
&quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client&#x27;s local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
&quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
&quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
&quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
&quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
},
}

x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format

Returns:
An object of the form:

{ # The response for ExecuteBatchDml. Contains a list
# of ResultSet messages, one for each DML statement that has successfully
# executed, in the same order as the statements in the request. If a statement
# fails, the status in the response body identifies the cause of the failure.
#
# To check for DML statements that failed, use the following approach:
#
# 1. Check the status in the response message. The google.rpc.Code enum
# value `OK` indicates that all statements were executed successfully.
# 2. If the status was not `OK`, check the number of result sets in the
# response. If the response contains `N` ResultSet messages, then
# statement `N+1` in the request failed.
#
# Example 1:
#
# * Request: 5 DML statements, all executed successfully.
# * Response: 5 ResultSet messages, with the status `OK`.
#
# Example 2:
#
# * Request: 5 DML statements. The third statement has a syntax error.
# * Response: 2 ResultSet messages, and a syntax error (`INVALID_ARGUMENT`)
# status. The number of ResultSet messages indicates that the third
# statement failed, and the fourth and fifth statements were not executed.
&quot;resultSets&quot;: [ # One ResultSet for each statement in the request that ran successfully,
# in the same order as the statements in the request. Each ResultSet does
# not contain any rows. The ResultSetStats in each ResultSet contain
# the number of rows modified by the statement.
#
# Only the first ResultSet in the response contains valid
# ResultSetMetadata.
{ # Results from Read or
# ExecuteSql.
&quot;stats&quot;: { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
# produced this result set. These can be requested by setting
# ExecuteSqlRequest.query_mode.
# DML statements always produce stats containing the number of rows
# modified, unless executed using the
# ExecuteSqlRequest.QueryMode.PLAN ExecuteSqlRequest.query_mode.
# Other fields may or may not be populated, based on the
# ExecuteSqlRequest.query_mode.
&quot;queryStats&quot;: { # Aggregated statistics from the execution of the query. Only present when
# the query is profiled. For example, a query could return the statistics as
# follows:
#
# {
# &quot;rows_returned&quot;: &quot;3&quot;,
# &quot;elapsed_time&quot;: &quot;1.22 secs&quot;,
# &quot;cpu_time&quot;: &quot;1.19 secs&quot;
# }
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;rowCountExact&quot;: &quot;A String&quot;, # Standard DML returns an exact count of rows that were modified.
&quot;rowCountLowerBound&quot;: &quot;A String&quot;, # Partitioned DML does not offer exactly-once semantics, so it
# returns a lower bound of the rows modified.
&quot;queryPlan&quot;: { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
&quot;planNodes&quot;: [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
# with the plan root. Each PlanNode&#x27;s `id` corresponds to its index in
# `plan_nodes`.
{ # Node information for nodes appearing in a QueryPlan.plan_nodes.
&quot;childLinks&quot;: [ # List of child node `index`es and their relationship to this parent.
{ # Metadata associated with a parent-child relationship appearing in a
# PlanNode.
&quot;childIndex&quot;: 42, # The node to which the link points.
&quot;type&quot;: &quot;A String&quot;, # The type of the link. For example, in Hash Joins this could be used to
# distinguish between the build child and the probe child, or in the case
# of the child being an output variable, to represent the tag associated
# with the output variable.
&quot;variable&quot;: &quot;A String&quot;, # Only present if the child node is SCALAR and corresponds
# to an output variable of the parent node. The field carries the name of
# the output variable.
# For example, a `TableScan` operator that reads rows from a table will
# have child links to the `SCALAR` nodes representing the output variables
# created for each column that is read by the operator. The corresponding
# `variable` fields will be set to the variable names assigned to the
# columns.
},
],
&quot;metadata&quot;: { # Attributes relevant to the node contained in a group of key-value pairs.
# For example, a Parameter Reference node could have the following
# information in its metadata:
#
# {
# &quot;parameter_reference&quot;: &quot;param1&quot;,
# &quot;parameter_type&quot;: &quot;array&quot;
# }
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
&quot;kind&quot;: &quot;A String&quot;, # Used to determine the type of node. May be needed for visualizing
# different kinds of nodes differently. For example, if the node is a
# SCALAR node, it will have a condensed representation
# which can be used to directly embed a description of the node in its
# parent.
&quot;shortRepresentation&quot;: { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
# `SCALAR` PlanNode(s).
&quot;subqueries&quot;: { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
# where the `description` string of this node references a `SCALAR`
# subquery contained in the expression subtree rooted at this node. The
# referenced `SCALAR` subquery may not necessarily be a direct child of
# this node.
&quot;a_key&quot;: 42,
},
&quot;description&quot;: &quot;A String&quot;, # A string representation of the expression subtree rooted at this node.
},
&quot;displayName&quot;: &quot;A String&quot;, # The display name for the node.
&quot;index&quot;: 42, # The `PlanNode`&#x27;s index in node list.
&quot;executionStats&quot;: { # The execution statistics associated with the node, contained in a group of
# key-value pairs. Only present if the plan was returned as a result of a
# profile query. For example, number of executions, number of rows/time per
# execution etc.
&quot;a_key&quot;: &quot;&quot;, # Properties of the object.
},
},
],
},
},
&quot;rows&quot;: [ # Each element in `rows` is a row whose format is defined by
# metadata.row_type. The ith element
# in each row matches the ith field in
# metadata.row_type. Elements are
# encoded based on type as described
# here.
[
&quot;&quot;,
],
],
&quot;metadata&quot;: { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
&quot;transaction&quot;: { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
# information about the new transaction is yielded here.
&quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
# for the transaction. Not returned by default: see
# TransactionOptions.ReadOnly.return_read_timestamp.
#
# A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
# Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
&quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
# Read,
# ExecuteSql,
# Commit, or
# Rollback calls.
#
# Single-use read-only transactions do not have IDs, because
# single-use transactions do not support multiple requests.
},
&quot;rowType&quot;: { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
# set. For example, a SQL query like `&quot;SELECT UserId, UserName FROM
# Users&quot;` could return a `row_type` value like:
#
# &quot;fields&quot;: [
# { &quot;name&quot;: &quot;UserId&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;INT64&quot; } },
# { &quot;name&quot;: &quot;UserName&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;STRING&quot; } },
# ]
&quot;fields&quot;: [ # The list of fields that make up this struct. Order is
# significant, because values of this struct type are represented as
# lists, where the order of field values matches the order of
# fields in the StructType. In turn, the order of fields
# matches the order of columns in a read request, or the order of
# fields in the `SELECT` clause of a query.
{ # Message representing a single field of a struct.
&quot;type&quot;: # Object with schema name: Type # The type of the field.
&quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
# SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
# query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
# `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
# columns might have an empty name (e.g., `&quot;SELECT
# UPPER(ColName)&quot;`). Note that a query result can contain
# multiple fields with the same name.
},
],
},
},
},
],
&quot;status&quot;: { # The `Status` type defines a logical error model that is suitable for # If all DML statements are executed successfully, the status is `OK`.
# Otherwise, the error status of the first failed statement.
# different programming environments, including REST APIs and RPC APIs. It is
# used by [gRPC](https://github.com/grpc). Each `Status` message contains
# three pieces of data: error code, error message, and error details.
#
# You can find out more about this error model and how to work with it in the
# [API Design Guide](https://cloud.google.com/apis/design/errors).
&quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
# user-facing error message should be localized and sent in the
# google.rpc.Status.details field, or localized by the client.
&quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of
# message types for APIs to use.
{
&quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
},
],
&quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
},
}</pre>
</div>
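<p>The failure-detection steps in the response description above (a status code of <code>OK</code> means every statement ran; otherwise, a response carrying N ResultSet messages means statement N+1 failed) can be sketched as a small helper. This is an illustrative sketch that assumes the response has already been deserialized into a plain dict; the helper name is hypothetical and not part of the client library.</p>

```python
def first_failed_statement(response):
    """Return the zero-based index of the first failed DML statement in a
    parsed ExecuteBatchDml response, or None if all statements succeeded.

    Per the response contract: status code 0 (google.rpc.Code OK) means every
    statement executed; otherwise, if the response carries N ResultSet
    messages, the (N+1)-th statement (zero-based index N) is the one that
    failed.
    """
    status = response.get("status", {})
    if status.get("code", 0) == 0:
        return None
    return len(response.get("resultSets", []))

# Example 2 from the docs: 5 statements, the third has a syntax error, so the
# response carries 2 ResultSets and an INVALID_ARGUMENT (code 3) status.
resp = {
    "resultSets": [{}, {}],
    "status": {"code": 3, "message": "Syntax error ..."},
}
print(first_failed_statement(resp))  # zero-based index of the third statement
```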
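<p>The snapshot read-only timestamp bounds documented in the request body above are mutually exclusive: a request sets exactly one of <code>strong</code>, <code>readTimestamp</code>, <code>minReadTimestamp</code>, <code>maxStaleness</code>, or <code>exactStaleness</code>. A minimal sketch of assembling that portion of a request body as a plain dict follows; the helper name and the example duration/timestamp literals are illustrative, not part of the client library.</p>

```python
def make_read_only_options(bound, value=None, return_read_timestamp=False):
    """Build the `readOnly` TransactionOptions message as a plain dict.

    `bound` names the timestamp bound to use; `value` is the RFC3339
    timestamp or duration string that bound requires (None for "strong").
    """
    allowed = {"strong", "readTimestamp", "minReadTimestamp",
               "maxStaleness", "exactStaleness"}
    if bound not in allowed:
        raise ValueError("unknown timestamp bound: %r" % bound)
    # "strong" is a boolean flag; the other bounds carry a string value.
    options = {"strong": True} if bound == "strong" else {bound: value}
    options["returnReadTimestamp"] = return_read_timestamp
    return {"readOnly": options}

# Bounded staleness (single-use transactions only): read data at most
# 10 seconds stale, using a duration string such as "10s".
print(make_read_only_options("maxStaleness", "10s"))

# Exact timestamp: repeatable reads at a fixed point in time, and ask
# Cloud Spanner to echo the chosen read timestamp back.
print(make_read_only_options(
    "readTimestamp", "2014-10-02T15:01:23.045123456Z", True))
```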
2333
2334<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -07002335 <code class="details" id="executeSql">executeSql(session, body=None, x__xgafv=None)</code>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002336 <pre>Executes an SQL statement, returning all results in a single reply. This
2337method cannot be used to return a result set larger than 10 MiB;
2338if the query yields more data than that, the query fails with
2339a `FAILED_PRECONDITION` error.
2340
2341Operations inside read-write transactions might return `ABORTED`. If
2342this occurs, the application should restart the transaction from
2343the beginning. See Transaction for more details.
2344
2345Larger result sets can be fetched in streaming fashion by calling
2346ExecuteStreamingSql instead.
2347
2348Args:
2349 session: string, Required. The session in which the SQL query should be performed. (required)
2350 body: object, The request body.
2351 The object takes the form of:
2352
2353{ # The request for ExecuteSql and
2354 # ExecuteStreamingSql.
2355 &quot;resumeToken&quot;: &quot;A String&quot;, # If this request is resuming a previously interrupted SQL statement
2356 # execution, `resume_token` should be copied from the last
2357 # PartialResultSet yielded before the interruption. Doing this
2358 # enables the new SQL statement execution to resume where the last one left
2359 # off. The rest of the request parameters must exactly match the
2360 # request that yielded this token.
2361 &quot;queryOptions&quot;: { # Query optimizer configuration. # Query optimizer configuration to use for the given query.
2362 &quot;optimizerVersion&quot;: &quot;A String&quot;, # An option to control the selection of optimizer version.
2363 #
2364 # This parameter allows individual queries to pick different query
2365 # optimizer versions.
2366 #
2367 # Specifying &quot;latest&quot; as a value instructs Cloud Spanner to use the
2368 # latest supported query optimizer version. If not specified, Cloud Spanner
2369 # uses the optimizer version set at the database level. Any other
2370 # positive integer (from the list of supported optimizer versions)
2371 # overrides the default optimizer version for query execution.
2372 # The list of supported optimizer versions can be queried from
2373 # SPANNER_SYS.SUPPORTED_OPTIMIZER_VERSIONS. Executing a SQL statement
2374 # with an invalid optimizer version will fail with a syntax error
2375 # (`INVALID_ARGUMENT`) status.
2376 # See
2377 # https://cloud.google.com/spanner/docs/query-optimizer/manage-query-optimizer
2378 # for more information on managing the query optimizer.
2379 #
2380 # The `optimizer_version` statement hint has precedence over this setting.
2381 },
2382 &quot;partitionToken&quot;: &quot;A String&quot;, # If present, results will be restricted to the specified partition
2383 # previously created using PartitionQuery(). There must be an exact
2384 # match for the values of fields common to this message and the
2385 # PartitionQueryRequest message used to create this partition_token.
2386 &quot;queryMode&quot;: &quot;A String&quot;, # Used to control the amount of debugging information returned in
2387 # ResultSetStats. If partition_token is set, query_mode can only
2388 # be set to QueryMode.NORMAL.
2389 &quot;transaction&quot;: { # This message is used to select the transaction in which a # The transaction to use.
2390 #
2391 # For queries, if none is provided, the default is a temporary read-only
2392 # transaction with strong concurrency.
2393 #
2394 # Standard DML statements require a read-write transaction. To protect
2395 # against replays, single-use transactions are not supported. The caller
2396 # must either supply an existing transaction ID or begin a new transaction.
2397 #
2398 # Partitioned DML requires an existing Partitioned DML transaction ID.
2399 # Read or
2400 # ExecuteSql call runs.
2401 #
2402 # See TransactionOptions for more information about transactions.
2403 &quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
2404 # This is the most efficient way to execute a transaction that
2405 # consists of a single SQL query.
2406 #
2407 #
2408 # Each session can have at most one active transaction at a time (note that
2409 # standalone reads and queries use a transaction internally and do count
2410 # towards the one transaction limit). After the active transaction is
2411 # completed, the session can immediately be re-used for the next transaction.
2412 # It is not necessary to create a new session for each transaction.
2413 #
2414 # # Transaction Modes
2415 #
2416 # Cloud Spanner supports three transaction modes:
2417 #
2418 # 1. Locking read-write. This type of transaction is the only way
2419 # to write data into Cloud Spanner. These transactions rely on
2420 # pessimistic locking and, if necessary, two-phase commit.
2421 # Locking read-write transactions may abort, requiring the
2422 # application to retry.
2423 #
2424 # 2. Snapshot read-only. This transaction type provides guaranteed
2425 # consistency across several reads, but does not allow
2426 # writes. Snapshot read-only transactions can be configured to
2427 # read at timestamps in the past. Snapshot read-only
2428 # transactions do not need to be committed.
2429 #
2430 # 3. Partitioned DML. This type of transaction is used to execute
2431 # a single Partitioned DML statement. Partitioned DML partitions
2432 # the key space and runs the DML statement over each partition
2433 # in parallel using separate, internal transactions that commit
2434 # independently. Partitioned DML transactions do not need to be
2435 # committed.
2436 #
2437 # For transactions that only read, snapshot read-only transactions
2438 # provide simpler semantics and are almost always faster. In
2439 # particular, read-only transactions do not take locks, so they do
2440 # not conflict with read-write transactions. As a consequence of not
2441 # taking locks, they also do not abort, so retry loops are not needed.
2442 #
2443 # Transactions may only read/write data in a single database. They
2444 # may, however, read/write data in different tables within that
2445 # database.
2446 #
2447 # ## Locking Read-Write Transactions
2448 #
2449 # Locking transactions may be used to atomically read-modify-write
2450 # data anywhere in a database. This type of transaction is externally
2451 # consistent.
2452 #
2453 # Clients should attempt to minimize the amount of time a transaction
2454 # is active. Faster transactions commit with higher probability
2455 # and cause less contention. Cloud Spanner attempts to keep read locks
2456 # active as long as the transaction continues to do reads, and the
2457 # transaction has not been terminated by
2458 # Commit or
2459 # Rollback. Long periods of
2460 # inactivity at the client may cause Cloud Spanner to release a
2461 # transaction&#x27;s locks and abort it.
2462 #
2463 # Conceptually, a read-write transaction consists of zero or more
2464 # reads or SQL statements followed by
2465 # Commit. At any time before
2466 # Commit, the client can send a
2467 # Rollback request to abort the
2468 # transaction.
2469 #
2470 # ### Semantics
2471 #
2472 # Cloud Spanner can commit the transaction if all read locks it acquired
2473 # are still valid at commit time, and it is able to acquire write
2474 # locks for all writes. Cloud Spanner can abort the transaction for any
2475 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
2476 # that the transaction has not modified any user data in Cloud Spanner.
2477 #
2478 # Unless the transaction commits, Cloud Spanner makes no guarantees about
2479 # how long the transaction&#x27;s locks were held for. It is an error to
2480 # use Cloud Spanner locks for any sort of mutual exclusion other than
2481 # between Cloud Spanner transactions themselves.
2482 #
2483 # ### Retrying Aborted Transactions
2484 #
2485 # When a transaction aborts, the application can choose to retry the
2486 # whole transaction again. To maximize the chances of successfully
2487 # committing the retry, the client should execute the retry in the
2488 # same session as the original attempt. The original session&#x27;s lock
2489 # priority increases with each consecutive abort, meaning that each
2490 # attempt has a slightly better chance of success than the previous.
2491 #
2492 # Under some circumstances (e.g., many transactions attempting to
2493 # modify the same row(s)), a transaction can abort many times in a
2494 # short period before successfully committing. Thus, it is not a good
2495 # idea to cap the number of retries a transaction can attempt;
2496 # instead, it is better to limit the total amount of wall time spent
2497 # retrying.
2498 #
2499 # ### Idle Transactions
2500 #
2501 # A transaction is considered idle if it has no outstanding reads or
2502 # SQL queries and has not started a read or SQL query within the last 10
2503 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
2504 # don&#x27;t hold on to locks indefinitely. In that case, the commit will
2505 # fail with error `ABORTED`.
2506 #
2507 # If this behavior is undesirable, periodically executing a simple
2508 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
2509 # transaction from becoming idle.
2510 #
2511 # ## Snapshot Read-Only Transactions
2512 #
2513 # Snapshot read-only transactions provide a simpler method than
2514 # locking read-write transactions for doing several consistent
2515 # reads. However, this type of transaction does not support writes.
2516 #
2517 # Snapshot transactions do not take locks. Instead, they work by
2518 # choosing a Cloud Spanner timestamp, then executing all reads at that
2519 # timestamp. Since they do not acquire locks, they do not block
2520 # concurrent read-write transactions.
2521 #
2522 # Unlike locking read-write transactions, snapshot read-only
2523 # transactions never abort. They can fail if the chosen read
2524 # timestamp is garbage collected; however, the default garbage
2525 # collection policy is generous enough that most applications do not
2526 # need to worry about this in practice.
2527 #
2528 # Snapshot read-only transactions do not need to call
2529 # Commit or
2530 # Rollback (and in fact are not
2531 # permitted to do so).
2532 #
2533 # To execute a snapshot transaction, the client specifies a timestamp
2534 # bound, which tells Cloud Spanner how to choose a read timestamp.
2535 #
2536 # The types of timestamp bound are:
2537 #
2538 # - Strong (the default).
2539 # - Bounded staleness.
2540 # - Exact staleness.
2541 #
2542 # If the Cloud Spanner database to be read is geographically distributed,
2543 # stale read-only transactions can execute more quickly than strong
2544 # or read-write transactions, because they are able to execute far
2545 # from the leader replica.
2546 #
2547 # Each type of timestamp bound is discussed in detail below.
2548 #
2549 # ### Strong
2550 #
2551 # Strong reads are guaranteed to see the effects of all transactions
2552 # that have committed before the start of the read. Furthermore, all
2553 # rows yielded by a single read are consistent with each other -- if
2554 # any part of the read observes a transaction, all parts of the read
2555 # see the transaction.
2556 #
2557 # Strong reads are not repeatable: two consecutive strong read-only
2558 # transactions might return inconsistent results if there are
2559 # concurrent writes. If consistency across reads is required, the
2560 # reads should be executed within a transaction or at an exact read
2561 # timestamp.
2562 #
2563 # See TransactionOptions.ReadOnly.strong.
2564 #
2565 # ### Exact Staleness
2566 #
2567 # These timestamp bounds execute reads at a user-specified
2568 # timestamp. Reads at a timestamp are guaranteed to see a consistent
2569 # prefix of the global transaction history: they observe
2570 # modifications done by all transactions with a commit timestamp &lt;=
2571 # the read timestamp, and observe none of the modifications done by
2572 # transactions with a larger commit timestamp. They will block until
2573 # all conflicting transactions that may be assigned commit timestamps
2574 # &lt;= the read timestamp have finished.
2575 #
2576 # The timestamp can either be expressed as an absolute Cloud Spanner commit
2577 # timestamp or a staleness relative to the current time.
2578 #
2579 # These modes do not require a &quot;negotiation phase&quot; to pick a
2580 # timestamp. As a result, they execute slightly faster than the
2581 # equivalent boundedly stale concurrency modes. On the other hand,
2582 # boundedly stale reads usually return fresher results.
2583 #
2584 # See TransactionOptions.ReadOnly.read_timestamp and
2585 # TransactionOptions.ReadOnly.exact_staleness.
2586 #
2587 # ### Bounded Staleness
2588 #
2589 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
2590 # subject to a user-provided staleness bound. Cloud Spanner chooses the
2591 # newest timestamp within the staleness bound that allows execution
2592 # of the reads at the closest available replica without blocking.
2593 #
2594 # All rows yielded are consistent with each other -- if any part of
2595 # the read observes a transaction, all parts of the read see the
2596 # transaction. Boundedly stale reads are not repeatable: two stale
2597 # reads, even if they use the same staleness bound, can execute at
2598 # different timestamps and thus return inconsistent results.
2599 #
2600 # Boundedly stale reads execute in two phases: the first phase
2601 # negotiates a timestamp among all replicas needed to serve the
2602 # read. In the second phase, reads are executed at the negotiated
2603 # timestamp.
2604 #
2605 # As a result of the two-phase execution, bounded staleness reads are
2606 # usually a little slower than comparable exact staleness
2607 # reads. However, they are typically able to return fresher
2608 # results, and are more likely to execute at the closest replica.
2609 #
2610 # Because the timestamp negotiation requires up-front knowledge of
2611 # which rows will be read, it can only be used with single-use
2612 # read-only transactions.
2613 #
2614 # See TransactionOptions.ReadOnly.max_staleness and
2615 # TransactionOptions.ReadOnly.min_read_timestamp.
2616 #
2617 # ### Old Read Timestamps and Garbage Collection
2618 #
2619 # Cloud Spanner continuously garbage collects deleted and overwritten data
2620 # in the background to reclaim storage space. This process is known
2621 # as &quot;version GC&quot;. By default, version GC reclaims versions after they
2622 # are one hour old. Because of this, Cloud Spanner cannot perform reads
2623 # at read timestamps more than one hour in the past. This
2624 # restriction also applies to in-progress reads and/or SQL queries whose
2625 # timestamp become too old while executing. Reads and SQL queries with
2626 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
2627 #
2628 # ## Partitioned DML Transactions
2629 #
2630 # Partitioned DML transactions are used to execute DML statements with a
2631 # different execution strategy that provides different, and often better,
2632 # scalability properties for large, table-wide operations than DML in a
2633 # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
2634 # should prefer using ReadWrite transactions.
2635 #
2636 # Partitioned DML partitions the keyspace and runs the DML statement on each
2637 # partition in separate, internal transactions. These transactions commit
2638 # automatically when complete, and run independently from one another.
2639 #
2640 # To reduce lock contention, this execution strategy only acquires read locks
2641 # on rows that match the WHERE clause of the statement. Additionally, the
2642 # smaller per-partition transactions hold locks for less time.
2643 #
2644 # That said, Partitioned DML is not a drop-in replacement for standard DML used
2645 # in ReadWrite transactions.
2646 #
2647 # - The DML statement must be fully-partitionable. Specifically, the statement
2648 # must be expressible as the union of many statements which each access only
2649 # a single row of the table.
2650 #
2651 # - The statement is not applied atomically to all rows of the table. Rather,
2652 # the statement is applied atomically to partitions of the table, in
2653 # independent transactions. Secondary index rows are updated atomically
2654 # with the base table rows.
2655 #
2656 # - Partitioned DML does not guarantee exactly-once execution semantics
2657 # against a partition. The statement will be applied at least once to each
2658 # partition. It is strongly recommended that the DML statement should be
2659 # idempotent to avoid unexpected results. For instance, it is potentially
2660 # dangerous to run a statement such as
2661 # `UPDATE table SET column = column + 1` as it could be run multiple times
2662 # against some rows.
2663 #
2664 # - The partitions are committed automatically - there is no support for
2665 # Commit or Rollback. If the call returns an error, or if the client issuing
2666 # the ExecuteSql call dies, it is possible that some rows had the statement
2667 # executed on them successfully. It is also possible that statement was
2668 # never executed against other rows.
2669 #
2670 # - Partitioned DML transactions may only contain the execution of a single
2671 # DML statement via ExecuteSql or ExecuteStreamingSql.
2672 #
2673 # - If any error is encountered during the execution of the partitioned DML
2674 # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
2675 # value that cannot be stored due to schema constraints), then the
2676 # operation is stopped at that point and an error is returned. It is
2677 # possible that at this point, some partitions have been committed (or even
2678 # committed multiple times), and other partitions have not been run at all.
2679 #
2680 # Given the above, Partitioned DML is a good fit for large, database-wide
2681 # operations that are idempotent, such as deleting old rows from a very large
2682 # table.
2683 &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
2684 #
2685 # Authorization to begin a read-write transaction requires
2686 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
2687 # on the `session` resource.
2688 # transaction type has no options.
2689 },
2690 &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
2691 #
2692 # Authorization to begin a read-only transaction requires
2693 # `spanner.databases.beginReadOnlyTransaction` permission
2694 # on the `session` resource.
2695 &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
2696 # reads at a specific timestamp are repeatable; the same read at
2697 # the same timestamp always returns the same data. If the
2698 # timestamp is in the future, the read will block until the
2699 # specified timestamp, modulo the read&#x27;s deadline.
2700 #
2701 # Useful for large scale consistent reads such as mapreduces, or
2702 # for coordinating many reads against a consistent snapshot of the
2703 # data.
2704 #
2705 # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
2706 # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
2707 &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
2708 #
2709 # This is useful for requesting fresher data than some previous
2710 # read, or data that is fresh enough to observe the effects of some
2711 # previously committed transaction whose timestamp is known.
2712 #
2713 # Note that this option can only be used in single-use transactions.
2714 #
2715 # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
2716 # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
2717 &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
2718 # old. The timestamp is chosen soon after the read is started.
2719 #
2720 # Guarantees that all writes that have committed more than the
2721 # specified number of seconds ago are visible. Because Cloud Spanner
2722 # chooses the exact timestamp, this mode works even if the client&#x27;s
2723 # local clock is substantially skewed from Cloud Spanner commit
2724 # timestamps.
2725 #
2726 # Useful for reading at nearby replicas without the distributed
2727 # timestamp negotiation overhead of `max_staleness`.
2728 &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
2729 # seconds. Guarantees that all writes that have committed more
2730 # than the specified number of seconds ago are visible. Because
2731 # Cloud Spanner chooses the exact timestamp, this mode works even if
2732 # the client&#x27;s local clock is substantially skewed from Cloud Spanner
2733 # commit timestamps.
2734 #
2735 # Useful for reading the freshest data available at a nearby
2736 # replica, while bounding the possible staleness if the local
2737 # replica has fallen behind.
2738 #
2739 # Note that this option can only be used in single-use
2740 # transactions.
2741 &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
2742 # the Transaction message that describes the transaction.
2743 &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
2744 # are visible.
2745 },
2746 &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
2747 #
2748 # Authorization to begin a Partitioned DML transaction requires
2749 # `spanner.databases.beginPartitionedDmlTransaction` permission
2750 # on the `session` resource.
2751 },
2752 },
2753 &quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
2754 # it. The transaction ID of the new transaction is returned in
2755 # ResultSetMetadata.transaction, which is a Transaction.
2756 #
2757 #
2758 # Each session can have at most one active transaction at a time (note that
2759 # standalone reads and queries use a transaction internally and do count
2760 # towards the one transaction limit). After the active transaction is
2761 # completed, the session can immediately be re-used for the next transaction.
2762 # It is not necessary to create a new session for each transaction.
2763 #
2764 # # Transaction Modes
2765 #
2766 # Cloud Spanner supports three transaction modes:
2767 #
2768 # 1. Locking read-write. This type of transaction is the only way
2769 # to write data into Cloud Spanner. These transactions rely on
2770 # pessimistic locking and, if necessary, two-phase commit.
2771 # Locking read-write transactions may abort, requiring the
2772 # application to retry.
2773 #
2774 # 2. Snapshot read-only. This transaction type provides guaranteed
2775 # consistency across several reads, but does not allow
2776 # writes. Snapshot read-only transactions can be configured to
2777 # read at timestamps in the past. Snapshot read-only
2778 # transactions do not need to be committed.
2779 #
2780 # 3. Partitioned DML. This type of transaction is used to execute
2781 # a single Partitioned DML statement. Partitioned DML partitions
2782 # the key space and runs the DML statement over each partition
2783 # in parallel using separate, internal transactions that commit
2784 # independently. Partitioned DML transactions do not need to be
2785 # committed.
2786 #
2787 # For transactions that only read, snapshot read-only transactions
2788 # provide simpler semantics and are almost always faster. In
2789 # particular, read-only transactions do not take locks, so they do
2790 # not conflict with read-write transactions. As a consequence of not
2791 # taking locks, they also do not abort, so retry loops are not needed.
2792 #
2793 # Transactions may only read/write data in a single database. They
2794 # may, however, read/write data in different tables within that
2795 # database.
2796 #
2797 # ## Locking Read-Write Transactions
2798 #
2799 # Locking transactions may be used to atomically read-modify-write
2800 # data anywhere in a database. This type of transaction is externally
2801 # consistent.
2802 #
2803 # Clients should attempt to minimize the amount of time a transaction
2804 # is active. Faster transactions commit with higher probability
2805 # and cause less contention. Cloud Spanner attempts to keep read locks
2806 # active as long as the transaction continues to do reads, and the
2807 # transaction has not been terminated by
2808 # Commit or
2809 # Rollback. Long periods of
2810 # inactivity at the client may cause Cloud Spanner to release a
2811 # transaction&#x27;s locks and abort it.
2812 #
2813 # Conceptually, a read-write transaction consists of zero or more
2814 # reads or SQL statements followed by
2815 # Commit. At any time before
2816 # Commit, the client can send a
2817 # Rollback request to abort the
2818 # transaction.
2819 #
2820 # ### Semantics
2821 #
2822 # Cloud Spanner can commit the transaction if all read locks it acquired
2823 # are still valid at commit time, and it is able to acquire write
2824 # locks for all writes. Cloud Spanner can abort the transaction for any
2825 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
2826 # that the transaction has not modified any user data in Cloud Spanner.
2827 #
2828 # Unless the transaction commits, Cloud Spanner makes no guarantees about
2829 # how long the transaction&#x27;s locks were held for. It is an error to
2830 # use Cloud Spanner locks for any sort of mutual exclusion other than
2831 # between Cloud Spanner transactions themselves.
2832 #
2833 # ### Retrying Aborted Transactions
2834 #
2835 # When a transaction aborts, the application can choose to retry the
2836 # whole transaction again. To maximize the chances of successfully
2837 # committing the retry, the client should execute the retry in the
2838 # same session as the original attempt. The original session&#x27;s lock
2839 # priority increases with each consecutive abort, meaning that each
2840 # attempt has a slightly better chance of success than the previous.
2841 #
2842 # Under some circumstances (e.g., many transactions attempting to
2843 # modify the same row(s)), a transaction can abort many times in a
2844 # short period before successfully committing. Thus, it is not a good
2845 # idea to cap the number of retries a transaction can attempt;
2846 # instead, it is better to limit the total amount of wall time spent
2847 # retrying.
2848 #
2849 # ### Idle Transactions
2850 #
2851 # A transaction is considered idle if it has no outstanding reads or
2852 # SQL queries and has not started a read or SQL query within the last 10
2853 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
2854 # don&#x27;t hold on to locks indefinitely. In that case, the commit will
2855 # fail with error `ABORTED`.
2856 #
2857 # If this behavior is undesirable, periodically executing a simple
2858 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
2859 # transaction from becoming idle.
2860 #
2861 # ## Snapshot Read-Only Transactions
2862 #
2863 # Snapshot read-only transactions provide a simpler method than
2864 # locking read-write transactions for doing several consistent
2865 # reads. However, this type of transaction does not support writes.
2866 #
2867 # Snapshot transactions do not take locks. Instead, they work by
2868 # choosing a Cloud Spanner timestamp, then executing all reads at that
2869 # timestamp. Since they do not acquire locks, they do not block
2870 # concurrent read-write transactions.
2871 #
2872 # Unlike locking read-write transactions, snapshot read-only
2873 # transactions never abort. They can fail if the chosen read
2874 # timestamp is garbage collected; however, the default garbage
2875 # collection policy is generous enough that most applications do not
2876 # need to worry about this in practice.
2877 #
2878 # Snapshot read-only transactions do not need to call
2879 # Commit or
2880 # Rollback (and in fact are not
2881 # permitted to do so).
2882 #
2883 # To execute a snapshot transaction, the client specifies a timestamp
2884 # bound, which tells Cloud Spanner how to choose a read timestamp.
2885 #
2886 # The types of timestamp bound are:
2887 #
2888 # - Strong (the default).
2889 # - Bounded staleness.
2890 # - Exact staleness.
2891 #
2892 # If the Cloud Spanner database to be read is geographically distributed,
2893 # stale read-only transactions can execute more quickly than strong
2894 # or read-write transactions, because they are able to execute far
2895 # from the leader replica.
2896 #
2897 # Each type of timestamp bound is discussed in detail below.
2898 #
2899 # ### Strong
2900 #
2901 # Strong reads are guaranteed to see the effects of all transactions
2902 # that have committed before the start of the read. Furthermore, all
2903 # rows yielded by a single read are consistent with each other -- if
2904 # any part of the read observes a transaction, all parts of the read
2905 # see the transaction.
2906 #
2907 # Strong reads are not repeatable: two consecutive strong read-only
2908 # transactions might return inconsistent results if there are
2909 # concurrent writes. If consistency across reads is required, the
2910 # reads should be executed within a transaction or at an exact read
2911 # timestamp.
2912 #
2913 # See TransactionOptions.ReadOnly.strong.
2914 #
2915 # ### Exact Staleness
2916 #
2917 # These timestamp bounds execute reads at a user-specified
2918 # timestamp. Reads at a timestamp are guaranteed to see a consistent
2919 # prefix of the global transaction history: they observe
2920 # modifications done by all transactions with a commit timestamp &lt;=
2921 # the read timestamp, and observe none of the modifications done by
2922 # transactions with a larger commit timestamp. They will block until
2923 # all conflicting transactions that may be assigned commit timestamps
2924 # &lt;= the read timestamp have finished.
2925 #
2926 # The timestamp can either be expressed as an absolute Cloud Spanner commit
2927 # timestamp or a staleness relative to the current time.
2928 #
2929 # These modes do not require a &quot;negotiation phase&quot; to pick a
2930 # timestamp. As a result, they execute slightly faster than the
2931 # equivalent boundedly stale concurrency modes. On the other hand,
2932 # boundedly stale reads usually return fresher results.
2933 #
2934 # See TransactionOptions.ReadOnly.read_timestamp and
2935 # TransactionOptions.ReadOnly.exact_staleness.
2936 #
2937 # ### Bounded Staleness
2938 #
2939 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
2940 # subject to a user-provided staleness bound. Cloud Spanner chooses the
2941 # newest timestamp within the staleness bound that allows execution
2942 # of the reads at the closest available replica without blocking.
2943 #
2944 # All rows yielded are consistent with each other -- if any part of
2945 # the read observes a transaction, all parts of the read see the
2946 # transaction. Boundedly stale reads are not repeatable: two stale
2947 # reads, even if they use the same staleness bound, can execute at
2948 # different timestamps and thus return inconsistent results.
2949 #
2950 # Boundedly stale reads execute in two phases: the first phase
2951 # negotiates a timestamp among all replicas needed to serve the
2952 # read. In the second phase, reads are executed at the negotiated
2953 # timestamp.
2954 #
2955 # As a result of the two-phase execution, bounded staleness reads are
2956 # usually a little slower than comparable exact staleness
2957 # reads. However, they are typically able to return fresher
2958 # results, and are more likely to execute at the closest replica.
2959 #
2960 # Because the timestamp negotiation requires up-front knowledge of
2961 # which rows will be read, it can only be used with single-use
2962 # read-only transactions.
2963 #
2964 # See TransactionOptions.ReadOnly.max_staleness and
2965 # TransactionOptions.ReadOnly.min_read_timestamp.
        #
        # ### Old Read Timestamps and Garbage Collection
        #
        # Cloud Spanner continuously garbage collects deleted and overwritten data
        # in the background to reclaim storage space. This process is known
        # as &quot;version GC&quot;. By default, version GC reclaims versions after they
        # are one hour old. Because of this, Cloud Spanner cannot perform reads
        # at read timestamps more than one hour in the past. This
        # restriction also applies to in-progress reads and/or SQL queries whose
        # timestamps become too old while executing. Reads and SQL queries with
        # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
        #
        # ## Partitioned DML Transactions
        #
        # Partitioned DML transactions are used to execute DML statements with a
        # different execution strategy that provides different, and often better,
        # scalability properties for large, table-wide operations than DML in a
        # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
        # should prefer using ReadWrite transactions.
        #
        # Partitioned DML partitions the keyspace and runs the DML statement on each
        # partition in separate, internal transactions. These transactions commit
        # automatically when complete, and run independently from one another.
        #
        # To reduce lock contention, this execution strategy only acquires read locks
        # on rows that match the WHERE clause of the statement. Additionally, the
        # smaller per-partition transactions hold locks for less time.
        #
        # That said, Partitioned DML is not a drop-in replacement for standard DML used
        # in ReadWrite transactions.
        #
        # - The DML statement must be fully-partitionable. Specifically, the statement
        #   must be expressible as the union of many statements which each access only
        #   a single row of the table.
        #
        # - The statement is not applied atomically to all rows of the table. Rather,
        #   the statement is applied atomically to partitions of the table, in
        #   independent transactions. Secondary index rows are updated atomically
        #   with the base table rows.
        #
        # - Partitioned DML does not guarantee exactly-once execution semantics
        #   against a partition. The statement will be applied at least once to each
        #   partition. It is strongly recommended that the DML statement should be
        #   idempotent to avoid unexpected results. For instance, it is potentially
        #   dangerous to run a statement such as
        #   `UPDATE table SET column = column + 1` as it could be run multiple times
        #   against some rows.
        #
        # - The partitions are committed automatically - there is no support for
        #   Commit or Rollback. If the call returns an error, or if the client issuing
        #   the ExecuteSql call dies, it is possible that some rows had the statement
        #   executed on them successfully. It is also possible that the statement was
        #   never executed against other rows.
        #
        # - Partitioned DML transactions may only contain the execution of a single
        #   DML statement via ExecuteSql or ExecuteStreamingSql.
        #
        # - If any error is encountered during the execution of the partitioned DML
        #   operation (for instance, a UNIQUE INDEX violation, division by zero, or a
        #   value that cannot be stored due to schema constraints), then the
        #   operation is stopped at that point and an error is returned. It is
        #   possible that at this point, some partitions have been committed (or even
        #   committed multiple times), and other partitions have not been run at all.
        #
        # Given the above, Partitioned DML is a good fit for large, database-wide
        # operations that are idempotent, such as deleting old rows from a very large
        # table.
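As a sketch of the idempotency requirement (table and column names are made up, and the transaction id is a placeholder for one returned by BeginTransaction with a `partitionedDml` option):

```python
# A fully-partitionable, idempotent statement: each row is deleted
# independently, and re-running it on a partition is harmless.
partitioned_dml_txn_id = "placeholder-id-from-BeginTransaction"
body = {
    "transaction": {"id": partitioned_dml_txn_id},
    "seqno": "1",
    "sql": "DELETE FROM Sessions WHERE Expired = true",
    # By contrast, "UPDATE Counters SET Hits = Hits + 1" would be unsafe
    # here, since a partition may apply the statement more than once.
}
```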
      &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
          #
          # Authorization to begin a read-write transaction requires
          # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
          # on the `session` resource.
          # transaction type has no options.
      },
      &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
          #
          # Authorization to begin a read-only transaction requires
          # `spanner.databases.beginReadOnlyTransaction` permission
          # on the `session` resource.
        &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
            # reads at a specific timestamp are repeatable; the same read at
            # the same timestamp always returns the same data. If the
            # timestamp is in the future, the read will block until the
            # specified timestamp, modulo the read&#x27;s deadline.
            #
            # Useful for large scale consistent reads such as mapreduces, or
            # for coordinating many reads against a consistent snapshot of the
            # data.
            #
            # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
            # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
        &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
            #
            # This is useful for requesting fresher data than some previous
            # read, or data that is fresh enough to observe the effects of some
            # previously committed transaction whose timestamp is known.
            #
            # Note that this option can only be used in single-use transactions.
            #
            # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
            # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
        &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
            # old. The timestamp is chosen soon after the read is started.
            #
            # Guarantees that all writes that have committed more than the
            # specified number of seconds ago are visible. Because Cloud Spanner
            # chooses the exact timestamp, this mode works even if the client&#x27;s
            # local clock is substantially skewed from Cloud Spanner commit
            # timestamps.
            #
            # Useful for reading at nearby replicas without the distributed
            # timestamp negotiation overhead of `max_staleness`.
        &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
            # seconds. Guarantees that all writes that have committed more
            # than the specified number of seconds ago are visible. Because
            # Cloud Spanner chooses the exact timestamp, this mode works even if
            # the client&#x27;s local clock is substantially skewed from Cloud Spanner
            # commit timestamps.
            #
            # Useful for reading the freshest data available at a nearby
            # replica, while bounding the possible staleness if the local
            # replica has fallen behind.
            #
            # Note that this option can only be used in single-use
            # transactions.
        &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
            # the Transaction message that describes the transaction.
        &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
            # are visible.
      },
      &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
          #
          # Authorization to begin a Partitioned DML transaction requires
          # `spanner.databases.beginPartitionedDmlTransaction` permission
          # on the `session` resource.
      },
    },
    &quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
  },
  &quot;seqno&quot;: &quot;A String&quot;, # A per-transaction sequence number used to identify this request. This field
      # makes each request idempotent such that if the request is received multiple
      # times, at most one will succeed.
      #
      # The sequence number must be monotonically increasing within the
      # transaction. If a request arrives for the first time with an out-of-order
      # sequence number, the transaction may be aborted. Replays of previously
      # handled requests will yield the same response as the first execution.
      #
      # Required for DML statements. Ignored for queries.
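A minimal sketch of maintaining that monotonic sequence client-side (the helper and names are hypothetical, not part of the API):

```python
import itertools

# Each DML request in one read-write transaction gets the next seqno,
# so a retried or replayed request is recognized and applied at most once.
_seqnos = itertools.count(1)

def dml_body(transaction_id, sql):
    return {
        "transaction": {"id": transaction_id},
        "sql": sql,
        "seqno": str(next(_seqnos)),
    }
```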
  &quot;paramTypes&quot;: { # It is not always possible for Cloud Spanner to infer the right SQL type
      # from a JSON value. For example, values of type `BYTES` and values
      # of type `STRING` both appear in params as JSON strings.
      #
      # In these cases, `param_types` can be used to specify the exact
      # SQL type for some or all of the SQL statement parameters. See the
      # definition of Type for more information
      # about SQL types.
    &quot;a_key&quot;: { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
        # table cell or returned from an SQL query.
      &quot;code&quot;: &quot;A String&quot;, # Required. The TypeCode for this type.
      &quot;arrayElementType&quot;: # Object with schema name: Type # If code == ARRAY, then `array_element_type`
          # is the type of the array elements.
      &quot;structType&quot;: { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
          # provides type information for the struct&#x27;s fields.
        &quot;fields&quot;: [ # The list of fields that make up this struct. Order is
            # significant, because values of this struct type are represented as
            # lists, where the order of field values matches the order of
            # fields in the StructType. In turn, the order of fields
            # matches the order of columns in a read request, or the order of
            # fields in the `SELECT` clause of a query.
          { # Message representing a single field of a struct.
            &quot;type&quot;: # Object with schema name: Type # The type of the field.
            &quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
                # SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
                # query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
                # `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
                # columns might have an empty name (e.g., `&quot;SELECT
                # UPPER(ColName)&quot;`). Note that a query result can contain
                # multiple fields with the same name.
          },
        ],
      },
    },
  },
  &quot;params&quot;: { # Parameter names and values that bind to placeholders in the SQL string.
      #
      # A parameter placeholder consists of the `@` character followed by the
      # parameter name (for example, `@firstName`). Parameter names can contain
      # letters, numbers, and underscores.
      #
      # Parameters can appear anywhere that a literal value is expected. The same
      # parameter name can be used more than once, for example:
      #
      # `&quot;WHERE id &gt; @msg_id AND id &lt; @msg_id + 100&quot;`
      #
      # It is an error to execute a SQL statement with unbound parameters.
    &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
  },
  &quot;sql&quot;: &quot;A String&quot;, # Required. The SQL string.
}
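Tying `sql`, `params`, and `param_types` together, a sketch of a typed parameter binding (table and parameter names are illustrative; the explicit `BYTES` code is what disambiguates the JSON string value):

```python
# @userId arrives as a JSON string either way; paramTypes pins it to
# BYTES rather than letting it default to STRING.
body = {
    "sql": "SELECT u.UserName FROM Users AS u WHERE u.UserId = @userId",
    "params": {"userId": "dXNlci0xMjM="},  # base64, as BYTES values are sent
    "paramTypes": {"userId": {"code": "BYTES"}},
}
```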

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Results from Read or
        # ExecuteSql.
      &quot;stats&quot;: { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
          # produced this result set. These can be requested by setting
          # ExecuteSqlRequest.query_mode.
          # DML statements always produce stats containing the number of rows
          # modified, unless executed using the
          # ExecuteSqlRequest.QueryMode.PLAN ExecuteSqlRequest.query_mode.
          # Other fields may or may not be populated, based on the
          # ExecuteSqlRequest.query_mode.
        &quot;queryStats&quot;: { # Aggregated statistics from the execution of the query. Only present when
            # the query is profiled. For example, a query could return the statistics as
            # follows:
            #
            # {
            #   &quot;rows_returned&quot;: &quot;3&quot;,
            #   &quot;elapsed_time&quot;: &quot;1.22 secs&quot;,
            #   &quot;cpu_time&quot;: &quot;1.19 secs&quot;
            # }
          &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
        },
        &quot;rowCountExact&quot;: &quot;A String&quot;, # Standard DML returns an exact count of rows that were modified.
        &quot;rowCountLowerBound&quot;: &quot;A String&quot;, # Partitioned DML does not offer exactly-once semantics, so it
            # returns a lower bound of the rows modified.
        &quot;queryPlan&quot;: { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
          &quot;planNodes&quot;: [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
              # with the plan root. Each PlanNode&#x27;s `id` corresponds to its index in
              # `plan_nodes`.
            { # Node information for nodes appearing in a QueryPlan.plan_nodes.
              &quot;childLinks&quot;: [ # List of child node `index`es and their relationship to this parent.
                { # Metadata associated with a parent-child relationship appearing in a
                    # PlanNode.
                  &quot;childIndex&quot;: 42, # The node to which the link points.
                  &quot;type&quot;: &quot;A String&quot;, # The type of the link. For example, in Hash Joins this could be used to
                      # distinguish between the build child and the probe child, or in the case
                      # of the child being an output variable, to represent the tag associated
                      # with the output variable.
                  &quot;variable&quot;: &quot;A String&quot;, # Only present if the child node is SCALAR and corresponds
                      # to an output variable of the parent node. The field carries the name of
                      # the output variable.
                      # For example, a `TableScan` operator that reads rows from a table will
                      # have child links to the `SCALAR` nodes representing the output variables
                      # created for each column that is read by the operator. The corresponding
                      # `variable` fields will be set to the variable names assigned to the
                      # columns.
                },
              ],
              &quot;metadata&quot;: { # Attributes relevant to the node contained in a group of key-value pairs.
                  # For example, a Parameter Reference node could have the following
                  # information in its metadata:
                  #
                  # {
                  #   &quot;parameter_reference&quot;: &quot;param1&quot;,
                  #   &quot;parameter_type&quot;: &quot;array&quot;
                  # }
                &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
              },
              &quot;kind&quot;: &quot;A String&quot;, # Used to determine the type of node. May be needed for visualizing
                  # different kinds of nodes differently. For example, if the node is a
                  # SCALAR node, it will have a condensed representation
                  # which can be used to directly embed a description of the node in its
                  # parent.
              &quot;shortRepresentation&quot;: { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
                  # `SCALAR` PlanNode(s).
                &quot;subqueries&quot;: { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
                    # where the `description` string of this node references a `SCALAR`
                    # subquery contained in the expression subtree rooted at this node. The
                    # referenced `SCALAR` subquery may not necessarily be a direct child of
                    # this node.
                  &quot;a_key&quot;: 42,
                },
                &quot;description&quot;: &quot;A String&quot;, # A string representation of the expression subtree rooted at this node.
              },
              &quot;displayName&quot;: &quot;A String&quot;, # The display name for the node.
              &quot;index&quot;: 42, # The `PlanNode`&#x27;s index in the node list.
              &quot;executionStats&quot;: { # The execution statistics associated with the node, contained in a group of
                  # key-value pairs. Only present if the plan was returned as a result of a
                  # profile query. For example, number of executions, number of rows/time per
                  # execution etc.
                &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
              },
            },
          ],
        },
      },
      &quot;rows&quot;: [ # Each element in `rows` is a row whose format is defined by
          # metadata.row_type. The ith element
          # in each row matches the ith field in
          # metadata.row_type. Elements are
          # encoded based on type as described
          # here.
        [
          &quot;&quot;,
        ],
      ],
      &quot;metadata&quot;: { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
        &quot;transaction&quot;: { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
            # information about the new transaction is yielded here.
          &quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
              # for the transaction. Not returned by default: see
              # TransactionOptions.ReadOnly.return_read_timestamp.
              #
              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
          &quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
              # Read,
              # ExecuteSql,
              # Commit, or
              # Rollback calls.
              #
              # Single-use read-only transactions do not have IDs, because
              # single-use transactions do not support multiple requests.
        },
        &quot;rowType&quot;: { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
            # set. For example, a SQL query like `&quot;SELECT UserId, UserName FROM
            # Users&quot;` could return a `row_type` value like:
            #
            # &quot;fields&quot;: [
            #   { &quot;name&quot;: &quot;UserId&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;INT64&quot; } },
            #   { &quot;name&quot;: &quot;UserName&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;STRING&quot; } },
            # ]
          &quot;fields&quot;: [ # The list of fields that make up this struct. Order is
              # significant, because values of this struct type are represented as
              # lists, where the order of field values matches the order of
              # fields in the StructType. In turn, the order of fields
              # matches the order of columns in a read request, or the order of
              # fields in the `SELECT` clause of a query.
            { # Message representing a single field of a struct.
              &quot;type&quot;: # Object with schema name: Type # The type of the field.
              &quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
                  # SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
                  # query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
                  # `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
                  # columns might have an empty name (e.g., `&quot;SELECT
                  # UPPER(ColName)&quot;`). Note that a query result can contain
                  # multiple fields with the same name.
            },
          ],
        },
      },
    }</pre>
</div>

<div class="method">
    <code class="details" id="executeStreamingSql">executeStreamingSql(session, body=None, x__xgafv=None)</code>
  <pre>Like ExecuteSql, except returns the result
set as a stream. Unlike ExecuteSql, there
is no limit on the size of the returned result set. However, no
individual row in the result set can exceed 100 MiB, and no
column value can exceed 10 MiB.

Args:
  session: string, Required. The session in which the SQL query should be performed. (required)
  body: object, The request body.
    The object takes the form of:

{ # The request for ExecuteSql and
    # ExecuteStreamingSql.
  &quot;resumeToken&quot;: &quot;A String&quot;, # If this request is resuming a previously interrupted SQL statement
      # execution, `resume_token` should be copied from the last
      # PartialResultSet yielded before the interruption. Doing this
      # enables the new SQL statement execution to resume where the last one left
      # off. The rest of the request parameters must exactly match the
      # request that yielded this token.
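A rough sketch of the resume contract (the fetch callable is a hypothetical stand-in for one ExecuteStreamingSql attempt yielding PartialResultSet dicts; everything except `resumeToken` must be identical across attempts):

```python
def run_with_resume(fetch_partials, request):
    # Collect streamed values, restarting from the last resume_token
    # whenever the stream is interrupted.
    values, token = [], None
    while True:
        req = dict(request)
        if token:
            req["resumeToken"] = token
        try:
            for partial in fetch_partials(req):
                values.extend(partial.get("values", []))
                token = partial.get("resumeToken", token)
            return values
        except ConnectionError:
            continue  # retry from the last token seen
```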
  &quot;queryOptions&quot;: { # Query optimizer configuration. # Query optimizer configuration to use for the given query.
    &quot;optimizerVersion&quot;: &quot;A String&quot;, # An option to control the selection of optimizer version.
        #
        # This parameter allows individual queries to pick different query
        # optimizer versions.
        #
        # Specifying &quot;latest&quot; as a value instructs Cloud Spanner to use the
        # latest supported query optimizer version. If not specified, Cloud Spanner
        # uses the optimizer version set in the database-level options. Any other
        # positive integer (from the list of supported optimizer versions)
        # overrides the default optimizer version for query execution.
        # The list of supported optimizer versions can be queried from
        # SPANNER_SYS.SUPPORTED_OPTIMIZER_VERSIONS. Executing a SQL statement
        # with an invalid optimizer version will fail with a syntax error
        # (`INVALID_ARGUMENT`) status.
        # See
        # https://cloud.google.com/spanner/docs/query-optimizer/manage-query-optimizer
        # for more information on managing the query optimizer.
        #
        # The `optimizer_version` statement hint has precedence over this setting.
  },
  &quot;partitionToken&quot;: &quot;A String&quot;, # If present, results will be restricted to the specified partition
      # previously created using PartitionQuery(). There must be an exact
      # match for the values of fields common to this message and the
      # PartitionQueryRequest message used to create this partition_token.
  &quot;queryMode&quot;: &quot;A String&quot;, # Used to control the amount of debugging information returned in
      # ResultSetStats. If partition_token is set, query_mode can only
      # be set to QueryMode.NORMAL.
  &quot;transaction&quot;: { # This message is used to select the transaction in which a # The transaction to use.
      #
      # For queries, if none is provided, the default is a temporary read-only
      # transaction with strong concurrency.
      #
      # Standard DML statements require a read-write transaction. To protect
      # against replays, single-use transactions are not supported. The caller
      # must either supply an existing transaction ID or begin a new transaction.
      #
      # Partitioned DML requires an existing Partitioned DML transaction ID.
      # Read or
      # ExecuteSql call runs.
      #
      # See TransactionOptions for more information about transactions.
    &quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
        # This is the most efficient way to execute a transaction that
        # consists of a single SQL query.
        #
        #
        # Each session can have at most one active transaction at a time (note that
        # standalone reads and queries use a transaction internally and do count
        # towards the one transaction limit). After the active transaction is
        # completed, the session can immediately be re-used for the next transaction.
        # It is not necessary to create a new session for each transaction.
        #
        # # Transaction Modes
        #
        # Cloud Spanner supports three transaction modes:
        #
        # 1. Locking read-write. This type of transaction is the only way
        #    to write data into Cloud Spanner. These transactions rely on
        #    pessimistic locking and, if necessary, two-phase commit.
        #    Locking read-write transactions may abort, requiring the
        #    application to retry.
        #
        # 2. Snapshot read-only. This transaction type provides guaranteed
        #    consistency across several reads, but does not allow
        #    writes. Snapshot read-only transactions can be configured to
        #    read at timestamps in the past. Snapshot read-only
        #    transactions do not need to be committed.
        #
        # 3. Partitioned DML. This type of transaction is used to execute
        #    a single Partitioned DML statement. Partitioned DML partitions
        #    the key space and runs the DML statement over each partition
        #    in parallel using separate, internal transactions that commit
        #    independently. Partitioned DML transactions do not need to be
        #    committed.
        #
        # For transactions that only read, snapshot read-only transactions
        # provide simpler semantics and are almost always faster. In
        # particular, read-only transactions do not take locks, so they do
        # not conflict with read-write transactions. As a consequence of not
        # taking locks, they also do not abort, so retry loops are not needed.
        #
        # Transactions may only read/write data in a single database. They
        # may, however, read/write data in different tables within that
        # database.
        #
        # ## Locking Read-Write Transactions
        #
        # Locking transactions may be used to atomically read-modify-write
        # data anywhere in a database. This type of transaction is externally
        # consistent.
        #
        # Clients should attempt to minimize the amount of time a transaction
        # is active. Faster transactions commit with higher probability
        # and cause less contention. Cloud Spanner attempts to keep read locks
        # active as long as the transaction continues to do reads, and the
        # transaction has not been terminated by
        # Commit or
        # Rollback. Long periods of
        # inactivity at the client may cause Cloud Spanner to release a
        # transaction&#x27;s locks and abort it.
        #
        # Conceptually, a read-write transaction consists of zero or more
        # reads or SQL statements followed by
        # Commit. At any time before
        # Commit, the client can send a
        # Rollback request to abort the
        # transaction.
        #
        # ### Semantics
        #
        # Cloud Spanner can commit the transaction if all read locks it acquired
        # are still valid at commit time, and it is able to acquire write
        # locks for all writes. Cloud Spanner can abort the transaction for any
        # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
        # that the transaction has not modified any user data in Cloud Spanner.
        #
        # Unless the transaction commits, Cloud Spanner makes no guarantees about
        # how long the transaction&#x27;s locks were held for. It is an error to
        # use Cloud Spanner locks for any sort of mutual exclusion other than
        # between Cloud Spanner transactions themselves.
        #
        # ### Retrying Aborted Transactions
        #
        # When a transaction aborts, the application can choose to retry the
        # whole transaction again. To maximize the chances of successfully
        # committing the retry, the client should execute the retry in the
        # same session as the original attempt. The original session&#x27;s lock
        # priority increases with each consecutive abort, meaning that each
        # attempt has a slightly better chance of success than the previous.
        #
        # Under some circumstances (e.g., many transactions attempting to
        # modify the same row(s)), a transaction can abort many times in a
        # short period before successfully committing. Thus, it is not a good
        # idea to cap the number of retries a transaction can attempt;
        # instead, it is better to limit the total amount of wall time spent
        # retrying.
        #
        # ### Idle Transactions
        #
        # A transaction is considered idle if it has no outstanding reads or
        # SQL queries and has not started a read or SQL query within the last 10
        # seconds. Idle transactions can be aborted by Cloud Spanner so that they
        # don&#x27;t hold on to locks indefinitely. In that case, the commit will
        # fail with error `ABORTED`.
        #
        # If this behavior is undesirable, periodically executing a simple
        # SQL query in the transaction (e.g., `SELECT 1`) prevents the
        # transaction from becoming idle.
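A sketch of such a keep-alive loop (the execute callable and names are hypothetical stand-ins for an ExecuteSql call on this session; the 8-second interval stays inside the roughly 10-second idle window):

```python
def keep_alive(execute, transaction_id, stop, interval=8.0):
    # Run a trivial query through the open transaction until told to
    # stop, preventing Cloud Spanner from aborting it as idle.
    # `stop` is a threading.Event-like object owned by the caller.
    while not stop.is_set():
        execute({"transaction": {"id": transaction_id}, "sql": "SELECT 1"})
        stop.wait(interval)
```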
3488 #
3489 # ## Snapshot Read-Only Transactions
3490 #
3491 # Snapshot read-only transactions provides a simpler method than
3492 # locking read-write transactions for doing several consistent
3493 # reads. However, this type of transaction does not support writes.
3494 #
3495 # Snapshot transactions do not take locks. Instead, they work by
3496 # choosing a Cloud Spanner timestamp, then executing all reads at that
3497 # timestamp. Since they do not acquire locks, they do not block
3498 # concurrent read-write transactions.
3499 #
3500 # Unlike locking read-write transactions, snapshot read-only
3501 # transactions never abort. They can fail if the chosen read
3502 # timestamp is garbage collected; however, the default garbage
3503 # collection policy is generous enough that most applications do not
3504 # need to worry about this in practice.
3505 #
3506 # Snapshot read-only transactions do not need to call
3507 # Commit or
3508 # Rollback (and in fact are not
3509 # permitted to do so).
3510 #
3511 # To execute a snapshot transaction, the client specifies a timestamp
3512 # bound, which tells Cloud Spanner how to choose a read timestamp.
3513 #
3514 # The types of timestamp bound are:
3515 #
3516 # - Strong (the default).
3517 # - Bounded staleness.
3518 # - Exact staleness.
3519 #
3520 # If the Cloud Spanner database to be read is geographically distributed,
3521 # stale read-only transactions can execute more quickly than strong
3522 # or read-write transaction, because they are able to execute far
3523 # from the leader replica.
3524 #
3525 # Each type of timestamp bound is discussed in detail below.
3526 #
3527 # ### Strong
3528 #
3529 # Strong reads are guaranteed to see the effects of all transactions
3530 # that have committed before the start of the read. Furthermore, all
3531 # rows yielded by a single read are consistent with each other -- if
3532 # any part of the read observes a transaction, all parts of the read
3533 # see the transaction.
3534 #
3535 # Strong reads are not repeatable: two consecutive strong read-only
3536 # transactions might return inconsistent results if there are
3537 # concurrent writes. If consistency across reads is required, the
3538 # reads should be executed within a transaction or at an exact read
3539 # timestamp.
3540 #
3541 # See TransactionOptions.ReadOnly.strong.
3542 #
3543 # ### Exact Staleness
3544 #
3545 # These timestamp bounds execute reads at a user-specified
3546 # timestamp. Reads at a timestamp are guaranteed to see a consistent
3547 # prefix of the global transaction history: they observe
3548 # modifications done by all transactions with a commit timestamp &lt;=
3549 # the read timestamp, and observe none of the modifications done by
3550 # transactions with a larger commit timestamp. They will block until
3551 # all conflicting transactions that may be assigned commit timestamps
3552 # &lt;= the read timestamp have finished.
3553 #
3554 # The timestamp can either be expressed as an absolute Cloud Spanner commit
3555 # timestamp or a staleness relative to the current time.
3556 #
3557 # These modes do not require a &quot;negotiation phase&quot; to pick a
3558 # timestamp. As a result, they execute slightly faster than the
3559 # equivalent boundedly stale concurrency modes. On the other hand,
3560 # boundedly stale reads usually return fresher results.
3561 #
3562 # See TransactionOptions.ReadOnly.read_timestamp and
3563 # TransactionOptions.ReadOnly.exact_staleness.
3564 #
3565 # ### Bounded Staleness
3566 #
3567 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
3568 # subject to a user-provided staleness bound. Cloud Spanner chooses the
3569 # newest timestamp within the staleness bound that allows execution
3570 # of the reads at the closest available replica without blocking.
3571 #
3572 # All rows yielded are consistent with each other -- if any part of
3573 # the read observes a transaction, all parts of the read see the
3574 # transaction. Boundedly stale reads are not repeatable: two stale
3575 # reads, even if they use the same staleness bound, can execute at
3576 # different timestamps and thus return inconsistent results.
3577 #
3578 # Boundedly stale reads execute in two phases: the first phase
3579 # negotiates a timestamp among all replicas needed to serve the
3580 # read. In the second phase, reads are executed at the negotiated
3581 # timestamp.
3582 #
3583 # As a result of the two-phase execution, bounded staleness reads are
3584 # usually a little slower than comparable exact staleness
3585 # reads. However, they are typically able to return fresher
3586 # results, and are more likely to execute at the closest replica.
3587 #
3588 # Because the timestamp negotiation requires up-front knowledge of
3589 # which rows will be read, it can only be used with single-use
3590 # read-only transactions.
3591 #
3592 # See TransactionOptions.ReadOnly.max_staleness and
3593 # TransactionOptions.ReadOnly.min_read_timestamp.
3594 #
3595 # ### Old Read Timestamps and Garbage Collection
3596 #
3597 # Cloud Spanner continuously garbage collects deleted and overwritten data
3598 # in the background to reclaim storage space. This process is known
3599 # as &quot;version GC&quot;. By default, version GC reclaims versions after they
3600 # are one hour old. Because of this, Cloud Spanner cannot perform reads
3601 # at read timestamps more than one hour in the past. This
3602 # restriction also applies to in-progress reads and/or SQL queries whose
3603 # timestamps become too old while executing. Reads and SQL queries with
3604 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
3605 #
3606 # ## Partitioned DML Transactions
3607 #
3608 # Partitioned DML transactions are used to execute DML statements with a
3609 # different execution strategy that provides different, and often better,
3610 # scalability properties for large, table-wide operations than DML in a
3611 # ReadWrite transaction. Smaller-scoped statements, such as those in an
3612 # OLTP workload, should use ReadWrite transactions.
3613 #
3614 # Partitioned DML partitions the keyspace and runs the DML statement on each
3615 # partition in separate, internal transactions. These transactions commit
3616 # automatically when complete, and run independently from one another.
3617 #
3618 # To reduce lock contention, this execution strategy only acquires read locks
3619 # on rows that match the WHERE clause of the statement. Additionally, the
3620 # smaller per-partition transactions hold locks for less time.
3621 #
3622 # That said, Partitioned DML is not a drop-in replacement for standard DML used
3623 # in ReadWrite transactions.
3624 #
3625 # - The DML statement must be fully-partitionable. Specifically, the statement
3626 # must be expressible as the union of many statements which each access only
3627 # a single row of the table.
3628 #
3629 # - The statement is not applied atomically to all rows of the table. Rather,
3630 # the statement is applied atomically to partitions of the table, in
3631 # independent transactions. Secondary index rows are updated atomically
3632 # with the base table rows.
3633 #
3634 # - Partitioned DML does not guarantee exactly-once execution semantics
3635 # against a partition. The statement will be applied at least once to each
3636 # partition. It is strongly recommended that the DML statement be
3637 # idempotent to avoid unexpected results. For instance, it is potentially
3638 # dangerous to run a statement such as
3639 # `UPDATE table SET column = column + 1` as it could be run multiple times
3640 # against some rows.
3641 #
3642 # - The partitions are committed automatically - there is no support for
3643 # Commit or Rollback. If the call returns an error, or if the client issuing
3644 # the ExecuteSql call dies, it is possible that some rows had the statement
3645 # executed on them successfully. It is also possible that the statement was
3646 # never executed against other rows.
3647 #
3648 # - Partitioned DML transactions may only contain the execution of a single
3649 # DML statement via ExecuteSql or ExecuteStreamingSql.
3650 #
3651 # - If any error is encountered during the execution of the partitioned DML
3652 # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
3653 # value that cannot be stored due to schema constraints), then the
3654 # operation is stopped at that point and an error is returned. It is
3655 # possible that at this point, some partitions have been committed (or even
3656 # committed multiple times), and other partitions have not been run at all.
3657 #
3658 # Given the above, Partitioned DML is a good fit for large, database-wide
3659 # operations that are idempotent, such as deleting old rows from a very large
3660 # table.
3661 &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
3662 #
3663 # Authorization to begin a read-write transaction requires
3664 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
3665 # on the `session` resource.
3666 # transaction type has no options.
3667 },
3668 &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
3669 #
3670 # Authorization to begin a read-only transaction requires
3671 # `spanner.databases.beginReadOnlyTransaction` permission
3672 # on the `session` resource.
3673 &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
3674 # reads at a specific timestamp are repeatable; the same read at
3675 # the same timestamp always returns the same data. If the
3676 # timestamp is in the future, the read will block until the
3677 # specified timestamp, modulo the read&#x27;s deadline.
3678 #
3679 # Useful for large scale consistent reads such as mapreduces, or
3680 # for coordinating many reads against a consistent snapshot of the
3681 # data.
3682 #
3683 # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
3684 # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
3685 &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
3686 #
3687 # This is useful for requesting fresher data than some previous
3688 # read, or data that is fresh enough to observe the effects of some
3689 # previously committed transaction whose timestamp is known.
3690 #
3691 # Note that this option can only be used in single-use transactions.
3692 #
3693 # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
3694 # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
3695 &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
3696 # old. The timestamp is chosen soon after the read is started.
3697 #
3698 # Guarantees that all writes that have committed more than the
3699 # specified number of seconds ago are visible. Because Cloud Spanner
3700 # chooses the exact timestamp, this mode works even if the client&#x27;s
3701 # local clock is substantially skewed from Cloud Spanner commit
3702 # timestamps.
3703 #
3704 # Useful for reading at nearby replicas without the distributed
3705 # timestamp negotiation overhead of `max_staleness`.
3706 &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
3707 # seconds. Guarantees that all writes that have committed more
3708 # than the specified number of seconds ago are visible. Because
3709 # Cloud Spanner chooses the exact timestamp, this mode works even if
3710 # the client&#x27;s local clock is substantially skewed from Cloud Spanner
3711 # commit timestamps.
3712 #
3713 # Useful for reading the freshest data available at a nearby
3714 # replica, while bounding the possible staleness if the local
3715 # replica has fallen behind.
3716 #
3717 # Note that this option can only be used in single-use
3718 # transactions.
3719 &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
3720 # the Transaction message that describes the transaction.
3721 &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
3722 # are visible.
3723 },
3724 &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
3725 #
3726 # Authorization to begin a Partitioned DML transaction requires
3727 # `spanner.databases.beginPartitionedDmlTransaction` permission
3728 # on the `session` resource.
3729 },
3730 },
3731 &quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
3732 # it. The transaction ID of the new transaction is returned in
3733 # ResultSetMetadata.transaction, which is a Transaction.
3734 #
3735 #
3736 # Each session can have at most one active transaction at a time (note that
3737 # standalone reads and queries use a transaction internally and do count
3738 # towards the one transaction limit). After the active transaction is
3739 # completed, the session can immediately be re-used for the next transaction.
3740 # It is not necessary to create a new session for each transaction.
3741 #
3742 # # Transaction Modes
3743 #
3744 # Cloud Spanner supports three transaction modes:
3745 #
3746 # 1. Locking read-write. This type of transaction is the only way
3747 # to write data into Cloud Spanner. These transactions rely on
3748 # pessimistic locking and, if necessary, two-phase commit.
3749 # Locking read-write transactions may abort, requiring the
3750 # application to retry.
3751 #
3752 # 2. Snapshot read-only. This transaction type provides guaranteed
3753 # consistency across several reads, but does not allow
3754 # writes. Snapshot read-only transactions can be configured to
3755 # read at timestamps in the past. Snapshot read-only
3756 # transactions do not need to be committed.
3757 #
3758 # 3. Partitioned DML. This type of transaction is used to execute
3759 # a single Partitioned DML statement. Partitioned DML partitions
3760 # the key space and runs the DML statement over each partition
3761 # in parallel using separate, internal transactions that commit
3762 # independently. Partitioned DML transactions do not need to be
3763 # committed.
3764 #
3765 # For transactions that only read, snapshot read-only transactions
3766 # provide simpler semantics and are almost always faster. In
3767 # particular, read-only transactions do not take locks, so they do
3768 # not conflict with read-write transactions. As a consequence of not
3769 # taking locks, they also do not abort, so retry loops are not needed.
3770 #
3771 # Transactions may only read/write data in a single database. They
3772 # may, however, read/write data in different tables within that
3773 # database.
3774 #
3775 # ## Locking Read-Write Transactions
3776 #
3777 # Locking transactions may be used to atomically read-modify-write
3778 # data anywhere in a database. This type of transaction is externally
3779 # consistent.
3780 #
3781 # Clients should attempt to minimize the amount of time a transaction
3782 # is active. Faster transactions commit with higher probability
3783 # and cause less contention. Cloud Spanner attempts to keep read locks
3784 # active as long as the transaction continues to do reads, and the
3785 # transaction has not been terminated by
3786 # Commit or
3787 # Rollback. Long periods of
3788 # inactivity at the client may cause Cloud Spanner to release a
3789 # transaction&#x27;s locks and abort it.
3790 #
3791 # Conceptually, a read-write transaction consists of zero or more
3792 # reads or SQL statements followed by
3793 # Commit. At any time before
3794 # Commit, the client can send a
3795 # Rollback request to abort the
3796 # transaction.
3797 #
3798 # ### Semantics
3799 #
3800 # Cloud Spanner can commit the transaction if all read locks it acquired
3801 # are still valid at commit time, and it is able to acquire write
3802 # locks for all writes. Cloud Spanner can abort the transaction for any
3803 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
3804 # that the transaction has not modified any user data in Cloud Spanner.
3805 #
3806 # Unless the transaction commits, Cloud Spanner makes no guarantees about
3807 # how long the transaction&#x27;s locks were held for. It is an error to
3808 # use Cloud Spanner locks for any sort of mutual exclusion other than
3809 # between Cloud Spanner transactions themselves.
3810 #
3811 # ### Retrying Aborted Transactions
3812 #
3813 # When a transaction aborts, the application can choose to retry the
3814 # whole transaction again. To maximize the chances of successfully
3815 # committing the retry, the client should execute the retry in the
3816 # same session as the original attempt. The original session&#x27;s lock
3817 # priority increases with each consecutive abort, meaning that each
3818 # attempt has a slightly better chance of success than the previous.
3819 #
3820 # Under some circumstances (e.g., many transactions attempting to
3821 # modify the same row(s)), a transaction can abort many times in a
3822 # short period before successfully committing. Thus, it is not a good
3823 # idea to cap the number of retries a transaction can attempt;
3824 # instead, it is better to limit the total amount of wall time spent
3825 # retrying.
3826 #
3827 # ### Idle Transactions
3828 #
3829 # A transaction is considered idle if it has no outstanding reads or
3830 # SQL queries and has not started a read or SQL query within the last 10
3831 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
3832 # don&#x27;t hold on to locks indefinitely. In that case, the commit will
3833 # fail with error `ABORTED`.
3834 #
3835 # If this behavior is undesirable, periodically executing a simple
3836 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
3837 # transaction from becoming idle.
3838 #
3839 # ## Snapshot Read-Only Transactions
3840 #
3841 # Snapshot read-only transactions provide a simpler method than
3842 # locking read-write transactions for doing several consistent
3843 # reads. However, this type of transaction does not support writes.
3844 #
3845 # Snapshot transactions do not take locks. Instead, they work by
3846 # choosing a Cloud Spanner timestamp, then executing all reads at that
3847 # timestamp. Since they do not acquire locks, they do not block
3848 # concurrent read-write transactions.
3849 #
3850 # Unlike locking read-write transactions, snapshot read-only
3851 # transactions never abort. They can fail if the chosen read
3852 # timestamp is garbage collected; however, the default garbage
3853 # collection policy is generous enough that most applications do not
3854 # need to worry about this in practice.
3855 #
3856 # Snapshot read-only transactions do not need to call
3857 # Commit or
3858 # Rollback (and in fact are not
3859 # permitted to do so).
3860 #
3861 # To execute a snapshot transaction, the client specifies a timestamp
3862 # bound, which tells Cloud Spanner how to choose a read timestamp.
3863 #
3864 # The types of timestamp bound are:
3865 #
3866 # - Strong (the default).
3867 # - Bounded staleness.
3868 # - Exact staleness.
3869 #
3870 # If the Cloud Spanner database to be read is geographically distributed,
3871 # stale read-only transactions can execute more quickly than strong
3872 # or read-write transactions, because they are able to execute far
3873 # from the leader replica.
3874 #
3875 # Each type of timestamp bound is discussed in detail below.
3876 #
3877 # ### Strong
3878 #
3879 # Strong reads are guaranteed to see the effects of all transactions
3880 # that have committed before the start of the read. Furthermore, all
3881 # rows yielded by a single read are consistent with each other -- if
3882 # any part of the read observes a transaction, all parts of the read
3883 # see the transaction.
3884 #
3885 # Strong reads are not repeatable: two consecutive strong read-only
3886 # transactions might return inconsistent results if there are
3887 # concurrent writes. If consistency across reads is required, the
3888 # reads should be executed within a transaction or at an exact read
3889 # timestamp.
3890 #
3891 # See TransactionOptions.ReadOnly.strong.
3892 #
3893 # ### Exact Staleness
3894 #
3895 # These timestamp bounds execute reads at a user-specified
3896 # timestamp. Reads at a timestamp are guaranteed to see a consistent
3897 # prefix of the global transaction history: they observe
3898 # modifications done by all transactions with a commit timestamp &lt;=
3899 # the read timestamp, and observe none of the modifications done by
3900 # transactions with a larger commit timestamp. They will block until
3901 # all conflicting transactions that may be assigned commit timestamps
3902 # &lt;= the read timestamp have finished.
3903 #
3904 # The timestamp can either be expressed as an absolute Cloud Spanner commit
3905 # timestamp or a staleness relative to the current time.
3906 #
3907 # These modes do not require a &quot;negotiation phase&quot; to pick a
3908 # timestamp. As a result, they execute slightly faster than the
3909 # equivalent boundedly stale concurrency modes. On the other hand,
3910 # boundedly stale reads usually return fresher results.
3911 #
3912 # See TransactionOptions.ReadOnly.read_timestamp and
3913 # TransactionOptions.ReadOnly.exact_staleness.
3914 #
3915 # ### Bounded Staleness
3916 #
3917 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
3918 # subject to a user-provided staleness bound. Cloud Spanner chooses the
3919 # newest timestamp within the staleness bound that allows execution
3920 # of the reads at the closest available replica without blocking.
3921 #
3922 # All rows yielded are consistent with each other -- if any part of
3923 # the read observes a transaction, all parts of the read see the
3924 # transaction. Boundedly stale reads are not repeatable: two stale
3925 # reads, even if they use the same staleness bound, can execute at
3926 # different timestamps and thus return inconsistent results.
3927 #
3928 # Boundedly stale reads execute in two phases: the first phase
3929 # negotiates a timestamp among all replicas needed to serve the
3930 # read. In the second phase, reads are executed at the negotiated
3931 # timestamp.
3932 #
3933 # As a result of the two-phase execution, bounded staleness reads are
3934 # usually a little slower than comparable exact staleness
3935 # reads. However, they are typically able to return fresher
3936 # results, and are more likely to execute at the closest replica.
3937 #
3938 # Because the timestamp negotiation requires up-front knowledge of
3939 # which rows will be read, it can only be used with single-use
3940 # read-only transactions.
3941 #
3942 # See TransactionOptions.ReadOnly.max_staleness and
3943 # TransactionOptions.ReadOnly.min_read_timestamp.
3944 #
3945 # ### Old Read Timestamps and Garbage Collection
3946 #
3947 # Cloud Spanner continuously garbage collects deleted and overwritten data
3948 # in the background to reclaim storage space. This process is known
3949 # as &quot;version GC&quot;. By default, version GC reclaims versions after they
3950 # are one hour old. Because of this, Cloud Spanner cannot perform reads
3951 # at read timestamps more than one hour in the past. This
3952 # restriction also applies to in-progress reads and/or SQL queries whose
3953 # timestamps become too old while executing. Reads and SQL queries with
3954 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
3955 #
3956 # ## Partitioned DML Transactions
3957 #
3958 # Partitioned DML transactions are used to execute DML statements with a
3959 # different execution strategy that provides different, and often better,
3960 # scalability properties for large, table-wide operations than DML in a
3961 # ReadWrite transaction. Smaller-scoped statements, such as those in an
3962 # OLTP workload, should use ReadWrite transactions.
3963 #
3964 # Partitioned DML partitions the keyspace and runs the DML statement on each
3965 # partition in separate, internal transactions. These transactions commit
3966 # automatically when complete, and run independently from one another.
3967 #
3968 # To reduce lock contention, this execution strategy only acquires read locks
3969 # on rows that match the WHERE clause of the statement. Additionally, the
3970 # smaller per-partition transactions hold locks for less time.
3971 #
3972 # That said, Partitioned DML is not a drop-in replacement for standard DML used
3973 # in ReadWrite transactions.
3974 #
3975 # - The DML statement must be fully-partitionable. Specifically, the statement
3976 # must be expressible as the union of many statements which each access only
3977 # a single row of the table.
3978 #
3979 # - The statement is not applied atomically to all rows of the table. Rather,
3980 # the statement is applied atomically to partitions of the table, in
3981 # independent transactions. Secondary index rows are updated atomically
3982 # with the base table rows.
3983 #
3984 # - Partitioned DML does not guarantee exactly-once execution semantics
3985 # against a partition. The statement will be applied at least once to each
3986 # partition. It is strongly recommended that the DML statement be
3987 # idempotent to avoid unexpected results. For instance, it is potentially
3988 # dangerous to run a statement such as
3989 # `UPDATE table SET column = column + 1` as it could be run multiple times
3990 # against some rows.
3991 #
3992 # - The partitions are committed automatically - there is no support for
3993 # Commit or Rollback. If the call returns an error, or if the client issuing
3994 # the ExecuteSql call dies, it is possible that some rows had the statement
3995 # executed on them successfully. It is also possible that the statement was
3996 # never executed against other rows.
3997 #
3998 # - Partitioned DML transactions may only contain the execution of a single
3999 # DML statement via ExecuteSql or ExecuteStreamingSql.
4000 #
4001 # - If any error is encountered during the execution of the partitioned DML
4002 # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
4003 # value that cannot be stored due to schema constraints), then the
4004 # operation is stopped at that point and an error is returned. It is
4005 # possible that at this point, some partitions have been committed (or even
4006 # committed multiple times), and other partitions have not been run at all.
4007 #
4008 # Given the above, Partitioned DML is a good fit for large, database-wide
4009 # operations that are idempotent, such as deleting old rows from a very large
4010 # table.
4011 &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
4012 #
4013 # Authorization to begin a read-write transaction requires
4014 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
4015 # on the `session` resource.
4016 # transaction type has no options.
4017 },
4018 &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
4019 #
4020 # Authorization to begin a read-only transaction requires
4021 # `spanner.databases.beginReadOnlyTransaction` permission
4022 # on the `session` resource.
4023 &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
4024 # reads at a specific timestamp are repeatable; the same read at
4025 # the same timestamp always returns the same data. If the
4026 # timestamp is in the future, the read will block until the
4027 # specified timestamp, modulo the read&#x27;s deadline.
4028 #
4029 # Useful for large scale consistent reads such as mapreduces, or
4030 # for coordinating many reads against a consistent snapshot of the
4031 # data.
4032 #
4033 # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
4034 # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
4035 &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
4036 #
4037 # This is useful for requesting fresher data than some previous
4038 # read, or data that is fresh enough to observe the effects of some
4039 # previously committed transaction whose timestamp is known.
4040 #
4041 # Note that this option can only be used in single-use transactions.
4042 #
4043 # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
4044 # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
4045 &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
4046 # old. The timestamp is chosen soon after the read is started.
4047 #
4048 # Guarantees that all writes that have committed more than the
4049 # specified number of seconds ago are visible. Because Cloud Spanner
4050 # chooses the exact timestamp, this mode works even if the client&#x27;s
4051 # local clock is substantially skewed from Cloud Spanner commit
4052 # timestamps.
4053 #
4054 # Useful for reading at nearby replicas without the distributed
4055 # timestamp negotiation overhead of `max_staleness`.
4056 &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
4057 # seconds. Guarantees that all writes that have committed more
4058 # than the specified number of seconds ago are visible. Because
4059 # Cloud Spanner chooses the exact timestamp, this mode works even if
4060 # the client&#x27;s local clock is substantially skewed from Cloud Spanner
4061 # commit timestamps.
4062 #
4063 # Useful for reading the freshest data available at a nearby
4064 # replica, while bounding the possible staleness if the local
4065 # replica has fallen behind.
4066 #
4067 # Note that this option can only be used in single-use
4068 # transactions.
4069 &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
4070 # the Transaction message that describes the transaction.
4071 &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
4072 # are visible.
4073 },
4074 &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
4075 #
4076 # Authorization to begin a Partitioned DML transaction requires
4077 # `spanner.databases.beginPartitionedDmlTransaction` permission
4078 # on the `session` resource.
4079 },
4080 },
4081 &quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
4082 },
4083 &quot;seqno&quot;: &quot;A String&quot;, # A per-transaction sequence number used to identify this request. This field
4084 # makes each request idempotent such that if the request is received multiple
4085 # times, at most one will succeed.
4086 #
4087 # The sequence number must be monotonically increasing within the
4088 # transaction. If a request arrives for the first time with an out-of-order
4089 # sequence number, the transaction may be aborted. Replays of previously
4090 # handled requests will yield the same response as the first execution.
4091 #
4092 # Required for DML statements. Ignored for queries.
4093 &quot;paramTypes&quot;: { # It is not always possible for Cloud Spanner to infer the right SQL type
4094 # from a JSON value. For example, values of type `BYTES` and values
4095 # of type `STRING` both appear in params as JSON strings.
4096 #
4097 # In these cases, `param_types` can be used to specify the exact
4098 # SQL type for some or all of the SQL statement parameters. See the
4099 # definition of Type for more information
4100 # about SQL types.
4101 &quot;a_key&quot;: { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
4102 # table cell or returned from an SQL query.
4103 &quot;code&quot;: &quot;A String&quot;, # Required. The TypeCode for this type.
4104 &quot;arrayElementType&quot;: # Object with schema name: Type # If code == ARRAY, then `array_element_type`
4105 # is the type of the array elements.
4106 &quot;structType&quot;: { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
4107 # provides type information for the struct&#x27;s fields.
4108 &quot;fields&quot;: [ # The list of fields that make up this struct. Order is
4109 # significant, because values of this struct type are represented as
4110 # lists, where the order of field values matches the order of
4111 # fields in the StructType. In turn, the order of fields
4112 # matches the order of columns in a read request, or the order of
4113 # fields in the `SELECT` clause of a query.
4114 { # Message representing a single field of a struct.
4115 &quot;type&quot;: # Object with schema name: Type # The type of the field.
4116 &quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
4117 # SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
4118 # query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
4119 # `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
4120 # columns might have an empty name (e.g., `&quot;SELECT
4121 # UPPER(ColName)&quot;`). Note that a query result can contain
4122 # multiple fields with the same name.
4123 },
4124 ],
4125 },
4126 },
4127 },
4128 &quot;params&quot;: { # Parameter names and values that bind to placeholders in the SQL string.
4129 #
4130 # A parameter placeholder consists of the `@` character followed by the
4131 # parameter name (for example, `@firstName`). Parameter names can contain
4132 # letters, numbers, and underscores.
4133 #
4134 # Parameters can appear anywhere that a literal value is expected. The same
4135 # parameter name can be used more than once, for example:
4136 #
4137 # `&quot;WHERE id &gt; @msg_id AND id &lt; @msg_id + 100&quot;`
4138 #
4139 # It is an error to execute a SQL statement with unbound parameters.
4140 &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
4141 },
4142 &quot;sql&quot;: &quot;A String&quot;, # Required. The SQL string.
4143 }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Partial results from a streaming read or SQL query. Streaming reads and
        # SQL queries better tolerate large result sets, large rows, and large
        # values, but are a little trickier to consume.
      &quot;stats&quot;: { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the statement that produced this
          # streaming result set. These can be requested by setting
          # ExecuteSqlRequest.query_mode and are sent
          # only once with the last response in the stream.
          # This field will also be present in the last response for DML
          # statements.
        &quot;queryStats&quot;: { # Aggregated statistics from the execution of the query. Only present when
            # the query is profiled. For example, a query could return the statistics as
            # follows:
            #
            # {
            #   &quot;rows_returned&quot;: &quot;3&quot;,
            #   &quot;elapsed_time&quot;: &quot;1.22 secs&quot;,
            #   &quot;cpu_time&quot;: &quot;1.19 secs&quot;
            # }
          &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
        },
        &quot;rowCountExact&quot;: &quot;A String&quot;, # Standard DML returns an exact count of rows that were modified.
        &quot;rowCountLowerBound&quot;: &quot;A String&quot;, # Partitioned DML does not offer exactly-once semantics, so it
            # returns a lower bound of the rows modified.
        &quot;queryPlan&quot;: { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
          &quot;planNodes&quot;: [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
              # with the plan root. Each PlanNode&#x27;s `id` corresponds to its index in
              # `plan_nodes`.
            { # Node information for nodes appearing in a QueryPlan.plan_nodes.
              &quot;childLinks&quot;: [ # List of child node `index`es and their relationship to this parent.
                { # Metadata associated with a parent-child relationship appearing in a
                    # PlanNode.
                  &quot;childIndex&quot;: 42, # The node to which the link points.
                  &quot;type&quot;: &quot;A String&quot;, # The type of the link. For example, in Hash Joins this could be used to
                      # distinguish between the build child and the probe child, or in the case
                      # of the child being an output variable, to represent the tag associated
                      # with the output variable.
                  &quot;variable&quot;: &quot;A String&quot;, # Only present if the child node is SCALAR and corresponds
                      # to an output variable of the parent node. The field carries the name of
                      # the output variable.
                      # For example, a `TableScan` operator that reads rows from a table will
                      # have child links to the `SCALAR` nodes representing the output variables
                      # created for each column that is read by the operator. The corresponding
                      # `variable` fields will be set to the variable names assigned to the
                      # columns.
                },
              ],
              &quot;metadata&quot;: { # Attributes relevant to the node contained in a group of key-value pairs.
                  # For example, a Parameter Reference node could have the following
                  # information in its metadata:
                  #
                  # {
                  #   &quot;parameter_reference&quot;: &quot;param1&quot;,
                  #   &quot;parameter_type&quot;: &quot;array&quot;
                  # }
                &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
              },
              &quot;kind&quot;: &quot;A String&quot;, # Used to determine the type of node. May be needed for visualizing
                  # different kinds of nodes differently. For example, if the node is a
                  # SCALAR node, it will have a condensed representation
                  # which can be used to directly embed a description of the node in its
                  # parent.
              &quot;shortRepresentation&quot;: { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
                  # `SCALAR` PlanNode(s).
                &quot;subqueries&quot;: { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
                    # where the `description` string of this node references a `SCALAR`
                    # subquery contained in the expression subtree rooted at this node. The
                    # referenced `SCALAR` subquery may not necessarily be a direct child of
                    # this node.
                  &quot;a_key&quot;: 42,
                },
                &quot;description&quot;: &quot;A String&quot;, # A string representation of the expression subtree rooted at this node.
              },
              &quot;displayName&quot;: &quot;A String&quot;, # The display name for the node.
              &quot;index&quot;: 42, # The `PlanNode`&#x27;s index in the node list.
              &quot;executionStats&quot;: { # The execution statistics associated with the node, contained in a group of
                  # key-value pairs. Only present if the plan was returned as a result of a
                  # profile query. For example, number of executions, number of rows/time per
                  # execution etc.
                &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
              },
            },
          ],
        },
      },
      &quot;resumeToken&quot;: &quot;A String&quot;, # Streaming calls might be interrupted for a variety of reasons, such
          # as TCP connection loss. If this occurs, the stream of results can
          # be resumed by re-sending the original request and including
          # `resume_token`. Note that executing any other transaction in the
          # same session invalidates the token.
      &quot;metadata&quot;: { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
          # Only present in the first response.
        &quot;transaction&quot;: { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
            # information about the new transaction is yielded here.
          &quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
              # for the transaction. Not returned by default: see
              # TransactionOptions.ReadOnly.return_read_timestamp.
              #
              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
          &quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
              # Read,
              # ExecuteSql,
              # Commit, or
              # Rollback calls.
              #
              # Single-use read-only transactions do not have IDs, because
              # single-use transactions do not support multiple requests.
        },
        &quot;rowType&quot;: { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
            # set. For example, a SQL query like `&quot;SELECT UserId, UserName FROM
            # Users&quot;` could return a `row_type` value like:
            #
            # &quot;fields&quot;: [
            #   { &quot;name&quot;: &quot;UserId&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;INT64&quot; } },
            #   { &quot;name&quot;: &quot;UserName&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;STRING&quot; } },
            # ]
          &quot;fields&quot;: [ # The list of fields that make up this struct. Order is
              # significant, because values of this struct type are represented as
              # lists, where the order of field values matches the order of
              # fields in the StructType. In turn, the order of fields
              # matches the order of columns in a read request, or the order of
              # fields in the `SELECT` clause of a query.
            { # Message representing a single field of a struct.
              &quot;type&quot;: # Object with schema name: Type # The type of the field.
              &quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
                  # SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
                  # query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
                  # `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
                  # columns might have an empty name (e.g., `&quot;SELECT
                  # UPPER(ColName)&quot;`). Note that a query result can contain
                  # multiple fields with the same name.
            },
          ],
        },
      },
      &quot;values&quot;: [ # A streamed result set consists of a stream of values, which might
          # be split into many `PartialResultSet` messages to accommodate
          # large rows and/or large values. Every N complete values defines a
          # row, where N is equal to the number of entries in
          # metadata.row_type.fields.
          #
          # Most values are encoded based on type as described
          # here.
          #
          # It is possible that the last value in values is &quot;chunked&quot;,
          # meaning that the rest of the value is sent in subsequent
          # `PartialResultSet`(s). This is denoted by the chunked_value
          # field. Two or more chunked values can be merged to form a
          # complete value as follows:
          #
          # * `bool/number/null`: cannot be chunked
          # * `string`: concatenate the strings
          # * `list`: concatenate the lists. If the last element in a list is a
          #   `string`, `list`, or `object`, merge it with the first element in
          #   the next list by applying these rules recursively.
          # * `object`: concatenate the (field name, field value) pairs. If a
          #   field name is duplicated, then apply these rules recursively
          #   to merge the field values.
          #
          # Some examples of merging:
          #
          # # Strings are concatenated.
          # &quot;foo&quot;, &quot;bar&quot; =&gt; &quot;foobar&quot;
          #
          # # Lists of non-strings are concatenated.
          # [2, 3], [4] =&gt; [2, 3, 4]
          #
          # # Lists are concatenated, but the last and first elements are merged
          # # because they are strings.
          # [&quot;a&quot;, &quot;b&quot;], [&quot;c&quot;, &quot;d&quot;] =&gt; [&quot;a&quot;, &quot;bc&quot;, &quot;d&quot;]
          #
          # # Lists are concatenated, but the last and first elements are merged
          # # because they are lists. Recursively, the last and first elements
          # # of the inner lists are merged because they are strings.
          # [&quot;a&quot;, [&quot;b&quot;, &quot;c&quot;]], [[&quot;d&quot;], &quot;e&quot;] =&gt; [&quot;a&quot;, [&quot;b&quot;, &quot;cd&quot;], &quot;e&quot;]
          #
          # # Non-overlapping object fields are combined.
          # {&quot;a&quot;: &quot;1&quot;}, {&quot;b&quot;: &quot;2&quot;} =&gt; {&quot;a&quot;: &quot;1&quot;, &quot;b&quot;: &quot;2&quot;}
          #
          # # Overlapping object fields are merged.
          # {&quot;a&quot;: &quot;1&quot;}, {&quot;a&quot;: &quot;2&quot;} =&gt; {&quot;a&quot;: &quot;12&quot;}
          #
          # # Examples of merging objects containing lists of strings.
          # {&quot;a&quot;: [&quot;1&quot;]}, {&quot;a&quot;: [&quot;2&quot;]} =&gt; {&quot;a&quot;: [&quot;12&quot;]}
          #
          # For a more complete example, suppose a streaming SQL query is
          # yielding a result set whose rows contain a single string
          # field. The following `PartialResultSet`s might be yielded:
          #
          # {
          #   &quot;metadata&quot;: { ... }
          #   &quot;values&quot;: [&quot;Hello&quot;, &quot;W&quot;]
          #   &quot;chunked_value&quot;: true
          #   &quot;resume_token&quot;: &quot;Af65...&quot;
          # }
          # {
          #   &quot;values&quot;: [&quot;orl&quot;]
          #   &quot;chunked_value&quot;: true
          #   &quot;resume_token&quot;: &quot;Bqp2...&quot;
          # }
          # {
          #   &quot;values&quot;: [&quot;d&quot;]
          #   &quot;resume_token&quot;: &quot;Zx1B...&quot;
          # }
          #
          # This sequence of `PartialResultSet`s encodes two rows, one
          # containing the field value `&quot;Hello&quot;`, and a second containing the
          # field value `&quot;World&quot; = &quot;W&quot; + &quot;orl&quot; + &quot;d&quot;`.
        &quot;&quot;,
      ],
      &quot;chunkedValue&quot;: True or False, # If true, then the final value in values is chunked, and must
          # be combined with more values from subsequent `PartialResultSet`s
          # to obtain a complete field value.
    }</pre>
</div>

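The chunked-value merge rules documented above can be exercised on their own, outside the API. The following is a minimal sketch, not part of the generated client; the function name `merge_chunked` is invented here, and it assumes (as the docs imply) that a chunk boundary never splits a string against a non-string:

```python
def merge_chunked(prev, curr):
    """Merge two adjacent chunked values per the PartialResultSet rules."""
    if isinstance(prev, str) and isinstance(curr, str):
        # Strings are concatenated.
        return prev + curr
    if isinstance(prev, list) and isinstance(curr, list):
        # Lists are concatenated; if the last element is a string, list, or
        # object, it is merged recursively with the first element of the
        # next list.
        if prev and curr and isinstance(prev[-1], (str, list, dict)):
            return prev[:-1] + [merge_chunked(prev[-1], curr[0])] + curr[1:]
        return prev + curr
    if isinstance(prev, dict) and isinstance(curr, dict):
        # Object fields are combined; duplicated field names merge recursively.
        merged = dict(prev)
        for key, value in curr.items():
            merged[key] = merge_chunked(merged[key], value) if key in merged else value
        return merged
    raise TypeError('bool/number/null values cannot be chunked')
```

Applied to the worked example in the docstring, merging the chunked pieces `"W"`, `"orl"`, and `"d"` reproduces the second row's field value `"World"`.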
<div class="method">
    <code class="details" id="get">get(name, x__xgafv=None)</code>
  <pre>Gets a session. Returns `NOT_FOUND` if the session does not exist.
This is mainly useful for determining whether a session is still
alive.

Args:
  name: string, Required. The name of the session to retrieve. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A session in the Cloud Spanner API.
      &quot;createTime&quot;: &quot;A String&quot;, # Output only. The timestamp when the session is created.
      &quot;name&quot;: &quot;A String&quot;, # Output only. The name of the session. This is always system-assigned.
      &quot;approximateLastUseTime&quot;: &quot;A String&quot;, # Output only. The approximate timestamp when the session is last used. It is
          # typically earlier than the actual last use time.
      &quot;labels&quot;: { # The labels for the session.
          #
          # * Label keys must be between 1 and 63 characters long and must conform to
          #   the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
          # * Label values must be between 0 and 63 characters long and must conform
          #   to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
          # * No more than 64 labels can be associated with a given session.
          #
          # See https://goo.gl/xmQnxf for more information on and examples of labels.
        &quot;a_key&quot;: &quot;A String&quot;,
      },
    }</pre>
</div>
4403
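The label constraints in the Session message are plain regular expressions, so they can be checked client-side before creating or updating a session. A sketch under those documented rules (the helper name `session_labels_valid` is illustrative, not part of this client):

```python
import re

# Patterns copied from the Session.labels documentation.
_KEY_RE = re.compile(r'^[a-z]([-a-z0-9]*[a-z0-9])?$')
_VALUE_RE = re.compile(r'^([a-z]([-a-z0-9]*[a-z0-9])?)?$')


def session_labels_valid(labels):
    """Return True if `labels` satisfies the documented Session constraints."""
    if len(labels) > 64:
        return False  # no more than 64 labels per session
    for key, value in labels.items():
        if not (1 <= len(key) <= 63 and _KEY_RE.match(key)):
            return False
        if not (len(value) <= 63 and _VALUE_RE.match(value)):
            return False
    return True
```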
<div class="method">
    <code class="details" id="list">list(database, filter=None, pageToken=None, pageSize=None, x__xgafv=None)</code>
  <pre>Lists all sessions in a given database.

Args:
  database: string, Required. The database in which to list sessions. (required)
  filter: string, An expression for filtering the results of the request. Filter rules are
case insensitive. The fields eligible for filtering are:

  * `labels.key` where key is the name of a label

Some examples of using filters are:

  * `labels.env:*` --&gt; The session has the label &quot;env&quot;.
  * `labels.env:dev` --&gt; The session has the label &quot;env&quot; and the value of
    the label contains the string &quot;dev&quot;.
  pageToken: string, If non-empty, `page_token` should contain a
next_page_token from a previous
ListSessionsResponse.
  pageSize: integer, Number of sessions to be returned in the response. If 0 or less, defaults
to the server&#x27;s maximum allowed page size.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The response for ListSessions.
      &quot;nextPageToken&quot;: &quot;A String&quot;, # `next_page_token` can be sent in a subsequent
          # ListSessions call to fetch more of the matching
          # sessions.
      &quot;sessions&quot;: [ # The list of requested sessions.
        { # A session in the Cloud Spanner API.
          &quot;createTime&quot;: &quot;A String&quot;, # Output only. The timestamp when the session is created.
          &quot;name&quot;: &quot;A String&quot;, # Output only. The name of the session. This is always system-assigned.
          &quot;approximateLastUseTime&quot;: &quot;A String&quot;, # Output only. The approximate timestamp when the session is last used. It is
              # typically earlier than the actual last use time.
          &quot;labels&quot;: { # The labels for the session.
              #
              # * Label keys must be between 1 and 63 characters long and must conform to
              #   the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
              # * Label values must be between 0 and 63 characters long and must conform
              #   to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
              # * No more than 64 labels can be associated with a given session.
              #
              # See https://goo.gl/xmQnxf for more information on and examples of labels.
            &quot;a_key&quot;: &quot;A String&quot;,
          },
        },
      ],
    }</pre>
</div>
4458
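The two documented filter forms (`labels.key:*` and `labels.key:value`) are simple enough to mirror client-side, for example when predicting which sessions a filter would match in a test. A sketch only; the function name is invented here and the server performs the real filtering:

```python
def session_matches_filter(session, filter_expr):
    """Evaluate a `labels.key:*` or `labels.key:value` filter, case-insensitively.

    `labels.key:*` matches when the label is present; `labels.key:value`
    matches when the label is present and its value contains `value`.
    """
    labels = {k.lower(): v.lower() for k, v in session.get('labels', {}).items()}
    field, _, needle = filter_expr.lower().partition(':')
    if not field.startswith('labels.'):
        raise ValueError('only labels.<key> filters are supported')
    key = field[len('labels.'):]
    if key not in labels:
        return False
    return needle == '*' or needle in labels[key]
```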
<div class="method">
    <code class="details" id="list_next">list_next(previous_request, previous_response)</code>
  <pre>Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call &#x27;execute()&#x27; on to request the next
  page. Returns None if there are no more items in the collection.
  </pre>
</div>
4472
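`list` and `list_next` form the usual discovery-client pagination pair: call `execute()` on each page's request, then ask `list_next` for the following request until it returns `None`. A generic sketch (the helper name `iterate_sessions` is illustrative, not part of this client):

```python
def iterate_sessions(first_request, list_next):
    """Yield every session across all pages of a sessions().list call.

    `first_request` is the request object returned by sessions().list(...);
    `list_next` is the sessions().list_next method. Iteration stops when
    list_next returns None, i.e. when there are no more pages.
    """
    request = first_request
    while request is not None:
        response = request.execute()
        for session in response.get('sessions', []):
            yield session
        request = list_next(request, response)
```

With a real service object this would be driven as `iterate_sessions(sessions.list(database=db), sessions.list_next)`.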
<div class="method">
    <code class="details" id="partitionQuery">partitionQuery(session, body=None, x__xgafv=None)</code>
  <pre>Creates a set of partition tokens that can be used to execute a query
operation in parallel. Each of the returned partition tokens can be used
by ExecuteStreamingSql to specify a subset
of the query result to read. The same session and read-only transaction
must be used by the PartitionQueryRequest used to create the
partition tokens and the ExecuteSqlRequests that use the partition tokens.

Partition tokens become invalid when the session used to create them
is deleted, is idle for too long, begins a new transaction, or becomes too
old. When any of these happen, it is not possible to resume the query, and
the whole operation must be restarted from the beginning.

Args:
  session: string, Required. The session used to create the partitions. (required)
  body: object, The request body.
    The object takes the form of:

{ # The request for PartitionQuery
  &quot;params&quot;: { # Parameter names and values that bind to placeholders in the SQL string.
      #
      # A parameter placeholder consists of the `@` character followed by the
      # parameter name (for example, `@firstName`). Parameter names can contain
      # letters, numbers, and underscores.
      #
      # Parameters can appear anywhere that a literal value is expected. The same
      # parameter name can be used more than once, for example:
      #
      # `&quot;WHERE id &gt; @msg_id AND id &lt; @msg_id + 100&quot;`
      #
      # It is an error to execute a SQL statement with unbound parameters.
    &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
  },
  &quot;sql&quot;: &quot;A String&quot;, # Required. The query request to generate partitions for. The request will fail if
      # the query is not root partitionable. The query plan of a root
      # partitionable query has a single distributed union operator. A distributed
      # union operator conceptually divides one or more tables into multiple
      # splits, remotely evaluates a subquery independently on each split, and
      # then unions all results.
      #
      # This must not contain DML commands, such as INSERT, UPDATE, or
      # DELETE. Use ExecuteStreamingSql with a
      # PartitionedDml transaction for large, partition-friendly DML operations.
  &quot;paramTypes&quot;: { # It is not always possible for Cloud Spanner to infer the right SQL type
      # from a JSON value. For example, values of type `BYTES` and values
      # of type `STRING` both appear in params as JSON strings.
      #
      # In these cases, `param_types` can be used to specify the exact
      # SQL type for some or all of the SQL query parameters. See the
      # definition of Type for more information
      # about SQL types.
    &quot;a_key&quot;: { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
        # table cell or returned from an SQL query.
      &quot;code&quot;: &quot;A String&quot;, # Required. The TypeCode for this type.
      &quot;arrayElementType&quot;: # Object with schema name: Type # If code == ARRAY, then `array_element_type`
          # is the type of the array elements.
      &quot;structType&quot;: { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
          # provides type information for the struct&#x27;s fields.
        &quot;fields&quot;: [ # The list of fields that make up this struct. Order is
            # significant, because values of this struct type are represented as
            # lists, where the order of field values matches the order of
            # fields in the StructType. In turn, the order of fields
            # matches the order of columns in a read request, or the order of
            # fields in the `SELECT` clause of a query.
          { # Message representing a single field of a struct.
            &quot;type&quot;: # Object with schema name: Type # The type of the field.
            &quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
                # SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
                # query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
                # `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
                # columns might have an empty name (e.g., `&quot;SELECT
                # UPPER(ColName)&quot;`). Note that a query result can contain
                # multiple fields with the same name.
          },
        ],
      },
    },
  },
  &quot;transaction&quot;: { # This message is used to select the transaction in which a # Read-only snapshot transactions are supported; read/write and single-use
      # transactions are not.
      # Read or
      # ExecuteSql call runs.
      #
      # See TransactionOptions for more information about transactions.
    &quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
        # This is the most efficient way to execute a transaction that
        # consists of a single SQL query.
        #
        #
        # Each session can have at most one active transaction at a time (note that
        # standalone reads and queries use a transaction internally and do count
        # towards the one transaction limit). After the active transaction is
        # completed, the session can immediately be re-used for the next transaction.
        # It is not necessary to create a new session for each transaction.
        #
        # # Transaction Modes
        #
        # Cloud Spanner supports three transaction modes:
        #
        # 1. Locking read-write. This type of transaction is the only way
        # to write data into Cloud Spanner. These transactions rely on
        # pessimistic locking and, if necessary, two-phase commit.
        # Locking read-write transactions may abort, requiring the
        # application to retry.
        #
        # 2. Snapshot read-only. This transaction type provides guaranteed
        # consistency across several reads, but does not allow
        # writes. Snapshot read-only transactions can be configured to
        # read at timestamps in the past. Snapshot read-only
        # transactions do not need to be committed.
        #
        # 3. Partitioned DML. This type of transaction is used to execute
        # a single Partitioned DML statement. Partitioned DML partitions
        # the key space and runs the DML statement over each partition
        # in parallel using separate, internal transactions that commit
        # independently. Partitioned DML transactions do not need to be
        # committed.
        #
        # For transactions that only read, snapshot read-only transactions
        # provide simpler semantics and are almost always faster. In
        # particular, read-only transactions do not take locks, so they do
        # not conflict with read-write transactions. As a consequence of not
        # taking locks, they also do not abort, so retry loops are not needed.
        #
        # Transactions may only read/write data in a single database. They
        # may, however, read/write data in different tables within that
        # database.
        #
        # ## Locking Read-Write Transactions
        #
        # Locking transactions may be used to atomically read-modify-write
        # data anywhere in a database. This type of transaction is externally
        # consistent.
        #
        # Clients should attempt to minimize the amount of time a transaction
        # is active. Faster transactions commit with higher probability
        # and cause less contention. Cloud Spanner attempts to keep read locks
        # active as long as the transaction continues to do reads, and the
        # transaction has not been terminated by
        # Commit or
        # Rollback. Long periods of
        # inactivity at the client may cause Cloud Spanner to release a
        # transaction&#x27;s locks and abort it.
        #
        # Conceptually, a read-write transaction consists of zero or more
        # reads or SQL statements followed by
        # Commit. At any time before
        # Commit, the client can send a
        # Rollback request to abort the
        # transaction.
        #
        # ### Semantics
        #
        # Cloud Spanner can commit the transaction if all read locks it acquired
        # are still valid at commit time, and it is able to acquire write
        # locks for all writes. Cloud Spanner can abort the transaction for any
        # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
        # that the transaction has not modified any user data in Cloud Spanner.
        #
        # Unless the transaction commits, Cloud Spanner makes no guarantees about
        # how long the transaction&#x27;s locks were held for. It is an error to
        # use Cloud Spanner locks for any sort of mutual exclusion other than
        # between Cloud Spanner transactions themselves.
        #
        # ### Retrying Aborted Transactions
        #
        # When a transaction aborts, the application can choose to retry the
        # whole transaction again. To maximize the chances of successfully
        # committing the retry, the client should execute the retry in the
        # same session as the original attempt. The original session&#x27;s lock
        # priority increases with each consecutive abort, meaning that each
        # attempt has a slightly better chance of success than the previous.
        #
        # Under some circumstances (e.g., many transactions attempting to
        # modify the same row(s)), a transaction can abort many times in a
        # short period before successfully committing. Thus, it is not a good
        # idea to cap the number of retries a transaction can attempt;
        # instead, it is better to limit the total amount of wall time spent
        # retrying.
        #
        # ### Idle Transactions
        #
        # A transaction is considered idle if it has no outstanding reads or
        # SQL queries and has not started a read or SQL query within the last 10
        # seconds. Idle transactions can be aborted by Cloud Spanner so that they
        # don&#x27;t hold on to locks indefinitely. In that case, the commit will
        # fail with error `ABORTED`.
        #
        # If this behavior is undesirable, periodically executing a simple
        # SQL query in the transaction (e.g., `SELECT 1`) prevents the
        # transaction from becoming idle.
        #
        # ## Snapshot Read-Only Transactions
        #
        # Snapshot read-only transactions provide a simpler method than
        # locking read-write transactions for doing several consistent
        # reads. However, this type of transaction does not support writes.
        #
        # Snapshot transactions do not take locks. Instead, they work by
        # choosing a Cloud Spanner timestamp, then executing all reads at that
        # timestamp. Since they do not acquire locks, they do not block
        # concurrent read-write transactions.
        #
        # Unlike locking read-write transactions, snapshot read-only
        # transactions never abort. They can fail if the chosen read
        # timestamp is garbage collected; however, the default garbage
        # collection policy is generous enough that most applications do not
        # need to worry about this in practice.
        #
        # Snapshot read-only transactions do not need to call
        # Commit or
        # Rollback (and in fact are not
        # permitted to do so).
        #
        # To execute a snapshot transaction, the client specifies a timestamp
        # bound, which tells Cloud Spanner how to choose a read timestamp.
        #
        # The types of timestamp bound are:
        #
        # - Strong (the default).
        # - Bounded staleness.
        # - Exact staleness.
        #
        # If the Cloud Spanner database to be read is geographically distributed,
        # stale read-only transactions can execute more quickly than strong
        # or read-write transactions, because they are able to execute far
        # from the leader replica.
        #
        # Each type of timestamp bound is discussed in detail below.
        #
        # ### Strong
        #
        # Strong reads are guaranteed to see the effects of all transactions
        # that have committed before the start of the read. Furthermore, all
        # rows yielded by a single read are consistent with each other -- if
        # any part of the read observes a transaction, all parts of the read
        # see the transaction.
        #
        # Strong reads are not repeatable: two consecutive strong read-only
        # transactions might return inconsistent results if there are
        # concurrent writes. If consistency across reads is required, the
        # reads should be executed within a transaction or at an exact read
        # timestamp.
        #
        # See TransactionOptions.ReadOnly.strong.
        #
        # ### Exact Staleness
        #
        # These timestamp bounds execute reads at a user-specified
        # timestamp. Reads at a timestamp are guaranteed to see a consistent
        # prefix of the global transaction history: they observe
        # modifications done by all transactions with a commit timestamp &lt;=
        # the read timestamp, and observe none of the modifications done by
        # transactions with a larger commit timestamp. They will block until
        # all conflicting transactions that may be assigned commit timestamps
        # &lt;= the read timestamp have finished.
        #
        # The timestamp can either be expressed as an absolute Cloud Spanner commit
        # timestamp or a staleness relative to the current time.
        #
        # These modes do not require a &quot;negotiation phase&quot; to pick a
        # timestamp. As a result, they execute slightly faster than the
        # equivalent boundedly stale concurrency modes. On the other hand,
        # boundedly stale reads usually return fresher results.
        #
        # See TransactionOptions.ReadOnly.read_timestamp and
        # TransactionOptions.ReadOnly.exact_staleness.
        #
        # ### Bounded Staleness
        #
        # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
        # subject to a user-provided staleness bound. Cloud Spanner chooses the
        # newest timestamp within the staleness bound that allows execution
4747 # of the reads at the closest available replica without blocking.
4748 #
4749 # All rows yielded are consistent with each other -- if any part of
4750 # the read observes a transaction, all parts of the read see the
4751 # transaction. Boundedly stale reads are not repeatable: two stale
4752 # reads, even if they use the same staleness bound, can execute at
4753 # different timestamps and thus return inconsistent results.
4754 #
4755 # Boundedly stale reads execute in two phases: the first phase
4756 # negotiates a timestamp among all replicas needed to serve the
4757 # read. In the second phase, reads are executed at the negotiated
4758 # timestamp.
4759 #
4760 # As a result of the two phase execution, bounded staleness reads are
4761 # usually a little slower than comparable exact staleness
4762 # reads. However, they are typically able to return fresher
4763 # results, and are more likely to execute at the closest replica.
4764 #
4765 # Because the timestamp negotiation requires up-front knowledge of
4766 # which rows will be read, it can only be used with single-use
4767 # read-only transactions.
4768 #
4769 # See TransactionOptions.ReadOnly.max_staleness and
4770 # TransactionOptions.ReadOnly.min_read_timestamp.
4771 #
4772 # ### Old Read Timestamps and Garbage Collection
4773 #
4774 # Cloud Spanner continuously garbage collects deleted and overwritten data
4775 # in the background to reclaim storage space. This process is known
4776 # as &quot;version GC&quot;. By default, version GC reclaims versions after they
4777 # are one hour old. Because of this, Cloud Spanner cannot perform reads
4778 # at read timestamps more than one hour in the past. This
4779 # restriction also applies to in-progress reads and/or SQL queries whose
4780 # timestamp become too old while executing. Reads and SQL queries with
4781 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
4782 #
4783 # ## Partitioned DML Transactions
4784 #
4785 # Partitioned DML transactions are used to execute DML statements with a
4786 # different execution strategy that provides different, and often better,
4787 # scalability properties for large, table-wide operations than DML in a
4788 # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
4789 # should prefer using ReadWrite transactions.
4790 #
4791 # Partitioned DML partitions the keyspace and runs the DML statement on each
4792 # partition in separate, internal transactions. These transactions commit
4793 # automatically when complete, and run independently from one another.
4794 #
4795 # To reduce lock contention, this execution strategy only acquires read locks
4796 # on rows that match the WHERE clause of the statement. Additionally, the
4797 # smaller per-partition transactions hold locks for less time.
4798 #
4799 # That said, Partitioned DML is not a drop-in replacement for standard DML used
4800 # in ReadWrite transactions.
4801 #
4802 # - The DML statement must be fully-partitionable. Specifically, the statement
4803 # must be expressible as the union of many statements which each access only
4804 # a single row of the table.
4805 #
4806 # - The statement is not applied atomically to all rows of the table. Rather,
4807 # the statement is applied atomically to partitions of the table, in
4808 # independent transactions. Secondary index rows are updated atomically
4809 # with the base table rows.
4810 #
4811 # - Partitioned DML does not guarantee exactly-once execution semantics
4812 # against a partition. The statement will be applied at least once to each
4813 # partition. It is strongly recommended that the DML statement should be
4814 # idempotent to avoid unexpected results. For instance, it is potentially
4815 # dangerous to run a statement such as
4816 # `UPDATE table SET column = column + 1` as it could be run multiple times
4817 # against some rows.
4818 #
4819 # - The partitions are committed automatically - there is no support for
4820 # Commit or Rollback. If the call returns an error, or if the client issuing
4821 # the ExecuteSql call dies, it is possible that some rows had the statement
4822 # executed on them successfully. It is also possible that statement was
4823 # never executed against other rows.
4824 #
4825 # - Partitioned DML transactions may only contain the execution of a single
4826 # DML statement via ExecuteSql or ExecuteStreamingSql.
4827 #
4828 # - If any error is encountered during the execution of the partitioned DML
4829 # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
4830 # value that cannot be stored due to schema constraints), then the
4831 # operation is stopped at that point and an error is returned. It is
4832 # possible that at this point, some partitions have been committed (or even
4833 # committed multiple times), and other partitions have not been run at all.
4834 #
4835 # Given the above, Partitioned DML is good fit for large, database-wide,
4836 # operations that are idempotent, such as deleting old rows from a very large
4837 # table.
        &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. # Transaction may write.
            #
            # Authorization to begin a read-write transaction requires
            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
            # on the `session` resource.
            #
            # Currently this transaction type has no options.
        },
        &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
            #
            # Authorization to begin a read-only transaction requires
            # `spanner.databases.beginReadOnlyTransaction` permission
            # on the `session` resource.
          &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
              # reads at a specific timestamp are repeatable; the same read at
              # the same timestamp always returns the same data. If the
              # timestamp is in the future, the read will block until the
              # specified timestamp, modulo the read&#x27;s deadline.
              #
              # Useful for large-scale consistent reads such as mapreduces, or
              # for coordinating many reads against a consistent snapshot of the
              # data.
              #
              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
          &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
              #
              # This is useful for requesting fresher data than some previous
              # read, or data that is fresh enough to observe the effects of some
              # previously committed transaction whose timestamp is known.
              #
              # Note that this option can only be used in single-use transactions.
              #
              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
          &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
              # old. The timestamp is chosen soon after the read is started.
              #
              # Guarantees that all writes that have committed more than the
              # specified number of seconds ago are visible. Because Cloud Spanner
              # chooses the exact timestamp, this mode works even if the client&#x27;s
              # local clock is substantially skewed from Cloud Spanner commit
              # timestamps.
              #
              # Useful for reading at nearby replicas without the distributed
              # timestamp negotiation overhead of `max_staleness`.
          &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
              # seconds. Guarantees that all writes that have committed more
              # than the specified number of seconds ago are visible. Because
              # Cloud Spanner chooses the exact timestamp, this mode works even if
              # the client&#x27;s local clock is substantially skewed from Cloud Spanner
              # commit timestamps.
              #
              # Useful for reading the freshest data available at a nearby
              # replica, while bounding the possible staleness if the local
              # replica has fallen behind.
              #
              # Note that this option can only be used in single-use
              # transactions.
          &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
              # the Transaction message that describes the transaction.
          &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
              # are visible.
        },
        &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
            #
            # Authorization to begin a Partitioned DML transaction requires
            # `spanner.databases.beginPartitionedDmlTransaction` permission
            # on the `session` resource.
        },
      },
      &quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
          # it. The transaction ID of the new transaction is returned in
          # ResultSetMetadata.transaction, which is a Transaction.
          #
          #
          # Each session can have at most one active transaction at a time (note that
          # standalone reads and queries use a transaction internally and do count
          # towards the one transaction limit). After the active transaction is
          # completed, the session can immediately be re-used for the next transaction.
          # It is not necessary to create a new session for each transaction.
          #
          # # Transaction Modes
          #
          # Cloud Spanner supports three transaction modes:
          #
          # 1. Locking read-write. This type of transaction is the only way
          #    to write data into Cloud Spanner. These transactions rely on
          #    pessimistic locking and, if necessary, two-phase commit.
          #    Locking read-write transactions may abort, requiring the
          #    application to retry.
          #
          # 2. Snapshot read-only. This transaction type provides guaranteed
          #    consistency across several reads, but does not allow
          #    writes. Snapshot read-only transactions can be configured to
          #    read at timestamps in the past. Snapshot read-only
          #    transactions do not need to be committed.
          #
          # 3. Partitioned DML. This type of transaction is used to execute
          #    a single Partitioned DML statement. Partitioned DML partitions
          #    the key space and runs the DML statement over each partition
          #    in parallel using separate, internal transactions that commit
          #    independently. Partitioned DML transactions do not need to be
          #    committed.
          #
          # For transactions that only read, snapshot read-only transactions
          # provide simpler semantics and are almost always faster. In
          # particular, read-only transactions do not take locks, so they do
          # not conflict with read-write transactions. As a consequence of not
          # taking locks, they also do not abort, so retry loops are not needed.
          #
          # Transactions may only read/write data in a single database. They
          # may, however, read/write data in different tables within that
          # database.
          #
          # ## Locking Read-Write Transactions
          #
          # Locking transactions may be used to atomically read-modify-write
          # data anywhere in a database. This type of transaction is externally
          # consistent.
          #
          # Clients should attempt to minimize the amount of time a transaction
          # is active. Faster transactions commit with higher probability
          # and cause less contention. Cloud Spanner attempts to keep read locks
          # active as long as the transaction continues to do reads, and the
          # transaction has not been terminated by
          # Commit or
          # Rollback. Long periods of
          # inactivity at the client may cause Cloud Spanner to release a
          # transaction&#x27;s locks and abort it.
          #
          # Conceptually, a read-write transaction consists of zero or more
          # reads or SQL statements followed by
          # Commit. At any time before
          # Commit, the client can send a
          # Rollback request to abort the
          # transaction.
          #
          # ### Semantics
          #
          # Cloud Spanner can commit the transaction if all read locks it acquired
          # are still valid at commit time, and it is able to acquire write
          # locks for all writes. Cloud Spanner can abort the transaction for any
          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
          # that the transaction has not modified any user data in Cloud Spanner.
          #
          # Unless the transaction commits, Cloud Spanner makes no guarantees about
          # how long the transaction&#x27;s locks were held for. It is an error to
          # use Cloud Spanner locks for any sort of mutual exclusion other than
          # between Cloud Spanner transactions themselves.
          #
          # ### Retrying Aborted Transactions
          #
          # When a transaction aborts, the application can choose to retry the
          # whole transaction again. To maximize the chances of successfully
          # committing the retry, the client should execute the retry in the
          # same session as the original attempt. The original session&#x27;s lock
          # priority increases with each consecutive abort, meaning that each
          # attempt has a slightly better chance of success than the previous.
          #
          # Under some circumstances (e.g., many transactions attempting to
          # modify the same row(s)), a transaction can abort many times in a
          # short period before successfully committing. Thus, it is not a good
          # idea to cap the number of retries a transaction can attempt;
          # instead, it is better to limit the total amount of wall time spent
          # retrying.
          #
          # ### Idle Transactions
          #
          # A transaction is considered idle if it has no outstanding reads or
          # SQL queries and has not started a read or SQL query within the last 10
          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
          # don&#x27;t hold on to locks indefinitely. In that case, the commit will
          # fail with error `ABORTED`.
          #
          # If this behavior is undesirable, periodically executing a simple
          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
          # transaction from becoming idle.
          #
          # ## Snapshot Read-Only Transactions
          #
          # Snapshot read-only transactions provide a simpler method than
          # locking read-write transactions for doing several consistent
          # reads. However, this type of transaction does not support writes.
          #
          # Snapshot transactions do not take locks. Instead, they work by
          # choosing a Cloud Spanner timestamp, then executing all reads at that
          # timestamp. Since they do not acquire locks, they do not block
          # concurrent read-write transactions.
          #
          # Unlike locking read-write transactions, snapshot read-only
          # transactions never abort. They can fail if the chosen read
          # timestamp is garbage collected; however, the default garbage
          # collection policy is generous enough that most applications do not
          # need to worry about this in practice.
          #
          # Snapshot read-only transactions do not need to call
          # Commit or
          # Rollback (and in fact are not
          # permitted to do so).
          #
          # To execute a snapshot transaction, the client specifies a timestamp
          # bound, which tells Cloud Spanner how to choose a read timestamp.
          #
          # The types of timestamp bound are:
          #
          # - Strong (the default).
          # - Bounded staleness.
          # - Exact staleness.
          #
          # If the Cloud Spanner database to be read is geographically distributed,
          # stale read-only transactions can execute more quickly than strong
          # or read-write transactions, because they are able to execute far
          # from the leader replica.
          #
          # Each type of timestamp bound is discussed in detail below.
          #
          # ### Strong
          #
          # Strong reads are guaranteed to see the effects of all transactions
          # that have committed before the start of the read. Furthermore, all
          # rows yielded by a single read are consistent with each other -- if
          # any part of the read observes a transaction, all parts of the read
          # see the transaction.
          #
          # Strong reads are not repeatable: two consecutive strong read-only
          # transactions might return inconsistent results if there are
          # concurrent writes. If consistency across reads is required, the
          # reads should be executed within a transaction or at an exact read
          # timestamp.
          #
          # See TransactionOptions.ReadOnly.strong.
          #
          # ### Exact Staleness
          #
          # These timestamp bounds execute reads at a user-specified
          # timestamp. Reads at a timestamp are guaranteed to see a consistent
          # prefix of the global transaction history: they observe
          # modifications done by all transactions with a commit timestamp &lt;=
          # the read timestamp, and observe none of the modifications done by
          # transactions with a larger commit timestamp. They will block until
          # all conflicting transactions that may be assigned commit timestamps
          # &lt;= the read timestamp have finished.
          #
          # The timestamp can either be expressed as an absolute Cloud Spanner commit
          # timestamp or a staleness relative to the current time.
          #
          # These modes do not require a &quot;negotiation phase&quot; to pick a
          # timestamp. As a result, they execute slightly faster than the
          # equivalent boundedly stale concurrency modes. On the other hand,
          # boundedly stale reads usually return fresher results.
          #
          # See TransactionOptions.ReadOnly.read_timestamp and
          # TransactionOptions.ReadOnly.exact_staleness.
          #
          # ### Bounded Staleness
          #
          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
          # subject to a user-provided staleness bound. Cloud Spanner chooses the
          # newest timestamp within the staleness bound that allows execution
          # of the reads at the closest available replica without blocking.
          #
          # All rows yielded are consistent with each other -- if any part of
          # the read observes a transaction, all parts of the read see the
          # transaction. Boundedly stale reads are not repeatable: two stale
          # reads, even if they use the same staleness bound, can execute at
          # different timestamps and thus return inconsistent results.
          #
          # Boundedly stale reads execute in two phases: the first phase
          # negotiates a timestamp among all replicas needed to serve the
          # read. In the second phase, reads are executed at the negotiated
          # timestamp.
          #
          # As a result of the two-phase execution, bounded staleness reads are
          # usually a little slower than comparable exact staleness
          # reads. However, they are typically able to return fresher
          # results, and are more likely to execute at the closest replica.
          #
          # Because the timestamp negotiation requires up-front knowledge of
          # which rows will be read, it can only be used with single-use
          # read-only transactions.
          #
          # See TransactionOptions.ReadOnly.max_staleness and
          # TransactionOptions.ReadOnly.min_read_timestamp.
          #
          # ### Old Read Timestamps and Garbage Collection
          #
          # Cloud Spanner continuously garbage collects deleted and overwritten data
          # in the background to reclaim storage space. This process is known
          # as &quot;version GC&quot;. By default, version GC reclaims versions after they
          # are one hour old. Because of this, Cloud Spanner cannot perform reads
          # at read timestamps more than one hour in the past. This
          # restriction also applies to in-progress reads and/or SQL queries whose
          # timestamps become too old while executing. Reads and SQL queries with
          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
          #
          # ## Partitioned DML Transactions
          #
          # Partitioned DML transactions are used to execute DML statements with a
          # different execution strategy that provides different, and often better,
          # scalability properties for large, table-wide operations than DML in a
          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
          # should prefer using ReadWrite transactions.
          #
          # Partitioned DML partitions the keyspace and runs the DML statement on each
          # partition in separate, internal transactions. These transactions commit
          # automatically when complete, and run independently from one another.
          #
          # To reduce lock contention, this execution strategy only acquires read locks
          # on rows that match the WHERE clause of the statement. Additionally, the
          # smaller per-partition transactions hold locks for less time.
          #
          # That said, Partitioned DML is not a drop-in replacement for standard DML used
          # in ReadWrite transactions.
          #
          # - The DML statement must be fully-partitionable. Specifically, the statement
          #   must be expressible as the union of many statements which each access only
          #   a single row of the table.
          #
          # - The statement is not applied atomically to all rows of the table. Rather,
          #   the statement is applied atomically to partitions of the table, in
          #   independent transactions. Secondary index rows are updated atomically
          #   with the base table rows.
          #
          # - Partitioned DML does not guarantee exactly-once execution semantics
          #   against a partition. The statement will be applied at least once to each
          #   partition. It is strongly recommended that the DML statement be
          #   idempotent to avoid unexpected results. For instance, it is potentially
          #   dangerous to run a statement such as
          #   `UPDATE table SET column = column + 1` as it could be run multiple times
          #   against some rows.
          #
          # - The partitions are committed automatically - there is no support for
          #   Commit or Rollback. If the call returns an error, or if the client issuing
          #   the ExecuteSql call dies, it is possible that some rows had the statement
          #   executed on them successfully. It is also possible that the statement was
          #   never executed against other rows.
          #
          # - Partitioned DML transactions may only contain the execution of a single
          #   DML statement via ExecuteSql or ExecuteStreamingSql.
          #
          # - If any error is encountered during the execution of the partitioned DML
          #   operation (for instance, a UNIQUE INDEX violation, division by zero, or a
          #   value that cannot be stored due to schema constraints), then the
          #   operation is stopped at that point and an error is returned. It is
          #   possible that at this point, some partitions have been committed (or even
          #   committed multiple times), and other partitions have not been run at all.
          #
          # Given the above, Partitioned DML is a good fit for large, database-wide
          # operations that are idempotent, such as deleting old rows from a very large
          # table.
        &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. # Transaction may write.
            #
            # Authorization to begin a read-write transaction requires
            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
            # on the `session` resource.
            #
            # Currently this transaction type has no options.
        },
        &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
            #
            # Authorization to begin a read-only transaction requires
            # `spanner.databases.beginReadOnlyTransaction` permission
            # on the `session` resource.
          &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
              # reads at a specific timestamp are repeatable; the same read at
              # the same timestamp always returns the same data. If the
              # timestamp is in the future, the read will block until the
              # specified timestamp, modulo the read&#x27;s deadline.
              #
              # Useful for large-scale consistent reads such as mapreduces, or
              # for coordinating many reads against a consistent snapshot of the
              # data.
              #
              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
          &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
              #
              # This is useful for requesting fresher data than some previous
              # read, or data that is fresh enough to observe the effects of some
              # previously committed transaction whose timestamp is known.
              #
              # Note that this option can only be used in single-use transactions.
              #
              # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
          &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
              # old. The timestamp is chosen soon after the read is started.
              #
              # Guarantees that all writes that have committed more than the
              # specified number of seconds ago are visible. Because Cloud Spanner
              # chooses the exact timestamp, this mode works even if the client&#x27;s
              # local clock is substantially skewed from Cloud Spanner commit
              # timestamps.
              #
              # Useful for reading at nearby replicas without the distributed
              # timestamp negotiation overhead of `max_staleness`.
          &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
              # seconds. Guarantees that all writes that have committed more
              # than the specified number of seconds ago are visible. Because
              # Cloud Spanner chooses the exact timestamp, this mode works even if
              # the client&#x27;s local clock is substantially skewed from Cloud Spanner
              # commit timestamps.
              #
              # Useful for reading the freshest data available at a nearby
              # replica, while bounding the possible staleness if the local
              # replica has fallen behind.
              #
              # Note that this option can only be used in single-use
              # transactions.
          &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
              # the Transaction message that describes the transaction.
          &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
              # are visible.
        },
        &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
            #
            # Authorization to begin a Partitioned DML transaction requires
            # `spanner.databases.beginPartitionedDmlTransaction` permission
            # on the `session` resource.
        },
      },
      &quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
    },
    &quot;partitionOptions&quot;: { # Options for a PartitionQueryRequest and PartitionReadRequest. # Additional options that affect how many partitions are created.
      &quot;maxPartitions&quot;: &quot;A String&quot;, # **Note:** This hint is currently ignored by PartitionQuery and
          # PartitionRead requests.
          #
          # The desired maximum number of partitions to return. For example, this may
          # be set to the number of workers available. The default for this option
          # is currently 10,000. The maximum value is currently 200,000. This is only
          # a hint. The actual number of partitions returned may be smaller or larger
          # than the requested maximum.
      &quot;partitionSizeBytes&quot;: &quot;A String&quot;, # **Note:** This hint is currently ignored by PartitionQuery and
          # PartitionRead requests.
          #
          # The desired data size for each partition generated. The default for this
          # option is currently 1 GiB. This is only a hint. The actual size of each
          # partition may be smaller or larger than this size request.
    },
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The response for PartitionQuery or PartitionRead.
      &quot;partitions&quot;: [ # Partitions created by this request.
        { # Information returned for each partition returned in a
            # PartitionResponse.
          &quot;partitionToken&quot;: &quot;A String&quot;, # This token can be passed to Read, StreamingRead, ExecuteSql, or
              # ExecuteStreamingSql requests to restrict the results to those identified by
              # this partition token.
        },
      ],
      &quot;transaction&quot;: { # A transaction. # Transaction created by this request.
        &quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
            # for the transaction. Not returned by default: see
            # TransactionOptions.ReadOnly.return_read_timestamp.
            #
            # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
            # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
        &quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
            # Read,
            # ExecuteSql,
            # Commit, or
            # Rollback calls.
            #
            # Single-use read-only transactions do not have IDs, because
            # single-use transactions do not support multiple requests.
      },
    }</pre>
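As a concrete illustration of the request structure documented above, the sketch below assembles a PartitionQuery request body that uses a single-use read-only transaction with a bounded-staleness timestamp bound. The helper name and the literal values are illustrative assumptions, not part of the API; the field names follow the schema shown above.

```python
# Sketch: build a PartitionQuery request body (a plain dict) using a
# single-use read-only transaction with bounded staleness. Both
# maxStaleness and the partitionOptions values are hints/illustrative.
def build_partition_query_body(sql, max_staleness="10s", max_partitions=100):
    return {
        "sql": sql,
        "transaction": {
            "singleUse": {
                "readOnly": {
                    # Bounded staleness is only valid for single-use transactions.
                    "maxStaleness": max_staleness,
                    # Ask Cloud Spanner to report the timestamp it chose.
                    "returnReadTimestamp": True,
                }
            }
        },
        "partitionOptions": {
            # Hints only: the service may return more or fewer partitions.
            "maxPartitions": max_partitions,
        },
    }

body = build_partition_query_body("SELECT id FROM users", max_partitions=64)
```

A body built this way would be passed as the `body` argument of the `partitionQuery` call on a sessions resource.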
5314</div>
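The partition workflow above can be sketched in plain Python. This is a minimal illustration, not a live API call: it builds the JSON body that `partitionRead` accepts, then shows how each `partitionToken` from a hypothetical `PartitionResponse` would be fed back into per-partition streaming-read bodies. The table, column, and transaction-id values are made-up placeholders.

```python
# Sketch of the partitionRead request/response flow using plain dicts.
# No network calls are made; session, table, and transaction values are
# hypothetical placeholders.

def partition_read_body(table, columns, transaction_id):
    """Build the JSON body for sessions.partitionRead()."""
    return {
        "table": table,
        "columns": columns,
        # A read-only snapshot transaction created earlier (e.g. via
        # beginTransaction); single-use transactions are not allowed here.
        "transaction": {"id": transaction_id},
        # An all-keys KeySet; keys named in `keys`/`ranges` are yielded once.
        "keySet": {"all": True},
        # Hint only: the service may return more or fewer partitions.
        "partitionOptions": {"maxPartitions": "100"},
    }

def read_bodies_for_partitions(base_body, partition_response):
    """One streaming-read body per partitionToken in the response."""
    bodies = []
    for part in partition_response.get("partitions", []):
        body = {k: base_body[k] for k in ("table", "columns", "transaction", "keySet")}
        body["partitionToken"] = part["partitionToken"]
        bodies.append(body)
    return bodies

body = partition_read_body("UserEvents", ["UserName", "EventDate"], "txn-123")
fake_response = {"partitions": [{"partitionToken": "tok-a"},
                                {"partitionToken": "tok-b"}]}
reads = read_bodies_for_partitions(body, fake_response)
```

Each entry in `reads` must reuse the same session and read-only transaction that created the tokens, as the method description below requires.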

<div class="method">
    <code class="details" id="partitionRead">partitionRead(session, body=None, x__xgafv=None)</code>
  <pre>Creates a set of partition tokens that can be used to execute a read
operation in parallel. Each of the returned partition tokens can be used
by StreamingRead to specify a subset of the read
result to read. The same session and read-only transaction must be used by
the PartitionReadRequest used to create the partition tokens and the
ReadRequests that use the partition tokens. There are no ordering
guarantees on rows returned among the returned partition tokens, or even
within each individual StreamingRead call issued with a partition_token.

Partition tokens become invalid when the session used to create them
is deleted, is idle for too long, begins a new transaction, or becomes too
old. When any of these happen, it is not possible to resume the read, and
the whole operation must be restarted from the beginning.

Args:
  session: string, Required. The session used to create the partitions. (required)
  body: object, The request body.
  The object takes the form of:

{ # The request for PartitionRead
    &quot;index&quot;: &quot;A String&quot;, # If non-empty, the name of an index on table. This index is
        # used instead of the table primary key when interpreting key_set
        # and sorting result rows. See key_set for further information.
    &quot;table&quot;: &quot;A String&quot;, # Required. The name of the table in the database to be read.
    &quot;keySet&quot;: { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
        # primary keys of the rows in table to be yielded, unless index
        # is present. If index is present, then key_set instead names
        # index keys in index.
        #
        # It is not an error for the `key_set` to name rows that do not
        # exist in the database. Read yields nothing for nonexistent rows.
        # the keys are expected to be in the same table or index. The keys need
        # not be sorted in any particular way.
        #
        # If the same key is specified multiple times in the set (for example
        # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
        # behaves as if the key were only specified once.
      &quot;ranges&quot;: [ # A list of key ranges. See KeyRange for more information about
          # key range specifications.
        { # KeyRange represents a range of rows in a table or index.
            #
            # A range has a start key and an end key. These keys can be open or
            # closed, indicating if the range includes rows with that key.
            #
            # Keys are represented by lists, where the ith value in the list
            # corresponds to the ith component of the table or index primary key.
            # Individual values are encoded as described
            # here.
            #
            # For example, consider the following table definition:
            #
            #     CREATE TABLE UserEvents (
            #       UserName STRING(MAX),
            #       EventDate STRING(10)
            #     ) PRIMARY KEY(UserName, EventDate);
            #
            # The following keys name rows in this table:
            #
            #     &quot;Bob&quot;, &quot;2014-09-23&quot;
            #
            # Since the `UserEvents` table&#x27;s `PRIMARY KEY` clause names two
            # columns, each `UserEvents` key has two elements; the first is the
            # `UserName`, and the second is the `EventDate`.
            #
            # Key ranges with multiple components are interpreted
            # lexicographically by component using the table or index key&#x27;s declared
            # sort order. For example, the following range returns all events for
            # user `&quot;Bob&quot;` that occurred in the year 2015:
            #
            #     &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2015-01-01&quot;]
            #     &quot;end_closed&quot;: [&quot;Bob&quot;, &quot;2015-12-31&quot;]
            #
            # Start and end keys can omit trailing key components. This affects the
            # inclusion and exclusion of rows that exactly match the provided key
            # components: if the key is closed, then rows that exactly match the
            # provided components are included; if the key is open, then rows
            # that exactly match are not included.
            #
            # For example, the following range includes all events for `&quot;Bob&quot;` that
            # occurred during and after the year 2000:
            #
            #     &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
            #     &quot;end_closed&quot;: [&quot;Bob&quot;]
            #
            # The next example retrieves all events for `&quot;Bob&quot;`:
            #
            #     &quot;start_closed&quot;: [&quot;Bob&quot;]
            #     &quot;end_closed&quot;: [&quot;Bob&quot;]
            #
            # To retrieve events before the year 2000:
            #
            #     &quot;start_closed&quot;: [&quot;Bob&quot;]
            #     &quot;end_open&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
            #
            # The following range includes all rows in the table:
            #
            #     &quot;start_closed&quot;: []
            #     &quot;end_closed&quot;: []
            #
            # This range returns all users whose `UserName` begins with any
            # character from A to C:
            #
            #     &quot;start_closed&quot;: [&quot;A&quot;]
            #     &quot;end_open&quot;: [&quot;D&quot;]
            #
            # This range returns all users whose `UserName` begins with B:
            #
            #     &quot;start_closed&quot;: [&quot;B&quot;]
            #     &quot;end_open&quot;: [&quot;C&quot;]
            #
            # Key ranges honor column sort order. For example, suppose a table is
            # defined as follows:
            #
            #     CREATE TABLE DescendingSortedTable (
            #       Key INT64,
            #       ...
            #     ) PRIMARY KEY(Key DESC);
            #
            # The following range retrieves all rows with key values between 1
            # and 100 inclusive:
            #
            #     &quot;start_closed&quot;: [&quot;100&quot;]
            #     &quot;end_closed&quot;: [&quot;1&quot;]
            #
            # Note that 100 is passed as the start, and 1 is passed as the end,
            # because `Key` is a descending column in the schema.
          &quot;endClosed&quot;: [ # If the end is closed, then the range includes all rows whose
              # first `len(end_closed)` key columns exactly match `end_closed`.
            &quot;&quot;,
          ],
          &quot;startClosed&quot;: [ # If the start is closed, then the range includes all rows whose
              # first `len(start_closed)` key columns exactly match `start_closed`.
            &quot;&quot;,
          ],
          &quot;startOpen&quot;: [ # If the start is open, then the range excludes rows whose first
              # `len(start_open)` key columns exactly match `start_open`.
            &quot;&quot;,
          ],
          &quot;endOpen&quot;: [ # If the end is open, then the range excludes rows whose first
              # `len(end_open)` key columns exactly match `end_open`.
            &quot;&quot;,
          ],
        },
      ],
      &quot;keys&quot;: [ # A list of specific keys. Entries in `keys` should have exactly as
          # many elements as there are columns in the primary or index key
          # with which this `KeySet` is used. Individual key values are
          # encoded as described here.
        [
          &quot;&quot;,
        ],
      ],
      &quot;all&quot;: True or False, # For convenience `all` can be set to `true` to indicate that this
          # `KeySet` matches all keys in the table or index. Note that any keys
          # specified in `keys` or `ranges` are only yielded once.
    },
    &quot;partitionOptions&quot;: { # Options for a PartitionQueryRequest and # Additional options that affect how many partitions are created.
        # PartitionReadRequest.
      &quot;maxPartitions&quot;: &quot;A String&quot;, # **Note:** This hint is currently ignored by PartitionQuery and
          # PartitionRead requests.
          #
          # The desired maximum number of partitions to return. For example, this may
          # be set to the number of workers available. The default for this option
          # is currently 10,000. The maximum value is currently 200,000. This is only
          # a hint. The actual number of partitions returned may be smaller or larger
          # than this maximum count request.
      &quot;partitionSizeBytes&quot;: &quot;A String&quot;, # **Note:** This hint is currently ignored by PartitionQuery and
          # PartitionRead requests.
          #
          # The desired data size for each partition generated. The default for this
          # option is currently 1 GiB. This is only a hint. The actual size of each
          # partition may be smaller or larger than this size request.
    },
    &quot;columns&quot;: [ # The columns of table to be returned for each row matching
        # this request.
      &quot;A String&quot;,
    ],
    &quot;transaction&quot;: { # This message is used to select the transaction in which a # Read-only snapshot transactions are supported; read/write and single-use
        # transactions are not.
        # Read or
        # ExecuteSql call runs.
        #
        # See TransactionOptions for more information about transactions.
      &quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
          # This is the most efficient way to execute a transaction that
          # consists of a single SQL query.
          #
          #
          # Each session can have at most one active transaction at a time (note that
          # standalone reads and queries use a transaction internally and do count
          # towards the one transaction limit). After the active transaction is
          # completed, the session can immediately be re-used for the next transaction.
          # It is not necessary to create a new session for each transaction.
          #
          # # Transaction Modes
          #
          # Cloud Spanner supports three transaction modes:
          #
          # 1. Locking read-write. This type of transaction is the only way
          #    to write data into Cloud Spanner. These transactions rely on
          #    pessimistic locking and, if necessary, two-phase commit.
          #    Locking read-write transactions may abort, requiring the
          #    application to retry.
          #
          # 2. Snapshot read-only. This transaction type provides guaranteed
          #    consistency across several reads, but does not allow
          #    writes. Snapshot read-only transactions can be configured to
          #    read at timestamps in the past. Snapshot read-only
          #    transactions do not need to be committed.
          #
          # 3. Partitioned DML. This type of transaction is used to execute
          #    a single Partitioned DML statement. Partitioned DML partitions
          #    the key space and runs the DML statement over each partition
          #    in parallel using separate, internal transactions that commit
          #    independently. Partitioned DML transactions do not need to be
          #    committed.
          #
          # For transactions that only read, snapshot read-only transactions
          # provide simpler semantics and are almost always faster. In
          # particular, read-only transactions do not take locks, so they do
          # not conflict with read-write transactions. As a consequence of not
          # taking locks, they also do not abort, so retry loops are not needed.
          #
          # Transactions may only read/write data in a single database. They
          # may, however, read/write data in different tables within that
          # database.
          #
          # ## Locking Read-Write Transactions
          #
          # Locking transactions may be used to atomically read-modify-write
          # data anywhere in a database. This type of transaction is externally
          # consistent.
          #
          # Clients should attempt to minimize the amount of time a transaction
          # is active. Faster transactions commit with higher probability
          # and cause less contention. Cloud Spanner attempts to keep read locks
          # active as long as the transaction continues to do reads, and the
          # transaction has not been terminated by
          # Commit or
          # Rollback. Long periods of
          # inactivity at the client may cause Cloud Spanner to release a
          # transaction&#x27;s locks and abort it.
          #
          # Conceptually, a read-write transaction consists of zero or more
          # reads or SQL statements followed by
          # Commit. At any time before
          # Commit, the client can send a
          # Rollback request to abort the
          # transaction.
          #
          # ### Semantics
          #
          # Cloud Spanner can commit the transaction if all read locks it acquired
          # are still valid at commit time, and it is able to acquire write
          # locks for all writes. Cloud Spanner can abort the transaction for any
          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
          # that the transaction has not modified any user data in Cloud Spanner.
          #
          # Unless the transaction commits, Cloud Spanner makes no guarantees about
          # how long the transaction&#x27;s locks were held for. It is an error to
          # use Cloud Spanner locks for any sort of mutual exclusion other than
          # between Cloud Spanner transactions themselves.
          #
          # ### Retrying Aborted Transactions
          #
          # When a transaction aborts, the application can choose to retry the
          # whole transaction again. To maximize the chances of successfully
          # committing the retry, the client should execute the retry in the
          # same session as the original attempt. The original session&#x27;s lock
          # priority increases with each consecutive abort, meaning that each
          # attempt has a slightly better chance of success than the previous.
          #
          # Under some circumstances (e.g., many transactions attempting to
          # modify the same row(s)), a transaction can abort many times in a
          # short period before successfully committing. Thus, it is not a good
          # idea to cap the number of retries a transaction can attempt;
          # instead, it is better to limit the total amount of wall time spent
          # retrying.
          #
          # ### Idle Transactions
          #
          # A transaction is considered idle if it has no outstanding reads or
          # SQL queries and has not started a read or SQL query within the last 10
          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
          # don&#x27;t hold on to locks indefinitely. In that case, the commit will
          # fail with error `ABORTED`.
          #
          # If this behavior is undesirable, periodically executing a simple
          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
          # transaction from becoming idle.
          #
          # ## Snapshot Read-Only Transactions
          #
          # Snapshot read-only transactions provide a simpler method than
          # locking read-write transactions for doing several consistent
          # reads. However, this type of transaction does not support writes.
          #
          # Snapshot transactions do not take locks. Instead, they work by
          # choosing a Cloud Spanner timestamp, then executing all reads at that
          # timestamp. Since they do not acquire locks, they do not block
          # concurrent read-write transactions.
          #
          # Unlike locking read-write transactions, snapshot read-only
          # transactions never abort. They can fail if the chosen read
          # timestamp is garbage collected; however, the default garbage
          # collection policy is generous enough that most applications do not
          # need to worry about this in practice.
          #
          # Snapshot read-only transactions do not need to call
          # Commit or
          # Rollback (and in fact are not
          # permitted to do so).
          #
          # To execute a snapshot transaction, the client specifies a timestamp
          # bound, which tells Cloud Spanner how to choose a read timestamp.
          #
          # The types of timestamp bound are:
          #
          # - Strong (the default).
          # - Bounded staleness.
          # - Exact staleness.
          #
          # If the Cloud Spanner database to be read is geographically distributed,
          # stale read-only transactions can execute more quickly than strong
          # or read-write transactions, because they are able to execute far
          # from the leader replica.
          #
          # Each type of timestamp bound is discussed in detail below.
          #
          # ### Strong
          #
          # Strong reads are guaranteed to see the effects of all transactions
          # that have committed before the start of the read. Furthermore, all
          # rows yielded by a single read are consistent with each other -- if
          # any part of the read observes a transaction, all parts of the read
          # see the transaction.
          #
          # Strong reads are not repeatable: two consecutive strong read-only
          # transactions might return inconsistent results if there are
          # concurrent writes. If consistency across reads is required, the
          # reads should be executed within a transaction or at an exact read
          # timestamp.
          #
          # See TransactionOptions.ReadOnly.strong.
          #
          # ### Exact Staleness
          #
          # These timestamp bounds execute reads at a user-specified
          # timestamp. Reads at a timestamp are guaranteed to see a consistent
          # prefix of the global transaction history: they observe
          # modifications done by all transactions with a commit timestamp &lt;=
          # the read timestamp, and observe none of the modifications done by
          # transactions with a larger commit timestamp. They will block until
          # all conflicting transactions that may be assigned commit timestamps
          # &lt;= the read timestamp have finished.
          #
          # The timestamp can either be expressed as an absolute Cloud Spanner commit
          # timestamp or a staleness relative to the current time.
          #
          # These modes do not require a &quot;negotiation phase&quot; to pick a
          # timestamp. As a result, they execute slightly faster than the
          # equivalent boundedly stale concurrency modes. On the other hand,
          # boundedly stale reads usually return fresher results.
          #
          # See TransactionOptions.ReadOnly.read_timestamp and
          # TransactionOptions.ReadOnly.exact_staleness.
          #
          # ### Bounded Staleness
          #
          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
          # subject to a user-provided staleness bound. Cloud Spanner chooses the
          # newest timestamp within the staleness bound that allows execution
          # of the reads at the closest available replica without blocking.
          #
          # All rows yielded are consistent with each other -- if any part of
          # the read observes a transaction, all parts of the read see the
          # transaction. Boundedly stale reads are not repeatable: two stale
          # reads, even if they use the same staleness bound, can execute at
          # different timestamps and thus return inconsistent results.
          #
          # Boundedly stale reads execute in two phases: the first phase
          # negotiates a timestamp among all replicas needed to serve the
          # read. In the second phase, reads are executed at the negotiated
          # timestamp.
          #
          # As a result of the two phase execution, bounded staleness reads are
          # usually a little slower than comparable exact staleness
          # reads. However, they are typically able to return fresher
          # results, and are more likely to execute at the closest replica.
          #
          # Because the timestamp negotiation requires up-front knowledge of
          # which rows will be read, it can only be used with single-use
          # read-only transactions.
          #
          # See TransactionOptions.ReadOnly.max_staleness and
          # TransactionOptions.ReadOnly.min_read_timestamp.
          #
          # ### Old Read Timestamps and Garbage Collection
          #
          # Cloud Spanner continuously garbage collects deleted and overwritten data
          # in the background to reclaim storage space. This process is known
          # as &quot;version GC&quot;. By default, version GC reclaims versions after they
          # are one hour old. Because of this, Cloud Spanner cannot perform reads
          # at read timestamps more than one hour in the past. This
          # restriction also applies to in-progress reads and/or SQL queries whose
          # timestamps become too old while executing. Reads and SQL queries with
          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
          #
          # ## Partitioned DML Transactions
          #
          # Partitioned DML transactions are used to execute DML statements with a
          # different execution strategy that provides different, and often better,
          # scalability properties for large, table-wide operations than DML in a
          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
          # should prefer using ReadWrite transactions.
          #
          # Partitioned DML partitions the keyspace and runs the DML statement on each
          # partition in separate, internal transactions. These transactions commit
          # automatically when complete, and run independently from one another.
          #
          # To reduce lock contention, this execution strategy only acquires read locks
          # on rows that match the WHERE clause of the statement. Additionally, the
          # smaller per-partition transactions hold locks for less time.
          #
          # That said, Partitioned DML is not a drop-in replacement for standard DML used
          # in ReadWrite transactions.
          #
          # - The DML statement must be fully-partitionable. Specifically, the statement
          #   must be expressible as the union of many statements which each access only
          #   a single row of the table.
          #
          # - The statement is not applied atomically to all rows of the table. Rather,
          #   the statement is applied atomically to partitions of the table, in
          #   independent transactions. Secondary index rows are updated atomically
          #   with the base table rows.
          #
          # - Partitioned DML does not guarantee exactly-once execution semantics
          #   against a partition. The statement will be applied at least once to each
          #   partition. It is strongly recommended that the DML statement should be
          #   idempotent to avoid unexpected results. For instance, it is potentially
          #   dangerous to run a statement such as
          #   `UPDATE table SET column = column + 1` as it could be run multiple times
          #   against some rows.
          #
          # - The partitions are committed automatically - there is no support for
          #   Commit or Rollback. If the call returns an error, or if the client issuing
          #   the ExecuteSql call dies, it is possible that some rows had the statement
          #   executed on them successfully. It is also possible that the statement was
          #   never executed against other rows.
          #
          # - Partitioned DML transactions may only contain the execution of a single
          #   DML statement via ExecuteSql or ExecuteStreamingSql.
          #
          # - If any error is encountered during the execution of the partitioned DML
          #   operation (for instance, a UNIQUE INDEX violation, division by zero, or a
          #   value that cannot be stored due to schema constraints), then the
          #   operation is stopped at that point and an error is returned. It is
          #   possible that at this point, some partitions have been committed (or even
          #   committed multiple times), and other partitions have not been run at all.
          #
          # Given the above, Partitioned DML is a good fit for large, database-wide
          # operations that are idempotent, such as deleting old rows from a very large
          # table.
        &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
            #
            # Authorization to begin a read-write transaction requires
            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
            # on the `session` resource.
            # transaction type has no options.
        },
        &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
            #
            # Authorization to begin a read-only transaction requires
            # `spanner.databases.beginReadOnlyTransaction` permission
            # on the `session` resource.
          &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
              # reads at a specific timestamp are repeatable; the same read at
              # the same timestamp always returns the same data. If the
              # timestamp is in the future, the read will block until the
              # specified timestamp, modulo the read&#x27;s deadline.
              #
              # Useful for large scale consistent reads such as mapreduces, or
              # for coordinating many reads against a consistent snapshot of the
              # data.
              #
              # A timestamp in RFC3339 UTC &quot;Zulu&quot; format, accurate to nanoseconds.
              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
          &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
              #
              # This is useful for requesting fresher data than some previous
              # read, or data that is fresh enough to observe the effects of some
              # previously committed transaction whose timestamp is known.
              #
              # Note that this option can only be used in single-use transactions.
              #
              # A timestamp in RFC3339 UTC &quot;Zulu&quot; format, accurate to nanoseconds.
              # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
          &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
              # old. The timestamp is chosen soon after the read is started.
              #
              # Guarantees that all writes that have committed more than the
              # specified number of seconds ago are visible. Because Cloud Spanner
              # chooses the exact timestamp, this mode works even if the client&#x27;s
              # local clock is substantially skewed from Cloud Spanner commit
              # timestamps.
              #
              # Useful for reading at nearby replicas without the distributed
              # timestamp negotiation overhead of `max_staleness`.
          &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
              # seconds. Guarantees that all writes that have committed more
              # than the specified number of seconds ago are visible. Because
              # Cloud Spanner chooses the exact timestamp, this mode works even if
              # the client&#x27;s local clock is substantially skewed from Cloud Spanner
              # commit timestamps.
              #
              # Useful for reading the freshest data available at a nearby
              # replica, while bounding the possible staleness if the local
              # replica has fallen behind.
              #
              # Note that this option can only be used in single-use
              # transactions.
          &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
              # the Transaction message that describes the transaction.
          &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
              # are visible.
        },
        &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
            #
            # Authorization to begin a Partitioned DML transaction requires
            # `spanner.databases.beginPartitionedDmlTransaction` permission
            # on the `session` resource.
        },
      },
      &quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
          # it. The transaction ID of the new transaction is returned in
          # ResultSetMetadata.transaction, which is a Transaction.
          #
          #
          # Each session can have at most one active transaction at a time (note that
          # standalone reads and queries use a transaction internally and do count
          # towards the one transaction limit). After the active transaction is
          # completed, the session can immediately be re-used for the next transaction.
          # It is not necessary to create a new session for each transaction.
          #
          # # Transaction Modes
          #
          # Cloud Spanner supports three transaction modes:
          #
          # 1. Locking read-write. This type of transaction is the only way
          #    to write data into Cloud Spanner. These transactions rely on
          #    pessimistic locking and, if necessary, two-phase commit.
          #    Locking read-write transactions may abort, requiring the
          #    application to retry.
          #
          # 2. Snapshot read-only. This transaction type provides guaranteed
          #    consistency across several reads, but does not allow
          #    writes. Snapshot read-only transactions can be configured to
          #    read at timestamps in the past. Snapshot read-only
          #    transactions do not need to be committed.
          #
          # 3. Partitioned DML. This type of transaction is used to execute
          #    a single Partitioned DML statement. Partitioned DML partitions
          #    the key space and runs the DML statement over each partition
          #    in parallel using separate, internal transactions that commit
          #    independently. Partitioned DML transactions do not need to be
          #    committed.
          #
          # For transactions that only read, snapshot read-only transactions
          # provide simpler semantics and are almost always faster. In
          # particular, read-only transactions do not take locks, so they do
          # not conflict with read-write transactions. As a consequence of not
          # taking locks, they also do not abort, so retry loops are not needed.
          #
          # Transactions may only read/write data in a single database. They
          # may, however, read/write data in different tables within that
          # database.
          #
          # ## Locking Read-Write Transactions
          #
          # Locking transactions may be used to atomically read-modify-write
          # data anywhere in a database. This type of transaction is externally
          # consistent.
          #
          # Clients should attempt to minimize the amount of time a transaction
          # is active. Faster transactions commit with higher probability
          # and cause less contention. Cloud Spanner attempts to keep read locks
          # active as long as the transaction continues to do reads, and the
          # transaction has not been terminated by
          # Commit or
          # Rollback. Long periods of
          # inactivity at the client may cause Cloud Spanner to release a
          # transaction&#x27;s locks and abort it.
          #
          # Conceptually, a read-write transaction consists of zero or more
          # reads or SQL statements followed by
          # Commit. At any time before
          # Commit, the client can send a
          # Rollback request to abort the
          # transaction.
          #
          # ### Semantics
          #
          # Cloud Spanner can commit the transaction if all read locks it acquired
          # are still valid at commit time, and it is able to acquire write
          # locks for all writes. Cloud Spanner can abort the transaction for any
          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
          # that the transaction has not modified any user data in Cloud Spanner.
          #
          # Unless the transaction commits, Cloud Spanner makes no guarantees about
          # how long the transaction&#x27;s locks were held for. It is an error to
          # use Cloud Spanner locks for any sort of mutual exclusion other than
          # between Cloud Spanner transactions themselves.
          #
          # ### Retrying Aborted Transactions
          #
          # When a transaction aborts, the application can choose to retry the
          # whole transaction again. To maximize the chances of successfully
          # committing the retry, the client should execute the retry in the
          # same session as the original attempt. The original session&#x27;s lock
          # priority increases with each consecutive abort, meaning that each
5938 # attempt has a slightly better chance of success than the previous.
5939 #
5940 # Under some circumstances (e.g., many transactions attempting to
5941 # modify the same row(s)), a transaction can abort many times in a
5942 # short period before successfully committing. Thus, it is not a good
5943 # idea to cap the number of retries a transaction can attempt;
5944 # instead, it is better to limit the total amount of wall time spent
5945 # retrying.
5946 #
5947 # ### Idle Transactions
5948 #
5949 # A transaction is considered idle if it has no outstanding reads or
5950 # SQL queries and has not started a read or SQL query within the last 10
5951 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
5952 # don&#x27;t hold on to locks indefinitely. In that case, the commit will
5953 # fail with error `ABORTED`.
5954 #
5955 # If this behavior is undesirable, periodically executing a simple
5956 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
5957 # transaction from becoming idle.
5958 #
5959 # ## Snapshot Read-Only Transactions
5960 #
5961 # Snapshot read-only transactions provides a simpler method than
5962 # locking read-write transactions for doing several consistent
5963 # reads. However, this type of transaction does not support writes.
5964 #
5965 # Snapshot transactions do not take locks. Instead, they work by
5966 # choosing a Cloud Spanner timestamp, then executing all reads at that
5967 # timestamp. Since they do not acquire locks, they do not block
5968 # concurrent read-write transactions.
5969 #
5970 # Unlike locking read-write transactions, snapshot read-only
5971 # transactions never abort. They can fail if the chosen read
5972 # timestamp is garbage collected; however, the default garbage
5973 # collection policy is generous enough that most applications do not
5974 # need to worry about this in practice.
5975 #
5976 # Snapshot read-only transactions do not need to call
5977 # Commit or
5978 # Rollback (and in fact are not
5979 # permitted to do so).
5980 #
5981 # To execute a snapshot transaction, the client specifies a timestamp
5982 # bound, which tells Cloud Spanner how to choose a read timestamp.
5983 #
5984 # The types of timestamp bound are:
5985 #
5986 # - Strong (the default).
5987 # - Bounded staleness.
5988 # - Exact staleness.
5989 #
5990 # If the Cloud Spanner database to be read is geographically distributed,
5991 # stale read-only transactions can execute more quickly than strong
5992 # or read-write transaction, because they are able to execute far
5993 # from the leader replica.
5994 #
5995 # Each type of timestamp bound is discussed in detail below.
5996 #
5997 # ### Strong
5998 #
5999 # Strong reads are guaranteed to see the effects of all transactions
6000 # that have committed before the start of the read. Furthermore, all
6001 # rows yielded by a single read are consistent with each other -- if
6002 # any part of the read observes a transaction, all parts of the read
6003 # see the transaction.
6004 #
6005 # Strong reads are not repeatable: two consecutive strong read-only
6006 # transactions might return inconsistent results if there are
6007 # concurrent writes. If consistency across reads is required, the
6008 # reads should be executed within a transaction or at an exact read
6009 # timestamp.
6010 #
6011 # See TransactionOptions.ReadOnly.strong.
6012 #
6013 # ### Exact Staleness
6014 #
6015 # These timestamp bounds execute reads at a user-specified
6016 # timestamp. Reads at a timestamp are guaranteed to see a consistent
6017 # prefix of the global transaction history: they observe
6018 # modifications done by all transactions with a commit timestamp &lt;=
6019 # the read timestamp, and observe none of the modifications done by
6020 # transactions with a larger commit timestamp. They will block until
6021 # all conflicting transactions that may be assigned commit timestamps
6022 # &lt;= the read timestamp have finished.
6023 #
6024 # The timestamp can either be expressed as an absolute Cloud Spanner commit
6025 # timestamp or a staleness relative to the current time.
6026 #
6027 # These modes do not require a &quot;negotiation phase&quot; to pick a
6028 # timestamp. As a result, they execute slightly faster than the
6029 # equivalent boundedly stale concurrency modes. On the other hand,
6030 # boundedly stale reads usually return fresher results.
6031 #
6032 # See TransactionOptions.ReadOnly.read_timestamp and
6033 # TransactionOptions.ReadOnly.exact_staleness.
6034 #
6035 # ### Bounded Staleness
6036 #
6037 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
6038 # subject to a user-provided staleness bound. Cloud Spanner chooses the
6039 # newest timestamp within the staleness bound that allows execution
6040 # of the reads at the closest available replica without blocking.
6041 #
6042 # All rows yielded are consistent with each other -- if any part of
6043 # the read observes a transaction, all parts of the read see the
6044 # transaction. Boundedly stale reads are not repeatable: two stale
6045 # reads, even if they use the same staleness bound, can execute at
6046 # different timestamps and thus return inconsistent results.
6047 #
6048 # Boundedly stale reads execute in two phases: the first phase
6049 # negotiates a timestamp among all replicas needed to serve the
6050 # read. In the second phase, reads are executed at the negotiated
6051 # timestamp.
6052 #
6053 # As a result of the two phase execution, bounded staleness reads are
6054 # usually a little slower than comparable exact staleness
6055 # reads. However, they are typically able to return fresher
6056 # results, and are more likely to execute at the closest replica.
6057 #
6058 # Because the timestamp negotiation requires up-front knowledge of
6059 # which rows will be read, it can only be used with single-use
6060 # read-only transactions.
6061 #
6062 # See TransactionOptions.ReadOnly.max_staleness and
6063 # TransactionOptions.ReadOnly.min_read_timestamp.
6064 #
6065 # ### Old Read Timestamps and Garbage Collection
6066 #
6067 # Cloud Spanner continuously garbage collects deleted and overwritten data
6068 # in the background to reclaim storage space. This process is known
6069 # as &quot;version GC&quot;. By default, version GC reclaims versions after they
6070 # are one hour old. Because of this, Cloud Spanner cannot perform reads
6071 # at read timestamps more than one hour in the past. This
6072 # restriction also applies to in-progress reads and/or SQL queries whose
6073 # timestamp become too old while executing. Reads and SQL queries with
6074 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
6075 #
6076 # ## Partitioned DML Transactions
6077 #
6078 # Partitioned DML transactions are used to execute DML statements with a
6079 # different execution strategy that provides different, and often better,
6080 # scalability properties for large, table-wide operations than DML in a
6081 # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
6082 # should prefer using ReadWrite transactions.
6083 #
6084 # Partitioned DML partitions the keyspace and runs the DML statement on each
6085 # partition in separate, internal transactions. These transactions commit
6086 # automatically when complete, and run independently from one another.
6087 #
6088 # To reduce lock contention, this execution strategy only acquires read locks
6089 # on rows that match the WHERE clause of the statement. Additionally, the
6090 # smaller per-partition transactions hold locks for less time.
6091 #
6092 # That said, Partitioned DML is not a drop-in replacement for standard DML used
6093 # in ReadWrite transactions.
6094 #
6095 # - The DML statement must be fully-partitionable. Specifically, the statement
6096 # must be expressible as the union of many statements which each access only
6097 # a single row of the table.
6098 #
6099 # - The statement is not applied atomically to all rows of the table. Rather,
6100 # the statement is applied atomically to partitions of the table, in
6101 # independent transactions. Secondary index rows are updated atomically
6102 # with the base table rows.
6103 #
6104 # - Partitioned DML does not guarantee exactly-once execution semantics
6105 # against a partition. The statement will be applied at least once to each
6106 # partition. It is strongly recommended that the DML statement should be
6107 # idempotent to avoid unexpected results. For instance, it is potentially
6108 # dangerous to run a statement such as
6109 # `UPDATE table SET column = column + 1` as it could be run multiple times
6110 # against some rows.
6111 #
6112 # - The partitions are committed automatically - there is no support for
6113 # Commit or Rollback. If the call returns an error, or if the client issuing
6114 # the ExecuteSql call dies, it is possible that some rows had the statement
6115 # executed on them successfully. It is also possible that statement was
6116 # never executed against other rows.
6117 #
6118 # - Partitioned DML transactions may only contain the execution of a single
6119 # DML statement via ExecuteSql or ExecuteStreamingSql.
6120 #
6121 # - If any error is encountered during the execution of the partitioned DML
6122 # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
6123 # value that cannot be stored due to schema constraints), then the
6124 # operation is stopped at that point and an error is returned. It is
6125 # possible that at this point, some partitions have been committed (or even
6126 # committed multiple times), and other partitions have not been run at all.
6127 #
6128 # Given the above, Partitioned DML is good fit for large, database-wide,
6129 # operations that are idempotent, such as deleting old rows from a very large
6130 # table.
6131 &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
6132 #
6133 # Authorization to begin a read-write transaction requires
6134 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
6135 # on the `session` resource.
6136 # transaction type has no options.
6137 },
6138 &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
6139 #
6140 # Authorization to begin a read-only transaction requires
6141 # `spanner.databases.beginReadOnlyTransaction` permission
6142 # on the `session` resource.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07006143 &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
6144 # reads at a specific timestamp are repeatable; the same read at
6145 # the same timestamp always returns the same data. If the
6146 # timestamp is in the future, the read will block until the
6147 # specified timestamp, modulo the read&#x27;s deadline.
6148 #
6149 # Useful for large scale consistent reads such as mapreduces, or
6150 # for coordinating many reads against a consistent snapshot of the
6151 # data.
6152 #
6153 # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
6154 # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07006155 &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
6156 #
6157 # This is useful for requesting fresher data than some previous
6158 # read, or data that is fresh enough to observe the effects of some
6159 # previously committed transaction whose timestamp is known.
6160 #
6161 # Note that this option can only be used in single-use transactions.
6162 #
6163 # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
6164 # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
6165 &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
6166 # old. The timestamp is chosen soon after the read is started.
6167 #
6168 # Guarantees that all writes that have committed more than the
6169 # specified number of seconds ago are visible. Because Cloud Spanner
6170 # chooses the exact timestamp, this mode works even if the client&#x27;s
6171 # local clock is substantially skewed from Cloud Spanner commit
6172 # timestamps.
6173 #
6174 # Useful for reading at nearby replicas without the distributed
6175 # timestamp negotiation overhead of `max_staleness`.
6176 &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
6177 # seconds. Guarantees that all writes that have committed more
6178 # than the specified number of seconds ago are visible. Because
6179 # Cloud Spanner chooses the exact timestamp, this mode works even if
6180 # the client&#x27;s local clock is substantially skewed from Cloud Spanner
6181 # commit timestamps.
6182 #
6183 # Useful for reading the freshest data available at a nearby
6184 # replica, while bounding the possible staleness if the local
6185 # replica has fallen behind.
6186 #
6187 # Note that this option can only be used in single-use
6188 # transactions.
6189 &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
6190 # the Transaction message that describes the transaction.
6191 &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
6192 # are visible.
6193 },
6194 &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
6195 #
6196 # Authorization to begin a Partitioned DML transaction requires
6197 # `spanner.databases.beginPartitionedDmlTransaction` permission
6198 # on the `session` resource.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07006199 },
6200 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07006201 &quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07006202 },
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07006203 }
6204
6205 x__xgafv: string, V1 error format.
6206 Allowed values
6207 1 - v1 error format
6208 2 - v2 error format
6209
6210Returns:
6211 An object of the form:
6212
6213 { # The response for PartitionQuery
6214 # or PartitionRead
Bu Sun Kim65020912020-05-20 12:08:20 -07006215 &quot;partitions&quot;: [ # Partitions created by this request.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07006216 { # Information returned for each partition returned in a
6217 # PartitionResponse.
Bu Sun Kim65020912020-05-20 12:08:20 -07006218 &quot;partitionToken&quot;: &quot;A String&quot;, # This token can be passed to Read, StreamingRead, ExecuteSql, or
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07006219 # ExecuteStreamingSql requests to restrict the results to those identified by
6220 # this partition token.
6221 },
6222 ],
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07006223 &quot;transaction&quot;: { # A transaction. # Transaction created by this request.
6224 &quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
6225 # for the transaction. Not returned by default: see
6226 # TransactionOptions.ReadOnly.return_read_timestamp.
6227 #
6228 # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
6229 # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
6230 &quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
6231 # Read,
6232 # ExecuteSql,
6233 # Commit, or
6234 # Rollback calls.
6235 #
6236 # Single-use read-only transactions do not have IDs, because
6237 # single-use transactions do not support multiple requests.
6238 },
}</pre>
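<p>The response above pairs each <code>partitionToken</code> with the transaction that created it. As a minimal offline sketch (plain dicts shaped like the request and response messages documented here; the helper names and the literal token and transaction-ID values are illustrative, not part of the API), the tokens can be collected and reused in follow-up <code>ExecuteSql</code> requests that run in the same transaction:</p>

```python
# Sketch only: dict literals shaped like the PartitionResponse and
# ExecuteSqlRequest messages documented on this page. Field names follow
# the JSON casing used in this reference; no network calls are made.

def collect_partition_tokens(partition_response):
    """Extract the partition tokens from a PartitionResponse-shaped dict."""
    return [p["partitionToken"] for p in partition_response.get("partitions", [])]

def execute_sql_request(sql, transaction_id, partition_token):
    """Build an ExecuteSqlRequest-shaped dict that reuses a prior
    transaction and restricts results to one partition."""
    return {
        "sql": sql,
        "transaction": {"id": transaction_id},
        "partitionToken": partition_token,
    }

# Example response, mirroring the shape shown above (values are made up).
response = {
    "partitions": [{"partitionToken": "token-1"}, {"partitionToken": "token-2"}],
    "transaction": {"id": "txn-123", "readTimestamp": "2014-10-02T15:01:23.045123456Z"},
}

tokens = collect_partition_tokens(response)
requests = [
    execute_sql_request("SELECT * FROM Albums", response["transaction"]["id"], t)
    for t in tokens
]
```

<p>Each request dict would then be passed as the <code>body</code> of an ExecuteSql or ExecuteStreamingSql call; because every request carries the same transaction <code>id</code>, all partitions are read at the same snapshot.</p>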
6240</div>
6241
6242<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -07006243 <code class="details" id="read">read(session, body=None, x__xgafv=None)</code>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04006244 <pre>Reads rows from the database using key lookups and scans, as a
6245simple key/value style alternative to
6246ExecuteSql. This method cannot be used to
6247return a result set larger than 10 MiB; if the read matches more
6248data than that, the read fails with a `FAILED_PRECONDITION`
6249error.
6250
6251Reads inside read-write transactions might return `ABORTED`. If
6252this occurs, the application should restart the transaction from
6253the beginning. See Transaction for more details.
6254
6255Larger result sets can be yielded in streaming fashion by calling
6256StreamingRead instead.
6257
6258Args:
6259 session: string, Required. The session in which the read should be performed. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -07006260 body: object, The request body.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04006261 The object takes the form of:
6262
6263{ # The request for Read and
6264 # StreamingRead.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07006265 &quot;resumeToken&quot;: &quot;A String&quot;, # If this request is resuming a previously interrupted read,
6266 # `resume_token` should be copied from the last
6267 # PartialResultSet yielded before the interruption. Doing this
6268 # enables the new read to resume where the last read left off. The
6269 # rest of the request parameters must exactly match the request
6270 # that yielded this token.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07006271 &quot;columns&quot;: [ # Required. The columns of table to be returned for each row matching
6272 # this request.
6273 &quot;A String&quot;,
6274 ],
6275 &quot;limit&quot;: &quot;A String&quot;, # If greater than zero, only the first `limit` rows are yielded. If `limit`
6276 # is zero, the default is no limit. A limit cannot be specified if
6277 # `partition_token` is set.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07006278 &quot;index&quot;: &quot;A String&quot;, # If non-empty, the name of an index on table. This index is
6279 # used instead of the table primary key when interpreting key_set
6280 # and sorting result rows. See key_set for further information.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07006281 &quot;table&quot;: &quot;A String&quot;, # Required. The name of the table in the database to be read.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07006282 &quot;transaction&quot;: { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
6283 # temporary read-only transaction with strong concurrency.
6284 # Read or
6285 # ExecuteSql call runs.
6286 #
6287 # See TransactionOptions for more information about transactions.
6288 &quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
6289 # This is the most efficient way to execute a transaction that
6290 # consists of a single SQL query.
6291 #
6292 #
6293 # Each session can have at most one active transaction at a time (note that
6294 # standalone reads and queries use a transaction internally and do count
6295 # towards the one transaction limit). After the active transaction is
6296 # completed, the session can immediately be re-used for the next transaction.
6297 # It is not necessary to create a new session for each transaction.
6298 #
6299 # # Transaction Modes
6300 #
6301 # Cloud Spanner supports three transaction modes:
6302 #
6303 # 1. Locking read-write. This type of transaction is the only way
6304 # to write data into Cloud Spanner. These transactions rely on
6305 # pessimistic locking and, if necessary, two-phase commit.
6306 # Locking read-write transactions may abort, requiring the
6307 # application to retry.
6308 #
6309 # 2. Snapshot read-only. This transaction type provides guaranteed
6310 # consistency across several reads, but does not allow
6311 # writes. Snapshot read-only transactions can be configured to
6312 # read at timestamps in the past. Snapshot read-only
6313 # transactions do not need to be committed.
6314 #
6315 # 3. Partitioned DML. This type of transaction is used to execute
6316 # a single Partitioned DML statement. Partitioned DML partitions
6317 # the key space and runs the DML statement over each partition
6318 # in parallel using separate, internal transactions that commit
6319 # independently. Partitioned DML transactions do not need to be
6320 # committed.
6321 #
6322 # For transactions that only read, snapshot read-only transactions
6323 # provide simpler semantics and are almost always faster. In
6324 # particular, read-only transactions do not take locks, so they do
6325 # not conflict with read-write transactions. As a consequence of not
6326 # taking locks, they also do not abort, so retry loops are not needed.
6327 #
6328 # Transactions may only read/write data in a single database. They
6329 # may, however, read/write data in different tables within that
6330 # database.
6331 #
6332 # ## Locking Read-Write Transactions
6333 #
6334 # Locking transactions may be used to atomically read-modify-write
6335 # data anywhere in a database. This type of transaction is externally
6336 # consistent.
6337 #
6338 # Clients should attempt to minimize the amount of time a transaction
6339 # is active. Faster transactions commit with higher probability
6340 # and cause less contention. Cloud Spanner attempts to keep read locks
6341 # active as long as the transaction continues to do reads, and the
6342 # transaction has not been terminated by
6343 # Commit or
6344 # Rollback. Long periods of
6345 # inactivity at the client may cause Cloud Spanner to release a
6346 # transaction&#x27;s locks and abort it.
6347 #
6348 # Conceptually, a read-write transaction consists of zero or more
6349 # reads or SQL statements followed by
6350 # Commit. At any time before
6351 # Commit, the client can send a
6352 # Rollback request to abort the
6353 # transaction.
6354 #
6355 # ### Semantics
6356 #
6357 # Cloud Spanner can commit the transaction if all read locks it acquired
6358 # are still valid at commit time, and it is able to acquire write
6359 # locks for all writes. Cloud Spanner can abort the transaction for any
6360 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
6361 # that the transaction has not modified any user data in Cloud Spanner.
6362 #
6363 # Unless the transaction commits, Cloud Spanner makes no guarantees about
6364 # how long the transaction&#x27;s locks were held for. It is an error to
6365 # use Cloud Spanner locks for any sort of mutual exclusion other than
6366 # between Cloud Spanner transactions themselves.
6367 #
6368 # ### Retrying Aborted Transactions
6369 #
6370 # When a transaction aborts, the application can choose to retry the
6371 # whole transaction again. To maximize the chances of successfully
6372 # committing the retry, the client should execute the retry in the
6373 # same session as the original attempt. The original session&#x27;s lock
6374 # priority increases with each consecutive abort, meaning that each
6375 # attempt has a slightly better chance of success than the previous.
6376 #
6377 # Under some circumstances (e.g., many transactions attempting to
6378 # modify the same row(s)), a transaction can abort many times in a
6379 # short period before successfully committing. Thus, it is not a good
6380 # idea to cap the number of retries a transaction can attempt;
6381 # instead, it is better to limit the total amount of wall time spent
6382 # retrying.
6383 #
6384 # ### Idle Transactions
6385 #
6386 # A transaction is considered idle if it has no outstanding reads or
6387 # SQL queries and has not started a read or SQL query within the last 10
6388 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
6389 # don&#x27;t hold on to locks indefinitely. In that case, the commit will
6390 # fail with error `ABORTED`.
6391 #
6392 # If this behavior is undesirable, periodically executing a simple
6393 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
6394 # transaction from becoming idle.
6395 #
6396 # ## Snapshot Read-Only Transactions
6397 #
6398 # Snapshot read-only transactions provides a simpler method than
6399 # locking read-write transactions for doing several consistent
6400 # reads. However, this type of transaction does not support writes.
6401 #
6402 # Snapshot transactions do not take locks. Instead, they work by
6403 # choosing a Cloud Spanner timestamp, then executing all reads at that
6404 # timestamp. Since they do not acquire locks, they do not block
6405 # concurrent read-write transactions.
6406 #
6407 # Unlike locking read-write transactions, snapshot read-only
6408 # transactions never abort. They can fail if the chosen read
6409 # timestamp is garbage collected; however, the default garbage
6410 # collection policy is generous enough that most applications do not
6411 # need to worry about this in practice.
6412 #
6413 # Snapshot read-only transactions do not need to call
6414 # Commit or
6415 # Rollback (and in fact are not
6416 # permitted to do so).
6417 #
6418 # To execute a snapshot transaction, the client specifies a timestamp
6419 # bound, which tells Cloud Spanner how to choose a read timestamp.
6420 #
6421 # The types of timestamp bound are:
6422 #
6423 # - Strong (the default).
6424 # - Bounded staleness.
6425 # - Exact staleness.
6426 #
6427 # If the Cloud Spanner database to be read is geographically distributed,
6428 # stale read-only transactions can execute more quickly than strong
6429 # or read-write transaction, because they are able to execute far
6430 # from the leader replica.
6431 #
6432 # Each type of timestamp bound is discussed in detail below.
6433 #
6434 # ### Strong
6435 #
6436 # Strong reads are guaranteed to see the effects of all transactions
6437 # that have committed before the start of the read. Furthermore, all
6438 # rows yielded by a single read are consistent with each other -- if
6439 # any part of the read observes a transaction, all parts of the read
6440 # see the transaction.
6441 #
6442 # Strong reads are not repeatable: two consecutive strong read-only
6443 # transactions might return inconsistent results if there are
6444 # concurrent writes. If consistency across reads is required, the
6445 # reads should be executed within a transaction or at an exact read
6446 # timestamp.
6447 #
6448 # See TransactionOptions.ReadOnly.strong.
6449 #
6450 # ### Exact Staleness
6451 #
6452 # These timestamp bounds execute reads at a user-specified
6453 # timestamp. Reads at a timestamp are guaranteed to see a consistent
6454 # prefix of the global transaction history: they observe
6455 # modifications done by all transactions with a commit timestamp &lt;=
6456 # the read timestamp, and observe none of the modifications done by
6457 # transactions with a larger commit timestamp. They will block until
6458 # all conflicting transactions that may be assigned commit timestamps
6459 # &lt;= the read timestamp have finished.
6460 #
6461 # The timestamp can either be expressed as an absolute Cloud Spanner commit
6462 # timestamp or a staleness relative to the current time.
6463 #
6464 # These modes do not require a &quot;negotiation phase&quot; to pick a
6465 # timestamp. As a result, they execute slightly faster than the
6466 # equivalent boundedly stale concurrency modes. On the other hand,
6467 # boundedly stale reads usually return fresher results.
6468 #
6469 # See TransactionOptions.ReadOnly.read_timestamp and
6470 # TransactionOptions.ReadOnly.exact_staleness.
6471 #
6472 # ### Bounded Staleness
6473 #
6474 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
6475 # subject to a user-provided staleness bound. Cloud Spanner chooses the
6476 # newest timestamp within the staleness bound that allows execution
6477 # of the reads at the closest available replica without blocking.
6478 #
6479 # All rows yielded are consistent with each other -- if any part of
6480 # the read observes a transaction, all parts of the read see the
6481 # transaction. Boundedly stale reads are not repeatable: two stale
6482 # reads, even if they use the same staleness bound, can execute at
6483 # different timestamps and thus return inconsistent results.
6484 #
6485 # Boundedly stale reads execute in two phases: the first phase
6486 # negotiates a timestamp among all replicas needed to serve the
6487 # read. In the second phase, reads are executed at the negotiated
6488 # timestamp.
6489 #
6490 # As a result of the two phase execution, bounded staleness reads are
6491 # usually a little slower than comparable exact staleness
6492 # reads. However, they are typically able to return fresher
6493 # results, and are more likely to execute at the closest replica.
6494 #
6495 # Because the timestamp negotiation requires up-front knowledge of
6496 # which rows will be read, it can only be used with single-use
6497 # read-only transactions.
6498 #
6499 # See TransactionOptions.ReadOnly.max_staleness and
6500 # TransactionOptions.ReadOnly.min_read_timestamp.
6501 #
6502 # ### Old Read Timestamps and Garbage Collection
6503 #
6504 # Cloud Spanner continuously garbage collects deleted and overwritten data
6505 # in the background to reclaim storage space. This process is known
6506 # as &quot;version GC&quot;. By default, version GC reclaims versions after they
6507 # are one hour old. Because of this, Cloud Spanner cannot perform reads
6508 # at read timestamps more than one hour in the past. This
6509 # restriction also applies to in-progress reads and/or SQL queries whose
6510 # timestamp become too old while executing. Reads and SQL queries with
6511 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
6512 #
6513 # ## Partitioned DML Transactions
6514 #
6515 # Partitioned DML transactions are used to execute DML statements with a
6516 # different execution strategy that provides different, and often better,
6517 # scalability properties for large, table-wide operations than DML in a
6518 # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
6519 # should prefer using ReadWrite transactions.
6520 #
6521 # Partitioned DML partitions the keyspace and runs the DML statement on each
6522 # partition in separate, internal transactions. These transactions commit
6523 # automatically when complete, and run independently from one another.
6524 #
6525 # To reduce lock contention, this execution strategy only acquires read locks
6526 # on rows that match the WHERE clause of the statement. Additionally, the
6527 # smaller per-partition transactions hold locks for less time.
6528 #
6529 # That said, Partitioned DML is not a drop-in replacement for standard DML used
6530 # in ReadWrite transactions.
6531 #
6532 # - The DML statement must be fully-partitionable. Specifically, the statement
6533 # must be expressible as the union of many statements which each access only
6534 # a single row of the table.
6535 #
6536 # - The statement is not applied atomically to all rows of the table. Rather,
6537 # the statement is applied atomically to partitions of the table, in
6538 # independent transactions. Secondary index rows are updated atomically
6539 # with the base table rows.
6540 #
6541 # - Partitioned DML does not guarantee exactly-once execution semantics
6542 # against a partition. The statement will be applied at least once to each
6543 # partition. It is strongly recommended that the DML statement be
6544 # idempotent to avoid unexpected results. For instance, it is potentially
6545 # dangerous to run a statement such as
6546 # `UPDATE table SET column = column + 1` as it could be run multiple times
6547 # against some rows.
6548 #
6549 # - The partitions are committed automatically - there is no support for
6550 # Commit or Rollback. If the call returns an error, or if the client issuing
6551 # the ExecuteSql call dies, it is possible that some rows had the statement
6552 # executed on them successfully. It is also possible that the statement was
6553 # never executed against other rows.
6554 #
6555 # - Partitioned DML transactions may only contain the execution of a single
6556 # DML statement via ExecuteSql or ExecuteStreamingSql.
6557 #
6558 # - If any error is encountered during the execution of the partitioned DML
6559 # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
6560 # value that cannot be stored due to schema constraints), then the
6561 # operation is stopped at that point and an error is returned. It is
6562 # possible that at this point, some partitions have been committed (or even
6563 # committed multiple times), and other partitions have not been run at all.
6564 #
6565 # Given the above, Partitioned DML is a good fit for large, database-wide
6566 # operations that are idempotent, such as deleting old rows from a very large
6567 # table.
6568 &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
6569 #
6570 # Authorization to begin a read-write transaction requires
6571 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
6572 # on the `session` resource.
6573 # transaction type has no options.
6574 },
6575 &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
6576 #
6577 # Authorization to begin a read-only transaction requires
6578 # `spanner.databases.beginReadOnlyTransaction` permission
6579 # on the `session` resource.
6580 &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
6581 # reads at a specific timestamp are repeatable; the same read at
6582 # the same timestamp always returns the same data. If the
6583 # timestamp is in the future, the read will block until the
6584 # specified timestamp, modulo the read&#x27;s deadline.
6585 #
6586 # Useful for large scale consistent reads such as mapreduces, or
6587 # for coordinating many reads against a consistent snapshot of the
6588 # data.
6589 #
6590 # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
6591 # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
6592 &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
6593 #
6594 # This is useful for requesting fresher data than some previous
6595 # read, or data that is fresh enough to observe the effects of some
6596 # previously committed transaction whose timestamp is known.
6597 #
6598 # Note that this option can only be used in single-use transactions.
6599 #
6600 # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
6601 # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
6602 &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
6603 # old. The timestamp is chosen soon after the read is started.
6604 #
6605 # Guarantees that all writes that have committed more than the
6606 # specified number of seconds ago are visible. Because Cloud Spanner
6607 # chooses the exact timestamp, this mode works even if the client&#x27;s
6608 # local clock is substantially skewed from Cloud Spanner commit
6609 # timestamps.
6610 #
6611 # Useful for reading at nearby replicas without the distributed
6612 # timestamp negotiation overhead of `max_staleness`.
6613 &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
6614 # seconds. Guarantees that all writes that have committed more
6615 # than the specified number of seconds ago are visible. Because
6616 # Cloud Spanner chooses the exact timestamp, this mode works even if
6617 # the client&#x27;s local clock is substantially skewed from Cloud Spanner
6618 # commit timestamps.
6619 #
6620 # Useful for reading the freshest data available at a nearby
6621 # replica, while bounding the possible staleness if the local
6622 # replica has fallen behind.
6623 #
6624 # Note that this option can only be used in single-use
6625 # transactions.
6626 &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
6627 # the Transaction message that describes the transaction.
6628 &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
6629 # are visible.
6630 },
6631 &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
6632 #
6633 # Authorization to begin a Partitioned DML transaction requires
6634 # `spanner.databases.beginPartitionedDmlTransaction` permission
6635 # on the `session` resource.
6636 },
6637 },
6638 &quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
6639 # it. The transaction ID of the new transaction is returned in
6640 # ResultSetMetadata.transaction, which is a Transaction.
6641 #
6642 #
6643 # Each session can have at most one active transaction at a time (note that
6644 # standalone reads and queries use a transaction internally and do count
6645 # towards the one transaction limit). After the active transaction is
6646 # completed, the session can immediately be re-used for the next transaction.
6647 # It is not necessary to create a new session for each transaction.
6648 #
6649 # # Transaction Modes
6650 #
6651 # Cloud Spanner supports three transaction modes:
6652 #
6653 # 1. Locking read-write. This type of transaction is the only way
6654 # to write data into Cloud Spanner. These transactions rely on
6655 # pessimistic locking and, if necessary, two-phase commit.
6656 # Locking read-write transactions may abort, requiring the
6657 # application to retry.
6658 #
6659 # 2. Snapshot read-only. This transaction type provides guaranteed
6660 # consistency across several reads, but does not allow
6661 # writes. Snapshot read-only transactions can be configured to
6662 # read at timestamps in the past. Snapshot read-only
6663 # transactions do not need to be committed.
6664 #
6665 # 3. Partitioned DML. This type of transaction is used to execute
6666 # a single Partitioned DML statement. Partitioned DML partitions
6667 # the key space and runs the DML statement over each partition
6668 # in parallel using separate, internal transactions that commit
6669 # independently. Partitioned DML transactions do not need to be
6670 # committed.
6671 #
6672 # For transactions that only read, snapshot read-only transactions
6673 # provide simpler semantics and are almost always faster. In
6674 # particular, read-only transactions do not take locks, so they do
6675 # not conflict with read-write transactions. As a consequence of not
6676 # taking locks, they also do not abort, so retry loops are not needed.
6677 #
6678 # Transactions may only read/write data in a single database. They
6679 # may, however, read/write data in different tables within that
6680 # database.
6681 #
6682 # ## Locking Read-Write Transactions
6683 #
6684 # Locking transactions may be used to atomically read-modify-write
6685 # data anywhere in a database. This type of transaction is externally
6686 # consistent.
6687 #
6688 # Clients should attempt to minimize the amount of time a transaction
6689 # is active. Faster transactions commit with higher probability
6690 # and cause less contention. Cloud Spanner attempts to keep read locks
6691 # active as long as the transaction continues to do reads, and the
6692 # transaction has not been terminated by
6693 # Commit or
6694 # Rollback. Long periods of
6695 # inactivity at the client may cause Cloud Spanner to release a
6696 # transaction&#x27;s locks and abort it.
6697 #
6698 # Conceptually, a read-write transaction consists of zero or more
6699 # reads or SQL statements followed by
6700 # Commit. At any time before
6701 # Commit, the client can send a
6702 # Rollback request to abort the
6703 # transaction.
6704 #
6705 # ### Semantics
6706 #
6707 # Cloud Spanner can commit the transaction if all read locks it acquired
6708 # are still valid at commit time, and it is able to acquire write
6709 # locks for all writes. Cloud Spanner can abort the transaction for any
6710 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
6711 # that the transaction has not modified any user data in Cloud Spanner.
6712 #
6713 # Unless the transaction commits, Cloud Spanner makes no guarantees about
6714 # how long the transaction&#x27;s locks were held for. It is an error to
6715 # use Cloud Spanner locks for any sort of mutual exclusion other than
6716 # between Cloud Spanner transactions themselves.
6717 #
6718 # ### Retrying Aborted Transactions
6719 #
6720 # When a transaction aborts, the application can choose to retry the
6721 # whole transaction again. To maximize the chances of successfully
6722 # committing the retry, the client should execute the retry in the
6723 # same session as the original attempt. The original session&#x27;s lock
6724 # priority increases with each consecutive abort, meaning that each
6725 # attempt has a slightly better chance of success than the previous.
6726 #
6727 # Under some circumstances (e.g., many transactions attempting to
6728 # modify the same row(s)), a transaction can abort many times in a
6729 # short period before successfully committing. Thus, it is not a good
6730 # idea to cap the number of retries a transaction can attempt;
6731 # instead, it is better to limit the total amount of wall time spent
6732 # retrying.
6733 #
6734 # ### Idle Transactions
6735 #
6736 # A transaction is considered idle if it has no outstanding reads or
6737 # SQL queries and has not started a read or SQL query within the last 10
6738 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
6739 # don&#x27;t hold on to locks indefinitely. In that case, the commit will
6740 # fail with error `ABORTED`.
6741 #
6742 # If this behavior is undesirable, periodically executing a simple
6743 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
6744 # transaction from becoming idle.
6745 #
6746 # ## Snapshot Read-Only Transactions
6747 #
6748 # Snapshot read-only transactions provide a simpler method than
6749 # locking read-write transactions for doing several consistent
6750 # reads. However, this type of transaction does not support writes.
6751 #
6752 # Snapshot transactions do not take locks. Instead, they work by
6753 # choosing a Cloud Spanner timestamp, then executing all reads at that
6754 # timestamp. Since they do not acquire locks, they do not block
6755 # concurrent read-write transactions.
6756 #
6757 # Unlike locking read-write transactions, snapshot read-only
6758 # transactions never abort. They can fail if the chosen read
6759 # timestamp is garbage collected; however, the default garbage
6760 # collection policy is generous enough that most applications do not
6761 # need to worry about this in practice.
6762 #
6763 # Snapshot read-only transactions do not need to call
6764 # Commit or
6765 # Rollback (and in fact are not
6766 # permitted to do so).
6767 #
6768 # To execute a snapshot transaction, the client specifies a timestamp
6769 # bound, which tells Cloud Spanner how to choose a read timestamp.
6770 #
6771 # The types of timestamp bound are:
6772 #
6773 # - Strong (the default).
6774 # - Bounded staleness.
6775 # - Exact staleness.
6776 #
6777 # If the Cloud Spanner database to be read is geographically distributed,
6778 # stale read-only transactions can execute more quickly than strong
6779 # or read-write transactions, because they are able to execute far
6780 # from the leader replica.
6781 #
6782 # Each type of timestamp bound is discussed in detail below.
6783 #
6784 # ### Strong
6785 #
6786 # Strong reads are guaranteed to see the effects of all transactions
6787 # that have committed before the start of the read. Furthermore, all
6788 # rows yielded by a single read are consistent with each other -- if
6789 # any part of the read observes a transaction, all parts of the read
6790 # see the transaction.
6791 #
6792 # Strong reads are not repeatable: two consecutive strong read-only
6793 # transactions might return inconsistent results if there are
6794 # concurrent writes. If consistency across reads is required, the
6795 # reads should be executed within a transaction or at an exact read
6796 # timestamp.
6797 #
6798 # See TransactionOptions.ReadOnly.strong.
6799 #
6800 # ### Exact Staleness
6801 #
6802 # These timestamp bounds execute reads at a user-specified
6803 # timestamp. Reads at a timestamp are guaranteed to see a consistent
6804 # prefix of the global transaction history: they observe
6805 # modifications done by all transactions with a commit timestamp &lt;=
6806 # the read timestamp, and observe none of the modifications done by
6807 # transactions with a larger commit timestamp. They will block until
6808 # all conflicting transactions that may be assigned commit timestamps
6809 # &lt;= the read timestamp have finished.
6810 #
6811 # The timestamp can either be expressed as an absolute Cloud Spanner commit
6812 # timestamp or a staleness relative to the current time.
6813 #
6814 # These modes do not require a &quot;negotiation phase&quot; to pick a
6815 # timestamp. As a result, they execute slightly faster than the
6816 # equivalent boundedly stale concurrency modes. On the other hand,
6817 # boundedly stale reads usually return fresher results.
6818 #
6819 # See TransactionOptions.ReadOnly.read_timestamp and
6820 # TransactionOptions.ReadOnly.exact_staleness.
6821 #
6822 # ### Bounded Staleness
6823 #
6824 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
6825 # subject to a user-provided staleness bound. Cloud Spanner chooses the
6826 # newest timestamp within the staleness bound that allows execution
6827 # of the reads at the closest available replica without blocking.
6828 #
6829 # All rows yielded are consistent with each other -- if any part of
6830 # the read observes a transaction, all parts of the read see the
6831 # transaction. Boundedly stale reads are not repeatable: two stale
6832 # reads, even if they use the same staleness bound, can execute at
6833 # different timestamps and thus return inconsistent results.
6834 #
6835 # Boundedly stale reads execute in two phases: the first phase
6836 # negotiates a timestamp among all replicas needed to serve the
6837 # read. In the second phase, reads are executed at the negotiated
6838 # timestamp.
6839 #
6840 # As a result of the two-phase execution, bounded staleness reads are
6841 # usually a little slower than comparable exact staleness
6842 # reads. However, they are typically able to return fresher
6843 # results, and are more likely to execute at the closest replica.
6844 #
6845 # Because the timestamp negotiation requires up-front knowledge of
6846 # which rows will be read, it can only be used with single-use
6847 # read-only transactions.
6848 #
6849 # See TransactionOptions.ReadOnly.max_staleness and
6850 # TransactionOptions.ReadOnly.min_read_timestamp.
6851 #
6852 # ### Old Read Timestamps and Garbage Collection
6853 #
6854 # Cloud Spanner continuously garbage collects deleted and overwritten data
6855 # in the background to reclaim storage space. This process is known
6856 # as &quot;version GC&quot;. By default, version GC reclaims versions after they
6857 # are one hour old. Because of this, Cloud Spanner cannot perform reads
6858 # at read timestamps more than one hour in the past. This
6859 # restriction also applies to in-progress reads and/or SQL queries whose
6860 # timestamp becomes too old while executing. Reads and SQL queries with
6861 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
6862 #
6863 # ## Partitioned DML Transactions
6864 #
6865 # Partitioned DML transactions are used to execute DML statements with a
6866 # different execution strategy that provides different, and often better,
6867 # scalability properties for large, table-wide operations than DML in a
6868 # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
6869 # should prefer using ReadWrite transactions.
6870 #
6871 # Partitioned DML partitions the keyspace and runs the DML statement on each
6872 # partition in separate, internal transactions. These transactions commit
6873 # automatically when complete, and run independently from one another.
6874 #
6875 # To reduce lock contention, this execution strategy only acquires read locks
6876 # on rows that match the WHERE clause of the statement. Additionally, the
6877 # smaller per-partition transactions hold locks for less time.
6878 #
6879 # That said, Partitioned DML is not a drop-in replacement for standard DML used
6880 # in ReadWrite transactions.
6881 #
6882 # - The DML statement must be fully-partitionable. Specifically, the statement
6883 # must be expressible as the union of many statements which each access only
6884 # a single row of the table.
6885 #
6886 # - The statement is not applied atomically to all rows of the table. Rather,
6887 # the statement is applied atomically to partitions of the table, in
6888 # independent transactions. Secondary index rows are updated atomically
6889 # with the base table rows.
6890 #
6891 # - Partitioned DML does not guarantee exactly-once execution semantics
6892 # against a partition. The statement will be applied at least once to each
6893 # partition. It is strongly recommended that the DML statement be
6894 # idempotent to avoid unexpected results. For instance, it is potentially
6895 # dangerous to run a statement such as
6896 # `UPDATE table SET column = column + 1` as it could be run multiple times
6897 # against some rows.
6898 #
6899 # - The partitions are committed automatically - there is no support for
6900 # Commit or Rollback. If the call returns an error, or if the client issuing
6901 # the ExecuteSql call dies, it is possible that some rows had the statement
6902 # executed on them successfully. It is also possible that the statement was
6903 # never executed against other rows.
6904 #
6905 # - Partitioned DML transactions may only contain the execution of a single
6906 # DML statement via ExecuteSql or ExecuteStreamingSql.
6907 #
6908 # - If any error is encountered during the execution of the partitioned DML
6909 # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
6910 # value that cannot be stored due to schema constraints), then the
6911 # operation is stopped at that point and an error is returned. It is
6912 # possible that at this point, some partitions have been committed (or even
6913 # committed multiple times), and other partitions have not been run at all.
6914 #
6915 # Given the above, Partitioned DML is a good fit for large, database-wide
6916 # operations that are idempotent, such as deleting old rows from a very large
6917 # table.
6918 &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
6919 #
6920 # Authorization to begin a read-write transaction requires
6921 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
6922 # on the `session` resource.
6923 # transaction type has no options.
6924 },
6925 &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
6926 #
6927 # Authorization to begin a read-only transaction requires
6928 # `spanner.databases.beginReadOnlyTransaction` permission
6929 # on the `session` resource.
6930 &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
6931 # reads at a specific timestamp are repeatable; the same read at
6932 # the same timestamp always returns the same data. If the
6933 # timestamp is in the future, the read will block until the
6934 # specified timestamp, modulo the read&#x27;s deadline.
6935 #
6936 # Useful for large scale consistent reads such as mapreduces, or
6937 # for coordinating many reads against a consistent snapshot of the
6938 # data.
6939 #
6940 # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
6941 # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
6942 &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
6943 #
6944 # This is useful for requesting fresher data than some previous
6945 # read, or data that is fresh enough to observe the effects of some
6946 # previously committed transaction whose timestamp is known.
6947 #
6948 # Note that this option can only be used in single-use transactions.
6949 #
6950 # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
6951 # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
6952 &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
6953 # old. The timestamp is chosen soon after the read is started.
6954 #
6955 # Guarantees that all writes that have committed more than the
6956 # specified number of seconds ago are visible. Because Cloud Spanner
6957 # chooses the exact timestamp, this mode works even if the client&#x27;s
6958 # local clock is substantially skewed from Cloud Spanner commit
6959 # timestamps.
6960 #
6961 # Useful for reading at nearby replicas without the distributed
6962 # timestamp negotiation overhead of `max_staleness`.
6963 &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
6964 # seconds. Guarantees that all writes that have committed more
6965 # than the specified number of seconds ago are visible. Because
6966 # Cloud Spanner chooses the exact timestamp, this mode works even if
6967 # the client&#x27;s local clock is substantially skewed from Cloud Spanner
6968 # commit timestamps.
6969 #
6970 # Useful for reading the freshest data available at a nearby
6971 # replica, while bounding the possible staleness if the local
6972 # replica has fallen behind.
6973 #
6974 # Note that this option can only be used in single-use
6975 # transactions.
6976 &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
6977 # the Transaction message that describes the transaction.
6978 &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
6979 # are visible.
6980 },
6981 &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
6982 #
6983 # Authorization to begin a Partitioned DML transaction requires
6984 # `spanner.databases.beginPartitionedDmlTransaction` permission
6985 # on the `session` resource.
6986 },
6987 },
6988 &quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
6989 },
6990 &quot;partitionToken&quot;: &quot;A String&quot;, # If present, results will be restricted to the specified partition
6991 # previously created using PartitionRead(). There must be an exact
6992 # match for the values of fields common to this message and the
6993 # PartitionReadRequest message used to create this partition_token.
6994 &quot;keySet&quot;: { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
6995 # primary keys of the rows in table to be yielded, unless index
6996 # is present. If index is present, then key_set instead names
6997 # index keys in index.
6998 #
6999 # If the partition_token field is empty, rows are yielded
7000 # in table primary key order (if index is empty) or index key order
7001 # (if index is non-empty). If the partition_token field is not
7002 # empty, rows will be yielded in an unspecified order.
7003 #
7004 # It is not an error for the `key_set` to name rows that do not
7005 # exist in the database. Read yields nothing for nonexistent rows.
7006 # the keys are expected to be in the same table or index. The keys need
7007 # not be sorted in any particular way.
7008 #
7009 # If the same key is specified multiple times in the set (for example
7010 # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
7011 # behaves as if the key were only specified once.
7012 &quot;ranges&quot;: [ # A list of key ranges. See KeyRange for more information about
7013 # key range specifications.
7014 { # KeyRange represents a range of rows in a table or index.
7015 #
7016 # A range has a start key and an end key. These keys can be open or
7017 # closed, indicating if the range includes rows with that key.
7018 #
7019 # Keys are represented by lists, where the ith value in the list
7020 # corresponds to the ith component of the table or index primary key.
7021 # Individual values are encoded as described
7022 # here.
7023 #
7024 # For example, consider the following table definition:
7025 #
7026 # CREATE TABLE UserEvents (
7027 # UserName STRING(MAX),
7028 # EventDate STRING(10)
7029 # ) PRIMARY KEY(UserName, EventDate);
7030 #
7031 # The following keys name rows in this table:
7032 #
7033 # &quot;Bob&quot;, &quot;2014-09-23&quot;
7034 #
7035 # Since the `UserEvents` table&#x27;s `PRIMARY KEY` clause names two
7036 # columns, each `UserEvents` key has two elements; the first is the
7037 # `UserName`, and the second is the `EventDate`.
7038 #
7039 # Key ranges with multiple components are interpreted
7040 # lexicographically by component using the table or index key&#x27;s declared
7041 # sort order. For example, the following range returns all events for
7042 # user `&quot;Bob&quot;` that occurred in the year 2015:
7043 #
7044 # &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2015-01-01&quot;]
7045 # &quot;end_closed&quot;: [&quot;Bob&quot;, &quot;2015-12-31&quot;]
7046 #
7047 # Start and end keys can omit trailing key components. This affects the
7048 # inclusion and exclusion of rows that exactly match the provided key
7049 # components: if the key is closed, then rows that exactly match the
7050 # provided components are included; if the key is open, then rows
7051 # that exactly match are not included.
7052 #
7053 # For example, the following range includes all events for `&quot;Bob&quot;` that
7054 # occurred during and after the year 2000:
7055 #
7056 # &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
7057 # &quot;end_closed&quot;: [&quot;Bob&quot;]
7058 #
7059 # The next example retrieves all events for `&quot;Bob&quot;`:
7060 #
7061 # &quot;start_closed&quot;: [&quot;Bob&quot;]
7062 # &quot;end_closed&quot;: [&quot;Bob&quot;]
7063 #
7064 # To retrieve events before the year 2000:
7065 #
7066 # &quot;start_closed&quot;: [&quot;Bob&quot;]
7067 # &quot;end_open&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
7068 #
7069 # The following range includes all rows in the table:
7070 #
7071 # &quot;start_closed&quot;: []
7072 # &quot;end_closed&quot;: []
7073 #
7074 # This range returns all users whose `UserName` begins with any
7075 # character from A to C:
7076 #
7077 # &quot;start_closed&quot;: [&quot;A&quot;]
7078 # &quot;end_open&quot;: [&quot;D&quot;]
7079 #
7080 # This range returns all users whose `UserName` begins with B:
7081 #
7082 # &quot;start_closed&quot;: [&quot;B&quot;]
7083 # &quot;end_open&quot;: [&quot;C&quot;]
7084 #
7085 # Key ranges honor column sort order. For example, suppose a table is
7086 # defined as follows:
7087 #
7088 # CREATE TABLE DescendingSortedTable (
7089 # Key INT64,
7090 # ...
7091 # ) PRIMARY KEY(Key DESC);
7092 #
7093 # The following range retrieves all rows with key values between 1
7094 # and 100 inclusive:
7095 #
7096 # &quot;start_closed&quot;: [&quot;100&quot;]
7097 # &quot;end_closed&quot;: [&quot;1&quot;]
7098 #
7099 # Note that 100 is passed as the start, and 1 is passed as the end,
7100 # because `Key` is a descending column in the schema.
7101 &quot;endClosed&quot;: [ # If the end is closed, then the range includes all rows whose
7102 # first `len(end_closed)` key columns exactly match `end_closed`.
7103 &quot;&quot;,
7104 ],
7105 &quot;startClosed&quot;: [ # If the start is closed, then the range includes all rows whose
7106 # first `len(start_closed)` key columns exactly match `start_closed`.
7107 &quot;&quot;,
7108 ],
7109 &quot;startOpen&quot;: [ # If the start is open, then the range excludes rows whose first
7110 # `len(start_open)` key columns exactly match `start_open`.
7111 &quot;&quot;,
7112 ],
7113 &quot;endOpen&quot;: [ # If the end is open, then the range excludes rows whose first
7114 # `len(end_open)` key columns exactly match `end_open`.
7115 &quot;&quot;,
7116 ],
7117 },
7118 ],
7119 &quot;keys&quot;: [ # A list of specific keys. Entries in `keys` should have exactly as
7120 # many elements as there are columns in the primary or index key
7121 # with which this `KeySet` is used. Individual key values are
7122 # encoded as described here.
7123 [
7124 &quot;&quot;,
7125 ],
7126 ],
7127 &quot;all&quot;: True or False, # For convenience `all` can be set to `true` to indicate that this
7128 # `KeySet` matches all keys in the table or index. Note that any keys
7129 # specified in `keys` or `ranges` are only yielded once.
7130 },
7131 }
7132
7133 x__xgafv: string, V1 error format.
7134 Allowed values
7135 1 - v1 error format
7136 2 - v2 error format
7137
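For illustration only, here is a minimal sketch of assembling the request body described above in Python, combining a key range with a single-use, boundedly stale read-only transaction. The `table` and `columns` values are hypothetical, and those two top-level fields are assumptions drawn from the wider Read API rather than this excerpt; the `keySet` and `transaction` field names come from the schema above.

```python
# Sketch: build a Read request body using the fields documented above.
# "table"/"columns" are assumed Read fields (not shown in this excerpt);
# "keySet", "ranges", "transaction", "singleUse", "readOnly",
# "maxStaleness", and "returnReadTimestamp" follow the schema above.

def build_read_body(table, columns, start_key, end_key, max_staleness="10s"):
    """Return a body dict for a single-use, boundedly stale read.

    `maxStaleness` may only be used with single-use read-only
    transactions, because timestamp negotiation requires up-front
    knowledge of which rows will be read (see "Bounded Staleness").
    """
    return {
        "table": table,
        "columns": columns,
        "keySet": {
            # Key ranges are interpreted lexicographically by key component,
            # using the table or index key's declared sort order.
            "ranges": [{"startClosed": start_key, "endClosed": end_key}],
        },
        "transaction": {
            "singleUse": {
                "readOnly": {
                    "maxStaleness": max_staleness,
                    "returnReadTimestamp": True,
                },
            },
        },
    }

# All events for "Bob" during 2015, per the KeyRange example above.
body = build_read_body(
    "UserEvents",
    ["UserName", "EventDate"],
    ["Bob", "2015-01-01"],
    ["Bob", "2015-12-31"],
)
```

Because `maxStaleness` lets Cloud Spanner choose the newest timestamp within the bound, this shape favors reading from a nearby replica; swapping in `"exactStaleness"` or `"readTimestamp"` (which are repeatable) would follow the same structure.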
7138Returns:
7139 An object of the form:
7140
7141 { # Results from Read or
7142 # ExecuteSql.
7143 &quot;stats&quot;: { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
7144 # produced this result set. These can be requested by setting
7145 # ExecuteSqlRequest.query_mode.
7146 # DML statements always produce stats containing the number of rows
7147 # modified, unless executed using the
7148 # ExecuteSqlRequest.QueryMode.PLAN ExecuteSqlRequest.query_mode.
7149 # Other fields may or may not be populated, based on the
7150 # ExecuteSqlRequest.query_mode.
7151 &quot;queryStats&quot;: { # Aggregated statistics from the execution of the query. Only present when
7152 # the query is profiled. For example, a query could return the statistics as
7153 # follows:
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007154 #
Bu Sun Kimd059ad82020-07-22 17:02:09 -07007155 # {
7156 # &quot;rows_returned&quot;: &quot;3&quot;,
7157 # &quot;elapsed_time&quot;: &quot;1.22 secs&quot;,
7158 # &quot;cpu_time&quot;: &quot;1.19 secs&quot;
7159 # }
7160 &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
7161 },
7162 &quot;rowCountExact&quot;: &quot;A String&quot;, # Standard DML returns an exact count of rows that were modified.
7163 &quot;rowCountLowerBound&quot;: &quot;A String&quot;, # Partitioned DML does not offer exactly-once semantics, so it
7164 # returns a lower bound of the rows modified.
7165 &quot;queryPlan&quot;: { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
7166 &quot;planNodes&quot;: [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
7167 # with the plan root. Each PlanNode&#x27;s `id` corresponds to its index in
7168 # `plan_nodes`.
7169 { # Node information for nodes appearing in a QueryPlan.plan_nodes.
7170 &quot;childLinks&quot;: [ # List of child node `index`es and their relationship to this parent.
7171 { # Metadata associated with a parent-child relationship appearing in a
7172 # PlanNode.
7173 &quot;childIndex&quot;: 42, # The node to which the link points.
7174 &quot;type&quot;: &quot;A String&quot;, # The type of the link. For example, in Hash Joins this could be used to
7175 # distinguish between the build child and the probe child, or in the case
7176 # of the child being an output variable, to represent the tag associated
7177 # with the output variable.
7178 &quot;variable&quot;: &quot;A String&quot;, # Only present if the child node is SCALAR and corresponds
7179 # to an output variable of the parent node. The field carries the name of
7180 # the output variable.
7181 # For example, a `TableScan` operator that reads rows from a table will
7182 # have child links to the `SCALAR` nodes representing the output variables
7183 # created for each column that is read by the operator. The corresponding
7184 # `variable` fields will be set to the variable names assigned to the
7185 # columns.
7186 },
7187 ],
7188 &quot;metadata&quot;: { # Attributes relevant to the node contained in a group of key-value pairs.
7189 # For example, a Parameter Reference node could have the following
7190 # information in its metadata:
7191 #
7192 # {
7193 # &quot;parameter_reference&quot;: &quot;param1&quot;,
7194 # &quot;parameter_type&quot;: &quot;array&quot;
7195 # }
7196 &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
7197 },
7198 &quot;kind&quot;: &quot;A String&quot;, # Used to determine the type of node. May be needed for visualizing
7199 # different kinds of nodes differently. For example, If the node is a
7200 # SCALAR node, it will have a condensed representation
7201 # which can be used to directly embed a description of the node in its
7202 # parent.
7203 &quot;shortRepresentation&quot;: { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
7204 # `SCALAR` PlanNode(s).
7205 &quot;subqueries&quot;: { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
7206 # where the `description` string of this node references a `SCALAR`
7207 # subquery contained in the expression subtree rooted at this node. The
7208 # referenced `SCALAR` subquery may not necessarily be a direct child of
7209 # this node.
7210 &quot;a_key&quot;: 42,
7211 },
7212 &quot;description&quot;: &quot;A String&quot;, # A string representation of the expression subtree rooted at this node.
7213 },
7214 &quot;displayName&quot;: &quot;A String&quot;, # The display name for the node.
7215 &quot;index&quot;: 42, # The `PlanNode`&#x27;s index in node list.
7216 &quot;executionStats&quot;: { # The execution statistics associated with the node, contained in a group of
7217 # key-value pairs. Only present if the plan was returned as a result of a
7218 # profile query. For example, number of executions, number of rows/time per
7219 # execution etc.
7220 &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
7221 },
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007222 },
7223 ],
7224 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07007225 },
7226 &quot;rows&quot;: [ # Each element in `rows` is a row whose format is defined by
7227 # metadata.row_type. The ith element
7228 # in each row matches the ith field in
7229 # metadata.row_type. Elements are
7230 # encoded based on type as described
7231 # here.
7232 [
7233 &quot;&quot;,
7234 ],
7235 ],
7236 &quot;metadata&quot;: { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
Bu Sun Kim65020912020-05-20 12:08:20 -07007237 &quot;transaction&quot;: { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007238 # information about the new transaction is yielded here.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07007239 &quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
7240 # for the transaction. Not returned by default: see
7241 # TransactionOptions.ReadOnly.return_read_timestamp.
7242 #
7243 # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
7244 # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
Bu Sun Kim65020912020-05-20 12:08:20 -07007245 &quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007246 # Read,
7247 # ExecuteSql,
7248 # Commit, or
7249 # Rollback calls.
7250 #
7251 # Single-use read-only transactions do not have IDs, because
7252 # single-use transactions do not support multiple requests.
7253 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07007254 &quot;rowType&quot;: { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
7255 # set. For example, a SQL query like `&quot;SELECT UserId, UserName FROM
7256 # Users&quot;` could return a `row_type` value like:
7257 #
7258 # &quot;fields&quot;: [
7259 # { &quot;name&quot;: &quot;UserId&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;INT64&quot; } },
7260 # { &quot;name&quot;: &quot;UserName&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;STRING&quot; } },
7261 # ]
7262 &quot;fields&quot;: [ # The list of fields that make up this struct. Order is
7263 # significant, because values of this struct type are represented as
7264 # lists, where the order of field values matches the order of
7265 # fields in the StructType. In turn, the order of fields
7266 # matches the order of columns in a read request, or the order of
7267 # fields in the `SELECT` clause of a query.
7268 { # Message representing a single field of a struct.
7269 &quot;type&quot;: # Object with schema name: Type # The type of the field.
7270 &quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
7271 # SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
7272 # query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
7273 # `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
7274 # columns might have an empty name (e.g., !&quot;SELECT
7275 # UPPER(ColName)&quot;`). Note that a query result can contain
7276 # multiple fields with the same name.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07007277 },
7278 ],
7279 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07007280 },
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007281 }</pre>
7282</div>
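<p>On the wire, the <code>ResultSet</code> above is a plain JSON object. A minimal sketch of pulling column names, decoded rows, and profile stats out of a response shaped like it; the sample values below are fabricated for illustration:</p>

```python
# A ResultSet-shaped dict, as returned by a read or ExecuteSql call
# (values here are made up for the example).
result = {
    "metadata": {
        "rowType": {
            "fields": [
                {"name": "UserId", "type": {"code": "INT64"}},
                {"name": "UserName", "type": {"code": "STRING"}},
            ]
        }
    },
    # INT64 values are encoded as decimal strings in JSON.
    "rows": [["1", "alice"], ["2", "bob"]],
    "stats": {"queryStats": {"rows_returned": "2", "elapsed_time": "1.22 secs"}},
}

# Column names come from metadata.rowType.fields, in order.
columns = [f["name"] for f in result["metadata"]["rowType"]["fields"]]

# The ith element of each row matches the ith field, so zipping
# with the column names turns each row into a dict.
records = [dict(zip(columns, row)) for row in result["rows"]]

print(columns)    # -> ['UserId', 'UserName']
print(records[0]) # -> {'UserId': '1', 'UserName': 'alice'}
print(result["stats"]["queryStats"]["rows_returned"])  # -> '2'
```

<p>Note that <code>queryStats</code> is only present when the query was profiled (see <code>ExecuteSqlRequest.query_mode</code>), so production code should guard that lookup.</p>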

<div class="method">
    <code class="details" id="rollback">rollback(session, body=None, x__xgafv=None)</code>
  <pre>Rolls back a transaction, releasing any locks it holds. It is a good
idea to call this for any transaction that includes one or more
Read or ExecuteSql requests and
ultimately decides not to commit.

`Rollback` returns `OK` if it successfully aborts the transaction, the
transaction was already aborted, or the transaction is not
found. `Rollback` never returns `ABORTED`.

Args:
  session: string, Required. The session in which the transaction to roll back is running. (required)
  body: object, The request body.
    The object takes the form of:

{ # The request for Rollback.
  &quot;transactionId&quot;: &quot;A String&quot;, # Required. The transaction to roll back.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated
        # empty messages in your APIs. A typical example is to use it as the request
        # or the response type of an API method. For instance:
        #
        #     service Foo {
        #       rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
        #     }
        #
        # The JSON representation for `Empty` is empty JSON object `{}`.
    }</pre>
</div>
7323
7324<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -07007325 <code class="details" id="streamingRead">streamingRead(session, body=None, x__xgafv=None)</code>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007326 <pre>Like Read, except returns the result set as a
7327stream. Unlike Read, there is no limit on the
7328size of the returned result set. However, no individual row in
7329the result set can exceed 100 MiB, and no column value can exceed
733010 MiB.
7331
7332Args:
7333 session: string, Required. The session in which the read should be performed. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -07007334 body: object, The request body.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007335 The object takes the form of:
7336
7337{ # The request for Read and
7338 # StreamingRead.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07007339 &quot;resumeToken&quot;: &quot;A String&quot;, # If this request is resuming a previously interrupted read,
7340 # `resume_token` should be copied from the last
7341 # PartialResultSet yielded before the interruption. Doing this
7342 # enables the new read to resume where the last read left off. The
7343 # rest of the request parameters must exactly match the request
7344 # that yielded this token.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07007345 &quot;columns&quot;: [ # Required. The columns of table to be returned for each row matching
7346 # this request.
7347 &quot;A String&quot;,
7348 ],
7349 &quot;limit&quot;: &quot;A String&quot;, # If greater than zero, only the first `limit` rows are yielded. If `limit`
7350 # is zero, the default is no limit. A limit cannot be specified if
7351 # `partition_token` is set.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07007352 &quot;index&quot;: &quot;A String&quot;, # If non-empty, the name of an index on table. This index is
7353 # used instead of the table primary key when interpreting key_set
7354 # and sorting result rows. See key_set for further information.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07007355 &quot;table&quot;: &quot;A String&quot;, # Required. The name of the table in the database to be read.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07007356 &quot;transaction&quot;: { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
7357 # temporary read-only transaction with strong concurrency.
7358 # Read or
7359 # ExecuteSql call runs.
7360 #
7361 # See TransactionOptions for more information about transactions.
7362 &quot;singleUse&quot;: { # # Transactions # Execute the read or SQL query in a temporary transaction.
7363 # This is the most efficient way to execute a transaction that
7364 # consists of a single SQL query.
7365 #
7366 #
7367 # Each session can have at most one active transaction at a time (note that
7368 # standalone reads and queries use a transaction internally and do count
7369 # towards the one transaction limit). After the active transaction is
7370 # completed, the session can immediately be re-used for the next transaction.
7371 # It is not necessary to create a new session for each transaction.
7372 #
7373 # # Transaction Modes
7374 #
7375 # Cloud Spanner supports three transaction modes:
7376 #
7377 # 1. Locking read-write. This type of transaction is the only way
7378 # to write data into Cloud Spanner. These transactions rely on
7379 # pessimistic locking and, if necessary, two-phase commit.
7380 # Locking read-write transactions may abort, requiring the
7381 # application to retry.
7382 #
7383 # 2. Snapshot read-only. This transaction type provides guaranteed
7384 # consistency across several reads, but does not allow
7385 # writes. Snapshot read-only transactions can be configured to
7386 # read at timestamps in the past. Snapshot read-only
7387 # transactions do not need to be committed.
7388 #
7389 # 3. Partitioned DML. This type of transaction is used to execute
7390 # a single Partitioned DML statement. Partitioned DML partitions
7391 # the key space and runs the DML statement over each partition
7392 # in parallel using separate, internal transactions that commit
7393 # independently. Partitioned DML transactions do not need to be
7394 # committed.
7395 #
7396 # For transactions that only read, snapshot read-only transactions
7397 # provide simpler semantics and are almost always faster. In
7398 # particular, read-only transactions do not take locks, so they do
7399 # not conflict with read-write transactions. As a consequence of not
7400 # taking locks, they also do not abort, so retry loops are not needed.
7401 #
7402 # Transactions may only read/write data in a single database. They
7403 # may, however, read/write data in different tables within that
7404 # database.
7405 #
7406 # ## Locking Read-Write Transactions
7407 #
7408 # Locking transactions may be used to atomically read-modify-write
7409 # data anywhere in a database. This type of transaction is externally
7410 # consistent.
7411 #
7412 # Clients should attempt to minimize the amount of time a transaction
7413 # is active. Faster transactions commit with higher probability
7414 # and cause less contention. Cloud Spanner attempts to keep read locks
7415 # active as long as the transaction continues to do reads, and the
7416 # transaction has not been terminated by
7417 # Commit or
7418 # Rollback. Long periods of
7419 # inactivity at the client may cause Cloud Spanner to release a
7420 # transaction&#x27;s locks and abort it.
7421 #
7422 # Conceptually, a read-write transaction consists of zero or more
7423 # reads or SQL statements followed by
7424 # Commit. At any time before
7425 # Commit, the client can send a
7426 # Rollback request to abort the
7427 # transaction.
7428 #
7429 # ### Semantics
7430 #
7431 # Cloud Spanner can commit the transaction if all read locks it acquired
7432 # are still valid at commit time, and it is able to acquire write
7433 # locks for all writes. Cloud Spanner can abort the transaction for any
7434 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
7435 # that the transaction has not modified any user data in Cloud Spanner.
7436 #
7437 # Unless the transaction commits, Cloud Spanner makes no guarantees about
7438 # how long the transaction&#x27;s locks were held for. It is an error to
7439 # use Cloud Spanner locks for any sort of mutual exclusion other than
7440 # between Cloud Spanner transactions themselves.
7441 #
7442 # ### Retrying Aborted Transactions
7443 #
7444 # When a transaction aborts, the application can choose to retry the
7445 # whole transaction again. To maximize the chances of successfully
7446 # committing the retry, the client should execute the retry in the
7447 # same session as the original attempt. The original session&#x27;s lock
7448 # priority increases with each consecutive abort, meaning that each
7449 # attempt has a slightly better chance of success than the previous.
7450 #
7451 # Under some circumstances (e.g., many transactions attempting to
7452 # modify the same row(s)), a transaction can abort many times in a
7453 # short period before successfully committing. Thus, it is not a good
7454 # idea to cap the number of retries a transaction can attempt;
7455 # instead, it is better to limit the total amount of wall time spent
7456 # retrying.
7457 #
7458 # ### Idle Transactions
7459 #
7460 # A transaction is considered idle if it has no outstanding reads or
7461 # SQL queries and has not started a read or SQL query within the last 10
7462 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
7463 # don&#x27;t hold on to locks indefinitely. In that case, the commit will
7464 # fail with error `ABORTED`.
7465 #
7466 # If this behavior is undesirable, periodically executing a simple
7467 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
7468 # transaction from becoming idle.
7469 #
7470 # ## Snapshot Read-Only Transactions
7471 #
7472 # Snapshot read-only transactions provides a simpler method than
7473 # locking read-write transactions for doing several consistent
7474 # reads. However, this type of transaction does not support writes.
7475 #
7476 # Snapshot transactions do not take locks. Instead, they work by
7477 # choosing a Cloud Spanner timestamp, then executing all reads at that
7478 # timestamp. Since they do not acquire locks, they do not block
7479 # concurrent read-write transactions.
7480 #
7481 # Unlike locking read-write transactions, snapshot read-only
7482 # transactions never abort. They can fail if the chosen read
7483 # timestamp is garbage collected; however, the default garbage
7484 # collection policy is generous enough that most applications do not
7485 # need to worry about this in practice.
7486 #
7487 # Snapshot read-only transactions do not need to call
7488 # Commit or
7489 # Rollback (and in fact are not
7490 # permitted to do so).
7491 #
7492 # To execute a snapshot transaction, the client specifies a timestamp
7493 # bound, which tells Cloud Spanner how to choose a read timestamp.
7494 #
7495 # The types of timestamp bound are:
7496 #
7497 # - Strong (the default).
7498 # - Bounded staleness.
7499 # - Exact staleness.
7500 #
7501 # If the Cloud Spanner database to be read is geographically distributed,
7502 # stale read-only transactions can execute more quickly than strong
7503 # or read-write transaction, because they are able to execute far
7504 # from the leader replica.
7505 #
7506 # Each type of timestamp bound is discussed in detail below.
7507 #
7508 # ### Strong
7509 #
7510 # Strong reads are guaranteed to see the effects of all transactions
7511 # that have committed before the start of the read. Furthermore, all
7512 # rows yielded by a single read are consistent with each other -- if
7513 # any part of the read observes a transaction, all parts of the read
7514 # see the transaction.
7515 #
7516 # Strong reads are not repeatable: two consecutive strong read-only
7517 # transactions might return inconsistent results if there are
7518 # concurrent writes. If consistency across reads is required, the
7519 # reads should be executed within a transaction or at an exact read
7520 # timestamp.
7521 #
7522 # See TransactionOptions.ReadOnly.strong.
7523 #
7524 # ### Exact Staleness
7525 #
7526 # These timestamp bounds execute reads at a user-specified
7527 # timestamp. Reads at a timestamp are guaranteed to see a consistent
7528 # prefix of the global transaction history: they observe
7529 # modifications done by all transactions with a commit timestamp &lt;=
7530 # the read timestamp, and observe none of the modifications done by
7531 # transactions with a larger commit timestamp. They will block until
7532 # all conflicting transactions that may be assigned commit timestamps
7533 # &lt;= the read timestamp have finished.
7534 #
7535 # The timestamp can either be expressed as an absolute Cloud Spanner commit
7536 # timestamp or a staleness relative to the current time.
7537 #
7538 # These modes do not require a &quot;negotiation phase&quot; to pick a
7539 # timestamp. As a result, they execute slightly faster than the
7540 # equivalent boundedly stale concurrency modes. On the other hand,
7541 # boundedly stale reads usually return fresher results.
7542 #
7543 # See TransactionOptions.ReadOnly.read_timestamp and
7544 # TransactionOptions.ReadOnly.exact_staleness.
7545 #
7546 # ### Bounded Staleness
7547 #
7548 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
7549 # subject to a user-provided staleness bound. Cloud Spanner chooses the
7550 # newest timestamp within the staleness bound that allows execution
7551 # of the reads at the closest available replica without blocking.
7552 #
7553 # All rows yielded are consistent with each other -- if any part of
7554 # the read observes a transaction, all parts of the read see the
7555 # transaction. Boundedly stale reads are not repeatable: two stale
7556 # reads, even if they use the same staleness bound, can execute at
7557 # different timestamps and thus return inconsistent results.
7558 #
7559 # Boundedly stale reads execute in two phases: the first phase
7560 # negotiates a timestamp among all replicas needed to serve the
7561 # read. In the second phase, reads are executed at the negotiated
7562 # timestamp.
7563 #
7564 # As a result of the two phase execution, bounded staleness reads are
7565 # usually a little slower than comparable exact staleness
7566 # reads. However, they are typically able to return fresher
7567 # results, and are more likely to execute at the closest replica.
7568 #
7569 # Because the timestamp negotiation requires up-front knowledge of
7570 # which rows will be read, it can only be used with single-use
7571 # read-only transactions.
7572 #
7573 # See TransactionOptions.ReadOnly.max_staleness and
7574 # TransactionOptions.ReadOnly.min_read_timestamp.
7575 #
7576 # ### Old Read Timestamps and Garbage Collection
7577 #
7578 # Cloud Spanner continuously garbage collects deleted and overwritten data
7579 # in the background to reclaim storage space. This process is known
7580 # as &quot;version GC&quot;. By default, version GC reclaims versions after they
7581 # are one hour old. Because of this, Cloud Spanner cannot perform reads
7582 # at read timestamps more than one hour in the past. This
7583 # restriction also applies to in-progress reads and/or SQL queries whose
7584 # timestamp become too old while executing. Reads and SQL queries with
7585 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
7586 #
7587 # ## Partitioned DML Transactions
7588 #
7589 # Partitioned DML transactions are used to execute DML statements with a
7590 # different execution strategy that provides different, and often better,
7591 # scalability properties for large, table-wide operations than DML in a
7592 # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
7593 # should prefer using ReadWrite transactions.
7594 #
7595 # Partitioned DML partitions the keyspace and runs the DML statement on each
7596 # partition in separate, internal transactions. These transactions commit
7597 # automatically when complete, and run independently from one another.
7598 #
7599 # To reduce lock contention, this execution strategy only acquires read locks
7600 # on rows that match the WHERE clause of the statement. Additionally, the
7601 # smaller per-partition transactions hold locks for less time.
7602 #
7603 # That said, Partitioned DML is not a drop-in replacement for standard DML used
7604 # in ReadWrite transactions.
7605 #
7606 # - The DML statement must be fully-partitionable. Specifically, the statement
7607 # must be expressible as the union of many statements which each access only
7608 # a single row of the table.
7609 #
7610 # - The statement is not applied atomically to all rows of the table. Rather,
7611 # the statement is applied atomically to partitions of the table, in
7612 # independent transactions. Secondary index rows are updated atomically
7613 # with the base table rows.
7614 #
7615 # - Partitioned DML does not guarantee exactly-once execution semantics
7616 # against a partition. The statement will be applied at least once to each
7617 # partition. It is strongly recommended that the DML statement should be
7618 # idempotent to avoid unexpected results. For instance, it is potentially
7619 # dangerous to run a statement such as
7620 # `UPDATE table SET column = column + 1` as it could be run multiple times
7621 # against some rows.
7622 #
7623 # - The partitions are committed automatically - there is no support for
7624 # Commit or Rollback. If the call returns an error, or if the client issuing
7625 # the ExecuteSql call dies, it is possible that some rows had the statement
7626 # executed on them successfully. It is also possible that statement was
7627 # never executed against other rows.
7628 #
7629 # - Partitioned DML transactions may only contain the execution of a single
7630 # DML statement via ExecuteSql or ExecuteStreamingSql.
7631 #
7632 # - If any error is encountered during the execution of the partitioned DML
7633 # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
7634 # value that cannot be stored due to schema constraints), then the
7635 # operation is stopped at that point and an error is returned. It is
7636 # possible that at this point, some partitions have been committed (or even
7637 # committed multiple times), and other partitions have not been run at all.
7638 #
7639 # Given the above, Partitioned DML is good fit for large, database-wide,
7640 # operations that are idempotent, such as deleting old rows from a very large
7641 # table.
7642 &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
7643 #
7644 # Authorization to begin a read-write transaction requires
7645 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
7646 # on the `session` resource.
7647 # transaction type has no options.
7648 },
7649 &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
7650 #
7651 # Authorization to begin a read-only transaction requires
7652 # `spanner.databases.beginReadOnlyTransaction` permission
7653 # on the `session` resource.
7654 &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
7655 # reads at a specific timestamp are repeatable; the same read at
7656 # the same timestamp always returns the same data. If the
7657 # timestamp is in the future, the read will block until the
7658 # specified timestamp, modulo the read&#x27;s deadline.
7659 #
7660 # Useful for large scale consistent reads such as mapreduces, or
7661 # for coordinating many reads against a consistent snapshot of the
7662 # data.
7663 #
7664 # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
7665 # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
7666 &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
7667 #
7668 # This is useful for requesting fresher data than some previous
7669 # read, or data that is fresh enough to observe the effects of some
7670 # previously committed transaction whose timestamp is known.
7671 #
7672 # Note that this option can only be used in single-use transactions.
7673 #
7674 # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
7675 # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
7676 &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
7677 # old. The timestamp is chosen soon after the read is started.
7678 #
7679 # Guarantees that all writes that have committed more than the
7680 # specified number of seconds ago are visible. Because Cloud Spanner
7681 # chooses the exact timestamp, this mode works even if the client&#x27;s
7682 # local clock is substantially skewed from Cloud Spanner commit
7683 # timestamps.
7684 #
7685 # Useful for reading at nearby replicas without the distributed
7686 # timestamp negotiation overhead of `max_staleness`.
7687 &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
7688 # seconds. Guarantees that all writes that have committed more
7689 # than the specified number of seconds ago are visible. Because
7690 # Cloud Spanner chooses the exact timestamp, this mode works even if
7691 # the client&#x27;s local clock is substantially skewed from Cloud Spanner
7692 # commit timestamps.
7693 #
7694 # Useful for reading the freshest data available at a nearby
7695 # replica, while bounding the possible staleness if the local
7696 # replica has fallen behind.
7697 #
7698 # Note that this option can only be used in single-use
7699 # transactions.
7700 &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
7701 # the Transaction message that describes the transaction.
7702 &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
7703 # are visible.
7704 },
7705 &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
7706 #
7707 # Authorization to begin a Partitioned DML transaction requires
7708 # `spanner.databases.beginPartitionedDmlTransaction` permission
7709 # on the `session` resource.
7710 },
7711 },
7712 &quot;begin&quot;: { # # Transactions # Begin a new transaction and execute this read or SQL query in
7713 # it. The transaction ID of the new transaction is returned in
7714 # ResultSetMetadata.transaction, which is a Transaction.
7715 #
7716 #
7717 # Each session can have at most one active transaction at a time (note that
7718 # standalone reads and queries use a transaction internally and do count
7719 # towards the one transaction limit). After the active transaction is
7720 # completed, the session can immediately be re-used for the next transaction.
7721 # It is not necessary to create a new session for each transaction.
7722 #
7723 # # Transaction Modes
7724 #
7725 # Cloud Spanner supports three transaction modes:
7726 #
7727 # 1. Locking read-write. This type of transaction is the only way
7728 # to write data into Cloud Spanner. These transactions rely on
7729 # pessimistic locking and, if necessary, two-phase commit.
7730 # Locking read-write transactions may abort, requiring the
7731 # application to retry.
7732 #
7733 # 2. Snapshot read-only. This transaction type provides guaranteed
7734 # consistency across several reads, but does not allow
7735 # writes. Snapshot read-only transactions can be configured to
7736 # read at timestamps in the past. Snapshot read-only
7737 # transactions do not need to be committed.
7738 #
7739 # 3. Partitioned DML. This type of transaction is used to execute
7740 # a single Partitioned DML statement. Partitioned DML partitions
7741 # the key space and runs the DML statement over each partition
7742 # in parallel using separate, internal transactions that commit
7743 # independently. Partitioned DML transactions do not need to be
7744 # committed.
7745 #
7746 # For transactions that only read, snapshot read-only transactions
7747 # provide simpler semantics and are almost always faster. In
7748 # particular, read-only transactions do not take locks, so they do
7749 # not conflict with read-write transactions. As a consequence of not
7750 # taking locks, they also do not abort, so retry loops are not needed.
7751 #
7752 # Transactions may only read/write data in a single database. They
7753 # may, however, read/write data in different tables within that
7754 # database.
7755 #
7756 # ## Locking Read-Write Transactions
7757 #
7758 # Locking transactions may be used to atomically read-modify-write
7759 # data anywhere in a database. This type of transaction is externally
7760 # consistent.
7761 #
7762 # Clients should attempt to minimize the amount of time a transaction
7763 # is active. Faster transactions commit with higher probability
7764 # and cause less contention. Cloud Spanner attempts to keep read locks
7765 # active as long as the transaction continues to do reads, and the
7766 # transaction has not been terminated by
7767 # Commit or
7768 # Rollback. Long periods of
7769 # inactivity at the client may cause Cloud Spanner to release a
7770 # transaction&#x27;s locks and abort it.
7771 #
7772 # Conceptually, a read-write transaction consists of zero or more
7773 # reads or SQL statements followed by
7774 # Commit. At any time before
7775 # Commit, the client can send a
7776 # Rollback request to abort the
7777 # transaction.
7778 #
7779 # ### Semantics
7780 #
7781 # Cloud Spanner can commit the transaction if all read locks it acquired
7782 # are still valid at commit time, and it is able to acquire write
7783 # locks for all writes. Cloud Spanner can abort the transaction for any
7784 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
7785 # that the transaction has not modified any user data in Cloud Spanner.
7786 #
7787 # Unless the transaction commits, Cloud Spanner makes no guarantees about
7788 # how long the transaction&#x27;s locks were held for. It is an error to
7789 # use Cloud Spanner locks for any sort of mutual exclusion other than
7790 # between Cloud Spanner transactions themselves.
7791 #
7792 # ### Retrying Aborted Transactions
7793 #
7794 # When a transaction aborts, the application can choose to retry the
7795 # whole transaction again. To maximize the chances of successfully
7796 # committing the retry, the client should execute the retry in the
7797 # same session as the original attempt. The original session&#x27;s lock
7798 # priority increases with each consecutive abort, meaning that each
7799 # attempt has a slightly better chance of success than the previous.
7800 #
7801 # Under some circumstances (e.g., many transactions attempting to
7802 # modify the same row(s)), a transaction can abort many times in a
7803 # short period before successfully committing. Thus, it is not a good
7804 # idea to cap the number of retries a transaction can attempt;
7805 # instead, it is better to limit the total amount of wall time spent
7806 # retrying.
7807 #
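The retry guidance above (bound wall time, not attempt count; reuse the original session) can be sketched as follows. This is a minimal illustration: `Aborted` is a stand-in for however the client surfaces the `ABORTED` error, and `work` is an assumed callable that runs the whole transaction (reads/writes plus Commit) against the given session.

```python
import random
import time

class Aborted(Exception):
    """Stand-in for the ABORTED error surfaced by the client library."""

def run_with_retries(session, work, timeout_secs=60.0):
    # Cap the total wall time spent retrying, not the number of attempts.
    deadline = time.monotonic() + timeout_secs
    delay = 0.01
    while True:
        try:
            # `work` re-runs the whole transaction in the same session,
            # so its lock priority grows with each consecutive abort.
            return work(session)
        except Aborted:
            if time.monotonic() >= deadline:
                raise
            time.sleep(delay + random.uniform(0, delay))  # jittered backoff
            delay = min(delay * 2, 1.0)
```

The jittered exponential backoff spreads out contending retries; the deadline check implements the "limit total wall time" recommendation.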
7808 # ### Idle Transactions
7809 #
7810 # A transaction is considered idle if it has no outstanding reads or
7811 # SQL queries and has not started a read or SQL query within the last 10
7812 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
7813 # don&#x27;t hold on to locks indefinitely. In that case, the commit will
7814 # fail with error `ABORTED`.
7815 #
7816 # If this behavior is undesirable, periodically executing a simple
7817 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
7818 # transaction from becoming idle.
7819 #
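The `SELECT 1` keep-alive can be sketched as a background loop; `execute_sql` here is an assumed callable that runs a statement in the open transaction, not a real client method.

```python
import threading

def start_keepalive(execute_sql, interval_secs=5.0):
    # Periodically run a trivial query so the transaction never sits
    # idle for the 10-second window described above.
    stop = threading.Event()
    def loop():
        while not stop.wait(interval_secs):
            execute_sql("SELECT 1")
    threading.Thread(target=loop, daemon=True).start()
    return stop.set  # call the returned function to stop the keep-alive
```

An interval well under 10 seconds leaves headroom for scheduling delays.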
7820 # ## Snapshot Read-Only Transactions
7821 #
7822 # Snapshot read-only transactions provide a simpler method than
7823 # locking read-write transactions for doing several consistent
7824 # reads. However, this type of transaction does not support writes.
7825 #
7826 # Snapshot transactions do not take locks. Instead, they work by
7827 # choosing a Cloud Spanner timestamp, then executing all reads at that
7828 # timestamp. Since they do not acquire locks, they do not block
7829 # concurrent read-write transactions.
7830 #
7831 # Unlike locking read-write transactions, snapshot read-only
7832 # transactions never abort. They can fail if the chosen read
7833 # timestamp is garbage collected; however, the default garbage
7834 # collection policy is generous enough that most applications do not
7835 # need to worry about this in practice.
7836 #
7837 # Snapshot read-only transactions do not need to call
7838 # Commit or
7839 # Rollback (and in fact are not
7840 # permitted to do so).
7841 #
7842 # To execute a snapshot transaction, the client specifies a timestamp
7843 # bound, which tells Cloud Spanner how to choose a read timestamp.
7844 #
7845 # The types of timestamp bound are:
7846 #
7847 # - Strong (the default).
7848 # - Bounded staleness.
7849 # - Exact staleness.
7850 #
7851 # If the Cloud Spanner database to be read is geographically distributed,
7852 # stale read-only transactions can execute more quickly than strong
7853 # or read-write transactions, because they are able to execute far
7854 # from the leader replica.
7855 #
7856 # Each type of timestamp bound is discussed in detail below.
7857 #
7858 # ### Strong
7859 #
7860 # Strong reads are guaranteed to see the effects of all transactions
7861 # that have committed before the start of the read. Furthermore, all
7862 # rows yielded by a single read are consistent with each other -- if
7863 # any part of the read observes a transaction, all parts of the read
7864 # see the transaction.
7865 #
7866 # Strong reads are not repeatable: two consecutive strong read-only
7867 # transactions might return inconsistent results if there are
7868 # concurrent writes. If consistency across reads is required, the
7869 # reads should be executed within a transaction or at an exact read
7870 # timestamp.
7871 #
7872 # See TransactionOptions.ReadOnly.strong.
7873 #
7874 # ### Exact Staleness
7875 #
7876 # These timestamp bounds execute reads at a user-specified
7877 # timestamp. Reads at a timestamp are guaranteed to see a consistent
7878 # prefix of the global transaction history: they observe
7879 # modifications done by all transactions with a commit timestamp &lt;=
7880 # the read timestamp, and observe none of the modifications done by
7881 # transactions with a larger commit timestamp. They will block until
7882 # all conflicting transactions that may be assigned commit timestamps
7883 # &lt;= the read timestamp have finished.
7884 #
7885 # The timestamp can either be expressed as an absolute Cloud Spanner commit
7886 # timestamp or a staleness relative to the current time.
7887 #
7888 # These modes do not require a &quot;negotiation phase&quot; to pick a
7889 # timestamp. As a result, they execute slightly faster than the
7890 # equivalent boundedly stale concurrency modes. On the other hand,
7891 # boundedly stale reads usually return fresher results.
7892 #
7893 # See TransactionOptions.ReadOnly.read_timestamp and
7894 # TransactionOptions.ReadOnly.exact_staleness.
7895 #
7896 # ### Bounded Staleness
7897 #
7898 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
7899 # subject to a user-provided staleness bound. Cloud Spanner chooses the
7900 # newest timestamp within the staleness bound that allows execution
7901 # of the reads at the closest available replica without blocking.
7902 #
7903 # All rows yielded are consistent with each other -- if any part of
7904 # the read observes a transaction, all parts of the read see the
7905 # transaction. Boundedly stale reads are not repeatable: two stale
7906 # reads, even if they use the same staleness bound, can execute at
7907 # different timestamps and thus return inconsistent results.
7908 #
7909 # Boundedly stale reads execute in two phases: the first phase
7910 # negotiates a timestamp among all replicas needed to serve the
7911 # read. In the second phase, reads are executed at the negotiated
7912 # timestamp.
7913 #
7914 # As a result of the two phase execution, bounded staleness reads are
7915 # usually a little slower than comparable exact staleness
7916 # reads. However, they are typically able to return fresher
7917 # results, and are more likely to execute at the closest replica.
7918 #
7919 # Because the timestamp negotiation requires up-front knowledge of
7920 # which rows will be read, it can only be used with single-use
7921 # read-only transactions.
7922 #
7923 # See TransactionOptions.ReadOnly.max_staleness and
7924 # TransactionOptions.ReadOnly.min_read_timestamp.
7925 #
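Expressed as `TransactionOptions` fragments of the request body documented on this page, the three timestamp bounds look like this (the staleness and timestamp values are illustrative):

```python
# Strong (the default): see all transactions committed before the read,
# and ask for the chosen timestamp back in the Transaction message.
strong = {"readOnly": {"strong": True, "returnReadTimestamp": True}}

# Exact staleness: either an absolute read timestamp or a fixed staleness.
exact_ts = {"readOnly": {"readTimestamp": "2014-10-02T15:01:23.045123456Z"}}
exact_stale = {"readOnly": {"exactStaleness": "10s"}}

# Bounded staleness: single-use transactions only.
bounded = {"readOnly": {"maxStaleness": "15s"}}
min_ts = {"readOnly": {"minReadTimestamp": "2014-10-02T15:01:23.045123456Z"}}
```

Only one bound may be set per transaction; `maxStaleness` and `minReadTimestamp` are rejected outside single-use transactions.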
7926 # ### Old Read Timestamps and Garbage Collection
7927 #
7928 # Cloud Spanner continuously garbage collects deleted and overwritten data
7929 # in the background to reclaim storage space. This process is known
7930 # as &quot;version GC&quot;. By default, version GC reclaims versions after they
7931 # are one hour old. Because of this, Cloud Spanner cannot perform reads
7932 # at read timestamps more than one hour in the past. This
7933 # restriction also applies to in-progress reads and/or SQL queries whose
7934 # timestamps become too old while executing. Reads and SQL queries with
7935 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
7936 #
7937 # ## Partitioned DML Transactions
7938 #
7939 # Partitioned DML transactions are used to execute DML statements with a
7940 # different execution strategy that provides different, and often better,
7941 # scalability properties for large, table-wide operations than DML in a
7942 # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
7943 # should prefer using ReadWrite transactions.
7944 #
7945 # Partitioned DML partitions the keyspace and runs the DML statement on each
7946 # partition in separate, internal transactions. These transactions commit
7947 # automatically when complete, and run independently from one another.
7948 #
7949 # To reduce lock contention, this execution strategy only acquires read locks
7950 # on rows that match the WHERE clause of the statement. Additionally, the
7951 # smaller per-partition transactions hold locks for less time.
7952 #
7953 # That said, Partitioned DML is not a drop-in replacement for standard DML used
7954 # in ReadWrite transactions.
7955 #
7956 # - The DML statement must be fully-partitionable. Specifically, the statement
7957 # must be expressible as the union of many statements which each access only
7958 # a single row of the table.
7959 #
7960 # - The statement is not applied atomically to all rows of the table. Rather,
7961 # the statement is applied atomically to partitions of the table, in
7962 # independent transactions. Secondary index rows are updated atomically
7963 # with the base table rows.
7964 #
7965 # - Partitioned DML does not guarantee exactly-once execution semantics
7966 # against a partition. The statement will be applied at least once to each
7967 # partition. It is strongly recommended that the DML statement should be
7968 # idempotent to avoid unexpected results. For instance, it is potentially
7969 # dangerous to run a statement such as
7970 # `UPDATE table SET column = column + 1` as it could be run multiple times
7971 # against some rows.
7972 #
7973 # - The partitions are committed automatically - there is no support for
7974 # Commit or Rollback. If the call returns an error, or if the client issuing
7975 # the ExecuteSql call dies, it is possible that some rows had the statement
7976 # executed on them successfully. It is also possible that statement was
7977 # never executed against other rows.
7978 #
7979 # - Partitioned DML transactions may only contain the execution of a single
7980 # DML statement via ExecuteSql or ExecuteStreamingSql.
7981 #
7982 # - If any error is encountered during the execution of the partitioned DML
7983 # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
7984 # value that cannot be stored due to schema constraints), then the
7985 # operation is stopped at that point and an error is returned. It is
7986 # possible that at this point, some partitions have been committed (or even
7987 # committed multiple times), and other partitions have not been run at all.
7988 #
7989 # Given the above, Partitioned DML is a good fit for large, database-wide
7990 # operations that are idempotent, such as deleting old rows from a very large
7991 # table.
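As a request-body fragment, beginning such a transaction only takes the empty `partitionedDml` option; the statement itself should be idempotent. The table and column names below are hypothetical:

```python
options = {"partitionedDml": {}}

# Idempotent: re-running it against a partition leaves the same result.
safe_stmt = "UPDATE Users SET inactive = TRUE WHERE last_seen < '2019-01-01'"

# Not idempotent: re-execution against a partition double-counts
# (the `column = column + 1` hazard described above).
unsafe_stmt = "UPDATE Counters SET value = value + 1 WHERE TRUE"
```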
7992 &quot;readWrite&quot;: { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
7993 #
7994 # Authorization to begin a read-write transaction requires
7995 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
7996 # on the `session` resource.
7997 # transaction type has no options.
7998 },
7999 &quot;readOnly&quot;: { # Message type to initiate a read-only transaction. # Transaction will not write.
8000 #
8001 # Authorization to begin a read-only transaction requires
8002 # `spanner.databases.beginReadOnlyTransaction` permission
8003 # on the `session` resource.
8004 &quot;readTimestamp&quot;: &quot;A String&quot;, # Executes all reads at the given timestamp. Unlike other modes,
8005 # reads at a specific timestamp are repeatable; the same read at
8006 # the same timestamp always returns the same data. If the
8007 # timestamp is in the future, the read will block until the
8008 # specified timestamp, modulo the read&#x27;s deadline.
8009 #
8010 # Useful for large scale consistent reads such as mapreduces, or
8011 # for coordinating many reads against a consistent snapshot of the
8012 # data.
8013 #
8014 # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
8015 # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
8016 &quot;minReadTimestamp&quot;: &quot;A String&quot;, # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
8017 #
8018 # This is useful for requesting fresher data than some previous
8019 # read, or data that is fresh enough to observe the effects of some
8020 # previously committed transaction whose timestamp is known.
8021 #
8022 # Note that this option can only be used in single-use transactions.
8023 #
8024 # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
8025 # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
8026 &quot;exactStaleness&quot;: &quot;A String&quot;, # Executes all reads at a timestamp that is `exact_staleness`
8027 # old. The timestamp is chosen soon after the read is started.
8028 #
8029 # Guarantees that all writes that have committed more than the
8030 # specified number of seconds ago are visible. Because Cloud Spanner
8031 # chooses the exact timestamp, this mode works even if the client&#x27;s
8032 # local clock is substantially skewed from Cloud Spanner commit
8033 # timestamps.
8034 #
8035 # Useful for reading at nearby replicas without the distributed
8036 # timestamp negotiation overhead of `max_staleness`.
8037 &quot;maxStaleness&quot;: &quot;A String&quot;, # Read data at a timestamp &gt;= `NOW - max_staleness`
8038 # seconds. Guarantees that all writes that have committed more
8039 # than the specified number of seconds ago are visible. Because
8040 # Cloud Spanner chooses the exact timestamp, this mode works even if
8041 # the client&#x27;s local clock is substantially skewed from Cloud Spanner
8042 # commit timestamps.
8043 #
8044 # Useful for reading the freshest data available at a nearby
8045 # replica, while bounding the possible staleness if the local
8046 # replica has fallen behind.
8047 #
8048 # Note that this option can only be used in single-use
8049 # transactions.
8050 &quot;returnReadTimestamp&quot;: True or False, # If true, the Cloud Spanner-selected read timestamp is included in
8051 # the Transaction message that describes the transaction.
8052 &quot;strong&quot;: True or False, # Read at a timestamp where all previously committed transactions
8053 # are visible.
8054 },
8055 &quot;partitionedDml&quot;: { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
8056 #
8057 # Authorization to begin a Partitioned DML transaction requires
8058 # `spanner.databases.beginPartitionedDmlTransaction` permission
8059 # on the `session` resource.
8060 },
8061 },
8062 &quot;id&quot;: &quot;A String&quot;, # Execute the read or SQL query in a previously-started transaction.
8063 },
8064 &quot;partitionToken&quot;: &quot;A String&quot;, # If present, results will be restricted to the specified partition
8065 # previously created using PartitionRead(). There must be an exact
8066 # match for the values of fields common to this message and the
8067 # PartitionReadRequest message used to create this partition_token.
8068     &quot;keySet&quot;: { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
8069 # primary keys of the rows in table to be yielded, unless index
8070 # is present. If index is present, then key_set instead names
8071 # index keys in index.
8072 #
8073 # If the partition_token field is empty, rows are yielded
8074 # in table primary key order (if index is empty) or index key order
8075 # (if index is non-empty). If the partition_token field is not
8076 # empty, rows will be yielded in an unspecified order.
8077 #
8078 # It is not an error for the `key_set` to name rows that do not
8079 # exist in the database. Read yields nothing for nonexistent rows.
8080 # the keys are expected to be in the same table or index. The keys need
8081 # not be sorted in any particular way.
8082 #
8083 # If the same key is specified multiple times in the set (for example
8084 # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
8085 # behaves as if the key were only specified once.
8086 &quot;ranges&quot;: [ # A list of key ranges. See KeyRange for more information about
8087 # key range specifications.
8088 { # KeyRange represents a range of rows in a table or index.
8089 #
8090 # A range has a start key and an end key. These keys can be open or
8091 # closed, indicating if the range includes rows with that key.
8092 #
8093 # Keys are represented by lists, where the ith value in the list
8094 # corresponds to the ith component of the table or index primary key.
8095 # Individual values are encoded as described
8096 # here.
8097 #
8098 # For example, consider the following table definition:
8099 #
8100 # CREATE TABLE UserEvents (
8101 # UserName STRING(MAX),
8102 # EventDate STRING(10)
8103 # ) PRIMARY KEY(UserName, EventDate);
8104 #
8105 # The following keys name rows in this table:
8106 #
8107 # &quot;Bob&quot;, &quot;2014-09-23&quot;
8108 #
8109 # Since the `UserEvents` table&#x27;s `PRIMARY KEY` clause names two
8110 # columns, each `UserEvents` key has two elements; the first is the
8111 # `UserName`, and the second is the `EventDate`.
8112 #
8113 # Key ranges with multiple components are interpreted
8114 # lexicographically by component using the table or index key&#x27;s declared
8115 # sort order. For example, the following range returns all events for
8116 # user `&quot;Bob&quot;` that occurred in the year 2015:
8117 #
8118 # &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2015-01-01&quot;]
8119 # &quot;end_closed&quot;: [&quot;Bob&quot;, &quot;2015-12-31&quot;]
8120 #
8121 # Start and end keys can omit trailing key components. This affects the
8122 # inclusion and exclusion of rows that exactly match the provided key
8123 # components: if the key is closed, then rows that exactly match the
8124 # provided components are included; if the key is open, then rows
8125 # that exactly match are not included.
8126 #
8127 # For example, the following range includes all events for `&quot;Bob&quot;` that
8128 # occurred during and after the year 2000:
8129 #
8130 # &quot;start_closed&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
8131 # &quot;end_closed&quot;: [&quot;Bob&quot;]
8132 #
8133 # The next example retrieves all events for `&quot;Bob&quot;`:
8134 #
8135 # &quot;start_closed&quot;: [&quot;Bob&quot;]
8136 # &quot;end_closed&quot;: [&quot;Bob&quot;]
8137 #
8138 # To retrieve events before the year 2000:
8139 #
8140 # &quot;start_closed&quot;: [&quot;Bob&quot;]
8141 # &quot;end_open&quot;: [&quot;Bob&quot;, &quot;2000-01-01&quot;]
8142 #
8143 # The following range includes all rows in the table:
8144 #
8145 # &quot;start_closed&quot;: []
8146 # &quot;end_closed&quot;: []
8147 #
8148 # This range returns all users whose `UserName` begins with any
8149 # character from A to C:
8150 #
8151 # &quot;start_closed&quot;: [&quot;A&quot;]
8152 # &quot;end_open&quot;: [&quot;D&quot;]
8153 #
8154 # This range returns all users whose `UserName` begins with B:
8155 #
8156 # &quot;start_closed&quot;: [&quot;B&quot;]
8157 # &quot;end_open&quot;: [&quot;C&quot;]
8158 #
8159 # Key ranges honor column sort order. For example, suppose a table is
8160 # defined as follows:
8161 #
8162 # CREATE TABLE DescendingSortedTable {
8163 # Key INT64,
8164 # ...
8165 # ) PRIMARY KEY(Key DESC);
8166 #
8167 # The following range retrieves all rows with key values between 1
8168 # and 100 inclusive:
8169 #
8170 # &quot;start_closed&quot;: [&quot;100&quot;]
8171 # &quot;end_closed&quot;: [&quot;1&quot;]
8172 #
8173 # Note that 100 is passed as the start, and 1 is passed as the end,
8174 # because `Key` is a descending column in the schema.
8175       &quot;endClosed&quot;: [ # If the end is closed, then the range includes all rows whose
8176 # first `len(end_closed)` key columns exactly match `end_closed`.
8177 &quot;&quot;,
8178 ],
8179       &quot;startClosed&quot;: [ # If the start is closed, then the range includes all rows whose
8180 # first `len(start_closed)` key columns exactly match `start_closed`.
8181 &quot;&quot;,
8182 ],
8183       &quot;startOpen&quot;: [ # If the start is open, then the range excludes rows whose first
8184 # `len(start_open)` key columns exactly match `start_open`.
8185 &quot;&quot;,
8186 ],
8187 &quot;endOpen&quot;: [ # If the end is open, then the range excludes rows whose first
8188 # `len(end_open)` key columns exactly match `end_open`.
8189 &quot;&quot;,
8190 ],
8191       },
8192 ],
8193 &quot;keys&quot;: [ # A list of specific keys. Entries in `keys` should have exactly as
8194 # many elements as there are columns in the primary or index key
8195 # with which this `KeySet` is used. Individual key values are
8196 # encoded as described here.
8197 [
8198 &quot;&quot;,
8199 ],
8200 ],
8201 &quot;all&quot;: True or False, # For convenience `all` can be set to `true` to indicate that this
8202 # `KeySet` matches all keys in the table or index. Note that any keys
8203 # specified in `keys` or `ranges` are only yielded once.
8204 },
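Combining the examples above, a `keySet` naming one specific `UserEvents` row plus all of Bob's 2015 events could be written as:

```python
key_set = {
    "keys": [
        ["Bob", "2014-09-23"],           # one specific (UserName, EventDate)
    ],
    "ranges": [
        {   # all events for "Bob" during 2015
            "startClosed": ["Bob", "2015-01-01"],
            "endClosed": ["Bob", "2015-12-31"],
        },
    ],
    "all": False,
}
```

If the specific key also fell inside the range, the row would still be yielded only once.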
8205   }
8206
8207 x__xgafv: string, V1 error format.
8208 Allowed values
8209 1 - v1 error format
8210 2 - v2 error format
8211
8212Returns:
8213 An object of the form:
8214
8215 { # Partial results from a streaming read or SQL query. Streaming reads and
8216 # SQL queries better tolerate large result sets, large rows, and large
8217 # values, but are a little trickier to consume.
8218     &quot;stats&quot;: { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the statement that produced this
8219 # streaming result set. These can be requested by setting
8220 # ExecuteSqlRequest.query_mode and are sent
8221 # only once with the last response in the stream.
8222 # This field will also be present in the last response for DML
8223 # statements.
8224       &quot;queryStats&quot;: { # Aggregated statistics from the execution of the query. Only present when
8225 # the query is profiled. For example, a query could return the statistics as
8226 # follows:
8227 #
8228 # {
8229 # &quot;rows_returned&quot;: &quot;3&quot;,
8230 # &quot;elapsed_time&quot;: &quot;1.22 secs&quot;,
8231 # &quot;cpu_time&quot;: &quot;1.19 secs&quot;
8232 # }
8233 &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
8234 },
8235 &quot;rowCountExact&quot;: &quot;A String&quot;, # Standard DML returns an exact count of rows that were modified.
8236       &quot;rowCountLowerBound&quot;: &quot;A String&quot;, # Partitioned DML does not offer exactly-once semantics, so it
8237 # returns a lower bound of the rows modified.
8238 &quot;queryPlan&quot;: { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
8239 &quot;planNodes&quot;: [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
8240 # with the plan root. Each PlanNode&#x27;s `id` corresponds to its index in
8241 # `plan_nodes`.
8242 { # Node information for nodes appearing in a QueryPlan.plan_nodes.
8243           &quot;childLinks&quot;: [ # List of child node `index`es and their relationship to this parent.
8244 { # Metadata associated with a parent-child relationship appearing in a
8245 # PlanNode.
8246               &quot;childIndex&quot;: 42, # The node to which the link points.
8247               &quot;type&quot;: &quot;A String&quot;, # The type of the link. For example, in Hash Joins this could be used to
8248 # distinguish between the build child and the probe child, or in the case
8249 # of the child being an output variable, to represent the tag associated
8250 # with the output variable.
8251 &quot;variable&quot;: &quot;A String&quot;, # Only present if the child node is SCALAR and corresponds
8252 # to an output variable of the parent node. The field carries the name of
8253 # the output variable.
8254 # For example, a `TableScan` operator that reads rows from a table will
8255 # have child links to the `SCALAR` nodes representing the output variables
8256 # created for each column that is read by the operator. The corresponding
8257 # `variable` fields will be set to the variable names assigned to the
8258 # columns.
8259             },
8260 ],
8261           &quot;metadata&quot;: { # Attributes relevant to the node contained in a group of key-value pairs.
8262 # For example, a Parameter Reference node could have the following
8263 # information in its metadata:
8264 #
8265 # {
8266 # &quot;parameter_reference&quot;: &quot;param1&quot;,
8267 # &quot;parameter_type&quot;: &quot;array&quot;
8268 # }
8269 &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
8270 },
8271           &quot;kind&quot;: &quot;A String&quot;, # Used to determine the type of node. May be needed for visualizing
8272 # different kinds of nodes differently. For example, If the node is a
8273 # SCALAR node, it will have a condensed representation
8274 # which can be used to directly embed a description of the node in its
8275 # parent.
8276           &quot;shortRepresentation&quot;: { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
8277 # `SCALAR` PlanNode(s).
8278 &quot;subqueries&quot;: { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
8279 # where the `description` string of this node references a `SCALAR`
8280 # subquery contained in the expression subtree rooted at this node. The
8281 # referenced `SCALAR` subquery may not necessarily be a direct child of
8282 # this node.
8283 &quot;a_key&quot;: 42,
8284 },
8285 &quot;description&quot;: &quot;A String&quot;, # A string representation of the expression subtree rooted at this node.
8286 },
8287 &quot;displayName&quot;: &quot;A String&quot;, # The display name for the node.
8288 &quot;index&quot;: 42, # The `PlanNode`&#x27;s index in node list.
8289 &quot;executionStats&quot;: { # The execution statistics associated with the node, contained in a group of
8290 # key-value pairs. Only present if the plan was returned as a result of a
8291 # profile query. For example, number of executions, number of rows/time per
8292 # execution etc.
8293 &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
8294 },
8295         },
8296 ],
8297 },
8298     },
8299 &quot;resumeToken&quot;: &quot;A String&quot;, # Streaming calls might be interrupted for a variety of reasons, such
8300 # as TCP connection loss. If this occurs, the stream of results can
8301 # be resumed by re-sending the original request and including
8302 # `resume_token`. Note that executing any other transaction in the
8303 # same session invalidates the token.
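A resumable consumer can be sketched like this; `execute` is an assumed callable (not a real client method) that issues the streaming call and yields `PartialResultSet` dicts.

```python
def stream_with_resume(execute, request):
    # Re-send the original request with the latest resume_token
    # whenever the stream is interrupted.
    resume_token = b""
    while True:
        try:
            for partial in execute(dict(request, resumeToken=resume_token)):
                if partial.get("resumeToken"):
                    resume_token = partial["resumeToken"]
                yield partial
            return  # stream finished normally
        except ConnectionError:
            continue  # retry from the last checkpoint
```

Because the retried request names the last token, results already consumed are not re-yielded after a reconnect.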
8304 &quot;metadata&quot;: { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
8305 # Only present in the first response.
8306 &quot;transaction&quot;: { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
8307 # information about the new transaction is yielded here.
8308 &quot;readTimestamp&quot;: &quot;A String&quot;, # For snapshot read-only transactions, the read timestamp chosen
8309 # for the transaction. Not returned by default: see
8310 # TransactionOptions.ReadOnly.return_read_timestamp.
8311 #
8312 # A timestamp in RFC3339 UTC \&quot;Zulu\&quot; format, accurate to nanoseconds.
8313 # Example: `&quot;2014-10-02T15:01:23.045123456Z&quot;`.
8314 &quot;id&quot;: &quot;A String&quot;, # `id` may be used to identify the transaction in subsequent
8315 # Read,
8316 # ExecuteSql,
8317 # Commit, or
8318 # Rollback calls.
8319 #
8320 # Single-use read-only transactions do not have IDs, because
8321 # single-use transactions do not support multiple requests.
8322 },
8323 &quot;rowType&quot;: { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
8324 # set. For example, a SQL query like `&quot;SELECT UserId, UserName FROM
8325 # Users&quot;` could return a `row_type` value like:
8326       #
8327       #     &quot;fields&quot;: [
8328 # { &quot;name&quot;: &quot;UserId&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;INT64&quot; } },
8329 # { &quot;name&quot;: &quot;UserName&quot;, &quot;type&quot;: { &quot;code&quot;: &quot;STRING&quot; } },
8330 # ]
8331 &quot;fields&quot;: [ # The list of fields that make up this struct. Order is
8332 # significant, because values of this struct type are represented as
8333 # lists, where the order of field values matches the order of
8334 # fields in the StructType. In turn, the order of fields
8335 # matches the order of columns in a read request, or the order of
8336 # fields in the `SELECT` clause of a query.
8337 { # Message representing a single field of a struct.
8338 &quot;type&quot;: # Object with schema name: Type # The type of the field.
8339 &quot;name&quot;: &quot;A String&quot;, # The name of the field. For reads, this is the column name. For
8340 # SQL queries, it is the column alias (e.g., `&quot;Word&quot;` in the
8341 # query `&quot;SELECT &#x27;hello&#x27; AS Word&quot;`), or the column name (e.g.,
8342 # `&quot;ColName&quot;` in the query `&quot;SELECT ColName FROM Table&quot;`). Some
8343 # columns might have an empty name (e.g., !&quot;SELECT
8344 # UPPER(ColName)&quot;`). Note that a query result can contain
8345 # multiple fields with the same name.
8346 },
8347 ],
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07008348 },
8349 },
    &quot;values&quot;: [ # A streamed result set consists of a stream of values, which might
        # be split into many `PartialResultSet` messages to accommodate
        # large rows and/or large values. Every N complete values defines a
        # row, where N is equal to the number of entries in
        # metadata.row_type.fields.
        #
        # Most values are encoded based on type as described
        # here.
        #
        # It is possible that the last value in values is &quot;chunked&quot;,
        # meaning that the rest of the value is sent in subsequent
        # `PartialResultSet`(s). This is denoted by the chunked_value
        # field. Two or more chunked values can be merged to form a
        # complete value as follows:
        #
        # * `bool/number/null`: cannot be chunked
        # * `string`: concatenate the strings
        # * `list`: concatenate the lists. If the last element in a list is a
        #   `string`, `list`, or `object`, merge it with the first element in
        #   the next list by applying these rules recursively.
        # * `object`: concatenate the (field name, field value) pairs. If a
        #   field name is duplicated, then apply these rules recursively
        #   to merge the field values.
        #
        # Some examples of merging:
        #
        #     # Strings are concatenated.
        #     &quot;foo&quot;, &quot;bar&quot; =&gt; &quot;foobar&quot;
        #
        #     # Lists of non-strings are concatenated.
        #     [2, 3], [4] =&gt; [2, 3, 4]
        #
        #     # Lists are concatenated, but the last and first elements are merged
        #     # because they are strings.
        #     [&quot;a&quot;, &quot;b&quot;], [&quot;c&quot;, &quot;d&quot;] =&gt; [&quot;a&quot;, &quot;bc&quot;, &quot;d&quot;]
        #
        #     # Lists are concatenated, but the last and first elements are merged
        #     # because they are lists. Recursively, the last and first elements
        #     # of the inner lists are merged because they are strings.
        #     [&quot;a&quot;, [&quot;b&quot;, &quot;c&quot;]], [[&quot;d&quot;], &quot;e&quot;] =&gt; [&quot;a&quot;, [&quot;b&quot;, &quot;cd&quot;], &quot;e&quot;]
        #
        #     # Non-overlapping object fields are combined.
        #     {&quot;a&quot;: &quot;1&quot;}, {&quot;b&quot;: &quot;2&quot;} =&gt; {&quot;a&quot;: &quot;1&quot;, &quot;b&quot;: &quot;2&quot;}
        #
        #     # Overlapping object fields are merged.
        #     {&quot;a&quot;: &quot;1&quot;}, {&quot;a&quot;: &quot;2&quot;} =&gt; {&quot;a&quot;: &quot;12&quot;}
        #
        #     # Examples of merging objects containing lists of strings.
        #     {&quot;a&quot;: [&quot;1&quot;]}, {&quot;a&quot;: [&quot;2&quot;]} =&gt; {&quot;a&quot;: [&quot;12&quot;]}
        #
        # For a more complete example, suppose a streaming SQL query is
        # yielding a result set whose rows contain a single string
        # field. The following `PartialResultSet`s might be yielded:
        #
        #     {
        #       &quot;metadata&quot;: { ... }
        #       &quot;values&quot;: [&quot;Hello&quot;, &quot;W&quot;]
        #       &quot;chunked_value&quot;: true
        #       &quot;resume_token&quot;: &quot;Af65...&quot;
        #     }
        #     {
        #       &quot;values&quot;: [&quot;orl&quot;]
        #       &quot;chunked_value&quot;: true
        #       &quot;resume_token&quot;: &quot;Bqp2...&quot;
        #     }
        #     {
        #       &quot;values&quot;: [&quot;d&quot;]
        #       &quot;resume_token&quot;: &quot;Zx1B...&quot;
        #     }
        #
        # This sequence of `PartialResultSet`s encodes two rows, one
        # containing the field value `&quot;Hello&quot;`, and a second containing the
        # field value `&quot;World&quot; = &quot;W&quot; + &quot;orl&quot; + &quot;d&quot;`.
      &quot;&quot;,
    ],
    &quot;chunkedValue&quot;: True or False, # If true, then the final value in values is chunked, and must
        # be combined with more values from subsequent `PartialResultSet`s
        # to obtain a complete field value.
  }</pre>
</div>

</body></html>
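The chunked-value merge rules described above can be sketched in plain Python. The `merge_values` helper below is hypothetical (it is not part of the generated client); the official `google-cloud-spanner` library performs this merging internally when it assembles streamed results. This sketch covers only the JSON-level rules for strings, lists, and objects, not the typed wire encoding of values.

```python
# Hypothetical helper illustrating the chunked-value merge rules for
# PartialResultSet. Not part of the generated client library.

def merge_values(a, b):
    """Merge a chunked value `a` with its continuation chunk `b`."""
    if isinstance(a, str) and isinstance(b, str):
        # Strings are concatenated.
        return a + b
    if isinstance(a, list) and isinstance(b, list):
        # Lists are concatenated; if the boundary elements are both
        # mergeable and of the same kind, merge them recursively.
        if a and b and isinstance(a[-1], (str, list, dict)) and type(a[-1]) is type(b[0]):
            return a[:-1] + [merge_values(a[-1], b[0])] + b[1:]
        return a + b
    if isinstance(a, dict) and isinstance(b, dict):
        # Objects: concatenate (field name, field value) pairs; a
        # duplicated field name merges its values recursively.
        merged = dict(a)
        for key, value in b.items():
            merged[key] = merge_values(merged[key], value) if key in merged else value
        return merged
    raise TypeError("bool/number/null values cannot be chunked")

# The documented examples:
assert merge_values("foo", "bar") == "foobar"
assert merge_values([2, 3], [4]) == [2, 3, 4]
assert merge_values(["a", "b"], ["c", "d"]) == ["a", "bc", "d"]
assert merge_values(["a", ["b", "c"]], [["d"], "e"]) == ["a", ["b", "cd"], "e"]
assert merge_values({"a": "1"}, {"b": "2"}) == {"a": "1", "b": "2"}
assert merge_values({"a": "1"}, {"a": "2"}) == {"a": "12"}
assert merge_values({"a": ["1"]}, {"a": ["2"]}) == {"a": ["12"]}
```

Applied to the streaming example above, the chunks `"W"`, `"orl"`, and `"d"` merge pairwise into the complete field value `"World"`.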