<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="spanner_v1.html">Cloud Spanner API</a> . <a href="spanner_v1.projects.html">projects</a> . <a href="spanner_v1.projects.instances.html">instances</a> . <a href="spanner_v1.projects.instances.databases.html">databases</a> . <a href="spanner_v1.projects.instances.databases.sessions.html">sessions</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#batchCreate">batchCreate(database, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates multiple new sessions.</p>
<p class="toc_element">
  <code><a href="#beginTransaction">beginTransaction(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Begins a new transaction. This step can often be skipped:</p>
<p class="toc_element">
  <code><a href="#commit">commit(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Commits a transaction. The request includes the mutations to be</p>
<p class="toc_element">
  <code><a href="#create">create(database, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a new session. A session can be used to perform</p>
<p class="toc_element">
  <code><a href="#delete">delete(name, x__xgafv=None)</a></code></p>
<p class="firstline">Ends a session, releasing server resources associated with it. This will</p>
<p class="toc_element">
  <code><a href="#executeBatchDml">executeBatchDml(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Executes a batch of SQL DML statements. This method allows many statements</p>
<p class="toc_element">
  <code><a href="#executeSql">executeSql(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Executes an SQL statement, returning all results in a single reply. This</p>
<p class="toc_element">
  <code><a href="#executeStreamingSql">executeStreamingSql(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Like ExecuteSql, except returns the result</p>
<p class="toc_element">
  <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets a session. Returns `NOT_FOUND` if the session does not exist.</p>
<p class="toc_element">
  <code><a href="#list">list(database, pageSize=None, filter=None, pageToken=None, x__xgafv=None)</a></code></p>
<p class="firstline">Lists all sessions in a given database.</p>
<p class="toc_element">
  <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<p class="toc_element">
  <code><a href="#partitionQuery">partitionQuery(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a set of partition tokens that can be used to execute a query</p>
<p class="toc_element">
  <code><a href="#partitionRead">partitionRead(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a set of partition tokens that can be used to execute a read</p>
<p class="toc_element">
  <code><a href="#read">read(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Reads rows from the database using key lookups and scans, as a</p>
<p class="toc_element">
  <code><a href="#rollback">rollback(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Rolls back a transaction, releasing any locks it holds. It is a good</p>
<p class="toc_element">
  <code><a href="#streamingRead">streamingRead(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Like Read, except returns the result set as a</p>
<h3>Method Details</h3>
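<p>Every method below takes fully-qualified resource names. As a minimal sketch (the project, instance, and database IDs are hypothetical examples, not values from this API), the paths can be built like this:</p>

```python
def database_path(project, instance, database):
    """Fully-qualified database name, as passed to batchCreate/create/list."""
    return f"projects/{project}/instances/{instance}/databases/{database}"

def session_path(project, instance, database, session):
    """Fully-qualified session name, as passed to get/delete and as the
    `session` argument of the transaction and read/query methods."""
    return f"{database_path(project, instance, database)}/sessions/{session}"

print(database_path("my-project", "test-instance", "example-db"))
print(session_path("my-project", "test-instance", "example-db", "s1"))
```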
<div class="method">
    <code class="details" id="batchCreate">batchCreate(database, body=None, x__xgafv=None)</code>
  <pre>Creates multiple new sessions.

This API can be used to initialize a session cache on the clients.
See https://goo.gl/TgSFN2 for best practices on session cache management.

Args:
  database: string, Required. The database in which the new sessions are created. (required)
  body: object, The request body.
    The object takes the form of:

{ # The request for BatchCreateSessions.
    "sessionTemplate": { # A session in the Cloud Spanner API. # Parameters to be applied to each created session.
      "labels": { # The labels for the session.
          #
          # * Label keys must be between 1 and 63 characters long and must conform to
          #   the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
          # * Label values must be between 0 and 63 characters long and must conform
          #   to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
          # * No more than 64 labels can be associated with a given session.
          #
          # See https://goo.gl/xmQnxf for more information on and examples of labels.
        "a_key": "A String",
      },
      "name": "A String", # The name of the session. This is always system-assigned; values provided
          # when creating a session are ignored.
      "approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session was last used. It is
          # typically earlier than the actual last use time.
      "createTime": "A String", # Output only. The timestamp when the session was created.
    },
    "sessionCount": 42, # Required. The number of sessions to be created in this batch call.
        # The API may return fewer than the requested number of sessions. If a
        # specific number of sessions is desired, the client can make additional
        # calls to BatchCreateSessions (adjusting
        # session_count as necessary).
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The response for BatchCreateSessions.
      "session": [ # The freshly created sessions.
        { # A session in the Cloud Spanner API.
          "labels": { # The labels for the session.
              #
              # * Label keys must be between 1 and 63 characters long and must conform to
              #   the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
              # * Label values must be between 0 and 63 characters long and must conform
              #   to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
              # * No more than 64 labels can be associated with a given session.
              #
              # See https://goo.gl/xmQnxf for more information on and examples of labels.
            "a_key": "A String",
          },
          "name": "A String", # The name of the session. This is always system-assigned; values provided
              # when creating a session are ignored.
          "approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session was last used. It is
              # typically earlier than the actual last use time.
          "createTime": "A String", # Output only. The timestamp when the session was created.
        },
      ],
    }</pre>
</div>
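<p>A minimal sketch of a BatchCreateSessions request body following the shape documented above; the database path and label values are hypothetical examples, and the actual call (shown commented out) assumes an authorized discovery-built client:</p>

```python
# Hypothetical database path; substitute your own project/instance/database.
database = "projects/my-project/instances/test-instance/databases/example-db"

body = {
    "sessionCount": 25,  # the API may return fewer sessions than requested
    "sessionTemplate": {
        # Label keys must match `[a-z]([-a-z0-9]*[a-z0-9])?`; at most 64 labels.
        "labels": {"env": "test"},
    },
}

# With an authorized client this would be executed roughly as follows
# (commented out because it needs application credentials and network access):
# from googleapiclient import discovery
# service = discovery.build("spanner", "v1")
# response = (service.projects().instances().databases().sessions()
#             .batchCreate(database=database, body=body).execute())
# created = response.get("session", [])  # may hold fewer than sessionCount
```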

<div class="method">
    <code class="details" id="beginTransaction">beginTransaction(session, body=None, x__xgafv=None)</code>
  <pre>Begins a new transaction. This step can often be skipped:
Read, ExecuteSql and
Commit can begin a new transaction as a
side-effect.

Args:
  session: string, Required. The session in which the transaction runs. (required)
  body: object, The request body.
    The object takes the form of:

{ # The request for BeginTransaction.
    "options": { # # Transactions # Required. Options for the new transaction.
      #
      #
      # Each session can have at most one active transaction at a time. After the
      # active transaction is completed, the session can immediately be
      # re-used for the next transaction. It is not necessary to create a
      # new session for each transaction.
      #
      # # Transaction Modes
      #
      # Cloud Spanner supports three transaction modes:
      #
      # 1. Locking read-write. This type of transaction is the only way
      #    to write data into Cloud Spanner. These transactions rely on
      #    pessimistic locking and, if necessary, two-phase commit.
      #    Locking read-write transactions may abort, requiring the
      #    application to retry.
      #
      # 2. Snapshot read-only. This transaction type provides guaranteed
      #    consistency across several reads, but does not allow
      #    writes. Snapshot read-only transactions can be configured to
      #    read at timestamps in the past. Snapshot read-only
      #    transactions do not need to be committed.
      #
      # 3. Partitioned DML. This type of transaction is used to execute
      #    a single Partitioned DML statement. Partitioned DML partitions
      #    the key space and runs the DML statement over each partition
      #    in parallel using separate, internal transactions that commit
      #    independently. Partitioned DML transactions do not need to be
      #    committed.
      #
      # For transactions that only read, snapshot read-only transactions
      # provide simpler semantics and are almost always faster. In
      # particular, read-only transactions do not take locks, so they do
      # not conflict with read-write transactions. As a consequence of not
      # taking locks, they also do not abort, so retry loops are not needed.
      #
      # Transactions may only read/write data in a single database. They
      # may, however, read/write data in different tables within that
      # database.
      #
      # ## Locking Read-Write Transactions
      #
      # Locking transactions may be used to atomically read-modify-write
      # data anywhere in a database. This type of transaction is externally
      # consistent.
      #
      # Clients should attempt to minimize the amount of time a transaction
      # is active. Faster transactions commit with higher probability
      # and cause less contention. Cloud Spanner attempts to keep read locks
      # active as long as the transaction continues to do reads, and the
      # transaction has not been terminated by
      # Commit or
      # Rollback. Long periods of
      # inactivity at the client may cause Cloud Spanner to release a
      # transaction's locks and abort it.
      #
      # Conceptually, a read-write transaction consists of zero or more
      # reads or SQL statements followed by
      # Commit. At any time before
      # Commit, the client can send a
      # Rollback request to abort the
      # transaction.
      #
      # ### Semantics
      #
      # Cloud Spanner can commit the transaction if all read locks it acquired
      # are still valid at commit time, and it is able to acquire write
      # locks for all writes. Cloud Spanner can abort the transaction for any
      # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
      # that the transaction has not modified any user data in Cloud Spanner.
      #
      # Unless the transaction commits, Cloud Spanner makes no guarantees about
      # how long the transaction's locks were held for. It is an error to
      # use Cloud Spanner locks for any sort of mutual exclusion other than
      # between Cloud Spanner transactions themselves.
      #
      # ### Retrying Aborted Transactions
      #
      # When a transaction aborts, the application can choose to retry the
      # whole transaction again. To maximize the chances of successfully
      # committing the retry, the client should execute the retry in the
      # same session as the original attempt. The original session's lock
      # priority increases with each consecutive abort, meaning that each
      # attempt has a slightly better chance of success than the previous.
      #
      # Under some circumstances (e.g., many transactions attempting to
      # modify the same row(s)), a transaction can abort many times in a
      # short period before successfully committing. Thus, it is not a good
      # idea to cap the number of retries a transaction can attempt;
      # instead, it is better to limit the total amount of wall time spent
      # retrying.
      #
      # ### Idle Transactions
      #
      # A transaction is considered idle if it has no outstanding reads or
      # SQL queries and has not started a read or SQL query within the last 10
      # seconds. Idle transactions can be aborted by Cloud Spanner so that they
      # don't hold on to locks indefinitely. In that case, the commit will
      # fail with error `ABORTED`.
      #
      # If this behavior is undesirable, periodically executing a simple
      # SQL query in the transaction (e.g., `SELECT 1`) prevents the
      # transaction from becoming idle.
      #
      # ## Snapshot Read-Only Transactions
      #
      # Snapshot read-only transactions provide a simpler method than
      # locking read-write transactions for doing several consistent
      # reads. However, this type of transaction does not support writes.
      #
      # Snapshot transactions do not take locks. Instead, they work by
      # choosing a Cloud Spanner timestamp, then executing all reads at that
      # timestamp. Since they do not acquire locks, they do not block
      # concurrent read-write transactions.
      #
      # Unlike locking read-write transactions, snapshot read-only
      # transactions never abort. They can fail if the chosen read
      # timestamp is garbage collected; however, the default garbage
      # collection policy is generous enough that most applications do not
      # need to worry about this in practice.
      #
      # Snapshot read-only transactions do not need to call
      # Commit or
      # Rollback (and in fact are not
      # permitted to do so).
      #
      # To execute a snapshot transaction, the client specifies a timestamp
      # bound, which tells Cloud Spanner how to choose a read timestamp.
      #
      # The types of timestamp bound are:
      #
      #   - Strong (the default).
      #   - Bounded staleness.
      #   - Exact staleness.
      #
      # If the Cloud Spanner database to be read is geographically distributed,
      # stale read-only transactions can execute more quickly than strong
      # or read-write transactions, because they are able to execute far
      # from the leader replica.
      #
      # Each type of timestamp bound is discussed in detail below.
      #
      # ### Strong
      #
      # Strong reads are guaranteed to see the effects of all transactions
      # that have committed before the start of the read. Furthermore, all
      # rows yielded by a single read are consistent with each other -- if
      # any part of the read observes a transaction, all parts of the read
      # see the transaction.
      #
      # Strong reads are not repeatable: two consecutive strong read-only
      # transactions might return inconsistent results if there are
      # concurrent writes. If consistency across reads is required, the
      # reads should be executed within a transaction or at an exact read
      # timestamp.
      #
      # See TransactionOptions.ReadOnly.strong.
      #
      # ### Exact Staleness
      #
      # These timestamp bounds execute reads at a user-specified
      # timestamp. Reads at a timestamp are guaranteed to see a consistent
      # prefix of the global transaction history: they observe
      # modifications done by all transactions with a commit timestamp &lt;=
      # the read timestamp, and observe none of the modifications done by
      # transactions with a larger commit timestamp. They will block until
      # all conflicting transactions that may be assigned commit timestamps
      # &lt;= the read timestamp have finished.
      #
      # The timestamp can either be expressed as an absolute Cloud Spanner commit
      # timestamp or a staleness relative to the current time.
      #
      # These modes do not require a "negotiation phase" to pick a
      # timestamp. As a result, they execute slightly faster than the
      # equivalent boundedly stale concurrency modes. On the other hand,
      # boundedly stale reads usually return fresher results.
      #
      # See TransactionOptions.ReadOnly.read_timestamp and
      # TransactionOptions.ReadOnly.exact_staleness.
      #
      # ### Bounded Staleness
      #
      # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
      # subject to a user-provided staleness bound. Cloud Spanner chooses the
      # newest timestamp within the staleness bound that allows execution
      # of the reads at the closest available replica without blocking.
      #
      # All rows yielded are consistent with each other -- if any part of
      # the read observes a transaction, all parts of the read see the
      # transaction. Boundedly stale reads are not repeatable: two stale
      # reads, even if they use the same staleness bound, can execute at
      # different timestamps and thus return inconsistent results.
      #
      # Boundedly stale reads execute in two phases: the first phase
      # negotiates a timestamp among all replicas needed to serve the
      # read. In the second phase, reads are executed at the negotiated
      # timestamp.
      #
      # As a result of the two phase execution, bounded staleness reads are
      # usually a little slower than comparable exact staleness
      # reads. However, they are typically able to return fresher
      # results, and are more likely to execute at the closest replica.
      #
      # Because the timestamp negotiation requires up-front knowledge of
      # which rows will be read, it can only be used with single-use
      # read-only transactions.
      #
      # See TransactionOptions.ReadOnly.max_staleness and
      # TransactionOptions.ReadOnly.min_read_timestamp.
      #
      # ### Old Read Timestamps and Garbage Collection
      #
      # Cloud Spanner continuously garbage collects deleted and overwritten data
      # in the background to reclaim storage space. This process is known
      # as "version GC". By default, version GC reclaims versions after they
      # are one hour old. Because of this, Cloud Spanner cannot perform reads
      # at read timestamps more than one hour in the past. This
      # restriction also applies to in-progress reads and/or SQL queries whose
      # timestamps become too old while executing. Reads and SQL queries with
      # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
      #
      # ## Partitioned DML Transactions
      #
      # Partitioned DML transactions are used to execute DML statements with a
      # different execution strategy that provides different, and often better,
      # scalability properties for large, table-wide operations than DML in a
      # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
      # should prefer using ReadWrite transactions.
      #
      # Partitioned DML partitions the keyspace and runs the DML statement on each
      # partition in separate, internal transactions. These transactions commit
      # automatically when complete, and run independently from one another.
      #
      # To reduce lock contention, this execution strategy only acquires read locks
      # on rows that match the WHERE clause of the statement. Additionally, the
      # smaller per-partition transactions hold locks for less time.
      #
      # That said, Partitioned DML is not a drop-in replacement for standard DML used
      # in ReadWrite transactions.
      #
      # - The DML statement must be fully-partitionable. Specifically, the statement
      #   must be expressible as the union of many statements which each access only
      #   a single row of the table.
      #
      # - The statement is not applied atomically to all rows of the table. Rather,
      #   the statement is applied atomically to partitions of the table, in
      #   independent transactions. Secondary index rows are updated atomically
      #   with the base table rows.
      #
      # - Partitioned DML does not guarantee exactly-once execution semantics
      #   against a partition. The statement will be applied at least once to each
      #   partition. It is strongly recommended that the DML statement be
      #   idempotent to avoid unexpected results. For instance, it is potentially
      #   dangerous to run a statement such as
      #   `UPDATE table SET column = column + 1` as it could be run multiple times
      #   against some rows.
      #
      # - The partitions are committed automatically - there is no support for
      #   Commit or Rollback. If the call returns an error, or if the client issuing
      #   the ExecuteSql call dies, it is possible that some rows had the statement
      #   executed on them successfully. It is also possible that the statement was
      #   never executed against other rows.
      #
      # - Partitioned DML transactions may only contain the execution of a single
      #   DML statement via ExecuteSql or ExecuteStreamingSql.
      #
      # - If any error is encountered during the execution of the partitioned DML
      #   operation (for instance, a UNIQUE INDEX violation, division by zero, or a
      #   value that cannot be stored due to schema constraints), then the
      #   operation is stopped at that point and an error is returned. It is
      #   possible that at this point, some partitions have been committed (or even
      #   committed multiple times), and other partitions have not been run at all.
      #
      # Given the above, Partitioned DML is a good fit for large, database-wide,
      # operations that are idempotent, such as deleting old rows from a very large
      # table.
      "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
          #
          # Authorization to begin a read-write transaction requires
          # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
          # on the `session` resource.
          # transaction type has no options.
      },
      "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
          #
          # Authorization to begin a read-only transaction requires
          # `spanner.databases.beginReadOnlyTransaction` permission
          # on the `session` resource.
        "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
            #
            # This is useful for requesting fresher data than some previous
            # read, or data that is fresh enough to observe the effects of some
            # previously committed transaction whose timestamp is known.
            #
            # Note that this option can only be used in single-use transactions.
            #
            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
            # Example: `"2014-10-02T15:01:23.045123456Z"`.
        "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
            # reads at a specific timestamp are repeatable; the same read at
            # the same timestamp always returns the same data. If the
            # timestamp is in the future, the read will block until the
            # specified timestamp, modulo the read's deadline.
            #
            # Useful for large scale consistent reads such as mapreduces, or
            # for coordinating many reads against a consistent snapshot of the
            # data.
            #
            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
            # Example: `"2014-10-02T15:01:23.045123456Z"`.
        "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
            # seconds. Guarantees that all writes that have committed more
            # than the specified number of seconds ago are visible. Because
            # Cloud Spanner chooses the exact timestamp, this mode works even if
            # the client's local clock is substantially skewed from Cloud Spanner
            # commit timestamps.
            #
            # Useful for reading the freshest data available at a nearby
            # replica, while bounding the possible staleness if the local
            # replica has fallen behind.
            #
            # Note that this option can only be used in single-use
            # transactions.
        "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
            # old. The timestamp is chosen soon after the read is started.
            #
            # Guarantees that all writes that have committed more than the
            # specified number of seconds ago are visible. Because Cloud Spanner
            # chooses the exact timestamp, this mode works even if the client's
            # local clock is substantially skewed from Cloud Spanner commit
            # timestamps.
            #
            # Useful for reading at nearby replicas without the distributed
            # timestamp negotiation overhead of `max_staleness`.
        "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
            # the Transaction message that describes the transaction.
        "strong": True or False, # Read at a timestamp where all previously committed transactions
            # are visible.
      },
      "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
          #
          # Authorization to begin a Partitioned DML transaction requires
          # `spanner.databases.beginPartitionedDmlTransaction` permission
          # on the `session` resource.
      },
    },
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A transaction.
      "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
          # for the transaction. Not returned by default: see
          # TransactionOptions.ReadOnly.return_read_timestamp.
          #
          # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
          # Example: `"2014-10-02T15:01:23.045123456Z"`.
      "id": "A String", # `id` may be used to identify the transaction in subsequent
          # Read,
          # ExecuteSql,
          # Commit, or
          # Rollback calls.
          #
          # Single-use read-only transactions do not have IDs, because
          # single-use transactions do not support multiple requests.
    }</pre>
</div>
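<p>A minimal sketch of a BeginTransaction request body for a snapshot read-only transaction with the strong timestamp bound described above; the session path is a hypothetical example, and the discovery-based call is shown commented out since it needs credentials:</p>

```python
# Hypothetical fully-qualified session name.
session = ("projects/my-project/instances/test-instance/"
           "databases/example-db/sessions/example-session")

body = {
    "options": {
        "readOnly": {
            "strong": True,               # strong bound: see all committed writes
            "returnReadTimestamp": True,  # ask for readTimestamp in the response
        }
    }
}

# Roughly, with an authorized client (commented out; requires credentials):
# from googleapiclient import discovery
# service = discovery.build("spanner", "v1")
# txn = (service.projects().instances().databases().sessions()
#        .beginTransaction(session=session, body=body).execute())
# txn["id"] then identifies the transaction in subsequent Read/ExecuteSql calls.
```

Snapshot read-only transactions like this one never need a Commit or Rollback call; a read-write transaction would instead set `"readWrite": {}` in `options`.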
583
584<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -0700585 <code class="details" id="commit">commit(session, body=None, x__xgafv=None)</code>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400586 <pre>Commits a transaction. The request includes the mutations to be
587applied to rows in the database.
588
589`Commit` might return an `ABORTED` error. This can occur at any time;
590commonly, the cause is conflicts with concurrent
591transactions. However, it can also happen for a variety of other
592reasons. If `Commit` returns `ABORTED`, the caller should re-attempt
593the transaction from the beginning, re-using the same session.
594
595Args:
596 session: string, Required. The session in which the transaction to be committed is running. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -0700597 body: object, The request body.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400598 The object takes the form of:
599
600{ # The request for Commit.
601 "transactionId": "A String", # Commit a previously-started transaction.
602 "mutations": [ # The mutations to be executed when this transaction commits. All
603 # mutations are applied atomically, in the order they appear in
604 # this list.
605 { # A modification to one or more Cloud Spanner rows. Mutations can be
606 # applied to a Cloud Spanner database by sending them in a
607 # Commit call.
608 "insert": { # Arguments to insert, update, insert_or_update, and # Insert new rows in a table. If any of the rows already exist,
609 # the write or transaction fails with error `ALREADY_EXISTS`.
610 # replace operations.
611 "table": "A String", # Required. The table whose rows will be written.
612 "values": [ # The values to be written. `values` can contain more than one
613 # list of values. If it does, then multiple rows are written, one
614 # for each entry in `values`. Each list in `values` must have
615 # exactly as many entries as there are entries in columns
616 # above. Sending multiple lists is equivalent to sending multiple
617 # `Mutation`s, each containing one `values` entry and repeating
618 # table and columns. Individual values in each list are
619 # encoded as described here.
620 [
621 "",
622 ],
623 ],
624 "columns": [ # The names of the columns in table to be written.
625 #
626 # The list of columns must contain enough columns to allow
627 # Cloud Spanner to derive values for all primary key columns in the
628 # row(s) to be modified.
629 "A String",
630 ],
631 },
        "replace": { # Arguments to insert, update, insert_or_update, and replace
            # operations. Like insert, except that if the row already exists, it is
            # deleted, and the column values provided are inserted
            # instead. Unlike insert_or_update, this means any values not
            # explicitly written become `NULL`.
            #
            # In an interleaved table, if you create the child table with the
            # `ON DELETE CASCADE` annotation, then replacing a parent row
            # also deletes the child rows. Otherwise, you must delete the
            # child rows before you replace the parent row.
          "table": "A String", # Required. The table whose rows will be written.
          "values": [ # The values to be written. `values` can contain more than one
              # list of values. If it does, then multiple rows are written, one
              # for each entry in `values`. Each list in `values` must have
              # exactly as many entries as there are entries in columns
              # above. Sending multiple lists is equivalent to sending multiple
              # `Mutation`s, each containing one `values` entry and repeating
              # table and columns. Individual values in each list are
              # encoded as described here.
            [
              "",
            ],
          ],
          "columns": [ # The names of the columns in table to be written.
              #
              # The list of columns must contain enough columns to allow
              # Cloud Spanner to derive values for all primary key columns in the
              # row(s) to be modified.
            "A String",
          ],
        },
        "insertOrUpdate": { # Arguments to insert, update, insert_or_update, and replace
            # operations. Like insert, except that if the row already exists, then
            # its column values are overwritten with the ones provided. Any
            # column values not explicitly written are preserved.
            #
            # When using insert_or_update, just as when using insert, all `NOT
            # NULL` columns in the table must be given a value. This holds true
            # even when the row already exists and will therefore actually be updated.
          "table": "A String", # Required. The table whose rows will be written.
          "values": [ # The values to be written. `values` can contain more than one
              # list of values. If it does, then multiple rows are written, one
              # for each entry in `values`. Each list in `values` must have
              # exactly as many entries as there are entries in columns
              # above. Sending multiple lists is equivalent to sending multiple
              # `Mutation`s, each containing one `values` entry and repeating
              # table and columns. Individual values in each list are
              # encoded as described here.
            [
              "",
            ],
          ],
          "columns": [ # The names of the columns in table to be written.
              #
              # The list of columns must contain enough columns to allow
              # Cloud Spanner to derive values for all primary key columns in the
              # row(s) to be modified.
            "A String",
          ],
        },
        "update": { # Arguments to insert, update, insert_or_update, and replace
            # operations. Update existing rows in a table. If any of the rows does not
            # already exist, the transaction fails with error `NOT_FOUND`.
          "table": "A String", # Required. The table whose rows will be written.
          "values": [ # The values to be written. `values` can contain more than one
              # list of values. If it does, then multiple rows are written, one
              # for each entry in `values`. Each list in `values` must have
              # exactly as many entries as there are entries in columns
              # above. Sending multiple lists is equivalent to sending multiple
              # `Mutation`s, each containing one `values` entry and repeating
              # table and columns. Individual values in each list are
              # encoded as described here.
            [
              "",
            ],
          ],
          "columns": [ # The names of the columns in table to be written.
              #
              # The list of columns must contain enough columns to allow
              # Cloud Spanner to derive values for all primary key columns in the
              # row(s) to be modified.
            "A String",
          ],
        },
        "delete": { # Arguments to delete operations. Delete rows from a table. Succeeds
            # whether or not the named rows were present.
          "table": "A String", # Required. The table whose rows will be deleted.
          "keySet": { # Required. The primary keys of the rows within table to delete. The
              # primary keys must be specified in the order in which they appear in the
              # `PRIMARY KEY()` clause of the table's equivalent DDL statement (the DDL
              # statement used to create the table).
              # Delete is idempotent. The transaction will succeed even if some or all
              # rows do not exist.
              #
              # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All
              # the keys are expected to be in the same table or index. The keys need
              # not be sorted in any particular way.
              #
              # If the same key is specified multiple times in the set (for example
              # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
              # behaves as if the key were only specified once.
            "ranges": [ # A list of key ranges. See KeyRange for more information about
                # key range specifications.
              { # KeyRange represents a range of rows in a table or index.
                  #
                  # A range has a start key and an end key. These keys can be open or
                  # closed, indicating if the range includes rows with that key.
                  #
                  # Keys are represented by lists, where the ith value in the list
                  # corresponds to the ith component of the table or index primary key.
                  # Individual values are encoded as described
                  # here.
                  #
                  # For example, consider the following table definition:
                  #
                  #     CREATE TABLE UserEvents (
                  #       UserName STRING(MAX),
                  #       EventDate STRING(10)
                  #     ) PRIMARY KEY(UserName, EventDate);
                  #
                  # The following keys name rows in this table:
                  #
                  #     "Bob", "2014-09-23"
                  #
                  # Since the `UserEvents` table's `PRIMARY KEY` clause names two
                  # columns, each `UserEvents` key has two elements; the first is the
                  # `UserName`, and the second is the `EventDate`.
                  #
                  # Key ranges with multiple components are interpreted
                  # lexicographically by component using the table or index key's declared
                  # sort order. For example, the following range returns all events for
                  # user `"Bob"` that occurred in the year 2015:
                  #
                  #     "start_closed": ["Bob", "2015-01-01"]
                  #     "end_closed": ["Bob", "2015-12-31"]
                  #
                  # Start and end keys can omit trailing key components. This affects the
                  # inclusion and exclusion of rows that exactly match the provided key
                  # components: if the key is closed, then rows that exactly match the
                  # provided components are included; if the key is open, then rows
                  # that exactly match are not included.
                  #
                  # For example, the following range includes all events for `"Bob"` that
                  # occurred during and after the year 2000:
                  #
                  #     "start_closed": ["Bob", "2000-01-01"]
                  #     "end_closed": ["Bob"]
                  #
                  # The next example retrieves all events for `"Bob"`:
                  #
                  #     "start_closed": ["Bob"]
                  #     "end_closed": ["Bob"]
                  #
                  # To retrieve events before the year 2000:
                  #
                  #     "start_closed": ["Bob"]
                  #     "end_open": ["Bob", "2000-01-01"]
                  #
                  # The following range includes all rows in the table:
                  #
                  #     "start_closed": []
                  #     "end_closed": []
                  #
                  # This range returns all users whose `UserName` begins with any
                  # character from A to C:
                  #
                  #     "start_closed": ["A"]
                  #     "end_open": ["D"]
                  #
                  # This range returns all users whose `UserName` begins with B:
                  #
                  #     "start_closed": ["B"]
                  #     "end_open": ["C"]
                  #
                  # Key ranges honor column sort order. For example, suppose a table is
                  # defined as follows:
                  #
                  #     CREATE TABLE DescendingSortedTable (
                  #       Key INT64,
                  #       ...
                  #     ) PRIMARY KEY(Key DESC);
                  #
                  # The following range retrieves all rows with key values between 1
                  # and 100 inclusive:
                  #
                  #     "start_closed": ["100"]
                  #     "end_closed": ["1"]
                  #
                  # Note that 100 is passed as the start, and 1 is passed as the end,
                  # because `Key` is a descending column in the schema.
                "endOpen": [ # If the end is open, then the range excludes rows whose first
                    # `len(end_open)` key columns exactly match `end_open`.
                  "",
                ],
                "startOpen": [ # If the start is open, then the range excludes rows whose first
                    # `len(start_open)` key columns exactly match `start_open`.
                  "",
                ],
                "endClosed": [ # If the end is closed, then the range includes all rows whose
                    # first `len(end_closed)` key columns exactly match `end_closed`.
                  "",
                ],
                "startClosed": [ # If the start is closed, then the range includes all rows whose
                    # first `len(start_closed)` key columns exactly match `start_closed`.
                  "",
                ],
              },
            ],
            "keys": [ # A list of specific keys. Entries in `keys` should have exactly as
                # many elements as there are columns in the primary or index key
                # with which this `KeySet` is used. Individual key values are
                # encoded as described here.
              [
                "",
              ],
            ],
            "all": True or False, # For convenience `all` can be set to `true` to indicate that this
                # `KeySet` matches all keys in the table or index. Note that any keys
                # specified in `keys` or `ranges` are only yielded once.
          },
        },
      },
    ],
    "singleUseTransaction": { # Execute mutations in a temporary transaction. Note that unlike
        # commit of a previously-started transaction, commit with a
        # temporary transaction is non-idempotent. That is, if the
        # `CommitRequest` is sent to Cloud Spanner more than once (for
        # instance, due to retries in the application, or in the
        # transport library), it is possible that the mutations are
        # executed more than once. If this is undesirable, use
        # BeginTransaction and
        # Commit instead.
        #
        # # Transactions
        #
        # Each session can have at most one active transaction at a time. After the
        # active transaction is completed, the session can immediately be
        # re-used for the next transaction. It is not necessary to create a
        # new session for each transaction.
        #
        # # Transaction Modes
        #
        # Cloud Spanner supports three transaction modes:
        #
        # 1. Locking read-write. This type of transaction is the only way
        #    to write data into Cloud Spanner. These transactions rely on
        #    pessimistic locking and, if necessary, two-phase commit.
        #    Locking read-write transactions may abort, requiring the
        #    application to retry.
        #
        # 2. Snapshot read-only. This transaction type provides guaranteed
        #    consistency across several reads, but does not allow
        #    writes. Snapshot read-only transactions can be configured to
        #    read at timestamps in the past. Snapshot read-only
        #    transactions do not need to be committed.
        #
        # 3. Partitioned DML. This type of transaction is used to execute
        #    a single Partitioned DML statement. Partitioned DML partitions
        #    the key space and runs the DML statement over each partition
        #    in parallel using separate, internal transactions that commit
        #    independently. Partitioned DML transactions do not need to be
        #    committed.
        #
        # For transactions that only read, snapshot read-only transactions
        # provide simpler semantics and are almost always faster. In
        # particular, read-only transactions do not take locks, so they do
        # not conflict with read-write transactions. As a consequence of not
        # taking locks, they also do not abort, so retry loops are not needed.
        #
        # Transactions may only read/write data in a single database. They
        # may, however, read/write data in different tables within that
        # database.
        #
        # ## Locking Read-Write Transactions
        #
        # Locking transactions may be used to atomically read-modify-write
        # data anywhere in a database. This type of transaction is externally
        # consistent.
        #
        # Clients should attempt to minimize the amount of time a transaction
        # is active. Faster transactions commit with higher probability
        # and cause less contention. Cloud Spanner attempts to keep read locks
        # active as long as the transaction continues to do reads, and the
        # transaction has not been terminated by
        # Commit or
        # Rollback. Long periods of
        # inactivity at the client may cause Cloud Spanner to release a
        # transaction's locks and abort it.
        #
        # Conceptually, a read-write transaction consists of zero or more
        # reads or SQL statements followed by
        # Commit. At any time before
        # Commit, the client can send a
        # Rollback request to abort the
        # transaction.
        #
        # ### Semantics
        #
        # Cloud Spanner can commit the transaction if all read locks it acquired
        # are still valid at commit time, and it is able to acquire write
        # locks for all writes. Cloud Spanner can abort the transaction for any
        # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
        # that the transaction has not modified any user data in Cloud Spanner.
        #
        # Unless the transaction commits, Cloud Spanner makes no guarantees about
        # how long the transaction's locks were held for. It is an error to
        # use Cloud Spanner locks for any sort of mutual exclusion other than
        # between Cloud Spanner transactions themselves.
        #
        # ### Retrying Aborted Transactions
        #
        # When a transaction aborts, the application can choose to retry the
        # whole transaction again. To maximize the chances of successfully
        # committing the retry, the client should execute the retry in the
        # same session as the original attempt. The original session's lock
        # priority increases with each consecutive abort, meaning that each
        # attempt has a slightly better chance of success than the previous.
        #
        # Under some circumstances (e.g., many transactions attempting to
        # modify the same row(s)), a transaction can abort many times in a
        # short period before successfully committing. Thus, it is not a good
        # idea to cap the number of retries a transaction can attempt;
        # instead, it is better to limit the total amount of wall time spent
        # retrying.
        #
        # ### Idle Transactions
        #
        # A transaction is considered idle if it has no outstanding reads or
        # SQL queries and has not started a read or SQL query within the last 10
        # seconds. Idle transactions can be aborted by Cloud Spanner so that they
        # don't hold on to locks indefinitely. In that case, the commit will
        # fail with error `ABORTED`.
        #
        # If this behavior is undesirable, periodically executing a simple
        # SQL query in the transaction (e.g., `SELECT 1`) prevents the
        # transaction from becoming idle.
        #
        # ## Snapshot Read-Only Transactions
        #
        # Snapshot read-only transactions provide a simpler method than
        # locking read-write transactions for doing several consistent
        # reads. However, this type of transaction does not support writes.
        #
        # Snapshot transactions do not take locks. Instead, they work by
        # choosing a Cloud Spanner timestamp, then executing all reads at that
        # timestamp. Since they do not acquire locks, they do not block
        # concurrent read-write transactions.
        #
        # Unlike locking read-write transactions, snapshot read-only
        # transactions never abort. They can fail if the chosen read
        # timestamp is garbage collected; however, the default garbage
        # collection policy is generous enough that most applications do not
        # need to worry about this in practice.
        #
        # Snapshot read-only transactions do not need to call
        # Commit or
        # Rollback (and in fact are not
        # permitted to do so).
        #
        # To execute a snapshot transaction, the client specifies a timestamp
        # bound, which tells Cloud Spanner how to choose a read timestamp.
        #
        # The types of timestamp bound are:
        #
        #   - Strong (the default).
        #   - Bounded staleness.
        #   - Exact staleness.
        #
        # If the Cloud Spanner database to be read is geographically distributed,
        # stale read-only transactions can execute more quickly than strong
        # or read-write transactions, because they are able to execute far
        # from the leader replica.
        #
        # Each type of timestamp bound is discussed in detail below.
        #
        # ### Strong
        #
        # Strong reads are guaranteed to see the effects of all transactions
        # that have committed before the start of the read. Furthermore, all
        # rows yielded by a single read are consistent with each other -- if
        # any part of the read observes a transaction, all parts of the read
        # see the transaction.
        #
        # Strong reads are not repeatable: two consecutive strong read-only
        # transactions might return inconsistent results if there are
        # concurrent writes. If consistency across reads is required, the
        # reads should be executed within a transaction or at an exact read
        # timestamp.
        #
        # See TransactionOptions.ReadOnly.strong.
        #
        # ### Exact Staleness
        #
        # These timestamp bounds execute reads at a user-specified
        # timestamp. Reads at a timestamp are guaranteed to see a consistent
        # prefix of the global transaction history: they observe
        # modifications done by all transactions with a commit timestamp &lt;=
        # the read timestamp, and observe none of the modifications done by
        # transactions with a larger commit timestamp. They will block until
        # all conflicting transactions that may be assigned commit timestamps
        # &lt;= the read timestamp have finished.
        #
        # The timestamp can either be expressed as an absolute Cloud Spanner commit
        # timestamp or a staleness relative to the current time.
        #
        # These modes do not require a "negotiation phase" to pick a
        # timestamp. As a result, they execute slightly faster than the
        # equivalent boundedly stale concurrency modes. On the other hand,
        # boundedly stale reads usually return fresher results.
        #
        # See TransactionOptions.ReadOnly.read_timestamp and
        # TransactionOptions.ReadOnly.exact_staleness.
        #
        # ### Bounded Staleness
        #
        # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
        # subject to a user-provided staleness bound. Cloud Spanner chooses the
        # newest timestamp within the staleness bound that allows execution
        # of the reads at the closest available replica without blocking.
        #
        # All rows yielded are consistent with each other -- if any part of
        # the read observes a transaction, all parts of the read see the
        # transaction. Boundedly stale reads are not repeatable: two stale
        # reads, even if they use the same staleness bound, can execute at
        # different timestamps and thus return inconsistent results.
        #
        # Boundedly stale reads execute in two phases: the first phase
        # negotiates a timestamp among all replicas needed to serve the
        # read. In the second phase, reads are executed at the negotiated
        # timestamp.
        #
        # As a result of the two-phase execution, bounded staleness reads are
        # usually a little slower than comparable exact staleness
        # reads. However, they are typically able to return fresher
        # results, and are more likely to execute at the closest replica.
        #
        # Because the timestamp negotiation requires up-front knowledge of
        # which rows will be read, it can only be used with single-use
        # read-only transactions.
        #
        # See TransactionOptions.ReadOnly.max_staleness and
        # TransactionOptions.ReadOnly.min_read_timestamp.
        #
        # ### Old Read Timestamps and Garbage Collection
        #
        # Cloud Spanner continuously garbage collects deleted and overwritten data
        # in the background to reclaim storage space. This process is known
        # as "version GC". By default, version GC reclaims versions after they
        # are one hour old. Because of this, Cloud Spanner cannot perform reads
        # at read timestamps more than one hour in the past. This
        # restriction also applies to in-progress reads and/or SQL queries whose
        # timestamps become too old while executing. Reads and SQL queries with
        # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
        #
        # ## Partitioned DML Transactions
        #
        # Partitioned DML transactions are used to execute DML statements with a
        # different execution strategy that provides different, and often better,
        # scalability properties for large, table-wide operations than DML in a
        # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
        # should prefer using ReadWrite transactions.
        #
        # Partitioned DML partitions the keyspace and runs the DML statement on each
        # partition in separate, internal transactions. These transactions commit
        # automatically when complete, and run independently from one another.
        #
        # To reduce lock contention, this execution strategy only acquires read locks
        # on rows that match the WHERE clause of the statement. Additionally, the
        # smaller per-partition transactions hold locks for less time.
        #
        # That said, Partitioned DML is not a drop-in replacement for standard DML used
        # in ReadWrite transactions.
        #
        #  - The DML statement must be fully-partitionable. Specifically, the statement
        #    must be expressible as the union of many statements which each access only
        #    a single row of the table.
        #
        #  - The statement is not applied atomically to all rows of the table. Rather,
        #    the statement is applied atomically to partitions of the table, in
        #    independent transactions. Secondary index rows are updated atomically
        #    with the base table rows.
        #
        #  - Partitioned DML does not guarantee exactly-once execution semantics
        #    against a partition. The statement will be applied at least once to each
        #    partition. It is strongly recommended that the DML statement should be
        #    idempotent to avoid unexpected results. For instance, it is potentially
        #    dangerous to run a statement such as
        #    `UPDATE table SET column = column + 1` as it could be run multiple times
        #    against some rows.
        #
        #  - The partitions are committed automatically - there is no support for
        #    Commit or Rollback. If the call returns an error, or if the client issuing
        #    the ExecuteSql call dies, it is possible that some rows had the statement
        #    executed on them successfully. It is also possible that the statement was
        #    never executed against other rows.
        #
        #  - Partitioned DML transactions may only contain the execution of a single
        #    DML statement via ExecuteSql or ExecuteStreamingSql.
        #
        #  - If any error is encountered during the execution of the partitioned DML
        #    operation (for instance, a UNIQUE INDEX violation, division by zero, or a
        #    value that cannot be stored due to schema constraints), then the
        #    operation is stopped at that point and an error is returned. It is
        #    possible that at this point, some partitions have been committed (or even
        #    committed multiple times), and other partitions have not been run at all.
        #
        # Given the above, Partitioned DML is a good fit for large, database-wide
        # operations that are idempotent, such as deleting old rows from a very large
        # table.
      "readWrite": { # Transaction may write.
          #
          # Authorization to begin a read-write transaction requires
          # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
          # on the `session` resource.
          #
          # Message type to initiate a read-write transaction. Currently this
          # transaction type has no options.
      },
      "readOnly": { # Transaction will not write.
          #
          # Authorization to begin a read-only transaction requires
          # `spanner.databases.beginReadOnlyTransaction` permission
          # on the `session` resource.
          #
          # Message type to initiate a read-only transaction.
        "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
            #
            # This is useful for requesting fresher data than some previous
            # read, or data that is fresh enough to observe the effects of some
            # previously committed transaction whose timestamp is known.
            #
            # Note that this option can only be used in single-use transactions.
            #
            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
            # Example: `"2014-10-02T15:01:23.045123456Z"`.
        "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
            # reads at a specific timestamp are repeatable; the same read at
            # the same timestamp always returns the same data. If the
            # timestamp is in the future, the read will block until the
            # specified timestamp, modulo the read's deadline.
            #
            # Useful for large scale consistent reads such as mapreduces, or
            # for coordinating many reads against a consistent snapshot of the
            # data.
            #
            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
            # Example: `"2014-10-02T15:01:23.045123456Z"`.
        "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
            # seconds. Guarantees that all writes that have committed more
            # than the specified number of seconds ago are visible. Because
            # Cloud Spanner chooses the exact timestamp, this mode works even if
            # the client's local clock is substantially skewed from Cloud Spanner
            # commit timestamps.
            #
            # Useful for reading the freshest data available at a nearby
            # replica, while bounding the possible staleness if the local
            # replica has fallen behind.
            #
            # Note that this option can only be used in single-use
            # transactions.
        "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
            # old. The timestamp is chosen soon after the read is started.
            #
            # Guarantees that all writes that have committed more than the
            # specified number of seconds ago are visible. Because Cloud Spanner
            # chooses the exact timestamp, this mode works even if the client's
            # local clock is substantially skewed from Cloud Spanner commit
            # timestamps.
            #
            # Useful for reading at nearby replicas without the distributed
            # timestamp negotiation overhead of `max_staleness`.
        "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
            # the Transaction message that describes the transaction.
        "strong": True or False, # Read at a timestamp where all previously committed transactions
            # are visible.
      },
      "partitionedDml": { # Partitioned DML transaction.
          #
          # Authorization to begin a Partitioned DML transaction requires
          # `spanner.databases.beginPartitionedDmlTransaction` permission
          # on the `session` resource.
          #
          # Message type to initiate a Partitioned DML transaction.
      },
    },
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # The response for Commit.
    "commitTimestamp": "A String", # The Cloud Spanner timestamp at which the transaction committed.
  }</pre>
</div>
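The `CommitRequest` described above is a plain JSON-style dictionary. The sketch below assembles one combining an `insert` mutation with a `delete` over a closed key range, as in the `KeyRange` examples. The table names (`Singers`, `UserEvents`), columns, and key values are hypothetical illustrations, not part of the API; only the dictionary shape follows the reference.

```python
# Sketch: build a CommitRequest body for sessions.commit().
# Table names, columns, and key values here are made-up examples.

def make_commit_body(transaction_id):
    return {
        "transactionId": transaction_id,  # commit a previously-started transaction
        "mutations": [
            {
                # Insert two rows at once: one entry in `values` per row,
                # each with exactly as many elements as `columns`.
                "insert": {
                    "table": "Singers",
                    "columns": ["SingerId", "FirstName"],
                    "values": [["1", "Marc"], ["2", "Catalina"]],
                }
            },
            {
                # Delete all UserEvents for "Bob" during 2015, using a
                # closed key range over (UserName, EventDate).
                "delete": {
                    "table": "UserEvents",
                    "keySet": {
                        "ranges": [
                            {
                                "startClosed": ["Bob", "2015-01-01"],
                                "endClosed": ["Bob", "2015-12-31"],
                            }
                        ]
                    },
                }
            },
        ],
    }

body = make_commit_body("example-txn-id")
```

Such a dictionary would be passed as the `body=` argument of the `sessions().commit(...)` call; the transaction ID shown is a placeholder for one returned by `beginTransaction`.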
<div class="method">
    <code class="details" id="create">create(database, body=None, x__xgafv=None)</code>
  <pre>Creates a new session. A session can be used to perform
transactions that read and/or modify data in a Cloud Spanner database.
Sessions are meant to be reused for many consecutive
transactions.

Sessions can only execute one transaction at a time. To execute
multiple concurrent read-write/write-only transactions, create
multiple sessions. Note that standalone reads and queries use a
transaction internally, and count toward the one transaction
limit.

Active sessions use additional server resources, so it is a good idea to
delete idle and unneeded sessions.
Aside from explicit deletes, Cloud Spanner may delete sessions for which no
operations are sent for more than an hour. If a session is deleted,
requests to it return `NOT_FOUND`.

Idle sessions can be kept alive by sending a trivial SQL query
periodically, e.g., `"SELECT 1"`.

Args:
  database: string, Required. The database in which the new session is created. (required)
  body: object, The request body.
1249
1250{ # The request for CreateSession.
1251 "session": { # A session in the Cloud Spanner API. # The session to create.
1252 "labels": { # The labels for the session.
1253 #
1254 # * Label keys must be between 1 and 63 characters long and must conform to
1255 # the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
1256 # * Label values must be between 0 and 63 characters long and must conform
1257 # to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
1258 # * No more than 64 labels can be associated with a given session.
1259 #
1260 # See https://goo.gl/xmQnxf for more information on and examples of labels.
1261 "a_key": "A String",
1262 },
1263 "name": "A String", # The name of the session. This is always system-assigned; values provided
1264 # when creating a session are ignored.
1265 "approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session is last used. It is
1266 # typically earlier than the actual last use time.
1267 "createTime": "A String", # Output only. The timestamp when the session is created.
1268 },
1269 }
1270
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A session in the Cloud Spanner API.
    "labels": { # The labels for the session.
        #
        # * Label keys must be between 1 and 63 characters long and must conform to
        #   the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
        # * Label values must be between 0 and 63 characters long and must conform
        #   to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
        # * No more than 64 labels can be associated with a given session.
        #
        # See https://goo.gl/xmQnxf for more information on and examples of labels.
      "a_key": "A String",
    },
    "name": "A String", # The name of the session. This is always system-assigned; values provided
        # when creating a session are ignored.
    "approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session is last used. It is
        # typically earlier than the actual last use time.
    "createTime": "A String", # Output only. The timestamp when the session is created.
  }</pre>
1297</div>
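The createSession documentation above constrains session labels (key and value regexes, a 64-label cap) and marks `name`, `createTime`, and `approximateLastUseTime` as output-only, so only labels are meaningful in the request. A minimal client-side sketch of building a valid request body; the helper names are hypothetical, but the regexes are taken directly from the rules above:

```python
import re

# Label rules quoted from the CreateSession docs: keys are 1-63 chars matching
# [a-z]([-a-z0-9]*[a-z0-9])?, values are 0-63 chars matching the optional form,
# and at most 64 labels may be attached to a session.
KEY_RE = re.compile(r'^[a-z]([-a-z0-9]*[a-z0-9])?$')
VALUE_RE = re.compile(r'^([a-z]([-a-z0-9]*[a-z0-9])?)?$')

def validate_session_labels(labels):
    """Return True if `labels` satisfies the documented constraints."""
    if len(labels) > 64:
        return False
    return all(
        1 <= len(k) <= 63 and KEY_RE.match(k)
        and len(v) <= 63 and VALUE_RE.match(v)
        for k, v in labels.items()
    )

def make_create_session_body(labels=None):
    """Build a CreateSession request body. Output-only fields (`name`,
    `createTime`, `approximateLastUseTime`) are omitted since the server
    ignores any values provided for them."""
    labels = labels or {}
    if not validate_session_labels(labels):
        raise ValueError('invalid session labels')
    return {'session': {'labels': labels}}
```

Validating locally before the call avoids a round trip that the server would reject anyway.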

<div class="method">
    <code class="details" id="delete">delete(name, x__xgafv=None)</code>
  <pre>Ends a session, releasing server resources associated with it. This will
asynchronously trigger cancellation of any operations that are running with
this session.

Args:
  name: string, Required. The name of the session to delete. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated
      # empty messages in your APIs. A typical example is to use it as the request
      # or the response type of an API method. For instance:
      #
      #     service Foo {
      #       rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
      #     }
      #
      # The JSON representation for `Empty` is an empty JSON object `{}`.
    }</pre>
</div>
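The session docs above state that Cloud Spanner may delete a session after roughly an hour without operations, after which requests to it return `NOT_FOUND`, and that a trivial query such as `SELECT 1` keeps it alive. A small sketch of the client-side bookkeeping this implies; the class and the five-minute safety margin are assumptions, only the one-hour limit comes from the docs:

```python
import time

IDLE_LIMIT_S = 3600  # per the docs: idle sessions may be deleted after an hour

class SessionTracker:
    """Track last-use times so a caller knows when to send a keep-alive
    query (e.g. `SELECT 1`) before the server deletes the session and
    requests start failing with NOT_FOUND."""

    def __init__(self, keepalive_margin_s=300, clock=time.monotonic):
        self._clock = clock
        self._margin = keepalive_margin_s
        self._last_use = {}

    def touch(self, name):
        # Record a successful operation on the named session.
        self._last_use[name] = self._clock()

    def needs_keepalive(self, name):
        # True once the session has been idle long enough that a trivial
        # query should be sent, leaving `keepalive_margin_s` of headroom
        # before the documented one-hour deletion window.
        idle = self._clock() - self._last_use[name]
        return idle >= IDLE_LIMIT_S - self._margin
```

An injected clock keeps the logic testable without waiting an hour; in production the default `time.monotonic` is used and `touch` is called after every request on the session.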

<div class="method">
    <code class="details" id="executeBatchDml">executeBatchDml(session, body=None, x__xgafv=None)</code>
  <pre>Executes a batch of SQL DML statements. This method allows many statements
to be run with lower latency than submitting them sequentially with
ExecuteSql.

Statements are executed in sequential order. A request can succeed even if
a statement fails. The ExecuteBatchDmlResponse.status field in the
response provides information about the statement that failed. Clients must
inspect this field to determine whether an error occurred.

Execution stops after the first failed statement; the remaining statements
are not executed.

Args:
  session: string, Required. The session in which the DML statements should be performed. (required)
  body: object, The request body.
    The object takes the form of:

{ # The request for ExecuteBatchDml.
    "seqno": "A String", # Required. A per-transaction sequence number used to identify this request. This field
        # makes each request idempotent such that if the request is received multiple
        # times, at most one will succeed.
        #
        # The sequence number must be monotonically increasing within the
        # transaction. If a request arrives for the first time with an out-of-order
        # sequence number, the transaction may be aborted. Replays of previously
        # handled requests will yield the same response as the first execution.
    "transaction": { # This message is used to select the transaction in which a # Required. The transaction to use. Must be a read-write transaction.
        #
        # To protect against replays, single-use transactions are not supported. The
        # caller must either supply an existing transaction ID or begin a new
        # transaction.
        # Read or
        # ExecuteSql call runs.
        #
        # See TransactionOptions for more information about transactions.
      "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
          # it. The transaction ID of the new transaction is returned in
          # ResultSetMetadata.transaction, which is a Transaction.
          #
          #
          # Each session can have at most one active transaction at a time. After the
          # active transaction is completed, the session can immediately be
          # re-used for the next transaction. It is not necessary to create a
          # new session for each transaction.
          #
          # # Transaction Modes
          #
          # Cloud Spanner supports three transaction modes:
          #
          # 1. Locking read-write. This type of transaction is the only way
          #    to write data into Cloud Spanner. These transactions rely on
          #    pessimistic locking and, if necessary, two-phase commit.
          #    Locking read-write transactions may abort, requiring the
          #    application to retry.
          #
          # 2. Snapshot read-only. This transaction type provides guaranteed
          #    consistency across several reads, but does not allow
          #    writes. Snapshot read-only transactions can be configured to
          #    read at timestamps in the past. Snapshot read-only
          #    transactions do not need to be committed.
          #
          # 3. Partitioned DML. This type of transaction is used to execute
          #    a single Partitioned DML statement. Partitioned DML partitions
          #    the key space and runs the DML statement over each partition
          #    in parallel using separate, internal transactions that commit
          #    independently. Partitioned DML transactions do not need to be
          #    committed.
          #
          # For transactions that only read, snapshot read-only transactions
          # provide simpler semantics and are almost always faster. In
          # particular, read-only transactions do not take locks, so they do
          # not conflict with read-write transactions. As a consequence of not
          # taking locks, they also do not abort, so retry loops are not needed.
          #
          # Transactions may only read/write data in a single database. They
          # may, however, read/write data in different tables within that
          # database.
          #
          # ## Locking Read-Write Transactions
          #
          # Locking transactions may be used to atomically read-modify-write
          # data anywhere in a database. This type of transaction is externally
          # consistent.
          #
          # Clients should attempt to minimize the amount of time a transaction
          # is active. Faster transactions commit with higher probability
          # and cause less contention. Cloud Spanner attempts to keep read locks
          # active as long as the transaction continues to do reads, and the
          # transaction has not been terminated by
          # Commit or
          # Rollback. Long periods of
          # inactivity at the client may cause Cloud Spanner to release a
          # transaction's locks and abort it.
          #
          # Conceptually, a read-write transaction consists of zero or more
          # reads or SQL statements followed by
          # Commit. At any time before
          # Commit, the client can send a
          # Rollback request to abort the
          # transaction.
          #
          # ### Semantics
          #
          # Cloud Spanner can commit the transaction if all read locks it acquired
          # are still valid at commit time, and it is able to acquire write
          # locks for all writes. Cloud Spanner can abort the transaction for any
          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
          # that the transaction has not modified any user data in Cloud Spanner.
          #
          # Unless the transaction commits, Cloud Spanner makes no guarantees about
          # how long the transaction's locks were held for. It is an error to
          # use Cloud Spanner locks for any sort of mutual exclusion other than
          # between Cloud Spanner transactions themselves.
          #
          # ### Retrying Aborted Transactions
          #
          # When a transaction aborts, the application can choose to retry the
          # whole transaction again. To maximize the chances of successfully
          # committing the retry, the client should execute the retry in the
          # same session as the original attempt. The original session's lock
          # priority increases with each consecutive abort, meaning that each
          # attempt has a slightly better chance of success than the previous.
          #
          # Under some circumstances (e.g., many transactions attempting to
          # modify the same row(s)), a transaction can abort many times in a
          # short period before successfully committing. Thus, it is not a good
          # idea to cap the number of retries a transaction can attempt;
          # instead, it is better to limit the total amount of wall time spent
          # retrying.
          #
          # ### Idle Transactions
          #
          # A transaction is considered idle if it has no outstanding reads or
          # SQL queries and has not started a read or SQL query within the last 10
          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
          # don't hold on to locks indefinitely. In that case, the commit will
          # fail with error `ABORTED`.
          #
          # If this behavior is undesirable, periodically executing a simple
          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
          # transaction from becoming idle.
          #
          # ## Snapshot Read-Only Transactions
          #
          # Snapshot read-only transactions provide a simpler method than
          # locking read-write transactions for doing several consistent
          # reads. However, this type of transaction does not support writes.
          #
          # Snapshot transactions do not take locks. Instead, they work by
          # choosing a Cloud Spanner timestamp, then executing all reads at that
          # timestamp. Since they do not acquire locks, they do not block
          # concurrent read-write transactions.
          #
          # Unlike locking read-write transactions, snapshot read-only
          # transactions never abort. They can fail if the chosen read
          # timestamp is garbage collected; however, the default garbage
          # collection policy is generous enough that most applications do not
          # need to worry about this in practice.
          #
          # Snapshot read-only transactions do not need to call
          # Commit or
          # Rollback (and in fact are not
          # permitted to do so).
          #
          # To execute a snapshot transaction, the client specifies a timestamp
          # bound, which tells Cloud Spanner how to choose a read timestamp.
          #
          # The types of timestamp bound are:
          #
          # - Strong (the default).
          # - Bounded staleness.
          # - Exact staleness.
          #
          # If the Cloud Spanner database to be read is geographically distributed,
          # stale read-only transactions can execute more quickly than strong
          # or read-write transactions, because they are able to execute far
          # from the leader replica.
          #
          # Each type of timestamp bound is discussed in detail below.
          #
          # ### Strong
          #
          # Strong reads are guaranteed to see the effects of all transactions
          # that have committed before the start of the read. Furthermore, all
          # rows yielded by a single read are consistent with each other -- if
          # any part of the read observes a transaction, all parts of the read
          # see the transaction.
          #
          # Strong reads are not repeatable: two consecutive strong read-only
          # transactions might return inconsistent results if there are
          # concurrent writes. If consistency across reads is required, the
          # reads should be executed within a transaction or at an exact read
          # timestamp.
          #
          # See TransactionOptions.ReadOnly.strong.
          #
          # ### Exact Staleness
          #
          # These timestamp bounds execute reads at a user-specified
          # timestamp. Reads at a timestamp are guaranteed to see a consistent
          # prefix of the global transaction history: they observe
          # modifications done by all transactions with a commit timestamp &lt;=
          # the read timestamp, and observe none of the modifications done by
          # transactions with a larger commit timestamp. They will block until
          # all conflicting transactions that may be assigned commit timestamps
          # &lt;= the read timestamp have finished.
          #
          # The timestamp can either be expressed as an absolute Cloud Spanner commit
          # timestamp or a staleness relative to the current time.
          #
          # These modes do not require a "negotiation phase" to pick a
          # timestamp. As a result, they execute slightly faster than the
          # equivalent boundedly stale concurrency modes. On the other hand,
          # boundedly stale reads usually return fresher results.
          #
          # See TransactionOptions.ReadOnly.read_timestamp and
          # TransactionOptions.ReadOnly.exact_staleness.
          #
          # ### Bounded Staleness
          #
          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
          # subject to a user-provided staleness bound. Cloud Spanner chooses the
          # newest timestamp within the staleness bound that allows execution
          # of the reads at the closest available replica without blocking.
          #
          # All rows yielded are consistent with each other -- if any part of
          # the read observes a transaction, all parts of the read see the
          # transaction. Boundedly stale reads are not repeatable: two stale
          # reads, even if they use the same staleness bound, can execute at
          # different timestamps and thus return inconsistent results.
          #
          # Boundedly stale reads execute in two phases: the first phase
          # negotiates a timestamp among all replicas needed to serve the
          # read. In the second phase, reads are executed at the negotiated
          # timestamp.
          #
          # As a result of the two-phase execution, bounded staleness reads are
          # usually a little slower than comparable exact staleness
          # reads. However, they are typically able to return fresher
          # results, and are more likely to execute at the closest replica.
          #
          # Because the timestamp negotiation requires up-front knowledge of
          # which rows will be read, it can only be used with single-use
          # read-only transactions.
          #
          # See TransactionOptions.ReadOnly.max_staleness and
          # TransactionOptions.ReadOnly.min_read_timestamp.
          #
          # ### Old Read Timestamps and Garbage Collection
          #
          # Cloud Spanner continuously garbage collects deleted and overwritten data
          # in the background to reclaim storage space. This process is known
          # as "version GC". By default, version GC reclaims versions after they
          # are one hour old. Because of this, Cloud Spanner cannot perform reads
          # at read timestamps more than one hour in the past. This
          # restriction also applies to in-progress reads and/or SQL queries whose
          # timestamps become too old while executing. Reads and SQL queries with
          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
          #
          # ## Partitioned DML Transactions
          #
          # Partitioned DML transactions are used to execute DML statements with a
          # different execution strategy that provides different, and often better,
          # scalability properties for large, table-wide operations than DML in a
          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
          # should prefer using ReadWrite transactions.
          #
          # Partitioned DML partitions the keyspace and runs the DML statement on each
          # partition in separate, internal transactions. These transactions commit
          # automatically when complete, and run independently from one another.
          #
          # To reduce lock contention, this execution strategy only acquires read locks
          # on rows that match the WHERE clause of the statement. Additionally, the
          # smaller per-partition transactions hold locks for less time.
          #
          # That said, Partitioned DML is not a drop-in replacement for standard DML used
          # in ReadWrite transactions.
          #
          # - The DML statement must be fully-partitionable. Specifically, the statement
          #   must be expressible as the union of many statements which each access only
          #   a single row of the table.
          #
          # - The statement is not applied atomically to all rows of the table. Rather,
          #   the statement is applied atomically to partitions of the table, in
          #   independent transactions. Secondary index rows are updated atomically
          #   with the base table rows.
          #
          # - Partitioned DML does not guarantee exactly-once execution semantics
          #   against a partition. The statement will be applied at least once to each
          #   partition. It is strongly recommended that the DML statement should be
          #   idempotent to avoid unexpected results. For instance, it is potentially
          #   dangerous to run a statement such as
          #   `UPDATE table SET column = column + 1` as it could be run multiple times
          #   against some rows.
          #
          # - The partitions are committed automatically - there is no support for
          #   Commit or Rollback. If the call returns an error, or if the client issuing
          #   the ExecuteSql call dies, it is possible that some rows had the statement
          #   executed on them successfully. It is also possible that the statement was
          #   never executed against other rows.
          #
          # - Partitioned DML transactions may only contain the execution of a single
          #   DML statement via ExecuteSql or ExecuteStreamingSql.
          #
          # - If any error is encountered during the execution of the partitioned DML
          #   operation (for instance, a UNIQUE INDEX violation, division by zero, or a
          #   value that cannot be stored due to schema constraints), then the
          #   operation is stopped at that point and an error is returned. It is
          #   possible that at this point, some partitions have been committed (or even
          #   committed multiple times), and other partitions have not been run at all.
          #
          # Given the above, Partitioned DML is a good fit for large, database-wide
          # operations that are idempotent, such as deleting old rows from a very large
          # table.
        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
            #
            # Authorization to begin a read-write transaction requires
            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
            # on the `session` resource.
            # transaction type has no options.
        },
        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
            #
            # Authorization to begin a read-only transaction requires
            # `spanner.databases.beginReadOnlyTransaction` permission
            # on the `session` resource.
          "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
              #
              # This is useful for requesting fresher data than some previous
              # read, or data that is fresh enough to observe the effects of some
              # previously committed transaction whose timestamp is known.
              #
              # Note that this option can only be used in single-use transactions.
              #
              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
              # Example: `"2014-10-02T15:01:23.045123456Z"`.
          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
              # reads at a specific timestamp are repeatable; the same read at
              # the same timestamp always returns the same data. If the
              # timestamp is in the future, the read will block until the
              # specified timestamp, modulo the read's deadline.
              #
              # Useful for large scale consistent reads such as mapreduces, or
              # for coordinating many reads against a consistent snapshot of the
              # data.
              #
              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
              # Example: `"2014-10-02T15:01:23.045123456Z"`.
          "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
              # seconds. Guarantees that all writes that have committed more
              # than the specified number of seconds ago are visible. Because
              # Cloud Spanner chooses the exact timestamp, this mode works even if
              # the client's local clock is substantially skewed from Cloud Spanner
              # commit timestamps.
              #
              # Useful for reading the freshest data available at a nearby
              # replica, while bounding the possible staleness if the local
              # replica has fallen behind.
              #
              # Note that this option can only be used in single-use
              # transactions.
          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
              # old. The timestamp is chosen soon after the read is started.
              #
              # Guarantees that all writes that have committed more than the
              # specified number of seconds ago are visible. Because Cloud Spanner
              # chooses the exact timestamp, this mode works even if the client's
              # local clock is substantially skewed from Cloud Spanner commit
              # timestamps.
              #
              # Useful for reading at nearby replicas without the distributed
              # timestamp negotiation overhead of `max_staleness`.
          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
              # the Transaction message that describes the transaction.
          "strong": True or False, # Read at a timestamp where all previously committed transactions
              # are visible.
        },
        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
            #
            # Authorization to begin a Partitioned DML transaction requires
            # `spanner.databases.beginPartitionedDmlTransaction` permission
            # on the `session` resource.
        },
      },
1713 "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
1714 # This is the most efficient way to execute a transaction that
1715 # consists of a single SQL query.
1716 #
1717 #
1718 # Each session can have at most one active transaction at a time. After the
1719 # active transaction is completed, the session can immediately be
1720 # re-used for the next transaction. It is not necessary to create a
1721 # new session for each transaction.
1722 #
1723 # # Transaction Modes
1724 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001725 # Cloud Spanner supports three transaction modes:
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001726 #
1727 # 1. Locking read-write. This type of transaction is the only way
1728 # to write data into Cloud Spanner. These transactions rely on
1729 # pessimistic locking and, if necessary, two-phase commit.
1730 # Locking read-write transactions may abort, requiring the
1731 # application to retry.
1732 #
1733 # 2. Snapshot read-only. This transaction type provides guaranteed
1734 # consistency across several reads, but does not allow
1735 # writes. Snapshot read-only transactions can be configured to
1736 # read at timestamps in the past. Snapshot read-only
1737 # transactions do not need to be committed.
1738 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001739 # 3. Partitioned DML. This type of transaction is used to execute
1740 # a single Partitioned DML statement. Partitioned DML partitions
1741 # the key space and runs the DML statement over each partition
1742 # in parallel using separate, internal transactions that commit
1743 # independently. Partitioned DML transactions do not need to be
1744 # committed.
1745 #
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001746 # For transactions that only read, snapshot read-only transactions
1747 # provide simpler semantics and are almost always faster. In
1748 # particular, read-only transactions do not take locks, so they do
1749 # not conflict with read-write transactions. As a consequence of not
1750 # taking locks, they also do not abort, so retry loops are not needed.
1751 #
1752 # Transactions may only read/write data in a single database. They
1753 # may, however, read/write data in different tables within that
1754 # database.
1755 #
1756 # ## Locking Read-Write Transactions
1757 #
1758 # Locking transactions may be used to atomically read-modify-write
1759 # data anywhere in a database. This type of transaction is externally
1760 # consistent.
1761 #
1762 # Clients should attempt to minimize the amount of time a transaction
1763 # is active. Faster transactions commit with higher probability
1764 # and cause less contention. Cloud Spanner attempts to keep read locks
1765 # active as long as the transaction continues to do reads, and the
1766 # transaction has not been terminated by
1767 # Commit or
1768 # Rollback. Long periods of
1769 # inactivity at the client may cause Cloud Spanner to release a
1770 # transaction's locks and abort it.
1771 #
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001772 # Conceptually, a read-write transaction consists of zero or more
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001773 # reads or SQL statements followed by
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001774 # Commit. At any time before
1775 # Commit, the client can send a
1776 # Rollback request to abort the
1777 # transaction.
1778 #
1779 # ### Semantics
1780 #
1781 # Cloud Spanner can commit the transaction if all read locks it acquired
1782 # are still valid at commit time, and it is able to acquire write
1783 # locks for all writes. Cloud Spanner can abort the transaction for any
1784 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
1785 # that the transaction has not modified any user data in Cloud Spanner.
1786 #
1787 # Unless the transaction commits, Cloud Spanner makes no guarantees about
1788 # how long the transaction's locks were held for. It is an error to
1789 # use Cloud Spanner locks for any sort of mutual exclusion other than
1790 # between Cloud Spanner transactions themselves.
1791 #
1792 # ### Retrying Aborted Transactions
1793 #
1794 # When a transaction aborts, the application can choose to retry the
1795 # whole transaction again. To maximize the chances of successfully
1796 # committing the retry, the client should execute the retry in the
1797 # same session as the original attempt. The original session's lock
1798 # priority increases with each consecutive abort, meaning that each
1799 # attempt has a slightly better chance of success than the previous.
1800 #
1801 # Under some circumstances (e.g., many transactions attempting to
1802 # modify the same row(s)), a transaction can abort many times in a
1803 # short period before successfully committing. Thus, it is not a good
1804 # idea to cap the number of retries a transaction can attempt;
1805 # instead, it is better to limit the total amount of wall time spent
1806 # retrying.
1807 #
1808 # ### Idle Transactions
1809 #
1810 # A transaction is considered idle if it has no outstanding reads or
1811 # SQL queries and has not started a read or SQL query within the last 10
1812 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
1813 # don't hold on to locks indefinitely. In that case, the commit will
1814 # fail with error `ABORTED`.
1815 #
1816 # If this behavior is undesirable, periodically executing a simple
1817 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
1818 # transaction from becoming idle.
1819 #
1820 # ## Snapshot Read-Only Transactions
1821 #
1822 # Snapshot read-only transactions provides a simpler method than
1823 # locking read-write transactions for doing several consistent
1824 # reads. However, this type of transaction does not support writes.
1825 #
1826 # Snapshot transactions do not take locks. Instead, they work by
1827 # choosing a Cloud Spanner timestamp, then executing all reads at that
1828 # timestamp. Since they do not acquire locks, they do not block
1829 # concurrent read-write transactions.
1830 #
1831 # Unlike locking read-write transactions, snapshot read-only
1832 # transactions never abort. They can fail if the chosen read
1833 # timestamp is garbage collected; however, the default garbage
1834 # collection policy is generous enough that most applications do not
1835 # need to worry about this in practice.
1836 #
1837 # Snapshot read-only transactions do not need to call
1838 # Commit or
1839 # Rollback (and in fact are not
1840 # permitted to do so).
1841 #
1842 # To execute a snapshot transaction, the client specifies a timestamp
1843 # bound, which tells Cloud Spanner how to choose a read timestamp.
1844 #
1845 # The types of timestamp bound are:
1846 #
1847 # - Strong (the default).
1848 # - Bounded staleness.
1849 # - Exact staleness.
1850 #
1851 # If the Cloud Spanner database to be read is geographically distributed,
1852 # stale read-only transactions can execute more quickly than strong
1853 # or read-write transaction, because they are able to execute far
1854 # from the leader replica.
1855 #
1856 # Each type of timestamp bound is discussed in detail below.
1857 #
1858 # ### Strong
1859 #
1860 # Strong reads are guaranteed to see the effects of all transactions
1861 # that have committed before the start of the read. Furthermore, all
1862 # rows yielded by a single read are consistent with each other -- if
1863 # any part of the read observes a transaction, all parts of the read
1864 # see the transaction.
1865 #
1866 # Strong reads are not repeatable: two consecutive strong read-only
1867 # transactions might return inconsistent results if there are
1868 # concurrent writes. If consistency across reads is required, the
1869 # reads should be executed within a transaction or at an exact read
1870 # timestamp.
1871 #
1872 # See TransactionOptions.ReadOnly.strong.
1873 #
1874 # ### Exact Staleness
1875 #
1876 # These timestamp bounds execute reads at a user-specified
1877 # timestamp. Reads at a timestamp are guaranteed to see a consistent
1878 # prefix of the global transaction history: they observe
Dan O'Mearadd494642020-05-01 07:42:23 -07001879 # modifications done by all transactions with a commit timestamp &lt;=
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001880 # the read timestamp, and observe none of the modifications done by
1881 # transactions with a larger commit timestamp. They will block until
1882 # all conflicting transactions that may be assigned commit timestamps
1883 # &lt;= the read timestamp have finished.
1884 #
1885 # The timestamp can either be expressed as an absolute Cloud Spanner commit
1886 # timestamp or a staleness relative to the current time.
1887 #
1888 # These modes do not require a "negotiation phase" to pick a
1889 # timestamp. As a result, they execute slightly faster than the
1890 # equivalent boundedly stale concurrency modes. On the other hand,
1891 # boundedly stale reads usually return fresher results.
1892 #
1893 # See TransactionOptions.ReadOnly.read_timestamp and
1894 # TransactionOptions.ReadOnly.exact_staleness.
1895 #
1896 # ### Bounded Staleness
1897 #
1898 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
1899 # subject to a user-provided staleness bound. Cloud Spanner chooses the
1900 # newest timestamp within the staleness bound that allows execution
1901 # of the reads at the closest available replica without blocking.
1902 #
1903 # All rows yielded are consistent with each other -- if any part of
1904 # the read observes a transaction, all parts of the read see the
1905 # transaction. Boundedly stale reads are not repeatable: two stale
1906 # reads, even if they use the same staleness bound, can execute at
1907 # different timestamps and thus return inconsistent results.
1908 #
1909 # Boundedly stale reads execute in two phases: the first phase
1910 # negotiates a timestamp among all replicas needed to serve the
1911 # read. In the second phase, reads are executed at the negotiated
1912 # timestamp.
1913 #
1914 # As a result of the two phase execution, bounded staleness reads are
1915 # usually a little slower than comparable exact staleness
1916 # reads. However, they are typically able to return fresher
1917 # results, and are more likely to execute at the closest replica.
1918 #
1919 # Because the timestamp negotiation requires up-front knowledge of
1920 # which rows will be read, it can only be used with single-use
1921 # read-only transactions.
1922 #
1923 # See TransactionOptions.ReadOnly.max_staleness and
1924 # TransactionOptions.ReadOnly.min_read_timestamp.
1925 #
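The three timestamp bounds described above map directly onto the `readOnly` fields of `TransactionOptions` in the request body. A minimal sketch of building each variant as a plain request dict (field names are from this page; the helper functions themselves are illustrative, not part of any client library):

```python
# Illustrative helpers that build TransactionOptions dicts for each
# timestamp bound discussed above. The functions are hypothetical; only
# the field names (strong, exactStaleness, maxStaleness, ...) are real.

def strong_read_options(return_read_timestamp=False):
    # Strong (the default): sees all transactions committed before the read.
    return {"readOnly": {"strong": True,
                         "returnReadTimestamp": return_read_timestamp}}

def exact_staleness_options(staleness="10s"):
    # Exact staleness: reads at NOW - staleness; no negotiation phase.
    return {"readOnly": {"exactStaleness": staleness}}

def bounded_staleness_options(max_staleness="15s"):
    # Bounded staleness: Cloud Spanner picks the newest timestamp within
    # the bound; only valid for single-use read-only transactions.
    return {"readOnly": {"maxStaleness": max_staleness}}

opts = exact_staleness_options("30s")
print(opts["readOnly"]["exactStaleness"])  # -> 30s
```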
1926 # ### Old Read Timestamps and Garbage Collection
1927 #
1928 # Cloud Spanner continuously garbage collects deleted and overwritten data
1929 # in the background to reclaim storage space. This process is known
1930 # as "version GC". By default, version GC reclaims versions after they
1931 # are one hour old. Because of this, Cloud Spanner cannot perform reads
1932 # at read timestamps more than one hour in the past. This
1933 # restriction also applies to in-progress reads and/or SQL queries whose
1934 # timestamp becomes too old while executing. Reads and SQL queries with
1935 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
1936 #
1937 # ## Partitioned DML Transactions
1938 #
1939 # Partitioned DML transactions are used to execute DML statements with a
1940 # different execution strategy that provides different, and often better,
1941 # scalability properties for large, table-wide operations than DML in a
1942 # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
1943 # should prefer using ReadWrite transactions.
1944 #
1945 # Partitioned DML partitions the keyspace and runs the DML statement on each
1946 # partition in separate, internal transactions. These transactions commit
1947 # automatically when complete, and run independently from one another.
1948 #
1949 # To reduce lock contention, this execution strategy only acquires read locks
1950 # on rows that match the WHERE clause of the statement. Additionally, the
1951 # smaller per-partition transactions hold locks for less time.
1952 #
1953 # That said, Partitioned DML is not a drop-in replacement for standard DML used
1954 # in ReadWrite transactions.
1955 #
1956 # - The DML statement must be fully-partitionable. Specifically, the statement
1957 # must be expressible as the union of many statements which each access only
1958 # a single row of the table.
1959 #
1960 # - The statement is not applied atomically to all rows of the table. Rather,
1961 # the statement is applied atomically to partitions of the table, in
1962 # independent transactions. Secondary index rows are updated atomically
1963 # with the base table rows.
1964 #
1965 # - Partitioned DML does not guarantee exactly-once execution semantics
1966 # against a partition. The statement will be applied at least once to each
1967 # partition. It is strongly recommended that the DML statement should be
1968 # idempotent to avoid unexpected results. For instance, it is potentially
1969 # dangerous to run a statement such as
1970 # `UPDATE table SET column = column + 1` as it could be run multiple times
1971 # against some rows.
1972 #
1973 # - The partitions are committed automatically - there is no support for
1974 # Commit or Rollback. If the call returns an error, or if the client issuing
1975 # the ExecuteSql call dies, it is possible that some rows had the statement
1976 # executed on them successfully. It is also possible that the statement was
1977 # never executed against other rows.
1978 #
1979 # - Partitioned DML transactions may only contain the execution of a single
1980 # DML statement via ExecuteSql or ExecuteStreamingSql.
1981 #
1982 # - If any error is encountered during the execution of the partitioned DML
1983 # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
1984 # value that cannot be stored due to schema constraints), then the
1985 # operation is stopped at that point and an error is returned. It is
1986 # possible that at this point, some partitions have been committed (or even
1987 # committed multiple times), and other partitions have not been run at all.
1988 #
1989 # Given the above, Partitioned DML is a good fit for large, database-wide
1990 # operations that are idempotent, such as deleting old rows from a very large
1991 # table.
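The at-least-once caveat above is easy to see with a toy model: apply an update function to a row value once, then again (as a retried partition would), and compare outcomes. This simulation is purely illustrative — it is not Spanner behavior, just arithmetic showing why a non-idempotent statement drifts:

```python
# Toy model of at-least-once execution against a partition: the same DML
# statement may be applied more than once to a row. Not Spanner code.

def apply_at_least_once(value, update, times=2):
    for _ in range(times):  # a retried partition re-runs the statement
        value = update(value)
    return value

increment = lambda v: v + 1  # like UPDATE t SET c = c + 1  (NOT idempotent)
assign = lambda v: 5         # like UPDATE t SET c = 5      (idempotent)

print(apply_at_least_once(10, increment))  # -> 12, not the intended 11
print(apply_at_least_once(10, assign))     # -> 5 regardless of retries
```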
1992 "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
1993 #
1994 # Authorization to begin a read-write transaction requires
1995 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
1996 # on the `session` resource.
1997 # transaction type has no options.
1998 },
1999 "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
2000 #
2001 # Authorization to begin a read-only transaction requires
2002 # `spanner.databases.beginReadOnlyTransaction` permission
2003 # on the `session` resource.
2004 "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
2005 #
2006 # This is useful for requesting fresher data than some previous
2007 # read, or data that is fresh enough to observe the effects of some
2008 # previously committed transaction whose timestamp is known.
2009 #
2010 # Note that this option can only be used in single-use transactions.
2011 #
2012 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
2013 # Example: `"2014-10-02T15:01:23.045123456Z"`.
2014 "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
2015 # reads at a specific timestamp are repeatable; the same read at
2016 # the same timestamp always returns the same data. If the
2017 # timestamp is in the future, the read will block until the
2018 # specified timestamp, modulo the read's deadline.
2019 #
2020 # Useful for large scale consistent reads such as mapreduces, or
2021 # for coordinating many reads against a consistent snapshot of the
2022 # data.
2023 #
2024 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
2025 # Example: `"2014-10-02T15:01:23.045123456Z"`.
2026 "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
2027 # seconds. Guarantees that all writes that have committed more
2028 # than the specified number of seconds ago are visible. Because
2029 # Cloud Spanner chooses the exact timestamp, this mode works even if
2030 # the client's local clock is substantially skewed from Cloud Spanner
2031 # commit timestamps.
2032 #
2033 # Useful for reading the freshest data available at a nearby
2034 # replica, while bounding the possible staleness if the local
2035 # replica has fallen behind.
2036 #
2037 # Note that this option can only be used in single-use
2038 # transactions.
2039 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
2040 # old. The timestamp is chosen soon after the read is started.
2041 #
2042 # Guarantees that all writes that have committed more than the
2043 # specified number of seconds ago are visible. Because Cloud Spanner
2044 # chooses the exact timestamp, this mode works even if the client's
2045 # local clock is substantially skewed from Cloud Spanner commit
2046 # timestamps.
2047 #
2048 # Useful for reading at nearby replicas without the distributed
2049 # timestamp negotiation overhead of `max_staleness`.
2050 "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
2051 # the Transaction message that describes the transaction.
2052 "strong": True or False, # Read at a timestamp where all previously committed transactions
2053 # are visible.
2054 },
2055 "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
2056 #
2057 # Authorization to begin a Partitioned DML transaction requires
2058 # `spanner.databases.beginPartitionedDmlTransaction` permission
2059 # on the `session` resource.
2060 },
2061 },
2062 "id": "A String", # Execute the read or SQL query in a previously-started transaction.
2063 },
2064 "statements": [ # Required. The list of statements to execute in this batch. Statements are executed
2065 # serially, such that the effects of statement `i` are visible to statement
2066 # `i+1`. Each statement must be a DML statement. Execution stops at the
2067 # first failed statement; the remaining statements are not executed.
2068 #
2069 # Callers must provide at least one statement.
2070 { # A single DML statement.
2071 "paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
2072 # from a JSON value. For example, values of type `BYTES` and values
2073 # of type `STRING` both appear in params as JSON strings.
2074 #
2075 # In these cases, `param_types` can be used to specify the exact
2076 # SQL type for some or all of the SQL statement parameters. See the
2077 # definition of Type for more information
2078 # about SQL types.
2079 "a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
2080 # table cell or returned from an SQL query.
2081 "structType": { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
2082 # provides type information for the struct's fields.
2083 "fields": [ # The list of fields that make up this struct. Order is
2084 # significant, because values of this struct type are represented as
2085 # lists, where the order of field values matches the order of
2086 # fields in the StructType. In turn, the order of fields
2087 # matches the order of columns in a read request, or the order of
2088 # fields in the `SELECT` clause of a query.
2089 { # Message representing a single field of a struct.
2090 "type": # Object with schema name: Type # The type of the field.
2091 "name": "A String", # The name of the field. For reads, this is the column name. For
2092 # SQL queries, it is the column alias (e.g., `"Word"` in the
2093 # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
2094 # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
2095 # columns might have an empty name (e.g., `"SELECT
2096 # UPPER(ColName)"`). Note that a query result can contain
2097 # multiple fields with the same name.
2098 },
2099 ],
2100 },
2101 "code": "A String", # Required. The TypeCode for this type.
2102 "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
2103 # is the type of the array elements.
2104 },
2105 },
2106 "params": { # Parameter names and values that bind to placeholders in the DML string.
2107 #
2108 # A parameter placeholder consists of the `@` character followed by the
2109 # parameter name (for example, `@firstName`). Parameter names can contain
2110 # letters, numbers, and underscores.
2111 #
2112 # Parameters can appear anywhere that a literal value is expected. The
2113 # same parameter name can be used more than once, for example:
2114 #
2115 # `"WHERE id &gt; @msg_id AND id &lt; @msg_id + 100"`
2116 #
2117 # It is an error to execute a SQL statement with unbound parameters.
2118 "a_key": "", # Properties of the object.
2119 },
2120 "sql": "A String", # Required. The DML string.
2121 },
2122 ],
2123 }
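Putting the schema above together, a request body for a two-statement batch might be built as a plain dict. The table names, SQL, and transaction id below are made-up examples; only the field names (`transaction`, `statements`, `sql`, `params`, `paramTypes`) come from this page. Statements run serially, so the second statement sees the effects of the first:

```python
# Sketch of an ExecuteBatchDml request body matching the schema above.
# SQL text, table names, and the transaction id are hypothetical examples.

body = {
    "transaction": {"id": "example-txn-id"},  # an existing read-write txn
    "statements": [
        {
            "sql": "UPDATE Singers SET name = @name WHERE id = @id",
            "params": {"name": "Marc", "id": "1"},
            # INT64 params travel as JSON strings; declare the SQL type:
            "paramTypes": {"id": {"code": "INT64"}},
        },
        # Statement 2 sees the effects of statement 1 (serial execution):
        {"sql": "DELETE FROM Concerts WHERE singer_id = 1"},
    ],
}
print(len(body["statements"]))  # -> 2
```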
2124
2125 x__xgafv: string, V1 error format.
2126 Allowed values
2127 1 - v1 error format
2128 2 - v2 error format
2129
2130Returns:
2131 An object of the form:
2132
2133 { # The response for ExecuteBatchDml. Contains a list
2134 # of ResultSet messages, one for each DML statement that has successfully
2135 # executed, in the same order as the statements in the request. If a statement
2136 # fails, the status in the response body identifies the cause of the failure.
2137 #
2138 # To check for DML statements that failed, use the following approach:
2139 #
2140 # 1. Check the status in the response message. The google.rpc.Code enum
2141 # value `OK` indicates that all statements were executed successfully.
2142 # 2. If the status was not `OK`, check the number of result sets in the
2143 # response. If the response contains `N` ResultSet messages, then
2144 # statement `N+1` in the request failed.
2145 #
2146 # Example 1:
2147 #
2148 # * Request: 5 DML statements, all executed successfully.
2149 # * Response: 5 ResultSet messages, with the status `OK`.
2150 #
2151 # Example 2:
2152 #
2153 # * Request: 5 DML statements. The third statement has a syntax error.
2154 # * Response: 2 ResultSet messages, and a syntax error (`INVALID_ARGUMENT`)
2155 # status. The number of ResultSet messages indicates that the third
2156 # statement failed, and the fourth and fifth statements were not executed.
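The two-step check described above can be written as a small helper over the response dict (a sketch; the response shape is as documented on this page, and `google.rpc.Code` `OK` is 0):

```python
# Sketch: locate the first failed statement in an ExecuteBatchDml response.
# Returns None when every statement succeeded; otherwise returns the
# zero-based index of the failing statement, per the rule above: with N
# ResultSet messages in the response, statement N+1 (index N) failed.

def first_failed_statement(response):
    status = response.get("status", {})
    if status.get("code", 0) == 0:  # google.rpc.Code OK == 0
        return None
    return len(response.get("resultSets", []))

ok = {"status": {"code": 0}, "resultSets": [{}, {}, {}]}
bad = {"status": {"code": 3, "message": "Syntax error"}, "resultSets": [{}, {}]}
print(first_failed_statement(ok))   # -> None
print(first_failed_statement(bad))  # -> 2  (the third statement failed)
```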
2157 "status": { # The `Status` type defines a logical error model that is suitable for # If all DML statements are executed successfully, the status is `OK`.
2158 # Otherwise, the error status of the first failed statement.
2159 # different programming environments, including REST APIs and RPC APIs. It is
2160 # used by [gRPC](https://github.com/grpc). Each `Status` message contains
2161 # three pieces of data: error code, error message, and error details.
2162 #
2163 # You can find out more about this error model and how to work with it in the
2164 # [API Design Guide](https://cloud.google.com/apis/design/errors).
2165 "message": "A String", # A developer-facing error message, which should be in English. Any
2166 # user-facing error message should be localized and sent in the
2167 # google.rpc.Status.details field, or localized by the client.
2168 "code": 42, # The status code, which should be an enum value of google.rpc.Code.
2169 "details": [ # A list of messages that carry the error details. There is a common set of
2170 # message types for APIs to use.
2171 {
2172 "a_key": "", # Properties of the object. Contains field @type with type URL.
2173 },
2174 ],
2175 },
2176 "resultSets": [ # One ResultSet for each statement in the request that ran successfully,
2177 # in the same order as the statements in the request. Each ResultSet does
2178 # not contain any rows. The ResultSetStats in each ResultSet contain
2179 # the number of rows modified by the statement.
2180 #
2181 # Only the first ResultSet in the response contains valid
2182 # ResultSetMetadata.
2183 { # Results from Read or
2184 # ExecuteSql.
2185 "rows": [ # Each element in `rows` is a row whose format is defined by
2186 # metadata.row_type. The ith element
2187 # in each row matches the ith field in
2188 # metadata.row_type. Elements are
2189 # encoded based on type as described
2190 # here.
2191 [
2192 "",
2193 ],
2194 ],
2195 "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
2196 # produced this result set. These can be requested by setting
2197 # ExecuteSqlRequest.query_mode.
2198 # DML statements always produce stats containing the number of rows
2199 # modified, unless executed using the
2200 # ExecuteSqlRequest.QueryMode.PLAN ExecuteSqlRequest.query_mode.
2201 # Other fields may or may not be populated, based on the
2202 # ExecuteSqlRequest.query_mode.
2203 "rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
2204 # returns a lower bound of the rows modified.
2205 "rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
2206 "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
2207 "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
2208 # with the plan root. Each PlanNode's `id` corresponds to its index in
2209 # `plan_nodes`.
2210 { # Node information for nodes appearing in a QueryPlan.plan_nodes.
2211 "index": 42, # The `PlanNode`'s index in node list.
2212 "kind": "A String", # Used to determine the type of node. May be needed for visualizing
2213 # different kinds of nodes differently. For example, If the node is a
2214 # SCALAR node, it will have a condensed representation
2215 # which can be used to directly embed a description of the node in its
2216 # parent.
2217 "displayName": "A String", # The display name for the node.
2218 "executionStats": { # The execution statistics associated with the node, contained in a group of
2219 # key-value pairs. Only present if the plan was returned as a result of a
2220 # profile query. For example, number of executions, number of rows/time per
2221 # execution etc.
2222 "a_key": "", # Properties of the object.
2223 },
2224 "childLinks": [ # List of child node `index`es and their relationship to this parent.
2225 { # Metadata associated with a parent-child relationship appearing in a
2226 # PlanNode.
2227 "variable": "A String", # Only present if the child node is SCALAR and corresponds
2228 # to an output variable of the parent node. The field carries the name of
2229 # the output variable.
2230 # For example, a `TableScan` operator that reads rows from a table will
2231 # have child links to the `SCALAR` nodes representing the output variables
2232 # created for each column that is read by the operator. The corresponding
2233 # `variable` fields will be set to the variable names assigned to the
2234 # columns.
2235 "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
2236 # distinguish between the build child and the probe child, or in the case
2237 # of the child being an output variable, to represent the tag associated
2238 # with the output variable.
2239 "childIndex": 42, # The node to which the link points.
2240 },
2241 ],
2242 "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
2243 # `SCALAR` PlanNode(s).
2244 "subqueries": { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
2245 # where the `description` string of this node references a `SCALAR`
2246 # subquery contained in the expression subtree rooted at this node. The
2247 # referenced `SCALAR` subquery may not necessarily be a direct child of
2248 # this node.
2249 "a_key": 42,
2250 },
2251 "description": "A String", # A string representation of the expression subtree rooted at this node.
2252 },
2253 "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
2254 # For example, a Parameter Reference node could have the following
2255 # information in its metadata:
2256 #
2257 # {
2258 # "parameter_reference": "param1",
2259 # "parameter_type": "array"
2260 # }
2261 "a_key": "", # Properties of the object.
2262 },
2263 },
2264 ],
2265 },
2266 "queryStats": { # Aggregated statistics from the execution of the query. Only present when
2267 # the query is profiled. For example, a query could return the statistics as
2268 # follows:
2269 #
2270 # {
2271 # "rows_returned": "3",
2272 # "elapsed_time": "1.22 secs",
2273 # "cpu_time": "1.19 secs"
2274 # }
2275 "a_key": "", # Properties of the object.
2276 },
2277 },
2278 "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
2279 "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
2280 # set. For example, a SQL query like `"SELECT UserId, UserName FROM
2281 # Users"` could return a `row_type` value like:
2282 #
2283 # "fields": [
2284 # { "name": "UserId", "type": { "code": "INT64" } },
2285 # { "name": "UserName", "type": { "code": "STRING" } },
2286 # ]
2287 "fields": [ # The list of fields that make up this struct. Order is
2288 # significant, because values of this struct type are represented as
2289 # lists, where the order of field values matches the order of
2290 # fields in the StructType. In turn, the order of fields
2291 # matches the order of columns in a read request, or the order of
2292 # fields in the `SELECT` clause of a query.
2293 { # Message representing a single field of a struct.
2294 "type": # Object with schema name: Type # The type of the field.
2295 "name": "A String", # The name of the field. For reads, this is the column name. For
2296 # SQL queries, it is the column alias (e.g., `"Word"` in the
2297 # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
2298 # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
2299 # columns might have an empty name (e.g., `"SELECT
2300 # UPPER(ColName)"`). Note that a query result can contain
2301 # multiple fields with the same name.
2302 },
2303 ],
2304 },
2305 "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
2306 # information about the new transaction is yielded here.
2307 "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
2308 # for the transaction. Not returned by default: see
2309 # TransactionOptions.ReadOnly.return_read_timestamp.
2310 #
2311 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
2312 # Example: `"2014-10-02T15:01:23.045123456Z"`.
2313 "id": "A String", # `id` may be used to identify the transaction in subsequent
2314 # Read,
2315 # ExecuteSql,
2316 # Commit, or
2317 # Rollback calls.
2318 #
2319 # Single-use read-only transactions do not have IDs, because
2320 # single-use transactions do not support multiple requests.
2321 },
2322 },
2323 },
2324 ],
2325 }</pre>
2326</div>
2327
2328<div class="method">
2329 <code class="details" id="executeSql">executeSql(session, body=None, x__xgafv=None)</code>
2330 <pre>Executes an SQL statement, returning all results in a single reply. This
2331method cannot be used to return a result set larger than 10 MiB;
2332if the query yields more data than that, the query fails with
2333a `FAILED_PRECONDITION` error.
2334
2335Operations inside read-write transactions might return `ABORTED`. If
2336this occurs, the application should restart the transaction from
2337the beginning. See Transaction for more details.
2338
2339Larger result sets can be fetched in streaming fashion by calling
2340ExecuteStreamingSql instead.
2341
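For orientation, a sketch of a request body for this method as a plain dict. The session path and SQL below are made-up placeholders; when no `transaction` field is supplied, the query runs in a temporary strong read-only transaction, as described below:

```python
# Sketch of an ExecuteSql request body. The session path and SQL are
# hypothetical; the field names (sql, params, paramTypes) are documented
# on this page. INT64 values travel as JSON strings, so the SQL type is
# declared explicitly via paramTypes.

session = ("projects/my-project/instances/my-instance/"
           "databases/my-db/sessions/my-session")  # placeholder path

body = {
    "sql": "SELECT id, name FROM Singers WHERE id = @id",
    "params": {"id": "1"},
    "paramTypes": {"id": {"code": "INT64"}},
}

# Hypothetical call through the generated client (requires credentials):
# result = service.projects().instances().databases().sessions() \
#     .executeSql(session=session, body=body).execute()
```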
2342Args:
2343 session: string, Required. The session in which the SQL query should be performed. (required)
2344 body: object, The request body.
2345 The object takes the form of:
2346
2347{ # The request for ExecuteSql and
2348 # ExecuteStreamingSql.
2349 "transaction": { # This message is used to select the transaction in which a # The transaction to use.
2350 #
2351 # For queries, if none is provided, the default is a temporary read-only
2352 # transaction with strong concurrency.
2353 #
2354 # Standard DML statements require a read-write transaction. To protect
2355 # against replays, single-use transactions are not supported. The caller
2356 # must either supply an existing transaction ID or begin a new transaction.
2357 #
2358 # Partitioned DML requires an existing Partitioned DML transaction ID.
2359 # Read or
2360 # ExecuteSql call runs.
2361 #
2362 # See TransactionOptions for more information about transactions.
2363 "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
2364 # it. The transaction ID of the new transaction is returned in
2365 # ResultSetMetadata.transaction, which is a Transaction.
2366 #
2367 #
2368 # Each session can have at most one active transaction at a time. After the
2369 # active transaction is completed, the session can immediately be
2370 # re-used for the next transaction. It is not necessary to create a
2371 # new session for each transaction.
2372 #
2373 # # Transaction Modes
2374 #
2375 # Cloud Spanner supports three transaction modes:
2376 #
2377 # 1. Locking read-write. This type of transaction is the only way
2378 # to write data into Cloud Spanner. These transactions rely on
2379 # pessimistic locking and, if necessary, two-phase commit.
2380 # Locking read-write transactions may abort, requiring the
2381 # application to retry.
2382 #
2383 # 2. Snapshot read-only. This transaction type provides guaranteed
2384 # consistency across several reads, but does not allow
2385 # writes. Snapshot read-only transactions can be configured to
2386 # read at timestamps in the past. Snapshot read-only
2387 # transactions do not need to be committed.
2388 #
2389 # 3. Partitioned DML. This type of transaction is used to execute
2390 # a single Partitioned DML statement. Partitioned DML partitions
2391 # the key space and runs the DML statement over each partition
2392 # in parallel using separate, internal transactions that commit
2393 # independently. Partitioned DML transactions do not need to be
2394 # committed.
2395 #
2396 # For transactions that only read, snapshot read-only transactions
2397 # provide simpler semantics and are almost always faster. In
2398 # particular, read-only transactions do not take locks, so they do
2399 # not conflict with read-write transactions. As a consequence of not
2400 # taking locks, they also do not abort, so retry loops are not needed.
2401 #
2402 # Transactions may only read/write data in a single database. They
2403 # may, however, read/write data in different tables within that
2404 # database.
2405 #
2406 # ## Locking Read-Write Transactions
2407 #
2408 # Locking transactions may be used to atomically read-modify-write
2409 # data anywhere in a database. This type of transaction is externally
2410 # consistent.
2411 #
2412 # Clients should attempt to minimize the amount of time a transaction
2413 # is active. Faster transactions commit with higher probability
2414 # and cause less contention. Cloud Spanner attempts to keep read locks
2415 # active as long as the transaction continues to do reads, and the
2416 # transaction has not been terminated by
2417 # Commit or
2418 # Rollback. Long periods of
2419 # inactivity at the client may cause Cloud Spanner to release a
2420 # transaction's locks and abort it.
2421 #
2422 # Conceptually, a read-write transaction consists of zero or more
2423 # reads or SQL statements followed by
2424 # Commit. At any time before
2425 # Commit, the client can send a
2426 # Rollback request to abort the
2427 # transaction.
2428 #
2429 # ### Semantics
2430 #
2431 # Cloud Spanner can commit the transaction if all read locks it acquired
2432 # are still valid at commit time, and it is able to acquire write
2433 # locks for all writes. Cloud Spanner can abort the transaction for any
2434 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
2435 # that the transaction has not modified any user data in Cloud Spanner.
2436 #
2437 # Unless the transaction commits, Cloud Spanner makes no guarantees about
2438 # how long the transaction's locks were held for. It is an error to
2439 # use Cloud Spanner locks for any sort of mutual exclusion other than
2440 # between Cloud Spanner transactions themselves.
2441 #
2442 # ### Retrying Aborted Transactions
2443 #
2444 # When a transaction aborts, the application can choose to retry the
2445 # whole transaction again. To maximize the chances of successfully
2446 # committing the retry, the client should execute the retry in the
2447 # same session as the original attempt. The original session's lock
2448 # priority increases with each consecutive abort, meaning that each
2449 # attempt has a slightly better chance of success than the previous.
2450 #
2451 # Under some circumstances (e.g., many transactions attempting to
2452 # modify the same row(s)), a transaction can abort many times in a
2453 # short period before successfully committing. Thus, it is not a good
2454 # idea to cap the number of retries a transaction can attempt;
2455 # instead, it is better to limit the total amount of wall time spent
2456 # retrying.
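# The wall-time-capped retry loop described above can be sketched as a
# small Python helper. This is illustrative only: `Aborted` and
# `flaky_commit` are stand-ins, not Cloud Spanner API names.

```python
import time


class Aborted(Exception):
    """Stand-in for the ABORTED error a commit attempt can return."""


def run_with_retry(work, deadline_seconds=60.0, clock=time.monotonic):
    """Retry `work` on Aborted, bounding total wall time, not attempt count."""
    start = clock()
    backoff = 0.01
    while True:
        try:
            return work()
        except Aborted:
            if clock() - start > deadline_seconds:
                raise  # out of time; surface the abort to the caller
            time.sleep(backoff)
            backoff = min(backoff * 2, 1.0)  # exponential backoff, capped


attempts = {"n": 0}


def flaky_commit():
    """Toy workload that aborts twice, then commits on the third try."""
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise Aborted()
    return "committed"
```

# Retrying in the same session (as recommended above) would be handled by
# the caller; the helper itself is agnostic to where `work` runs.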
2457 #
2458 # ### Idle Transactions
2459 #
2460 # A transaction is considered idle if it has no outstanding reads or
2461 # SQL queries and has not started a read or SQL query within the last 10
2462 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
2463 # don't hold on to locks indefinitely. In that case, the commit will
2464 # fail with error `ABORTED`.
2465 #
2466 # If this behavior is undesirable, periodically executing a simple
2467 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
2468 # transaction from becoming idle.
2469 #
2470 # ## Snapshot Read-Only Transactions
2471 #
2472 # Snapshot read-only transactions provide a simpler method than
2473 # locking read-write transactions for doing several consistent
2474 # reads. However, this type of transaction does not support writes.
2475 #
2476 # Snapshot transactions do not take locks. Instead, they work by
2477 # choosing a Cloud Spanner timestamp, then executing all reads at that
2478 # timestamp. Since they do not acquire locks, they do not block
2479 # concurrent read-write transactions.
2480 #
2481 # Unlike locking read-write transactions, snapshot read-only
2482 # transactions never abort. They can fail if the chosen read
2483 # timestamp is garbage collected; however, the default garbage
2484 # collection policy is generous enough that most applications do not
2485 # need to worry about this in practice.
2486 #
2487 # Snapshot read-only transactions do not need to call
2488 # Commit or
2489 # Rollback (and in fact are not
2490 # permitted to do so).
2491 #
2492 # To execute a snapshot transaction, the client specifies a timestamp
2493 # bound, which tells Cloud Spanner how to choose a read timestamp.
2494 #
2495 # The types of timestamp bound are:
2496 #
2497 # - Strong (the default).
2498 # - Bounded staleness.
2499 # - Exact staleness.
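# For illustration, the three bound types map onto the `readOnly` fields
# documented on this page. A sketch of the JSON request fragments (the
# `10s`/`15s` duration strings assume standard protobuf JSON encoding):

```python
# Strong (the default): read at a timestamp where all previously
# committed transactions are visible.
strong = {"readOnly": {"strong": True}}

# Exact staleness: a fixed timestamp, or a fixed age.
exact = {"readOnly": {"readTimestamp": "2014-10-02T15:01:23.045123456Z"}}
exact_age = {"readOnly": {"exactStaleness": "10s"}}

# Bounded staleness: Cloud Spanner picks the freshest timestamp within
# the bound (single-use read-only transactions only).
bounded = {"readOnly": {"maxStaleness": "15s"}}
bounded_ts = {"readOnly": {"minReadTimestamp": "2014-10-02T15:01:23.045123456Z"}}
```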
2500 #
2501 # If the Cloud Spanner database to be read is geographically distributed,
2502 # stale read-only transactions can execute more quickly than strong
2503 # or read-write transactions, because they are able to execute far
2504 # from the leader replica.
2505 #
2506 # Each type of timestamp bound is discussed in detail below.
2507 #
2508 # ### Strong
2509 #
2510 # Strong reads are guaranteed to see the effects of all transactions
2511 # that have committed before the start of the read. Furthermore, all
2512 # rows yielded by a single read are consistent with each other -- if
2513 # any part of the read observes a transaction, all parts of the read
2514 # see the transaction.
2515 #
2516 # Strong reads are not repeatable: two consecutive strong read-only
2517 # transactions might return inconsistent results if there are
2518 # concurrent writes. If consistency across reads is required, the
2519 # reads should be executed within a transaction or at an exact read
2520 # timestamp.
2521 #
2522 # See TransactionOptions.ReadOnly.strong.
2523 #
2524 # ### Exact Staleness
2525 #
2526 # These timestamp bounds execute reads at a user-specified
2527 # timestamp. Reads at a timestamp are guaranteed to see a consistent
2528 # prefix of the global transaction history: they observe
2529 # modifications done by all transactions with a commit timestamp &lt;=
2530 # the read timestamp, and observe none of the modifications done by
2531 # transactions with a larger commit timestamp. They will block until
2532 # all conflicting transactions that may be assigned commit timestamps
2533 # &lt;= the read timestamp have finished.
2534 #
2535 # The timestamp can either be expressed as an absolute Cloud Spanner commit
2536 # timestamp or a staleness relative to the current time.
2537 #
2538 # These modes do not require a "negotiation phase" to pick a
2539 # timestamp. As a result, they execute slightly faster than the
2540 # equivalent boundedly stale concurrency modes. On the other hand,
2541 # boundedly stale reads usually return fresher results.
2542 #
2543 # See TransactionOptions.ReadOnly.read_timestamp and
2544 # TransactionOptions.ReadOnly.exact_staleness.
2545 #
2546 # ### Bounded Staleness
2547 #
2548 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
2549 # subject to a user-provided staleness bound. Cloud Spanner chooses the
2550 # newest timestamp within the staleness bound that allows execution
2551 # of the reads at the closest available replica without blocking.
2552 #
2553 # All rows yielded are consistent with each other -- if any part of
2554 # the read observes a transaction, all parts of the read see the
2555 # transaction. Boundedly stale reads are not repeatable: two stale
2556 # reads, even if they use the same staleness bound, can execute at
2557 # different timestamps and thus return inconsistent results.
2558 #
2559 # Boundedly stale reads execute in two phases: the first phase
2560 # negotiates a timestamp among all replicas needed to serve the
2561 # read. In the second phase, reads are executed at the negotiated
2562 # timestamp.
2563 #
2564 # As a result of the two-phase execution, bounded staleness reads are
2565 # usually a little slower than comparable exact staleness
2566 # reads. However, they are typically able to return fresher
2567 # results, and are more likely to execute at the closest replica.
2568 #
2569 # Because the timestamp negotiation requires up-front knowledge of
2570 # which rows will be read, it can only be used with single-use
2571 # read-only transactions.
2572 #
2573 # See TransactionOptions.ReadOnly.max_staleness and
2574 # TransactionOptions.ReadOnly.min_read_timestamp.
2575 #
2576 # ### Old Read Timestamps and Garbage Collection
2577 #
2578 # Cloud Spanner continuously garbage collects deleted and overwritten data
2579 # in the background to reclaim storage space. This process is known
2580 # as "version GC". By default, version GC reclaims versions after they
2581 # are one hour old. Because of this, Cloud Spanner cannot perform reads
2582 # at read timestamps more than one hour in the past. This
2583 # restriction also applies to in-progress reads and/or SQL queries whose
2584 # timestamp becomes too old while executing. Reads and SQL queries with
2585 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
2586 #
2587 # ## Partitioned DML Transactions
2588 #
2589 # Partitioned DML transactions are used to execute DML statements with a
2590 # different execution strategy that provides different, and often better,
2591 # scalability properties for large, table-wide operations than DML in a
2592 # ReadWrite transaction. Smaller-scoped statements, such as those in an
2593 # OLTP workload, should use ReadWrite transactions.
2594 #
2595 # Partitioned DML partitions the keyspace and runs the DML statement on each
2596 # partition in separate, internal transactions. These transactions commit
2597 # automatically when complete, and run independently from one another.
2598 #
2599 # To reduce lock contention, this execution strategy only acquires read locks
2600 # on rows that match the WHERE clause of the statement. Additionally, the
2601 # smaller per-partition transactions hold locks for less time.
2602 #
2603 # That said, Partitioned DML is not a drop-in replacement for standard DML used
2604 # in ReadWrite transactions.
2605 #
2606 # - The DML statement must be fully-partitionable. Specifically, the statement
2607 # must be expressible as the union of many statements which each access only
2608 # a single row of the table.
2609 #
2610 # - The statement is not applied atomically to all rows of the table. Rather,
2611 # the statement is applied atomically to partitions of the table, in
2612 # independent transactions. Secondary index rows are updated atomically
2613 # with the base table rows.
2614 #
2615 # - Partitioned DML does not guarantee exactly-once execution semantics
2616 # against a partition. The statement will be applied at least once to each
2617 # partition. It is strongly recommended that the DML statement be
2618 # idempotent to avoid unexpected results. For instance, it is potentially
2619 # dangerous to run a statement such as
2620 # `UPDATE table SET column = column + 1` as it could be run multiple times
2621 # against some rows.
2622 #
2623 # - The partitions are committed automatically; there is no support for
2624 # Commit or Rollback. If the call returns an error, or if the client issuing
2625 # the ExecuteSql call dies, it is possible that some rows had the statement
2626 # executed on them successfully. It is also possible that the statement was
2627 # never executed against other rows.
2628 #
2629 # - Partitioned DML transactions may only contain the execution of a single
2630 # DML statement via ExecuteSql or ExecuteStreamingSql.
2631 #
2632 # - If any error is encountered during the execution of the partitioned DML
2633 # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
2634 # value that cannot be stored due to schema constraints), then the
2635 # operation is stopped at that point and an error is returned. It is
2636 # possible that at this point, some partitions have been committed (or even
2637 # committed multiple times), and other partitions have not been run at all.
2638 #
2639 # Given the above, Partitioned DML is a good fit for large, database-wide
2640 # operations that are idempotent, such as deleting old rows from a very large
2641 # table.
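# A minimal sketch of why idempotence matters under the at-least-once
# semantics above (pure Python, no Spanner API involved): replaying a
# partition leaves an idempotent statement's result unchanged, but
# double-applies an increment.

```python
def apply_set(rows, key, value):
    """Idempotent: replaying yields the same final state."""
    rows[key] = value


def apply_increment(rows, key):
    """Not idempotent: each replay changes the result."""
    rows[key] = rows.get(key, 0) + 1


rows = {"a": 0}
for _ in range(2):  # simulate one partition being applied twice
    apply_set(rows, "a", 5)
# rows["a"] == 5 either way: safe under at-least-once execution

rows2 = {"a": 0}
for _ in range(2):
    apply_increment(rows2, "a")
# rows2["a"] == 2, not 1: why `column = column + 1` is dangerous here
```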
2642 "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
2643 #
2644 # Authorization to begin a read-write transaction requires
2645 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
2646 # on the `session` resource.
2647 # transaction type has no options.
2648 },
2649 "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
2650 #
2651 # Authorization to begin a read-only transaction requires
2652 # `spanner.databases.beginReadOnlyTransaction` permission
2653 # on the `session` resource.
2654 "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
2655 #
2656 # This is useful for requesting fresher data than some previous
2657 # read, or data that is fresh enough to observe the effects of some
2658 # previously committed transaction whose timestamp is known.
2659 #
2660 # Note that this option can only be used in single-use transactions.
2661 #
2662 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
2663 # Example: `"2014-10-02T15:01:23.045123456Z"`.
2664 "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
2665 # reads at a specific timestamp are repeatable; the same read at
2666 # the same timestamp always returns the same data. If the
2667 # timestamp is in the future, the read will block until the
2668 # specified timestamp, modulo the read's deadline.
2669 #
2670 # Useful for large scale consistent reads such as mapreduces, or
2671 # for coordinating many reads against a consistent snapshot of the
2672 # data.
2673 #
2674 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
2675 # Example: `"2014-10-02T15:01:23.045123456Z"`.
2676 "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
2677 # seconds. Guarantees that all writes that have committed more
2678 # than the specified number of seconds ago are visible. Because
2679 # Cloud Spanner chooses the exact timestamp, this mode works even if
2680 # the client's local clock is substantially skewed from Cloud Spanner
2681 # commit timestamps.
2682 #
2683 # Useful for reading the freshest data available at a nearby
2684 # replica, while bounding the possible staleness if the local
2685 # replica has fallen behind.
2686 #
2687 # Note that this option can only be used in single-use
2688 # transactions.
2689 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
2690 # old. The timestamp is chosen soon after the read is started.
2691 #
2692 # Guarantees that all writes that have committed more than the
2693 # specified number of seconds ago are visible. Because Cloud Spanner
2694 # chooses the exact timestamp, this mode works even if the client's
2695 # local clock is substantially skewed from Cloud Spanner commit
2696 # timestamps.
2697 #
2698 # Useful for reading at nearby replicas without the distributed
2699 # timestamp negotiation overhead of `max_staleness`.
2700 "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
2701 # the Transaction message that describes the transaction.
2702 "strong": True or False, # Read at a timestamp where all previously committed transactions
2703 # are visible.
2704 },
2705 "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
2706 #
2707 # Authorization to begin a Partitioned DML transaction requires
2708 # `spanner.databases.beginPartitionedDmlTransaction` permission
2709 # on the `session` resource.
2710 },
2711 },
2712 "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
2713 # This is the most efficient way to execute a transaction that
2714 # consists of a single SQL query.
2715 #
2716 #
2717 # Each session can have at most one active transaction at a time. After the
2718 # active transaction is completed, the session can immediately be
2719 # re-used for the next transaction. It is not necessary to create a
2720 # new session for each transaction.
2721 #
2722 # # Transaction Modes
2723 #
2724 # Cloud Spanner supports three transaction modes:
2725 #
2726 # 1. Locking read-write. This type of transaction is the only way
2727 # to write data into Cloud Spanner. These transactions rely on
2728 # pessimistic locking and, if necessary, two-phase commit.
2729 # Locking read-write transactions may abort, requiring the
2730 # application to retry.
2731 #
2732 # 2. Snapshot read-only. This transaction type provides guaranteed
2733 # consistency across several reads, but does not allow
2734 # writes. Snapshot read-only transactions can be configured to
2735 # read at timestamps in the past. Snapshot read-only
2736 # transactions do not need to be committed.
2737 #
2738 # 3. Partitioned DML. This type of transaction is used to execute
2739 # a single Partitioned DML statement. Partitioned DML partitions
2740 # the key space and runs the DML statement over each partition
2741 # in parallel using separate, internal transactions that commit
2742 # independently. Partitioned DML transactions do not need to be
2743 # committed.
2744 #
2745 # For transactions that only read, snapshot read-only transactions
2746 # provide simpler semantics and are almost always faster. In
2747 # particular, read-only transactions do not take locks, so they do
2748 # not conflict with read-write transactions. As a consequence of not
2749 # taking locks, they also do not abort, so retry loops are not needed.
2750 #
2751 # Transactions may only read/write data in a single database. They
2752 # may, however, read/write data in different tables within that
2753 # database.
2754 #
2755 # ## Locking Read-Write Transactions
2756 #
2757 # Locking transactions may be used to atomically read-modify-write
2758 # data anywhere in a database. This type of transaction is externally
2759 # consistent.
2760 #
2761 # Clients should attempt to minimize the amount of time a transaction
2762 # is active. Faster transactions commit with higher probability
2763 # and cause less contention. Cloud Spanner attempts to keep read locks
2764 # active as long as the transaction continues to do reads, and the
2765 # transaction has not been terminated by
2766 # Commit or
2767 # Rollback. Long periods of
2768 # inactivity at the client may cause Cloud Spanner to release a
2769 # transaction's locks and abort it.
2770 #
2771 # Conceptually, a read-write transaction consists of zero or more
2772 # reads or SQL statements followed by
2773 # Commit. At any time before
2774 # Commit, the client can send a
2775 # Rollback request to abort the
2776 # transaction.
2777 #
2778 # ### Semantics
2779 #
2780 # Cloud Spanner can commit the transaction if all read locks it acquired
2781 # are still valid at commit time, and it is able to acquire write
2782 # locks for all writes. Cloud Spanner can abort the transaction for any
2783 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
2784 # that the transaction has not modified any user data in Cloud Spanner.
2785 #
2786 # Unless the transaction commits, Cloud Spanner makes no guarantees about
2787 # how long the transaction's locks were held for. It is an error to
2788 # use Cloud Spanner locks for any sort of mutual exclusion other than
2789 # between Cloud Spanner transactions themselves.
2790 #
2791 # ### Retrying Aborted Transactions
2792 #
2793 # When a transaction aborts, the application can choose to retry the
2794 # whole transaction again. To maximize the chances of successfully
2795 # committing the retry, the client should execute the retry in the
2796 # same session as the original attempt. The original session's lock
2797 # priority increases with each consecutive abort, meaning that each
2798 # attempt has a slightly better chance of success than the previous.
2799 #
2800 # Under some circumstances (e.g., many transactions attempting to
2801 # modify the same row(s)), a transaction can abort many times in a
2802 # short period before successfully committing. Thus, it is not a good
2803 # idea to cap the number of retries a transaction can attempt;
2804 # instead, it is better to limit the total amount of wall time spent
2805 # retrying.
2806 #
2807 # ### Idle Transactions
2808 #
2809 # A transaction is considered idle if it has no outstanding reads or
2810 # SQL queries and has not started a read or SQL query within the last 10
2811 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
2812 # don't hold on to locks indefinitely. In that case, the commit will
2813 # fail with error `ABORTED`.
2814 #
2815 # If this behavior is undesirable, periodically executing a simple
2816 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
2817 # transaction from becoming idle.
2818 #
2819 # ## Snapshot Read-Only Transactions
2820 #
2821 # Snapshot read-only transactions provide a simpler method than
2822 # locking read-write transactions for doing several consistent
2823 # reads. However, this type of transaction does not support writes.
2824 #
2825 # Snapshot transactions do not take locks. Instead, they work by
2826 # choosing a Cloud Spanner timestamp, then executing all reads at that
2827 # timestamp. Since they do not acquire locks, they do not block
2828 # concurrent read-write transactions.
2829 #
2830 # Unlike locking read-write transactions, snapshot read-only
2831 # transactions never abort. They can fail if the chosen read
2832 # timestamp is garbage collected; however, the default garbage
2833 # collection policy is generous enough that most applications do not
2834 # need to worry about this in practice.
2835 #
2836 # Snapshot read-only transactions do not need to call
2837 # Commit or
2838 # Rollback (and in fact are not
2839 # permitted to do so).
2840 #
2841 # To execute a snapshot transaction, the client specifies a timestamp
2842 # bound, which tells Cloud Spanner how to choose a read timestamp.
2843 #
2844 # The types of timestamp bound are:
2845 #
2846 # - Strong (the default).
2847 # - Bounded staleness.
2848 # - Exact staleness.
2849 #
2850 # If the Cloud Spanner database to be read is geographically distributed,
2851 # stale read-only transactions can execute more quickly than strong
2852 # or read-write transactions, because they are able to execute far
2853 # from the leader replica.
2854 #
2855 # Each type of timestamp bound is discussed in detail below.
2856 #
2857 # ### Strong
2858 #
2859 # Strong reads are guaranteed to see the effects of all transactions
2860 # that have committed before the start of the read. Furthermore, all
2861 # rows yielded by a single read are consistent with each other -- if
2862 # any part of the read observes a transaction, all parts of the read
2863 # see the transaction.
2864 #
2865 # Strong reads are not repeatable: two consecutive strong read-only
2866 # transactions might return inconsistent results if there are
2867 # concurrent writes. If consistency across reads is required, the
2868 # reads should be executed within a transaction or at an exact read
2869 # timestamp.
2870 #
2871 # See TransactionOptions.ReadOnly.strong.
2872 #
2873 # ### Exact Staleness
2874 #
2875 # These timestamp bounds execute reads at a user-specified
2876 # timestamp. Reads at a timestamp are guaranteed to see a consistent
2877 # prefix of the global transaction history: they observe
2878 # modifications done by all transactions with a commit timestamp &lt;=
2879 # the read timestamp, and observe none of the modifications done by
2880 # transactions with a larger commit timestamp. They will block until
2881 # all conflicting transactions that may be assigned commit timestamps
2882 # &lt;= the read timestamp have finished.
2883 #
2884 # The timestamp can either be expressed as an absolute Cloud Spanner commit
2885 # timestamp or a staleness relative to the current time.
2886 #
2887 # These modes do not require a "negotiation phase" to pick a
2888 # timestamp. As a result, they execute slightly faster than the
2889 # equivalent boundedly stale concurrency modes. On the other hand,
2890 # boundedly stale reads usually return fresher results.
2891 #
2892 # See TransactionOptions.ReadOnly.read_timestamp and
2893 # TransactionOptions.ReadOnly.exact_staleness.
2894 #
2895 # ### Bounded Staleness
2896 #
2897 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
2898 # subject to a user-provided staleness bound. Cloud Spanner chooses the
2899 # newest timestamp within the staleness bound that allows execution
2900 # of the reads at the closest available replica without blocking.
2901 #
2902 # All rows yielded are consistent with each other -- if any part of
2903 # the read observes a transaction, all parts of the read see the
2904 # transaction. Boundedly stale reads are not repeatable: two stale
2905 # reads, even if they use the same staleness bound, can execute at
2906 # different timestamps and thus return inconsistent results.
2907 #
2908 # Boundedly stale reads execute in two phases: the first phase
2909 # negotiates a timestamp among all replicas needed to serve the
2910 # read. In the second phase, reads are executed at the negotiated
2911 # timestamp.
2912 #
2913 # As a result of the two-phase execution, bounded staleness reads are
2914 # usually a little slower than comparable exact staleness
2915 # reads. However, they are typically able to return fresher
2916 # results, and are more likely to execute at the closest replica.
2917 #
2918 # Because the timestamp negotiation requires up-front knowledge of
2919 # which rows will be read, it can only be used with single-use
2920 # read-only transactions.
2921 #
2922 # See TransactionOptions.ReadOnly.max_staleness and
2923 # TransactionOptions.ReadOnly.min_read_timestamp.
2924 #
2925 # ### Old Read Timestamps and Garbage Collection
2926 #
2927 # Cloud Spanner continuously garbage collects deleted and overwritten data
2928 # in the background to reclaim storage space. This process is known
2929 # as "version GC". By default, version GC reclaims versions after they
2930 # are one hour old. Because of this, Cloud Spanner cannot perform reads
2931 # at read timestamps more than one hour in the past. This
2932 # restriction also applies to in-progress reads and/or SQL queries whose
2933 # timestamp becomes too old while executing. Reads and SQL queries with
2934 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
2935 #
2936 # ## Partitioned DML Transactions
2937 #
2938 # Partitioned DML transactions are used to execute DML statements with a
2939 # different execution strategy that provides different, and often better,
2940 # scalability properties for large, table-wide operations than DML in a
2941 # ReadWrite transaction. Smaller-scoped statements, such as those in an
2942 # OLTP workload, should use ReadWrite transactions.
2943 #
2944 # Partitioned DML partitions the keyspace and runs the DML statement on each
2945 # partition in separate, internal transactions. These transactions commit
2946 # automatically when complete, and run independently from one another.
2947 #
2948 # To reduce lock contention, this execution strategy only acquires read locks
2949 # on rows that match the WHERE clause of the statement. Additionally, the
2950 # smaller per-partition transactions hold locks for less time.
2951 #
2952 # That said, Partitioned DML is not a drop-in replacement for standard DML used
2953 # in ReadWrite transactions.
2954 #
2955 # - The DML statement must be fully-partitionable. Specifically, the statement
2956 # must be expressible as the union of many statements which each access only
2957 # a single row of the table.
2958 #
2959 # - The statement is not applied atomically to all rows of the table. Rather,
2960 # the statement is applied atomically to partitions of the table, in
2961 # independent transactions. Secondary index rows are updated atomically
2962 # with the base table rows.
2963 #
2964 # - Partitioned DML does not guarantee exactly-once execution semantics
2965 # against a partition. The statement will be applied at least once to each
2966 # partition. It is strongly recommended that the DML statement be
2967 # idempotent to avoid unexpected results. For instance, it is potentially
2968 # dangerous to run a statement such as
2969 # `UPDATE table SET column = column + 1` as it could be run multiple times
2970 # against some rows.
2971 #
2972 # - The partitions are committed automatically; there is no support for
2973 # Commit or Rollback. If the call returns an error, or if the client issuing
2974 # the ExecuteSql call dies, it is possible that some rows had the statement
2975 # executed on them successfully. It is also possible that the statement was
2976 # never executed against other rows.
2977 #
2978 # - Partitioned DML transactions may only contain the execution of a single
2979 # DML statement via ExecuteSql or ExecuteStreamingSql.
2980 #
2981 # - If any error is encountered during the execution of the partitioned DML
2982 # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
2983 # value that cannot be stored due to schema constraints), then the
2984 # operation is stopped at that point and an error is returned. It is
2985 # possible that at this point, some partitions have been committed (or even
2986 # committed multiple times), and other partitions have not been run at all.
2987 #
2988 # Given the above, Partitioned DML is a good fit for large, database-wide
2989 # operations that are idempotent, such as deleting old rows from a very large
2990 # table.
2991 "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
2992 #
2993 # Authorization to begin a read-write transaction requires
2994 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
2995 # on the `session` resource.
2996 # transaction type has no options.
2997 },
2998 "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
2999 #
3000 # Authorization to begin a read-only transaction requires
3001 # `spanner.databases.beginReadOnlyTransaction` permission
3002 # on the `session` resource.
3003 "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
3004 #
3005 # This is useful for requesting fresher data than some previous
3006 # read, or data that is fresh enough to observe the effects of some
3007 # previously committed transaction whose timestamp is known.
3008 #
3009 # Note that this option can only be used in single-use transactions.
3010 #
3011 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
3012 # Example: `"2014-10-02T15:01:23.045123456Z"`.
3013 "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
3014 # reads at a specific timestamp are repeatable; the same read at
3015 # the same timestamp always returns the same data. If the
3016 # timestamp is in the future, the read will block until the
3017 # specified timestamp, modulo the read's deadline.
3018 #
3019 # Useful for large scale consistent reads such as mapreduces, or
3020 # for coordinating many reads against a consistent snapshot of the
3021 # data.
3022 #
3023 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
3024 # Example: `"2014-10-02T15:01:23.045123456Z"`.
3025 "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
3026 # seconds. Guarantees that all writes that have committed more
3027 # than the specified number of seconds ago are visible. Because
3028 # Cloud Spanner chooses the exact timestamp, this mode works even if
3029 # the client's local clock is substantially skewed from Cloud Spanner
3030 # commit timestamps.
3031 #
3032 # Useful for reading the freshest data available at a nearby
3033 # replica, while bounding the possible staleness if the local
3034 # replica has fallen behind.
3035 #
3036 # Note that this option can only be used in single-use
3037 # transactions.
3038 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
3039 # old. The timestamp is chosen soon after the read is started.
3040 #
3041 # Guarantees that all writes that have committed more than the
3042 # specified number of seconds ago are visible. Because Cloud Spanner
3043 # chooses the exact timestamp, this mode works even if the client's
3044 # local clock is substantially skewed from Cloud Spanner commit
3045 # timestamps.
3046 #
3047 # Useful for reading at nearby replicas without the distributed
3048 # timestamp negotiation overhead of `max_staleness`.
3049 "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
3050 # the Transaction message that describes the transaction.
3051 "strong": True or False, # Read at a timestamp where all previously committed transactions
3052 # are visible.
3053 },
3054 "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
3055 #
3056 # Authorization to begin a Partitioned DML transaction requires
3057 # `spanner.databases.beginPartitionedDmlTransaction` permission
3058 # on the `session` resource.
3059 },
3060 },
3061 "id": "A String", # Execute the read or SQL query in a previously-started transaction.
3062 },
Dan O'Mearadd494642020-05-01 07:42:23 -07003063 "seqno": "A String", # A per-transaction sequence number used to identify this request. This field
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003064 # makes each request idempotent such that if the request is received multiple
3065 # times, at most one will succeed.
3066 #
3067 # The sequence number must be monotonically increasing within the
3068 # transaction. If a request arrives for the first time with an out-of-order
3069 # sequence number, the transaction may be aborted. Replays of previously
3070 # handled requests will yield the same response as the first execution.
3071 #
3072 # Required for DML statements. Ignored for queries.
3073 "resumeToken": "A String", # If this request is resuming a previously interrupted SQL statement
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04003074 # execution, `resume_token` should be copied from the last
3075 # PartialResultSet yielded before the interruption. Doing this
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003076 # enables the new SQL statement execution to resume where the last one left
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04003077 # off. The rest of the request parameters must exactly match the
3078 # request that yielded this token.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003079 "partitionToken": "A String", # If present, results will be restricted to the specified partition
3080 # previously created using PartitionQuery(). There must be an exact
3081 # match for the values of fields common to this message and the
3082 # PartitionQueryRequest message used to create this partition_token.
  "paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
      # from a JSON value. For example, values of type `BYTES` and values
      # of type `STRING` both appear in params as JSON strings.
      #
      # In these cases, `param_types` can be used to specify the exact
      # SQL type for some or all of the SQL statement parameters. See the
      # definition of Type for more information
      # about SQL types.
    "a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
        # table cell or returned from an SQL query.
      "structType": { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
          # provides type information for the struct's fields.
        "fields": [ # The list of fields that make up this struct. Order is
            # significant, because values of this struct type are represented as
            # lists, where the order of field values matches the order of
            # fields in the StructType. In turn, the order of fields
            # matches the order of columns in a read request, or the order of
            # fields in the `SELECT` clause of a query.
          { # Message representing a single field of a struct.
            "type": # Object with schema name: Type # The type of the field.
            "name": "A String", # The name of the field. For reads, this is the column name. For
                # SQL queries, it is the column alias (e.g., `"Word"` in the
                # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
                # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
                # columns might have an empty name (e.g., `"SELECT
                # UPPER(ColName)"`). Note that a query result can contain
                # multiple fields with the same name.
          },
        ],
      },
      "code": "A String", # Required. The TypeCode for this type.
      "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
          # is the type of the array elements.
    },
  },
  "queryOptions": { # Query optimizer configuration. # Query optimizer configuration to use for the given query.
    "optimizerVersion": "A String", # An option to control the selection of optimizer version.
        #
        # This parameter allows individual queries to pick different query
        # optimizer versions.
        #
        # Specifying "latest" as a value instructs Cloud Spanner to use the
        # latest supported query optimizer version. If not specified, Cloud Spanner
        # uses the optimizer version set at the database level. Any other
        # positive integer (from the list of supported optimizer versions)
        # overrides the default optimizer version for query execution.
        # The list of supported optimizer versions can be queried from
        # SPANNER_SYS.SUPPORTED_OPTIMIZER_VERSIONS. Executing a SQL statement
        # with an invalid optimizer version will fail with a syntax error
        # (`INVALID_ARGUMENT`) status.
        #
        # The `optimizer_version` statement hint has precedence over this setting.
  },
  "params": { # Parameter names and values that bind to placeholders in the SQL string.
      #
      # A parameter placeholder consists of the `@` character followed by the
      # parameter name (for example, `@firstName`). Parameter names can contain
      # letters, numbers, and underscores.
      #
      # Parameters can appear anywhere that a literal value is expected. The same
      # parameter name can be used more than once, for example:
      #
      # `"WHERE id &gt; @msg_id AND id &lt; @msg_id + 100"`
      #
      # It is an error to execute a SQL statement with unbound parameters.
    "a_key": "", # Properties of the object.
  },
  "sql": "A String", # Required. The SQL string.
  "queryMode": "A String", # Used to control the amount of debugging information returned in
      # ResultSetStats. If partition_token is set, query_mode can only
      # be set to QueryMode.NORMAL.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

  { # Results from Read or
      # ExecuteSql.
    "rows": [ # Each element in `rows` is a row whose format is defined by
        # metadata.row_type. The ith element
        # in each row matches the ith field in
        # metadata.row_type. Elements are
        # encoded based on type as described
        # here.
      [
        "",
      ],
    ],
    "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
        # produced this result set. These can be requested by setting
        # ExecuteSqlRequest.query_mode.
        # DML statements always produce stats containing the number of rows
        # modified, unless executed using the
        # ExecuteSqlRequest.QueryMode.PLAN ExecuteSqlRequest.query_mode.
        # Other fields may or may not be populated, based on the
        # ExecuteSqlRequest.query_mode.
      "rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
          # returns a lower bound of the rows modified.
      "rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
      "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
        "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
            # with the plan root. Each PlanNode's `id` corresponds to its index in
            # `plan_nodes`.
          { # Node information for nodes appearing in a QueryPlan.plan_nodes.
            "index": 42, # The `PlanNode`'s index in node list.
            "kind": "A String", # Used to determine the type of node. May be needed for visualizing
                # different kinds of nodes differently. For example, if the node is a
                # SCALAR node, it will have a condensed representation
                # which can be used to directly embed a description of the node in its
                # parent.
            "displayName": "A String", # The display name for the node.
            "executionStats": { # The execution statistics associated with the node, contained in a group of
                # key-value pairs. Only present if the plan was returned as a result of a
                # profile query. For example, number of executions, number of rows/time per
                # execution etc.
              "a_key": "", # Properties of the object.
            },
            "childLinks": [ # List of child node `index`es and their relationship to this parent.
              { # Metadata associated with a parent-child relationship appearing in a
                  # PlanNode.
                "variable": "A String", # Only present if the child node is SCALAR and corresponds
                    # to an output variable of the parent node. The field carries the name of
                    # the output variable.
                    # For example, a `TableScan` operator that reads rows from a table will
                    # have child links to the `SCALAR` nodes representing the output variables
                    # created for each column that is read by the operator. The corresponding
                    # `variable` fields will be set to the variable names assigned to the
                    # columns.
                "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
                    # distinguish between the build child and the probe child, or in the case
                    # of the child being an output variable, to represent the tag associated
                    # with the output variable.
                "childIndex": 42, # The node to which the link points.
              },
            ],
            "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
                # `SCALAR` PlanNode(s).
              "subqueries": { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
                  # where the `description` string of this node references a `SCALAR`
                  # subquery contained in the expression subtree rooted at this node. The
                  # referenced `SCALAR` subquery may not necessarily be a direct child of
                  # this node.
                "a_key": 42,
              },
              "description": "A String", # A string representation of the expression subtree rooted at this node.
            },
            "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
                # For example, a Parameter Reference node could have the following
                # information in its metadata:
                #
                # {
                #   "parameter_reference": "param1",
                #   "parameter_type": "array"
                # }
              "a_key": "", # Properties of the object.
            },
          },
        ],
      },
      "queryStats": { # Aggregated statistics from the execution of the query. Only present when
          # the query is profiled. For example, a query could return the statistics as
          # follows:
          #
          # {
          #   "rows_returned": "3",
          #   "elapsed_time": "1.22 secs",
          #   "cpu_time": "1.19 secs"
          # }
        "a_key": "", # Properties of the object.
      },
    },
    "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
      "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
          # set. For example, a SQL query like `"SELECT UserId, UserName FROM
          # Users"` could return a `row_type` value like:
          #
          # "fields": [
          #   { "name": "UserId", "type": { "code": "INT64" } },
          #   { "name": "UserName", "type": { "code": "STRING" } },
          # ]
        "fields": [ # The list of fields that make up this struct. Order is
            # significant, because values of this struct type are represented as
            # lists, where the order of field values matches the order of
            # fields in the StructType. In turn, the order of fields
            # matches the order of columns in a read request, or the order of
            # fields in the `SELECT` clause of a query.
          { # Message representing a single field of a struct.
            "type": # Object with schema name: Type # The type of the field.
            "name": "A String", # The name of the field. For reads, this is the column name. For
                # SQL queries, it is the column alias (e.g., `"Word"` in the
                # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
                # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
                # columns might have an empty name (e.g., `"SELECT
                # UPPER(ColName)"`). Note that a query result can contain
                # multiple fields with the same name.
          },
        ],
      },
      "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
          # information about the new transaction is yielded here.
        "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
            # for the transaction. Not returned by default: see
            # TransactionOptions.ReadOnly.return_read_timestamp.
            #
            # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
            # Example: `"2014-10-02T15:01:23.045123456Z"`.
        "id": "A String", # `id` may be used to identify the transaction in subsequent
            # Read,
            # ExecuteSql,
            # Commit, or
            # Rollback calls.
            #
            # Single-use read-only transactions do not have IDs, because
            # single-use transactions do not support multiple requests.
      },
    },
  }</pre>
</div>
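The ExecuteSqlRequest body documented above is a plain JSON-style dict passed as the `body=` argument. A minimal sketch of assembling one follows; the helper function and the `min_id` parameter are illustrative only (not part of the generated client), while the field names (`sql`, `params`, `paramTypes`, `transaction`) come from the schema above.

```python
# Illustrative helper (not part of the generated client) that assembles an
# ExecuteSqlRequest body using the field names documented above.
def build_execute_sql_body(sql, params=None, param_types=None,
                           transaction_id=None):
    body = {'sql': sql}  # `sql` is the only required field
    if params:
        body['params'] = params
    if param_types:
        # `paramTypes` disambiguates SQL types that JSON cannot express;
        # for example, INT64 parameter values are passed as JSON strings.
        body['paramTypes'] = param_types
    if transaction_id:
        # Reuse a previously started transaction. When `transaction` is
        # omitted, the query runs in a temporary strong read-only transaction.
        body['transaction'] = {'id': transaction_id}
    return body

body = build_execute_sql_body(
    'SELECT id, name FROM Users WHERE id > @min_id',
    params={'min_id': '100'},
    param_types={'min_id': {'code': 'INT64'}},
)
# The resulting dict would then be passed as `body=` to
# service.projects().instances().databases().sessions().executeSql(...)
```

The same body shape applies to `executeStreamingSql` below, which shares the ExecuteSqlRequest schema.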

<div class="method">
  <code class="details" id="executeStreamingSql">executeStreamingSql(session, body=None, x__xgafv=None)</code>
  <pre>Like ExecuteSql, except returns the result
set as a stream. Unlike ExecuteSql, there
is no limit on the size of the returned result set. However, no
individual row in the result set can exceed 100 MiB, and no
column value can exceed 10 MiB.

Args:
  session: string, Required. The session in which the SQL query should be performed. (required)
  body: object, The request body.
    The object takes the form of:

{ # The request for ExecuteSql and
    # ExecuteStreamingSql.
  "transaction": { # This message is used to select the transaction in which a # The transaction to use.
      #
      # For queries, if none is provided, the default is a temporary read-only
      # transaction with strong concurrency.
      #
      # Standard DML statements require a read-write transaction. To protect
      # against replays, single-use transactions are not supported. The caller
      # must either supply an existing transaction ID or begin a new transaction.
      #
      # Partitioned DML requires an existing Partitioned DML transaction ID.
      # Read or
      # ExecuteSql call runs.
      #
      # See TransactionOptions for more information about transactions.
    "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
        # it. The transaction ID of the new transaction is returned in
        # ResultSetMetadata.transaction, which is a Transaction.
        #
        #
        # Each session can have at most one active transaction at a time. After the
        # active transaction is completed, the session can immediately be
        # re-used for the next transaction. It is not necessary to create a
        # new session for each transaction.
        #
        # # Transaction Modes
        #
        # Cloud Spanner supports three transaction modes:
        #
        # 1. Locking read-write. This type of transaction is the only way
        #    to write data into Cloud Spanner. These transactions rely on
        #    pessimistic locking and, if necessary, two-phase commit.
        #    Locking read-write transactions may abort, requiring the
        #    application to retry.
        #
        # 2. Snapshot read-only. This transaction type provides guaranteed
        #    consistency across several reads, but does not allow
        #    writes. Snapshot read-only transactions can be configured to
        #    read at timestamps in the past. Snapshot read-only
        #    transactions do not need to be committed.
        #
        # 3. Partitioned DML. This type of transaction is used to execute
        #    a single Partitioned DML statement. Partitioned DML partitions
        #    the key space and runs the DML statement over each partition
        #    in parallel using separate, internal transactions that commit
        #    independently. Partitioned DML transactions do not need to be
        #    committed.
        #
        # For transactions that only read, snapshot read-only transactions
        # provide simpler semantics and are almost always faster. In
        # particular, read-only transactions do not take locks, so they do
        # not conflict with read-write transactions. As a consequence of not
        # taking locks, they also do not abort, so retry loops are not needed.
        #
        # Transactions may only read/write data in a single database. They
        # may, however, read/write data in different tables within that
        # database.
        #
        # ## Locking Read-Write Transactions
        #
        # Locking transactions may be used to atomically read-modify-write
        # data anywhere in a database. This type of transaction is externally
        # consistent.
        #
        # Clients should attempt to minimize the amount of time a transaction
        # is active. Faster transactions commit with higher probability
        # and cause less contention. Cloud Spanner attempts to keep read locks
        # active as long as the transaction continues to do reads, and the
        # transaction has not been terminated by
        # Commit or
        # Rollback. Long periods of
        # inactivity at the client may cause Cloud Spanner to release a
        # transaction's locks and abort it.
        #
        # Conceptually, a read-write transaction consists of zero or more
        # reads or SQL statements followed by
        # Commit. At any time before
        # Commit, the client can send a
        # Rollback request to abort the
        # transaction.
        #
        # ### Semantics
        #
        # Cloud Spanner can commit the transaction if all read locks it acquired
        # are still valid at commit time, and it is able to acquire write
        # locks for all writes. Cloud Spanner can abort the transaction for any
        # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
        # that the transaction has not modified any user data in Cloud Spanner.
        #
        # Unless the transaction commits, Cloud Spanner makes no guarantees about
        # how long the transaction's locks were held for. It is an error to
        # use Cloud Spanner locks for any sort of mutual exclusion other than
        # between Cloud Spanner transactions themselves.
        #
        # ### Retrying Aborted Transactions
        #
        # When a transaction aborts, the application can choose to retry the
        # whole transaction again. To maximize the chances of successfully
        # committing the retry, the client should execute the retry in the
        # same session as the original attempt. The original session's lock
        # priority increases with each consecutive abort, meaning that each
        # attempt has a slightly better chance of success than the previous.
        #
        # Under some circumstances (e.g., many transactions attempting to
        # modify the same row(s)), a transaction can abort many times in a
        # short period before successfully committing. Thus, it is not a good
        # idea to cap the number of retries a transaction can attempt;
        # instead, it is better to limit the total amount of wall time spent
        # retrying.
        #
        # ### Idle Transactions
        #
        # A transaction is considered idle if it has no outstanding reads or
        # SQL queries and has not started a read or SQL query within the last 10
        # seconds. Idle transactions can be aborted by Cloud Spanner so that they
        # don't hold on to locks indefinitely. In that case, the commit will
        # fail with error `ABORTED`.
        #
        # If this behavior is undesirable, periodically executing a simple
        # SQL query in the transaction (e.g., `SELECT 1`) prevents the
        # transaction from becoming idle.
        #
        # ## Snapshot Read-Only Transactions
        #
        # Snapshot read-only transactions provide a simpler method than
        # locking read-write transactions for doing several consistent
        # reads. However, this type of transaction does not support writes.
        #
        # Snapshot transactions do not take locks. Instead, they work by
        # choosing a Cloud Spanner timestamp, then executing all reads at that
        # timestamp. Since they do not acquire locks, they do not block
        # concurrent read-write transactions.
        #
        # Unlike locking read-write transactions, snapshot read-only
        # transactions never abort. They can fail if the chosen read
        # timestamp is garbage collected; however, the default garbage
        # collection policy is generous enough that most applications do not
        # need to worry about this in practice.
        #
        # Snapshot read-only transactions do not need to call
        # Commit or
        # Rollback (and in fact are not
        # permitted to do so).
        #
        # To execute a snapshot transaction, the client specifies a timestamp
        # bound, which tells Cloud Spanner how to choose a read timestamp.
        #
        # The types of timestamp bound are:
        #
        # - Strong (the default).
        # - Bounded staleness.
        # - Exact staleness.
        #
        # If the Cloud Spanner database to be read is geographically distributed,
        # stale read-only transactions can execute more quickly than strong
        # or read-write transactions, because they are able to execute far
        # from the leader replica.
        #
        # Each type of timestamp bound is discussed in detail below.
        #
        # ### Strong
        #
        # Strong reads are guaranteed to see the effects of all transactions
        # that have committed before the start of the read. Furthermore, all
        # rows yielded by a single read are consistent with each other -- if
        # any part of the read observes a transaction, all parts of the read
        # see the transaction.
        #
        # Strong reads are not repeatable: two consecutive strong read-only
        # transactions might return inconsistent results if there are
        # concurrent writes. If consistency across reads is required, the
        # reads should be executed within a transaction or at an exact read
        # timestamp.
        #
        # See TransactionOptions.ReadOnly.strong.
        #
        # ### Exact Staleness
        #
        # These timestamp bounds execute reads at a user-specified
        # timestamp. Reads at a timestamp are guaranteed to see a consistent
        # prefix of the global transaction history: they observe
        # modifications done by all transactions with a commit timestamp &lt;=
        # the read timestamp, and observe none of the modifications done by
        # transactions with a larger commit timestamp. They will block until
        # all conflicting transactions that may be assigned commit timestamps
        # &lt;= the read timestamp have finished.
        #
        # The timestamp can either be expressed as an absolute Cloud Spanner commit
        # timestamp or a staleness relative to the current time.
        #
        # These modes do not require a "negotiation phase" to pick a
        # timestamp. As a result, they execute slightly faster than the
        # equivalent boundedly stale concurrency modes. On the other hand,
        # boundedly stale reads usually return fresher results.
        #
        # See TransactionOptions.ReadOnly.read_timestamp and
        # TransactionOptions.ReadOnly.exact_staleness.
        #
        # ### Bounded Staleness
        #
        # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
        # subject to a user-provided staleness bound. Cloud Spanner chooses the
        # newest timestamp within the staleness bound that allows execution
        # of the reads at the closest available replica without blocking.
        #
        # All rows yielded are consistent with each other -- if any part of
        # the read observes a transaction, all parts of the read see the
        # transaction. Boundedly stale reads are not repeatable: two stale
        # reads, even if they use the same staleness bound, can execute at
        # different timestamps and thus return inconsistent results.
        #
        # Boundedly stale reads execute in two phases: the first phase
        # negotiates a timestamp among all replicas needed to serve the
        # read. In the second phase, reads are executed at the negotiated
        # timestamp.
        #
        # As a result of the two phase execution, bounded staleness reads are
        # usually a little slower than comparable exact staleness
        # reads. However, they are typically able to return fresher
        # results, and are more likely to execute at the closest replica.
        #
        # Because the timestamp negotiation requires up-front knowledge of
        # which rows will be read, it can only be used with single-use
        # read-only transactions.
        #
        # See TransactionOptions.ReadOnly.max_staleness and
        # TransactionOptions.ReadOnly.min_read_timestamp.
        #
        # ### Old Read Timestamps and Garbage Collection
        #
        # Cloud Spanner continuously garbage collects deleted and overwritten data
        # in the background to reclaim storage space. This process is known
        # as "version GC". By default, version GC reclaims versions after they
        # are one hour old. Because of this, Cloud Spanner cannot perform reads
        # at read timestamps more than one hour in the past. This
        # restriction also applies to in-progress reads and/or SQL queries whose
        # timestamps become too old while executing. Reads and SQL queries with
        # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
        #
        # ## Partitioned DML Transactions
        #
        # Partitioned DML transactions are used to execute DML statements with a
        # different execution strategy that provides different, and often better,
        # scalability properties for large, table-wide operations than DML in a
        # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
        # should prefer using ReadWrite transactions.
        #
        # Partitioned DML partitions the keyspace and runs the DML statement on each
        # partition in separate, internal transactions. These transactions commit
        # automatically when complete, and run independently from one another.
        #
        # To reduce lock contention, this execution strategy only acquires read locks
        # on rows that match the WHERE clause of the statement. Additionally, the
        # smaller per-partition transactions hold locks for less time.
        #
        # That said, Partitioned DML is not a drop-in replacement for standard DML used
        # in ReadWrite transactions.
        #
        # - The DML statement must be fully-partitionable. Specifically, the statement
        #   must be expressible as the union of many statements which each access only
        #   a single row of the table.
        #
        # - The statement is not applied atomically to all rows of the table. Rather,
        #   the statement is applied atomically to partitions of the table, in
        #   independent transactions. Secondary index rows are updated atomically
        #   with the base table rows.
        #
        # - Partitioned DML does not guarantee exactly-once execution semantics
        #   against a partition. The statement will be applied at least once to each
        #   partition. It is strongly recommended that the DML statement should be
        #   idempotent to avoid unexpected results. For instance, it is potentially
        #   dangerous to run a statement such as
        #   `UPDATE table SET column = column + 1` as it could be run multiple times
        #   against some rows.
        #
        # - The partitions are committed automatically - there is no support for
        #   Commit or Rollback. If the call returns an error, or if the client issuing
        #   the ExecuteSql call dies, it is possible that some rows had the statement
        #   executed on them successfully. It is also possible that the statement was
        #   never executed against other rows.
        #
        # - Partitioned DML transactions may only contain the execution of a single
        #   DML statement via ExecuteSql or ExecuteStreamingSql.
        #
        # - If any error is encountered during the execution of the partitioned DML
        #   operation (for instance, a UNIQUE INDEX violation, division by zero, or a
        #   value that cannot be stored due to schema constraints), then the
        #   operation is stopped at that point and an error is returned. It is
        #   possible that at this point, some partitions have been committed (or even
        #   committed multiple times), and other partitions have not been run at all.
        #
        # Given the above, Partitioned DML is a good fit for large, database-wide
        # operations that are idempotent, such as deleting old rows from a very large
        # table.
3615 "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04003616 #
3617 # Authorization to begin a read-write transaction requires
3618 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
3619 # on the `session` resource.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003620 # transaction type has no options.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04003621 },
"readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
"minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read's deadline.
#
# Useful for large-scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client's local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
"exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client's
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
"returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
"strong": True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
"partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
"singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
# This is the most efficient way to execute a transaction that
# consists of a single SQL query.
#
#
# Each session can have at most one active transaction at a time. After the
# active transaction is completed, the session can immediately be
# re-used for the next transaction. It is not necessary to create a
# new session for each transaction.
#
# # Transaction Modes
#
# Cloud Spanner supports three transaction modes:
#
# 1. Locking read-write. This type of transaction is the only way
# to write data into Cloud Spanner. These transactions rely on
# pessimistic locking and, if necessary, two-phase commit.
# Locking read-write transactions may abort, requiring the
# application to retry.
#
# 2. Snapshot read-only. This transaction type provides guaranteed
# consistency across several reads, but does not allow
# writes. Snapshot read-only transactions can be configured to
# read at timestamps in the past. Snapshot read-only
# transactions do not need to be committed.
#
# 3. Partitioned DML. This type of transaction is used to execute
# a single Partitioned DML statement. Partitioned DML partitions
# the key space and runs the DML statement over each partition
# in parallel using separate, internal transactions that commit
# independently. Partitioned DML transactions do not need to be
# committed.
#
# For transactions that only read, snapshot read-only transactions
# provide simpler semantics and are almost always faster. In
# particular, read-only transactions do not take locks, so they do
# not conflict with read-write transactions. As a consequence of not
# taking locks, they also do not abort, so retry loops are not needed.
#
# Transactions may only read/write data in a single database. They
# may, however, read/write data in different tables within that
# database.
#
# ## Locking Read-Write Transactions
#
# Locking transactions may be used to atomically read-modify-write
# data anywhere in a database. This type of transaction is externally
# consistent.
#
# Clients should attempt to minimize the amount of time a transaction
# is active. Faster transactions commit with higher probability
# and cause less contention. Cloud Spanner attempts to keep read locks
# active as long as the transaction continues to do reads, and the
# transaction has not been terminated by
# Commit or
# Rollback. Long periods of
# inactivity at the client may cause Cloud Spanner to release a
# transaction's locks and abort it.
#
# Conceptually, a read-write transaction consists of zero or more
# reads or SQL statements followed by
# Commit. At any time before
# Commit, the client can send a
# Rollback request to abort the
# transaction.
#
# ### Semantics
#
# Cloud Spanner can commit the transaction if all read locks it acquired
# are still valid at commit time, and it is able to acquire write
# locks for all writes. Cloud Spanner can abort the transaction for any
# reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
# that the transaction has not modified any user data in Cloud Spanner.
#
# Unless the transaction commits, Cloud Spanner makes no guarantees about
# how long the transaction's locks were held for. It is an error to
# use Cloud Spanner locks for any sort of mutual exclusion other than
# between Cloud Spanner transactions themselves.
#
# ### Retrying Aborted Transactions
#
# When a transaction aborts, the application can choose to retry the
# whole transaction again. To maximize the chances of successfully
# committing the retry, the client should execute the retry in the
# same session as the original attempt. The original session's lock
# priority increases with each consecutive abort, meaning that each
# attempt has a slightly better chance of success than the previous.
#
# Under some circumstances (e.g., many transactions attempting to
# modify the same row(s)), a transaction can abort many times in a
# short period before successfully committing. Thus, it is not a good
# idea to cap the number of retries a transaction can attempt;
# instead, it is better to limit the total amount of wall time spent
# retrying.
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don't hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp &lt;=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# &lt;= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a "negotiation phase" to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as "version GC". By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamps become too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
# should prefer using ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
# must be expressible as the union of many statements which each access only
# a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
# the statement is applied atomically to partitions of the table, in
# independent transactions. Secondary index rows are updated atomically
# with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
# against a partition. The statement will be applied at least once to each
# partition. It is strongly recommended that the DML statement should be
# idempotent to avoid unexpected results. For instance, it is potentially
# dangerous to run a statement such as
# `UPDATE table SET column = column + 1` as it could be run multiple times
# against some rows.
#
# - The partitions are committed automatically - there is no support for
# Commit or Rollback. If the call returns an error, or if the client issuing
# the ExecuteSql call dies, it is possible that some rows had the statement
# executed on them successfully. It is also possible that the statement was
# never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
# DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
# operation (for instance, a UNIQUE INDEX violation, division by zero, or a
# value that cannot be stored due to schema constraints), then the
# operation is stopped at that point and an error is returned. It is
# possible that at this point, some partitions have been committed (or even
# committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
"readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
#
# Authorization to begin a read-write transaction requires
# `spanner.databases.beginOrRollbackReadWriteTransaction` permission
# on the `session` resource.
# transaction type has no options.
},
"readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
#
# Authorization to begin a read-only transaction requires
# `spanner.databases.beginReadOnlyTransaction` permission
# on the `session` resource.
"minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
#
# This is useful for requesting fresher data than some previous
# read, or data that is fresh enough to observe the effects of some
# previously committed transaction whose timestamp is known.
#
# Note that this option can only be used in single-use transactions.
#
# A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
# reads at a specific timestamp are repeatable; the same read at
# the same timestamp always returns the same data. If the
# timestamp is in the future, the read will block until the
# specified timestamp, modulo the read's deadline.
#
# Useful for large-scale consistent reads such as mapreduces, or
# for coordinating many reads against a consistent snapshot of the
# data.
#
# A timestamp in RFC3339 UTC "Zulu" format, accurate to nanoseconds.
# Example: `"2014-10-02T15:01:23.045123456Z"`.
"maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
# seconds. Guarantees that all writes that have committed more
# than the specified number of seconds ago are visible. Because
# Cloud Spanner chooses the exact timestamp, this mode works even if
# the client's local clock is substantially skewed from Cloud Spanner
# commit timestamps.
#
# Useful for reading the freshest data available at a nearby
# replica, while bounding the possible staleness if the local
# replica has fallen behind.
#
# Note that this option can only be used in single-use
# transactions.
"exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
# old. The timestamp is chosen soon after the read is started.
#
# Guarantees that all writes that have committed more than the
# specified number of seconds ago are visible. Because Cloud Spanner
# chooses the exact timestamp, this mode works even if the client's
# local clock is substantially skewed from Cloud Spanner commit
# timestamps.
#
# Useful for reading at nearby replicas without the distributed
# timestamp negotiation overhead of `max_staleness`.
"returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
# the Transaction message that describes the transaction.
"strong": True or False, # Read at a timestamp where all previously committed transactions
# are visible.
},
"partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
#
# Authorization to begin a Partitioned DML transaction requires
# `spanner.databases.beginPartitionedDmlTransaction` permission
# on the `session` resource.
},
},
"id": "A String", # Execute the read or SQL query in a previously-started transaction.
},
"seqno": "A String", # A per-transaction sequence number used to identify this request. This field
# makes each request idempotent such that if the request is received multiple
# times, at most one will succeed.
#
# The sequence number must be monotonically increasing within the
# transaction. If a request arrives for the first time with an out-of-order
# sequence number, the transaction may be aborted. Replays of previously
# handled requests will yield the same response as the first execution.
#
# Required for DML statements. Ignored for queries.
"resumeToken": "A String", # If this request is resuming a previously interrupted SQL statement
# execution, `resume_token` should be copied from the last
# PartialResultSet yielded before the interruption. Doing this
# enables the new SQL statement execution to resume where the last one left
# off. The rest of the request parameters must exactly match the
# request that yielded this token.
"partitionToken": "A String", # If present, results will be restricted to the specified partition
# previously created using PartitionQuery(). There must be an exact
# match for the values of fields common to this message and the
# PartitionQueryRequest message used to create this partition_token.
"paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
# from a JSON value. For example, values of type `BYTES` and values
# of type `STRING` both appear in params as JSON strings.
#
# In these cases, `param_types` can be used to specify the exact
# SQL type for some or all of the SQL statement parameters. See the
# definition of Type for more information
# about SQL types.
"a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
# table cell or returned from an SQL query.
"structType": { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
# provides type information for the struct's fields.
"fields": [ # The list of fields that make up this struct. Order is
# significant, because values of this struct type are represented as
# lists, where the order of field values matches the order of
# fields in the StructType. In turn, the order of fields
# matches the order of columns in a read request, or the order of
# fields in the `SELECT` clause of a query.
{ # Message representing a single field of a struct.
"type": # Object with schema name: Type # The type of the field.
"name": "A String", # The name of the field. For reads, this is the column name. For
# SQL queries, it is the column alias (e.g., `"Word"` in the
# query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
# `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
# columns might have an empty name (e.g., `"SELECT
# UPPER(ColName)"`). Note that a query result can contain
# multiple fields with the same name.
},
],
},
"code": "A String", # Required. The TypeCode for this type.
"arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
# is the type of the array elements.
},
},
"queryOptions": { # Query optimizer configuration. # Query optimizer configuration to use for the given query.
"optimizerVersion": "A String", # An option to control the selection of optimizer version.
#
# This parameter allows individual queries to pick different query
# optimizer versions.
#
# Specifying "latest" as a value instructs Cloud Spanner to use the
# latest supported query optimizer version. If not specified, Cloud Spanner
# uses the optimizer version set in the database-level options. Any other
# positive integer (from the list of supported optimizer versions)
# overrides the default optimizer version for query execution.
# The list of supported optimizer versions can be queried from
# SPANNER_SYS.SUPPORTED_OPTIMIZER_VERSIONS. Executing a SQL statement
# with an invalid optimizer version will fail with a syntax error
# (`INVALID_ARGUMENT`) status.
#
# The `optimizer_version` statement hint has precedence over this setting.
},
"params": { # Parameter names and values that bind to placeholders in the SQL string.
#
# A parameter placeholder consists of the `@` character followed by the
# parameter name (for example, `@firstName`). Parameter names can contain
# letters, numbers, and underscores.
#
# Parameters can appear anywhere that a literal value is expected. The same
# parameter name can be used more than once, for example:
#
# `"WHERE id &gt; @msg_id AND id &lt; @msg_id + 100"`
#
# It is an error to execute a SQL statement with unbound parameters.
"a_key": "", # Properties of the object.
},
"sql": "A String", # Required. The SQL string.
"queryMode": "A String", # Used to control the amount of debugging information returned in
# ResultSetStats. If partition_token is set, query_mode can only
# be set to QueryMode.NORMAL.
}

x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

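The `transaction`, `sql`, `params`, and `paramTypes` fields described above combine into a single request body. The following sketch assembles a minimal body with a single-use strong read-only transaction and one typed parameter; the helper name, SQL text, and parameter names are illustrative assumptions, not part of this API:

```python
# Sketch: assemble an ExecuteSqlRequest-shaped body as described above.
# The SQL string and parameter names here are hypothetical examples.
def build_execute_sql_body(sql, params=None, param_types=None):
    body = {
        "sql": sql,
        # Single-use, strong read-only transaction: reads observe all
        # transactions committed before the read starts.
        "transaction": {
            "singleUse": {
                "readOnly": {
                    "strong": True,
                    "returnReadTimestamp": True,
                }
            }
        },
    }
    if params:
        body["params"] = params            # binds @name placeholders in sql
    if param_types:
        body["paramTypes"] = param_types   # disambiguates e.g. BYTES vs STRING
    return body

body = build_execute_sql_body(
    "SELECT id FROM Messages WHERE id &gt; @msg_id",
    params={"msg_id": "42"},
    param_types={"msg_id": {"code": "INT64"}},
)
```

A real call would pass such a body to `sessions().executeStreamingSql(session=..., body=body)`.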
Returns:
  An object of the form:

{ # Partial results from a streaming read or SQL query. Streaming reads and
# SQL queries better tolerate large result sets, large rows, and large
# values, but are a little trickier to consume.
"resumeToken": "A String", # Streaming calls might be interrupted for a variety of reasons, such
# as TCP connection loss. If this occurs, the stream of results can
# be resumed by re-sending the original request and including
# `resume_token`. Note that executing any other transaction in the
# same session invalidates the token.
"chunkedValue": True or False, # If true, then the final value in values is chunked, and must
# be combined with more values from subsequent `PartialResultSet`s
# to obtain a complete field value.
"values": [ # A streamed result set consists of a stream of values, which might
# be split into many `PartialResultSet` messages to accommodate
# large rows and/or large values. Every N complete values defines a
# row, where N is equal to the number of entries in
# metadata.row_type.fields.
#
# Most values are encoded based on type as described
# here.
#
# It is possible that the last value in values is "chunked",
# meaning that the rest of the value is sent in subsequent
# `PartialResultSet`(s). This is denoted by the chunked_value
# field. Two or more chunked values can be merged to form a
# complete value as follows:
#
# * `bool/number/null`: cannot be chunked
# * `string`: concatenate the strings
# * `list`: concatenate the lists. If the last element in a list is a
# `string`, `list`, or `object`, merge it with the first element in
# the next list by applying these rules recursively.
# * `object`: concatenate the (field name, field value) pairs. If a
# field name is duplicated, then apply these rules recursively
# to merge the field values.
#
# Some examples of merging:
#
# # Strings are concatenated.
# "foo", "bar" =&gt; "foobar"
#
# # Lists of non-strings are concatenated.
# [2, 3], [4] =&gt; [2, 3, 4]
#
# # Lists are concatenated, but the last and first elements are merged
# # because they are strings.
# ["a", "b"], ["c", "d"] =&gt; ["a", "bc", "d"]
#
# # Lists are concatenated, but the last and first elements are merged
# # because they are lists. Recursively, the last and first elements
# # of the inner lists are merged because they are strings.
# ["a", ["b", "c"]], [["d"], "e"] =&gt; ["a", ["b", "cd"], "e"]
#
# # Non-overlapping object fields are combined.
# {"a": "1"}, {"b": "2"} =&gt; {"a": "1", "b": "2"}
#
# # Overlapping object fields are merged.
# {"a": "1"}, {"a": "2"} =&gt; {"a": "12"}
#
# # Examples of merging objects containing lists of strings.
# {"a": ["1"]}, {"a": ["2"]} =&gt; {"a": ["12"]}
#
# For a more complete example, suppose a streaming SQL query is
# yielding a result set whose rows contain a single string
# field. The following `PartialResultSet`s might be yielded:
#
# {
# "metadata": { ... }
# "values": ["Hello", "W"]
# "chunked_value": true
# "resume_token": "Af65..."
# }
# {
# "values": ["orl"]
# "chunked_value": true
# "resume_token": "Bqp2..."
# }
# {
# "values": ["d"]
# "resume_token": "Zx1B..."
# }
#
# This sequence of `PartialResultSet`s encodes two rows, one
# containing the field value `"Hello"`, and a second containing the
# field value `"World" = "W" + "orl" + "d"`.
"",
],
4223    "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the statement that produced this
4224        # streaming result set. These can be requested by setting
4225 # ExecuteSqlRequest.query_mode and are sent
4226 # only once with the last response in the stream.
4227        # This field will also be present in the last response for DML
4228 # statements.
4229 "rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
4230 # returns a lower bound of the rows modified.
4231 "rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
4232      "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
4233 "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
4234 # with the plan root. Each PlanNode's `id` corresponds to its index in
4235 # `plan_nodes`.
4236 { # Node information for nodes appearing in a QueryPlan.plan_nodes.
4237          "index": 42, # The `PlanNode`'s index in the node list.
4238 "kind": "A String", # Used to determine the type of node. May be needed for visualizing
4239            # different kinds of nodes differently. For example, if the node is a
4240 # SCALAR node, it will have a condensed representation
4241 # which can be used to directly embed a description of the node in its
4242 # parent.
4243 "displayName": "A String", # The display name for the node.
4244 "executionStats": { # The execution statistics associated with the node, contained in a group of
4245 # key-value pairs. Only present if the plan was returned as a result of a
4246 # profile query. For example, number of executions, number of rows/time per
4247 # execution etc.
4248 "a_key": "", # Properties of the object.
4249 },
4250 "childLinks": [ # List of child node `index`es and their relationship to this parent.
4251 { # Metadata associated with a parent-child relationship appearing in a
4252 # PlanNode.
4253 "variable": "A String", # Only present if the child node is SCALAR and corresponds
4254 # to an output variable of the parent node. The field carries the name of
4255 # the output variable.
4256 # For example, a `TableScan` operator that reads rows from a table will
4257 # have child links to the `SCALAR` nodes representing the output variables
4258 # created for each column that is read by the operator. The corresponding
4259 # `variable` fields will be set to the variable names assigned to the
4260 # columns.
4261              "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
4262 # distinguish between the build child and the probe child, or in the case
4263 # of the child being an output variable, to represent the tag associated
4264 # with the output variable.
4265              "childIndex": 42, # The node to which the link points.
4266            },
4267 ],
4268 "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
4269 # `SCALAR` PlanNode(s).
4270            "subqueries": { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
4271                # where the `description` string of this node references a `SCALAR`
4272 # subquery contained in the expression subtree rooted at this node. The
4273 # referenced `SCALAR` subquery may not necessarily be a direct child of
4274 # this node.
4275 "a_key": 42,
4276 },
4277 "description": "A String", # A string representation of the expression subtree rooted at this node.
4278 },
4279 "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
4280 # For example, a Parameter Reference node could have the following
4281 # information in its metadata:
4282 #
4283 # {
4284 # "parameter_reference": "param1",
4285 # "parameter_type": "array"
4286 # }
4287 "a_key": "", # Properties of the object.
4288 },
4289 },
4290 ],
4291 },
4292 "queryStats": { # Aggregated statistics from the execution of the query. Only present when
4293 # the query is profiled. For example, a query could return the statistics as
4294 # follows:
4295 #
4296 # {
4297 # "rows_returned": "3",
4298 # "elapsed_time": "1.22 secs",
4299 # "cpu_time": "1.19 secs"
4300 # }
4301 "a_key": "", # Properties of the object.
4302 },
4303 },
4304 "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
4305 # Only present in the first response.
4306 "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
4307 # set. For example, a SQL query like `"SELECT UserId, UserName FROM
4308 # Users"` could return a `row_type` value like:
4309 #
4310 # "fields": [
4311 # { "name": "UserId", "type": { "code": "INT64" } },
4312 # { "name": "UserName", "type": { "code": "STRING" } },
4313 # ]
4314 "fields": [ # The list of fields that make up this struct. Order is
4315 # significant, because values of this struct type are represented as
4316 # lists, where the order of field values matches the order of
4317 # fields in the StructType. In turn, the order of fields
4318 # matches the order of columns in a read request, or the order of
4319 # fields in the `SELECT` clause of a query.
4320 { # Message representing a single field of a struct.
4321          "type": # Object with schema name: Type # The type of the field.
4322          "name": "A String", # The name of the field. For reads, this is the column name. For
4323 # SQL queries, it is the column alias (e.g., `"Word"` in the
4324 # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
4325 # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
4326          # columns might have an empty name (e.g., `"SELECT
4327 # UPPER(ColName)"`). Note that a query result can contain
4328 # multiple fields with the same name.
4329 },
4330 ],
4331 },
4332 "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
4333 # information about the new transaction is yielded here.
4334 "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
4335 # for the transaction. Not returned by default: see
4336 # TransactionOptions.ReadOnly.return_read_timestamp.
4337        #
4338 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
4339 # Example: `"2014-10-02T15:01:23.045123456Z"`.
4340      "id": "A String", # `id` may be used to identify the transaction in subsequent
4341 # Read,
4342 # ExecuteSql,
4343 # Commit, or
4344 # Rollback calls.
4345 #
4346 # Single-use read-only transactions do not have IDs, because
4347 # single-use transactions do not support multiple requests.
4348 },
4349 },
4350 }</pre>
4351</div>
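For readers implementing their own client, the `chunked_value` merge rules documented above can be sketched in Python. The function below is illustrative only (it is not part of the generated library), and assumes values have already been decoded from JSON into Python strings, lists, and dicts:

```python
def merge_chunked_values(prev, nxt):
    """Merge two chunked PartialResultSet values per the documented rules.

    Strings concatenate. Lists concatenate, recursively merging the
    boundary elements when the last element of `prev` is a string, list,
    or object. Objects concatenate their (field name, field value) pairs,
    recursively merging duplicated field names.
    """
    if isinstance(prev, str) and isinstance(nxt, str):
        return prev + nxt
    if isinstance(prev, list) and isinstance(nxt, list):
        if prev and nxt and isinstance(prev[-1], (str, list, dict)):
            # Merge the boundary pair, keep the rest of both lists.
            return prev[:-1] + [merge_chunked_values(prev[-1], nxt[0])] + nxt[1:]
        return prev + nxt
    if isinstance(prev, dict) and isinstance(nxt, dict):
        merged = dict(prev)
        for key, value in nxt.items():
            merged[key] = (merge_chunked_values(merged[key], value)
                           if key in merged else value)
        return merged
    raise TypeError("cannot merge %r with %r" % (prev, nxt))
```

Applying it to the streaming example above, `["Hello", "W"]` merged with `["orl"]` and then `["d"]` reassembles `["Hello", "World"]`.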
4352
4353<div class="method">
4354 <code class="details" id="get">get(name, x__xgafv=None)</code>
4355 <pre>Gets a session. Returns `NOT_FOUND` if the session does not exist.
4356This is mainly useful for determining whether a session is still
4357alive.
4358
4359Args:
4360 name: string, Required. The name of the session to retrieve. (required)
4361 x__xgafv: string, V1 error format.
4362 Allowed values
4363 1 - v1 error format
4364 2 - v2 error format
4365
4366Returns:
4367 An object of the form:
4368
4369 { # A session in the Cloud Spanner API.
4370    "labels": { # The labels for the session.
4371 #
4372 # * Label keys must be between 1 and 63 characters long and must conform to
4373 # the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
4374 # * Label values must be between 0 and 63 characters long and must conform
4375 # to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
4376 # * No more than 64 labels can be associated with a given session.
4377 #
4378 # See https://goo.gl/xmQnxf for more information on and examples of labels.
4379 "a_key": "A String",
4380 },
4381 "name": "A String", # The name of the session. This is always system-assigned; values provided
4382 # when creating a session are ignored.
4383 "approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session is last used. It is
4384 # typically earlier than the actual last use time.
4385 "createTime": "A String", # Output only. The timestamp when the session is created.
4386 }</pre>
4387</div>
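The session label constraints quoted in the response schema above can be checked client-side before a create call. The helper below is an illustrative sketch, not part of the generated library; it simply encodes the documented regular expressions and limits:

```python
import re

# Keys: 1-63 characters matching [a-z]([-a-z0-9]*[a-z0-9])?.
_LABEL_KEY_RE = re.compile(r"^[a-z]([-a-z0-9]*[a-z0-9])?$")
# Values: 0-63 characters matching ([a-z]([-a-z0-9]*[a-z0-9])?)?
# (the empty string is allowed).
_LABEL_VALUE_RE = re.compile(r"^([a-z]([-a-z0-9]*[a-z0-9])?)?$")

def validate_session_labels(labels):
    """Return True if `labels` satisfies the documented session-label rules."""
    if len(labels) > 64:  # No more than 64 labels per session.
        return False
    for key, value in labels.items():
        if not (len(key) <= 63 and _LABEL_KEY_RE.match(key)):
            return False
        if not (len(value) <= 63 and _LABEL_VALUE_RE.match(value)):
            return False
    return True
```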
4388
4389<div class="method">
4390 <code class="details" id="list">list(database, pageSize=None, filter=None, pageToken=None, x__xgafv=None)</code>
4391 <pre>Lists all sessions in a given database.
4392
4393Args:
4394 database: string, Required. The database in which to list sessions. (required)
4395 pageSize: integer, Number of sessions to be returned in the response. If 0 or less, defaults
4396to the server's maximum allowed page size.
4397 filter: string, An expression for filtering the results of the request. Filter rules are
4398case insensitive. The fields eligible for filtering are:
4399
4400 * `labels.key` where key is the name of a label
4401
4402Some examples of using filters are:
4403
4404  * `labels.env:*` --&gt; The session has the label "env".
4405  * `labels.env:dev` --&gt; The session has the label "env" and the value of
4406       the label contains the string "dev".
4407 pageToken: string, If non-empty, `page_token` should contain a
4408next_page_token from a previous
4409ListSessionsResponse.
4410 x__xgafv: string, V1 error format.
4411 Allowed values
4412 1 - v1 error format
4413 2 - v2 error format
4414
4415Returns:
4416 An object of the form:
4417
4418 { # The response for ListSessions.
4419 "nextPageToken": "A String", # `next_page_token` can be sent in a subsequent
4420 # ListSessions call to fetch more of the matching
4421 # sessions.
4422 "sessions": [ # The list of requested sessions.
4423 { # A session in the Cloud Spanner API.
4424 "labels": { # The labels for the session.
4425 #
4426 # * Label keys must be between 1 and 63 characters long and must conform to
4427 # the following regular expression: `[a-z]([-a-z0-9]*[a-z0-9])?`.
4428 # * Label values must be between 0 and 63 characters long and must conform
4429 # to the regular expression `([a-z]([-a-z0-9]*[a-z0-9])?)?`.
4430 # * No more than 64 labels can be associated with a given session.
4431 #
4432 # See https://goo.gl/xmQnxf for more information on and examples of labels.
4433 "a_key": "A String",
4434 },
4435 "name": "A String", # The name of the session. This is always system-assigned; values provided
4436 # when creating a session are ignored.
4437 "approximateLastUseTime": "A String", # Output only. The approximate timestamp when the session is last used. It is
4438 # typically earlier than the actual last use time.
4439 "createTime": "A String", # Output only. The timestamp when the session is created.
4440 },
4441 ],
4442 }</pre>
4443</div>
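The `labels.key:*` and `labels.key:value` filter forms described above can be mirrored client-side, which is handy for testing filter expressions against locally cached sessions. The helper name below is illustrative, not part of the API:

```python
def session_matches_label_filter(session, key, substring=None):
    """Client-side analogue of `labels.<key>:*` and `labels.<key>:<substring>`.

    With substring=None this mimics `labels.key:*` (the session has the
    label); otherwise it mimics `labels.key:substring` (the label's value
    contains the substring; filter rules are case insensitive).
    """
    labels = session.get("labels", {})
    if key not in labels:
        return False
    if substring is None:
        return True
    return substring.lower() in labels[key].lower()
```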
4444
4445<div class="method">
4446 <code class="details" id="list_next">list_next(previous_request, previous_response)</code>
4447 <pre>Retrieves the next page of results.
4448
4449Args:
4450 previous_request: The request for the previous page. (required)
4451 previous_response: The response from the request for the previous page. (required)
4452
4453Returns:
4454 A request object that you can call 'execute()' on to request the next
4455 page. Returns None if there are no more items in the collection.
4456 </pre>
4457</div>
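The `list` / `list_next` pair above follows the standard googleapiclient pagination pattern: keep calling `list_next(previous_request, previous_response)` until it returns `None`. A small sketch (the generator name is illustrative; `sessions_resource` stands for the object returned by `service.projects().instances().databases().sessions()`):

```python
def iter_sessions(sessions_resource, database, page_size=100):
    """Yield every session in `database`, following next_page_token pages."""
    request = sessions_resource.list(database=database, pageSize=page_size)
    while request is not None:
        response = request.execute()
        for session in response.get("sessions", []):
            yield session
        # list_next returns None once the response carries no next_page_token.
        request = sessions_resource.list_next(request, response)
```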
4458
4459<div class="method">
4460    <code class="details" id="partitionQuery">partitionQuery(session, body=None, x__xgafv=None)</code>
4461  <pre>Creates a set of partition tokens that can be used to execute a query
4462operation in parallel. Each of the returned partition tokens can be used
4463by ExecuteStreamingSql to specify a subset
4464of the query result to read. The same session and read-only transaction
4465must be used by the PartitionQueryRequest used to create the
4466partition tokens and the ExecuteSqlRequests that use the partition tokens.
4467
4468Partition tokens become invalid when the session used to create them
4469is deleted, is idle for too long, begins a new transaction, or becomes too
4470old. When any of these happen, it is not possible to resume the query, and
4471the whole operation must be restarted from the beginning.
4472
4473Args:
4474 session: string, Required. The session used to create the partitions. (required)
4475  body: object, The request body.
4476    The object takes the form of:
4477
4478{ # The request for PartitionQuery
4479 "paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
4480 # from a JSON value. For example, values of type `BYTES` and values
4481 # of type `STRING` both appear in params as JSON strings.
4482 #
4483 # In these cases, `param_types` can be used to specify the exact
4484 # SQL type for some or all of the SQL query parameters. See the
4485 # definition of Type for more information
4486 # about SQL types.
4487 "a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
4488 # table cell or returned from an SQL query.
4489      "structType": { # `StructType` defines the fields of a STRUCT type. # If code == STRUCT, then `struct_type`
4490          # provides type information for the struct's fields.
4491        "fields": [ # The list of fields that make up this struct. Order is
4492 # significant, because values of this struct type are represented as
4493 # lists, where the order of field values matches the order of
4494 # fields in the StructType. In turn, the order of fields
4495 # matches the order of columns in a read request, or the order of
4496 # fields in the `SELECT` clause of a query.
4497 { # Message representing a single field of a struct.
4498 "type": # Object with schema name: Type # The type of the field.
4499 "name": "A String", # The name of the field. For reads, this is the column name. For
4500 # SQL queries, it is the column alias (e.g., `"Word"` in the
4501 # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
4502 # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
4503            # columns might have an empty name (e.g., `"SELECT
4504 # UPPER(ColName)"`). Note that a query result can contain
4505 # multiple fields with the same name.
4506 },
4507 ],
4508 },
4509      "code": "A String", # Required. The TypeCode for this type.
4510 "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
4511 # is the type of the array elements.
4512 },
4513 },
4514 "partitionOptions": { # Options for a PartitionQueryRequest and # Additional options that affect how many partitions are created.
4515 # PartitionReadRequest.
4516 "maxPartitions": "A String", # **Note:** This hint is currently ignored by PartitionQuery and
4517 # PartitionRead requests.
4518 #
4519 # The desired maximum number of partitions to return. For example, this may
4520 # be set to the number of workers available. The default for this option
4521 # is currently 10,000. The maximum value is currently 200,000. This is only
4522 # a hint. The actual number of partitions returned may be smaller or larger
4523 # than this maximum count request.
4524 "partitionSizeBytes": "A String", # **Note:** This hint is currently ignored by PartitionQuery and
4525 # PartitionRead requests.
4526 #
4527 # The desired data size for each partition generated. The default for this
4528 # option is currently 1 GiB. This is only a hint. The actual size of each
4529 # partition may be smaller or larger than this size request.
4530 },
4531    "transaction": { # This message is used to select the transaction in which a # Read-only snapshot transactions are supported; read/write and single-use
4532 # transactions are not.
4533 # Read or
4534 # ExecuteSql call runs.
4535 #
4536 # See TransactionOptions for more information about transactions.
4537 "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
4538 # it. The transaction ID of the new transaction is returned in
4539 # ResultSetMetadata.transaction, which is a Transaction.
4540 #
4541 #
4542 # Each session can have at most one active transaction at a time. After the
4543 # active transaction is completed, the session can immediately be
4544 # re-used for the next transaction. It is not necessary to create a
4545 # new session for each transaction.
4546 #
4547 # # Transaction Modes
4548 #
4549 # Cloud Spanner supports three transaction modes:
4550 #
4551 # 1. Locking read-write. This type of transaction is the only way
4552 # to write data into Cloud Spanner. These transactions rely on
4553 # pessimistic locking and, if necessary, two-phase commit.
4554 # Locking read-write transactions may abort, requiring the
4555 # application to retry.
4556 #
4557 # 2. Snapshot read-only. This transaction type provides guaranteed
4558 # consistency across several reads, but does not allow
4559 # writes. Snapshot read-only transactions can be configured to
4560 # read at timestamps in the past. Snapshot read-only
4561 # transactions do not need to be committed.
4562 #
4563 # 3. Partitioned DML. This type of transaction is used to execute
4564 # a single Partitioned DML statement. Partitioned DML partitions
4565 # the key space and runs the DML statement over each partition
4566 # in parallel using separate, internal transactions that commit
4567 # independently. Partitioned DML transactions do not need to be
4568 # committed.
4569 #
4570 # For transactions that only read, snapshot read-only transactions
4571 # provide simpler semantics and are almost always faster. In
4572 # particular, read-only transactions do not take locks, so they do
4573 # not conflict with read-write transactions. As a consequence of not
4574 # taking locks, they also do not abort, so retry loops are not needed.
4575 #
4576 # Transactions may only read/write data in a single database. They
4577 # may, however, read/write data in different tables within that
4578 # database.
4579 #
4580 # ## Locking Read-Write Transactions
4581 #
4582 # Locking transactions may be used to atomically read-modify-write
4583 # data anywhere in a database. This type of transaction is externally
4584 # consistent.
4585 #
4586 # Clients should attempt to minimize the amount of time a transaction
4587 # is active. Faster transactions commit with higher probability
4588 # and cause less contention. Cloud Spanner attempts to keep read locks
4589 # active as long as the transaction continues to do reads, and the
4590 # transaction has not been terminated by
4591 # Commit or
4592 # Rollback. Long periods of
4593 # inactivity at the client may cause Cloud Spanner to release a
4594 # transaction's locks and abort it.
4595 #
4596 # Conceptually, a read-write transaction consists of zero or more
4597 # reads or SQL statements followed by
4598 # Commit. At any time before
4599 # Commit, the client can send a
4600 # Rollback request to abort the
4601 # transaction.
4602 #
4603 # ### Semantics
4604 #
4605 # Cloud Spanner can commit the transaction if all read locks it acquired
4606 # are still valid at commit time, and it is able to acquire write
4607 # locks for all writes. Cloud Spanner can abort the transaction for any
4608 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
4609 # that the transaction has not modified any user data in Cloud Spanner.
4610 #
4611 # Unless the transaction commits, Cloud Spanner makes no guarantees about
4612 # how long the transaction's locks were held for. It is an error to
4613 # use Cloud Spanner locks for any sort of mutual exclusion other than
4614 # between Cloud Spanner transactions themselves.
4615 #
4616 # ### Retrying Aborted Transactions
4617 #
4618 # When a transaction aborts, the application can choose to retry the
4619 # whole transaction again. To maximize the chances of successfully
4620 # committing the retry, the client should execute the retry in the
4621 # same session as the original attempt. The original session's lock
4622 # priority increases with each consecutive abort, meaning that each
4623 # attempt has a slightly better chance of success than the previous.
4624 #
4625 # Under some circumstances (e.g., many transactions attempting to
4626 # modify the same row(s)), a transaction can abort many times in a
4627 # short period before successfully committing. Thus, it is not a good
4628 # idea to cap the number of retries a transaction can attempt;
4629 # instead, it is better to limit the total amount of wall time spent
4630 # retrying.
4631 #
4632 # ### Idle Transactions
4633 #
4634 # A transaction is considered idle if it has no outstanding reads or
4635 # SQL queries and has not started a read or SQL query within the last 10
4636 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
4637 # don't hold on to locks indefinitely. In that case, the commit will
4638 # fail with error `ABORTED`.
4639 #
4640 # If this behavior is undesirable, periodically executing a simple
4641 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
4642 # transaction from becoming idle.
4643 #
4644 # ## Snapshot Read-Only Transactions
4645 #
4646        # Snapshot read-only transactions provide a simpler method than
4647 # locking read-write transactions for doing several consistent
4648 # reads. However, this type of transaction does not support writes.
4649 #
4650 # Snapshot transactions do not take locks. Instead, they work by
4651 # choosing a Cloud Spanner timestamp, then executing all reads at that
4652 # timestamp. Since they do not acquire locks, they do not block
4653 # concurrent read-write transactions.
4654 #
4655 # Unlike locking read-write transactions, snapshot read-only
4656 # transactions never abort. They can fail if the chosen read
4657 # timestamp is garbage collected; however, the default garbage
4658 # collection policy is generous enough that most applications do not
4659 # need to worry about this in practice.
4660 #
4661 # Snapshot read-only transactions do not need to call
4662 # Commit or
4663 # Rollback (and in fact are not
4664 # permitted to do so).
4665 #
4666 # To execute a snapshot transaction, the client specifies a timestamp
4667 # bound, which tells Cloud Spanner how to choose a read timestamp.
4668 #
4669 # The types of timestamp bound are:
4670 #
4671 # - Strong (the default).
4672 # - Bounded staleness.
4673 # - Exact staleness.
4674 #
4675 # If the Cloud Spanner database to be read is geographically distributed,
4676 # stale read-only transactions can execute more quickly than strong
4677        # or read-write transactions, because they are able to execute far
4678 # from the leader replica.
4679 #
4680 # Each type of timestamp bound is discussed in detail below.
4681 #
4682 # ### Strong
4683 #
4684 # Strong reads are guaranteed to see the effects of all transactions
4685 # that have committed before the start of the read. Furthermore, all
4686 # rows yielded by a single read are consistent with each other -- if
4687 # any part of the read observes a transaction, all parts of the read
4688 # see the transaction.
4689 #
4690 # Strong reads are not repeatable: two consecutive strong read-only
4691 # transactions might return inconsistent results if there are
4692 # concurrent writes. If consistency across reads is required, the
4693 # reads should be executed within a transaction or at an exact read
4694 # timestamp.
4695 #
4696 # See TransactionOptions.ReadOnly.strong.
4697 #
4698 # ### Exact Staleness
4699 #
4700 # These timestamp bounds execute reads at a user-specified
4701 # timestamp. Reads at a timestamp are guaranteed to see a consistent
4702 # prefix of the global transaction history: they observe
4703        # modifications done by all transactions with a commit timestamp &lt;=
4704        # the read timestamp, and observe none of the modifications done by
4705 # transactions with a larger commit timestamp. They will block until
4706 # all conflicting transactions that may be assigned commit timestamps
4707        # &lt;= the read timestamp have finished.
4708        #
4709 # The timestamp can either be expressed as an absolute Cloud Spanner commit
4710 # timestamp or a staleness relative to the current time.
4711 #
4712 # These modes do not require a "negotiation phase" to pick a
4713 # timestamp. As a result, they execute slightly faster than the
4714 # equivalent boundedly stale concurrency modes. On the other hand,
4715 # boundedly stale reads usually return fresher results.
4716 #
4717 # See TransactionOptions.ReadOnly.read_timestamp and
4718 # TransactionOptions.ReadOnly.exact_staleness.
4719 #
4720 # ### Bounded Staleness
4721 #
4722 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
4723 # subject to a user-provided staleness bound. Cloud Spanner chooses the
4724 # newest timestamp within the staleness bound that allows execution
4725 # of the reads at the closest available replica without blocking.
4726 #
4727 # All rows yielded are consistent with each other -- if any part of
4728 # the read observes a transaction, all parts of the read see the
4729 # transaction. Boundedly stale reads are not repeatable: two stale
4730 # reads, even if they use the same staleness bound, can execute at
4731 # different timestamps and thus return inconsistent results.
4732 #
4733 # Boundedly stale reads execute in two phases: the first phase
4734 # negotiates a timestamp among all replicas needed to serve the
4735 # read. In the second phase, reads are executed at the negotiated
4736 # timestamp.
4737 #
4738 # As a result of the two phase execution, bounded staleness reads are
4739 # usually a little slower than comparable exact staleness
4740 # reads. However, they are typically able to return fresher
4741 # results, and are more likely to execute at the closest replica.
4742 #
4743 # Because the timestamp negotiation requires up-front knowledge of
4744 # which rows will be read, it can only be used with single-use
4745 # read-only transactions.
4746 #
4747 # See TransactionOptions.ReadOnly.max_staleness and
4748 # TransactionOptions.ReadOnly.min_read_timestamp.
4749 #
4750 # ### Old Read Timestamps and Garbage Collection
4751 #
4752 # Cloud Spanner continuously garbage collects deleted and overwritten data
4753 # in the background to reclaim storage space. This process is known
4754 # as "version GC". By default, version GC reclaims versions after they
4755 # are one hour old. Because of this, Cloud Spanner cannot perform reads
4756 # at read timestamps more than one hour in the past. This
4757 # restriction also applies to in-progress reads and/or SQL queries whose
4758        # timestamps become too old while executing. Reads and SQL queries with
4759 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
4760 #
4761 # ## Partitioned DML Transactions
4762 #
4763 # Partitioned DML transactions are used to execute DML statements with a
4764 # different execution strategy that provides different, and often better,
4765 # scalability properties for large, table-wide operations than DML in a
4766 # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
4767 # should prefer using ReadWrite transactions.
4768 #
4769 # Partitioned DML partitions the keyspace and runs the DML statement on each
4770 # partition in separate, internal transactions. These transactions commit
4771 # automatically when complete, and run independently from one another.
4772 #
4773 # To reduce lock contention, this execution strategy only acquires read locks
4774 # on rows that match the WHERE clause of the statement. Additionally, the
4775 # smaller per-partition transactions hold locks for less time.
4776 #
4777 # That said, Partitioned DML is not a drop-in replacement for standard DML used
4778 # in ReadWrite transactions.
4779 #
4780 # - The DML statement must be fully-partitionable. Specifically, the statement
4781 # must be expressible as the union of many statements which each access only
4782 # a single row of the table.
4783 #
4784 # - The statement is not applied atomically to all rows of the table. Rather,
4785 # the statement is applied atomically to partitions of the table, in
4786 # independent transactions. Secondary index rows are updated atomically
4787 # with the base table rows.
4788 #
4789 # - Partitioned DML does not guarantee exactly-once execution semantics
4790 # against a partition. The statement will be applied at least once to each
4791 # partition. It is strongly recommended that the DML statement should be
4792 # idempotent to avoid unexpected results. For instance, it is potentially
4793 # dangerous to run a statement such as
4794 # `UPDATE table SET column = column + 1` as it could be run multiple times
4795 # against some rows.
4796 #
4797 # - The partitions are committed automatically - there is no support for
4798 # Commit or Rollback. If the call returns an error, or if the client issuing
4799 # the ExecuteSql call dies, it is possible that some rows had the statement
4800        # executed on them successfully. It is also possible that the statement was
4801 # never executed against other rows.
4802 #
4803 # - Partitioned DML transactions may only contain the execution of a single
4804 # DML statement via ExecuteSql or ExecuteStreamingSql.
4805 #
4806 # - If any error is encountered during the execution of the partitioned DML
4807 # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
4808 # value that cannot be stored due to schema constraints), then the
4809 # operation is stopped at that point and an error is returned. It is
4810 # possible that at this point, some partitions have been committed (or even
4811 # committed multiple times), and other partitions have not been run at all.
4812 #
4813        # Given the above, Partitioned DML is a good fit for large, database-wide
4814 # operations that are idempotent, such as deleting old rows from a very large
4815 # table.
4816 "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
4817 #
4818 # Authorization to begin a read-write transaction requires
4819 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
4820 # on the `session` resource.
4821 # transaction type has no options.
4822 },
4823 "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
4824 #
4825 # Authorization to begin a read-only transaction requires
4826 # `spanner.databases.beginReadOnlyTransaction` permission
4827 # on the `session` resource.
4828 "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
4829 #
4830 # This is useful for requesting fresher data than some previous
4831 # read, or data that is fresh enough to observe the effects of some
4832 # previously committed transaction whose timestamp is known.
4833 #
4834 # Note that this option can only be used in single-use transactions.
4835 #
4836 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
4837 # Example: `"2014-10-02T15:01:23.045123456Z"`.
4838 "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
4839 # reads at a specific timestamp are repeatable; the same read at
4840 # the same timestamp always returns the same data. If the
4841 # timestamp is in the future, the read will block until the
4842 # specified timestamp, modulo the read's deadline.
4843 #
4844 # Useful for large scale consistent reads such as mapreduces, or
4845 # for coordinating many reads against a consistent snapshot of the
4846 # data.
4847 #
4848 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
4849 # Example: `"2014-10-02T15:01:23.045123456Z"`.
4850 "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
4851 # seconds. Guarantees that all writes that have committed more
4852 # than the specified number of seconds ago are visible. Because
4853 # Cloud Spanner chooses the exact timestamp, this mode works even if
4854 # the client's local clock is substantially skewed from Cloud Spanner
4855 # commit timestamps.
4856 #
4857 # Useful for reading the freshest data available at a nearby
4858 # replica, while bounding the possible staleness if the local
4859 # replica has fallen behind.
4860 #
4861 # Note that this option can only be used in single-use
4862 # transactions.
4863 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
4864 # old. The timestamp is chosen soon after the read is started.
4865 #
4866 # Guarantees that all writes that have committed more than the
4867 # specified number of seconds ago are visible. Because Cloud Spanner
4868 # chooses the exact timestamp, this mode works even if the client's
4869 # local clock is substantially skewed from Cloud Spanner commit
4870 # timestamps.
4871 #
4872 # Useful for reading at nearby replicas without the distributed
4873 # timestamp negotiation overhead of `max_staleness`.
4874 "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
4875 # the Transaction message that describes the transaction.
4876 "strong": True or False, # Read at a timestamp where all previously committed transactions
4877 # are visible.
4878 },
4879 "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
4880 #
4881 # Authorization to begin a Partitioned DML transaction requires
4882 # `spanner.databases.beginPartitionedDmlTransaction` permission
4883 # on the `session` resource.
4884 },
4885 },
4886 "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
4887 # This is the most efficient way to execute a transaction that
4888 # consists of a single SQL query.
4889 #
4890 #
4891 # Each session can have at most one active transaction at a time. After the
4892 # active transaction is completed, the session can immediately be
4893 # re-used for the next transaction. It is not necessary to create a
4894 # new session for each transaction.
4895 #
4896 # # Transaction Modes
4897 #
4898 # Cloud Spanner supports three transaction modes:
4899 #
4900 # 1. Locking read-write. This type of transaction is the only way
4901 # to write data into Cloud Spanner. These transactions rely on
4902 # pessimistic locking and, if necessary, two-phase commit.
4903 # Locking read-write transactions may abort, requiring the
4904 # application to retry.
4905 #
4906 # 2. Snapshot read-only. This transaction type provides guaranteed
4907 # consistency across several reads, but does not allow
4908 # writes. Snapshot read-only transactions can be configured to
4909 # read at timestamps in the past. Snapshot read-only
4910 # transactions do not need to be committed.
4911 #
4912 # 3. Partitioned DML. This type of transaction is used to execute
4913 # a single Partitioned DML statement. Partitioned DML partitions
4914 # the key space and runs the DML statement over each partition
4915 # in parallel using separate, internal transactions that commit
4916 # independently. Partitioned DML transactions do not need to be
4917 # committed.
4918 #
4919 # For transactions that only read, snapshot read-only transactions
4920 # provide simpler semantics and are almost always faster. In
4921 # particular, read-only transactions do not take locks, so they do
4922 # not conflict with read-write transactions. As a consequence of not
4923 # taking locks, they also do not abort, so retry loops are not needed.
4924 #
4925 # Transactions may only read/write data in a single database. They
4926 # may, however, read/write data in different tables within that
4927 # database.
4928 #
4929 # ## Locking Read-Write Transactions
4930 #
4931 # Locking transactions may be used to atomically read-modify-write
4932 # data anywhere in a database. This type of transaction is externally
4933 # consistent.
4934 #
4935 # Clients should attempt to minimize the amount of time a transaction
4936 # is active. Faster transactions commit with higher probability
4937 # and cause less contention. Cloud Spanner attempts to keep read locks
4938 # active as long as the transaction continues to do reads, and the
4939 # transaction has not been terminated by
4940 # Commit or
4941 # Rollback. Long periods of
4942 # inactivity at the client may cause Cloud Spanner to release a
4943 # transaction's locks and abort it.
4944 #
4945 # Conceptually, a read-write transaction consists of zero or more
4946 # reads or SQL statements followed by
4947 # Commit. At any time before
4948 # Commit, the client can send a
4949 # Rollback request to abort the
4950 # transaction.
4951 #
4952 # ### Semantics
4953 #
4954 # Cloud Spanner can commit the transaction if all read locks it acquired
4955 # are still valid at commit time, and it is able to acquire write
4956 # locks for all writes. Cloud Spanner can abort the transaction for any
4957 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
4958 # that the transaction has not modified any user data in Cloud Spanner.
4959 #
4960 # Unless the transaction commits, Cloud Spanner makes no guarantees about
4961 # how long the transaction's locks were held for. It is an error to
4962 # use Cloud Spanner locks for any sort of mutual exclusion other than
4963 # between Cloud Spanner transactions themselves.
4964 #
4965 # ### Retrying Aborted Transactions
4966 #
4967 # When a transaction aborts, the application can choose to retry the
4968 # whole transaction again. To maximize the chances of successfully
4969 # committing the retry, the client should execute the retry in the
4970 # same session as the original attempt. The original session's lock
4971 # priority increases with each consecutive abort, meaning that each
4972 # attempt has a slightly better chance of success than the previous.
4973 #
4974 # Under some circumstances (e.g., many transactions attempting to
4975 # modify the same row(s)), a transaction can abort many times in a
4976 # short period before successfully committing. Thus, it is not a good
4977 # idea to cap the number of retries a transaction can attempt;
4978 # instead, it is better to limit the total amount of wall time spent
4979 # retrying.
4980 #
4981 # ### Idle Transactions
4982 #
4983 # A transaction is considered idle if it has no outstanding reads or
4984 # SQL queries and has not started a read or SQL query within the last 10
4985 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
4986 # don't hold on to locks indefinitely. In that case, the commit will
4987 # fail with error `ABORTED`.
4988 #
4989 # If this behavior is undesirable, periodically executing a simple
4990 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
4991 # transaction from becoming idle.
4992 #
4993 # ## Snapshot Read-Only Transactions
4994 #
4995 # Snapshot read-only transactions provide a simpler method than
4996 # locking read-write transactions for doing several consistent
4997 # reads. However, this type of transaction does not support writes.
4998 #
4999 # Snapshot transactions do not take locks. Instead, they work by
5000 # choosing a Cloud Spanner timestamp, then executing all reads at that
5001 # timestamp. Since they do not acquire locks, they do not block
5002 # concurrent read-write transactions.
5003 #
5004 # Unlike locking read-write transactions, snapshot read-only
5005 # transactions never abort. They can fail if the chosen read
5006 # timestamp is garbage collected; however, the default garbage
5007 # collection policy is generous enough that most applications do not
5008 # need to worry about this in practice.
5009 #
5010 # Snapshot read-only transactions do not need to call
5011 # Commit or
5012 # Rollback (and in fact are not
5013 # permitted to do so).
5014 #
5015 # To execute a snapshot transaction, the client specifies a timestamp
5016 # bound, which tells Cloud Spanner how to choose a read timestamp.
5017 #
5018 # The types of timestamp bound are:
5019 #
5020 # - Strong (the default).
5021 # - Bounded staleness.
5022 # - Exact staleness.
5023 #
5024 # If the Cloud Spanner database to be read is geographically distributed,
5025 # stale read-only transactions can execute more quickly than strong
5026 # or read-write transactions, because they are able to execute far
5027 # from the leader replica.
5028 #
5029 # Each type of timestamp bound is discussed in detail below.
5030 #
5031 # ### Strong
5032 #
5033 # Strong reads are guaranteed to see the effects of all transactions
5034 # that have committed before the start of the read. Furthermore, all
5035 # rows yielded by a single read are consistent with each other -- if
5036 # any part of the read observes a transaction, all parts of the read
5037 # see the transaction.
5038 #
5039 # Strong reads are not repeatable: two consecutive strong read-only
5040 # transactions might return inconsistent results if there are
5041 # concurrent writes. If consistency across reads is required, the
5042 # reads should be executed within a transaction or at an exact read
5043 # timestamp.
5044 #
5045 # See TransactionOptions.ReadOnly.strong.
5046 #
5047 # ### Exact Staleness
5048 #
5049 # These timestamp bounds execute reads at a user-specified
5050 # timestamp. Reads at a timestamp are guaranteed to see a consistent
5051 # prefix of the global transaction history: they observe
5052 # modifications done by all transactions with a commit timestamp &lt;=
5053 # the read timestamp, and observe none of the modifications done by
5054 # transactions with a larger commit timestamp. They will block until
5055 # all conflicting transactions that may be assigned commit timestamps
5056 # &lt;= the read timestamp have finished.
5057 #
5058 # The timestamp can either be expressed as an absolute Cloud Spanner commit
5059 # timestamp or a staleness relative to the current time.
5060 #
5061 # These modes do not require a "negotiation phase" to pick a
5062 # timestamp. As a result, they execute slightly faster than the
5063 # equivalent boundedly stale concurrency modes. On the other hand,
5064 # boundedly stale reads usually return fresher results.
5065 #
5066 # See TransactionOptions.ReadOnly.read_timestamp and
5067 # TransactionOptions.ReadOnly.exact_staleness.
5068 #
5069 # ### Bounded Staleness
5070 #
5071 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
5072 # subject to a user-provided staleness bound. Cloud Spanner chooses the
5073 # newest timestamp within the staleness bound that allows execution
5074 # of the reads at the closest available replica without blocking.
5075 #
5076 # All rows yielded are consistent with each other -- if any part of
5077 # the read observes a transaction, all parts of the read see the
5078 # transaction. Boundedly stale reads are not repeatable: two stale
5079 # reads, even if they use the same staleness bound, can execute at
5080 # different timestamps and thus return inconsistent results.
5081 #
5082 # Boundedly stale reads execute in two phases: the first phase
5083 # negotiates a timestamp among all replicas needed to serve the
5084 # read. In the second phase, reads are executed at the negotiated
5085 # timestamp.
5086 #
5087 # As a result of the two phase execution, bounded staleness reads are
5088 # usually a little slower than comparable exact staleness
5089 # reads. However, they are typically able to return fresher
5090 # results, and are more likely to execute at the closest replica.
5091 #
5092 # Because the timestamp negotiation requires up-front knowledge of
5093 # which rows will be read, it can only be used with single-use
5094 # read-only transactions.
5095 #
5096 # See TransactionOptions.ReadOnly.max_staleness and
5097 # TransactionOptions.ReadOnly.min_read_timestamp.
5098 #
5099 # ### Old Read Timestamps and Garbage Collection
5100 #
5101 # Cloud Spanner continuously garbage collects deleted and overwritten data
5102 # in the background to reclaim storage space. This process is known
5103 # as "version GC". By default, version GC reclaims versions after they
5104 # are one hour old. Because of this, Cloud Spanner cannot perform reads
5105 # at read timestamps more than one hour in the past. This
5106 # restriction also applies to in-progress reads and/or SQL queries whose
5107 # timestamps become too old while executing. Reads and SQL queries with
5108 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
5109 #
5110 # ## Partitioned DML Transactions
5111 #
5112 # Partitioned DML transactions are used to execute DML statements with a
5113 # different execution strategy that provides different, and often better,
5114 # scalability properties for large, table-wide operations than DML in a
5115 # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
5116 # should prefer using ReadWrite transactions.
5117 #
5118 # Partitioned DML partitions the keyspace and runs the DML statement on each
5119 # partition in separate, internal transactions. These transactions commit
5120 # automatically when complete, and run independently from one another.
5121 #
5122 # To reduce lock contention, this execution strategy only acquires read locks
5123 # on rows that match the WHERE clause of the statement. Additionally, the
5124 # smaller per-partition transactions hold locks for less time.
5125 #
5126 # That said, Partitioned DML is not a drop-in replacement for standard DML used
5127 # in ReadWrite transactions.
5128 #
5129 # - The DML statement must be fully-partitionable. Specifically, the statement
5130 # must be expressible as the union of many statements which each access only
5131 # a single row of the table.
5132 #
5133 # - The statement is not applied atomically to all rows of the table. Rather,
5134 # the statement is applied atomically to partitions of the table, in
5135 # independent transactions. Secondary index rows are updated atomically
5136 # with the base table rows.
5137 #
5138 # - Partitioned DML does not guarantee exactly-once execution semantics
5139 # against a partition. The statement will be applied at least once to each
5140 # partition. It is strongly recommended that the DML statement be
5141 # idempotent to avoid unexpected results. For instance, it is potentially
5142 # dangerous to run a statement such as
5143 # `UPDATE table SET column = column + 1` as it could be run multiple times
5144 # against some rows.
5145 #
5146 # - The partitions are committed automatically - there is no support for
5147 # Commit or Rollback. If the call returns an error, or if the client issuing
5148 # the ExecuteSql call dies, it is possible that some rows had the statement
5149 # executed on them successfully. It is also possible that the statement was
5150 # never executed against other rows.
5151 #
5152 # - Partitioned DML transactions may only contain the execution of a single
5153 # DML statement via ExecuteSql or ExecuteStreamingSql.
5154 #
5155 # - If any error is encountered during the execution of the partitioned DML
5156 # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
5157 # value that cannot be stored due to schema constraints), then the
5158 # operation is stopped at that point and an error is returned. It is
5159 # possible that at this point, some partitions have been committed (or even
5160 # committed multiple times), and other partitions have not been run at all.
5161 #
5162 # Given the above, Partitioned DML is a good fit for large, database-wide
5163 # operations that are idempotent, such as deleting old rows from a very large
5164 # table.
5165 "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
5166 #
5167 # Authorization to begin a read-write transaction requires
5168 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
5169 # on the `session` resource.
5170 # transaction type has no options.
5171 },
5172 "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
5173 #
5174 # Authorization to begin a read-only transaction requires
5175 # `spanner.databases.beginReadOnlyTransaction` permission
5176 # on the `session` resource.
5177 "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
5178 #
5179 # This is useful for requesting fresher data than some previous
5180 # read, or data that is fresh enough to observe the effects of some
5181 # previously committed transaction whose timestamp is known.
5182 #
5183 # Note that this option can only be used in single-use transactions.
5184 #
5185 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
5186 # Example: `"2014-10-02T15:01:23.045123456Z"`.
5187 "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
5188 # reads at a specific timestamp are repeatable; the same read at
5189 # the same timestamp always returns the same data. If the
5190 # timestamp is in the future, the read will block until the
5191 # specified timestamp, modulo the read's deadline.
5192 #
5193 # Useful for large scale consistent reads such as mapreduces, or
5194 # for coordinating many reads against a consistent snapshot of the
5195 # data.
5196 #
5197 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
5198 # Example: `"2014-10-02T15:01:23.045123456Z"`.
5199 "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
5200 # seconds. Guarantees that all writes that have committed more
5201 # than the specified number of seconds ago are visible. Because
5202 # Cloud Spanner chooses the exact timestamp, this mode works even if
5203 # the client's local clock is substantially skewed from Cloud Spanner
5204 # commit timestamps.
5205 #
5206 # Useful for reading the freshest data available at a nearby
5207 # replica, while bounding the possible staleness if the local
5208 # replica has fallen behind.
5209 #
5210 # Note that this option can only be used in single-use
5211 # transactions.
5212 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
5213 # old. The timestamp is chosen soon after the read is started.
5214 #
5215 # Guarantees that all writes that have committed more than the
5216 # specified number of seconds ago are visible. Because Cloud Spanner
5217 # chooses the exact timestamp, this mode works even if the client's
5218 # local clock is substantially skewed from Cloud Spanner commit
5219 # timestamps.
5220 #
5221 # Useful for reading at nearby replicas without the distributed
5222 # timestamp negotiation overhead of `max_staleness`.
5223 "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
5224 # the Transaction message that describes the transaction.
5225 "strong": True or False, # Read at a timestamp where all previously committed transactions
5226 # are visible.
5227 },
5228 "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
5229 #
5230 # Authorization to begin a Partitioned DML transaction requires
5231 # `spanner.databases.beginPartitionedDmlTransaction` permission
5232 # on the `session` resource.
5233 },
5234 },
5235 "id": "A String", # Execute the read or SQL query in a previously-started transaction.
5236 },
5237 "params": { # Parameter names and values that bind to placeholders in the SQL string.
5238 #
5239 # A parameter placeholder consists of the `@` character followed by the
5240 # parameter name (for example, `@firstName`). Parameter names can contain
5241 # letters, numbers, and underscores.
5242 #
5243 # Parameters can appear anywhere that a literal value is expected. The same
5244 # parameter name can be used more than once, for example:
5245 #
5246 # `"WHERE id &gt; @msg_id AND id &lt; @msg_id + 100"`
5247 #
5248 # It is an error to execute a SQL statement with unbound parameters.
5249 "a_key": "", # Properties of the object.
5250 },
5251 "sql": "A String", # Required. The query request to generate partitions for. The request will fail if
5252 # the query is not root partitionable. The query plan of a root
5253 # partitionable query has a single distributed union operator. A distributed
5254 # union operator conceptually divides one or more tables into multiple
5255 # splits, remotely evaluates a subquery independently on each split, and
5256 # then unions all results.
5257 #
5258 # This must not contain DML commands, such as INSERT, UPDATE, or
5259 # DELETE. Use ExecuteStreamingSql with a
5260 # PartitionedDml transaction for large, partition-friendly DML operations.
5261 }
5262
5263 x__xgafv: string, V1 error format.
5264 Allowed values
5265 1 - v1 error format
5266 2 - v2 error format
5267
5268Returns:
5269 An object of the form:
5270
5271 { # The response for PartitionQuery
5272 # or PartitionRead
5273 "transaction": { # A transaction. # Transaction created by this request.
5274 "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
5275 # for the transaction. Not returned by default: see
5276 # TransactionOptions.ReadOnly.return_read_timestamp.
5277 #
5278 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
5279 # Example: `"2014-10-02T15:01:23.045123456Z"`.
5280 "id": "A String", # `id` may be used to identify the transaction in subsequent
5281 # Read,
5282 # ExecuteSql,
5283 # Commit, or
5284 # Rollback calls.
5285 #
5286 # Single-use read-only transactions do not have IDs, because
5287 # single-use transactions do not support multiple requests.
5288 },
5289 "partitions": [ # Partitions created by this request.
5290 { # Information returned for each partition returned in a
5291 # PartitionResponse.
5292 "partitionToken": "A String", # This token can be passed to Read, StreamingRead, ExecuteSql, or
5293 # ExecuteStreamingSql requests to restrict the results to those identified by
5294 # this partition token.
5295 },
5296 ],
5297 }</pre>
5298</div>
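<p class="firstline">As an illustrative sketch (not part of the generated reference), the PartitionQuery request body documented above can be built as a plain Python dictionary before being passed to the client. The SQL text and the <code>"15s"</code> duration string for <code>exactStaleness</code> are assumptions for the example; only read-only transactions begun via <code>begin</code> (not <code>singleUse</code>) can be shared with the later streaming calls.</p>

```python
# Sketch of a PartitionQueryRequest body matching the documented schema.
# The query text below is a hypothetical placeholder; Partitioned queries
# must be root-partitionable and must not contain DML.

def make_partition_query_body(sql, exact_staleness=None):
    """Build a PartitionQueryRequest body using a read-only transaction.

    If exact_staleness is given (assumed duration-string form, e.g. "15s"),
    reads execute at a timestamp that many seconds old; otherwise a strong
    read is requested.
    """
    read_only = {"strong": True}  # default timestamp bound
    if exact_staleness is not None:
        read_only = {"exactStaleness": exact_staleness}
    return {
        "sql": sql,
        "transaction": {
            # "begin" (rather than "singleUse") so the same transaction can
            # be reused by the ExecuteStreamingSql calls that later consume
            # the returned partition tokens.
            "begin": {"readOnly": read_only},
        },
    }

body = make_partition_query_body("SELECT id, name FROM Users",
                                 exact_staleness="15s")
```

<p class="firstline">Each <code>partitionToken</code> in the response would then be passed, together with the same session and the returned transaction <code>id</code>, to a separate ExecuteStreamingSql call.</p>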
5299
5300<div class="method">
5301 <code class="details" id="partitionRead">partitionRead(session, body=None, x__xgafv=None)</code>
5302 <pre>Creates a set of partition tokens that can be used to execute a read
5303operation in parallel. Each of the returned partition tokens can be used
5304by StreamingRead to specify a subset of the read
5305result to read. The same session and read-only transaction must be used by
5306the PartitionReadRequest used to create the partition tokens and the
5307ReadRequests that use the partition tokens. There are no ordering
5308guarantees on rows returned among the returned partition tokens, or even
5309within each individual StreamingRead call issued with a partition_token.
5310
5311Partition tokens become invalid when the session used to create them
5312is deleted, is idle for too long, begins a new transaction, or becomes too
5313old. When any of these happen, it is not possible to resume the read, and
5314the whole operation must be restarted from the beginning.
5315
5316Args:
5317 session: string, Required. The session used to create the partitions. (required)
5318 body: object, The request body.
5319 The object takes the form of:
5320
5321{ # The request for PartitionRead
5322 "index": "A String", # If non-empty, the name of an index on table. This index is
5323 # used instead of the table primary key when interpreting key_set
5324 # and sorting result rows. See key_set for further information.
5325 "transaction": { # This message is used to select the transaction in which a # Read only snapshot transactions are supported, read/write and single use
5326 # transactions are not.
5327 # Read or
5328 # ExecuteSql call runs.
5329 #
5330 # See TransactionOptions for more information about transactions.
5331 "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
5332 # it. The transaction ID of the new transaction is returned in
5333 # ResultSetMetadata.transaction, which is a Transaction.
5334 #
5335 #
5336 # Each session can have at most one active transaction at a time. After the
5337 # active transaction is completed, the session can immediately be
5338 # re-used for the next transaction. It is not necessary to create a
5339 # new session for each transaction.
5340 #
5341 # # Transaction Modes
5342 #
5343 # Cloud Spanner supports three transaction modes:
5344 #
5345 # 1. Locking read-write. This type of transaction is the only way
5346 # to write data into Cloud Spanner. These transactions rely on
5347 # pessimistic locking and, if necessary, two-phase commit.
5348 # Locking read-write transactions may abort, requiring the
5349 # application to retry.
5350 #
5351 # 2. Snapshot read-only. This transaction type provides guaranteed
5352 # consistency across several reads, but does not allow
5353 # writes. Snapshot read-only transactions can be configured to
5354 # read at timestamps in the past. Snapshot read-only
5355 # transactions do not need to be committed.
5356 #
5357 # 3. Partitioned DML. This type of transaction is used to execute
5358 # a single Partitioned DML statement. Partitioned DML partitions
5359 # the key space and runs the DML statement over each partition
5360 # in parallel using separate, internal transactions that commit
5361 # independently. Partitioned DML transactions do not need to be
5362 # committed.
5363 #
5364 # For transactions that only read, snapshot read-only transactions
5365 # provide simpler semantics and are almost always faster. In
5366 # particular, read-only transactions do not take locks, so they do
5367 # not conflict with read-write transactions. As a consequence of not
5368 # taking locks, they also do not abort, so retry loops are not needed.
5369 #
5370 # Transactions may only read/write data in a single database. They
5371 # may, however, read/write data in different tables within that
5372 # database.
5373 #
5374 # ## Locking Read-Write Transactions
5375 #
5376 # Locking transactions may be used to atomically read-modify-write
5377 # data anywhere in a database. This type of transaction is externally
5378 # consistent.
5379 #
5380 # Clients should attempt to minimize the amount of time a transaction
5381 # is active. Faster transactions commit with higher probability
5382 # and cause less contention. Cloud Spanner attempts to keep read locks
5383 # active as long as the transaction continues to do reads, and the
5384 # transaction has not been terminated by
5385 # Commit or
5386 # Rollback. Long periods of
5387 # inactivity at the client may cause Cloud Spanner to release a
5388 # transaction's locks and abort it.
5389 #
5390 # Conceptually, a read-write transaction consists of zero or more
5391 # reads or SQL statements followed by
5392 # Commit. At any time before
5393 # Commit, the client can send a
5394 # Rollback request to abort the
5395 # transaction.
5396 #
5397 # ### Semantics
5398 #
5399 # Cloud Spanner can commit the transaction if all read locks it acquired
5400 # are still valid at commit time, and it is able to acquire write
5401 # locks for all writes. Cloud Spanner can abort the transaction for any
5402 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
5403 # that the transaction has not modified any user data in Cloud Spanner.
5404 #
5405 # Unless the transaction commits, Cloud Spanner makes no guarantees about
5406 # how long the transaction's locks were held for. It is an error to
5407 # use Cloud Spanner locks for any sort of mutual exclusion other than
5408 # between Cloud Spanner transactions themselves.
5409 #
5410 # ### Retrying Aborted Transactions
5411 #
5412 # When a transaction aborts, the application can choose to retry the
5413 # whole transaction again. To maximize the chances of successfully
5414 # committing the retry, the client should execute the retry in the
5415 # same session as the original attempt. The original session's lock
5416 # priority increases with each consecutive abort, meaning that each
5417 # attempt has a slightly better chance of success than the previous.
5418 #
5419 # Under some circumstances (e.g., many transactions attempting to
5420 # modify the same row(s)), a transaction can abort many times in a
5421 # short period before successfully committing. Thus, it is not a good
5422 # idea to cap the number of retries a transaction can attempt;
5423 # instead, it is better to limit the total amount of wall time spent
5424 # retrying.
5425 #
5426 # ### Idle Transactions
5427 #
5428 # A transaction is considered idle if it has no outstanding reads or
5429 # SQL queries and has not started a read or SQL query within the last 10
5430 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
5431 # don't hold on to locks indefinitely. In that case, the commit will
5432 # fail with error `ABORTED`.
5433 #
5434 # If this behavior is undesirable, periodically executing a simple
5435 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
5436 # transaction from becoming idle.
5437 #
5438 # ## Snapshot Read-Only Transactions
5439 #
5440 # Snapshot read-only transactions provide a simpler method than
5441 # locking read-write transactions for doing several consistent
5442 # reads. However, this type of transaction does not support writes.
5443 #
5444 # Snapshot transactions do not take locks. Instead, they work by
5445 # choosing a Cloud Spanner timestamp, then executing all reads at that
5446 # timestamp. Since they do not acquire locks, they do not block
5447 # concurrent read-write transactions.
5448 #
5449 # Unlike locking read-write transactions, snapshot read-only
5450 # transactions never abort. They can fail if the chosen read
5451 # timestamp is garbage collected; however, the default garbage
5452 # collection policy is generous enough that most applications do not
5453 # need to worry about this in practice.
5454 #
5455 # Snapshot read-only transactions do not need to call
5456 # Commit or
5457 # Rollback (and in fact are not
5458 # permitted to do so).
5459 #
5460 # To execute a snapshot transaction, the client specifies a timestamp
5461 # bound, which tells Cloud Spanner how to choose a read timestamp.
5462 #
5463 # The types of timestamp bound are:
5464 #
5465 # - Strong (the default).
5466 # - Bounded staleness.
5467 # - Exact staleness.
5468 #
5469 # If the Cloud Spanner database to be read is geographically distributed,
5470 # stale read-only transactions can execute more quickly than strong
5471 # or read-write transactions, because they are able to execute far
5472 # from the leader replica.
5473 #
5474 # Each type of timestamp bound is discussed in detail below.
5475 #
5476 # ### Strong
5477 #
5478 # Strong reads are guaranteed to see the effects of all transactions
5479 # that have committed before the start of the read. Furthermore, all
5480 # rows yielded by a single read are consistent with each other -- if
5481 # any part of the read observes a transaction, all parts of the read
5482 # see the transaction.
5483 #
5484 # Strong reads are not repeatable: two consecutive strong read-only
5485 # transactions might return inconsistent results if there are
5486 # concurrent writes. If consistency across reads is required, the
5487 # reads should be executed within a transaction or at an exact read
5488 # timestamp.
5489 #
5490 # See TransactionOptions.ReadOnly.strong.
5491 #
5492 # ### Exact Staleness
5493 #
5494 # These timestamp bounds execute reads at a user-specified
5495 # timestamp. Reads at a timestamp are guaranteed to see a consistent
5496 # prefix of the global transaction history: they observe
5497 # modifications done by all transactions with a commit timestamp &lt;=
5498 # the read timestamp, and observe none of the modifications done by
5499 # transactions with a larger commit timestamp. They will block until
5500 # all conflicting transactions that may be assigned commit timestamps
5501 # &lt;= the read timestamp have finished.
5502 #
5503 # The timestamp can either be expressed as an absolute Cloud Spanner commit
5504 # timestamp or a staleness relative to the current time.
5505 #
5506 # These modes do not require a "negotiation phase" to pick a
5507 # timestamp. As a result, they execute slightly faster than the
5508 # equivalent boundedly stale concurrency modes. On the other hand,
5509 # boundedly stale reads usually return fresher results.
5510 #
5511 # See TransactionOptions.ReadOnly.read_timestamp and
5512 # TransactionOptions.ReadOnly.exact_staleness.
5513 #
5514 # ### Bounded Staleness
5515 #
5516 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
5517 # subject to a user-provided staleness bound. Cloud Spanner chooses the
5518 # newest timestamp within the staleness bound that allows execution
5519 # of the reads at the closest available replica without blocking.
5520 #
5521 # All rows yielded are consistent with each other -- if any part of
5522 # the read observes a transaction, all parts of the read see the
5523 # transaction. Boundedly stale reads are not repeatable: two stale
5524 # reads, even if they use the same staleness bound, can execute at
5525 # different timestamps and thus return inconsistent results.
5526 #
5527 # Boundedly stale reads execute in two phases: the first phase
5528 # negotiates a timestamp among all replicas needed to serve the
5529 # read. In the second phase, reads are executed at the negotiated
5530 # timestamp.
5531 #
5532 # As a result of the two-phase execution, bounded staleness reads are
5533 # usually a little slower than comparable exact staleness
5534 # reads. However, they are typically able to return fresher
5535 # results, and are more likely to execute at the closest replica.
5536 #
5537 # Because the timestamp negotiation requires up-front knowledge of
5538 # which rows will be read, it can only be used with single-use
5539 # read-only transactions.
5540 #
5541 # See TransactionOptions.ReadOnly.max_staleness and
5542 # TransactionOptions.ReadOnly.min_read_timestamp.
5543 #
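In the request body described here, each timestamp bound is simply a field of the `readOnly` options. A few examples as plain Python dicts (the durations and timestamps are illustrative values, not recommendations):

```python
# Strong read (the default): see all transactions committed before the read.
strong_bound = {"readOnly": {"strong": True, "returnReadTimestamp": True}}

# Exact staleness: repeatable reads at a fixed timestamp or a fixed age.
exact_ts = {"readOnly": {"readTimestamp": "2014-10-02T15:01:23.045123456Z"}}
exact_age = {"readOnly": {"exactStaleness": "10s"}}

# Bounded staleness: Cloud Spanner picks the newest timestamp within the
# bound; only valid for single-use read-only transactions.
bounded = {"readOnly": {"maxStaleness": "15s"}}

# Exactly one bound may be set per transaction.
for opts in (strong_bound, exact_ts, exact_age, bounded):
    bound_fields = set(opts["readOnly"]) - {"returnReadTimestamp"}
    assert len(bound_fields) == 1
```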
5544 # ### Old Read Timestamps and Garbage Collection
5545 #
5546 # Cloud Spanner continuously garbage collects deleted and overwritten data
5547 # in the background to reclaim storage space. This process is known
5548 # as "version GC". By default, version GC reclaims versions after they
5549 # are one hour old. Because of this, Cloud Spanner cannot perform reads
5550 # at read timestamps more than one hour in the past. This
5551 # restriction also applies to in-progress reads and/or SQL queries whose
5552 # timestamps become too old while executing. Reads and SQL queries with
5553 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
5554 #
5555 # ## Partitioned DML Transactions
5556 #
5557 # Partitioned DML transactions are used to execute DML statements with a
5558 # different execution strategy that provides different, and often better,
5559 # scalability properties for large, table-wide operations than DML in a
5560 # ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP
5561 # workload, should use ReadWrite transactions.
5562 #
5563 # Partitioned DML partitions the keyspace and runs the DML statement on each
5564 # partition in separate, internal transactions. These transactions commit
5565 # automatically when complete, and run independently from one another.
5566 #
5567 # To reduce lock contention, this execution strategy only acquires read locks
5568 # on rows that match the WHERE clause of the statement. Additionally, the
5569 # smaller per-partition transactions hold locks for less time.
5570 #
5571 # That said, Partitioned DML is not a drop-in replacement for standard DML used
5572 # in ReadWrite transactions.
5573 #
5574 # - The DML statement must be fully-partitionable. Specifically, the statement
5575 # must be expressible as the union of many statements which each access only
5576 # a single row of the table.
5577 #
5578 # - The statement is not applied atomically to all rows of the table. Rather,
5579 # the statement is applied atomically to partitions of the table, in
5580 # independent transactions. Secondary index rows are updated atomically
5581 # with the base table rows.
5582 #
5583 # - Partitioned DML does not guarantee exactly-once execution semantics
5584 # against a partition. The statement will be applied at least once to each
5585 # partition. It is strongly recommended that the DML statement be
5586 # idempotent to avoid unexpected results. For instance, it is potentially
5587 # dangerous to run a statement such as
5588 # `UPDATE table SET column = column + 1` as it could be run multiple times
5589 # against some rows.
5590 #
5591 # - The partitions are committed automatically - there is no support for
5592 # Commit or Rollback. If the call returns an error, or if the client issuing
5593 # the ExecuteSql call dies, it is possible that some rows had the statement
5594 # executed on them successfully. It is also possible that the statement was
5595 # never executed against other rows.
5596 #
5597 # - Partitioned DML transactions may only contain the execution of a single
5598 # DML statement via ExecuteSql or ExecuteStreamingSql.
5599 #
5600 # - If any error is encountered during the execution of the partitioned DML
5601 # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
5602 # value that cannot be stored due to schema constraints), then the
5603 # operation is stopped at that point and an error is returned. It is
5604 # possible that at this point, some partitions have been committed (or even
5605 # committed multiple times), and other partitions have not been run at all.
5606 #
5607 # Given the above, Partitioned DML is a good fit for large, database-wide
5608 # operations that are idempotent, such as deleting old rows from a very large
5609 # table.
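The at-least-once caveat above can be illustrated without a database: re-applying an idempotent statement to a row leaves the same result, while the `column = column + 1` style drifts. This is a simulation, not client code:

```python
def apply_twice(rows, stmt):
    # Partitioned DML may re-run a statement against some rows, so
    # model "at least once" here as two applications per row.
    for _ in range(2):
        rows = [stmt(r) for r in rows]
    return rows

rows = [1, 2, 3]

# Idempotent: setting a column to a constant survives re-execution.
set_to_zero = lambda r: 0
assert apply_twice(rows, set_to_zero) == [0, 0, 0]

# Not idempotent: an increment drifts when applied more than once.
increment = lambda r: r + 1
assert apply_twice(rows, increment) == [3, 4, 5]  # incremented twice, not once
```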
5610 "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
5611 #
5612 # Authorization to begin a read-write transaction requires
5613 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
5614 # on the `session` resource.
5615 # transaction type has no options.
5616 },
5617 "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
5618 #
5619 # Authorization to begin a read-only transaction requires
5620 # `spanner.databases.beginReadOnlyTransaction` permission
5621 # on the `session` resource.
5622 "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
5623 #
5624 # This is useful for requesting fresher data than some previous
5625 # read, or data that is fresh enough to observe the effects of some
5626 # previously committed transaction whose timestamp is known.
5627 #
5628 # Note that this option can only be used in single-use transactions.
5629 #
5630 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
5631 # Example: `"2014-10-02T15:01:23.045123456Z"`.
5632 "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
5633 # reads at a specific timestamp are repeatable; the same read at
5634 # the same timestamp always returns the same data. If the
5635 # timestamp is in the future, the read will block until the
5636 # specified timestamp, modulo the read's deadline.
5637 #
5638 # Useful for large scale consistent reads such as mapreduces, or
5639 # for coordinating many reads against a consistent snapshot of the
5640 # data.
5641 #
5642 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
5643 # Example: `"2014-10-02T15:01:23.045123456Z"`.
5644 "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
5645 # seconds. Guarantees that all writes that have committed more
5646 # than the specified number of seconds ago are visible. Because
5647 # Cloud Spanner chooses the exact timestamp, this mode works even if
5648 # the client's local clock is substantially skewed from Cloud Spanner
5649 # commit timestamps.
5650 #
5651 # Useful for reading the freshest data available at a nearby
5652 # replica, while bounding the possible staleness if the local
5653 # replica has fallen behind.
5654 #
5655 # Note that this option can only be used in single-use
5656 # transactions.
5657 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
5658 # old. The timestamp is chosen soon after the read is started.
5659 #
5660 # Guarantees that all writes that have committed more than the
5661 # specified number of seconds ago are visible. Because Cloud Spanner
5662 # chooses the exact timestamp, this mode works even if the client's
5663 # local clock is substantially skewed from Cloud Spanner commit
5664 # timestamps.
5665 #
5666 # Useful for reading at nearby replicas without the distributed
5667 # timestamp negotiation overhead of `max_staleness`.
5668 "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
5669 # the Transaction message that describes the transaction.
5670 "strong": True or False, # Read at a timestamp where all previously committed transactions
5671 # are visible.
5672 },
5673 "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
5674 #
5675 # Authorization to begin a Partitioned DML transaction requires
5676 # `spanner.databases.beginPartitionedDmlTransaction` permission
5677 # on the `session` resource.
5678 },
5679 },
5680 "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
5681 # This is the most efficient way to execute a transaction that
5682 # consists of a single SQL query.
5683 #
5684 #
5685 # Each session can have at most one active transaction at a time. After the
5686 # active transaction is completed, the session can immediately be
5687 # re-used for the next transaction. It is not necessary to create a
5688 # new session for each transaction.
5689 #
5690 # # Transaction Modes
5691 #
5692 # Cloud Spanner supports three transaction modes:
5693 #
5694 # 1. Locking read-write. This type of transaction is the only way
5695 # to write data into Cloud Spanner. These transactions rely on
5696 # pessimistic locking and, if necessary, two-phase commit.
5697 # Locking read-write transactions may abort, requiring the
5698 # application to retry.
5699 #
5700 # 2. Snapshot read-only. This transaction type provides guaranteed
5701 # consistency across several reads, but does not allow
5702 # writes. Snapshot read-only transactions can be configured to
5703 # read at timestamps in the past. Snapshot read-only
5704 # transactions do not need to be committed.
5705 #
5706 # 3. Partitioned DML. This type of transaction is used to execute
5707 # a single Partitioned DML statement. Partitioned DML partitions
5708 # the key space and runs the DML statement over each partition
5709 # in parallel using separate, internal transactions that commit
5710 # independently. Partitioned DML transactions do not need to be
5711 # committed.
5712 #
5713 # For transactions that only read, snapshot read-only transactions
5714 # provide simpler semantics and are almost always faster. In
5715 # particular, read-only transactions do not take locks, so they do
5716 # not conflict with read-write transactions. As a consequence of not
5717 # taking locks, they also do not abort, so retry loops are not needed.
5718 #
5719 # Transactions may only read/write data in a single database. They
5720 # may, however, read/write data in different tables within that
5721 # database.
5722 #
5723 # ## Locking Read-Write Transactions
5724 #
5725 # Locking transactions may be used to atomically read-modify-write
5726 # data anywhere in a database. This type of transaction is externally
5727 # consistent.
5728 #
5729 # Clients should attempt to minimize the amount of time a transaction
5730 # is active. Faster transactions commit with higher probability
5731 # and cause less contention. Cloud Spanner attempts to keep read locks
5732 # active as long as the transaction continues to do reads, and the
5733 # transaction has not been terminated by
5734 # Commit or
5735 # Rollback. Long periods of
5736 # inactivity at the client may cause Cloud Spanner to release a
5737 # transaction's locks and abort it.
5738 #
5739 # Conceptually, a read-write transaction consists of zero or more
5740 # reads or SQL statements followed by
5741 # Commit. At any time before
5742 # Commit, the client can send a
5743 # Rollback request to abort the
5744 # transaction.
5745 #
5746 # ### Semantics
5747 #
5748 # Cloud Spanner can commit the transaction if all read locks it acquired
5749 # are still valid at commit time, and it is able to acquire write
5750 # locks for all writes. Cloud Spanner can abort the transaction for any
5751 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
5752 # that the transaction has not modified any user data in Cloud Spanner.
5753 #
5754 # Unless the transaction commits, Cloud Spanner makes no guarantees about
5755 # how long the transaction's locks were held for. It is an error to
5756 # use Cloud Spanner locks for any sort of mutual exclusion other than
5757 # between Cloud Spanner transactions themselves.
5758 #
5759 # ### Retrying Aborted Transactions
5760 #
5761 # When a transaction aborts, the application can choose to retry the
5762 # whole transaction again. To maximize the chances of successfully
5763 # committing the retry, the client should execute the retry in the
5764 # same session as the original attempt. The original session's lock
5765 # priority increases with each consecutive abort, meaning that each
5766 # attempt has a slightly better chance of success than the previous.
5767 #
5768 # Under some circumstances (e.g., many transactions attempting to
5769 # modify the same row(s)), a transaction can abort many times in a
5770 # short period before successfully committing. Thus, it is not a good
5771 # idea to cap the number of retries a transaction can attempt;
5772 # instead, it is better to limit the total amount of wall time spent
5773 # retrying.
5774 #
5775 # ### Idle Transactions
5776 #
5777 # A transaction is considered idle if it has no outstanding reads or
5778 # SQL queries and has not started a read or SQL query within the last 10
5779 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
5780 # don't hold on to locks indefinitely. In that case, the commit will
5781 # fail with error `ABORTED`.
5782 #
5783 # If this behavior is undesirable, periodically executing a simple
5784 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
5785 # transaction from becoming idle.
5786 #
5787 # ## Snapshot Read-Only Transactions
5788 #
5789 # Snapshot read-only transactions provide a simpler method than
5790 # locking read-write transactions for doing several consistent
5791 # reads. However, this type of transaction does not support writes.
5792 #
5793 # Snapshot transactions do not take locks. Instead, they work by
5794 # choosing a Cloud Spanner timestamp, then executing all reads at that
5795 # timestamp. Since they do not acquire locks, they do not block
5796 # concurrent read-write transactions.
5797 #
5798 # Unlike locking read-write transactions, snapshot read-only
5799 # transactions never abort. They can fail if the chosen read
5800 # timestamp is garbage collected; however, the default garbage
5801 # collection policy is generous enough that most applications do not
5802 # need to worry about this in practice.
5803 #
5804 # Snapshot read-only transactions do not need to call
5805 # Commit or
5806 # Rollback (and in fact are not
5807 # permitted to do so).
5808 #
5809 # To execute a snapshot transaction, the client specifies a timestamp
5810 # bound, which tells Cloud Spanner how to choose a read timestamp.
5811 #
5812 # The types of timestamp bound are:
5813 #
5814 # - Strong (the default).
5815 # - Bounded staleness.
5816 # - Exact staleness.
5817 #
5818 # If the Cloud Spanner database to be read is geographically distributed,
5819 # stale read-only transactions can execute more quickly than strong
5820 # or read-write transactions, because they are able to execute far
5821 # from the leader replica.
5822 #
5823 # Each type of timestamp bound is discussed in detail below.
5824 #
5825 # ### Strong
5826 #
5827 # Strong reads are guaranteed to see the effects of all transactions
5828 # that have committed before the start of the read. Furthermore, all
5829 # rows yielded by a single read are consistent with each other -- if
5830 # any part of the read observes a transaction, all parts of the read
5831 # see the transaction.
5832 #
5833 # Strong reads are not repeatable: two consecutive strong read-only
5834 # transactions might return inconsistent results if there are
5835 # concurrent writes. If consistency across reads is required, the
5836 # reads should be executed within a transaction or at an exact read
5837 # timestamp.
5838 #
5839 # See TransactionOptions.ReadOnly.strong.
5840 #
5841 # ### Exact Staleness
5842 #
5843 # These timestamp bounds execute reads at a user-specified
5844 # timestamp. Reads at a timestamp are guaranteed to see a consistent
5845 # prefix of the global transaction history: they observe
5846 # modifications done by all transactions with a commit timestamp &lt;=
5847 # the read timestamp, and observe none of the modifications done by
5848 # transactions with a larger commit timestamp. They will block until
5849 # all conflicting transactions that may be assigned commit timestamps
5850 # &lt;= the read timestamp have finished.
5851 #
5852 # The timestamp can either be expressed as an absolute Cloud Spanner commit
5853 # timestamp or a staleness relative to the current time.
5854 #
5855 # These modes do not require a "negotiation phase" to pick a
5856 # timestamp. As a result, they execute slightly faster than the
5857 # equivalent boundedly stale concurrency modes. On the other hand,
5858 # boundedly stale reads usually return fresher results.
5859 #
5860 # See TransactionOptions.ReadOnly.read_timestamp and
5861 # TransactionOptions.ReadOnly.exact_staleness.
5862 #
5863 # ### Bounded Staleness
5864 #
5865 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
5866 # subject to a user-provided staleness bound. Cloud Spanner chooses the
5867 # newest timestamp within the staleness bound that allows execution
5868 # of the reads at the closest available replica without blocking.
5869 #
5870 # All rows yielded are consistent with each other -- if any part of
5871 # the read observes a transaction, all parts of the read see the
5872 # transaction. Boundedly stale reads are not repeatable: two stale
5873 # reads, even if they use the same staleness bound, can execute at
5874 # different timestamps and thus return inconsistent results.
5875 #
5876 # Boundedly stale reads execute in two phases: the first phase
5877 # negotiates a timestamp among all replicas needed to serve the
5878 # read. In the second phase, reads are executed at the negotiated
5879 # timestamp.
5880 #
5881 # As a result of the two-phase execution, bounded staleness reads are
5882 # usually a little slower than comparable exact staleness
5883 # reads. However, they are typically able to return fresher
5884 # results, and are more likely to execute at the closest replica.
5885 #
5886 # Because the timestamp negotiation requires up-front knowledge of
5887 # which rows will be read, it can only be used with single-use
5888 # read-only transactions.
5889 #
5890 # See TransactionOptions.ReadOnly.max_staleness and
5891 # TransactionOptions.ReadOnly.min_read_timestamp.
5892 #
5893 # ### Old Read Timestamps and Garbage Collection
5894 #
5895 # Cloud Spanner continuously garbage collects deleted and overwritten data
5896 # in the background to reclaim storage space. This process is known
5897 # as "version GC". By default, version GC reclaims versions after they
5898 # are one hour old. Because of this, Cloud Spanner cannot perform reads
5899 # at read timestamps more than one hour in the past. This
5900 # restriction also applies to in-progress reads and/or SQL queries whose
5901 # timestamps become too old while executing. Reads and SQL queries with
5902 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
5903 #
5904 # ## Partitioned DML Transactions
5905 #
5906 # Partitioned DML transactions are used to execute DML statements with a
5907 # different execution strategy that provides different, and often better,
5908 # scalability properties for large, table-wide operations than DML in a
5909 # ReadWrite transaction. Smaller-scoped statements, such as those in an OLTP
5910 # workload, should use ReadWrite transactions.
5911 #
5912 # Partitioned DML partitions the keyspace and runs the DML statement on each
5913 # partition in separate, internal transactions. These transactions commit
5914 # automatically when complete, and run independently from one another.
5915 #
5916 # To reduce lock contention, this execution strategy only acquires read locks
5917 # on rows that match the WHERE clause of the statement. Additionally, the
5918 # smaller per-partition transactions hold locks for less time.
5919 #
5920 # That said, Partitioned DML is not a drop-in replacement for standard DML used
5921 # in ReadWrite transactions.
5922 #
5923 # - The DML statement must be fully-partitionable. Specifically, the statement
5924 # must be expressible as the union of many statements which each access only
5925 # a single row of the table.
5926 #
5927 # - The statement is not applied atomically to all rows of the table. Rather,
5928 # the statement is applied atomically to partitions of the table, in
5929 # independent transactions. Secondary index rows are updated atomically
5930 # with the base table rows.
5931 #
5932 # - Partitioned DML does not guarantee exactly-once execution semantics
5933 # against a partition. The statement will be applied at least once to each
5934 # partition. It is strongly recommended that the DML statement be
5935 # idempotent to avoid unexpected results. For instance, it is potentially
5936 # dangerous to run a statement such as
5937 # `UPDATE table SET column = column + 1` as it could be run multiple times
5938 # against some rows.
5939 #
5940 # - The partitions are committed automatically - there is no support for
5941 # Commit or Rollback. If the call returns an error, or if the client issuing
5942 # the ExecuteSql call dies, it is possible that some rows had the statement
5943 # executed on them successfully. It is also possible that the statement was
5944 # never executed against other rows.
5945 #
5946 # - Partitioned DML transactions may only contain the execution of a single
5947 # DML statement via ExecuteSql or ExecuteStreamingSql.
5948 #
5949 # - If any error is encountered during the execution of the partitioned DML
5950 # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
5951 # value that cannot be stored due to schema constraints), then the
5952 # operation is stopped at that point and an error is returned. It is
5953 # possible that at this point, some partitions have been committed (or even
5954 # committed multiple times), and other partitions have not been run at all.
5955 #
5956 # Given the above, Partitioned DML is a good fit for large, database-wide
5957 # operations that are idempotent, such as deleting old rows from a very large
5958 # table.
5959 "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
5960 #
5961 # Authorization to begin a read-write transaction requires
5962 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
5963 # on the `session` resource.
5964 # transaction type has no options.
5965 },
5966 "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
5967 #
5968 # Authorization to begin a read-only transaction requires
5969 # `spanner.databases.beginReadOnlyTransaction` permission
5970 # on the `session` resource.
5971 "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
5972 #
5973 # This is useful for requesting fresher data than some previous
5974 # read, or data that is fresh enough to observe the effects of some
5975 # previously committed transaction whose timestamp is known.
5976 #
5977 # Note that this option can only be used in single-use transactions.
5978 #
5979 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
5980 # Example: `"2014-10-02T15:01:23.045123456Z"`.
5981 "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
5982 # reads at a specific timestamp are repeatable; the same read at
5983 # the same timestamp always returns the same data. If the
5984 # timestamp is in the future, the read will block until the
5985 # specified timestamp, modulo the read's deadline.
5986 #
5987 # Useful for large scale consistent reads such as mapreduces, or
5988 # for coordinating many reads against a consistent snapshot of the
5989 # data.
5990 #
5991 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
5992 # Example: `"2014-10-02T15:01:23.045123456Z"`.
5993 "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
5994 # seconds. Guarantees that all writes that have committed more
5995 # than the specified number of seconds ago are visible. Because
5996 # Cloud Spanner chooses the exact timestamp, this mode works even if
5997 # the client's local clock is substantially skewed from Cloud Spanner
5998 # commit timestamps.
5999 #
6000 # Useful for reading the freshest data available at a nearby
6001 # replica, while bounding the possible staleness if the local
6002 # replica has fallen behind.
6003 #
6004 # Note that this option can only be used in single-use
6005 # transactions.
6006 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
6007 # old. The timestamp is chosen soon after the read is started.
6008 #
6009 # Guarantees that all writes that have committed more than the
6010 # specified number of seconds ago are visible. Because Cloud Spanner
6011 # chooses the exact timestamp, this mode works even if the client's
6012 # local clock is substantially skewed from Cloud Spanner commit
6013 # timestamps.
6014 #
6015 # Useful for reading at nearby replicas without the distributed
6016 # timestamp negotiation overhead of `max_staleness`.
6017 "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
6018 # the Transaction message that describes the transaction.
6019 "strong": True or False, # Read at a timestamp where all previously committed transactions
6020 # are visible.
6021 },
6022 "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
6023 #
6024 # Authorization to begin a Partitioned DML transaction requires
6025 # `spanner.databases.beginPartitionedDmlTransaction` permission
6026 # on the `session` resource.
6027 },
6028 },
6029 "id": "A String", # Execute the read or SQL query in a previously-started transaction.
6030 },
6031 "keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
6032 # primary keys of the rows in table to be yielded, unless index
6033 # is present. If index is present, then key_set instead names
6034 # index keys in index.
6035 #
6036 # It is not an error for the `key_set` to name rows that do not
6037 # exist in the database. Read yields nothing for nonexistent rows.
6038 # the keys are expected to be in the same table or index. The keys need
6039 # not be sorted in any particular way.
6040 #
6041 # If the same key is specified multiple times in the set (for example
6042 # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
6043 # behaves as if the key were only specified once.
6044 "ranges": [ # A list of key ranges. See KeyRange for more information about
6045 # key range specifications.
6046 { # KeyRange represents a range of rows in a table or index.
6047 #
6048 # A range has a start key and an end key. These keys can be open or
6049 # closed, indicating if the range includes rows with that key.
6050 #
6051 # Keys are represented by lists, where the ith value in the list
6052 # corresponds to the ith component of the table or index primary key.
6053 # Individual values are encoded as described
6054 # here.
6055 #
6056 # For example, consider the following table definition:
6057 #
6058 # CREATE TABLE UserEvents (
6059 # UserName STRING(MAX),
6060 # EventDate STRING(10)
6061 # ) PRIMARY KEY(UserName, EventDate);
6062 #
6063 # The following keys name rows in this table:
6064 #
6065 # "Bob", "2014-09-23"
6066 #
6067 # Since the `UserEvents` table's `PRIMARY KEY` clause names two
6068 # columns, each `UserEvents` key has two elements; the first is the
6069 # `UserName`, and the second is the `EventDate`.
6070 #
6071 # Key ranges with multiple components are interpreted
6072 # lexicographically by component using the table or index key's declared
6073 # sort order. For example, the following range returns all events for
6074 # user `"Bob"` that occurred in the year 2015:
6075 #
6076 # "start_closed": ["Bob", "2015-01-01"]
6077 # "end_closed": ["Bob", "2015-12-31"]
6078 #
6079 # Start and end keys can omit trailing key components. This affects the
6080 # inclusion and exclusion of rows that exactly match the provided key
6081 # components: if the key is closed, then rows that exactly match the
6082 # provided components are included; if the key is open, then rows
6083 # that exactly match are not included.
6084 #
6085 # For example, the following range includes all events for `"Bob"` that
6086 # occurred during and after the year 2000:
6087 #
6088 # "start_closed": ["Bob", "2000-01-01"]
6089 # "end_closed": ["Bob"]
6090 #
6091 # The next example retrieves all events for `"Bob"`:
6092 #
6093 # "start_closed": ["Bob"]
6094 # "end_closed": ["Bob"]
6095 #
6096 # To retrieve events before the year 2000:
6097 #
6098 # "start_closed": ["Bob"]
6099 # "end_open": ["Bob", "2000-01-01"]
6100 #
6101 # The following range includes all rows in the table:
6102 #
6103 # "start_closed": []
6104 # "end_closed": []
6105 #
6106 # This range returns all users whose `UserName` begins with any
6107 # character from A to C:
6108 #
6109 # "start_closed": ["A"]
6110 # "end_open": ["D"]
6111 #
6112 # This range returns all users whose `UserName` begins with B:
6113 #
6114 # "start_closed": ["B"]
6115 # "end_open": ["C"]
6116 #
6117 # Key ranges honor column sort order. For example, suppose a table is
6118 # defined as follows:
6119 #
6120 #     CREATE TABLE DescendingSortedTable (
6121 # Key INT64,
6122 # ...
6123 # ) PRIMARY KEY(Key DESC);
6124 #
6125 # The following range retrieves all rows with key values between 1
6126 # and 100 inclusive:
6127 #
6128 # "start_closed": ["100"]
6129 # "end_closed": ["1"]
6130 #
6131 # Note that 100 is passed as the start, and 1 is passed as the end,
6132 # because `Key` is a descending column in the schema.
6133 "endOpen": [ # If the end is open, then the range excludes rows whose first
6134 # `len(end_open)` key columns exactly match `end_open`.
6135 "",
6136 ],
6137 "startOpen": [ # If the start is open, then the range excludes rows whose first
6138 # `len(start_open)` key columns exactly match `start_open`.
6139 "",
6140 ],
6141 "endClosed": [ # If the end is closed, then the range includes all rows whose
6142 # first `len(end_closed)` key columns exactly match `end_closed`.
6143 "",
6144 ],
6145 "startClosed": [ # If the start is closed, then the range includes all rows whose
6146 # first `len(start_closed)` key columns exactly match `start_closed`.
6147 "",
6148 ],
6149 },
6150 ],
6151 "keys": [ # A list of specific keys. Entries in `keys` should have exactly as
6152 # many elements as there are columns in the primary or index key
6153 # with which this `KeySet` is used. Individual key values are
6154 # encoded as described here.
6155 [
6156 "",
6157 ],
6158 ],
6159 "all": True or False, # For convenience `all` can be set to `true` to indicate that this
6160 # `KeySet` matches all keys in the table or index. Note that any keys
6161 # specified in `keys` or `ranges` are only yielded once.
6162 },
6163 "partitionOptions": { # Options for a PartitionQueryRequest and # Additional options that affect how many partitions are created.
6164 # PartitionReadRequest.
6165 "maxPartitions": "A String", # **Note:** This hint is currently ignored by PartitionQuery and
6166 # PartitionRead requests.
6167 #
6168 # The desired maximum number of partitions to return. For example, this may
6169 # be set to the number of workers available. The default for this option
6170 # is currently 10,000. The maximum value is currently 200,000. This is only
6171 # a hint. The actual number of partitions returned may be smaller or larger
6172 # than this requested maximum count.
6173 "partitionSizeBytes": "A String", # **Note:** This hint is currently ignored by PartitionQuery and
6174 # PartitionRead requests.
6175 #
6176 # The desired data size for each partition generated. The default for this
6177 # option is currently 1 GiB. This is only a hint. The actual size of each
6178 # partition may be smaller or larger than this requested size.
6179 },
6180 "table": "A String", # Required. The name of the table in the database to be read.
6181 "columns": [ # The columns of table to be returned for each row matching
6182 # this request.
6183 "A String",
6184 ],
6185 }
6186
6187 x__xgafv: string, V1 error format.
6188 Allowed values
6189 1 - v1 error format
6190 2 - v2 error format
6191
6192Returns:
6193 An object of the form:
6194
6195 { # The response for PartitionQuery
6196 # or PartitionRead
6197 "transaction": { # A transaction. # Transaction created by this request.
6198 "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
6199 # for the transaction. Not returned by default: see
6200 # TransactionOptions.ReadOnly.return_read_timestamp.
6201 #
6202 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
6203 # Example: `"2014-10-02T15:01:23.045123456Z"`.
6204 "id": "A String", # `id` may be used to identify the transaction in subsequent
6205 # Read,
6206 # ExecuteSql,
6207 # Commit, or
6208 # Rollback calls.
6209 #
6210 # Single-use read-only transactions do not have IDs, because
6211 # single-use transactions do not support multiple requests.
6212 },
6213 "partitions": [ # Partitions created by this request.
6214 { # Information returned for each partition returned in a
6215 # PartitionResponse.
6216 "partitionToken": "A String", # This token can be passed to Read, StreamingRead, ExecuteSql, or
6217 # ExecuteStreamingSql requests to restrict the results to those identified by
6218 # this partition token.
6219 },
6220 ],
6221   }</pre>
6222</div>
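The partition tokens returned above are meant to be fed back into subsequent Read or StreamingRead calls. The following is a minimal sketch of the request bodies involved, not a definitive implementation: the table name, session path, and token value are placeholders, and the `spanner_v1` service object is assumed to come from `googleapiclient.discovery.build('spanner', 'v1')`.

```python
# Sketch: a PartitionRead request body, followed by per-partition Read bodies.
# All names here (table, columns, token) are illustrative placeholders.

partition_read_body = {
    "table": "UserEvents",
    "columns": ["UserName", "EventDate"],
    "keySet": {"all": True},
    "partitionOptions": {
        # Both fields are hints only; Cloud Spanner may return more or fewer
        # partitions, and the docs above note these hints are currently ignored.
        "maxPartitions": "100",
        "partitionSizeBytes": str(1 << 30),  # ~1 GiB, the documented default
    },
}

# Assumed call shape (requires a session and a read-only transaction; the
# partition tokens are only valid within that same transaction):
# response = spanner_v1.projects().instances().databases().sessions().partitionRead(
#     session=session_name, body=partition_read_body).execute()

def read_body_for(token):
    """Build a Read request body restricted to one partition token."""
    return {
        "table": "UserEvents",
        "columns": ["UserName", "EventDate"],
        "keySet": {"all": True},
        "partitionToken": token,
    }

# Each entry in response["partitions"] supplies one such token:
body = read_body_for("example-token")
```

Each per-partition Read can then run on a separate worker, which is the intended use of PartitionRead for parallel export-style workloads.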
6223
6224<div class="method">
6225     <code class="details" id="read">read(session, body=None, x__xgafv=None)</code>
6226   <pre>Reads rows from the database using key lookups and scans, as a
6227simple key/value style alternative to
6228ExecuteSql. This method cannot be used to
6229return a result set larger than 10 MiB; if the read matches more
6230data than that, the read fails with a `FAILED_PRECONDITION`
6231error.
6232
6233Reads inside read-write transactions might return `ABORTED`. If
6234this occurs, the application should restart the transaction from
6235the beginning. See Transaction for more details.
6236
6237Larger result sets can be yielded in streaming fashion by calling
6238StreamingRead instead.
6239
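For orientation, a hedged sketch of a single-use strong read through this method. The session path is a placeholder, and the discovery-based service object is assumed to already exist (e.g. from `googleapiclient.discovery.build('spanner', 'v1')`); the request body fields match the structure documented below.

```python
# Sketch: request body for a simple key lookup via sessions().read().
# Table and key values are illustrative placeholders.

read_body = {
    "table": "UserEvents",
    "columns": ["UserName", "EventDate"],
    # Read the single row keyed by ("Bob", "2014-09-23").
    "keySet": {"keys": [["Bob", "2014-09-23"]]},
    # Omitting "transaction" defaults to a temporary strong
    # read-only transaction, as described below.
}

# Assumed call shape:
# result_set = spanner_v1.projects().instances().databases().sessions().read(
#     session=session_name, body=read_body).execute()
```

The response is a ResultSet; for results larger than 10 MiB, the same body would be sent to streamingRead instead.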
6240Args:
6241 session: string, Required. The session in which the read should be performed. (required)
6242   body: object, The request body.
6243     The object takes the form of:
6244
6245{ # The request for Read and
6246 # StreamingRead.
6247 "index": "A String", # If non-empty, the name of an index on table. This index is
6248 # used instead of the table primary key when interpreting key_set
6249 # and sorting result rows. See key_set for further information.
6250 "transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
6251 # temporary read-only transaction with strong concurrency.
6252 # Read or
6253 # ExecuteSql call runs.
6254 #
6255 # See TransactionOptions for more information about transactions.
6256 "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
6257 # it. The transaction ID of the new transaction is returned in
6258 # ResultSetMetadata.transaction, which is a Transaction.
6259 #
6260 #
6261 # Each session can have at most one active transaction at a time. After the
6262 # active transaction is completed, the session can immediately be
6263 # re-used for the next transaction. It is not necessary to create a
6264 # new session for each transaction.
6265 #
6266 # # Transaction Modes
6267 #
6268       # Cloud Spanner supports three transaction modes:
6269       #
6270 # 1. Locking read-write. This type of transaction is the only way
6271 # to write data into Cloud Spanner. These transactions rely on
6272 # pessimistic locking and, if necessary, two-phase commit.
6273 # Locking read-write transactions may abort, requiring the
6274 # application to retry.
6275 #
6276 # 2. Snapshot read-only. This transaction type provides guaranteed
6277 # consistency across several reads, but does not allow
6278 # writes. Snapshot read-only transactions can be configured to
6279 # read at timestamps in the past. Snapshot read-only
6280 # transactions do not need to be committed.
6281 #
6282       # 3. Partitioned DML. This type of transaction is used to execute
6283 # a single Partitioned DML statement. Partitioned DML partitions
6284 # the key space and runs the DML statement over each partition
6285 # in parallel using separate, internal transactions that commit
6286 # independently. Partitioned DML transactions do not need to be
6287 # committed.
6288 #
6289       # For transactions that only read, snapshot read-only transactions
6290 # provide simpler semantics and are almost always faster. In
6291 # particular, read-only transactions do not take locks, so they do
6292 # not conflict with read-write transactions. As a consequence of not
6293 # taking locks, they also do not abort, so retry loops are not needed.
6294 #
6295 # Transactions may only read/write data in a single database. They
6296 # may, however, read/write data in different tables within that
6297 # database.
6298 #
6299 # ## Locking Read-Write Transactions
6300 #
6301 # Locking transactions may be used to atomically read-modify-write
6302 # data anywhere in a database. This type of transaction is externally
6303 # consistent.
6304 #
6305 # Clients should attempt to minimize the amount of time a transaction
6306 # is active. Faster transactions commit with higher probability
6307 # and cause less contention. Cloud Spanner attempts to keep read locks
6308 # active as long as the transaction continues to do reads, and the
6309 # transaction has not been terminated by
6310 # Commit or
6311 # Rollback. Long periods of
6312 # inactivity at the client may cause Cloud Spanner to release a
6313 # transaction's locks and abort it.
6314 #
6315       # Conceptually, a read-write transaction consists of zero or more
6316       # reads or SQL statements followed by
6317       # Commit. At any time before
6318 # Commit, the client can send a
6319 # Rollback request to abort the
6320 # transaction.
6321 #
6322 # ### Semantics
6323 #
6324 # Cloud Spanner can commit the transaction if all read locks it acquired
6325 # are still valid at commit time, and it is able to acquire write
6326 # locks for all writes. Cloud Spanner can abort the transaction for any
6327 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
6328 # that the transaction has not modified any user data in Cloud Spanner.
6329 #
6330 # Unless the transaction commits, Cloud Spanner makes no guarantees about
6331 # how long the transaction's locks were held for. It is an error to
6332 # use Cloud Spanner locks for any sort of mutual exclusion other than
6333 # between Cloud Spanner transactions themselves.
6334 #
6335 # ### Retrying Aborted Transactions
6336 #
6337 # When a transaction aborts, the application can choose to retry the
6338 # whole transaction again. To maximize the chances of successfully
6339 # committing the retry, the client should execute the retry in the
6340 # same session as the original attempt. The original session's lock
6341 # priority increases with each consecutive abort, meaning that each
6342 # attempt has a slightly better chance of success than the previous.
6343 #
6344 # Under some circumstances (e.g., many transactions attempting to
6345 # modify the same row(s)), a transaction can abort many times in a
6346 # short period before successfully committing. Thus, it is not a good
6347 # idea to cap the number of retries a transaction can attempt;
6348 # instead, it is better to limit the total amount of wall time spent
6349 # retrying.
6350 #
6351 # ### Idle Transactions
6352 #
6353 # A transaction is considered idle if it has no outstanding reads or
6354 # SQL queries and has not started a read or SQL query within the last 10
6355 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
6356 # don't hold on to locks indefinitely. In that case, the commit will
6357 # fail with error `ABORTED`.
6358 #
6359 # If this behavior is undesirable, periodically executing a simple
6360 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
6361 # transaction from becoming idle.
6362 #
6363 # ## Snapshot Read-Only Transactions
6364 #
6365       # Snapshot read-only transactions provide a simpler method than
6366 # locking read-write transactions for doing several consistent
6367 # reads. However, this type of transaction does not support writes.
6368 #
6369 # Snapshot transactions do not take locks. Instead, they work by
6370 # choosing a Cloud Spanner timestamp, then executing all reads at that
6371 # timestamp. Since they do not acquire locks, they do not block
6372 # concurrent read-write transactions.
6373 #
6374 # Unlike locking read-write transactions, snapshot read-only
6375 # transactions never abort. They can fail if the chosen read
6376 # timestamp is garbage collected; however, the default garbage
6377 # collection policy is generous enough that most applications do not
6378 # need to worry about this in practice.
6379 #
6380 # Snapshot read-only transactions do not need to call
6381 # Commit or
6382 # Rollback (and in fact are not
6383 # permitted to do so).
6384 #
6385 # To execute a snapshot transaction, the client specifies a timestamp
6386 # bound, which tells Cloud Spanner how to choose a read timestamp.
6387 #
6388 # The types of timestamp bound are:
6389 #
6390 # - Strong (the default).
6391 # - Bounded staleness.
6392 # - Exact staleness.
6393 #
6394 # If the Cloud Spanner database to be read is geographically distributed,
6395 # stale read-only transactions can execute more quickly than strong
6396       # or read-write transactions, because they are able to execute far
6397 # from the leader replica.
6398 #
6399 # Each type of timestamp bound is discussed in detail below.
6400 #
6401 # ### Strong
6402 #
6403 # Strong reads are guaranteed to see the effects of all transactions
6404 # that have committed before the start of the read. Furthermore, all
6405 # rows yielded by a single read are consistent with each other -- if
6406 # any part of the read observes a transaction, all parts of the read
6407 # see the transaction.
6408 #
6409 # Strong reads are not repeatable: two consecutive strong read-only
6410 # transactions might return inconsistent results if there are
6411 # concurrent writes. If consistency across reads is required, the
6412 # reads should be executed within a transaction or at an exact read
6413 # timestamp.
6414 #
6415 # See TransactionOptions.ReadOnly.strong.
6416 #
6417 # ### Exact Staleness
6418 #
6419 # These timestamp bounds execute reads at a user-specified
6420 # timestamp. Reads at a timestamp are guaranteed to see a consistent
6421 # prefix of the global transaction history: they observe
6422       # modifications done by all transactions with a commit timestamp &lt;=
6423       # the read timestamp, and observe none of the modifications done by
6424 # transactions with a larger commit timestamp. They will block until
6425 # all conflicting transactions that may be assigned commit timestamps
6426       # &lt;= the read timestamp have finished.
6427       #
6428 # The timestamp can either be expressed as an absolute Cloud Spanner commit
6429 # timestamp or a staleness relative to the current time.
6430 #
6431 # These modes do not require a "negotiation phase" to pick a
6432 # timestamp. As a result, they execute slightly faster than the
6433 # equivalent boundedly stale concurrency modes. On the other hand,
6434 # boundedly stale reads usually return fresher results.
6435 #
6436 # See TransactionOptions.ReadOnly.read_timestamp and
6437 # TransactionOptions.ReadOnly.exact_staleness.
6438 #
6439 # ### Bounded Staleness
6440 #
6441 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
6442 # subject to a user-provided staleness bound. Cloud Spanner chooses the
6443 # newest timestamp within the staleness bound that allows execution
6444 # of the reads at the closest available replica without blocking.
6445 #
6446 # All rows yielded are consistent with each other -- if any part of
6447 # the read observes a transaction, all parts of the read see the
6448 # transaction. Boundedly stale reads are not repeatable: two stale
6449 # reads, even if they use the same staleness bound, can execute at
6450 # different timestamps and thus return inconsistent results.
6451 #
6452 # Boundedly stale reads execute in two phases: the first phase
6453 # negotiates a timestamp among all replicas needed to serve the
6454 # read. In the second phase, reads are executed at the negotiated
6455 # timestamp.
6456 #
6457 # As a result of the two phase execution, bounded staleness reads are
6458 # usually a little slower than comparable exact staleness
6459 # reads. However, they are typically able to return fresher
6460 # results, and are more likely to execute at the closest replica.
6461 #
6462 # Because the timestamp negotiation requires up-front knowledge of
6463 # which rows will be read, it can only be used with single-use
6464 # read-only transactions.
6465 #
6466 # See TransactionOptions.ReadOnly.max_staleness and
6467 # TransactionOptions.ReadOnly.min_read_timestamp.
6468 #
6469 # ### Old Read Timestamps and Garbage Collection
6470 #
6471 # Cloud Spanner continuously garbage collects deleted and overwritten data
6472 # in the background to reclaim storage space. This process is known
6473 # as "version GC". By default, version GC reclaims versions after they
6474 # are one hour old. Because of this, Cloud Spanner cannot perform reads
6475 # at read timestamps more than one hour in the past. This
6476 # restriction also applies to in-progress reads and/or SQL queries whose
6477       # timestamps become too old while executing. Reads and SQL queries with
6478 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
6479       #
6480 # ## Partitioned DML Transactions
6481 #
6482 # Partitioned DML transactions are used to execute DML statements with a
6483 # different execution strategy that provides different, and often better,
6484 # scalability properties for large, table-wide operations than DML in a
6485 # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
6486 # should prefer using ReadWrite transactions.
6487 #
6488 # Partitioned DML partitions the keyspace and runs the DML statement on each
6489 # partition in separate, internal transactions. These transactions commit
6490 # automatically when complete, and run independently from one another.
6491 #
6492 # To reduce lock contention, this execution strategy only acquires read locks
6493 # on rows that match the WHERE clause of the statement. Additionally, the
6494 # smaller per-partition transactions hold locks for less time.
6495 #
6496 # That said, Partitioned DML is not a drop-in replacement for standard DML used
6497 # in ReadWrite transactions.
6498 #
6499 # - The DML statement must be fully-partitionable. Specifically, the statement
6500 # must be expressible as the union of many statements which each access only
6501 # a single row of the table.
6502 #
6503 # - The statement is not applied atomically to all rows of the table. Rather,
6504 # the statement is applied atomically to partitions of the table, in
6505 # independent transactions. Secondary index rows are updated atomically
6506 # with the base table rows.
6507 #
6508 # - Partitioned DML does not guarantee exactly-once execution semantics
6509 # against a partition. The statement will be applied at least once to each
6510 # partition. It is strongly recommended that the DML statement should be
6511 # idempotent to avoid unexpected results. For instance, it is potentially
6512 # dangerous to run a statement such as
6513 # `UPDATE table SET column = column + 1` as it could be run multiple times
6514 # against some rows.
6515 #
6516 # - The partitions are committed automatically - there is no support for
6517 # Commit or Rollback. If the call returns an error, or if the client issuing
6518 # the ExecuteSql call dies, it is possible that some rows had the statement
6519 # executed on them successfully. It is also possible that statement was
6520 # never executed against other rows.
6521 #
6522 # - Partitioned DML transactions may only contain the execution of a single
6523 # DML statement via ExecuteSql or ExecuteStreamingSql.
6524 #
6525 # - If any error is encountered during the execution of the partitioned DML
6526 # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
6527 # value that cannot be stored due to schema constraints), then the
6528 # operation is stopped at that point and an error is returned. It is
6529 # possible that at this point, some partitions have been committed (or even
6530 # committed multiple times), and other partitions have not been run at all.
6531 #
6532 # Given the above, Partitioned DML is good fit for large, database-wide,
6533 # operations that are idempotent, such as deleting old rows from a very large
6534 # table.
6535 "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
6536         #
6537 # Authorization to begin a read-write transaction requires
6538 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
6539 # on the `session` resource.
6540         # transaction type has no options.
6541       },
6542       "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
6543         #
6544 # Authorization to begin a read-only transaction requires
6545 # `spanner.databases.beginReadOnlyTransaction` permission
6546 # on the `session` resource.
6547         "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
6548             #
6549 # This is useful for requesting fresher data than some previous
6550 # read, or data that is fresh enough to observe the effects of some
6551 # previously committed transaction whose timestamp is known.
6552 #
6553 # Note that this option can only be used in single-use transactions.
6554             #
6555 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
6556 # Example: `"2014-10-02T15:01:23.045123456Z"`.
6557         "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
6558 # reads at a specific timestamp are repeatable; the same read at
6559 # the same timestamp always returns the same data. If the
6560 # timestamp is in the future, the read will block until the
6561 # specified timestamp, modulo the read's deadline.
6562 #
6563 # Useful for large scale consistent reads such as mapreduces, or
6564 # for coordinating many reads against a consistent snapshot of the
6565 # data.
6566 #
6567 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
6568 # Example: `"2014-10-02T15:01:23.045123456Z"`.
6569 "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
6570             # seconds. Guarantees that all writes that have committed more
6571 # than the specified number of seconds ago are visible. Because
6572 # Cloud Spanner chooses the exact timestamp, this mode works even if
6573 # the client's local clock is substantially skewed from Cloud Spanner
6574 # commit timestamps.
6575 #
6576 # Useful for reading the freshest data available at a nearby
6577 # replica, while bounding the possible staleness if the local
6578 # replica has fallen behind.
6579 #
6580 # Note that this option can only be used in single-use
6581 # transactions.
6582 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
6583 # old. The timestamp is chosen soon after the read is started.
6584 #
6585 # Guarantees that all writes that have committed more than the
6586 # specified number of seconds ago are visible. Because Cloud Spanner
6587 # chooses the exact timestamp, this mode works even if the client's
6588 # local clock is substantially skewed from Cloud Spanner commit
6589 # timestamps.
6590 #
6591 # Useful for reading at nearby replicas without the distributed
6592 # timestamp negotiation overhead of `max_staleness`.
6593         "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
6594 # the Transaction message that describes the transaction.
6595         "strong": True or False, # Read at a timestamp where all previously committed transactions
6596 # are visible.
6597 },
6598       "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
6599 #
6600 # Authorization to begin a Partitioned DML transaction requires
6601 # `spanner.databases.beginPartitionedDmlTransaction` permission
6602 # on the `session` resource.
6603 },
6604     },
6605 "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
6606 # This is the most efficient way to execute a transaction that
6607 # consists of a single SQL query.
6608 #
6609 #
6610 # Each session can have at most one active transaction at a time. After the
6611 # active transaction is completed, the session can immediately be
6612 # re-used for the next transaction. It is not necessary to create a
6613 # new session for each transaction.
6614 #
6615 # # Transaction Modes
6616 #
6617       # Cloud Spanner supports three transaction modes:
6618       #
6619 # 1. Locking read-write. This type of transaction is the only way
6620 # to write data into Cloud Spanner. These transactions rely on
6621 # pessimistic locking and, if necessary, two-phase commit.
6622 # Locking read-write transactions may abort, requiring the
6623 # application to retry.
6624 #
6625 # 2. Snapshot read-only. This transaction type provides guaranteed
6626 # consistency across several reads, but does not allow
6627 # writes. Snapshot read-only transactions can be configured to
6628 # read at timestamps in the past. Snapshot read-only
6629 # transactions do not need to be committed.
6630 #
6631       # 3. Partitioned DML. This type of transaction is used to execute
6632 # a single Partitioned DML statement. Partitioned DML partitions
6633 # the key space and runs the DML statement over each partition
6634 # in parallel using separate, internal transactions that commit
6635 # independently. Partitioned DML transactions do not need to be
6636 # committed.
6637 #
6638       # For transactions that only read, snapshot read-only transactions
6639 # provide simpler semantics and are almost always faster. In
6640 # particular, read-only transactions do not take locks, so they do
6641 # not conflict with read-write transactions. As a consequence of not
6642 # taking locks, they also do not abort, so retry loops are not needed.
6643 #
6644 # Transactions may only read/write data in a single database. They
6645 # may, however, read/write data in different tables within that
6646 # database.
6647 #
6648 # ## Locking Read-Write Transactions
6649 #
6650 # Locking transactions may be used to atomically read-modify-write
6651 # data anywhere in a database. This type of transaction is externally
6652 # consistent.
6653 #
6654 # Clients should attempt to minimize the amount of time a transaction
6655 # is active. Faster transactions commit with higher probability
6656 # and cause less contention. Cloud Spanner attempts to keep read locks
6657 # active as long as the transaction continues to do reads, and the
6658 # transaction has not been terminated by
6659 # Commit or
6660 # Rollback. Long periods of
6661 # inactivity at the client may cause Cloud Spanner to release a
6662 # transaction's locks and abort it.
6663 #
6664       # Conceptually, a read-write transaction consists of zero or more
6665       # reads or SQL statements followed by
6666       # Commit. At any time before
6667 # Commit, the client can send a
6668 # Rollback request to abort the
6669 # transaction.
6670 #
6671 # ### Semantics
6672 #
6673 # Cloud Spanner can commit the transaction if all read locks it acquired
6674 # are still valid at commit time, and it is able to acquire write
6675 # locks for all writes. Cloud Spanner can abort the transaction for any
6676 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
6677 # that the transaction has not modified any user data in Cloud Spanner.
6678 #
6679 # Unless the transaction commits, Cloud Spanner makes no guarantees about
6680 # how long the transaction's locks were held for. It is an error to
6681 # use Cloud Spanner locks for any sort of mutual exclusion other than
6682 # between Cloud Spanner transactions themselves.
6683 #
6684 # ### Retrying Aborted Transactions
6685 #
6686 # When a transaction aborts, the application can choose to retry the
6687 # whole transaction again. To maximize the chances of successfully
6688 # committing the retry, the client should execute the retry in the
6689 # same session as the original attempt. The original session's lock
6690 # priority increases with each consecutive abort, meaning that each
6691 # attempt has a slightly better chance of success than the previous.
6692 #
6693 # Under some circumstances (e.g., many transactions attempting to
6694 # modify the same row(s)), a transaction can abort many times in a
6695 # short period before successfully committing. Thus, it is not a good
6696 # idea to cap the number of retries a transaction can attempt;
6697 # instead, it is better to limit the total amount of wall time spent
6698 # retrying.
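#
# As a sketch, the wall-time cap described above can be implemented with
# an ordinary retry loop. `Aborted` below is a hypothetical stand-in for a
# commit attempt returning `ABORTED`; real client code would inspect the
# error returned by the Commit call instead:

```python
import random
import time


class Aborted(Exception):
    """Hypothetical stand-in for a commit attempt returning ABORTED."""


def run_with_retries(work, max_wall_seconds=10.0):
    # Bound total wall time spent retrying rather than the attempt count:
    # under contention a transaction can abort many times in a short period.
    deadline = time.monotonic() + max_wall_seconds
    while True:
        try:
            return work()  # run the whole transaction, including Commit
        except Aborted:
            if time.monotonic() >= deadline:
                raise  # give up once the wall-time budget is exhausted
            time.sleep(random.uniform(0.0, 0.05))  # small jittered backoff
```

# Retrying the whole `work` callable (not just the Commit) matches the
# guidance above: the retry should re-execute the reads as well.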
#
# ### Idle Transactions
#
# A transaction is considered idle if it has no outstanding reads or
# SQL queries and has not started a read or SQL query within the last 10
# seconds. Idle transactions can be aborted by Cloud Spanner so that they
# don't hold on to locks indefinitely. In that case, the commit will
# fail with error `ABORTED`.
#
# If this behavior is undesirable, periodically executing a simple
# SQL query in the transaction (e.g., `SELECT 1`) prevents the
# transaction from becoming idle.
#
# ## Snapshot Read-Only Transactions
#
# Snapshot read-only transactions provide a simpler method than
# locking read-write transactions for doing several consistent
# reads. However, this type of transaction does not support writes.
#
# Snapshot transactions do not take locks. Instead, they work by
# choosing a Cloud Spanner timestamp, then executing all reads at that
# timestamp. Since they do not acquire locks, they do not block
# concurrent read-write transactions.
#
# Unlike locking read-write transactions, snapshot read-only
# transactions never abort. They can fail if the chosen read
# timestamp is garbage collected; however, the default garbage
# collection policy is generous enough that most applications do not
# need to worry about this in practice.
#
# Snapshot read-only transactions do not need to call
# Commit or
# Rollback (and in fact are not
# permitted to do so).
#
# To execute a snapshot transaction, the client specifies a timestamp
# bound, which tells Cloud Spanner how to choose a read timestamp.
#
# The types of timestamp bound are:
#
# - Strong (the default).
# - Bounded staleness.
# - Exact staleness.
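#
# Expressed as TransactionOptions.ReadOnly messages (the `readOnly` field
# below), the three bounds look like the following sketch; the field names
# follow this API's JSON schema, and the staleness values are illustrative:

```python
# Strong (the default): see all transactions committed before the read starts.
strong = {"strong": True, "returnReadTimestamp": True}

# Exact staleness: either an absolute commit timestamp or a fixed relative
# staleness expressed as a duration string.
exact_timestamp = {"readTimestamp": "2014-10-02T15:01:23.045123456Z"}
exact_staleness = {"exactStaleness": "10s"}

# Bounded staleness: Cloud Spanner picks the newest timestamp within the
# bound; usable only in single-use read-only transactions.
bounded = {"maxStaleness": "15s"}
```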
#
# If the Cloud Spanner database to be read is geographically distributed,
# stale read-only transactions can execute more quickly than strong
# or read-write transactions, because they are able to execute far
# from the leader replica.
#
# Each type of timestamp bound is discussed in detail below.
#
# ### Strong
#
# Strong reads are guaranteed to see the effects of all transactions
# that have committed before the start of the read. Furthermore, all
# rows yielded by a single read are consistent with each other -- if
# any part of the read observes a transaction, all parts of the read
# see the transaction.
#
# Strong reads are not repeatable: two consecutive strong read-only
# transactions might return inconsistent results if there are
# concurrent writes. If consistency across reads is required, the
# reads should be executed within a transaction or at an exact read
# timestamp.
#
# See TransactionOptions.ReadOnly.strong.
#
# ### Exact Staleness
#
# These timestamp bounds execute reads at a user-specified
# timestamp. Reads at a timestamp are guaranteed to see a consistent
# prefix of the global transaction history: they observe
# modifications done by all transactions with a commit timestamp &lt;=
# the read timestamp, and observe none of the modifications done by
# transactions with a larger commit timestamp. They will block until
# all conflicting transactions that may be assigned commit timestamps
# &lt;= the read timestamp have finished.
#
# The timestamp can either be expressed as an absolute Cloud Spanner commit
# timestamp or a staleness relative to the current time.
#
# These modes do not require a "negotiation phase" to pick a
# timestamp. As a result, they execute slightly faster than the
# equivalent boundedly stale concurrency modes. On the other hand,
# boundedly stale reads usually return fresher results.
#
# See TransactionOptions.ReadOnly.read_timestamp and
# TransactionOptions.ReadOnly.exact_staleness.
#
# ### Bounded Staleness
#
# Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
# subject to a user-provided staleness bound. Cloud Spanner chooses the
# newest timestamp within the staleness bound that allows execution
# of the reads at the closest available replica without blocking.
#
# All rows yielded are consistent with each other -- if any part of
# the read observes a transaction, all parts of the read see the
# transaction. Boundedly stale reads are not repeatable: two stale
# reads, even if they use the same staleness bound, can execute at
# different timestamps and thus return inconsistent results.
#
# Boundedly stale reads execute in two phases: the first phase
# negotiates a timestamp among all replicas needed to serve the
# read. In the second phase, reads are executed at the negotiated
# timestamp.
#
# As a result of the two-phase execution, bounded staleness reads are
# usually a little slower than comparable exact staleness
# reads. However, they are typically able to return fresher
# results, and are more likely to execute at the closest replica.
#
# Because the timestamp negotiation requires up-front knowledge of
# which rows will be read, it can only be used with single-use
# read-only transactions.
#
# See TransactionOptions.ReadOnly.max_staleness and
# TransactionOptions.ReadOnly.min_read_timestamp.
#
# ### Old Read Timestamps and Garbage Collection
#
# Cloud Spanner continuously garbage collects deleted and overwritten data
# in the background to reclaim storage space. This process is known
# as "version GC". By default, version GC reclaims versions after they
# are one hour old. Because of this, Cloud Spanner cannot perform reads
# at read timestamps more than one hour in the past. This
# restriction also applies to in-progress reads and/or SQL queries whose
# timestamp becomes too old while executing. Reads and SQL queries with
# too-old read timestamps fail with the error `FAILED_PRECONDITION`.
#
# ## Partitioned DML Transactions
#
# Partitioned DML transactions are used to execute DML statements with a
# different execution strategy that provides different, and often better,
# scalability properties for large, table-wide operations than DML in a
# ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
# should prefer using ReadWrite transactions.
#
# Partitioned DML partitions the keyspace and runs the DML statement on each
# partition in separate, internal transactions. These transactions commit
# automatically when complete, and run independently from one another.
#
# To reduce lock contention, this execution strategy only acquires read locks
# on rows that match the WHERE clause of the statement. Additionally, the
# smaller per-partition transactions hold locks for less time.
#
# That said, Partitioned DML is not a drop-in replacement for standard DML used
# in ReadWrite transactions.
#
# - The DML statement must be fully-partitionable. Specifically, the statement
#   must be expressible as the union of many statements which each access only
#   a single row of the table.
#
# - The statement is not applied atomically to all rows of the table. Rather,
#   the statement is applied atomically to partitions of the table, in
#   independent transactions. Secondary index rows are updated atomically
#   with the base table rows.
#
# - Partitioned DML does not guarantee exactly-once execution semantics
#   against a partition. The statement will be applied at least once to each
#   partition. It is strongly recommended that the DML statement should be
#   idempotent to avoid unexpected results. For instance, it is potentially
#   dangerous to run a statement such as
#   `UPDATE table SET column = column + 1` as it could be run multiple times
#   against some rows.
#
# - The partitions are committed automatically - there is no support for
#   Commit or Rollback. If the call returns an error, or if the client issuing
#   the ExecuteSql call dies, it is possible that some rows had the statement
#   executed on them successfully. It is also possible that the statement was
#   never executed against other rows.
#
# - Partitioned DML transactions may only contain the execution of a single
#   DML statement via ExecuteSql or ExecuteStreamingSql.
#
# - If any error is encountered during the execution of the partitioned DML
#   operation (for instance, a UNIQUE INDEX violation, division by zero, or a
#   value that cannot be stored due to schema constraints), then the
#   operation is stopped at that point and an error is returned. It is
#   possible that at this point, some partitions have been committed (or even
#   committed multiple times), and other partitions have not been run at all.
#
# Given the above, Partitioned DML is a good fit for large, database-wide
# operations that are idempotent, such as deleting old rows from a very large
# table.
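#
# A sketch of the request bodies involved: begin the transaction in
# Partitioned DML mode, then run a single idempotent statement via
# ExecuteSql. The table and column names are hypothetical, and the
# transaction id comes from the beginTransaction response:

```python
# Body for beginTransaction: select the Partitioned DML mode.
begin_body = {"options": {"partitionedDml": {}}}

# Body for ExecuteSql. The statement is idempotent: setting a column to a
# constant is safe even if some partitions execute it more than once,
# unlike a statement such as `SET column = column + 1`.
dml_body = {
    "sql": "UPDATE Albums SET MarketingBudget = 0 "
           "WHERE MarketingBudget IS NULL",
    "transaction": {"id": "txn-id-from-beginTransaction"},  # hypothetical id
}
```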
    "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
      #
      # Authorization to begin a read-write transaction requires
      # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
      # on the `session` resource.
      # transaction type has no options.
    },
    "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
      #
      # Authorization to begin a read-only transaction requires
      # `spanner.databases.beginReadOnlyTransaction` permission
      # on the `session` resource.
      "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
        #
        # This is useful for requesting fresher data than some previous
        # read, or data that is fresh enough to observe the effects of some
        # previously committed transaction whose timestamp is known.
        #
        # Note that this option can only be used in single-use transactions.
        #
        # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
        # Example: `"2014-10-02T15:01:23.045123456Z"`.
      "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
        # reads at a specific timestamp are repeatable; the same read at
        # the same timestamp always returns the same data. If the
        # timestamp is in the future, the read will block until the
        # specified timestamp, modulo the read's deadline.
        #
        # Useful for large scale consistent reads such as mapreduces, or
        # for coordinating many reads against a consistent snapshot of the
        # data.
        #
        # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
        # Example: `"2014-10-02T15:01:23.045123456Z"`.
      "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
        # seconds. Guarantees that all writes that have committed more
        # than the specified number of seconds ago are visible. Because
        # Cloud Spanner chooses the exact timestamp, this mode works even if
        # the client's local clock is substantially skewed from Cloud Spanner
        # commit timestamps.
        #
        # Useful for reading the freshest data available at a nearby
        # replica, while bounding the possible staleness if the local
        # replica has fallen behind.
        #
        # Note that this option can only be used in single-use
        # transactions.
      "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
        # old. The timestamp is chosen soon after the read is started.
        #
        # Guarantees that all writes that have committed more than the
        # specified number of seconds ago are visible. Because Cloud Spanner
        # chooses the exact timestamp, this mode works even if the client's
        # local clock is substantially skewed from Cloud Spanner commit
        # timestamps.
        #
        # Useful for reading at nearby replicas without the distributed
        # timestamp negotiation overhead of `max_staleness`.
      "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
        # the Transaction message that describes the transaction.
      "strong": True or False, # Read at a timestamp where all previously committed transactions
        # are visible.
    },
    "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
      #
      # Authorization to begin a Partitioned DML transaction requires
      # `spanner.databases.beginPartitionedDmlTransaction` permission
      # on the `session` resource.
    },
  },
  "id": "A String", # Execute the read or SQL query in a previously-started transaction.
},
"resumeToken": "A String", # If this request is resuming a previously interrupted read,
  # `resume_token` should be copied from the last
  # PartialResultSet yielded before the interruption. Doing this
  # enables the new read to resume where the last read left off. The
  # rest of the request parameters must exactly match the request
  # that yielded this token.
"partitionToken": "A String", # If present, results will be restricted to the specified partition
  # previously created using PartitionRead(). There must be an exact
  # match for the values of fields common to this message and the
  # PartitionReadRequest message used to create this partition_token.
"keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
  # primary keys of the rows in table to be yielded, unless index
  # is present. If index is present, then key_set instead names
  # index keys in index.
  #
  # If the partition_token field is empty, rows are yielded
  # in table primary key order (if index is empty) or index key order
  # (if index is non-empty). If the partition_token field is not
  # empty, rows will be yielded in an unspecified order.
  #
  # It is not an error for the `key_set` to name rows that do not
  # exist in the database. Read yields nothing for nonexistent rows.
  # the keys are expected to be in the same table or index. The keys need
  # not be sorted in any particular way.
  #
  # If the same key is specified multiple times in the set (for example
  # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
  # behaves as if the key were only specified once.
  "ranges": [ # A list of key ranges. See KeyRange for more information about
    # key range specifications.
    { # KeyRange represents a range of rows in a table or index.
      #
      # A range has a start key and an end key. These keys can be open or
      # closed, indicating if the range includes rows with that key.
      #
      # Keys are represented by lists, where the ith value in the list
      # corresponds to the ith component of the table or index primary key.
      # Individual values are encoded as described
      # here.
      #
      # For example, consider the following table definition:
      #
      #     CREATE TABLE UserEvents (
      #       UserName STRING(MAX),
      #       EventDate STRING(10)
      #     ) PRIMARY KEY(UserName, EventDate);
      #
      # The following keys name rows in this table:
      #
      #     "Bob", "2014-09-23"
      #
      # Since the `UserEvents` table's `PRIMARY KEY` clause names two
      # columns, each `UserEvents` key has two elements; the first is the
      # `UserName`, and the second is the `EventDate`.
      #
      # Key ranges with multiple components are interpreted
      # lexicographically by component using the table or index key's declared
      # sort order. For example, the following range returns all events for
      # user `"Bob"` that occurred in the year 2015:
      #
      #     "start_closed": ["Bob", "2015-01-01"]
      #     "end_closed": ["Bob", "2015-12-31"]
      #
      # Start and end keys can omit trailing key components. This affects the
      # inclusion and exclusion of rows that exactly match the provided key
      # components: if the key is closed, then rows that exactly match the
      # provided components are included; if the key is open, then rows
      # that exactly match are not included.
      #
      # For example, the following range includes all events for `"Bob"` that
      # occurred during and after the year 2000:
      #
      #     "start_closed": ["Bob", "2000-01-01"]
      #     "end_closed": ["Bob"]
      #
      # The next example retrieves all events for `"Bob"`:
      #
      #     "start_closed": ["Bob"]
      #     "end_closed": ["Bob"]
      #
      # To retrieve events before the year 2000:
      #
      #     "start_closed": ["Bob"]
      #     "end_open": ["Bob", "2000-01-01"]
      #
      # The following range includes all rows in the table:
      #
      #     "start_closed": []
      #     "end_closed": []
      #
      # This range returns all users whose `UserName` begins with any
      # character from A to C:
      #
      #     "start_closed": ["A"]
      #     "end_open": ["D"]
      #
      # This range returns all users whose `UserName` begins with B:
      #
      #     "start_closed": ["B"]
      #     "end_open": ["C"]
      #
      # Key ranges honor column sort order. For example, suppose a table is
      # defined as follows:
      #
      #     CREATE TABLE DescendingSortedTable (
      #       Key INT64,
      #       ...
      #     ) PRIMARY KEY(Key DESC);
      #
      # The following range retrieves all rows with key values between 1
      # and 100 inclusive:
      #
      #     "start_closed": ["100"]
      #     "end_closed": ["1"]
      #
      # Note that 100 is passed as the start, and 1 is passed as the end,
      # because `Key` is a descending column in the schema.
      "endOpen": [ # If the end is open, then the range excludes rows whose first
        # `len(end_open)` key columns exactly match `end_open`.
        "",
      ],
      "startOpen": [ # If the start is open, then the range excludes rows whose first
        # `len(start_open)` key columns exactly match `start_open`.
        "",
      ],
      "endClosed": [ # If the end is closed, then the range includes all rows whose
        # first `len(end_closed)` key columns exactly match `end_closed`.
        "",
      ],
      "startClosed": [ # If the start is closed, then the range includes all rows whose
        # first `len(start_closed)` key columns exactly match `start_closed`.
        "",
      ],
    },
  ],
  "keys": [ # A list of specific keys. Entries in `keys` should have exactly as
    # many elements as there are columns in the primary or index key
    # with which this `KeySet` is used. Individual key values are
    # encoded as described here.
    [
      "",
    ],
  ],
  "all": True or False, # For convenience `all` can be set to `true` to indicate that this
    # `KeySet` matches all keys in the table or index. Note that any keys
    # specified in `keys` or `ranges` are only yielded once.
},
"limit": "A String", # If greater than zero, only the first `limit` rows are yielded. If `limit`
  # is zero, the default is no limit. A limit cannot be specified if
  # `partition_token` is set.
"table": "A String", # Required. The name of the table in the database to be read.
"columns": [ # Required. The columns of table to be returned for each row matching
  # this request.
  "A String",
],
}
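For instance, a `key_set` value for this request could combine one
specific key with the year-2015 range from the KeyRange examples above
(`UserEvents` and the key values are illustrative):

```python
key_set = {
    "keys": [["Alice", "2014-09-23"]],         # one (UserName, EventDate) key
    "ranges": [{
        "startClosed": ["Bob", "2015-01-01"],  # all of Bob's events in 2015
        "endClosed": ["Bob", "2015-12-31"],
    }],
    "all": False,                              # do not match every row
}
```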

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

{ # Results from Read or
  # ExecuteSql.
  "rows": [ # Each element in `rows` is a row whose format is defined by
    # metadata.row_type. The ith element
    # in each row matches the ith field in
    # metadata.row_type. Elements are
    # encoded based on type as described
    # here.
    [
      "",
    ],
  ],
  "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the SQL statement that
    # produced this result set. These can be requested by setting
    # ExecuteSqlRequest.query_mode.
    # DML statements always produce stats containing the number of rows
    # modified, unless executed using the
    # ExecuteSqlRequest.QueryMode.PLAN ExecuteSqlRequest.query_mode.
    # Other fields may or may not be populated, based on the
    # ExecuteSqlRequest.query_mode.
    "rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
      # returns a lower bound of the rows modified.
    "rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
    "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
      "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
        # with the plan root. Each PlanNode's `id` corresponds to its index in
        # `plan_nodes`.
        { # Node information for nodes appearing in a QueryPlan.plan_nodes.
          "index": 42, # The `PlanNode`'s index in node list.
          "kind": "A String", # Used to determine the type of node. May be needed for visualizing
            # different kinds of nodes differently. For example, if the node is a
            # SCALAR node, it will have a condensed representation
            # which can be used to directly embed a description of the node in its
            # parent.
          "displayName": "A String", # The display name for the node.
          "executionStats": { # The execution statistics associated with the node, contained in a group of
            # key-value pairs. Only present if the plan was returned as a result of a
            # profile query. For example, number of executions, number of rows/time per
            # execution etc.
            "a_key": "", # Properties of the object.
          },
          "childLinks": [ # List of child node `index`es and their relationship to this parent.
            { # Metadata associated with a parent-child relationship appearing in a
              # PlanNode.
              "variable": "A String", # Only present if the child node is SCALAR and corresponds
                # to an output variable of the parent node. The field carries the name of
                # the output variable.
                # For example, a `TableScan` operator that reads rows from a table will
                # have child links to the `SCALAR` nodes representing the output variables
                # created for each column that is read by the operator. The corresponding
                # `variable` fields will be set to the variable names assigned to the
                # columns.
              "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
                # distinguish between the build child and the probe child, or in the case
                # of the child being an output variable, to represent the tag associated
                # with the output variable.
              "childIndex": 42, # The node to which the link points.
            },
          ],
          "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
            # `SCALAR` PlanNode(s).
            "subqueries": { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
              # where the `description` string of this node references a `SCALAR`
              # subquery contained in the expression subtree rooted at this node. The
              # referenced `SCALAR` subquery may not necessarily be a direct child of
              # this node.
              "a_key": 42,
            },
            "description": "A String", # A string representation of the expression subtree rooted at this node.
          },
          "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
            # For example, a Parameter Reference node could have the following
            # information in its metadata:
            #
            #     {
            #       "parameter_reference": "param1",
            #       "parameter_type": "array"
            #     }
            "a_key": "", # Properties of the object.
          },
        },
      ],
    },
    "queryStats": { # Aggregated statistics from the execution of the query. Only present when
      # the query is profiled. For example, a query could return the statistics as
      # follows:
      #
      #     {
      #       "rows_returned": "3",
      #       "elapsed_time": "1.22 secs",
      #       "cpu_time": "1.19 secs"
      #     }
      "a_key": "", # Properties of the object.
    },
  },
  "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
    "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
      # set. For example, a SQL query like `"SELECT UserId, UserName FROM
      # Users"` could return a `row_type` value like:
      #
      #     "fields": [
      #       { "name": "UserId", "type": { "code": "INT64" } },
      #       { "name": "UserName", "type": { "code": "STRING" } },
      #     ]
      "fields": [ # The list of fields that make up this struct. Order is
        # significant, because values of this struct type are represented as
        # lists, where the order of field values matches the order of
        # fields in the StructType. In turn, the order of fields
        # matches the order of columns in a read request, or the order of
        # fields in the `SELECT` clause of a query.
        { # Message representing a single field of a struct.
          "type": # Object with schema name: Type # The type of the field.
          "name": "A String", # The name of the field. For reads, this is the column name. For
            # SQL queries, it is the column alias (e.g., `"Word"` in the
            # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
            # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
            # columns might have an empty name (e.g., `"SELECT
            # UPPER(ColName)"`). Note that a query result can contain
            # multiple fields with the same name.
        },
      ],
    },
    "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
      # information about the new transaction is yielded here.
      "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
        # for the transaction. Not returned by default: see
        # TransactionOptions.ReadOnly.return_read_timestamp.
        #
        # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
        # Example: `"2014-10-02T15:01:23.045123456Z"`.
      "id": "A String", # `id` may be used to identify the transaction in subsequent
        # Read,
        # ExecuteSql,
        # Commit, or
        # Rollback calls.
        #
        # Single-use read-only transactions do not have IDs, because
        # single-use transactions do not support multiple requests.
    },
  },
}</pre>
</div>

<div class="method">
    <code class="details" id="rollback">rollback(session, body=None, x__xgafv=None)</code>
  <pre>Rolls back a transaction, releasing any locks it holds. It is a good
idea to call this for any transaction that includes one or more
Read or ExecuteSql requests and
ultimately decides not to commit.

`Rollback` returns `OK` if it successfully aborts the transaction, the
transaction was already aborted, or the transaction is not
found. `Rollback` never returns `ABORTED`.
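
As a sketch, the call has this shape (assuming `service` was built with
`googleapiclient.discovery.build('spanner', 'v1')` and `txn_id` came from a
beginTransaction response; both names are hypothetical here):

```python
def make_rollback_body(txn_id):
    # The request body carries only the transaction to roll back.
    return {"transactionId": txn_id}

# Hypothetical call shape:
# service.projects().instances().databases().sessions().rollback(
#     session=session_name, body=make_rollback_body(txn_id)).execute()
```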

Args:
  session: string, Required. The session in which the transaction to roll back is running. (required)
  body: object, The request body.
    The object takes the form of:

{ # The request for Rollback.
  "transactionId": "A String", # Required. The transaction to roll back.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

{ # A generic empty message that you can re-use to avoid defining duplicated
  # empty messages in your APIs. A typical example is to use it as the request
  # or the response type of an API method. For instance:
  #
  #     service Foo {
  #       rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
  #     }
  #
  # The JSON representation for `Empty` is empty JSON object `{}`.
}</pre>
</div>

<div class="method">
    <code class="details" id="streamingRead">streamingRead(session, body=None, x__xgafv=None)</code>
  <pre>Like Read, except returns the result set as a
stream. Unlike Read, there is no limit on the
size of the returned result set. However, no individual row in
the result set can exceed 100 MiB, and no column value can exceed
10 MiB.
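
A sketch of consuming the stream: `execute_streaming_read` below is a
hypothetical callable standing in for issuing this RPC, yielding
PartialResultSet dicts. It ignores `chunked_value` reassembly for brevity,
and remembers the last `resumeToken` so an interrupted read could be
reissued from that point:

```python
def collect_rows(execute_streaming_read):
    # Accumulate values across PartialResultSets, tracking the most recent
    # resume token seen in the stream.
    values, resume_token = [], ""
    for partial in execute_streaming_read(resume_token):
        values.extend(partial.get("values", []))
        resume_token = partial.get("resumeToken", resume_token)
    return values, resume_token
```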

Args:
  session: string, Required. The session in which the read should be performed. (required)
  body: object, The request body.
    The object takes the form of:

{ # The request for Read and
  # StreamingRead.
  "index": "A String", # If non-empty, the name of an index on table. This index is
    # used instead of the table primary key when interpreting key_set
    # and sorting result rows. See key_set for further information.
  "transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
    # temporary read-only transaction with strong concurrency.
    # Read or
    # ExecuteSql call runs.
    #
    # See TransactionOptions for more information about transactions.
    "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
      # it. The transaction ID of the new transaction is returned in
      # ResultSetMetadata.transaction, which is a Transaction.
      #
      #
      # Each session can have at most one active transaction at a time. After the
      # active transaction is completed, the session can immediately be
      # re-used for the next transaction. It is not necessary to create a
      # new session for each transaction.
      #
      # # Transaction Modes
      #
      # Cloud Spanner supports three transaction modes:
      #
      # 1. Locking read-write. This type of transaction is the only way
      #    to write data into Cloud Spanner. These transactions rely on
      #    pessimistic locking and, if necessary, two-phase commit.
      #    Locking read-write transactions may abort, requiring the
      #    application to retry.
      #
      # 2. Snapshot read-only. This transaction type provides guaranteed
      #    consistency across several reads, but does not allow
      #    writes. Snapshot read-only transactions can be configured to
      #    read at timestamps in the past. Snapshot read-only
7352 # transactions do not need to be committed.
7353 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07007354 # 3. Partitioned DML. This type of transaction is used to execute
7355 # a single Partitioned DML statement. Partitioned DML partitions
7356 # the key space and runs the DML statement over each partition
7357 # in parallel using separate, internal transactions that commit
7358 # independently. Partitioned DML transactions do not need to be
7359 # committed.
7360 #
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007361 # For transactions that only read, snapshot read-only transactions
7362 # provide simpler semantics and are almost always faster. In
7363 # particular, read-only transactions do not take locks, so they do
7364 # not conflict with read-write transactions. As a consequence of not
7365 # taking locks, they also do not abort, so retry loops are not needed.
7366 #
7367 # Transactions may only read/write data in a single database. They
7368 # may, however, read/write data in different tables within that
7369 # database.
7370 #
7371 # ## Locking Read-Write Transactions
7372 #
7373 # Locking transactions may be used to atomically read-modify-write
7374 # data anywhere in a database. This type of transaction is externally
7375 # consistent.
7376 #
7377 # Clients should attempt to minimize the amount of time a transaction
7378 # is active. Faster transactions commit with higher probability
7379 # and cause less contention. Cloud Spanner attempts to keep read locks
7380 # active as long as the transaction continues to do reads, and the
7381 # transaction has not been terminated by
7382 # Commit or
7383 # Rollback. Long periods of
7384 # inactivity at the client may cause Cloud Spanner to release a
7385 # transaction's locks and abort it.
7386 #
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007387 # Conceptually, a read-write transaction consists of zero or more
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07007388 # reads or SQL statements followed by
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007389 # Commit. At any time before
7390 # Commit, the client can send a
7391 # Rollback request to abort the
7392 # transaction.
7393 #
7394 # ### Semantics
7395 #
7396 # Cloud Spanner can commit the transaction if all read locks it acquired
7397 # are still valid at commit time, and it is able to acquire write
7398 # locks for all writes. Cloud Spanner can abort the transaction for any
7399 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
7400 # that the transaction has not modified any user data in Cloud Spanner.
7401 #
7402 # Unless the transaction commits, Cloud Spanner makes no guarantees about
7403 # how long the transaction's locks were held for. It is an error to
7404 # use Cloud Spanner locks for any sort of mutual exclusion other than
7405 # between Cloud Spanner transactions themselves.
7406 #
7407 # ### Retrying Aborted Transactions
7408 #
7409 # When a transaction aborts, the application can choose to retry the
7410 # whole transaction again. To maximize the chances of successfully
7411 # committing the retry, the client should execute the retry in the
7412 # same session as the original attempt. The original session's lock
7413 # priority increases with each consecutive abort, meaning that each
7414 # attempt has a slightly better chance of success than the previous.
7415 #
7416 # Under some circumstances (e.g., many transactions attempting to
7417 # modify the same row(s)), a transaction can abort many times in a
7418 # short period before successfully committing. Thus, it is not a good
7419 # idea to cap the number of retries a transaction can attempt;
7420 # instead, it is better to limit the total amount of wall time spent
7421 # retrying.
7422 #
7423 # ### Idle Transactions
7424 #
7425 # A transaction is considered idle if it has no outstanding reads or
7426 # SQL queries and has not started a read or SQL query within the last 10
7427 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
7428 # don't hold on to locks indefinitely. In that case, the commit will
7429 # fail with error `ABORTED`.
7430 #
7431 # If this behavior is undesirable, periodically executing a simple
7432 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
7433 # transaction from becoming idle.
7434 #
7435 # ## Snapshot Read-Only Transactions
7436 #
7437 # Snapshot read-only transactions provides a simpler method than
7438 # locking read-write transactions for doing several consistent
7439 # reads. However, this type of transaction does not support writes.
7440 #
7441 # Snapshot transactions do not take locks. Instead, they work by
7442 # choosing a Cloud Spanner timestamp, then executing all reads at that
7443 # timestamp. Since they do not acquire locks, they do not block
7444 # concurrent read-write transactions.
7445 #
7446 # Unlike locking read-write transactions, snapshot read-only
7447 # transactions never abort. They can fail if the chosen read
7448 # timestamp is garbage collected; however, the default garbage
7449 # collection policy is generous enough that most applications do not
7450 # need to worry about this in practice.
7451 #
7452 # Snapshot read-only transactions do not need to call
7453 # Commit or
7454 # Rollback (and in fact are not
7455 # permitted to do so).
7456 #
7457 # To execute a snapshot transaction, the client specifies a timestamp
7458 # bound, which tells Cloud Spanner how to choose a read timestamp.
7459 #
7460 # The types of timestamp bound are:
7461 #
7462 # - Strong (the default).
7463 # - Bounded staleness.
7464 # - Exact staleness.
7465 #
7466 # If the Cloud Spanner database to be read is geographically distributed,
7467 # stale read-only transactions can execute more quickly than strong
7468 # or read-write transaction, because they are able to execute far
7469 # from the leader replica.
7470 #
7471 # Each type of timestamp bound is discussed in detail below.
7472 #
7473 # ### Strong
7474 #
7475 # Strong reads are guaranteed to see the effects of all transactions
7476 # that have committed before the start of the read. Furthermore, all
7477 # rows yielded by a single read are consistent with each other -- if
7478 # any part of the read observes a transaction, all parts of the read
7479 # see the transaction.
7480 #
7481 # Strong reads are not repeatable: two consecutive strong read-only
7482 # transactions might return inconsistent results if there are
7483 # concurrent writes. If consistency across reads is required, the
7484 # reads should be executed within a transaction or at an exact read
7485 # timestamp.
7486 #
7487 # See TransactionOptions.ReadOnly.strong.
7488 #
7489 # ### Exact Staleness
7490 #
7491 # These timestamp bounds execute reads at a user-specified
7492 # timestamp. Reads at a timestamp are guaranteed to see a consistent
7493 # prefix of the global transaction history: they observe
Dan O'Mearadd494642020-05-01 07:42:23 -07007494 # modifications done by all transactions with a commit timestamp &lt;=
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007495 # the read timestamp, and observe none of the modifications done by
7496 # transactions with a larger commit timestamp. They will block until
7497 # all conflicting transactions that may be assigned commit timestamps
Dan O'Mearadd494642020-05-01 07:42:23 -07007498 # &lt;= the read timestamp have finished.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007499 #
7500 # The timestamp can either be expressed as an absolute Cloud Spanner commit
7501 # timestamp or a staleness relative to the current time.
7502 #
7503 # These modes do not require a "negotiation phase" to pick a
7504 # timestamp. As a result, they execute slightly faster than the
7505 # equivalent boundedly stale concurrency modes. On the other hand,
7506 # boundedly stale reads usually return fresher results.
7507 #
7508 # See TransactionOptions.ReadOnly.read_timestamp and
7509 # TransactionOptions.ReadOnly.exact_staleness.
7510 #
7511 # ### Bounded Staleness
7512 #
7513 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
7514 # subject to a user-provided staleness bound. Cloud Spanner chooses the
7515 # newest timestamp within the staleness bound that allows execution
7516 # of the reads at the closest available replica without blocking.
7517 #
7518 # All rows yielded are consistent with each other -- if any part of
7519 # the read observes a transaction, all parts of the read see the
7520 # transaction. Boundedly stale reads are not repeatable: two stale
7521 # reads, even if they use the same staleness bound, can execute at
7522 # different timestamps and thus return inconsistent results.
7523 #
7524 # Boundedly stale reads execute in two phases: the first phase
7525 # negotiates a timestamp among all replicas needed to serve the
7526 # read. In the second phase, reads are executed at the negotiated
7527 # timestamp.
7528 #
7529 # As a result of the two phase execution, bounded staleness reads are
7530 # usually a little slower than comparable exact staleness
7531 # reads. However, they are typically able to return fresher
7532 # results, and are more likely to execute at the closest replica.
7533 #
7534 # Because the timestamp negotiation requires up-front knowledge of
7535 # which rows will be read, it can only be used with single-use
7536 # read-only transactions.
7537 #
7538 # See TransactionOptions.ReadOnly.max_staleness and
7539 # TransactionOptions.ReadOnly.min_read_timestamp.
7540 #
7541 # ### Old Read Timestamps and Garbage Collection
7542 #
7543 # Cloud Spanner continuously garbage collects deleted and overwritten data
7544 # in the background to reclaim storage space. This process is known
7545 # as "version GC". By default, version GC reclaims versions after they
7546 # are one hour old. Because of this, Cloud Spanner cannot perform reads
7547 # at read timestamps more than one hour in the past. This
7548 # restriction also applies to in-progress reads and/or SQL queries whose
7549 # timestamp become too old while executing. Reads and SQL queries with
7550 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07007551 #
7552 # ## Partitioned DML Transactions
7553 #
7554 # Partitioned DML transactions are used to execute DML statements with a
7555 # different execution strategy that provides different, and often better,
7556 # scalability properties for large, table-wide operations than DML in a
7557 # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
7558 # should prefer using ReadWrite transactions.
7559 #
7560 # Partitioned DML partitions the keyspace and runs the DML statement on each
7561 # partition in separate, internal transactions. These transactions commit
7562 # automatically when complete, and run independently from one another.
7563 #
7564 # To reduce lock contention, this execution strategy only acquires read locks
7565 # on rows that match the WHERE clause of the statement. Additionally, the
7566 # smaller per-partition transactions hold locks for less time.
7567 #
7568 # That said, Partitioned DML is not a drop-in replacement for standard DML used
7569 # in ReadWrite transactions.
7570 #
7571 # - The DML statement must be fully-partitionable. Specifically, the statement
7572 # must be expressible as the union of many statements which each access only
7573 # a single row of the table.
7574 #
7575 # - The statement is not applied atomically to all rows of the table. Rather,
7576 # the statement is applied atomically to partitions of the table, in
7577 # independent transactions. Secondary index rows are updated atomically
7578 # with the base table rows.
7579 #
7580 # - Partitioned DML does not guarantee exactly-once execution semantics
7581 # against a partition. The statement will be applied at least once to each
7582 # partition. It is strongly recommended that the DML statement should be
7583 # idempotent to avoid unexpected results. For instance, it is potentially
7584 # dangerous to run a statement such as
7585 # `UPDATE table SET column = column + 1` as it could be run multiple times
7586 # against some rows.
7587 #
7588 # - The partitions are committed automatically - there is no support for
7589 # Commit or Rollback. If the call returns an error, or if the client issuing
7590 # the ExecuteSql call dies, it is possible that some rows had the statement
7591 # executed on them successfully. It is also possible that statement was
7592 # never executed against other rows.
7593 #
7594 # - Partitioned DML transactions may only contain the execution of a single
7595 # DML statement via ExecuteSql or ExecuteStreamingSql.
7596 #
7597 # - If any error is encountered during the execution of the partitioned DML
7598 # operation (for instance, a UNIQUE INDEX violation, division by zero, or a
7599 # value that cannot be stored due to schema constraints), then the
7600 # operation is stopped at that point and an error is returned. It is
7601 # possible that at this point, some partitions have been committed (or even
7602 # committed multiple times), and other partitions have not been run at all.
7603 #
7604 # Given the above, Partitioned DML is good fit for large, database-wide,
7605 # operations that are idempotent, such as deleting old rows from a very large
7606 # table.
7607 "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007608 #
7609 # Authorization to begin a read-write transaction requires
7610 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
7611 # on the `session` resource.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07007612 # transaction type has no options.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007613 },
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07007614 "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007615 #
7616 # Authorization to begin a read-only transaction requires
7617 # `spanner.databases.beginReadOnlyTransaction` permission
7618 # on the `session` resource.
Dan O'Mearadd494642020-05-01 07:42:23 -07007619 "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007620 #
7621 # This is useful for requesting fresher data than some previous
7622 # read, or data that is fresh enough to observe the effects of some
7623 # previously committed transaction whose timestamp is known.
7624 #
7625 # Note that this option can only be used in single-use transactions.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07007626 #
7627 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
7628 # Example: `"2014-10-02T15:01:23.045123456Z"`.
Dan O'Mearadd494642020-05-01 07:42:23 -07007629 "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
7630 # reads at a specific timestamp are repeatable; the same read at
7631 # the same timestamp always returns the same data. If the
7632 # timestamp is in the future, the read will block until the
7633 # specified timestamp, modulo the read's deadline.
7634 #
7635 # Useful for large scale consistent reads such as mapreduces, or
7636 # for coordinating many reads against a consistent snapshot of the
7637 # data.
7638 #
7639 # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
7640 # Example: `"2014-10-02T15:01:23.045123456Z"`.
7641 "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007642 # seconds. Guarantees that all writes that have committed more
7643 # than the specified number of seconds ago are visible. Because
7644 # Cloud Spanner chooses the exact timestamp, this mode works even if
7645 # the client's local clock is substantially skewed from Cloud Spanner
7646 # commit timestamps.
7647 #
7648 # Useful for reading the freshest data available at a nearby
7649 # replica, while bounding the possible staleness if the local
7650 # replica has fallen behind.
7651 #
7652 # Note that this option can only be used in single-use
7653 # transactions.
7654 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
7655 # old. The timestamp is chosen soon after the read is started.
7656 #
7657 # Guarantees that all writes that have committed more than the
7658 # specified number of seconds ago are visible. Because Cloud Spanner
7659 # chooses the exact timestamp, this mode works even if the client's
7660 # local clock is substantially skewed from Cloud Spanner commit
7661 # timestamps.
7662 #
7663 # Useful for reading at nearby replicas without the distributed
7664 # timestamp negotiation overhead of `max_staleness`.
Dan O'Mearadd494642020-05-01 07:42:23 -07007665 "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
7666 # the Transaction message that describes the transaction.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007667 "strong": True or False, # Read at a timestamp where all previously committed transactions
7668 # are visible.
7669 },
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07007670 "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
7671 #
7672 # Authorization to begin a Partitioned DML transaction requires
7673 # `spanner.databases.beginPartitionedDmlTransaction` permission
7674 # on the `session` resource.
7675 },
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007676 },
7677 "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
7678 # This is the most efficient way to execute a transaction that
7679 # consists of a single SQL query.
7680 #
7681 #
7682 # Each session can have at most one active transaction at a time. After the
7683 # active transaction is completed, the session can immediately be
7684 # re-used for the next transaction. It is not necessary to create a
7685 # new session for each transaction.
7686 #
7687 # # Transaction Modes
7688 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07007689 # Cloud Spanner supports three transaction modes:
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007690 #
7691 # 1. Locking read-write. This type of transaction is the only way
7692 # to write data into Cloud Spanner. These transactions rely on
7693 # pessimistic locking and, if necessary, two-phase commit.
7694 # Locking read-write transactions may abort, requiring the
7695 # application to retry.
7696 #
7697 # 2. Snapshot read-only. This transaction type provides guaranteed
7698 # consistency across several reads, but does not allow
7699 # writes. Snapshot read-only transactions can be configured to
7700 # read at timestamps in the past. Snapshot read-only
7701 # transactions do not need to be committed.
7702 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07007703 # 3. Partitioned DML. This type of transaction is used to execute
7704 # a single Partitioned DML statement. Partitioned DML partitions
7705 # the key space and runs the DML statement over each partition
7706 # in parallel using separate, internal transactions that commit
7707 # independently. Partitioned DML transactions do not need to be
7708 # committed.
7709 #
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007710 # For transactions that only read, snapshot read-only transactions
7711 # provide simpler semantics and are almost always faster. In
7712 # particular, read-only transactions do not take locks, so they do
7713 # not conflict with read-write transactions. As a consequence of not
7714 # taking locks, they also do not abort, so retry loops are not needed.
7715 #
7716 # Transactions may only read/write data in a single database. They
7717 # may, however, read/write data in different tables within that
7718 # database.
7719 #
7720 # ## Locking Read-Write Transactions
7721 #
7722 # Locking transactions may be used to atomically read-modify-write
7723 # data anywhere in a database. This type of transaction is externally
7724 # consistent.
7725 #
7726 # Clients should attempt to minimize the amount of time a transaction
7727 # is active. Faster transactions commit with higher probability
7728 # and cause less contention. Cloud Spanner attempts to keep read locks
7729 # active as long as the transaction continues to do reads, and the
7730 # transaction has not been terminated by
7731 # Commit or
7732 # Rollback. Long periods of
7733 # inactivity at the client may cause Cloud Spanner to release a
7734 # transaction's locks and abort it.
7735 #
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007736 # Conceptually, a read-write transaction consists of zero or more
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07007737 # reads or SQL statements followed by
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007738 # Commit. At any time before
7739 # Commit, the client can send a
7740 # Rollback request to abort the
7741 # transaction.
7742 #
7743 # ### Semantics
7744 #
7745 # Cloud Spanner can commit the transaction if all read locks it acquired
7746 # are still valid at commit time, and it is able to acquire write
7747 # locks for all writes. Cloud Spanner can abort the transaction for any
7748 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
7749 # that the transaction has not modified any user data in Cloud Spanner.
7750 #
7751 # Unless the transaction commits, Cloud Spanner makes no guarantees about
7752 # how long the transaction's locks were held for. It is an error to
7753 # use Cloud Spanner locks for any sort of mutual exclusion other than
7754 # between Cloud Spanner transactions themselves.
7755 #
7756 # ### Retrying Aborted Transactions
7757 #
7758 # When a transaction aborts, the application can choose to retry the
7759 # whole transaction again. To maximize the chances of successfully
7760 # committing the retry, the client should execute the retry in the
7761 # same session as the original attempt. The original session's lock
7762 # priority increases with each consecutive abort, meaning that each
7763 # attempt has a slightly better chance of success than the previous.
7764 #
7765 # Under some circumstances (e.g., many transactions attempting to
7766 # modify the same row(s)), a transaction can abort many times in a
7767 # short period before successfully committing. Thus, it is not a good
7768 # idea to cap the number of retries a transaction can attempt;
7769 # instead, it is better to limit the total amount of wall time spent
7770 # retrying.
7771 #
7772 # ### Idle Transactions
7773 #
7774 # A transaction is considered idle if it has no outstanding reads or
7775 # SQL queries and has not started a read or SQL query within the last 10
7776 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
7777 # don't hold on to locks indefinitely. In that case, the commit will
7778 # fail with error `ABORTED`.
7779 #
7780 # If this behavior is undesirable, periodically executing a simple
7781 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
7782 # transaction from becoming idle.
7783 #
7784 # ## Snapshot Read-Only Transactions
7785 #
7786 # Snapshot read-only transactions provides a simpler method than
7787 # locking read-write transactions for doing several consistent
7788 # reads. However, this type of transaction does not support writes.
7789 #
7790 # Snapshot transactions do not take locks. Instead, they work by
7791 # choosing a Cloud Spanner timestamp, then executing all reads at that
7792 # timestamp. Since they do not acquire locks, they do not block
7793 # concurrent read-write transactions.
7794 #
7795 # Unlike locking read-write transactions, snapshot read-only
7796 # transactions never abort. They can fail if the chosen read
7797 # timestamp is garbage collected; however, the default garbage
7798 # collection policy is generous enough that most applications do not
7799 # need to worry about this in practice.
7800 #
7801 # Snapshot read-only transactions do not need to call
7802 # Commit or
7803 # Rollback (and in fact are not
7804 # permitted to do so).
7805 #
7806 # To execute a snapshot transaction, the client specifies a timestamp
7807 # bound, which tells Cloud Spanner how to choose a read timestamp.
7808 #
7809 # The types of timestamp bound are:
7810 #
7811 # - Strong (the default).
7812 # - Bounded staleness.
7813 # - Exact staleness.
7814 #
7815 # If the Cloud Spanner database to be read is geographically distributed,
7816 # stale read-only transactions can execute more quickly than strong
7817 # or read-write transaction, because they are able to execute far
7818 # from the leader replica.
7819 #
7820 # Each type of timestamp bound is discussed in detail below.
7821 #
7822 # ### Strong
7823 #
7824 # Strong reads are guaranteed to see the effects of all transactions
7825 # that have committed before the start of the read. Furthermore, all
7826 # rows yielded by a single read are consistent with each other -- if
7827 # any part of the read observes a transaction, all parts of the read
7828 # see the transaction.
7829 #
7830 # Strong reads are not repeatable: two consecutive strong read-only
7831 # transactions might return inconsistent results if there are
7832 # concurrent writes. If consistency across reads is required, the
7833 # reads should be executed within a transaction or at an exact read
7834 # timestamp.
7835 #
7836 # See TransactionOptions.ReadOnly.strong.
7837 #
7838 # ### Exact Staleness
7839 #
7840 # These timestamp bounds execute reads at a user-specified
7841 # timestamp. Reads at a timestamp are guaranteed to see a consistent
7842 # prefix of the global transaction history: they observe
Dan O'Mearadd494642020-05-01 07:42:23 -07007843 # modifications done by all transactions with a commit timestamp &lt;=
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007844 # the read timestamp, and observe none of the modifications done by
7845 # transactions with a larger commit timestamp. They will block until
7846 # all conflicting transactions that may be assigned commit timestamps
Dan O'Mearadd494642020-05-01 07:42:23 -07007847 # &lt;= the read timestamp have finished.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04007848 #
7849 # The timestamp can either be expressed as an absolute Cloud Spanner commit
7850 # timestamp or a staleness relative to the current time.
7851 #
7852 # These modes do not require a "negotiation phase" to pick a
7853 # timestamp. As a result, they execute slightly faster than the
7854 # equivalent boundedly stale concurrency modes. On the other hand,
7855 # boundedly stale reads usually return fresher results.
7856 #
7857 # See TransactionOptions.ReadOnly.read_timestamp and
7858 # TransactionOptions.ReadOnly.exact_staleness.
7859 #
7860 # ### Bounded Staleness
7861 #
7862 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
7863 # subject to a user-provided staleness bound. Cloud Spanner chooses the
7864 # newest timestamp within the staleness bound that allows execution
7865 # of the reads at the closest available replica without blocking.
7866 #
7867 # All rows yielded are consistent with each other -- if any part of
7868 # the read observes a transaction, all parts of the read see the
7869 # transaction. Boundedly stale reads are not repeatable: two stale
7870 # reads, even if they use the same staleness bound, can execute at
7871 # different timestamps and thus return inconsistent results.
7872 #
7873 # Boundedly stale reads execute in two phases: the first phase
7874 # negotiates a timestamp among all replicas needed to serve the
7875 # read. In the second phase, reads are executed at the negotiated
7876 # timestamp.
7877 #
7878 # As a result of the two phase execution, bounded staleness reads are
7879 # usually a little slower than comparable exact staleness
7880 # reads. However, they are typically able to return fresher
7881 # results, and are more likely to execute at the closest replica.
7882 #
7883 # Because the timestamp negotiation requires up-front knowledge of
7884 # which rows will be read, it can only be used with single-use
7885 # read-only transactions.
7886 #
7887 # See TransactionOptions.ReadOnly.max_staleness and
7888 # TransactionOptions.ReadOnly.min_read_timestamp.
7889 #
7890 # ### Old Read Timestamps and Garbage Collection
7891 #
7892 # Cloud Spanner continuously garbage collects deleted and overwritten data
7893 # in the background to reclaim storage space. This process is known
7894 # as "version GC". By default, version GC reclaims versions after they
7895 # are one hour old. Because of this, Cloud Spanner cannot perform reads
7896 # at read timestamps more than one hour in the past. This
7897 # restriction also applies to in-progress reads and/or SQL queries whose
7898 # timestamp become too old while executing. Reads and SQL queries with
7899 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07007900 #
          # ## Partitioned DML Transactions
          #
          # Partitioned DML transactions are used to execute DML statements with a
          # different execution strategy that provides different, and often better,
          # scalability properties for large, table-wide operations than DML in a
          # ReadWrite transaction. Smaller scoped statements, such as an OLTP workload,
          # should prefer using ReadWrite transactions.
          #
          # Partitioned DML partitions the keyspace and runs the DML statement on each
          # partition in separate, internal transactions. These transactions commit
          # automatically when complete, and run independently from one another.
          #
          # To reduce lock contention, this execution strategy only acquires read locks
          # on rows that match the WHERE clause of the statement. Additionally, the
          # smaller per-partition transactions hold locks for less time.
          #
          # That said, Partitioned DML is not a drop-in replacement for standard DML used
          # in ReadWrite transactions.
          #
          # - The DML statement must be fully-partitionable. Specifically, the statement
          #   must be expressible as the union of many statements which each access only
          #   a single row of the table.
          #
          # - The statement is not applied atomically to all rows of the table. Rather,
          #   the statement is applied atomically to partitions of the table, in
          #   independent transactions. Secondary index rows are updated atomically
          #   with the base table rows.
          #
          # - Partitioned DML does not guarantee exactly-once execution semantics
          #   against a partition. The statement will be applied at least once to each
          #   partition. It is strongly recommended that the DML statement should be
          #   idempotent to avoid unexpected results. For instance, it is potentially
          #   dangerous to run a statement such as
          #   `UPDATE table SET column = column + 1` as it could be run multiple times
          #   against some rows.
          #
          # - The partitions are committed automatically - there is no support for
          #   Commit or Rollback. If the call returns an error, or if the client issuing
          #   the ExecuteSql call dies, it is possible that some rows had the statement
          #   executed on them successfully. It is also possible that the statement was
          #   never executed against other rows.
          #
          # - Partitioned DML transactions may only contain the execution of a single
          #   DML statement via ExecuteSql or ExecuteStreamingSql.
          #
          # - If any error is encountered during the execution of the partitioned DML
          #   operation (for instance, a UNIQUE INDEX violation, division by zero, or a
          #   value that cannot be stored due to schema constraints), then the
          #   operation is stopped at that point and an error is returned. It is
          #   possible that at this point, some partitions have been committed (or even
          #   committed multiple times), and other partitions have not been run at all.
          #
          # Given the above, Partitioned DML is a good fit for large, database-wide
          # operations that are idempotent, such as deleting old rows from a very large
          # table.
        "readWrite": { # Message type to initiate a read-write transaction. Currently this # Transaction may write.
            #
            # Authorization to begin a read-write transaction requires
            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
            # on the `session` resource.
            # transaction type has no options.
        },
        "readOnly": { # Message type to initiate a read-only transaction. # Transaction will not write.
            #
            # Authorization to begin a read-only transaction requires
            # `spanner.databases.beginReadOnlyTransaction` permission
            # on the `session` resource.
          "minReadTimestamp": "A String", # Executes all reads at a timestamp &gt;= `min_read_timestamp`.
              #
              # This is useful for requesting fresher data than some previous
              # read, or data that is fresh enough to observe the effects of some
              # previously committed transaction whose timestamp is known.
              #
              # Note that this option can only be used in single-use transactions.
              #
              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
              # Example: `"2014-10-02T15:01:23.045123456Z"`.
          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
              # reads at a specific timestamp are repeatable; the same read at
              # the same timestamp always returns the same data. If the
              # timestamp is in the future, the read will block until the
              # specified timestamp, modulo the read's deadline.
              #
              # Useful for large scale consistent reads such as mapreduces, or
              # for coordinating many reads against a consistent snapshot of the
              # data.
              #
              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
              # Example: `"2014-10-02T15:01:23.045123456Z"`.
          "maxStaleness": "A String", # Read data at a timestamp &gt;= `NOW - max_staleness`
              # seconds. Guarantees that all writes that have committed more
              # than the specified number of seconds ago are visible. Because
              # Cloud Spanner chooses the exact timestamp, this mode works even if
              # the client's local clock is substantially skewed from Cloud Spanner
              # commit timestamps.
              #
              # Useful for reading the freshest data available at a nearby
              # replica, while bounding the possible staleness if the local
              # replica has fallen behind.
              #
              # Note that this option can only be used in single-use
              # transactions.
          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
              # old. The timestamp is chosen soon after the read is started.
              #
              # Guarantees that all writes that have committed more than the
              # specified number of seconds ago are visible. Because Cloud Spanner
              # chooses the exact timestamp, this mode works even if the client's
              # local clock is substantially skewed from Cloud Spanner commit
              # timestamps.
              #
              # Useful for reading at nearby replicas without the distributed
              # timestamp negotiation overhead of `max_staleness`.
          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
              # the Transaction message that describes the transaction.
          "strong": True or False, # Read at a timestamp where all previously committed transactions
              # are visible.
        },
        "partitionedDml": { # Message type to initiate a Partitioned DML transaction. # Partitioned DML transaction.
            #
            # Authorization to begin a Partitioned DML transaction requires
            # `spanner.databases.beginPartitionedDmlTransaction` permission
            # on the `session` resource.
        },
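The at-least-once semantics described above are why Partitioned DML statements should be idempotent. A minimal pure-Python sketch (illustration only, not the Cloud Spanner client library; `apply_increment` and `apply_set` are hypothetical stand-ins for DML statements applied to one row) shows how a retried non-idempotent statement drifts while an idempotent one does not:

```python
def apply_increment(row):
    # Models `UPDATE t SET column = column + 1`: NOT idempotent,
    # so a retried partition changes the result.
    row["column"] += 1

def apply_set(row):
    # Models `UPDATE t SET column = 5`: idempotent,
    # so a retried partition leaves the same result.
    row["column"] = 5

row = {"column": 1}
apply_increment(row)
apply_increment(row)   # simulated retry of the same partition
assert row["column"] == 3   # value drifted past the intended 2

row = {"column": 1}
apply_set(row)
apply_set(row)         # simulated retry of the same partition
assert row["column"] == 5   # still the intended value
```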
      },
      "id": "A String", # Execute the read or SQL query in a previously-started transaction.
    },
    "resumeToken": "A String", # If this request is resuming a previously interrupted read,
        # `resume_token` should be copied from the last
        # PartialResultSet yielded before the interruption. Doing this
        # enables the new read to resume where the last read left off. The
        # rest of the request parameters must exactly match the request
        # that yielded this token.
    "partitionToken": "A String", # If present, results will be restricted to the specified partition
        # previously created using PartitionRead(). There must be an exact
        # match for the values of fields common to this message and the
        # PartitionReadRequest message used to create this partition_token.
    "keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
        # primary keys of the rows in table to be yielded, unless index
        # is present. If index is present, then key_set instead names
        # index keys in index.
        #
        # If the partition_token field is empty, rows are yielded
        # in table primary key order (if index is empty) or index key order
        # (if index is non-empty). If the partition_token field is not
        # empty, rows will be yielded in an unspecified order.
        #
        # It is not an error for the `key_set` to name rows that do not
        # exist in the database. Read yields nothing for nonexistent rows.
        # the keys are expected to be in the same table or index. The keys need
        # not be sorted in any particular way.
        #
        # If the same key is specified multiple times in the set (for example
        # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
        # behaves as if the key were only specified once.
      "ranges": [ # A list of key ranges. See KeyRange for more information about
          # key range specifications.
        { # KeyRange represents a range of rows in a table or index.
            #
            # A range has a start key and an end key. These keys can be open or
            # closed, indicating if the range includes rows with that key.
            #
            # Keys are represented by lists, where the ith value in the list
            # corresponds to the ith component of the table or index primary key.
            # Individual values are encoded as described
            # here.
            #
            # For example, consider the following table definition:
            #
            #     CREATE TABLE UserEvents (
            #       UserName STRING(MAX),
            #       EventDate STRING(10)
            #     ) PRIMARY KEY(UserName, EventDate);
            #
            # The following keys name rows in this table:
            #
            #     "Bob", "2014-09-23"
            #
            # Since the `UserEvents` table's `PRIMARY KEY` clause names two
            # columns, each `UserEvents` key has two elements; the first is the
            # `UserName`, and the second is the `EventDate`.
            #
            # Key ranges with multiple components are interpreted
            # lexicographically by component using the table or index key's declared
            # sort order. For example, the following range returns all events for
            # user `"Bob"` that occurred in the year 2015:
            #
            #     "start_closed": ["Bob", "2015-01-01"]
            #     "end_closed": ["Bob", "2015-12-31"]
            #
            # Start and end keys can omit trailing key components. This affects the
            # inclusion and exclusion of rows that exactly match the provided key
            # components: if the key is closed, then rows that exactly match the
            # provided components are included; if the key is open, then rows
            # that exactly match are not included.
            #
            # For example, the following range includes all events for `"Bob"` that
            # occurred during and after the year 2000:
            #
            #     "start_closed": ["Bob", "2000-01-01"]
            #     "end_closed": ["Bob"]
            #
            # The next example retrieves all events for `"Bob"`:
            #
            #     "start_closed": ["Bob"]
            #     "end_closed": ["Bob"]
            #
            # To retrieve events before the year 2000:
            #
            #     "start_closed": ["Bob"]
            #     "end_open": ["Bob", "2000-01-01"]
            #
            # The following range includes all rows in the table:
            #
            #     "start_closed": []
            #     "end_closed": []
            #
            # This range returns all users whose `UserName` begins with any
            # character from A to C:
            #
            #     "start_closed": ["A"]
            #     "end_open": ["D"]
            #
            # This range returns all users whose `UserName` begins with B:
            #
            #     "start_closed": ["B"]
            #     "end_open": ["C"]
            #
            # Key ranges honor column sort order. For example, suppose a table is
            # defined as follows:
            #
            #     CREATE TABLE DescendingSortedTable (
            #       Key INT64,
            #       ...
            #     ) PRIMARY KEY(Key DESC);
            #
            # The following range retrieves all rows with key values between 1
            # and 100 inclusive:
            #
            #     "start_closed": ["100"]
            #     "end_closed": ["1"]
            #
            # Note that 100 is passed as the start, and 1 is passed as the end,
            # because `Key` is a descending column in the schema.
          "endOpen": [ # If the end is open, then the range excludes rows whose first
              # `len(end_open)` key columns exactly match `end_open`.
            "",
          ],
          "startOpen": [ # If the start is open, then the range excludes rows whose first
              # `len(start_open)` key columns exactly match `start_open`.
            "",
          ],
          "endClosed": [ # If the end is closed, then the range includes all rows whose
              # first `len(end_closed)` key columns exactly match `end_closed`.
            "",
          ],
          "startClosed": [ # If the start is closed, then the range includes all rows whose
              # first `len(start_closed)` key columns exactly match `start_closed`.
            "",
          ],
        },
      ],
      "keys": [ # A list of specific keys. Entries in `keys` should have exactly as
          # many elements as there are columns in the primary or index key
          # with which this `KeySet` is used. Individual key values are
          # encoded as described here.
        [
          "",
        ],
      ],
      "all": True or False, # For convenience `all` can be set to `true` to indicate that this
          # `KeySet` matches all keys in the table or index. Note that any keys
          # specified in `keys` or `ranges` are only yielded once.
    },
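As a sketch, the key-range examples above can be written as a plain Python `keySet` dictionary using the JSON field names documented on this page (`startClosed`/`endClosed` rather than the proto-style `start_closed`; the `"Alice"` key is a hypothetical exact-match entry added for illustration):

```python
# All events for user "Bob" during 2015, plus one exact row.
key_set = {
    "ranges": [
        {
            "startClosed": ["Bob", "2015-01-01"],
            "endClosed": ["Bob", "2015-12-31"],
        },
    ],
    "keys": [
        ["Alice", "2014-09-23"],  # one fully specified primary key
    ],
    "all": False,  # set to True to match every key instead
}
assert key_set["ranges"][0]["startClosed"] == ["Bob", "2015-01-01"]
```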
    "limit": "A String", # If greater than zero, only the first `limit` rows are yielded. If `limit`
        # is zero, the default is no limit. A limit cannot be specified if
        # `partition_token` is set.
    "table": "A String", # Required. The name of the table in the database to be read.
    "columns": [ # Required. The columns of table to be returned for each row matching
        # this request.
      "A String",
    ],
  }
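A minimal request body under the schema above can be assembled as a plain Python dict (the table and column names are hypothetical; the `singleUse` selector comes from the TransactionSelector schema, and the commented call assumes a discovery-built `service` object and an existing session, so it is a sketch rather than executed here):

```python
# Hypothetical table and columns; field names mirror the schema above.
body = {
    "table": "UserEvents",
    "columns": ["UserName", "EventDate"],
    "keySet": {"all": True},
    # Single-use, strong read-only transaction.
    "transaction": {"singleUse": {"readOnly": {"strong": True}}},
}

# Sketch of the call with a discovery-built service (not executed):
#   request = service.projects().instances().databases().sessions() \
#       .streamingRead(session=session_name, body=body)
#   response = request.execute()
assert body["keySet"]["all"] is True
```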

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Partial results from a streaming read or SQL query. Streaming reads and
        # SQL queries better tolerate large result sets, large rows, and large
        # values, but are a little trickier to consume.
      "resumeToken": "A String", # Streaming calls might be interrupted for a variety of reasons, such
          # as TCP connection loss. If this occurs, the stream of results can
          # be resumed by re-sending the original request and including
          # `resume_token`. Note that executing any other transaction in the
          # same session invalidates the token.
      "chunkedValue": True or False, # If true, then the final value in values is chunked, and must
          # be combined with more values from subsequent `PartialResultSet`s
          # to obtain a complete field value.
      "values": [ # A streamed result set consists of a stream of values, which might
          # be split into many `PartialResultSet` messages to accommodate
          # large rows and/or large values. Every N complete values defines a
          # row, where N is equal to the number of entries in
          # metadata.row_type.fields.
          #
          # Most values are encoded based on type as described
          # here.
          #
          # It is possible that the last value in values is "chunked",
          # meaning that the rest of the value is sent in subsequent
          # `PartialResultSet`(s). This is denoted by the chunked_value
          # field. Two or more chunked values can be merged to form a
          # complete value as follows:
          #
          # * `bool/number/null`: cannot be chunked
          # * `string`: concatenate the strings
          # * `list`: concatenate the lists. If the last element in a list is a
          #   `string`, `list`, or `object`, merge it with the first element in
          #   the next list by applying these rules recursively.
          # * `object`: concatenate the (field name, field value) pairs. If a
          #   field name is duplicated, then apply these rules recursively
          #   to merge the field values.
          #
          # Some examples of merging:
          #
          #     # Strings are concatenated.
          #     "foo", "bar" =&gt; "foobar"
          #
          #     # Lists of non-strings are concatenated.
          #     [2, 3], [4] =&gt; [2, 3, 4]
          #
          #     # Lists are concatenated, but the last and first elements are merged
          #     # because they are strings.
          #     ["a", "b"], ["c", "d"] =&gt; ["a", "bc", "d"]
          #
          #     # Lists are concatenated, but the last and first elements are merged
          #     # because they are lists. Recursively, the last and first elements
          #     # of the inner lists are merged because they are strings.
          #     ["a", ["b", "c"]], [["d"], "e"] =&gt; ["a", ["b", "cd"], "e"]
          #
          #     # Non-overlapping object fields are combined.
          #     {"a": "1"}, {"b": "2"} =&gt; {"a": "1", "b": "2"}
          #
          #     # Overlapping object fields are merged.
          #     {"a": "1"}, {"a": "2"} =&gt; {"a": "12"}
          #
          #     # Examples of merging objects containing lists of strings.
          #     {"a": ["1"]}, {"a": ["2"]} =&gt; {"a": ["12"]}
          #
          # For a more complete example, suppose a streaming SQL query is
          # yielding a result set whose rows contain a single string
          # field. The following `PartialResultSet`s might be yielded:
          #
          #     {
          #       "metadata": { ... }
          #       "values": ["Hello", "W"]
          #       "chunked_value": true
          #       "resume_token": "Af65..."
          #     }
          #     {
          #       "values": ["orl"]
          #       "chunked_value": true
          #       "resume_token": "Bqp2..."
          #     }
          #     {
          #       "values": ["d"]
          #       "resume_token": "Zx1B..."
          #     }
          #
          # This sequence of `PartialResultSet`s encodes two rows, one
          # containing the field value `"Hello"`, and a second containing the
          # field value `"World" = "W" + "orl" + "d"`.
        "",
      ],
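The merge rules above for chunked `string` and `list` values can be sketched in Python (illustration only, not the client library's implementation; `object` merging is omitted for brevity):

```python
def merge_chunks(a, b):
    """Merge two chunked values per the string/list rules above."""
    if isinstance(a, str) and isinstance(b, str):
        # Strings are concatenated.
        return a + b
    if isinstance(a, list) and isinstance(b, list):
        # Lists are concatenated; if the last element of the first list and
        # the first element of the next are both strings (or both lists),
        # they are merged recursively.
        if (a and b and isinstance(a[-1], (str, list))
                and isinstance(b[0], type(a[-1]))):
            return a[:-1] + [merge_chunks(a[-1], b[0])] + b[1:]
        return a + b
    raise TypeError("bool/number/null values cannot be chunked")

assert merge_chunks("foo", "bar") == "foobar"
assert merge_chunks([2, 3], [4]) == [2, 3, 4]
assert merge_chunks(["a", "b"], ["c", "d"]) == ["a", "bc", "d"]
assert merge_chunks(["a", ["b", "c"]], [["d"], "e"]) == ["a", ["b", "cd"], "e"]
```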
      "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the statement that produced this
          # streaming result set. These can be requested by setting
          # ExecuteSqlRequest.query_mode and are sent
          # only once with the last response in the stream.
          # This field will also be present in the last response for DML
          # statements.
        "rowCountLowerBound": "A String", # Partitioned DML does not offer exactly-once semantics, so it
            # returns a lower bound of the rows modified.
        "rowCountExact": "A String", # Standard DML returns an exact count of rows that were modified.
        "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
          "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
              # with the plan root. Each PlanNode's `id` corresponds to its index in
              # `plan_nodes`.
            { # Node information for nodes appearing in a QueryPlan.plan_nodes.
              "index": 42, # The `PlanNode`'s index in node list.
              "kind": "A String", # Used to determine the type of node. May be needed for visualizing
                  # different kinds of nodes differently. For example, if the node is a
                  # SCALAR node, it will have a condensed representation
                  # which can be used to directly embed a description of the node in its
                  # parent.
              "displayName": "A String", # The display name for the node.
              "executionStats": { # The execution statistics associated with the node, contained in a group of
                  # key-value pairs. Only present if the plan was returned as a result of a
                  # profile query. For example, number of executions, number of rows/time per
                  # execution etc.
                "a_key": "", # Properties of the object.
              },
              "childLinks": [ # List of child node `index`es and their relationship to this parent.
                { # Metadata associated with a parent-child relationship appearing in a
                    # PlanNode.
                  "variable": "A String", # Only present if the child node is SCALAR and corresponds
                      # to an output variable of the parent node. The field carries the name of
                      # the output variable.
                      # For example, a `TableScan` operator that reads rows from a table will
                      # have child links to the `SCALAR` nodes representing the output variables
                      # created for each column that is read by the operator. The corresponding
                      # `variable` fields will be set to the variable names assigned to the
                      # columns.
                  "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
                      # distinguish between the build child and the probe child, or in the case
                      # of the child being an output variable, to represent the tag associated
                      # with the output variable.
                  "childIndex": 42, # The node to which the link points.
                },
              ],
              "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
                  # `SCALAR` PlanNode(s).
                "subqueries": { # A mapping of (subquery variable name) -&gt; (subquery node id) for cases
                    # where the `description` string of this node references a `SCALAR`
                    # subquery contained in the expression subtree rooted at this node. The
                    # referenced `SCALAR` subquery may not necessarily be a direct child of
                    # this node.
                  "a_key": 42,
                },
                "description": "A String", # A string representation of the expression subtree rooted at this node.
              },
              "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
                  # For example, a Parameter Reference node could have the following
                  # information in its metadata:
                  #
                  #     {
                  #       "parameter_reference": "param1",
                  #       "parameter_type": "array"
                  #     }
                "a_key": "", # Properties of the object.
              },
            },
          ],
        },
        "queryStats": { # Aggregated statistics from the execution of the query. Only present when
            # the query is profiled. For example, a query could return the statistics as
            # follows:
            #
            #     {
            #       "rows_returned": "3",
            #       "elapsed_time": "1.22 secs",
            #       "cpu_time": "1.19 secs"
            #     }
          "a_key": "", # Properties of the object.
        },
      },
      "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
          # Only present in the first response.
        "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
            # set. For example, a SQL query like `"SELECT UserId, UserName FROM
            # Users"` could return a `row_type` value like:
            #
            #     "fields": [
            #       { "name": "UserId", "type": { "code": "INT64" } },
            #       { "name": "UserName", "type": { "code": "STRING" } },
            #     ]
          "fields": [ # The list of fields that make up this struct. Order is
              # significant, because values of this struct type are represented as
              # lists, where the order of field values matches the order of
              # fields in the StructType. In turn, the order of fields
              # matches the order of columns in a read request, or the order of
              # fields in the `SELECT` clause of a query.
            { # Message representing a single field of a struct.
              "type": # Object with schema name: Type # The type of the field.
              "name": "A String", # The name of the field. For reads, this is the column name. For
                  # SQL queries, it is the column alias (e.g., `"Word"` in the
                  # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
                  # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
                  # columns might have an empty name (e.g., `"SELECT
                  # UPPER(ColName)"`). Note that a query result can contain
                  # multiple fields with the same name.
            },
          ],
        },
        "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
            # information about the new transaction is yielded here.
          "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
              # for the transaction. Not returned by default: see
              # TransactionOptions.ReadOnly.return_read_timestamp.
              #
              # A timestamp in RFC3339 UTC \"Zulu\" format, accurate to nanoseconds.
              # Example: `"2014-10-02T15:01:23.045123456Z"`.
          "id": "A String", # `id` may be used to identify the transaction in subsequent
              # Read,
              # ExecuteSql,
              # Commit, or
              # Rollback calls.
              #
              # Single-use read-only transactions do not have IDs, because
              # single-use transactions do not support multiple requests.
        },
      },
    }</pre>
</div>

</body></html>