<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="spanner_v1.html">Cloud Spanner API</a> . <a href="spanner_v1.projects.html">projects</a> . <a href="spanner_v1.projects.instances.html">instances</a> . <a href="spanner_v1.projects.instances.databases.html">databases</a> . <a href="spanner_v1.projects.instances.databases.sessions.html">sessions</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#beginTransaction">beginTransaction(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Begins a new transaction. This step can often be skipped:</p>
<p class="toc_element">
  <code><a href="#commit">commit(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Commits a transaction. The request includes the mutations to be</p>
<p class="toc_element">
  <code><a href="#create">create(database, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a new session. A session can be used to perform</p>
<p class="toc_element">
  <code><a href="#delete">delete(name, x__xgafv=None)</a></code></p>
<p class="firstline">Ends a session, releasing server resources associated with it.</p>
<p class="toc_element">
  <code><a href="#executeSql">executeSql(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Executes an SQL query, returning all rows in a single reply. This</p>
<p class="toc_element">
  <code><a href="#executeStreamingSql">executeStreamingSql(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Like ExecuteSql, except returns the result</p>
<p class="toc_element">
  <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets a session. Returns `NOT_FOUND` if the session does not exist.</p>
<p class="toc_element">
  <code><a href="#read">read(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Reads rows from the database using key lookups and scans, as a</p>
<p class="toc_element">
  <code><a href="#rollback">rollback(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Rolls back a transaction, releasing any locks it holds. It is a good</p>
<p class="toc_element">
  <code><a href="#streamingRead">streamingRead(session, body, x__xgafv=None)</a></code></p>
<p class="firstline">Like Read, except returns the result set as a</p>
<h3>Method Details</h3>
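As orientation before the per-method details, the sketch below shows how these sessions methods are reached from the Python client. It is a minimal sketch, not part of the generated reference: the project, instance, database, and session names are hypothetical placeholders, and the `discovery.build` call (which needs credentials and network access) is shown commented out.

```python
# Hedged sketch: reaching the sessions resource from google-api-python-client.
# All resource names below are hypothetical placeholders.

# from googleapiclient import discovery
# service = discovery.build('spanner', 'v1')  # requires credentials + network
# sessions = service.projects().instances().databases().sessions()

# A session is created against a database, then passed as the `session`
# argument of beginTransaction / executeSql / read / commit / rollback:
# session = sessions.create(database=database).execute()
# name = session['name']

def session_path(project, instance, db, session_id):
    """Build the fully qualified session name the `session` parameter expects."""
    return ('projects/%s/instances/%s/databases/%s/sessions/%s'
            % (project, instance, db, session_id))

# The `body` arguments are plain dicts matching the shapes documented below,
# e.g. a minimal strong read-only BeginTransaction body:
begin_body = {'options': {'readOnly': {'strong': True}}}

print(session_path('my-project', 'my-instance', 'my-db', 's1'))
```

A usage note: each method returns an `HttpRequest`; nothing is sent until `.execute()` is called on it.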
<div class="method">
    <code class="details" id="beginTransaction">beginTransaction(session, body, x__xgafv=None)</code>
  <pre>Begins a new transaction. This step can often be skipped:
Read, ExecuteSql and
Commit can begin a new transaction as a
side-effect.

Args:
  session: string, Required. The session in which the transaction runs. (required)
  body: object, The request body. (required)
    The object takes the form of:

{ # The request for BeginTransaction.
    "options": { # # Transactions # Required. Options for the new transaction.
        #
        #
        # Each session can have at most one active transaction at a time. After the
        # active transaction is completed, the session can immediately be
        # re-used for the next transaction. It is not necessary to create a
        # new session for each transaction.
        #
        # # Transaction Modes
        #
        # Cloud Spanner supports two transaction modes:
        #
        #   1. Locking read-write. This type of transaction is the only way
        #      to write data into Cloud Spanner. These transactions rely on
        #      pessimistic locking and, if necessary, two-phase commit.
        #      Locking read-write transactions may abort, requiring the
        #      application to retry.
        #
        #   2. Snapshot read-only. This transaction type provides guaranteed
        #      consistency across several reads, but does not allow
        #      writes. Snapshot read-only transactions can be configured to
        #      read at timestamps in the past. Snapshot read-only
        #      transactions do not need to be committed.
        #
        # For transactions that only read, snapshot read-only transactions
        # provide simpler semantics and are almost always faster. In
        # particular, read-only transactions do not take locks, so they do
        # not conflict with read-write transactions. As a consequence of not
        # taking locks, they also do not abort, so retry loops are not needed.
        #
        # Transactions may only read/write data in a single database. They
        # may, however, read/write data in different tables within that
        # database.
        #
        # ## Locking Read-Write Transactions
        #
        # Locking transactions may be used to atomically read-modify-write
        # data anywhere in a database. This type of transaction is externally
        # consistent.
        #
        # Clients should attempt to minimize the amount of time a transaction
        # is active. Faster transactions commit with higher probability
        # and cause less contention. Cloud Spanner attempts to keep read locks
        # active as long as the transaction continues to do reads, and the
        # transaction has not been terminated by
        # Commit or
        # Rollback. Long periods of
        # inactivity at the client may cause Cloud Spanner to release a
        # transaction's locks and abort it.
        #
        # Reads performed within a transaction acquire locks on the data
        # being read. Writes can only be done at commit time, after all reads
        # have been completed.
        # Conceptually, a read-write transaction consists of zero or more
        # reads or SQL queries followed by
        # Commit. At any time before
        # Commit, the client can send a
        # Rollback request to abort the
        # transaction.
        #
        # ### Semantics
        #
        # Cloud Spanner can commit the transaction if all read locks it acquired
        # are still valid at commit time, and it is able to acquire write
        # locks for all writes. Cloud Spanner can abort the transaction for any
        # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
        # that the transaction has not modified any user data in Cloud Spanner.
        #
        # Unless the transaction commits, Cloud Spanner makes no guarantees about
        # how long the transaction's locks were held for. It is an error to
        # use Cloud Spanner locks for any sort of mutual exclusion other than
        # between Cloud Spanner transactions themselves.
        #
        # ### Retrying Aborted Transactions
        #
        # When a transaction aborts, the application can choose to retry the
        # whole transaction again. To maximize the chances of successfully
        # committing the retry, the client should execute the retry in the
        # same session as the original attempt. The original session's lock
        # priority increases with each consecutive abort, meaning that each
        # attempt has a slightly better chance of success than the previous.
        #
        # Under some circumstances (e.g., many transactions attempting to
        # modify the same row(s)), a transaction can abort many times in a
        # short period before successfully committing. Thus, it is not a good
        # idea to cap the number of retries a transaction can attempt;
        # instead, it is better to limit the total amount of wall time spent
        # retrying.
        #
        # ### Idle Transactions
        #
        # A transaction is considered idle if it has no outstanding reads or
        # SQL queries and has not started a read or SQL query within the last 10
        # seconds. Idle transactions can be aborted by Cloud Spanner so that they
        # don't hold on to locks indefinitely. In that case, the commit will
        # fail with error `ABORTED`.
        #
        # If this behavior is undesirable, periodically executing a simple
        # SQL query in the transaction (e.g., `SELECT 1`) prevents the
        # transaction from becoming idle.
        #
        # ## Snapshot Read-Only Transactions
        #
        # Snapshot read-only transactions provide a simpler method than
        # locking read-write transactions for doing several consistent
        # reads. However, this type of transaction does not support writes.
        #
        # Snapshot transactions do not take locks. Instead, they work by
        # choosing a Cloud Spanner timestamp, then executing all reads at that
        # timestamp. Since they do not acquire locks, they do not block
        # concurrent read-write transactions.
        #
        # Unlike locking read-write transactions, snapshot read-only
        # transactions never abort. They can fail if the chosen read
        # timestamp is garbage collected; however, the default garbage
        # collection policy is generous enough that most applications do not
        # need to worry about this in practice.
        #
        # Snapshot read-only transactions do not need to call
        # Commit or
        # Rollback (and in fact are not
        # permitted to do so).
        #
        # To execute a snapshot transaction, the client specifies a timestamp
        # bound, which tells Cloud Spanner how to choose a read timestamp.
        #
        # The types of timestamp bound are:
        #
        #   - Strong (the default).
        #   - Bounded staleness.
        #   - Exact staleness.
        #
        # If the Cloud Spanner database to be read is geographically distributed,
        # stale read-only transactions can execute more quickly than strong
        # or read-write transactions, because they are able to execute far
        # from the leader replica.
        #
        # Each type of timestamp bound is discussed in detail below.
        #
        # ### Strong
        #
        # Strong reads are guaranteed to see the effects of all transactions
        # that have committed before the start of the read. Furthermore, all
        # rows yielded by a single read are consistent with each other -- if
        # any part of the read observes a transaction, all parts of the read
        # see the transaction.
        #
        # Strong reads are not repeatable: two consecutive strong read-only
        # transactions might return inconsistent results if there are
        # concurrent writes. If consistency across reads is required, the
        # reads should be executed within a transaction or at an exact read
        # timestamp.
        #
        # See TransactionOptions.ReadOnly.strong.
        #
        # ### Exact Staleness
        #
        # These timestamp bounds execute reads at a user-specified
        # timestamp. Reads at a timestamp are guaranteed to see a consistent
        # prefix of the global transaction history: they observe
        # modifications done by all transactions with a commit timestamp <=
        # the read timestamp, and observe none of the modifications done by
        # transactions with a larger commit timestamp. They will block until
        # all conflicting transactions that may be assigned commit timestamps
        # <= the read timestamp have finished.
        #
        # The timestamp can either be expressed as an absolute Cloud Spanner commit
        # timestamp or a staleness relative to the current time.
        #
        # These modes do not require a "negotiation phase" to pick a
        # timestamp. As a result, they execute slightly faster than the
        # equivalent boundedly stale concurrency modes. On the other hand,
        # boundedly stale reads usually return fresher results.
        #
        # See TransactionOptions.ReadOnly.read_timestamp and
        # TransactionOptions.ReadOnly.exact_staleness.
        #
        # ### Bounded Staleness
        #
        # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
        # subject to a user-provided staleness bound. Cloud Spanner chooses the
        # newest timestamp within the staleness bound that allows execution
        # of the reads at the closest available replica without blocking.
        #
        # All rows yielded are consistent with each other -- if any part of
        # the read observes a transaction, all parts of the read see the
        # transaction. Boundedly stale reads are not repeatable: two stale
        # reads, even if they use the same staleness bound, can execute at
        # different timestamps and thus return inconsistent results.
        #
        # Boundedly stale reads execute in two phases: the first phase
        # negotiates a timestamp among all replicas needed to serve the
        # read. In the second phase, reads are executed at the negotiated
        # timestamp.
        #
        # As a result of the two-phase execution, bounded staleness reads are
        # usually a little slower than comparable exact staleness
        # reads. However, they are typically able to return fresher
        # results, and are more likely to execute at the closest replica.
        #
        # Because the timestamp negotiation requires up-front knowledge of
        # which rows will be read, it can only be used with single-use
        # read-only transactions.
        #
        # See TransactionOptions.ReadOnly.max_staleness and
        # TransactionOptions.ReadOnly.min_read_timestamp.
        #
        # ### Old Read Timestamps and Garbage Collection
        #
        # Cloud Spanner continuously garbage collects deleted and overwritten data
        # in the background to reclaim storage space. This process is known
        # as "version GC". By default, version GC reclaims versions after they
        # are one hour old. Because of this, Cloud Spanner cannot perform reads
        # at read timestamps more than one hour in the past. This
        # restriction also applies to in-progress reads and/or SQL queries whose
        # timestamps become too old while executing. Reads and SQL queries with
        # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
      "readWrite": { # Options for read-write transactions. # Transaction may write.
          #
          # Authorization to begin a read-write transaction requires
          # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
          # on the `session` resource.
      },
      "readOnly": { # Options for read-only transactions. # Transaction will not write.
          #
          # Authorization to begin a read-only transaction requires
          # `spanner.databases.beginReadOnlyTransaction` permission
          # on the `session` resource.
        "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
            #
            # This is useful for requesting fresher data than some previous
            # read, or data that is fresh enough to observe the effects of some
            # previously committed transaction whose timestamp is known.
            #
            # Note that this option can only be used in single-use transactions.
        "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
            # reads at a specific timestamp are repeatable; the same read at
            # the same timestamp always returns the same data. If the
            # timestamp is in the future, the read will block until the
            # specified timestamp, modulo the read's deadline.
            #
            # Useful for large-scale consistent reads such as mapreduces, or
            # for coordinating many reads against a consistent snapshot of the
            # data.
        "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
            # seconds. Guarantees that all writes that have committed more
            # than the specified number of seconds ago are visible. Because
            # Cloud Spanner chooses the exact timestamp, this mode works even if
            # the client's local clock is substantially skewed from Cloud Spanner
            # commit timestamps.
            #
            # Useful for reading the freshest data available at a nearby
            # replica, while bounding the possible staleness if the local
            # replica has fallen behind.
            #
            # Note that this option can only be used in single-use
            # transactions.
        "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
            # old. The timestamp is chosen soon after the read is started.
            #
            # Guarantees that all writes that have committed more than the
            # specified number of seconds ago are visible. Because Cloud Spanner
            # chooses the exact timestamp, this mode works even if the client's
            # local clock is substantially skewed from Cloud Spanner commit
            # timestamps.
            #
            # Useful for reading at nearby replicas without the distributed
            # timestamp negotiation overhead of `max_staleness`.
        "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
            # the Transaction message that describes the transaction.
        "strong": True or False, # Read at a timestamp where all previously committed transactions
            # are visible.
      },
    },
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A transaction.
    "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
        # for the transaction. Not returned by default: see
        # TransactionOptions.ReadOnly.return_read_timestamp.
    "id": "A String", # `id` may be used to identify the transaction in subsequent
        # Read,
        # ExecuteSql,
        # Commit, or
        # Rollback calls.
        #
        # Single-use read-only transactions do not have IDs, because
        # single-use transactions do not support multiple requests.
  }</pre>
</div>
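The TransactionOptions shape above boils down to choosing exactly one mode. The sketch below builds request bodies for the three common cases; it is illustrative only (the staleness value and the commented service call use placeholder names), and duration fields in the JSON API are encoded as strings such as `'10s'`.

```python
# Hedged sketch of beginTransaction request bodies, following the
# TransactionOptions shape documented above. Values are illustrative.

# Locking read-write transaction: readWrite carries no sub-options.
read_write = {'options': {'readWrite': {}}}

# Snapshot read-only with the default strong bound, asking Cloud Spanner to
# report the read timestamp it selected in the returned Transaction:
strong_read = {'options': {'readOnly': {'strong': True,
                                        'returnReadTimestamp': True}}}

# Snapshot read-only at an exact staleness of 10 seconds:
exact_stale = {'options': {'readOnly': {'exactStaleness': '10s'}}}

# With a sessions resource built via googleapiclient.discovery (placeholder
# names, shown commented because it needs credentials and network access):
# txn = sessions.beginTransaction(session=name, body=strong_read).execute()
# txn_id = txn['id']  # passed to read / executeSql / commit / rollback

for body in (read_write, strong_read, exact_stale):
    assert set(body) == {'options'}  # each request wraps one options dict
```

Note the trade-off documented above: only one of `readWrite` or `readOnly` is set, and within `readOnly` only one timestamp bound should be chosen.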

<div class="method">
    <code class="details" id="commit">commit(session, body, x__xgafv=None)</code>
  <pre>Commits a transaction. The request includes the mutations to be
applied to rows in the database.

`Commit` might return an `ABORTED` error. This can occur at any time;
commonly, the cause is conflicts with concurrent
transactions. However, it can also happen for a variety of other
reasons. If `Commit` returns `ABORTED`, the caller should re-attempt
the transaction from the beginning, re-using the same session.

Args:
  session: string, Required. The session in which the transaction to be committed is running. (required)
  body: object, The request body. (required)
    The object takes the form of:

{ # The request for Commit.
    "transactionId": "A String", # Commit a previously-started transaction.
    "mutations": [ # The mutations to be executed when this transaction commits. All
        # mutations are applied atomically, in the order they appear in
        # this list.
      { # A modification to one or more Cloud Spanner rows. Mutations can be
          # applied to a Cloud Spanner database by sending them in a
          # Commit call.
        "insert": { # Arguments to insert, update, insert_or_update, and # Insert new rows in a table. If any of the rows already exist,
            # the write or transaction fails with error `ALREADY_EXISTS`.
            # replace operations.
          "table": "A String", # Required. The table whose rows will be written.
          "values": [ # The values to be written. `values` can contain more than one
              # list of values. If it does, then multiple rows are written, one
              # for each entry in `values`. Each list in `values` must have
              # exactly as many entries as there are entries in columns
              # above. Sending multiple lists is equivalent to sending multiple
              # `Mutation`s, each containing one `values` entry and repeating
              # table and columns. Individual values in each list are
              # encoded as described here.
            [
              "",
            ],
          ],
          "columns": [ # The names of the columns in table to be written.
              #
              # The list of columns must contain enough columns to allow
              # Cloud Spanner to derive values for all primary key columns in the
              # row(s) to be modified.
            "A String",
          ],
        },
        "replace": { # Arguments to insert, update, insert_or_update, and # Like insert, except that if the row already exists, it is
            # deleted, and the column values provided are inserted
            # instead. Unlike insert_or_update, this means any values not
            # explicitly written become `NULL`.
            # replace operations.
          "table": "A String", # Required. The table whose rows will be written.
          "values": [ # The values to be written. `values` can contain more than one
              # list of values. If it does, then multiple rows are written, one
              # for each entry in `values`. Each list in `values` must have
              # exactly as many entries as there are entries in columns
              # above. Sending multiple lists is equivalent to sending multiple
              # `Mutation`s, each containing one `values` entry and repeating
              # table and columns. Individual values in each list are
              # encoded as described here.
            [
              "",
            ],
          ],
          "columns": [ # The names of the columns in table to be written.
              #
              # The list of columns must contain enough columns to allow
              # Cloud Spanner to derive values for all primary key columns in the
              # row(s) to be modified.
            "A String",
          ],
        },
        "insertOrUpdate": { # Arguments to insert, update, insert_or_update, and # Like insert, except that if the row already exists, then
            # its column values are overwritten with the ones provided. Any
            # column values not explicitly written are preserved.
            # replace operations.
          "table": "A String", # Required. The table whose rows will be written.
          "values": [ # The values to be written. `values` can contain more than one
              # list of values. If it does, then multiple rows are written, one
              # for each entry in `values`. Each list in `values` must have
              # exactly as many entries as there are entries in columns
              # above. Sending multiple lists is equivalent to sending multiple
              # `Mutation`s, each containing one `values` entry and repeating
              # table and columns. Individual values in each list are
              # encoded as described here.
            [
              "",
            ],
          ],
          "columns": [ # The names of the columns in table to be written.
              #
              # The list of columns must contain enough columns to allow
              # Cloud Spanner to derive values for all primary key columns in the
              # row(s) to be modified.
            "A String",
          ],
        },
        "update": { # Arguments to insert, update, insert_or_update, and # Update existing rows in a table. If any of the rows does not
            # already exist, the transaction fails with error `NOT_FOUND`.
            # replace operations.
          "table": "A String", # Required. The table whose rows will be written.
          "values": [ # The values to be written. `values` can contain more than one
              # list of values. If it does, then multiple rows are written, one
              # for each entry in `values`. Each list in `values` must have
              # exactly as many entries as there are entries in columns
              # above. Sending multiple lists is equivalent to sending multiple
              # `Mutation`s, each containing one `values` entry and repeating
              # table and columns. Individual values in each list are
              # encoded as described here.
            [
              "",
            ],
          ],
          "columns": [ # The names of the columns in table to be written.
              #
              # The list of columns must contain enough columns to allow
              # Cloud Spanner to derive values for all primary key columns in the
              # row(s) to be modified.
            "A String",
          ],
        },
        "delete": { # Arguments to delete operations. # Delete rows from a table. Succeeds whether or not the named
            # rows were present.
          "table": "A String", # Required. The table whose rows will be deleted.
          "keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. The primary keys of the rows within table to delete.
              # the keys are expected to be in the same table or index. The keys need
              # not be sorted in any particular way.
              #
              # If the same key is specified multiple times in the set (for example
              # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
              # behaves as if the key were only specified once.
            "ranges": [ # A list of key ranges. See KeyRange for more information about
                # key range specifications.
              { # KeyRange represents a range of rows in a table or index.
                  #
                  # A range has a start key and an end key. These keys can be open or
                  # closed, indicating if the range includes rows with that key.
                  #
                  # Keys are represented by lists, where the ith value in the list
                  # corresponds to the ith component of the table or index primary key.
                  # Individual values are encoded as described here.
                  #
                  # For example, consider the following table definition:
                  #
                  #     CREATE TABLE UserEvents (
                  #       UserName STRING(MAX),
                  #       EventDate STRING(10)
                  #     ) PRIMARY KEY(UserName, EventDate);
                  #
                  # The following keys name rows in this table:
                  #
                  #     "Bob", "2014-09-23"
                  #
                  # Since the `UserEvents` table's `PRIMARY KEY` clause names two
                  # columns, each `UserEvents` key has two elements; the first is the
                  # `UserName`, and the second is the `EventDate`.
                  #
                  # Key ranges with multiple components are interpreted
                  # lexicographically by component using the table or index key's declared
                  # sort order. For example, the following range returns all events for
                  # user `"Bob"` that occurred in the year 2015:
                  #
                  #     "start_closed": ["Bob", "2015-01-01"]
                  #     "end_closed": ["Bob", "2015-12-31"]
                  #
                  # Start and end keys can omit trailing key components. This affects the
                  # inclusion and exclusion of rows that exactly match the provided key
                  # components: if the key is closed, then rows that exactly match the
                  # provided components are included; if the key is open, then rows
                  # that exactly match are not included.
                  #
                  # For example, the following range includes all events for `"Bob"` that
                  # occurred during and after the year 2000:
                  #
                  #     "start_closed": ["Bob", "2000-01-01"]
                  #     "end_closed": ["Bob"]
                  #
                  # The next example retrieves all events for `"Bob"`:
                  #
                  #     "start_closed": ["Bob"]
                  #     "end_closed": ["Bob"]
                  #
                  # To retrieve events before the year 2000:
                  #
                  #     "start_closed": ["Bob"]
                  #     "end_open": ["Bob", "2000-01-01"]
                  #
                  # The following range includes all rows in the table:
                  #
                  #     "start_closed": []
                  #     "end_closed": []
                  #
                  # This range returns all users whose `UserName` begins with any
                  # character from A to C:
                  #
                  #     "start_closed": ["A"]
                  #     "end_open": ["D"]
                  #
                  # This range returns all users whose `UserName` begins with B:
                  #
                  #     "start_closed": ["B"]
                  #     "end_open": ["C"]
                  #
                  # Key ranges honor column sort order. For example, suppose a table is
                  # defined as follows:
                  #
                  #     CREATE TABLE DescendingSortedTable (
                  #       Key INT64,
                  #       ...
                  #     ) PRIMARY KEY(Key DESC);
                  #
                  # The following range retrieves all rows with key values between 1
                  # and 100 inclusive:
                  #
                  #     "start_closed": ["100"]
                  #     "end_closed": ["1"]
                  #
                  # Note that 100 is passed as the start, and 1 is passed as the end,
                  # because `Key` is a descending column in the schema.
                "endOpen": [ # If the end is open, then the range excludes rows whose first
                    # `len(end_open)` key columns exactly match `end_open`.
                  "",
                ],
                "startOpen": [ # If the start is open, then the range excludes rows whose first
                    # `len(start_open)` key columns exactly match `start_open`.
                  "",
                ],
                "endClosed": [ # If the end is closed, then the range includes all rows whose
                    # first `len(end_closed)` key columns exactly match `end_closed`.
                  "",
                ],
                "startClosed": [ # If the start is closed, then the range includes all rows whose
                    # first `len(start_closed)` key columns exactly match `start_closed`.
                  "",
                ],
              },
            ],
            "keys": [ # A list of specific keys. Entries in `keys` should have exactly as
                # many elements as there are columns in the primary or index key
                # with which this `KeySet` is used. Individual key values are
                # encoded as described here.
              [
                "",
              ],
            ],
            "all": True or False, # For convenience `all` can be set to `true` to indicate that this
                # `KeySet` matches all keys in the table or index. Note that any keys
                # specified in `keys` or `ranges` are only yielded once.
          },
        },
      },
    ],
    "singleUseTransaction": { # # Transactions # Execute mutations in a temporary transaction. Note that unlike
        # commit of a previously-started transaction, commit with a
        # temporary transaction is non-idempotent. That is, if the
        # `CommitRequest` is sent to Cloud Spanner more than once (for
        # instance, due to retries in the application, or in the
        # transport library), it is possible that the mutations are
        # executed more than once. If this is undesirable, use
        # BeginTransaction and
        # Commit instead.
        #
        #
        # Each session can have at most one active transaction at a time. After the
        # active transaction is completed, the session can immediately be
        # re-used for the next transaction. It is not necessary to create a
        # new session for each transaction.
        #
        # # Transaction Modes
        #
        # Cloud Spanner supports two transaction modes:
        #
        #   1. Locking read-write. This type of transaction is the only way
        #      to write data into Cloud Spanner. These transactions rely on
        #      pessimistic locking and, if necessary, two-phase commit.
        #      Locking read-write transactions may abort, requiring the
        #      application to retry.
        #
        #   2. Snapshot read-only. This transaction type provides guaranteed
        #      consistency across several reads, but does not allow
        #      writes. Snapshot read-only transactions can be configured to
        #      read at timestamps in the past. Snapshot read-only
        #      transactions do not need to be committed.
        #
        # For transactions that only read, snapshot read-only transactions
        # provide simpler semantics and are almost always faster. In
        # particular, read-only transactions do not take locks, so they do
        # not conflict with read-write transactions. As a consequence of not
        # taking locks, they also do not abort, so retry loops are not needed.
        #
        # Transactions may only read/write data in a single database. They
        # may, however, read/write data in different tables within that
        # database.
        #
        # ## Locking Read-Write Transactions
        #
        # Locking transactions may be used to atomically read-modify-write
        # data anywhere in a database. This type of transaction is externally
        # consistent.
        #
        # Clients should attempt to minimize the amount of time a transaction
        # is active. Faster transactions commit with higher probability
        # and cause less contention. Cloud Spanner attempts to keep read locks
        # active as long as the transaction continues to do reads, and the
        # transaction has not been terminated by
        # Commit or
        # Rollback. Long periods of
        # inactivity at the client may cause Cloud Spanner to release a
        # transaction's locks and abort it.
        #
        # Reads performed within a transaction acquire locks on the data
        # being read. Writes can only be done at commit time, after all reads
        # have been completed.
        # Conceptually, a read-write transaction consists of zero or more
        # reads or SQL queries followed by
        # Commit. At any time before
        # Commit, the client can send a
        # Rollback request to abort the
        # transaction.
        #
        # ### Semantics
        #
        # Cloud Spanner can commit the transaction if all read locks it acquired
        # are still valid at commit time, and it is able to acquire write
        # locks for all writes. Cloud Spanner can abort the transaction for any
        # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
        # that the transaction has not modified any user data in Cloud Spanner.
        #
        # Unless the transaction commits, Cloud Spanner makes no guarantees about
        # how long the transaction's locks were held for. It is an error to
        # use Cloud Spanner locks for any sort of mutual exclusion other than
        # between Cloud Spanner transactions themselves.
        #
        # ### Retrying Aborted Transactions
        #
        # When a transaction aborts, the application can choose to retry the
        # whole transaction again. To maximize the chances of successfully
        # committing the retry, the client should execute the retry in the
        # same session as the original attempt. The original session's lock
        # priority increases with each consecutive abort, meaning that each
        # attempt has a slightly better chance of success than the previous.
        #
        # Under some circumstances (e.g., many transactions attempting to
        # modify the same row(s)), a transaction can abort many times in a
        # short period before successfully committing. Thus, it is not a good
        # idea to cap the number of retries a transaction can attempt;
        # instead, it is better to limit the total amount of wall time spent
        # retrying.
        #
        # ### Idle Transactions
        #
        # A transaction is considered idle if it has no outstanding reads or
        # SQL queries and has not started a read or SQL query within the last 10
        # seconds. Idle transactions can be aborted by Cloud Spanner so that they
        # don't hold on to locks indefinitely. In that case, the commit will
        # fail with error `ABORTED`.
        #
        # If this behavior is undesirable, periodically executing a simple
        # SQL query in the transaction (e.g., `SELECT 1`) prevents the
        # transaction from becoming idle.
782 #
783 # ## Snapshot Read-Only Transactions
784 #
785      # Snapshot read-only transactions provide a simpler method than
786 # locking read-write transactions for doing several consistent
787 # reads. However, this type of transaction does not support writes.
788 #
789 # Snapshot transactions do not take locks. Instead, they work by
790 # choosing a Cloud Spanner timestamp, then executing all reads at that
791 # timestamp. Since they do not acquire locks, they do not block
792 # concurrent read-write transactions.
793 #
794 # Unlike locking read-write transactions, snapshot read-only
795 # transactions never abort. They can fail if the chosen read
796 # timestamp is garbage collected; however, the default garbage
797 # collection policy is generous enough that most applications do not
798 # need to worry about this in practice.
799 #
800 # Snapshot read-only transactions do not need to call
801 # Commit or
802 # Rollback (and in fact are not
803 # permitted to do so).
804 #
805 # To execute a snapshot transaction, the client specifies a timestamp
806 # bound, which tells Cloud Spanner how to choose a read timestamp.
807 #
808 # The types of timestamp bound are:
809 #
810 # - Strong (the default).
811 # - Bounded staleness.
812 # - Exact staleness.
813 #
814 # If the Cloud Spanner database to be read is geographically distributed,
815 # stale read-only transactions can execute more quickly than strong
816      # or read-write transactions, because they are able to execute far
817 # from the leader replica.
818 #
819 # Each type of timestamp bound is discussed in detail below.
820 #
821 # ### Strong
822 #
823 # Strong reads are guaranteed to see the effects of all transactions
824 # that have committed before the start of the read. Furthermore, all
825 # rows yielded by a single read are consistent with each other -- if
826 # any part of the read observes a transaction, all parts of the read
827 # see the transaction.
828 #
829 # Strong reads are not repeatable: two consecutive strong read-only
830 # transactions might return inconsistent results if there are
831 # concurrent writes. If consistency across reads is required, the
832 # reads should be executed within a transaction or at an exact read
833 # timestamp.
834 #
835 # See TransactionOptions.ReadOnly.strong.
836 #
837 # ### Exact Staleness
838 #
839 # These timestamp bounds execute reads at a user-specified
840 # timestamp. Reads at a timestamp are guaranteed to see a consistent
841 # prefix of the global transaction history: they observe
842 # modifications done by all transactions with a commit timestamp <=
843 # the read timestamp, and observe none of the modifications done by
844 # transactions with a larger commit timestamp. They will block until
845 # all conflicting transactions that may be assigned commit timestamps
846 # <= the read timestamp have finished.
847 #
848 # The timestamp can either be expressed as an absolute Cloud Spanner commit
849 # timestamp or a staleness relative to the current time.
850 #
851 # These modes do not require a "negotiation phase" to pick a
852 # timestamp. As a result, they execute slightly faster than the
853 # equivalent boundedly stale concurrency modes. On the other hand,
854 # boundedly stale reads usually return fresher results.
855 #
856 # See TransactionOptions.ReadOnly.read_timestamp and
857 # TransactionOptions.ReadOnly.exact_staleness.
858 #
859 # ### Bounded Staleness
860 #
861 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
862 # subject to a user-provided staleness bound. Cloud Spanner chooses the
863 # newest timestamp within the staleness bound that allows execution
864 # of the reads at the closest available replica without blocking.
865 #
866 # All rows yielded are consistent with each other -- if any part of
867 # the read observes a transaction, all parts of the read see the
868 # transaction. Boundedly stale reads are not repeatable: two stale
869 # reads, even if they use the same staleness bound, can execute at
870 # different timestamps and thus return inconsistent results.
871 #
872 # Boundedly stale reads execute in two phases: the first phase
873 # negotiates a timestamp among all replicas needed to serve the
874 # read. In the second phase, reads are executed at the negotiated
875 # timestamp.
876 #
877      # As a result of the two-phase execution, bounded staleness reads are
878 # usually a little slower than comparable exact staleness
879 # reads. However, they are typically able to return fresher
880 # results, and are more likely to execute at the closest replica.
881 #
882 # Because the timestamp negotiation requires up-front knowledge of
883 # which rows will be read, it can only be used with single-use
884 # read-only transactions.
885 #
886 # See TransactionOptions.ReadOnly.max_staleness and
887 # TransactionOptions.ReadOnly.min_read_timestamp.
888 #
889 # ### Old Read Timestamps and Garbage Collection
890 #
891 # Cloud Spanner continuously garbage collects deleted and overwritten data
892 # in the background to reclaim storage space. This process is known
893 # as "version GC". By default, version GC reclaims versions after they
894 # are one hour old. Because of this, Cloud Spanner cannot perform reads
895 # at read timestamps more than one hour in the past. This
896 # restriction also applies to in-progress reads and/or SQL queries whose
897      # timestamps become too old while executing. Reads and SQL queries with
898 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
899 "readWrite": { # Options for read-write transactions. # Transaction may write.
900 #
901 # Authorization to begin a read-write transaction requires
902 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
903 # on the `session` resource.
904 },
905 "readOnly": { # Options for read-only transactions. # Transaction will not write.
906 #
907 # Authorization to begin a read-only transaction requires
908 # `spanner.databases.beginReadOnlyTransaction` permission
909 # on the `session` resource.
910 "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
911 #
912 # This is useful for requesting fresher data than some previous
913 # read, or data that is fresh enough to observe the effects of some
914 # previously committed transaction whose timestamp is known.
915 #
916 # Note that this option can only be used in single-use transactions.
917 "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
918 # reads at a specific timestamp are repeatable; the same read at
919 # the same timestamp always returns the same data. If the
920 # timestamp is in the future, the read will block until the
921 # specified timestamp, modulo the read's deadline.
922 #
923       # Useful for large-scale consistent reads such as mapreduces, or
924 # for coordinating many reads against a consistent snapshot of the
925 # data.
926 "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
927 # seconds. Guarantees that all writes that have committed more
928 # than the specified number of seconds ago are visible. Because
929 # Cloud Spanner chooses the exact timestamp, this mode works even if
930 # the client's local clock is substantially skewed from Cloud Spanner
931 # commit timestamps.
932 #
933 # Useful for reading the freshest data available at a nearby
934 # replica, while bounding the possible staleness if the local
935 # replica has fallen behind.
936 #
937 # Note that this option can only be used in single-use
938 # transactions.
939 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
940 # old. The timestamp is chosen soon after the read is started.
941 #
942 # Guarantees that all writes that have committed more than the
943 # specified number of seconds ago are visible. Because Cloud Spanner
944 # chooses the exact timestamp, this mode works even if the client's
945 # local clock is substantially skewed from Cloud Spanner commit
946 # timestamps.
947 #
948 # Useful for reading at nearby replicas without the distributed
949 # timestamp negotiation overhead of `max_staleness`.
950 "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
951 # the Transaction message that describes the transaction.
952 "strong": True or False, # Read at a timestamp where all previously committed transactions
953 # are visible.
954 },
955 },
956 }
957
958 x__xgafv: string, V1 error format.
959 Allowed values
960 1 - v1 error format
961 2 - v2 error format
962
963Returns:
964 An object of the form:
965
966 { # The response for Commit.
967 "commitTimestamp": "A String", # The Cloud Spanner timestamp at which the transaction committed.
968 }</pre>
969</div>
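The "Retrying Aborted Transactions" guidance above (bound total wall time, not the number of attempts) can be sketched as a small retry helper. This is an illustration only: `run_txn` and the `Aborted` exception are hypothetical stand-ins for your client's transaction function and its `ABORTED` error, not part of this API.

```python
import random
import time


class Aborted(Exception):
    """Stand-in for the client error raised when a commit returns ABORTED."""


def commit_with_retry(run_txn, max_wall_seconds=32.0):
    """Retry an abortable transaction until it commits, limiting total
    wall time spent retrying rather than capping the attempt count."""
    deadline = time.monotonic() + max_wall_seconds
    delay = 0.05
    while True:
        try:
            return run_txn()
        except Aborted:
            if time.monotonic() >= deadline:
                raise
            # Back off with jitter, then retry. Per the docs above, the
            # retry should run in the same session as the original attempt
            # so the session's lock priority increases with each abort.
            time.sleep(delay * random.random())
            delay = min(delay * 2, 2.0)
```

Because many transactions contending on the same rows can abort repeatedly in a short period, a wall-time deadline degrades gracefully where a fixed retry cap would give up too early.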
970
971<div class="method">
972 <code class="details" id="create">create(database, x__xgafv=None)</code>
973 <pre>Creates a new session. A session can be used to perform
974transactions that read and/or modify data in a Cloud Spanner database.
975Sessions are meant to be reused for many consecutive
976transactions.
977
978Sessions can only execute one transaction at a time. To execute
979multiple concurrent read-write/write-only transactions, create
980multiple sessions. Note that standalone reads and queries use a
981transaction internally, and count toward the one transaction
982limit.
983
984Cloud Spanner limits the number of sessions that can exist at any given
985time; thus, it is a good idea to delete idle and/or unneeded sessions.
986Aside from explicit deletes, Cloud Spanner can delete sessions for
987which no operations are sent for more than an hour, or due to
988internal errors. If a session is deleted, requests to it
989return `NOT_FOUND`.
990
991Idle sessions can be kept alive by sending a trivial SQL query
992periodically, e.g., `"SELECT 1"`.
993
994Args:
995 database: string, Required. The database in which the new session is created. (required)
996 x__xgafv: string, V1 error format.
997 Allowed values
998 1 - v1 error format
999 2 - v2 error format
1000
1001Returns:
1002 An object of the form:
1003
1004 { # A session in the Cloud Spanner API.
1005 "name": "A String", # Required. The name of the session.
1006 }</pre>
1007</div>
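The `database` argument above is the database's full resource name. As a sketch, the helper below builds that name; the commented usage assumes the google-api-python-client discovery client and ambient credentials, which are assumptions about your environment rather than part of this page.

```python
def database_path(project, instance, database):
    """Build the resource name expected by the `database` argument:
    projects/{project}/instances/{instance}/databases/{database}"""
    return "projects/%s/instances/%s/databases/%s" % (project, instance, database)


# Hypothetical usage (requires google-api-python-client and credentials):
# from googleapiclient import discovery
# spanner = discovery.build('spanner', 'v1')
# session = spanner.projects().instances().databases().sessions().create(
#     database=database_path('my-project', 'my-instance', 'my-db')).execute()
# print(session['name'])  # the session name used by subsequent calls
```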
1008
1009<div class="method">
1010 <code class="details" id="delete">delete(name, x__xgafv=None)</code>
1011 <pre>Ends a session, releasing server resources associated with it.
1012
1013Args:
1014 name: string, Required. The name of the session to delete. (required)
1015 x__xgafv: string, V1 error format.
1016 Allowed values
1017 1 - v1 error format
1018 2 - v2 error format
1019
1020Returns:
1021 An object of the form:
1022
1023 { # A generic empty message that you can re-use to avoid defining duplicated
1024 # empty messages in your APIs. A typical example is to use it as the request
1025 # or the response type of an API method. For instance:
1026 #
1027 # service Foo {
1028 # rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
1029 # }
1030 #
1031    # The JSON representation for `Empty` is an empty JSON object `{}`.
1032 }</pre>
1033</div>
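As noted under `create` above, Cloud Spanner may delete sessions idle for more than an hour, after which requests to them return `NOT_FOUND`; a trivial periodic query keeps a session alive. A minimal keep-alive loop is sketched below, where `execute_sql(session, body)` is a hypothetical wrapper around the `executeSql` call:

```python
import threading


def keep_alive(execute_sql, session_name, interval_seconds=300.0, stop_event=None):
    """Periodically issue `SELECT 1` on a session so Cloud Spanner does
    not delete it as idle. Runs until `stop_event` is set; `execute_sql`
    is a stand-in for the sessions.executeSql request."""
    stop_event = stop_event or threading.Event()
    # Event.wait returns False on timeout (keep going) and True once the
    # event is set (stop), so each iteration sleeps then pings the session.
    while not stop_event.wait(interval_seconds):
        execute_sql(session_name, {"sql": "SELECT 1"})
    return stop_event
```

In practice this would run on a background thread, with the event set when the session is deleted or the application shuts down.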
1034
1035<div class="method">
1036 <code class="details" id="executeSql">executeSql(session, body, x__xgafv=None)</code>
1037 <pre>Executes an SQL query, returning all rows in a single reply. This
1038method cannot be used to return a result set larger than 10 MiB;
1039if the query yields more data than that, the query fails with
1040a `FAILED_PRECONDITION` error.
1041
1042Queries inside read-write transactions might return `ABORTED`. If
1043this occurs, the application should restart the transaction from
1044the beginning. See Transaction for more details.
1045
1046Larger result sets can be fetched in streaming fashion by calling
1047ExecuteStreamingSql instead.
1048
1049Args:
1050 session: string, Required. The session in which the SQL query should be performed. (required)
1051 body: object, The request body. (required)
1052 The object takes the form of:
1053
1054{ # The request for ExecuteSql and
1055 # ExecuteStreamingSql.
1056 "transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
1057 # temporary read-only transaction with strong concurrency.
1058 # Read or
1059 # ExecuteSql call runs.
1060 #
1061 # See TransactionOptions for more information about transactions.
1062 "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
1063 # it. The transaction ID of the new transaction is returned in
1064 # ResultSetMetadata.transaction, which is a Transaction.
1065 #
1066 #
1067 # Each session can have at most one active transaction at a time. After the
1068 # active transaction is completed, the session can immediately be
1069 # re-used for the next transaction. It is not necessary to create a
1070 # new session for each transaction.
1071 #
1072 # # Transaction Modes
1073 #
1074 # Cloud Spanner supports two transaction modes:
1075 #
1076 # 1. Locking read-write. This type of transaction is the only way
1077 # to write data into Cloud Spanner. These transactions rely on
1078 # pessimistic locking and, if necessary, two-phase commit.
1079 # Locking read-write transactions may abort, requiring the
1080 # application to retry.
1081 #
1082 # 2. Snapshot read-only. This transaction type provides guaranteed
1083 # consistency across several reads, but does not allow
1084 # writes. Snapshot read-only transactions can be configured to
1085 # read at timestamps in the past. Snapshot read-only
1086 # transactions do not need to be committed.
1087 #
1088 # For transactions that only read, snapshot read-only transactions
1089 # provide simpler semantics and are almost always faster. In
1090 # particular, read-only transactions do not take locks, so they do
1091 # not conflict with read-write transactions. As a consequence of not
1092 # taking locks, they also do not abort, so retry loops are not needed.
1093 #
1094 # Transactions may only read/write data in a single database. They
1095 # may, however, read/write data in different tables within that
1096 # database.
1097 #
1098 # ## Locking Read-Write Transactions
1099 #
1100 # Locking transactions may be used to atomically read-modify-write
1101 # data anywhere in a database. This type of transaction is externally
1102 # consistent.
1103 #
1104 # Clients should attempt to minimize the amount of time a transaction
1105 # is active. Faster transactions commit with higher probability
1106 # and cause less contention. Cloud Spanner attempts to keep read locks
1107 # active as long as the transaction continues to do reads, and the
1108 # transaction has not been terminated by
1109 # Commit or
1110 # Rollback. Long periods of
1111 # inactivity at the client may cause Cloud Spanner to release a
1112 # transaction's locks and abort it.
1113 #
1114 # Reads performed within a transaction acquire locks on the data
1115 # being read. Writes can only be done at commit time, after all reads
1116 # have been completed.
1117 # Conceptually, a read-write transaction consists of zero or more
1118 # reads or SQL queries followed by
1119 # Commit. At any time before
1120 # Commit, the client can send a
1121 # Rollback request to abort the
1122 # transaction.
1123 #
1124 # ### Semantics
1125 #
1126 # Cloud Spanner can commit the transaction if all read locks it acquired
1127 # are still valid at commit time, and it is able to acquire write
1128 # locks for all writes. Cloud Spanner can abort the transaction for any
1129 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
1130 # that the transaction has not modified any user data in Cloud Spanner.
1131 #
1132 # Unless the transaction commits, Cloud Spanner makes no guarantees about
1133 # how long the transaction's locks were held for. It is an error to
1134 # use Cloud Spanner locks for any sort of mutual exclusion other than
1135 # between Cloud Spanner transactions themselves.
1136 #
1137 # ### Retrying Aborted Transactions
1138 #
1139 # When a transaction aborts, the application can choose to retry the
1140 # whole transaction again. To maximize the chances of successfully
1141 # committing the retry, the client should execute the retry in the
1142 # same session as the original attempt. The original session's lock
1143 # priority increases with each consecutive abort, meaning that each
1144 # attempt has a slightly better chance of success than the previous.
1145 #
1146 # Under some circumstances (e.g., many transactions attempting to
1147 # modify the same row(s)), a transaction can abort many times in a
1148 # short period before successfully committing. Thus, it is not a good
1149 # idea to cap the number of retries a transaction can attempt;
1150 # instead, it is better to limit the total amount of wall time spent
1151 # retrying.
1152 #
1153 # ### Idle Transactions
1154 #
1155 # A transaction is considered idle if it has no outstanding reads or
1156 # SQL queries and has not started a read or SQL query within the last 10
1157 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
1158 # don't hold on to locks indefinitely. In that case, the commit will
1159 # fail with error `ABORTED`.
1160 #
1161 # If this behavior is undesirable, periodically executing a simple
1162 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
1163 # transaction from becoming idle.
1164 #
1165 # ## Snapshot Read-Only Transactions
1166 #
1167          # Snapshot read-only transactions provide a simpler method than
1168 # locking read-write transactions for doing several consistent
1169 # reads. However, this type of transaction does not support writes.
1170 #
1171 # Snapshot transactions do not take locks. Instead, they work by
1172 # choosing a Cloud Spanner timestamp, then executing all reads at that
1173 # timestamp. Since they do not acquire locks, they do not block
1174 # concurrent read-write transactions.
1175 #
1176 # Unlike locking read-write transactions, snapshot read-only
1177 # transactions never abort. They can fail if the chosen read
1178 # timestamp is garbage collected; however, the default garbage
1179 # collection policy is generous enough that most applications do not
1180 # need to worry about this in practice.
1181 #
1182 # Snapshot read-only transactions do not need to call
1183 # Commit or
1184 # Rollback (and in fact are not
1185 # permitted to do so).
1186 #
1187 # To execute a snapshot transaction, the client specifies a timestamp
1188 # bound, which tells Cloud Spanner how to choose a read timestamp.
1189 #
1190 # The types of timestamp bound are:
1191 #
1192 # - Strong (the default).
1193 # - Bounded staleness.
1194 # - Exact staleness.
1195 #
1196 # If the Cloud Spanner database to be read is geographically distributed,
1197 # stale read-only transactions can execute more quickly than strong
1198          # or read-write transactions, because they are able to execute far
1199 # from the leader replica.
1200 #
1201 # Each type of timestamp bound is discussed in detail below.
1202 #
1203 # ### Strong
1204 #
1205 # Strong reads are guaranteed to see the effects of all transactions
1206 # that have committed before the start of the read. Furthermore, all
1207 # rows yielded by a single read are consistent with each other -- if
1208 # any part of the read observes a transaction, all parts of the read
1209 # see the transaction.
1210 #
1211 # Strong reads are not repeatable: two consecutive strong read-only
1212 # transactions might return inconsistent results if there are
1213 # concurrent writes. If consistency across reads is required, the
1214 # reads should be executed within a transaction or at an exact read
1215 # timestamp.
1216 #
1217 # See TransactionOptions.ReadOnly.strong.
1218 #
1219 # ### Exact Staleness
1220 #
1221 # These timestamp bounds execute reads at a user-specified
1222 # timestamp. Reads at a timestamp are guaranteed to see a consistent
1223 # prefix of the global transaction history: they observe
1224 # modifications done by all transactions with a commit timestamp <=
1225 # the read timestamp, and observe none of the modifications done by
1226 # transactions with a larger commit timestamp. They will block until
1227 # all conflicting transactions that may be assigned commit timestamps
1228 # <= the read timestamp have finished.
1229 #
1230 # The timestamp can either be expressed as an absolute Cloud Spanner commit
1231 # timestamp or a staleness relative to the current time.
1232 #
1233 # These modes do not require a "negotiation phase" to pick a
1234 # timestamp. As a result, they execute slightly faster than the
1235 # equivalent boundedly stale concurrency modes. On the other hand,
1236 # boundedly stale reads usually return fresher results.
1237 #
1238 # See TransactionOptions.ReadOnly.read_timestamp and
1239 # TransactionOptions.ReadOnly.exact_staleness.
1240 #
1241 # ### Bounded Staleness
1242 #
1243 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
1244 # subject to a user-provided staleness bound. Cloud Spanner chooses the
1245 # newest timestamp within the staleness bound that allows execution
1246 # of the reads at the closest available replica without blocking.
1247 #
1248 # All rows yielded are consistent with each other -- if any part of
1249 # the read observes a transaction, all parts of the read see the
1250 # transaction. Boundedly stale reads are not repeatable: two stale
1251 # reads, even if they use the same staleness bound, can execute at
1252 # different timestamps and thus return inconsistent results.
1253 #
1254 # Boundedly stale reads execute in two phases: the first phase
1255 # negotiates a timestamp among all replicas needed to serve the
1256 # read. In the second phase, reads are executed at the negotiated
1257 # timestamp.
1258 #
1259          # As a result of the two-phase execution, bounded staleness reads are
1260 # usually a little slower than comparable exact staleness
1261 # reads. However, they are typically able to return fresher
1262 # results, and are more likely to execute at the closest replica.
1263 #
1264 # Because the timestamp negotiation requires up-front knowledge of
1265 # which rows will be read, it can only be used with single-use
1266 # read-only transactions.
1267 #
1268 # See TransactionOptions.ReadOnly.max_staleness and
1269 # TransactionOptions.ReadOnly.min_read_timestamp.
1270 #
1271 # ### Old Read Timestamps and Garbage Collection
1272 #
1273 # Cloud Spanner continuously garbage collects deleted and overwritten data
1274 # in the background to reclaim storage space. This process is known
1275 # as "version GC". By default, version GC reclaims versions after they
1276 # are one hour old. Because of this, Cloud Spanner cannot perform reads
1277 # at read timestamps more than one hour in the past. This
1278 # restriction also applies to in-progress reads and/or SQL queries whose
1279          # timestamps become too old while executing. Reads and SQL queries with
1280 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
1281 "readWrite": { # Options for read-write transactions. # Transaction may write.
1282 #
1283 # Authorization to begin a read-write transaction requires
1284 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
1285 # on the `session` resource.
1286 },
1287 "readOnly": { # Options for read-only transactions. # Transaction will not write.
1288 #
1289 # Authorization to begin a read-only transaction requires
1290 # `spanner.databases.beginReadOnlyTransaction` permission
1291 # on the `session` resource.
1292 "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
1293 #
1294 # This is useful for requesting fresher data than some previous
1295 # read, or data that is fresh enough to observe the effects of some
1296 # previously committed transaction whose timestamp is known.
1297 #
1298 # Note that this option can only be used in single-use transactions.
1299 "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
1300 # reads at a specific timestamp are repeatable; the same read at
1301 # the same timestamp always returns the same data. If the
1302 # timestamp is in the future, the read will block until the
1303 # specified timestamp, modulo the read's deadline.
1304 #
1305           # Useful for large-scale consistent reads such as mapreduces, or
1306 # for coordinating many reads against a consistent snapshot of the
1307 # data.
1308 "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
1309 # seconds. Guarantees that all writes that have committed more
1310 # than the specified number of seconds ago are visible. Because
1311 # Cloud Spanner chooses the exact timestamp, this mode works even if
1312 # the client's local clock is substantially skewed from Cloud Spanner
1313 # commit timestamps.
1314 #
1315 # Useful for reading the freshest data available at a nearby
1316 # replica, while bounding the possible staleness if the local
1317 # replica has fallen behind.
1318 #
1319 # Note that this option can only be used in single-use
1320 # transactions.
1321 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
1322 # old. The timestamp is chosen soon after the read is started.
1323 #
1324 # Guarantees that all writes that have committed more than the
1325 # specified number of seconds ago are visible. Because Cloud Spanner
1326 # chooses the exact timestamp, this mode works even if the client's
1327 # local clock is substantially skewed from Cloud Spanner commit
1328 # timestamps.
1329 #
1330 # Useful for reading at nearby replicas without the distributed
1331 # timestamp negotiation overhead of `max_staleness`.
1332 "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
1333 # the Transaction message that describes the transaction.
1334 "strong": True or False, # Read at a timestamp where all previously committed transactions
1335 # are visible.
1336 },
1337 },
1338 "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
1339 # This is the most efficient way to execute a transaction that
1340 # consists of a single SQL query.
1341 #
1342 #
1343 # Each session can have at most one active transaction at a time. After the
1344 # active transaction is completed, the session can immediately be
1345 # re-used for the next transaction. It is not necessary to create a
1346 # new session for each transaction.
1347 #
1348 # # Transaction Modes
1349 #
1350 # Cloud Spanner supports two transaction modes:
1351 #
1352 # 1. Locking read-write. This type of transaction is the only way
1353 # to write data into Cloud Spanner. These transactions rely on
1354 # pessimistic locking and, if necessary, two-phase commit.
1355 # Locking read-write transactions may abort, requiring the
1356 # application to retry.
1357 #
1358 # 2. Snapshot read-only. This transaction type provides guaranteed
1359 # consistency across several reads, but does not allow
1360 # writes. Snapshot read-only transactions can be configured to
1361 # read at timestamps in the past. Snapshot read-only
1362 # transactions do not need to be committed.
1363 #
1364 # For transactions that only read, snapshot read-only transactions
1365 # provide simpler semantics and are almost always faster. In
1366 # particular, read-only transactions do not take locks, so they do
1367 # not conflict with read-write transactions. As a consequence of not
1368 # taking locks, they also do not abort, so retry loops are not needed.
1369 #
1370 # Transactions may only read/write data in a single database. They
1371 # may, however, read/write data in different tables within that
1372 # database.
1373 #
1374 # ## Locking Read-Write Transactions
1375 #
1376 # Locking transactions may be used to atomically read-modify-write
1377 # data anywhere in a database. This type of transaction is externally
1378 # consistent.
1379 #
1380 # Clients should attempt to minimize the amount of time a transaction
1381 # is active. Faster transactions commit with higher probability
1382 # and cause less contention. Cloud Spanner attempts to keep read locks
1383 # active as long as the transaction continues to do reads, and the
1384 # transaction has not been terminated by
1385 # Commit or
1386 # Rollback. Long periods of
1387 # inactivity at the client may cause Cloud Spanner to release a
1388 # transaction's locks and abort it.
1389 #
1390 # Reads performed within a transaction acquire locks on the data
1391 # being read. Writes can only be done at commit time, after all reads
1392 # have been completed.
1393 # Conceptually, a read-write transaction consists of zero or more
1394 # reads or SQL queries followed by
1395 # Commit. At any time before
1396 # Commit, the client can send a
1397 # Rollback request to abort the
1398 # transaction.
1399 #
1400 # ### Semantics
1401 #
1402 # Cloud Spanner can commit the transaction if all read locks it acquired
1403 # are still valid at commit time, and it is able to acquire write
1404 # locks for all writes. Cloud Spanner can abort the transaction for any
1405 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
1406 # that the transaction has not modified any user data in Cloud Spanner.
1407 #
1408 # Unless the transaction commits, Cloud Spanner makes no guarantees about
1409 # how long the transaction's locks were held for. It is an error to
1410 # use Cloud Spanner locks for any sort of mutual exclusion other than
1411 # between Cloud Spanner transactions themselves.
1412 #
1413 # ### Retrying Aborted Transactions
1414 #
1415 # When a transaction aborts, the application can choose to retry the
1416 # whole transaction again. To maximize the chances of successfully
1417 # committing the retry, the client should execute the retry in the
1418 # same session as the original attempt. The original session's lock
1419 # priority increases with each consecutive abort, meaning that each
1420 # attempt has a slightly better chance of success than the previous.
1421 #
1422 # Under some circumstances (e.g., many transactions attempting to
1423 # modify the same row(s)), a transaction can abort many times in a
1424 # short period before successfully committing. Thus, it is not a good
1425 # idea to cap the number of retries a transaction can attempt;
1426 # instead, it is better to limit the total amount of wall time spent
1427 # retrying.
1428 #
1429 # ### Idle Transactions
1430 #
1431 # A transaction is considered idle if it has no outstanding reads or
1432 # SQL queries and has not started a read or SQL query within the last 10
1433 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
1434 # don't hold on to locks indefinitely. In that case, the commit will
1435 # fail with error `ABORTED`.
1436 #
1437 # If this behavior is undesirable, periodically executing a simple
1438 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
1439 # transaction from becoming idle.
1440 #
1441 # ## Snapshot Read-Only Transactions
1442 #
1443 # Snapshot read-only transactions provide a simpler method than
1444 # locking read-write transactions for doing several consistent
1445 # reads. However, this type of transaction does not support writes.
1446 #
1447 # Snapshot transactions do not take locks. Instead, they work by
1448 # choosing a Cloud Spanner timestamp, then executing all reads at that
1449 # timestamp. Since they do not acquire locks, they do not block
1450 # concurrent read-write transactions.
1451 #
1452 # Unlike locking read-write transactions, snapshot read-only
1453 # transactions never abort. They can fail if the chosen read
1454 # timestamp is garbage collected; however, the default garbage
1455 # collection policy is generous enough that most applications do not
1456 # need to worry about this in practice.
1457 #
1458 # Snapshot read-only transactions do not need to call
1459 # Commit or
1460 # Rollback (and in fact are not
1461 # permitted to do so).
1462 #
1463 # To execute a snapshot transaction, the client specifies a timestamp
1464 # bound, which tells Cloud Spanner how to choose a read timestamp.
1465 #
1466 # The types of timestamp bound are:
1467 #
1468 # - Strong (the default).
1469 # - Bounded staleness.
1470 # - Exact staleness.
1471 #
1472 # If the Cloud Spanner database to be read is geographically distributed,
1473 # stale read-only transactions can execute more quickly than strong
1474 # or read-write transactions, because they are able to execute far
1475 # from the leader replica.
1476 #
1477 # Each type of timestamp bound is discussed in detail below.
1478 #
1479 # ### Strong
1480 #
1481 # Strong reads are guaranteed to see the effects of all transactions
1482 # that have committed before the start of the read. Furthermore, all
1483 # rows yielded by a single read are consistent with each other -- if
1484 # any part of the read observes a transaction, all parts of the read
1485 # see the transaction.
1486 #
1487 # Strong reads are not repeatable: two consecutive strong read-only
1488 # transactions might return inconsistent results if there are
1489 # concurrent writes. If consistency across reads is required, the
1490 # reads should be executed within a transaction or at an exact read
1491 # timestamp.
1492 #
1493 # See TransactionOptions.ReadOnly.strong.
1494 #
1495 # ### Exact Staleness
1496 #
1497 # These timestamp bounds execute reads at a user-specified
1498 # timestamp. Reads at a timestamp are guaranteed to see a consistent
1499 # prefix of the global transaction history: they observe
1500 # modifications done by all transactions with a commit timestamp <=
1501 # the read timestamp, and observe none of the modifications done by
1502 # transactions with a larger commit timestamp. They will block until
1503 # all conflicting transactions that may be assigned commit timestamps
1504 # <= the read timestamp have finished.
1505 #
1506 # The timestamp can either be expressed as an absolute Cloud Spanner commit
1507 # timestamp or a staleness relative to the current time.
1508 #
1509 # These modes do not require a "negotiation phase" to pick a
1510 # timestamp. As a result, they execute slightly faster than the
1511 # equivalent boundedly stale concurrency modes. On the other hand,
1512 # boundedly stale reads usually return fresher results.
1513 #
1514 # See TransactionOptions.ReadOnly.read_timestamp and
1515 # TransactionOptions.ReadOnly.exact_staleness.
1516 #
1517 # ### Bounded Staleness
1518 #
1519 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
1520 # subject to a user-provided staleness bound. Cloud Spanner chooses the
1521 # newest timestamp within the staleness bound that allows execution
1522 # of the reads at the closest available replica without blocking.
1523 #
1524 # All rows yielded are consistent with each other -- if any part of
1525 # the read observes a transaction, all parts of the read see the
1526 # transaction. Boundedly stale reads are not repeatable: two stale
1527 # reads, even if they use the same staleness bound, can execute at
1528 # different timestamps and thus return inconsistent results.
1529 #
1530 # Boundedly stale reads execute in two phases: the first phase
1531 # negotiates a timestamp among all replicas needed to serve the
1532 # read. In the second phase, reads are executed at the negotiated
1533 # timestamp.
1534 #
1535 # As a result of the two-phase execution, bounded staleness reads are
1536 # usually a little slower than comparable exact staleness
1537 # reads. However, they are typically able to return fresher
1538 # results, and are more likely to execute at the closest replica.
1539 #
1540 # Because the timestamp negotiation requires up-front knowledge of
1541 # which rows will be read, it can only be used with single-use
1542 # read-only transactions.
1543 #
1544 # See TransactionOptions.ReadOnly.max_staleness and
1545 # TransactionOptions.ReadOnly.min_read_timestamp.
1546 #
1547 # ### Old Read Timestamps and Garbage Collection
1548 #
1549 # Cloud Spanner continuously garbage collects deleted and overwritten data
1550 # in the background to reclaim storage space. This process is known
1551 # as "version GC". By default, version GC reclaims versions after they
1552 # are one hour old. Because of this, Cloud Spanner cannot perform reads
1553 # at read timestamps more than one hour in the past. This
1554 # restriction also applies to in-progress reads and/or SQL queries whose
1555 # timestamps become too old while executing. Reads and SQL queries with
1556 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
1557 "readWrite": { # Options for read-write transactions. # Transaction may write.
1558 #
1559 # Authorization to begin a read-write transaction requires
1560 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
1561 # on the `session` resource.
1562 },
1563 "readOnly": { # Options for read-only transactions. # Transaction will not write.
1564 #
1565 # Authorization to begin a read-only transaction requires
1566 # `spanner.databases.beginReadOnlyTransaction` permission
1567 # on the `session` resource.
1568 "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
1569 #
1570 # This is useful for requesting fresher data than some previous
1571 # read, or data that is fresh enough to observe the effects of some
1572 # previously committed transaction whose timestamp is known.
1573 #
1574 # Note that this option can only be used in single-use transactions.
1575 "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
1576 # reads at a specific timestamp are repeatable; the same read at
1577 # the same timestamp always returns the same data. If the
1578 # timestamp is in the future, the read will block until the
1579 # specified timestamp, modulo the read's deadline.
1580 #
1581 # Useful for large scale consistent reads such as mapreduces, or
1582 # for coordinating many reads against a consistent snapshot of the
1583 # data.
1584 "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
1585 # seconds. Guarantees that all writes that have committed more
1586 # than the specified number of seconds ago are visible. Because
1587 # Cloud Spanner chooses the exact timestamp, this mode works even if
1588 # the client's local clock is substantially skewed from Cloud Spanner
1589 # commit timestamps.
1590 #
1591 # Useful for reading the freshest data available at a nearby
1592 # replica, while bounding the possible staleness if the local
1593 # replica has fallen behind.
1594 #
1595 # Note that this option can only be used in single-use
1596 # transactions.
1597 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
1598 # old. The timestamp is chosen soon after the read is started.
1599 #
1600 # Guarantees that all writes that have committed more than the
1601 # specified number of seconds ago are visible. Because Cloud Spanner
1602 # chooses the exact timestamp, this mode works even if the client's
1603 # local clock is substantially skewed from Cloud Spanner commit
1604 # timestamps.
1605 #
1606 # Useful for reading at nearby replicas without the distributed
1607 # timestamp negotiation overhead of `max_staleness`.
1608 "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
1609 # the Transaction message that describes the transaction.
1610 "strong": True or False, # Read at a timestamp where all previously committed transactions
1611 # are visible.
1612 },
1613 },
1614 "id": "A String", # Execute the read or SQL query in a previously-started transaction.
1615 },
1616 "resumeToken": "A String", # If this request is resuming a previously interrupted SQL query
1617 # execution, `resume_token` should be copied from the last
1618 # PartialResultSet yielded before the interruption. Doing this
1619 # enables the new SQL query execution to resume where the last one left
1620 # off. The rest of the request parameters must exactly match the
1621 # request that yielded this token.
1622 "paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
1623 # from a JSON value. For example, values of type `BYTES` and values
1624 # of type `STRING` both appear in params as JSON strings.
1625 #
1626 # In these cases, `param_types` can be used to specify the exact
1627 # SQL type for some or all of the SQL query parameters. See the
1628 # definition of Type for more information
1629 # about SQL types.
1630 "a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
1631 # table cell or returned from an SQL query.
1632 "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
1633 # provides type information for the struct's fields.
1634 "code": "A String", # Required. The TypeCode for this type.
1635 "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
1636 # is the type of the array elements.
1637 },
1638 },
1639 "queryMode": "A String", # Used to control the amount of debugging information returned in
1640 # ResultSetStats.
1641 "sql": "A String", # Required. The SQL query string.
1642 "params": { # The SQL query string can contain parameter placeholders. A parameter
1643 # placeholder consists of `'@'` followed by the parameter
1644 # name. Parameter names consist of any combination of letters,
1645 # numbers, and underscores.
1646 #
1647 # Parameters can appear anywhere that a literal value is expected. The same
1648 # parameter name can be used more than once, for example:
1649 # `"WHERE id > @msg_id AND id < @msg_id + 100"`
1650 #
1651 # It is an error to execute an SQL query with unbound parameters.
1652 #
1653 # Parameter values are specified using `params`, which is a JSON
1654 # object whose keys are parameter names, and whose values are the
1655 # corresponding parameter values.
1656 "a_key": "", # Properties of the object.
1657 },
1658 }
1659
1660 x__xgafv: string, V1 error format.
1661 Allowed values
1662 1 - v1 error format
1663 2 - v2 error format
1664
1665Returns:
1666 An object of the form:
1667
1668 { # Results from Read or
1669 # ExecuteSql.
1670 "rows": [ # Each element in `rows` is a row whose format is defined by
1671 # metadata.row_type. The ith element
1672 # in each row matches the ith field in
1673 # metadata.row_type. Elements are
1674 # encoded based on type as described
1675 # here.
1676 [
1677 "",
1678 ],
1679 ],
1680 "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the query that produced this
1681 # result set. These can be requested by setting
1682 # ExecuteSqlRequest.query_mode.
1683 "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
1684 "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
1685 # with the plan root. Each PlanNode's `id` corresponds to its index in
1686 # `plan_nodes`.
1687 { # Node information for nodes appearing in a QueryPlan.plan_nodes.
1688 "index": 42, # The `PlanNode`'s index in node list.
1689 "kind": "A String", # Used to determine the type of node. May be needed for visualizing
1690 # different kinds of nodes differently. For example, if the node is a
1691 # SCALAR node, it will have a condensed representation
1692 # which can be used to directly embed a description of the node in its
1693 # parent.
1694 "displayName": "A String", # The display name for the node.
1695 "executionStats": { # The execution statistics associated with the node, contained in a group of
1696 # key-value pairs. Only present if the plan was returned as a result of a
1697 # profile query. For example, number of executions, number of rows/time per
1698 # execution etc.
1699 "a_key": "", # Properties of the object.
1700 },
1701 "childLinks": [ # List of child node `index`es and their relationship to this parent.
1702 { # Metadata associated with a parent-child relationship appearing in a
1703 # PlanNode.
1704 "variable": "A String", # Only present if the child node is SCALAR and corresponds
1705 # to an output variable of the parent node. The field carries the name of
1706 # the output variable.
1707 # For example, a `TableScan` operator that reads rows from a table will
1708 # have child links to the `SCALAR` nodes representing the output variables
1709 # created for each column that is read by the operator. The corresponding
1710 # `variable` fields will be set to the variable names assigned to the
1711 # columns.
1712 "childIndex": 42, # The node to which the link points.
1713 "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
1714 # distinguish between the build child and the probe child, or in the case
1715 # of the child being an output variable, to represent the tag associated
1716 # with the output variable.
1717 },
1718 ],
1719 "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
1720 # `SCALAR` PlanNode(s).
1721 "subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases
1722 # where the `description` string of this node references a `SCALAR`
1723 # subquery contained in the expression subtree rooted at this node. The
1724 # referenced `SCALAR` subquery may not necessarily be a direct child of
1725 # this node.
1726 "a_key": 42,
1727 },
1728 "description": "A String", # A string representation of the expression subtree rooted at this node.
1729 },
1730 "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
1731 # For example, a Parameter Reference node could have the following
1732 # information in its metadata:
1733 #
1734 # {
1735 # "parameter_reference": "param1",
1736 # "parameter_type": "array"
1737 # }
1738 "a_key": "", # Properties of the object.
1739 },
1740 },
1741 ],
1742 },
1743 "queryStats": { # Aggregated statistics from the execution of the query. Only present when
1744 # the query is profiled. For example, a query could return the statistics as
1745 # follows:
1746 #
1747 # {
1748 # "rows_returned": "3",
1749 # "elapsed_time": "1.22 secs",
1750 # "cpu_time": "1.19 secs"
1751 # }
1752 "a_key": "", # Properties of the object.
1753 },
1754 },
1755 "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
1756 "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
1757 # set. For example, a SQL query like `"SELECT UserId, UserName FROM
1758 # Users"` could return a `row_type` value like:
1759 #
1760 # "fields": [
1761 # { "name": "UserId", "type": { "code": "INT64" } },
1762 # { "name": "UserName", "type": { "code": "STRING" } }
1763 # ]
1764 "fields": [ # The list of fields that make up this struct. Order is
1765 # significant, because values of this struct type are represented as
1766 # lists, where the order of field values matches the order of
1767 # fields in the StructType. In turn, the order of fields
1768 # matches the order of columns in a read request, or the order of
1769 # fields in the `SELECT` clause of a query.
1770 { # Message representing a single field of a struct.
1771 "type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field.
1772 # table cell or returned from an SQL query.
1773 "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
1774 # provides type information for the struct's fields.
1775 "code": "A String", # Required. The TypeCode for this type.
1776 "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
1777 # is the type of the array elements.
1778 },
1779 "name": "A String", # The name of the field. For reads, this is the column name. For
1780 # SQL queries, it is the column alias (e.g., `"Word"` in the
1781 # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
1782 # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
1783 # columns might have an empty name (e.g., `"SELECT
1784 # UPPER(ColName)"`). Note that a query result can contain
1785 # multiple fields with the same name.
1786 },
1787 ],
1788 },
1789 "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
1790 # information about the new transaction is yielded here.
1791 "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
1792 # for the transaction. Not returned by default: see
1793 # TransactionOptions.ReadOnly.return_read_timestamp.
1794 "id": "A String", # `id` may be used to identify the transaction in subsequent
1795 # Read,
1796 # ExecuteSql,
1797 # Commit, or
1798 # Rollback calls.
1799 #
1800 # Single-use read-only transactions do not have IDs, because
1801 # single-use transactions do not support multiple requests.
1802 },
1803 },
1804 }</pre>
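<p>The "Retrying Aborted Transactions" guidance in the request documentation above can be sketched with the discovery-based client. This is a minimal, hypothetical example: <code>sessions</code> is assumed to be the <code>projects().instances().databases().sessions()</code> collection from <code>googleapiclient.discovery.build('spanner', 'v1')</code>, the session name and mutations are placeholders, and the <code>ABORTED</code> check is a loose string match (a real client should inspect the error status of the raised <code>HttpError</code>).</p>

```python
import time

def commit_with_retry(sessions, session, mutations, deadline_seconds=30):
    """Retry the whole transaction on ABORTED, bounded by wall time.

    `sessions` is assumed to be the discovery client's
    projects().instances().databases().sessions() collection.
    """
    start = time.time()
    while True:
        # Retrying in the same session raises the lock priority of each
        # successive attempt, as the documentation above describes.
        txn = sessions.beginTransaction(
            session=session, body={'options': {'readWrite': {}}}).execute()
        try:
            return sessions.commit(
                session=session,
                body={'transactionId': txn['id'],
                      'mutations': mutations}).execute()
        except Exception as err:
            # Cap the total wall time spent retrying, not the retry count.
            if 'ABORTED' not in str(err) or time.time() - start > deadline_seconds:
                raise
```

<p>Note the loop bounds total elapsed time rather than the number of attempts, matching the advice that heavily contended transactions may abort many times in a short period before committing.</p>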
1805</div>
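<p>The <code>sql</code>, <code>params</code>, and <code>paramTypes</code> fields documented above can be combined as follows. This is an illustrative sketch only: the table, column, and parameter names are hypothetical, and the <code>INT64</code> value is encoded as a JSON string because the REST encoding cannot distinguish it from a <code>STRING</code> without <code>paramTypes</code>.</p>

```python
# Hypothetical ExecuteSqlRequest body with one typed query parameter.
body = {
    'sql': 'SELECT UserId, UserName FROM Users WHERE UserId > @min_id',
    # `params` maps parameter names to JSON values; INT64 values travel
    # as JSON strings, so `paramTypes` pins down the SQL type.
    'params': {'min_id': '1000'},
    'paramTypes': {'min_id': {'code': 'INT64'}},
    # Single-use strong read-only transaction (also the default when the
    # `transaction` field is omitted entirely).
    'transaction': {'singleUse': {'readOnly': {'strong': True}}},
}
```

<p>The body would then be passed as the <code>body=</code> argument of an <code>executeSql</code> call on the sessions collection.</p>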
1806
1807<div class="method">
1808 <code class="details" id="executeStreamingSql">executeStreamingSql(session, body, x__xgafv=None)</code>
1809 <pre>Like ExecuteSql, except returns the result
1810set as a stream. Unlike ExecuteSql, there
1811is no limit on the size of the returned result set. However, no
1812individual row in the result set can exceed 100 MiB, and no
1813column value can exceed 10 MiB.
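A resumable consumption loop using `resume_token` (see the `resumeToken`
field below) might be sketched as follows. This is a hypothetical helper:
`execute_streaming` stands in for whatever callable issues the
executeStreamingSql request and yields PartialResultSet messages, and the
only requirement the API imposes is that the retried request match the
original exactly apart from `resumeToken`.

```python
def stream_with_resume(execute_streaming, body):
    """Re-issue an interrupted streaming SQL query from the last token.

    `execute_streaming` is assumed to yield PartialResultSet dicts and to
    raise IOError (or a subclass) on a network interruption.
    """
    resume_token = None
    while True:
        request = dict(body)
        if resume_token:
            # All other request fields must match the original request.
            request['resumeToken'] = resume_token
        try:
            for partial in execute_streaming(request):
                # Remember the most recent token so a retry resumes here.
                resume_token = partial.get('resumeToken') or resume_token
                yield partial
            return
        except IOError:
            if resume_token is None:
                raise  # nothing yielded yet; nothing to resume from
```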
1814
1815Args:
1816 session: string, Required. The session in which the SQL query should be performed. (required)
1817 body: object, The request body. (required)
1818 The object takes the form of:
1819
1820{ # The request for ExecuteSql and
1821 # ExecuteStreamingSql.
1822 "transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
1823 # temporary read-only transaction with strong concurrency.
1824 # Read or
1825 # ExecuteSql call runs.
1826 #
1827 # See TransactionOptions for more information about transactions.
1828 "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
1829 # it. The transaction ID of the new transaction is returned in
1830 # ResultSetMetadata.transaction, which is a Transaction.
1831 #
1832 #
1833 # Each session can have at most one active transaction at a time. After the
1834 # active transaction is completed, the session can immediately be
1835 # re-used for the next transaction. It is not necessary to create a
1836 # new session for each transaction.
1837 #
1838 # # Transaction Modes
1839 #
1840 # Cloud Spanner supports two transaction modes:
1841 #
1842 # 1. Locking read-write. This type of transaction is the only way
1843 # to write data into Cloud Spanner. These transactions rely on
1844 # pessimistic locking and, if necessary, two-phase commit.
1845 # Locking read-write transactions may abort, requiring the
1846 # application to retry.
1847 #
1848 # 2. Snapshot read-only. This transaction type provides guaranteed
1849 # consistency across several reads, but does not allow
1850 # writes. Snapshot read-only transactions can be configured to
1851 # read at timestamps in the past. Snapshot read-only
1852 # transactions do not need to be committed.
1853 #
1854 # For transactions that only read, snapshot read-only transactions
1855 # provide simpler semantics and are almost always faster. In
1856 # particular, read-only transactions do not take locks, so they do
1857 # not conflict with read-write transactions. As a consequence of not
1858 # taking locks, they also do not abort, so retry loops are not needed.
1859 #
1860 # Transactions may only read/write data in a single database. They
1861 # may, however, read/write data in different tables within that
1862 # database.
1863 #
1864 # ## Locking Read-Write Transactions
1865 #
1866 # Locking transactions may be used to atomically read-modify-write
1867 # data anywhere in a database. This type of transaction is externally
1868 # consistent.
1869 #
1870 # Clients should attempt to minimize the amount of time a transaction
1871 # is active. Faster transactions commit with higher probability
1872 # and cause less contention. Cloud Spanner attempts to keep read locks
1873 # active as long as the transaction continues to do reads, and the
1874 # transaction has not been terminated by
1875 # Commit or
1876 # Rollback. Long periods of
1877 # inactivity at the client may cause Cloud Spanner to release a
1878 # transaction's locks and abort it.
1879 #
1880 # Reads performed within a transaction acquire locks on the data
1881 # being read. Writes can only be done at commit time, after all reads
1882 # have been completed.
1883 # Conceptually, a read-write transaction consists of zero or more
1884 # reads or SQL queries followed by
1885 # Commit. At any time before
1886 # Commit, the client can send a
1887 # Rollback request to abort the
1888 # transaction.
1889 #
1890 # ### Semantics
1891 #
1892 # Cloud Spanner can commit the transaction if all read locks it acquired
1893 # are still valid at commit time, and it is able to acquire write
1894 # locks for all writes. Cloud Spanner can abort the transaction for any
1895 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
1896 # that the transaction has not modified any user data in Cloud Spanner.
1897 #
1898 # Unless the transaction commits, Cloud Spanner makes no guarantees about
1899 # how long the transaction's locks were held for. It is an error to
1900 # use Cloud Spanner locks for any sort of mutual exclusion other than
1901 # between Cloud Spanner transactions themselves.
1902 #
1903 # ### Retrying Aborted Transactions
1904 #
1905 # When a transaction aborts, the application can choose to retry the
1906 # whole transaction again. To maximize the chances of successfully
1907 # committing the retry, the client should execute the retry in the
1908 # same session as the original attempt. The original session's lock
1909 # priority increases with each consecutive abort, meaning that each
1910 # attempt has a slightly better chance of success than the previous.
1911 #
1912 # Under some circumstances (e.g., many transactions attempting to
1913 # modify the same row(s)), a transaction can abort many times in a
1914 # short period before successfully committing. Thus, it is not a good
1915 # idea to cap the number of retries a transaction can attempt;
1916 # instead, it is better to limit the total amount of wall time spent
1917 # retrying.
1918 #
1919 # ### Idle Transactions
1920 #
1921 # A transaction is considered idle if it has no outstanding reads or
1922 # SQL queries and has not started a read or SQL query within the last 10
1923 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
1924 # don't hold on to locks indefinitely. In that case, the commit will
1925 # fail with error `ABORTED`.
1926 #
1927 # If this behavior is undesirable, periodically executing a simple
1928 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
1929 # transaction from becoming idle.
1930 #
1931 # ## Snapshot Read-Only Transactions
1932 #
1933 # Snapshot read-only transactions provide a simpler method than
1934 # locking read-write transactions for doing several consistent
1935 # reads. However, this type of transaction does not support writes.
1936 #
1937 # Snapshot transactions do not take locks. Instead, they work by
1938 # choosing a Cloud Spanner timestamp, then executing all reads at that
1939 # timestamp. Since they do not acquire locks, they do not block
1940 # concurrent read-write transactions.
1941 #
1942 # Unlike locking read-write transactions, snapshot read-only
1943 # transactions never abort. They can fail if the chosen read
1944 # timestamp is garbage collected; however, the default garbage
1945 # collection policy is generous enough that most applications do not
1946 # need to worry about this in practice.
1947 #
1948 # Snapshot read-only transactions do not need to call
1949 # Commit or
1950 # Rollback (and in fact are not
1951 # permitted to do so).
1952 #
1953 # To execute a snapshot transaction, the client specifies a timestamp
1954 # bound, which tells Cloud Spanner how to choose a read timestamp.
1955 #
1956 # The types of timestamp bound are:
1957 #
1958 # - Strong (the default).
1959 # - Bounded staleness.
1960 # - Exact staleness.
1961 #
1962 # If the Cloud Spanner database to be read is geographically distributed,
1963 # stale read-only transactions can execute more quickly than strong
1964 # or read-write transactions, because they are able to execute far
1965 # from the leader replica.
1966 #
1967 # Each type of timestamp bound is discussed in detail below.
1968 #
1969 # ### Strong
1970 #
1971 # Strong reads are guaranteed to see the effects of all transactions
1972 # that have committed before the start of the read. Furthermore, all
1973 # rows yielded by a single read are consistent with each other -- if
1974 # any part of the read observes a transaction, all parts of the read
1975 # see the transaction.
1976 #
1977 # Strong reads are not repeatable: two consecutive strong read-only
1978 # transactions might return inconsistent results if there are
1979 # concurrent writes. If consistency across reads is required, the
1980 # reads should be executed within a transaction or at an exact read
1981 # timestamp.
1982 #
1983 # See TransactionOptions.ReadOnly.strong.
1984 #
1985 # ### Exact Staleness
1986 #
1987 # These timestamp bounds execute reads at a user-specified
1988 # timestamp. Reads at a timestamp are guaranteed to see a consistent
1989 # prefix of the global transaction history: they observe
1990 # modifications done by all transactions with a commit timestamp <=
1991 # the read timestamp, and observe none of the modifications done by
1992 # transactions with a larger commit timestamp. They will block until
1993 # all conflicting transactions that may be assigned commit timestamps
1994 # <= the read timestamp have finished.
1995 #
1996 # The timestamp can either be expressed as an absolute Cloud Spanner commit
1997 # timestamp or a staleness relative to the current time.
1998 #
1999 # These modes do not require a "negotiation phase" to pick a
2000 # timestamp. As a result, they execute slightly faster than the
2001 # equivalent boundedly stale concurrency modes. On the other hand,
2002 # boundedly stale reads usually return fresher results.
2003 #
2004 # See TransactionOptions.ReadOnly.read_timestamp and
2005 # TransactionOptions.ReadOnly.exact_staleness.
2006 #
2007 # ### Bounded Staleness
2008 #
2009 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
2010 # subject to a user-provided staleness bound. Cloud Spanner chooses the
2011 # newest timestamp within the staleness bound that allows execution
2012 # of the reads at the closest available replica without blocking.
2013 #
2014 # All rows yielded are consistent with each other -- if any part of
2015 # the read observes a transaction, all parts of the read see the
2016 # transaction. Boundedly stale reads are not repeatable: two stale
2017 # reads, even if they use the same staleness bound, can execute at
2018 # different timestamps and thus return inconsistent results.
2019 #
2020 # Boundedly stale reads execute in two phases: the first phase
2021 # negotiates a timestamp among all replicas needed to serve the
2022 # read. In the second phase, reads are executed at the negotiated
2023 # timestamp.
2024 #
2025 # As a result of the two-phase execution, bounded staleness reads are
2026 # usually a little slower than comparable exact staleness
2027 # reads. However, they are typically able to return fresher
2028 # results, and are more likely to execute at the closest replica.
2029 #
2030 # Because the timestamp negotiation requires up-front knowledge of
2031 # which rows will be read, it can only be used with single-use
2032 # read-only transactions.
2033 #
2034 # See TransactionOptions.ReadOnly.max_staleness and
2035 # TransactionOptions.ReadOnly.min_read_timestamp.
2036 #
2037 # ### Old Read Timestamps and Garbage Collection
2038 #
2039 # Cloud Spanner continuously garbage collects deleted and overwritten data
2040 # in the background to reclaim storage space. This process is known
2041 # as "version GC". By default, version GC reclaims versions after they
2042 # are one hour old. Because of this, Cloud Spanner cannot perform reads
2043 # at read timestamps more than one hour in the past. This
2044 # restriction also applies to in-progress reads and/or SQL queries whose
2045 # timestamps become too old while executing. Reads and SQL queries with
2046 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
        "readWrite": { # Options for read-write transactions. # Transaction may write.
            #
            # Authorization to begin a read-write transaction requires
            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
            # on the `session` resource.
        },
        "readOnly": { # Options for read-only transactions. # Transaction will not write.
            #
            # Authorization to begin a read-only transaction requires
            # `spanner.databases.beginReadOnlyTransaction` permission
            # on the `session` resource.
          "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
              #
              # This is useful for requesting fresher data than some previous
              # read, or data that is fresh enough to observe the effects of some
              # previously committed transaction whose timestamp is known.
              #
              # Note that this option can only be used in single-use transactions.
          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
              # reads at a specific timestamp are repeatable; the same read at
              # the same timestamp always returns the same data. If the
              # timestamp is in the future, the read will block until the
              # specified timestamp, modulo the read's deadline.
              #
              # Useful for large scale consistent reads such as mapreduces, or
              # for coordinating many reads against a consistent snapshot of the
              # data.
          "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
              # seconds. Guarantees that all writes that have committed more
              # than the specified number of seconds ago are visible. Because
              # Cloud Spanner chooses the exact timestamp, this mode works even if
              # the client's local clock is substantially skewed from Cloud Spanner
              # commit timestamps.
              #
              # Useful for reading the freshest data available at a nearby
              # replica, while bounding the possible staleness if the local
              # replica has fallen behind.
              #
              # Note that this option can only be used in single-use
              # transactions.
          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
              # old. The timestamp is chosen soon after the read is started.
              #
              # Guarantees that all writes that have committed more than the
              # specified number of seconds ago are visible. Because Cloud Spanner
              # chooses the exact timestamp, this mode works even if the client's
              # local clock is substantially skewed from Cloud Spanner commit
              # timestamps.
              #
              # Useful for reading at nearby replicas without the distributed
              # timestamp negotiation overhead of `max_staleness`.
          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
              # the Transaction message that describes the transaction.
          "strong": True or False, # Read at a timestamp where all previously committed transactions
              # are visible.
        },
      },
      "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
          # This is the most efficient way to execute a transaction that
          # consists of a single SQL query.
          #
          #
          # Each session can have at most one active transaction at a time. After the
          # active transaction is completed, the session can immediately be
          # re-used for the next transaction. It is not necessary to create a
          # new session for each transaction.
          #
          # # Transaction Modes
          #
          # Cloud Spanner supports two transaction modes:
          #
          # 1. Locking read-write. This type of transaction is the only way
          #    to write data into Cloud Spanner. These transactions rely on
          #    pessimistic locking and, if necessary, two-phase commit.
          #    Locking read-write transactions may abort, requiring the
          #    application to retry.
          #
          # 2. Snapshot read-only. This transaction type provides guaranteed
          #    consistency across several reads, but does not allow
          #    writes. Snapshot read-only transactions can be configured to
          #    read at timestamps in the past. Snapshot read-only
          #    transactions do not need to be committed.
          #
          # For transactions that only read, snapshot read-only transactions
          # provide simpler semantics and are almost always faster. In
          # particular, read-only transactions do not take locks, so they do
          # not conflict with read-write transactions. As a consequence of not
          # taking locks, they also do not abort, so retry loops are not needed.
          #
          # Transactions may only read/write data in a single database. They
          # may, however, read/write data in different tables within that
          # database.
          #
          # ## Locking Read-Write Transactions
          #
          # Locking transactions may be used to atomically read-modify-write
          # data anywhere in a database. This type of transaction is externally
          # consistent.
          #
          # Clients should attempt to minimize the amount of time a transaction
          # is active. Faster transactions commit with higher probability
          # and cause less contention. Cloud Spanner attempts to keep read locks
          # active as long as the transaction continues to do reads, and the
          # transaction has not been terminated by
          # Commit or
          # Rollback. Long periods of
          # inactivity at the client may cause Cloud Spanner to release a
          # transaction's locks and abort it.
          #
          # Reads performed within a transaction acquire locks on the data
          # being read. Writes can only be done at commit time, after all reads
          # have been completed.
          # Conceptually, a read-write transaction consists of zero or more
          # reads or SQL queries followed by
          # Commit. At any time before
          # Commit, the client can send a
          # Rollback request to abort the
          # transaction.
          #
          # ### Semantics
          #
          # Cloud Spanner can commit the transaction if all read locks it acquired
          # are still valid at commit time, and it is able to acquire write
          # locks for all writes. Cloud Spanner can abort the transaction for any
          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
          # that the transaction has not modified any user data in Cloud Spanner.
          #
          # Unless the transaction commits, Cloud Spanner makes no guarantees about
          # how long the transaction's locks were held for. It is an error to
          # use Cloud Spanner locks for any sort of mutual exclusion other than
          # between Cloud Spanner transactions themselves.
          #
          # ### Retrying Aborted Transactions
          #
          # When a transaction aborts, the application can choose to retry the
          # whole transaction again. To maximize the chances of successfully
          # committing the retry, the client should execute the retry in the
          # same session as the original attempt. The original session's lock
          # priority increases with each consecutive abort, meaning that each
          # attempt has a slightly better chance of success than the previous.
          #
          # Under some circumstances (e.g., many transactions attempting to
          # modify the same row(s)), a transaction can abort many times in a
          # short period before successfully committing. Thus, it is not a good
          # idea to cap the number of retries a transaction can attempt;
          # instead, it is better to limit the total amount of wall time spent
          # retrying.
          #
          # ### Idle Transactions
          #
          # A transaction is considered idle if it has no outstanding reads or
          # SQL queries and has not started a read or SQL query within the last 10
          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
          # don't hold on to locks indefinitely. In that case, the commit will
          # fail with error `ABORTED`.
          #
          # If this behavior is undesirable, periodically executing a simple
          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
          # transaction from becoming idle.
          #
          # ## Snapshot Read-Only Transactions
          #
          # Snapshot read-only transactions provide a simpler method than
          # locking read-write transactions for doing several consistent
          # reads. However, this type of transaction does not support writes.
          #
          # Snapshot transactions do not take locks. Instead, they work by
          # choosing a Cloud Spanner timestamp, then executing all reads at that
          # timestamp. Since they do not acquire locks, they do not block
          # concurrent read-write transactions.
          #
          # Unlike locking read-write transactions, snapshot read-only
          # transactions never abort. They can fail if the chosen read
          # timestamp is garbage collected; however, the default garbage
          # collection policy is generous enough that most applications do not
          # need to worry about this in practice.
          #
          # Snapshot read-only transactions do not need to call
          # Commit or
          # Rollback (and in fact are not
          # permitted to do so).
          #
          # To execute a snapshot transaction, the client specifies a timestamp
          # bound, which tells Cloud Spanner how to choose a read timestamp.
          #
          # The types of timestamp bound are:
          #
          # - Strong (the default).
          # - Bounded staleness.
          # - Exact staleness.
          #
          # If the Cloud Spanner database to be read is geographically distributed,
          # stale read-only transactions can execute more quickly than strong
          # or read-write transactions, because they are able to execute far
          # from the leader replica.
          #
          # Each type of timestamp bound is discussed in detail below.
          #
          # ### Strong
          #
          # Strong reads are guaranteed to see the effects of all transactions
          # that have committed before the start of the read. Furthermore, all
          # rows yielded by a single read are consistent with each other -- if
          # any part of the read observes a transaction, all parts of the read
          # see the transaction.
          #
          # Strong reads are not repeatable: two consecutive strong read-only
          # transactions might return inconsistent results if there are
          # concurrent writes. If consistency across reads is required, the
          # reads should be executed within a transaction or at an exact read
          # timestamp.
          #
          # See TransactionOptions.ReadOnly.strong.
          #
          # ### Exact Staleness
          #
          # These timestamp bounds execute reads at a user-specified
          # timestamp. Reads at a timestamp are guaranteed to see a consistent
          # prefix of the global transaction history: they observe
          # modifications done by all transactions with a commit timestamp <=
          # the read timestamp, and observe none of the modifications done by
          # transactions with a larger commit timestamp. They will block until
          # all conflicting transactions that may be assigned commit timestamps
          # <= the read timestamp have finished.
          #
          # The timestamp can either be expressed as an absolute Cloud Spanner commit
          # timestamp or a staleness relative to the current time.
          #
          # These modes do not require a "negotiation phase" to pick a
          # timestamp. As a result, they execute slightly faster than the
          # equivalent boundedly stale concurrency modes. On the other hand,
          # boundedly stale reads usually return fresher results.
          #
          # See TransactionOptions.ReadOnly.read_timestamp and
          # TransactionOptions.ReadOnly.exact_staleness.
          #
          # ### Bounded Staleness
          #
          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
          # subject to a user-provided staleness bound. Cloud Spanner chooses the
          # newest timestamp within the staleness bound that allows execution
          # of the reads at the closest available replica without blocking.
          #
          # All rows yielded are consistent with each other -- if any part of
          # the read observes a transaction, all parts of the read see the
          # transaction. Boundedly stale reads are not repeatable: two stale
          # reads, even if they use the same staleness bound, can execute at
          # different timestamps and thus return inconsistent results.
          #
          # Boundedly stale reads execute in two phases: the first phase
          # negotiates a timestamp among all replicas needed to serve the
          # read. In the second phase, reads are executed at the negotiated
          # timestamp.
          #
          # As a result of the two phase execution, bounded staleness reads are
          # usually a little slower than comparable exact staleness
          # reads. However, they are typically able to return fresher
          # results, and are more likely to execute at the closest replica.
          #
          # Because the timestamp negotiation requires up-front knowledge of
          # which rows will be read, it can only be used with single-use
          # read-only transactions.
          #
          # See TransactionOptions.ReadOnly.max_staleness and
          # TransactionOptions.ReadOnly.min_read_timestamp.
          #
          # ### Old Read Timestamps and Garbage Collection
          #
          # Cloud Spanner continuously garbage collects deleted and overwritten data
          # in the background to reclaim storage space. This process is known
          # as "version GC". By default, version GC reclaims versions after they
          # are one hour old. Because of this, Cloud Spanner cannot perform reads
          # at read timestamps more than one hour in the past. This
          # restriction also applies to in-progress reads and/or SQL queries whose
          # timestamps become too old while executing. Reads and SQL queries with
          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
        "readWrite": { # Options for read-write transactions. # Transaction may write.
            #
            # Authorization to begin a read-write transaction requires
            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
            # on the `session` resource.
        },
        "readOnly": { # Options for read-only transactions. # Transaction will not write.
            #
            # Authorization to begin a read-only transaction requires
            # `spanner.databases.beginReadOnlyTransaction` permission
            # on the `session` resource.
          "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
              #
              # This is useful for requesting fresher data than some previous
              # read, or data that is fresh enough to observe the effects of some
              # previously committed transaction whose timestamp is known.
              #
              # Note that this option can only be used in single-use transactions.
          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
              # reads at a specific timestamp are repeatable; the same read at
              # the same timestamp always returns the same data. If the
              # timestamp is in the future, the read will block until the
              # specified timestamp, modulo the read's deadline.
              #
              # Useful for large scale consistent reads such as mapreduces, or
              # for coordinating many reads against a consistent snapshot of the
              # data.
          "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
              # seconds. Guarantees that all writes that have committed more
              # than the specified number of seconds ago are visible. Because
              # Cloud Spanner chooses the exact timestamp, this mode works even if
              # the client's local clock is substantially skewed from Cloud Spanner
              # commit timestamps.
              #
              # Useful for reading the freshest data available at a nearby
              # replica, while bounding the possible staleness if the local
              # replica has fallen behind.
              #
              # Note that this option can only be used in single-use
              # transactions.
          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
              # old. The timestamp is chosen soon after the read is started.
              #
              # Guarantees that all writes that have committed more than the
              # specified number of seconds ago are visible. Because Cloud Spanner
              # chooses the exact timestamp, this mode works even if the client's
              # local clock is substantially skewed from Cloud Spanner commit
              # timestamps.
              #
              # Useful for reading at nearby replicas without the distributed
              # timestamp negotiation overhead of `max_staleness`.
          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
              # the Transaction message that describes the transaction.
          "strong": True or False, # Read at a timestamp where all previously committed transactions
              # are visible.
        },
      },
      "id": "A String", # Execute the read or SQL query in a previously-started transaction.
    },
    "resumeToken": "A String", # If this request is resuming a previously interrupted SQL query
        # execution, `resume_token` should be copied from the last
        # PartialResultSet yielded before the interruption. Doing this
        # enables the new SQL query execution to resume where the last one left
        # off. The rest of the request parameters must exactly match the
        # request that yielded this token.
    "paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
        # from a JSON value. For example, values of type `BYTES` and values
        # of type `STRING` both appear in params as JSON strings.
        #
        # In these cases, `param_types` can be used to specify the exact
        # SQL type for some or all of the SQL query parameters. See the
        # definition of Type for more information
        # about SQL types.
      "a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
          # table cell or returned from an SQL query.
        "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
            # provides type information for the struct's fields.
        "code": "A String", # Required. The TypeCode for this type.
        "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
            # is the type of the array elements.
      },
    },
    "queryMode": "A String", # Used to control the amount of debugging information returned in
        # ResultSetStats.
    "sql": "A String", # Required. The SQL query string.
    "params": { # The SQL query string can contain parameter placeholders. A parameter
        # placeholder consists of `'@'` followed by the parameter
        # name. Parameter names consist of any combination of letters,
        # numbers, and underscores.
        #
        # Parameters can appear anywhere that a literal value is expected. The same
        # parameter name can be used more than once, for example:
        # `"WHERE id > @msg_id AND id < @msg_id + 100"`
        #
        # It is an error to execute an SQL query with unbound parameters.
        #
        # Parameter values are specified using `params`, which is a JSON
        # object whose keys are parameter names, and whose values are the
        # corresponding parameter values.
      "a_key": "", # Properties of the object.
    },
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Partial results from a streaming read or SQL query. Streaming reads and
      # SQL queries better tolerate large result sets, large rows, and large
      # values, but are a little trickier to consume.
    "resumeToken": "A String", # Streaming calls might be interrupted for a variety of reasons, such
        # as TCP connection loss. If this occurs, the stream of results can
        # be resumed by re-sending the original request and including
        # `resume_token`. Note that executing any other transaction in the
        # same session invalidates the token.
    "chunkedValue": True or False, # If true, then the final value in values is chunked, and must
        # be combined with more values from subsequent `PartialResultSet`s
        # to obtain a complete field value.
    "values": [ # A streamed result set consists of a stream of values, which might
        # be split into many `PartialResultSet` messages to accommodate
        # large rows and/or large values. Every N complete values defines a
        # row, where N is equal to the number of entries in
        # metadata.row_type.fields.
        #
        # Most values are encoded based on type as described
        # here.
        #
        # It is possible that the last value in values is "chunked",
        # meaning that the rest of the value is sent in subsequent
        # `PartialResultSet`(s). This is denoted by the chunked_value
        # field. Two or more chunked values can be merged to form a
        # complete value as follows:
        #
        # * `bool/number/null`: cannot be chunked
        # * `string`: concatenate the strings
        # * `list`: concatenate the lists. If the last element in a list is a
        #   `string`, `list`, or `object`, merge it with the first element in
        #   the next list by applying these rules recursively.
        # * `object`: concatenate the (field name, field value) pairs. If a
        #   field name is duplicated, then apply these rules recursively
        #   to merge the field values.
        #
        # Some examples of merging:
        #
        #     # Strings are concatenated.
        #     "foo", "bar" => "foobar"
        #
        #     # Lists of non-strings are concatenated.
        #     [2, 3], [4] => [2, 3, 4]
        #
        #     # Lists are concatenated, but the last and first elements are merged
        #     # because they are strings.
        #     ["a", "b"], ["c", "d"] => ["a", "bc", "d"]
        #
        #     # Lists are concatenated, but the last and first elements are merged
        #     # because they are lists. Recursively, the last and first elements
        #     # of the inner lists are merged because they are strings.
        #     ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"]
        #
        #     # Non-overlapping object fields are combined.
        #     {"a": "1"}, {"b": "2"} => {"a": "1", "b": "2"}
        #
        #     # Overlapping object fields are merged.
        #     {"a": "1"}, {"a": "2"} => {"a": "12"}
        #
        #     # Examples of merging objects containing lists of strings.
        #     {"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]}
        #
        # For a more complete example, suppose a streaming SQL query is
        # yielding a result set whose rows contain a single string
        # field. The following `PartialResultSet`s might be yielded:
        #
        #     {
        #       "metadata": { ... }
        #       "values": ["Hello", "W"]
        #       "chunked_value": true
        #       "resume_token": "Af65..."
        #     }
        #     {
        #       "values": ["orl"]
        #       "chunked_value": true
        #       "resume_token": "Bqp2..."
        #     }
        #     {
        #       "values": ["d"]
        #       "resume_token": "Zx1B..."
        #     }
        #
        # This sequence of `PartialResultSet`s encodes two rows, one
        # containing the field value `"Hello"`, and a second containing the
        # field value `"World" = "W" + "orl" + "d"`.
      "",
    ],
    "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the query that produced this
        # streaming result set. These can be requested by setting
        # ExecuteSqlRequest.query_mode and are sent
        # only once with the last response in the stream.
      "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
        "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
            # with the plan root. Each PlanNode's `id` corresponds to its index in
            # `plan_nodes`.
          { # Node information for nodes appearing in a QueryPlan.plan_nodes.
            "index": 42, # The `PlanNode`'s index in node list.
            "kind": "A String", # Used to determine the type of node. May be needed for visualizing
                # different kinds of nodes differently. For example, If the node is a
                # SCALAR node, it will have a condensed representation
                # which can be used to directly embed a description of the node in its
                # parent.
            "displayName": "A String", # The display name for the node.
            "executionStats": { # The execution statistics associated with the node, contained in a group of
                # key-value pairs. Only present if the plan was returned as a result of a
                # profile query. For example, number of executions, number of rows/time per
                # execution etc.
              "a_key": "", # Properties of the object.
            },
            "childLinks": [ # List of child node `index`es and their relationship to this parent.
              { # Metadata associated with a parent-child relationship appearing in a
                  # PlanNode.
                "variable": "A String", # Only present if the child node is SCALAR and corresponds
                    # to an output variable of the parent node. The field carries the name of
                    # the output variable.
                    # For example, a `TableScan` operator that reads rows from a table will
                    # have child links to the `SCALAR` nodes representing the output variables
                    # created for each column that is read by the operator. The corresponding
                    # `variable` fields will be set to the variable names assigned to the
                    # columns.
                "childIndex": 42, # The node to which the link points.
                "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
                    # distinguish between the build child and the probe child, or in the case
                    # of the child being an output variable, to represent the tag associated
                    # with the output variable.
              },
            ],
            "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
                # `SCALAR` PlanNode(s).
              "subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases
                  # where the `description` string of this node references a `SCALAR`
                  # subquery contained in the expression subtree rooted at this node. The
                  # referenced `SCALAR` subquery may not necessarily be a direct child of
                  # this node.
                "a_key": 42,
              },
              "description": "A String", # A string representation of the expression subtree rooted at this node.
            },
            "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
                # For example, a Parameter Reference node could have the following
                # information in its metadata:
                #
                #     {
                #       "parameter_reference": "param1",
                #       "parameter_type": "array"
                #     }
              "a_key": "", # Properties of the object.
            },
          },
        ],
      },
      "queryStats": { # Aggregated statistics from the execution of the query. Only present when
          # the query is profiled. For example, a query could return the statistics as
          # follows:
          #
          #     {
          #       "rows_returned": "3",
          #       "elapsed_time": "1.22 secs",
          #       "cpu_time": "1.19 secs"
          #     }
        "a_key": "", # Properties of the object.
      },
    },
    "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
        # Only present in the first response.
      "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
          # set. For example, a SQL query like `"SELECT UserId, UserName FROM
          # Users"` could return a `row_type` value like:
          #
          #     "fields": [
          #       { "name": "UserId", "type": { "code": "INT64" } },
          #       { "name": "UserName", "type": { "code": "STRING" } },
          #     ]
        "fields": [ # The list of fields that make up this struct. Order is
            # significant, because values of this struct type are represented as
            # lists, where the order of field values matches the order of
            # fields in the StructType. In turn, the order of fields
            # matches the order of columns in a read request, or the order of
            # fields in the `SELECT` clause of a query.
          { # Message representing a single field of a struct.
            "type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field.
                # table cell or returned from an SQL query.
              "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
                  # provides type information for the struct's fields.
              "code": "A String", # Required. The TypeCode for this type.
              "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
                  # is the type of the array elements.
            },
            "name": "A String", # The name of the field. For reads, this is the column name. For
                # SQL queries, it is the column alias (e.g., `"Word"` in the
                # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
                # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
                # columns might have an empty name (e.g., `"SELECT
                # UPPER(ColName)"`). Note that a query result can contain
                # multiple fields with the same name.
          },
        ],
      },
      "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
          # information about the new transaction is yielded here.
        "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
            # for the transaction. Not returned by default: see
            # TransactionOptions.ReadOnly.return_read_timestamp.
        "id": "A String", # `id` may be used to identify the transaction in subsequent
            # Read,
            # ExecuteSql,
            # Commit, or
            # Rollback calls.
            #
            # Single-use read-only transactions do not have IDs, because
            # single-use transactions do not support multiple requests.
      },
    },
  }</pre>
</div>

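The chunked-value merging rules described for `PartialResultSet.values` above can be sketched in plain Python. This is an illustrative helper, not part of the generated client; the function name `merge_values` and the use of JSON-decoded Python values (`str`, `list`, `dict`) are assumptions for the sketch.

```python
def merge_values(a, b):
    """Merge two chunked PartialResultSet values per the documented rules."""
    if isinstance(a, str) and isinstance(b, str):
        return a + b  # strings: concatenate
    if isinstance(a, list) and isinstance(b, list):
        # Concatenate lists; if the boundary elements are both strings,
        # lists, or objects, merge them recursively.
        if a and b and isinstance(a[-1], (str, list, dict)) and type(a[-1]) is type(b[0]):
            return a[:-1] + [merge_values(a[-1], b[0])] + b[1:]
        return a + b
    if isinstance(a, dict) and isinstance(b, dict):
        # Combine object fields; duplicated field names merge recursively.
        merged = dict(a)
        for key, value in b.items():
            merged[key] = merge_values(merged[key], value) if key in merged else value
        return merged
    raise ValueError("bool/number/null values cannot be chunked")

# Reproduces the examples from the reference above:
# merge_values(["a", "b"], ["c", "d"]) -> ["a", "bc", "d"]
# merge_values({"a": "1"}, {"a": "2"}) -> {"a": "12"}
```

A client consuming the stream would apply this whenever the previous `PartialResultSet` had `chunkedValue` set to true.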
<div class="method">
    <code class="details" id="get">get(name, x__xgafv=None)</code>
  <pre>Gets a session. Returns `NOT_FOUND` if the session does not exist.
This is mainly useful for determining whether a session is still
alive.

Args:
  name: string, Required. The name of the session to retrieve. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A session in the Cloud Spanner API.
    "name": "A String", # Required. The name of the session.
  }</pre>
</div>
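The "Retrying Aborted Transactions" guidance above says to bound retries by total wall time rather than attempt count. A minimal retry loop along those lines might look like this; `TransactionAborted` and `work_fn` are hypothetical stand-ins for however your application surfaces an `ABORTED` commit, not names from this API.

```python
import time

class TransactionAborted(Exception):
    """Hypothetical stand-in for a commit that returned ABORTED."""

def run_with_retries(work_fn, max_wall_seconds=60.0):
    """Retry a read-write transaction on ABORTED, capping total wall time.

    Retrying in the same session raises the session's lock priority, so each
    consecutive attempt has a slightly better chance of committing.
    """
    deadline = time.monotonic() + max_wall_seconds
    while True:
        try:
            return work_fn()  # run the transaction body and commit
        except TransactionAborted:
            if time.monotonic() >= deadline:
                raise  # wall-time budget exhausted: surface the abort
```

Note the loop has no attempt counter: per the reference, a transaction may abort many times in a short period before committing, so only the elapsed wall time is capped.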

<div class="method">
    <code class="details" id="read">read(session, body, x__xgafv=None)</code>
  <pre>Reads rows from the database using key lookups and scans, as a
simple key/value style alternative to
ExecuteSql. This method cannot be used to
return a result set larger than 10 MiB; if the read matches more
data than that, the read fails with a `FAILED_PRECONDITION`
error.

Reads inside read-write transactions might return `ABORTED`. If
this occurs, the application should restart the transaction from
the beginning. See Transaction for more details.

Larger result sets can be yielded in streaming fashion by calling
StreamingRead instead.

Args:
  session: string, Required. The session in which the read should be performed. (required)
  body: object, The request body. (required)
  The object takes the form of:

2691{ # The request for Read and
2692 # StreamingRead.
2693 "index": "A String", # If non-empty, the name of an index on table. This index is
2694 # used instead of the table primary key when interpreting key_set
2695 # and sorting result rows. See key_set for further information.
    "transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
        # temporary read-only transaction with strong concurrency.
        # Read or
        # ExecuteSql call runs.
        #
        # See TransactionOptions for more information about transactions.
      "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
          # it. The transaction ID of the new transaction is returned in
          # ResultSetMetadata.transaction, which is a Transaction.
          #
          #
          # Each session can have at most one active transaction at a time. After the
          # active transaction is completed, the session can immediately be
          # re-used for the next transaction. It is not necessary to create a
          # new session for each transaction.
          #
          # # Transaction Modes
          #
          # Cloud Spanner supports two transaction modes:
          #
          #   1. Locking read-write. This type of transaction is the only way
          #      to write data into Cloud Spanner. These transactions rely on
          #      pessimistic locking and, if necessary, two-phase commit.
          #      Locking read-write transactions may abort, requiring the
          #      application to retry.
          #
          #   2. Snapshot read-only. This transaction type provides guaranteed
          #      consistency across several reads, but does not allow
          #      writes. Snapshot read-only transactions can be configured to
          #      read at timestamps in the past. Snapshot read-only
          #      transactions do not need to be committed.
          #
          # For transactions that only read, snapshot read-only transactions
          # provide simpler semantics and are almost always faster. In
          # particular, read-only transactions do not take locks, so they do
          # not conflict with read-write transactions. As a consequence of not
          # taking locks, they also do not abort, so retry loops are not needed.
          #
          # Transactions may only read/write data in a single database. They
          # may, however, read/write data in different tables within that
          # database.
          #
          # ## Locking Read-Write Transactions
          #
          # Locking transactions may be used to atomically read-modify-write
          # data anywhere in a database. This type of transaction is externally
          # consistent.
          #
          # Clients should attempt to minimize the amount of time a transaction
          # is active. Faster transactions commit with higher probability
          # and cause less contention. Cloud Spanner attempts to keep read locks
          # active as long as the transaction continues to do reads, and the
          # transaction has not been terminated by
          # Commit or
          # Rollback. Long periods of
          # inactivity at the client may cause Cloud Spanner to release a
          # transaction's locks and abort it.
          #
          # Reads performed within a transaction acquire locks on the data
          # being read. Writes can only be done at commit time, after all reads
          # have been completed.
          # Conceptually, a read-write transaction consists of zero or more
          # reads or SQL queries followed by
          # Commit. At any time before
          # Commit, the client can send a
          # Rollback request to abort the
          # transaction.
          #
          # ### Semantics
          #
          # Cloud Spanner can commit the transaction if all read locks it acquired
          # are still valid at commit time, and it is able to acquire write
          # locks for all writes. Cloud Spanner can abort the transaction for any
          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
          # that the transaction has not modified any user data in Cloud Spanner.
          #
          # Unless the transaction commits, Cloud Spanner makes no guarantees about
          # how long the transaction's locks were held for. It is an error to
          # use Cloud Spanner locks for any sort of mutual exclusion other than
          # between Cloud Spanner transactions themselves.
          #
          # ### Retrying Aborted Transactions
          #
          # When a transaction aborts, the application can choose to retry the
          # whole transaction again. To maximize the chances of successfully
          # committing the retry, the client should execute the retry in the
          # same session as the original attempt. The original session's lock
          # priority increases with each consecutive abort, meaning that each
          # attempt has a slightly better chance of success than the previous.
          #
          # Under some circumstances (e.g., many transactions attempting to
          # modify the same row(s)), a transaction can abort many times in a
          # short period before successfully committing. Thus, it is not a good
          # idea to cap the number of retries a transaction can attempt;
          # instead, it is better to limit the total amount of wall time spent
          # retrying.
          #
          # ### Idle Transactions
          #
          # A transaction is considered idle if it has no outstanding reads or
          # SQL queries and has not started a read or SQL query within the last 10
          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
          # don't hold on to locks indefinitely. In that case, the commit will
          # fail with error `ABORTED`.
          #
          # If this behavior is undesirable, periodically executing a simple
          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
          # transaction from becoming idle.
          #
          # ## Snapshot Read-Only Transactions
          #
          # Snapshot read-only transactions provide a simpler method than
          # locking read-write transactions for doing several consistent
          # reads. However, this type of transaction does not support writes.
          #
          # Snapshot transactions do not take locks. Instead, they work by
          # choosing a Cloud Spanner timestamp, then executing all reads at that
          # timestamp. Since they do not acquire locks, they do not block
          # concurrent read-write transactions.
          #
          # Unlike locking read-write transactions, snapshot read-only
          # transactions never abort. They can fail if the chosen read
          # timestamp is garbage collected; however, the default garbage
          # collection policy is generous enough that most applications do not
          # need to worry about this in practice.
          #
          # Snapshot read-only transactions do not need to call
          # Commit or
          # Rollback (and in fact are not
          # permitted to do so).
          #
          # To execute a snapshot transaction, the client specifies a timestamp
          # bound, which tells Cloud Spanner how to choose a read timestamp.
          #
          # The types of timestamp bound are:
          #
          #   - Strong (the default).
          #   - Bounded staleness.
          #   - Exact staleness.
          #
          # If the Cloud Spanner database to be read is geographically distributed,
          # stale read-only transactions can execute more quickly than strong
          # or read-write transactions, because they are able to execute far
          # from the leader replica.
          #
          # Each type of timestamp bound is discussed in detail below.
          #
          # ### Strong
          #
          # Strong reads are guaranteed to see the effects of all transactions
          # that have committed before the start of the read. Furthermore, all
          # rows yielded by a single read are consistent with each other -- if
          # any part of the read observes a transaction, all parts of the read
          # see the transaction.
          #
          # Strong reads are not repeatable: two consecutive strong read-only
          # transactions might return inconsistent results if there are
          # concurrent writes. If consistency across reads is required, the
          # reads should be executed within a transaction or at an exact read
          # timestamp.
          #
          # See TransactionOptions.ReadOnly.strong.
          #
          # ### Exact Staleness
          #
          # These timestamp bounds execute reads at a user-specified
          # timestamp. Reads at a timestamp are guaranteed to see a consistent
          # prefix of the global transaction history: they observe
          # modifications done by all transactions with a commit timestamp <=
          # the read timestamp, and observe none of the modifications done by
          # transactions with a larger commit timestamp. They will block until
          # all conflicting transactions that may be assigned commit timestamps
          # <= the read timestamp have finished.
          #
          # The timestamp can either be expressed as an absolute Cloud Spanner commit
          # timestamp or a staleness relative to the current time.
          #
          # These modes do not require a "negotiation phase" to pick a
          # timestamp. As a result, they execute slightly faster than the
          # equivalent boundedly stale concurrency modes. On the other hand,
          # boundedly stale reads usually return fresher results.
          #
          # See TransactionOptions.ReadOnly.read_timestamp and
          # TransactionOptions.ReadOnly.exact_staleness.
          #
          # ### Bounded Staleness
          #
          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
          # subject to a user-provided staleness bound. Cloud Spanner chooses the
          # newest timestamp within the staleness bound that allows execution
          # of the reads at the closest available replica without blocking.
          #
          # All rows yielded are consistent with each other -- if any part of
          # the read observes a transaction, all parts of the read see the
          # transaction. Boundedly stale reads are not repeatable: two stale
          # reads, even if they use the same staleness bound, can execute at
          # different timestamps and thus return inconsistent results.
          #
          # Boundedly stale reads execute in two phases: the first phase
          # negotiates a timestamp among all replicas needed to serve the
          # read. In the second phase, reads are executed at the negotiated
          # timestamp.
          #
          # As a result of the two-phase execution, bounded staleness reads are
          # usually a little slower than comparable exact staleness
          # reads. However, they are typically able to return fresher
          # results, and are more likely to execute at the closest replica.
          #
          # Because the timestamp negotiation requires up-front knowledge of
          # which rows will be read, it can only be used with single-use
          # read-only transactions.
          #
          # See TransactionOptions.ReadOnly.max_staleness and
          # TransactionOptions.ReadOnly.min_read_timestamp.
          #
          # ### Old Read Timestamps and Garbage Collection
          #
          # Cloud Spanner continuously garbage collects deleted and overwritten data
          # in the background to reclaim storage space. This process is known
          # as "version GC". By default, version GC reclaims versions after they
          # are one hour old. Because of this, Cloud Spanner cannot perform reads
          # at read timestamps more than one hour in the past. This
          # restriction also applies to in-progress reads and/or SQL queries whose
          # timestamps become too old while executing. Reads and SQL queries with
          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
        "readWrite": { # Options for read-write transactions. # Transaction may write.
            #
            # Authorization to begin a read-write transaction requires
            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
            # on the `session` resource.
        },
        "readOnly": { # Options for read-only transactions. # Transaction will not write.
            #
            # Authorization to begin a read-only transaction requires
            # `spanner.databases.beginReadOnlyTransaction` permission
            # on the `session` resource.
          "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
              #
              # This is useful for requesting fresher data than some previous
              # read, or data that is fresh enough to observe the effects of some
              # previously committed transaction whose timestamp is known.
              #
              # Note that this option can only be used in single-use transactions.
          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
              # reads at a specific timestamp are repeatable; the same read at
              # the same timestamp always returns the same data. If the
              # timestamp is in the future, the read will block until the
              # specified timestamp, modulo the read's deadline.
              #
              # Useful for large scale consistent reads such as mapreduces, or
              # for coordinating many reads against a consistent snapshot of the
              # data.
          "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
              # seconds. Guarantees that all writes that have committed more
              # than the specified number of seconds ago are visible. Because
              # Cloud Spanner chooses the exact timestamp, this mode works even if
              # the client's local clock is substantially skewed from Cloud Spanner
              # commit timestamps.
              #
              # Useful for reading the freshest data available at a nearby
              # replica, while bounding the possible staleness if the local
              # replica has fallen behind.
              #
              # Note that this option can only be used in single-use
              # transactions.
          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
              # old. The timestamp is chosen soon after the read is started.
              #
              # Guarantees that all writes that have committed more than the
              # specified number of seconds ago are visible. Because Cloud Spanner
              # chooses the exact timestamp, this mode works even if the client's
              # local clock is substantially skewed from Cloud Spanner commit
              # timestamps.
              #
              # Useful for reading at nearby replicas without the distributed
              # timestamp negotiation overhead of `max_staleness`.
          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
              # the Transaction message that describes the transaction.
          "strong": True or False, # Read at a timestamp where all previously committed transactions
              # are visible.
        },
      },
      "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
          # This is the most efficient way to execute a transaction that
          # consists of a single SQL query.
          #
          #
          # Each session can have at most one active transaction at a time. After the
          # active transaction is completed, the session can immediately be
          # re-used for the next transaction. It is not necessary to create a
          # new session for each transaction.
          #
          # # Transaction Modes
          #
          # Cloud Spanner supports two transaction modes:
          #
          #   1. Locking read-write. This type of transaction is the only way
          #      to write data into Cloud Spanner. These transactions rely on
          #      pessimistic locking and, if necessary, two-phase commit.
          #      Locking read-write transactions may abort, requiring the
          #      application to retry.
          #
          #   2. Snapshot read-only. This transaction type provides guaranteed
          #      consistency across several reads, but does not allow
          #      writes. Snapshot read-only transactions can be configured to
          #      read at timestamps in the past. Snapshot read-only
          #      transactions do not need to be committed.
          #
          # For transactions that only read, snapshot read-only transactions
          # provide simpler semantics and are almost always faster. In
          # particular, read-only transactions do not take locks, so they do
          # not conflict with read-write transactions. As a consequence of not
          # taking locks, they also do not abort, so retry loops are not needed.
          #
          # Transactions may only read/write data in a single database. They
          # may, however, read/write data in different tables within that
          # database.
          #
          # ## Locking Read-Write Transactions
          #
          # Locking transactions may be used to atomically read-modify-write
          # data anywhere in a database. This type of transaction is externally
          # consistent.
          #
          # Clients should attempt to minimize the amount of time a transaction
          # is active. Faster transactions commit with higher probability
          # and cause less contention. Cloud Spanner attempts to keep read locks
          # active as long as the transaction continues to do reads, and the
          # transaction has not been terminated by
          # Commit or
          # Rollback. Long periods of
          # inactivity at the client may cause Cloud Spanner to release a
          # transaction's locks and abort it.
          #
          # Reads performed within a transaction acquire locks on the data
          # being read. Writes can only be done at commit time, after all reads
          # have been completed.
          # Conceptually, a read-write transaction consists of zero or more
          # reads or SQL queries followed by
          # Commit. At any time before
          # Commit, the client can send a
          # Rollback request to abort the
          # transaction.
          #
          # ### Semantics
          #
          # Cloud Spanner can commit the transaction if all read locks it acquired
          # are still valid at commit time, and it is able to acquire write
          # locks for all writes. Cloud Spanner can abort the transaction for any
          # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
          # that the transaction has not modified any user data in Cloud Spanner.
          #
          # Unless the transaction commits, Cloud Spanner makes no guarantees about
          # how long the transaction's locks were held for. It is an error to
          # use Cloud Spanner locks for any sort of mutual exclusion other than
          # between Cloud Spanner transactions themselves.
          #
          # ### Retrying Aborted Transactions
          #
          # When a transaction aborts, the application can choose to retry the
          # whole transaction again. To maximize the chances of successfully
          # committing the retry, the client should execute the retry in the
          # same session as the original attempt. The original session's lock
          # priority increases with each consecutive abort, meaning that each
          # attempt has a slightly better chance of success than the previous.
          #
          # Under some circumstances (e.g., many transactions attempting to
          # modify the same row(s)), a transaction can abort many times in a
          # short period before successfully committing. Thus, it is not a good
          # idea to cap the number of retries a transaction can attempt;
          # instead, it is better to limit the total amount of wall time spent
          # retrying.
          #
          # ### Idle Transactions
          #
          # A transaction is considered idle if it has no outstanding reads or
          # SQL queries and has not started a read or SQL query within the last 10
          # seconds. Idle transactions can be aborted by Cloud Spanner so that they
          # don't hold on to locks indefinitely. In that case, the commit will
          # fail with error `ABORTED`.
          #
          # If this behavior is undesirable, periodically executing a simple
          # SQL query in the transaction (e.g., `SELECT 1`) prevents the
          # transaction from becoming idle.
          #
          # ## Snapshot Read-Only Transactions
          #
          # Snapshot read-only transactions provide a simpler method than
          # locking read-write transactions for doing several consistent
          # reads. However, this type of transaction does not support writes.
          #
          # Snapshot transactions do not take locks. Instead, they work by
          # choosing a Cloud Spanner timestamp, then executing all reads at that
          # timestamp. Since they do not acquire locks, they do not block
          # concurrent read-write transactions.
          #
          # Unlike locking read-write transactions, snapshot read-only
          # transactions never abort. They can fail if the chosen read
          # timestamp is garbage collected; however, the default garbage
          # collection policy is generous enough that most applications do not
          # need to worry about this in practice.
          #
          # Snapshot read-only transactions do not need to call
          # Commit or
          # Rollback (and in fact are not
          # permitted to do so).
          #
          # To execute a snapshot transaction, the client specifies a timestamp
          # bound, which tells Cloud Spanner how to choose a read timestamp.
          #
          # The types of timestamp bound are:
          #
          #   - Strong (the default).
          #   - Bounded staleness.
          #   - Exact staleness.
          #
          # If the Cloud Spanner database to be read is geographically distributed,
          # stale read-only transactions can execute more quickly than strong
          # or read-write transactions, because they are able to execute far
          # from the leader replica.
          #
          # Each type of timestamp bound is discussed in detail below.
          #
          # ### Strong
          #
          # Strong reads are guaranteed to see the effects of all transactions
          # that have committed before the start of the read. Furthermore, all
          # rows yielded by a single read are consistent with each other -- if
          # any part of the read observes a transaction, all parts of the read
          # see the transaction.
          #
          # Strong reads are not repeatable: two consecutive strong read-only
          # transactions might return inconsistent results if there are
          # concurrent writes. If consistency across reads is required, the
          # reads should be executed within a transaction or at an exact read
          # timestamp.
          #
          # See TransactionOptions.ReadOnly.strong.
          #
          # ### Exact Staleness
          #
          # These timestamp bounds execute reads at a user-specified
          # timestamp. Reads at a timestamp are guaranteed to see a consistent
          # prefix of the global transaction history: they observe
          # modifications done by all transactions with a commit timestamp <=
          # the read timestamp, and observe none of the modifications done by
          # transactions with a larger commit timestamp. They will block until
          # all conflicting transactions that may be assigned commit timestamps
          # <= the read timestamp have finished.
          #
          # The timestamp can either be expressed as an absolute Cloud Spanner commit
          # timestamp or a staleness relative to the current time.
          #
          # These modes do not require a "negotiation phase" to pick a
          # timestamp. As a result, they execute slightly faster than the
          # equivalent boundedly stale concurrency modes. On the other hand,
          # boundedly stale reads usually return fresher results.
          #
          # See TransactionOptions.ReadOnly.read_timestamp and
          # TransactionOptions.ReadOnly.exact_staleness.
          #
          # ### Bounded Staleness
          #
          # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
          # subject to a user-provided staleness bound. Cloud Spanner chooses the
          # newest timestamp within the staleness bound that allows execution
          # of the reads at the closest available replica without blocking.
          #
          # All rows yielded are consistent with each other -- if any part of
          # the read observes a transaction, all parts of the read see the
          # transaction. Boundedly stale reads are not repeatable: two stale
          # reads, even if they use the same staleness bound, can execute at
          # different timestamps and thus return inconsistent results.
          #
          # Boundedly stale reads execute in two phases: the first phase
          # negotiates a timestamp among all replicas needed to serve the
          # read. In the second phase, reads are executed at the negotiated
          # timestamp.
          #
          # As a result of the two-phase execution, bounded staleness reads are
          # usually a little slower than comparable exact staleness
          # reads. However, they are typically able to return fresher
          # results, and are more likely to execute at the closest replica.
          #
          # Because the timestamp negotiation requires up-front knowledge of
          # which rows will be read, it can only be used with single-use
          # read-only transactions.
          #
          # See TransactionOptions.ReadOnly.max_staleness and
          # TransactionOptions.ReadOnly.min_read_timestamp.
          #
          # ### Old Read Timestamps and Garbage Collection
          #
          # Cloud Spanner continuously garbage collects deleted and overwritten data
          # in the background to reclaim storage space. This process is known
          # as "version GC". By default, version GC reclaims versions after they
          # are one hour old. Because of this, Cloud Spanner cannot perform reads
          # at read timestamps more than one hour in the past. This
          # restriction also applies to in-progress reads and/or SQL queries whose
          # timestamps become too old while executing. Reads and SQL queries with
          # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
        "readWrite": { # Options for read-write transactions. # Transaction may write.
            #
            # Authorization to begin a read-write transaction requires
            # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
            # on the `session` resource.
        },
        "readOnly": { # Options for read-only transactions. # Transaction will not write.
            #
            # Authorization to begin a read-only transaction requires
            # `spanner.databases.beginReadOnlyTransaction` permission
            # on the `session` resource.
          "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
              #
              # This is useful for requesting fresher data than some previous
              # read, or data that is fresh enough to observe the effects of some
              # previously committed transaction whose timestamp is known.
              #
              # Note that this option can only be used in single-use transactions.
          "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
              # reads at a specific timestamp are repeatable; the same read at
              # the same timestamp always returns the same data. If the
              # timestamp is in the future, the read will block until the
              # specified timestamp, modulo the read's deadline.
              #
              # Useful for large scale consistent reads such as mapreduces, or
              # for coordinating many reads against a consistent snapshot of the
              # data.
          "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
              # seconds. Guarantees that all writes that have committed more
              # than the specified number of seconds ago are visible. Because
              # Cloud Spanner chooses the exact timestamp, this mode works even if
              # the client's local clock is substantially skewed from Cloud Spanner
              # commit timestamps.
              #
              # Useful for reading the freshest data available at a nearby
              # replica, while bounding the possible staleness if the local
              # replica has fallen behind.
              #
              # Note that this option can only be used in single-use
              # transactions.
          "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
              # old. The timestamp is chosen soon after the read is started.
              #
              # Guarantees that all writes that have committed more than the
              # specified number of seconds ago are visible. Because Cloud Spanner
              # chooses the exact timestamp, this mode works even if the client's
              # local clock is substantially skewed from Cloud Spanner commit
              # timestamps.
              #
              # Useful for reading at nearby replicas without the distributed
              # timestamp negotiation overhead of `max_staleness`.
          "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
              # the Transaction message that describes the transaction.
          "strong": True or False, # Read at a timestamp where all previously committed transactions
              # are visible.
        },
      },
      "id": "A String", # Execute the read or SQL query in a previously-started transaction.
    },
    "resumeToken": "A String", # If this request is resuming a previously interrupted read,
        # `resume_token` should be copied from the last
        # PartialResultSet yielded before the interruption. Doing this
        # enables the new read to resume where the last read left off. The
        # rest of the request parameters must exactly match the request
        # that yielded this token.
    "keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
        # primary keys of the rows in table to be yielded, unless index
        # is present. If index is present, then key_set instead names
        # index keys in index.
        #
        # Rows are yielded in table primary key order (if index is empty)
        # or index key order (if index is non-empty).
        #
        # It is not an error for the `key_set` to name rows that do not
        # exist in the database. Read yields nothing for nonexistent rows.
        # the keys are expected to be in the same table or index. The keys need
        # not be sorted in any particular way.
        #
        # If the same key is specified multiple times in the set (for example
        # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
        # behaves as if the key were only specified once.
      "ranges": [ # A list of key ranges. See KeyRange for more information about
          # key range specifications.
        { # KeyRange represents a range of rows in a table or index.
            #
            # A range has a start key and an end key. These keys can be open or
            # closed, indicating if the range includes rows with that key.
            #
            # Keys are represented by lists, where the ith value in the list
            # corresponds to the ith component of the table or index primary key.
            # Individual values are encoded as described here.
            #
            # For example, consider the following table definition:
            #
            #     CREATE TABLE UserEvents (
            #       UserName STRING(MAX),
            #       EventDate STRING(10)
            #     ) PRIMARY KEY(UserName, EventDate);
            #
            # The following keys name rows in this table:
            #
            #     "Bob", "2014-09-23"
            #
            # Since the `UserEvents` table's `PRIMARY KEY` clause names two
            # columns, each `UserEvents` key has two elements; the first is the
            # `UserName`, and the second is the `EventDate`.
            #
            # Key ranges with multiple components are interpreted
            # lexicographically by component using the table or index key's declared
            # sort order. For example, the following range returns all events for
            # user `"Bob"` that occurred in the year 2015:
            #
            #     "start_closed": ["Bob", "2015-01-01"]
            #     "end_closed": ["Bob", "2015-12-31"]
            #
            # Start and end keys can omit trailing key components. This affects the
            # inclusion and exclusion of rows that exactly match the provided key
            # components: if the key is closed, then rows that exactly match the
            # provided components are included; if the key is open, then rows
            # that exactly match are not included.
            #
            # For example, the following range includes all events for `"Bob"` that
            # occurred during and after the year 2000:
            #
            #     "start_closed": ["Bob", "2000-01-01"]
            #     "end_closed": ["Bob"]
            #
            # The next example retrieves all events for `"Bob"`:
            #
            #     "start_closed": ["Bob"]
            #     "end_closed": ["Bob"]
            #
            # To retrieve events before the year 2000:
            #
            #     "start_closed": ["Bob"]
            #     "end_open": ["Bob", "2000-01-01"]
            #
            # The following range includes all rows in the table:
            #
            #     "start_closed": []
            #     "end_closed": []
            #
            # This range returns all users whose `UserName` begins with any
            # character from A to C:
            #
            #     "start_closed": ["A"]
            #     "end_open": ["D"]
            #
            # This range returns all users whose `UserName` begins with B:
            #
            #     "start_closed": ["B"]
            #     "end_open": ["C"]
            #
            # Key ranges honor column sort order. For example, suppose a table is
            # defined as follows:
            #
            #     CREATE TABLE DescendingSortedTable (
            #       Key INT64,
            #       ...
            #     ) PRIMARY KEY(Key DESC);
            #
            # The following range retrieves all rows with key values between 1
            # and 100 inclusive:
            #
            #     "start_closed": ["100"]
            #     "end_closed": ["1"]
            #
            # Note that 100 is passed as the start, and 1 is passed as the end,
            # because `Key` is a descending column in the schema.
3366 "endOpen": [ # If the end is open, then the range excludes rows whose first
3367 # `len(end_open)` key columns exactly match `end_open`.
3368 "",
3369 ],
3370 "startOpen": [ # If the start is open, then the range excludes rows whose first
3371 # `len(start_open)` key columns exactly match `start_open`.
3372 "",
3373 ],
3374 "endClosed": [ # If the end is closed, then the range includes all rows whose
3375 # first `len(end_closed)` key columns exactly match `end_closed`.
3376 "",
3377 ],
3378 "startClosed": [ # If the start is closed, then the range includes all rows whose
3379 # first `len(start_closed)` key columns exactly match `start_closed`.
3380 "",
3381 ],
3382 },
3383 ],
3384 "keys": [ # A list of specific keys. Entries in `keys` should have exactly as
3385 # many elements as there are columns in the primary or index key
3386 # with which this `KeySet` is used. Individual key values are
3387 # encoded as described here.
3388 [
3389 "",
3390 ],
3391 ],
3392 "all": True or False, # For convenience `all` can be set to `true` to indicate that this
3393 # `KeySet` matches all keys in the table or index. Note that any keys
3394 # specified in `keys` or `ranges` are only yielded once.
3395 },
3396 "limit": "A String", # If greater than zero, only the first `limit` rows are yielded. If `limit`
3397 # is zero, the default is no limit.
3398 "table": "A String", # Required. The name of the table in the database to be read.
 3399 "columns": [ # The columns of the table to be returned for each row matching
3400 # this request.
3401 "A String",
3402 ],
3403 }
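As a sketch, the descending-key range described above can be expressed as a Read request body like the following. The helper name and table are illustrative only; because `Key` sorts descending, the larger value (100) is the start of the range and the smaller value (1) is the end:

```python
def descending_range_read_request(table, start_key, end_key, columns):
    """Build a Read request body covering [start_key, end_key] inclusive
    on a table whose primary key column is declared DESC."""
    return {
        "table": table,
        "columns": columns,
        "keySet": {
            "ranges": [{
                # INT64 key values are encoded as decimal strings.
                "startClosed": [str(start_key)],
                "endClosed": [str(end_key)],
            }],
        },
    }

body = descending_range_read_request("DescendingSortedTable", 100, 1, ["Key"])
print(body["keySet"]["ranges"][0]["startClosed"])  # ['100']
```

The same body shape works for `read` and `streamingRead`; only the key encoding (strings for INT64) is Spanner-specific.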
3404
3405 x__xgafv: string, V1 error format.
3406 Allowed values
3407 1 - v1 error format
3408 2 - v2 error format
3409
3410Returns:
3411 An object of the form:
3412
3413 { # Results from Read or
3414 # ExecuteSql.
3415 "rows": [ # Each element in `rows` is a row whose format is defined by
3416 # metadata.row_type. The ith element
3417 # in each row matches the ith field in
3418 # metadata.row_type. Elements are
3419 # encoded based on type as described
3420 # here.
3421 [
3422 "",
3423 ],
3424 ],
3425 "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the query that produced this
3426 # result set. These can be requested by setting
3427 # ExecuteSqlRequest.query_mode.
3428 "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
3429 "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
3430 # with the plan root. Each PlanNode's `id` corresponds to its index in
3431 # `plan_nodes`.
3432 { # Node information for nodes appearing in a QueryPlan.plan_nodes.
 3433 "index": 42, # The `PlanNode`'s index in the node list.
3434 "kind": "A String", # Used to determine the type of node. May be needed for visualizing
 3435 # different kinds of nodes differently. For example, if the node is a
3436 # SCALAR node, it will have a condensed representation
3437 # which can be used to directly embed a description of the node in its
3438 # parent.
3439 "displayName": "A String", # The display name for the node.
3440 "executionStats": { # The execution statistics associated with the node, contained in a group of
3441 # key-value pairs. Only present if the plan was returned as a result of a
3442 # profile query. For example, number of executions, number of rows/time per
3443 # execution etc.
3444 "a_key": "", # Properties of the object.
3445 },
3446 "childLinks": [ # List of child node `index`es and their relationship to this parent.
3447 { # Metadata associated with a parent-child relationship appearing in a
3448 # PlanNode.
3449 "variable": "A String", # Only present if the child node is SCALAR and corresponds
3450 # to an output variable of the parent node. The field carries the name of
3451 # the output variable.
3452 # For example, a `TableScan` operator that reads rows from a table will
3453 # have child links to the `SCALAR` nodes representing the output variables
3454 # created for each column that is read by the operator. The corresponding
3455 # `variable` fields will be set to the variable names assigned to the
3456 # columns.
3457 "childIndex": 42, # The node to which the link points.
3458 "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
3459 # distinguish between the build child and the probe child, or in the case
3460 # of the child being an output variable, to represent the tag associated
3461 # with the output variable.
3462 },
3463 ],
3464 "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
3465 # `SCALAR` PlanNode(s).
3466 "subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases
3467 # where the `description` string of this node references a `SCALAR`
3468 # subquery contained in the expression subtree rooted at this node. The
3469 # referenced `SCALAR` subquery may not necessarily be a direct child of
3470 # this node.
3471 "a_key": 42,
3472 },
3473 "description": "A String", # A string representation of the expression subtree rooted at this node.
3474 },
3475 "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
3476 # For example, a Parameter Reference node could have the following
3477 # information in its metadata:
3478 #
3479 # {
3480 # "parameter_reference": "param1",
3481 # "parameter_type": "array"
3482 # }
3483 "a_key": "", # Properties of the object.
3484 },
3485 },
3486 ],
3487 },
3488 "queryStats": { # Aggregated statistics from the execution of the query. Only present when
3489 # the query is profiled. For example, a query could return the statistics as
3490 # follows:
3491 #
3492 # {
3493 # "rows_returned": "3",
3494 # "elapsed_time": "1.22 secs",
3495 # "cpu_time": "1.19 secs"
3496 # }
3497 "a_key": "", # Properties of the object.
3498 },
3499 },
3500 "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
3501 "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
3502 # set. For example, a SQL query like `"SELECT UserId, UserName FROM
3503 # Users"` could return a `row_type` value like:
3504 #
3505 # "fields": [
3506 # { "name": "UserId", "type": { "code": "INT64" } },
 3507 # { "name": "UserName", "type": { "code": "STRING" } }
3508 # ]
3509 "fields": [ # The list of fields that make up this struct. Order is
3510 # significant, because values of this struct type are represented as
3511 # lists, where the order of field values matches the order of
3512 # fields in the StructType. In turn, the order of fields
3513 # matches the order of columns in a read request, or the order of
3514 # fields in the `SELECT` clause of a query.
3515 { # Message representing a single field of a struct.
3516 "type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field.
3517 # table cell or returned from an SQL query.
3518 "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
3519 # provides type information for the struct's fields.
3520 "code": "A String", # Required. The TypeCode for this type.
3521 "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
3522 # is the type of the array elements.
3523 },
3524 "name": "A String", # The name of the field. For reads, this is the column name. For
3525 # SQL queries, it is the column alias (e.g., `"Word"` in the
3526 # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
3527 # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
 3528 # columns might have an empty name (e.g., `"SELECT
 3529 # UPPER(ColName)"`). Note that a query result can contain
3530 # multiple fields with the same name.
3531 },
3532 ],
3533 },
3534 "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
3535 # information about the new transaction is yielded here.
3536 "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
3537 # for the transaction. Not returned by default: see
3538 # TransactionOptions.ReadOnly.return_read_timestamp.
3539 "id": "A String", # `id` may be used to identify the transaction in subsequent
3540 # Read,
3541 # ExecuteSql,
3542 # Commit, or
3543 # Rollback calls.
3544 #
3545 # Single-use read-only transactions do not have IDs, because
3546 # single-use transactions do not support multiple requests.
3547 },
3548 },
3549 }</pre>
3550</div>
3551
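As a sketch of consuming the result set above: each element of `rows` pairs positionally with the fields in `metadata.rowType`. The response literal below is a trimmed, hypothetical example of the form shown:

```python
# A trimmed, hypothetical ResultSet of the shape documented above.
response = {
    "metadata": {"rowType": {"fields": [
        {"name": "UserId", "type": {"code": "INT64"}},
        {"name": "UserName", "type": {"code": "STRING"}},
    ]}},
    # INT64 values are encoded as decimal strings in the JSON mapping.
    "rows": [["1", "al"], ["2", "bo"]],
}

# The ith value in each row matches the ith field in metadata.rowType.
names = [f["name"] for f in response["metadata"]["rowType"]["fields"]]
records = [dict(zip(names, row)) for row in response["rows"]]
print(records[0])  # {'UserId': '1', 'UserName': 'al'}
```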
3552<div class="method">
3553 <code class="details" id="rollback">rollback(session, body, x__xgafv=None)</code>
3554 <pre>Rolls back a transaction, releasing any locks it holds. It is a good
3555idea to call this for any transaction that includes one or more
3556Read or ExecuteSql requests and
3557ultimately decides not to commit.
3558
3559`Rollback` returns `OK` if it successfully aborts the transaction, the
3560transaction was already aborted, or the transaction is not
3561found. `Rollback` never returns `ABORTED`.
3562
3563Args:
3564 session: string, Required. The session in which the transaction to roll back is running. (required)
3565 body: object, The request body. (required)
3566 The object takes the form of:
3567
3568{ # The request for Rollback.
3569 "transactionId": "A String", # Required. The transaction to roll back.
3570 }
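A minimal sketch of issuing this request with the discovery-based Python client follows. The session name and transaction ID are hypothetical, and the `service` object is assumed to have been built elsewhere:

```python
# Hypothetical resource names for illustration only.
session = ("projects/my-project/instances/my-instance/"
           "databases/my-db/sessions/my-session")
body = {"transactionId": "my-transaction-id"}

# With a discovery client built elsewhere, e.g.:
#   service = googleapiclient.discovery.build('spanner', 'v1')
# the call would be:
#   service.projects().instances().databases().sessions() \
#          .rollback(session=session, body=body).execute()
# A successful rollback returns an empty object: {}
```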
3571
3572 x__xgafv: string, V1 error format.
3573 Allowed values
3574 1 - v1 error format
3575 2 - v2 error format
3576
3577Returns:
3578 An object of the form:
3579
3580 { # A generic empty message that you can re-use to avoid defining duplicated
3581 # empty messages in your APIs. A typical example is to use it as the request
3582 # or the response type of an API method. For instance:
3583 #
3584 # service Foo {
3585 # rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
3586 # }
3587 #
 3588 # The JSON representation for `Empty` is the empty JSON object `{}`.
3589 }</pre>
3590</div>
3591
3592<div class="method">
3593 <code class="details" id="streamingRead">streamingRead(session, body, x__xgafv=None)</code>
3594 <pre>Like Read, except returns the result set as a
3595stream. Unlike Read, there is no limit on the
3596size of the returned result set. However, no individual row in
3597the result set can exceed 100 MiB, and no column value can exceed
359810 MiB.
3599
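Before the field-by-field description below, here is a sketch of a complete StreamingRead request body. The `Users` table and its columns are hypothetical; the single-use, strong read-only transaction shown is also the default when `transaction` is omitted, and is spelled out here only for illustration:

```python
# Sketch of a StreamingRead request body: read two columns from every
# row of a hypothetical Users table under a single-use, strong
# read-only transaction.
body = {
    "table": "Users",
    "columns": ["UserId", "UserName"],
    # `all: True` matches every key in the table.
    "keySet": {"all": True},
    "transaction": {
        "singleUse": {"readOnly": {"strong": True}},
    },
}
```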
3600Args:
3601 session: string, Required. The session in which the read should be performed. (required)
3602 body: object, The request body. (required)
3603 The object takes the form of:
3604
3605{ # The request for Read and
3606 # StreamingRead.
3607 "index": "A String", # If non-empty, the name of an index on table. This index is
3608 # used instead of the table primary key when interpreting key_set
3609 # and sorting result rows. See key_set for further information.
3610 "transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
3611 # temporary read-only transaction with strong concurrency.
3612 # Read or
3613 # ExecuteSql call runs.
3614 #
3615 # See TransactionOptions for more information about transactions.
3616 "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
3617 # it. The transaction ID of the new transaction is returned in
3618 # ResultSetMetadata.transaction, which is a Transaction.
3619 #
3620 #
3621 # Each session can have at most one active transaction at a time. After the
3622 # active transaction is completed, the session can immediately be
3623 # re-used for the next transaction. It is not necessary to create a
3624 # new session for each transaction.
3625 #
3626 # # Transaction Modes
3627 #
3628 # Cloud Spanner supports two transaction modes:
3629 #
3630 # 1. Locking read-write. This type of transaction is the only way
3631 # to write data into Cloud Spanner. These transactions rely on
3632 # pessimistic locking and, if necessary, two-phase commit.
3633 # Locking read-write transactions may abort, requiring the
3634 # application to retry.
3635 #
3636 # 2. Snapshot read-only. This transaction type provides guaranteed
3637 # consistency across several reads, but does not allow
3638 # writes. Snapshot read-only transactions can be configured to
3639 # read at timestamps in the past. Snapshot read-only
3640 # transactions do not need to be committed.
3641 #
3642 # For transactions that only read, snapshot read-only transactions
3643 # provide simpler semantics and are almost always faster. In
3644 # particular, read-only transactions do not take locks, so they do
3645 # not conflict with read-write transactions. As a consequence of not
3646 # taking locks, they also do not abort, so retry loops are not needed.
3647 #
3648 # Transactions may only read/write data in a single database. They
3649 # may, however, read/write data in different tables within that
3650 # database.
3651 #
3652 # ## Locking Read-Write Transactions
3653 #
3654 # Locking transactions may be used to atomically read-modify-write
3655 # data anywhere in a database. This type of transaction is externally
3656 # consistent.
3657 #
3658 # Clients should attempt to minimize the amount of time a transaction
3659 # is active. Faster transactions commit with higher probability
3660 # and cause less contention. Cloud Spanner attempts to keep read locks
3661 # active as long as the transaction continues to do reads, and the
3662 # transaction has not been terminated by
3663 # Commit or
3664 # Rollback. Long periods of
3665 # inactivity at the client may cause Cloud Spanner to release a
3666 # transaction's locks and abort it.
3667 #
3668 # Reads performed within a transaction acquire locks on the data
3669 # being read. Writes can only be done at commit time, after all reads
3670 # have been completed.
3671 # Conceptually, a read-write transaction consists of zero or more
3672 # reads or SQL queries followed by
3673 # Commit. At any time before
3674 # Commit, the client can send a
3675 # Rollback request to abort the
3676 # transaction.
3677 #
3678 # ### Semantics
3679 #
3680 # Cloud Spanner can commit the transaction if all read locks it acquired
3681 # are still valid at commit time, and it is able to acquire write
3682 # locks for all writes. Cloud Spanner can abort the transaction for any
3683 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
3684 # that the transaction has not modified any user data in Cloud Spanner.
3685 #
3686 # Unless the transaction commits, Cloud Spanner makes no guarantees about
3687 # how long the transaction's locks were held for. It is an error to
3688 # use Cloud Spanner locks for any sort of mutual exclusion other than
3689 # between Cloud Spanner transactions themselves.
3690 #
3691 # ### Retrying Aborted Transactions
3692 #
3693 # When a transaction aborts, the application can choose to retry the
3694 # whole transaction again. To maximize the chances of successfully
3695 # committing the retry, the client should execute the retry in the
3696 # same session as the original attempt. The original session's lock
3697 # priority increases with each consecutive abort, meaning that each
3698 # attempt has a slightly better chance of success than the previous.
3699 #
3700 # Under some circumstances (e.g., many transactions attempting to
3701 # modify the same row(s)), a transaction can abort many times in a
3702 # short period before successfully committing. Thus, it is not a good
3703 # idea to cap the number of retries a transaction can attempt;
3704 # instead, it is better to limit the total amount of wall time spent
3705 # retrying.
3706 #
3707 # ### Idle Transactions
3708 #
3709 # A transaction is considered idle if it has no outstanding reads or
3710 # SQL queries and has not started a read or SQL query within the last 10
3711 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
3712 # don't hold on to locks indefinitely. In that case, the commit will
3713 # fail with error `ABORTED`.
3714 #
3715 # If this behavior is undesirable, periodically executing a simple
3716 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
3717 # transaction from becoming idle.
3718 #
3719 # ## Snapshot Read-Only Transactions
3720 #
 3721 # Snapshot read-only transactions provide a simpler method than
3722 # locking read-write transactions for doing several consistent
3723 # reads. However, this type of transaction does not support writes.
3724 #
3725 # Snapshot transactions do not take locks. Instead, they work by
3726 # choosing a Cloud Spanner timestamp, then executing all reads at that
3727 # timestamp. Since they do not acquire locks, they do not block
3728 # concurrent read-write transactions.
3729 #
3730 # Unlike locking read-write transactions, snapshot read-only
3731 # transactions never abort. They can fail if the chosen read
3732 # timestamp is garbage collected; however, the default garbage
3733 # collection policy is generous enough that most applications do not
3734 # need to worry about this in practice.
3735 #
3736 # Snapshot read-only transactions do not need to call
3737 # Commit or
3738 # Rollback (and in fact are not
3739 # permitted to do so).
3740 #
3741 # To execute a snapshot transaction, the client specifies a timestamp
3742 # bound, which tells Cloud Spanner how to choose a read timestamp.
3743 #
3744 # The types of timestamp bound are:
3745 #
3746 # - Strong (the default).
3747 # - Bounded staleness.
3748 # - Exact staleness.
3749 #
3750 # If the Cloud Spanner database to be read is geographically distributed,
3751 # stale read-only transactions can execute more quickly than strong
 3752 # or read-write transactions, because they are able to execute far
3753 # from the leader replica.
3754 #
3755 # Each type of timestamp bound is discussed in detail below.
3756 #
3757 # ### Strong
3758 #
3759 # Strong reads are guaranteed to see the effects of all transactions
3760 # that have committed before the start of the read. Furthermore, all
3761 # rows yielded by a single read are consistent with each other -- if
3762 # any part of the read observes a transaction, all parts of the read
3763 # see the transaction.
3764 #
3765 # Strong reads are not repeatable: two consecutive strong read-only
3766 # transactions might return inconsistent results if there are
3767 # concurrent writes. If consistency across reads is required, the
3768 # reads should be executed within a transaction or at an exact read
3769 # timestamp.
3770 #
3771 # See TransactionOptions.ReadOnly.strong.
3772 #
3773 # ### Exact Staleness
3774 #
3775 # These timestamp bounds execute reads at a user-specified
3776 # timestamp. Reads at a timestamp are guaranteed to see a consistent
3777 # prefix of the global transaction history: they observe
3778 # modifications done by all transactions with a commit timestamp <=
3779 # the read timestamp, and observe none of the modifications done by
3780 # transactions with a larger commit timestamp. They will block until
3781 # all conflicting transactions that may be assigned commit timestamps
3782 # <= the read timestamp have finished.
3783 #
3784 # The timestamp can either be expressed as an absolute Cloud Spanner commit
3785 # timestamp or a staleness relative to the current time.
3786 #
3787 # These modes do not require a "negotiation phase" to pick a
3788 # timestamp. As a result, they execute slightly faster than the
3789 # equivalent boundedly stale concurrency modes. On the other hand,
3790 # boundedly stale reads usually return fresher results.
3791 #
3792 # See TransactionOptions.ReadOnly.read_timestamp and
3793 # TransactionOptions.ReadOnly.exact_staleness.
3794 #
3795 # ### Bounded Staleness
3796 #
3797 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
3798 # subject to a user-provided staleness bound. Cloud Spanner chooses the
3799 # newest timestamp within the staleness bound that allows execution
3800 # of the reads at the closest available replica without blocking.
3801 #
3802 # All rows yielded are consistent with each other -- if any part of
3803 # the read observes a transaction, all parts of the read see the
3804 # transaction. Boundedly stale reads are not repeatable: two stale
3805 # reads, even if they use the same staleness bound, can execute at
3806 # different timestamps and thus return inconsistent results.
3807 #
3808 # Boundedly stale reads execute in two phases: the first phase
3809 # negotiates a timestamp among all replicas needed to serve the
3810 # read. In the second phase, reads are executed at the negotiated
3811 # timestamp.
3812 #
 3813 # As a result of the two-phase execution, bounded staleness reads are
3814 # usually a little slower than comparable exact staleness
3815 # reads. However, they are typically able to return fresher
3816 # results, and are more likely to execute at the closest replica.
3817 #
3818 # Because the timestamp negotiation requires up-front knowledge of
3819 # which rows will be read, it can only be used with single-use
3820 # read-only transactions.
3821 #
3822 # See TransactionOptions.ReadOnly.max_staleness and
3823 # TransactionOptions.ReadOnly.min_read_timestamp.
3824 #
3825 # ### Old Read Timestamps and Garbage Collection
3826 #
3827 # Cloud Spanner continuously garbage collects deleted and overwritten data
3828 # in the background to reclaim storage space. This process is known
3829 # as "version GC". By default, version GC reclaims versions after they
3830 # are one hour old. Because of this, Cloud Spanner cannot perform reads
3831 # at read timestamps more than one hour in the past. This
3832 # restriction also applies to in-progress reads and/or SQL queries whose
 3833 # timestamps become too old while executing. Reads and SQL queries with
3834 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
3835 "readWrite": { # Options for read-write transactions. # Transaction may write.
3836 #
3837 # Authorization to begin a read-write transaction requires
3838 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
3839 # on the `session` resource.
3840 },
3841 "readOnly": { # Options for read-only transactions. # Transaction will not write.
3842 #
3843 # Authorization to begin a read-only transaction requires
3844 # `spanner.databases.beginReadOnlyTransaction` permission
3845 # on the `session` resource.
3846 "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
3847 #
3848 # This is useful for requesting fresher data than some previous
3849 # read, or data that is fresh enough to observe the effects of some
3850 # previously committed transaction whose timestamp is known.
3851 #
3852 # Note that this option can only be used in single-use transactions.
3853 "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
3854 # reads at a specific timestamp are repeatable; the same read at
3855 # the same timestamp always returns the same data. If the
3856 # timestamp is in the future, the read will block until the
3857 # specified timestamp, modulo the read's deadline.
3858 #
3859 # Useful for large scale consistent reads such as mapreduces, or
3860 # for coordinating many reads against a consistent snapshot of the
3861 # data.
3862 "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
3863 # seconds. Guarantees that all writes that have committed more
3864 # than the specified number of seconds ago are visible. Because
3865 # Cloud Spanner chooses the exact timestamp, this mode works even if
3866 # the client's local clock is substantially skewed from Cloud Spanner
3867 # commit timestamps.
3868 #
3869 # Useful for reading the freshest data available at a nearby
3870 # replica, while bounding the possible staleness if the local
3871 # replica has fallen behind.
3872 #
3873 # Note that this option can only be used in single-use
3874 # transactions.
3875 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
3876 # old. The timestamp is chosen soon after the read is started.
3877 #
3878 # Guarantees that all writes that have committed more than the
3879 # specified number of seconds ago are visible. Because Cloud Spanner
3880 # chooses the exact timestamp, this mode works even if the client's
3881 # local clock is substantially skewed from Cloud Spanner commit
3882 # timestamps.
3883 #
3884 # Useful for reading at nearby replicas without the distributed
3885 # timestamp negotiation overhead of `max_staleness`.
3886 "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
3887 # the Transaction message that describes the transaction.
3888 "strong": True or False, # Read at a timestamp where all previously committed transactions
3889 # are visible.
3890 },
3891 },
3892 "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
3893 # This is the most efficient way to execute a transaction that
3894 # consists of a single SQL query.
3895 #
3896 #
3897 # Each session can have at most one active transaction at a time. After the
3898 # active transaction is completed, the session can immediately be
3899 # re-used for the next transaction. It is not necessary to create a
3900 # new session for each transaction.
3901 #
3902 # # Transaction Modes
3903 #
3904 # Cloud Spanner supports two transaction modes:
3905 #
3906 # 1. Locking read-write. This type of transaction is the only way
3907 # to write data into Cloud Spanner. These transactions rely on
3908 # pessimistic locking and, if necessary, two-phase commit.
3909 # Locking read-write transactions may abort, requiring the
3910 # application to retry.
3911 #
3912 # 2. Snapshot read-only. This transaction type provides guaranteed
3913 # consistency across several reads, but does not allow
3914 # writes. Snapshot read-only transactions can be configured to
3915 # read at timestamps in the past. Snapshot read-only
3916 # transactions do not need to be committed.
3917 #
3918 # For transactions that only read, snapshot read-only transactions
3919 # provide simpler semantics and are almost always faster. In
3920 # particular, read-only transactions do not take locks, so they do
3921 # not conflict with read-write transactions. As a consequence of not
3922 # taking locks, they also do not abort, so retry loops are not needed.
3923 #
3924 # Transactions may only read/write data in a single database. They
3925 # may, however, read/write data in different tables within that
3926 # database.
3927 #
3928 # ## Locking Read-Write Transactions
3929 #
3930 # Locking transactions may be used to atomically read-modify-write
3931 # data anywhere in a database. This type of transaction is externally
3932 # consistent.
3933 #
3934 # Clients should attempt to minimize the amount of time a transaction
3935 # is active. Faster transactions commit with higher probability
3936 # and cause less contention. Cloud Spanner attempts to keep read locks
3937 # active as long as the transaction continues to do reads, and the
3938 # transaction has not been terminated by
3939 # Commit or
3940 # Rollback. Long periods of
3941 # inactivity at the client may cause Cloud Spanner to release a
3942 # transaction's locks and abort it.
3943 #
3944 # Reads performed within a transaction acquire locks on the data
3945 # being read. Writes can only be done at commit time, after all reads
3946 # have been completed.
3947 # Conceptually, a read-write transaction consists of zero or more
3948 # reads or SQL queries followed by
3949 # Commit. At any time before
3950 # Commit, the client can send a
3951 # Rollback request to abort the
3952 # transaction.
3953 #
3954 # ### Semantics
3955 #
3956 # Cloud Spanner can commit the transaction if all read locks it acquired
3957 # are still valid at commit time, and it is able to acquire write
3958 # locks for all writes. Cloud Spanner can abort the transaction for any
3959 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
3960 # that the transaction has not modified any user data in Cloud Spanner.
3961 #
3962 # Unless the transaction commits, Cloud Spanner makes no guarantees about
3963 # how long the transaction's locks were held for. It is an error to
3964 # use Cloud Spanner locks for any sort of mutual exclusion other than
3965 # between Cloud Spanner transactions themselves.
3966 #
3967 # ### Retrying Aborted Transactions
3968 #
3969 # When a transaction aborts, the application can choose to retry the
3970 # whole transaction again. To maximize the chances of successfully
3971 # committing the retry, the client should execute the retry in the
3972 # same session as the original attempt. The original session's lock
3973 # priority increases with each consecutive abort, meaning that each
3974 # attempt has a slightly better chance of success than the previous.
3975 #
3976 # Under some circumstances (e.g., many transactions attempting to
3977 # modify the same row(s)), a transaction can abort many times in a
3978 # short period before successfully committing. Thus, it is not a good
3979 # idea to cap the number of retries a transaction can attempt;
3980 # instead, it is better to limit the total amount of wall time spent
3981 # retrying.
3982 #
3983 # ### Idle Transactions
3984 #
3985 # A transaction is considered idle if it has no outstanding reads or
3986 # SQL queries and has not started a read or SQL query within the last 10
3987 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
3988 # don't hold on to locks indefinitely. In that case, the commit will
3989 # fail with error `ABORTED`.
3990 #
3991 # If this behavior is undesirable, periodically executing a simple
3992 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
3993 # transaction from becoming idle.
3994 #
3995 # ## Snapshot Read-Only Transactions
3996 #
 3997 # Snapshot read-only transactions provide a simpler method than
3998 # locking read-write transactions for doing several consistent
3999 # reads. However, this type of transaction does not support writes.
4000 #
4001 # Snapshot transactions do not take locks. Instead, they work by
4002 # choosing a Cloud Spanner timestamp, then executing all reads at that
4003 # timestamp. Since they do not acquire locks, they do not block
4004 # concurrent read-write transactions.
4005 #
4006 # Unlike locking read-write transactions, snapshot read-only
4007 # transactions never abort. They can fail if the chosen read
4008 # timestamp is garbage collected; however, the default garbage
4009 # collection policy is generous enough that most applications do not
4010 # need to worry about this in practice.
4011 #
4012 # Snapshot read-only transactions do not need to call
4013 # Commit or
4014 # Rollback (and in fact are not
4015 # permitted to do so).
4016 #
4017 # To execute a snapshot transaction, the client specifies a timestamp
4018 # bound, which tells Cloud Spanner how to choose a read timestamp.
4019 #
4020 # The types of timestamp bound are:
4021 #
4022 # - Strong (the default).
4023 # - Bounded staleness.
4024 # - Exact staleness.
4025 #
4026 # If the Cloud Spanner database to be read is geographically distributed,
4027 # stale read-only transactions can execute more quickly than strong
 4028 # or read-write transactions, because they are able to execute far
4029 # from the leader replica.
4030 #
4031 # Each type of timestamp bound is discussed in detail below.
4032 #
4033 # ### Strong
4034 #
4035 # Strong reads are guaranteed to see the effects of all transactions
4036 # that have committed before the start of the read. Furthermore, all
4037 # rows yielded by a single read are consistent with each other -- if
4038 # any part of the read observes a transaction, all parts of the read
4039 # see the transaction.
4040 #
4041 # Strong reads are not repeatable: two consecutive strong read-only
4042 # transactions might return inconsistent results if there are
4043 # concurrent writes. If consistency across reads is required, the
4044 # reads should be executed within a transaction or at an exact read
4045 # timestamp.
4046 #
4047 # See TransactionOptions.ReadOnly.strong.
4048 #
4049 # ### Exact Staleness
4050 #
4051 # These timestamp bounds execute reads at a user-specified
4052 # timestamp. Reads at a timestamp are guaranteed to see a consistent
4053 # prefix of the global transaction history: they observe
4054 # modifications done by all transactions with a commit timestamp <=
4055 # the read timestamp, and observe none of the modifications done by
4056 # transactions with a larger commit timestamp. They will block until
4057 # all conflicting transactions that may be assigned commit timestamps
4058 # <= the read timestamp have finished.
4059 #
4060 # The timestamp can either be expressed as an absolute Cloud Spanner commit
4061 # timestamp or a staleness relative to the current time.
4062 #
4063 # These modes do not require a "negotiation phase" to pick a
4064 # timestamp. As a result, they execute slightly faster than the
4065 # equivalent boundedly stale concurrency modes. On the other hand,
4066 # boundedly stale reads usually return fresher results.
4067 #
4068 # See TransactionOptions.ReadOnly.read_timestamp and
4069 # TransactionOptions.ReadOnly.exact_staleness.
4070 #
4071 # ### Bounded Staleness
4072 #
4073 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
4074 # subject to a user-provided staleness bound. Cloud Spanner chooses the
4075 # newest timestamp within the staleness bound that allows execution
4076 # of the reads at the closest available replica without blocking.
4077 #
4078 # All rows yielded are consistent with each other -- if any part of
4079 # the read observes a transaction, all parts of the read see the
4080 # transaction. Boundedly stale reads are not repeatable: two stale
4081 # reads, even if they use the same staleness bound, can execute at
4082 # different timestamps and thus return inconsistent results.
4083 #
4084 # Boundedly stale reads execute in two phases: the first phase
4085 # negotiates a timestamp among all replicas needed to serve the
4086 # read. In the second phase, reads are executed at the negotiated
4087 # timestamp.
4088 #
4089 # As a result of the two-phase execution, bounded staleness reads are
4090 # usually a little slower than comparable exact staleness
4091 # reads. However, they are typically able to return fresher
4092 # results, and are more likely to execute at the closest replica.
4093 #
4094 # Because the timestamp negotiation requires up-front knowledge of
4095 # which rows will be read, it can only be used with single-use
4096 # read-only transactions.
4097 #
4098 # See TransactionOptions.ReadOnly.max_staleness and
4099 # TransactionOptions.ReadOnly.min_read_timestamp.
4100 #
4101 # ### Old Read Timestamps and Garbage Collection
4102 #
4103 # Cloud Spanner continuously garbage collects deleted and overwritten data
4104 # in the background to reclaim storage space. This process is known
4105 # as "version GC". By default, version GC reclaims versions after they
4106 # are one hour old. Because of this, Cloud Spanner cannot perform reads
4107 # at read timestamps more than one hour in the past. This
4108 # restriction also applies to in-progress reads and/or SQL queries whose
4109 # timestamps become too old while executing. Reads and SQL queries with
4110 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
4111 "readWrite": { # Options for read-write transactions. # Transaction may write.
4112 #
4113 # Authorization to begin a read-write transaction requires
4114 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
4115 # on the `session` resource.
4116 },
4117 "readOnly": { # Options for read-only transactions. # Transaction will not write.
4118 #
4119 # Authorization to begin a read-only transaction requires
4120 # `spanner.databases.beginReadOnlyTransaction` permission
4121 # on the `session` resource.
4122 "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
4123 #
4124 # This is useful for requesting fresher data than some previous
4125 # read, or data that is fresh enough to observe the effects of some
4126 # previously committed transaction whose timestamp is known.
4127 #
4128 # Note that this option can only be used in single-use transactions.
4129 "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
4130 # reads at a specific timestamp are repeatable; the same read at
4131 # the same timestamp always returns the same data. If the
4132 # timestamp is in the future, the read will block until the
4133 # specified timestamp, modulo the read's deadline.
4134 #
4135 # Useful for large scale consistent reads such as mapreduces, or
4136 # for coordinating many reads against a consistent snapshot of the
4137 # data.
4138 "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
4139 # seconds. Guarantees that all writes that have committed more
4140 # than the specified number of seconds ago are visible. Because
4141 # Cloud Spanner chooses the exact timestamp, this mode works even if
4142 # the client's local clock is substantially skewed from Cloud Spanner
4143 # commit timestamps.
4144 #
4145 # Useful for reading the freshest data available at a nearby
4146 # replica, while bounding the possible staleness if the local
4147 # replica has fallen behind.
4148 #
4149 # Note that this option can only be used in single-use
4150 # transactions.
4151 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
4152 # old. The timestamp is chosen soon after the read is started.
4153 #
4154 # Guarantees that all writes that have committed more than the
4155 # specified number of seconds ago are visible. Because Cloud Spanner
4156 # chooses the exact timestamp, this mode works even if the client's
4157 # local clock is substantially skewed from Cloud Spanner commit
4158 # timestamps.
4159 #
4160 # Useful for reading at nearby replicas without the distributed
4161 # timestamp negotiation overhead of `max_staleness`.
4162 "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
4163 # the Transaction message that describes the transaction.
4164 "strong": True or False, # Read at a timestamp where all previously committed transactions
4165 # are visible.
4166 },
4167 },
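The three timestamp-bound modes described above map directly onto the `readOnly` option fields. A minimal sketch in plain Python dicts, mirroring the JSON field names shown in this document (the `"10s"` and `"15s"` staleness values are illustrative, not taken from this document):

```python
# Three ways to express a snapshot read's timestamp bound, using the
# JSON field names from TransactionOptions.ReadOnly above.

# Strong: see every transaction committed before the read starts.
strong_read = {"readOnly": {"strong": True}}

# Exact staleness: read exactly 10 seconds in the past (example value).
exact_stale = {"readOnly": {"exactStaleness": "10s"}}

# Bounded staleness: let Cloud Spanner pick the newest timestamp at most
# 15 seconds old (example value); usable only in single-use transactions.
# returnReadTimestamp asks for the chosen timestamp back in the response.
bounded_stale = {"readOnly": {"maxStaleness": "15s",
                              "returnReadTimestamp": True}}
```

Each dict would be passed as the `transactionOptions` portion of a request body; only one of the mode fields should be set at a time.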
4168 "id": "A String", # Execute the read or SQL query in a previously-started transaction.
4169 },
4170 "resumeToken": "A String", # If this request is resuming a previously interrupted read,
4171 # `resume_token` should be copied from the last
4172 # PartialResultSet yielded before the interruption. Doing this
4173 # enables the new read to resume where the last read left off. The
4174 # rest of the request parameters must exactly match the request
4175 # that yielded this token.
4176 "keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
4177 # primary keys of the rows in table to be yielded, unless index
4178 # is present. If index is present, then key_set instead names
4179 # index keys in index.
4180 #
4181 # Rows are yielded in table primary key order (if index is empty)
4182 # or index key order (if index is non-empty).
4183 #
4184 # It is not an error for the `key_set` to name rows that do not
4185 # exist in the database. Read yields nothing for nonexistent rows.
4186 # the keys are expected to be in the same table or index. The keys need
4187 # not be sorted in any particular way.
4188 #
4189 # If the same key is specified multiple times in the set (for example
4190 # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
4191 # behaves as if the key were only specified once.
4192 "ranges": [ # A list of key ranges. See KeyRange for more information about
4193 # key range specifications.
4194 { # KeyRange represents a range of rows in a table or index.
4195 #
4196 # A range has a start key and an end key. These keys can be open or
4197 # closed, indicating if the range includes rows with that key.
4198 #
4199 # Keys are represented by lists, where the ith value in the list
4200 # corresponds to the ith component of the table or index primary key.
4201 # Individual values are encoded as described here.
4202 #
4203 # For example, consider the following table definition:
4204 #
4205 # CREATE TABLE UserEvents (
4206 # UserName STRING(MAX),
4207 # EventDate STRING(10)
4208 # ) PRIMARY KEY(UserName, EventDate);
4209 #
4210 # The following keys name rows in this table:
4211 #
4212 # "Bob", "2014-09-23"
4213 #
4214 # Since the `UserEvents` table's `PRIMARY KEY` clause names two
4215 # columns, each `UserEvents` key has two elements; the first is the
4216 # `UserName`, and the second is the `EventDate`.
4217 #
4218 # Key ranges with multiple components are interpreted
4219 # lexicographically by component using the table or index key's declared
4220 # sort order. For example, the following range returns all events for
4221 # user `"Bob"` that occurred in the year 2015:
4222 #
4223 # "start_closed": ["Bob", "2015-01-01"]
4224 # "end_closed": ["Bob", "2015-12-31"]
4225 #
4226 # Start and end keys can omit trailing key components. This affects the
4227 # inclusion and exclusion of rows that exactly match the provided key
4228 # components: if the key is closed, then rows that exactly match the
4229 # provided components are included; if the key is open, then rows
4230 # that exactly match are not included.
4231 #
4232 # For example, the following range includes all events for `"Bob"` that
4233 # occurred during and after the year 2000:
4234 #
4235 # "start_closed": ["Bob", "2000-01-01"]
4236 # "end_closed": ["Bob"]
4237 #
4238 # The next example retrieves all events for `"Bob"`:
4239 #
4240 # "start_closed": ["Bob"]
4241 # "end_closed": ["Bob"]
4242 #
4243 # To retrieve events before the year 2000:
4244 #
4245 # "start_closed": ["Bob"]
4246 # "end_open": ["Bob", "2000-01-01"]
4247 #
4248 # The following range includes all rows in the table:
4249 #
4250 # "start_closed": []
4251 # "end_closed": []
4252 #
4253 # This range returns all users whose `UserName` begins with any
4254 # character from A to C:
4255 #
4256 # "start_closed": ["A"]
4257 # "end_open": ["D"]
4258 #
4259 # This range returns all users whose `UserName` begins with B:
4260 #
4261 # "start_closed": ["B"]
4262 # "end_open": ["C"]
4263 #
4264 # Key ranges honor column sort order. For example, suppose a table is
4265 # defined as follows:
4266 #
4267 # CREATE TABLE DescendingSortedTable (
4268 # Key INT64,
4269 # ...
4270 # ) PRIMARY KEY(Key DESC);
4271 #
4272 # The following range retrieves all rows with key values between 1
4273 # and 100 inclusive:
4274 #
4275 # "start_closed": ["100"]
4276 # "end_closed": ["1"]
4277 #
4278 # Note that 100 is passed as the start, and 1 is passed as the end,
4279 # because `Key` is a descending column in the schema.
4280 "endOpen": [ # If the end is open, then the range excludes rows whose first
4281 # `len(end_open)` key columns exactly match `end_open`.
4282 "",
4283 ],
4284 "startOpen": [ # If the start is open, then the range excludes rows whose first
4285 # `len(start_open)` key columns exactly match `start_open`.
4286 "",
4287 ],
4288 "endClosed": [ # If the end is closed, then the range includes all rows whose
4289 # first `len(end_closed)` key columns exactly match `end_closed`.
4290 "",
4291 ],
4292 "startClosed": [ # If the start is closed, then the range includes all rows whose
4293 # first `len(start_closed)` key columns exactly match `start_closed`.
4294 "",
4295 ],
4296 },
4297 ],
4298 "keys": [ # A list of specific keys. Entries in `keys` should have exactly as
4299 # many elements as there are columns in the primary or index key
4300 # with which this `KeySet` is used. Individual key values are
4301 # encoded as described here.
4302 [
4303 "",
4304 ],
4305 ],
4306 "all": True or False, # For convenience `all` can be set to `true` to indicate that this
4307 # `KeySet` matches all keys in the table or index. Note that any keys
4308 # specified in `keys` or `ranges` are only yielded once.
4309 },
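Putting the `KeySet` fields together: a sketch of the request fragments for the KeyRange examples above, as plain dicts with the JSON field names from this document (`startClosed`/`endClosed`); the `"Alice"` key is a made-up extra row for illustration:

```python
# All of Bob's events in 2015, per the KeyRange example above.
bob_2015 = {
    "ranges": [{
        "startClosed": ["Bob", "2015-01-01"],
        "endClosed": ["Bob", "2015-12-31"],
    }]
}

# A KeySet can also name individual keys; each entry has one element per
# primary-key (or index-key) column.
single_rows = {"keys": [["Bob", "2014-09-23"], ["Alice", "2015-06-01"]]}

# Or match every key in the table or index.
everything = {"all": True}
```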
4310 "limit": "A String", # If greater than zero, only the first `limit` rows are yielded. If `limit`
4311 # is zero, the default is no limit.
4312 "table": "A String", # Required. The name of the table in the database to be read.
4313 "columns": [ # The columns of table to be returned for each row matching
4314 # this request.
4315 "A String",
4316 ],
4317 }
4318
4319 x__xgafv: string, V1 error format.
4320 Allowed values
4321 1 - v1 error format
4322 2 - v2 error format
4323
4324Returns:
4325 An object of the form:
4326
4327 { # Partial results from a streaming read or SQL query. Streaming reads and
4328 # SQL queries better tolerate large result sets, large rows, and large
4329 # values, but are a little trickier to consume.
4330 "resumeToken": "A String", # Streaming calls might be interrupted for a variety of reasons, such
4331 # as TCP connection loss. If this occurs, the stream of results can
4332 # be resumed by re-sending the original request and including
4333 # `resume_token`. Note that executing any other transaction in the
4334 # same session invalidates the token.
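The `resume_token` contract above suggests a retry loop: keep the last token seen and re-send the otherwise-identical request after an interruption. A sketch only; `stream_read` and `read_all` are hypothetical names standing in for the actual API call, and everything except the token-handling pattern is an assumption:

```python
def read_all(request, stream_read):
    """Collect all values from a streaming read, resuming on interruption.

    stream_read(request) is assumed to yield PartialResultSet-shaped dicts
    and raise ConnectionError if the stream is interrupted.
    """
    results, token = [], None
    while True:
        req = dict(request)
        if token:
            # All other request parameters must exactly match the request
            # that yielded this token.
            req["resumeToken"] = token
        try:
            for partial in stream_read(req):
                token = partial.get("resumeToken", token)
                results.extend(partial.get("values", []))
            return results
        except ConnectionError:
            continue  # retry from the last token
```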
4335 "chunkedValue": True or False, # If true, then the final value in values is chunked, and must
4336 # be combined with more values from subsequent `PartialResultSet`s
4337 # to obtain a complete field value.
4338 "values": [ # A streamed result set consists of a stream of values, which might
4339 # be split into many `PartialResultSet` messages to accommodate
4340 # large rows and/or large values. Every N complete values defines a
4341 # row, where N is equal to the number of entries in
4342 # metadata.row_type.fields.
4343 #
4344 # Most values are encoded based on type as described
4345 # here.
4346 #
4347 # It is possible that the last value in values is "chunked",
4348 # meaning that the rest of the value is sent in subsequent
4349 # `PartialResultSet`(s). This is denoted by the chunked_value
4350 # field. Two or more chunked values can be merged to form a
4351 # complete value as follows:
4352 #
4353 # * `bool/number/null`: cannot be chunked
4354 # * `string`: concatenate the strings
4355 # * `list`: concatenate the lists. If the last element in a list is a
4356 # `string`, `list`, or `object`, merge it with the first element in
4357 # the next list by applying these rules recursively.
4358 # * `object`: concatenate the (field name, field value) pairs. If a
4359 # field name is duplicated, then apply these rules recursively
4360 # to merge the field values.
4361 #
4362 # Some examples of merging:
4363 #
4364 # # Strings are concatenated.
4365 # "foo", "bar" => "foobar"
4366 #
4367 # # Lists of non-strings are concatenated.
4368 # [2, 3], [4] => [2, 3, 4]
4369 #
4370 # # Lists are concatenated, but the last and first elements are merged
4371 # # because they are strings.
4372 # ["a", "b"], ["c", "d"] => ["a", "bc", "d"]
4373 #
4374 # # Lists are concatenated, but the last and first elements are merged
4375 # # because they are lists. Recursively, the last and first elements
4376 # # of the inner lists are merged because they are strings.
4377 # ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"]
4378 #
4379 # # Non-overlapping object fields are combined.
4380 # {"a": "1"}, {"b": "2"} => {"a": "1", "b": "2"}
4381 #
4382 # # Overlapping object fields are merged.
4383 # {"a": "1"}, {"a": "2"} => {"a": "12"}
4384 #
4385 # # Examples of merging objects containing lists of strings.
4386 # {"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]}
4387 #
4388 # For a more complete example, suppose a streaming SQL query is
4389 # yielding a result set whose rows contain a single string
4390 # field. The following `PartialResultSet`s might be yielded:
4391 #
4392 # {
4393 # "metadata": { ... }
4394 # "values": ["Hello", "W"]
4395 # "chunked_value": true
4396 # "resume_token": "Af65..."
4397 # }
4398 # {
4399 # "values": ["orl"]
4400 # "chunked_value": true
4401 # "resume_token": "Bqp2..."
4402 # }
4403 # {
4404 # "values": ["d"]
4405 # "resume_token": "Zx1B..."
4406 # }
4407 #
4408 # This sequence of `PartialResultSet`s encodes two rows, one
4409 # containing the field value `"Hello"`, and a second containing the
4410 # field value `"World" = "W" + "orl" + "d"`.
4411 "",
4412 ],
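The chunk-merging rules above can be written down mechanically. A sketch (the function name `merge_chunks` is mine; the type check on adjacent list elements is my reading of "merge it with the first element in the next list"):

```python
def merge_chunks(a, b):
    """Merge two adjacent chunked values per the rules documented above."""
    if isinstance(a, str) and isinstance(b, str):
        return a + b  # strings concatenate
    if isinstance(a, list) and isinstance(b, list):
        # If the boundary elements are both strings, both lists, or both
        # objects, merge them recursively; otherwise just concatenate.
        if (a and b and isinstance(a[-1], (str, list, dict))
                and isinstance(b[0], type(a[-1]))):
            return a[:-1] + [merge_chunks(a[-1], b[0])] + b[1:]
        return a + b
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for key, value in b.items():
            # Duplicated field names merge recursively.
            out[key] = merge_chunks(out[key], value) if key in out else value
        return out
    raise TypeError("bool/number/null values cannot be chunked")
```

Run against the document's own examples, this reproduces `"foobar"`, `[2, 3, 4]`, `["a", "bc", "d"]`, and the object-merge results.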
4413 "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the query that produced this
4414 # streaming result set. These can be requested by setting
4415 # ExecuteSqlRequest.query_mode and are sent
4416 # only once with the last response in the stream.
4417 "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
4418 "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
4419 # with the plan root. Each PlanNode's `id` corresponds to its index in
4420 # `plan_nodes`.
4421 { # Node information for nodes appearing in a QueryPlan.plan_nodes.
4422 "index": 42, # The `PlanNode`'s index in node list.
4423 "kind": "A String", # Used to determine the type of node. May be needed for visualizing
4424 # different kinds of nodes differently. For example, If the node is a
4425 # SCALAR node, it will have a condensed representation
4426 # which can be used to directly embed a description of the node in its
4427 # parent.
4428 "displayName": "A String", # The display name for the node.
4429 "executionStats": { # The execution statistics associated with the node, contained in a group of
4430 # key-value pairs. Only present if the plan was returned as a result of a
4431 # profile query. For example, number of executions, number of rows/time per
4432 # execution etc.
4433 "a_key": "", # Properties of the object.
4434 },
4435 "childLinks": [ # List of child node `index`es and their relationship to this parent.
4436 { # Metadata associated with a parent-child relationship appearing in a
4437 # PlanNode.
4438 "variable": "A String", # Only present if the child node is SCALAR and corresponds
4439 # to an output variable of the parent node. The field carries the name of
4440 # the output variable.
4441 # For example, a `TableScan` operator that reads rows from a table will
4442 # have child links to the `SCALAR` nodes representing the output variables
4443 # created for each column that is read by the operator. The corresponding
4444 # `variable` fields will be set to the variable names assigned to the
4445 # columns.
4446 "childIndex": 42, # The node to which the link points.
4447 "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
4448 # distinguish between the build child and the probe child, or in the case
4449 # of the child being an output variable, to represent the tag associated
4450 # with the output variable.
4451 },
4452 ],
4453 "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
4454 # `SCALAR` PlanNode(s).
4455 "subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases
4456 # where the `description` string of this node references a `SCALAR`
4457 # subquery contained in the expression subtree rooted at this node. The
4458 # referenced `SCALAR` subquery may not necessarily be a direct child of
4459 # this node.
4460 "a_key": 42,
4461 },
4462 "description": "A String", # A string representation of the expression subtree rooted at this node.
4463 },
4464 "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
4465 # For example, a Parameter Reference node could have the following
4466 # information in its metadata:
4467 #
4468 # {
4469 # "parameter_reference": "param1",
4470 # "parameter_type": "array"
4471 # }
4472 "a_key": "", # Properties of the object.
4473 },
4474 },
4475 ],
4476 },
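Since `plan_nodes` is a flat pre-order list and each `childLinks` entry carries a `childIndex` into it, the tree can be reconstructed by index lookups. A sketch; the `walk` helper and the two-node plan are illustrative, not from this document:

```python
def walk(plan_nodes, index=0, depth=0, out=None):
    """Render a QueryPlan's plan_nodes list as an indented tree of names."""
    out = [] if out is None else out
    node = plan_nodes[index]
    out.append("  " * depth + node["displayName"])
    for link in node.get("childLinks", []):
        # childIndex is the child's position in the flat plan_nodes list.
        walk(plan_nodes, link["childIndex"], depth + 1, out)
    return out

# A hypothetical two-node plan, for illustration only.
plan_nodes = [
    {"index": 0, "displayName": "Serialize Result",
     "childLinks": [{"childIndex": 1}]},
    {"index": 1, "displayName": "TableScan", "childLinks": []},
]
```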
4477 "queryStats": { # Aggregated statistics from the execution of the query. Only present when
4478 # the query is profiled. For example, a query could return the statistics as
4479 # follows:
4480 #
4481 # {
4482 # "rows_returned": "3",
4483 # "elapsed_time": "1.22 secs",
4484 # "cpu_time": "1.19 secs"
4485 # }
4486 "a_key": "", # Properties of the object.
4487 },
4488 },
4489 "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
4490 # Only present in the first response.
4491 "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
4492 # set. For example, a SQL query like `"SELECT UserId, UserName FROM
4493 # Users"` could return a `row_type` value like:
4494 #
4495 # "fields": [
4496 # { "name": "UserId", "type": { "code": "INT64" } },
4497 # { "name": "UserName", "type": { "code": "STRING" } },
4498 # ]
4499 "fields": [ # The list of fields that make up this struct. Order is
4500 # significant, because values of this struct type are represented as
4501 # lists, where the order of field values matches the order of
4502 # fields in the StructType. In turn, the order of fields
4503 # matches the order of columns in a read request, or the order of
4504 # fields in the `SELECT` clause of a query.
4505 { # Message representing a single field of a struct.
4506 "type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field.
4507 # table cell or returned from an SQL query.
4508 "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
4509 # provides type information for the struct's fields.
4510 "code": "A String", # Required. The TypeCode for this type.
4511 "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
4512 # is the type of the array elements.
4513 },
4514 "name": "A String", # The name of the field. For reads, this is the column name. For
4515 # SQL queries, it is the column alias (e.g., `"Word"` in the
4516 # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
4517 # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
4518 # columns might have an empty name (e.g., `"SELECT
4519 # UPPER(ColName)"`). Note that a query result can contain
4520 # multiple fields with the same name.
4521 },
4522 ],
4523 },
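Because a streamed result set is a flat `values` list where every N complete values form a row (N being the number of entries in `metadata.row_type.fields`), rows can be reassembled by slicing. A sketch; `rows_from_values` and the sample data are mine, with field names taken from the `row_type` example above:

```python
def rows_from_values(values, fields):
    """Group a flat stream of (fully merged) values into rows."""
    n = len(fields)
    assert len(values) % n == 0, "stream ended mid-row"
    return [values[i:i + n] for i in range(0, len(values), n)]

# Field names from the row_type example above; the values are made up.
fields = [{"name": "UserId"}, {"name": "UserName"}]
values = ["1", "ab", "2", "cd"]  # two rows' worth of values
```

This assumes chunked values have already been merged; the slicing itself is only valid on complete values.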
4524 "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
4525 # information about the new transaction is yielded here.
4526 "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
4527 # for the transaction. Not returned by default: see
4528 # TransactionOptions.ReadOnly.return_read_timestamp.
4529 "id": "A String", # `id` may be used to identify the transaction in subsequent
4530 # Read,
4531 # ExecuteSql,
4532 # Commit, or
4533 # Rollback calls.
4534 #
4535 # Single-use read-only transactions do not have IDs, because
4536 # single-use transactions do not support multiple requests.
4537 },
4538 },
4539 }</pre>
4540</div>
4541
4542</body></html>