1<html><body>
2<style>
3
4body, h1, h2, h3, div, span, p, pre, a {
5 margin: 0;
6 padding: 0;
7 border: 0;
8 font-weight: inherit;
9 font-style: inherit;
10 font-size: 100%;
11 font-family: inherit;
12 vertical-align: baseline;
13}
14
15body {
16 font-size: 13px;
17 padding: 1em;
18}
19
20h1 {
21 font-size: 26px;
22 margin-bottom: 1em;
23}
24
25h2 {
26 font-size: 24px;
27 margin-bottom: 1em;
28}
29
30h3 {
31 font-size: 20px;
32 margin-bottom: 1em;
33 margin-top: 1em;
34}
35
36pre, code {
37 line-height: 1.5;
38 font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
39}
40
41pre {
42 margin-top: 0.5em;
43}
44
45h1, h2, h3, p {
46  font-family: Arial, sans-serif;
47}
48
49h1, h2, h3 {
50 border-bottom: solid #CCC 1px;
51}
52
53.toc_element {
54 margin-top: 0.5em;
55}
56
57.firstline {
58  margin-left: 2em;
59}
60
61.method {
62 margin-top: 1em;
63 border: solid 1px #CCC;
64 padding: 1em;
65 background: #EEE;
66}
67
68.details {
69 font-weight: bold;
70 font-size: 14px;
71}
72
73</style>
74
75<h1><a href="spanner_v1.html">Cloud Spanner API</a> . <a href="spanner_v1.projects.html">projects</a> . <a href="spanner_v1.projects.instances.html">instances</a> . <a href="spanner_v1.projects.instances.databases.html">databases</a> . <a href="spanner_v1.projects.instances.databases.sessions.html">sessions</a></h1>
76<h2>Instance Methods</h2>
77<p class="toc_element">
78 <code><a href="#beginTransaction">beginTransaction(session, body, x__xgafv=None)</a></code></p>
79<p class="firstline">Begins a new transaction. This step can often be skipped:</p>
80<p class="toc_element">
81 <code><a href="#commit">commit(session, body, x__xgafv=None)</a></code></p>
82<p class="firstline">Commits a transaction. The request includes the mutations to be</p>
83<p class="toc_element">
84 <code><a href="#create">create(database, x__xgafv=None)</a></code></p>
85<p class="firstline">Creates a new session. A session can be used to perform</p>
86<p class="toc_element">
87 <code><a href="#delete">delete(name, x__xgafv=None)</a></code></p>
88<p class="firstline">Ends a session, releasing server resources associated with it.</p>
89<p class="toc_element">
90 <code><a href="#executeSql">executeSql(session, body, x__xgafv=None)</a></code></p>
91<p class="firstline">Executes an SQL query, returning all rows in a single reply. This</p>
92<p class="toc_element">
93 <code><a href="#executeStreamingSql">executeStreamingSql(session, body, x__xgafv=None)</a></code></p>
94<p class="firstline">Like ExecuteSql, except returns the result</p>
95<p class="toc_element">
96 <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
97<p class="firstline">Gets a session. Returns `NOT_FOUND` if the session does not exist.</p>
98<p class="toc_element">
99 <code><a href="#read">read(session, body, x__xgafv=None)</a></code></p>
100<p class="firstline">Reads rows from the database using key lookups and scans, as a</p>
101<p class="toc_element">
102 <code><a href="#rollback">rollback(session, body, x__xgafv=None)</a></code></p>
103<p class="firstline">Rolls back a transaction, releasing any locks it holds. It is a good</p>
104<p class="toc_element">
105 <code><a href="#streamingRead">streamingRead(session, body, x__xgafv=None)</a></code></p>
106<p class="firstline">Like Read, except returns the result set as a</p>
107<h3>Method Details</h3>
108<div class="method">
109 <code class="details" id="beginTransaction">beginTransaction(session, body, x__xgafv=None)</code>
110 <pre>Begins a new transaction. This step can often be skipped:
111Read, ExecuteSql and
112Commit can begin a new transaction as a
113side-effect.
114
115Args:
116 session: string, Required. The session in which the transaction runs. (required)
117 body: object, The request body. (required)
118 The object takes the form of:
119
120{ # The request for BeginTransaction.
121 "options": { # # Transactions # Required. Options for the new transaction.
122 #
123 #
124 # Each session can have at most one active transaction at a time. After the
125 # active transaction is completed, the session can immediately be
126 # re-used for the next transaction. It is not necessary to create a
127 # new session for each transaction.
128 #
129 # # Transaction Modes
130 #
131 # Cloud Spanner supports two transaction modes:
132 #
133 # 1. Locking read-write. This type of transaction is the only way
134 # to write data into Cloud Spanner. These transactions rely on
135 # pessimistic locking and, if necessary, two-phase commit.
136 # Locking read-write transactions may abort, requiring the
137 # application to retry.
138 #
139 # 2. Snapshot read-only. This transaction type provides guaranteed
140 # consistency across several reads, but does not allow
141 # writes. Snapshot read-only transactions can be configured to
142 # read at timestamps in the past. Snapshot read-only
143 # transactions do not need to be committed.
144 #
145 # For transactions that only read, snapshot read-only transactions
146 # provide simpler semantics and are almost always faster. In
147 # particular, read-only transactions do not take locks, so they do
148 # not conflict with read-write transactions. As a consequence of not
149 # taking locks, they also do not abort, so retry loops are not needed.
150 #
151 # Transactions may only read/write data in a single database. They
152 # may, however, read/write data in different tables within that
153 # database.
154 #
155 # ## Locking Read-Write Transactions
156 #
157 # Locking transactions may be used to atomically read-modify-write
158 # data anywhere in a database. This type of transaction is externally
159 # consistent.
160 #
161 # Clients should attempt to minimize the amount of time a transaction
162 # is active. Faster transactions commit with higher probability
163 # and cause less contention. Cloud Spanner attempts to keep read locks
164 # active as long as the transaction continues to do reads, and the
165 # transaction has not been terminated by
166 # Commit or
167 # Rollback. Long periods of
168 # inactivity at the client may cause Cloud Spanner to release a
169 # transaction's locks and abort it.
170 #
171 # Reads performed within a transaction acquire locks on the data
172 # being read. Writes can only be done at commit time, after all reads
173 # have been completed.
174 # Conceptually, a read-write transaction consists of zero or more
175 # reads or SQL queries followed by
176 # Commit. At any time before
177 # Commit, the client can send a
178 # Rollback request to abort the
179 # transaction.
180 #
181 # ### Semantics
182 #
183 # Cloud Spanner can commit the transaction if all read locks it acquired
184 # are still valid at commit time, and it is able to acquire write
185 # locks for all writes. Cloud Spanner can abort the transaction for any
186 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
187 # that the transaction has not modified any user data in Cloud Spanner.
188 #
189 # Unless the transaction commits, Cloud Spanner makes no guarantees about
190 # how long the transaction's locks were held for. It is an error to
191 # use Cloud Spanner locks for any sort of mutual exclusion other than
192 # between Cloud Spanner transactions themselves.
193 #
194 # ### Retrying Aborted Transactions
195 #
196 # When a transaction aborts, the application can choose to retry the
197 # whole transaction again. To maximize the chances of successfully
198 # committing the retry, the client should execute the retry in the
199 # same session as the original attempt. The original session's lock
200 # priority increases with each consecutive abort, meaning that each
201 # attempt has a slightly better chance of success than the previous.
202 #
203 # Under some circumstances (e.g., many transactions attempting to
204 # modify the same row(s)), a transaction can abort many times in a
205 # short period before successfully committing. Thus, it is not a good
206 # idea to cap the number of retries a transaction can attempt;
207 # instead, it is better to limit the total amount of wall time spent
208 # retrying.
209 #
210 # ### Idle Transactions
211 #
212 # A transaction is considered idle if it has no outstanding reads or
213 # SQL queries and has not started a read or SQL query within the last 10
214 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
215 # don't hold on to locks indefinitely. In that case, the commit will
216 # fail with error `ABORTED`.
217 #
218 # If this behavior is undesirable, periodically executing a simple
219 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
220 # transaction from becoming idle.
221 #
222 # ## Snapshot Read-Only Transactions
223 #
224      # Snapshot read-only transactions provide a simpler method than
225 # locking read-write transactions for doing several consistent
226 # reads. However, this type of transaction does not support writes.
227 #
228 # Snapshot transactions do not take locks. Instead, they work by
229 # choosing a Cloud Spanner timestamp, then executing all reads at that
230 # timestamp. Since they do not acquire locks, they do not block
231 # concurrent read-write transactions.
232 #
233 # Unlike locking read-write transactions, snapshot read-only
234 # transactions never abort. They can fail if the chosen read
235 # timestamp is garbage collected; however, the default garbage
236 # collection policy is generous enough that most applications do not
237 # need to worry about this in practice.
238 #
239 # Snapshot read-only transactions do not need to call
240 # Commit or
241 # Rollback (and in fact are not
242 # permitted to do so).
243 #
244 # To execute a snapshot transaction, the client specifies a timestamp
245 # bound, which tells Cloud Spanner how to choose a read timestamp.
246 #
247 # The types of timestamp bound are:
248 #
249 # - Strong (the default).
250 # - Bounded staleness.
251 # - Exact staleness.
252 #
253 # If the Cloud Spanner database to be read is geographically distributed,
254 # stale read-only transactions can execute more quickly than strong
255      # or read-write transactions, because they are able to execute far
256 # from the leader replica.
257 #
258 # Each type of timestamp bound is discussed in detail below.
259 #
260 # ### Strong
261 #
262 # Strong reads are guaranteed to see the effects of all transactions
263 # that have committed before the start of the read. Furthermore, all
264 # rows yielded by a single read are consistent with each other -- if
265 # any part of the read observes a transaction, all parts of the read
266 # see the transaction.
267 #
268 # Strong reads are not repeatable: two consecutive strong read-only
269 # transactions might return inconsistent results if there are
270 # concurrent writes. If consistency across reads is required, the
271 # reads should be executed within a transaction or at an exact read
272 # timestamp.
273 #
274 # See TransactionOptions.ReadOnly.strong.
275 #
276 # ### Exact Staleness
277 #
278 # These timestamp bounds execute reads at a user-specified
279 # timestamp. Reads at a timestamp are guaranteed to see a consistent
280 # prefix of the global transaction history: they observe
281 # modifications done by all transactions with a commit timestamp <=
282 # the read timestamp, and observe none of the modifications done by
283 # transactions with a larger commit timestamp. They will block until
284 # all conflicting transactions that may be assigned commit timestamps
285 # <= the read timestamp have finished.
286 #
287 # The timestamp can either be expressed as an absolute Cloud Spanner commit
288 # timestamp or a staleness relative to the current time.
289 #
290 # These modes do not require a "negotiation phase" to pick a
291 # timestamp. As a result, they execute slightly faster than the
292 # equivalent boundedly stale concurrency modes. On the other hand,
293 # boundedly stale reads usually return fresher results.
294 #
295 # See TransactionOptions.ReadOnly.read_timestamp and
296 # TransactionOptions.ReadOnly.exact_staleness.
297 #
298 # ### Bounded Staleness
299 #
300 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
301 # subject to a user-provided staleness bound. Cloud Spanner chooses the
302 # newest timestamp within the staleness bound that allows execution
303 # of the reads at the closest available replica without blocking.
304 #
305 # All rows yielded are consistent with each other -- if any part of
306 # the read observes a transaction, all parts of the read see the
307 # transaction. Boundedly stale reads are not repeatable: two stale
308 # reads, even if they use the same staleness bound, can execute at
309 # different timestamps and thus return inconsistent results.
310 #
311 # Boundedly stale reads execute in two phases: the first phase
312 # negotiates a timestamp among all replicas needed to serve the
313 # read. In the second phase, reads are executed at the negotiated
314 # timestamp.
315 #
316      # As a result of the two-phase execution, bounded staleness reads are
317 # usually a little slower than comparable exact staleness
318 # reads. However, they are typically able to return fresher
319 # results, and are more likely to execute at the closest replica.
320 #
321 # Because the timestamp negotiation requires up-front knowledge of
322 # which rows will be read, it can only be used with single-use
323 # read-only transactions.
324 #
325 # See TransactionOptions.ReadOnly.max_staleness and
326 # TransactionOptions.ReadOnly.min_read_timestamp.
327 #
328 # ### Old Read Timestamps and Garbage Collection
329 #
330 # Cloud Spanner continuously garbage collects deleted and overwritten data
331 # in the background to reclaim storage space. This process is known
332 # as "version GC". By default, version GC reclaims versions after they
333 # are one hour old. Because of this, Cloud Spanner cannot perform reads
334 # at read timestamps more than one hour in the past. This
335 # restriction also applies to in-progress reads and/or SQL queries whose
336      # timestamps become too old while executing. Reads and SQL queries with
337 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
338 "readWrite": { # Options for read-write transactions. # Transaction may write.
339 #
340 # Authorization to begin a read-write transaction requires
341 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
342 # on the `session` resource.
343 },
344 "readOnly": { # Options for read-only transactions. # Transaction will not write.
345 #
346 # Authorization to begin a read-only transaction requires
347 # `spanner.databases.beginReadOnlyTransaction` permission
348 # on the `session` resource.
349 "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
350 #
351 # This is useful for requesting fresher data than some previous
352 # read, or data that is fresh enough to observe the effects of some
353 # previously committed transaction whose timestamp is known.
354 #
355 # Note that this option can only be used in single-use transactions.
356      "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
357 # the Transaction message that describes the transaction.
358      "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
359 # seconds. Guarantees that all writes that have committed more
360 # than the specified number of seconds ago are visible. Because
361 # Cloud Spanner chooses the exact timestamp, this mode works even if
362 # the client's local clock is substantially skewed from Cloud Spanner
363 # commit timestamps.
364 #
365 # Useful for reading the freshest data available at a nearby
366 # replica, while bounding the possible staleness if the local
367 # replica has fallen behind.
368 #
369 # Note that this option can only be used in single-use
370 # transactions.
371 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
372 # old. The timestamp is chosen soon after the read is started.
373 #
374 # Guarantees that all writes that have committed more than the
375 # specified number of seconds ago are visible. Because Cloud Spanner
376 # chooses the exact timestamp, this mode works even if the client's
377 # local clock is substantially skewed from Cloud Spanner commit
378 # timestamps.
379 #
380 # Useful for reading at nearby replicas without the distributed
381 # timestamp negotiation overhead of `max_staleness`.
382      "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
383 # reads at a specific timestamp are repeatable; the same read at
384 # the same timestamp always returns the same data. If the
385 # timestamp is in the future, the read will block until the
386 # specified timestamp, modulo the read's deadline.
387 #
388 # Useful for large scale consistent reads such as mapreduces, or
389 # for coordinating many reads against a consistent snapshot of the
390 # data.
391      "strong": True or False, # Read at a timestamp where all previously committed transactions
392 # are visible.
393 },
394 },
395 }
396
397 x__xgafv: string, V1 error format.
398 Allowed values
399 1 - v1 error format
400 2 - v2 error format
401
402Returns:
403 An object of the form:
404
405 { # A transaction.
406 "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
407 # for the transaction. Not returned by default: see
408 # TransactionOptions.ReadOnly.return_read_timestamp.
409 "id": "A String", # `id` may be used to identify the transaction in subsequent
410 # Read,
411 # ExecuteSql,
412 # Commit, or
413 # Rollback calls.
414 #
415 # Single-use read-only transactions do not have IDs, because
416 # single-use transactions do not support multiple requests.
417 }</pre>
418</div>
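<p>The request body above boils down to a small dict. As a hedged sketch, the helper below assembles one for a strong snapshot read-only transaction; the field names come from the structure documented above, while the helper itself and the choice of options are illustrative:</p>

```python
# Sketch: build a BeginTransaction request body for a strong
# snapshot read-only transaction, using the field names from the
# documentation above. The helper is hypothetical convenience code.
def begin_transaction_body(return_read_timestamp=False):
    return {
        'options': {
            'readOnly': {
                'strong': True,
                'returnReadTimestamp': return_read_timestamp,
            }
        }
    }

body = begin_transaction_body(return_read_timestamp=True)
# `body` would then be passed as body=... to beginTransaction(session, body).
```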
419
420<div class="method">
421 <code class="details" id="commit">commit(session, body, x__xgafv=None)</code>
422 <pre>Commits a transaction. The request includes the mutations to be
423applied to rows in the database.
424
425`Commit` might return an `ABORTED` error. This can occur at any time;
426commonly, the cause is conflicts with concurrent
427transactions. However, it can also happen for a variety of other
428reasons. If `Commit` returns `ABORTED`, the caller should re-attempt
429the transaction from the beginning, re-using the same session.
430
431Args:
432 session: string, Required. The session in which the transaction to be committed is running. (required)
433 body: object, The request body. (required)
434 The object takes the form of:
435
436{ # The request for Commit.
437 "transactionId": "A String", # Commit a previously-started transaction.
438 "mutations": [ # The mutations to be executed when this transaction commits. All
439 # mutations are applied atomically, in the order they appear in
440 # this list.
441 { # A modification to one or more Cloud Spanner rows. Mutations can be
442 # applied to a Cloud Spanner database by sending them in a
443 # Commit call.
444 "insert": { # Arguments to insert, update, insert_or_update, and # Insert new rows in a table. If any of the rows already exist,
445 # the write or transaction fails with error `ALREADY_EXISTS`.
446 # replace operations.
447 "table": "A String", # Required. The table whose rows will be written.
448 "values": [ # The values to be written. `values` can contain more than one
449 # list of values. If it does, then multiple rows are written, one
450 # for each entry in `values`. Each list in `values` must have
451 # exactly as many entries as there are entries in columns
452 # above. Sending multiple lists is equivalent to sending multiple
453 # `Mutation`s, each containing one `values` entry and repeating
454 # table and columns. Individual values in each list are
455 # encoded as described here.
456 [
457 "",
458 ],
459 ],
460 "columns": [ # The names of the columns in table to be written.
461 #
462 # The list of columns must contain enough columns to allow
463 # Cloud Spanner to derive values for all primary key columns in the
464 # row(s) to be modified.
465 "A String",
466 ],
467 },
468      "delete": { # Arguments to delete operations. # Delete rows from a table. Succeeds whether or not the named
469 # rows were present.
470        "keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. The primary keys of the rows within table to delete.
471 # the keys are expected to be in the same table or index. The keys need
472 # not be sorted in any particular way.
473 #
474 # If the same key is specified multiple times in the set (for example
475 # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
476 # behaves as if the key were only specified once.
477          "keys": [ # A list of specific keys. Entries in `keys` should have exactly as
478 # many elements as there are columns in the primary or index key
479 # with which this `KeySet` is used. Individual key values are
480 # encoded as described here.
481 [
482 "",
483 ],
484 ],
485          "ranges": [ # A list of key ranges. See KeyRange for more information about
486 # key range specifications.
487 { # KeyRange represents a range of rows in a table or index.
488 #
489 # A range has a start key and an end key. These keys can be open or
490 # closed, indicating if the range includes rows with that key.
491 #
492 # Keys are represented by lists, where the ith value in the list
493 # corresponds to the ith component of the table or index primary key.
494 # Individual values are encoded as described here.
495 #
496 # For example, consider the following table definition:
497 #
498 # CREATE TABLE UserEvents (
499 # UserName STRING(MAX),
500 # EventDate STRING(10)
501 # ) PRIMARY KEY(UserName, EventDate);
502 #
503 # The following keys name rows in this table:
504 #
505 # "Bob", "2014-09-23"
506 #
507 # Since the `UserEvents` table's `PRIMARY KEY` clause names two
508 # columns, each `UserEvents` key has two elements; the first is the
509 # `UserName`, and the second is the `EventDate`.
510 #
511 # Key ranges with multiple components are interpreted
512 # lexicographically by component using the table or index key's declared
513 # sort order. For example, the following range returns all events for
514 # user `"Bob"` that occurred in the year 2015:
515 #
516 # "start_closed": ["Bob", "2015-01-01"]
517 # "end_closed": ["Bob", "2015-12-31"]
518 #
519 # Start and end keys can omit trailing key components. This affects the
520 # inclusion and exclusion of rows that exactly match the provided key
521 # components: if the key is closed, then rows that exactly match the
522 # provided components are included; if the key is open, then rows
523 # that exactly match are not included.
524 #
525 # For example, the following range includes all events for `"Bob"` that
526 # occurred during and after the year 2000:
527 #
528 # "start_closed": ["Bob", "2000-01-01"]
529 # "end_closed": ["Bob"]
530 #
531 # The next example retrieves all events for `"Bob"`:
532 #
533 # "start_closed": ["Bob"]
534 # "end_closed": ["Bob"]
535 #
536 # To retrieve events before the year 2000:
537 #
538 # "start_closed": ["Bob"]
539 # "end_open": ["Bob", "2000-01-01"]
540 #
541 # The following range includes all rows in the table:
542 #
543 # "start_closed": []
544 # "end_closed": []
545 #
546 # This range returns all users whose `UserName` begins with any
547 # character from A to C:
548 #
549 # "start_closed": ["A"]
550 # "end_open": ["D"]
551 #
552 # This range returns all users whose `UserName` begins with B:
553 #
554 # "start_closed": ["B"]
555 # "end_open": ["C"]
556 #
557 # Key ranges honor column sort order. For example, suppose a table is
558 # defined as follows:
559 #
560              # CREATE TABLE DescendingSortedTable (
561 # Key INT64,
562 # ...
563 # ) PRIMARY KEY(Key DESC);
564 #
565 # The following range retrieves all rows with key values between 1
566 # and 100 inclusive:
567 #
568 # "start_closed": ["100"]
569 # "end_closed": ["1"]
570 #
571 # Note that 100 is passed as the start, and 1 is passed as the end,
572 # because `Key` is a descending column in the schema.
573 "endOpen": [ # If the end is open, then the range excludes rows whose first
574 # `len(end_open)` key columns exactly match `end_open`.
575 "",
576 ],
577 "startOpen": [ # If the start is open, then the range excludes rows whose first
578 # `len(start_open)` key columns exactly match `start_open`.
579 "",
580 ],
581 "endClosed": [ # If the end is closed, then the range includes all rows whose
582 # first `len(end_closed)` key columns exactly match `end_closed`.
583 "",
584 ],
585 "startClosed": [ # If the start is closed, then the range includes all rows whose
586 # first `len(start_closed)` key columns exactly match `start_closed`.
587 "",
588 ],
589 },
590 ],
591          "all": True or False, # For convenience `all` can be set to `true` to indicate that this
592 # `KeySet` matches all keys in the table or index. Note that any keys
593 # specified in `keys` or `ranges` are only yielded once.
594 },
595        "table": "A String", # Required. The table whose rows will be deleted.
596 },
597 "insertOrUpdate": { # Arguments to insert, update, insert_or_update, and # Like insert, except that if the row already exists, then
598 # its column values are overwritten with the ones provided. Any
599 # column values not explicitly written are preserved.
600 # replace operations.
601 "table": "A String", # Required. The table whose rows will be written.
602 "values": [ # The values to be written. `values` can contain more than one
603 # list of values. If it does, then multiple rows are written, one
604 # for each entry in `values`. Each list in `values` must have
605 # exactly as many entries as there are entries in columns
606 # above. Sending multiple lists is equivalent to sending multiple
607 # `Mutation`s, each containing one `values` entry and repeating
608 # table and columns. Individual values in each list are
609 # encoded as described here.
610 [
611 "",
612 ],
613 ],
614 "columns": [ # The names of the columns in table to be written.
615 #
616 # The list of columns must contain enough columns to allow
617 # Cloud Spanner to derive values for all primary key columns in the
618 # row(s) to be modified.
619 "A String",
620 ],
621 },
622 "update": { # Arguments to insert, update, insert_or_update, and # Update existing rows in a table. If any of the rows does not
623 # already exist, the transaction fails with error `NOT_FOUND`.
624 # replace operations.
625 "table": "A String", # Required. The table whose rows will be written.
626 "values": [ # The values to be written. `values` can contain more than one
627 # list of values. If it does, then multiple rows are written, one
628 # for each entry in `values`. Each list in `values` must have
629 # exactly as many entries as there are entries in columns
630 # above. Sending multiple lists is equivalent to sending multiple
631 # `Mutation`s, each containing one `values` entry and repeating
632 # table and columns. Individual values in each list are
633 # encoded as described here.
634 [
635 "",
636 ],
637 ],
638 "columns": [ # The names of the columns in table to be written.
639 #
640 # The list of columns must contain enough columns to allow
641 # Cloud Spanner to derive values for all primary key columns in the
642 # row(s) to be modified.
643 "A String",
644 ],
645 },
646 "replace": { # Arguments to insert, update, insert_or_update, and # Like insert, except that if the row already exists, it is
647 # deleted, and the column values provided are inserted
648 # instead. Unlike insert_or_update, this means any values not
649 # explicitly written become `NULL`.
650 # replace operations.
651 "table": "A String", # Required. The table whose rows will be written.
652 "values": [ # The values to be written. `values` can contain more than one
653 # list of values. If it does, then multiple rows are written, one
654 # for each entry in `values`. Each list in `values` must have
655 # exactly as many entries as there are entries in columns
656 # above. Sending multiple lists is equivalent to sending multiple
657 # `Mutation`s, each containing one `values` entry and repeating
658 # table and columns. Individual values in each list are
659 # encoded as described here.
660 [
661 "",
662 ],
663 ],
664 "columns": [ # The names of the columns in table to be written.
665 #
666 # The list of columns must contain enough columns to allow
667 # Cloud Spanner to derive values for all primary key columns in the
668 # row(s) to be modified.
669 "A String",
670 ],
671      },
672 },
673 ],
674 "singleUseTransaction": { # # Transactions # Execute mutations in a temporary transaction. Note that unlike
675 # commit of a previously-started transaction, commit with a
676 # temporary transaction is non-idempotent. That is, if the
677 # `CommitRequest` is sent to Cloud Spanner more than once (for
678 # instance, due to retries in the application, or in the
679 # transport library), it is possible that the mutations are
680 # executed more than once. If this is undesirable, use
681 # BeginTransaction and
682 # Commit instead.
683 #
684 #
685 # Each session can have at most one active transaction at a time. After the
686 # active transaction is completed, the session can immediately be
687 # re-used for the next transaction. It is not necessary to create a
688 # new session for each transaction.
689 #
690 # # Transaction Modes
691 #
692 # Cloud Spanner supports two transaction modes:
693 #
694 # 1. Locking read-write. This type of transaction is the only way
695 # to write data into Cloud Spanner. These transactions rely on
696 # pessimistic locking and, if necessary, two-phase commit.
697 # Locking read-write transactions may abort, requiring the
698 # application to retry.
699 #
700 # 2. Snapshot read-only. This transaction type provides guaranteed
701 # consistency across several reads, but does not allow
702 # writes. Snapshot read-only transactions can be configured to
703 # read at timestamps in the past. Snapshot read-only
704 # transactions do not need to be committed.
705 #
706 # For transactions that only read, snapshot read-only transactions
707 # provide simpler semantics and are almost always faster. In
708 # particular, read-only transactions do not take locks, so they do
709 # not conflict with read-write transactions. As a consequence of not
710 # taking locks, they also do not abort, so retry loops are not needed.
711 #
712 # Transactions may only read/write data in a single database. They
713 # may, however, read/write data in different tables within that
714 # database.
715 #
716 # ## Locking Read-Write Transactions
717 #
718 # Locking transactions may be used to atomically read-modify-write
719 # data anywhere in a database. This type of transaction is externally
720 # consistent.
721 #
722 # Clients should attempt to minimize the amount of time a transaction
723 # is active. Faster transactions commit with higher probability
724 # and cause less contention. Cloud Spanner attempts to keep read locks
725 # active as long as the transaction continues to do reads, and the
726 # transaction has not been terminated by
727 # Commit or
728 # Rollback. Long periods of
729 # inactivity at the client may cause Cloud Spanner to release a
730 # transaction's locks and abort it.
731 #
732 # Reads performed within a transaction acquire locks on the data
733 # being read. Writes can only be done at commit time, after all reads
734 # have been completed.
735 # Conceptually, a read-write transaction consists of zero or more
736 # reads or SQL queries followed by
737 # Commit. At any time before
738 # Commit, the client can send a
739 # Rollback request to abort the
740 # transaction.
741 #
742 # ### Semantics
743 #
744 # Cloud Spanner can commit the transaction if all read locks it acquired
745 # are still valid at commit time, and it is able to acquire write
746 # locks for all writes. Cloud Spanner can abort the transaction for any
747 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
748 # that the transaction has not modified any user data in Cloud Spanner.
749 #
750 # Unless the transaction commits, Cloud Spanner makes no guarantees about
751 # how long the transaction's locks were held for. It is an error to
752 # use Cloud Spanner locks for any sort of mutual exclusion other than
753 # between Cloud Spanner transactions themselves.
754 #
755 # ### Retrying Aborted Transactions
756 #
757 # When a transaction aborts, the application can choose to retry the
758 # whole transaction again. To maximize the chances of successfully
759 # committing the retry, the client should execute the retry in the
760 # same session as the original attempt. The original session's lock
761 # priority increases with each consecutive abort, meaning that each
762 # attempt has a slightly better chance of success than the previous.
763 #
764 # Under some circumstances (e.g., many transactions attempting to
765 # modify the same row(s)), a transaction can abort many times in a
766 # short period before successfully committing. Thus, it is not a good
767 # idea to cap the number of retries a transaction can attempt;
768 # instead, it is better to limit the total amount of wall time spent
769 # retrying.
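As an illustrative sketch only (not part of the generated reference), the retry guidance above — unbounded retry count, bounded wall time — might look like this; the `run_transaction` callable and the 60-second budget are hypothetical stand-ins for the application's begin/read/commit sequence:

```python
import time

class TransactionAborted(Exception):
    """Raised when Commit returns the `ABORTED` error."""

def commit_with_retry(run_transaction, wall_time_budget=60.0):
    """Retry a whole aborted transaction, bounded by wall time, not count.

    Each attempt should re-run the complete transaction (reads followed
    by Commit) in the same session as the original attempt, so that the
    session's lock priority grows with every consecutive abort.
    """
    start = time.monotonic()
    while True:
        try:
            return run_transaction()
        except TransactionAborted:
            if time.monotonic() - start > wall_time_budget:
                raise  # budget spent; surface the abort to the caller
            time.sleep(0.01)  # brief pause before re-running everything
```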
770 #
771 # ### Idle Transactions
772 #
773 # A transaction is considered idle if it has no outstanding reads or
774 # SQL queries and has not started a read or SQL query within the last 10
775 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
776 # don't hold on to locks indefinitely. In that case, the commit will
777 # fail with error `ABORTED`.
778 #
779 # If this behavior is undesirable, periodically executing a simple
780 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
781 # transaction from becoming idle.
782 #
783 # ## Snapshot Read-Only Transactions
784 #
785 # Snapshot read-only transactions provide a simpler method than
786 # locking read-write transactions for doing several consistent
787 # reads. However, this type of transaction does not support writes.
788 #
789 # Snapshot transactions do not take locks. Instead, they work by
790 # choosing a Cloud Spanner timestamp, then executing all reads at that
791 # timestamp. Since they do not acquire locks, they do not block
792 # concurrent read-write transactions.
793 #
794 # Unlike locking read-write transactions, snapshot read-only
795 # transactions never abort. They can fail if the chosen read
796 # timestamp is garbage collected; however, the default garbage
797 # collection policy is generous enough that most applications do not
798 # need to worry about this in practice.
799 #
800 # Snapshot read-only transactions do not need to call
801 # Commit or
802 # Rollback (and in fact are not
803 # permitted to do so).
804 #
805 # To execute a snapshot transaction, the client specifies a timestamp
806 # bound, which tells Cloud Spanner how to choose a read timestamp.
807 #
808 # The types of timestamp bound are:
809 #
810 # - Strong (the default).
811 # - Bounded staleness.
812 # - Exact staleness.
813 #
814 # If the Cloud Spanner database to be read is geographically distributed,
815 # stale read-only transactions can execute more quickly than strong
816 # or read-write transactions, because they are able to execute far
817 # from the leader replica.
818 #
819 # Each type of timestamp bound is discussed in detail below.
820 #
821 # ### Strong
822 #
823 # Strong reads are guaranteed to see the effects of all transactions
824 # that have committed before the start of the read. Furthermore, all
825 # rows yielded by a single read are consistent with each other -- if
826 # any part of the read observes a transaction, all parts of the read
827 # see the transaction.
828 #
829 # Strong reads are not repeatable: two consecutive strong read-only
830 # transactions might return inconsistent results if there are
831 # concurrent writes. If consistency across reads is required, the
832 # reads should be executed within a transaction or at an exact read
833 # timestamp.
834 #
835 # See TransactionOptions.ReadOnly.strong.
836 #
837 # ### Exact Staleness
838 #
839 # These timestamp bounds execute reads at a user-specified
840 # timestamp. Reads at a timestamp are guaranteed to see a consistent
841 # prefix of the global transaction history: they observe
842 # modifications done by all transactions with a commit timestamp <=
843 # the read timestamp, and observe none of the modifications done by
844 # transactions with a larger commit timestamp. They will block until
845 # all conflicting transactions that may be assigned commit timestamps
846 # <= the read timestamp have finished.
847 #
848 # The timestamp can either be expressed as an absolute Cloud Spanner commit
849 # timestamp or a staleness relative to the current time.
850 #
851 # These modes do not require a "negotiation phase" to pick a
852 # timestamp. As a result, they execute slightly faster than the
853 # equivalent boundedly stale concurrency modes. On the other hand,
854 # boundedly stale reads usually return fresher results.
855 #
856 # See TransactionOptions.ReadOnly.read_timestamp and
857 # TransactionOptions.ReadOnly.exact_staleness.
858 #
859 # ### Bounded Staleness
860 #
861 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
862 # subject to a user-provided staleness bound. Cloud Spanner chooses the
863 # newest timestamp within the staleness bound that allows execution
864 # of the reads at the closest available replica without blocking.
865 #
866 # All rows yielded are consistent with each other -- if any part of
867 # the read observes a transaction, all parts of the read see the
868 # transaction. Boundedly stale reads are not repeatable: two stale
869 # reads, even if they use the same staleness bound, can execute at
870 # different timestamps and thus return inconsistent results.
871 #
872 # Boundedly stale reads execute in two phases: the first phase
873 # negotiates a timestamp among all replicas needed to serve the
874 # read. In the second phase, reads are executed at the negotiated
875 # timestamp.
876 #
877 # As a result of the two-phase execution, bounded staleness reads are
878 # usually a little slower than comparable exact staleness
879 # reads. However, they are typically able to return fresher
880 # results, and are more likely to execute at the closest replica.
881 #
882 # Because the timestamp negotiation requires up-front knowledge of
883 # which rows will be read, it can only be used with single-use
884 # read-only transactions.
885 #
886 # See TransactionOptions.ReadOnly.max_staleness and
887 # TransactionOptions.ReadOnly.min_read_timestamp.
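As a sketch, the timestamp bounds discussed above map onto the `readOnly` fields of the request body like this; the staleness values and the timestamp are illustrative (durations are strings such as `"10s"`, timestamps RFC 3339):

```python
# Strong read: see all transactions committed before the read starts.
strong_bound = {"readOnly": {"strong": True}}

# Exact staleness: read exactly 15 seconds in the past.
exact_bound = {"readOnly": {"exactStaleness": "15s"}}

# Bounded staleness: Cloud Spanner picks the newest timestamp within
# the last 10 seconds that lets a nearby replica serve without blocking.
bounded_bound = {"readOnly": {"maxStaleness": "10s"}}

# Exact timestamp: repeatable reads at one specific commit timestamp.
timestamp_bound = {"readOnly": {"readTimestamp": "2017-01-15T01:30:15.01Z"}}
```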
888 #
889 # ### Old Read Timestamps and Garbage Collection
890 #
891 # Cloud Spanner continuously garbage collects deleted and overwritten data
892 # in the background to reclaim storage space. This process is known
893 # as "version GC". By default, version GC reclaims versions after they
894 # are one hour old. Because of this, Cloud Spanner cannot perform reads
895 # at read timestamps more than one hour in the past. This
896 # restriction also applies to in-progress reads and/or SQL queries whose
897 # timestamps become too old while executing. Reads and SQL queries with
898 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
899 "readWrite": { # Options for read-write transactions. # Transaction may write.
900 #
901 # Authorization to begin a read-write transaction requires
902 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
903 # on the `session` resource.
904 },
905 "readOnly": { # Options for read-only transactions. # Transaction will not write.
906 #
907 # Authorization to begin a read-only transaction requires
908 # `spanner.databases.beginReadOnlyTransaction` permission
909 # on the `session` resource.
910 "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
911 #
912 # This is useful for requesting fresher data than some previous
913 # read, or data that is fresh enough to observe the effects of some
914 # previously committed transaction whose timestamp is known.
915 #
916 # Note that this option can only be used in single-use transactions.
917 "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
918 # the Transaction message that describes the transaction.
919 "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
920 # seconds. Guarantees that all writes that have committed more
921 # than the specified number of seconds ago are visible. Because
922 # Cloud Spanner chooses the exact timestamp, this mode works even if
923 # the client's local clock is substantially skewed from Cloud Spanner
924 # commit timestamps.
925 #
926 # Useful for reading the freshest data available at a nearby
927 # replica, while bounding the possible staleness if the local
928 # replica has fallen behind.
929 #
930 # Note that this option can only be used in single-use
931 # transactions.
932 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
933 # old. The timestamp is chosen soon after the read is started.
934 #
935 # Guarantees that all writes that have committed more than the
936 # specified number of seconds ago are visible. Because Cloud Spanner
937 # chooses the exact timestamp, this mode works even if the client's
938 # local clock is substantially skewed from Cloud Spanner commit
939 # timestamps.
940 #
941 # Useful for reading at nearby replicas without the distributed
942 # timestamp negotiation overhead of `max_staleness`.
943 "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
944 # reads at a specific timestamp are repeatable; the same read at
945 # the same timestamp always returns the same data. If the
946 # timestamp is in the future, the read will block until the
947 # specified timestamp, modulo the read's deadline.
948 #
949 # Useful for large scale consistent reads such as mapreduces, or
950 # for coordinating many reads against a consistent snapshot of the
951 # data.
952 "strong": True or False, # Read at a timestamp where all previously committed transactions
953 # are visible.
954 },
955 },
956 }
957
958 x__xgafv: string, V1 error format.
959 Allowed values
960 1 - v1 error format
961 2 - v2 error format
962
963Returns:
964 An object of the form:
965
966 { # The response for Commit.
967 "commitTimestamp": "A String", # The Cloud Spanner timestamp at which the transaction committed.
968 }</pre>
969</div>
970
971<div class="method">
972 <code class="details" id="create">create(database, x__xgafv=None)</code>
973 <pre>Creates a new session. A session can be used to perform
974transactions that read and/or modify data in a Cloud Spanner database.
975Sessions are meant to be reused for many consecutive
976transactions.
977
978Sessions can only execute one transaction at a time. To execute
979multiple concurrent read-write/write-only transactions, create
980multiple sessions. Note that standalone reads and queries use a
981transaction internally, and count toward the one transaction
982limit.
983
984Cloud Spanner limits the number of sessions that can exist at any given
985time; thus, it is a good idea to delete idle and/or unneeded sessions.
986Aside from explicit deletes, Cloud Spanner can delete sessions for which no
987operations are sent for more than an hour. If a session is deleted,
988requests to it return `NOT_FOUND`.
989
990Idle sessions can be kept alive by sending a trivial SQL query
991periodically, e.g., `"SELECT 1"`.
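A minimal keep-alive sketch, assuming `sessions` is the collection returned by `service.projects().instances().databases().sessions()` from a discovery-built client; the helper name and session path are hypothetical:

```python
def keep_alive(sessions, session_name):
    """Issue a trivial query so Cloud Spanner does not delete an idle session.

    `session_name` is the full resource name, e.g.
    projects/<p>/instances/<i>/databases/<d>/sessions/<s>.
    """
    return sessions.executeSql(
        session=session_name,
        body={"sql": "SELECT 1"},
    ).execute()
```

Calling this on a timer (for example every 50 minutes) keeps the session inside Cloud Spanner's roughly one-hour idle-deletion window.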
992
993Args:
994 database: string, Required. The database in which the new session is created. (required)
995 x__xgafv: string, V1 error format.
996 Allowed values
997 1 - v1 error format
998 2 - v2 error format
999
1000Returns:
1001 An object of the form:
1002
1003 { # A session in the Cloud Spanner API.
1004 "name": "A String", # Required. The name of the session.
1005 }</pre>
1006</div>
1007
1008<div class="method">
1009 <code class="details" id="delete">delete(name, x__xgafv=None)</code>
1010 <pre>Ends a session, releasing server resources associated with it.
1011
1012Args:
1013 name: string, Required. The name of the session to delete. (required)
1014 x__xgafv: string, V1 error format.
1015 Allowed values
1016 1 - v1 error format
1017 2 - v2 error format
1018
1019Returns:
1020 An object of the form:
1021
1022 { # A generic empty message that you can re-use to avoid defining duplicated
1023 # empty messages in your APIs. A typical example is to use it as the request
1024 # or the response type of an API method. For instance:
1025 #
1026 # service Foo {
1027 # rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
1028 # }
1029 #
1030 # The JSON representation for `Empty` is empty JSON object `{}`.
1031 }</pre>
1032</div>
1033
1034<div class="method">
1035 <code class="details" id="executeSql">executeSql(session, body, x__xgafv=None)</code>
1036 <pre>Executes an SQL query, returning all rows in a single reply. This
1037method cannot be used to return a result set larger than 10 MiB;
1038if the query yields more data than that, the query fails with
1039a `FAILED_PRECONDITION` error.
1040
1041Queries inside read-write transactions might return `ABORTED`. If
1042this occurs, the application should restart the transaction from
1043the beginning. See Transaction for more details.
1044
1045Larger result sets can be fetched in streaming fashion by calling
1046ExecuteStreamingSql instead.
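As a sketch, a helper (hypothetical, not part of the client) that builds an ExecuteSql request body using the temporary strong read-only transaction that is the default when no transaction is supplied; field names follow the request schema below:

```python
def execute_sql_body(sql, params=None):
    """Build an ExecuteSql request body for a single-use strong read."""
    body = {
        "sql": sql,
        "transaction": {"singleUse": {"readOnly": {"strong": True}}},
    }
    if params:
        body["params"] = params  # optional query parameters
    return body
```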
1047
1048Args:
1049 session: string, Required. The session in which the SQL query should be performed. (required)
1050 body: object, The request body. (required)
1051 The object takes the form of:
1052
1053{ # The request for ExecuteSql and
1054 # ExecuteStreamingSql.
1055 "transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
1056 # temporary read-only transaction with strong concurrency.
1057 # Read or
1058 # ExecuteSql call runs.
1059 #
1060 # See TransactionOptions for more information about transactions.
1061 "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
1062 # it. The transaction ID of the new transaction is returned in
1063 # ResultSetMetadata.transaction, which is a Transaction.
1064 #
1065 #
1066 # Each session can have at most one active transaction at a time. After the
1067 # active transaction is completed, the session can immediately be
1068 # re-used for the next transaction. It is not necessary to create a
1069 # new session for each transaction.
1070 #
1071 # # Transaction Modes
1072 #
1073 # Cloud Spanner supports two transaction modes:
1074 #
1075 # 1. Locking read-write. This type of transaction is the only way
1076 # to write data into Cloud Spanner. These transactions rely on
1077 # pessimistic locking and, if necessary, two-phase commit.
1078 # Locking read-write transactions may abort, requiring the
1079 # application to retry.
1080 #
1081 # 2. Snapshot read-only. This transaction type provides guaranteed
1082 # consistency across several reads, but does not allow
1083 # writes. Snapshot read-only transactions can be configured to
1084 # read at timestamps in the past. Snapshot read-only
1085 # transactions do not need to be committed.
1086 #
1087 # For transactions that only read, snapshot read-only transactions
1088 # provide simpler semantics and are almost always faster. In
1089 # particular, read-only transactions do not take locks, so they do
1090 # not conflict with read-write transactions. As a consequence of not
1091 # taking locks, they also do not abort, so retry loops are not needed.
1092 #
1093 # Transactions may only read/write data in a single database. They
1094 # may, however, read/write data in different tables within that
1095 # database.
1096 #
1097 # ## Locking Read-Write Transactions
1098 #
1099 # Locking transactions may be used to atomically read-modify-write
1100 # data anywhere in a database. This type of transaction is externally
1101 # consistent.
1102 #
1103 # Clients should attempt to minimize the amount of time a transaction
1104 # is active. Faster transactions commit with higher probability
1105 # and cause less contention. Cloud Spanner attempts to keep read locks
1106 # active as long as the transaction continues to do reads, and the
1107 # transaction has not been terminated by
1108 # Commit or
1109 # Rollback. Long periods of
1110 # inactivity at the client may cause Cloud Spanner to release a
1111 # transaction's locks and abort it.
1112 #
1113 # Reads performed within a transaction acquire locks on the data
1114 # being read. Writes can only be done at commit time, after all reads
1115 # have been completed.
1116 # Conceptually, a read-write transaction consists of zero or more
1117 # reads or SQL queries followed by
1118 # Commit. At any time before
1119 # Commit, the client can send a
1120 # Rollback request to abort the
1121 # transaction.
1122 #
1123 # ### Semantics
1124 #
1125 # Cloud Spanner can commit the transaction if all read locks it acquired
1126 # are still valid at commit time, and it is able to acquire write
1127 # locks for all writes. Cloud Spanner can abort the transaction for any
1128 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
1129 # that the transaction has not modified any user data in Cloud Spanner.
1130 #
1131 # Unless the transaction commits, Cloud Spanner makes no guarantees about
1132 # how long the transaction's locks were held for. It is an error to
1133 # use Cloud Spanner locks for any sort of mutual exclusion other than
1134 # between Cloud Spanner transactions themselves.
1135 #
1136 # ### Retrying Aborted Transactions
1137 #
1138 # When a transaction aborts, the application can choose to retry the
1139 # whole transaction again. To maximize the chances of successfully
1140 # committing the retry, the client should execute the retry in the
1141 # same session as the original attempt. The original session's lock
1142 # priority increases with each consecutive abort, meaning that each
1143 # attempt has a slightly better chance of success than the previous.
1144 #
1145 # Under some circumstances (e.g., many transactions attempting to
1146 # modify the same row(s)), a transaction can abort many times in a
1147 # short period before successfully committing. Thus, it is not a good
1148 # idea to cap the number of retries a transaction can attempt;
1149 # instead, it is better to limit the total amount of wall time spent
1150 # retrying.
1151 #
1152 # ### Idle Transactions
1153 #
1154 # A transaction is considered idle if it has no outstanding reads or
1155 # SQL queries and has not started a read or SQL query within the last 10
1156 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
1157 # don't hold on to locks indefinitely. In that case, the commit will
1158 # fail with error `ABORTED`.
1159 #
1160 # If this behavior is undesirable, periodically executing a simple
1161 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
1162 # transaction from becoming idle.
1163 #
1164 # ## Snapshot Read-Only Transactions
1165 #
1166 # Snapshot read-only transactions provide a simpler method than
1167 # locking read-write transactions for doing several consistent
1168 # reads. However, this type of transaction does not support writes.
1169 #
1170 # Snapshot transactions do not take locks. Instead, they work by
1171 # choosing a Cloud Spanner timestamp, then executing all reads at that
1172 # timestamp. Since they do not acquire locks, they do not block
1173 # concurrent read-write transactions.
1174 #
1175 # Unlike locking read-write transactions, snapshot read-only
1176 # transactions never abort. They can fail if the chosen read
1177 # timestamp is garbage collected; however, the default garbage
1178 # collection policy is generous enough that most applications do not
1179 # need to worry about this in practice.
1180 #
1181 # Snapshot read-only transactions do not need to call
1182 # Commit or
1183 # Rollback (and in fact are not
1184 # permitted to do so).
1185 #
1186 # To execute a snapshot transaction, the client specifies a timestamp
1187 # bound, which tells Cloud Spanner how to choose a read timestamp.
1188 #
1189 # The types of timestamp bound are:
1190 #
1191 # - Strong (the default).
1192 # - Bounded staleness.
1193 # - Exact staleness.
1194 #
1195 # If the Cloud Spanner database to be read is geographically distributed,
1196 # stale read-only transactions can execute more quickly than strong
1197 # or read-write transactions, because they are able to execute far
1198 # from the leader replica.
1199 #
1200 # Each type of timestamp bound is discussed in detail below.
1201 #
1202 # ### Strong
1203 #
1204 # Strong reads are guaranteed to see the effects of all transactions
1205 # that have committed before the start of the read. Furthermore, all
1206 # rows yielded by a single read are consistent with each other -- if
1207 # any part of the read observes a transaction, all parts of the read
1208 # see the transaction.
1209 #
1210 # Strong reads are not repeatable: two consecutive strong read-only
1211 # transactions might return inconsistent results if there are
1212 # concurrent writes. If consistency across reads is required, the
1213 # reads should be executed within a transaction or at an exact read
1214 # timestamp.
1215 #
1216 # See TransactionOptions.ReadOnly.strong.
1217 #
1218 # ### Exact Staleness
1219 #
1220 # These timestamp bounds execute reads at a user-specified
1221 # timestamp. Reads at a timestamp are guaranteed to see a consistent
1222 # prefix of the global transaction history: they observe
1223 # modifications done by all transactions with a commit timestamp <=
1224 # the read timestamp, and observe none of the modifications done by
1225 # transactions with a larger commit timestamp. They will block until
1226 # all conflicting transactions that may be assigned commit timestamps
1227 # <= the read timestamp have finished.
1228 #
1229 # The timestamp can either be expressed as an absolute Cloud Spanner commit
1230 # timestamp or a staleness relative to the current time.
1231 #
1232 # These modes do not require a "negotiation phase" to pick a
1233 # timestamp. As a result, they execute slightly faster than the
1234 # equivalent boundedly stale concurrency modes. On the other hand,
1235 # boundedly stale reads usually return fresher results.
1236 #
1237 # See TransactionOptions.ReadOnly.read_timestamp and
1238 # TransactionOptions.ReadOnly.exact_staleness.
1239 #
1240 # ### Bounded Staleness
1241 #
1242 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
1243 # subject to a user-provided staleness bound. Cloud Spanner chooses the
1244 # newest timestamp within the staleness bound that allows execution
1245 # of the reads at the closest available replica without blocking.
1246 #
1247 # All rows yielded are consistent with each other -- if any part of
1248 # the read observes a transaction, all parts of the read see the
1249 # transaction. Boundedly stale reads are not repeatable: two stale
1250 # reads, even if they use the same staleness bound, can execute at
1251 # different timestamps and thus return inconsistent results.
1252 #
1253 # Boundedly stale reads execute in two phases: the first phase
1254 # negotiates a timestamp among all replicas needed to serve the
1255 # read. In the second phase, reads are executed at the negotiated
1256 # timestamp.
1257 #
1258 # As a result of the two-phase execution, bounded staleness reads are
1259 # usually a little slower than comparable exact staleness
1260 # reads. However, they are typically able to return fresher
1261 # results, and are more likely to execute at the closest replica.
1262 #
1263 # Because the timestamp negotiation requires up-front knowledge of
1264 # which rows will be read, it can only be used with single-use
1265 # read-only transactions.
1266 #
1267 # See TransactionOptions.ReadOnly.max_staleness and
1268 # TransactionOptions.ReadOnly.min_read_timestamp.
1269 #
1270 # ### Old Read Timestamps and Garbage Collection
1271 #
1272 # Cloud Spanner continuously garbage collects deleted and overwritten data
1273 # in the background to reclaim storage space. This process is known
1274 # as "version GC". By default, version GC reclaims versions after they
1275 # are one hour old. Because of this, Cloud Spanner cannot perform reads
1276 # at read timestamps more than one hour in the past. This
1277 # restriction also applies to in-progress reads and/or SQL queries whose
1278 # timestamps become too old while executing. Reads and SQL queries with
1279 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
1280 "readWrite": { # Options for read-write transactions. # Transaction may write.
1281 #
1282 # Authorization to begin a read-write transaction requires
1283 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
1284 # on the `session` resource.
1285 },
1286 "readOnly": { # Options for read-only transactions. # Transaction will not write.
1287 #
1288 # Authorization to begin a read-only transaction requires
1289 # `spanner.databases.beginReadOnlyTransaction` permission
1290 # on the `session` resource.
1291 "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
1292 #
1293 # This is useful for requesting fresher data than some previous
1294 # read, or data that is fresh enough to observe the effects of some
1295 # previously committed transaction whose timestamp is known.
1296 #
1297 # Note that this option can only be used in single-use transactions.
1298 "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
1299 # the Transaction message that describes the transaction.
1300 "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
1301 # seconds. Guarantees that all writes that have committed more
1302 # than the specified number of seconds ago are visible. Because
1303 # Cloud Spanner chooses the exact timestamp, this mode works even if
1304 # the client's local clock is substantially skewed from Cloud Spanner
1305 # commit timestamps.
1306 #
1307 # Useful for reading the freshest data available at a nearby
1308 # replica, while bounding the possible staleness if the local
1309 # replica has fallen behind.
1310 #
1311 # Note that this option can only be used in single-use
1312 # transactions.
1313 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
1314 # old. The timestamp is chosen soon after the read is started.
1315 #
1316 # Guarantees that all writes that have committed more than the
1317 # specified number of seconds ago are visible. Because Cloud Spanner
1318 # chooses the exact timestamp, this mode works even if the client's
1319 # local clock is substantially skewed from Cloud Spanner commit
1320 # timestamps.
1321 #
1322 # Useful for reading at nearby replicas without the distributed
1323 # timestamp negotiation overhead of `max_staleness`.
1324 "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
1325 # reads at a specific timestamp are repeatable; the same read at
1326 # the same timestamp always returns the same data. If the
1327 # timestamp is in the future, the read will block until the
1328 # specified timestamp, modulo the read's deadline.
1329 #
1330 # Useful for large scale consistent reads such as mapreduces, or
1331 # for coordinating many reads against a consistent snapshot of the
1332 # data.
1333 "strong": True or False, # Read at a timestamp where all previously committed transactions
1334 # are visible.
1335 },
1336 },
1337 "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
1338 # This is the most efficient way to execute a transaction that
1339 # consists of a single SQL query.
1340 #
1341 #
1342 # Each session can have at most one active transaction at a time. After the
1343 # active transaction is completed, the session can immediately be
1344 # re-used for the next transaction. It is not necessary to create a
1345 # new session for each transaction.
1346 #
1347 # # Transaction Modes
1348 #
1349 # Cloud Spanner supports two transaction modes:
1350 #
1351 # 1. Locking read-write. This type of transaction is the only way
1352 # to write data into Cloud Spanner. These transactions rely on
1353 # pessimistic locking and, if necessary, two-phase commit.
1354 # Locking read-write transactions may abort, requiring the
1355 # application to retry.
1356 #
1357 # 2. Snapshot read-only. This transaction type provides guaranteed
1358 # consistency across several reads, but does not allow
1359 # writes. Snapshot read-only transactions can be configured to
1360 # read at timestamps in the past. Snapshot read-only
1361 # transactions do not need to be committed.
1362 #
1363 # For transactions that only read, snapshot read-only transactions
1364 # provide simpler semantics and are almost always faster. In
1365 # particular, read-only transactions do not take locks, so they do
1366 # not conflict with read-write transactions. As a consequence of not
1367 # taking locks, they also do not abort, so retry loops are not needed.
1368 #
1369 # Transactions may only read/write data in a single database. They
1370 # may, however, read/write data in different tables within that
1371 # database.
1372 #
1373 # ## Locking Read-Write Transactions
1374 #
1375 # Locking transactions may be used to atomically read-modify-write
1376 # data anywhere in a database. This type of transaction is externally
1377 # consistent.
1378 #
1379 # Clients should attempt to minimize the amount of time a transaction
1380 # is active. Faster transactions commit with higher probability
1381 # and cause less contention. Cloud Spanner attempts to keep read locks
1382 # active as long as the transaction continues to do reads, and the
1383 # transaction has not been terminated by
1384 # Commit or
1385 # Rollback. Long periods of
1386 # inactivity at the client may cause Cloud Spanner to release a
1387 # transaction's locks and abort it.
1388 #
1389 # Reads performed within a transaction acquire locks on the data
1390 # being read. Writes can only be done at commit time, after all reads
1391 # have been completed.
1392 # Conceptually, a read-write transaction consists of zero or more
1393 # reads or SQL queries followed by
1394 # Commit. At any time before
1395 # Commit, the client can send a
1396 # Rollback request to abort the
1397 # transaction.
1398 #
1399 # ### Semantics
1400 #
1401 # Cloud Spanner can commit the transaction if all read locks it acquired
1402 # are still valid at commit time, and it is able to acquire write
1403 # locks for all writes. Cloud Spanner can abort the transaction for any
1404 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
1405 # that the transaction has not modified any user data in Cloud Spanner.
1406 #
1407 # Unless the transaction commits, Cloud Spanner makes no guarantees about
1408 # how long the transaction's locks were held for. It is an error to
1409 # use Cloud Spanner locks for any sort of mutual exclusion other than
1410 # between Cloud Spanner transactions themselves.
1411 #
1412 # ### Retrying Aborted Transactions
1413 #
1414 # When a transaction aborts, the application can choose to retry the
1415 # whole transaction again. To maximize the chances of successfully
1416 # committing the retry, the client should execute the retry in the
1417 # same session as the original attempt. The original session's lock
1418 # priority increases with each consecutive abort, meaning that each
1419 # attempt has a slightly better chance of success than the previous.
1420 #
1421 # Under some circumstances (e.g., many transactions attempting to
1422 # modify the same row(s)), a transaction can abort many times in a
1423 # short period before successfully committing. Thus, it is not a good
1424 # idea to cap the number of retries a transaction can attempt;
1425 # instead, it is better to limit the total amount of wall time spent
1426 # retrying.
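# As a hedged illustration of the advice above -- bounding total wall
# time rather than capping the retry count -- a client-side retry loop
# might look like the following. `Aborted` and `run_with_retries` are
# hypothetical stand-ins for illustration, not part of this API.

```python
import random
import time

class Aborted(Exception):
    """Stand-in for the ABORTED error surfaced by the client library."""

def run_with_retries(work, deadline_seconds=30.0):
    """Retry `work` until it succeeds or the wall-time budget runs out."""
    start = time.monotonic()
    delay = 0.01
    while True:
        try:
            return work()  # e.g. one full read-modify-write attempt
        except Aborted:
            if time.monotonic() - start > deadline_seconds:
                raise  # budget exhausted; surface the abort to the caller
            # Jittered exponential backoff between attempts.
            time.sleep(delay + random.uniform(0, delay))
            delay = min(delay * 2, 1.0)
```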
1427 #
1428 # ### Idle Transactions
1429 #
1430 # A transaction is considered idle if it has no outstanding reads or
1431 # SQL queries and has not started a read or SQL query within the last 10
1432 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
1433 # don't hold on to locks indefinitely. In that case, the commit will
1434 # fail with error `ABORTED`.
1435 #
1436 # If this behavior is undesirable, periodically executing a simple
1437 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
1438 # transaction from becoming idle.
1439 #
1440 # ## Snapshot Read-Only Transactions
1441 #
1442 # Snapshot read-only transactions provide a simpler method than
1443 # locking read-write transactions for doing several consistent
1444 # reads. However, this type of transaction does not support writes.
1445 #
1446 # Snapshot transactions do not take locks. Instead, they work by
1447 # choosing a Cloud Spanner timestamp, then executing all reads at that
1448 # timestamp. Since they do not acquire locks, they do not block
1449 # concurrent read-write transactions.
1450 #
1451 # Unlike locking read-write transactions, snapshot read-only
1452 # transactions never abort. They can fail if the chosen read
1453 # timestamp is garbage collected; however, the default garbage
1454 # collection policy is generous enough that most applications do not
1455 # need to worry about this in practice.
1456 #
1457 # Snapshot read-only transactions do not need to call
1458 # Commit or
1459 # Rollback (and in fact are not
1460 # permitted to do so).
1461 #
1462 # To execute a snapshot transaction, the client specifies a timestamp
1463 # bound, which tells Cloud Spanner how to choose a read timestamp.
1464 #
1465 # The types of timestamp bound are:
1466 #
1467 # - Strong (the default).
1468 # - Bounded staleness.
1469 # - Exact staleness.
1470 #
1471 # If the Cloud Spanner database to be read is geographically distributed,
1472 # stale read-only transactions can execute more quickly than strong
1473 # or read-write transactions, because they are able to execute far
1474 # from the leader replica.
1475 #
1476 # Each type of timestamp bound is discussed in detail below.
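# As a sketch only, the three bounds above can be expressed as
# `transaction` selectors for a single-use read-only transaction. The
# `singleUse` wrapper and the helper function are assumptions for
# illustration; field names mirror the JSON request body.

```python
def single_use_read_only(bound):
    # Wrap a timestamp bound in a single-use read-only transaction selector.
    return {"singleUse": {"readOnly": bound}}

strong = single_use_read_only({"strong": True})          # strong (the default)
bounded = single_use_read_only({"maxStaleness": "10s"})  # bounded staleness
exact = single_use_read_only({"exactStaleness": "15s"})  # exact staleness
```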
1477 #
1478 # ### Strong
1479 #
1480 # Strong reads are guaranteed to see the effects of all transactions
1481 # that have committed before the start of the read. Furthermore, all
1482 # rows yielded by a single read are consistent with each other -- if
1483 # any part of the read observes a transaction, all parts of the read
1484 # see the transaction.
1485 #
1486 # Strong reads are not repeatable: two consecutive strong read-only
1487 # transactions might return inconsistent results if there are
1488 # concurrent writes. If consistency across reads is required, the
1489 # reads should be executed within a transaction or at an exact read
1490 # timestamp.
1491 #
1492 # See TransactionOptions.ReadOnly.strong.
1493 #
1494 # ### Exact Staleness
1495 #
1496 # These timestamp bounds execute reads at a user-specified
1497 # timestamp. Reads at a timestamp are guaranteed to see a consistent
1498 # prefix of the global transaction history: they observe
1499 # modifications done by all transactions with a commit timestamp <=
1500 # the read timestamp, and observe none of the modifications done by
1501 # transactions with a larger commit timestamp. They will block until
1502 # all conflicting transactions that may be assigned commit timestamps
1503 # <= the read timestamp have finished.
1504 #
1505 # The timestamp can either be expressed as an absolute Cloud Spanner commit
1506 # timestamp or a staleness relative to the current time.
1507 #
1508 # These modes do not require a "negotiation phase" to pick a
1509 # timestamp. As a result, they execute slightly faster than the
1510 # equivalent boundedly stale concurrency modes. On the other hand,
1511 # boundedly stale reads usually return fresher results.
1512 #
1513 # See TransactionOptions.ReadOnly.read_timestamp and
1514 # TransactionOptions.ReadOnly.exact_staleness.
1515 #
1516 # ### Bounded Staleness
1517 #
1518 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
1519 # subject to a user-provided staleness bound. Cloud Spanner chooses the
1520 # newest timestamp within the staleness bound that allows execution
1521 # of the reads at the closest available replica without blocking.
1522 #
1523 # All rows yielded are consistent with each other -- if any part of
1524 # the read observes a transaction, all parts of the read see the
1525 # transaction. Boundedly stale reads are not repeatable: two stale
1526 # reads, even if they use the same staleness bound, can execute at
1527 # different timestamps and thus return inconsistent results.
1528 #
1529 # Boundedly stale reads execute in two phases: the first phase
1530 # negotiates a timestamp among all replicas needed to serve the
1531 # read. In the second phase, reads are executed at the negotiated
1532 # timestamp.
1533 #
1534 # As a result of the two-phase execution, bounded staleness reads are
1535 # usually a little slower than comparable exact staleness
1536 # reads. However, they are typically able to return fresher
1537 # results, and are more likely to execute at the closest replica.
1538 #
1539 # Because the timestamp negotiation requires up-front knowledge of
1540 # which rows will be read, it can only be used with single-use
1541 # read-only transactions.
1542 #
1543 # See TransactionOptions.ReadOnly.max_staleness and
1544 # TransactionOptions.ReadOnly.min_read_timestamp.
1545 #
1546 # ### Old Read Timestamps and Garbage Collection
1547 #
1548 # Cloud Spanner continuously garbage collects deleted and overwritten data
1549 # in the background to reclaim storage space. This process is known
1550 # as "version GC". By default, version GC reclaims versions after they
1551 # are one hour old. Because of this, Cloud Spanner cannot perform reads
1552 # at read timestamps more than one hour in the past. This
1553 # restriction also applies to in-progress reads and/or SQL queries whose
1554 # timestamps become too old while executing. Reads and SQL queries with
1555 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
1556 "readWrite": { # Options for read-write transactions. # Transaction may write.
1557 #
1558 # Authorization to begin a read-write transaction requires
1559 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
1560 # on the `session` resource.
1561 },
1562 "readOnly": { # Options for read-only transactions. # Transaction will not write.
1563 #
1564 # Authorization to begin a read-only transaction requires
1565 # `spanner.databases.beginReadOnlyTransaction` permission
1566 # on the `session` resource.
1567 "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
1568 #
1569 # This is useful for requesting fresher data than some previous
1570 # read, or data that is fresh enough to observe the effects of some
1571 # previously committed transaction whose timestamp is known.
1572 #
1573 # Note that this option can only be used in single-use transactions.
1574 "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
1575 # the Transaction message that describes the transaction.
1576 "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
1577 # seconds. Guarantees that all writes that have committed more
1578 # than the specified number of seconds ago are visible. Because
1579 # Cloud Spanner chooses the exact timestamp, this mode works even if
1580 # the client's local clock is substantially skewed from Cloud Spanner
1581 # commit timestamps.
1582 #
1583 # Useful for reading the freshest data available at a nearby
1584 # replica, while bounding the possible staleness if the local
1585 # replica has fallen behind.
1586 #
1587 # Note that this option can only be used in single-use
1588 # transactions.
1589 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
1590 # old. The timestamp is chosen soon after the read is started.
1591 #
1592 # Guarantees that all writes that have committed more than the
1593 # specified number of seconds ago are visible. Because Cloud Spanner
1594 # chooses the exact timestamp, this mode works even if the client's
1595 # local clock is substantially skewed from Cloud Spanner commit
1596 # timestamps.
1597 #
1598 # Useful for reading at nearby replicas without the distributed
1599 # timestamp negotiation overhead of `max_staleness`.
1600 "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
1601 # reads at a specific timestamp are repeatable; the same read at
1602 # the same timestamp always returns the same data. If the
1603 # timestamp is in the future, the read will block until the
1604 # specified timestamp, modulo the read's deadline.
1605 #
1606 # Useful for large scale consistent reads such as mapreduces, or
1607 # for coordinating many reads against a consistent snapshot of the
1608 # data.
1609 "strong": True or False, # Read at a timestamp where all previously committed transactions
1610 # are visible.
1611 },
1612 },
1613 "id": "A String", # Execute the read or SQL query in a previously-started transaction.
1614 },
1615 "resumeToken": "A String", # If this request is resuming a previously interrupted SQL query
1616 # execution, `resume_token` should be copied from the last
1617 # PartialResultSet yielded before the interruption. Doing this
1618 # enables the new SQL query execution to resume where the last one left
1619 # off. The rest of the request parameters must exactly match the
1620 # request that yielded this token.
1621 "paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
1622 # from a JSON value. For example, values of type `BYTES` and values
1623 # of type `STRING` both appear in params as JSON strings.
1624 #
1625 # In these cases, `param_types` can be used to specify the exact
1626 # SQL type for some or all of the SQL query parameters. See the
1627 # definition of Type for more information
1628 # about SQL types.
1629 "a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
1630 # table cell or returned from an SQL query.
1631 "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
1632 # provides type information for the struct's fields.
1633 "code": "A String", # Required. The TypeCode for this type.
1634 "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
1635 # is the type of the array elements.
1636 },
1637 },
1638 "params": { # The SQL query string can contain parameter placeholders. A parameter
1639 # placeholder consists of `'@'` followed by the parameter
1640 # name. Parameter names consist of any combination of letters,
1641 # numbers, and underscores.
1642 #
1643 # Parameters can appear anywhere that a literal value is expected. The same
1644 # parameter name can be used more than once, for example:
1645 # `"WHERE id > @msg_id AND id < @msg_id + 100"`
1646 #
1647 # It is an error to execute an SQL query with unbound parameters.
1648 #
1649 # Parameter values are specified using `params`, which is a JSON
1650 # object whose keys are parameter names, and whose values are the
1651 # corresponding parameter values.
1652 "a_key": "", # Properties of the object.
1653 },
1654 "sql": "A String", # Required. The SQL query string.
1655 "queryMode": "A String", # Used to control the amount of debugging information returned in
1656 # ResultSetStats.
1657 }
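# To make the parameter rules above concrete, a request body might be
# assembled as below. The table and column names are made up for
# illustration. Note that a BYTES value travels as a base64 JSON string,
# so its SQL type is pinned via `paramTypes`.

```python
import base64

body = {
    "sql": "SELECT * FROM Messages WHERE id > @msg_id AND digest = @digest",
    "params": {
        "msg_id": "42",  # INT64 values are encoded as JSON strings
        "digest": base64.b64encode(b"\x01\x02").decode("ascii"),
    },
    "paramTypes": {
        # Both values arrive as JSON strings, so the intended SQL types
        # are spelled out explicitly.
        "msg_id": {"code": "INT64"},
        "digest": {"code": "BYTES"},
    },
}
```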
1658
1659 x__xgafv: string, V1 error format.
1660 Allowed values
1661 1 - v1 error format
1662 2 - v2 error format
1663
1664Returns:
1665 An object of the form:
1666
1667 { # Results from Read or
1668 # ExecuteSql.
1669 "rows": [ # Each element in `rows` is a row whose format is defined by
1670 # metadata.row_type. The ith element
1671 # in each row matches the ith field in
1672 # metadata.row_type. Elements are
1673 # encoded based on type as described
1674 # here.
1675 [
1676 "",
1677 ],
1678 ],
1679 "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the query that produced this
1680 # result set. These can be requested by setting
1681 # ExecuteSqlRequest.query_mode.
1682 "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
1683 "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
1684 # with the plan root. Each PlanNode's `id` corresponds to its index in
1685 # `plan_nodes`.
1686 { # Node information for nodes appearing in a QueryPlan.plan_nodes.
1687 "index": 42, # The `PlanNode`'s index in node list.
1688 "kind": "A String", # Used to determine the type of node. May be needed for visualizing
1689 # different kinds of nodes differently. For example, if the node is a
1690 # SCALAR node, it will have a condensed representation
1691 # which can be used to directly embed a description of the node in its
1692 # parent.
1693 "displayName": "A String", # The display name for the node.
1694 "executionStats": { # The execution statistics associated with the node, contained in a group of
1695 # key-value pairs. Only present if the plan was returned as a result of a
1696 # profile query. For example, number of executions, number of rows/time per
1697 # execution etc.
1698 "a_key": "", # Properties of the object.
1699 },
1700 "childLinks": [ # List of child node `index`es and their relationship to this parent.
1701 { # Metadata associated with a parent-child relationship appearing in a
1702 # PlanNode.
1703 "variable": "A String", # Only present if the child node is SCALAR and corresponds
1704 # to an output variable of the parent node. The field carries the name of
1705 # the output variable.
1706 # For example, a `TableScan` operator that reads rows from a table will
1707 # have child links to the `SCALAR` nodes representing the output variables
1708 # created for each column that is read by the operator. The corresponding
1709 # `variable` fields will be set to the variable names assigned to the
1710 # columns.
1711 "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
1712 # distinguish between the build child and the probe child, or in the case
1713 # of the child being an output variable, to represent the tag associated
1714 # with the output variable.
1715 "childIndex": 42, # The node to which the link points.
1716 },
1717 ],
1718 "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
1719 # `SCALAR` PlanNode(s).
1720 "subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases
1721 # where the `description` string of this node references a `SCALAR`
1722 # subquery contained in the expression subtree rooted at this node. The
1723 # referenced `SCALAR` subquery may not necessarily be a direct child of
1724 # this node.
1725 "a_key": 42,
1726 },
1727 "description": "A String", # A string representation of the expression subtree rooted at this node.
1728 },
1729 "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
1730 # For example, a Parameter Reference node could have the following
1731 # information in its metadata:
1732 #
1733 # {
1734 # "parameter_reference": "param1",
1735 # "parameter_type": "array"
1736 # }
1737 "a_key": "", # Properties of the object.
1738 },
1739 },
1740 ],
1741 },
1742 "queryStats": { # Aggregated statistics from the execution of the query. Only present when
1743 # the query is profiled. For example, a query could return the statistics as
1744 # follows:
1745 #
1746 # {
1747 # "rows_returned": "3",
1748 # "elapsed_time": "1.22 secs",
1749 # "cpu_time": "1.19 secs"
1750 # }
1751 "a_key": "", # Properties of the object.
1752 },
1753 },
1754 "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
1755 "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
1756 # set. For example, a SQL query like `"SELECT UserId, UserName FROM
1757 # Users"` could return a `row_type` value like:
1758 #
1759 # "fields": [
1760 # { "name": "UserId", "type": { "code": "INT64" } },
1761 # { "name": "UserName", "type": { "code": "STRING" } },
1762 # ]
1763 "fields": [ # The list of fields that make up this struct. Order is
1764 # significant, because values of this struct type are represented as
1765 # lists, where the order of field values matches the order of
1766 # fields in the StructType. In turn, the order of fields
1767 # matches the order of columns in a read request, or the order of
1768 # fields in the `SELECT` clause of a query.
1769 { # Message representing a single field of a struct.
1770 "type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field.
1771 # table cell or returned from an SQL query.
1772 "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
1773 # provides type information for the struct's fields.
1774 "code": "A String", # Required. The TypeCode for this type.
1775 "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
1776 # is the type of the array elements.
1777 },
1778 "name": "A String", # The name of the field. For reads, this is the column name. For
1779 # SQL queries, it is the column alias (e.g., `"Word"` in the
1780 # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
1781 # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
1782 # columns might have an empty name (e.g., `"SELECT
1783 # UPPER(ColName)"`). Note that a query result can contain
1784 # multiple fields with the same name.
1785 },
1786 ],
1787 },
1788 "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
1789 # information about the new transaction is yielded here.
1790 "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
1791 # for the transaction. Not returned by default: see
1792 # TransactionOptions.ReadOnly.return_read_timestamp.
1793 "id": "A String", # `id` may be used to identify the transaction in subsequent
1794 # Read,
1795 # ExecuteSql,
1796 # Commit, or
1797 # Rollback calls.
1798 #
1799 # Single-use read-only transactions do not have IDs, because
1800 # single-use transactions do not support multiple requests.
1801 },
1802 },
1803 }</pre>
1804</div>
1805
1806<div class="method">
1807 <code class="details" id="executeStreamingSql">executeStreamingSql(session, body, x__xgafv=None)</code>
1808 <pre>Like ExecuteSql, except returns the result
1809set as a stream. Unlike ExecuteSql, there
1810is no limit on the size of the returned result set. However, no
1811individual row in the result set can exceed 100 MiB, and no
1812column value can exceed 10 MiB.
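As a rough sketch of the resume pattern this method supports via `resume_token`, a client can checkpoint the last token it saw and re-issue the identical request after an interruption. The `make_stream` callable and the error type below are placeholders for the real transport, not part of this API:

```python
def collect_rows(make_stream):
    """Consume a streaming result, resuming from the last resume token."""
    committed, resume_token = [], None
    while True:
        pending = []  # values not yet covered by a resume token
        try:
            for partial in make_stream(resume_token):
                pending.extend(partial.get("values", []))
                if partial.get("resumeToken"):
                    # A token covers everything up to and including this chunk.
                    committed.extend(pending)
                    pending = []
                    resume_token = partial["resumeToken"]
            return committed + pending
        except ConnectionError:
            continue  # re-issue the same request from the last token
```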
1813
1814Args:
1815 session: string, Required. The session in which the SQL query should be performed. (required)
1816 body: object, The request body. (required)
1817 The object takes the form of:
1818
1819{ # The request for ExecuteSql and
1820 # ExecuteStreamingSql.
1821 "transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
1822 # temporary read-only transaction with strong concurrency.
1823 # Read or
1824 # ExecuteSql call runs.
1825 #
1826 # See TransactionOptions for more information about transactions.
1827 "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
1828 # it. The transaction ID of the new transaction is returned in
1829 # ResultSetMetadata.transaction, which is a Transaction.
1830 #
1831 #
1832 # Each session can have at most one active transaction at a time. After the
1833 # active transaction is completed, the session can immediately be
1834 # re-used for the next transaction. It is not necessary to create a
1835 # new session for each transaction.
1836 #
1837 # # Transaction Modes
1838 #
1839 # Cloud Spanner supports two transaction modes:
1840 #
1841 # 1. Locking read-write. This type of transaction is the only way
1842 # to write data into Cloud Spanner. These transactions rely on
1843 # pessimistic locking and, if necessary, two-phase commit.
1844 # Locking read-write transactions may abort, requiring the
1845 # application to retry.
1846 #
1847 # 2. Snapshot read-only. This transaction type provides guaranteed
1848 # consistency across several reads, but does not allow
1849 # writes. Snapshot read-only transactions can be configured to
1850 # read at timestamps in the past. Snapshot read-only
1851 # transactions do not need to be committed.
1852 #
1853 # For transactions that only read, snapshot read-only transactions
1854 # provide simpler semantics and are almost always faster. In
1855 # particular, read-only transactions do not take locks, so they do
1856 # not conflict with read-write transactions. As a consequence of not
1857 # taking locks, they also do not abort, so retry loops are not needed.
1858 #
1859 # Transactions may only read/write data in a single database. They
1860 # may, however, read/write data in different tables within that
1861 # database.
1862 #
1863 # ## Locking Read-Write Transactions
1864 #
1865 # Locking transactions may be used to atomically read-modify-write
1866 # data anywhere in a database. This type of transaction is externally
1867 # consistent.
1868 #
1869 # Clients should attempt to minimize the amount of time a transaction
1870 # is active. Faster transactions commit with higher probability
1871 # and cause less contention. Cloud Spanner attempts to keep read locks
1872 # active as long as the transaction continues to do reads, and the
1873 # transaction has not been terminated by
1874 # Commit or
1875 # Rollback. Long periods of
1876 # inactivity at the client may cause Cloud Spanner to release a
1877 # transaction's locks and abort it.
1878 #
1879 # Reads performed within a transaction acquire locks on the data
1880 # being read. Writes can only be done at commit time, after all reads
1881 # have been completed.
1882 # Conceptually, a read-write transaction consists of zero or more
1883 # reads or SQL queries followed by
1884 # Commit. At any time before
1885 # Commit, the client can send a
1886 # Rollback request to abort the
1887 # transaction.
1888 #
1889 # ### Semantics
1890 #
1891 # Cloud Spanner can commit the transaction if all read locks it acquired
1892 # are still valid at commit time, and it is able to acquire write
1893 # locks for all writes. Cloud Spanner can abort the transaction for any
1894 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
1895 # that the transaction has not modified any user data in Cloud Spanner.
1896 #
1897 # Unless the transaction commits, Cloud Spanner makes no guarantees about
1898 # how long the transaction's locks were held for. It is an error to
1899 # use Cloud Spanner locks for any sort of mutual exclusion other than
1900 # between Cloud Spanner transactions themselves.
1901 #
1902 # ### Retrying Aborted Transactions
1903 #
1904 # When a transaction aborts, the application can choose to retry the
1905 # whole transaction again. To maximize the chances of successfully
1906 # committing the retry, the client should execute the retry in the
1907 # same session as the original attempt. The original session's lock
1908 # priority increases with each consecutive abort, meaning that each
1909 # attempt has a slightly better chance of success than the previous.
1910 #
1911 # Under some circumstances (e.g., many transactions attempting to
1912 # modify the same row(s)), a transaction can abort many times in a
1913 # short period before successfully committing. Thus, it is not a good
1914 # idea to cap the number of retries a transaction can attempt;
1915 # instead, it is better to limit the total amount of wall time spent
1916 # retrying.
1917 #
1918 # ### Idle Transactions
1919 #
1920 # A transaction is considered idle if it has no outstanding reads or
1921 # SQL queries and has not started a read or SQL query within the last 10
1922 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
1923 # don't hold on to locks indefinitely. In that case, the commit will
1924 # fail with error `ABORTED`.
1925 #
1926 # If this behavior is undesirable, periodically executing a simple
1927 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
1928 # transaction from becoming idle.
1929 #
1930 # ## Snapshot Read-Only Transactions
1931 #
1932 # Snapshot read-only transactions provide a simpler method than
1933 # locking read-write transactions for doing several consistent
1934 # reads. However, this type of transaction does not support writes.
1935 #
1936 # Snapshot transactions do not take locks. Instead, they work by
1937 # choosing a Cloud Spanner timestamp, then executing all reads at that
1938 # timestamp. Since they do not acquire locks, they do not block
1939 # concurrent read-write transactions.
1940 #
1941 # Unlike locking read-write transactions, snapshot read-only
1942 # transactions never abort. They can fail if the chosen read
1943 # timestamp is garbage collected; however, the default garbage
1944 # collection policy is generous enough that most applications do not
1945 # need to worry about this in practice.
1946 #
1947 # Snapshot read-only transactions do not need to call
1948 # Commit or
1949 # Rollback (and in fact are not
1950 # permitted to do so).
1951 #
1952 # To execute a snapshot transaction, the client specifies a timestamp
1953 # bound, which tells Cloud Spanner how to choose a read timestamp.
1954 #
1955 # The types of timestamp bound are:
1956 #
1957 # - Strong (the default).
1958 # - Bounded staleness.
1959 # - Exact staleness.
1960 #
1961 # If the Cloud Spanner database to be read is geographically distributed,
1962 # stale read-only transactions can execute more quickly than strong
1963 # or read-write transactions, because they are able to execute far
1964 # from the leader replica.
1965 #
1966 # Each type of timestamp bound is discussed in detail below.
1967 #
1968 # ### Strong
1969 #
1970 # Strong reads are guaranteed to see the effects of all transactions
1971 # that have committed before the start of the read. Furthermore, all
1972 # rows yielded by a single read are consistent with each other -- if
1973 # any part of the read observes a transaction, all parts of the read
1974 # see the transaction.
1975 #
1976 # Strong reads are not repeatable: two consecutive strong read-only
1977 # transactions might return inconsistent results if there are
1978 # concurrent writes. If consistency across reads is required, the
1979 # reads should be executed within a transaction or at an exact read
1980 # timestamp.
1981 #
1982 # See TransactionOptions.ReadOnly.strong.
1983 #
1984 # ### Exact Staleness
1985 #
1986 # These timestamp bounds execute reads at a user-specified
1987 # timestamp. Reads at a timestamp are guaranteed to see a consistent
1988 # prefix of the global transaction history: they observe
1989 # modifications done by all transactions with a commit timestamp <=
1990 # the read timestamp, and observe none of the modifications done by
1991 # transactions with a larger commit timestamp. They will block until
1992 # all conflicting transactions that may be assigned commit timestamps
1993 # <= the read timestamp have finished.
1994 #
1995 # The timestamp can either be expressed as an absolute Cloud Spanner commit
1996 # timestamp or a staleness relative to the current time.
1997 #
1998 # These modes do not require a "negotiation phase" to pick a
1999 # timestamp. As a result, they execute slightly faster than the
2000 # equivalent boundedly stale concurrency modes. On the other hand,
2001 # boundedly stale reads usually return fresher results.
2002 #
2003 # See TransactionOptions.ReadOnly.read_timestamp and
2004 # TransactionOptions.ReadOnly.exact_staleness.
2005 #
2006 # ### Bounded Staleness
2007 #
2008 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
2009 # subject to a user-provided staleness bound. Cloud Spanner chooses the
2010 # newest timestamp within the staleness bound that allows execution
2011 # of the reads at the closest available replica without blocking.
2012 #
2013 # All rows yielded are consistent with each other -- if any part of
2014 # the read observes a transaction, all parts of the read see the
2015 # transaction. Boundedly stale reads are not repeatable: two stale
2016 # reads, even if they use the same staleness bound, can execute at
2017 # different timestamps and thus return inconsistent results.
2018 #
2019 # Boundedly stale reads execute in two phases: the first phase
2020 # negotiates a timestamp among all replicas needed to serve the
2021 # read. In the second phase, reads are executed at the negotiated
2022 # timestamp.
2023 #
2024 # As a result of the two-phase execution, bounded staleness reads are
2025 # usually a little slower than comparable exact staleness
2026 # reads. However, they are typically able to return fresher
2027 # results, and are more likely to execute at the closest replica.
2028 #
2029 # Because the timestamp negotiation requires up-front knowledge of
2030 # which rows will be read, it can only be used with single-use
2031 # read-only transactions.
2032 #
2033 # See TransactionOptions.ReadOnly.max_staleness and
2034 # TransactionOptions.ReadOnly.min_read_timestamp.
2035 #
2036 # ### Old Read Timestamps and Garbage Collection
2037 #
2038 # Cloud Spanner continuously garbage collects deleted and overwritten data
2039 # in the background to reclaim storage space. This process is known
2040 # as "version GC". By default, version GC reclaims versions after they
2041 # are one hour old. Because of this, Cloud Spanner cannot perform reads
2042 # at read timestamps more than one hour in the past. This
2043 # restriction also applies to in-progress reads and/or SQL queries whose
        # timestamps become too old while executing. Reads and SQL queries with
        # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
      "readWrite": { # Options for read-write transactions. # Transaction may write.
          #
          # Authorization to begin a read-write transaction requires
          # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
          # on the `session` resource.
      },
      "readOnly": { # Options for read-only transactions. # Transaction will not write.
          #
          # Authorization to begin a read-only transaction requires
          # `spanner.databases.beginReadOnlyTransaction` permission
          # on the `session` resource.
        "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
            #
            # This is useful for requesting fresher data than some previous
            # read, or data that is fresh enough to observe the effects of some
            # previously committed transaction whose timestamp is known.
            #
            # Note that this option can only be used in single-use transactions.
        "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
            # the Transaction message that describes the transaction.
        "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
            # seconds. Guarantees that all writes that have committed more
            # than the specified number of seconds ago are visible. Because
            # Cloud Spanner chooses the exact timestamp, this mode works even if
            # the client's local clock is substantially skewed from Cloud Spanner
            # commit timestamps.
            #
            # Useful for reading the freshest data available at a nearby
            # replica, while bounding the possible staleness if the local
            # replica has fallen behind.
            #
            # Note that this option can only be used in single-use
            # transactions.
        "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
            # old. The timestamp is chosen soon after the read is started.
            #
            # Guarantees that all writes that have committed more than the
            # specified number of seconds ago are visible. Because Cloud Spanner
            # chooses the exact timestamp, this mode works even if the client's
            # local clock is substantially skewed from Cloud Spanner commit
            # timestamps.
            #
            # Useful for reading at nearby replicas without the distributed
            # timestamp negotiation overhead of `max_staleness`.
        "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
            # reads at a specific timestamp are repeatable; the same read at
            # the same timestamp always returns the same data. If the
            # timestamp is in the future, the read will block until the
            # specified timestamp, modulo the read's deadline.
            #
            # Useful for large scale consistent reads such as mapreduces, or
            # for coordinating many reads against a consistent snapshot of the
            # data.
        "strong": True or False, # Read at a timestamp where all previously committed transactions
            # are visible.
      },
    },
    "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
        # This is the most efficient way to execute a transaction that
        # consists of a single SQL query.
        #
        #
        # Each session can have at most one active transaction at a time. After the
        # active transaction is completed, the session can immediately be
        # re-used for the next transaction. It is not necessary to create a
        # new session for each transaction.
        #
        # # Transaction Modes
        #
        # Cloud Spanner supports two transaction modes:
        #
        # 1. Locking read-write. This type of transaction is the only way
        #    to write data into Cloud Spanner. These transactions rely on
        #    pessimistic locking and, if necessary, two-phase commit.
        #    Locking read-write transactions may abort, requiring the
        #    application to retry.
        #
        # 2. Snapshot read-only. This transaction type provides guaranteed
        #    consistency across several reads, but does not allow
        #    writes. Snapshot read-only transactions can be configured to
        #    read at timestamps in the past. Snapshot read-only
        #    transactions do not need to be committed.
        #
        # For transactions that only read, snapshot read-only transactions
        # provide simpler semantics and are almost always faster. In
        # particular, read-only transactions do not take locks, so they do
        # not conflict with read-write transactions. As a consequence of not
        # taking locks, they also do not abort, so retry loops are not needed.
        #
        # Transactions may only read/write data in a single database. They
        # may, however, read/write data in different tables within that
        # database.
        #
        # ## Locking Read-Write Transactions
        #
        # Locking transactions may be used to atomically read-modify-write
        # data anywhere in a database. This type of transaction is externally
        # consistent.
        #
        # Clients should attempt to minimize the amount of time a transaction
        # is active. Faster transactions commit with higher probability
        # and cause less contention. Cloud Spanner attempts to keep read locks
        # active as long as the transaction continues to do reads, and the
        # transaction has not been terminated by
        # Commit or
        # Rollback. Long periods of
        # inactivity at the client may cause Cloud Spanner to release a
        # transaction's locks and abort it.
        #
        # Reads performed within a transaction acquire locks on the data
        # being read. Writes can only be done at commit time, after all reads
        # have been completed.
        # Conceptually, a read-write transaction consists of zero or more
        # reads or SQL queries followed by
        # Commit. At any time before
        # Commit, the client can send a
        # Rollback request to abort the
        # transaction.
        #
        # ### Semantics
        #
        # Cloud Spanner can commit the transaction if all read locks it acquired
        # are still valid at commit time, and it is able to acquire write
        # locks for all writes. Cloud Spanner can abort the transaction for any
        # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
        # that the transaction has not modified any user data in Cloud Spanner.
        #
        # Unless the transaction commits, Cloud Spanner makes no guarantees about
        # how long the transaction's locks were held for. It is an error to
        # use Cloud Spanner locks for any sort of mutual exclusion other than
        # between Cloud Spanner transactions themselves.
        #
        # ### Retrying Aborted Transactions
        #
        # When a transaction aborts, the application can choose to retry the
        # whole transaction again. To maximize the chances of successfully
        # committing the retry, the client should execute the retry in the
        # same session as the original attempt. The original session's lock
        # priority increases with each consecutive abort, meaning that each
        # attempt has a slightly better chance of success than the previous.
        #
        # Under some circumstances (e.g., many transactions attempting to
        # modify the same row(s)), a transaction can abort many times in a
        # short period before successfully committing. Thus, it is not a good
        # idea to cap the number of retries a transaction can attempt;
        # instead, it is better to limit the total amount of wall time spent
        # retrying.
        #
        # ### Idle Transactions
        #
        # A transaction is considered idle if it has no outstanding reads or
        # SQL queries and has not started a read or SQL query within the last 10
        # seconds. Idle transactions can be aborted by Cloud Spanner so that they
        # don't hold on to locks indefinitely. In that case, the commit will
        # fail with error `ABORTED`.
        #
        # If this behavior is undesirable, periodically executing a simple
        # SQL query in the transaction (e.g., `SELECT 1`) prevents the
        # transaction from becoming idle.
        #
        # ## Snapshot Read-Only Transactions
        #
        # Snapshot read-only transactions provide a simpler method than
        # locking read-write transactions for doing several consistent
        # reads. However, this type of transaction does not support writes.
        #
        # Snapshot transactions do not take locks. Instead, they work by
        # choosing a Cloud Spanner timestamp, then executing all reads at that
        # timestamp. Since they do not acquire locks, they do not block
        # concurrent read-write transactions.
        #
        # Unlike locking read-write transactions, snapshot read-only
        # transactions never abort. They can fail if the chosen read
        # timestamp is garbage collected; however, the default garbage
        # collection policy is generous enough that most applications do not
        # need to worry about this in practice.
        #
        # Snapshot read-only transactions do not need to call
        # Commit or
        # Rollback (and in fact are not
        # permitted to do so).
        #
        # To execute a snapshot transaction, the client specifies a timestamp
        # bound, which tells Cloud Spanner how to choose a read timestamp.
        #
        # The types of timestamp bound are:
        #
        # - Strong (the default).
        # - Bounded staleness.
        # - Exact staleness.
        #
        # If the Cloud Spanner database to be read is geographically distributed,
        # stale read-only transactions can execute more quickly than strong
        # or read-write transactions, because they are able to execute far
        # from the leader replica.
        #
        # Each type of timestamp bound is discussed in detail below.
        #
        # ### Strong
        #
        # Strong reads are guaranteed to see the effects of all transactions
        # that have committed before the start of the read. Furthermore, all
        # rows yielded by a single read are consistent with each other -- if
        # any part of the read observes a transaction, all parts of the read
        # see the transaction.
        #
        # Strong reads are not repeatable: two consecutive strong read-only
        # transactions might return inconsistent results if there are
        # concurrent writes. If consistency across reads is required, the
        # reads should be executed within a transaction or at an exact read
        # timestamp.
        #
        # See TransactionOptions.ReadOnly.strong.
        #
        # ### Exact Staleness
        #
        # These timestamp bounds execute reads at a user-specified
        # timestamp. Reads at a timestamp are guaranteed to see a consistent
        # prefix of the global transaction history: they observe
        # modifications done by all transactions with a commit timestamp <=
        # the read timestamp, and observe none of the modifications done by
        # transactions with a larger commit timestamp. They will block until
        # all conflicting transactions that may be assigned commit timestamps
        # <= the read timestamp have finished.
        #
        # The timestamp can either be expressed as an absolute Cloud Spanner commit
        # timestamp or a staleness relative to the current time.
        #
        # These modes do not require a "negotiation phase" to pick a
        # timestamp. As a result, they execute slightly faster than the
        # equivalent boundedly stale concurrency modes. On the other hand,
        # boundedly stale reads usually return fresher results.
        #
        # See TransactionOptions.ReadOnly.read_timestamp and
        # TransactionOptions.ReadOnly.exact_staleness.
        #
        # ### Bounded Staleness
        #
        # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
        # subject to a user-provided staleness bound. Cloud Spanner chooses the
        # newest timestamp within the staleness bound that allows execution
        # of the reads at the closest available replica without blocking.
        #
        # All rows yielded are consistent with each other -- if any part of
        # the read observes a transaction, all parts of the read see the
        # transaction. Boundedly stale reads are not repeatable: two stale
        # reads, even if they use the same staleness bound, can execute at
        # different timestamps and thus return inconsistent results.
        #
        # Boundedly stale reads execute in two phases: the first phase
        # negotiates a timestamp among all replicas needed to serve the
        # read. In the second phase, reads are executed at the negotiated
        # timestamp.
        #
        # As a result of the two phase execution, bounded staleness reads are
        # usually a little slower than comparable exact staleness
        # reads. However, they are typically able to return fresher
        # results, and are more likely to execute at the closest replica.
        #
        # Because the timestamp negotiation requires up-front knowledge of
        # which rows will be read, it can only be used with single-use
        # read-only transactions.
        #
        # See TransactionOptions.ReadOnly.max_staleness and
        # TransactionOptions.ReadOnly.min_read_timestamp.
        #
        # ### Old Read Timestamps and Garbage Collection
        #
        # Cloud Spanner continuously garbage collects deleted and overwritten data
        # in the background to reclaim storage space. This process is known
        # as "version GC". By default, version GC reclaims versions after they
        # are one hour old. Because of this, Cloud Spanner cannot perform reads
        # at read timestamps more than one hour in the past. This
        # restriction also applies to in-progress reads and/or SQL queries whose
        # timestamps become too old while executing. Reads and SQL queries with
        # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
      "readWrite": { # Options for read-write transactions. # Transaction may write.
          #
          # Authorization to begin a read-write transaction requires
          # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
          # on the `session` resource.
      },
      "readOnly": { # Options for read-only transactions. # Transaction will not write.
          #
          # Authorization to begin a read-only transaction requires
          # `spanner.databases.beginReadOnlyTransaction` permission
          # on the `session` resource.
        "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
            #
            # This is useful for requesting fresher data than some previous
            # read, or data that is fresh enough to observe the effects of some
            # previously committed transaction whose timestamp is known.
            #
            # Note that this option can only be used in single-use transactions.
        "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
            # the Transaction message that describes the transaction.
        "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
            # seconds. Guarantees that all writes that have committed more
            # than the specified number of seconds ago are visible. Because
            # Cloud Spanner chooses the exact timestamp, this mode works even if
            # the client's local clock is substantially skewed from Cloud Spanner
            # commit timestamps.
            #
            # Useful for reading the freshest data available at a nearby
            # replica, while bounding the possible staleness if the local
            # replica has fallen behind.
            #
            # Note that this option can only be used in single-use
            # transactions.
        "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
            # old. The timestamp is chosen soon after the read is started.
            #
            # Guarantees that all writes that have committed more than the
            # specified number of seconds ago are visible. Because Cloud Spanner
            # chooses the exact timestamp, this mode works even if the client's
            # local clock is substantially skewed from Cloud Spanner commit
            # timestamps.
            #
            # Useful for reading at nearby replicas without the distributed
            # timestamp negotiation overhead of `max_staleness`.
        "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
            # reads at a specific timestamp are repeatable; the same read at
            # the same timestamp always returns the same data. If the
            # timestamp is in the future, the read will block until the
            # specified timestamp, modulo the read's deadline.
            #
            # Useful for large scale consistent reads such as mapreduces, or
            # for coordinating many reads against a consistent snapshot of the
            # data.
        "strong": True or False, # Read at a timestamp where all previously committed transactions
            # are visible.
      },
    },
    "id": "A String", # Execute the read or SQL query in a previously-started transaction.
  },
  "resumeToken": "A String", # If this request is resuming a previously interrupted SQL query
      # execution, `resume_token` should be copied from the last
      # PartialResultSet yielded before the interruption. Doing this
      # enables the new SQL query execution to resume where the last one left
      # off. The rest of the request parameters must exactly match the
      # request that yielded this token.
  "paramTypes": { # It is not always possible for Cloud Spanner to infer the right SQL type
      # from a JSON value. For example, values of type `BYTES` and values
      # of type `STRING` both appear in params as JSON strings.
      #
      # In these cases, `param_types` can be used to specify the exact
      # SQL type for some or all of the SQL query parameters. See the
      # definition of Type for more information
      # about SQL types.
    "a_key": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a
        # table cell or returned from an SQL query.
      "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
          # provides type information for the struct's fields.
      "code": "A String", # Required. The TypeCode for this type.
      "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
          # is the type of the array elements.
    },
  },
  "params": { # The SQL query string can contain parameter placeholders. A parameter
      # placeholder consists of `'@'` followed by the parameter
      # name. Parameter names consist of any combination of letters,
      # numbers, and underscores.
      #
      # Parameters can appear anywhere that a literal value is expected. The same
      # parameter name can be used more than once, for example:
      #   `"WHERE id > @msg_id AND id < @msg_id + 100"`
      #
      # It is an error to execute an SQL query with unbound parameters.
      #
      # Parameter values are specified using `params`, which is a JSON
      # object whose keys are parameter names, and whose values are the
      # corresponding parameter values.
    "a_key": "", # Properties of the object.
  },
  "sql": "A String", # Required. The SQL query string.
  "queryMode": "A String", # Used to control the amount of debugging information returned in
      # ResultSetStats.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Partial results from a streaming read or SQL query. Streaming reads and
      # SQL queries better tolerate large result sets, large rows, and large
      # values, but are a little trickier to consume.
      "values": [ # A streamed result set consists of a stream of values, which might
          # be split into many `PartialResultSet` messages to accommodate
          # large rows and/or large values. Every N complete values defines a
          # row, where N is equal to the number of entries in
          # metadata.row_type.fields.
          #
          # Most values are encoded based on type as described
          # here.
          #
          # It is possible that the last value in values is "chunked",
          # meaning that the rest of the value is sent in subsequent
          # `PartialResultSet`(s). This is denoted by the chunked_value
          # field. Two or more chunked values can be merged to form a
          # complete value as follows:
          #
          # * `bool/number/null`: cannot be chunked
          # * `string`: concatenate the strings
          # * `list`: concatenate the lists. If the last element in a list is a
          #   `string`, `list`, or `object`, merge it with the first element in
          #   the next list by applying these rules recursively.
          # * `object`: concatenate the (field name, field value) pairs. If a
          #   field name is duplicated, then apply these rules recursively
          #   to merge the field values.
          #
          # Some examples of merging:
          #
          #     # Strings are concatenated.
          #     "foo", "bar" => "foobar"
          #
          #     # Lists of non-strings are concatenated.
          #     [2, 3], [4] => [2, 3, 4]
          #
          #     # Lists are concatenated, but the last and first elements are merged
          #     # because they are strings.
          #     ["a", "b"], ["c", "d"] => ["a", "bc", "d"]
          #
          #     # Lists are concatenated, but the last and first elements are merged
          #     # because they are lists. Recursively, the last and first elements
          #     # of the inner lists are merged because they are strings.
          #     ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"]
          #
          #     # Non-overlapping object fields are combined.
          #     {"a": "1"}, {"b": "2"} => {"a": "1", "b": "2"}
          #
          #     # Overlapping object fields are merged.
          #     {"a": "1"}, {"a": "2"} => {"a": "12"}
          #
          #     # Examples of merging objects containing lists of strings.
          #     {"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]}
          #
          # For a more complete example, suppose a streaming SQL query is
          # yielding a result set whose rows contain a single string
          # field. The following `PartialResultSet`s might be yielded:
          #
          #     {
          #       "metadata": { ... }
          #       "values": ["Hello", "W"]
          #       "chunked_value": true
          #       "resume_token": "Af65..."
          #     }
          #     {
          #       "values": ["orl"]
          #       "chunked_value": true
          #       "resume_token": "Bqp2..."
          #     }
          #     {
          #       "values": ["d"]
          #       "resume_token": "Zx1B..."
          #     }
          #
          # This sequence of `PartialResultSet`s encodes two rows, one
          # containing the field value `"Hello"`, and a second containing the
          # field value `"World" = "W" + "orl" + "d"`.
        "",
      ],
      "chunkedValue": True or False, # If true, then the final value in values is chunked, and must
          # be combined with more values from subsequent `PartialResultSet`s
          # to obtain a complete field value.
      "resumeToken": "A String", # Streaming calls might be interrupted for a variety of reasons, such
          # as TCP connection loss. If this occurs, the stream of results can
          # be resumed by re-sending the original request and including
          # `resume_token`. Note that executing any other transaction in the
          # same session invalidates the token.
      "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the query that produced this
          # streaming result set. These can be requested by setting
          # ExecuteSqlRequest.query_mode and are sent
          # only once with the last response in the stream.
        "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
          "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
              # with the plan root. Each PlanNode's `id` corresponds to its index in
              # `plan_nodes`.
            { # Node information for nodes appearing in a QueryPlan.plan_nodes.
              "index": 42, # The `PlanNode`'s index in node list.
              "kind": "A String", # Used to determine the type of node. May be needed for visualizing
                  # different kinds of nodes differently. For example, If the node is a
                  # SCALAR node, it will have a condensed representation
                  # which can be used to directly embed a description of the node in its
                  # parent.
              "displayName": "A String", # The display name for the node.
              "executionStats": { # The execution statistics associated with the node, contained in a group of
                  # key-value pairs. Only present if the plan was returned as a result of a
                  # profile query. For example, number of executions, number of rows/time per
                  # execution etc.
                "a_key": "", # Properties of the object.
              },
              "childLinks": [ # List of child node `index`es and their relationship to this parent.
                { # Metadata associated with a parent-child relationship appearing in a
                    # PlanNode.
                  "variable": "A String", # Only present if the child node is SCALAR and corresponds
                      # to an output variable of the parent node. The field carries the name of
                      # the output variable.
                      # For example, a `TableScan` operator that reads rows from a table will
                      # have child links to the `SCALAR` nodes representing the output variables
                      # created for each column that is read by the operator. The corresponding
                      # `variable` fields will be set to the variable names assigned to the
                      # columns.
                  "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
                      # distinguish between the build child and the probe child, or in the case
                      # of the child being an output variable, to represent the tag associated
                      # with the output variable.
                  "childIndex": 42, # The node to which the link points.
                },
              ],
              "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
                  # `SCALAR` PlanNode(s).
                "subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases
                    # where the `description` string of this node references a `SCALAR`
                    # subquery contained in the expression subtree rooted at this node. The
                    # referenced `SCALAR` subquery may not necessarily be a direct child of
                    # this node.
                  "a_key": 42,
                },
                "description": "A String", # A string representation of the expression subtree rooted at this node.
              },
              "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
                  # For example, a Parameter Reference node could have the following
                  # information in its metadata:
                  #
                  #     {
                  #       "parameter_reference": "param1",
                  #       "parameter_type": "array"
                  #     }
                "a_key": "", # Properties of the object.
              },
            },
          ],
        },
        "queryStats": { # Aggregated statistics from the execution of the query. Only present when
            # the query is profiled. For example, a query could return the statistics as
            # follows:
            #
            #     {
            #       "rows_returned": "3",
            #       "elapsed_time": "1.22 secs",
            #       "cpu_time": "1.19 secs"
            #     }
          "a_key": "", # Properties of the object.
        },
      },
      "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
          # Only present in the first response.
        "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
            # set. For example, a SQL query like `"SELECT UserId, UserName FROM
            # Users"` could return a `row_type` value like:
            #
            #     "fields": [
            #       { "name": "UserId", "type": { "code": "INT64" } },
            #       { "name": "UserName", "type": { "code": "STRING" } },
            #     ]
          "fields": [ # The list of fields that make up this struct. Order is
              # significant, because values of this struct type are represented as
              # lists, where the order of field values matches the order of
              # fields in the StructType. In turn, the order of fields
              # matches the order of columns in a read request, or the order of
              # fields in the `SELECT` clause of a query.
            { # Message representing a single field of a struct.
              "type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field.
                  # table cell or returned from an SQL query.
                "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
                    # provides type information for the struct's fields.
                "code": "A String", # Required. The TypeCode for this type.
                "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
                    # is the type of the array elements.
              },
              "name": "A String", # The name of the field. For reads, this is the column name. For
                  # SQL queries, it is the column alias (e.g., `"Word"` in the
                  # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
                  # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
                  # columns might have an empty name (e.g., `"SELECT
                  # UPPER(ColName)"`). Note that a query result can contain
                  # multiple fields with the same name.
            },
          ],
        },
        "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
            # information about the new transaction is yielded here.
          "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
              # for the transaction. Not returned by default: see
              # TransactionOptions.ReadOnly.return_read_timestamp.
          "id": "A String", # `id` may be used to identify the transaction in subsequent
              # Read,
              # ExecuteSql,
              # Commit, or
              # Rollback calls.
              #
              # Single-use read-only transactions do not have IDs, because
              # single-use transactions do not support multiple requests.
        },
      },
    }</pre>
</div>
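The chunked-value merge rules documented above can be sketched as a small client-side helper. This is a hypothetical illustration, not part of the generated library; the function and dict-key names (`merge_chunks`, `assemble`, `"values"`, `"chunked_value"`) follow the JSON wire format described in the response schema:

```python
def merge_chunks(prev, nxt):
    """Merge two chunked PartialResultSet values per the documented rules."""
    if isinstance(prev, str) and isinstance(nxt, str):
        return prev + nxt  # strings: concatenate
    if isinstance(prev, list) and isinstance(nxt, list):
        if prev and nxt and isinstance(prev[-1], (str, list, dict)):
            # Merge the last element of the first list with the first
            # element of the second list, applying these rules recursively.
            return prev[:-1] + [merge_chunks(prev[-1], nxt[0])] + nxt[1:]
        return prev + nxt  # lists of non-mergeable values: concatenate
    if isinstance(prev, dict) and isinstance(nxt, dict):
        # Objects: concatenate (field name, field value) pairs; duplicated
        # field names are merged recursively.
        out = dict(prev)
        for key, val in nxt.items():
            out[key] = merge_chunks(out[key], val) if key in out else val
        return out
    raise TypeError("bool/number/null values cannot be chunked")


def assemble(partials):
    """Flatten `values` across PartialResultSets, merging chunked tails."""
    values, pending = [], None
    for part in partials:
        vals = list(part["values"])
        if pending is not None:
            vals[0] = merge_chunks(pending, vals[0])
            pending = None
        if part.get("chunked_value"):
            pending = vals.pop()  # incomplete; wait for the next message
        values.extend(vals)
    return values
```

Running `assemble` on the three-message "Hello"/"W"/"orl"/"d" example from the documentation yields the two complete field values `["Hello", "World"]`.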

<div class="method">
    <code class="details" id="get">get(name, x__xgafv=None)</code>
  <pre>Gets a session. Returns `NOT_FOUND` if the session does not exist.
This is mainly useful for determining whether a session is still
alive.

Args:
  name: string, Required. The name of the session to retrieve. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A session in the Cloud Spanner API.
      "name": "A String", # Required. The name of the session.
    }</pre>
</div>
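The streaming responses described earlier deliver values as one flat stream: every N complete values form a row, where N is the number of entries in `metadata.row_type.fields`. A minimal sketch of that row-grouping step (a hypothetical helper, using the JSON field name `rowType` as it appears in the response schema):

```python
def rows_from_values(metadata, values):
    """Group a flat stream of fully merged values into rows.

    `metadata` is the ResultSetMetadata dict from the first response;
    `values` is the flattened value stream after chunk merging.
    """
    n = len(metadata["rowType"]["fields"])
    if n == 0 or len(values) % n:
        raise ValueError("value count must be a non-zero multiple of the field count")
    # Slice the stream into consecutive groups of n values, one per row.
    return [values[i:i + n] for i in range(0, len(values), n)]
```

For example, with a two-field `row_type` (`UserId`, `UserName`), the stream `["1", "alice", "2", "bob"]` groups into two rows of two values each.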

<div class="method">
    <code class="details" id="read">read(session, body, x__xgafv=None)</code>
  <pre>Reads rows from the database using key lookups and scans, as a
simple key/value style alternative to
ExecuteSql. This method cannot be used to
return a result set larger than 10 MiB; if the read matches more
data than that, the read fails with a `FAILED_PRECONDITION`
error.

Reads inside read-write transactions might return `ABORTED`. If
this occurs, the application should restart the transaction from
the beginning. See Transaction for more details.

Larger result sets can be yielded in streaming fashion by calling
StreamingRead instead.

Args:
  session: string, Required. The session in which the read should be performed. (required)
  body: object, The request body. (required)
    The object takes the form of:

{ # The request for Read and
    # StreamingRead.
  "index": "A String", # If non-empty, the name of an index on table. This index is
      # used instead of the table primary key when interpreting key_set
      # and sorting result rows. See key_set for further information.
  "transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
      # temporary read-only transaction with strong concurrency.
      # Read or
      # ExecuteSql call runs.
      #
      # See TransactionOptions for more information about transactions.
    "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
        # it. The transaction ID of the new transaction is returned in
        # ResultSetMetadata.transaction, which is a Transaction.
        #
        #
        # Each session can have at most one active transaction at a time. After the
        # active transaction is completed, the session can immediately be
        # re-used for the next transaction. It is not necessary to create a
        # new session for each transaction.
        #
        # # Transaction Modes
        #
        # Cloud Spanner supports two transaction modes:
        #
        # 1. Locking read-write. This type of transaction is the only way
        #    to write data into Cloud Spanner. These transactions rely on
        #    pessimistic locking and, if necessary, two-phase commit.
        #    Locking read-write transactions may abort, requiring the
        #    application to retry.
2720 #
2721 # 2. Snapshot read-only. This transaction type provides guaranteed
2722 # consistency across several reads, but does not allow
2723 # writes. Snapshot read-only transactions can be configured to
2724 # read at timestamps in the past. Snapshot read-only
2725 # transactions do not need to be committed.
2726 #
2727 # For transactions that only read, snapshot read-only transactions
2728 # provide simpler semantics and are almost always faster. In
2729 # particular, read-only transactions do not take locks, so they do
2730 # not conflict with read-write transactions. As a consequence of not
2731 # taking locks, they also do not abort, so retry loops are not needed.
2732 #
2733 # Transactions may only read/write data in a single database. They
2734 # may, however, read/write data in different tables within that
2735 # database.
2736 #
2737 # ## Locking Read-Write Transactions
2738 #
2739 # Locking transactions may be used to atomically read-modify-write
2740 # data anywhere in a database. This type of transaction is externally
2741 # consistent.
2742 #
2743 # Clients should attempt to minimize the amount of time a transaction
2744 # is active. Faster transactions commit with higher probability
2745 # and cause less contention. Cloud Spanner attempts to keep read locks
2746 # active as long as the transaction continues to do reads, and the
2747 # transaction has not been terminated by
2748 # Commit or
2749 # Rollback. Long periods of
2750 # inactivity at the client may cause Cloud Spanner to release a
2751 # transaction's locks and abort it.
2752 #
2753 # Reads performed within a transaction acquire locks on the data
2754 # being read. Writes can only be done at commit time, after all reads
2755 # have been completed.
2756 # Conceptually, a read-write transaction consists of zero or more
2757 # reads or SQL queries followed by
2758 # Commit. At any time before
2759 # Commit, the client can send a
2760 # Rollback request to abort the
2761 # transaction.
2762 #
2763 # ### Semantics
2764 #
2765 # Cloud Spanner can commit the transaction if all read locks it acquired
2766 # are still valid at commit time, and it is able to acquire write
2767 # locks for all writes. Cloud Spanner can abort the transaction for any
2768 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
2769 # that the transaction has not modified any user data in Cloud Spanner.
2770 #
2771 # Unless the transaction commits, Cloud Spanner makes no guarantees about
2772 # how long the transaction's locks were held for. It is an error to
2773 # use Cloud Spanner locks for any sort of mutual exclusion other than
2774 # between Cloud Spanner transactions themselves.
2775 #
2776 # ### Retrying Aborted Transactions
2777 #
2778 # When a transaction aborts, the application can choose to retry the
2779 # whole transaction again. To maximize the chances of successfully
2780 # committing the retry, the client should execute the retry in the
2781 # same session as the original attempt. The original session's lock
2782 # priority increases with each consecutive abort, meaning that each
2783 # attempt has a slightly better chance of success than the previous.
2784 #
2785 # Under some circumstances (e.g., many transactions attempting to
2786 # modify the same row(s)), a transaction can abort many times in a
2787 # short period before successfully committing. Thus, it is not a good
2788 # idea to cap the number of retries a transaction can attempt;
2789 # instead, it is better to limit the total amount of wall time spent
2790 # retrying.
2791 #
2792 # ### Idle Transactions
2793 #
2794 # A transaction is considered idle if it has no outstanding reads or
2795 # SQL queries and has not started a read or SQL query within the last 10
2796 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
2797 # don't hold on to locks indefinitely. In that case, the commit will
2798 # fail with error `ABORTED`.
2799 #
2800 # If this behavior is undesirable, periodically executing a simple
2801 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
2802 # transaction from becoming idle.
2803 #
2804 # ## Snapshot Read-Only Transactions
2805 #
2806     # Snapshot read-only transactions provide a simpler method than
2807 # locking read-write transactions for doing several consistent
2808 # reads. However, this type of transaction does not support writes.
2809 #
2810 # Snapshot transactions do not take locks. Instead, they work by
2811 # choosing a Cloud Spanner timestamp, then executing all reads at that
2812 # timestamp. Since they do not acquire locks, they do not block
2813 # concurrent read-write transactions.
2814 #
2815 # Unlike locking read-write transactions, snapshot read-only
2816 # transactions never abort. They can fail if the chosen read
2817 # timestamp is garbage collected; however, the default garbage
2818 # collection policy is generous enough that most applications do not
2819 # need to worry about this in practice.
2820 #
2821 # Snapshot read-only transactions do not need to call
2822 # Commit or
2823 # Rollback (and in fact are not
2824 # permitted to do so).
2825 #
2826 # To execute a snapshot transaction, the client specifies a timestamp
2827 # bound, which tells Cloud Spanner how to choose a read timestamp.
2828 #
2829 # The types of timestamp bound are:
2830 #
2831 # - Strong (the default).
2832 # - Bounded staleness.
2833 # - Exact staleness.
2834 #
2835 # If the Cloud Spanner database to be read is geographically distributed,
2836 # stale read-only transactions can execute more quickly than strong
2837     # or read-write transactions, because they are able to execute far
2838 # from the leader replica.
2839 #
2840 # Each type of timestamp bound is discussed in detail below.
2841 #
2842 # ### Strong
2843 #
2844 # Strong reads are guaranteed to see the effects of all transactions
2845 # that have committed before the start of the read. Furthermore, all
2846 # rows yielded by a single read are consistent with each other -- if
2847 # any part of the read observes a transaction, all parts of the read
2848 # see the transaction.
2849 #
2850 # Strong reads are not repeatable: two consecutive strong read-only
2851 # transactions might return inconsistent results if there are
2852 # concurrent writes. If consistency across reads is required, the
2853 # reads should be executed within a transaction or at an exact read
2854 # timestamp.
2855 #
2856 # See TransactionOptions.ReadOnly.strong.
2857 #
2858 # ### Exact Staleness
2859 #
2860 # These timestamp bounds execute reads at a user-specified
2861 # timestamp. Reads at a timestamp are guaranteed to see a consistent
2862 # prefix of the global transaction history: they observe
2863 # modifications done by all transactions with a commit timestamp <=
2864 # the read timestamp, and observe none of the modifications done by
2865 # transactions with a larger commit timestamp. They will block until
2866 # all conflicting transactions that may be assigned commit timestamps
2867 # <= the read timestamp have finished.
2868 #
2869 # The timestamp can either be expressed as an absolute Cloud Spanner commit
2870 # timestamp or a staleness relative to the current time.
2871 #
2872 # These modes do not require a "negotiation phase" to pick a
2873 # timestamp. As a result, they execute slightly faster than the
2874 # equivalent boundedly stale concurrency modes. On the other hand,
2875 # boundedly stale reads usually return fresher results.
2876 #
2877 # See TransactionOptions.ReadOnly.read_timestamp and
2878 # TransactionOptions.ReadOnly.exact_staleness.
2879 #
2880 # ### Bounded Staleness
2881 #
2882 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
2883 # subject to a user-provided staleness bound. Cloud Spanner chooses the
2884 # newest timestamp within the staleness bound that allows execution
2885 # of the reads at the closest available replica without blocking.
2886 #
2887 # All rows yielded are consistent with each other -- if any part of
2888 # the read observes a transaction, all parts of the read see the
2889 # transaction. Boundedly stale reads are not repeatable: two stale
2890 # reads, even if they use the same staleness bound, can execute at
2891 # different timestamps and thus return inconsistent results.
2892 #
2893 # Boundedly stale reads execute in two phases: the first phase
2894 # negotiates a timestamp among all replicas needed to serve the
2895 # read. In the second phase, reads are executed at the negotiated
2896 # timestamp.
2897 #
2898     # As a result of the two-phase execution, bounded staleness reads are
2899 # usually a little slower than comparable exact staleness
2900 # reads. However, they are typically able to return fresher
2901 # results, and are more likely to execute at the closest replica.
2902 #
2903 # Because the timestamp negotiation requires up-front knowledge of
2904 # which rows will be read, it can only be used with single-use
2905 # read-only transactions.
2906 #
2907 # See TransactionOptions.ReadOnly.max_staleness and
2908 # TransactionOptions.ReadOnly.min_read_timestamp.
2909 #
2910 # ### Old Read Timestamps and Garbage Collection
2911 #
2912 # Cloud Spanner continuously garbage collects deleted and overwritten data
2913 # in the background to reclaim storage space. This process is known
2914 # as "version GC". By default, version GC reclaims versions after they
2915 # are one hour old. Because of this, Cloud Spanner cannot perform reads
2916 # at read timestamps more than one hour in the past. This
2917 # restriction also applies to in-progress reads and/or SQL queries whose
2918     # timestamps become too old while executing. Reads and SQL queries with
2919 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
2920 "readWrite": { # Options for read-write transactions. # Transaction may write.
2921 #
2922 # Authorization to begin a read-write transaction requires
2923 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
2924 # on the `session` resource.
2925 },
2926 "readOnly": { # Options for read-only transactions. # Transaction will not write.
2927 #
2928 # Authorization to begin a read-only transaction requires
2929 # `spanner.databases.beginReadOnlyTransaction` permission
2930 # on the `session` resource.
2931 "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
2932 #
2933 # This is useful for requesting fresher data than some previous
2934 # read, or data that is fresh enough to observe the effects of some
2935 # previously committed transaction whose timestamp is known.
2936 #
2937 # Note that this option can only be used in single-use transactions.
2938       "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
2939 # the Transaction message that describes the transaction.
2940       "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
2941 # seconds. Guarantees that all writes that have committed more
2942 # than the specified number of seconds ago are visible. Because
2943 # Cloud Spanner chooses the exact timestamp, this mode works even if
2944 # the client's local clock is substantially skewed from Cloud Spanner
2945 # commit timestamps.
2946 #
2947 # Useful for reading the freshest data available at a nearby
2948 # replica, while bounding the possible staleness if the local
2949 # replica has fallen behind.
2950 #
2951 # Note that this option can only be used in single-use
2952 # transactions.
2953 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
2954 # old. The timestamp is chosen soon after the read is started.
2955 #
2956 # Guarantees that all writes that have committed more than the
2957 # specified number of seconds ago are visible. Because Cloud Spanner
2958 # chooses the exact timestamp, this mode works even if the client's
2959 # local clock is substantially skewed from Cloud Spanner commit
2960 # timestamps.
2961 #
2962 # Useful for reading at nearby replicas without the distributed
2963 # timestamp negotiation overhead of `max_staleness`.
2964       "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
2965 # reads at a specific timestamp are repeatable; the same read at
2966 # the same timestamp always returns the same data. If the
2967 # timestamp is in the future, the read will block until the
2968 # specified timestamp, modulo the read's deadline.
2969 #
2970 # Useful for large scale consistent reads such as mapreduces, or
2971 # for coordinating many reads against a consistent snapshot of the
2972 # data.
2973       "strong": True or False, # Read at a timestamp where all previously committed transactions
2974 # are visible.
2975 },
2976 },
2977 "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
2978 # This is the most efficient way to execute a transaction that
2979 # consists of a single SQL query.
2980 #
2981 #
2982 # Each session can have at most one active transaction at a time. After the
2983 # active transaction is completed, the session can immediately be
2984 # re-used for the next transaction. It is not necessary to create a
2985 # new session for each transaction.
2986 #
2987 # # Transaction Modes
2988 #
2989 # Cloud Spanner supports two transaction modes:
2990 #
2991 # 1. Locking read-write. This type of transaction is the only way
2992 # to write data into Cloud Spanner. These transactions rely on
2993 # pessimistic locking and, if necessary, two-phase commit.
2994 # Locking read-write transactions may abort, requiring the
2995 # application to retry.
2996 #
2997 # 2. Snapshot read-only. This transaction type provides guaranteed
2998 # consistency across several reads, but does not allow
2999 # writes. Snapshot read-only transactions can be configured to
3000 # read at timestamps in the past. Snapshot read-only
3001 # transactions do not need to be committed.
3002 #
3003 # For transactions that only read, snapshot read-only transactions
3004 # provide simpler semantics and are almost always faster. In
3005 # particular, read-only transactions do not take locks, so they do
3006 # not conflict with read-write transactions. As a consequence of not
3007 # taking locks, they also do not abort, so retry loops are not needed.
3008 #
3009 # Transactions may only read/write data in a single database. They
3010 # may, however, read/write data in different tables within that
3011 # database.
3012 #
3013 # ## Locking Read-Write Transactions
3014 #
3015 # Locking transactions may be used to atomically read-modify-write
3016 # data anywhere in a database. This type of transaction is externally
3017 # consistent.
3018 #
3019 # Clients should attempt to minimize the amount of time a transaction
3020 # is active. Faster transactions commit with higher probability
3021 # and cause less contention. Cloud Spanner attempts to keep read locks
3022 # active as long as the transaction continues to do reads, and the
3023 # transaction has not been terminated by
3024 # Commit or
3025 # Rollback. Long periods of
3026 # inactivity at the client may cause Cloud Spanner to release a
3027 # transaction's locks and abort it.
3028 #
3029 # Reads performed within a transaction acquire locks on the data
3030 # being read. Writes can only be done at commit time, after all reads
3031 # have been completed.
3032 # Conceptually, a read-write transaction consists of zero or more
3033 # reads or SQL queries followed by
3034 # Commit. At any time before
3035 # Commit, the client can send a
3036 # Rollback request to abort the
3037 # transaction.
3038 #
3039 # ### Semantics
3040 #
3041 # Cloud Spanner can commit the transaction if all read locks it acquired
3042 # are still valid at commit time, and it is able to acquire write
3043 # locks for all writes. Cloud Spanner can abort the transaction for any
3044 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
3045 # that the transaction has not modified any user data in Cloud Spanner.
3046 #
3047 # Unless the transaction commits, Cloud Spanner makes no guarantees about
3048 # how long the transaction's locks were held for. It is an error to
3049 # use Cloud Spanner locks for any sort of mutual exclusion other than
3050 # between Cloud Spanner transactions themselves.
3051 #
3052 # ### Retrying Aborted Transactions
3053 #
3054 # When a transaction aborts, the application can choose to retry the
3055 # whole transaction again. To maximize the chances of successfully
3056 # committing the retry, the client should execute the retry in the
3057 # same session as the original attempt. The original session's lock
3058 # priority increases with each consecutive abort, meaning that each
3059 # attempt has a slightly better chance of success than the previous.
3060 #
3061 # Under some circumstances (e.g., many transactions attempting to
3062 # modify the same row(s)), a transaction can abort many times in a
3063 # short period before successfully committing. Thus, it is not a good
3064 # idea to cap the number of retries a transaction can attempt;
3065 # instead, it is better to limit the total amount of wall time spent
3066 # retrying.
3067 #
3068 # ### Idle Transactions
3069 #
3070 # A transaction is considered idle if it has no outstanding reads or
3071 # SQL queries and has not started a read or SQL query within the last 10
3072 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
3073 # don't hold on to locks indefinitely. In that case, the commit will
3074 # fail with error `ABORTED`.
3075 #
3076 # If this behavior is undesirable, periodically executing a simple
3077 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
3078 # transaction from becoming idle.
3079 #
3080 # ## Snapshot Read-Only Transactions
3081 #
3082     # Snapshot read-only transactions provide a simpler method than
3083 # locking read-write transactions for doing several consistent
3084 # reads. However, this type of transaction does not support writes.
3085 #
3086 # Snapshot transactions do not take locks. Instead, they work by
3087 # choosing a Cloud Spanner timestamp, then executing all reads at that
3088 # timestamp. Since they do not acquire locks, they do not block
3089 # concurrent read-write transactions.
3090 #
3091 # Unlike locking read-write transactions, snapshot read-only
3092 # transactions never abort. They can fail if the chosen read
3093 # timestamp is garbage collected; however, the default garbage
3094 # collection policy is generous enough that most applications do not
3095 # need to worry about this in practice.
3096 #
3097 # Snapshot read-only transactions do not need to call
3098 # Commit or
3099 # Rollback (and in fact are not
3100 # permitted to do so).
3101 #
3102 # To execute a snapshot transaction, the client specifies a timestamp
3103 # bound, which tells Cloud Spanner how to choose a read timestamp.
3104 #
3105 # The types of timestamp bound are:
3106 #
3107 # - Strong (the default).
3108 # - Bounded staleness.
3109 # - Exact staleness.
3110 #
3111 # If the Cloud Spanner database to be read is geographically distributed,
3112 # stale read-only transactions can execute more quickly than strong
3113     # or read-write transactions, because they are able to execute far
3114 # from the leader replica.
3115 #
3116 # Each type of timestamp bound is discussed in detail below.
3117 #
3118 # ### Strong
3119 #
3120 # Strong reads are guaranteed to see the effects of all transactions
3121 # that have committed before the start of the read. Furthermore, all
3122 # rows yielded by a single read are consistent with each other -- if
3123 # any part of the read observes a transaction, all parts of the read
3124 # see the transaction.
3125 #
3126 # Strong reads are not repeatable: two consecutive strong read-only
3127 # transactions might return inconsistent results if there are
3128 # concurrent writes. If consistency across reads is required, the
3129 # reads should be executed within a transaction or at an exact read
3130 # timestamp.
3131 #
3132 # See TransactionOptions.ReadOnly.strong.
3133 #
3134 # ### Exact Staleness
3135 #
3136 # These timestamp bounds execute reads at a user-specified
3137 # timestamp. Reads at a timestamp are guaranteed to see a consistent
3138 # prefix of the global transaction history: they observe
3139 # modifications done by all transactions with a commit timestamp <=
3140 # the read timestamp, and observe none of the modifications done by
3141 # transactions with a larger commit timestamp. They will block until
3142 # all conflicting transactions that may be assigned commit timestamps
3143 # <= the read timestamp have finished.
3144 #
3145 # The timestamp can either be expressed as an absolute Cloud Spanner commit
3146 # timestamp or a staleness relative to the current time.
3147 #
3148 # These modes do not require a "negotiation phase" to pick a
3149 # timestamp. As a result, they execute slightly faster than the
3150 # equivalent boundedly stale concurrency modes. On the other hand,
3151 # boundedly stale reads usually return fresher results.
3152 #
3153 # See TransactionOptions.ReadOnly.read_timestamp and
3154 # TransactionOptions.ReadOnly.exact_staleness.
3155 #
3156 # ### Bounded Staleness
3157 #
3158 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
3159 # subject to a user-provided staleness bound. Cloud Spanner chooses the
3160 # newest timestamp within the staleness bound that allows execution
3161 # of the reads at the closest available replica without blocking.
3162 #
3163 # All rows yielded are consistent with each other -- if any part of
3164 # the read observes a transaction, all parts of the read see the
3165 # transaction. Boundedly stale reads are not repeatable: two stale
3166 # reads, even if they use the same staleness bound, can execute at
3167 # different timestamps and thus return inconsistent results.
3168 #
3169 # Boundedly stale reads execute in two phases: the first phase
3170 # negotiates a timestamp among all replicas needed to serve the
3171 # read. In the second phase, reads are executed at the negotiated
3172 # timestamp.
3173 #
3174     # As a result of the two-phase execution, bounded staleness reads are
3175 # usually a little slower than comparable exact staleness
3176 # reads. However, they are typically able to return fresher
3177 # results, and are more likely to execute at the closest replica.
3178 #
3179 # Because the timestamp negotiation requires up-front knowledge of
3180 # which rows will be read, it can only be used with single-use
3181 # read-only transactions.
3182 #
3183 # See TransactionOptions.ReadOnly.max_staleness and
3184 # TransactionOptions.ReadOnly.min_read_timestamp.
3185 #
3186 # ### Old Read Timestamps and Garbage Collection
3187 #
3188 # Cloud Spanner continuously garbage collects deleted and overwritten data
3189 # in the background to reclaim storage space. This process is known
3190 # as "version GC". By default, version GC reclaims versions after they
3191 # are one hour old. Because of this, Cloud Spanner cannot perform reads
3192 # at read timestamps more than one hour in the past. This
3193 # restriction also applies to in-progress reads and/or SQL queries whose
3194     # timestamps become too old while executing. Reads and SQL queries with
3195 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
3196 "readWrite": { # Options for read-write transactions. # Transaction may write.
3197 #
3198 # Authorization to begin a read-write transaction requires
3199 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
3200 # on the `session` resource.
3201 },
3202 "readOnly": { # Options for read-only transactions. # Transaction will not write.
3203 #
3204 # Authorization to begin a read-only transaction requires
3205 # `spanner.databases.beginReadOnlyTransaction` permission
3206 # on the `session` resource.
3207 "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
3208 #
3209 # This is useful for requesting fresher data than some previous
3210 # read, or data that is fresh enough to observe the effects of some
3211 # previously committed transaction whose timestamp is known.
3212 #
3213 # Note that this option can only be used in single-use transactions.
3214       "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
3215 # the Transaction message that describes the transaction.
3216       "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
3217 # seconds. Guarantees that all writes that have committed more
3218 # than the specified number of seconds ago are visible. Because
3219 # Cloud Spanner chooses the exact timestamp, this mode works even if
3220 # the client's local clock is substantially skewed from Cloud Spanner
3221 # commit timestamps.
3222 #
3223 # Useful for reading the freshest data available at a nearby
3224 # replica, while bounding the possible staleness if the local
3225 # replica has fallen behind.
3226 #
3227 # Note that this option can only be used in single-use
3228 # transactions.
3229 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
3230 # old. The timestamp is chosen soon after the read is started.
3231 #
3232 # Guarantees that all writes that have committed more than the
3233 # specified number of seconds ago are visible. Because Cloud Spanner
3234 # chooses the exact timestamp, this mode works even if the client's
3235 # local clock is substantially skewed from Cloud Spanner commit
3236 # timestamps.
3237 #
3238 # Useful for reading at nearby replicas without the distributed
3239 # timestamp negotiation overhead of `max_staleness`.
3240       "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
3241 # reads at a specific timestamp are repeatable; the same read at
3242 # the same timestamp always returns the same data. If the
3243 # timestamp is in the future, the read will block until the
3244 # specified timestamp, modulo the read's deadline.
3245 #
3246 # Useful for large scale consistent reads such as mapreduces, or
3247 # for coordinating many reads against a consistent snapshot of the
3248 # data.
3249       "strong": True or False, # Read at a timestamp where all previously committed transactions
3250 # are visible.
3251 },
3252 },
3253 "id": "A String", # Execute the read or SQL query in a previously-started transaction.
3254 },
3255 "resumeToken": "A String", # If this request is resuming a previously interrupted read,
3256 # `resume_token` should be copied from the last
3257 # PartialResultSet yielded before the interruption. Doing this
3258 # enables the new read to resume where the last read left off. The
3259 # rest of the request parameters must exactly match the request
3260 # that yielded this token.
3261 "keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
3262 # primary keys of the rows in table to be yielded, unless index
3263 # is present. If index is present, then key_set instead names
3264 # index keys in index.
3265 #
3266 # Rows are yielded in table primary key order (if index is empty)
3267 # or index key order (if index is non-empty).
3268 #
3269 # It is not an error for the `key_set` to name rows that do not
3270 # exist in the database. Read yields nothing for nonexistent rows.
3271 # the keys are expected to be in the same table or index. The keys need
3272 # not be sorted in any particular way.
3273 #
3274 # If the same key is specified multiple times in the set (for example
3275 # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
3276 # behaves as if the key were only specified once.
3277     "keys": [ # A list of specific keys. Entries in `keys` should have exactly as
3278 # many elements as there are columns in the primary or index key
3279 # with which this `KeySet` is used. Individual key values are
3280 # encoded as described here.
3281 [
3282 "",
3283 ],
3284 ],
3285     "ranges": [ # A list of key ranges. See KeyRange for more information about
3286 # key range specifications.
3287 { # KeyRange represents a range of rows in a table or index.
3288 #
3289 # A range has a start key and an end key. These keys can be open or
3290 # closed, indicating if the range includes rows with that key.
3291 #
3292 # Keys are represented by lists, where the ith value in the list
3293 # corresponds to the ith component of the table or index primary key.
3294 # Individual values are encoded as described here.
3295 #
3296 # For example, consider the following table definition:
3297 #
3298 # CREATE TABLE UserEvents (
3299 # UserName STRING(MAX),
3300 # EventDate STRING(10)
3301 # ) PRIMARY KEY(UserName, EventDate);
3302 #
3303 # The following keys name rows in this table:
3304 #
3305 # "Bob", "2014-09-23"
3306 #
3307 # Since the `UserEvents` table's `PRIMARY KEY` clause names two
3308 # columns, each `UserEvents` key has two elements; the first is the
3309 # `UserName`, and the second is the `EventDate`.
3310 #
3311 # Key ranges with multiple components are interpreted
3312 # lexicographically by component using the table or index key's declared
3313 # sort order. For example, the following range returns all events for
3314 # user `"Bob"` that occurred in the year 2015:
3315 #
3316 # "start_closed": ["Bob", "2015-01-01"]
3317 # "end_closed": ["Bob", "2015-12-31"]
3318 #
3319 # Start and end keys can omit trailing key components. This affects the
3320 # inclusion and exclusion of rows that exactly match the provided key
3321 # components: if the key is closed, then rows that exactly match the
3322 # provided components are included; if the key is open, then rows
3323 # that exactly match are not included.
3324 #
3325 # For example, the following range includes all events for `"Bob"` that
3326 # occurred during and after the year 2000:
3327 #
3328 # "start_closed": ["Bob", "2000-01-01"]
3329 # "end_closed": ["Bob"]
3330 #
3331 # The next example retrieves all events for `"Bob"`:
3332 #
3333 # "start_closed": ["Bob"]
3334 # "end_closed": ["Bob"]
3335 #
3336 # To retrieve events before the year 2000:
3337 #
3338 # "start_closed": ["Bob"]
3339 # "end_open": ["Bob", "2000-01-01"]
3340 #
3341 # The following range includes all rows in the table:
3342 #
3343 # "start_closed": []
3344 # "end_closed": []
3345 #
3346 # This range returns all users whose `UserName` begins with any
3347 # character from A to C:
3348 #
3349 # "start_closed": ["A"]
3350 # "end_open": ["D"]
3351 #
3352 # This range returns all users whose `UserName` begins with B:
3353 #
3354 # "start_closed": ["B"]
3355 # "end_open": ["C"]
3356 #
3357 # Key ranges honor column sort order. For example, suppose a table is
3358 # defined as follows:
3359 #
3360 #     CREATE TABLE DescendingSortedTable (
3361 # Key INT64,
3362 # ...
3363 # ) PRIMARY KEY(Key DESC);
3364 #
3365 # The following range retrieves all rows with key values between 1
3366 # and 100 inclusive:
3367 #
3368 # "start_closed": ["100"]
3369 # "end_closed": ["1"]
3370 #
3371 # Note that 100 is passed as the start, and 1 is passed as the end,
3372 # because `Key` is a descending column in the schema.
3373 "endOpen": [ # If the end is open, then the range excludes rows whose first
3374 # `len(end_open)` key columns exactly match `end_open`.
3375 "",
3376 ],
3377 "startOpen": [ # If the start is open, then the range excludes rows whose first
3378 # `len(start_open)` key columns exactly match `start_open`.
3379 "",
3380 ],
3381 "endClosed": [ # If the end is closed, then the range includes all rows whose
3382 # first `len(end_closed)` key columns exactly match `end_closed`.
3383 "",
3384 ],
3385 "startClosed": [ # If the start is closed, then the range includes all rows whose
3386 # first `len(start_closed)` key columns exactly match `start_closed`.
3387 "",
3388 ],
3389 },
3390 ],
3391     "all": True or False, # For convenience `all` can be set to `true` to indicate that this
3392 # `KeySet` matches all keys in the table or index. Note that any keys
3393 # specified in `keys` or `ranges` are only yielded once.
3394 },
3395 "limit": "A String", # If greater than zero, only the first `limit` rows are yielded. If `limit`
3396 # is zero, the default is no limit.
3397 "table": "A String", # Required. The name of the table in the database to be read.
3398     "columns": [ # The columns of the table to be returned for each row matching
3399 # this request.
3400 "A String",
3401 ],
3402 }
3403
3404 x__xgafv: string, V1 error format.
3405 Allowed values
3406 1 - v1 error format
3407 2 - v2 error format
3408
3409Returns:
3410 An object of the form:
3411
3412 { # Results from Read or
3413 # ExecuteSql.
3414 "rows": [ # Each element in `rows` is a row whose format is defined by
3415 # metadata.row_type. The ith element
3416 # in each row matches the ith field in
3417 # metadata.row_type. Elements are
3418 # encoded based on type as described
3419 # here.
3420 [
3421 "",
3422 ],
3423 ],
3424 "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the query that produced this
3425 # result set. These can be requested by setting
3426 # ExecuteSqlRequest.query_mode.
3427 "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
3428 "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
3429 # with the plan root. Each PlanNode's `id` corresponds to its index in
3430 # `plan_nodes`.
3431 { # Node information for nodes appearing in a QueryPlan.plan_nodes.
3432           "index": 42, # The `PlanNode`'s index in the node list.
3433 "kind": "A String", # Used to determine the type of node. May be needed for visualizing
3434               # different kinds of nodes differently. For example, if the node is a
3435 # SCALAR node, it will have a condensed representation
3436 # which can be used to directly embed a description of the node in its
3437 # parent.
3438 "displayName": "A String", # The display name for the node.
3439 "executionStats": { # The execution statistics associated with the node, contained in a group of
3440 # key-value pairs. Only present if the plan was returned as a result of a
3441 # profile query. For example, number of executions, number of rows/time per
3442 # execution etc.
3443 "a_key": "", # Properties of the object.
3444 },
3445 "childLinks": [ # List of child node `index`es and their relationship to this parent.
3446 { # Metadata associated with a parent-child relationship appearing in a
3447 # PlanNode.
3448 "variable": "A String", # Only present if the child node is SCALAR and corresponds
3449 # to an output variable of the parent node. The field carries the name of
3450 # the output variable.
3451 # For example, a `TableScan` operator that reads rows from a table will
3452 # have child links to the `SCALAR` nodes representing the output variables
3453 # created for each column that is read by the operator. The corresponding
3454 # `variable` fields will be set to the variable names assigned to the
3455 # columns.
3456               "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
3457 # distinguish between the build child and the probe child, or in the case
3458 # of the child being an output variable, to represent the tag associated
3459 # with the output variable.
3460               "childIndex": 42, # The node to which the link points.
3461             },
3462 ],
3463 "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
3464 # `SCALAR` PlanNode(s).
3465 "subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases
3466 # where the `description` string of this node references a `SCALAR`
3467 # subquery contained in the expression subtree rooted at this node. The
3468 # referenced `SCALAR` subquery may not necessarily be a direct child of
3469 # this node.
3470 "a_key": 42,
3471 },
3472 "description": "A String", # A string representation of the expression subtree rooted at this node.
3473 },
3474 "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
3475 # For example, a Parameter Reference node could have the following
3476 # information in its metadata:
3477 #
3478 # {
3479 # "parameter_reference": "param1",
3480 # "parameter_type": "array"
3481 # }
3482 "a_key": "", # Properties of the object.
3483 },
3484 },
3485 ],
3486 },
3487 "queryStats": { # Aggregated statistics from the execution of the query. Only present when
3488 # the query is profiled. For example, a query could return the statistics as
3489 # follows:
3490 #
3491 # {
3492 # "rows_returned": "3",
3493 # "elapsed_time": "1.22 secs",
3494 # "cpu_time": "1.19 secs"
3495 # }
3496 "a_key": "", # Properties of the object.
3497 },
3498 },
3499 "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
3500 "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
3501 # set. For example, a SQL query like `"SELECT UserId, UserName FROM
3502 # Users"` could return a `row_type` value like:
3503 #
3504 # "fields": [
3505 # { "name": "UserId", "type": { "code": "INT64" } },
3506 # { "name": "UserName", "type": { "code": "STRING" } },
3507 # ]
3508 "fields": [ # The list of fields that make up this struct. Order is
3509 # significant, because values of this struct type are represented as
3510 # lists, where the order of field values matches the order of
3511 # fields in the StructType. In turn, the order of fields
3512 # matches the order of columns in a read request, or the order of
3513 # fields in the `SELECT` clause of a query.
3514 { # Message representing a single field of a struct.
3515 "type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field.
3516 # table cell or returned from an SQL query.
3517 "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
3518 # provides type information for the struct's fields.
3519 "code": "A String", # Required. The TypeCode for this type.
3520 "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
3521 # is the type of the array elements.
3522 },
3523 "name": "A String", # The name of the field. For reads, this is the column name. For
3524 # SQL queries, it is the column alias (e.g., `"Word"` in the
3525 # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
3526 # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
3527               # columns might have an empty name (e.g., `"SELECT
3528 # UPPER(ColName)"`). Note that a query result can contain
3529 # multiple fields with the same name.
3530 },
3531 ],
3532 },
3533 "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
3534 # information about the new transaction is yielded here.
3535 "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
3536 # for the transaction. Not returned by default: see
3537 # TransactionOptions.ReadOnly.return_read_timestamp.
3538 "id": "A String", # `id` may be used to identify the transaction in subsequent
3539 # Read,
3540 # ExecuteSql,
3541 # Commit, or
3542 # Rollback calls.
3543 #
3544 # Single-use read-only transactions do not have IDs, because
3545 # single-use transactions do not support multiple requests.
3546 },
3547 },
3548 }</pre>
3549</div>
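The `KeySet`/`KeyRange` semantics documented above can be sketched as a plain request-body dict. This is an illustrative sketch only: the `UserEvents` table, column names, and key values come from the examples in the comments, and the field names follow the documented schema.

```python
# Hypothetical read request body for the UserEvents table used in the
# KeyRange examples above. Table name, columns, and key values are
# illustrative; field names follow the documented KeySet/KeyRange schema.
read_body = {
    "table": "UserEvents",
    "columns": ["UserName", "EventDate"],
    "keySet": {
        # Each entry in `keys` has one value per primary-key column.
        "keys": [["Bob", "2014-09-23"]],
        # Trailing key components may be omitted: this closed range covers
        # all of Bob's events during and after the year 2000.
        "ranges": [
            {"startClosed": ["Bob", "2000-01-01"], "endClosed": ["Bob"]},
        ],
        "all": False,
    },
}
```

Note that keys specified both in `keys` and in an overlapping range are only yielded once, so the set above behaves like a mathematical set of rows.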
3550
3551<div class="method">
3552 <code class="details" id="rollback">rollback(session, body, x__xgafv=None)</code>
3553 <pre>Rolls back a transaction, releasing any locks it holds. It is a good
3554idea to call this for any transaction that includes one or more
3555Read or ExecuteSql requests and
3556ultimately decides not to commit.
3557
3558`Rollback` returns `OK` if it successfully aborts the transaction, the
3559transaction was already aborted, or the transaction is not
3560found. `Rollback` never returns `ABORTED`.
3561
3562Args:
3563 session: string, Required. The session in which the transaction to roll back is running. (required)
3564 body: object, The request body. (required)
3565 The object takes the form of:
3566
3567{ # The request for Rollback.
3568 "transactionId": "A String", # Required. The transaction to roll back.
3569 }
3570
3571 x__xgafv: string, V1 error format.
3572 Allowed values
3573 1 - v1 error format
3574 2 - v2 error format
3575
3576Returns:
3577 An object of the form:
3578
3579 { # A generic empty message that you can re-use to avoid defining duplicated
3580 # empty messages in your APIs. A typical example is to use it as the request
3581 # or the response type of an API method. For instance:
3582 #
3583 # service Foo {
3584 # rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
3585 # }
3586 #
3587 # The JSON representation for `Empty` is empty JSON object `{}`.
3588 }</pre>
3589</div>
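As a minimal sketch of issuing `rollback`, the helper below builds the one-field request body documented above; the commented call chain assumes an authenticated `googleapiclient` service object and uses placeholder resource names, so it is not executed here.

```python
def rollback_body(transaction_id):
    """Build the Rollback request body documented above (a single field)."""
    return {"transactionId": transaction_id}

# With an authenticated client, the call would look like (not executed here;
# the session path below is a placeholder):
#
#   from googleapiclient.discovery import build
#   service = build('spanner', 'v1')
#   service.projects().instances().databases().sessions().rollback(
#       session='projects/PROJECT/instances/INSTANCE/databases/DB/sessions/S',
#       body=rollback_body(txn_id)).execute()
#
# A successful rollback returns the Empty message, i.e. {}.
```

Because `Rollback` never returns `ABORTED`, no retry loop is needed around this call.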
3590
3591<div class="method">
3592 <code class="details" id="streamingRead">streamingRead(session, body, x__xgafv=None)</code>
3593 <pre>Like Read, except returns the result set as a
3594stream. Unlike Read, there is no limit on the
3595size of the returned result set. However, no individual row in
3596the result set can exceed 100 MiB, and no column value can exceed
359710 MiB.
3598
3599Args:
3600 session: string, Required. The session in which the read should be performed. (required)
3601 body: object, The request body. (required)
3602 The object takes the form of:
3603
3604{ # The request for Read and
3605 # StreamingRead.
3606 "index": "A String", # If non-empty, the name of an index on table. This index is
3607 # used instead of the table primary key when interpreting key_set
3608 # and sorting result rows. See key_set for further information.
3609 "transaction": { # This message is used to select the transaction in which a # The transaction to use. If none is provided, the default is a
3610 # temporary read-only transaction with strong concurrency.
3611 # Read or
3612 # ExecuteSql call runs.
3613 #
3614 # See TransactionOptions for more information about transactions.
3615 "begin": { # # Transactions # Begin a new transaction and execute this read or SQL query in
3616 # it. The transaction ID of the new transaction is returned in
3617 # ResultSetMetadata.transaction, which is a Transaction.
3618 #
3619 #
3620 # Each session can have at most one active transaction at a time. After the
3621 # active transaction is completed, the session can immediately be
3622 # re-used for the next transaction. It is not necessary to create a
3623 # new session for each transaction.
3624 #
3625 # # Transaction Modes
3626 #
3627 # Cloud Spanner supports two transaction modes:
3628 #
3629 # 1. Locking read-write. This type of transaction is the only way
3630 # to write data into Cloud Spanner. These transactions rely on
3631 # pessimistic locking and, if necessary, two-phase commit.
3632 # Locking read-write transactions may abort, requiring the
3633 # application to retry.
3634 #
3635 # 2. Snapshot read-only. This transaction type provides guaranteed
3636 # consistency across several reads, but does not allow
3637 # writes. Snapshot read-only transactions can be configured to
3638 # read at timestamps in the past. Snapshot read-only
3639 # transactions do not need to be committed.
3640 #
3641 # For transactions that only read, snapshot read-only transactions
3642 # provide simpler semantics and are almost always faster. In
3643 # particular, read-only transactions do not take locks, so they do
3644 # not conflict with read-write transactions. As a consequence of not
3645 # taking locks, they also do not abort, so retry loops are not needed.
3646 #
3647 # Transactions may only read/write data in a single database. They
3648 # may, however, read/write data in different tables within that
3649 # database.
3650 #
3651 # ## Locking Read-Write Transactions
3652 #
3653 # Locking transactions may be used to atomically read-modify-write
3654 # data anywhere in a database. This type of transaction is externally
3655 # consistent.
3656 #
3657 # Clients should attempt to minimize the amount of time a transaction
3658 # is active. Faster transactions commit with higher probability
3659 # and cause less contention. Cloud Spanner attempts to keep read locks
3660 # active as long as the transaction continues to do reads, and the
3661 # transaction has not been terminated by
3662 # Commit or
3663 # Rollback. Long periods of
3664 # inactivity at the client may cause Cloud Spanner to release a
3665 # transaction's locks and abort it.
3666 #
3667 # Reads performed within a transaction acquire locks on the data
3668 # being read. Writes can only be done at commit time, after all reads
3669 # have been completed.
3670 # Conceptually, a read-write transaction consists of zero or more
3671 # reads or SQL queries followed by
3672 # Commit. At any time before
3673 # Commit, the client can send a
3674 # Rollback request to abort the
3675 # transaction.
3676 #
3677 # ### Semantics
3678 #
3679 # Cloud Spanner can commit the transaction if all read locks it acquired
3680 # are still valid at commit time, and it is able to acquire write
3681 # locks for all writes. Cloud Spanner can abort the transaction for any
3682 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
3683 # that the transaction has not modified any user data in Cloud Spanner.
3684 #
3685 # Unless the transaction commits, Cloud Spanner makes no guarantees about
3686 # how long the transaction's locks were held for. It is an error to
3687 # use Cloud Spanner locks for any sort of mutual exclusion other than
3688 # between Cloud Spanner transactions themselves.
3689 #
3690 # ### Retrying Aborted Transactions
3691 #
3692 # When a transaction aborts, the application can choose to retry the
3693 # whole transaction again. To maximize the chances of successfully
3694 # committing the retry, the client should execute the retry in the
3695 # same session as the original attempt. The original session's lock
3696 # priority increases with each consecutive abort, meaning that each
3697 # attempt has a slightly better chance of success than the previous.
3698 #
3699 # Under some circumstances (e.g., many transactions attempting to
3700 # modify the same row(s)), a transaction can abort many times in a
3701 # short period before successfully committing. Thus, it is not a good
3702 # idea to cap the number of retries a transaction can attempt;
3703 # instead, it is better to limit the total amount of wall time spent
3704 # retrying.
3705 #
3706 # ### Idle Transactions
3707 #
3708 # A transaction is considered idle if it has no outstanding reads or
3709 # SQL queries and has not started a read or SQL query within the last 10
3710 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
3711 # don't hold on to locks indefinitely. In that case, the commit will
3712 # fail with error `ABORTED`.
3713 #
3714 # If this behavior is undesirable, periodically executing a simple
3715 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
3716 # transaction from becoming idle.
3717 #
3718 # ## Snapshot Read-Only Transactions
3719 #
3720 # Snapshot read-only transactions provide a simpler method than
3721 # locking read-write transactions for doing several consistent
3722 # reads. However, this type of transaction does not support writes.
3723 #
3724 # Snapshot transactions do not take locks. Instead, they work by
3725 # choosing a Cloud Spanner timestamp, then executing all reads at that
3726 # timestamp. Since they do not acquire locks, they do not block
3727 # concurrent read-write transactions.
3728 #
3729 # Unlike locking read-write transactions, snapshot read-only
3730 # transactions never abort. They can fail if the chosen read
3731 # timestamp is garbage collected; however, the default garbage
3732 # collection policy is generous enough that most applications do not
3733 # need to worry about this in practice.
3734 #
3735 # Snapshot read-only transactions do not need to call
3736 # Commit or
3737 # Rollback (and in fact are not
3738 # permitted to do so).
3739 #
3740 # To execute a snapshot transaction, the client specifies a timestamp
3741 # bound, which tells Cloud Spanner how to choose a read timestamp.
3742 #
3743 # The types of timestamp bound are:
3744 #
3745 # - Strong (the default).
3746 # - Bounded staleness.
3747 # - Exact staleness.
3748 #
3749 # If the Cloud Spanner database to be read is geographically distributed,
3750 # stale read-only transactions can execute more quickly than strong
3751 # or read-write transaction, because they are able to execute far
3752 # from the leader replica.
3753 #
3754 # Each type of timestamp bound is discussed in detail below.
3755 #
3756 # ### Strong
3757 #
3758 # Strong reads are guaranteed to see the effects of all transactions
3759 # that have committed before the start of the read. Furthermore, all
3760 # rows yielded by a single read are consistent with each other -- if
3761 # any part of the read observes a transaction, all parts of the read
3762 # see the transaction.
3763 #
3764 # Strong reads are not repeatable: two consecutive strong read-only
3765 # transactions might return inconsistent results if there are
3766 # concurrent writes. If consistency across reads is required, the
3767 # reads should be executed within a transaction or at an exact read
3768 # timestamp.
3769 #
3770 # See TransactionOptions.ReadOnly.strong.
3771 #
3772 # ### Exact Staleness
3773 #
3774 # These timestamp bounds execute reads at a user-specified
3775 # timestamp. Reads at a timestamp are guaranteed to see a consistent
3776 # prefix of the global transaction history: they observe
3777 # modifications done by all transactions with a commit timestamp <=
3778 # the read timestamp, and observe none of the modifications done by
3779 # transactions with a larger commit timestamp. They will block until
3780 # all conflicting transactions that may be assigned commit timestamps
3781 # <= the read timestamp have finished.
3782 #
3783 # The timestamp can either be expressed as an absolute Cloud Spanner commit
3784 # timestamp or a staleness relative to the current time.
3785 #
3786 # These modes do not require a "negotiation phase" to pick a
3787 # timestamp. As a result, they execute slightly faster than the
3788 # equivalent boundedly stale concurrency modes. On the other hand,
3789 # boundedly stale reads usually return fresher results.
3790 #
3791 # See TransactionOptions.ReadOnly.read_timestamp and
3792 # TransactionOptions.ReadOnly.exact_staleness.
3793 #
3794 # ### Bounded Staleness
3795 #
3796 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
3797 # subject to a user-provided staleness bound. Cloud Spanner chooses the
3798 # newest timestamp within the staleness bound that allows execution
3799 # of the reads at the closest available replica without blocking.
3800 #
3801 # All rows yielded are consistent with each other -- if any part of
3802 # the read observes a transaction, all parts of the read see the
3803 # transaction. Boundedly stale reads are not repeatable: two stale
3804 # reads, even if they use the same staleness bound, can execute at
3805 # different timestamps and thus return inconsistent results.
3806 #
3807 # Boundedly stale reads execute in two phases: the first phase
3808 # negotiates a timestamp among all replicas needed to serve the
3809 # read. In the second phase, reads are executed at the negotiated
3810 # timestamp.
3811 #
3812 # As a result of the two-phase execution, bounded staleness reads are
3813 # usually a little slower than comparable exact staleness
3814 # reads. However, they are typically able to return fresher
3815 # results, and are more likely to execute at the closest replica.
3816 #
3817 # Because the timestamp negotiation requires up-front knowledge of
3818 # which rows will be read, it can only be used with single-use
3819 # read-only transactions.
3820 #
3821 # See TransactionOptions.ReadOnly.max_staleness and
3822 # TransactionOptions.ReadOnly.min_read_timestamp.
3823 #
3824 # ### Old Read Timestamps and Garbage Collection
3825 #
3826 # Cloud Spanner continuously garbage collects deleted and overwritten data
3827 # in the background to reclaim storage space. This process is known
3828 # as "version GC". By default, version GC reclaims versions after they
3829 # are one hour old. Because of this, Cloud Spanner cannot perform reads
3830 # at read timestamps more than one hour in the past. This
3831 # restriction also applies to in-progress reads and/or SQL queries whose
3832 # timestamps become too old while executing. Reads and SQL queries with
3833 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
3834 "readWrite": { # Options for read-write transactions. # Transaction may write.
3835 #
3836 # Authorization to begin a read-write transaction requires
3837 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
3838 # on the `session` resource.
3839 },
3840 "readOnly": { # Options for read-only transactions. # Transaction will not write.
3841 #
3842 # Authorization to begin a read-only transaction requires
3843 # `spanner.databases.beginReadOnlyTransaction` permission
3844 # on the `session` resource.
3845 "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
3846 #
3847 # This is useful for requesting fresher data than some previous
3848 # read, or data that is fresh enough to observe the effects of some
3849 # previously committed transaction whose timestamp is known.
3850 #
3851 # Note that this option can only be used in single-use transactions.
3852       "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
3853           # the Transaction message that describes the transaction.
3854       "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
3855 # seconds. Guarantees that all writes that have committed more
3856 # than the specified number of seconds ago are visible. Because
3857 # Cloud Spanner chooses the exact timestamp, this mode works even if
3858 # the client's local clock is substantially skewed from Cloud Spanner
3859 # commit timestamps.
3860 #
3861 # Useful for reading the freshest data available at a nearby
3862 # replica, while bounding the possible staleness if the local
3863 # replica has fallen behind.
3864 #
3865 # Note that this option can only be used in single-use
3866 # transactions.
3867 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
3868 # old. The timestamp is chosen soon after the read is started.
3869 #
3870 # Guarantees that all writes that have committed more than the
3871 # specified number of seconds ago are visible. Because Cloud Spanner
3872 # chooses the exact timestamp, this mode works even if the client's
3873 # local clock is substantially skewed from Cloud Spanner commit
3874 # timestamps.
3875 #
3876 # Useful for reading at nearby replicas without the distributed
3877 # timestamp negotiation overhead of `max_staleness`.
3878       "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
3879 # reads at a specific timestamp are repeatable; the same read at
3880 # the same timestamp always returns the same data. If the
3881 # timestamp is in the future, the read will block until the
3882 # specified timestamp, modulo the read's deadline.
3883 #
3884 # Useful for large scale consistent reads such as mapreduces, or
3885 # for coordinating many reads against a consistent snapshot of the
3886 # data.
3887       "strong": True or False, # Read at a timestamp where all previously committed transactions
3888 # are visible.
3889 },
3890 },
3891 "singleUse": { # # Transactions # Execute the read or SQL query in a temporary transaction.
3892 # This is the most efficient way to execute a transaction that
3893 # consists of a single SQL query.
3894 #
3895 #
3896 # Each session can have at most one active transaction at a time. After the
3897 # active transaction is completed, the session can immediately be
3898 # re-used for the next transaction. It is not necessary to create a
3899 # new session for each transaction.
3900 #
3901 # # Transaction Modes
3902 #
3903 # Cloud Spanner supports two transaction modes:
3904 #
3905 # 1. Locking read-write. This type of transaction is the only way
3906 # to write data into Cloud Spanner. These transactions rely on
3907 # pessimistic locking and, if necessary, two-phase commit.
3908 # Locking read-write transactions may abort, requiring the
3909 # application to retry.
3910 #
3911 # 2. Snapshot read-only. This transaction type provides guaranteed
3912 # consistency across several reads, but does not allow
3913 # writes. Snapshot read-only transactions can be configured to
3914 # read at timestamps in the past. Snapshot read-only
3915 # transactions do not need to be committed.
3916 #
3917 # For transactions that only read, snapshot read-only transactions
3918 # provide simpler semantics and are almost always faster. In
3919 # particular, read-only transactions do not take locks, so they do
3920 # not conflict with read-write transactions. As a consequence of not
3921 # taking locks, they also do not abort, so retry loops are not needed.
3922 #
3923 # Transactions may only read/write data in a single database. They
3924 # may, however, read/write data in different tables within that
3925 # database.
3926 #
3927 # ## Locking Read-Write Transactions
3928 #
3929 # Locking transactions may be used to atomically read-modify-write
3930 # data anywhere in a database. This type of transaction is externally
3931 # consistent.
3932 #
3933 # Clients should attempt to minimize the amount of time a transaction
3934 # is active. Faster transactions commit with higher probability
3935 # and cause less contention. Cloud Spanner attempts to keep read locks
3936 # active as long as the transaction continues to do reads, and the
3937 # transaction has not been terminated by
3938 # Commit or
3939 # Rollback. Long periods of
3940 # inactivity at the client may cause Cloud Spanner to release a
3941 # transaction's locks and abort it.
3942 #
3943 # Reads performed within a transaction acquire locks on the data
3944 # being read. Writes can only be done at commit time, after all reads
3945 # have been completed.
3946 # Conceptually, a read-write transaction consists of zero or more
3947 # reads or SQL queries followed by
3948 # Commit. At any time before
3949 # Commit, the client can send a
3950 # Rollback request to abort the
3951 # transaction.
3952 #
3953 # ### Semantics
3954 #
3955 # Cloud Spanner can commit the transaction if all read locks it acquired
3956 # are still valid at commit time, and it is able to acquire write
3957 # locks for all writes. Cloud Spanner can abort the transaction for any
3958 # reason. If a commit attempt returns `ABORTED`, Cloud Spanner guarantees
3959 # that the transaction has not modified any user data in Cloud Spanner.
3960 #
3961 # Unless the transaction commits, Cloud Spanner makes no guarantees about
3962 # how long the transaction's locks were held for. It is an error to
3963 # use Cloud Spanner locks for any sort of mutual exclusion other than
3964 # between Cloud Spanner transactions themselves.
3965 #
3966 # ### Retrying Aborted Transactions
3967 #
3968 # When a transaction aborts, the application can choose to retry the
3969 # whole transaction again. To maximize the chances of successfully
3970 # committing the retry, the client should execute the retry in the
3971 # same session as the original attempt. The original session's lock
3972 # priority increases with each consecutive abort, meaning that each
3973 # attempt has a slightly better chance of success than the previous.
3974 #
3975 # Under some circumstances (e.g., many transactions attempting to
3976 # modify the same row(s)), a transaction can abort many times in a
3977 # short period before successfully committing. Thus, it is not a good
3978 # idea to cap the number of retries a transaction can attempt;
3979 # instead, it is better to limit the total amount of wall time spent
3980 # retrying.
3981 #
3982 # ### Idle Transactions
3983 #
3984 # A transaction is considered idle if it has no outstanding reads or
3985 # SQL queries and has not started a read or SQL query within the last 10
3986 # seconds. Idle transactions can be aborted by Cloud Spanner so that they
3987 # don't hold on to locks indefinitely. In that case, the commit will
3988 # fail with error `ABORTED`.
3989 #
3990 # If this behavior is undesirable, periodically executing a simple
3991 # SQL query in the transaction (e.g., `SELECT 1`) prevents the
3992 # transaction from becoming idle.
3993 #
3994 # ## Snapshot Read-Only Transactions
3995 #
3996 # Snapshot read-only transactions provide a simpler method than
3997 # locking read-write transactions for doing several consistent
3998 # reads. However, this type of transaction does not support writes.
3999 #
4000 # Snapshot transactions do not take locks. Instead, they work by
4001 # choosing a Cloud Spanner timestamp, then executing all reads at that
4002 # timestamp. Since they do not acquire locks, they do not block
4003 # concurrent read-write transactions.
4004 #
4005 # Unlike locking read-write transactions, snapshot read-only
4006 # transactions never abort. They can fail if the chosen read
4007 # timestamp is garbage collected; however, the default garbage
4008 # collection policy is generous enough that most applications do not
4009 # need to worry about this in practice.
4010 #
4011 # Snapshot read-only transactions do not need to call
4012 # Commit or
4013 # Rollback (and in fact are not
4014 # permitted to do so).
4015 #
4016 # To execute a snapshot transaction, the client specifies a timestamp
4017 # bound, which tells Cloud Spanner how to choose a read timestamp.
4018 #
4019 # The types of timestamp bound are:
4020 #
4021 # - Strong (the default).
4022 # - Bounded staleness.
4023 # - Exact staleness.
4024 #
4025 # If the Cloud Spanner database to be read is geographically distributed,
4026 # stale read-only transactions can execute more quickly than strong
4027 # or read-write transactions, because they are able to execute far
4028 # from the leader replica.
4029 #
4030 # Each type of timestamp bound is discussed in detail below.
4031 #
4032 # ### Strong
4033 #
4034 # Strong reads are guaranteed to see the effects of all transactions
4035 # that have committed before the start of the read. Furthermore, all
4036 # rows yielded by a single read are consistent with each other -- if
4037 # any part of the read observes a transaction, all parts of the read
4038 # see the transaction.
4039 #
4040 # Strong reads are not repeatable: two consecutive strong read-only
4041 # transactions might return inconsistent results if there are
4042 # concurrent writes. If consistency across reads is required, the
4043 # reads should be executed within a transaction or at an exact read
4044 # timestamp.
4045 #
4046 # See TransactionOptions.ReadOnly.strong.
4047 #
4048 # ### Exact Staleness
4049 #
4050 # These timestamp bounds execute reads at a user-specified
4051 # timestamp. Reads at a timestamp are guaranteed to see a consistent
4052 # prefix of the global transaction history: they observe
4053 # modifications done by all transactions with a commit timestamp <=
4054 # the read timestamp, and observe none of the modifications done by
4055 # transactions with a larger commit timestamp. They will block until
4056 # all conflicting transactions that may be assigned commit timestamps
4057 # <= the read timestamp have finished.
4058 #
4059 # The timestamp can either be expressed as an absolute Cloud Spanner commit
4060 # timestamp or a staleness relative to the current time.
4061 #
4062 # These modes do not require a "negotiation phase" to pick a
4063 # timestamp. As a result, they execute slightly faster than the
4064 # equivalent boundedly stale concurrency modes. On the other hand,
4065 # boundedly stale reads usually return fresher results.
4066 #
4067 # See TransactionOptions.ReadOnly.read_timestamp and
4068 # TransactionOptions.ReadOnly.exact_staleness.
4069 #
4070 # ### Bounded Staleness
4071 #
4072 # Bounded staleness modes allow Cloud Spanner to pick the read timestamp,
4073 # subject to a user-provided staleness bound. Cloud Spanner chooses the
4074 # newest timestamp within the staleness bound that allows execution
4075 # of the reads at the closest available replica without blocking.
4076 #
4077 # All rows yielded are consistent with each other -- if any part of
4078 # the read observes a transaction, all parts of the read see the
4079 # transaction. Boundedly stale reads are not repeatable: two stale
4080 # reads, even if they use the same staleness bound, can execute at
4081 # different timestamps and thus return inconsistent results.
4082 #
4083 # Boundedly stale reads execute in two phases: the first phase
4084 # negotiates a timestamp among all replicas needed to serve the
4085 # read. In the second phase, reads are executed at the negotiated
4086 # timestamp.
4087 #
4088 # As a result of the two-phase execution, bounded staleness reads are
4089 # usually a little slower than comparable exact staleness
4090 # reads. However, they are typically able to return fresher
4091 # results, and are more likely to execute at the closest replica.
4092 #
4093 # Because the timestamp negotiation requires up-front knowledge of
4094 # which rows will be read, it can only be used with single-use
4095 # read-only transactions.
4096 #
4097 # See TransactionOptions.ReadOnly.max_staleness and
4098 # TransactionOptions.ReadOnly.min_read_timestamp.
4099 #
4100 # ### Old Read Timestamps and Garbage Collection
4101 #
4102 # Cloud Spanner continuously garbage collects deleted and overwritten data
4103 # in the background to reclaim storage space. This process is known
4104 # as "version GC". By default, version GC reclaims versions after they
4105 # are one hour old. Because of this, Cloud Spanner cannot perform reads
4106 # at read timestamps more than one hour in the past. This
4107 # restriction also applies to in-progress reads and/or SQL queries whose
4108 # timestamps become too old while executing. Reads and SQL queries with
4109 # too-old read timestamps fail with the error `FAILED_PRECONDITION`.
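Concretely, each timestamp bound described above corresponds to one field of the `readOnly` transaction options documented below. A sketch of the possible request-body fragments; the duration and timestamp literals are illustrative (durations use the JSON `Duration` encoding, timestamps are RFC 3339 strings):

```python
# One TransactionOptions fragment per timestamp bound.
strong = {"readOnly": {"strong": True}}
bounded_staleness = {"readOnly": {"maxStaleness": "10s"}}
exact_staleness = {"readOnly": {"exactStaleness": "15s"}}
exact_timestamp = {"readOnly": {"readTimestamp": "2017-01-15T01:30:15.045123456Z"}}
min_fresh = {"readOnly": {"minReadTimestamp": "2017-01-15T01:30:15.045123456Z"}}
```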
4110 "readWrite": { # Options for read-write transactions. # Transaction may write.
4111 #
4112 # Authorization to begin a read-write transaction requires
4113 # `spanner.databases.beginOrRollbackReadWriteTransaction` permission
4114 # on the `session` resource.
4115 },
4116 "readOnly": { # Options for read-only transactions. # Transaction will not write.
4117 #
4118 # Authorization to begin a read-only transaction requires
4119 # `spanner.databases.beginReadOnlyTransaction` permission
4120 # on the `session` resource.
4121 "minReadTimestamp": "A String", # Executes all reads at a timestamp >= `min_read_timestamp`.
4122 #
4123 # This is useful for requesting fresher data than some previous
4124 # read, or data that is fresh enough to observe the effects of some
4125 # previously committed transaction whose timestamp is known.
4126 #
4127 # Note that this option can only be used in single-use transactions.
4128         "returnReadTimestamp": True or False, # If true, the Cloud Spanner-selected read timestamp is included in
4129 # the Transaction message that describes the transaction.
4130         "maxStaleness": "A String", # Read data at a timestamp >= `NOW - max_staleness`
4131 # seconds. Guarantees that all writes that have committed more
4132 # than the specified number of seconds ago are visible. Because
4133 # Cloud Spanner chooses the exact timestamp, this mode works even if
4134 # the client's local clock is substantially skewed from Cloud Spanner
4135 # commit timestamps.
4136 #
4137 # Useful for reading the freshest data available at a nearby
4138 # replica, while bounding the possible staleness if the local
4139 # replica has fallen behind.
4140 #
4141 # Note that this option can only be used in single-use
4142 # transactions.
4143 "exactStaleness": "A String", # Executes all reads at a timestamp that is `exact_staleness`
4144 # old. The timestamp is chosen soon after the read is started.
4145 #
4146 # Guarantees that all writes that have committed more than the
4147 # specified number of seconds ago are visible. Because Cloud Spanner
4148 # chooses the exact timestamp, this mode works even if the client's
4149 # local clock is substantially skewed from Cloud Spanner commit
4150 # timestamps.
4151 #
4152 # Useful for reading at nearby replicas without the distributed
4153 # timestamp negotiation overhead of `max_staleness`.
4154         "readTimestamp": "A String", # Executes all reads at the given timestamp. Unlike other modes,
4155 # reads at a specific timestamp are repeatable; the same read at
4156 # the same timestamp always returns the same data. If the
4157 # timestamp is in the future, the read will block until the
4158 # specified timestamp, modulo the read's deadline.
4159 #
4160 # Useful for large scale consistent reads such as mapreduces, or
4161 # for coordinating many reads against a consistent snapshot of the
4162 # data.
4163         "strong": True or False, # Read at a timestamp where all previously committed transactions
4164 # are visible.
4165 },
4166 },
4167 "id": "A String", # Execute the read or SQL query in a previously-started transaction.
4168 },
4169 "resumeToken": "A String", # If this request is resuming a previously interrupted read,
4170 # `resume_token` should be copied from the last
4171 # PartialResultSet yielded before the interruption. Doing this
4172 # enables the new read to resume where the last read left off. The
4173 # rest of the request parameters must exactly match the request
4174 # that yielded this token.
4175 "keySet": { # `KeySet` defines a collection of Cloud Spanner keys and/or key ranges. All # Required. `key_set` identifies the rows to be yielded. `key_set` names the
4176 # primary keys of the rows in table to be yielded, unless index
4177 # is present. If index is present, then key_set instead names
4178 # index keys in index.
4179 #
4180 # Rows are yielded in table primary key order (if index is empty)
4181 # or index key order (if index is non-empty).
4182 #
4183 # It is not an error for the `key_set` to name rows that do not
4184 # exist in the database. Read yields nothing for nonexistent rows.
4185 # the keys are expected to be in the same table or index. The keys need
4186 # not be sorted in any particular way.
4187 #
4188 # If the same key is specified multiple times in the set (for example
4189 # if two ranges, two keys, or a key and a range overlap), Cloud Spanner
4190 # behaves as if the key were only specified once.
4191     "keys": [ # A list of specific keys. Entries in `keys` should have exactly as
4192 # many elements as there are columns in the primary or index key
4193 # with which this `KeySet` is used. Individual key values are
4194 # encoded as described here.
4195 [
4196 "",
4197 ],
4198 ],
4199     "ranges": [ # A list of key ranges. See KeyRange for more information about
4200 # key range specifications.
4201 { # KeyRange represents a range of rows in a table or index.
4202 #
4203 # A range has a start key and an end key. These keys can be open or
4204 # closed, indicating if the range includes rows with that key.
4205 #
4206 # Keys are represented by lists, where the ith value in the list
4207 # corresponds to the ith component of the table or index primary key.
4208 # Individual values are encoded as described here.
4209 #
4210 # For example, consider the following table definition:
4211 #
4212 # CREATE TABLE UserEvents (
4213 # UserName STRING(MAX),
4214 # EventDate STRING(10)
4215 # ) PRIMARY KEY(UserName, EventDate);
4216 #
4217 # The following keys name rows in this table:
4218 #
4219 # "Bob", "2014-09-23"
4220 #
4221 # Since the `UserEvents` table's `PRIMARY KEY` clause names two
4222 # columns, each `UserEvents` key has two elements; the first is the
4223 # `UserName`, and the second is the `EventDate`.
4224 #
4225 # Key ranges with multiple components are interpreted
4226 # lexicographically by component using the table or index key's declared
4227 # sort order. For example, the following range returns all events for
4228 # user `"Bob"` that occurred in the year 2015:
4229 #
4230 # "start_closed": ["Bob", "2015-01-01"]
4231 # "end_closed": ["Bob", "2015-12-31"]
4232 #
4233 # Start and end keys can omit trailing key components. This affects the
4234 # inclusion and exclusion of rows that exactly match the provided key
4235 # components: if the key is closed, then rows that exactly match the
4236 # provided components are included; if the key is open, then rows
4237 # that exactly match are not included.
4238 #
4239 # For example, the following range includes all events for `"Bob"` that
4240 # occurred during and after the year 2000:
4241 #
4242 # "start_closed": ["Bob", "2000-01-01"]
4243 # "end_closed": ["Bob"]
4244 #
4245 # The next example retrieves all events for `"Bob"`:
4246 #
4247 # "start_closed": ["Bob"]
4248 # "end_closed": ["Bob"]
4249 #
4250 # To retrieve events before the year 2000:
4251 #
4252 # "start_closed": ["Bob"]
4253 # "end_open": ["Bob", "2000-01-01"]
4254 #
4255 # The following range includes all rows in the table:
4256 #
4257 # "start_closed": []
4258 # "end_closed": []
4259 #
4260 # This range returns all users whose `UserName` begins with any
4261 # character from A to C:
4262 #
4263 # "start_closed": ["A"]
4264 # "end_open": ["D"]
4265 #
4266 # This range returns all users whose `UserName` begins with B:
4267 #
4268 # "start_closed": ["B"]
4269 # "end_open": ["C"]
4270 #
4271 # Key ranges honor column sort order. For example, suppose a table is
4272 # defined as follows:
4273 #
4274 # CREATE TABLE DescendingSortedTable (
4275 # Key INT64,
4276 # ...
4277 # ) PRIMARY KEY(Key DESC);
4278 #
4279 # The following range retrieves all rows with key values between 1
4280 # and 100 inclusive:
4281 #
4282 # "start_closed": ["100"]
4283 # "end_closed": ["1"]
4284 #
4285 # Note that 100 is passed as the start, and 1 is passed as the end,
4286 # because `Key` is a descending column in the schema.
4287 "endOpen": [ # If the end is open, then the range excludes rows whose first
4288 # `len(end_open)` key columns exactly match `end_open`.
4289 "",
4290 ],
4291 "startOpen": [ # If the start is open, then the range excludes rows whose first
4292 # `len(start_open)` key columns exactly match `start_open`.
4293 "",
4294 ],
4295 "endClosed": [ # If the end is closed, then the range includes all rows whose
4296 # first `len(end_closed)` key columns exactly match `end_closed`.
4297 "",
4298 ],
4299 "startClosed": [ # If the start is closed, then the range includes all rows whose
4300 # first `len(start_closed)` key columns exactly match `start_closed`.
4301 "",
4302 ],
4303 },
4304 ],
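As a sketch, the year-2015 example above becomes the following `keySet` fragment. Key components are encoded as strings here because both key columns in the example schema are `STRING`s:

```python
# KeyRange for all of Bob's events in 2015, wrapped in a KeySet.
bob_2015_range = {
    "startClosed": ["Bob", "2015-01-01"],
    "endClosed": ["Bob", "2015-12-31"],
}
key_set = {"ranges": [bob_2015_range]}
```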
4305     "all": True or False, # For convenience `all` can be set to `true` to indicate that this
4306 # `KeySet` matches all keys in the table or index. Note that any keys
4307 # specified in `keys` or `ranges` are only yielded once.
4308 },
4309 "limit": "A String", # If greater than zero, only the first `limit` rows are yielded. If `limit`
4310 # is zero, the default is no limit.
4311 "table": "A String", # Required. The name of the table in the database to be read.
4312 "columns": [ # The columns of table to be returned for each row matching
4313 # this request.
4314 "A String",
4315 ],
4316 }
4317
4318 x__xgafv: string, V1 error format.
4319 Allowed values
4320 1 - v1 error format
4321 2 - v2 error format
4322
4323Returns:
4324 An object of the form:
4325
4326 { # Partial results from a streaming read or SQL query. Streaming reads and
4327 # SQL queries better tolerate large result sets, large rows, and large
4328 # values, but are a little trickier to consume.
4329     "values": [ # A streamed result set consists of a stream of values, which might
4330 # be split into many `PartialResultSet` messages to accommodate
4331 # large rows and/or large values. Every N complete values defines a
4332 # row, where N is equal to the number of entries in
4333 # metadata.row_type.fields.
4334 #
4335 # Most values are encoded based on type as described
4336 # here.
4337 #
4338 # It is possible that the last value in values is "chunked",
4339 # meaning that the rest of the value is sent in subsequent
4340 # `PartialResultSet`(s). This is denoted by the chunked_value
4341 # field. Two or more chunked values can be merged to form a
4342 # complete value as follows:
4343 #
4344 # * `bool/number/null`: cannot be chunked
4345 # * `string`: concatenate the strings
4346 # * `list`: concatenate the lists. If the last element in a list is a
4347 # `string`, `list`, or `object`, merge it with the first element in
4348 # the next list by applying these rules recursively.
4349 # * `object`: concatenate the (field name, field value) pairs. If a
4350 # field name is duplicated, then apply these rules recursively
4351 # to merge the field values.
4352 #
4353 # Some examples of merging:
4354 #
4355 # # Strings are concatenated.
4356 # "foo", "bar" => "foobar"
4357 #
4358 # # Lists of non-strings are concatenated.
4359 # [2, 3], [4] => [2, 3, 4]
4360 #
4361 # # Lists are concatenated, but the last and first elements are merged
4362 # # because they are strings.
4363 # ["a", "b"], ["c", "d"] => ["a", "bc", "d"]
4364 #
4365 # # Lists are concatenated, but the last and first elements are merged
4366 # # because they are lists. Recursively, the last and first elements
4367 # # of the inner lists are merged because they are strings.
4368 # ["a", ["b", "c"]], [["d"], "e"] => ["a", ["b", "cd"], "e"]
4369 #
4370 # # Non-overlapping object fields are combined.
4371 # {"a": "1"}, {"b": "2"} => {"a": "1", "b": "2"}
4372 #
4373 # # Overlapping object fields are merged.
4374 # {"a": "1"}, {"a": "2"} => {"a": "12"}
4375 #
4376 # # Examples of merging objects containing lists of strings.
4377 # {"a": ["1"]}, {"a": ["2"]} => {"a": ["12"]}
4378 #
4379 # For a more complete example, suppose a streaming SQL query is
4380 # yielding a result set whose rows contain a single string
4381 # field. The following `PartialResultSet`s might be yielded:
4382 #
4383 # {
4384 # "metadata": { ... }
4385 # "values": ["Hello", "W"]
4386 # "chunked_value": true
4387 # "resume_token": "Af65..."
4388 # }
4389 # {
4390 # "values": ["orl"]
4391 # "chunked_value": true
4392 # "resume_token": "Bqp2..."
4393 # }
4394 # {
4395 # "values": ["d"]
4396 # "resume_token": "Zx1B..."
4397 # }
4398 #
4399 # This sequence of `PartialResultSet`s encodes two rows, one
4400 # containing the field value `"Hello"`, and a second containing the
4401 # field value `"World" = "W" + "orl" + "d"`.
4402 "",
4403 ],
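The merge rules above can be sketched as a recursive helper. This is illustrative only; real client libraries ship their own chunked-value merger:

```python
def merge_chunked(prev, cur):
    # Strings concatenate.
    if isinstance(prev, str) and isinstance(cur, str):
        return prev + cur
    # Lists concatenate; a trailing string/list/object in `prev` is
    # merged recursively with the head of `cur`.
    if isinstance(prev, list) and isinstance(cur, list):
        if prev and cur and isinstance(prev[-1], (str, list, dict)):
            return prev[:-1] + [merge_chunked(prev[-1], cur[0])] + cur[1:]
        return prev + cur
    # Objects merge per field, recursing on duplicated field names.
    if isinstance(prev, dict) and isinstance(cur, dict):
        merged = dict(prev)
        for key, value in cur.items():
            merged[key] = merge_chunked(merged[key], value) if key in merged else value
        return merged
    raise TypeError("bool/number/null values cannot be chunked")
```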
4404     "chunkedValue": True or False, # If true, then the final value in values is chunked, and must
4405 # be combined with more values from subsequent `PartialResultSet`s
4406 # to obtain a complete field value.
4407 "resumeToken": "A String", # Streaming calls might be interrupted for a variety of reasons, such
4408 # as TCP connection loss. If this occurs, the stream of results can
4409 # be resumed by re-sending the original request and including
4410 # `resume_token`. Note that executing any other transaction in the
4411 # same session invalidates the token.
4412     "stats": { # Additional statistics about a ResultSet or PartialResultSet. # Query plan and execution statistics for the query that produced this
4413 # streaming result set. These can be requested by setting
4414 # ExecuteSqlRequest.query_mode and are sent
4415 # only once with the last response in the stream.
4416 "queryPlan": { # Contains an ordered list of nodes appearing in the query plan. # QueryPlan for the query associated with this result.
4417 "planNodes": [ # The nodes in the query plan. Plan nodes are returned in pre-order starting
4418 # with the plan root. Each PlanNode's `id` corresponds to its index in
4419 # `plan_nodes`.
4420 { # Node information for nodes appearing in a QueryPlan.plan_nodes.
4421 "index": 42, # The `PlanNode`'s index in node list.
4422 "kind": "A String", # Used to determine the type of node. May be needed for visualizing
4423 # different kinds of nodes differently. For example, If the node is a
4424 # SCALAR node, it will have a condensed representation
4425 # which can be used to directly embed a description of the node in its
4426 # parent.
4427 "displayName": "A String", # The display name for the node.
4428 "executionStats": { # The execution statistics associated with the node, contained in a group of
4429 # key-value pairs. Only present if the plan was returned as a result of a
4430 # profile query. For example, number of executions, number of rows/time per
4431 # execution etc.
4432 "a_key": "", # Properties of the object.
4433 },
4434 "childLinks": [ # List of child node `index`es and their relationship to this parent.
4435 { # Metadata associated with a parent-child relationship appearing in a
4436 # PlanNode.
4437 "variable": "A String", # Only present if the child node is SCALAR and corresponds
4438 # to an output variable of the parent node. The field carries the name of
4439 # the output variable.
4440 # For example, a `TableScan` operator that reads rows from a table will
4441 # have child links to the `SCALAR` nodes representing the output variables
4442 # created for each column that is read by the operator. The corresponding
4443 # `variable` fields will be set to the variable names assigned to the
4444 # columns.
4445               "type": "A String", # The type of the link. For example, in Hash Joins this could be used to
4446 # distinguish between the build child and the probe child, or in the case
4447 # of the child being an output variable, to represent the tag associated
4448 # with the output variable.
4449               "childIndex": 42, # The node to which the link points.
4450             },
4451 ],
4452 "shortRepresentation": { # Condensed representation of a node and its subtree. Only present for # Condensed representation for SCALAR nodes.
4453 # `SCALAR` PlanNode(s).
4454 "subqueries": { # A mapping of (subquery variable name) -> (subquery node id) for cases
4455 # where the `description` string of this node references a `SCALAR`
4456 # subquery contained in the expression subtree rooted at this node. The
4457 # referenced `SCALAR` subquery may not necessarily be a direct child of
4458 # this node.
4459 "a_key": 42,
4460 },
4461 "description": "A String", # A string representation of the expression subtree rooted at this node.
4462 },
4463 "metadata": { # Attributes relevant to the node contained in a group of key-value pairs.
4464 # For example, a Parameter Reference node could have the following
4465 # information in its metadata:
4466 #
4467 # {
4468 # "parameter_reference": "param1",
4469 # "parameter_type": "array"
4470 # }
4471 "a_key": "", # Properties of the object.
4472 },
4473 },
4474 ],
4475 },
4476 "queryStats": { # Aggregated statistics from the execution of the query. Only present when
4477 # the query is profiled. For example, a query could return the statistics as
4478 # follows:
4479 #
4480 # {
4481 # "rows_returned": "3",
4482 # "elapsed_time": "1.22 secs",
4483 # "cpu_time": "1.19 secs"
4484 # }
4485 "a_key": "", # Properties of the object.
4486 },
4487 },
4488 "metadata": { # Metadata about a ResultSet or PartialResultSet. # Metadata about the result set, such as row type information.
4489 # Only present in the first response.
4490 "rowType": { # `StructType` defines the fields of a STRUCT type. # Indicates the field names and types for the rows in the result
4491 # set. For example, a SQL query like `"SELECT UserId, UserName FROM
4492 # Users"` could return a `row_type` value like:
4493 #
4494 # "fields": [
4495 # { "name": "UserId", "type": { "code": "INT64" } },
4496 # { "name": "UserName", "type": { "code": "STRING" } },
4497 # ]
4498 "fields": [ # The list of fields that make up this struct. Order is
4499 # significant, because values of this struct type are represented as
4500 # lists, where the order of field values matches the order of
4501 # fields in the StructType. In turn, the order of fields
4502 # matches the order of columns in a read request, or the order of
4503 # fields in the `SELECT` clause of a query.
4504 { # Message representing a single field of a struct.
4505 "type": { # `Type` indicates the type of a Cloud Spanner value, as might be stored in a # The type of the field.
4506 # table cell or returned from an SQL query.
4507 "structType": # Object with schema name: StructType # If code == STRUCT, then `struct_type`
4508 # provides type information for the struct's fields.
4509 "code": "A String", # Required. The TypeCode for this type.
4510 "arrayElementType": # Object with schema name: Type # If code == ARRAY, then `array_element_type`
4511 # is the type of the array elements.
4512 },
4513 "name": "A String", # The name of the field. For reads, this is the column name. For
4514 # SQL queries, it is the column alias (e.g., `"Word"` in the
4515 # query `"SELECT 'hello' AS Word"`), or the column name (e.g.,
4516 # `"ColName"` in the query `"SELECT ColName FROM Table"`). Some
4517 # columns might have an empty name (e.g., `"SELECT
4518 # UPPER(ColName)"`). Note that a query result can contain
4519 # multiple fields with the same name.
4520 },
4521 ],
4522 },
4523 "transaction": { # A transaction. # If the read or SQL query began a transaction as a side-effect, the
4524 # information about the new transaction is yielded here.
4525 "readTimestamp": "A String", # For snapshot read-only transactions, the read timestamp chosen
4526 # for the transaction. Not returned by default: see
4527 # TransactionOptions.ReadOnly.return_read_timestamp.
4528 "id": "A String", # `id` may be used to identify the transaction in subsequent
4529 # Read,
4530 # ExecuteSql,
4531 # Commit, or
4532 # Rollback calls.
4533 #
4534 # Single-use read-only transactions do not have IDs, because
4535 # single-use transactions do not support multiple requests.
4536 },
4537 },
4538 }</pre>
4539</div>
4540
4541</body></html>