<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="remotebuildexecution_v2.html">Remote Build Execution API</a> . <a href="remotebuildexecution_v2.blobs.html">blobs</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#batchRead">batchRead(instanceName, body, x__xgafv=None)</a></code></p>
<p class="firstline">Download many blobs at once.</p>
<p class="toc_element">
  <code><a href="#batchUpdate">batchUpdate(instanceName, body, x__xgafv=None)</a></code></p>
<p class="firstline">Upload many blobs at once.</p>
<p class="toc_element">
  <code><a href="#findMissing">findMissing(instanceName, body, x__xgafv=None)</a></code></p>
<p class="firstline">Determine if blobs are present in the CAS.</p>
<p class="toc_element">
  <code><a href="#getTree">getTree(instanceName, hash, sizeBytes, pageSize=None, pageToken=None, x__xgafv=None)</a></code></p>
<p class="firstline">Fetch the entire directory tree rooted at a node.</p>
<p class="toc_element">
  <code><a href="#getTree_next">getTree_next(previous_request, previous_response)</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<h3>Method Details</h3>
<div class="method">
  <code class="details" id="batchRead">batchRead(instanceName, body, x__xgafv=None)</code>
  <pre>Download many blobs at once.

The server may enforce a limit on the combined total size of blobs
to be downloaded using this API. This limit may be obtained using the
Capabilities API.
Requests exceeding the limit should either be split into smaller
chunks or downloaded using the
ByteStream API, as appropriate.

This request is equivalent to calling a ByteStream `Read` request
on each individual blob, in parallel. The requests may succeed or fail
independently.

Errors:

* `INVALID_ARGUMENT`: The client attempted to read more than the
  server-supported limit.

Errors on individual reads are returned in the corresponding digest
status.

Args:
  instanceName: string, The instance of the execution system to operate against. A server may
      support multiple instances of the execution system (with their own workers,
      storage, caches, etc.). The server MAY require use of this field to select
      between them in an implementation-defined fashion, otherwise it can be
      omitted. (required)
  body: object, The request body. (required)
    The object takes the form of:

{ # A request message for
  # ContentAddressableStorage.BatchReadBlobs.
  "digests": [ # The individual blob digests.
    { # A content digest. A digest for a given blob consists of the size of the blob
      # and its hash. The hash algorithm to use is defined by the server, but servers
      # SHOULD use SHA-256.
      #
      # The size is considered to be an integral part of the digest and cannot be
      # separated. That is, even if the `hash` field is correctly specified but
      # `size_bytes` is not, the server MUST reject the request.
      #
      # The reason for including the size in the digest is as follows: in a great
      # many cases, the server needs to know the size of the blob it is about to work
      # with prior to starting an operation with it, such as flattening Merkle tree
      # structures or streaming it to a worker. Technically, the server could
      # implement a separate metadata store, but this results in a significantly more
      # complicated implementation as opposed to having the client specify the size
      # up-front (or storing the size along with the digest in every message where
      # digests are embedded). This does mean that the API leaks some implementation
      # details of (what we consider to be) a reasonable server implementation, but
      # we consider this to be a worthwhile tradeoff.
      #
      # When a `Digest` is used to refer to a proto message, it always refers to the
      # message in binary encoded form. To ensure consistent hashing, clients and
      # servers MUST ensure that they serialize messages according to the following
      # rules, even if there are alternate valid encodings for the same message:
      #
      # * Fields are serialized in tag order.
      # * There are no unknown fields.
      # * There are no duplicate fields.
      # * Fields are serialized according to the default semantics for their type.
      #
      # Most protocol buffer implementations will always follow these rules when
      # serializing, but care should be taken to avoid shortcuts. For instance,
      # concatenating two messages to merge them may produce duplicate fields.
      "sizeBytes": "A String", # The size of the blob, in bytes.
      "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string
          # exactly 64 characters long.
    },
  ],
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

  { # A response message for
    # ContentAddressableStorage.BatchReadBlobs.
    "responses": [ # The responses to the requests.
      { # A response corresponding to a single blob that the client tried to download.
        "status": { # The `Status` type defines a logical error model that is suitable for # The result of attempting to download that blob.
            # different programming environments, including REST APIs and RPC APIs. It is
            # used by [gRPC](https://github.com/grpc). Each `Status` message contains
            # three pieces of data: error code, error message, and error details.
            #
            # You can find out more about this error model and how to work with it in the
            # [API Design Guide](https://cloud.google.com/apis/design/errors).
          "message": "A String", # A developer-facing error message, which should be in English. Any
              # user-facing error message should be localized and sent in the
              # google.rpc.Status.details field, or localized by the client.
          "code": 42, # The status code, which should be an enum value of google.rpc.Code.
          "details": [ # A list of messages that carry the error details. There is a common set of
              # message types for APIs to use.
            {
              "a_key": "", # Properties of the object. Contains field @type with type URL.
            },
          ],
        },
        "data": "A String", # The raw binary data.
        "digest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest to which this response corresponds.
            # and its hash. The hash algorithm to use is defined by the server, but servers
            # SHOULD use SHA-256.
            #
            # The size is considered to be an integral part of the digest and cannot be
            # separated. That is, even if the `hash` field is correctly specified but
            # `size_bytes` is not, the server MUST reject the request.
            #
            # The reason for including the size in the digest is as follows: in a great
            # many cases, the server needs to know the size of the blob it is about to work
            # with prior to starting an operation with it, such as flattening Merkle tree
            # structures or streaming it to a worker. Technically, the server could
            # implement a separate metadata store, but this results in a significantly more
            # complicated implementation as opposed to having the client specify the size
            # up-front (or storing the size along with the digest in every message where
            # digests are embedded). This does mean that the API leaks some implementation
            # details of (what we consider to be) a reasonable server implementation, but
            # we consider this to be a worthwhile tradeoff.
            #
            # When a `Digest` is used to refer to a proto message, it always refers to the
            # message in binary encoded form. To ensure consistent hashing, clients and
            # servers MUST ensure that they serialize messages according to the following
            # rules, even if there are alternate valid encodings for the same message:
            #
            # * Fields are serialized in tag order.
            # * There are no unknown fields.
            # * There are no duplicate fields.
            # * Fields are serialized according to the default semantics for their type.
            #
            # Most protocol buffer implementations will always follow these rules when
            # serializing, but care should be taken to avoid shortcuts. For instance,
            # concatenating two messages to merge them may produce duplicate fields.
          "sizeBytes": "A String", # The size of the blob, in bytes.
          "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string
              # exactly 64 characters long.
        },
      },
    ],
  }</pre>
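<p>For illustration, a minimal sketch of calling this method through the generated Python client. The service object, credentials, instance name, and blob contents below are assumptions for the sketch, not values defined by this API:</p>
<pre>import base64
import hashlib

from googleapiclient.discovery import build

# Assumes application default credentials are available in the environment.
service = build('remotebuildexecution', 'v2')

# A digest pairs the SHA-256 hash of a blob with its size in bytes.
blob = b'hello world'  # placeholder content
digest = {
    'hash': hashlib.sha256(blob).hexdigest(),
    'sizeBytes': str(len(blob)),
}

response = service.blobs().batchRead(
    instanceName='projects/my-project/instances/default_instance',  # placeholder
    body={'digests': [digest]},
).execute()

for item in response.get('responses', []):
    # Each read succeeds or fails independently; an absent or zero `code`
    # in the google.rpc.Status means OK.
    if item.get('status', {}).get('code', 0) == 0:
        # Over the JSON transport, bytes fields such as `data` arrive
        # base64-encoded.
        data = base64.b64decode(item['data'])</pre>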
</div>

<div class="method">
  <code class="details" id="batchUpdate">batchUpdate(instanceName, body, x__xgafv=None)</code>
  <pre>Upload many blobs at once.

The server may enforce a limit on the combined total size of blobs
to be uploaded using this API. This limit may be obtained using the
Capabilities API.
Requests exceeding the limit should either be split into smaller
chunks or uploaded using the
ByteStream API, as appropriate.

This request is equivalent to calling a ByteStream `Write` request
on each individual blob, in parallel. The requests may succeed or fail
independently.

Errors:

* `INVALID_ARGUMENT`: The client attempted to upload more than the
  server-supported limit.

Individual requests may additionally return the following errors:

* `RESOURCE_EXHAUSTED`: There is insufficient disk quota to store the blob.
* `INVALID_ARGUMENT`: The Digest does not match the provided data.

Args:
  instanceName: string, The instance of the execution system to operate against. A server may
      support multiple instances of the execution system (with their own workers,
      storage, caches, etc.). The server MAY require use of this field to select
      between them in an implementation-defined fashion, otherwise it can be
      omitted. (required)
  body: object, The request body. (required)
    The object takes the form of:

{ # A request message for
  # ContentAddressableStorage.BatchUpdateBlobs.
  "requests": [ # The individual upload requests.
    { # A request corresponding to a single blob that the client wants to upload.
      "data": "A String", # The raw binary data.
      "digest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest of the blob. This MUST be the digest of `data`.
          # and its hash. The hash algorithm to use is defined by the server, but servers
          # SHOULD use SHA-256.
          #
          # The size is considered to be an integral part of the digest and cannot be
          # separated. That is, even if the `hash` field is correctly specified but
          # `size_bytes` is not, the server MUST reject the request.
          #
          # The reason for including the size in the digest is as follows: in a great
          # many cases, the server needs to know the size of the blob it is about to work
          # with prior to starting an operation with it, such as flattening Merkle tree
          # structures or streaming it to a worker. Technically, the server could
          # implement a separate metadata store, but this results in a significantly more
          # complicated implementation as opposed to having the client specify the size
          # up-front (or storing the size along with the digest in every message where
          # digests are embedded). This does mean that the API leaks some implementation
          # details of (what we consider to be) a reasonable server implementation, but
          # we consider this to be a worthwhile tradeoff.
          #
          # When a `Digest` is used to refer to a proto message, it always refers to the
          # message in binary encoded form. To ensure consistent hashing, clients and
          # servers MUST ensure that they serialize messages according to the following
          # rules, even if there are alternate valid encodings for the same message:
          #
          # * Fields are serialized in tag order.
          # * There are no unknown fields.
          # * There are no duplicate fields.
          # * Fields are serialized according to the default semantics for their type.
          #
          # Most protocol buffer implementations will always follow these rules when
          # serializing, but care should be taken to avoid shortcuts. For instance,
          # concatenating two messages to merge them may produce duplicate fields.
        "sizeBytes": "A String", # The size of the blob, in bytes.
        "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string
            # exactly 64 characters long.
      },
    },
  ],
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

  { # A response message for
    # ContentAddressableStorage.BatchUpdateBlobs.
    "responses": [ # The responses to the requests.
      { # A response corresponding to a single blob that the client tried to upload.
        "status": { # The `Status` type defines a logical error model that is suitable for # The result of attempting to upload that blob.
            # different programming environments, including REST APIs and RPC APIs. It is
            # used by [gRPC](https://github.com/grpc). Each `Status` message contains
            # three pieces of data: error code, error message, and error details.
            #
            # You can find out more about this error model and how to work with it in the
            # [API Design Guide](https://cloud.google.com/apis/design/errors).
          "message": "A String", # A developer-facing error message, which should be in English. Any
              # user-facing error message should be localized and sent in the
              # google.rpc.Status.details field, or localized by the client.
          "code": 42, # The status code, which should be an enum value of google.rpc.Code.
          "details": [ # A list of messages that carry the error details. There is a common set of
              # message types for APIs to use.
            {
              "a_key": "", # Properties of the object. Contains field @type with type URL.
            },
          ],
        },
        "digest": { # A content digest. A digest for a given blob consists of the size of the blob # The blob digest to which this response corresponds.
            # and its hash. The hash algorithm to use is defined by the server, but servers
            # SHOULD use SHA-256.
            #
            # The size is considered to be an integral part of the digest and cannot be
            # separated. That is, even if the `hash` field is correctly specified but
            # `size_bytes` is not, the server MUST reject the request.
            #
            # The reason for including the size in the digest is as follows: in a great
            # many cases, the server needs to know the size of the blob it is about to work
            # with prior to starting an operation with it, such as flattening Merkle tree
            # structures or streaming it to a worker. Technically, the server could
            # implement a separate metadata store, but this results in a significantly more
            # complicated implementation as opposed to having the client specify the size
            # up-front (or storing the size along with the digest in every message where
            # digests are embedded). This does mean that the API leaks some implementation
            # details of (what we consider to be) a reasonable server implementation, but
            # we consider this to be a worthwhile tradeoff.
            #
            # When a `Digest` is used to refer to a proto message, it always refers to the
            # message in binary encoded form. To ensure consistent hashing, clients and
            # servers MUST ensure that they serialize messages according to the following
            # rules, even if there are alternate valid encodings for the same message:
            #
            # * Fields are serialized in tag order.
            # * There are no unknown fields.
            # * There are no duplicate fields.
            # * Fields are serialized according to the default semantics for their type.
            #
            # Most protocol buffer implementations will always follow these rules when
            # serializing, but care should be taken to avoid shortcuts. For instance,
            # concatenating two messages to merge them may produce duplicate fields.
          "sizeBytes": "A String", # The size of the blob, in bytes.
          "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string
              # exactly 64 characters long.
        },
      },
    ],
  }</pre>
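<p>As a sketch, under the same assumptions as the batchRead example above (hypothetical service object, placeholder instance name and data), a small blob can be uploaded like this:</p>
<pre>import base64
import hashlib

blob = b'some file contents'  # placeholder content
request = {
    'digest': {
        'hash': hashlib.sha256(blob).hexdigest(),  # MUST be the digest of `data`
        'sizeBytes': str(len(blob)),
    },
    # Over the JSON transport, bytes fields are sent base64-encoded.
    'data': base64.b64encode(blob).decode('ascii'),
}

response = service.blobs().batchUpdate(
    instanceName='projects/my-project/instances/default_instance',  # placeholder
    body={'requests': [request]},
).execute()

for item in response.get('responses', []):
    if item.get('status', {}).get('code', 0) != 0:
        # Non-zero google.rpc.Code, e.g. INVALID_ARGUMENT or RESOURCE_EXHAUSTED.
        print('upload failed:', item['digest']['hash'], item['status'])</pre>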
</div>

<div class="method">
  <code class="details" id="findMissing">findMissing(instanceName, body, x__xgafv=None)</code>
  <pre>Determine if blobs are present in the CAS.

Clients can use this API before uploading blobs to determine which ones are
already present in the CAS and do not need to be uploaded again.

There are no method-specific errors.

Args:
  instanceName: string, The instance of the execution system to operate against. A server may
      support multiple instances of the execution system (with their own workers,
      storage, caches, etc.). The server MAY require use of this field to select
      between them in an implementation-defined fashion, otherwise it can be
      omitted. (required)
  body: object, The request body. (required)
    The object takes the form of:

{ # A request message for
  # ContentAddressableStorage.FindMissingBlobs.
  "blobDigests": [ # A list of the blobs to check.
    { # A content digest. A digest for a given blob consists of the size of the blob
      # and its hash. The hash algorithm to use is defined by the server, but servers
      # SHOULD use SHA-256.
      #
      # The size is considered to be an integral part of the digest and cannot be
      # separated. That is, even if the `hash` field is correctly specified but
      # `size_bytes` is not, the server MUST reject the request.
      #
      # The reason for including the size in the digest is as follows: in a great
      # many cases, the server needs to know the size of the blob it is about to work
      # with prior to starting an operation with it, such as flattening Merkle tree
      # structures or streaming it to a worker. Technically, the server could
      # implement a separate metadata store, but this results in a significantly more
      # complicated implementation as opposed to having the client specify the size
      # up-front (or storing the size along with the digest in every message where
      # digests are embedded). This does mean that the API leaks some implementation
      # details of (what we consider to be) a reasonable server implementation, but
      # we consider this to be a worthwhile tradeoff.
      #
      # When a `Digest` is used to refer to a proto message, it always refers to the
      # message in binary encoded form. To ensure consistent hashing, clients and
      # servers MUST ensure that they serialize messages according to the following
      # rules, even if there are alternate valid encodings for the same message:
      #
      # * Fields are serialized in tag order.
      # * There are no unknown fields.
      # * There are no duplicate fields.
      # * Fields are serialized according to the default semantics for their type.
      #
      # Most protocol buffer implementations will always follow these rules when
      # serializing, but care should be taken to avoid shortcuts. For instance,
      # concatenating two messages to merge them may produce duplicate fields.
      "sizeBytes": "A String", # The size of the blob, in bytes.
      "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string
          # exactly 64 characters long.
    },
  ],
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

  { # A response message for
    # ContentAddressableStorage.FindMissingBlobs.
| 461 | "missingBlobDigests": [ # A list of the blobs requested *not* present in the storage. |
      { # A content digest. A digest for a given blob consists of the size of the blob
        # and its hash. The hash algorithm to use is defined by the server, but servers
        # SHOULD use SHA-256.
        #
        # The size is considered to be an integral part of the digest and cannot be
        # separated. That is, even if the `hash` field is correctly specified but
        # `size_bytes` is not, the server MUST reject the request.
        #
        # The reason for including the size in the digest is as follows: in a great
        # many cases, the server needs to know the size of the blob it is about to work
        # with prior to starting an operation with it, such as flattening Merkle tree
        # structures or streaming it to a worker. Technically, the server could
        # implement a separate metadata store, but this results in a significantly more
        # complicated implementation as opposed to having the client specify the size
        # up-front (or storing the size along with the digest in every message where
        # digests are embedded). This does mean that the API leaks some implementation
        # details of (what we consider to be) a reasonable server implementation, but
        # we consider this to be a worthwhile tradeoff.
        #
        # When a `Digest` is used to refer to a proto message, it always refers to the
        # message in binary encoded form. To ensure consistent hashing, clients and
        # servers MUST ensure that they serialize messages according to the following
        # rules, even if there are alternate valid encodings for the same message:
        #
        # * Fields are serialized in tag order.
        # * There are no unknown fields.
        # * There are no duplicate fields.
        # * Fields are serialized according to the default semantics for their type.
        #
        # Most protocol buffer implementations will always follow these rules when
        # serializing, but care should be taken to avoid shortcuts. For instance,
        # concatenating two messages to merge them may produce duplicate fields.
        "sizeBytes": "A String", # The size of the blob, in bytes.
        "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string
            # exactly 64 characters long.
      },
    ],
  }</pre>
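<p>A sketch of the usual "check, then upload" flow, under the same assumptions as the earlier examples (hypothetical service object, placeholder instance name and data):</p>
<pre>import hashlib

local_blobs = [b'blob one', b'blob two']  # placeholder data
by_hash = {hashlib.sha256(b).hexdigest(): b for b in local_blobs}

response = service.blobs().findMissing(
    instanceName='projects/my-project/instances/default_instance',  # placeholder
    body={'blobDigests': [
        {'hash': h, 'sizeBytes': str(len(b))} for h, b in by_hash.items()
    ]},
).execute()

# Only the digests reported missing still need to be uploaded,
# e.g. via batchUpdate or the ByteStream API.
missing = response.get('missingBlobDigests', [])
to_upload = [by_hash[d['hash']] for d in missing]</pre>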
</div>

<div class="method">
  <code class="details" id="getTree">getTree(instanceName, hash, sizeBytes, pageSize=None, pageToken=None, x__xgafv=None)</code>
  <pre>Fetch the entire directory tree rooted at a node.

This request must be targeted at a Directory stored in the
ContentAddressableStorage (CAS). The server will enumerate the
`Directory` tree recursively and return every node descended from the root.
The GetTreeRequest.page_token parameter can be used to skip ahead in
the stream (e.g. when retrying a partially completed and aborted request),
by setting it to a value taken from GetTreeResponse.next_page_token of the
last successfully processed GetTreeResponse.

The exact traversal order is unspecified and, unless retrieving subsequent
pages from an earlier request, is not guaranteed to be stable across
multiple invocations of `GetTree`.

If part of the tree is missing from the CAS, the server will return the
portion present and omit the rest.

Errors:

* `NOT_FOUND`: The requested tree root is not present in the CAS.

Args:
  instanceName: string, The instance of the execution system to operate against. A server may
      support multiple instances of the execution system (with their own workers,
      storage, caches, etc.). The server MAY require use of this field to select
      between them in an implementation-defined fashion, otherwise it can be
      omitted. (required)
  hash: string, The hash. In the case of SHA-256, it will always be a lowercase hex string
      exactly 64 characters long. (required)
  sizeBytes: string, The size of the blob, in bytes. (required)
  pageSize: integer, A maximum page size to request. If present, the server will return no more
      than this many items. Regardless of whether a page size is specified, the
      server may place its own limit on the number of items to be returned and
      require the client to retrieve more items using a subsequent request.
  pageToken: string, A page token, which must be a value received in a previous
      GetTreeResponse.
      If present, the server will use it to return the following page of results.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

  { # A response message for
    # ContentAddressableStorage.GetTree.
    "nextPageToken": "A String", # If present, signifies that there are more results which the client can
        # retrieve by passing this as the page_token in a subsequent request.
        # If empty, signifies that this is the last page of results.
    "directories": [ # The directories descended from the requested root.
      { # A `Directory` represents a directory node in a file tree, containing zero or
        # more children FileNodes, DirectoryNodes and SymlinkNodes.
        # Each `Node` contains its name in the directory, either the digest of its
        # content (either a file blob or a `Directory` proto) or a symlink target, as
        # well as possibly some metadata about the file or directory.
        #
        # In order to ensure that two equivalent directory trees hash to the same
        # value, the following restrictions MUST be obeyed when constructing
        # a `Directory`:
        #
        # * Every child in the directory must have a path of exactly one segment.
        #   Multiple levels of directory hierarchy may not be collapsed.
        # * Each child in the directory must have a unique path segment (file name).
        #   Note that while the API itself is case-sensitive, the environment where
        #   the Action is executed may or may not be case-sensitive. That is, it is
        #   legal to call the API with a Directory that has both "Foo" and "foo" as
        #   children, but the Action may be rejected by the remote system upon
        #   execution.
        # * The files, directories and symlinks in the directory must each be sorted
        #   in lexicographical order by path. The path strings must be sorted by code
        #   point, equivalently, by UTF-8 bytes.
        #
        # A `Directory` that obeys the restrictions is said to be in canonical form.
        #
        # As an example, the following could be used for a file named `bar` and a
        # directory named `foo` with an executable file named `baz` (hashes shortened
        # for readability):
        #
        # ```json
        # // (Directory proto)
        # {
        #   files: [
        #     {
        #       name: "bar",
        #       digest: {
        #         hash: "4a73bc9d03...",
        #         size: 65534
        #       }
        #     }
        #   ],
        #   directories: [
        #     {
        #       name: "foo",
        #       digest: {
        #         hash: "4cf2eda940...",
        #         size: 43
        #       }
        #     }
        #   ]
        # }
        #
        # // (Directory proto with hash "4cf2eda940..." and size 43)
        # {
        #   files: [
        #     {
        #       name: "baz",
        #       digest: {
        #         hash: "b2c941073e...",
        #         size: 1294,
        #       },
        #       is_executable: true
        #     }
        #   ]
        # }
        # ```
        "symlinks": [ # The symlinks in the directory.
          { # A `SymlinkNode` represents a symbolic link.
            "name": "A String", # The name of the symlink.
            "target": "A String", # The target path of the symlink. The path separator is a forward slash `/`.
                # The target path can be relative to the parent directory of the symlink or
                # it can be an absolute path starting with `/`. Support for absolute paths
                # can be checked using the Capabilities
                # API. The canonical form forbids the substrings `/./` and `//` in the target
                # path. `..` components are allowed anywhere in the target path.
          },
        ],
        "files": [ # The files in the directory.
          { # A `FileNode` represents a single file and associated metadata.
            "isExecutable": True or False, # True if file is executable, false otherwise.
            "name": "A String", # The name of the file.
            "digest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest of the file's content.
                # and its hash. The hash algorithm to use is defined by the server, but servers
                # SHOULD use SHA-256.
                #
                # The size is considered to be an integral part of the digest and cannot be
                # separated. That is, even if the `hash` field is correctly specified but
                # `size_bytes` is not, the server MUST reject the request.
                #
                # The reason for including the size in the digest is as follows: in a great
                # many cases, the server needs to know the size of the blob it is about to work
                # with prior to starting an operation with it, such as flattening Merkle tree
                # structures or streaming it to a worker. Technically, the server could
                # implement a separate metadata store, but this results in a significantly more
                # complicated implementation as opposed to having the client specify the size
                # up-front (or storing the size along with the digest in every message where
                # digests are embedded). This does mean that the API leaks some implementation
                # details of (what we consider to be) a reasonable server implementation, but
                # we consider this to be a worthwhile tradeoff.
                #
                # When a `Digest` is used to refer to a proto message, it always refers to the
                # message in binary encoded form. To ensure consistent hashing, clients and
                # servers MUST ensure that they serialize messages according to the following
                # rules, even if there are alternate valid encodings for the same message:
                #
                # * Fields are serialized in tag order.
                # * There are no unknown fields.
                # * There are no duplicate fields.
                # * Fields are serialized according to the default semantics for their type.
                #
                # Most protocol buffer implementations will always follow these rules when
                # serializing, but care should be taken to avoid shortcuts. For instance,
                # concatenating two messages to merge them may produce duplicate fields.
              "sizeBytes": "A String", # The size of the blob, in bytes.
              "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string
                  # exactly 64 characters long.
            },
          },
        ],
        "directories": [ # The subdirectories in the directory.
          { # A `DirectoryNode` represents a child of a Directory which is itself
            # a `Directory` and its associated metadata.
            "name": "A String", # The name of the directory.
            "digest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest of the Directory object represented. See Digest for information about how to take the digest of a proto message.
                # and its hash. The hash algorithm to use is defined by the server, but servers
                # SHOULD use SHA-256.
                #
                # The size is considered to be an integral part of the digest and cannot be
                # separated. That is, even if the `hash` field is correctly specified but
                # `size_bytes` is not, the server MUST reject the request.
                #
                # The reason for including the size in the digest is as follows: in a great
                # many cases, the server needs to know the size of the blob it is about to work
                # with prior to starting an operation with it, such as flattening Merkle tree
                # structures or streaming it to a worker. Technically, the server could
                # implement a separate metadata store, but this results in a significantly more
                # complicated implementation as opposed to having the client specify the size
                # up-front (or storing the size along with the digest in every message where
                # digests are embedded). This does mean that the API leaks some implementation
                # details of (what we consider to be) a reasonable server implementation, but
                # we consider this to be a worthwhile tradeoff.
                #
                # When a `Digest` is used to refer to a proto message, it always refers to the
                # message in binary encoded form. To ensure consistent hashing, clients and
                # servers MUST ensure that they serialize messages according to the following
                # rules, even if there are alternate valid encodings for the same message:
                #
                # * Fields are serialized in tag order.
                # * There are no unknown fields.
                # * There are no duplicate fields.
                # * Fields are serialized according to the default semantics for their type.
                #
                # Most protocol buffer implementations will always follow these rules when
                # serializing, but care should be taken to avoid shortcuts. For instance,
                # concatenating two messages to merge them may produce duplicate fields.
              "sizeBytes": "A String", # The size of the blob, in bytes.
              "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string
                  # exactly 64 characters long.
            },
          },
        ],
      },
    ],
  }</pre>
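<p>A minimal sketch of fetching one page of a tree, given the digest (hash and size) of a root Directory already stored in the CAS. All values are placeholders, carried over from the hypothetical setup in the batchRead example:</p>
<pre>response = service.blobs().getTree(
    instanceName='projects/my-project/instances/default_instance',  # placeholder
    hash='4cf2eda940...',  # placeholder: full 64-char hex digest of the root Directory
    sizeBytes='43',        # placeholder: size of the root Directory blob
    pageSize=100,
).execute()

for directory in response.get('directories', []):
    for f in directory.get('files', []):
        print(f['name'], f['digest']['hash'], f.get('isExecutable', False))</pre>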
</div>

<div class="method">
  <code class="details" id="getTree_next">getTree_next(previous_request, previous_response)</code>
  <pre>Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call 'execute()' on to request the next
  page. Returns None if there are no more items in the collection.
    </pre>
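<p>A sketch of draining every page with this helper, continuing until it returns None (same hypothetical service object and placeholder digest as in the getTree example above):</p>
<pre>request = service.blobs().getTree(
    instanceName='projects/my-project/instances/default_instance',  # placeholder
    hash='4cf2eda940...',  # placeholder root Directory digest
    sizeBytes='43',
)

all_directories = []
while request is not None:
    response = request.execute()
    all_directories.extend(response.get('directories', []))
    # Returns None once the last page (empty nextPageToken) has been fetched.
    request = service.blobs().getTree_next(request, response)</pre>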
</div>

</body></html>