Regen all docs. (#700)
* Stop recursing if discovery == {}
* Generate docs with 'make docs'.
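
For reference, the recursion guard has roughly this shape (a sketch only:
the real change lives in the doc generator, and the function below is
illustrative rather than the actual describe.py code):

    def document_collection_recursive(resource, path, discovery):
        # An empty discovery document means there is nothing to document
        # at this node, so stop descending instead of recursing further.
        if discovery == {}:
            return
        for name, desc in discovery.get('resources', {}).items():
            # Nested collections are exposed as methods on the resource.
            document_collection_recursive(
                getattr(resource, name)(), path + name + '.', desc)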
diff --git a/docs/dyn/remotebuildexecution_v2.blobs.html b/docs/dyn/remotebuildexecution_v2.blobs.html
new file mode 100644
index 0000000..bf048d0
--- /dev/null
+++ b/docs/dyn/remotebuildexecution_v2.blobs.html
@@ -0,0 +1,742 @@
+<html><body>
+<style>
+
+body, h1, h2, h3, div, span, p, pre, a {
+ margin: 0;
+ padding: 0;
+ border: 0;
+ font-weight: inherit;
+ font-style: inherit;
+ font-size: 100%;
+ font-family: inherit;
+ vertical-align: baseline;
+}
+
+body {
+ font-size: 13px;
+ padding: 1em;
+}
+
+h1 {
+ font-size: 26px;
+ margin-bottom: 1em;
+}
+
+h2 {
+ font-size: 24px;
+ margin-bottom: 1em;
+}
+
+h3 {
+ font-size: 20px;
+ margin-bottom: 1em;
+ margin-top: 1em;
+}
+
+pre, code {
+ line-height: 1.5;
+ font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
+}
+
+pre {
+ margin-top: 0.5em;
+}
+
+h1, h2, h3, p {
+ font-family: Arial, sans-serif;
+}
+
+h1, h2, h3 {
+ border-bottom: solid #CCC 1px;
+}
+
+.toc_element {
+ margin-top: 0.5em;
+}
+
+.firstline {
+ margin-left: 2em;
+}
+
+.method {
+ margin-top: 1em;
+ border: solid 1px #CCC;
+ padding: 1em;
+ background: #EEE;
+}
+
+.details {
+ font-weight: bold;
+ font-size: 14px;
+}
+
+</style>
+
+<h1><a href="remotebuildexecution_v2.html">Remote Build Execution API</a> . <a href="remotebuildexecution_v2.blobs.html">blobs</a></h1>
+<h2>Instance Methods</h2>
+<p class="toc_element">
+ <code><a href="#batchRead">batchRead(instanceName, body, x__xgafv=None)</a></code></p>
+<p class="firstline">Download many blobs at once.</p>
+<p class="toc_element">
+ <code><a href="#batchUpdate">batchUpdate(instanceName, body, x__xgafv=None)</a></code></p>
+<p class="firstline">Upload many blobs at once.</p>
+<p class="toc_element">
+ <code><a href="#findMissing">findMissing(instanceName, body, x__xgafv=None)</a></code></p>
+<p class="firstline">Determine if blobs are present in the CAS.</p>
+<p class="toc_element">
+ <code><a href="#getTree">getTree(instanceName, hash, sizeBytes, pageSize=None, pageToken=None, x__xgafv=None)</a></code></p>
+<p class="firstline">Fetch the entire directory tree rooted at a node.</p>
+<p class="toc_element">
+ <code><a href="#getTree_next">getTree_next(previous_request, previous_response)</a></code></p>
+<p class="firstline">Retrieves the next page of results.</p>
+<h3>Method Details</h3>
+<div class="method">
+ <code class="details" id="batchRead">batchRead(instanceName, body, x__xgafv=None)</code>
+ <pre>Download many blobs at once.
+
+The server may enforce a limit on the combined total size of blobs
+to be downloaded using this API. This limit may be obtained using the
+Capabilities API.
+Requests exceeding the limit should either be split into smaller
+chunks or downloaded using the ByteStream API, as appropriate.
+
+This request is equivalent to calling a ByteStream `Read` request
+on each individual blob, in parallel. The requests may succeed or fail
+independently.
+
+Errors:
+
+* `INVALID_ARGUMENT`: The client attempted to read more than the
+ server-supported limit.
+
+Every error on an individual read will be returned in the corresponding
+digest status.
+
+Args:
+ instanceName: string, The instance of the execution system to operate against. A server may
+support multiple instances of the execution system (with their own workers,
+storage, caches, etc.). The server MAY require use of this field to select
+between them in an implementation-defined fashion; otherwise it can be
+omitted. (required)
+ body: object, The request body. (required)
+ The object takes the form of:
+
+{ # A request message for
+ # ContentAddressableStorage.BatchReadBlobs.
+ "digests": [ # The individual blob digests.
+ { # A content digest. A digest for a given blob consists of the size of the blob
+ # and its hash. The hash algorithm to use is defined by the server, but servers
+ # SHOULD use SHA-256.
+ #
+ # The size is considered to be an integral part of the digest and cannot be
+ # separated. That is, even if the `hash` field is correctly specified but
+ # `size_bytes` is not, the server MUST reject the request.
+ #
+ # The reason for including the size in the digest is as follows: in a great
+ # many cases, the server needs to know the size of the blob it is about to work
+ # with prior to starting an operation with it, such as flattening Merkle tree
+ # structures or streaming it to a worker. Technically, the server could
+ # implement a separate metadata store, but this results in a significantly more
+ # complicated implementation as opposed to having the client specify the size
+ # up-front (or storing the size along with the digest in every message where
+ # digests are embedded). This does mean that the API leaks some implementation
+ # details of (what we consider to be) a reasonable server implementation, but
+ # we consider this to be a worthwhile tradeoff.
+ #
+ # When a `Digest` is used to refer to a proto message, it always refers to the
+ # message in binary encoded form. To ensure consistent hashing, clients and
+ # servers MUST ensure that they serialize messages according to the following
+ # rules, even if there are alternate valid encodings for the same message:
+ #
+ # * Fields are serialized in tag order.
+ # * There are no unknown fields.
+ # * There are no duplicate fields.
+ # * Fields are serialized according to the default semantics for their type.
+ #
+ # Most protocol buffer implementations will always follow these rules when
+ # serializing, but care should be taken to avoid shortcuts. For instance,
+ # concatenating two messages to merge them may produce duplicate fields.
+ "sizeBytes": "A String", # The size of the blob, in bytes.
+ "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string
+ # exactly 64 characters long.
+ },
+ ],
+ }
+
+ x__xgafv: string, V1 error format.
+ Allowed values
+ 1 - v1 error format
+ 2 - v2 error format
+
+Returns:
+ An object of the form:
+
+ { # A response message for
+ # ContentAddressableStorage.BatchReadBlobs.
+ "responses": [ # The responses to the requests.
+ { # A response corresponding to a single blob that the client tried to download.
+ "status": { # The `Status` type defines a logical error model that is suitable for # The result of attempting to download that blob.
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). Each `Status` message contains
+ # three pieces of data: error code, error message, and error details.
+ #
+ # You can find out more about this error model and how to work with it in the
+ # [API Design Guide](https://cloud.google.com/apis/design/errors).
+ "message": "A String", # A developer-facing error message, which should be in English. Any
+ # user-facing error message should be localized and sent in the
+ # google.rpc.Status.details field, or localized by the client.
+ "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+ "details": [ # A list of messages that carry the error details. There is a common set of
+ # message types for APIs to use.
+ {
+ "a_key": "", # Properties of the object. Contains field @type with type URL.
+ },
+ ],
+ },
+ "data": "A String", # The raw binary data.
+ "digest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest to which this response corresponds.
+ # and its hash. The hash algorithm to use is defined by the server, but servers
+ # SHOULD use SHA-256.
+ #
+ # The size is considered to be an integral part of the digest and cannot be
+ # separated. That is, even if the `hash` field is correctly specified but
+ # `size_bytes` is not, the server MUST reject the request.
+ #
+ # The reason for including the size in the digest is as follows: in a great
+ # many cases, the server needs to know the size of the blob it is about to work
+ # with prior to starting an operation with it, such as flattening Merkle tree
+ # structures or streaming it to a worker. Technically, the server could
+ # implement a separate metadata store, but this results in a significantly more
+ # complicated implementation as opposed to having the client specify the size
+ # up-front (or storing the size along with the digest in every message where
+ # digests are embedded). This does mean that the API leaks some implementation
+ # details of (what we consider to be) a reasonable server implementation, but
+ # we consider this to be a worthwhile tradeoff.
+ #
+ # When a `Digest` is used to refer to a proto message, it always refers to the
+ # message in binary encoded form. To ensure consistent hashing, clients and
+ # servers MUST ensure that they serialize messages according to the following
+ # rules, even if there are alternate valid encodings for the same message:
+ #
+ # * Fields are serialized in tag order.
+ # * There are no unknown fields.
+ # * There are no duplicate fields.
+ # * Fields are serialized according to the default semantics for their type.
+ #
+ # Most protocol buffer implementations will always follow these rules when
+ # serializing, but care should be taken to avoid shortcuts. For instance,
+ # concatenating two messages to merge them may produce duplicate fields.
+ "sizeBytes": "A String", # The size of the blob, in bytes.
+ "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string
+ # exactly 64 characters long.
+ },
+ },
+ ],
+ }</pre>
+</div>
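+
+<p>For illustration, a minimal sketch of calling this method with the Python
+client. The digest values and instance name are placeholders, not values
+taken from this page, and credential discovery is environment-dependent:</p>
+<pre>from googleapiclient.discovery import build
+
+service = build('remotebuildexecution', 'v2')  # assumes default credentials
+instance = 'projects/my-project/instances/default_instance'  # placeholder
+
+response = service.blobs().batchRead(
+    instanceName=instance,
+    body={'digests': [{'hash': '4a73bc9d03...', 'sizeBytes': '65534'}]},
+).execute()
+
+for r in response.get('responses', []):
+    # A non-zero status code marks a failed read for that digest.
+    print(r['digest']['hash'], r['status'].get('code', 0))
+</pre>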
+
+<div class="method">
+ <code class="details" id="batchUpdate">batchUpdate(instanceName, body, x__xgafv=None)</code>
+ <pre>Upload many blobs at once.
+
+The server may enforce a limit on the combined total size of blobs
+to be uploaded using this API. This limit may be obtained using the
+Capabilities API.
+Requests exceeding the limit should either be split into smaller
+chunks or uploaded using the ByteStream API, as appropriate.
+
+This request is equivalent to calling a ByteStream `Write` request
+on each individual blob, in parallel. The requests may succeed or fail
+independently.
+
+Errors:
+
+* `INVALID_ARGUMENT`: The client attempted to upload more than the
+ server-supported limit.
+
+Additionally, individual requests may return the following errors:
+
+* `RESOURCE_EXHAUSTED`: There is insufficient disk quota to store the blob.
+* `INVALID_ARGUMENT`: The `Digest` does not match the provided data.
+
+Args:
+ instanceName: string, The instance of the execution system to operate against. A server may
+support multiple instances of the execution system (with their own workers,
+storage, caches, etc.). The server MAY require use of this field to select
+between them in an implementation-defined fashion; otherwise it can be
+omitted. (required)
+ body: object, The request body. (required)
+ The object takes the form of:
+
+{ # A request message for
+ # ContentAddressableStorage.BatchUpdateBlobs.
+ "requests": [ # The individual upload requests.
+ { # A request corresponding to a single blob that the client wants to upload.
+ "data": "A String", # The raw binary data.
+ "digest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest of the blob. This MUST be the digest of `data`.
+ # and its hash. The hash algorithm to use is defined by the server, but servers
+ # SHOULD use SHA-256.
+ #
+ # The size is considered to be an integral part of the digest and cannot be
+ # separated. That is, even if the `hash` field is correctly specified but
+ # `size_bytes` is not, the server MUST reject the request.
+ #
+ # The reason for including the size in the digest is as follows: in a great
+ # many cases, the server needs to know the size of the blob it is about to work
+ # with prior to starting an operation with it, such as flattening Merkle tree
+ # structures or streaming it to a worker. Technically, the server could
+ # implement a separate metadata store, but this results in a significantly more
+ # complicated implementation as opposed to having the client specify the size
+ # up-front (or storing the size along with the digest in every message where
+ # digests are embedded). This does mean that the API leaks some implementation
+ # details of (what we consider to be) a reasonable server implementation, but
+ # we consider this to be a worthwhile tradeoff.
+ #
+ # When a `Digest` is used to refer to a proto message, it always refers to the
+ # message in binary encoded form. To ensure consistent hashing, clients and
+ # servers MUST ensure that they serialize messages according to the following
+ # rules, even if there are alternate valid encodings for the same message:
+ #
+ # * Fields are serialized in tag order.
+ # * There are no unknown fields.
+ # * There are no duplicate fields.
+ # * Fields are serialized according to the default semantics for their type.
+ #
+ # Most protocol buffer implementations will always follow these rules when
+ # serializing, but care should be taken to avoid shortcuts. For instance,
+ # concatenating two messages to merge them may produce duplicate fields.
+ "sizeBytes": "A String", # The size of the blob, in bytes.
+ "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string
+ # exactly 64 characters long.
+ },
+ },
+ ],
+ }
+
+ x__xgafv: string, V1 error format.
+ Allowed values
+ 1 - v1 error format
+ 2 - v2 error format
+
+Returns:
+ An object of the form:
+
+ { # A response message for
+ # ContentAddressableStorage.BatchUpdateBlobs.
+ "responses": [ # The responses to the requests.
+ { # A response corresponding to a single blob that the client tried to upload.
+ "status": { # The `Status` type defines a logical error model that is suitable for # The result of attempting to upload that blob.
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). Each `Status` message contains
+ # three pieces of data: error code, error message, and error details.
+ #
+ # You can find out more about this error model and how to work with it in the
+ # [API Design Guide](https://cloud.google.com/apis/design/errors).
+ "message": "A String", # A developer-facing error message, which should be in English. Any
+ # user-facing error message should be localized and sent in the
+ # google.rpc.Status.details field, or localized by the client.
+ "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+ "details": [ # A list of messages that carry the error details. There is a common set of
+ # message types for APIs to use.
+ {
+ "a_key": "", # Properties of the object. Contains field @type with type URL.
+ },
+ ],
+ },
+ "digest": { # A content digest. A digest for a given blob consists of the size of the blob # The blob digest to which this response corresponds.
+ # and its hash. The hash algorithm to use is defined by the server, but servers
+ # SHOULD use SHA-256.
+ #
+ # The size is considered to be an integral part of the digest and cannot be
+ # separated. That is, even if the `hash` field is correctly specified but
+ # `size_bytes` is not, the server MUST reject the request.
+ #
+ # The reason for including the size in the digest is as follows: in a great
+ # many cases, the server needs to know the size of the blob it is about to work
+ # with prior to starting an operation with it, such as flattening Merkle tree
+ # structures or streaming it to a worker. Technically, the server could
+ # implement a separate metadata store, but this results in a significantly more
+ # complicated implementation as opposed to having the client specify the size
+ # up-front (or storing the size along with the digest in every message where
+ # digests are embedded). This does mean that the API leaks some implementation
+ # details of (what we consider to be) a reasonable server implementation, but
+ # we consider this to be a worthwhile tradeoff.
+ #
+ # When a `Digest` is used to refer to a proto message, it always refers to the
+ # message in binary encoded form. To ensure consistent hashing, clients and
+ # servers MUST ensure that they serialize messages according to the following
+ # rules, even if there are alternate valid encodings for the same message:
+ #
+ # * Fields are serialized in tag order.
+ # * There are no unknown fields.
+ # * There are no duplicate fields.
+ # * Fields are serialized according to the default semantics for their type.
+ #
+ # Most protocol buffer implementations will always follow these rules when
+ # serializing, but care should be taken to avoid shortcuts. For instance,
+ # concatenating two messages to merge them may produce duplicate fields.
+ "sizeBytes": "A String", # The size of the blob, in bytes.
+ "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string
+ # exactly 64 characters long.
+ },
+ },
+ ],
+ }</pre>
+</div>
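+
+<p>A sketch of an upload through this method, reusing the
+<code>service</code> and <code>instance</code> placeholders from the
+batchRead example. Note that over the JSON transport the raw bytes of
+<code>data</code> travel base64-encoded:</p>
+<pre>import base64
+import hashlib
+
+data = b'hello world'
+response = service.blobs().batchUpdate(
+    instanceName=instance,
+    body={'requests': [{
+        'digest': {'hash': hashlib.sha256(data).hexdigest(),
+                   'sizeBytes': str(len(data))},
+        'data': base64.b64encode(data).decode('ascii'),
+    }]},
+).execute()
+
+for r in response.get('responses', []):
+    print(r['digest']['hash'], r['status'].get('code', 0))
+</pre>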
+
+<div class="method">
+ <code class="details" id="findMissing">findMissing(instanceName, body, x__xgafv=None)</code>
+ <pre>Determine if blobs are present in the CAS.
+
+Clients can use this API before uploading blobs to determine which ones are
+already present in the CAS and do not need to be uploaded again.
+
+There are no method-specific errors.
+
+Args:
+ instanceName: string, The instance of the execution system to operate against. A server may
+support multiple instances of the execution system (with their own workers,
+storage, caches, etc.). The server MAY require use of this field to select
+between them in an implementation-defined fashion; otherwise it can be
+omitted. (required)
+ body: object, The request body. (required)
+ The object takes the form of:
+
+{ # A request message for
+ # ContentAddressableStorage.FindMissingBlobs.
+ "blobDigests": [ # A list of the blobs to check.
+ { # A content digest. A digest for a given blob consists of the size of the blob
+ # and its hash. The hash algorithm to use is defined by the server, but servers
+ # SHOULD use SHA-256.
+ #
+ # The size is considered to be an integral part of the digest and cannot be
+ # separated. That is, even if the `hash` field is correctly specified but
+ # `size_bytes` is not, the server MUST reject the request.
+ #
+ # The reason for including the size in the digest is as follows: in a great
+ # many cases, the server needs to know the size of the blob it is about to work
+ # with prior to starting an operation with it, such as flattening Merkle tree
+ # structures or streaming it to a worker. Technically, the server could
+ # implement a separate metadata store, but this results in a significantly more
+ # complicated implementation as opposed to having the client specify the size
+ # up-front (or storing the size along with the digest in every message where
+ # digests are embedded). This does mean that the API leaks some implementation
+ # details of (what we consider to be) a reasonable server implementation, but
+ # we consider this to be a worthwhile tradeoff.
+ #
+ # When a `Digest` is used to refer to a proto message, it always refers to the
+ # message in binary encoded form. To ensure consistent hashing, clients and
+ # servers MUST ensure that they serialize messages according to the following
+ # rules, even if there are alternate valid encodings for the same message:
+ #
+ # * Fields are serialized in tag order.
+ # * There are no unknown fields.
+ # * There are no duplicate fields.
+ # * Fields are serialized according to the default semantics for their type.
+ #
+ # Most protocol buffer implementations will always follow these rules when
+ # serializing, but care should be taken to avoid shortcuts. For instance,
+ # concatenating two messages to merge them may produce duplicate fields.
+ "sizeBytes": "A String", # The size of the blob, in bytes.
+ "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string
+ # exactly 64 characters long.
+ },
+ ],
+ }
+
+ x__xgafv: string, V1 error format.
+ Allowed values
+ 1 - v1 error format
+ 2 - v2 error format
+
+Returns:
+ An object of the form:
+
+ { # A response message for
+ # ContentAddressableStorage.FindMissingBlobs.
+ "missingBlobDigests": [ # A list of the blobs requested *not* present in the storage.
+ { # A content digest. A digest for a given blob consists of the size of the blob
+ # and its hash. The hash algorithm to use is defined by the server, but servers
+ # SHOULD use SHA-256.
+ #
+ # The size is considered to be an integral part of the digest and cannot be
+ # separated. That is, even if the `hash` field is correctly specified but
+ # `size_bytes` is not, the server MUST reject the request.
+ #
+ # The reason for including the size in the digest is as follows: in a great
+ # many cases, the server needs to know the size of the blob it is about to work
+ # with prior to starting an operation with it, such as flattening Merkle tree
+ # structures or streaming it to a worker. Technically, the server could
+ # implement a separate metadata store, but this results in a significantly more
+ # complicated implementation as opposed to having the client specify the size
+ # up-front (or storing the size along with the digest in every message where
+ # digests are embedded). This does mean that the API leaks some implementation
+ # details of (what we consider to be) a reasonable server implementation, but
+ # we consider this to be a worthwhile tradeoff.
+ #
+ # When a `Digest` is used to refer to a proto message, it always refers to the
+ # message in binary encoded form. To ensure consistent hashing, clients and
+ # servers MUST ensure that they serialize messages according to the following
+ # rules, even if there are alternate valid encodings for the same message:
+ #
+ # * Fields are serialized in tag order.
+ # * There are no unknown fields.
+ # * There are no duplicate fields.
+ # * Fields are serialized according to the default semantics for their type.
+ #
+ # Most protocol buffer implementations will always follow these rules when
+ # serializing, but care should be taken to avoid shortcuts. For instance,
+ # concatenating two messages to merge them may produce duplicate fields.
+ "sizeBytes": "A String", # The size of the blob, in bytes.
+ "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string
+ # exactly 64 characters long.
+ },
+ ],
+ }</pre>
+</div>
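+
+<p>A sketch of the usual call pattern: ask which digests are absent before
+deciding what to upload (placeholders as in the examples above):</p>
+<pre>blobs = [b'one', b'two']
+wanted = [{'hash': hashlib.sha256(b).hexdigest(), 'sizeBytes': str(len(b))}
+          for b in blobs]
+
+response = service.blobs().findMissing(
+    instanceName=instance,
+    body={'blobDigests': wanted},
+).execute()
+
+# Only the digests listed here still need a batchUpdate or ByteStream write.
+missing = response.get('missingBlobDigests', [])
+</pre>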
+
+<div class="method">
+ <code class="details" id="getTree">getTree(instanceName, hash, sizeBytes, pageSize=None, pageToken=None, x__xgafv=None)</code>
+ <pre>Fetch the entire directory tree rooted at a node.
+
+This request must be targeted at a Directory stored in the
+ContentAddressableStorage (CAS). The server will enumerate the
+`Directory` tree recursively and return every node descended from the
+root.
+
+The GetTreeRequest.page_token parameter can be used to skip ahead in
+the stream (e.g. when retrying a partially completed and aborted request)
+by setting it to a value taken from the GetTreeResponse.next_page_token
+of the last successfully processed GetTreeResponse.
+
+The exact traversal order is unspecified and, unless retrieving subsequent
+pages from an earlier request, is not guaranteed to be stable across
+multiple invocations of `GetTree`.
+
+If part of the tree is missing from the CAS, the server will return the
+portion present and omit the rest.
+
+Errors:
+
+* `NOT_FOUND`: The requested tree root is not present in the CAS.
+
+Args:
+ instanceName: string, The instance of the execution system to operate against. A server may
+support multiple instances of the execution system (with their own workers,
+storage, caches, etc.). The server MAY require use of this field to select
+between them in an implementation-defined fashion; otherwise it can be
+omitted. (required)
+ hash: string, The hash. In the case of SHA-256, it will always be a lowercase hex string
+exactly 64 characters long. (required)
+ sizeBytes: string, The size of the blob, in bytes. (required)
+ pageSize: integer, A maximum page size to request. If present, the server will return no more
+than this many items. Regardless of whether a page size is specified, the
+server may place its own limit on the number of items to be returned and
+require the client to retrieve more items using a subsequent request.
+ pageToken: string, A page token, which must be a value received in a previous
+GetTreeResponse.
+If present, the server will use it to return the following page of results.
+ x__xgafv: string, V1 error format.
+ Allowed values
+ 1 - v1 error format
+ 2 - v2 error format
+
+Returns:
+ An object of the form:
+
+ { # A response message for
+ # ContentAddressableStorage.GetTree.
+ "nextPageToken": "A String", # If present, signifies that there are more results which the client can
+ # retrieve by passing this as the page_token in a subsequent
+ # request.
+ # If empty, signifies that this is the last page of results.
+ "directories": [ # The directories descended from the requested root.
+ { # A `Directory` represents a directory node in a file tree, containing zero or
+ # more children FileNodes, DirectoryNodes and SymlinkNodes.
+ # Each `Node` contains its name in the directory, either the digest of its
+ # content (either a file blob or a `Directory` proto) or a symlink target, as
+ # well as possibly some metadata about the file or directory.
+ #
+ # In order to ensure that two equivalent directory trees hash to the same
+ # value, the following restrictions MUST be obeyed when constructing
+ # a `Directory`:
+ #
+ # * Every child in the directory must have a path of exactly one segment.
+ # Multiple levels of directory hierarchy may not be collapsed.
+ # * Each child in the directory must have a unique path segment (file name).
+ # Note that while the API itself is case-sensitive, the environment where
+ # the Action is executed may or may not be case-sensitive. That is, it is
+ # legal to call the API with a Directory that has both "Foo" and "foo" as
+ # children, but the Action may be rejected by the remote system upon
+ # execution.
+ # * The files, directories and symlinks in the directory must each be sorted
+ # in lexicographical order by path. The path strings must be sorted by
+ # code point (equivalently, by UTF-8 bytes).
+ #
+ # A `Directory` that obeys the restrictions is said to be in canonical form.
+ #
+ # As an example, the following could be used for a file named `bar` and a
+ # directory named `foo` with an executable file named `baz` (hashes shortened
+ # for readability):
+ #
+ # ```json
+ # // (Directory proto)
+ # {
+ # files: [
+ # {
+ # name: "bar",
+ # digest: {
+ # hash: "4a73bc9d03...",
+ # size: 65534
+ # }
+ # }
+ # ],
+ # directories: [
+ # {
+ # name: "foo",
+ # digest: {
+ # hash: "4cf2eda940...",
+ # size: 43
+ # }
+ # }
+ # ]
+ # }
+ #
+ # // (Directory proto with hash "4cf2eda940..." and size 43)
+ # {
+ # files: [
+ # {
+ # name: "baz",
+ # digest: {
+ # hash: "b2c941073e...",
+ # size: 1294,
+ # },
+ # is_executable: true
+ # }
+ # ]
+ # }
+ # ```
+ "symlinks": [ # The symlinks in the directory.
+ { # A `SymlinkNode` represents a symbolic link.
+ "name": "A String", # The name of the symlink.
+ "target": "A String", # The target path of the symlink. The path separator is a forward slash `/`.
+ # The target path can be relative to the parent directory of the symlink or
+ # it can be an absolute path starting with `/`. Support for absolute paths
+ # can be checked using the Capabilities
+ # API. The canonical form forbids the substrings `/./` and `//` in the target
+ # path. `..` components are allowed anywhere in the target path.
+ },
+ ],
+ "files": [ # The files in the directory.
+ { # A `FileNode` represents a single file and associated metadata.
+ "isExecutable": True or False, # True if file is executable, false otherwise.
+ "name": "A String", # The name of the file.
+ "digest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest of the file's content.
+ # and its hash. The hash algorithm to use is defined by the server, but servers
+ # SHOULD use SHA-256.
+ #
+ # The size is considered to be an integral part of the digest and cannot be
+ # separated. That is, even if the `hash` field is correctly specified but
+ # `size_bytes` is not, the server MUST reject the request.
+ #
+ # The reason for including the size in the digest is as follows: in a great
+ # many cases, the server needs to know the size of the blob it is about to work
+ # with prior to starting an operation with it, such as flattening Merkle tree
+ # structures or streaming it to a worker. Technically, the server could
+ # implement a separate metadata store, but this results in a significantly more
+ # complicated implementation as opposed to having the client specify the size
+ # up-front (or storing the size along with the digest in every message where
+ # digests are embedded). This does mean that the API leaks some implementation
+ # details of (what we consider to be) a reasonable server implementation, but
+ # we consider this to be a worthwhile tradeoff.
+ #
+ # When a `Digest` is used to refer to a proto message, it always refers to the
+ # message in binary encoded form. To ensure consistent hashing, clients and
+ # servers MUST ensure that they serialize messages according to the following
+ # rules, even if there are alternate valid encodings for the same message:
+ #
+ # * Fields are serialized in tag order.
+ # * There are no unknown fields.
+ # * There are no duplicate fields.
+ # * Fields are serialized according to the default semantics for their type.
+ #
+ # Most protocol buffer implementations will always follow these rules when
+ # serializing, but care should be taken to avoid shortcuts. For instance,
+ # concatenating two messages to merge them may produce duplicate fields.
+ "sizeBytes": "A String", # The size of the blob, in bytes.
+ "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string
+ # exactly 64 characters long.
+ },
+ },
+ ],
+ "directories": [ # The subdirectories in the directory.
+ { # A `DirectoryNode` represents a child of a Directory which is itself
+ # a `Directory` and its associated metadata.
+ "name": "A String", # The name of the directory.
+ "digest": { # A content digest. A digest for a given blob consists of the size of the blob # The digest of the
+ # Directory object represented. See Digest for information about how to
+ # take the digest of a proto message.
+ # and its hash. The hash algorithm to use is defined by the server, but servers
+ # SHOULD use SHA-256.
+ #
+ # The size is considered to be an integral part of the digest and cannot be
+ # separated. That is, even if the `hash` field is correctly specified but
+ # `size_bytes` is not, the server MUST reject the request.
+ #
+ # The reason for including the size in the digest is as follows: in a great
+ # many cases, the server needs to know the size of the blob it is about to work
+ # with prior to starting an operation with it, such as flattening Merkle tree
+ # structures or streaming it to a worker. Technically, the server could
+ # implement a separate metadata store, but this results in a significantly more
+ # complicated implementation as opposed to having the client specify the size
+ # up-front (or storing the size along with the digest in every message where
+ # digests are embedded). This does mean that the API leaks some implementation
+ # details of (what we consider to be) a reasonable server implementation, but
+ # we consider this to be a worthwhile tradeoff.
+ #
+ # When a `Digest` is used to refer to a proto message, it always refers to the
+ # message in binary encoded form. To ensure consistent hashing, clients and
+ # servers MUST ensure that they serialize messages according to the following
+ # rules, even if there are alternate valid encodings for the same message:
+ #
+ # * Fields are serialized in tag order.
+ # * There are no unknown fields.
+ # * There are no duplicate fields.
+ # * Fields are serialized according to the default semantics for their type.
+ #
+ # Most protocol buffer implementations will always follow these rules when
+ # serializing, but care should be taken to avoid shortcuts. For instance,
+ # concatenating two messages to merge them may produce duplicate fields.
+ "sizeBytes": "A String", # The size of the blob, in bytes.
+ "hash": "A String", # The hash. In the case of SHA-256, it will always be a lowercase hex string
+ # exactly 64 characters long.
+ },
+ },
+ ],
+ },
+ ],
+ }</pre>
+</div>
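+
+<p>A sketch of fetching a single page of a tree. The root digest is the
+shortened placeholder from the example above, not a real value:</p>
+<pre>request = service.blobs().getTree(
+    instanceName=instance,
+    hash='4cf2eda940...',  # digest of the root Directory (placeholder)
+    sizeBytes='43',
+    pageSize=100,
+)
+response = request.execute()
+for d in response.get('directories', []):
+    print([f['name'] for f in d.get('files', [])])
+</pre>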
+
+<div class="method">
+ <code class="details" id="getTree_next">getTree_next(previous_request, previous_response)</code>
+ <pre>Retrieves the next page of results.
+
+Args:
+ previous_request: The request for the previous page. (required)
+ previous_response: The response from the request for the previous page. (required)
+
+Returns:
+ A request object that you can call 'execute()' on to request the next
+ page. Returns None if there are no more items in the collection.
+ </pre>
+</div>
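+
+<p>The client library's standard pagination idiom, sketched for getTree
+(same placeholders as above):</p>
+<pre>request = service.blobs().getTree(
+    instanceName=instance, hash='4cf2eda940...', sizeBytes='43')
+while request is not None:
+    response = request.execute()
+    for d in response.get('directories', []):
+        pass  # process each Directory in this page
+    request = service.blobs().getTree_next(request, response)
+</pre>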
+
+</body></html>
\ No newline at end of file