<html><body>
<style>
body, h1, h2, h3, div, span, p, pre, a {
margin: 0;
padding: 0;
border: 0;
font-weight: inherit;
font-style: inherit;
font-size: 100%;
font-family: inherit;
vertical-align: baseline;
}
body {
font-size: 13px;
padding: 1em;
}
h1 {
font-size: 26px;
margin-bottom: 1em;
}
h2 {
font-size: 24px;
margin-bottom: 1em;
}
h3 {
font-size: 20px;
margin-bottom: 1em;
margin-top: 1em;
}
pre, code {
line-height: 1.5;
font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}
pre {
margin-top: 0.5em;
}
h1, h2, h3, p {
font-family: Arial, sans-serif;
}
h1, h2, h3 {
border-bottom: solid #CCC 1px;
}
.toc_element {
margin-top: 0.5em;
}
.firstline {
margin-left: 2em;
}
.method {
margin-top: 1em;
border: solid 1px #CCC;
padding: 1em;
background: #EEE;
}
.details {
font-weight: bold;
font-size: 14px;
}
</style>
<h1><a href="documentai_v1.html">Cloud Document AI API</a> . <a href="documentai_v1.projects.html">projects</a> . <a href="documentai_v1.projects.locations.html">locations</a> . <a href="documentai_v1.projects.locations.processors.html">processors</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
<code><a href="documentai_v1.projects.locations.processors.humanReviewConfig.html">humanReviewConfig()</a></code>
</p>
<p class="firstline">Returns the humanReviewConfig Resource.</p>
<p class="toc_element">
<code><a href="documentai_v1.projects.locations.processors.processorVersions.html">processorVersions()</a></code>
</p>
<p class="firstline">Returns the processorVersions Resource.</p>
<p class="toc_element">
<code><a href="#batchProcess">batchProcess(name, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">LRO endpoint to batch process many documents. The output is written to Cloud Storage as JSON in the [Document] format.</p>
<p class="toc_element">
<code><a href="#close">close()</a></code></p>
<p class="firstline">Close httplib2 connections.</p>
<p class="toc_element">
<code><a href="#process">process(name, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Processes a single document.</p>
<h3>Method Details</h3>
<div class="method">
<code class="details" id="batchProcess">batchProcess(name, body=None, x__xgafv=None)</code>
<pre>LRO endpoint to batch process many documents. The output is written to Cloud Storage as JSON in the [Document] format.
Args:
name: string, Required. The resource name of Processor or ProcessorVersion. Format: projects/{project}/locations/{location}/processors/{processor}, or projects/{project}/locations/{location}/processors/{processor}/processorVersions/{processorVersion} (required)
body: object, The request body.
The object takes the form of:
{ # Request message for batch process document method.
&quot;documentOutputConfig&quot;: { # Config that controls the output of documents. All documents will be written as a JSON file. # The overall output config for batch process.
&quot;gcsOutputConfig&quot;: { # The configuration used when outputting documents. # Output config to write the results to Cloud Storage.
&quot;gcsUri&quot;: &quot;A String&quot;, # The Cloud Storage uri (a directory) of the output.
},
},
&quot;inputDocuments&quot;: { # The common config to specify a set of documents used as input. # The input documents for batch process.
&quot;gcsDocuments&quot;: { # Specifies a set of documents on Cloud Storage. # The set of documents individually specified on Cloud Storage.
&quot;documents&quot;: [ # The list of documents.
{ # Specifies a document stored on Cloud Storage.
&quot;gcsUri&quot;: &quot;A String&quot;, # The Cloud Storage object uri.
&quot;mimeType&quot;: &quot;A String&quot;, # An IANA MIME type (RFC6838) of the content.
},
],
},
&quot;gcsPrefix&quot;: { # Specifies all documents on Cloud Storage with a common prefix. # The set of documents that match the specified Cloud Storage [gcs_prefix].
&quot;gcsUriPrefix&quot;: &quot;A String&quot;, # The URI prefix.
},
},
&quot;skipHumanReview&quot;: True or False, # Whether the Human Review feature should be skipped for this request. Defaults to false.
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
Returns:
An object of the form:
{ # This resource represents a long-running operation that is the result of a network API call.
&quot;done&quot;: True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
&quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
&quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
&quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
&quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
},
],
&quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
&quot;metadata&quot;: { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
&quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
},
&quot;name&quot;: &quot;A String&quot;, # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
&quot;response&quot;: { # The normal response of the operation in case of success. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
&quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
},
}</pre>
</div>
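<p>For reference, a minimal sketch of calling <code>batchProcess</code> with the discovery-based Python client. The project, location, processor id, and Cloud Storage paths below are placeholders, and the request body simply mirrors the fields documented above; adapt it to your own resources.</p>
<pre># Minimal sketch (placeholder names): submit a batch processing request and
# inspect the returned long-running operation.
from googleapiclient.discovery import build

service = build('documentai', 'v1')

# Placeholder resource name; a processorVersion name also works per the docs above.
name = 'projects/PROJECT_ID/locations/us/processors/PROCESSOR_ID'

body = {
    'inputDocuments': {
        # Placeholder bucket: process every document under this prefix.
        'gcsPrefix': {'gcsUriPrefix': 'gs://my-bucket/input/'},
    },
    'documentOutputConfig': {
        # Placeholder bucket: results are written here as Document JSON.
        'gcsOutputConfig': {'gcsUri': 'gs://my-bucket/output/'},
    },
    'skipHumanReview': True,
}

operation = service.projects().locations().processors().batchProcess(
    name=name, body=body).execute()

# The response is a long-running operation; poll its `name` until `done` is
# true, at which point either `response` or `error` is populated.
print(operation['name'])</pre>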
<div class="method">
<code class="details" id="close">close()</code>
<pre>Close httplib2 connections.</pre>
</div>
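<p>Before the full request and response schema in the next entry, here is a minimal sketch of calling <code>process</code> with an inline document. The processor name and file path are placeholders, and the inline content is base64-encoded as the schema below requires.</p>
<pre># Minimal sketch (placeholder names): synchronous processing of a single PDF.
import base64

from googleapiclient.discovery import build

service = build('documentai', 'v1')

# Placeholder resource name; a processorVersion name also works per the docs below.
name = 'projects/PROJECT_ID/locations/us/processors/PROCESSOR_ID'

with open('invoice.pdf', 'rb') as f:  # placeholder local file
    content = base64.b64encode(f.read()).decode('utf-8')

body = {
    'inlineDocument': {
        'content': content,            # base64-encoded bytes, per the schema below
        'mimeType': 'application/pdf',
    },
}

response = service.projects().locations().processors().process(
    name=name, body=body).execute()

# The response wraps the processed Document resource described below.
print(sorted(response.keys()))</pre>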
<div class="method">
<code class="details" id="process">process(name, body=None, x__xgafv=None)</code>
<pre>Processes a single document.
Args:
name: string, Required. The resource name of the Processor or ProcessorVersion to use for processing. If a Processor is specified, the server will use its default version. Format: projects/{project}/locations/{location}/processors/{processor}, or projects/{project}/locations/{location}/processors/{processor}/processorVersions/{processorVersion} (required)
body: object, The request body.
The object takes the form of:
{ # Request message for the process document method.
&quot;inlineDocument&quot;: { # Document represents the canonical document resource in Document Understanding AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document Understanding AI to iterate and optimize for quality. # An inline document proto.
&quot;content&quot;: &quot;A String&quot;, # Optional. Inline document content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64.
&quot;entities&quot;: [ # A list of entities detected on Document.text. For document shards, entities in this list may cross shard boundaries.
{ # A phrase in the text that is a known entity type, such as a person, an organization, or location.
&quot;confidence&quot;: 3.14, # Optional. Confidence of detected Schema entity. Range [0, 1].
&quot;id&quot;: &quot;A String&quot;, # Optional. Canonical id. This will be a unique value in the entity list for this document.
&quot;mentionId&quot;: &quot;A String&quot;, # Optional. Deprecated. Use `id` field instead.
&quot;mentionText&quot;: &quot;A String&quot;, # Optional. Text value in the document e.g. `1600 Amphitheatre Pkwy`.
&quot;normalizedValue&quot;: { # Parsed and normalized entity value. # Optional. Normalized entity value. Absent if the extracted value could not be converted or the type (e.g. address) is not supported for certain parsers. This field is also only populated for certain supported document types.
&quot;addressValue&quot;: { # Represents a postal address, e.g. for postal delivery or payments addresses. Given a postal address, a postal service can deliver items to a premise, P.O. Box or similar. It is not intended to model geographical locations (roads, towns, mountains). In typical usage an address would be created via user input or from importing existing data, depending on the type of process. Advice on address input / editing: - Use an i18n-ready address widget such as https://github.com/google/libaddressinput) - Users should not be presented with UI elements for input or editing of fields outside countries where that field is used. For more guidance on how to use this schema, please see: https://support.google.com/business/answer/6397478 # Postal address. See also: https://github.com/googleapis/googleapis/blob/master/google/type/postal_address.proto
&quot;addressLines&quot;: [ # Unstructured address lines describing the lower levels of an address. Because values in address_lines do not have type information and may sometimes contain multiple values in a single field (e.g. &quot;Austin, TX&quot;), it is important that the line order is clear. The order of address lines should be &quot;envelope order&quot; for the country/region of the address. In places where this can vary (e.g. Japan), address_language is used to make it explicit (e.g. &quot;ja&quot; for large-to-small ordering and &quot;ja-Latn&quot; or &quot;en&quot; for small-to-large). This way, the most specific line of an address can be selected based on the language. The minimum permitted structural representation of an address consists of a region_code with all remaining information placed in the address_lines. It would be possible to format such an address very approximately without geocoding, but no semantic reasoning could be made about any of the address components until it was at least partially resolved. Creating an address only containing a region_code and address_lines, and then geocoding is the recommended way to handle completely unstructured addresses (as opposed to guessing which parts of the address should be localities or administrative areas).
&quot;A String&quot;,
],
&quot;administrativeArea&quot;: &quot;A String&quot;, # Optional. Highest administrative subdivision which is used for postal addresses of a country or region. For example, this can be a state, a province, an oblast, or a prefecture. Specifically, for Spain this is the province and not the autonomous community (e.g. &quot;Barcelona&quot; and not &quot;Catalonia&quot;). Many countries don&#x27;t use an administrative area in postal addresses. E.g. in Switzerland this should be left unpopulated.
&quot;languageCode&quot;: &quot;A String&quot;, # Optional. BCP-47 language code of the contents of this address (if known). This is often the UI language of the input form or is expected to match one of the languages used in the address&#x27; country/region, or their transliterated equivalents. This can affect formatting in certain countries, but is not critical to the correctness of the data and will never affect any validation or other non-formatting related operations. If this value is not known, it should be omitted (rather than specifying a possibly incorrect default). Examples: &quot;zh-Hant&quot;, &quot;ja&quot;, &quot;ja-Latn&quot;, &quot;en&quot;.
&quot;locality&quot;: &quot;A String&quot;, # Optional. Generally refers to the city/town portion of the address. Examples: US city, IT comune, UK post town. In regions of the world where localities are not well defined or do not fit into this structure well, leave locality empty and use address_lines.
&quot;organization&quot;: &quot;A String&quot;, # Optional. The name of the organization at the address.
&quot;postalCode&quot;: &quot;A String&quot;, # Optional. Postal code of the address. Not all countries use or require postal codes to be present, but where they are used, they may trigger additional validation with other parts of the address (e.g. state/zip validation in the U.S.A.).
&quot;recipients&quot;: [ # Optional. The recipient at the address. This field may, under certain circumstances, contain multiline information. For example, it might contain &quot;care of&quot; information.
&quot;A String&quot;,
],
&quot;regionCode&quot;: &quot;A String&quot;, # Required. CLDR region code of the country/region of the address. This is never inferred and it is up to the user to ensure the value is correct. See http://cldr.unicode.org/ and http://www.unicode.org/cldr/charts/30/supplemental/territory_information.html for details. Example: &quot;CH&quot; for Switzerland.
&quot;revision&quot;: 42, # The schema revision of the `PostalAddress`. This must be set to 0, which is the latest revision. All new revisions **must** be backward compatible with old revisions.
&quot;sortingCode&quot;: &quot;A String&quot;, # Optional. Additional, country-specific, sorting code. This is not used in most regions. Where it is used, the value is either a string like &quot;CEDEX&quot;, optionally followed by a number (e.g. &quot;CEDEX 7&quot;), or just a number alone, representing the &quot;sector code&quot; (Jamaica), &quot;delivery area indicator&quot; (Malawi) or &quot;post office indicator&quot; (e.g. Côte d&#x27;Ivoire).
&quot;sublocality&quot;: &quot;A String&quot;, # Optional. Sublocality of the address. For example, this can be neighborhoods, boroughs, districts.
},
&quot;booleanValue&quot;: True or False, # Boolean value. Can be used for entities with binary values, or for checkboxes.
&quot;dateValue&quot;: { # Represents a whole or partial calendar date, such as a birthday. The time of day and time zone are either specified elsewhere or are insignificant. The date is relative to the Gregorian Calendar. This can represent one of the following: * A full date, with non-zero year, month, and day values * A month and day value, with a zero year, such as an anniversary * A year on its own, with zero month and day values * A year and month value, with a zero day, such as a credit card expiration date Related types are google.type.TimeOfDay and `google.protobuf.Timestamp`. # Date value. Includes year, month, day. See also: https://github.com/googleapis/googleapis/blob/master/google/type/date.proto
&quot;day&quot;: 42, # Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn&#x27;t significant.
&quot;month&quot;: 42, # Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
&quot;year&quot;: 42, # Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
},
&quot;datetimeValue&quot;: { # Represents civil time (or occasionally physical time). This type can represent a civil time in one of a few possible ways: * When utc_offset is set and time_zone is unset: a civil time on a calendar day with a particular offset from UTC. * When time_zone is set and utc_offset is unset: a civil time on a calendar day in a particular time zone. * When neither time_zone nor utc_offset is set: a civil time on a calendar day in local time. The date is relative to the Proleptic Gregorian Calendar. If year is 0, the DateTime is considered not to have a specific year. month and day must have valid, non-zero values. This type may also be used to represent a physical time if all the date and time fields are set and either case of the `time_offset` oneof is set. Consider using `Timestamp` message for physical time instead. If your use case also would like to store the user&#x27;s timezone, that can be done in another field. This type is more flexible than some applications may want. Make sure to document and validate your application&#x27;s limitations. # DateTime value. Includes date, time, and timezone. See also: https://github.com/googleapis/googleapis/blob/master/google/type/datetime.proto
&quot;day&quot;: 42, # Required. Day of month. Must be from 1 to 31 and valid for the year and month.
&quot;hours&quot;: 42, # Required. Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value &quot;24:00:00&quot; for scenarios like business closing time.
&quot;minutes&quot;: 42, # Required. Minutes of hour of day. Must be from 0 to 59.
&quot;month&quot;: 42, # Required. Month of year. Must be from 1 to 12.
&quot;nanos&quot;: 42, # Required. Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
&quot;seconds&quot;: 42, # Required. Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
&quot;timeZone&quot;: { # Represents a time zone from the [IANA Time Zone Database](https://www.iana.org/time-zones). # Time zone.
&quot;id&quot;: &quot;A String&quot;, # IANA Time Zone Database time zone, e.g. &quot;America/New_York&quot;.
&quot;version&quot;: &quot;A String&quot;, # Optional. IANA Time Zone Database version number, e.g. &quot;2019a&quot;.
},
&quot;utcOffset&quot;: &quot;A String&quot;, # UTC offset. Must be whole seconds, between -18 hours and +18 hours. For example, a UTC offset of -4:00 would be represented as { seconds: -14400 }.
&quot;year&quot;: 42, # Optional. Year of date. Must be from 1 to 9999, or 0 if specifying a datetime without a year.
},
&quot;moneyValue&quot;: { # Represents an amount of money with its currency type. # Money value. See also: https://github.com/googleapis/googleapis/blob/master/google/type/money.proto
&quot;currencyCode&quot;: &quot;A String&quot;, # The three-letter currency code defined in ISO 4217.
&quot;nanos&quot;: 42, # Number of nano (10^-9) units of the amount. The value must be between -999,999,999 and +999,999,999 inclusive. If `units` is positive, `nanos` must be positive or zero. If `units` is zero, `nanos` can be positive, zero, or negative. If `units` is negative, `nanos` must be negative or zero. For example $-1.75 is represented as `units`=-1 and `nanos`=-750,000,000.
&quot;units&quot;: &quot;A String&quot;, # The whole units of the amount. For example if `currencyCode` is `&quot;USD&quot;`, then 1 unit is one US dollar.
},
&quot;text&quot;: &quot;A String&quot;, # Required. Normalized entity value stored as a string. This field is populated for supported document types (e.g. Invoice). For some entity types, one of the respective &#x27;structured_value&#x27; fields may also be populated. - Money/Currency type (`money_value`) is in the ISO 4217 text format. - Date type (`date_value`) is in the ISO 8601 text format. - Datetime type (`datetime_value`) is in the ISO 8601 text format.
},
&quot;pageAnchor&quot;: { # Referencing the visual context of the entity in the Document.pages. Page anchors can be cross-page, consist of multiple bounding polygons and optionally reference specific layout element types. # Optional. Represents the provenance of this entity wrt. the location on the page where it was found.
&quot;pageRefs&quot;: [ # One or more references to visual page elements
{ # Represents a weak reference to a page element within a document.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # Optional. Identifies the bounding polygon of a layout element on the page.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Optional. Confidence of detected page element, if applicable. Range [0, 1].
&quot;layoutId&quot;: &quot;A String&quot;, # Optional. Deprecated. Use PageRef.bounding_poly instead.
&quot;layoutType&quot;: &quot;A String&quot;, # Optional. The type of the layout element that is being referenced if any.
&quot;page&quot;: &quot;A String&quot;, # Required. Index into the Document.pages element, for example using Document.pages to locate the related page element. This field is skipped when its value is the default 0. See https://developers.google.com/protocol-buffers/docs/proto3#json.
},
],
},
&quot;properties&quot;: [ # Optional. Entities can be nested to form a hierarchical data structure representing the content in the document.
# Object with schema name: GoogleCloudDocumentaiV1DocumentEntity
],
&quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # Optional. The history of this annotation.
&quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
&quot;parents&quot;: [ # References to the original elements that are replaced.
{ # Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.
&quot;id&quot;: 42, # The id of the parent provenance.
&quot;index&quot;: 42, # The index of the parent revision&#x27;s corresponding collection of items (e.g. list of entities, properties within entities, etc.)
&quot;revision&quot;: 42, # The index of the [Document.revisions] identifying the parent revision.
},
],
&quot;revision&quot;: 42, # The index of the revision that produced this element.
&quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
},
&quot;redacted&quot;: True or False, # Optional. Whether the entity will be redacted for de-identification purposes.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Optional. Provenance of the entity. Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
&quot;type&quot;: &quot;A String&quot;, # Entity type from a schema e.g. `Address`.
},
],
&quot;entityRelations&quot;: [ # Relationship among Document.entities.
{ # Relationship between Entities.
&quot;objectId&quot;: &quot;A String&quot;, # Object entity id.
&quot;relation&quot;: &quot;A String&quot;, # Relationship description.
&quot;subjectId&quot;: &quot;A String&quot;, # Subject entity id.
},
],
&quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Any error that occurred while processing this document.
&quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
&quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
&quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
},
],
&quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
&quot;mimeType&quot;: &quot;A String&quot;, # An IANA published MIME type (also referred to as media type). For more information, see https://www.iana.org/assignments/media-types/media-types.xhtml.
&quot;pages&quot;: [ # Visual page layout for the Document.
{ # A page in a Document.
&quot;blocks&quot;: [ # A list of visually detected text blocks on the page. A block has a set of lines (collected into paragraphs) that have a common line-spacing and orientation.
{ # A block has a set of lines (collected into paragraphs) that have a common line-spacing and orientation.
&quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for Block.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within context of the object this layout is for. e.g. confidence can be for a single token, a table, a visual element, etc. depending on context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # The history of this annotation.
&quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
&quot;parents&quot;: [ # References to the original elements that are replaced.
{ # Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.
&quot;id&quot;: 42, # The id of the parent provenance.
&quot;index&quot;: 42, # The index of the parent revision&#x27;s corresponding collection of items (e.g. list of entities, properties within entities, etc.)
&quot;revision&quot;: 42, # The index of the [Document.revisions] identifying the parent revision.
},
],
&quot;revision&quot;: 42, # The index of the revision that produced this element.
&quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
},
},
],
&quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;dimension&quot;: { # Dimension for the page. # Physical dimension of the page.
&quot;height&quot;: 3.14, # Page height.
&quot;unit&quot;: &quot;A String&quot;, # Dimension unit.
&quot;width&quot;: 3.14, # Page width.
},
&quot;formFields&quot;: [ # A list of visually detected form fields on the page.
{ # A form field detected on the page.
&quot;fieldName&quot;: { # Visual element describing a layout unit on a page. # Layout for the FormField name. e.g. `Address`, `Email`, `Grand total`, `Phone number`, etc.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within context of the object this layout is for. e.g. confidence can be for a single token, a table, a visual element, etc. depending on context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;fieldValue&quot;: { # Visual element describing a layout unit on a page. # Layout for the FormField value.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within context of the object this layout is for. e.g. confidence can be for a single token, a table, a visual element, etc. depending on context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;nameDetectedLanguages&quot;: [ # A list of detected languages for name together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # The history of this annotation.
&quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
&quot;parents&quot;: [ # References to the original elements that are replaced.
{ # Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.
&quot;id&quot;: 42, # The id of the parent provenance.
&quot;index&quot;: 42, # The index of the parent revision&#x27;s corresponding collection of items (e.g. list of entities, properties within entities, etc.)
&quot;revision&quot;: 42, # The index of the [Document.revisions] identifying the parent revision.
},
],
&quot;revision&quot;: 42, # The index of the revision that produced this element.
&quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
},
&quot;valueDetectedLanguages&quot;: [ # A list of detected languages for value together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;valueType&quot;: &quot;A String&quot;, # If the value is non-textual, this field represents the type. Current valid values are: - blank (this indicates the field_value is normal text) - &quot;unfilled_checkbox&quot; - &quot;filled_checkbox&quot;
},
],
&quot;image&quot;: { # Rendered image contents for this page. # Rendered image for this page. This image is preprocessed to remove any skew, rotation, and distortions such that the annotation bounding boxes can be upright and axis-aligned.
&quot;content&quot;: &quot;A String&quot;, # Raw byte content of the image.
&quot;height&quot;: 42, # Height of the image in pixels.
&quot;mimeType&quot;: &quot;A String&quot;, # Encoding mime type for the image.
&quot;width&quot;: 42, # Width of the image in pixels.
},
&quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for the page.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within context of the object this layout is for. e.g. confidence can be for a single token, a table, a visual element, etc. depending on context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;lines&quot;: [ # A list of visually detected text lines on the page. A collection of tokens that a human would perceive as a line.
{ # A collection of tokens that a human would perceive as a line. Does not cross column boundaries, can be horizontal, vertical, etc.
&quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for Line.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within context of the object this layout is for. e.g. confidence can be for a single token, a table, a visual element, etc. depending on context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # The history of this annotation.
&quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
&quot;parents&quot;: [ # References to the original elements that are replaced.
{ # Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.
&quot;id&quot;: 42, # The id of the parent provenance.
&quot;index&quot;: 42, # The index of the parent revision&#x27;s corresponding collection of items (e.g. list of entities, properties within entities, etc.)
&quot;revision&quot;: 42, # The index of the [Document.revisions] identifying the parent revision.
},
],
&quot;revision&quot;: 42, # The index of the revision that produced this element.
&quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
},
},
],
&quot;pageNumber&quot;: 42, # 1-based index for current Page in a parent Document. Useful when a page is taken out of a Document for individual processing.
&quot;paragraphs&quot;: [ # A list of visually detected text paragraphs on the page. A collection of lines that a human would perceive as a paragraph.
{ # A collection of lines that a human would perceive as a paragraph.
&quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for Paragraph.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within context of the object this layout is for. e.g. confidence can be for a single token, a table, a visual element, etc. depending on context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # The history of this annotation.
&quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
&quot;parents&quot;: [ # References to the original elements that are replaced.
{ # Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.
&quot;id&quot;: 42, # The id of the parent provenance.
&quot;index&quot;: 42, # The index of the parent revision&#x27;s corresponding collection of items (e.g. list of entities, properties within entities, etc.)
&quot;revision&quot;: 42, # The index of the [Document.revisions] identifying the parent revision.
},
],
&quot;revision&quot;: 42, # The index of the revision that produced this element.
&quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
},
},
],
&quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # The history of this page.
&quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
&quot;parents&quot;: [ # References to the original elements that are replaced.
{ # Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.
&quot;id&quot;: 42, # The id of the parent provenance.
&quot;index&quot;: 42, # The index of the parent revision&#x27;s corresponding collection of items (e.g. list of entities, properties within entities, etc.)
&quot;revision&quot;: 42, # The index of the [Document.revisions] identifying the parent revision.
},
],
&quot;revision&quot;: 42, # The index of the revision that produced this element.
&quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
},
&quot;tables&quot;: [ # A list of visually detected tables on the page.
{ # A table representation similar to HTML table structure.
&quot;bodyRows&quot;: [ # Body rows of the table.
{ # A row of table cells.
&quot;cells&quot;: [ # Cells that make up this row.
{ # A cell representation inside the table.
&quot;colSpan&quot;: 42, # How many columns this cell spans.
&quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for TableCell.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within context of the object this layout is for. e.g. confidence can be for a single token, a table, a visual element, etc. depending on context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;rowSpan&quot;: 42, # How many rows this cell spans.
},
],
},
],
&quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;headerRows&quot;: [ # Header rows of the table.
{ # A row of table cells.
&quot;cells&quot;: [ # Cells that make up this row.
{ # A cell representation inside the table.
&quot;colSpan&quot;: 42, # How many columns this cell spans.
&quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for TableCell.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within context of the object this layout is for. e.g. confidence can be for a single token, a table, a visual element, etc. depending on context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;rowSpan&quot;: 42, # How many rows this cell spans.
},
],
},
],
&quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for Table.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within context of the object this layout is for. e.g. confidence can be for a single token, a table, a visual element, etc. depending on context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
},
],
&quot;tokens&quot;: [ # A list of visually detected tokens on the page.
{ # A detected token.
&quot;detectedBreak&quot;: { # Detected break at the end of a Token. # Detected break at the end of a Token.
&quot;type&quot;: &quot;A String&quot;, # Detected break type.
},
&quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for Token.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within context of the object this layout is for. e.g. confidence can be for a single token, a table, a visual element, etc. depending on context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # The history of this annotation.
&quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
&quot;parents&quot;: [ # References to the original elements that are replaced.
{ # Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.
&quot;id&quot;: 42, # The id of the parent provenance.
&quot;index&quot;: 42, # The index into the parent revision&#x27;s corresponding collection of items (e.g. list of entities, properties within entities, etc.)
&quot;revision&quot;: 42, # The index of the [Document.revisions] identifying the parent revision.
},
],
&quot;revision&quot;: 42, # The index of the revision that produced this element.
&quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
},
},
],
&quot;transforms&quot;: [ # Transformation matrices that were applied to the original document image to produce Page.image.
{ # Representation for transformation matrix, intended to be compatible and used with OpenCV format for image manipulation.
&quot;cols&quot;: 42, # Number of columns in the matrix.
&quot;data&quot;: &quot;A String&quot;, # The matrix data.
&quot;rows&quot;: 42, # Number of rows in the matrix.
&quot;type&quot;: 42, # This encodes information about what data type the matrix uses. For example, 0 (CV_8U) is an unsigned 8-bit image. For the full list of OpenCV primitive data types, please refer to https://docs.opencv.org/4.3.0/d1/d1b/group__core__hal__interface.html
},
],
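# Illustrative sketch (not part of the schema): decoding one transform entry with NumPy,
# assuming the base64-encoded matrix data is row-major and the type is 0 (CV_8U, single
# channel); other OpenCV depths and channel counts would need a different dtype and reshape.
#
#   import base64
#   import numpy as np
#
#   m = page[&quot;transforms&quot;][0]
#   matrix = np.frombuffer(base64.b64decode(m[&quot;data&quot;]), dtype=np.uint8).reshape(m[&quot;rows&quot;], m[&quot;cols&quot;])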
&quot;visualElements&quot;: [ # A list of detected non-text visual elements on the page, e.g. checkboxes, signatures, etc.
{ # A detected non-text visual element on the page, e.g. a checkbox or signature.
&quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for VisualElement.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within context of the object this layout is for. e.g. confidence can be for a single token, a table, a visual element, etc. depending on context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;type&quot;: &quot;A String&quot;, # Type of the VisualElement.
},
],
},
],
&quot;revisions&quot;: [ # Revision history of this document.
{ # Contains past or forward revisions of this document.
&quot;agent&quot;: &quot;A String&quot;, # If the change was made by a person, specify the name or id of that person.
&quot;createTime&quot;: &quot;A String&quot;, # The time that the revision was created.
&quot;humanReview&quot;: { # Human Review information of the document. # Human Review information of this revision.
&quot;state&quot;: &quot;A String&quot;, # Human review state. e.g. `requested`, `succeeded`, `rejected`.
&quot;stateMessage&quot;: &quot;A String&quot;, # A message providing more details about the current state of processing. For example, the rejection reason when the state is `rejected`.
},
&quot;id&quot;: &quot;A String&quot;, # Id of the revision. Unique within the context of the document.
&quot;parent&quot;: [ # The revisions that this revision is based on. This can include one or more parents (when documents are merged). This field represents the index into the `revisions` field.
42,
],
&quot;processor&quot;: &quot;A String&quot;, # If the annotation was made by a processor, identify the processor by its resource name.
},
],
&quot;shardInfo&quot;: { # For a large document, sharding may be performed to produce several document shards. Each document shard contains this field to detail which shard it is. # Information about the sharding if this document is a shard of a larger document. If the document is not sharded, this message is not specified.
&quot;shardCount&quot;: &quot;A String&quot;, # Total number of shards.
&quot;shardIndex&quot;: &quot;A String&quot;, # The 0-based index of this shard.
&quot;textOffset&quot;: &quot;A String&quot;, # The index of the first character in Document.text in the overall document global text.
},
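# Illustrative sketch (not part of the schema): mapping a shard-local index back to the global
# text. For example, with a hypothetical textOffset of &quot;1000&quot; and a TextSegment startIndex of
# &quot;50&quot;, the character sits at global offset 1000 + 50 = 1050 in the un-sharded document text.
#
#   global_start = int(shard_info[&quot;textOffset&quot;]) + int(segment.get(&quot;startIndex&quot;, 0))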
&quot;text&quot;: &quot;A String&quot;, # Optional. UTF-8 encoded text in reading order from the document.
&quot;textChanges&quot;: [ # A list of text corrections made to [Document.text]. This is usually used for annotating corrections to OCR mistakes. Text changes for a given revision may not overlap with each other.
{ # This message is used for text changes, a.k.a. OCR corrections.
&quot;changedText&quot;: &quot;A String&quot;, # The text that replaces the text identified in the `text_anchor`.
&quot;provenance&quot;: [ # The history of this annotation.
{ # Structure to identify provenance relationships between annotations in different revisions.
&quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
&quot;parents&quot;: [ # References to the original elements that are replaced.
{ # Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.
&quot;id&quot;: 42, # The id of the parent provenance.
&quot;index&quot;: 42, # The index into the parent revision&#x27;s corresponding collection of items (e.g. list of entities, properties within entities, etc.)
&quot;revision&quot;: 42, # The index of the [Document.revisions] identifying the parent revision.
},
],
&quot;revision&quot;: 42, # The index of the revision that produced this element.
&quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
},
],
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Provenance of the correction. Text anchor indexing into the Document.text. There can only be a single `TextAnchor.text_segments` element. If the start and end index of the text segment are the same, the text change is inserted before that index.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
],
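# Illustrative sketch (not part of the schema): applying one text change to the shard text.
# Assumes the single-segment anchor noted above and ASCII-only content so that UTF-8 indices
# and Python string indices coincide; an equal start and end index is an insertion.
#
#   seg = change[&quot;textAnchor&quot;][&quot;textSegments&quot;][0]
#   start, end = int(seg.get(&quot;startIndex&quot;, 0)), int(seg[&quot;endIndex&quot;])
#   text = text[:start] + change[&quot;changedText&quot;] + text[end:]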
&quot;textStyles&quot;: [ # Styles for the Document.text.
{ # Annotation for common text style attributes. This adheres to CSS conventions as much as possible.
&quot;backgroundColor&quot;: { # Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to/from color representations in various languages over compactness. For example, the fields of this representation can be trivially provided to the constructor of `java.awt.Color` in Java; it can also be trivially provided to UIColor&#x27;s `+colorWithRed:green:blue:alpha` method in iOS; and, with just a little work, it can be easily formatted into a CSS `rgba()` string in JavaScript. This reference page doesn&#x27;t carry information about the absolute color space that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB, DCI-P3, BT.2020, etc.). By default, applications should assume the sRGB color space. When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most 1e-5. Example (Java): import com.google.type.Color; // ... public static java.awt.Color fromProto(Color protocolor) { float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0; return new java.awt.Color( protocolor.getRed(), protocolor.getGreen(), protocolor.getBlue(), alpha); } public static Color toProto(java.awt.Color color) { float red = (float) color.getRed(); float green = (float) color.getGreen(); float blue = (float) color.getBlue(); float denominator = 255.0; Color.Builder resultBuilder = Color .newBuilder() .setRed(red / denominator) .setGreen(green / denominator) .setBlue(blue / denominator); int alpha = color.getAlpha(); if (alpha != 255) { result.setAlpha( FloatValue .newBuilder() .setValue(((float) alpha) / denominator) .build()); } return resultBuilder.build(); } // ... Example (iOS / Obj-C): // ... static UIColor* fromProto(Color* protocolor) { float red = [protocolor red]; float green = [protocolor green]; float blue = [protocolor blue]; FloatValue* alpha_wrapper = [protocolor alpha]; float alpha = 1.0; if (alpha_wrapper != nil) { alpha = [alpha_wrapper value]; } return [UIColor colorWithRed:red green:green blue:blue alpha:alpha]; } static Color* toProto(UIColor* color) { CGFloat red, green, blue, alpha; if (![color getRed:&amp;red green:&amp;green blue:&amp;blue alpha:&amp;alpha]) { return nil; } Color* result = [[Color alloc] init]; [result setRed:red]; [result setGreen:green]; [result setBlue:blue]; if (alpha &lt;= 0.9999) { [result setAlpha:floatWrapperWithValue(alpha)]; } [result autorelease]; return result; } // ... Example (JavaScript): // ... var protoToCssColor = function(rgb_color) { var redFrac = rgb_color.red || 0.0; var greenFrac = rgb_color.green || 0.0; var blueFrac = rgb_color.blue || 0.0; var red = Math.floor(redFrac * 255); var green = Math.floor(greenFrac * 255); var blue = Math.floor(blueFrac * 255); if (!(&#x27;alpha&#x27; in rgb_color)) { return rgbToCssColor(red, green, blue); } var alphaFrac = rgb_color.alpha.value || 0.0; var rgbParams = [red, green, blue].join(&#x27;,&#x27;); return [&#x27;rgba(&#x27;, rgbParams, &#x27;,&#x27;, alphaFrac, &#x27;)&#x27;].join(&#x27;&#x27;); }; var rgbToCssColor = function(red, green, blue) { var rgbNumber = new Number((red &lt;&lt; 16) | (green &lt;&lt; 8) | blue); var hexString = rgbNumber.toString(16); var missingZeros = 6 - hexString.length; var resultBuilder = [&#x27;#&#x27;]; for (var i = 0; i &lt; missingZeros; i++) { resultBuilder.push(&#x27;0&#x27;); } resultBuilder.push(hexString); return resultBuilder.join(&#x27;&#x27;); }; // ... # Text background color.
&quot;alpha&quot;: 3.14, # The fraction of this color that should be applied to the pixel. That is, the final pixel color is defined by the equation: `pixel color = alpha * (this color) + (1.0 - alpha) * (background color)` This means that a value of 1.0 corresponds to a solid color, whereas a value of 0.0 corresponds to a completely transparent color. This uses a wrapper message rather than a simple float scalar so that it is possible to distinguish between a default value and the value being unset. If omitted, this color object is rendered as a solid color (as if the alpha value had been explicitly given a value of 1.0).
&quot;blue&quot;: 3.14, # The amount of blue in the color as a value in the interval [0, 1].
&quot;green&quot;: 3.14, # The amount of green in the color as a value in the interval [0, 1].
&quot;red&quot;: 3.14, # The amount of red in the color as a value in the interval [0, 1].
},
&quot;color&quot;: { # Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to/from color representations in various languages over compactness. For example, the fields of this representation can be trivially provided to the constructor of `java.awt.Color` in Java; it can also be trivially provided to UIColor&#x27;s `+colorWithRed:green:blue:alpha` method in iOS; and, with just a little work, it can be easily formatted into a CSS `rgba()` string in JavaScript. This reference page doesn&#x27;t carry information about the absolute color space that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB, DCI-P3, BT.2020, etc.). By default, applications should assume the sRGB color space. When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most 1e-5. Example (Java): import com.google.type.Color; // ... public static java.awt.Color fromProto(Color protocolor) { float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0; return new java.awt.Color( protocolor.getRed(), protocolor.getGreen(), protocolor.getBlue(), alpha); } public static Color toProto(java.awt.Color color) { float red = (float) color.getRed(); float green = (float) color.getGreen(); float blue = (float) color.getBlue(); float denominator = 255.0; Color.Builder resultBuilder = Color .newBuilder() .setRed(red / denominator) .setGreen(green / denominator) .setBlue(blue / denominator); int alpha = color.getAlpha(); if (alpha != 255) { result.setAlpha( FloatValue .newBuilder() .setValue(((float) alpha) / denominator) .build()); } return resultBuilder.build(); } // ... Example (iOS / Obj-C): // ... static UIColor* fromProto(Color* protocolor) { float red = [protocolor red]; float green = [protocolor green]; float blue = [protocolor blue]; FloatValue* alpha_wrapper = [protocolor alpha]; float alpha = 1.0; if (alpha_wrapper != nil) { alpha = [alpha_wrapper value]; } return [UIColor colorWithRed:red green:green blue:blue alpha:alpha]; } static Color* toProto(UIColor* color) { CGFloat red, green, blue, alpha; if (![color getRed:&amp;red green:&amp;green blue:&amp;blue alpha:&amp;alpha]) { return nil; } Color* result = [[Color alloc] init]; [result setRed:red]; [result setGreen:green]; [result setBlue:blue]; if (alpha &lt;= 0.9999) { [result setAlpha:floatWrapperWithValue(alpha)]; } [result autorelease]; return result; } // ... Example (JavaScript): // ... var protoToCssColor = function(rgb_color) { var redFrac = rgb_color.red || 0.0; var greenFrac = rgb_color.green || 0.0; var blueFrac = rgb_color.blue || 0.0; var red = Math.floor(redFrac * 255); var green = Math.floor(greenFrac * 255); var blue = Math.floor(blueFrac * 255); if (!(&#x27;alpha&#x27; in rgb_color)) { return rgbToCssColor(red, green, blue); } var alphaFrac = rgb_color.alpha.value || 0.0; var rgbParams = [red, green, blue].join(&#x27;,&#x27;); return [&#x27;rgba(&#x27;, rgbParams, &#x27;,&#x27;, alphaFrac, &#x27;)&#x27;].join(&#x27;&#x27;); }; var rgbToCssColor = function(red, green, blue) { var rgbNumber = new Number((red &lt;&lt; 16) | (green &lt;&lt; 8) | blue); var hexString = rgbNumber.toString(16); var missingZeros = 6 - hexString.length; var resultBuilder = [&#x27;#&#x27;]; for (var i = 0; i &lt; missingZeros; i++) { resultBuilder.push(&#x27;0&#x27;); } resultBuilder.push(hexString); return resultBuilder.join(&#x27;&#x27;); }; // ... # Text color.
&quot;alpha&quot;: 3.14, # The fraction of this color that should be applied to the pixel. That is, the final pixel color is defined by the equation: `pixel color = alpha * (this color) + (1.0 - alpha) * (background color)` This means that a value of 1.0 corresponds to a solid color, whereas a value of 0.0 corresponds to a completely transparent color. This uses a wrapper message rather than a simple float scalar so that it is possible to distinguish between a default value and the value being unset. If omitted, this color object is rendered as a solid color (as if the alpha value had been explicitly given a value of 1.0).
&quot;blue&quot;: 3.14, # The amount of blue in the color as a value in the interval [0, 1].
&quot;green&quot;: 3.14, # The amount of green in the color as a value in the interval [0, 1].
&quot;red&quot;: 3.14, # The amount of red in the color as a value in the interval [0, 1].
},
&quot;fontSize&quot;: { # Font size with unit. # Font size.
&quot;size&quot;: 3.14, # Font size for the text.
&quot;unit&quot;: &quot;A String&quot;, # Unit for the font size. Follows CSS naming (in, px, pt, etc.).
},
&quot;fontWeight&quot;: &quot;A String&quot;, # Font weight. Possible values are normal, bold, bolder, and lighter. https://www.w3schools.com/cssref/pr_font_weight.asp
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
&quot;textDecoration&quot;: &quot;A String&quot;, # Text decoration. Follows CSS standard. https://www.w3schools.com/cssref/pr_text_text-decoration.asp
&quot;textStyle&quot;: &quot;A String&quot;, # Text style. Possible values are normal, italic, and oblique. https://www.w3schools.com/cssref/pr_font_font-style.asp
},
],
&quot;uri&quot;: &quot;A String&quot;, # Optional. Currently supports Google Cloud Storage URI of the form `gs://bucket_name/object_name`. Object versioning is not supported. See [Google Cloud Storage Request URIs](https://cloud.google.com/storage/docs/reference-uris) for more info.
},
&quot;rawDocument&quot;: { # Payload message of raw document content (bytes). # A raw document content (bytes).
&quot;content&quot;: &quot;A String&quot;, # Inline document content.
&quot;mimeType&quot;: &quot;A String&quot;, # An IANA MIME type (RFC6838) indicating the nature and format of the [content].
},
&quot;skipHumanReview&quot;: True or False, # Whether the Human Review feature should be skipped for this request. Defaults to false.
}
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
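
For illustration only, a minimal sketch of calling this method with inline raw content using google-api-python-client; the project, location, and processor in the resource name are hypothetical, and Application Default Credentials are assumed to be configured:

  from googleapiclient.discovery import build

  documentai = build(&quot;documentai&quot;, &quot;v1&quot;)
  name = &quot;projects/my-project/locations/us/processors/my-processor&quot;
  body = {
      &quot;rawDocument&quot;: {
          &quot;content&quot;: &quot;BASE64_ENCODED_PDF_BYTES&quot;,  # placeholder, not a real payload
          &quot;mimeType&quot;: &quot;application/pdf&quot;,
      },
      &quot;skipHumanReview&quot;: True,
  }
  response = documentai.projects().locations().processors().process(name=name, body=body).execute()
  document = response.get(&quot;document&quot;, {})
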
Returns:
An object of the form:
{ # Response message for the process document method.
&quot;document&quot;: { # Document represents the canonical document resource in Document Understanding AI. It is an interchange format that provides insights into documents and allows for collaboration between users and Document Understanding AI to iterate and optimize for quality. # The document payload; fields are populated based on the processor&#x27;s behavior.
&quot;content&quot;: &quot;A String&quot;, # Optional. Inline document content, represented as a stream of bytes. Note: As with all `bytes` fields, protobuffers use a pure binary representation, whereas JSON representations use base64.
&quot;entities&quot;: [ # A list of entities detected on Document.text. For document shards, entities in this list may cross shard boundaries.
{ # A phrase in the text that is a known entity type, such as a person, an organization, or location.
&quot;confidence&quot;: 3.14, # Optional. Confidence of detected Schema entity. Range [0, 1].
&quot;id&quot;: &quot;A String&quot;, # Optional. Canonical id. This will be a unique value in the entity list for this document.
&quot;mentionId&quot;: &quot;A String&quot;, # Optional. Deprecated. Use `id` field instead.
&quot;mentionText&quot;: &quot;A String&quot;, # Optional. Text value in the document e.g. `1600 Amphitheatre Pkwy`.
&quot;normalizedValue&quot;: { # Parsed and normalized entity value. # Optional. Normalized entity value. Absent if the extracted value could not be converted or the type (e.g. address) is not supported for certain parsers. This field is also only populated for certain supported document types.
&quot;addressValue&quot;: { # Represents a postal address, e.g. for postal delivery or payments addresses. Given a postal address, a postal service can deliver items to a premise, P.O. Box or similar. It is not intended to model geographical locations (roads, towns, mountains). In typical usage an address would be created via user input or from importing existing data, depending on the type of process. Advice on address input / editing: - Use an i18n-ready address widget such as https://github.com/google/libaddressinput - Users should not be presented with UI elements for input or editing of fields outside countries where that field is used. For more guidance on how to use this schema, please see: https://support.google.com/business/answer/6397478 # Postal address. See also: https://github.com/googleapis/googleapis/blob/master/google/type/postal_address.proto
&quot;addressLines&quot;: [ # Unstructured address lines describing the lower levels of an address. Because values in address_lines do not have type information and may sometimes contain multiple values in a single field (e.g. &quot;Austin, TX&quot;), it is important that the line order is clear. The order of address lines should be &quot;envelope order&quot; for the country/region of the address. In places where this can vary (e.g. Japan), address_language is used to make it explicit (e.g. &quot;ja&quot; for large-to-small ordering and &quot;ja-Latn&quot; or &quot;en&quot; for small-to-large). This way, the most specific line of an address can be selected based on the language. The minimum permitted structural representation of an address consists of a region_code with all remaining information placed in the address_lines. It would be possible to format such an address very approximately without geocoding, but no semantic reasoning could be made about any of the address components until it was at least partially resolved. Creating an address only containing a region_code and address_lines, and then geocoding is the recommended way to handle completely unstructured addresses (as opposed to guessing which parts of the address should be localities or administrative areas).
&quot;A String&quot;,
],
&quot;administrativeArea&quot;: &quot;A String&quot;, # Optional. Highest administrative subdivision which is used for postal addresses of a country or region. For example, this can be a state, a province, an oblast, or a prefecture. Specifically, for Spain this is the province and not the autonomous community (e.g. &quot;Barcelona&quot; and not &quot;Catalonia&quot;). Many countries don&#x27;t use an administrative area in postal addresses. E.g. in Switzerland this should be left unpopulated.
&quot;languageCode&quot;: &quot;A String&quot;, # Optional. BCP-47 language code of the contents of this address (if known). This is often the UI language of the input form or is expected to match one of the languages used in the address&#x27; country/region, or their transliterated equivalents. This can affect formatting in certain countries, but is not critical to the correctness of the data and will never affect any validation or other non-formatting related operations. If this value is not known, it should be omitted (rather than specifying a possibly incorrect default). Examples: &quot;zh-Hant&quot;, &quot;ja&quot;, &quot;ja-Latn&quot;, &quot;en&quot;.
&quot;locality&quot;: &quot;A String&quot;, # Optional. Generally refers to the city/town portion of the address. Examples: US city, IT comune, UK post town. In regions of the world where localities are not well defined or do not fit into this structure well, leave locality empty and use address_lines.
&quot;organization&quot;: &quot;A String&quot;, # Optional. The name of the organization at the address.
&quot;postalCode&quot;: &quot;A String&quot;, # Optional. Postal code of the address. Not all countries use or require postal codes to be present, but where they are used, they may trigger additional validation with other parts of the address (e.g. state/zip validation in the U.S.A.).
&quot;recipients&quot;: [ # Optional. The recipient at the address. This field may, under certain circumstances, contain multiline information. For example, it might contain &quot;care of&quot; information.
&quot;A String&quot;,
],
&quot;regionCode&quot;: &quot;A String&quot;, # Required. CLDR region code of the country/region of the address. This is never inferred and it is up to the user to ensure the value is correct. See http://cldr.unicode.org/ and http://www.unicode.org/cldr/charts/30/supplemental/territory_information.html for details. Example: &quot;CH&quot; for Switzerland.
&quot;revision&quot;: 42, # The schema revision of the `PostalAddress`. This must be set to 0, which is the latest revision. All new revisions **must** be backward compatible with old revisions.
&quot;sortingCode&quot;: &quot;A String&quot;, # Optional. Additional, country-specific, sorting code. This is not used in most regions. Where it is used, the value is either a string like &quot;CEDEX&quot;, optionally followed by a number (e.g. &quot;CEDEX 7&quot;), or just a number alone, representing the &quot;sector code&quot; (Jamaica), &quot;delivery area indicator&quot; (Malawi) or &quot;post office indicator&quot; (e.g. Côte d&#x27;Ivoire).
&quot;sublocality&quot;: &quot;A String&quot;, # Optional. Sublocality of the address. For example, this can be neighborhoods, boroughs, districts.
},
&quot;booleanValue&quot;: True or False, # Boolean value. Can be used for entities with binary values, or for checkboxes.
&quot;dateValue&quot;: { # Represents a whole or partial calendar date, such as a birthday. The time of day and time zone are either specified elsewhere or are insignificant. The date is relative to the Gregorian Calendar. This can represent one of the following: * A full date, with non-zero year, month, and day values * A month and day value, with a zero year, such as an anniversary * A year on its own, with zero month and day values * A year and month value, with a zero day, such as a credit card expiration date Related types are google.type.TimeOfDay and `google.protobuf.Timestamp`. # Date value. Includes year, month, day. See also: https://github.com/googleapis/googleapis/blob/master/google/type/date.proto
&quot;day&quot;: 42, # Day of a month. Must be from 1 to 31 and valid for the year and month, or 0 to specify a year by itself or a year and month where the day isn&#x27;t significant.
&quot;month&quot;: 42, # Month of a year. Must be from 1 to 12, or 0 to specify a year without a month and day.
&quot;year&quot;: 42, # Year of the date. Must be from 1 to 9999, or 0 to specify a date without a year.
},
&quot;datetimeValue&quot;: { # Represents civil time (or occasionally physical time). This type can represent a civil time in one of a few possible ways: * When utc_offset is set and time_zone is unset: a civil time on a calendar day with a particular offset from UTC. * When time_zone is set and utc_offset is unset: a civil time on a calendar day in a particular time zone. * When neither time_zone nor utc_offset is set: a civil time on a calendar day in local time. The date is relative to the Proleptic Gregorian Calendar. If year is 0, the DateTime is considered not to have a specific year. month and day must have valid, non-zero values. This type may also be used to represent a physical time if all the date and time fields are set and either case of the `time_offset` oneof is set. Consider using `Timestamp` message for physical time instead. If your use case also would like to store the user&#x27;s timezone, that can be done in another field. This type is more flexible than some applications may want. Make sure to document and validate your application&#x27;s limitations. # DateTime value. Includes date, time, and timezone. See also: https://github.com/googleapis/googleapis/blob/master/google/type/datetime.proto
&quot;day&quot;: 42, # Required. Day of month. Must be from 1 to 31 and valid for the year and month.
&quot;hours&quot;: 42, # Required. Hours of day in 24 hour format. Should be from 0 to 23. An API may choose to allow the value &quot;24:00:00&quot; for scenarios like business closing time.
&quot;minutes&quot;: 42, # Required. Minutes of hour of day. Must be from 0 to 59.
&quot;month&quot;: 42, # Required. Month of year. Must be from 1 to 12.
&quot;nanos&quot;: 42, # Required. Fractions of seconds in nanoseconds. Must be from 0 to 999,999,999.
&quot;seconds&quot;: 42, # Required. Seconds of minutes of the time. Must normally be from 0 to 59. An API may allow the value 60 if it allows leap-seconds.
&quot;timeZone&quot;: { # Represents a time zone from the [IANA Time Zone Database](https://www.iana.org/time-zones). # Time zone.
&quot;id&quot;: &quot;A String&quot;, # IANA Time Zone Database time zone, e.g. &quot;America/New_York&quot;.
&quot;version&quot;: &quot;A String&quot;, # Optional. IANA Time Zone Database version number, e.g. &quot;2019a&quot;.
},
&quot;utcOffset&quot;: &quot;A String&quot;, # UTC offset. Must be whole seconds, between -18 hours and +18 hours. For example, a UTC offset of -4:00 would be represented as { seconds: -14400 }.
&quot;year&quot;: 42, # Optional. Year of date. Must be from 1 to 9999, or 0 if specifying a datetime without a year.
},
&quot;moneyValue&quot;: { # Represents an amount of money with its currency type. # Money value. See also: https://github.com/googleapis/googleapis/blob/master/google/type/money.proto
&quot;currencyCode&quot;: &quot;A String&quot;, # The three-letter currency code defined in ISO 4217.
&quot;nanos&quot;: 42, # Number of nano (10^-9) units of the amount. The value must be between -999,999,999 and +999,999,999 inclusive. If `units` is positive, `nanos` must be positive or zero. If `units` is zero, `nanos` can be positive, zero, or negative. If `units` is negative, `nanos` must be negative or zero. For example $-1.75 is represented as `units`=-1 and `nanos`=-750,000,000.
&quot;units&quot;: &quot;A String&quot;, # The whole units of the amount. For example if `currencyCode` is `&quot;USD&quot;`, then 1 unit is one US dollar.
},
&quot;text&quot;: &quot;A String&quot;, # Required. Normalized entity value stored as a string. This field is populated for supported document types (e.g. Invoice). For some entity types, one of the respective &#x27;structured_value&#x27; fields may also be populated. - Money/Currency type (`money_value`) is in the ISO 4217 text format. - Date type (`date_value`) is in the ISO 8601 text format. - Datetime type (`datetime_value`) is in the ISO 8601 text format.
},
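# Illustrative sketch (not part of the schema): combining the money fields above into a
# Decimal. Per the field comments, `units` holds whole currency units (a string in JSON) and
# `nanos` the 10^-9 fraction, so units=-1, nanos=-750000000 represents -1.75.
#
#   from decimal import Decimal
#
#   money = entity[&quot;normalizedValue&quot;][&quot;moneyValue&quot;]
#   amount = Decimal(money[&quot;units&quot;]) + Decimal(money.get(&quot;nanos&quot;, 0)) / Decimal(10**9)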
&quot;pageAnchor&quot;: { # Referencing the visual context of the entity in the Document.pages. Page anchors can be cross-page, consist of multiple bounding polygons, and optionally reference specific layout element types. # Optional. Represents the provenance of this entity with respect to the location on the page where it was found.
&quot;pageRefs&quot;: [ # One or more references to visual page elements
{ # Represents a weak reference to a page element within a document.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # Optional. Identifies the bounding polygon of a layout element on the page.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Optional. Confidence of detected page element, if applicable. Range [0, 1].
&quot;layoutId&quot;: &quot;A String&quot;, # Optional. Deprecated. Use PageRef.bounding_poly instead.
&quot;layoutType&quot;: &quot;A String&quot;, # Optional. The type of the layout element that is being referenced if any.
&quot;page&quot;: &quot;A String&quot;, # Required. Index into the Document.pages element, for example using Document.pages to locate the related page element. This field is skipped when its value is the default 0. See https://developers.google.com/protocol-buffers/docs/proto3#json.
},
],
},
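# Illustrative sketch (not part of the schema): converting a normalized vertex from a page
# reference back to pixel space, assuming the referenced page&#x27;s rendered image dimensions
# are the intended scale (normalized vertices range from 0 to 1).
#
#   page = document[&quot;pages&quot;][int(page_ref.get(&quot;page&quot;, 0))]
#   width, height = page[&quot;image&quot;][&quot;width&quot;], page[&quot;image&quot;][&quot;height&quot;]
#   pixels = [(v[&quot;x&quot;] * width, v[&quot;y&quot;] * height)
#             for v in page_ref[&quot;boundingPoly&quot;][&quot;normalizedVertices&quot;]]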
&quot;properties&quot;: [ # Optional. Entities can be nested to form a hierarchical data structure representing the content in the document.
# Object with schema name: GoogleCloudDocumentaiV1DocumentEntity
],
&quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # Optional. The history of this annotation.
&quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
&quot;parents&quot;: [ # References to the original elements that are replaced.
{ # Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.
&quot;id&quot;: 42, # The id of the parent provenance.
&quot;index&quot;: 42, # The index into the parent revision&#x27;s corresponding collection of items (e.g. list of entities, properties within entities, etc.)
&quot;revision&quot;: 42, # The index of the [Document.revisions] identifying the parent revision.
},
],
&quot;revision&quot;: 42, # The index of the revision that produced this element.
&quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
},
&quot;redacted&quot;: True or False, # Optional. Whether the entity will be redacted for de-identification purposes.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Optional. Provenance of the entity. Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
&quot;type&quot;: &quot;A String&quot;, # Entity type from a schema e.g. `Address`.
},
],
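# Illustrative sketch (not part of the schema): walking the detected entities and resolving
# each text anchor against Document.text. endIndex is a half-open index and both indices
# arrive as strings in JSON; ASCII-only text is assumed so that UTF-8 and Python string
# indices coincide (non-ASCII content may need byte-level slicing).
#
#   text = document.get(&quot;text&quot;, &quot;&quot;)
#   for entity in document.get(&quot;entities&quot;, []):
#       spans = entity.get(&quot;textAnchor&quot;, {}).get(&quot;textSegments&quot;, [])
#       value = &quot;&quot;.join(text[int(s.get(&quot;startIndex&quot;, 0)):int(s[&quot;endIndex&quot;])] for s in spans)
#       print(entity.get(&quot;type&quot;), entity.get(&quot;confidence&quot;), value or entity.get(&quot;mentionText&quot;))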
&quot;entityRelations&quot;: [ # Relationship among Document.entities.
{ # Relationship between Entities.
&quot;objectId&quot;: &quot;A String&quot;, # Object entity id.
&quot;relation&quot;: &quot;A String&quot;, # Relationship description.
&quot;subjectId&quot;: &quot;A String&quot;, # Subject entity id.
},
],
&quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # Any error that occurred while processing this document.
&quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
&quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
{
&quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
},
],
&quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
},
&quot;mimeType&quot;: &quot;A String&quot;, # An IANA published MIME type (also referred to as media type). For more information, see https://www.iana.org/assignments/media-types/media-types.xhtml.
&quot;pages&quot;: [ # Visual page layout for the Document.
{ # A page in a Document.
&quot;blocks&quot;: [ # A list of visually detected text blocks on the page. A block has a set of lines (collected into paragraphs) that have a common line-spacing and orientation.
{ # A block has a set of lines (collected into paragraphs) that have a common line-spacing and orientation.
&quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for Block.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within context of the object this layout is for. e.g. confidence can be for a single token, a table, a visual element, etc. depending on context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # The history of this annotation.
&quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
&quot;parents&quot;: [ # References to the original elements that are replaced.
{ # Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.
&quot;id&quot;: 42, # The id of the parent provenance.
&quot;index&quot;: 42, # The index into the parent revision&#x27;s corresponding collection of items (e.g. list of entities, properties within entities, etc.)
&quot;revision&quot;: 42, # The index of the [Document.revisions] identifying the parent revision.
},
],
&quot;revision&quot;: 42, # The index of the revision that produced this element.
&quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
},
},
],
&quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;dimension&quot;: { # Dimension for the page. # Physical dimension of the page.
&quot;height&quot;: 3.14, # Page height.
&quot;unit&quot;: &quot;A String&quot;, # Dimension unit.
&quot;width&quot;: 3.14, # Page width.
},
&quot;formFields&quot;: [ # A list of visually detected form fields on the page.
{ # A form field detected on the page.
&quot;fieldName&quot;: { # Visual element describing a layout unit on a page. # Layout for the FormField name. e.g. `Address`, `Email`, `Grand total`, `Phone number`, etc.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within context of the object this layout is for. e.g. confidence can be for a single token, a table, a visual element, etc. depending on context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;fieldValue&quot;: { # Visual element describing a layout unit on a page. # Layout for the FormField value.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within context of the object this layout is for. e.g. confidence can be for a single token, a table, a visual element, etc. depending on context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;nameDetectedLanguages&quot;: [ # A list of detected languages for name together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # The history of this annotation.
&quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
&quot;parents&quot;: [ # References to the original elements that are replaced.
{ # Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.
&quot;id&quot;: 42, # The id of the parent provenance.
&quot;index&quot;: 42, # The index into the parent revision&#x27;s corresponding collection of items (e.g. list of entities, properties within entities, etc.)
&quot;revision&quot;: 42, # The index of the [Document.revisions] identifying the parent revision.
},
],
&quot;revision&quot;: 42, # The index of the revision that produced this element.
&quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
},
&quot;valueDetectedLanguages&quot;: [ # A list of detected languages for value together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;valueType&quot;: &quot;A String&quot;, # If the value is non-textual, this field represents the type. Current valid values are: - blank (this indicates the field_value is normal text) - &quot;unfilled_checkbox&quot; - &quot;filled_checkbox&quot;
},
],
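# Illustrative sketch (not part of the schema): pairing form field names and values on a page
# using the textAnchor.content convenience field, assuming the processor populates it.
#
#   for field in page.get(&quot;formFields&quot;, []):
#       key = field.get(&quot;fieldName&quot;, {}).get(&quot;textAnchor&quot;, {}).get(&quot;content&quot;, &quot;&quot;).strip()
#       val = field.get(&quot;fieldValue&quot;, {}).get(&quot;textAnchor&quot;, {}).get(&quot;content&quot;, &quot;&quot;).strip()
#       print(key, &quot;:&quot;, val, field.get(&quot;valueType&quot;) or &quot;text&quot;)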
&quot;image&quot;: { # Rendered image contents for this page. # Rendered image for this page. This image is preprocessed to remove any skew, rotation, and distortions such that the annotation bounding boxes can be upright and axis-aligned.
&quot;content&quot;: &quot;A String&quot;, # Raw byte content of the image.
&quot;height&quot;: 42, # Height of the image in pixels.
&quot;mimeType&quot;: &quot;A String&quot;, # Encoding mime type for the image.
&quot;width&quot;: 42, # Width of the image in pixels.
},
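# Illustrative sketch (not part of the schema): decoding the rendered page image. As with all
# bytes fields, the JSON representation is base64; the output file name and extension below
# are hypothetical, derived only from the reported mime type.
#
#   import base64
#
#   img = page[&quot;image&quot;]
#   ext = img[&quot;mimeType&quot;].split(&quot;/&quot;)[-1]
#   with open(f&quot;page-{page.get(&#x27;pageNumber&#x27;, 1)}.{ext}&quot;, &quot;wb&quot;) as fh:
#       fh.write(base64.b64decode(img[&quot;content&quot;]))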
&quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for the page.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within context of the object this layout is for. e.g. confidence can be for a single token, a table, a visual element, etc. depending on context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;lines&quot;: [ # A list of visually detected text lines on the page. A collection of tokens that a human would perceive as a line.
{ # A collection of tokens that a human would perceive as a line. Does not cross column boundaries, can be horizontal, vertical, etc.
&quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for Line.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within context of the object this layout is for. e.g. confidence can be for a single token, a table, a visual element, etc. depending on context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds which indicate that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # The history of this annotation.
&quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
&quot;parents&quot;: [ # References to the original elements that are replaced.
{ # Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.
&quot;id&quot;: 42, # The id of the parent provenance.
&quot;index&quot;: 42, # The index into the parent revision&#x27;s corresponding collection of items (e.g. list of entities, properties within entities, etc.)
&quot;revision&quot;: 42, # The index of the [Document.revisions] identifying the parent revision.
},
],
&quot;revision&quot;: 42, # The index of the revision that produced this element.
&quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
},
},
],
&quot;pageNumber&quot;: 42, # 1-based index for current Page in a parent Document. Useful when a page is taken out of a Document for individual processing.
&quot;paragraphs&quot;: [ # A list of visually detected text paragraphs on the page. A collection of lines that a human would perceive as a paragraph.
{ # A collection of lines that a human would perceive as a paragraph.
&quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for Paragraph.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within context of the object this layout is for. e.g. confidence can be for a single token, a table, a visual element, etc. depending on context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # The history of this annotation.
&quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
&quot;parents&quot;: [ # References to the original elements that are replaced.
{ # Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.
&quot;id&quot;: 42, # The id of the parent provenance.
&quot;index&quot;: 42, # The index into the parent revision&#x27;s corresponding collection of items (e.g. list of entities, properties within entities, etc.).
&quot;revision&quot;: 42, # The index of the [Document.revisions] identifying the parent revision.
},
],
&quot;revision&quot;: 42, # The index of the revision that produced this element.
&quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
},
},
],
&quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # The history of this page.
&quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
&quot;parents&quot;: [ # References to the original elements that are replaced.
{ # Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.
&quot;id&quot;: 42, # The id of the parent provenance.
&quot;index&quot;: 42, # The index into the parent revision&#x27;s corresponding collection of items (e.g. list of entities, properties within entities, etc.).
&quot;revision&quot;: 42, # The index of the [Document.revisions] identifying the parent revision.
},
],
&quot;revision&quot;: 42, # The index of the revision that produced this element.
&quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
},
&quot;tables&quot;: [ # A list of visually detected tables on the page.
{ # A table representation similar to HTML table structure.
&quot;bodyRows&quot;: [ # Body rows of the table.
{ # A row of table cells.
&quot;cells&quot;: [ # Cells that make up this row.
{ # A cell representation inside the table.
&quot;colSpan&quot;: 42, # How many columns this cell spans.
&quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for TableCell.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within the context of the object this layout is for, e.g. a single token, a table, or a visual element, depending on the context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;rowSpan&quot;: 42, # How many rows this cell spans.
},
],
},
],
&quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;headerRows&quot;: [ # Header rows of the table.
{ # A row of table cells.
&quot;cells&quot;: [ # Cells that make up this row.
{ # A cell representation inside the table.
&quot;colSpan&quot;: 42, # How many columns this cell spans.
&quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for TableCell.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within the context of the object this layout is for, e.g. a single token, a table, or a visual element, depending on the context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;rowSpan&quot;: 42, # How many rows this cell spans.
},
],
},
],
&quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for Table.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within the context of the object this layout is for, e.g. a single token, a table, or a visual element, depending on the context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
},
],
&quot;tokens&quot;: [ # A list of visually detected tokens on the page.
{ # A detected token.
&quot;detectedBreak&quot;: { # Detected break at the end of a Token. # Detected break at the end of a Token.
&quot;type&quot;: &quot;A String&quot;, # Detected break type.
},
&quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for Token.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within the context of the object this layout is for, e.g. a single token, a table, or a visual element, depending on the context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;provenance&quot;: { # Structure to identify provenance relationships between annotations in different revisions. # The history of this annotation.
&quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
&quot;parents&quot;: [ # References to the original elements that are replaced.
{ # Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.
&quot;id&quot;: 42, # The id of the parent provenance.
&quot;index&quot;: 42, # The index into the parent revision&#x27;s corresponding collection of items (e.g. list of entities, properties within entities, etc.).
&quot;revision&quot;: 42, # The index of the [Document.revisions] identifying the parent revision.
},
],
&quot;revision&quot;: 42, # The index of the revision that produced this element.
&quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
},
},
],
&quot;transforms&quot;: [ # Transformation matrices that were applied to the original document image to produce Page.image.
{ # Representation of a transformation matrix, intended to be compatible with the OpenCV format and usable for image manipulation.
&quot;cols&quot;: 42, # Number of columns in the matrix.
&quot;data&quot;: &quot;A String&quot;, # The matrix data.
&quot;rows&quot;: 42, # Number of rows in the matrix.
&quot;type&quot;: 42, # This encodes information about what data type the matrix uses. For example, 0 (CV_8U) is an unsigned 8-bit image. For the full list of OpenCV primitive data types, please refer to https://docs.opencv.org/4.3.0/d1/d1b/group__core__hal__interface.html
},
],
&quot;visualElements&quot;: [ # A list of detected non-text visual elements on the page, e.g. checkboxes, signatures, etc.
{ # A detected non-text visual element on the page, e.g. a checkbox or signature.
&quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
&quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
&quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more information, see http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
&quot;layout&quot;: { # Visual element describing a layout unit on a page. # Layout for VisualElement.
&quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the Layout.
&quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
&quot;x&quot;: 3.14, # X coordinate.
&quot;y&quot;: 3.14, # Y coordinate (starts from the top of the image).
},
],
&quot;vertices&quot;: [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
&quot;x&quot;: 42, # X coordinate.
&quot;y&quot;: 42, # Y coordinate (starts from the top of the image).
},
],
},
&quot;confidence&quot;: 3.14, # Confidence of the current Layout within the context of the object this layout is for, e.g. a single token, a table, or a visual element, depending on the context. Range [0, 1].
&quot;orientation&quot;: &quot;A String&quot;, # Detected orientation for the Layout.
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
&quot;type&quot;: &quot;A String&quot;, # Type of the VisualElement.
},
],
},
],
&quot;revisions&quot;: [ # Revision history of this document.
{ # Contains past or forward revisions of this document.
&quot;agent&quot;: &quot;A String&quot;, # If the change was made by a person, specify the name or ID of that person.
&quot;createTime&quot;: &quot;A String&quot;, # The time that the revision was created.
&quot;humanReview&quot;: { # Human Review information of the document. # Human Review information of this revision.
&quot;state&quot;: &quot;A String&quot;, # Human review state, e.g. `requested`, `succeeded`, `rejected`.
&quot;stateMessage&quot;: &quot;A String&quot;, # A message providing more details about the current state of processing. For example, the rejection reason when the state is `rejected`.
},
&quot;id&quot;: &quot;A String&quot;, # Id of the revision. Unique within the context of the document.
&quot;parent&quot;: [ # The revisions that this revision is based on. This can include one or more parents (when documents are merged). Each entry is an index into the `revisions` field.
42,
],
&quot;processor&quot;: &quot;A String&quot;, # If the annotation was made by a processor, identify the processor by its resource name.
},
],
&quot;shardInfo&quot;: { # For a large document, sharding may be performed to produce several document shards. Each document shard contains this field to detail which shard it is. # Information about the sharding if this document is a sharded part of a larger document. If the document is not sharded, this message is not specified.
&quot;shardCount&quot;: &quot;A String&quot;, # Total number of shards.
&quot;shardIndex&quot;: &quot;A String&quot;, # The 0-based index of this shard.
&quot;textOffset&quot;: &quot;A String&quot;, # The index of the first character in Document.text in the overall document global text.
},
&quot;text&quot;: &quot;A String&quot;, # Optional. UTF-8 encoded text in reading order from the document.
&quot;textChanges&quot;: [ # A list of text corrections made to [Document.text]. This is usually used for annotating corrections to OCR mistakes. Text changes for a given revision may not overlap with each other.
{ # This message is used for text changes, a.k.a. OCR corrections.
&quot;changedText&quot;: &quot;A String&quot;, # The text that replaces the text identified in the `text_anchor`.
&quot;provenance&quot;: [ # The history of this annotation.
{ # Structure to identify provenance relationships between annotations in different revisions.
&quot;id&quot;: 42, # The Id of this operation. Needs to be unique within the scope of the revision.
&quot;parents&quot;: [ # References to the original elements that are replaced.
{ # Structure for referencing parent provenances. When an element replaces one or more other elements, parent references identify the elements that are replaced.
&quot;id&quot;: 42, # The id of the parent provenance.
&quot;index&quot;: 42, # The index into the parent revision&#x27;s corresponding collection of items (e.g. list of entities, properties within entities, etc.).
&quot;revision&quot;: 42, # The index of the [Document.revisions] identifying the parent revision.
},
],
&quot;revision&quot;: 42, # The index of the revision that produced this element.
&quot;type&quot;: &quot;A String&quot;, # The type of provenance operation.
},
],
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Provenance of the correction. Text anchor indexing into the Document.text. There can only be a single `TextAnchor.text_segments` element. If the start and end index of the text segment are the same, the text change is inserted before that index.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
},
],
&quot;textStyles&quot;: [ # Styles for the Document.text.
{ # Annotation for common text style attributes. This adheres to CSS conventions as much as possible.
&quot;backgroundColor&quot;: { # Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to/from color representations in various languages over compactness. For example, the fields of this representation can be trivially provided to the constructor of `java.awt.Color` in Java; it can also be trivially provided to UIColor&#x27;s `+colorWithRed:green:blue:alpha` method in iOS; and, with just a little work, it can be easily formatted into a CSS `rgba()` string in JavaScript. This reference page doesn&#x27;t carry information about the absolute color space that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB, DCI-P3, BT.2020, etc.). By default, applications should assume the sRGB color space. When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most 1e-5. Example (Java): import com.google.type.Color; // ... public static java.awt.Color fromProto(Color protocolor) { float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0; return new java.awt.Color( protocolor.getRed(), protocolor.getGreen(), protocolor.getBlue(), alpha); } public static Color toProto(java.awt.Color color) { float red = (float) color.getRed(); float green = (float) color.getGreen(); float blue = (float) color.getBlue(); float denominator = 255.0; Color.Builder resultBuilder = Color .newBuilder() .setRed(red / denominator) .setGreen(green / denominator) .setBlue(blue / denominator); int alpha = color.getAlpha(); if (alpha != 255) { result.setAlpha( FloatValue .newBuilder() .setValue(((float) alpha) / denominator) .build()); } return resultBuilder.build(); } // ... Example (iOS / Obj-C): // ... static UIColor* fromProto(Color* protocolor) { float red = [protocolor red]; float green = [protocolor green]; float blue = [protocolor blue]; FloatValue* alpha_wrapper = [protocolor alpha]; float alpha = 1.0; if (alpha_wrapper != nil) { alpha = [alpha_wrapper value]; } return [UIColor colorWithRed:red green:green blue:blue alpha:alpha]; } static Color* toProto(UIColor* color) { CGFloat red, green, blue, alpha; if (![color getRed:&amp;red green:&amp;green blue:&amp;blue alpha:&amp;alpha]) { return nil; } Color* result = [[Color alloc] init]; [result setRed:red]; [result setGreen:green]; [result setBlue:blue]; if (alpha &lt;= 0.9999) { [result setAlpha:floatWrapperWithValue(alpha)]; } [result autorelease]; return result; } // ... Example (JavaScript): // ... var protoToCssColor = function(rgb_color) { var redFrac = rgb_color.red || 0.0; var greenFrac = rgb_color.green || 0.0; var blueFrac = rgb_color.blue || 0.0; var red = Math.floor(redFrac * 255); var green = Math.floor(greenFrac * 255); var blue = Math.floor(blueFrac * 255); if (!(&#x27;alpha&#x27; in rgb_color)) { return rgbToCssColor(red, green, blue); } var alphaFrac = rgb_color.alpha.value || 0.0; var rgbParams = [red, green, blue].join(&#x27;,&#x27;); return [&#x27;rgba(&#x27;, rgbParams, &#x27;,&#x27;, alphaFrac, &#x27;)&#x27;].join(&#x27;&#x27;); }; var rgbToCssColor = function(red, green, blue) { var rgbNumber = new Number((red &lt;&lt; 16) | (green &lt;&lt; 8) | blue); var hexString = rgbNumber.toString(16); var missingZeros = 6 - hexString.length; var resultBuilder = [&#x27;#&#x27;]; for (var i = 0; i &lt; missingZeros; i++) { resultBuilder.push(&#x27;0&#x27;); } resultBuilder.push(hexString); return resultBuilder.join(&#x27;&#x27;); }; // ... # Text background color.
&quot;alpha&quot;: 3.14, # The fraction of this color that should be applied to the pixel. That is, the final pixel color is defined by the equation: `pixel color = alpha * (this color) + (1.0 - alpha) * (background color)` This means that a value of 1.0 corresponds to a solid color, whereas a value of 0.0 corresponds to a completely transparent color. This uses a wrapper message rather than a simple float scalar so that it is possible to distinguish between a default value and the value being unset. If omitted, this color object is rendered as a solid color (as if the alpha value had been explicitly given a value of 1.0).
&quot;blue&quot;: 3.14, # The amount of blue in the color as a value in the interval [0, 1].
&quot;green&quot;: 3.14, # The amount of green in the color as a value in the interval [0, 1].
&quot;red&quot;: 3.14, # The amount of red in the color as a value in the interval [0, 1].
},
&quot;color&quot;: { # Represents a color in the RGBA color space. This representation is designed for simplicity of conversion to/from color representations in various languages over compactness. For example, the fields of this representation can be trivially provided to the constructor of `java.awt.Color` in Java; it can also be trivially provided to UIColor&#x27;s `+colorWithRed:green:blue:alpha` method in iOS; and, with just a little work, it can be easily formatted into a CSS `rgba()` string in JavaScript. This reference page doesn&#x27;t carry information about the absolute color space that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB, DCI-P3, BT.2020, etc.). By default, applications should assume the sRGB color space. When color equality needs to be decided, implementations, unless documented otherwise, treat two colors as equal if all their red, green, blue, and alpha values each differ by at most 1e-5. Example (Java): import com.google.type.Color; // ... public static java.awt.Color fromProto(Color protocolor) { float alpha = protocolor.hasAlpha() ? protocolor.getAlpha().getValue() : 1.0; return new java.awt.Color( protocolor.getRed(), protocolor.getGreen(), protocolor.getBlue(), alpha); } public static Color toProto(java.awt.Color color) { float red = (float) color.getRed(); float green = (float) color.getGreen(); float blue = (float) color.getBlue(); float denominator = 255.0; Color.Builder resultBuilder = Color .newBuilder() .setRed(red / denominator) .setGreen(green / denominator) .setBlue(blue / denominator); int alpha = color.getAlpha(); if (alpha != 255) { result.setAlpha( FloatValue .newBuilder() .setValue(((float) alpha) / denominator) .build()); } return resultBuilder.build(); } // ... Example (iOS / Obj-C): // ... static UIColor* fromProto(Color* protocolor) { float red = [protocolor red]; float green = [protocolor green]; float blue = [protocolor blue]; FloatValue* alpha_wrapper = [protocolor alpha]; float alpha = 1.0; if (alpha_wrapper != nil) { alpha = [alpha_wrapper value]; } return [UIColor colorWithRed:red green:green blue:blue alpha:alpha]; } static Color* toProto(UIColor* color) { CGFloat red, green, blue, alpha; if (![color getRed:&amp;red green:&amp;green blue:&amp;blue alpha:&amp;alpha]) { return nil; } Color* result = [[Color alloc] init]; [result setRed:red]; [result setGreen:green]; [result setBlue:blue]; if (alpha &lt;= 0.9999) { [result setAlpha:floatWrapperWithValue(alpha)]; } [result autorelease]; return result; } // ... Example (JavaScript): // ... var protoToCssColor = function(rgb_color) { var redFrac = rgb_color.red || 0.0; var greenFrac = rgb_color.green || 0.0; var blueFrac = rgb_color.blue || 0.0; var red = Math.floor(redFrac * 255); var green = Math.floor(greenFrac * 255); var blue = Math.floor(blueFrac * 255); if (!(&#x27;alpha&#x27; in rgb_color)) { return rgbToCssColor(red, green, blue); } var alphaFrac = rgb_color.alpha.value || 0.0; var rgbParams = [red, green, blue].join(&#x27;,&#x27;); return [&#x27;rgba(&#x27;, rgbParams, &#x27;,&#x27;, alphaFrac, &#x27;)&#x27;].join(&#x27;&#x27;); }; var rgbToCssColor = function(red, green, blue) { var rgbNumber = new Number((red &lt;&lt; 16) | (green &lt;&lt; 8) | blue); var hexString = rgbNumber.toString(16); var missingZeros = 6 - hexString.length; var resultBuilder = [&#x27;#&#x27;]; for (var i = 0; i &lt; missingZeros; i++) { resultBuilder.push(&#x27;0&#x27;); } resultBuilder.push(hexString); return resultBuilder.join(&#x27;&#x27;); }; // ... # Text color.
&quot;alpha&quot;: 3.14, # The fraction of this color that should be applied to the pixel. That is, the final pixel color is defined by the equation: `pixel color = alpha * (this color) + (1.0 - alpha) * (background color)` This means that a value of 1.0 corresponds to a solid color, whereas a value of 0.0 corresponds to a completely transparent color. This uses a wrapper message rather than a simple float scalar so that it is possible to distinguish between a default value and the value being unset. If omitted, this color object is rendered as a solid color (as if the alpha value had been explicitly given a value of 1.0).
&quot;blue&quot;: 3.14, # The amount of blue in the color as a value in the interval [0, 1].
&quot;green&quot;: 3.14, # The amount of green in the color as a value in the interval [0, 1].
&quot;red&quot;: 3.14, # The amount of red in the color as a value in the interval [0, 1].
},
&quot;fontSize&quot;: { # Font size with unit. # Font size.
&quot;size&quot;: 3.14, # Font size for the text.
&quot;unit&quot;: &quot;A String&quot;, # Unit for the font size. Follows CSS naming (in, px, pt, etc.).
},
&quot;fontWeight&quot;: &quot;A String&quot;, # Font weight. Possible values are normal, bold, bolder, and lighter. https://www.w3schools.com/cssref/pr_font_weight.asp
&quot;textAnchor&quot;: { # Text reference indexing into the Document.text. # Text anchor indexing into the Document.text.
&quot;content&quot;: &quot;A String&quot;, # Contains the content of the text span so that users do not have to look it up in the text_segments.
&quot;textSegments&quot;: [ # The text segments from the Document.text.
{ # A text segment in the Document.text. The indices may be out of bounds, which indicates that the text extends into another document shard for large sharded documents. See ShardInfo.text_offset.
&quot;endIndex&quot;: &quot;A String&quot;, # TextSegment half open end UTF-8 char index in the Document.text.
&quot;startIndex&quot;: &quot;A String&quot;, # TextSegment start UTF-8 char index in the Document.text.
},
],
},
&quot;textDecoration&quot;: &quot;A String&quot;, # Text decoration. Follows CSS standard. https://www.w3schools.com/cssref/pr_text_text-decoration.asp
&quot;textStyle&quot;: &quot;A String&quot;, # Text style. Possible values are normal, italic, and oblique. https://www.w3schools.com/cssref/pr_font_font-style.asp
},
],
&quot;uri&quot;: &quot;A String&quot;, # Optional. Currently supports Google Cloud Storage URIs of the form `gs://bucket_name/object_name`. Object versioning is not supported. See [Google Cloud Storage Request URIs](https://cloud.google.com/storage/docs/reference-uris) for more info.
},
&quot;humanReviewStatus&quot;: { # The status of human review on a processed document. # The status of human review on the processed document.
&quot;humanReviewOperation&quot;: &quot;A String&quot;, # The name of the operation triggered by the processed document. This field is populated only when the [state] is [HUMAN_REVIEW_IN_PROGRESS]. It has the same response type and metadata as the long-running operation returned by the [ReviewDocument] method.
&quot;state&quot;: &quot;A String&quot;, # The state of human review on the processing request.
&quot;stateMessage&quot;: &quot;A String&quot;, # A message providing more details about the human review state.
},
}</pre>
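<p>The following is a minimal, illustrative sketch (not part of the generated reference) of calling this method with the discovery-based Python client. The processor resource name, the input file, and the PDF MIME type are placeholders; the request-body fields (`rawDocument`, `skipHumanReview`) follow the request schema documented earlier on this page, and Application Default Credentials are assumed to be configured.</p>
<pre>
# Sketch only: assumes google-api-python-client is installed and Application
# Default Credentials are available. PROJECT_ID, PROCESSOR_ID, and the input
# file are placeholders to substitute with your own values.
import base64

from googleapiclient.discovery import build

name = &quot;projects/PROJECT_ID/locations/us/processors/PROCESSOR_ID&quot;

service = build(&quot;documentai&quot;, &quot;v1&quot;)

with open(&quot;sample.pdf&quot;, &quot;rb&quot;) as f:
    body = {
        &quot;rawDocument&quot;: {
            &quot;content&quot;: base64.b64encode(f.read()).decode(&quot;utf-8&quot;),
            &quot;mimeType&quot;: &quot;application/pdf&quot;,
        },
        &quot;skipHumanReview&quot;: True,
    }

response = (
    service.projects()
    .locations()
    .processors()
    .process(name=name, body=body)
    .execute()
)

document = response[&quot;document&quot;]
print(response.get(&quot;humanReviewStatus&quot;, {}).get(&quot;state&quot;))
</pre>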
</div>
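<p>The Document message above indexes most structure (paragraphs, tokens, table cells) into Document.text through textAnchor.textSegments rather than duplicating the text. Below is a short, illustrative sketch of resolving those anchors and flattening a detected table; the helper names are hypothetical and not part of the client library. Note that startIndex and endIndex are int64 values and therefore arrive as JSON strings, and startIndex may be omitted when it is 0.</p>
<pre>
# Sketch only: `document` is the dict found under response[&quot;document&quot;] in the
# example above. Helper names are illustrative, not part of the client library.

def anchor_text(document, text_anchor):
    # Concatenate the spans of Document.text referenced by a textAnchor.
    # endIndex/startIndex arrive as strings; a missing startIndex means 0.
    # textAnchor.get(&quot;content&quot;) may also carry the span directly.
    text = document.get(&quot;text&quot;, &quot;&quot;)
    return &quot;&quot;.join(
        text[int(seg.get(&quot;startIndex&quot;, 0)):int(seg.get(&quot;endIndex&quot;, 0))]
        for seg in text_anchor.get(&quot;textSegments&quot;, [])
    )

def table_rows(document, table):
    # Flatten a detected table (header rows first, then body rows) into lists
    # of cell strings, ignoring rowSpan/colSpan.
    rows = []
    for row in table.get(&quot;headerRows&quot;, []) + table.get(&quot;bodyRows&quot;, []):
        rows.append([
            anchor_text(document, cell[&quot;layout&quot;].get(&quot;textAnchor&quot;, {}))
            for cell in row.get(&quot;cells&quot;, [])
        ])
    return rows

for page in document.get(&quot;pages&quot;, []):
    for table in page.get(&quot;tables&quot;, []):
        for row in table_rows(document, table):
            print(row)
</pre>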
</body></html>