<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="vision_v1p2beta1.html">Cloud Vision API</a> . <a href="vision_v1p2beta1.projects.html">projects</a> . <a href="vision_v1p2beta1.projects.files.html">files</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#annotate">annotate(parent, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Service that performs image detection and annotation for a batch of files.</p>
<p class="toc_element">
  <code><a href="#asyncBatchAnnotate">asyncBatchAnnotate(parent, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Run asynchronous image detection and annotation for a list of generic files, such as PDF files, which may contain multiple pages and multiple images per page.</p>
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="annotate">annotate(parent, body=None, x__xgafv=None)</code>
  <pre>Service that performs image detection and annotation for a batch of files.
Currently only &quot;application/pdf&quot;, &quot;image/tiff&quot; and &quot;image/gif&quot; are supported.

This service will extract at most 5 frames (for GIF) or pages (for PDF or TIFF)
from each file provided (customers can specify which 5 in
AnnotateFileRequest.pages) and perform detection and annotation for each
extracted image.

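A minimal usage sketch with the google-api-python-client library. The project
ID, bucket path and feature choice below are illustrative placeholders, not
values taken from this reference:

  from googleapiclient.discovery import build

  # Build the Cloud Vision API client; credentials are resolved from the
  # environment (for example, application default credentials).
  service = build(&#x27;vision&#x27;, &#x27;v1p2beta1&#x27;)

  request_body = {
      &#x27;requests&#x27;: [{
          &#x27;inputConfig&#x27;: {
              &#x27;gcsSource&#x27;: {&#x27;uri&#x27;: &#x27;gs://my-bucket/sample.pdf&#x27;},  # placeholder object
              &#x27;mimeType&#x27;: &#x27;application/pdf&#x27;,
          },
          &#x27;features&#x27;: [{&#x27;type&#x27;: &#x27;DOCUMENT_TEXT_DETECTION&#x27;}],
          &#x27;pages&#x27;: [1, 2, -1],  # first, second and last page
      }],
  }

  response = service.projects().files().annotate(
      parent=&#x27;projects/my-project/locations/eu&#x27;, body=request_body).execute()

  # Each file response contains one image response per processed page/frame.
  for file_response in response.get(&#x27;responses&#x27;, []):
      for image_response in file_response.get(&#x27;responses&#x27;, []):
          print(image_response.get(&#x27;fullTextAnnotation&#x27;, {}).get(&#x27;text&#x27;, &#x27;&#x27;))
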
Args:
  parent: string, Optional. Target project and location to make a call.

Format: `projects/{project-id}/locations/{location-id}`.

If no parent is specified, a region will be chosen automatically.

Supported location-ids:
    `us`: USA country only,
    `asia`: East Asia areas, such as Japan and Taiwan,
    `eu`: The European Union.

Example: `projects/project-A/locations/eu`. (required)
107 body: object, The request body.
108 The object takes the form of:
109
110{ # A list of requests to annotate files using the BatchAnnotateFiles API.
111 &quot;parent&quot;: &quot;A String&quot;, # Optional. Target project and location to make a call.
112 #
113 # Format: `projects/{project-id}/locations/{location-id}`.
114 #
115 # If no parent is specified, a region will be chosen automatically.
116 #
117 # Supported location-ids:
118 # `us`: USA country only,
119 # `asia`: East asia areas, like Japan, Taiwan,
120 # `eu`: The European Union.
121 #
122 # Example: `projects/project-A/locations/eu`.
123 &quot;requests&quot;: [ # Required. The list of file annotation requests. Right now we support only one
124 # AnnotateFileRequest in BatchAnnotateFilesRequest.
125 { # A request to annotate one single file, e.g. a PDF, TIFF or GIF file.
126 &quot;imageContext&quot;: { # Image context and/or feature-specific parameters. # Additional context that may accompany the image(s) in the file.
127 &quot;languageHints&quot;: [ # List of languages to use for TEXT_DETECTION. In most cases, an empty value
128 # yields the best results since it enables automatic language detection. For
129 # languages based on the Latin alphabet, setting `language_hints` is not
130 # needed. In rare cases, when the language of the text in the image is known,
131 # setting a hint will help get better results (although it will be a
132 # significant hindrance if the hint is wrong). Text detection returns an
133 # error if one or more of the specified languages is not one of the
134 # [supported languages](https://cloud.google.com/vision/docs/languages).
135 &quot;A String&quot;,
136 ],
137 &quot;webDetectionParams&quot;: { # Parameters for web detection request. # Parameters for web detection.
138 &quot;includeGeoResults&quot;: True or False, # Whether to include results derived from the geo information in the image.
139 },
140 &quot;latLongRect&quot;: { # Rectangle determined by min and max `LatLng` pairs. # Not used.
141 &quot;minLatLng&quot;: { # An object representing a latitude/longitude pair. This is expressed as a pair # Min lat/long pair.
142 # of doubles representing degrees latitude and degrees longitude. Unless
143 # specified otherwise, this must conform to the
144 # &lt;a href=&quot;http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf&quot;&gt;WGS84
145 # standard&lt;/a&gt;. Values must be within normalized ranges.
146 &quot;latitude&quot;: 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
147 &quot;longitude&quot;: 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
148 },
149 &quot;maxLatLng&quot;: { # An object representing a latitude/longitude pair. This is expressed as a pair # Max lat/long pair.
150 # of doubles representing degrees latitude and degrees longitude. Unless
151 # specified otherwise, this must conform to the
152 # &lt;a href=&quot;http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf&quot;&gt;WGS84
153 # standard&lt;/a&gt;. Values must be within normalized ranges.
154 &quot;latitude&quot;: 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
155 &quot;longitude&quot;: 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
156 },
157 },
158 &quot;cropHintsParams&quot;: { # Parameters for crop hints annotation request. # Parameters for crop hints annotation request.
159 &quot;aspectRatios&quot;: [ # Aspect ratios in floats, representing the ratio of the width to the height
160 # of the image. For example, if the desired aspect ratio is 4/3, the
161 # corresponding float value should be 1.33333. If not specified, the
162 # best possible crop is returned. The number of provided aspect ratios is
163 # limited to a maximum of 16; any aspect ratios provided after the 16th are
164 # ignored.
165 3.14,
166 ],
167 },
168 &quot;productSearchParams&quot;: { # Parameters for a product search request. # Parameters for product search.
169 &quot;productSet&quot;: &quot;A String&quot;, # The resource name of a ProductSet to be searched for similar images.
170 #
171 # Format is:
172 # `projects/PROJECT_ID/locations/LOC_ID/productSets/PRODUCT_SET_ID`.
173 &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon around the area of interest in the image.
174 # If it is not specified, system discretion will be applied.
175 &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
176 { # A vertex represents a 2D point in the image.
177 # NOTE: the normalized vertex coordinates are relative to the original image
178 # and range from 0 to 1.
179 &quot;y&quot;: 3.14, # Y coordinate.
180 &quot;x&quot;: 3.14, # X coordinate.
181 },
182 ],
183 &quot;vertices&quot;: [ # The bounding polygon vertices.
184 { # A vertex represents a 2D point in the image.
185 # NOTE: the vertex coordinates are in the same scale as the original image.
186 &quot;y&quot;: 42, # Y coordinate.
187 &quot;x&quot;: 42, # X coordinate.
188 },
189 ],
190 },
191 &quot;productCategories&quot;: [ # The list of product categories to search in. Currently, we only consider
192 # the first category, and either &quot;homegoods-v2&quot;, &quot;apparel-v2&quot;, &quot;toys-v2&quot;,
193 # &quot;packagedgoods-v1&quot;, or &quot;general-v1&quot; should be specified. The legacy
194 # categories &quot;homegoods&quot;, &quot;apparel&quot;, and &quot;toys&quot; are still supported but will
195 # be deprecated. For new products, please use &quot;homegoods-v2&quot;, &quot;apparel-v2&quot;,
196 # or &quot;toys-v2&quot; for better product search accuracy. It is recommended to
197 # migrate existing products to these categories as well.
198 &quot;A String&quot;,
199 ],
200 &quot;filter&quot;: &quot;A String&quot;, # The filtering expression. This can be used to restrict search results based
201 # on Product labels. We currently support an AND of OR of key-value
202 # expressions, where each expression within an OR must have the same key. An
203 # &#x27;=&#x27; should be used to connect the key and value.
204 #
205 # For example, &quot;(color = red OR color = blue) AND brand = Google&quot; is
206 # acceptable, but &quot;(color = red OR brand = Google)&quot; is not acceptable.
207 # &quot;color: red&quot; is not acceptable because it uses a &#x27;:&#x27; instead of an &#x27;=&#x27;.
208 },
209 },
210 &quot;pages&quot;: [ # Pages of the file to perform image annotation.
211 #
212 # Pages start from 1; we assume the first page of the file is page 1.
213 # At most 5 pages are supported per request. Pages can be negative.
214 #
215 # Page 1 means the first page.
216 # Page 2 means the second page.
217 # Page -1 means the last page.
218 # Page -2 means the second-to-last page.
219 #
220 # If the file is GIF instead of PDF or TIFF, page refers to GIF frames.
221 #
222 # If this field is empty, by default the service performs image annotation
223 # for the first 5 pages of the file.
224 42,
225 ],
226 &quot;inputConfig&quot;: { # The desired input location and metadata. # Required. Information about the input file.
227 &quot;gcsSource&quot;: { # The Google Cloud Storage location where the input will be read from. # The Google Cloud Storage location to read the input from.
228 &quot;uri&quot;: &quot;A String&quot;, # Google Cloud Storage URI for the input file. This must only be a
229 # Google Cloud Storage object. Wildcards are not currently supported.
230 },
231 &quot;mimeType&quot;: &quot;A String&quot;, # The type of the file. Currently only &quot;application/pdf&quot;, &quot;image/tiff&quot; and
232 # &quot;image/gif&quot; are supported. Wildcards are not supported.
233 &quot;content&quot;: &quot;A String&quot;, # File content, represented as a stream of bytes.
234 # Note: As with all `bytes` fields, protobuffers use a pure binary
235 # representation, whereas JSON representations use base64.
236 #
237 # Currently, this field only works for BatchAnnotateFiles requests. It does
238 # not work for AsyncBatchAnnotateFiles requests.
239 },
240 &quot;features&quot;: [ # Required. Requested features.
241 { # The type of Google Cloud Vision API detection to perform, and the maximum
242 # number of results to return for that type. Multiple `Feature` objects can
243 # be specified in the `features` list.
244 &quot;type&quot;: &quot;A String&quot;, # The feature type.
245 &quot;maxResults&quot;: 42, # Maximum number of results of this type. Does not apply to
246 # `TEXT_DETECTION`, `DOCUMENT_TEXT_DETECTION`, or `CROP_HINTS`.
247 &quot;model&quot;: &quot;A String&quot;, # Model to use for the feature.
248 # Supported values: &quot;builtin/stable&quot; (the default if unset) and
249 # &quot;builtin/latest&quot;.
250 },
251 ],
252 },
253 ],
254 }
255
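The body may also carry the file bytes inline via the &quot;content&quot; field shown
above; in the JSON representation those bytes must be base64-encoded. A minimal
sketch, assuming a local file named sample.pdf (a placeholder):

  import base64

  with open(&#x27;sample.pdf&#x27;, &#x27;rb&#x27;) as f:
      encoded = base64.b64encode(f.read()).decode(&#x27;utf-8&#x27;)

  body = {
      &#x27;requests&#x27;: [{
          &#x27;inputConfig&#x27;: {&#x27;content&#x27;: encoded, &#x27;mimeType&#x27;: &#x27;application/pdf&#x27;},
          &#x27;features&#x27;: [{&#x27;type&#x27;: &#x27;DOCUMENT_TEXT_DETECTION&#x27;}],
      }],
  }
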
256 x__xgafv: string, V1 error format.
257 Allowed values
258 1 - v1 error format
259 2 - v2 error format
260
261Returns:
262 An object of the form:
263
264 { # A list of file annotation responses.
265 &quot;responses&quot;: [ # The list of file annotation responses, each response corresponding to each
266 # AnnotateFileRequest in BatchAnnotateFilesRequest.
267 { # Response to a single file annotation request. A file may contain one or more
268 # images, which individually have their own responses.
269 &quot;totalPages&quot;: 42, # This field gives the total number of pages in the file.
270 &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for # If set, represents the error message for the failed request. The
271 # `responses` field will not be set in this case.
272 # different programming environments, including REST APIs and RPC APIs. It is
273 # used by [gRPC](https://github.com/grpc). Each `Status` message contains
274 # three pieces of data: error code, error message, and error details.
275 #
276 # You can find out more about this error model and how to work with it in the
277 # [API Design Guide](https://cloud.google.com/apis/design/errors).
278 &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
279 &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
280 # user-facing error message should be localized and sent in the
281 # google.rpc.Status.details field, or localized by the client.
282 &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of
283 # message types for APIs to use.
284 {
285 &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
286 },
287 ],
288 },
289 &quot;responses&quot;: [ # Individual responses to images found within the file. This field will be
290 # empty if the `error` field is set.
291 { # Response to an image annotation request.
292 &quot;context&quot;: { # If an image was produced from a file (e.g. a PDF), this message gives # If present, contextual information is needed to understand where this image
293 # comes from.
294 # information about the source of that image.
295 &quot;uri&quot;: &quot;A String&quot;, # The URI of the file used to produce the image.
296 &quot;pageNumber&quot;: 42, # If the file was a PDF or TIFF, this field gives the page number within
297 # the file used to produce the image.
298 },
299 &quot;logoAnnotations&quot;: [ # If present, logo detection has completed successfully.
300 { # Set of detected entity features.
301 &quot;properties&quot;: [ # Some entities may have optional user-supplied `Property` (name/value)
302 # fields, such as a score or string that qualifies the entity.
303 { # A `Property` consists of a user-supplied name/value pair.
304 &quot;uint64Value&quot;: &quot;A String&quot;, # Value of numeric properties.
305 &quot;name&quot;: &quot;A String&quot;, # Name of the property.
306 &quot;value&quot;: &quot;A String&quot;, # Value of the property.
307 },
308 ],
309 &quot;score&quot;: 3.14, # Overall score of the result. Range [0, 1].
310 &quot;locations&quot;: [ # The location information for the detected entity. Multiple
311 # `LocationInfo` elements can be present because one location may
312 # indicate the location of the scene in the image, and another location
313 # may indicate the location of the place where the image was taken.
314 # Location information is usually present for landmarks.
315 { # Detected entity location information.
316 &quot;latLng&quot;: { # An object representing a latitude/longitude pair. This is expressed as a pair # lat/long location coordinates.
317 # of doubles representing degrees latitude and degrees longitude. Unless
318 # specified otherwise, this must conform to the
319 # &lt;a href=&quot;http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf&quot;&gt;WGS84
320 # standard&lt;/a&gt;. Values must be within normalized ranges.
321 &quot;latitude&quot;: 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
322 &quot;longitude&quot;: 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
323 },
324 },
325 ],
326 &quot;mid&quot;: &quot;A String&quot;, # Opaque entity ID. Some IDs may be available in
327 # [Google Knowledge Graph Search
328 # API](https://developers.google.com/knowledge-graph/).
329 &quot;confidence&quot;: 3.14, # **Deprecated. Use `score` instead.**
330 # The accuracy of the entity detection in an image.
331 # For example, for an image in which the &quot;Eiffel Tower&quot; entity is detected,
332 # this field represents the confidence that there is a tower in the query
333 # image. Range [0, 1].
334 &quot;locale&quot;: &quot;A String&quot;, # The language code for the locale in which the entity textual
335 # `description` is expressed.
336 &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # Image region to which this entity belongs. Not produced
337 # for `LABEL_DETECTION` features.
338 &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
339 { # A vertex represents a 2D point in the image.
340 # NOTE: the normalized vertex coordinates are relative to the original image
341 # and range from 0 to 1.
342 &quot;y&quot;: 3.14, # Y coordinate.
343 &quot;x&quot;: 3.14, # X coordinate.
344 },
345 ],
346 &quot;vertices&quot;: [ # The bounding polygon vertices.
347 { # A vertex represents a 2D point in the image.
348 # NOTE: the vertex coordinates are in the same scale as the original image.
349 &quot;y&quot;: 42, # Y coordinate.
350 &quot;x&quot;: 42, # X coordinate.
351 },
352 ],
353 },
354 &quot;description&quot;: &quot;A String&quot;, # Entity textual description, expressed in its `locale` language.
355 &quot;topicality&quot;: 3.14, # The relevancy of the ICA (Image Content Annotation) label to the
356 # image. For example, the relevancy of &quot;tower&quot; is likely higher to an image
357 # containing the detected &quot;Eiffel Tower&quot; than to an image containing a
358 # detected distant towering building, even though the confidence that
359 # there is a tower in each image may be the same. Range [0, 1].
360 },
361 ],
362 &quot;webDetection&quot;: { # Relevant information for the image from the Internet. # If present, web detection has completed successfully.
363 &quot;webEntities&quot;: [ # Deduced entities from similar images on the Internet.
364 { # Entity deduced from similar images on the Internet.
365 &quot;entityId&quot;: &quot;A String&quot;, # Opaque entity ID.
366 &quot;description&quot;: &quot;A String&quot;, # Canonical description of the entity, in English.
367 &quot;score&quot;: 3.14, # Overall relevancy score for the entity.
368 # Not normalized and not comparable across different image queries.
369 },
370 ],
371 &quot;pagesWithMatchingImages&quot;: [ # Web pages containing the matching images from the Internet.
372 { # Metadata for web pages.
373 &quot;score&quot;: 3.14, # (Deprecated) Overall relevancy score for the web page.
374 &quot;partialMatchingImages&quot;: [ # Partial matching images on the page.
375 # Those images are similar enough to share some key-point features. For
376 # example an original image will likely have partial matching for its
377 # crops.
378 { # Metadata for online images.
379 &quot;score&quot;: 3.14, # (Deprecated) Overall relevancy score for the image.
380 &quot;url&quot;: &quot;A String&quot;, # The result image URL.
381 },
382 ],
383 &quot;url&quot;: &quot;A String&quot;, # The result web page URL.
384 &quot;pageTitle&quot;: &quot;A String&quot;, # Title for the web page; may contain HTML markup.
385 &quot;fullMatchingImages&quot;: [ # Fully matching images on the page.
386 # Can include resized copies of the query image.
387 { # Metadata for online images.
388 &quot;score&quot;: 3.14, # (Deprecated) Overall relevancy score for the image.
389 &quot;url&quot;: &quot;A String&quot;, # The result image URL.
390 },
391 ],
392 },
393 ],
394 &quot;partialMatchingImages&quot;: [ # Partial matching images from the Internet.
395 # Those images are similar enough to share some key-point features. For
396 # example an original image will likely have partial matching for its crops.
397 { # Metadata for online images.
398 &quot;score&quot;: 3.14, # (Deprecated) Overall relevancy score for the image.
399 &quot;url&quot;: &quot;A String&quot;, # The result image URL.
400 },
401 ],
402 &quot;visuallySimilarImages&quot;: [ # The visually similar image results.
403 { # Metadata for online images.
404 &quot;score&quot;: 3.14, # (Deprecated) Overall relevancy score for the image.
405 &quot;url&quot;: &quot;A String&quot;, # The result image URL.
406 },
407 ],
408 &quot;bestGuessLabels&quot;: [ # The service&#x27;s best guess as to the topic of the request image.
409 # Inferred from similar images on the open web.
410 { # Label to provide extra metadata for the web detection.
411 &quot;label&quot;: &quot;A String&quot;, # Label for extra metadata.
412 &quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code for `label`, such as &quot;en-US&quot; or &quot;sr-Latn&quot;.
413 # For more information, see
414 # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
415 },
416 ],
417 &quot;fullMatchingImages&quot;: [ # Fully matching images from the Internet.
418 # Can include resized copies of the query image.
419 { # Metadata for online images.
420 &quot;score&quot;: 3.14, # (Deprecated) Overall relevancy score for the image.
421 &quot;url&quot;: &quot;A String&quot;, # The result image URL.
422 },
423 ],
424 },
425 &quot;safeSearchAnnotation&quot;: { # Set of features pertaining to the image, computed by computer vision # If present, safe-search annotation has completed successfully.
426 # methods over safe-search verticals (for example, adult, spoof, medical,
427 # violence).
428 &quot;racy&quot;: &quot;A String&quot;, # Likelihood that the request image contains racy content. Racy content may
429 # include (but is not limited to) skimpy or sheer clothing, strategically
430 # covered nudity, lewd or provocative poses, or close-ups of sensitive
431 # body areas.
432 &quot;violence&quot;: &quot;A String&quot;, # Likelihood that this image contains violent content.
433 &quot;adult&quot;: &quot;A String&quot;, # Represents the adult content likelihood for the image. Adult content may
434 # contain elements such as nudity, pornographic images or cartoons, or
435 # sexual activities.
436 &quot;spoof&quot;: &quot;A String&quot;, # Spoof likelihood. The likelihood that a modification
437 # was made to the image&#x27;s canonical version to make it appear
438 # funny or offensive.
439 &quot;medical&quot;: &quot;A String&quot;, # Likelihood that this is a medical image.
440 },
441 &quot;landmarkAnnotations&quot;: [ # If present, landmark detection has completed successfully.
442 { # Set of detected entity features.
443 &quot;properties&quot;: [ # Some entities may have optional user-supplied `Property` (name/value)
444 # fields, such as a score or string that qualifies the entity.
445 { # A `Property` consists of a user-supplied name/value pair.
446 &quot;uint64Value&quot;: &quot;A String&quot;, # Value of numeric properties.
447 &quot;name&quot;: &quot;A String&quot;, # Name of the property.
448 &quot;value&quot;: &quot;A String&quot;, # Value of the property.
449 },
450 ],
451 &quot;score&quot;: 3.14, # Overall score of the result. Range [0, 1].
452 &quot;locations&quot;: [ # The location information for the detected entity. Multiple
453 # `LocationInfo` elements can be present because one location may
454 # indicate the location of the scene in the image, and another location
455 # may indicate the location of the place where the image was taken.
456 # Location information is usually present for landmarks.
457 { # Detected entity location information.
458 &quot;latLng&quot;: { # An object representing a latitude/longitude pair. This is expressed as a pair # lat/long location coordinates.
459 # of doubles representing degrees latitude and degrees longitude. Unless
460 # specified otherwise, this must conform to the
461 # &lt;a href=&quot;http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf&quot;&gt;WGS84
462 # standard&lt;/a&gt;. Values must be within normalized ranges.
463 &quot;latitude&quot;: 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
464 &quot;longitude&quot;: 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
465 },
466 },
467 ],
468 &quot;mid&quot;: &quot;A String&quot;, # Opaque entity ID. Some IDs may be available in
469 # [Google Knowledge Graph Search
470 # API](https://developers.google.com/knowledge-graph/).
471 &quot;confidence&quot;: 3.14, # **Deprecated. Use `score` instead.**
472 # The accuracy of the entity detection in an image.
473 # For example, for an image in which the &quot;Eiffel Tower&quot; entity is detected,
474 # this field represents the confidence that there is a tower in the query
475 # image. Range [0, 1].
476 &quot;locale&quot;: &quot;A String&quot;, # The language code for the locale in which the entity textual
477 # `description` is expressed.
478 &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # Image region to which this entity belongs. Not produced
479 # for `LABEL_DETECTION` features.
480 &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
481 { # A vertex represents a 2D point in the image.
482 # NOTE: the normalized vertex coordinates are relative to the original image
483 # and range from 0 to 1.
484 &quot;y&quot;: 3.14, # Y coordinate.
485 &quot;x&quot;: 3.14, # X coordinate.
486 },
487 ],
488 &quot;vertices&quot;: [ # The bounding polygon vertices.
489 { # A vertex represents a 2D point in the image.
490 # NOTE: the vertex coordinates are in the same scale as the original image.
491 &quot;y&quot;: 42, # Y coordinate.
492 &quot;x&quot;: 42, # X coordinate.
493 },
494 ],
495 },
496 &quot;description&quot;: &quot;A String&quot;, # Entity textual description, expressed in its `locale` language.
497 &quot;topicality&quot;: 3.14, # The relevancy of the ICA (Image Content Annotation) label to the
498 # image. For example, the relevancy of &quot;tower&quot; is likely higher to an image
499 # containing the detected &quot;Eiffel Tower&quot; than to an image containing a
500 # detected distant towering building, even though the confidence that
501 # there is a tower in each image may be the same. Range [0, 1].
502 },
503 ],
504 &quot;faceAnnotations&quot;: [ # If present, face detection has completed successfully.
505 { # A face annotation object contains the results of face detection.
506 &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon around the face. The coordinates of the bounding box
507 # are in the original image&#x27;s scale.
508 # The bounding box is computed to &quot;frame&quot; the face in accordance with human
509 # expectations. It is based on the landmarker results.
510 # Note that one or more x and/or y coordinates may not be generated in the
511 # `BoundingPoly` (the polygon will be unbounded) if only a partial face
512 # appears in the image to be annotated.
513 &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
514 { # A vertex represents a 2D point in the image.
515 # NOTE: the normalized vertex coordinates are relative to the original image
516 # and range from 0 to 1.
517 &quot;y&quot;: 3.14, # Y coordinate.
518 &quot;x&quot;: 3.14, # X coordinate.
519 },
520 ],
521 &quot;vertices&quot;: [ # The bounding polygon vertices.
522 { # A vertex represents a 2D point in the image.
523 # NOTE: the vertex coordinates are in the same scale as the original image.
524 &quot;y&quot;: 42, # Y coordinate.
525 &quot;x&quot;: 42, # X coordinate.
526 },
527 ],
528 },
529 &quot;rollAngle&quot;: 3.14, # Roll angle, which indicates the amount of clockwise/anti-clockwise rotation
530 # of the face relative to the image vertical about the axis perpendicular to
531 # the face. Range [-180,180].
532 &quot;sorrowLikelihood&quot;: &quot;A String&quot;, # Sorrow likelihood.
533 &quot;tiltAngle&quot;: 3.14, # Pitch angle, which indicates the upwards/downwards angle that the face is
534 # pointing relative to the image&#x27;s horizontal plane. Range [-180,180].
535 &quot;fdBoundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The `fd_bounding_poly` bounding polygon is tighter than the
536 # `boundingPoly`, and encloses only the skin part of the face. Typically, it
537 # is used to eliminate the face from any image analysis that detects the
538 # &quot;amount of skin&quot; visible in an image. It is not based on the
539 # landmarker results, only on the initial face detection, hence
540 # the &lt;code&gt;fd&lt;/code&gt; (face detection) prefix.
541 &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
542 { # A vertex represents a 2D point in the image.
543 # NOTE: the normalized vertex coordinates are relative to the original image
544 # and range from 0 to 1.
545 &quot;y&quot;: 3.14, # Y coordinate.
546 &quot;x&quot;: 3.14, # X coordinate.
547 },
548 ],
549 &quot;vertices&quot;: [ # The bounding polygon vertices.
550 { # A vertex represents a 2D point in the image.
551 # NOTE: the vertex coordinates are in the same scale as the original image.
552 &quot;y&quot;: 42, # Y coordinate.
553 &quot;x&quot;: 42, # X coordinate.
554 },
555 ],
556 },
557 &quot;angerLikelihood&quot;: &quot;A String&quot;, # Anger likelihood.
558 &quot;landmarks&quot;: [ # Detected face landmarks.
559 { # A face-specific landmark (for example, a face feature).
560 &quot;position&quot;: { # A 3D position in the image, used primarily for Face detection landmarks. # Face landmark position.
561 # A valid Position must have both x and y coordinates.
562 # The position coordinates are in the same scale as the original image.
563 &quot;y&quot;: 3.14, # Y coordinate.
564 &quot;x&quot;: 3.14, # X coordinate.
565 &quot;z&quot;: 3.14, # Z coordinate (or depth).
566 },
567 &quot;type&quot;: &quot;A String&quot;, # Face landmark type.
568 },
569 ],
570 &quot;surpriseLikelihood&quot;: &quot;A String&quot;, # Surprise likelihood.
571 &quot;landmarkingConfidence&quot;: 3.14, # Face landmarking confidence. Range [0, 1].
572 &quot;joyLikelihood&quot;: &quot;A String&quot;, # Joy likelihood.
573 &quot;underExposedLikelihood&quot;: &quot;A String&quot;, # Under-exposed likelihood.
574 &quot;panAngle&quot;: 3.14, # Yaw angle, which indicates the leftward/rightward angle that the face is
575 # pointing relative to the vertical plane perpendicular to the image. Range
576 # [-180,180].
577 &quot;detectionConfidence&quot;: 3.14, # Detection confidence. Range [0, 1].
578 &quot;blurredLikelihood&quot;: &quot;A String&quot;, # Blurred likelihood.
579 &quot;headwearLikelihood&quot;: &quot;A String&quot;, # Headwear likelihood.
580 },
581 ],
582 &quot;cropHintsAnnotation&quot;: { # Set of crop hints that are used to generate new crops when serving images. # If present, crop hints have completed successfully.
583 &quot;cropHints&quot;: [ # Crop hint results.
584 { # Single crop hint that is used to generate a new crop when serving an image.
585 &quot;confidence&quot;: 3.14, # Confidence of this being a salient region. Range [0, 1].
586 &quot;importanceFraction&quot;: 3.14, # Fraction of importance of this salient region with respect to the original
587 # image.
588 &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the crop region. The coordinates of the bounding
589 # box are in the original image&#x27;s scale.
590 &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
591 { # A vertex represents a 2D point in the image.
592 # NOTE: the normalized vertex coordinates are relative to the original image
593 # and range from 0 to 1.
594 &quot;y&quot;: 3.14, # Y coordinate.
595 &quot;x&quot;: 3.14, # X coordinate.
596 },
597 ],
598 &quot;vertices&quot;: [ # The bounding polygon vertices.
599 { # A vertex represents a 2D point in the image.
600 # NOTE: the vertex coordinates are in the same scale as the original image.
601 &quot;y&quot;: 42, # Y coordinate.
602 &quot;x&quot;: 42, # X coordinate.
603 },
604 ],
605 },
606 },
607 ],
608 },
609 &quot;labelAnnotations&quot;: [ # If present, label detection has completed successfully.
610 { # Set of detected entity features.
611 &quot;properties&quot;: [ # Some entities may have optional user-supplied `Property` (name/value)
612 # fields, such as a score or string that qualifies the entity.
613 { # A `Property` consists of a user-supplied name/value pair.
614 &quot;uint64Value&quot;: &quot;A String&quot;, # Value of numeric properties.
615 &quot;name&quot;: &quot;A String&quot;, # Name of the property.
616 &quot;value&quot;: &quot;A String&quot;, # Value of the property.
617 },
618 ],
619 &quot;score&quot;: 3.14, # Overall score of the result. Range [0, 1].
620 &quot;locations&quot;: [ # The location information for the detected entity. Multiple
621 # `LocationInfo` elements can be present because one location may
622 # indicate the location of the scene in the image, and another location
623 # may indicate the location of the place where the image was taken.
624 # Location information is usually present for landmarks.
625 { # Detected entity location information.
626 &quot;latLng&quot;: { # An object representing a latitude/longitude pair. This is expressed as a pair # lat/long location coordinates.
627 # of doubles representing degrees latitude and degrees longitude. Unless
628 # specified otherwise, this must conform to the
629 # &lt;a href=&quot;http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf&quot;&gt;WGS84
630 # standard&lt;/a&gt;. Values must be within normalized ranges.
631 &quot;latitude&quot;: 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
632 &quot;longitude&quot;: 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
633 },
634 },
635 ],
636 &quot;mid&quot;: &quot;A String&quot;, # Opaque entity ID. Some IDs may be available in
637 # [Google Knowledge Graph Search
638 # API](https://developers.google.com/knowledge-graph/).
639 &quot;confidence&quot;: 3.14, # **Deprecated. Use `score` instead.**
640 # The accuracy of the entity detection in an image.
641 # For example, for an image in which the &quot;Eiffel Tower&quot; entity is detected,
642 # this field represents the confidence that there is a tower in the query
643 # image. Range [0, 1].
644 &quot;locale&quot;: &quot;A String&quot;, # The language code for the locale in which the entity textual
645 # `description` is expressed.
646 &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # Image region to which this entity belongs. Not produced
647 # for `LABEL_DETECTION` features.
648 &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
649 { # A vertex represents a 2D point in the image.
650 # NOTE: the normalized vertex coordinates are relative to the original image
651 # and range from 0 to 1.
652 &quot;y&quot;: 3.14, # Y coordinate.
653 &quot;x&quot;: 3.14, # X coordinate.
654 },
655 ],
656 &quot;vertices&quot;: [ # The bounding polygon vertices.
657 { # A vertex represents a 2D point in the image.
658 # NOTE: the vertex coordinates are in the same scale as the original image.
659 &quot;y&quot;: 42, # Y coordinate.
660 &quot;x&quot;: 42, # X coordinate.
661 },
662 ],
663 },
664 &quot;description&quot;: &quot;A String&quot;, # Entity textual description, expressed in its `locale` language.
665 &quot;topicality&quot;: 3.14, # The relevancy of the ICA (Image Content Annotation) label to the
666 # image. For example, the relevancy of &quot;tower&quot; is likely higher to an image
667 # containing the detected &quot;Eiffel Tower&quot; than to an image containing a
668 # detected distant towering building, even though the confidence that
669 # there is a tower in each image may be the same. Range [0, 1].
670 },
671 ],
672 &quot;localizedObjectAnnotations&quot;: [ # If present, localized object detection has completed successfully.
673 # This will be sorted descending by confidence score.
674 { # Set of detected objects with bounding boxes.
675 &quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more
676 # information, see
677 # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
678 &quot;mid&quot;: &quot;A String&quot;, # Object ID that should align with EntityAnnotation mid.
679 &quot;name&quot;: &quot;A String&quot;, # Object name, expressed in its `language_code` language.
680 &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # Image region to which this object belongs. This must be populated.
681 &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
682 { # A vertex represents a 2D point in the image.
683 # NOTE: the normalized vertex coordinates are relative to the original image
684 # and range from 0 to 1.
685 &quot;y&quot;: 3.14, # Y coordinate.
686 &quot;x&quot;: 3.14, # X coordinate.
687 },
688 ],
689 &quot;vertices&quot;: [ # The bounding polygon vertices.
690 { # A vertex represents a 2D point in the image.
691 # NOTE: the vertex coordinates are in the same scale as the original image.
692 &quot;y&quot;: 42, # Y coordinate.
693 &quot;x&quot;: 42, # X coordinate.
694 },
695 ],
696 },
697 &quot;score&quot;: 3.14, # Score of the result. Range [0, 1].
698 },
699 ],
700 &quot;productSearchResults&quot;: { # Results for a product search request. # If present, product search has completed successfully.
701 &quot;indexTime&quot;: &quot;A String&quot;, # Timestamp of the index which provided these results. Products added to the
702 # product set and products removed from the product set after this time are
703 # not reflected in the current results.
704 &quot;productGroupedResults&quot;: [ # List of results grouped by products detected in the query image. Each entry
705 # corresponds to one bounding polygon in the query image, and contains the
706 # matching products specific to that region. There may be duplicate product
707 # matches in the union of all the per-product results.
708 { # Information about the products similar to a single product in a query
709 # image.
710 &quot;objectAnnotations&quot;: [ # List of generic predictions for the object in the bounding box.
711 { # Prediction for what the object in the bounding box is.
712 &quot;score&quot;: 3.14, # Score of the result. Range [0, 1].
713 &quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more
714 # information, see
715 # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
716 &quot;mid&quot;: &quot;A String&quot;, # Object ID that should align with EntityAnnotation mid.
717 &quot;name&quot;: &quot;A String&quot;, # Object name, expressed in its `language_code` language.
718 },
719 ],
720 &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon around the product detected in the query image.
721 &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
722 { # A vertex represents a 2D point in the image.
723 # NOTE: the normalized vertex coordinates are relative to the original image
724 # and range from 0 to 1.
725 &quot;y&quot;: 3.14, # Y coordinate.
726 &quot;x&quot;: 3.14, # X coordinate.
727 },
728 ],
729 &quot;vertices&quot;: [ # The bounding polygon vertices.
730 { # A vertex represents a 2D point in the image.
731 # NOTE: the vertex coordinates are in the same scale as the original image.
732 &quot;y&quot;: 42, # Y coordinate.
733 &quot;x&quot;: 42, # X coordinate.
734 },
735 ],
736 },
737 &quot;results&quot;: [ # List of results, one for each product match.
738 { # Information about a product.
739 &quot;image&quot;: &quot;A String&quot;, # The resource name of the image from the product that is the closest match
740 # to the query.
741 &quot;product&quot;: { # A Product contains ReferenceImages. # The Product.
742 &quot;name&quot;: &quot;A String&quot;, # The resource name of the product.
743 #
744 # Format is:
745 # `projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID`.
746 #
747 # This field is ignored when creating a product.
748 &quot;displayName&quot;: &quot;A String&quot;, # The user-provided name for this Product. Must not be empty. Must be at most
749 # 4096 characters long.
750 &quot;description&quot;: &quot;A String&quot;, # User-provided metadata to be stored with this product. Must be at most 4096
751 # characters long.
752 &quot;productCategory&quot;: &quot;A String&quot;, # Immutable. The category for the product identified by the reference image. This should
753 # be either &quot;homegoods-v2&quot;, &quot;apparel-v2&quot;, or &quot;toys-v2&quot;. The legacy categories
754 # &quot;homegoods&quot;, &quot;apparel&quot;, and &quot;toys&quot; are still supported, but these should
755 # not be used for new products.
756 &quot;productLabels&quot;: [ # Key-value pairs that can be attached to a product. At query time,
757 # constraints can be specified based on the product_labels.
758 #
759 # Note that integer values can be provided as strings, e.g. &quot;1199&quot;. Only
760 # strings with integer values can match a range-based restriction which is
761 # to be supported soon.
762 #
763 # Multiple values can be assigned to the same key. One product may have up to
764 # 500 product_labels.
765 #
766 # Notice that the total number of distinct product_labels over all products
767 # in one ProductSet cannot exceed 1M, otherwise the product search pipeline
768 # will refuse to work for that ProductSet.
769 { # A product label represented as a key-value pair.
770 &quot;value&quot;: &quot;A String&quot;, # The value of the label attached to the product. Cannot be empty and
771 # cannot exceed 128 bytes.
772 &quot;key&quot;: &quot;A String&quot;, # The key of the label attached to the product. Cannot be empty and cannot
773 # exceed 128 bytes.
774 },
775 ],
776 },
777 &quot;score&quot;: 3.14, # A confidence level on the match, ranging from 0 (no confidence) to
778 # 1 (full confidence).
779 },
780 ],
781 },
782 ],
783 &quot;results&quot;: [ # List of results, one for each product match.
784 { # Information about a product.
785 &quot;image&quot;: &quot;A String&quot;, # The resource name of the image from the product that is the closest match
786 # to the query.
787 &quot;product&quot;: { # A Product contains ReferenceImages. # The Product.
788 &quot;name&quot;: &quot;A String&quot;, # The resource name of the product.
789 #
790 # Format is:
791 # `projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID`.
792 #
793 # This field is ignored when creating a product.
794 &quot;displayName&quot;: &quot;A String&quot;, # The user-provided name for this Product. Must not be empty. Must be at most
795 # 4096 characters long.
796 &quot;description&quot;: &quot;A String&quot;, # User-provided metadata to be stored with this product. Must be at most 4096
797 # characters long.
798 &quot;productCategory&quot;: &quot;A String&quot;, # Immutable. The category for the product identified by the reference image. This should
799 # be either &quot;homegoods-v2&quot;, &quot;apparel-v2&quot;, or &quot;toys-v2&quot;. The legacy categories
800 # &quot;homegoods&quot;, &quot;apparel&quot;, and &quot;toys&quot; are still supported, but these should
801 # not be used for new products.
802 &quot;productLabels&quot;: [ # Key-value pairs that can be attached to a product. At query time,
803 # constraints can be specified based on the product_labels.
804 #
805 # Note that integer values can be provided as strings, e.g. &quot;1199&quot;. Only
806 # strings with integer values can match a range-based restriction which is
807 # to be supported soon.
808 #
809 # Multiple values can be assigned to the same key. One product may have up to
810 # 500 product_labels.
811 #
812 # Notice that the total number of distinct product_labels over all products
813 # in one ProductSet cannot exceed 1M, otherwise the product search pipeline
814 # will refuse to work for that ProductSet.
815 { # A product label represented as a key-value pair.
816 &quot;value&quot;: &quot;A String&quot;, # The value of the label attached to the product. Cannot be empty and
817 # cannot exceed 128 bytes.
818 &quot;key&quot;: &quot;A String&quot;, # The key of the label attached to the product. Cannot be empty and cannot
819 # exceed 128 bytes.
820 },
821 ],
822 },
823 &quot;score&quot;: 3.14, # A confidence level on the match, ranging from 0 (no confidence) to
824 # 1 (full confidence).
825 },
826 ],
827 },
828 &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for # If set, represents the error message for the operation.
829 # Note that filled-in image annotations are guaranteed to be
830 # correct, even when `error` is set.
831 # different programming environments, including REST APIs and RPC APIs. It is
832 # used by [gRPC](https://github.com/grpc). Each `Status` message contains
833 # three pieces of data: error code, error message, and error details.
834 #
835 # You can find out more about this error model and how to work with it in the
836 # [API Design Guide](https://cloud.google.com/apis/design/errors).
837 &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
838 &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
839 # user-facing error message should be localized and sent in the
840 # google.rpc.Status.details field, or localized by the client.
841 &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of
842 # message types for APIs to use.
843 {
844 &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
845 },
846 ],
847 },
848 &quot;fullTextAnnotation&quot;: { # TextAnnotation contains a structured representation of OCR extracted text. # If present, text (OCR) detection or document (OCR) text detection has
849 # completed successfully.
850 # This annotation provides the structural hierarchy for the OCR detected
851 # text.
852 # The hierarchy of an OCR extracted text structure is like this:
853 # TextAnnotation -&gt; Page -&gt; Block -&gt; Paragraph -&gt; Word -&gt; Symbol
854 # Each structural component, starting from Page, may further have its own
855 # properties. Properties describe detected languages, breaks, etc. Please refer
856 # to the TextAnnotation.TextProperty message definition below for more
857 # detail.
858 &quot;pages&quot;: [ # List of pages detected by OCR.
859 { # Detected page from OCR.
860 &quot;width&quot;: 42, # Page width. For PDFs the unit is points. For images (including
861 # TIFFs) the unit is pixels.
862 &quot;blocks&quot;: [ # List of blocks of text, images etc on this page.
863 { # Logical element on the page.
864 &quot;property&quot;: { # Additional information detected on the structural component. # Additional information detected for the block.
865 &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
866 { # Detected language for a structural component.
867 &quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more
868 # information, see
869 # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
870 &quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
871 },
872 ],
873 &quot;detectedBreak&quot;: { # Detected start or end of a structural component. # Detected start or end of a text segment.
874 &quot;type&quot;: &quot;A String&quot;, # Detected break type.
875 &quot;isPrefix&quot;: True or False, # True if break prepends the element.
876 },
877 },
878 &quot;blockType&quot;: &quot;A String&quot;, # Detected block type (text, image etc) for this block.
879 &quot;boundingBox&quot;: { # A bounding polygon for the detected image annotation. # The bounding box for the block.
880 # The vertices are in the order of top-left, top-right, bottom-right,
881 # bottom-left. When a rotation of the bounding box is detected the rotation
882 # is represented as around the top-left corner as defined when the text is
883 # read in the &#x27;natural&#x27; orientation.
884 # For example:
885 #
886 # * when the text is horizontal it might look like:
887 #
888 # 0----1
889 # | |
890 # 3----2
891 #
892 # * when it&#x27;s rotated 180 degrees around the top-left corner it becomes:
893 #
894 # 2----3
895 # | |
896 # 1----0
897 #
898 # and the vertex order will still be (0, 1, 2, 3).
899 &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
900 { # A vertex represents a 2D point in the image.
901 # NOTE: the normalized vertex coordinates are relative to the original image
902 # and range from 0 to 1.
903 &quot;y&quot;: 3.14, # Y coordinate.
904 &quot;x&quot;: 3.14, # X coordinate.
905 },
906 ],
907 &quot;vertices&quot;: [ # The bounding polygon vertices.
908 { # A vertex represents a 2D point in the image.
909 # NOTE: the vertex coordinates are in the same scale as the original image.
910 &quot;y&quot;: 42, # Y coordinate.
911 &quot;x&quot;: 42, # X coordinate.
912 },
913 ],
914 },
915 &quot;confidence&quot;: 3.14, # Confidence of the OCR results on the block. Range [0, 1].
916 &quot;paragraphs&quot;: [ # List of paragraphs in this block (if this block is of type text).
917 { # Structural unit of text representing a number of words in certain order.
918 &quot;property&quot;: { # Additional information detected on the structural component. # Additional information detected for the paragraph.
919 &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
920 { # Detected language for a structural component.
921 &quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more
922 # information, see
923 # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
924 &quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
925 },
926 ],
927 &quot;detectedBreak&quot;: { # Detected start or end of a structural component. # Detected start or end of a text segment.
928 &quot;type&quot;: &quot;A String&quot;, # Detected break type.
929 &quot;isPrefix&quot;: True or False, # True if break prepends the element.
930 },
931 },
932 &quot;boundingBox&quot;: { # A bounding polygon for the detected image annotation. # The bounding box for the paragraph.
933 # The vertices are in the order of top-left, top-right, bottom-right,
934 # bottom-left. When a rotation of the bounding box is detected the rotation
935 # is represented as around the top-left corner as defined when the text is
936 # read in the &#x27;natural&#x27; orientation.
937 # For example:
938 # * when the text is horizontal it might look like:
939 # 0----1
940 # | |
941 # 3----2
942 # * when it&#x27;s rotated 180 degrees around the top-left corner it becomes:
943 # 2----3
944 # | |
945 # 1----0
946 # and the vertex order will still be (0, 1, 2, 3).
947 &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
948 { # A vertex represents a 2D point in the image.
949 # NOTE: the normalized vertex coordinates are relative to the original image
950 # and range from 0 to 1.
951 &quot;y&quot;: 3.14, # Y coordinate.
952 &quot;x&quot;: 3.14, # X coordinate.
953 },
954 ],
955 &quot;vertices&quot;: [ # The bounding polygon vertices.
956 { # A vertex represents a 2D point in the image.
957 # NOTE: the vertex coordinates are in the same scale as the original image.
958 &quot;y&quot;: 42, # Y coordinate.
959 &quot;x&quot;: 42, # X coordinate.
960 },
961 ],
962 },
963 &quot;confidence&quot;: 3.14, # Confidence of the OCR results for the paragraph. Range [0, 1].
964 &quot;words&quot;: [ # List of all words in this paragraph.
965 { # A word representation.
966 &quot;property&quot;: { # Additional information detected on the structural component. # Additional information detected for the word.
967 &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
968 { # Detected language for a structural component.
969 &quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more
970 # information, see
971 # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
972 &quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
973 },
974 ],
975 &quot;detectedBreak&quot;: { # Detected start or end of a structural component. # Detected start or end of a text segment.
976 &quot;type&quot;: &quot;A String&quot;, # Detected break type.
977 &quot;isPrefix&quot;: True or False, # True if break prepends the element.
978 },
979 },
980 &quot;boundingBox&quot;: { # A bounding polygon for the detected image annotation. # The bounding box for the word.
981 # The vertices are in the order of top-left, top-right, bottom-right,
982 # bottom-left. When a rotation of the bounding box is detected the rotation
983 # is represented as around the top-left corner as defined when the text is
984 # read in the &#x27;natural&#x27; orientation.
985 # For example:
986 # * when the text is horizontal it might look like:
987 # 0----1
988 # | |
989 # 3----2
990 # * when it&#x27;s rotated 180 degrees around the top-left corner it becomes:
991 # 2----3
992 # | |
993 # 1----0
994 # and the vertex order will still be (0, 1, 2, 3).
995 &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
996 { # A vertex represents a 2D point in the image.
997 # NOTE: the normalized vertex coordinates are relative to the original image
998 # and range from 0 to 1.
999 &quot;y&quot;: 3.14, # Y coordinate.
1000 &quot;x&quot;: 3.14, # X coordinate.
1001 },
1002 ],
1003 &quot;vertices&quot;: [ # The bounding polygon vertices.
1004 { # A vertex represents a 2D point in the image.
1005 # NOTE: the vertex coordinates are in the same scale as the original image.
1006 &quot;y&quot;: 42, # Y coordinate.
1007 &quot;x&quot;: 42, # X coordinate.
1008 },
1009 ],
1010 },
1011 &quot;confidence&quot;: 3.14, # Confidence of the OCR results for the word. Range [0, 1].
1012 &quot;symbols&quot;: [ # List of symbols in the word.
1013 # The order of the symbols follows the natural reading order.
1014 { # A single symbol representation.
1015 &quot;boundingBox&quot;: { # A bounding polygon for the detected image annotation. # The bounding box for the symbol.
1016 # The vertices are in the order of top-left, top-right, bottom-right,
1017 # bottom-left. When a rotation of the bounding box is detected the rotation
1018 # is represented as around the top-left corner as defined when the text is
1019 # read in the &#x27;natural&#x27; orientation.
1020 # For example:
1021 # * when the text is horizontal it might look like:
1022 # 0----1
1023 # | |
1024 # 3----2
1025 # * when it&#x27;s rotated 180 degrees around the top-left corner it becomes:
1026 # 2----3
1027 # | |
1028 # 1----0
1029 # and the vertex order will still be (0, 1, 2, 3).
1030 &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
1031 { # A vertex represents a 2D point in the image.
1032 # NOTE: the normalized vertex coordinates are relative to the original image
1033 # and range from 0 to 1.
1034 &quot;y&quot;: 3.14, # Y coordinate.
1035 &quot;x&quot;: 3.14, # X coordinate.
1036 },
1037 ],
1038 &quot;vertices&quot;: [ # The bounding polygon vertices.
1039 { # A vertex represents a 2D point in the image.
1040 # NOTE: the vertex coordinates are in the same scale as the original image.
1041 &quot;y&quot;: 42, # Y coordinate.
1042 &quot;x&quot;: 42, # X coordinate.
1043 },
1044 ],
1045 },
1046 &quot;confidence&quot;: 3.14, # Confidence of the OCR results for the symbol. Range [0, 1].
1047 &quot;text&quot;: &quot;A String&quot;, # The actual UTF-8 representation of the symbol.
1048 &quot;property&quot;: { # Additional information detected on the structural component. # Additional information detected for the symbol.
1049 &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
1050 { # Detected language for a structural component.
1051 &quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more
1052 # information, see
1053 # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
1054 &quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
1055 },
1056 ],
1057 &quot;detectedBreak&quot;: { # Detected start or end of a structural component. # Detected start or end of a text segment.
1058 &quot;type&quot;: &quot;A String&quot;, # Detected break type.
1059 &quot;isPrefix&quot;: True or False, # True if break prepends the element.
1060 },
1061 },
1062 },
1063 ],
1064 },
1065 ],
1066 },
1067 ],
1068 },
1069 ],
1070 &quot;property&quot;: { # Additional information detected on the structural component. # Additional information detected on the page.
1071 &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
1072 { # Detected language for a structural component.
1073 &quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more
1074 # information, see
1075 # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
1076 &quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
1077 },
1078 ],
1079 &quot;detectedBreak&quot;: { # Detected start or end of a structural component. # Detected start or end of a text segment.
1080 &quot;type&quot;: &quot;A String&quot;, # Detected break type.
1081 &quot;isPrefix&quot;: True or False, # True if break prepends the element.
1082 },
1083 },
1084 &quot;confidence&quot;: 3.14, # Confidence of the OCR results on the page. Range [0, 1].
1085 &quot;height&quot;: 42, # Page height. For PDFs the unit is points. For images (including
1086 # TIFFs) the unit is pixels.
1087 },
1088 ],
1089 &quot;text&quot;: &quot;A String&quot;, # UTF-8 text detected on the pages.
1090 },
1091 &quot;textAnnotations&quot;: [ # If present, text (OCR) detection has completed successfully.
1092 { # Set of detected entity features.
1093 &quot;properties&quot;: [ # Some entities may have optional user-supplied `Property` (name/value)
1094              # fields, such as a score or string that qualifies the entity.
1095 { # A `Property` consists of a user-supplied name/value pair.
1096 &quot;uint64Value&quot;: &quot;A String&quot;, # Value of numeric properties.
1097 &quot;name&quot;: &quot;A String&quot;, # Name of the property.
1098 &quot;value&quot;: &quot;A String&quot;, # Value of the property.
1099 },
1100 ],
1101 &quot;score&quot;: 3.14, # Overall score of the result. Range [0, 1].
1102 &quot;locations&quot;: [ # The location information for the detected entity. Multiple
1103 # `LocationInfo` elements can be present because one location may
1104 # indicate the location of the scene in the image, and another location
1105 # may indicate the location of the place where the image was taken.
1106 # Location information is usually present for landmarks.
1107 { # Detected entity location information.
1108 &quot;latLng&quot;: { # An object representing a latitude/longitude pair. This is expressed as a pair # lat/long location coordinates.
1109 # of doubles representing degrees latitude and degrees longitude. Unless
1110 # specified otherwise, this must conform to the
1111 # &lt;a href=&quot;http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf&quot;&gt;WGS84
1112 # standard&lt;/a&gt;. Values must be within normalized ranges.
1113 &quot;latitude&quot;: 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
1114 &quot;longitude&quot;: 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
1115 },
1116 },
1117 ],
1118 &quot;mid&quot;: &quot;A String&quot;, # Opaque entity ID. Some IDs may be available in
1119 # [Google Knowledge Graph Search
1120 # API](https://developers.google.com/knowledge-graph/).
1121 &quot;confidence&quot;: 3.14, # **Deprecated. Use `score` instead.**
1122 # The accuracy of the entity detection in an image.
1123 # For example, for an image in which the &quot;Eiffel Tower&quot; entity is detected,
1124 # this field represents the confidence that there is a tower in the query
1125 # image. Range [0, 1].
1126 &quot;locale&quot;: &quot;A String&quot;, # The language code for the locale in which the entity textual
1127 # `description` is expressed.
1128 &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # Image region to which this entity belongs. Not produced
1129 # for `LABEL_DETECTION` features.
1130 &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
1131 { # A vertex represents a 2D point in the image.
1132 # NOTE: the normalized vertex coordinates are relative to the original image
1133 # and range from 0 to 1.
1134 &quot;y&quot;: 3.14, # Y coordinate.
1135 &quot;x&quot;: 3.14, # X coordinate.
1136 },
1137 ],
1138 &quot;vertices&quot;: [ # The bounding polygon vertices.
1139 { # A vertex represents a 2D point in the image.
1140 # NOTE: the vertex coordinates are in the same scale as the original image.
1141 &quot;y&quot;: 42, # Y coordinate.
1142 &quot;x&quot;: 42, # X coordinate.
1143 },
1144 ],
1145 },
1146 &quot;description&quot;: &quot;A String&quot;, # Entity textual description, expressed in its `locale` language.
1147 &quot;topicality&quot;: 3.14, # The relevancy of the ICA (Image Content Annotation) label to the
1148 # image. For example, the relevancy of &quot;tower&quot; is likely higher to an image
1149 # containing the detected &quot;Eiffel Tower&quot; than to an image containing a
1150 # detected distant towering building, even though the confidence that
1151 # there is a tower in each image may be the same. Range [0, 1].
1152 },
1153 ],
1154 &quot;imagePropertiesAnnotation&quot;: { # Stores image properties, such as dominant colors. # If present, image properties were extracted successfully.
1155 &quot;dominantColors&quot;: { # Set of dominant colors and their corresponding scores. # If present, dominant colors completed successfully.
1156 &quot;colors&quot;: [ # RGB color values with their score and pixel fraction.
1157 { # Color information consists of RGB channels, score, and the fraction of
1158              # the image that the color occupies.
1159 &quot;pixelFraction&quot;: 3.14, # The fraction of pixels the color occupies in the image.
1160 # Value in range [0, 1].
1161 &quot;color&quot;: { # Represents a color in the RGBA color space. This representation is designed # RGB components of the color.
1162 # for simplicity of conversion to/from color representations in various
1163 # languages over compactness; for example, the fields of this representation
1164 # can be trivially provided to the constructor of &quot;java.awt.Color&quot; in Java; it
1165 # can also be trivially provided to UIColor&#x27;s &quot;+colorWithRed:green:blue:alpha&quot;
1166 # method in iOS; and, with just a little work, it can be easily formatted into
1167 # a CSS &quot;rgba()&quot; string in JavaScript, as well.
1168 #
1169 # Note: this proto does not carry information about the absolute color space
1170 # that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB,
1171 # DCI-P3, BT.2020, etc.). By default, applications SHOULD assume the sRGB color
1172 # space.
1173 #
1174 # Example (Java):
1175 #
1176 # import com.google.type.Color;
1177 #
1178 # // ...
1179 # public static java.awt.Color fromProto(Color protocolor) {
1180 # float alpha = protocolor.hasAlpha()
1181 # ? protocolor.getAlpha().getValue()
1182              #         : 1.0f;
1183 #
1184 # return new java.awt.Color(
1185 # protocolor.getRed(),
1186 # protocolor.getGreen(),
1187 # protocolor.getBlue(),
1188 # alpha);
1189 # }
1190 #
1191 # public static Color toProto(java.awt.Color color) {
1192 # float red = (float) color.getRed();
1193 # float green = (float) color.getGreen();
1194 # float blue = (float) color.getBlue();
1195              #     float denominator = 255.0f;
1196 # Color.Builder resultBuilder =
1197 # Color
1198 # .newBuilder()
1199 # .setRed(red / denominator)
1200 # .setGreen(green / denominator)
1201 # .setBlue(blue / denominator);
1202 # int alpha = color.getAlpha();
1203 # if (alpha != 255) {
1204              #       resultBuilder.setAlpha(
1205 # FloatValue
1206 # .newBuilder()
1207 # .setValue(((float) alpha) / denominator)
1208 # .build());
1209 # }
1210 # return resultBuilder.build();
1211 # }
1212 # // ...
1213 #
1214 # Example (iOS / Obj-C):
1215 #
1216 # // ...
1217 # static UIColor* fromProto(Color* protocolor) {
1218 # float red = [protocolor red];
1219 # float green = [protocolor green];
1220 # float blue = [protocolor blue];
1221 # FloatValue* alpha_wrapper = [protocolor alpha];
1222 # float alpha = 1.0;
1223 # if (alpha_wrapper != nil) {
1224 # alpha = [alpha_wrapper value];
1225 # }
1226 # return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
1227 # }
1228 #
1229 # static Color* toProto(UIColor* color) {
1230 # CGFloat red, green, blue, alpha;
1231 # if (![color getRed:&amp;red green:&amp;green blue:&amp;blue alpha:&amp;alpha]) {
1232 # return nil;
1233 # }
1234 # Color* result = [[Color alloc] init];
1235 # [result setRed:red];
1236 # [result setGreen:green];
1237 # [result setBlue:blue];
1238 # if (alpha &lt;= 0.9999) {
1239 # [result setAlpha:floatWrapperWithValue(alpha)];
1240 # }
1241 # [result autorelease];
1242 # return result;
1243 # }
1244 # // ...
1245 #
1246 # Example (JavaScript):
1247 #
1248 # // ...
1249 #
1250 # var protoToCssColor = function(rgb_color) {
1251 # var redFrac = rgb_color.red || 0.0;
1252 # var greenFrac = rgb_color.green || 0.0;
1253 # var blueFrac = rgb_color.blue || 0.0;
1254 # var red = Math.floor(redFrac * 255);
1255 # var green = Math.floor(greenFrac * 255);
1256 # var blue = Math.floor(blueFrac * 255);
1257 #
1258 # if (!(&#x27;alpha&#x27; in rgb_color)) {
1259 # return rgbToCssColor_(red, green, blue);
1260 # }
1261 #
1262 # var alphaFrac = rgb_color.alpha.value || 0.0;
1263 # var rgbParams = [red, green, blue].join(&#x27;,&#x27;);
1264 # return [&#x27;rgba(&#x27;, rgbParams, &#x27;,&#x27;, alphaFrac, &#x27;)&#x27;].join(&#x27;&#x27;);
1265 # };
1266 #
1267 # var rgbToCssColor_ = function(red, green, blue) {
1268 # var rgbNumber = new Number((red &lt;&lt; 16) | (green &lt;&lt; 8) | blue);
1269 # var hexString = rgbNumber.toString(16);
1270 # var missingZeros = 6 - hexString.length;
1271 # var resultBuilder = [&#x27;#&#x27;];
1272 # for (var i = 0; i &lt; missingZeros; i++) {
1273 # resultBuilder.push(&#x27;0&#x27;);
1274 # }
1275 # resultBuilder.push(hexString);
1276 # return resultBuilder.join(&#x27;&#x27;);
1277 # };
1278 #
1279 # // ...
1280 &quot;red&quot;: 3.14, # The amount of red in the color as a value in the interval [0, 1].
1281 &quot;green&quot;: 3.14, # The amount of green in the color as a value in the interval [0, 1].
1282 &quot;blue&quot;: 3.14, # The amount of blue in the color as a value in the interval [0, 1].
1283 &quot;alpha&quot;: 3.14, # The fraction of this color that should be applied to the pixel. That is,
1284 # the final pixel color is defined by the equation:
1285 #
1286 # pixel color = alpha * (this color) + (1.0 - alpha) * (background color)
1287 #
1288 # This means that a value of 1.0 corresponds to a solid color, whereas
1289 # a value of 0.0 corresponds to a completely transparent color. This
1290 # uses a wrapper message rather than a simple float scalar so that it is
1291 # possible to distinguish between a default value and the value being unset.
1292 # If omitted, this color object is to be rendered as a solid color
1293 # (as if the alpha value had been explicitly given with a value of 1.0).
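                  #
                  # A worked instance of the equation above (values chosen purely for
                  # illustration): with alpha = 0.25, a foreground red of 1.0 and a
                  # background red of 0.5,
                  #
                  #     pixel red = 0.25 * 1.0 + (1.0 - 0.25) * 0.5 = 0.625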
1294 },
1295 &quot;score&quot;: 3.14, # Image-specific score for this color. Value in range [0, 1].
1296 },
1297 ],
1298 },
1299 },
1300 },
1301 ],
1302 &quot;inputConfig&quot;: { # The desired input location and metadata. # Information about the file for which this response is generated.
1303 &quot;gcsSource&quot;: { # The Google Cloud Storage location where the input will be read from. # The Google Cloud Storage location to read the input from.
1304 &quot;uri&quot;: &quot;A String&quot;, # Google Cloud Storage URI for the input file. This must only be a
1305 # Google Cloud Storage object. Wildcards are not currently supported.
1306 },
1307 &quot;mimeType&quot;: &quot;A String&quot;, # The type of the file. Currently only &quot;application/pdf&quot;, &quot;image/tiff&quot; and
1308 # &quot;image/gif&quot; are supported. Wildcards are not supported.
1309 &quot;content&quot;: &quot;A String&quot;, # File content, represented as a stream of bytes.
1310              # Note: As with all `bytes` fields, protocol buffers use a pure binary
1311 # representation, whereas JSON representations use base64.
1312 #
1313 # Currently, this field only works for BatchAnnotateFiles requests. It does
1314 # not work for AsyncBatchAnnotateFiles requests.
1315 },
1316 },
1317 ],
1318 }</pre>
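<p>Below is a minimal, illustrative sketch of calling this method with the generated Python client. It assumes the google-api-python-client library and application-default credentials; the project, bucket and file names are placeholders, and the response fields accessed follow the schema documented above.</p>
<pre>
# Illustrative sketch only; project, bucket and file names are placeholders.
from googleapiclient.discovery import build

service = build('vision', 'v1p2beta1')

request_body = {
    'requests': [
        {
            'inputConfig': {
                'gcsSource': {'uri': 'gs://your-bucket/your-file.pdf'},
                'mimeType': 'application/pdf',
            },
            'features': [{'type': 'DOCUMENT_TEXT_DETECTION'}],
            # 'pages' optionally selects which pages (or GIF frames) to annotate.
            'pages': [1, 2],
        },
    ],
}

response = service.projects().files().annotate(
    parent='projects/your-project/locations/eu',
    body=request_body,
).execute()

# Each entry in 'responses' corresponds to one input file; its nested
# 'responses' list holds one annotation result per extracted page.
for file_response in response.get('responses', []):
    for page_response in file_response.get('responses', []):
        print(page_response.get('fullTextAnnotation', {}).get('text', ''))
</pre>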
1319</div>
1320
1321<div class="method">
1322 <code class="details" id="asyncBatchAnnotate">asyncBatchAnnotate(parent, body=None, x__xgafv=None)</code>
1323 <pre>Run asynchronous image detection and annotation for a list of generic
1324files, such as PDF files, which may contain multiple pages and multiple
1325images per page. Progress and results can be retrieved through the
1326`google.longrunning.Operations` interface.
1327`Operation.metadata` contains `OperationMetadata` (metadata).
1328`Operation.response` contains `AsyncBatchAnnotateFilesResponse` (results).
1329
1330Args:
1331 parent: string, Optional. Target project and location to make a call.
1332
1333Format: `projects/{project-id}/locations/{location-id}`.
1334
1335If no parent is specified, a region will be chosen automatically.
1336
1337Supported location-ids:
1338 `us`: USA country only,
1339  `asia`: East Asia areas, such as Japan and Taiwan,
1340 `eu`: The European Union.
1341
1342Example: `projects/project-A/locations/eu`. (required)
1343 body: object, The request body.
1344 The object takes the form of:
1345
1346{ # Multiple async file annotation requests are batched into a single service
1347 # call.
1348 &quot;requests&quot;: [ # Required. Individual async file annotation requests for this batch.
1349 { # An offline file annotation request.
1350 &quot;imageContext&quot;: { # Image context and/or feature-specific parameters. # Additional context that may accompany the image(s) in the file.
1351 &quot;languageHints&quot;: [ # List of languages to use for TEXT_DETECTION. In most cases, an empty value
1352 # yields the best results since it enables automatic language detection. For
1353 # languages based on the Latin alphabet, setting `language_hints` is not
1354 # needed. In rare cases, when the language of the text in the image is known,
1355 # setting a hint will help get better results (although it will be a
1356 # significant hindrance if the hint is wrong). Text detection returns an
1357 # error if one or more of the specified languages is not one of the
1358 # [supported languages](https://cloud.google.com/vision/docs/languages).
1359 &quot;A String&quot;,
1360 ],
1361 &quot;webDetectionParams&quot;: { # Parameters for web detection request. # Parameters for web detection.
1362 &quot;includeGeoResults&quot;: True or False, # Whether to include results derived from the geo information in the image.
1363 },
1364 &quot;latLongRect&quot;: { # Rectangle determined by min and max `LatLng` pairs. # Not used.
1365 &quot;minLatLng&quot;: { # An object representing a latitude/longitude pair. This is expressed as a pair # Min lat/long pair.
1366 # of doubles representing degrees latitude and degrees longitude. Unless
1367 # specified otherwise, this must conform to the
1368 # &lt;a href=&quot;http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf&quot;&gt;WGS84
1369 # standard&lt;/a&gt;. Values must be within normalized ranges.
1370 &quot;latitude&quot;: 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
1371 &quot;longitude&quot;: 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
1372 },
1373 &quot;maxLatLng&quot;: { # An object representing a latitude/longitude pair. This is expressed as a pair # Max lat/long pair.
1374 # of doubles representing degrees latitude and degrees longitude. Unless
1375 # specified otherwise, this must conform to the
1376 # &lt;a href=&quot;http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf&quot;&gt;WGS84
1377 # standard&lt;/a&gt;. Values must be within normalized ranges.
1378 &quot;latitude&quot;: 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
1379 &quot;longitude&quot;: 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
1380 },
1381 },
1382 &quot;cropHintsParams&quot;: { # Parameters for crop hints annotation request. # Parameters for crop hints annotation request.
1383 &quot;aspectRatios&quot;: [ # Aspect ratios in floats, representing the ratio of the width to the height
1384 # of the image. For example, if the desired aspect ratio is 4/3, the
1385 # corresponding float value should be 1.33333. If not specified, the
1386 # best possible crop is returned. The number of provided aspect ratios is
1387 # limited to a maximum of 16; any aspect ratios provided after the 16th are
1388 # ignored.
1389 3.14,
1390 ],
1391 },
1392 &quot;productSearchParams&quot;: { # Parameters for a product search request. # Parameters for product search.
1393 &quot;productSet&quot;: &quot;A String&quot;, # The resource name of a ProductSet to be searched for similar images.
1394 #
1395 # Format is:
1396 # `projects/PROJECT_ID/locations/LOC_ID/productSets/PRODUCT_SET_ID`.
1397 &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon around the area of interest in the image.
1398 # If it is not specified, system discretion will be applied.
1399 &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
1400 { # A vertex represents a 2D point in the image.
1401 # NOTE: the normalized vertex coordinates are relative to the original image
1402 # and range from 0 to 1.
1403 &quot;y&quot;: 3.14, # Y coordinate.
1404 &quot;x&quot;: 3.14, # X coordinate.
1405 },
1406 ],
1407 &quot;vertices&quot;: [ # The bounding polygon vertices.
1408 { # A vertex represents a 2D point in the image.
1409 # NOTE: the vertex coordinates are in the same scale as the original image.
1410 &quot;y&quot;: 42, # Y coordinate.
1411 &quot;x&quot;: 42, # X coordinate.
1412 },
1413 ],
1414 },
1415 &quot;productCategories&quot;: [ # The list of product categories to search in. Currently, we only consider
1416 # the first category, and either &quot;homegoods-v2&quot;, &quot;apparel-v2&quot;, &quot;toys-v2&quot;,
1417 # &quot;packagedgoods-v1&quot;, or &quot;general-v1&quot; should be specified. The legacy
1418 # categories &quot;homegoods&quot;, &quot;apparel&quot;, and &quot;toys&quot; are still supported but will
1419 # be deprecated. For new products, please use &quot;homegoods-v2&quot;, &quot;apparel-v2&quot;,
1420 # or &quot;toys-v2&quot; for better product search accuracy. It is recommended to
1421 # migrate existing products to these categories as well.
1422 &quot;A String&quot;,
1423 ],
1424 &quot;filter&quot;: &quot;A String&quot;, # The filtering expression. This can be used to restrict search results based
1425 # on Product labels. We currently support an AND of OR of key-value
1426              # on Product labels. We currently support an AND of ORs of key-value
1427 # &#x27;=&#x27; should be used to connect the key and value.
1428 #
1429 # For example, &quot;(color = red OR color = blue) AND brand = Google&quot; is
1430 # acceptable, but &quot;(color = red OR brand = Google)&quot; is not acceptable.
1431 # &quot;color: red&quot; is not acceptable because it uses a &#x27;:&#x27; instead of an &#x27;=&#x27;.
1432 },
1433 },
1434 &quot;outputConfig&quot;: { # The desired output location and metadata. # Required. The desired output location and metadata (e.g. format).
1435 &quot;gcsDestination&quot;: { # The Google Cloud Storage location where the output will be written to. # The Google Cloud Storage location to write the output(s) to.
1436 &quot;uri&quot;: &quot;A String&quot;, # Google Cloud Storage URI prefix where the results will be stored. Results
1437              # will be in JSON format and preceded by their corresponding input URI prefix.
1438              # This field can represent either a GCS file prefix or a GCS directory. In
1439              # either case, the URI should be unique, because to get all of the
1440              # output files you will need to do a wildcard GCS search on the URI prefix
1441              # you provide.
1442 #
1443 # Examples:
1444 #
1445 # * File Prefix: gs://bucket-name/here/filenameprefix The output files
1446 # will be created in gs://bucket-name/here/ and the names of the
1447 # output files will begin with &quot;filenameprefix&quot;.
1448 #
1449 # * Directory Prefix: gs://bucket-name/some/location/ The output files
1450 # will be created in gs://bucket-name/some/location/ and the names of the
1451 # output files could be anything because there was no filename prefix
1452 # specified.
1453 #
1454              # If there are multiple outputs, each response is still an AnnotateFileResponse,
1455              # each of which contains some subset of the full list of AnnotateImageResponse.
1456 # Multiple outputs can happen if, for example, the output JSON is too large
1457 # and overflows into multiple sharded files.
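                  #
                  # For instance (illustrative only; the bucket and prefix are the placeholder
                  # values from the examples above), the sharded output objects can be listed
                  # with a wildcard search using the gsutil CLI:
                  #
                  #     gsutil ls gs://bucket-name/here/filenameprefix*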
1458 },
1459 &quot;batchSize&quot;: 42, # The max number of response protos to put into each output JSON file on
1460 # Google Cloud Storage.
1461 # The valid range is [1, 100]. If not specified, the default value is 20.
1462 #
1463              # For example, for one PDF file with 100 pages, 100 response protos will
1464              # be generated. If `batch_size` = 20, then 5 JSON files each
1465 # containing 20 response protos will be written under the prefix
1466 # `gcs_destination`.`uri`.
1467 #
1468 # Currently, batch_size only applies to GcsDestination, with potential future
1469 # support for other output configurations.
1470 },
1471 &quot;inputConfig&quot;: { # The desired input location and metadata. # Required. Information about the input file.
1472 &quot;gcsSource&quot;: { # The Google Cloud Storage location where the input will be read from. # The Google Cloud Storage location to read the input from.
1473 &quot;uri&quot;: &quot;A String&quot;, # Google Cloud Storage URI for the input file. This must only be a
1474 # Google Cloud Storage object. Wildcards are not currently supported.
1475 },
1476 &quot;mimeType&quot;: &quot;A String&quot;, # The type of the file. Currently only &quot;application/pdf&quot;, &quot;image/tiff&quot; and
1477 # &quot;image/gif&quot; are supported. Wildcards are not supported.
1478 &quot;content&quot;: &quot;A String&quot;, # File content, represented as a stream of bytes.
1479 # Note: As with all `bytes` fields, protobuffers use a pure binary
1480              # Note: As with all `bytes` fields, protocol buffers use a pure binary
1481 #
1482 # Currently, this field only works for BatchAnnotateFiles requests. It does
1483 # not work for AsyncBatchAnnotateFiles requests.
1484 },
1485 &quot;features&quot;: [ # Required. Requested features.
1486 { # The type of Google Cloud Vision API detection to perform, and the maximum
1487 # number of results to return for that type. Multiple `Feature` objects can
1488 # be specified in the `features` list.
1489 &quot;type&quot;: &quot;A String&quot;, # The feature type.
1490 &quot;maxResults&quot;: 42, # Maximum number of results of this type. Does not apply to
1491 # `TEXT_DETECTION`, `DOCUMENT_TEXT_DETECTION`, or `CROP_HINTS`.
1492 &quot;model&quot;: &quot;A String&quot;, # Model to use for the feature.
1493 # Supported values: &quot;builtin/stable&quot; (the default if unset) and
1494 # &quot;builtin/latest&quot;.
1495 },
1496 ],
1497 },
1498 ],
1499 &quot;parent&quot;: &quot;A String&quot;, # Optional. Target project and location to make a call.
1500 #
1501 # Format: `projects/{project-id}/locations/{location-id}`.
1502 #
1503 # If no parent is specified, a region will be chosen automatically.
1504 #
1505 # Supported location-ids:
1506 # `us`: USA country only,
1507      #     `asia`: East Asia areas, such as Japan and Taiwan,
1508 # `eu`: The European Union.
1509 #
1510 # Example: `projects/project-A/locations/eu`.
1511 }
1512
1513 x__xgafv: string, V1 error format.
1514 Allowed values
1515 1 - v1 error format
1516 2 - v2 error format
1517
1518Returns:
1519 An object of the form:
1520
1521 { # This resource represents a long-running operation that is the result of a
1522 # network API call.
1523 &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
1524 # different programming environments, including REST APIs and RPC APIs. It is
1525 # used by [gRPC](https://github.com/grpc). Each `Status` message contains
1526 # three pieces of data: error code, error message, and error details.
1527 #
1528 # You can find out more about this error model and how to work with it in the
1529 # [API Design Guide](https://cloud.google.com/apis/design/errors).
1530 &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
1531 &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
1532 # user-facing error message should be localized and sent in the
1533 # google.rpc.Status.details field, or localized by the client.
1534 &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of
1535 # message types for APIs to use.
1536 {
1537 &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
1538 },
1539 ],
1540 },
1541 &quot;metadata&quot;: { # Service-specific metadata associated with the operation. It typically
1542 # contains progress information and common metadata such as create time.
1543 # Some services might not provide such metadata. Any method that returns a
1544 # long-running operation should document the metadata type, if any.
1545 &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
1546 },
1547 &quot;done&quot;: True or False, # If the value is `false`, it means the operation is still in progress.
1548 # If `true`, the operation is completed, and either `error` or `response` is
1549 # available.
1550 &quot;response&quot;: { # The normal response of the operation in case of success. If the original
1551 # method returns no data on success, such as `Delete`, the response is
1552 # `google.protobuf.Empty`. If the original method is standard
1553 # `Get`/`Create`/`Update`, the response should be the resource. For other
1554 # methods, the response should have the type `XxxResponse`, where `Xxx`
1555 # is the original method name. For example, if the original method name
1556 # is `TakeSnapshot()`, the inferred response type is
1557 # `TakeSnapshotResponse`.
1558 &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
1559 },
1560 &quot;name&quot;: &quot;A String&quot;, # The server-assigned name, which is only unique within the same service that
1561 # originally returns it. If you use the default HTTP mapping, the
1562 # `name` should be a resource name ending with `operations/{unique_id}`.
1563 }</pre>
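<p>Below is a minimal, illustrative sketch of starting an asynchronous request with the generated Python client. It assumes the google-api-python-client library and application-default credentials; the project, bucket and object names are placeholders, and the request fields follow the schema documented above.</p>
<pre>
# Illustrative sketch only; project, bucket and object names are placeholders.
from googleapiclient.discovery import build

service = build('vision', 'v1p2beta1')

request_body = {
    'requests': [
        {
            'inputConfig': {
                'gcsSource': {'uri': 'gs://your-bucket/your-file.pdf'},
                'mimeType': 'application/pdf',
            },
            'features': [{'type': 'DOCUMENT_TEXT_DETECTION'}],
            'outputConfig': {
                'gcsDestination': {'uri': 'gs://your-bucket/ocr-output/'},
                'batchSize': 20,
            },
        },
    ],
}

operation = service.projects().files().asyncBatchAnnotate(
    parent='projects/your-project/locations/eu',
    body=request_body,
).execute()

# The call returns a long-running Operation resource immediately. Poll it
# through the google.longrunning.Operations interface until 'done' is true;
# the JSON result shards are then written under the gcs_destination URI prefix.
print(operation['name'])
</pre>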
1564</div>
1565
1566</body></html>