<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="vision_v1p1beta1.html">Cloud Vision API</a> . <a href="vision_v1p1beta1.images.html">images</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#annotate">annotate(body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Run image detection and annotation for a batch of images.</p>
<p class="toc_element">
  <code><a href="#asyncBatchAnnotate">asyncBatchAnnotate(body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Run asynchronous image detection and annotation for a list of images.</p>
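<p>The sketch below is a minimal, hypothetical example of calling <code>annotate</code> through the google-api-python-client discovery interface. The bucket name, image path, and feature choice are placeholders rather than values taken from this reference, and credential setup is omitted.</p>
<pre>
# Hypothetical usage sketch: run label detection on one Cloud Storage image.
from googleapiclient.discovery import build

# Build the Cloud Vision v1p1beta1 client from the public discovery document.
service = build(&#x27;vision&#x27;, &#x27;v1p1beta1&#x27;)

request_body = {
    &#x27;requests&#x27;: [
        {
            # Placeholder image location; a gs:// URI or a public HTTP(S) URL works here.
            &#x27;image&#x27;: {&#x27;source&#x27;: {&#x27;imageUri&#x27;: &#x27;gs://your-bucket/your-image.jpg&#x27;}},
            &#x27;features&#x27;: [{&#x27;type&#x27;: &#x27;LABEL_DETECTION&#x27;, &#x27;maxResults&#x27;: 5}],
        },
    ],
}

response = service.images().annotate(body=request_body).execute()
for result in response.get(&#x27;responses&#x27;, []):
    print(result.get(&#x27;labelAnnotations&#x27;, []))
</pre>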
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="annotate">annotate(body=None, x__xgafv=None)</code>
  <pre>Run image detection and annotation for a batch of images.

Args:
  body: object, The request body.
    The object takes the form of:

{ # Multiple image annotation requests are batched into a single service call.
  &quot;parent&quot;: &quot;A String&quot;, # Optional. Target project and location to make a call.
      #
      # Format: `projects/{project-id}/locations/{location-id}`.
      #
      # If no parent is specified, a region will be chosen automatically.
      #
      # Supported location-ids:
      #   `us`: USA country only,
      #   `asia`: East asia areas, like Japan, Taiwan,
      #   `eu`: The European Union.
      #
      # Example: `projects/project-A/locations/eu`.
  &quot;requests&quot;: [ # Required. Individual image annotation requests for this batch.
    { # Request for performing Google Cloud Vision API tasks over a user-provided
        # image, with user-requested features, and with context information.
      &quot;image&quot;: { # Client image to perform Google Cloud Vision API tasks over. # The image to be processed.
        &quot;content&quot;: &quot;A String&quot;, # Image content, represented as a stream of bytes.
            # Note: As with all `bytes` fields, protocol buffers use a pure binary
            # representation, whereas JSON representations use base64.
        &quot;source&quot;: { # External image source (Google Cloud Storage or web URL image location). # Google Cloud Storage image location, or publicly-accessible image
            # URL. If both `content` and `source` are provided for an image, `content`
            # takes precedence and is used to perform the image annotation request.
          &quot;imageUri&quot;: &quot;A String&quot;, # The URI of the source image. Can be either:
              #
              # 1. A Google Cloud Storage URI of the form
              #    `gs://bucket_name/object_name`. Object versioning is not supported. See
              #    [Google Cloud Storage Request
              #    URIs](https://cloud.google.com/storage/docs/reference-uris) for more
              #    info.
              #
              # 2. A publicly-accessible image HTTP/HTTPS URL. When fetching images from
              #    HTTP/HTTPS URLs, Google cannot guarantee that the request will be
              #    completed. Your request may fail if the specified host denies the
              #    request (e.g. due to request throttling or DOS prevention), or if Google
              #    throttles requests to the site for abuse prevention. You should not
              #    depend on externally-hosted images for production applications.
              #
              # When both `gcs_image_uri` and `image_uri` are specified, `image_uri` takes
              # precedence.
          &quot;gcsImageUri&quot;: &quot;A String&quot;, # **Use `image_uri` instead.**
              #
              # The Google Cloud Storage URI of the form
              # `gs://bucket_name/object_name`. Object versioning is not supported. See
              # [Google Cloud Storage Request
              # URIs](https://cloud.google.com/storage/docs/reference-uris) for more info.
        },
      },
      &quot;features&quot;: [ # Requested features.
        { # The type of Google Cloud Vision API detection to perform, and the maximum
            # number of results to return for that type. Multiple `Feature` objects can
            # be specified in the `features` list.
          &quot;type&quot;: &quot;A String&quot;, # The feature type.
          &quot;maxResults&quot;: 42, # Maximum number of results of this type. Does not apply to
              # `TEXT_DETECTION`, `DOCUMENT_TEXT_DETECTION`, or `CROP_HINTS`.
          &quot;model&quot;: &quot;A String&quot;, # Model to use for the feature.
              # Supported values: &quot;builtin/stable&quot; (the default if unset) and
              # &quot;builtin/latest&quot;.
        },
      ],
      &quot;imageContext&quot;: { # Image context and/or feature-specific parameters. # Additional context that may accompany the image.
        &quot;languageHints&quot;: [ # List of languages to use for TEXT_DETECTION. In most cases, an empty value
            # yields the best results since it enables automatic language detection. For
            # languages based on the Latin alphabet, setting `language_hints` is not
            # needed. In rare cases, when the language of the text in the image is known,
            # setting a hint will help get better results (although it will be a
            # significant hindrance if the hint is wrong). Text detection returns an
            # error if one or more of the specified languages is not one of the
            # [supported languages](https://cloud.google.com/vision/docs/languages).
          &quot;A String&quot;,
        ],
        &quot;webDetectionParams&quot;: { # Parameters for web detection request. # Parameters for web detection.
          &quot;includeGeoResults&quot;: True or False, # Whether to include results derived from the geo information in the image.
        },
        &quot;latLongRect&quot;: { # Rectangle determined by min and max `LatLng` pairs. # Not used.
          &quot;minLatLng&quot;: { # An object representing a latitude/longitude pair. This is expressed as a pair # Min lat/long pair.
              # of doubles representing degrees latitude and degrees longitude. Unless
              # specified otherwise, this must conform to the
              # &lt;a href=&quot;http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf&quot;&gt;WGS84
              # standard&lt;/a&gt;. Values must be within normalized ranges.
            &quot;latitude&quot;: 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
            &quot;longitude&quot;: 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
          },
          &quot;maxLatLng&quot;: { # An object representing a latitude/longitude pair. This is expressed as a pair # Max lat/long pair.
              # of doubles representing degrees latitude and degrees longitude. Unless
              # specified otherwise, this must conform to the
              # &lt;a href=&quot;http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf&quot;&gt;WGS84
              # standard&lt;/a&gt;. Values must be within normalized ranges.
            &quot;latitude&quot;: 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
            &quot;longitude&quot;: 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
          },
        },
        &quot;cropHintsParams&quot;: { # Parameters for crop hints annotation request. # Parameters for crop hints annotation request.
          &quot;aspectRatios&quot;: [ # Aspect ratios in floats, representing the ratio of the width to the height
              # of the image. For example, if the desired aspect ratio is 4/3, the
              # corresponding float value should be 1.33333. If not specified, the
              # best possible crop is returned. The number of provided aspect ratios is
              # limited to a maximum of 16; any aspect ratios provided after the 16th are
              # ignored.
            3.14,
          ],
        },
        &quot;productSearchParams&quot;: { # Parameters for a product search request. # Parameters for product search.
          &quot;filter&quot;: &quot;A String&quot;, # The filtering expression. This can be used to restrict search results based
              # on Product labels. We currently support an AND of OR of key-value
              # expressions, where each expression within an OR must have the same key. An
              # &#x27;=&#x27; should be used to connect the key and value.
              #
              # For example, &quot;(color = red OR color = blue) AND brand = Google&quot; is
              # acceptable, but &quot;(color = red OR brand = Google)&quot; is not acceptable.
              # &quot;color: red&quot; is not acceptable because it uses a &#x27;:&#x27; instead of an &#x27;=&#x27;.
          &quot;productSet&quot;: &quot;A String&quot;, # The resource name of a ProductSet to be searched for similar images.
              #
              # Format is:
              # `projects/PROJECT_ID/locations/LOC_ID/productSets/PRODUCT_SET_ID`.
          &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon around the area of interest in the image.
              # If it is not specified, system discretion will be applied.
            &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
              { # A vertex represents a 2D point in the image.
                  # NOTE: the normalized vertex coordinates are relative to the original image
                  # and range from 0 to 1.
                &quot;y&quot;: 3.14, # Y coordinate.
                &quot;x&quot;: 3.14, # X coordinate.
              },
            ],
            &quot;vertices&quot;: [ # The bounding polygon vertices.
              { # A vertex represents a 2D point in the image.
                  # NOTE: the vertex coordinates are in the same scale as the original image.
                &quot;y&quot;: 42, # Y coordinate.
                &quot;x&quot;: 42, # X coordinate.
              },
            ],
          },
          &quot;productCategories&quot;: [ # The list of product categories to search in. Currently, we only consider
              # the first category, and either &quot;homegoods-v2&quot;, &quot;apparel-v2&quot;, &quot;toys-v2&quot;,
              # &quot;packagedgoods-v1&quot;, or &quot;general-v1&quot; should be specified. The legacy
              # categories &quot;homegoods&quot;, &quot;apparel&quot;, and &quot;toys&quot; are still supported but will
              # be deprecated. For new products, please use &quot;homegoods-v2&quot;, &quot;apparel-v2&quot;,
              # or &quot;toys-v2&quot; for better product search accuracy. It is recommended to
              # migrate existing products to these categories as well.
            &quot;A String&quot;,
          ],
        },
      },
    },
  ],
}
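
  # For illustration only (the bucket, object name, and feature values below are
  # hypothetical), a minimal request body could look like:
  #
  #   {
  #     &quot;requests&quot;: [
  #       {
  #         &quot;image&quot;: {&quot;source&quot;: {&quot;imageUri&quot;: &quot;gs://your-bucket/photo.jpg&quot;}},
  #         &quot;features&quot;: [{&quot;type&quot;: &quot;LABEL_DETECTION&quot;, &quot;maxResults&quot;: 5}],
  #       },
  #     ],
  #   }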

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Response to a batch image annotation request.
      &quot;responses&quot;: [ # Individual responses to image annotation requests within the batch.
        { # Response to an image annotation request.
          &quot;landmarkAnnotations&quot;: [ # If present, landmark detection has completed successfully.
            { # Set of detected entity features.
              &quot;description&quot;: &quot;A String&quot;, # Entity textual description, expressed in its `locale` language.
              &quot;topicality&quot;: 3.14, # The relevancy of the ICA (Image Content Annotation) label to the
                  # image. For example, the relevancy of &quot;tower&quot; is likely higher to an image
                  # containing the detected &quot;Eiffel Tower&quot; than to an image containing a
                  # detected distant towering building, even though the confidence that
                  # there is a tower in each image may be the same. Range [0, 1].
              &quot;properties&quot;: [ # Some entities may have optional user-supplied `Property` (name/value)
                  # fields, such as a score or string that qualifies the entity.
                { # A `Property` consists of a user-supplied name/value pair.
                  &quot;uint64Value&quot;: &quot;A String&quot;, # Value of numeric properties.
                  &quot;name&quot;: &quot;A String&quot;, # Name of the property.
                  &quot;value&quot;: &quot;A String&quot;, # Value of the property.
                },
              ],
              &quot;score&quot;: 3.14, # Overall score of the result. Range [0, 1].
              &quot;locations&quot;: [ # The location information for the detected entity. Multiple
                  # `LocationInfo` elements can be present because one location may
                  # indicate the location of the scene in the image, and another location
                  # may indicate the location of the place where the image was taken.
                  # Location information is usually present for landmarks.
                { # Detected entity location information.
                  &quot;latLng&quot;: { # An object representing a latitude/longitude pair. This is expressed as a pair # lat/long location coordinates.
                      # of doubles representing degrees latitude and degrees longitude. Unless
                      # specified otherwise, this must conform to the
                      # &lt;a href=&quot;http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf&quot;&gt;WGS84
                      # standard&lt;/a&gt;. Values must be within normalized ranges.
                    &quot;latitude&quot;: 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
                    &quot;longitude&quot;: 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
                  },
                },
              ],
              &quot;mid&quot;: &quot;A String&quot;, # Opaque entity ID. Some IDs may be available in
                  # [Google Knowledge Graph Search
                  # API](https://developers.google.com/knowledge-graph/).
              &quot;confidence&quot;: 3.14, # **Deprecated. Use `score` instead.**
                  # The accuracy of the entity detection in an image.
                  # For example, for an image in which the &quot;Eiffel Tower&quot; entity is detected,
                  # this field represents the confidence that there is a tower in the query
                  # image. Range [0, 1].
              &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # Image region to which this entity belongs. Not produced
                  # for `LABEL_DETECTION` features.
                &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                  { # A vertex represents a 2D point in the image.
                      # NOTE: the normalized vertex coordinates are relative to the original image
                      # and range from 0 to 1.
                    &quot;y&quot;: 3.14, # Y coordinate.
                    &quot;x&quot;: 3.14, # X coordinate.
                  },
                ],
                &quot;vertices&quot;: [ # The bounding polygon vertices.
                  { # A vertex represents a 2D point in the image.
                      # NOTE: the vertex coordinates are in the same scale as the original image.
                    &quot;y&quot;: 42, # Y coordinate.
                    &quot;x&quot;: 42, # X coordinate.
                  },
                ],
              },
              &quot;locale&quot;: &quot;A String&quot;, # The language code for the locale in which the entity textual
                  # `description` is expressed.
            },
          ],
          &quot;faceAnnotations&quot;: [ # If present, face detection has completed successfully.
            { # A face annotation object contains the results of face detection.
              &quot;angerLikelihood&quot;: &quot;A String&quot;, # Anger likelihood.
              &quot;landmarks&quot;: [ # Detected face landmarks.
                { # A face-specific landmark (for example, a face feature).
                  &quot;position&quot;: { # A 3D position in the image, used primarily for Face detection landmarks. # Face landmark position.
                      # A valid Position must have both x and y coordinates.
                      # The position coordinates are in the same scale as the original image.
                    &quot;x&quot;: 3.14, # X coordinate.
                    &quot;z&quot;: 3.14, # Z coordinate (or depth).
                    &quot;y&quot;: 3.14, # Y coordinate.
                  },
                  &quot;type&quot;: &quot;A String&quot;, # Face landmark type.
                },
              ],
              &quot;surpriseLikelihood&quot;: &quot;A String&quot;, # Surprise likelihood.
              &quot;joyLikelihood&quot;: &quot;A String&quot;, # Joy likelihood.
              &quot;landmarkingConfidence&quot;: 3.14, # Face landmarking confidence. Range [0, 1].
              &quot;detectionConfidence&quot;: 3.14, # Detection confidence. Range [0, 1].
              &quot;panAngle&quot;: 3.14, # Yaw angle, which indicates the leftward/rightward angle that the face is
                  # pointing relative to the vertical plane perpendicular to the image. Range
                  # [-180,180].
              &quot;underExposedLikelihood&quot;: &quot;A String&quot;, # Under-exposed likelihood.
              &quot;blurredLikelihood&quot;: &quot;A String&quot;, # Blurred likelihood.
              &quot;headwearLikelihood&quot;: &quot;A String&quot;, # Headwear likelihood.
              &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon around the face. The coordinates of the bounding box
                  # are in the original image&#x27;s scale.
                  # The bounding box is computed to &quot;frame&quot; the face in accordance with human
                  # expectations. It is based on the landmarker results.
                  # Note that one or more x and/or y coordinates may not be generated in the
                  # `BoundingPoly` (the polygon will be unbounded) if only a partial face
                  # appears in the image to be annotated.
                &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                  { # A vertex represents a 2D point in the image.
                      # NOTE: the normalized vertex coordinates are relative to the original image
                      # and range from 0 to 1.
                    &quot;y&quot;: 3.14, # Y coordinate.
                    &quot;x&quot;: 3.14, # X coordinate.
                  },
                ],
                &quot;vertices&quot;: [ # The bounding polygon vertices.
                  { # A vertex represents a 2D point in the image.
                      # NOTE: the vertex coordinates are in the same scale as the original image.
                    &quot;y&quot;: 42, # Y coordinate.
                    &quot;x&quot;: 42, # X coordinate.
                  },
                ],
              },
              &quot;rollAngle&quot;: 3.14, # Roll angle, which indicates the amount of clockwise/anti-clockwise rotation
                  # of the face relative to the image vertical about the axis perpendicular to
                  # the face. Range [-180,180].
              &quot;sorrowLikelihood&quot;: &quot;A String&quot;, # Sorrow likelihood.
              &quot;tiltAngle&quot;: 3.14, # Pitch angle, which indicates the upwards/downwards angle that the face is
                  # pointing relative to the image&#x27;s horizontal plane. Range [-180,180].
              &quot;fdBoundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The `fd_bounding_poly` bounding polygon is tighter than the
                  # `boundingPoly`, and encloses only the skin part of the face. Typically, it
                  # is used to eliminate the face from any image analysis that detects the
                  # &quot;amount of skin&quot; visible in an image. It is not based on the
                  # landmarker results, only on the initial face detection, hence
                  # the &lt;code&gt;fd&lt;/code&gt; (face detection) prefix.
                &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                  { # A vertex represents a 2D point in the image.
                      # NOTE: the normalized vertex coordinates are relative to the original image
                      # and range from 0 to 1.
                    &quot;y&quot;: 3.14, # Y coordinate.
                    &quot;x&quot;: 3.14, # X coordinate.
                  },
                ],
                &quot;vertices&quot;: [ # The bounding polygon vertices.
                  { # A vertex represents a 2D point in the image.
                      # NOTE: the vertex coordinates are in the same scale as the original image.
                    &quot;y&quot;: 42, # Y coordinate.
                    &quot;x&quot;: 42, # X coordinate.
                  },
                ],
              },
            },
          ],
          &quot;cropHintsAnnotation&quot;: { # Set of crop hints that are used to generate new crops when serving images. # If present, crop hints have completed successfully.
            &quot;cropHints&quot;: [ # Crop hint results.
              { # Single crop hint that is used to generate a new crop when serving an image.
                &quot;confidence&quot;: 3.14, # Confidence of this being a salient region. Range [0, 1].
                &quot;importanceFraction&quot;: 3.14, # Fraction of importance of this salient region with respect to the original
                    # image.
                &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon for the crop region. The coordinates of the bounding
                    # box are in the original image&#x27;s scale.
                  &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                    { # A vertex represents a 2D point in the image.
                        # NOTE: the normalized vertex coordinates are relative to the original image
                        # and range from 0 to 1.
                      &quot;y&quot;: 3.14, # Y coordinate.
                      &quot;x&quot;: 3.14, # X coordinate.
                    },
                  ],
                  &quot;vertices&quot;: [ # The bounding polygon vertices.
                    { # A vertex represents a 2D point in the image.
                        # NOTE: the vertex coordinates are in the same scale as the original image.
                      &quot;y&quot;: 42, # Y coordinate.
                      &quot;x&quot;: 42, # X coordinate.
                    },
                  ],
                },
              },
            ],
          },
          &quot;labelAnnotations&quot;: [ # If present, label detection has completed successfully.
            { # Set of detected entity features.
              &quot;description&quot;: &quot;A String&quot;, # Entity textual description, expressed in its `locale` language.
              &quot;topicality&quot;: 3.14, # The relevancy of the ICA (Image Content Annotation) label to the
                  # image. For example, the relevancy of &quot;tower&quot; is likely higher to an image
                  # containing the detected &quot;Eiffel Tower&quot; than to an image containing a
                  # detected distant towering building, even though the confidence that
                  # there is a tower in each image may be the same. Range [0, 1].
              &quot;properties&quot;: [ # Some entities may have optional user-supplied `Property` (name/value)
                  # fields, such as a score or string that qualifies the entity.
                { # A `Property` consists of a user-supplied name/value pair.
                  &quot;uint64Value&quot;: &quot;A String&quot;, # Value of numeric properties.
                  &quot;name&quot;: &quot;A String&quot;, # Name of the property.
                  &quot;value&quot;: &quot;A String&quot;, # Value of the property.
                },
              ],
              &quot;score&quot;: 3.14, # Overall score of the result. Range [0, 1].
              &quot;locations&quot;: [ # The location information for the detected entity. Multiple
                  # `LocationInfo` elements can be present because one location may
                  # indicate the location of the scene in the image, and another location
                  # may indicate the location of the place where the image was taken.
                  # Location information is usually present for landmarks.
                { # Detected entity location information.
                  &quot;latLng&quot;: { # An object representing a latitude/longitude pair. This is expressed as a pair # lat/long location coordinates.
                      # of doubles representing degrees latitude and degrees longitude. Unless
                      # specified otherwise, this must conform to the
                      # &lt;a href=&quot;http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf&quot;&gt;WGS84
                      # standard&lt;/a&gt;. Values must be within normalized ranges.
                    &quot;latitude&quot;: 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
                    &quot;longitude&quot;: 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
                  },
                },
              ],
              &quot;mid&quot;: &quot;A String&quot;, # Opaque entity ID. Some IDs may be available in
                  # [Google Knowledge Graph Search
                  # API](https://developers.google.com/knowledge-graph/).
              &quot;confidence&quot;: 3.14, # **Deprecated. Use `score` instead.**
                  # The accuracy of the entity detection in an image.
                  # For example, for an image in which the &quot;Eiffel Tower&quot; entity is detected,
                  # this field represents the confidence that there is a tower in the query
                  # image. Range [0, 1].
              &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # Image region to which this entity belongs. Not produced
                  # for `LABEL_DETECTION` features.
                &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                  { # A vertex represents a 2D point in the image.
                      # NOTE: the normalized vertex coordinates are relative to the original image
                      # and range from 0 to 1.
                    &quot;y&quot;: 3.14, # Y coordinate.
                    &quot;x&quot;: 3.14, # X coordinate.
                  },
                ],
                &quot;vertices&quot;: [ # The bounding polygon vertices.
                  { # A vertex represents a 2D point in the image.
                      # NOTE: the vertex coordinates are in the same scale as the original image.
                    &quot;y&quot;: 42, # Y coordinate.
                    &quot;x&quot;: 42, # X coordinate.
                  },
                ],
              },
              &quot;locale&quot;: &quot;A String&quot;, # The language code for the locale in which the entity textual
                  # `description` is expressed.
            },
          ],
          &quot;productSearchResults&quot;: { # Results for a product search request. # If present, product search has completed successfully.
            &quot;productGroupedResults&quot;: [ # List of results grouped by products detected in the query image. Each entry
                # corresponds to one bounding polygon in the query image, and contains the
                # matching products specific to that region. There may be duplicate product
                # matches in the union of all the per-product results.
              { # Information about the products similar to a single product in a query
                  # image.
                &quot;objectAnnotations&quot;: [ # List of generic predictions for the object in the bounding box.
                  { # Prediction for what the object in the bounding box is.
                    &quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more
                        # information, see
                        # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
                    &quot;mid&quot;: &quot;A String&quot;, # Object ID that should align with EntityAnnotation mid.
                    &quot;name&quot;: &quot;A String&quot;, # Object name, expressed in its `language_code` language.
                    &quot;score&quot;: 3.14, # Score of the result. Range [0, 1].
                  },
                ],
                &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon around the product detected in the query image.
                  &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                    { # A vertex represents a 2D point in the image.
                        # NOTE: the normalized vertex coordinates are relative to the original image
                        # and range from 0 to 1.
                      &quot;y&quot;: 3.14, # Y coordinate.
                      &quot;x&quot;: 3.14, # X coordinate.
                    },
                  ],
                  &quot;vertices&quot;: [ # The bounding polygon vertices.
                    { # A vertex represents a 2D point in the image.
                        # NOTE: the vertex coordinates are in the same scale as the original image.
                      &quot;y&quot;: 42, # Y coordinate.
                      &quot;x&quot;: 42, # X coordinate.
                    },
                  ],
                },
                &quot;results&quot;: [ # List of results, one for each product match.
                  { # Information about a product.
                    &quot;image&quot;: &quot;A String&quot;, # The resource name of the image from the product that is the closest match
                        # to the query.
                    &quot;product&quot;: { # A Product contains ReferenceImages. # The Product.
                      &quot;displayName&quot;: &quot;A String&quot;, # The user-provided name for this Product. Must not be empty. Must be at most
                          # 4096 characters long.
                      &quot;description&quot;: &quot;A String&quot;, # User-provided metadata to be stored with this product. Must be at most 4096
                          # characters long.
                      &quot;productCategory&quot;: &quot;A String&quot;, # Immutable. The category for the product identified by the reference image. This should
                          # be either &quot;homegoods-v2&quot;, &quot;apparel-v2&quot;, or &quot;toys-v2&quot;. The legacy categories
                          # &quot;homegoods&quot;, &quot;apparel&quot;, and &quot;toys&quot; are still supported, but these should
                          # not be used for new products.
                      &quot;productLabels&quot;: [ # Key-value pairs that can be attached to a product. At query time,
                          # constraints can be specified based on the product_labels.
                          #
                          # Note that integer values can be provided as strings, e.g. &quot;1199&quot;. Only
                          # strings with integer values can match a range-based restriction which is
                          # to be supported soon.
                          #
                          # Multiple values can be assigned to the same key. One product may have up to
                          # 500 product_labels.
                          #
                          # Notice that the total number of distinct product_labels over all products
                          # in one ProductSet cannot exceed 1M, otherwise the product search pipeline
                          # will refuse to work for that ProductSet.
                        { # A product label represented as a key-value pair.
                          &quot;key&quot;: &quot;A String&quot;, # The key of the label attached to the product. Cannot be empty and cannot
                              # exceed 128 bytes.
                          &quot;value&quot;: &quot;A String&quot;, # The value of the label attached to the product. Cannot be empty and
                              # cannot exceed 128 bytes.
                        },
                      ],
                      &quot;name&quot;: &quot;A String&quot;, # The resource name of the product.
                          #
                          # Format is:
                          # `projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID`.
                          #
                          # This field is ignored when creating a product.
                    },
                    &quot;score&quot;: 3.14, # A confidence level on the match, ranging from 0 (no confidence) to
                        # 1 (full confidence).
                  },
                ],
              },
            ],
            &quot;results&quot;: [ # List of results, one for each product match.
              { # Information about a product.
                &quot;image&quot;: &quot;A String&quot;, # The resource name of the image from the product that is the closest match
                    # to the query.
                &quot;product&quot;: { # A Product contains ReferenceImages. # The Product.
                  &quot;displayName&quot;: &quot;A String&quot;, # The user-provided name for this Product. Must not be empty. Must be at most
                      # 4096 characters long.
                  &quot;description&quot;: &quot;A String&quot;, # User-provided metadata to be stored with this product. Must be at most 4096
                      # characters long.
                  &quot;productCategory&quot;: &quot;A String&quot;, # Immutable. The category for the product identified by the reference image. This should
                      # be either &quot;homegoods-v2&quot;, &quot;apparel-v2&quot;, or &quot;toys-v2&quot;. The legacy categories
                      # &quot;homegoods&quot;, &quot;apparel&quot;, and &quot;toys&quot; are still supported, but these should
                      # not be used for new products.
                  &quot;productLabels&quot;: [ # Key-value pairs that can be attached to a product. At query time,
                      # constraints can be specified based on the product_labels.
                      #
                      # Note that integer values can be provided as strings, e.g. &quot;1199&quot;. Only
                      # strings with integer values can match a range-based restriction which is
                      # to be supported soon.
                      #
                      # Multiple values can be assigned to the same key. One product may have up to
                      # 500 product_labels.
                      #
                      # Notice that the total number of distinct product_labels over all products
                      # in one ProductSet cannot exceed 1M, otherwise the product search pipeline
                      # will refuse to work for that ProductSet.
                    { # A product label represented as a key-value pair.
                      &quot;key&quot;: &quot;A String&quot;, # The key of the label attached to the product. Cannot be empty and cannot
                          # exceed 128 bytes.
                      &quot;value&quot;: &quot;A String&quot;, # The value of the label attached to the product. Cannot be empty and
                          # cannot exceed 128 bytes.
                    },
                  ],
                  &quot;name&quot;: &quot;A String&quot;, # The resource name of the product.
                      #
                      # Format is:
                      # `projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID`.
                      #
                      # This field is ignored when creating a product.
                },
                &quot;score&quot;: 3.14, # A confidence level on the match, ranging from 0 (no confidence) to
                    # 1 (full confidence).
              },
            ],
            &quot;indexTime&quot;: &quot;A String&quot;, # Timestamp of the index which provided these results. Products added to the
                # product set and products removed from the product set after this time are
                # not reflected in the current results.
          },
          &quot;localizedObjectAnnotations&quot;: [ # If present, localized object detection has completed successfully.
              # This will be sorted descending by confidence score.
            { # Set of detected objects with bounding boxes.
              &quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more
                  # information, see
                  # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
              &quot;mid&quot;: &quot;A String&quot;, # Object ID that should align with EntityAnnotation mid.
              &quot;name&quot;: &quot;A String&quot;, # Object name, expressed in its `language_code` language.
              &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # Image region to which this object belongs. This must be populated.
                &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                  { # A vertex represents a 2D point in the image.
                      # NOTE: the normalized vertex coordinates are relative to the original image
                      # and range from 0 to 1.
                    &quot;y&quot;: 3.14, # Y coordinate.
                    &quot;x&quot;: 3.14, # X coordinate.
                  },
                ],
                &quot;vertices&quot;: [ # The bounding polygon vertices.
                  { # A vertex represents a 2D point in the image.
                      # NOTE: the vertex coordinates are in the same scale as the original image.
                    &quot;y&quot;: 42, # Y coordinate.
                    &quot;x&quot;: 42, # X coordinate.
                  },
                ],
              },
              &quot;score&quot;: 3.14, # Score of the result. Range [0, 1].
            },
          ],
          &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for # If set, represents the error message for the operation.
              # Note that filled-in image annotations are guaranteed to be
              # correct, even when `error` is set.
              # different programming environments, including REST APIs and RPC APIs. It is
              # used by [gRPC](https://github.com/grpc). Each `Status` message contains
              # three pieces of data: error code, error message, and error details.
              #
              # You can find out more about this error model and how to work with it in the
              # [API Design Guide](https://cloud.google.com/apis/design/errors).
            &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
            &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
                # user-facing error message should be localized and sent in the
                # google.rpc.Status.details field, or localized by the client.
            &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of
                # message types for APIs to use.
              {
                &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
              },
            ],
          },
          &quot;fullTextAnnotation&quot;: { # TextAnnotation contains a structured representation of OCR extracted text. # If present, text (OCR) detection or document (OCR) text detection has
              # completed successfully.
              # This annotation provides the structural hierarchy for the OCR detected
              # text.
              # The hierarchy of an OCR extracted text structure is like this:
              #     TextAnnotation -&gt; Page -&gt; Block -&gt; Paragraph -&gt; Word -&gt; Symbol
              # Each structural component, starting from Page, may further have their own
              # properties. Properties describe detected languages, breaks etc.. Please refer
              # to the TextAnnotation.TextProperty message definition below for more
              # detail.
            &quot;pages&quot;: [ # List of pages detected by OCR.
              { # Detected page from OCR.
                &quot;confidence&quot;: 3.14, # Confidence of the OCR results on the page. Range [0, 1].
                &quot;height&quot;: 42, # Page height. For PDFs the unit is points. For images (including
                    # TIFFs) the unit is pixels.
                &quot;width&quot;: 42, # Page width. For PDFs the unit is points. For images (including
                    # TIFFs) the unit is pixels.
                &quot;blocks&quot;: [ # List of blocks of text, images etc on this page.
                  { # Logical element on the page.
                    &quot;property&quot;: { # Additional information detected on the structural component. # Additional information detected for the block.
                      &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
                        { # Detected language for a structural component.
                          &quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more
                              # information, see
                              # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
                          &quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
                        },
                      ],
                      &quot;detectedBreak&quot;: { # Detected start or end of a structural component. # Detected start or end of a text segment.
                        &quot;type&quot;: &quot;A String&quot;, # Detected break type.
                        &quot;isPrefix&quot;: True or False, # True if break prepends the element.
                      },
                    },
                    &quot;blockType&quot;: &quot;A String&quot;, # Detected block type (text, image etc) for this block.
                    &quot;boundingBox&quot;: { # A bounding polygon for the detected image annotation. # The bounding box for the block.
                        # The vertices are in the order of top-left, top-right, bottom-right,
                        # bottom-left. When a rotation of the bounding box is detected the rotation
                        # is represented as around the top-left corner as defined when the text is
                        # read in the &#x27;natural&#x27; orientation.
                        # For example:
                        #
                        # * when the text is horizontal it might look like:
                        #
                        #     0----1
                        #     |    |
                        #     3----2
                        #
                        # * when it&#x27;s rotated 180 degrees around the top-left corner it becomes:
                        #
                        #     2----3
                        #     |    |
                        #     1----0
                        #
                        # and the vertex order will still be (0, 1, 2, 3).
                      &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                        { # A vertex represents a 2D point in the image.
                            # NOTE: the normalized vertex coordinates are relative to the original image
                            # and range from 0 to 1.
                          &quot;y&quot;: 3.14, # Y coordinate.
                          &quot;x&quot;: 3.14, # X coordinate.
                        },
                      ],
                      &quot;vertices&quot;: [ # The bounding polygon vertices.
                        { # A vertex represents a 2D point in the image.
                            # NOTE: the vertex coordinates are in the same scale as the original image.
                          &quot;y&quot;: 42, # Y coordinate.
                          &quot;x&quot;: 42, # X coordinate.
                        },
                      ],
                    },
                    &quot;confidence&quot;: 3.14, # Confidence of the OCR results on the block. Range [0, 1].
                    &quot;paragraphs&quot;: [ # List of paragraphs in this block (if this block is of type text).
                      { # Structural unit of text representing a number of words in certain order.
                        &quot;property&quot;: { # Additional information detected on the structural component. # Additional information detected for the paragraph.
                          &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
                            { # Detected language for a structural component.
                              &quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more
                                  # information, see
                                  # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
                              &quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
                            },
                          ],
                          &quot;detectedBreak&quot;: { # Detected start or end of a structural component. # Detected start or end of a text segment.
                            &quot;type&quot;: &quot;A String&quot;, # Detected break type.
                            &quot;isPrefix&quot;: True or False, # True if break prepends the element.
                          },
                        },
                        &quot;boundingBox&quot;: { # A bounding polygon for the detected image annotation. # The bounding box for the paragraph.
                            # The vertices are in the order of top-left, top-right, bottom-right,
                            # bottom-left. When a rotation of the bounding box is detected the rotation
                            # is represented as around the top-left corner as defined when the text is
                            # read in the &#x27;natural&#x27; orientation.
                            # For example:
                            # * when the text is horizontal it might look like:
                            #     0----1
                            #     |    |
                            #     3----2
                            # * when it&#x27;s rotated 180 degrees around the top-left corner it becomes:
                            #     2----3
                            #     |    |
                            #     1----0
                            # and the vertex order will still be (0, 1, 2, 3).
                          &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                            { # A vertex represents a 2D point in the image.
                                # NOTE: the normalized vertex coordinates are relative to the original image
                                # and range from 0 to 1.
                              &quot;y&quot;: 3.14, # Y coordinate.
                              &quot;x&quot;: 3.14, # X coordinate.
                            },
                          ],
                          &quot;vertices&quot;: [ # The bounding polygon vertices.
                            { # A vertex represents a 2D point in the image.
                                # NOTE: the vertex coordinates are in the same scale as the original image.
                              &quot;y&quot;: 42, # Y coordinate.
                              &quot;x&quot;: 42, # X coordinate.
                            },
                          ],
                        },
                        &quot;confidence&quot;: 3.14, # Confidence of the OCR results for the paragraph. Range [0, 1].
                        &quot;words&quot;: [ # List of all words in this paragraph.
                          { # A word representation.
                            &quot;boundingBox&quot;: { # A bounding polygon for the detected image annotation. # The bounding box for the word.
                                # The vertices are in the order of top-left, top-right, bottom-right,
                                # bottom-left. When a rotation of the bounding box is detected the rotation
                                # is represented as around the top-left corner as defined when the text is
                                # read in the &#x27;natural&#x27; orientation.
                                # For example:
                                # * when the text is horizontal it might look like:
                                #     0----1
                                #     |    |
                                #     3----2
                                # * when it&#x27;s rotated 180 degrees around the top-left corner it becomes:
                                #     2----3
                                #     |    |
                                #     1----0
                                # and the vertex order will still be (0, 1, 2, 3).
                              &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                                { # A vertex represents a 2D point in the image.
                                    # NOTE: the normalized vertex coordinates are relative to the original image
                                    # and range from 0 to 1.
                                  &quot;y&quot;: 3.14, # Y coordinate.
                                  &quot;x&quot;: 3.14, # X coordinate.
                                },
                              ],
                              &quot;vertices&quot;: [ # The bounding polygon vertices.
                                { # A vertex represents a 2D point in the image.
                                    # NOTE: the vertex coordinates are in the same scale as the original image.
                                  &quot;y&quot;: 42, # Y coordinate.
                                  &quot;x&quot;: 42, # X coordinate.
                                },
                              ],
                            },
                            &quot;confidence&quot;: 3.14, # Confidence of the OCR results for the word. Range [0, 1].
                            &quot;symbols&quot;: [ # List of symbols in the word.
                                # The order of the symbols follows the natural reading order.
                              { # A single symbol representation.
                                &quot;property&quot;: { # Additional information detected on the structural component. # Additional information detected for the symbol.
                                  &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
                                    { # Detected language for a structural component.
                                      &quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more
                                          # information, see
                                          # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
                                      &quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
                                    },
                                  ],
                                  &quot;detectedBreak&quot;: { # Detected start or end of a structural component. # Detected start or end of a text segment.
                                    &quot;type&quot;: &quot;A String&quot;, # Detected break type.
                                    &quot;isPrefix&quot;: True or False, # True if break prepends the element.
                                  },
                                },
                                &quot;boundingBox&quot;: { # A bounding polygon for the detected image annotation. # The bounding box for the symbol.
                                    # The vertices are in the order of top-left, top-right, bottom-right,
                                    # bottom-left. When a rotation of the bounding box is detected the rotation
                                    # is represented as around the top-left corner as defined when the text is
                                    # read in the &#x27;natural&#x27; orientation.
                                    # For example:
                                    # * when the text is horizontal it might look like:
                                    #     0----1
                                    #     |    |
                                    #     3----2
                                    # * when it&#x27;s rotated 180 degrees around the top-left corner it becomes:
                                    #     2----3
                                    #     |    |
                                    #     1----0
                                    # and the vertex order will still be (0, 1, 2, 3).
                                  &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
                                    { # A vertex represents a 2D point in the image.
                                        # NOTE: the normalized vertex coordinates are relative to the original image
                                        # and range from 0 to 1.
                                      &quot;y&quot;: 3.14, # Y coordinate.
                                      &quot;x&quot;: 3.14, # X coordinate.
                                    },
                                  ],
                                  &quot;vertices&quot;: [ # The bounding polygon vertices.
                                    { # A vertex represents a 2D point in the image.
                                        # NOTE: the vertex coordinates are in the same scale as the original image.
                                      &quot;y&quot;: 42, # Y coordinate.
                                      &quot;x&quot;: 42, # X coordinate.
                                    },
                                  ],
                                },
                                &quot;confidence&quot;: 3.14, # Confidence of the OCR results for the symbol. Range [0, 1].
                                &quot;text&quot;: &quot;A String&quot;, # The actual UTF-8 representation of the symbol.
                              },
                            ],
                            &quot;property&quot;: { # Additional information detected on the structural component. # Additional information detected for the word.
                              &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
                                { # Detected language for a structural component.
                                  &quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more
                                      # information, see
                                      # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
                                  &quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
                                },
                              ],
                              &quot;detectedBreak&quot;: { # Detected start or end of a structural component. # Detected start or end of a text segment.
                                &quot;type&quot;: &quot;A String&quot;, # Detected break type.
                                &quot;isPrefix&quot;: True or False, # True if break prepends the element.
                              },
                            },
                          },
                        ],
                      },
                    ],
                  },
                ],
                &quot;property&quot;: { # Additional information detected on the structural component. # Additional information detected on the page.
                  &quot;detectedLanguages&quot;: [ # A list of detected languages together with confidence.
                    { # Detected language for a structural component.
                      &quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code, such as &quot;en-US&quot; or &quot;sr-Latn&quot;. For more
                          # information, see
                          # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
                      &quot;confidence&quot;: 3.14, # Confidence of detected language. Range [0, 1].
                    },
                  ],
                  &quot;detectedBreak&quot;: { # Detected start or end of a structural component. # Detected start or end of a text segment.
                    &quot;type&quot;: &quot;A String&quot;, # Detected break type.
                    &quot;isPrefix&quot;: True or False, # True if break prepends the element.
                  },
                },
              },
            ],
            &quot;text&quot;: &quot;A String&quot;, # UTF-8 text detected on the pages.
          },
          &quot;textAnnotations&quot;: [ # If present, text (OCR) detection has completed successfully.
            { # Set of detected entity features.
              &quot;description&quot;: &quot;A String&quot;, # Entity textual description, expressed in its `locale` language.
              &quot;topicality&quot;: 3.14, # The relevancy of the ICA (Image Content Annotation) label to the
                  # image. For example, the relevancy of &quot;tower&quot; is likely higher to an image
                  # containing the detected &quot;Eiffel Tower&quot; than to an image containing a
                  # detected distant towering building, even though the confidence that
                  # there is a tower in each image may be the same. Range [0, 1].
              &quot;properties&quot;: [ # Some entities may have optional user-supplied `Property` (name/value)
                  # fields, such as a score or string that qualifies the entity.
911 { # A `Property` consists of a user-supplied name/value pair.
912 &quot;uint64Value&quot;: &quot;A String&quot;, # Value of numeric properties.
913 &quot;name&quot;: &quot;A String&quot;, # Name of the property.
914 &quot;value&quot;: &quot;A String&quot;, # Value of the property.
915 },
916 ],
917 &quot;score&quot;: 3.14, # Overall score of the result. Range [0, 1].
918 &quot;locations&quot;: [ # The location information for the detected entity. Multiple
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700919 # `LocationInfo` elements can be present because one location may
920 # indicate the location of the scene in the image, and another location
921 # may indicate the location of the place where the image was taken.
922 # Location information is usually present for landmarks.
923 { # Detected entity location information.
Bu Sun Kim65020912020-05-20 12:08:20 -0700924 &quot;latLng&quot;: { # An object representing a latitude/longitude pair. This is expressed as a pair # lat/long location coordinates.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700925 # of doubles representing degrees latitude and degrees longitude. Unless
926 # specified otherwise, this must conform to the
Bu Sun Kim65020912020-05-20 12:08:20 -0700927 # &lt;a href=&quot;http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf&quot;&gt;WGS84
Dan O'Mearadd494642020-05-01 07:42:23 -0700928 # standard&lt;/a&gt;. Values must be within normalized ranges.
Bu Sun Kim65020912020-05-20 12:08:20 -0700929 &quot;latitude&quot;: 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
930 &quot;longitude&quot;: 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700931 },
932 },
933 ],
Bu Sun Kim65020912020-05-20 12:08:20 -0700934 &quot;mid&quot;: &quot;A String&quot;, # Opaque entity ID. Some IDs may be available in
935 # [Google Knowledge Graph Search
936 # API](https://developers.google.com/knowledge-graph/).
937 &quot;confidence&quot;: 3.14, # **Deprecated. Use `score` instead.**
938 # The accuracy of the entity detection in an image.
939 # For example, for an image in which the &quot;Eiffel Tower&quot; entity is detected,
940 # this field represents the confidence that there is a tower in the query
941 # image. Range [0, 1].
942 &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # Image region to which this entity belongs. Not produced
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700943 # for `LABEL_DETECTION` features.
Bu Sun Kim65020912020-05-20 12:08:20 -0700944 &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700945 { # A vertex represents a 2D point in the image.
946 # NOTE: the normalized vertex coordinates are relative to the original image
947 # and range from 0 to 1.
Bu Sun Kim65020912020-05-20 12:08:20 -0700948 &quot;y&quot;: 3.14, # Y coordinate.
949 &quot;x&quot;: 3.14, # X coordinate.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700950 },
951 ],
Bu Sun Kim65020912020-05-20 12:08:20 -0700952 &quot;vertices&quot;: [ # The bounding polygon vertices.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700953 { # A vertex represents a 2D point in the image.
954 # NOTE: the vertex coordinates are in the same scale as the original image.
Bu Sun Kim65020912020-05-20 12:08:20 -0700955 &quot;y&quot;: 42, # Y coordinate.
956 &quot;x&quot;: 42, # X coordinate.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700957 },
958 ],
959 },
Bu Sun Kim65020912020-05-20 12:08:20 -0700960 &quot;locale&quot;: &quot;A String&quot;, # The language code for the locale in which the entity textual
961 # `description` is expressed.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700962 },
963 ],
Bu Sun Kim65020912020-05-20 12:08:20 -0700964 &quot;imagePropertiesAnnotation&quot;: { # Stores image properties, such as dominant colors. # If present, image properties were extracted successfully.
965 &quot;dominantColors&quot;: { # Set of dominant colors and their corresponding scores. # If present, dominant colors completed successfully.
966 &quot;colors&quot;: [ # RGB color values with their score and pixel fraction.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700967 { # Color information consists of RGB channels, score, and the fraction of
968 # the image that the color occupies in the image.
Bu Sun Kim65020912020-05-20 12:08:20 -0700969 &quot;pixelFraction&quot;: 3.14, # The fraction of pixels the color occupies in the image.
970 # Value in range [0, 1].
971 &quot;color&quot;: { # Represents a color in the RGBA color space. This representation is designed # RGB components of the color.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700972 # for simplicity of conversion to/from color representations in various
973 # languages over compactness; for example, the fields of this representation
Bu Sun Kim65020912020-05-20 12:08:20 -0700974 # can be trivially provided to the constructor of &quot;java.awt.Color&quot; in Java; it
975 # can also be trivially provided to UIColor&#x27;s &quot;+colorWithRed:green:blue:alpha&quot;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700976 # method in iOS; and, with just a little work, it can be easily formatted into
Bu Sun Kim65020912020-05-20 12:08:20 -0700977 # a CSS &quot;rgba()&quot; string in JavaScript, as well.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700978 #
979 # Note: this proto does not carry information about the absolute color space
980 # that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB,
981 # DCI-P3, BT.2020, etc.). By default, applications SHOULD assume the sRGB color
982 # space.
983 #
984 # Example (Java):
985 #
986 # import com.google.type.Color;
987 #
988 # // ...
989 # public static java.awt.Color fromProto(Color protocolor) {
990 # float alpha = protocolor.hasAlpha()
991 # ? protocolor.getAlpha().getValue()
992 # : 1.0f;
993 #
994 # return new java.awt.Color(
995 # protocolor.getRed(),
996 # protocolor.getGreen(),
997 # protocolor.getBlue(),
998 # alpha);
999 # }
1000 #
1001 # public static Color toProto(java.awt.Color color) {
1002 # float red = (float) color.getRed();
1003 # float green = (float) color.getGreen();
1004 # float blue = (float) color.getBlue();
1005 # float denominator = 255.0f;
1006 # Color.Builder resultBuilder =
1007 # Color
1008 # .newBuilder()
1009 # .setRed(red / denominator)
1010 # .setGreen(green / denominator)
1011 # .setBlue(blue / denominator);
1012 # int alpha = color.getAlpha();
1013 # if (alpha != 255) {
1014 # resultBuilder.setAlpha(
1015 # FloatValue
1016 # .newBuilder()
1017 # .setValue(((float) alpha) / denominator)
1018 # .build());
1019 # }
1020 # return resultBuilder.build();
1021 # }
1022 # // ...
1023 #
1024 # Example (iOS / Obj-C):
1025 #
1026 # // ...
1027 # static UIColor* fromProto(Color* protocolor) {
1028 # float red = [protocolor red];
1029 # float green = [protocolor green];
1030 # float blue = [protocolor blue];
1031 # FloatValue* alpha_wrapper = [protocolor alpha];
1032 # float alpha = 1.0;
1033 # if (alpha_wrapper != nil) {
1034 # alpha = [alpha_wrapper value];
1035 # }
1036 # return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
1037 # }
1038 #
1039 # static Color* toProto(UIColor* color) {
1040 # CGFloat red, green, blue, alpha;
Dan O'Mearadd494642020-05-01 07:42:23 -07001041 # if (![color getRed:&amp;red green:&amp;green blue:&amp;blue alpha:&amp;alpha]) {
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001042 # return nil;
1043 # }
1044 # Color* result = [[Color alloc] init];
1045 # [result setRed:red];
1046 # [result setGreen:green];
1047 # [result setBlue:blue];
Dan O'Mearadd494642020-05-01 07:42:23 -07001048 # if (alpha &lt;= 0.9999) {
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001049 # [result setAlpha:floatWrapperWithValue(alpha)];
1050 # }
1051 # [result autorelease];
1052 # return result;
1053 # }
1054 # // ...
1055 #
1056 # Example (JavaScript):
1057 #
1058 # // ...
1059 #
1060 # var protoToCssColor = function(rgb_color) {
1061 # var redFrac = rgb_color.red || 0.0;
1062 # var greenFrac = rgb_color.green || 0.0;
1063 # var blueFrac = rgb_color.blue || 0.0;
1064 # var red = Math.floor(redFrac * 255);
1065 # var green = Math.floor(greenFrac * 255);
1066 # var blue = Math.floor(blueFrac * 255);
1067 #
Bu Sun Kim65020912020-05-20 12:08:20 -07001068 # if (!(&#x27;alpha&#x27; in rgb_color)) {
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001069 # return rgbToCssColor_(red, green, blue);
1070 # }
1071 #
1072 # var alphaFrac = rgb_color.alpha.value || 0.0;
Bu Sun Kim65020912020-05-20 12:08:20 -07001073 # var rgbParams = [red, green, blue].join(&#x27;,&#x27;);
1074 # return [&#x27;rgba(&#x27;, rgbParams, &#x27;,&#x27;, alphaFrac, &#x27;)&#x27;].join(&#x27;&#x27;);
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001075 # };
1076 #
1077 # var rgbToCssColor_ = function(red, green, blue) {
Dan O'Mearadd494642020-05-01 07:42:23 -07001078 # var rgbNumber = new Number((red &lt;&lt; 16) | (green &lt;&lt; 8) | blue);
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001079 # var hexString = rgbNumber.toString(16);
1080 # var missingZeros = 6 - hexString.length;
Bu Sun Kim65020912020-05-20 12:08:20 -07001081 # var resultBuilder = [&#x27;#&#x27;];
Dan O'Mearadd494642020-05-01 07:42:23 -07001082 # for (var i = 0; i &lt; missingZeros; i++) {
Bu Sun Kim65020912020-05-20 12:08:20 -07001083 # resultBuilder.push(&#x27;0&#x27;);
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001084 # }
1085 # resultBuilder.push(hexString);
Bu Sun Kim65020912020-05-20 12:08:20 -07001086 # return resultBuilder.join(&#x27;&#x27;);
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001087 # };
1088 #
1089 # // ...
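 # Example (Python, editor&#x27;s illustrative sketch): in the JSON responses
 # returned by this client the color is a plain dict and the `alpha` wrapper,
 # when present, is serialized as a number, so a small helper along these
 # lines (the function name is hypothetical) can produce a CSS color string:
 #
 #     def proto_to_css_color(rgb_color):
 #         red = int(rgb_color.get(&#x27;red&#x27;, 0.0) * 255)
 #         green = int(rgb_color.get(&#x27;green&#x27;, 0.0) * 255)
 #         blue = int(rgb_color.get(&#x27;blue&#x27;, 0.0) * 255)
 #         if &#x27;alpha&#x27; not in rgb_color:
 #             return &#x27;rgb(%d, %d, %d)&#x27; % (red, green, blue)
 #         return &#x27;rgba(%d, %d, %d, %g)&#x27; % (red, green, blue, rgb_color[&#x27;alpha&#x27;])
 #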
Bu Sun Kim65020912020-05-20 12:08:20 -07001090 &quot;red&quot;: 3.14, # The amount of red in the color as a value in the interval [0, 1].
1091 &quot;green&quot;: 3.14, # The amount of green in the color as a value in the interval [0, 1].
1092 &quot;blue&quot;: 3.14, # The amount of blue in the color as a value in the interval [0, 1].
1093 &quot;alpha&quot;: 3.14, # The fraction of this color that should be applied to the pixel. That is,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001094 # the final pixel color is defined by the equation:
1095 #
1096 # pixel color = alpha * (this color) + (1.0 - alpha) * (background color)
1097 #
1098 # This means that a value of 1.0 corresponds to a solid color, whereas
1099 # a value of 0.0 corresponds to a completely transparent color. This
1100 # uses a wrapper message rather than a simple float scalar so that it is
1101 # possible to distinguish between a default value and the value being unset.
1102 # If omitted, this color object is to be rendered as a solid color
1103 # (as if the alpha value had been explicitly given with a value of 1.0).
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001104 },
Bu Sun Kim65020912020-05-20 12:08:20 -07001105 &quot;score&quot;: 3.14, # Image-specific score for this color. Value in range [0, 1].
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001106 },
1107 ],
1108 },
1109 },
Bu Sun Kim65020912020-05-20 12:08:20 -07001110 &quot;logoAnnotations&quot;: [ # If present, logo detection has completed successfully.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001111 { # Set of detected entity features.
Bu Sun Kim65020912020-05-20 12:08:20 -07001112 &quot;description&quot;: &quot;A String&quot;, # Entity textual description, expressed in its `locale` language.
1113 &quot;topicality&quot;: 3.14, # The relevancy of the ICA (Image Content Annotation) label to the
1114 # image. For example, the relevancy of &quot;tower&quot; is likely higher to an image
1115 # containing the detected &quot;Eiffel Tower&quot; than to an image containing a
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001116 # detected distant towering building, even though the confidence that
1117 # there is a tower in each image may be the same. Range [0, 1].
Bu Sun Kim65020912020-05-20 12:08:20 -07001118 &quot;properties&quot;: [ # Some entities may have optional user-supplied `Property` (name/value)
1119 # fields, such as a score or string that qualifies the entity.
1120 { # A `Property` consists of a user-supplied name/value pair.
1121 &quot;uint64Value&quot;: &quot;A String&quot;, # Value of numeric properties.
1122 &quot;name&quot;: &quot;A String&quot;, # Name of the property.
1123 &quot;value&quot;: &quot;A String&quot;, # Value of the property.
1124 },
1125 ],
1126 &quot;score&quot;: 3.14, # Overall score of the result. Range [0, 1].
1127 &quot;locations&quot;: [ # The location information for the detected entity. Multiple
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001128 # `LocationInfo` elements can be present because one location may
1129 # indicate the location of the scene in the image, and another location
1130 # may indicate the location of the place where the image was taken.
1131 # Location information is usually present for landmarks.
1132 { # Detected entity location information.
Bu Sun Kim65020912020-05-20 12:08:20 -07001133 &quot;latLng&quot;: { # An object representing a latitude/longitude pair. This is expressed as a pair # lat/long location coordinates.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001134 # of doubles representing degrees latitude and degrees longitude. Unless
1135 # specified otherwise, this must conform to the
Bu Sun Kim65020912020-05-20 12:08:20 -07001136 # &lt;a href=&quot;http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf&quot;&gt;WGS84
Dan O'Mearadd494642020-05-01 07:42:23 -07001137 # standard&lt;/a&gt;. Values must be within normalized ranges.
Bu Sun Kim65020912020-05-20 12:08:20 -07001138 &quot;latitude&quot;: 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
1139 &quot;longitude&quot;: 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001140 },
1141 },
1142 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07001143 &quot;mid&quot;: &quot;A String&quot;, # Opaque entity ID. Some IDs may be available in
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001144 # [Google Knowledge Graph Search
1145 # API](https://developers.google.com/knowledge-graph/).
Bu Sun Kim65020912020-05-20 12:08:20 -07001146 &quot;confidence&quot;: 3.14, # **Deprecated. Use `score` instead.**
1147 # The accuracy of the entity detection in an image.
1148 # For example, for an image in which the &quot;Eiffel Tower&quot; entity is detected,
1149 # this field represents the confidence that there is a tower in the query
1150 # image. Range [0, 1].
1151 &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # Image region to which this entity belongs. Not produced
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001152 # for `LABEL_DETECTION` features.
Bu Sun Kim65020912020-05-20 12:08:20 -07001153 &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001154 { # A vertex represents a 2D point in the image.
1155 # NOTE: the normalized vertex coordinates are relative to the original image
1156 # and range from 0 to 1.
Bu Sun Kim65020912020-05-20 12:08:20 -07001157 &quot;y&quot;: 3.14, # Y coordinate.
1158 &quot;x&quot;: 3.14, # X coordinate.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001159 },
1160 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07001161 &quot;vertices&quot;: [ # The bounding polygon vertices.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001162 { # A vertex represents a 2D point in the image.
1163 # NOTE: the vertex coordinates are in the same scale as the original image.
Bu Sun Kim65020912020-05-20 12:08:20 -07001164 &quot;y&quot;: 42, # Y coordinate.
1165 &quot;x&quot;: 42, # X coordinate.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001166 },
1167 ],
1168 },
Bu Sun Kim65020912020-05-20 12:08:20 -07001169 &quot;locale&quot;: &quot;A String&quot;, # The language code for the locale in which the entity textual
1170 # `description` is expressed.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001171 },
1172 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07001173 &quot;context&quot;: { # If an image was produced from a file (e.g. a PDF), this message gives # If present, contextual information is needed to understand where this image
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001174 # comes from.
1175 # information about the source of that image.
Bu Sun Kim65020912020-05-20 12:08:20 -07001176 &quot;uri&quot;: &quot;A String&quot;, # The URI of the file used to produce the image.
1177 &quot;pageNumber&quot;: 42, # If the file was a PDF or TIFF, this field gives the page number within
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001178 # the file used to produce the image.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001179 },
Bu Sun Kim65020912020-05-20 12:08:20 -07001180 &quot;webDetection&quot;: { # Relevant information for the image from the Internet. # If present, web detection has completed successfully.
1181 &quot;visuallySimilarImages&quot;: [ # The visually similar image results.
1182 { # Metadata for online images.
1183 &quot;score&quot;: 3.14, # (Deprecated) Overall relevancy score for the image.
1184 &quot;url&quot;: &quot;A String&quot;, # The result image URL.
1185 },
1186 ],
1187 &quot;bestGuessLabels&quot;: [ # The service&#x27;s best guess as to the topic of the request image.
1188 # Inferred from similar images on the open web.
1189 { # Label to provide extra metadata for the web detection.
1190 &quot;label&quot;: &quot;A String&quot;, # Label for extra metadata.
1191 &quot;languageCode&quot;: &quot;A String&quot;, # The BCP-47 language code for `label`, such as &quot;en-US&quot; or &quot;sr-Latn&quot;.
1192 # For more information, see
1193 # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
1194 },
1195 ],
1196 &quot;fullMatchingImages&quot;: [ # Fully matching images from the Internet.
1197 # Can include resized copies of the query image.
1198 { # Metadata for online images.
1199 &quot;score&quot;: 3.14, # (Deprecated) Overall relevancy score for the image.
1200 &quot;url&quot;: &quot;A String&quot;, # The result image URL.
1201 },
1202 ],
1203 &quot;webEntities&quot;: [ # Deduced entities from similar images on the Internet.
1204 { # Entity deduced from similar images on the Internet.
1205 &quot;entityId&quot;: &quot;A String&quot;, # Opaque entity ID.
1206 &quot;description&quot;: &quot;A String&quot;, # Canonical description of the entity, in English.
1207 &quot;score&quot;: 3.14, # Overall relevancy score for the entity.
1208 # Not normalized and not comparable across different image queries.
1209 },
1210 ],
1211 &quot;pagesWithMatchingImages&quot;: [ # Web pages containing the matching images from the Internet.
1212 { # Metadata for web pages.
1213 &quot;pageTitle&quot;: &quot;A String&quot;, # Title for the web page; may contain HTML markup.
1214 &quot;fullMatchingImages&quot;: [ # Fully matching images on the page.
1215 # Can include resized copies of the query image.
1216 { # Metadata for online images.
1217 &quot;score&quot;: 3.14, # (Deprecated) Overall relevancy score for the image.
1218 &quot;url&quot;: &quot;A String&quot;, # The result image URL.
1219 },
1220 ],
1221 &quot;score&quot;: 3.14, # (Deprecated) Overall relevancy score for the web page.
1222 &quot;partialMatchingImages&quot;: [ # Partial matching images on the page.
1223 # Those images are similar enough to share some key-point features. For
1224 # example, an original image will likely have partial matching for its
1225 # crops.
1226 { # Metadata for online images.
1227 &quot;score&quot;: 3.14, # (Deprecated) Overall relevancy score for the image.
1228 &quot;url&quot;: &quot;A String&quot;, # The result image URL.
1229 },
1230 ],
1231 &quot;url&quot;: &quot;A String&quot;, # The result web page URL.
1232 },
1233 ],
1234 &quot;partialMatchingImages&quot;: [ # Partial matching images from the Internet.
1235 # Those images are similar enough to share some key-point features. For
1236 # example, an original image will likely have partial matching for its crops.
1237 { # Metadata for online images.
1238 &quot;score&quot;: 3.14, # (Deprecated) Overall relevancy score for the image.
1239 &quot;url&quot;: &quot;A String&quot;, # The result image URL.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001240 },
1241 ],
1242 },
Bu Sun Kim65020912020-05-20 12:08:20 -07001243 &quot;safeSearchAnnotation&quot;: { # Set of features pertaining to the image, computed by computer vision # If present, safe-search annotation has completed successfully.
1244 # methods over safe-search verticals (for example, adult, spoof, medical,
1245 # violence).
1246 &quot;medical&quot;: &quot;A String&quot;, # Likelihood that this is a medical image.
1247 &quot;racy&quot;: &quot;A String&quot;, # Likelihood that the request image contains racy content. Racy content may
1248 # include (but is not limited to) skimpy or sheer clothing, strategically
1249 # covered nudity, lewd or provocative poses, or close-ups of sensitive
1250 # body areas.
1251 &quot;violence&quot;: &quot;A String&quot;, # Likelihood that this image contains violent content.
1252 &quot;adult&quot;: &quot;A String&quot;, # Represents the adult content likelihood for the image. Adult content may
1253 # contain elements such as nudity, pornographic images or cartoons, or
1254 # sexual activities.
1255 &quot;spoof&quot;: &quot;A String&quot;, # Spoof likelihood. The likelihood that a modification
1256 # was made to the image&#x27;s canonical version to make it appear
1257 # funny or offensive.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001258 },
1259 },
1260 ],
1261 }</pre>
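<p>A minimal, illustrative usage sketch for this method (assuming the
google-api-python-client library is installed and Application Default
Credentials are configured; the file name, feature choice, and result fields
shown are examples rather than part of the generated reference):</p>
<pre>
  import base64
  from googleapiclient import discovery

  service = discovery.build(&#x27;vision&#x27;, &#x27;v1p1beta1&#x27;)

  with open(&#x27;image.jpg&#x27;, &#x27;rb&#x27;) as f:
      content = base64.b64encode(f.read()).decode(&#x27;utf-8&#x27;)

  body = {
      &#x27;requests&#x27;: [{
          &#x27;image&#x27;: {&#x27;content&#x27;: content},
          &#x27;features&#x27;: [{&#x27;type&#x27;: &#x27;LABEL_DETECTION&#x27;, &#x27;maxResults&#x27;: 5}],
      }],
  }
  response = service.images().annotate(body=body).execute()

  # Each element of &#x27;responses&#x27; corresponds to one request in the batch.
  for label in response[&#x27;responses&#x27;][0].get(&#x27;labelAnnotations&#x27;, []):
      print(label[&#x27;description&#x27;], label[&#x27;score&#x27;])
</pre>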
1262</div>
1263
1264<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -07001265 <code class="details" id="asyncBatchAnnotate">asyncBatchAnnotate(body=None, x__xgafv=None)</code>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001266 <pre>Run asynchronous image detection and annotation for a list of images.
1267
1268Progress and results can be retrieved through the
1269`google.longrunning.Operations` interface.
1270`Operation.metadata` contains `OperationMetadata` (metadata).
1271`Operation.response` contains `AsyncBatchAnnotateImagesResponse` (results).
1272
1273This service writes image annotation outputs to JSON files in the customer&#x27;s
1274GCS bucket, each JSON file containing a BatchAnnotateImagesResponse proto.
1275
1276Args:
Dan O'Mearadd494642020-05-01 07:42:23 -07001277 body: object, The request body.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001278 The object takes the form of:
1279
1280{ # Request for async image annotation for a list of images.
Bu Sun Kim65020912020-05-20 12:08:20 -07001281 &quot;parent&quot;: &quot;A String&quot;, # Optional. Target project and location to make a call.
1282 #
1283 # Format: `projects/{project-id}/locations/{location-id}`.
1284 #
1285 # If no parent is specified, a region will be chosen automatically.
1286 #
1287 # Supported location-ids:
1288 # `us`: USA country only,
1289 # `asia`: East Asia areas, like Japan and Taiwan,
1290 # `eu`: The European Union.
1291 #
1292 # Example: `projects/project-A/locations/eu`.
1293 &quot;outputConfig&quot;: { # The desired output location and metadata. # Required. The desired output location and metadata (e.g. format).
1294 &quot;gcsDestination&quot;: { # The Google Cloud Storage location where the output will be written to. # The Google Cloud Storage location to write the output(s) to.
1295 &quot;uri&quot;: &quot;A String&quot;, # Google Cloud Storage URI prefix where the results will be stored. Results
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001296 # will be in JSON format and preceded by its corresponding input URI prefix.
1297 # This field can represent either a GCS file prefix or a GCS directory. In
1298 # either case, the uri should be unique, because to get all of the
1299 # output files you will need to do a wildcard GCS search on the uri prefix
1300 # you provide.
1301 #
1302 # Examples:
1303 #
1304 # * File Prefix: gs://bucket-name/here/filenameprefix The output files
1305 # will be created in gs://bucket-name/here/ and the names of the
Bu Sun Kim65020912020-05-20 12:08:20 -07001306 # output files will begin with &quot;filenameprefix&quot;.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001307 #
1308 # * Directory Prefix: gs://bucket-name/some/location/ The output files
1309 # will be created in gs://bucket-name/some/location/ and the names of the
1310 # output files could be anything because there was no filename prefix
1311 # specified.
1312 #
1313 # If multiple outputs, each response is still AnnotateFileResponse, each of
1314 # which contains some subset of the full list of AnnotateImageResponse.
1315 # Multiple outputs can happen if, for example, the output JSON is too large
1316 # and overflows into multiple sharded files.
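 # As an editor&#x27;s illustrative sketch (assuming the separate
 # google-cloud-storage library), the sharded output files written under a
 # directory-style prefix could be collected like this; the bucket and prefix
 # values are placeholders:
 #
 #     import json
 #     from google.cloud import storage
 #
 #     client = storage.Client()
 #     blobs = client.list_blobs(&#x27;bucket-name&#x27;, prefix=&#x27;some/location/&#x27;)
 #     responses = [json.loads(blob.download_as_bytes()) for blob in blobs]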
1317 },
Bu Sun Kim65020912020-05-20 12:08:20 -07001318 &quot;batchSize&quot;: 42, # The max number of response protos to put into each output JSON file on
1319 # Google Cloud Storage.
1320 # The valid range is [1, 100]. If not specified, the default value is 20.
1321 #
1322 # For example, for one pdf file with 100 pages, 100 response protos will
1323 # be generated. If `batch_size` = 20, then 5 json files each
1324 # containing 20 response protos will be written under the prefix
1325 # `gcs_destination`.`uri`.
1326 #
1327 # Currently, batch_size only applies to GcsDestination, with potential future
1328 # support for other output configurations.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001329 },
Bu Sun Kim65020912020-05-20 12:08:20 -07001330 &quot;requests&quot;: [ # Required. Individual image annotation requests for this batch.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001331 { # Request for performing Google Cloud Vision API tasks over a user-provided
1332 # image, with user-requested features, and with context information.
Bu Sun Kim65020912020-05-20 12:08:20 -07001333 &quot;image&quot;: { # Client image to perform Google Cloud Vision API tasks over. # The image to be processed.
1334 &quot;content&quot;: &quot;A String&quot;, # Image content, represented as a stream of bytes.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001335 # Note: As with all `bytes` fields, protobuffers use a pure binary
1336 # representation, whereas JSON representations use base64.
Bu Sun Kim65020912020-05-20 12:08:20 -07001337 &quot;source&quot;: { # External image source (Google Cloud Storage or web URL image location). # Google Cloud Storage image location, or publicly-accessible image
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001338 # URL. If both `content` and `source` are provided for an image, `content`
1339 # takes precedence and is used to perform the image annotation request.
Bu Sun Kim65020912020-05-20 12:08:20 -07001340 &quot;imageUri&quot;: &quot;A String&quot;, # The URI of the source image. Can be either:
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001341 #
1342 # 1. A Google Cloud Storage URI of the form
1343 # `gs://bucket_name/object_name`. Object versioning is not supported. See
1344 # [Google Cloud Storage Request
1345 # URIs](https://cloud.google.com/storage/docs/reference-uris) for more
1346 # info.
1347 #
1348 # 2. A publicly-accessible image HTTP/HTTPS URL. When fetching images from
1349 # HTTP/HTTPS URLs, Google cannot guarantee that the request will be
1350 # completed. Your request may fail if the specified host denies the
1351 # request (e.g. due to request throttling or DoS prevention), or if Google
1352 # throttles requests to the site for abuse prevention. You should not
1353 # depend on externally-hosted images for production applications.
1354 #
1355 # When both `gcs_image_uri` and `image_uri` are specified, `image_uri` takes
1356 # precedence.
Bu Sun Kim65020912020-05-20 12:08:20 -07001357 &quot;gcsImageUri&quot;: &quot;A String&quot;, # **Use `image_uri` instead.**
1358 #
1359 # The Google Cloud Storage URI of the form
1360 # `gs://bucket_name/object_name`. Object versioning is not supported. See
1361 # [Google Cloud Storage Request
1362 # URIs](https://cloud.google.com/storage/docs/reference-uris) for more info.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001363 },
1364 },
Bu Sun Kim65020912020-05-20 12:08:20 -07001365 &quot;features&quot;: [ # Requested features.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001366 { # The type of Google Cloud Vision API detection to perform, and the maximum
1367 # number of results to return for that type. Multiple `Feature` objects can
1368 # be specified in the `features` list.
Bu Sun Kim65020912020-05-20 12:08:20 -07001369 &quot;type&quot;: &quot;A String&quot;, # The feature type.
1370 &quot;maxResults&quot;: 42, # Maximum number of results of this type. Does not apply to
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001371 # `TEXT_DETECTION`, `DOCUMENT_TEXT_DETECTION`, or `CROP_HINTS`.
Bu Sun Kim65020912020-05-20 12:08:20 -07001372 &quot;model&quot;: &quot;A String&quot;, # Model to use for the feature.
1373 # Supported values: &quot;builtin/stable&quot; (the default if unset) and
1374 # &quot;builtin/latest&quot;.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001375 },
1376 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07001377 &quot;imageContext&quot;: { # Image context and/or feature-specific parameters. # Additional context that may accompany the image.
1378 &quot;languageHints&quot;: [ # List of languages to use for TEXT_DETECTION. In most cases, an empty value
1379 # yields the best results since it enables automatic language detection. For
1380 # languages based on the Latin alphabet, setting `language_hints` is not
1381 # needed. In rare cases, when the language of the text in the image is known,
1382 # setting a hint will help get better results (although it will be a
1383 # significant hindrance if the hint is wrong). Text detection returns an
1384 # error if one or more of the specified languages is not one of the
1385 # [supported languages](https://cloud.google.com/vision/docs/languages).
1386 &quot;A String&quot;,
1387 ],
1388 &quot;webDetectionParams&quot;: { # Parameters for web detection request. # Parameters for web detection.
1389 &quot;includeGeoResults&quot;: True or False, # Whether to include results derived from the geo information in the image.
1390 },
1391 &quot;latLongRect&quot;: { # Rectangle determined by min and max `LatLng` pairs. # Not used.
1392 &quot;minLatLng&quot;: { # An object representing a latitude/longitude pair. This is expressed as a pair # Min lat/long pair.
1393 # of doubles representing degrees latitude and degrees longitude. Unless
1394 # specified otherwise, this must conform to the
1395 # &lt;a href=&quot;http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf&quot;&gt;WGS84
1396 # standard&lt;/a&gt;. Values must be within normalized ranges.
1397 &quot;latitude&quot;: 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
1398 &quot;longitude&quot;: 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
1399 },
1400 &quot;maxLatLng&quot;: { # An object representing a latitude/longitude pair. This is expressed as a pair # Max lat/long pair.
1401 # of doubles representing degrees latitude and degrees longitude. Unless
1402 # specified otherwise, this must conform to the
1403 # &lt;a href=&quot;http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf&quot;&gt;WGS84
1404 # standard&lt;/a&gt;. Values must be within normalized ranges.
1405 &quot;latitude&quot;: 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
1406 &quot;longitude&quot;: 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
1407 },
1408 },
1409 &quot;cropHintsParams&quot;: { # Parameters for crop hints annotation request. # Parameters for crop hints annotation request.
1410 &quot;aspectRatios&quot;: [ # Aspect ratios in floats, representing the ratio of the width to the height
1411 # of the image. For example, if the desired aspect ratio is 4/3, the
1412 # corresponding float value should be 1.33333. If not specified, the
1413 # best possible crop is returned. The number of provided aspect ratios is
1414 # limited to a maximum of 16; any aspect ratios provided after the 16th are
1415 # ignored.
1416 3.14,
1417 ],
1418 },
1419 &quot;productSearchParams&quot;: { # Parameters for a product search request. # Parameters for product search.
1420 &quot;filter&quot;: &quot;A String&quot;, # The filtering expression. This can be used to restrict search results based
1421 # on Product labels. We currently support an AND of OR of key-value
1422 # expressions, where each expression within an OR must have the same key. An
1423 # &#x27;=&#x27; should be used to connect the key and value.
1424 #
1425 # For example, &quot;(color = red OR color = blue) AND brand = Google&quot; is
1426 # acceptable, but &quot;(color = red OR brand = Google)&quot; is not acceptable.
1427 # &quot;color: red&quot; is not acceptable because it uses a &#x27;:&#x27; instead of an &#x27;=&#x27;.
1428 &quot;productSet&quot;: &quot;A String&quot;, # The resource name of a ProductSet to be searched for similar images.
1429 #
1430 # Format is:
1431 # `projects/PROJECT_ID/locations/LOC_ID/productSets/PRODUCT_SET_ID`.
1432 &quot;boundingPoly&quot;: { # A bounding polygon for the detected image annotation. # The bounding polygon around the area of interest in the image.
1433 # If it is not specified, system discretion will be applied.
1434 &quot;normalizedVertices&quot;: [ # The bounding polygon normalized vertices.
1435 { # A vertex represents a 2D point in the image.
1436 # NOTE: the normalized vertex coordinates are relative to the original image
1437 # and range from 0 to 1.
1438 &quot;y&quot;: 3.14, # Y coordinate.
1439 &quot;x&quot;: 3.14, # X coordinate.
1440 },
1441 ],
1442 &quot;vertices&quot;: [ # The bounding polygon vertices.
1443 { # A vertex represents a 2D point in the image.
1444 # NOTE: the vertex coordinates are in the same scale as the original image.
1445 &quot;y&quot;: 42, # Y coordinate.
1446 &quot;x&quot;: 42, # X coordinate.
1447 },
1448 ],
1449 },
1450 &quot;productCategories&quot;: [ # The list of product categories to search in. Currently, we only consider
1451 # the first category, and either &quot;homegoods-v2&quot;, &quot;apparel-v2&quot;, &quot;toys-v2&quot;,
1452 # &quot;packagedgoods-v1&quot;, or &quot;general-v1&quot; should be specified. The legacy
1453 # categories &quot;homegoods&quot;, &quot;apparel&quot;, and &quot;toys&quot; are still supported but will
1454 # be deprecated. For new products, please use &quot;homegoods-v2&quot;, &quot;apparel-v2&quot;,
1455 # or &quot;toys-v2&quot; for better product search accuracy. It is recommended to
1456 # migrate existing products to these categories as well.
1457 &quot;A String&quot;,
1458 ],
1459 },
1460 },
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001461 },
1462 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001463 }
1464
1465 x__xgafv: string, V1 error format.
1466 Allowed values
1467 1 - v1 error format
1468 2 - v2 error format
1469
1470Returns:
1471 An object of the form:
1472
1473 { # This resource represents a long-running operation that is the result of a
1474 # network API call.
Bu Sun Kim65020912020-05-20 12:08:20 -07001475 &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
1476 # different programming environments, including REST APIs and RPC APIs. It is
1477 # used by [gRPC](https://github.com/grpc). Each `Status` message contains
1478 # three pieces of data: error code, error message, and error details.
1479 #
1480 # You can find out more about this error model and how to work with it in the
1481 # [API Design Guide](https://cloud.google.com/apis/design/errors).
1482 &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
1483 &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
1484 # user-facing error message should be localized and sent in the
1485 # google.rpc.Status.details field, or localized by the client.
1486 &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of
1487 # message types for APIs to use.
1488 {
1489 &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
1490 },
1491 ],
1492 },
1493 &quot;metadata&quot;: { # Service-specific metadata associated with the operation. It typically
1494 # contains progress information and common metadata such as create time.
1495 # Some services might not provide such metadata. Any method that returns a
1496 # long-running operation should document the metadata type, if any.
1497 &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
1498 },
1499 &quot;done&quot;: True or False, # If the value is `false`, it means the operation is still in progress.
1500 # If `true`, the operation is completed, and either `error` or `response` is
1501 # available.
1502 &quot;response&quot;: { # The normal response of the operation in case of success. If the original
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001503 # method returns no data on success, such as `Delete`, the response is
1504 # `google.protobuf.Empty`. If the original method is standard
1505 # `Get`/`Create`/`Update`, the response should be the resource. For other
1506 # methods, the response should have the type `XxxResponse`, where `Xxx`
1507 # is the original method name. For example, if the original method name
1508 # is `TakeSnapshot()`, the inferred response type is
1509 # `TakeSnapshotResponse`.
Bu Sun Kim65020912020-05-20 12:08:20 -07001510 &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001511 },
Bu Sun Kim65020912020-05-20 12:08:20 -07001512 &quot;name&quot;: &quot;A String&quot;, # The server-assigned name, which is only unique within the same service that
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001513 # originally returns it. If you use the default HTTP mapping, the
1514 # `name` should be a resource name ending with `operations/{unique_id}`.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001515 }</pre>
1516</div>
1517
1518</body></html>