<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
margin: 0;
padding: 0;
border: 0;
font-weight: inherit;
font-style: inherit;
font-size: 100%;
font-family: inherit;
vertical-align: baseline;
}

body {
font-size: 13px;
padding: 1em;
}

h1 {
font-size: 26px;
margin-bottom: 1em;
}

h2 {
font-size: 24px;
margin-bottom: 1em;
}

h3 {
font-size: 20px;
margin-bottom: 1em;
margin-top: 1em;
}

pre, code {
line-height: 1.5;
font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
margin-top: 0.5em;
}

h1, h2, h3, p {
font-family: Arial, sans-serif;
}

h1, h2, h3 {
border-bottom: solid #CCC 1px;
}

.toc_element {
margin-top: 0.5em;
}

.firstline {
margin-left: 2em;
}

.method  {
margin-top: 1em;
border: solid 1px #CCC;
padding: 1em;
background: #EEE;
}

.details {
font-weight: bold;
font-size: 14px;
}

</style>

<h1><a href="vision_v1p2beta1.html">Cloud Vision API</a> . <a href="vision_v1p2beta1.projects.html">projects</a> . <a href="vision_v1p2beta1.projects.files.html">files</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
<code><a href="#annotate">annotate(parent, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Service that performs image detection and annotation for a batch of files.</p>
<p class="toc_element">
<code><a href="#asyncBatchAnnotate">asyncBatchAnnotate(parent, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Run asynchronous image detection and annotation for a list of generic files, such as PDF files, which may contain multiple pages and multiple images per page.</p>
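<p>For orientation, the sketch below shows one way these methods are typically invoked through the Python discovery client. It is a hedged, minimal example rather than generated reference material: the project ID, location, bucket, and file name are hypothetical placeholders, and it assumes the google-api-python-client package and application-default credentials are already set up.</p>
<pre>
from googleapiclient.discovery import build

# Build a client for this API version (assumes default credentials are available).
service = build('vision', 'v1p2beta1')

# A minimal BatchAnnotateFilesRequest body; see the schema under Method Details.
request_body = {
    'requests': [{
        'inputConfig': {
            'gcsSource': {'uri': 'gs://my-bucket/my-file.pdf'},  # hypothetical object
            'mimeType': 'application/pdf',
        },
        'features': [{'type': 'DOCUMENT_TEXT_DETECTION'}],
        'pages': [1, 2, -1],  # first, second, and last page
    }],
}

response = service.projects().files().annotate(
    parent='projects/project-A/locations/eu',
    body=request_body).execute()

# Each file response contains one image response per annotated page.
for file_response in response.get('responses', []):
    for image_response in file_response.get('responses', []):
        print(image_response.get('fullTextAnnotation', {}).get('text', ''))
</pre>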
<h3>Method Details</h3>
<div class="method">
<code class="details" id="annotate">annotate(parent, body=None, x__xgafv=None)</code>
<pre>Service that performs image detection and annotation for a batch of files.
Now only "application/pdf", "image/tiff" and "image/gif" are supported.

This service will extract at most 5 (customers can specify which 5 in
AnnotateFileRequest.pages) frames (gif) or pages (pdf or tiff) from each
file provided and perform detection and annotation for each image
extracted.

Args:
parent: string, Optional. Target project and location to make a call.

Format: `projects/{project-id}/locations/{location-id}`.

If no parent is specified, a region will be chosen automatically.

Supported location-ids:
`us`: USA country only,
`asia`: East asia areas, like Japan, Taiwan,
`eu`: The European Union.

Example: `projects/project-A/locations/eu`. (required)
body: object, The request body.
The object takes the form of:

{ # A list of requests to annotate files using the BatchAnnotateFiles API.
"parent": "A String", # Optional. Target project and location to make a call.
#
# Format: `projects/{project-id}/locations/{location-id}`.
#
# If no parent is specified, a region will be chosen automatically.
#
# Supported location-ids:
#     `us`: USA country only,
#     `asia`: East asia areas, like Japan, Taiwan,
#     `eu`: The European Union.
#
# Example: `projects/project-A/locations/eu`.
"requests": [ # Required. The list of file annotation requests. Right now we support only one
# AnnotateFileRequest in BatchAnnotateFilesRequest.
{ # A request to annotate one single file, e.g. a PDF, TIFF or GIF file.
"features": [ # Required. Requested features.
{ # The type of Google Cloud Vision API detection to perform, and the maximum
# number of results to return for that type. Multiple `Feature` objects can
# be specified in the `features` list.
"type": "A String", # The feature type.
"maxResults": 42, # Maximum number of results of this type. Does not apply to
# `TEXT_DETECTION`, `DOCUMENT_TEXT_DETECTION`, or `CROP_HINTS`.
"model": "A String", # Model to use for the feature.
# Supported values: "builtin/stable" (the default if unset) and
# "builtin/latest".
},
],
"pages": [ # Pages of the file to perform image annotation.
#
# Pages start from 1; we assume the first page of the file is page 1.
# At most 5 pages are supported per request. Pages can be negative.
#
# Page 1 means the first page.
# Page 2 means the second page.
# Page -1 means the last page.
# Page -2 means the second to the last page.
#
# If the file is GIF instead of PDF or TIFF, page refers to GIF frames.
#
# If this field is empty, by default the service performs image annotation
# for the first 5 pages of the file.
42,
],
"imageContext": { # Image context and/or feature-specific parameters. # Additional context that may accompany the image(s) in the file.
"cropHintsParams": { # Parameters for crop hints annotation request. # Parameters for crop hints annotation request.
"aspectRatios": [ # Aspect ratios in floats, representing the ratio of the width to the height
# of the image. For example, if the desired aspect ratio is 4/3, the
# corresponding float value should be 1.33333.  If not specified, the
# best possible crop is returned. The number of provided aspect ratios is
# limited to a maximum of 16; any aspect ratios provided after the 16th are
# ignored.
3.14,
],
},
"productSearchParams": { # Parameters for a product search request. # Parameters for product search.
"productCategories": [ # The list of product categories to search in. Currently, we only consider
# the first category, and either "homegoods-v2", "apparel-v2", "toys-v2",
# "packagedgoods-v1", or "general-v1" should be specified. The legacy
# categories "homegoods", "apparel", and "toys" are still supported but will
# be deprecated. For new products, please use "homegoods-v2", "apparel-v2",
# or "toys-v2" for better product search accuracy. It is recommended to
# migrate existing products to these categories as well.
"A String",
],
"boundingPoly": { # A bounding polygon for the detected image annotation. # The bounding polygon around the area of interest in the image.
# If it is not specified, system discretion will be applied.
"vertices": [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image.
# NOTE: the vertex coordinates are in the same scale as the original image.
"x": 42, # X coordinate.
"y": 42, # Y coordinate.
},
],
"normalizedVertices": [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image.
# NOTE: the normalized vertex coordinates are relative to the original image
# and range from 0 to 1.
"x": 3.14, # X coordinate.
"y": 3.14, # Y coordinate.
},
],
},
"productSet": "A String", # The resource name of a ProductSet to be searched for similar images.
#
# Format is:
# `projects/PROJECT_ID/locations/LOC_ID/productSets/PRODUCT_SET_ID`.
"filter": "A String", # The filtering expression. This can be used to restrict search results based
# on Product labels. We currently support an AND of OR of key-value
# expressions, where each expression within an OR must have the same key. An
# '=' should be used to connect the key and value.
#
# For example, "(color = red OR color = blue) AND brand = Google" is
# acceptable, but "(color = red OR brand = Google)" is not acceptable.
# "color: red" is not acceptable because it uses a ':' instead of an '='.
},
"languageHints": [ # List of languages to use for TEXT_DETECTION. In most cases, an empty value
# yields the best results since it enables automatic language detection. For
# languages based on the Latin alphabet, setting `language_hints` is not
# needed. In rare cases, when the language of the text in the image is known,
# setting a hint will help get better results (although it will be a
# significant hindrance if the hint is wrong). Text detection returns an
# error if one or more of the specified languages is not one of the
# [supported languages](https://cloud.google.com/vision/docs/languages).
"A String",
],
"latLongRect": { # Rectangle determined by min and max `LatLng` pairs. # Not used.
"maxLatLng": { # An object representing a latitude/longitude pair. This is expressed as a pair # Max lat/long pair.
# of doubles representing degrees latitude and degrees longitude. Unless
# specified otherwise, this must conform to the
# <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84
# standard</a>. Values must be within normalized ranges.
"latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
"longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
},
"minLatLng": { # An object representing a latitude/longitude pair. This is expressed as a pair # Min lat/long pair.
# of doubles representing degrees latitude and degrees longitude. Unless
# specified otherwise, this must conform to the
# <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84
# standard</a>. Values must be within normalized ranges.
"latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
"longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
},
},
"webDetectionParams": { # Parameters for web detection request. # Parameters for web detection.
"includeGeoResults": True or False, # Whether to include results derived from the geo information in the image.
},
},
"inputConfig": { # The desired input location and metadata. # Required. Information about the input file.
"content": "A String", # File content, represented as a stream of bytes.
# Note: As with all `bytes` fields, protobuffers use a pure binary
# representation, whereas JSON representations use base64.
#
# Currently, this field only works for BatchAnnotateFiles requests. It does
# not work for AsyncBatchAnnotateFiles requests.
"mimeType": "A String", # The type of the file. Currently only "application/pdf", "image/tiff" and
# "image/gif" are supported. Wildcards are not supported.
"gcsSource": { # The Google Cloud Storage location where the input will be read from. # The Google Cloud Storage location to read the input from.
"uri": "A String", # Google Cloud Storage URI for the input file. This must only be a
# Google Cloud Storage object. Wildcards are not currently supported.
},
},
},
],
}
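
For example, a request body that sends the file bytes inline (via the
`content` field described above) instead of a Cloud Storage URI might look
like the sketch below. This is an illustration, not part of the generated
reference; the local file name is a hypothetical placeholder, and the bytes
are base64-encoded because JSON representations of `bytes` fields use base64.

  import base64

  with open('my-file.pdf', 'rb') as f:  # hypothetical local file
    encoded = base64.b64encode(f.read()).decode('utf-8')

  body = {
    'requests': [{
      'inputConfig': {'content': encoded, 'mimeType': 'application/pdf'},
      'features': [{'type': 'DOCUMENT_TEXT_DETECTION'}],
      'pages': [1, -1],  # first and last page
    }],
  }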

x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format

Returns:
An object of the form:

{ # A list of file annotation responses.
"responses": [ # The list of file annotation responses, each response corresponding to each
# AnnotateFileRequest in BatchAnnotateFilesRequest.
{ # Response to a single file annotation request. A file may contain one or more
# images, which individually have their own responses.
"inputConfig": { # The desired input location and metadata. # Information about the file for which this response is generated.
"content": "A String", # File content, represented as a stream of bytes.
# Note: As with all `bytes` fields, protobuffers use a pure binary
# representation, whereas JSON representations use base64.
#
# Currently, this field only works for BatchAnnotateFiles requests. It does
# not work for AsyncBatchAnnotateFiles requests.
"mimeType": "A String", # The type of the file. Currently only "application/pdf", "image/tiff" and
# "image/gif" are supported. Wildcards are not supported.
"gcsSource": { # The Google Cloud Storage location where the input will be read from. # The Google Cloud Storage location to read the input from.
"uri": "A String", # Google Cloud Storage URI for the input file. This must only be a
# Google Cloud Storage object. Wildcards are not currently supported.
},
},
"error": { # The `Status` type defines a logical error model that is suitable for # If set, represents the error message for the failed request. The
# `responses` field will not be set in this case.
# different programming environments, including REST APIs and RPC APIs. It is
# used by [gRPC](https://github.com/grpc). Each `Status` message contains
# three pieces of data: error code, error message, and error details.
#
# You can find out more about this error model and how to work with it in the
# [API Design Guide](https://cloud.google.com/apis/design/errors).
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details.  There is a common set of
# message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
"message": "A String", # A developer-facing error message, which should be in English. Any
# user-facing error message should be localized and sent in the
# google.rpc.Status.details field, or localized by the client.
},
"totalPages": 42, # This field gives the total number of pages in the file.
"responses": [ # Individual responses to images found within the file. This field will be
# empty if the `error` field is set.
{ # Response to an image annotation request.
"productSearchResults": { # Results for a product search request. # If present, product search has completed successfully.
"indexTime": "A String", # Timestamp of the index which provided these results. Products added to the
# product set and products removed from the product set after this time are
# not reflected in the current results.
"results": [ # List of results, one for each product match.
{ # Information about a product.
"score": 3.14, # A confidence level on the match, ranging from 0 (no confidence) to
# 1 (full confidence).
"image": "A String", # The resource name of the image from the product that is the closest match
# to the query.
"product": { # A Product contains ReferenceImages. # The Product.
"displayName": "A String", # The user-provided name for this Product. Must not be empty. Must be at most
# 4096 characters long.
"name": "A String", # The resource name of the product.
#
# Format is:
# `projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID`.
#
# This field is ignored when creating a product.
"description": "A String", # User-provided metadata to be stored with this product. Must be at most 4096
# characters long.
"productCategory": "A String", # Immutable. The category for the product identified by the reference image. This should
# be either "homegoods-v2", "apparel-v2", or "toys-v2". The legacy categories
# "homegoods", "apparel", and "toys" are still supported, but these should
# not be used for new products.
"productLabels": [ # Key-value pairs that can be attached to a product. At query time,
# constraints can be specified based on the product_labels.
#
# Note that integer values can be provided as strings, e.g. "1199". Only
# strings with integer values can match a range-based restriction which is
# to be supported soon.
#
# Multiple values can be assigned to the same key. One product may have up to
# 500 product_labels.
#
# Notice that the total number of distinct product_labels over all products
# in one ProductSet cannot exceed 1M, otherwise the product search pipeline
# will refuse to work for that ProductSet.
{ # A product label represented as a key-value pair.
"key": "A String", # The key of the label attached to the product. Cannot be empty and cannot
# exceed 128 bytes.
"value": "A String", # The value of the label attached to the product. Cannot be empty and
# cannot exceed 128 bytes.
},
],
},
},
],
"productGroupedResults": [ # List of results grouped by products detected in the query image. Each entry
# corresponds to one bounding polygon in the query image, and contains the
# matching products specific to that region. There may be duplicate product
# matches in the union of all the per-product results.
{ # Information about the products similar to a single product in a query
# image.
"boundingPoly": { # A bounding polygon for the detected image annotation. # The bounding polygon around the product detected in the query image.
"vertices": [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image.
# NOTE: the vertex coordinates are in the same scale as the original image.
"x": 42, # X coordinate.
"y": 42, # Y coordinate.
},
],
"normalizedVertices": [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image.
# NOTE: the normalized vertex coordinates are relative to the original image
# and range from 0 to 1.
"x": 3.14, # X coordinate.
"y": 3.14, # Y coordinate.
},
],
},
"objectAnnotations": [ # List of generic predictions for the object in the bounding box.
{ # Prediction for what the object in the bounding box is.
"score": 3.14, # Score of the result. Range [0, 1].
"languageCode": "A String", # The BCP-47 language code, such as "en-US" or "sr-Latn". For more
# information, see
# http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
"mid": "A String", # Object ID that should align with EntityAnnotation mid.
"name": "A String", # Object name, expressed in its `language_code` language.
},
],
"results": [ # List of results, one for each product match.
{ # Information about a product.
"score": 3.14, # A confidence level on the match, ranging from 0 (no confidence) to
# 1 (full confidence).
"image": "A String", # The resource name of the image from the product that is the closest match
# to the query.
"product": { # A Product contains ReferenceImages. # The Product.
"displayName": "A String", # The user-provided name for this Product. Must not be empty. Must be at most
# 4096 characters long.
"name": "A String", # The resource name of the product.
#
# Format is:
# `projects/PROJECT_ID/locations/LOC_ID/products/PRODUCT_ID`.
#
# This field is ignored when creating a product.
"description": "A String", # User-provided metadata to be stored with this product. Must be at most 4096
# characters long.
"productCategory": "A String", # Immutable. The category for the product identified by the reference image. This should
# be either "homegoods-v2", "apparel-v2", or "toys-v2". The legacy categories
# "homegoods", "apparel", and "toys" are still supported, but these should
# not be used for new products.
"productLabels": [ # Key-value pairs that can be attached to a product. At query time,
# constraints can be specified based on the product_labels.
#
# Note that integer values can be provided as strings, e.g. "1199". Only
# strings with integer values can match a range-based restriction which is
# to be supported soon.
#
# Multiple values can be assigned to the same key. One product may have up to
# 500 product_labels.
#
# Notice that the total number of distinct product_labels over all products
# in one ProductSet cannot exceed 1M, otherwise the product search pipeline
# will refuse to work for that ProductSet.
{ # A product label represented as a key-value pair.
"key": "A String", # The key of the label attached to the product. Cannot be empty and cannot
# exceed 128 bytes.
"value": "A String", # The value of the label attached to the product. Cannot be empty and
# cannot exceed 128 bytes.
},
],
},
},
],
},
],
},
"textAnnotations": [ # If present, text (OCR) detection has completed successfully.
{ # Set of detected entity features.
"topicality": 3.14, # The relevancy of the ICA (Image Content Annotation) label to the
# image. For example, the relevancy of "tower" is likely higher to an image
# containing the detected "Eiffel Tower" than to an image containing a
# detected distant towering building, even though the confidence that
# there is a tower in each image may be the same. Range [0, 1].
"locale": "A String", # The language code for the locale in which the entity textual
# `description` is expressed.
"locations": [ # The location information for the detected entity. Multiple
# `LocationInfo` elements can be present because one location may
# indicate the location of the scene in the image, and another location
# may indicate the location of the place where the image was taken.
# Location information is usually present for landmarks.
{ # Detected entity location information.
"latLng": { # An object representing a latitude/longitude pair. This is expressed as a pair # lat/long location coordinates.
# of doubles representing degrees latitude and degrees longitude. Unless
# specified otherwise, this must conform to the
# <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84
# standard</a>. Values must be within normalized ranges.
"latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
"longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
},
},
],
"mid": "A String", # Opaque entity ID. Some IDs may be available in
# [Google Knowledge Graph Search
# API](https://developers.google.com/knowledge-graph/).
"description": "A String", # Entity textual description, expressed in its `locale` language.
"confidence": 3.14, # **Deprecated. Use `score` instead.**
# The accuracy of the entity detection in an image.
# For example, for an image in which the "Eiffel Tower" entity is detected,
# this field represents the confidence that there is a tower in the query
# image. Range [0, 1].
"boundingPoly": { # A bounding polygon for the detected image annotation. # Image region to which this entity belongs. Not produced
# for `LABEL_DETECTION` features.
"vertices": [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image.
# NOTE: the vertex coordinates are in the same scale as the original image.
"x": 42, # X coordinate.
"y": 42, # Y coordinate.
},
],
"normalizedVertices": [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image.
# NOTE: the normalized vertex coordinates are relative to the original image
# and range from 0 to 1.
"x": 3.14, # X coordinate.
"y": 3.14, # Y coordinate.
},
],
},
"properties": [ # Some entities may have optional user-supplied `Property` (name/value)
# fields, such as a score or string that qualifies the entity.
{ # A `Property` consists of a user-supplied name/value pair.
"uint64Value": "A String", # Value of numeric properties.
"value": "A String", # Value of the property.
"name": "A String", # Name of the property.
},
],
"score": 3.14, # Overall score of the result. Range [0, 1].
},
],
"safeSearchAnnotation": { # Set of features pertaining to the image, computed by computer vision # If present, safe-search annotation has completed successfully.
# methods over safe-search verticals (for example, adult, spoof, medical,
# violence).
"racy": "A String", # Likelihood that the request image contains racy content. Racy content may
# include (but is not limited to) skimpy or sheer clothing, strategically
# covered nudity, lewd or provocative poses, or close-ups of sensitive
# body areas.
"medical": "A String", # Likelihood that this is a medical image.
"adult": "A String", # Represents the adult content likelihood for the image. Adult content may
# contain elements such as nudity, pornographic images or cartoons, or
# sexual activities.
"violence": "A String", # Likelihood that this image contains violent content.
"spoof": "A String", # Spoof likelihood. The likelihood that a modification
# was made to the image's canonical version to make it appear
# funny or offensive.
},
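# For the safe-search likelihood fields above (adult, spoof, medical,
# violence, racy), each value is a likelihood enum string. For example, a
# hedged sketch (an illustration, not part of the generated reference) of
# gating on them might look like:
#
#     ssa = image_response.get('safeSearchAnnotation', {})
#     flagged = any(ssa.get(k) in ('LIKELY', 'VERY_LIKELY')
#                   for k in ('adult', 'violence', 'racy'))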
"webDetection": { # Relevant information for the image from the Internet. # If present, web detection has completed successfully.
"fullMatchingImages": [ # Fully matching images from the Internet.
# Can include resized copies of the query image.
{ # Metadata for online images.
"score": 3.14, # (Deprecated) Overall relevancy score for the image.
"url": "A String", # The result image URL.
},
],
"bestGuessLabels": [ # The service's best guess as to the topic of the request image.
# Inferred from similar images on the open web.
{ # Label to provide extra metadata for the web detection.
"languageCode": "A String", # The BCP-47 language code for `label`, such as "en-US" or "sr-Latn".
# For more information, see
# http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
"label": "A String", # Label for extra metadata.
},
],
"visuallySimilarImages": [ # The visually similar image results.
{ # Metadata for online images.
"score": 3.14, # (Deprecated) Overall relevancy score for the image.
"url": "A String", # The result image URL.
},
],
"partialMatchingImages": [ # Partial matching images from the Internet.
# Those images are similar enough to share some key-point features. For
# example an original image will likely have partial matching for its crops.
{ # Metadata for online images.
"score": 3.14, # (Deprecated) Overall relevancy score for the image.
"url": "A String", # The result image URL.
},
],
"webEntities": [ # Deduced entities from similar images on the Internet.
{ # Entity deduced from similar images on the Internet.
"entityId": "A String", # Opaque entity ID.
"score": 3.14, # Overall relevancy score for the entity.
# Not normalized and not comparable across different image queries.
"description": "A String", # Canonical description of the entity, in English.
},
],
"pagesWithMatchingImages": [ # Web pages containing the matching images from the Internet.
{ # Metadata for web pages.
"partialMatchingImages": [ # Partial matching images on the page.
# Those images are similar enough to share some key-point features. For
# example an original image will likely have partial matching for its
# crops.
{ # Metadata for online images.
"score": 3.14, # (Deprecated) Overall relevancy score for the image.
"url": "A String", # The result image URL.
},
],
"url": "A String", # The result web page URL.
"fullMatchingImages": [ # Fully matching images on the page.
# Can include resized copies of the query image.
{ # Metadata for online images.
"score": 3.14, # (Deprecated) Overall relevancy score for the image.
"url": "A String", # The result image URL.
},
],
"score": 3.14, # (Deprecated) Overall relevancy score for the web page.
"pageTitle": "A String", # Title for the web page, may contain HTML markups.
},
],
},
"imagePropertiesAnnotation": { # Stores image properties, such as dominant colors. # If present, image properties were extracted successfully.
"dominantColors": { # Set of dominant colors and their corresponding scores. # If present, dominant colors completed successfully.
"colors": [ # RGB color values with their score and pixel fraction.
{ # Color information consists of RGB channels, score, and the fraction of
# the image that the color occupies in the image.
"pixelFraction": 3.14, # The fraction of pixels the color occupies in the image.
# Value in range [0, 1].
"color": { # Represents a color in the RGBA color space. This representation is designed # RGB components of the color.
# for simplicity of conversion to/from color representations in various
# languages over compactness; for example, the fields of this representation
# can be trivially provided to the constructor of "java.awt.Color" in Java; it
# can also be trivially provided to UIColor's "+colorWithRed:green:blue:alpha"
# method in iOS; and, with just a little work, it can be easily formatted into
# a CSS "rgba()" string in JavaScript, as well.
#
# Note: this proto does not carry information about the absolute color space
# that should be used to interpret the RGB value (e.g. sRGB, Adobe RGB,
# DCI-P3, BT.2020, etc.). By default, applications SHOULD assume the sRGB color
# space.
#
# Note: when color equality needs to be decided, implementations, unless
# documented otherwise, will treat two colors to be equal if all their red,
# green, blue and alpha values each differ by at most 1e-5.
#
# Example (Java):
#
#      import com.google.type.Color;
#
#      // ...
#      public static java.awt.Color fromProto(Color protocolor) {
#        float alpha = protocolor.hasAlpha()
#            ? protocolor.getAlpha().getValue()
#            : 1.0;
#
#        return new java.awt.Color(
#            protocolor.getRed(),
#            protocolor.getGreen(),
#            protocolor.getBlue(),
#            alpha);
#      }
#
#      public static Color toProto(java.awt.Color color) {
#        float red = (float) color.getRed();
#        float green = (float) color.getGreen();
#        float blue = (float) color.getBlue();
#        float denominator = 255.0;
#        Color.Builder resultBuilder =
#            Color
#                .newBuilder()
#                .setRed(red / denominator)
#                .setGreen(green / denominator)
#                .setBlue(blue / denominator);
#        int alpha = color.getAlpha();
#        if (alpha != 255) {
#          result.setAlpha(
#              FloatValue
#                  .newBuilder()
#                  .setValue(((float) alpha) / denominator)
#                  .build());
#        }
#        return resultBuilder.build();
#      }
#      // ...
#
# Example (iOS / Obj-C):
#
#      // ...
#      static UIColor* fromProto(Color* protocolor) {
#         float red = [protocolor red];
#         float green = [protocolor green];
#         float blue = [protocolor blue];
#         FloatValue* alpha_wrapper = [protocolor alpha];
#         float alpha = 1.0;
#         if (alpha_wrapper != nil) {
#           alpha = [alpha_wrapper value];
#         }
#         return [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
#      }
#
#      static Color* toProto(UIColor* color) {
#          CGFloat red, green, blue, alpha;
#          if (![color getRed:&red green:&green blue:&blue alpha:&alpha]) {
#            return nil;
#          }
#          Color* result = [[Color alloc] init];
#          [result setRed:red];
#          [result setGreen:green];
#          [result setBlue:blue];
#          if (alpha <= 0.9999) {
#            [result setAlpha:floatWrapperWithValue(alpha)];
#          }
#          [result autorelease];
#          return result;
#     }
#     // ...
#
#  Example (JavaScript):
#
#     // ...
#
#     var protoToCssColor = function(rgb_color) {
#        var redFrac = rgb_color.red || 0.0;
#        var greenFrac = rgb_color.green || 0.0;
#        var blueFrac = rgb_color.blue || 0.0;
#        var red = Math.floor(redFrac * 255);
#        var green = Math.floor(greenFrac * 255);
#        var blue = Math.floor(blueFrac * 255);
#
#        if (!('alpha' in rgb_color)) {
#           return rgbToCssColor_(red, green, blue);
#        }
#
#        var alphaFrac = rgb_color.alpha.value || 0.0;
#        var rgbParams = [red, green, blue].join(',');
#        return ['rgba(', rgbParams, ',', alphaFrac, ')'].join('');
#     };
#
#     var rgbToCssColor_ = function(red, green, blue) {
#       var rgbNumber = new Number((red << 16) | (green << 8) | blue);
#       var hexString = rgbNumber.toString(16);
#       var missingZeros = 6 - hexString.length;
#       var resultBuilder = ['#'];
#       for (var i = 0; i < missingZeros; i++) {
#          resultBuilder.push('0');
#       }
#       resultBuilder.push(hexString);
#       return resultBuilder.join('');
#     };
#
#     // ...
"red": 3.14, # The amount of red in the color as a value in the interval [0, 1].
"alpha": 3.14, # The fraction of this color that should be applied to the pixel. That is,
# the final pixel color is defined by the equation:
#
#   pixel color = alpha * (this color) + (1.0 - alpha) * (background color)
#
# This means that a value of 1.0 corresponds to a solid color, whereas
# a value of 0.0 corresponds to a completely transparent color. This
# uses a wrapper message rather than a simple float scalar so that it is
# possible to distinguish between a default value and the value being unset.
# If omitted, this color object is to be rendered as a solid color
# (as if the alpha value had been explicitly given with a value of 1.0).
"blue": 3.14, # The amount of blue in the color as a value in the interval [0, 1].
"green": 3.14, # The amount of green in the color as a value in the interval [0, 1].
},
"score": 3.14, # Image-specific score for this color. Value in range [0, 1].
},
],
},
},
"cropHintsAnnotation": { # Set of crop hints that are used to generate new crops when serving images. # If present, crop hints have completed successfully.
"cropHints": [ # Crop hint results.
{ # Single crop hint that is used to generate a new crop when serving an image.
"boundingPoly": { # A bounding polygon for the detected image annotation. # The bounding polygon for the crop region. The coordinates of the bounding
# box are in the original image's scale.
"vertices": [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image.
# NOTE: the vertex coordinates are in the same scale as the original image.
"x": 42, # X coordinate.
"y": 42, # Y coordinate.
},
],
"normalizedVertices": [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image.
# NOTE: the normalized vertex coordinates are relative to the original image
# and range from 0 to 1.
"x": 3.14, # X coordinate.
"y": 3.14, # Y coordinate.
},
],
},
"importanceFraction": 3.14, # Fraction of importance of this salient region with respect to the original
# image.
"confidence": 3.14, # Confidence of this being a salient region.  Range [0, 1].
},
],
},
"fullTextAnnotation": { # TextAnnotation contains a structured representation of OCR extracted text. # If present, text (OCR) detection or document (OCR) text detection has
# completed successfully.
# This annotation provides the structural hierarchy for the OCR detected
# text.
# The hierarchy of an OCR extracted text structure is like this:
#     TextAnnotation -> Page -> Block -> Paragraph -> Word -> Symbol
# Each structural component, starting from Page, may further have their own
# properties. Properties describe detected languages, breaks, etc. Please refer
# to the TextAnnotation.TextProperty message definition below for more
# detail.
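#
# For example, a hedged sketch (an illustration, not part of the generated
# reference) of reassembling the detected words from this hierarchy, assuming
# `annotation` holds this fullTextAnnotation dict:
#
#     for page in annotation.get('pages', []):
#       for block in page.get('blocks', []):
#         for paragraph in block.get('paragraphs', []):
#           for word in paragraph.get('words', []):
#             print(''.join(s.get('text', '') for s in word.get('symbols', [])))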
"text": "A String", # UTF-8 text detected on the pages.
"pages": [ # List of pages detected by OCR.
{ # Detected page from OCR.
"blocks": [ # List of blocks of text, images etc on this page.
{ # Logical element on the page.
"blockType": "A String", # Detected block type (text, image etc) for this block.
"paragraphs": [ # List of paragraphs in this block (if this block is of type text).
{ # Structural unit of text representing a number of words in certain order.
"confidence": 3.14, # Confidence of the OCR results for the paragraph. Range [0, 1].
"property": { # Additional information detected on the structural component. # Additional information detected for the paragraph.
"detectedBreak": { # Detected start or end of a structural component. # Detected start or end of a text segment.
"type": "A String", # Detected break type.
"isPrefix": True or False, # True if break prepends the element.
},
"detectedLanguages": [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
"confidence": 3.14, # Confidence of detected language. Range [0, 1].
"languageCode": "A String", # The BCP-47 language code, such as "en-US" or "sr-Latn". For more
# information, see
# http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
},
"boundingBox": { # A bounding polygon for the detected image annotation. # The bounding box for the paragraph.
# The vertices are in the order of top-left, top-right, bottom-right,
# bottom-left. When a rotation of the bounding box is detected the rotation
# is represented as around the top-left corner as defined when the text is
# read in the 'natural' orientation.
# For example:
#   * when the text is horizontal it might look like:
#      0----1
#      |    |
#      3----2
#   * when it's rotated 180 degrees around the top-left corner it becomes:
#      2----3
#      |    |
#      1----0
#   and the vertex order will still be (0, 1, 2, 3).
"vertices": [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image.
# NOTE: the vertex coordinates are in the same scale as the original image.
"x": 42, # X coordinate.
"y": 42, # Y coordinate.
},
],
"normalizedVertices": [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image.
# NOTE: the normalized vertex coordinates are relative to the original image
# and range from 0 to 1.
"x": 3.14, # X coordinate.
"y": 3.14, # Y coordinate.
},
],
},
"words": [ # List of all words in this paragraph.
{ # A word representation.
"confidence": 3.14, # Confidence of the OCR results for the word. Range [0, 1].
"boundingBox": { # A bounding polygon for the detected image annotation. # The bounding box for the word.
# The vertices are in the order of top-left, top-right, bottom-right,
# bottom-left. When a rotation of the bounding box is detected the rotation
# is represented as around the top-left corner as defined when the text is
# read in the 'natural' orientation.
# For example:
#   * when the text is horizontal it might look like:
#      0----1
#      |    |
#      3----2
#   * when it's rotated 180 degrees around the top-left corner it becomes:
#      2----3
#      |    |
#      1----0
#   and the vertex order will still be (0, 1, 2, 3).
"vertices": [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image.
# NOTE: the vertex coordinates are in the same scale as the original image.
"x": 42, # X coordinate.
"y": 42, # Y coordinate.
},
],
"normalizedVertices": [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image.
# NOTE: the normalized vertex coordinates are relative to the original image
# and range from 0 to 1.
"x": 3.14, # X coordinate.
"y": 3.14, # Y coordinate.
},
],
},
"property": { # Additional information detected on the structural component. # Additional information detected for the word.
"detectedBreak": { # Detected start or end of a structural component. # Detected start or end of a text segment.
"type": "A String", # Detected break type.
"isPrefix": True or False, # True if break prepends the element.
},
"detectedLanguages": [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
"confidence": 3.14, # Confidence of detected language. Range [0, 1].
"languageCode": "A String", # The BCP-47 language code, such as "en-US" or "sr-Latn". For more
# information, see
# http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
},
"symbols": [ # List of symbols in the word.
# The order of the symbols follows the natural reading order.
{ # A single symbol representation.
"confidence": 3.14, # Confidence of the OCR results for the symbol. Range [0, 1].
"property": { # Additional information detected on the structural component. # Additional information detected for the symbol.
"detectedBreak": { # Detected start or end of a structural component. # Detected start or end of a text segment.
"type": "A String", # Detected break type.
"isPrefix": True or False, # True if break prepends the element.
},
"detectedLanguages": [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
"confidence": 3.14, # Confidence of detected language. Range [0, 1].
"languageCode": "A String", # The BCP-47 language code, such as "en-US" or "sr-Latn". For more
# information, see
# http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
},
"text": "A String", # The actual UTF-8 representation of the symbol.
"boundingBox": { # A bounding polygon for the detected image annotation. # The bounding box for the symbol.
# The vertices are in the order of top-left, top-right, bottom-right,
# bottom-left. When a rotation of the bounding box is detected the rotation
# is represented as around the top-left corner as defined when the text is
# read in the 'natural' orientation.
# For example:
#   * when the text is horizontal it might look like:
#      0----1
#      |    |
#      3----2
#   * when it's rotated 180 degrees around the top-left corner it becomes:
#      2----3
#      |    |
#      1----0
#   and the vertex order will still be (0, 1, 2, 3).
"vertices": [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image.
# NOTE: the vertex coordinates are in the same scale as the original image.
"x": 42, # X coordinate.
"y": 42, # Y coordinate.
},
],
"normalizedVertices": [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image.
# NOTE: the normalized vertex coordinates are relative to the original image
# and range from 0 to 1.
"x": 3.14, # X coordinate.
"y": 3.14, # Y coordinate.
},
],
},
},
],
},
],
},
],
"boundingBox": { # A bounding polygon for the detected image annotation. # The bounding box for the block.
# The vertices are in the order of top-left, top-right, bottom-right,
# bottom-left. When a rotation of the bounding box is detected the rotation
# is represented as around the top-left corner as defined when the text is
# read in the 'natural' orientation.
# For example:
#
# * when the text is horizontal it might look like:
#
#         0----1
#         |    |
#         3----2
#
# * when it's rotated 180 degrees around the top-left corner it becomes:
#
#         2----3
#         |    |
#         1----0
#
#   and the vertex order will still be (0, 1, 2, 3).
"vertices": [ # The bounding polygon vertices.
{ # A vertex represents a 2D point in the image.
# NOTE: the vertex coordinates are in the same scale as the original image.
"x": 42, # X coordinate.
"y": 42, # Y coordinate.
},
],
"normalizedVertices": [ # The bounding polygon normalized vertices.
{ # A vertex represents a 2D point in the image.
# NOTE: the normalized vertex coordinates are relative to the original image
# and range from 0 to 1.
"x": 3.14, # X coordinate.
"y": 3.14, # Y coordinate.
},
],
},
"confidence": 3.14, # Confidence of the OCR results on the block. Range [0, 1].
"property": { # Additional information detected on the structural component. # Additional information detected for the block.
"detectedBreak": { # Detected start or end of a structural component. # Detected start or end of a text segment.
"type": "A String", # Detected break type.
"isPrefix": True or False, # True if break prepends the element.
},
"detectedLanguages": [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
"confidence": 3.14, # Confidence of detected language. Range [0, 1].
"languageCode": "A String", # The BCP-47 language code, such as "en-US" or "sr-Latn". For more
# information, see
# http://www.unicode.org/reports/tr35/#Unicode_locale_identifier.
},
],
},
},
],
"property": { # Additional information detected on the structural component. # Additional information detected on the page.
"detectedBreak": { # Detected start or end of a structural component. # Detected start or end of a text segment.
"type": "A String", # Detected break type.
"isPrefix": True or False, # True if break prepends the element.
},
"detectedLanguages": [ # A list of detected languages together with confidence.
{ # Detected language for a structural component.
"confidence": 3.14, # Confidence of detected language. Range [0, 1].
|  | 982 | "languageCode": "A String", # The BCP-47 language code, such as "en-US" or "sr-Latn". For more | 
|  | 983 | # information, see | 
|  | 984 | # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier. | 
|  | 985 | }, | 
|  | 986 | ], | 
|  | 987 | }, | 
|  | 988 | "width": 42, # Page width. For PDFs the unit is points. For images (including | 
|  | 989 | # TIFFs) the unit is pixels. | 
|  | 990 | "confidence": 3.14, # Confidence of the OCR results on the page. Range [0, 1]. | 
|  | 991 | "height": 42, # Page height. For PDFs the unit is points. For images (including | 
|  | 992 | # TIFFs) the unit is pixels. | 
|  | 993 | }, | 
|  | 994 | ], | 
|  | 995 | }, | 
|  | 996 | "error": { # The `Status` type defines a logical error model that is suitable for # If set, represents the error message for the operation. | 
|  | 997 | # Note that filled-in image annotations are guaranteed to be | 
|  | 998 | # correct, even when `error` is set. | 
|  | 999 | # different programming environments, including REST APIs and RPC APIs. It is | 
|  | 1000 | # used by [gRPC](https://github.com/grpc). Each `Status` message contains | 
|  | 1001 | # three pieces of data: error code, error message, and error details. | 
|  | 1002 | # | 
|  | 1003 | # You can find out more about this error model and how to work with it in the | 
|  | 1004 | # [API Design Guide](https://cloud.google.com/apis/design/errors). | 
|  | 1005 | "code": 42, # The status code, which should be an enum value of google.rpc.Code. | 
|  | 1006 | "details": [ # A list of messages that carry the error details.  There is a common set of | 
|  | 1007 | # message types for APIs to use. | 
|  | 1008 | { | 
|  | 1009 | "a_key": "", # Properties of the object. Contains field @type with type URL. | 
|  | 1010 | }, | 
|  | 1011 | ], | 
|  | 1012 | "message": "A String", # A developer-facing error message, which should be in English. Any | 
|  | 1013 | # user-facing error message should be localized and sent in the | 
|  | 1014 | # google.rpc.Status.details field, or localized by the client. | 
|  | 1015 | }, | 
|  | 1016 | "localizedObjectAnnotations": [ # If present, localized object detection has completed successfully. | 
|  | 1017 | # This will be sorted descending by confidence score. | 
|  | 1018 | { # Set of detected objects with bounding boxes. | 
|  | 1019 | "boundingPoly": { # A bounding polygon for the detected image annotation. # Image region to which this object belongs. This must be populated. | 
|  | 1020 | "vertices": [ # The bounding polygon vertices. | 
|  | 1021 | { # A vertex represents a 2D point in the image. | 
|  | 1022 | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | 1023 | "x": 42, # X coordinate. | 
|  | 1024 | "y": 42, # Y coordinate. | 
|  | 1025 | }, | 
|  | 1026 | ], | 
|  | 1027 | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | 1028 | { # A vertex represents a 2D point in the image. | 
|  | 1029 | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | 1030 | # and range from 0 to 1. | 
|  | 1031 | "x": 3.14, # X coordinate. | 
|  | 1032 | "y": 3.14, # Y coordinate. | 
|  | 1033 | }, | 
|  | 1034 | ], | 
|  | 1035 | }, | 
|  | 1036 | "name": "A String", # Object name, expressed in its `language_code` language. | 
|  | 1037 | "mid": "A String", # Object ID that should align with EntityAnnotation mid. | 
|  | 1038 | "languageCode": "A String", # The BCP-47 language code, such as "en-US" or "sr-Latn". For more | 
|  | 1039 | # information, see | 
|  | 1040 | # http://www.unicode.org/reports/tr35/#Unicode_locale_identifier. | 
|  | 1041 | "score": 3.14, # Score of the result. Range [0, 1]. | 
|  | 1042 | }, | 
|  | 1043 | ], | 
|  | 1044 | "labelAnnotations": [ # If present, label detection has completed successfully. | 
|  | 1045 | { # Set of detected entity features. | 
|  | 1046 | "topicality": 3.14, # The relevancy of the ICA (Image Content Annotation) label to the | 
|  | 1047 | # image. For example, the relevancy of "tower" is likely higher to an image | 
|  | 1048 | # containing the detected "Eiffel Tower" than to an image containing a | 
|  | 1049 | # detected distant towering building, even though the confidence that | 
|  | 1050 | # there is a tower in each image may be the same. Range [0, 1]. | 
|  | 1051 | "locale": "A String", # The language code for the locale in which the entity textual | 
|  | 1052 | # `description` is expressed. | 
|  | 1053 | "locations": [ # The location information for the detected entity. Multiple | 
|  | 1054 | # `LocationInfo` elements can be present because one location may | 
|  | 1055 | # indicate the location of the scene in the image, and another location | 
|  | 1056 | # may indicate the location of the place where the image was taken. | 
|  | 1057 | # Location information is usually present for landmarks. | 
|  | 1058 | { # Detected entity location information. | 
|  | 1059 | "latLng": { # An object representing a latitude/longitude pair. This is expressed as a pair # lat/long location coordinates. | 
|  | 1060 | # of doubles representing degrees latitude and degrees longitude. Unless | 
|  | 1061 | # specified otherwise, this must conform to the | 
|  | 1062 | # <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84 | 
|  | 1063 | # standard</a>. Values must be within normalized ranges. | 
|  | 1064 | "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0]. | 
|  | 1065 | "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0]. | 
|  | 1066 | }, | 
|  | 1067 | }, | 
|  | 1068 | ], | 
|  | 1069 | "mid": "A String", # Opaque entity ID. Some IDs may be available in | 
|  | 1070 | # [Google Knowledge Graph Search | 
|  | 1071 | # API](https://developers.google.com/knowledge-graph/). | 
|  | 1072 | "description": "A String", # Entity textual description, expressed in its `locale` language. | 
|  | 1073 | "confidence": 3.14, # **Deprecated. Use `score` instead.** | 
|  | 1074 | # The accuracy of the entity detection in an image. | 
|  | 1075 | # For example, for an image in which the "Eiffel Tower" entity is detected, | 
|  | 1076 | # this field represents the confidence that there is a tower in the query | 
|  | 1077 | # image. Range [0, 1]. | 
|  | 1078 | "boundingPoly": { # A bounding polygon for the detected image annotation. # Image region to which this entity belongs. Not produced | 
|  | 1079 | # for `LABEL_DETECTION` features. | 
|  | 1080 | "vertices": [ # The bounding polygon vertices. | 
|  | 1081 | { # A vertex represents a 2D point in the image. | 
|  | 1082 | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | 1083 | "x": 42, # X coordinate. | 
|  | 1084 | "y": 42, # Y coordinate. | 
|  | 1085 | }, | 
|  | 1086 | ], | 
|  | 1087 | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | 1088 | { # A vertex represents a 2D point in the image. | 
|  | 1089 | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | 1090 | # and range from 0 to 1. | 
|  | 1091 | "x": 3.14, # X coordinate. | 
|  | 1092 | "y": 3.14, # Y coordinate. | 
|  | 1093 | }, | 
|  | 1094 | ], | 
|  | 1095 | }, | 
|  | 1096 | "properties": [ # Some entities may have optional user-supplied `Property` (name/value) | 
|  | 1097 | # fields, such as a score or string that qualifies the entity. | 
|  | 1098 | { # A `Property` consists of a user-supplied name/value pair. | 
|  | 1099 | "uint64Value": "A String", # Value of numeric properties. | 
|  | 1100 | "value": "A String", # Value of the property. | 
|  | 1101 | "name": "A String", # Name of the property. | 
|  | 1102 | }, | 
|  | 1103 | ], | 
|  | 1104 | "score": 3.14, # Overall score of the result. Range [0, 1]. | 
|  | 1105 | }, | 
|  | 1106 | ], | 
|  | 1107 | "logoAnnotations": [ # If present, logo detection has completed successfully. | 
|  | 1108 | { # Set of detected entity features. | 
|  | 1109 | "topicality": 3.14, # The relevancy of the ICA (Image Content Annotation) label to the | 
|  | 1110 | # image. For example, the relevancy of "tower" is likely higher to an image | 
|  | 1111 | # containing the detected "Eiffel Tower" than to an image containing a | 
|  | 1112 | # detected distant towering building, even though the confidence that | 
|  | 1113 | # there is a tower in each image may be the same. Range [0, 1]. | 
|  | 1114 | "locale": "A String", # The language code for the locale in which the entity textual | 
|  | 1115 | # `description` is expressed. | 
|  | 1116 | "locations": [ # The location information for the detected entity. Multiple | 
|  | 1117 | # `LocationInfo` elements can be present because one location may | 
|  | 1118 | # indicate the location of the scene in the image, and another location | 
|  | 1119 | # may indicate the location of the place where the image was taken. | 
|  | 1120 | # Location information is usually present for landmarks. | 
|  | 1121 | { # Detected entity location information. | 
|  | 1122 | "latLng": { # An object representing a latitude/longitude pair. This is expressed as a pair # lat/long location coordinates. | 
|  | 1123 | # of doubles representing degrees latitude and degrees longitude. Unless | 
|  | 1124 | # specified otherwise, this must conform to the | 
|  | 1125 | # <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84 | 
|  | 1126 | # standard</a>. Values must be within normalized ranges. | 
|  | 1127 | "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0]. | 
|  | 1128 | "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0]. | 
|  | 1129 | }, | 
|  | 1130 | }, | 
|  | 1131 | ], | 
|  | 1132 | "mid": "A String", # Opaque entity ID. Some IDs may be available in | 
|  | 1133 | # [Google Knowledge Graph Search | 
|  | 1134 | # API](https://developers.google.com/knowledge-graph/). | 
|  | 1135 | "description": "A String", # Entity textual description, expressed in its `locale` language. | 
|  | 1136 | "confidence": 3.14, # **Deprecated. Use `score` instead.** | 
|  | 1137 | # The accuracy of the entity detection in an image. | 
|  | 1138 | # For example, for an image in which the "Eiffel Tower" entity is detected, | 
|  | 1139 | # this field represents the confidence that there is a tower in the query | 
|  | 1140 | # image. Range [0, 1]. | 
|  | 1141 | "boundingPoly": { # A bounding polygon for the detected image annotation. # Image region to which this entity belongs. Not produced | 
|  | 1142 | # for `LABEL_DETECTION` features. | 
|  | 1143 | "vertices": [ # The bounding polygon vertices. | 
|  | 1144 | { # A vertex represents a 2D point in the image. | 
|  | 1145 | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | 1146 | "x": 42, # X coordinate. | 
|  | 1147 | "y": 42, # Y coordinate. | 
|  | 1148 | }, | 
|  | 1149 | ], | 
|  | 1150 | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | 1151 | { # A vertex represents a 2D point in the image. | 
|  | 1152 | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | 1153 | # and range from 0 to 1. | 
|  | 1154 | "x": 3.14, # X coordinate. | 
|  | 1155 | "y": 3.14, # Y coordinate. | 
|  | 1156 | }, | 
|  | 1157 | ], | 
|  | 1158 | }, | 
|  | 1159 | "properties": [ # Some entities may have optional user-supplied `Property` (name/value) | 
|  | 1160 | # fields, such as a score or string that qualifies the entity. | 
|  | 1161 | { # A `Property` consists of a user-supplied name/value pair. | 
|  | 1162 | "uint64Value": "A String", # Value of numeric properties. | 
|  | 1163 | "value": "A String", # Value of the property. | 
|  | 1164 | "name": "A String", # Name of the property. | 
|  | 1165 | }, | 
|  | 1166 | ], | 
|  | 1167 | "score": 3.14, # Overall score of the result. Range [0, 1]. | 
|  | 1168 | }, | 
|  | 1169 | ], | 
|  | 1170 | "context": { # If an image was produced from a file (e.g. a PDF), this message gives # If present, contextual information is needed to understand where this image | 
|  | 1171 | # comes from. | 
|  | 1172 | # information about the source of that image. | 
|  | 1173 | "uri": "A String", # The URI of the file used to produce the image. | 
|  | 1174 | "pageNumber": 42, # If the file was a PDF or TIFF, this field gives the page number within | 
|  | 1175 | # the file used to produce the image. | 
|  | 1176 | }, | 
|  | 1177 | "faceAnnotations": [ # If present, face detection has completed successfully. | 
|  | 1178 | { # A face annotation object contains the results of face detection. | 
|  | 1179 | "surpriseLikelihood": "A String", # Surprise likelihood. | 
|  | 1180 | "headwearLikelihood": "A String", # Headwear likelihood. | 
|  | 1181 | "angerLikelihood": "A String", # Anger likelihood. | 
|  | 1182 | "detectionConfidence": 3.14, # Detection confidence. Range [0, 1]. | 
|  | 1183 | "landmarkingConfidence": 3.14, # Face landmarking confidence. Range [0, 1]. | 
|  | 1184 | "blurredLikelihood": "A String", # Blurred likelihood. | 
|  | 1185 | "tiltAngle": 3.14, # Pitch angle, which indicates the upwards/downwards angle that the face is | 
|  | 1186 | # pointing relative to the image's horizontal plane. Range [-180,180]. | 
|  | 1187 | "sorrowLikelihood": "A String", # Sorrow likelihood. | 
|  | 1188 | "panAngle": 3.14, # Yaw angle, which indicates the leftward/rightward angle that the face is | 
|  | 1189 | # pointing relative to the vertical plane perpendicular to the image. Range | 
|  | 1190 | # [-180,180]. | 
|  | 1191 | "landmarks": [ # Detected face landmarks. | 
|  | 1192 | { # A face-specific landmark (for example, a face feature). | 
|  | 1193 | "position": { # A 3D position in the image, used primarily for Face detection landmarks. # Face landmark position. | 
|  | 1194 | # A valid Position must have both x and y coordinates. | 
|  | 1195 | # The position coordinates are in the same scale as the original image. | 
|  | 1196 | "z": 3.14, # Z coordinate (or depth). | 
|  | 1197 | "y": 3.14, # Y coordinate. | 
|  | 1198 | "x": 3.14, # X coordinate. | 
|  | 1199 | }, | 
|  | 1200 | "type": "A String", # Face landmark type. | 
|  | 1201 | }, | 
|  | 1202 | ], | 
|  | 1203 | "rollAngle": 3.14, # Roll angle, which indicates the amount of clockwise/anti-clockwise rotation | 
|  | 1204 | # of the face relative to the image vertical about the axis perpendicular to | 
|  | 1205 | # the face. Range [-180,180]. | 
|  | 1206 | "underExposedLikelihood": "A String", # Under-exposed likelihood. | 
|  | 1207 | "joyLikelihood": "A String", # Joy likelihood. | 
|  | 1208 | "fdBoundingPoly": { # A bounding polygon for the detected image annotation. # The `fd_bounding_poly` bounding polygon is tighter than the | 
|  | 1209 | # `boundingPoly`, and encloses only the skin part of the face. Typically, it | 
|  | 1210 | # is used to eliminate the face from any image analysis that detects the | 
|  | 1211 | # "amount of skin" visible in an image. It is not based on the | 
|  | 1212 | # landmarker results, only on the initial face detection, hence | 
|  | 1213 | # the <code>fd</code> (face detection) prefix. | 
|  | 1214 | "vertices": [ # The bounding polygon vertices. | 
|  | 1215 | { # A vertex represents a 2D point in the image. | 
|  | 1216 | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | 1217 | "x": 42, # X coordinate. | 
|  | 1218 | "y": 42, # Y coordinate. | 
|  | 1219 | }, | 
|  | 1220 | ], | 
|  | 1221 | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | 1222 | { # A vertex represents a 2D point in the image. | 
|  | 1223 | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | 1224 | # and range from 0 to 1. | 
|  | 1225 | "x": 3.14, # X coordinate. | 
|  | 1226 | "y": 3.14, # Y coordinate. | 
|  | 1227 | }, | 
|  | 1228 | ], | 
|  | 1229 | }, | 
|  | 1230 | "boundingPoly": { # A bounding polygon for the detected image annotation. # The bounding polygon around the face. The coordinates of the bounding box | 
|  | 1231 | # are in the original image's scale. | 
|  | 1232 | # The bounding box is computed to "frame" the face in accordance with human | 
|  | 1233 | # expectations. It is based on the landmarker results. | 
|  | 1234 | # Note that one or more x and/or y coordinates may not be generated in the | 
|  | 1235 | # `BoundingPoly` (the polygon will be unbounded) if only a partial face | 
|  | 1236 | # appears in the image to be annotated. | 
|  | 1237 | "vertices": [ # The bounding polygon vertices. | 
|  | 1238 | { # A vertex represents a 2D point in the image. | 
|  | 1239 | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | 1240 | "x": 42, # X coordinate. | 
|  | 1241 | "y": 42, # Y coordinate. | 
|  | 1242 | }, | 
|  | 1243 | ], | 
|  | 1244 | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | 1245 | { # A vertex represents a 2D point in the image. | 
|  | 1246 | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | 1247 | # and range from 0 to 1. | 
|  | 1248 | "x": 3.14, # X coordinate. | 
|  | 1249 | "y": 3.14, # Y coordinate. | 
|  | 1250 | }, | 
|  | 1251 | ], | 
|  | 1252 | }, | 
|  | 1253 | }, | 
|  | 1254 | ], | 
|  | 1255 | "landmarkAnnotations": [ # If present, landmark detection has completed successfully. | 
|  | 1256 | { # Set of detected entity features. | 
|  | 1257 | "topicality": 3.14, # The relevancy of the ICA (Image Content Annotation) label to the | 
|  | 1258 | # image. For example, the relevancy of "tower" is likely higher to an image | 
|  | 1259 | # containing the detected "Eiffel Tower" than to an image containing a | 
|  | 1260 | # detected distant towering building, even though the confidence that | 
|  | 1261 | # there is a tower in each image may be the same. Range [0, 1]. | 
|  | 1262 | "locale": "A String", # The language code for the locale in which the entity textual | 
|  | 1263 | # `description` is expressed. | 
|  | 1264 | "locations": [ # The location information for the detected entity. Multiple | 
|  | 1265 | # `LocationInfo` elements can be present because one location may | 
|  | 1266 | # indicate the location of the scene in the image, and another location | 
|  | 1267 | # may indicate the location of the place where the image was taken. | 
|  | 1268 | # Location information is usually present for landmarks. | 
|  | 1269 | { # Detected entity location information. | 
|  | 1270 | "latLng": { # An object representing a latitude/longitude pair. This is expressed as a pair # lat/long location coordinates. | 
|  | 1271 | # of doubles representing degrees latitude and degrees longitude. Unless | 
|  | 1272 | # specified otherwise, this must conform to the | 
|  | 1273 | # <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84 | 
|  | 1274 | # standard</a>. Values must be within normalized ranges. | 
|  | 1275 | "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0]. | 
|  | 1276 | "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0]. | 
|  | 1277 | }, | 
|  | 1278 | }, | 
|  | 1279 | ], | 
|  | 1280 | "mid": "A String", # Opaque entity ID. Some IDs may be available in | 
|  | 1281 | # [Google Knowledge Graph Search | 
|  | 1282 | # API](https://developers.google.com/knowledge-graph/). | 
|  | 1283 | "description": "A String", # Entity textual description, expressed in its `locale` language. | 
|  | 1284 | "confidence": 3.14, # **Deprecated. Use `score` instead.** | 
|  | 1285 | # The accuracy of the entity detection in an image. | 
|  | 1286 | # For example, for an image in which the "Eiffel Tower" entity is detected, | 
|  | 1287 | # this field represents the confidence that there is a tower in the query | 
|  | 1288 | # image. Range [0, 1]. | 
|  | 1289 | "boundingPoly": { # A bounding polygon for the detected image annotation. # Image region to which this entity belongs. Not produced | 
|  | 1290 | # for `LABEL_DETECTION` features. | 
|  | 1291 | "vertices": [ # The bounding polygon vertices. | 
|  | 1292 | { # A vertex represents a 2D point in the image. | 
|  | 1293 | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | 1294 | "x": 42, # X coordinate. | 
|  | 1295 | "y": 42, # Y coordinate. | 
|  | 1296 | }, | 
|  | 1297 | ], | 
|  | 1298 | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | 1299 | { # A vertex represents a 2D point in the image. | 
|  | 1300 | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | 1301 | # and range from 0 to 1. | 
|  | 1302 | "x": 3.14, # X coordinate. | 
|  | 1303 | "y": 3.14, # Y coordinate. | 
|  | 1304 | }, | 
|  | 1305 | ], | 
|  | 1306 | }, | 
|  | 1307 | "properties": [ # Some entities may have optional user-supplied `Property` (name/value) | 
|  | 1308 | # fields, such as a score or string that qualifies the entity. | 
|  | 1309 | { # A `Property` consists of a user-supplied name/value pair. | 
|  | 1310 | "uint64Value": "A String", # Value of numeric properties. | 
|  | 1311 | "value": "A String", # Value of the property. | 
|  | 1312 | "name": "A String", # Name of the property. | 
|  | 1313 | }, | 
|  | 1314 | ], | 
|  | 1315 | "score": 3.14, # Overall score of the result. Range [0, 1]. | 
|  | 1316 | }, | 
|  | 1317 | ], | 
|  | 1318 | }, | 
|  | 1319 | ], | 
|  | 1320 | }, | 
|  | 1321 | ], | 
|  | 1322 | }</pre> | 
|  | 1323 | </div> | 
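<p>For orientation, the following is a minimal sketch of calling this method with the google-api-python-client discovery client. It assumes the library is installed and Application Default Credentials are configured; the project ID, bucket, and object names are placeholders rather than values taken from this reference, and only a small part of the response schema above is read back.</p>
<pre># Minimal sketch (assumptions: google-api-python-client installed,
# Application Default Credentials configured, placeholder resource names).
from googleapiclient.discovery import build

service = build('vision', 'v1p2beta1')

parent = 'projects/my-project/locations/us'  # placeholder
request_body = {
    'requests': [
        {
            'inputConfig': {
                'gcsSource': {'uri': 'gs://my-bucket/invoice.pdf'},  # placeholder
                'mimeType': 'application/pdf',
            },
            'features': [{'type': 'DOCUMENT_TEXT_DETECTION'}],
            # At most 5 pages are annotated per file; choose which ones here.
            'pages': [1, 2],
        },
    ],
}

response = service.projects().files().annotate(
    parent=parent, body=request_body).execute()

# Each AnnotateFileResponse carries one AnnotateImageResponse per extracted page.
for file_response in response.get('responses', []):
    for image_response in file_response.get('responses', []):
        text = image_response.get('fullTextAnnotation', {}).get('text', '')
        print(text[:200])
</pre>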
|  | 1324 |  | 
|  | 1325 | <div class="method"> | 
|  | 1326 | <code class="details" id="asyncBatchAnnotate">asyncBatchAnnotate(parent, body=None, x__xgafv=None)</code> | 
|  | 1327 | <pre>Run asynchronous image detection and annotation for a list of generic | 
|  | 1328 | files, such as PDF files, which may contain multiple pages and multiple | 
|  | 1329 | images per page. Progress and results can be retrieved through the | 
|  | 1330 | `google.longrunning.Operations` interface. | 
|  | 1331 | `Operation.metadata` contains `OperationMetadata` (metadata). | 
|  | 1332 | `Operation.response` contains `AsyncBatchAnnotateFilesResponse` (results). | 
|  | 1333 |  | 
|  | 1334 | Args: | 
|  | 1335 | parent: string, Optional. Target project and location to make a call. | 
|  | 1336 |  | 
|  | 1337 | Format: `projects/{project-id}/locations/{location-id}`. | 
|  | 1338 |  | 
|  | 1339 | If no parent is specified, a region will be chosen automatically. | 
|  | 1340 |  | 
|  | 1341 | Supported location-ids: | 
|  | 1342 | `us`: USA country only, | 
|  | 1343 | `asia`: East Asia areas, like Japan, Taiwan, | 
|  | 1344 | `eu`: The European Union. | 
|  | 1345 |  | 
|  | 1346 | Example: `projects/project-A/locations/eu`. (required) | 
|  | 1347 | body: object, The request body. | 
|  | 1348 | The object takes the form of: | 
|  | 1349 |  | 
|  | 1350 | { # Multiple async file annotation requests are batched into a single service | 
|  | 1351 | # call. | 
|  | 1352 | "requests": [ # Required. Individual async file annotation requests for this batch. | 
|  | 1353 | { # An offline file annotation request. | 
|  | 1354 | "imageContext": { # Image context and/or feature-specific parameters. # Additional context that may accompany the image(s) in the file. | 
|  | 1355 | "cropHintsParams": { # Parameters for crop hints annotation request. # Parameters for crop hints annotation request. | 
|  | 1356 | "aspectRatios": [ # Aspect ratios in floats, representing the ratio of the width to the height | 
|  | 1357 | # of the image. For example, if the desired aspect ratio is 4/3, the | 
|  | 1358 | # corresponding float value should be 1.33333.  If not specified, the | 
|  | 1359 | # best possible crop is returned. The number of provided aspect ratios is | 
|  | 1360 | # limited to a maximum of 16; any aspect ratios provided after the 16th are | 
|  | 1361 | # ignored. | 
|  | 1362 | 3.14, | 
|  | 1363 | ], | 
|  | 1364 | }, | 
|  | 1365 | "productSearchParams": { # Parameters for a product search request. # Parameters for product search. | 
|  | 1366 | "productCategories": [ # The list of product categories to search in. Currently, we only consider | 
|  | 1367 | # the first category, and either "homegoods-v2", "apparel-v2", "toys-v2", | 
|  | 1368 | # "packagedgoods-v1", or "general-v1" should be specified. The legacy | 
|  | 1369 | # categories "homegoods", "apparel", and "toys" are still supported but will | 
|  | 1370 | # be deprecated. For new products, please use "homegoods-v2", "apparel-v2", | 
|  | 1371 | # or "toys-v2" for better product search accuracy. It is recommended to | 
|  | 1372 | # migrate existing products to these categories as well. | 
|  | 1373 | "A String", | 
|  | 1374 | ], | 
|  | 1375 | "boundingPoly": { # A bounding polygon for the detected image annotation. # The bounding polygon around the area of interest in the image. | 
|  | 1376 | # If it is not specified, system discretion will be applied. | 
|  | 1377 | "vertices": [ # The bounding polygon vertices. | 
|  | 1378 | { # A vertex represents a 2D point in the image. | 
|  | 1379 | # NOTE: the vertex coordinates are in the same scale as the original image. | 
|  | 1380 | "x": 42, # X coordinate. | 
|  | 1381 | "y": 42, # Y coordinate. | 
|  | 1382 | }, | 
|  | 1383 | ], | 
|  | 1384 | "normalizedVertices": [ # The bounding polygon normalized vertices. | 
|  | 1385 | { # A vertex represents a 2D point in the image. | 
|  | 1386 | # NOTE: the normalized vertex coordinates are relative to the original image | 
|  | 1387 | # and range from 0 to 1. | 
|  | 1388 | "x": 3.14, # X coordinate. | 
|  | 1389 | "y": 3.14, # Y coordinate. | 
|  | 1390 | }, | 
|  | 1391 | ], | 
|  | 1392 | }, | 
|  | 1393 | "productSet": "A String", # The resource name of a ProductSet to be searched for similar images. | 
|  | 1394 | # | 
|  | 1395 | # Format is: | 
|  | 1396 | # `projects/PROJECT_ID/locations/LOC_ID/productSets/PRODUCT_SET_ID`. | 
|  | 1397 | "filter": "A String", # The filtering expression. This can be used to restrict search results based | 
|  | 1398 | # on Product labels. We currently support an AND of OR of key-value | 
|  | 1399 | # expressions, where each expression within an OR must have the same key. An | 
|  | 1400 | # '=' should be used to connect the key and value. | 
|  | 1401 | # | 
|  | 1402 | # For example, "(color = red OR color = blue) AND brand = Google" is | 
|  | 1403 | # acceptable, but "(color = red OR brand = Google)" is not acceptable. | 
|  | 1404 | # "color: red" is not acceptable because it uses a ':' instead of an '='. | 
|  | 1405 | }, | 
|  | 1406 | "languageHints": [ # List of languages to use for TEXT_DETECTION. In most cases, an empty value | 
|  | 1407 | # yields the best results since it enables automatic language detection. For | 
|  | 1408 | # languages based on the Latin alphabet, setting `language_hints` is not | 
|  | 1409 | # needed. In rare cases, when the language of the text in the image is known, | 
|  | 1410 | # setting a hint will help get better results (although it will be a | 
|  | 1411 | # significant hindrance if the hint is wrong). Text detection returns an | 
|  | 1412 | # error if one or more of the specified languages is not one of the | 
|  | 1413 | # [supported languages](https://cloud.google.com/vision/docs/languages). | 
|  | 1414 | "A String", | 
|  | 1415 | ], | 
|  | 1416 | "latLongRect": { # Rectangle determined by min and max `LatLng` pairs. # Not used. | 
|  | 1417 | "maxLatLng": { # An object representing a latitude/longitude pair. This is expressed as a pair # Max lat/long pair. | 
|  | 1418 | # of doubles representing degrees latitude and degrees longitude. Unless | 
|  | 1419 | # specified otherwise, this must conform to the | 
|  | 1420 | # <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84 | 
|  | 1421 | # standard</a>. Values must be within normalized ranges. | 
|  | 1422 | "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0]. | 
|  | 1423 | "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0]. | 
|  | 1424 | }, | 
|  | 1425 | "minLatLng": { # An object representing a latitude/longitude pair. This is expressed as a pair # Min lat/long pair. | 
|  | 1426 | # of doubles representing degrees latitude and degrees longitude. Unless | 
|  | 1427 | # specified otherwise, this must conform to the | 
|  | 1428 | # <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84 | 
|  | 1429 | # standard</a>. Values must be within normalized ranges. | 
|  | 1430 | "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0]. | 
|  | 1431 | "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0]. | 
|  | 1432 | }, | 
|  | 1433 | }, | 
|  | 1434 | "webDetectionParams": { # Parameters for web detection request. # Parameters for web detection. | 
|  | 1435 | "includeGeoResults": True or False, # Whether to include results derived from the geo information in the image. | 
|  | 1436 | }, | 
|  | 1437 | }, | 
|  | 1438 | "inputConfig": { # The desired input location and metadata. # Required. Information about the input file. | 
|  | 1439 | "content": "A String", # File content, represented as a stream of bytes. | 
|  | 1440 | # Note: As with all `bytes` fields, protocol buffers use a pure binary | 
|  | 1441 | # representation, whereas JSON representations use base64. | 
|  | 1442 | # | 
|  | 1443 | # Currently, this field only works for BatchAnnotateFiles requests. It does | 
|  | 1444 | # not work for AsyncBatchAnnotateFiles requests. | 
|  | 1445 | "mimeType": "A String", # The type of the file. Currently only "application/pdf", "image/tiff" and | 
|  | 1446 | # "image/gif" are supported. Wildcards are not supported. | 
|  | 1447 | "gcsSource": { # The Google Cloud Storage location where the input will be read from. # The Google Cloud Storage location to read the input from. | 
|  | 1448 | "uri": "A String", # Google Cloud Storage URI for the input file. This must only be a | 
|  | 1449 | # Google Cloud Storage object. Wildcards are not currently supported. | 
|  | 1450 | }, | 
|  | 1451 | }, | 
|  | 1452 | "features": [ # Required. Requested features. | 
|  | 1453 | { # The type of Google Cloud Vision API detection to perform, and the maximum | 
|  | 1454 | # number of results to return for that type. Multiple `Feature` objects can | 
|  | 1455 | # be specified in the `features` list. | 
|  | 1456 | "type": "A String", # The feature type. | 
|  | 1457 | "maxResults": 42, # Maximum number of results of this type. Does not apply to | 
|  | 1458 | # `TEXT_DETECTION`, `DOCUMENT_TEXT_DETECTION`, or `CROP_HINTS`. | 
|  | 1459 | "model": "A String", # Model to use for the feature. | 
|  | 1460 | # Supported values: "builtin/stable" (the default if unset) and | 
|  | 1461 | # "builtin/latest". | 
|  | 1462 | }, | 
|  | 1463 | ], | 
|  | 1464 | "outputConfig": { # The desired output location and metadata. # Required. The desired output location and metadata (e.g. format). | 
|  | 1465 | "batchSize": 42, # The max number of response protos to put into each output JSON file on | 
|  | 1466 | # Google Cloud Storage. | 
|  | 1467 | # The valid range is [1, 100]. If not specified, the default value is 20. | 
|  | 1468 | # | 
|  | 1469 | # For example, for one PDF file with 100 pages, 100 response protos will | 
|  | 1470 | # be generated. If `batch_size` = 20, then 5 JSON files, each | 
|  | 1471 | # containing 20 response protos, will be written under the prefix | 
|  | 1472 | # `gcs_destination`.`uri`. | 
|  | 1473 | # | 
|  | 1474 | # Currently, batch_size only applies to GcsDestination, with potential future | 
|  | 1475 | # support for other output configurations. | 
|  | 1476 | "gcsDestination": { # The Google Cloud Storage location where the output will be written to. # The Google Cloud Storage location to write the output(s) to. | 
|  | 1477 | "uri": "A String", # Google Cloud Storage URI prefix where the results will be stored. Results | 
|  | 1478 | # will be in JSON format and preceded by their corresponding input URI prefix. | 
|  | 1479 | # This field can represent either a Google Cloud Storage file prefix or a | 
|  | 1480 | # directory. In either case, the URI should be unique, because to get all of | 
|  | 1481 | # the output files you will need to do a wildcard Google Cloud Storage search | 
|  | 1482 | # on the URI prefix you provide. | 
|  | 1483 | # | 
|  | 1484 | # Examples: | 
|  | 1485 | # | 
|  | 1486 | # *    File Prefix: gs://bucket-name/here/filenameprefix   The output files | 
|  | 1487 | # will be created in gs://bucket-name/here/ and the names of the | 
|  | 1488 | # output files will begin with "filenameprefix". | 
|  | 1489 | # | 
|  | 1490 | # *    Directory Prefix: gs://bucket-name/some/location/   The output files | 
|  | 1491 | # will be created in gs://bucket-name/some/location/ and the names of the | 
|  | 1492 | # output files could be anything because there was no filename prefix | 
|  | 1493 | # specified. | 
|  | 1494 | # | 
|  | 1495 | # If there are multiple outputs, each response is still an AnnotateFileResponse, | 
|  | 1496 | # each of which contains some subset of the full list of AnnotateImageResponse | 
|  | 1497 | # messages. Multiple outputs can happen if, for example, the output JSON is too | 
|  | 1498 | # large and overflows into multiple sharded files. | 
|  | 1499 | }, | 
|  | 1500 | }, | 
|  | 1501 | }, | 
|  | 1502 | ], | 
|  | 1503 | "parent": "A String", # Optional. Target project and location to make a call. | 
|  | 1504 | # | 
|  | 1505 | # Format: `projects/{project-id}/locations/{location-id}`. | 
|  | 1506 | # | 
|  | 1507 | # If no parent is specified, a region will be chosen automatically. | 
|  | 1508 | # | 
|  | 1509 | # Supported location-ids: | 
|  | 1510 | #     `us`: USA country only, | 
|  | 1511 | #     `asia`: East Asia areas, like Japan, Taiwan, | 
|  | 1512 | #     `eu`: The European Union. | 
|  | 1513 | # | 
|  | 1514 | # Example: `projects/project-A/locations/eu`. | 
|  | 1515 | } | 
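# For illustration only (not part of the schema above): a minimal request
# body might look like the following, where the bucket, object, and output
# prefix names are placeholders.
#
#   {
#     "requests": [
#       {
#         "inputConfig": {
#           "gcsSource": {"uri": "gs://my-bucket/document.pdf"},
#           "mimeType": "application/pdf",
#         },
#         "features": [{"type": "DOCUMENT_TEXT_DETECTION"}],
#         "outputConfig": {
#           "gcsDestination": {"uri": "gs://my-bucket/vision-output/"},
#           "batchSize": 20,
#         },
#       },
#     ],
#   }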
|  | 1516 |  | 
|  | 1517 | x__xgafv: string, V1 error format. | 
|  | 1518 | Allowed values | 
|  | 1519 | 1 - v1 error format | 
|  | 1520 | 2 - v2 error format | 
|  | 1521 |  | 
|  | 1522 | Returns: | 
|  | 1523 | An object of the form: | 
|  | 1524 |  | 
|  | 1525 | { # This resource represents a long-running operation that is the result of a | 
|  | 1526 | # network API call. | 
|  | 1527 | "response": { # The normal response of the operation in case of success.  If the original | 
|  | 1528 | # method returns no data on success, such as `Delete`, the response is | 
|  | 1529 | # `google.protobuf.Empty`.  If the original method is standard | 
|  | 1530 | # `Get`/`Create`/`Update`, the response should be the resource.  For other | 
|  | 1531 | # methods, the response should have the type `XxxResponse`, where `Xxx` | 
|  | 1532 | # is the original method name.  For example, if the original method name | 
|  | 1533 | # is `TakeSnapshot()`, the inferred response type is | 
|  | 1534 | # `TakeSnapshotResponse`. | 
|  | 1535 | "a_key": "", # Properties of the object. Contains field @type with type URL. | 
|  | 1536 | }, | 
|  | 1537 | "metadata": { # Service-specific metadata associated with the operation.  It typically | 
|  | 1538 | # contains progress information and common metadata such as create time. | 
|  | 1539 | # Some services might not provide such metadata.  Any method that returns a | 
|  | 1540 | # long-running operation should document the metadata type, if any. | 
|  | 1541 | "a_key": "", # Properties of the object. Contains field @type with type URL. | 
|  | 1542 | }, | 
|  | 1543 | "name": "A String", # The server-assigned name, which is only unique within the same service that | 
|  | 1544 | # originally returns it. If you use the default HTTP mapping, the | 
|  | 1545 | # `name` should be a resource name ending with `operations/{unique_id}`. | 
|  | 1546 | "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation. | 
|  | 1547 | # different programming environments, including REST APIs and RPC APIs. It is | 
|  | 1548 | # used by [gRPC](https://github.com/grpc). Each `Status` message contains | 
|  | 1549 | # three pieces of data: error code, error message, and error details. | 
|  | 1550 | # | 
|  | 1551 | # You can find out more about this error model and how to work with it in the | 
|  | 1552 | # [API Design Guide](https://cloud.google.com/apis/design/errors). | 
|  | 1553 | "code": 42, # The status code, which should be an enum value of google.rpc.Code. | 
|  | 1554 | "details": [ # A list of messages that carry the error details.  There is a common set of | 
|  | 1555 | # message types for APIs to use. | 
|  | 1556 | { | 
|  | 1557 | "a_key": "", # Properties of the object. Contains field @type with type URL. | 
|  | 1558 | }, | 
|  | 1559 | ], | 
|  | 1560 | "message": "A String", # A developer-facing error message, which should be in English. Any | 
|  | 1561 | # user-facing error message should be localized and sent in the | 
|  | 1562 | # google.rpc.Status.details field, or localized by the client. | 
|  | 1563 | }, | 
|  | 1564 | "done": True or False, # If the value is `false`, it means the operation is still in progress. | 
|  | 1565 | # If `true`, the operation is completed, and either `error` or `response` is | 
|  | 1566 | # available. | 
|  | 1567 | }</pre> | 
|  | 1568 | </div> | 
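<p>For orientation, the following is a minimal sketch of issuing this request with the google-api-python-client discovery client. It assumes the library is installed and Application Default Credentials are configured; the project ID, bucket, and object names are placeholders. The call returns a long-running operation that still has to be polled through the google.longrunning.Operations interface before the JSON results appear under the chosen gcs_destination prefix.</p>
<pre># Minimal sketch (assumptions: google-api-python-client installed,
# Application Default Credentials configured, placeholder resource names).
from googleapiclient.discovery import build

service = build('vision', 'v1p2beta1')

parent = 'projects/my-project/locations/eu'  # placeholder
request_body = {
    'requests': [
        {
            'inputConfig': {
                'gcsSource': {'uri': 'gs://my-bucket/contract.pdf'},  # placeholder
                'mimeType': 'application/pdf',
            },
            'features': [{'type': 'DOCUMENT_TEXT_DETECTION'}],
            'outputConfig': {
                # Up to 20 response protos per sharded output JSON file.
                'gcsDestination': {'uri': 'gs://my-bucket/vision-output/'},
                'batchSize': 20,
            },
        },
    ],
    'parent': parent,
}

operation = service.projects().files().asyncBatchAnnotate(
    parent=parent, body=request_body).execute()

# The call returns a google.longrunning.Operation immediately. Poll it through
# the Operations interface until 'done' is True, then read the sharded JSON
# result files written under gs://my-bucket/vision-output/.
print(operation['name'])
</pre>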
|  | 1569 |  | 
|  | 1570 | </body></html> |