<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="ml_v1.html">AI Platform Training &amp; Prediction API</a> . <a href="ml_v1.projects.html">projects</a> . <a href="ml_v1.projects.models.html">models</a> . <a href="ml_v1.projects.models.versions.html">versions</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#create">create(parent, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a new version of a model from a trained TensorFlow model.</p>
<p class="toc_element">
  <code><a href="#delete">delete(name, x__xgafv=None)</a></code></p>
<p class="firstline">Deletes a model version.</p>
<p class="toc_element">
  <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets information about a model version.</p>
<p class="toc_element">
  <code><a href="#list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</a></code></p>
<p class="firstline">Gets basic information about all the versions of a model.</p>
<p class="toc_element">
  <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<p class="toc_element">
  <code><a href="#patch">patch(name, body=None, updateMask=None, x__xgafv=None)</a></code></p>
<p class="firstline">Updates the specified Version resource.</p>
<p class="toc_element">
  <code><a href="#setDefault">setDefault(name, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Designates a version to be the default for the model.</p>
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="create">create(parent, body=None, x__xgafv=None)</code>
  <pre>Creates a new version of a model from a trained TensorFlow model.

If the version created in the cloud by this call is the first deployed
version of the specified model, it will be made the default version of the
model. When you add a version to a model that already has one or more
versions, the default version does not automatically change. If you want a
new version to be the default, you must call
projects.models.versions.setDefault.

Args:
  parent: string, Required. The name of the model. (required)
  body: object, The request body.
    The object takes the form of:

{ # Represents a version of the model.
    #
    # Each version is a trained model deployed in the cloud, ready to handle
    # prediction requests. A model can have multiple versions. You can get
    # information about all of the versions of a given model by calling
    # projects.models.versions.list.
  "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
      # Only specify this field if you have specified a Compute Engine (N1) machine
      # type in the `machineType` field. Learn more about [using GPUs for online
      # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
      # Note that the AcceleratorConfig can be used in both Jobs and Versions.
      # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
      # [accelerators for online
      # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
    "count": "A String", # The number of accelerators to attach to each machine running the job.
    "type": "A String", # The type of accelerator to use.
  },
  "labels": { # Optional. One or more labels that you can add to organize your model
      # versions. Each label is a key-value pair, where both the key and the value
      # are arbitrary strings that you supply.
      # For more information, see the documentation on
      # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
    "a_key": "A String",
  },
  "predictionClass": "A String", # Optional. The fully qualified name
      # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
      # the Predictor interface described in this reference field. The module
      # containing this class should be included in a package provided to the
      # [`packageUris` field](#Version.FIELDS.package_uris).
      #
      # Specify this field if and only if you are deploying a [custom prediction
      # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
      # If you specify this field, you must set
      # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
      # you must set `machineType` to a [legacy (MLS1)
      # machine type](/ml-engine/docs/machine-types-online-prediction).
      #
      # The following code sample provides the Predictor interface:
      #
      # &lt;pre style="max-width: 626px;"&gt;
      # class Predictor(object):
      #     """Interface for constructing custom predictors."""
      #
      #     def predict(self, instances, **kwargs):
      #         """Performs custom prediction.
      #
      #         Instances are the decoded values from the request. They have already
      #         been deserialized from JSON.
      #
      #         Args:
      #             instances: A list of prediction input instances.
      #             **kwargs: A dictionary of keyword args provided as additional
      #                 fields on the predict request body.
      #
      #         Returns:
      #             A list of outputs containing the prediction results. This list must
      #             be JSON serializable.
      #         """
      #         raise NotImplementedError()
      #
      #     @classmethod
      #     def from_path(cls, model_dir):
      #         """Creates an instance of Predictor using the given path.
      #
      #         Loading of the predictor should be done in this method.
      #
      #         Args:
      #             model_dir: The local directory that contains the exported model
      #                 file along with any additional files uploaded when creating the
      #                 version resource.
      #
      #         Returns:
      #             An instance implementing this Predictor class.
      #         """
      #         raise NotImplementedError()
      # &lt;/pre&gt;
      #
      # Learn more about [the Predictor interface and custom prediction
      # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
  "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
  "state": "A String", # Output only. The state of a version.
  "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
      # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
      # or [scikit-learn pipelines with custom
      # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
      #
      # For a custom prediction routine, one of these packages must contain your
      # Predictor class (see
      # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
      # include any dependencies your Predictor or scikit-learn pipeline
      # uses that are not already included in your selected [runtime
      # version](/ml-engine/docs/tensorflow/runtime-version-list).
      #
      # If you specify this field, you must also set
      # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
    "A String",
  ],
  "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
      # prevent simultaneous updates of a model from overwriting each other.
      # It is strongly suggested that systems make use of the `etag` in the
      # read-modify-write cycle to perform model updates in order to avoid race
      # conditions: An `etag` is returned in the response to `GetVersion`, and
      # systems are expected to put that etag in the request to `UpdateVersion` to
      # ensure that their change will be applied to the model as intended.
  "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
  "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
      # create the version. See the
      # [guide to model
      # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
      # information.
      #
      # When passing Version to
      # projects.models.versions.create
      # the model service uses the specified location as the source of the model.
      # Once deployed, the model version is hosted by the prediction service, so
      # this location is useful only as a historical record.
      # The total number of model files can't exceed 1000.
  "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
      # Some explanation features require additional metadata to be loaded
      # as part of the model payload.
      # There are two feature attribution methods supported for TensorFlow models:
      # integrated gradients and sampled Shapley.
      # [Learn more about feature
      # attributions.](/ml-engine/docs/ai-explanations/overview)
    "xraiAttribution": { # Attributes credit by computing the XRAI, taking advantage
        # of the model's fully differentiable structure. Refer to this paper for
        # more details: https://arxiv.org/abs/1906.02825
        # Currently only implemented for models with natural image inputs.
      "numIntegralSteps": 42, # Number of steps for approximating the path integral.
          # A good value to start with is 50; gradually increase it until the
          # sum-to-diff property is met within the desired error range.
    },
    "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that
        # contribute to the label being predicted. A sampling strategy is used to
        # approximate the value rather than considering all subsets of features.
      "numPaths": 42, # The number of feature permutations to consider when approximating the
          # Shapley values.
    },
    "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value, taking advantage
        # of the model's fully differentiable structure. Refer to this paper for
        # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
      "numIntegralSteps": 42, # Number of steps for approximating the path integral.
          # A good value to start with is 50; gradually increase it until the
          # sum-to-diff property is met within the desired error range.
    },
  },
  "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
      # requests that do not specify a version.
      #
      # You can change the default version by calling
      # projects.models.versions.setDefault.
  "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
      # applies to the online prediction service. If this field is not specified, it
      # defaults to `mls1-c1-m2`.
      #
      # Online prediction supports the following machine types:
      #
      # * `mls1-c1-m2`
      # * `mls1-c4-m2`
      # * `n1-standard-2`
      # * `n1-standard-4`
      # * `n1-standard-8`
      # * `n1-standard-16`
      # * `n1-standard-32`
      # * `n1-highmem-2`
      # * `n1-highmem-4`
      # * `n1-highmem-8`
      # * `n1-highmem-16`
      # * `n1-highmem-32`
      # * `n1-highcpu-2`
      # * `n1-highcpu-4`
      # * `n1-highcpu-8`
      # * `n1-highcpu-16`
      # * `n1-highcpu-32`
      #
      # `mls1-c1-m2` is generally available. All other machine types are available
      # in beta. Learn more about the [differences between machine
      # types](/ml-engine/docs/machine-types-online-prediction).
  "description": "A String", # Optional. The description specified for the version when it was created.
  "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
      #
      # For more information, see the
      # [runtime version list](/ml-engine/docs/runtime-version-list) and
      # [how to manage runtime versions](/ml-engine/docs/versioning).
  "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
      # model. You should generally use `auto_scaling` with an appropriate
      # `min_nodes` instead, but this option is available if you want more
      # predictable billing. Beware that latency and error rates will increase
      # if the traffic exceeds the capability of the system to serve it based
      # on the selected number of nodes.
    "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
        # starting from the time the model is deployed, so the cost of operating
        # this model will be proportional to `nodes` * number of hours since
        # last billing cycle plus the cost for each prediction performed.
  },
  "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
  "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
      # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
      # `XGBOOST`. If you do not specify a framework, AI Platform
      # will analyze files in the deployment_uri to determine a framework. If you
      # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
      # of the model to 1.4 or greater.
      #
      # Do **not** specify a framework if you're deploying a [custom
      # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
      #
      # If you specify a [Compute Engine (N1) machine
      # type](/ml-engine/docs/machine-types-online-prediction) in the
      # `machineType` field, you must specify `TENSORFLOW`
      # for the framework.
  "createTime": "A String", # Output only. The time the version was created.
  "name": "A String", # Required. The name specified for the version when it was created.
      #
      # The version name must be unique within the model it is created in.
  "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
      # response to increases and decreases in traffic. Care should be
      # taken to ramp up traffic according to the model's ability to scale
      # or you will start seeing increases in latency and 429 response codes.
      #
      # Note that you cannot use AutoScaling if your version uses
      # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
      # `manual_scaling`.
    "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
        # nodes are always up, starting from the time the model is deployed.
        # Therefore, the cost of operating this model will be at least
        # `rate` * `min_nodes` * number of hours since last billing cycle,
        # where `rate` is the cost per node-hour as documented in the
        # [pricing guide](/ml-engine/docs/pricing),
        # even if no predictions are performed. There is additional cost for each
        # prediction performed.
        #
        # Unlike manual scaling, if the load gets too heavy for the nodes
        # that are up, the service will automatically add nodes to handle the
        # increased load as well as scale back as traffic drops, always maintaining
        # at least `min_nodes`. You will be charged for the time in which additional
        # nodes are used.
        #
        # If `min_nodes` is not specified and AutoScaling is used with a [legacy
        # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
        # `min_nodes` defaults to 0, in which case, when traffic to a model stops
        # (and after a cool-down period), nodes will be shut down and no charges will
        # be incurred until traffic to the model resumes.
        #
        # If `min_nodes` is not specified and AutoScaling is used with a [Compute
        # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
        # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
        # Compute Engine machine type.
        #
        # Note that you cannot use AutoScaling if your version uses
        # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
        # ManualScaling.
        #
        # You can set `min_nodes` when creating the model version, and you can also
        # update `min_nodes` for an existing version:
        # &lt;pre&gt;
        # update_body.json:
        # {
        #   'autoScaling': {
        #     'minNodes': 5
        #   }
        # }
        # &lt;/pre&gt;
        # HTTP request:
        # &lt;pre style="max-width: 626px;"&gt;
        # PATCH
        # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
        # -d @./update_body.json
        # &lt;/pre&gt;
  },
  "pythonVersion": "A String", # Required. The version of Python used in prediction.
      #
      # The following Python versions are available:
      #
      # * Python '3.7' is available when `runtime_version` is set to '1.15' or
      #   later.
      # * Python '3.5' is available when `runtime_version` is set to a version
      #   from '1.4' to '1.14'.
      # * Python '2.7' is available when `runtime_version` is set to '1.15' or
      #   earlier.
      #
      # Read more about the Python versions available for [each runtime
      # version](/ml-engine/docs/runtime-version-list).
  "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
      # projects.models.versions.patch
      # request. Specifying it in a
      # projects.models.versions.create
      # request has no effect.
      #
      # Configures the request-response pair logging on predictions from this
      # Version.
      # Online prediction requests to a model version and the responses to these
      # requests are converted to raw strings and saved to the specified BigQuery
      # table. Logging is constrained by [BigQuery quotas and
      # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
      # AI Platform Prediction does not log request-response pairs, but it continues
      # to serve predictions.
      #
      # If you are using [continuous
      # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
      # specify this configuration manually. Setting up continuous evaluation
      # automatically enables logging of request-response pairs.
    "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
        # For example, if you want to log 10% of requests, enter `0.1`. The sampling
        # window is the lifetime of the model version. Defaults to 0.
    "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
        # "&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;"
        #
        # The specified table must already exist, and the "Cloud ML Service Agent"
        # for your project must have permission to write to it. The table must have
        # the following [schema](/bigquery/docs/schemas):
        #
        # &lt;table&gt;
        # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style="display: table-cell"&gt;Type&lt;/th&gt;
        # &lt;th style="display: table-cell"&gt;Mode&lt;/th&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
        # &lt;/table&gt;
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a
      # network API call.
      "metadata": { # Service-specific metadata associated with the operation. It typically
          # contains progress information and common metadata such as create time.
          # Some services might not provide such metadata. Any method that returns a
          # long-running operation should document the metadata type, if any.
        "a_key": "", # Properties of the object. Contains field @type with type URL.
      },
      "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
          # different programming environments, including REST APIs and RPC APIs. It is
          # used by [gRPC](https://github.com/grpc). Each `Status` message contains
          # three pieces of data: error code, error message, and error details.
          #
          # You can find out more about this error model and how to work with it in the
          # [API Design Guide](https://cloud.google.com/apis/design/errors).
        "message": "A String", # A developer-facing error message, which should be in English. Any
            # user-facing error message should be localized and sent in the
            # google.rpc.Status.details field, or localized by the client.
        "code": 42, # The status code, which should be an enum value of google.rpc.Code.
        "details": [ # A list of messages that carry the error details. There is a common set of
            # message types for APIs to use.
          {
            "a_key": "", # Properties of the object. Contains field @type with type URL.
          },
        ],
      },
      "done": True or False, # If the value is `false`, it means the operation is still in progress.
          # If `true`, the operation is completed, and either `error` or `response` is
          # available.
      "response": { # The normal response of the operation in case of success. If the original
          # method returns no data on success, such as `Delete`, the response is
          # `google.protobuf.Empty`. If the original method is standard
          # `Get`/`Create`/`Update`, the response should be the resource. For other
          # methods, the response should have the type `XxxResponse`, where `Xxx`
          # is the original method name. For example, if the original method name
          # is `TakeSnapshot()`, the inferred response type is
          # `TakeSnapshotResponse`.
        "a_key": "", # Properties of the object. Contains field @type with type URL.
      },
      "name": "A String", # The server-assigned name, which is only unique within the same service that
          # originally returns it. If you use the default HTTP mapping, the
          # `name` should be a resource name ending with `operations/{unique_id}`.
    }</pre>
</div>
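A minimal sketch of how the `parent` and `body` arguments described above fit together. The project, model, version, and bucket names are hypothetical placeholders, and the client calls in the trailing comment assume the standard `googleapiclient.discovery` interface rather than anything specific to this page:

```python
def build_create_version_request(project_id, model_name, version_name,
                                 deployment_uri, runtime_version="1.15",
                                 python_version="3.7"):
    """Returns (parent, body) for a projects.models.versions.create call.

    `parent` identifies the model; `body` is the Version resource with the
    required fields (name, deploymentUri, runtimeVersion, pythonVersion).
    """
    parent = "projects/{}/models/{}".format(project_id, model_name)
    body = {
        "name": version_name,
        "deploymentUri": deployment_uri,
        "runtimeVersion": runtime_version,
        "pythonVersion": python_version,
        "framework": "TENSORFLOW",  # required when using an N1 machine type
    }
    return parent, body

parent, body = build_create_version_request(
    "my-project", "my_model", "v1", "gs://my-bucket/model-dir/")
# With a built discovery client this would then be executed as, e.g.:
#   ml = googleapiclient.discovery.build("ml", "v1")
#   op = ml.projects().models().versions().create(
#       parent=parent, body=body).execute()
```

The call returns a long-running Operation, not the Version itself; the version becomes usable only once that operation completes.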

<div class="method">
    <code class="details" id="delete">delete(name, x__xgafv=None)</code>
  <pre>Deletes a model version.

Each model can have multiple versions deployed and in use at any given
time. Use this method to remove a single version.

Note: You cannot delete the version that is set as the default version
of the model unless it is the only remaining version.

Args:
  name: string, Required. The name of the version. You can get the names of all the
versions of a model by calling
projects.models.versions.list. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a
      # network API call.
      "metadata": { # Service-specific metadata associated with the operation. It typically
          # contains progress information and common metadata such as create time.
          # Some services might not provide such metadata. Any method that returns a
          # long-running operation should document the metadata type, if any.
        "a_key": "", # Properties of the object. Contains field @type with type URL.
      },
      "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
          # different programming environments, including REST APIs and RPC APIs. It is
          # used by [gRPC](https://github.com/grpc). Each `Status` message contains
          # three pieces of data: error code, error message, and error details.
          #
          # You can find out more about this error model and how to work with it in the
          # [API Design Guide](https://cloud.google.com/apis/design/errors).
        "message": "A String", # A developer-facing error message, which should be in English. Any
            # user-facing error message should be localized and sent in the
            # google.rpc.Status.details field, or localized by the client.
        "code": 42, # The status code, which should be an enum value of google.rpc.Code.
        "details": [ # A list of messages that carry the error details. There is a common set of
            # message types for APIs to use.
          {
            "a_key": "", # Properties of the object. Contains field @type with type URL.
          },
        ],
      },
      "done": True or False, # If the value is `false`, it means the operation is still in progress.
          # If `true`, the operation is completed, and either `error` or `response` is
          # available.
      "response": { # The normal response of the operation in case of success. If the original
          # method returns no data on success, such as `Delete`, the response is
          # `google.protobuf.Empty`. If the original method is standard
          # `Get`/`Create`/`Update`, the response should be the resource. For other
          # methods, the response should have the type `XxxResponse`, where `Xxx`
          # is the original method name. For example, if the original method name
          # is `TakeSnapshot()`, the inferred response type is
          # `TakeSnapshotResponse`.
        "a_key": "", # Properties of the object. Contains field @type with type URL.
      },
      "name": "A String", # The server-assigned name, which is only unique within the same service that
          # originally returns it. If you use the default HTTP mapping, the
          # `name` should be a resource name ending with `operations/{unique_id}`.
    }</pre>
</div>
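Both `create` and `delete` return a long-running Operation of the form shown above, which the caller polls until `done` is true. A minimal polling sketch, assuming only the Operation shape documented here (`fetch_operation` is a hypothetical stand-in for the real fetch, e.g. `ml.projects().operations().get(name=op_name).execute()`):

```python
import time


def wait_for_operation(fetch_operation, poll_interval=1.0, max_polls=60):
    """Polls a long-running operation until `done` is true.

    Returns the finished operation dict, or raises RuntimeError if the
    operation finished with an `error` field set.
    """
    for _ in range(max_polls):
        op = fetch_operation()
        if op.get("done"):
            if "error" in op:
                # `error` carries a google.rpc.Status: code, message, details.
                raise RuntimeError(op["error"].get("message", "operation failed"))
            return op
        time.sleep(poll_interval)
    raise TimeoutError("operation did not complete in time")
```

Per the Operation documentation above, a successful `delete` yields `google.protobuf.Empty` in `response`, so callers typically only check that no `error` was set.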
565
<div class="method">
    <code class="details" id="get">get(name, x__xgafv=None)</code>
  <pre>Gets information about a model version.

Models can have multiple versions. You can call
projects.models.versions.list
to get the same information that this method returns for all of the
versions of a model.

Args:
  name: string, Required. The name of the version. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Represents a version of the model.
      #
      # Each version is a trained model deployed in the cloud, ready to handle
      # prediction requests. A model can have multiple versions. You can get
      # information about all of the versions of a given model by calling
      # projects.models.versions.list.
    "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
        # Only specify this field if you have specified a Compute Engine (N1) machine
        # type in the `machineType` field. Learn more about [using GPUs for online
        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
        # Note that the AcceleratorConfig can be used in both Jobs and Versions.
        # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
        # [accelerators for online
        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
      "count": "A String", # The number of accelerators to attach to each machine running the job.
      "type": "A String", # The type of accelerator to use.
    },
    "labels": { # Optional. One or more labels that you can add to organize your model
        # versions. Each label is a key-value pair, where both the key and the value
        # are arbitrary strings that you supply.
        # For more information, see the documentation on
        # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
      "a_key": "A String",
    },
    "predictionClass": "A String", # Optional. The fully qualified name
        # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
        # the Predictor interface described in this reference field. The module
        # containing this class should be included in a package provided to the
        # [`packageUris` field](#Version.FIELDS.package_uris).
        #
        # Specify this field if and only if you are deploying a [custom prediction
        # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
        # If you specify this field, you must set
        # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
        # you must set `machineType` to a [legacy (MLS1)
        # machine type](/ml-engine/docs/machine-types-online-prediction).
        #
        # The following code sample provides the Predictor interface:
        #
        # &lt;pre style="max-width: 626px;"&gt;
        # class Predictor(object):
        # """Interface for constructing custom predictors."""
        #
        # def predict(self, instances, **kwargs):
        # """Performs custom prediction.
        #
        # Instances are the decoded values from the request. They have already
        # been deserialized from JSON.
        #
        # Args:
        # instances: A list of prediction input instances.
        # **kwargs: A dictionary of keyword args provided as additional
        # fields on the predict request body.
        #
        # Returns:
        # A list of outputs containing the prediction results. This list must
        # be JSON serializable.
        # """
        # raise NotImplementedError()
        #
        # @classmethod
        # def from_path(cls, model_dir):
        # """Creates an instance of Predictor using the given path.
        #
        # Loading of the predictor should be done in this method.
        #
        # Args:
        # model_dir: The local directory that contains the exported model
        # file along with any additional files uploaded when creating the
        # version resource.
        #
        # Returns:
        # An instance implementing this Predictor class.
        # """
        # raise NotImplementedError()
        # &lt;/pre&gt;
        #
        # Learn more about [the Predictor interface and custom prediction
        # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
    "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
    "state": "A String", # Output only. The state of a version.
    "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
        # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
        # or [scikit-learn pipelines with custom
        # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
        #
        # For a custom prediction routine, one of these packages must contain your
        # Predictor class (see
        # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
        # include any dependencies that your Predictor or scikit-learn pipeline
        # uses and that are not already included in your selected [runtime
        # version](/ml-engine/docs/tensorflow/runtime-version-list).
        #
        # If you specify this field, you must also set
        # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
      "A String",
    ],
    "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
        # prevent simultaneous updates of a model from overwriting each other.
        # It is strongly suggested that systems make use of the `etag` in the
        # read-modify-write cycle to perform model updates in order to avoid race
        # conditions: An `etag` is returned in the response to `GetVersion`, and
        # systems are expected to put that etag in the request to `UpdateVersion` to
        # ensure that their change will be applied to the model as intended.
    "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
    "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
        # create the version. See the
        # [guide to model
        # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
        # information.
        #
        # When passing Version to
        # projects.models.versions.create
        # the model service uses the specified location as the source of the model.
        # Once deployed, the model version is hosted by the prediction service, so
        # this location is useful only as a historical record.
        # The total number of model files can't exceed 1000.
    "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
        # Some explanation features require additional metadata to be loaded
        # as part of the model payload.
        # There are two feature attribution methods supported for TensorFlow models:
        # integrated gradients and sampled Shapley.
        # [Learn more about feature
        # attributions.](/ml-engine/docs/ai-explanations/overview)
      "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage
          # of the model's fully differentiable structure. Refer to this paper for
          # more details: https://arxiv.org/abs/1906.02825
          # Currently only implemented for models with natural image inputs.
        "numIntegralSteps": 42, # Number of steps for approximating the path integral.
            # A good value to start with is 50; gradually increase it until the
            # sum-to-diff property is met within the desired error range.
      },
      "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that
          # contribute to the label being predicted. A sampling strategy is used to
          # approximate the value rather than considering all subsets of features.
        "numPaths": 42, # The number of feature permutations to consider when approximating the
            # Shapley values.
      },
      "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage
          # of the model's fully differentiable structure. Refer to this paper for
          # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
        "numIntegralSteps": 42, # Number of steps for approximating the path integral.
            # A good value to start with is 50; gradually increase it until the
            # sum-to-diff property is met within the desired error range.
      },
    },
    "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
        # requests that do not specify a version.
        #
        # You can change the default version by calling
        # projects.models.versions.setDefault.
    "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
        # applies to online prediction service. If this field is not specified, it
        # defaults to `mls1-c1-m2`.
        #
        # Online prediction supports the following machine types:
        #
        # * `mls1-c1-m2`
        # * `mls1-c4-m2`
        # * `n1-standard-2`
        # * `n1-standard-4`
        # * `n1-standard-8`
        # * `n1-standard-16`
        # * `n1-standard-32`
        # * `n1-highmem-2`
        # * `n1-highmem-4`
        # * `n1-highmem-8`
        # * `n1-highmem-16`
        # * `n1-highmem-32`
        # * `n1-highcpu-2`
        # * `n1-highcpu-4`
        # * `n1-highcpu-8`
        # * `n1-highcpu-16`
        # * `n1-highcpu-32`
        #
        # `mls1-c1-m2` is generally available. All other machine types are available
        # in beta. Learn more about the [differences between machine
        # types](/ml-engine/docs/machine-types-online-prediction).
    "description": "A String", # Optional. The description specified for the version when it was created.
    "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
        #
        # For more information, see the
        # [runtime version list](/ml-engine/docs/runtime-version-list) and
        # [how to manage runtime versions](/ml-engine/docs/versioning).
    "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
        # model. You should generally use `auto_scaling` with an appropriate
        # `min_nodes` instead, but this option is available if you want more
        # predictable billing. Beware that latency and error rates will increase
        # if the traffic exceeds the capacity of the system to serve it based
        # on the selected number of nodes.
      "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
          # starting from the time the model is deployed, so the cost of operating
          # this model will be proportional to `nodes` * number of hours since
          # last billing cycle plus the cost for each prediction performed.
    },
    "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
    "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
        # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
        # `XGBOOST`. If you do not specify a framework, AI Platform
        # will analyze files in the deployment_uri to determine a framework. If you
        # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
        # of the model to 1.4 or greater.
        #
        # Do **not** specify a framework if you're deploying a [custom
        # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
        #
        # If you specify a [Compute Engine (N1) machine
        # type](/ml-engine/docs/machine-types-online-prediction) in the
        # `machineType` field, you must specify `TENSORFLOW`
        # for the framework.
    "createTime": "A String", # Output only. The time the version was created.
    "name": "A String", # Required. The name specified for the version when it was created.
        #
        # The version name must be unique within the model it is created in.
    "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
        # response to increases and decreases in traffic. Care should be
        # taken to ramp up traffic according to the model's ability to scale
        # or you will start seeing increases in latency and 429 response codes.
        #
        # Note that you cannot use AutoScaling if your version uses
        # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must
        # specify `manual_scaling`.
      "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
          # nodes are always up, starting from the time the model is deployed.
          # Therefore, the cost of operating this model will be at least
          # `rate` * `min_nodes` * number of hours since last billing cycle,
          # where `rate` is the cost per node-hour as documented in the
          # [pricing guide](/ml-engine/docs/pricing),
          # even if no predictions are performed. There is additional cost for each
          # prediction performed.
          #
          # Unlike manual scaling, if the load gets too heavy for the nodes
          # that are up, the service will automatically add nodes to handle the
          # increased load as well as scale back as traffic drops, always maintaining
          # at least `min_nodes`. You will be charged for the time in which additional
          # nodes are used.
          #
          # If `min_nodes` is not specified and AutoScaling is used with a [legacy
          # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
          # `min_nodes` defaults to 0, in which case, when traffic to a model stops
          # (and after a cool-down period), nodes will be shut down and no charges will
          # be incurred until traffic to the model resumes.
          #
          # If `min_nodes` is not specified and AutoScaling is used with a [Compute
          # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
          # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
          # Compute Engine machine type.
          #
          # Note that you cannot use AutoScaling if your version uses
          # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
          # ManualScaling.
          #
          # You can set `min_nodes` when creating the model version, and you can also
          # update `min_nodes` for an existing version:
          # &lt;pre&gt;
          # update_body.json:
          # {
          #   'autoScaling': {
          #     'minNodes': 5
          #   }
          # }
          # &lt;/pre&gt;
          # HTTP request:
          # &lt;pre style="max-width: 626px;"&gt;
          # PATCH
          # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
          # -d @./update_body.json
          # &lt;/pre&gt;
    },
    "pythonVersion": "A String", # Required. The version of Python used in prediction.
        #
        # The following Python versions are available:
        #
        # * Python '3.7' is available when `runtime_version` is set to '1.15' or
        #   later.
        # * Python '3.5' is available when `runtime_version` is set to a version
        #   from '1.4' to '1.14'.
        # * Python '2.7' is available when `runtime_version` is set to '1.15' or
        #   earlier.
        #
        # Read more about the Python versions available for [each runtime
        # version](/ml-engine/docs/runtime-version-list).
    "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
        # projects.models.versions.patch
        # request. Specifying it in a
        # projects.models.versions.create
        # request has no effect.
        #
        # Configures the request-response pair logging on predictions from this
        # Version.
        # Online prediction requests to a model version and the responses to these
        # requests are converted to raw strings and saved to the specified BigQuery
        # table. Logging is constrained by [BigQuery quotas and
        # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
        # AI Platform Prediction does not log request-response pairs, but it continues
        # to serve predictions.
        #
        # If you are using [continuous
        # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
        # specify this configuration manually. Setting up continuous evaluation
        # automatically enables logging of request-response pairs.
      "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
          # For example, if you want to log 10% of requests, enter `0.1`. The sampling
          # window is the lifetime of the model version. Defaults to 0.
      "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
          # "&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;"
          #
          # The specified table must already exist, and the "Cloud ML Service Agent"
          # for your project must have permission to write to it. The table must have
          # the following [schema](/bigquery/docs/schemas):
          #
          # &lt;table&gt;
          # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style="display: table-cell"&gt;Type&lt;/th&gt;
          # &lt;th style="display: table-cell"&gt;Mode&lt;/th&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
          # &lt;/table&gt;
    },
  }</pre>
</div>

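To make the Version resource above easier to scan, a small helper can pull out the fields most commonly checked after a `get` call. This is a sketch only; the default for `machineType` and the scaling-mode logic follow the field descriptions in the schema, and the input dict in the usage below is a hypothetical example:

```python
def version_summary(version):
    """Condense a Version resource (as returned by versions.get) into a
    short dict. Keys follow the response schema documented above."""
    if 'manualScaling' in version:
        scaling = 'manual'
    elif 'autoScaling' in version:
        scaling = 'auto'
    else:
        scaling = 'unspecified'
    return {
        'name': version.get('name'),
        'state': version.get('state'),
        # machineType defaults to `mls1-c1-m2` when unset, per the schema.
        'machineType': version.get('machineType', 'mls1-c1-m2'),
        # None here means the service inferred the framework from deploymentUri.
        'framework': version.get('framework'),
        'scaling': scaling,
        'isDefault': version.get('isDefault', False),
    }
```

For example, `version_summary({'name': 'projects/p/models/m/versions/v1', 'state': 'READY', 'manualScaling': {'nodes': 2}})` reports `scaling` as `'manual'` and falls back to the documented `mls1-c1-m2` machine type.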
917<div class="method">
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700918 <code class="details" id="list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</code>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400919 <pre>Gets basic information about all the versions of a model.
920
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700921If you expect that a model has many versions, or if you need to handle
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400922only a limited number of results at a time, you can request that the list
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700923be retrieved in batches (called pages).
924
925If there are no versions that match the request parameters, the list
926request returns an empty response body: {}.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400927
928Args:
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700929 parent: string, Required. The name of the model for which to list the version. (required)
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400930 pageToken: string, Optional. A page token to request the next page of results.
931
932You get the token from the `next_page_token` field of the response from
933the previous call.
934 x__xgafv: string, V1 error format.
935 Allowed values
936 1 - v1 error format
937 2 - v2 error format
Sai Cheemalapati4ba8c232017-06-06 18:46:08 -0400938 pageSize: integer, Optional. The number of versions to retrieve per "page" of results. If
939there are more remaining results than this number, the response message
940will contain a valid value in the `next_page_token` field.
941
942The default value is 20, and the maximum page size is 100.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700943 filter: string, Optional. Specifies the subset of versions to retrieve.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400944
945Returns:
946 An object of the form:
947
948 { # Response message for the ListVersions method.
949 "nextPageToken": "A String", # Optional. Pass this token as the `page_token` field of the request for a
950 # subsequent call.
951 "versions": [ # The list of versions.
952 { # Represents a version of the model.
953 #
954 # Each version is a trained model deployed in the cloud, ready to handle
955 # prediction requests. A model can have multiple versions. You can get
956 # information about all of the versions of a given model by calling
Dan O'Mearadd494642020-05-01 07:42:23 -0700957 # projects.models.versions.list.
958 "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
959 # Only specify this field if you have specified a Compute Engine (N1) machine
960 # type in the `machineType` field. Learn more about [using GPUs for online
961 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
962 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
963 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
964 # [accelerators for online
965 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
966 "count": "A String", # The number of accelerators to attach to each machine running the job.
967 "type": "A String", # The type of accelerator to use.
968 },
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700969 "labels": { # Optional. One or more labels that you can add, to organize your model
970 # versions. Each label is a key-value pair, where both the key and the value
971 # are arbitrary strings that you supply.
972 # For more information, see the documentation on
Dan O'Mearadd494642020-05-01 07:42:23 -0700973 # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700974 "a_key": "A String",
975 },
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700976 "predictionClass": "A String", # Optional. The fully qualified name
Dan O'Mearadd494642020-05-01 07:42:23 -0700977 # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700978 # the Predictor interface described in this reference field. The module
979 # containing this class should be included in a package provided to the
980 # [`packageUris` field](#Version.FIELDS.package_uris).
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400981 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700982 # Specify this field if and only if you are deploying a [custom prediction
983 # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
984 # If you specify this field, you must set
Dan O'Mearadd494642020-05-01 07:42:23 -0700985 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
986 # you must set `machineType` to a [legacy (MLS1)
987 # machine type](/ml-engine/docs/machine-types-online-prediction).
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700988 #
989 # The following code sample provides the Predictor interface:
990 #
Dan O'Mearadd494642020-05-01 07:42:23 -0700991 # &lt;pre style="max-width: 626px;"&gt;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700992 # class Predictor(object):
993 # """Interface for constructing custom predictors."""
994 #
995 # def predict(self, instances, **kwargs):
996 # """Performs custom prediction.
997 #
998 # Instances are the decoded values from the request. They have already
999 # been deserialized from JSON.
1000 #
1001 # Args:
1002 # instances: A list of prediction input instances.
1003 # **kwargs: A dictionary of keyword args provided as additional
1004 # fields on the predict request body.
1005 #
1006 # Returns:
1007 # A list of outputs containing the prediction results. This list must
1008 # be JSON serializable.
1009 # """
1010 # raise NotImplementedError()
1011 #
1012 # @classmethod
1013 # def from_path(cls, model_dir):
1014 # """Creates an instance of Predictor using the given path.
1015 #
1016 # Loading of the predictor should be done in this method.
1017 #
1018 # Args:
1019 # model_dir: The local directory that contains the exported model
1020 # file along with any additional files uploaded when creating the
1021 # version resource.
1022 #
1023 # Returns:
1024 # An instance implementing this Predictor class.
1025 # """
1026 # raise NotImplementedError()
Dan O'Mearadd494642020-05-01 07:42:23 -07001027 # &lt;/pre&gt;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001028 #
1029 # Learn more about [the Predictor interface and custom prediction
1030 # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001031 "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
1032 "state": "A String", # Output only. The state of a version.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001033 "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
1034 # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
1035 # or [scikit-learn pipelines with custom
1036 # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
1037 #
1038 # For a custom prediction routine, one of these packages must contain your
1039 # Predictor class (see
1040 # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
1041 # include any dependencies used by your Predictor or scikit-learn pipeline
1042 # uses that are not already included in your selected [runtime
1043 # version](/ml-engine/docs/tensorflow/runtime-version-list).
1044 #
1045 # If you specify this field, you must also set
1046 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
1047 "A String",
1048 ],
1049 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
1050 # prevent simultaneous updates of a model from overwriting each other.
1051 # It is strongly suggested that systems make use of the `etag` in the
1052 # read-modify-write cycle to perform model updates in order to avoid race
1053 # conditions: An `etag` is returned in the response to `GetVersion`, and
1054 # systems are expected to put that etag in the request to `UpdateVersion` to
1055 # ensure that their change will be applied to the model as intended.
1056 "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
1057 "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
1058 # create the version. See the
1059 # [guide to model
1060 # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
1061 # information.
1062 #
1063 # When passing Version to
Dan O'Mearadd494642020-05-01 07:42:23 -07001064 # projects.models.versions.create
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001065 # the model service uses the specified location as the source of the model.
1066 # Once deployed, the model version is hosted by the prediction service, so
1067 # this location is useful only as a historical record.
1068 # The total number of model files can't exceed 1000.
Dan O'Mearadd494642020-05-01 07:42:23 -07001069 "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
1070 # Some explanation features require additional metadata to be loaded
1071 # as part of the model payload.
1072 # There are two feature attribution methods supported for TensorFlow models:
1073 # integrated gradients and sampled Shapley.
1074 # [Learn more about feature
1075 # attributions.](/ml-engine/docs/ai-explanations/overview)
1076 "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
1077 # of the model's fully differentiable structure. Refer to this paper for
1078 # more details: https://arxiv.org/abs/1906.02825
1079 # Currently only implemented for models with natural image inputs.
1080 # of the model's fully differentiable structure. Refer to this paper for
1081 # more details: https://arxiv.org/abs/1906.02825
1082 # Currently only implemented for models with natural image inputs.
1083 "numIntegralSteps": 42, # Number of steps for approximating the path integral.
1084 # A good value to start is 50 and gradually increase until the
1085 # sum to diff property is met within the desired error range.
1086 },
1087 "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
1088 # contribute to the label being predicted. A sampling strategy is used to
1089 # approximate the value rather than considering all subsets of features.
1090 # contribute to the label being predicted. A sampling strategy is used to
1091 # approximate the value rather than considering all subsets of features.
1092 "numPaths": 42, # The number of feature permutations to consider when approximating the
1093 # Shapley values.
1094 },
1095 "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
1096 # of the model's fully differentiable structure. Refer to this paper for
1097 # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
1098 # of the model's fully differentiable structure. Refer to this paper for
1099 # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
1100 "numIntegralSteps": 42, # Number of steps for approximating the path integral.
1101 # A good value to start is 50 and gradually increase until the
1102 # sum to diff property is met within the desired error range.
1103 },
1104 },
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001105 "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
1106 # requests that do not specify a version.
1107 #
1108 # You can change the default version by calling
Dan O'Mearadd494642020-05-01 07:42:23 -07001109 # projects.methods.versions.setDefault.
1110 "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
1111 # applies to online prediction service. If this field is not specified, it
1112 # defaults to `mls1-c1-m2`.
1113 #
1114 # Online prediction supports the following machine types:
1115 #
1116 # * `mls1-c1-m2`
1117 # * `mls1-c4-m2`
1118 # * `n1-standard-2`
1119 # * `n1-standard-4`
1120 # * `n1-standard-8`
1121 # * `n1-standard-16`
1122 # * `n1-standard-32`
1123 # * `n1-highmem-2`
1124 # * `n1-highmem-4`
1125 # * `n1-highmem-8`
1126 # * `n1-highmem-16`
1127 # * `n1-highmem-32`
1128 # * `n1-highcpu-2`
1129 # * `n1-highcpu-4`
1130 # * `n1-highcpu-8`
1131 # * `n1-highcpu-16`
1132 # * `n1-highcpu-32`
1133 #
1134 # `mls1-c1-m2` is generally available. All other machine types are available
1135 # in beta. Learn more about the [differences between machine
1136 # types](/ml-engine/docs/machine-types-online-prediction).
1137 "description": "A String", # Optional. The description specified for the version when it was created.
1138 "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
1139 #
1140 # For more information, see the
1141 # [runtime version list](/ml-engine/docs/runtime-version-list) and
1142 # [how to manage runtime versions](/ml-engine/docs/versioning).
1143 "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
1144 # model. You should generally use `auto_scaling` with an appropriate
1145 # `min_nodes` instead, but this option is available if you want more
1146 # predictable billing. Beware that latency and error rates will increase
1147 # if the traffic exceeds the capacity of the system to serve it based
1148 # on the selected number of nodes.
1149 "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
1150 # starting from the time the model is deployed, so the cost of operating
1151 # this model will be proportional to `nodes` * number of hours since
1152 # last billing cycle plus the cost for each prediction performed.
1153 },
1154 "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
1155 "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
1156 # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
1157 # `XGBOOST`. If you do not specify a framework, AI Platform
1158 # will analyze files in the deployment_uri to determine a framework. If you
1159 # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
1160 # of the model to 1.4 or greater.
1161 #
1162 # Do **not** specify a framework if you're deploying a [custom
1163 # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
1164 #
1165 # If you specify a [Compute Engine (N1) machine
1166 # type](/ml-engine/docs/machine-types-online-prediction) in the
1167 # `machineType` field, you must specify `TENSORFLOW`
1168 # for the framework.
1169 "createTime": "A String", # Output only. The time the version was created.
1170 "name": "A String", # Required. The name specified for the version when it was created.
1171 #
1172 # The version name must be unique within the model it is created in.
1173 "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
1174 # response to increases and decreases in traffic. Care should be
1175 # taken to ramp up traffic according to the model's ability to scale
1176 # or you will start seeing increases in latency and 429 response codes.
1177 #
1178 # Note that you cannot use AutoScaling if your version uses
1179 # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
1180 # `manual_scaling`.
1181 "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
1182 # nodes are always up, starting from the time the model is deployed.
1183 # Therefore, the cost of operating this model will be at least
1184 # `rate` * `min_nodes` * number of hours since last billing cycle,
1185 # where `rate` is the cost per node-hour as documented in the
1186 # [pricing guide](/ml-engine/docs/pricing),
1187 # even if no predictions are performed. There is additional cost for each
1188 # prediction performed.
1189 #
1190 # Unlike manual scaling, if the load gets too heavy for the nodes
1191 # that are up, the service will automatically add nodes to handle the
1192 # increased load as well as scale back as traffic drops, always maintaining
1193 # at least `min_nodes`. You will be charged for the time in which additional
1194 # nodes are used.
1195 #
1196 # If `min_nodes` is not specified and AutoScaling is used with a [legacy
1197 # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
1198 # `min_nodes` defaults to 0, in which case, when traffic to a model stops
1199 # (and after a cool-down period), nodes will be shut down and no charges will
1200 # be incurred until traffic to the model resumes.
1201 #
1202 # If `min_nodes` is not specified and AutoScaling is used with a [Compute
1203 # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
1204 # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
1205 # Compute Engine machine type.
1206 #
1207 # Note that you cannot use AutoScaling if your version uses
1208 # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
1209 # ManualScaling.
1210 #
1211 # You can set `min_nodes` when creating the model version, and you can also
1212 # update `min_nodes` for an existing version:
1213 # &lt;pre&gt;
1214 # update_body.json:
1215 # {
1216 # 'autoScaling': {
1217 # 'minNodes': 5
1218 # }
1219 # }
1220 # &lt;/pre&gt;
1221 # HTTP request:
1222 # &lt;pre style="max-width: 626px;"&gt;
1223 # PATCH
1224 # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
1225 # -d @./update_body.json
1226 # &lt;/pre&gt;
1227 },
1228 "pythonVersion": "A String", # Required. The version of Python used in prediction.
1229 #
1230 # The following Python versions are available:
1231 #
1232 # * Python '3.7' is available when `runtime_version` is set to '1.15' or
1233 # later.
1234 # * Python '3.5' is available when `runtime_version` is set to a version
1235 # from '1.4' to '1.14'.
1236 # * Python '2.7' is available when `runtime_version` is set to '1.15' or
1237 # earlier.
1238 #
1239 # Read more about the Python versions available for [each runtime
1240 # version](/ml-engine/docs/runtime-version-list).
1241 "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
1242 # projects.models.versions.patch
1243 # request. Specifying it in a
1244 # projects.models.versions.create
1245 # request has no effect.
1246 #
1247 # Configures the request-response pair logging on predictions from this
1248 # Version.
1249 # Online prediction requests to a model version and the responses to these
1250 # requests are converted to raw strings and saved to the specified BigQuery
1251 # table. Logging is constrained by [BigQuery quotas and
1252 # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
1253 # AI Platform Prediction does not log request-response pairs, but it continues
1254 # to serve predictions.
1255 #
1256 # If you are using [continuous
1257 # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
1258 # specify this configuration manually. Setting up continuous evaluation
1259 # automatically enables logging of request-response pairs.
1260 "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
1261 # For example, if you want to log 10% of requests, enter `0.1`. The sampling
1262 # window is the lifetime of the model version. Defaults to 0.
1263 "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
1264 # "&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;"
1265 #
1266 # The specified table must already exist, and the "Cloud ML Service Agent"
1267 # for your project must have permission to write to it. The table must have
1268 # the following [schema](/bigquery/docs/schemas):
1269 #
1270 # &lt;table&gt;
1271 # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style="display: table-cell"&gt;Type&lt;/th&gt;
1272 # &lt;th style="display: table-cell"&gt;Mode&lt;/th&gt;&lt;/tr&gt;
1273 # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1274 # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1275 # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1276 # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1277 # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
1278 # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
1279 # &lt;/table&gt;
1280 },
1281 },
1282 ],
1283 }</pre>
1284</div>
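<p>Editorial sketch (not part of the generated reference): given a <code>list</code> response shaped like the dictionary above, the snippet below summarizes each version's scaling configuration. The helper name and sample data are invented for illustration; only the field names (<code>versions</code>, <code>manualScaling</code>, <code>autoScaling</code>, <code>minNodes</code>, <code>nodes</code>) come from the schema.</p>

```python
# Summarize how each deployed version in a list response is scaled.
# Field names follow the Version schema documented above; the sample
# response dict is invented for demonstration.

def summarize_scaling(list_response):
    """Return a dict mapping version name -> human-readable scaling mode."""
    summary = {}
    for version in list_response.get("versions", []):
        if "manualScaling" in version:
            nodes = version["manualScaling"].get("nodes", 0)
            mode = "manual, %d node(s)" % nodes
        elif "autoScaling" in version:
            min_nodes = version["autoScaling"].get("minNodes", 0)
            mode = "auto, minNodes=%d" % min_nodes
        else:
            # Neither block present: the service applies its own defaults.
            mode = "auto (service defaults)"
        summary[version.get("name", "?")] = mode
    return summary

sample_response = {
    "versions": [
        {"name": "projects/p/models/m/versions/v1",
         "manualScaling": {"nodes": 2}},
        {"name": "projects/p/models/m/versions/v2",
         "autoScaling": {"minNodes": 1}},
    ]
}
summary = summarize_scaling(sample_response)
```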
1285
1286<div class="method">
1287 <code class="details" id="list_next">list_next(previous_request, previous_response)</code>
1288 <pre>Retrieves the next page of results.
1289
1290Args:
1291 previous_request: The request for the previous page. (required)
1292 previous_response: The response from the request for the previous page. (required)
1293
1294Returns:
1295 A request object that you can call 'execute()' on to request the next
1296 page. Returns None if there are no more items in the collection.
1297 </pre>
1298</div>
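<p>Editorial sketch (not part of the generated reference): <code>list_next</code> follows the standard google-api-python-client pagination pattern shown below. In real use, <code>collection</code> would be <code>build("ml", "v1").projects().models().versions()</code>; here a small stub stands in for it so the loop is self-contained. The stub classes and helper name are invented.</p>

```python
# The list / list_next loop: keep executing requests until list_next
# returns None (no more pages).

def list_all(collection, **list_kwargs):
    """Accumulate version dicts across all pages of a list call."""
    items = []
    request = collection.list(**list_kwargs)
    while request is not None:
        response = request.execute()
        items.extend(response.get("versions", []))
        # Returns the next-page request, or None when exhausted.
        request = collection.list_next(request, response)
    return items

class _FakeRequest:
    """Stands in for an HttpRequest; execute() returns a canned page."""
    def __init__(self, page):
        self._page = page
    def execute(self):
        return self._page

class _FakeCollection:
    """Two-page stub mimicking the versions collection's paging."""
    _pages = [
        {"versions": [{"name": "v1"}], "nextPageToken": "t1"},
        {"versions": [{"name": "v2"}]},
    ]
    def list(self, **kwargs):
        return _FakeRequest(self._pages[0])
    def list_next(self, previous_request, previous_response):
        if "nextPageToken" in previous_response:
            return _FakeRequest(self._pages[1])
        return None

names = [v["name"] for v in list_all(_FakeCollection(), parent="projects/p/models/m")]
```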
1299
1300<div class="method">
1301 <code class="details" id="patch">patch(name, body=None, updateMask=None, x__xgafv=None)</code>
1302 <pre>Updates the specified Version resource.
1303
1304Currently the only updatable fields are `description`,
1305`requestLoggingConfig`, `autoScaling.minNodes`, and `manualScaling.nodes`.
1306
1307Args:
1308 name: string, Required. The name of the model. (required)
1309 body: object, The request body.
1310 The object takes the form of:
1311
1312{ # Represents a version of the model.
1313 #
1314 # Each version is a trained model deployed in the cloud, ready to handle
1315 # prediction requests. A model can have multiple versions. You can get
1316 # information about all of the versions of a given model by calling
1317 # projects.models.versions.list.
1318 "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
1319 # Only specify this field if you have specified a Compute Engine (N1) machine
1320 # type in the `machineType` field. Learn more about [using GPUs for online
1321 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1322 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1323 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1324 # [accelerators for online
1325 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1326 "count": "A String", # The number of accelerators to attach to each machine running the job.
1327 "type": "A String", # The type of accelerator to use.
1328 },
1329 "labels": { # Optional. One or more labels that you can add, to organize your model
1330 # versions. Each label is a key-value pair, where both the key and the value
1331 # are arbitrary strings that you supply.
1332 # For more information, see the documentation on
1333 # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
1334 "a_key": "A String",
1335 },
1336 "predictionClass": "A String", # Optional. The fully qualified name
1337 # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
1338 # the Predictor interface described in this reference field. The module
1339 # containing this class should be included in a package provided to the
1340 # [`packageUris` field](#Version.FIELDS.package_uris).
1341 #
1342 # Specify this field if and only if you are deploying a [custom prediction
1343 # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
1344 # If you specify this field, you must set
1345 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
1346 # you must set `machineType` to a [legacy (MLS1)
1347 # machine type](/ml-engine/docs/machine-types-online-prediction).
1348 #
1349 # The following code sample provides the Predictor interface:
1350 #
1351 # &lt;pre style="max-width: 626px;"&gt;
1352 # class Predictor(object):
1353 # """Interface for constructing custom predictors."""
1354 #
1355 # def predict(self, instances, **kwargs):
1356 # """Performs custom prediction.
1357 #
1358 # Instances are the decoded values from the request. They have already
1359 # been deserialized from JSON.
1360 #
1361 # Args:
1362 # instances: A list of prediction input instances.
1363 # **kwargs: A dictionary of keyword args provided as additional
1364 # fields on the predict request body.
1365 #
1366 # Returns:
1367 # A list of outputs containing the prediction results. This list must
1368 # be JSON serializable.
1369 # """
1370 # raise NotImplementedError()
1371 #
1372 # @classmethod
1373 # def from_path(cls, model_dir):
1374 # """Creates an instance of Predictor using the given path.
1375 #
1376 # Loading of the predictor should be done in this method.
1377 #
1378 # Args:
1379 # model_dir: The local directory that contains the exported model
1380 # file along with any additional files uploaded when creating the
1381 # version resource.
1382 #
1383 # Returns:
1384 # An instance implementing this Predictor class.
1385 # """
1386 # raise NotImplementedError()
1387 # &lt;/pre&gt;
1388 #
1389 # Learn more about [the Predictor interface and custom prediction
1390 # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
1391 "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
1392 "state": "A String", # Output only. The state of a version.
1393 "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
1394 # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
1395 # or [scikit-learn pipelines with custom
1396 # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
1397 #
1398 # For a custom prediction routine, one of these packages must contain your
1399 # Predictor class (see
1400 # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
1401 # include any dependencies that your Predictor or scikit-learn pipeline
1402 # uses that are not already included in your selected [runtime
1403 # version](/ml-engine/docs/tensorflow/runtime-version-list).
1404 #
1405 # If you specify this field, you must also set
1406 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
1407 "A String",
1408 ],
1409 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
1410 # prevent simultaneous updates of a model from overwriting each other.
1411 # It is strongly suggested that systems make use of the `etag` in the
1412 # read-modify-write cycle to perform model updates in order to avoid race
1413 # conditions: An `etag` is returned in the response to `GetVersion`, and
1414 # systems are expected to put that etag in the request to `UpdateVersion` to
1415 # ensure that their change will be applied to the model as intended.
1416 "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
1417 "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
1418 # create the version. See the
1419 # [guide to model
1420 # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
1421 # information.
1422 #
1423 # When passing Version to
1424 # projects.models.versions.create
1425 # the model service uses the specified location as the source of the model.
1426 # Once deployed, the model version is hosted by the prediction service, so
1427 # this location is useful only as a historical record.
1428 # The total number of model files can't exceed 1000.
1429 "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
1430 # Some explanation features require additional metadata to be loaded
1431 # as part of the model payload.
1432 # There are two feature attribution methods supported for TensorFlow models:
1433 # integrated gradients and sampled Shapley.
1434 # [Learn more about feature
1435 # attributions.](/ml-engine/docs/ai-explanations/overview)
1436 "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
1437 # of the model's fully differentiable structure. Refer to this paper for
1438 # more details: https://arxiv.org/abs/1906.02825
1439 # Currently only implemented for models with natural image inputs.
1443 "numIntegralSteps": 42, # Number of steps for approximating the path integral.
1444 # A good value to start is 50 and gradually increase until the
1445 # sum to diff property is met within the desired error range.
1446 },
1447 "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
1448 # contribute to the label being predicted. A sampling strategy is used to
1449 # approximate the value rather than considering all subsets of features.
1452 "numPaths": 42, # The number of feature permutations to consider when approximating the
1453 # Shapley values.
1454 },
1455 "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
1456 # of the model's fully differentiable structure. Refer to this paper for
1457 # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
1460 "numIntegralSteps": 42, # Number of steps for approximating the path integral.
1461 # A good value to start is 50 and gradually increase until the
1462 # sum to diff property is met within the desired error range.
1463 },
1464 },
1465 "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
1466 # requests that do not specify a version.
1467 #
1468 # You can change the default version by calling
1469 # projects.models.versions.setDefault.
1470 "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
1471 # applies to online prediction service. If this field is not specified, it
1472 # defaults to `mls1-c1-m2`.
1473 #
1474 # Online prediction supports the following machine types:
1475 #
1476 # * `mls1-c1-m2`
1477 # * `mls1-c4-m2`
1478 # * `n1-standard-2`
1479 # * `n1-standard-4`
1480 # * `n1-standard-8`
1481 # * `n1-standard-16`
1482 # * `n1-standard-32`
1483 # * `n1-highmem-2`
1484 # * `n1-highmem-4`
1485 # * `n1-highmem-8`
1486 # * `n1-highmem-16`
1487 # * `n1-highmem-32`
1488 # * `n1-highcpu-2`
1489 # * `n1-highcpu-4`
1490 # * `n1-highcpu-8`
1491 # * `n1-highcpu-16`
1492 # * `n1-highcpu-32`
1493 #
1494 # `mls1-c1-m2` is generally available. All other machine types are available
1495 # in beta. Learn more about the [differences between machine
1496 # types](/ml-engine/docs/machine-types-online-prediction).
1497 "description": "A String", # Optional. The description specified for the version when it was created.
1498 "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
1499 #
1500 # For more information, see the
1501 # [runtime version list](/ml-engine/docs/runtime-version-list) and
1502 # [how to manage runtime versions](/ml-engine/docs/versioning).
1503 "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
1504 # model. You should generally use `auto_scaling` with an appropriate
1505 # `min_nodes` instead, but this option is available if you want more
1506 # predictable billing. Beware that latency and error rates will increase
1507 # if the traffic exceeds the capacity of the system to serve it based
1508 # on the selected number of nodes.
1509 "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
1510 # starting from the time the model is deployed, so the cost of operating
1511 # this model will be proportional to `nodes` * number of hours since
1512 # last billing cycle plus the cost for each prediction performed.
1513 },
1514 "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
1515 "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
1516 # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
1517 # `XGBOOST`. If you do not specify a framework, AI Platform
1518 # will analyze files in the deployment_uri to determine a framework. If you
1519 # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
1520 # of the model to 1.4 or greater.
1521 #
1522 # Do **not** specify a framework if you're deploying a [custom
1523 # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
1524 #
1525 # If you specify a [Compute Engine (N1) machine
1526 # type](/ml-engine/docs/machine-types-online-prediction) in the
1527 # `machineType` field, you must specify `TENSORFLOW`
1528 # for the framework.
1529 "createTime": "A String", # Output only. The time the version was created.
1530 "name": "A String", # Required. The name specified for the version when it was created.
1531 #
1532 # The version name must be unique within the model it is created in.
1533 "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
1534 # response to increases and decreases in traffic. Care should be
1535 # taken to ramp up traffic according to the model's ability to scale
1536 # or you will start seeing increases in latency and 429 response codes.
1537 #
1538 # Note that you cannot use AutoScaling if your version uses
1539 # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
1540 # `manual_scaling`.
1541 "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
1542 # nodes are always up, starting from the time the model is deployed.
1543 # Therefore, the cost of operating this model will be at least
1544 # `rate` * `min_nodes` * number of hours since last billing cycle,
1545 # where `rate` is the cost per node-hour as documented in the
1546 # [pricing guide](/ml-engine/docs/pricing),
1547 # even if no predictions are performed. There is additional cost for each
1548 # prediction performed.
1549 #
1550 # Unlike manual scaling, if the load gets too heavy for the nodes
1551 # that are up, the service will automatically add nodes to handle the
1552 # increased load as well as scale back as traffic drops, always maintaining
1553 # at least `min_nodes`. You will be charged for the time in which additional
1554 # nodes are used.
1555 #
1556 # If `min_nodes` is not specified and AutoScaling is used with a [legacy
1557 # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
1558 # `min_nodes` defaults to 0, in which case, when traffic to a model stops
1559 # (and after a cool-down period), nodes will be shut down and no charges will
1560 # be incurred until traffic to the model resumes.
1561 #
1562 # If `min_nodes` is not specified and AutoScaling is used with a [Compute
1563 # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
1564 # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
1565 # Compute Engine machine type.
1566 #
1567 # Note that you cannot use AutoScaling if your version uses
1568 # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
1569 # ManualScaling.
1570 #
1571 # You can set `min_nodes` when creating the model version, and you can also
1572 # update `min_nodes` for an existing version:
1573 # &lt;pre&gt;
1574 # update_body.json:
1575 # {
1576 # 'autoScaling': {
1577 # 'minNodes': 5
1578 # }
1579 # }
1580 # &lt;/pre&gt;
1581 # HTTP request:
1582 # &lt;pre style="max-width: 626px;"&gt;
1583 # PATCH
1584 # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
1585 # -d @./update_body.json
1586 # &lt;/pre&gt;
1587 },
1588 "pythonVersion": "A String", # Required. The version of Python used in prediction.
1589 #
1590 # The following Python versions are available:
1591 #
1592 # * Python '3.7' is available when `runtime_version` is set to '1.15' or
1593 # later.
1594 # * Python '3.5' is available when `runtime_version` is set to a version
1595 # from '1.4' to '1.14'.
1596 # * Python '2.7' is available when `runtime_version` is set to '1.15' or
1597 # earlier.
1598 #
1599 # Read more about the Python versions available for [each runtime
1600 # version](/ml-engine/docs/runtime-version-list).
1601 "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
1602 # projects.models.versions.patch
1603 # request. Specifying it in a
1604 # projects.models.versions.create
1605 # request has no effect.
1606 #
1607 # Configures the request-response pair logging on predictions from this
1608 # Version.
1609 # Online prediction requests to a model version and the responses to these
1610 # requests are converted to raw strings and saved to the specified BigQuery
1611 # table. Logging is constrained by [BigQuery quotas and
1612 # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
1613 # AI Platform Prediction does not log request-response pairs, but it continues
1614 # to serve predictions.
1615 #
1616 # If you are using [continuous
1617 # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
1618 # specify this configuration manually. Setting up continuous evaluation
1619 # automatically enables logging of request-response pairs.
1620 "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
1621 # For example, if you want to log 10% of requests, enter `0.1`. The sampling
1622 # window is the lifetime of the model version. Defaults to 0.
1623 "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
1624 # "&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;"
1625 #
1626 # The specified table must already exist, and the "Cloud ML Service Agent"
1627 # for your project must have permission to write to it. The table must have
1628 # the following [schema](/bigquery/docs/schemas):
1629 #
1630 # &lt;table&gt;
1631 # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style="display: table-cell"&gt;Type&lt;/th&gt;
1632 # &lt;th style="display: table-cell"&gt;Mode&lt;/th&gt;&lt;/tr&gt;
1633 # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1634 # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1635 # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1636 # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1637 # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
1638 # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
1639 # &lt;/table&gt;
1640 },
1641}
1642
1643 updateMask: string, Required. Specifies the path, relative to `Version`, of the field to
1644update. Must be present and non-empty.
1645
1646For example, to change the description of a version to "foo", the
1647`update_mask` parameter would be specified as `description`, and the
1648`PATCH` request body would specify the new value, as follows:
1649
1650```
1651{
1652 "description": "foo"
1653}
1654```
1655
1656Currently the only supported update mask fields are `description`,
1657`requestLoggingConfig`, `autoScaling.minNodes`, and `manualScaling.nodes`.
1658However, you can only update `manualScaling.nodes` if the version uses a
1659[Compute Engine (N1)
1660machine type](/ml-engine/docs/machine-types-online-prediction).
1661 x__xgafv: string, V1 error format.
1662 Allowed values
1663 1 - v1 error format
1664 2 - v2 error format
1665
1666Returns:
1667 An object of the form:
1668
1669 { # This resource represents a long-running operation that is the result of a
1670 # network API call.
1671 "metadata": { # Service-specific metadata associated with the operation. It typically
1672 # contains progress information and common metadata such as create time.
1673 # Some services might not provide such metadata. Any method that returns a
1674 # long-running operation should document the metadata type, if any.
1675 "a_key": "", # Properties of the object. Contains field @type with type URL.
1676 },
1677 "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
1678 # different programming environments, including REST APIs and RPC APIs. It is
1679 # used by [gRPC](https://github.com/grpc). Each `Status` message contains
1680 # three pieces of data: error code, error message, and error details.
1681 #
1682 # You can find out more about this error model and how to work with it in the
1683 # [API Design Guide](https://cloud.google.com/apis/design/errors).
1684 "message": "A String", # A developer-facing error message, which should be in English. Any
1685 # user-facing error message should be localized and sent in the
1686 # google.rpc.Status.details field, or localized by the client.
1687 "code": 42, # The status code, which should be an enum value of google.rpc.Code.
1688 "details": [ # A list of messages that carry the error details. There is a common set of
1689 # message types for APIs to use.
1690 {
1691 "a_key": "", # Properties of the object. Contains field @type with type URL.
1692 },
1693 ],
1694 },
1695 "done": True or False, # If the value is `false`, it means the operation is still in progress.
1696 # If `true`, the operation is completed, and either `error` or `response` is
1697 # available.
1698 "response": { # The normal response of the operation in case of success. If the original
1699 # method returns no data on success, such as `Delete`, the response is
1700 # `google.protobuf.Empty`. If the original method is standard
1701 # `Get`/`Create`/`Update`, the response should be the resource. For other
1702 # methods, the response should have the type `XxxResponse`, where `Xxx`
1703 # is the original method name. For example, if the original method name
1704 # is `TakeSnapshot()`, the inferred response type is
1705 # `TakeSnapshotResponse`.
1706 "a_key": "", # Properties of the object. Contains field @type with type URL.
1707 },
1708 "name": "A String", # The server-assigned name, which is only unique within the same service that
1709 # originally returns it. If you use the default HTTP mapping, the
1710 # `name` should be a resource name ending with `operations/{unique_id}`.
1711 }</pre>
1712</div>
1713
1714<div class="method">
1715 <code class="details" id="setDefault">setDefault(name, body=None, x__xgafv=None)</code>
1716 <pre>Designates a version to be the default for the model.
1717
1718The default version is used for prediction requests made against the model
1719that don't specify a version.
1720
1721The first version to be created for a model is automatically set as the
1722default. You must make any subsequent changes to the default version
1723setting manually using this method.
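As a sketch, the fully qualified version name passed to this method can be built as follows. The helper function and the project, model, and version identifiers are illustrative placeholders, not part of the API:

```python
def version_resource_name(project, model, version):
    """Builds the fully qualified name expected by setDefault:
    projects/{project}/models/{model}/versions/{version}."""
    return 'projects/{}/models/{}/versions/{}'.format(project, model, version)

# Hypothetical identifiers, for illustration only.
name = version_resource_name('my-project', 'my_model', 'v2')
# name == 'projects/my-project/models/my_model/versions/v2'
```

With the generated Python client, this name would typically be passed as the `name` argument, with an empty request body, e.g. `ml.projects().models().versions().setDefault(name=name, body={})`.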
1724
1725Args:
1726 name: string, Required. The name of the version to make the default for the model. You
1727can get the names of all the versions of a model by calling
1728projects.models.versions.list. (required)
1729 body: object, The request body.
1730 The object takes the form of:
1731
1732{ # Request message for the SetDefaultVersion request.
1733 }
1734
1735 x__xgafv: string, V1 error format.
1736 Allowed values
1737 1 - v1 error format
1738 2 - v2 error format
1739
1740Returns:
1741 An object of the form:
1742
1743 { # Represents a version of the model.
1744 #
1745 # Each version is a trained model deployed in the cloud, ready to handle
1746 # prediction requests. A model can have multiple versions. You can get
1747 # information about all of the versions of a given model by calling
1748 # projects.models.versions.list.
1749 "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
1750 # Only specify this field if you have specified a Compute Engine (N1) machine
1751 # type in the `machineType` field. Learn more about [using GPUs for online
1752 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1753 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1754 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1755 # [accelerators for online
1756 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1757 "count": "A String", # The number of accelerators to attach to each machine running the job.
1758 "type": "A String", # The type of accelerator to use.
1759 },
1760 "labels": { # Optional. One or more labels that you can add, to organize your model
1761 # versions. Each label is a key-value pair, where both the key and the value
1762 # are arbitrary strings that you supply.
1763 # For more information, see the documentation on
1764 # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
1765 "a_key": "A String",
1766 },
1767 "predictionClass": "A String", # Optional. The fully qualified name
1768 # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
1769 # the Predictor interface described in this reference field. The module
1770 # containing this class should be included in a package provided to the
1771 # [`packageUris` field](#Version.FIELDS.package_uris).
1772 #
1773 # Specify this field if and only if you are deploying a [custom prediction
1774 # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
1775 # If you specify this field, you must set
1776 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
1777 # you must set `machineType` to a [legacy (MLS1)
1778 # machine type](/ml-engine/docs/machine-types-online-prediction).
1779 #
1780 # The following code sample provides the Predictor interface:
1781 #
1782 # &lt;pre style="max-width: 626px;"&gt;
1783 # class Predictor(object):
1784 # """Interface for constructing custom predictors."""
1785 #
1786 # def predict(self, instances, **kwargs):
1787 # """Performs custom prediction.
1788 #
1789 # Instances are the decoded values from the request. They have already
1790 # been deserialized from JSON.
1791 #
1792 # Args:
1793 # instances: A list of prediction input instances.
1794 # **kwargs: A dictionary of keyword args provided as additional
1795 # fields on the predict request body.
1796 #
1797 # Returns:
1798 # A list of outputs containing the prediction results. This list must
1799 # be JSON serializable.
1800 # """
1801 # raise NotImplementedError()
1802 #
1803 # @classmethod
1804 # def from_path(cls, model_dir):
1805 # """Creates an instance of Predictor using the given path.
1806 #
1807 # Loading of the predictor should be done in this method.
1808 #
1809 # Args:
1810 # model_dir: The local directory that contains the exported model
1811 # file along with any additional files uploaded when creating the
1812 # version resource.
1813 #
1814 # Returns:
1815 # An instance implementing this Predictor class.
1816 # """
1817 # raise NotImplementedError()
1818 # &lt;/pre&gt;
1819 #
1820 # Learn more about [the Predictor interface and custom prediction
1821 # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
1822 "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
1823 "state": "A String", # Output only. The state of a version.
1824 "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
1825 # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
1826 # or [scikit-learn pipelines with custom
1827 # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
1828 #
1829 # For a custom prediction routine, one of these packages must contain your
1830 # Predictor class (see
1831 # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
1832 # include any dependencies that your Predictor or scikit-learn pipeline
1833 # uses that are not already included in your selected [runtime
1834 # version](/ml-engine/docs/tensorflow/runtime-version-list).
1835 #
1836 # If you specify this field, you must also set
1837 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
1838 "A String",
1839 ],
1840 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
1841 # prevent simultaneous updates of a model from overwriting each other.
1842 # It is strongly suggested that systems make use of the `etag` in the
1843 # read-modify-write cycle to perform model updates in order to avoid race
1844 # conditions: An `etag` is returned in the response to `GetVersion`, and
1845 # systems are expected to put that etag in the request to `UpdateVersion` to
1846 # ensure that their change will be applied to the model as intended.
1847 "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
1848 "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
1849 # create the version. See the
1850 # [guide to model
1851 # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
1852 # information.
1853 #
1854 # When passing Version to
1855 # projects.models.versions.create
1856 # the model service uses the specified location as the source of the model.
1857 # Once deployed, the model version is hosted by the prediction service, so
1858 # this location is useful only as a historical record.
1859 # The total number of model files can't exceed 1000.
1860 "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
1861 # Some explanation features require additional metadata to be loaded
1862 # as part of the model payload.
1863 # There are two feature attribution methods supported for TensorFlow models:
1864 # integrated gradients and sampled Shapley.
1865 # [Learn more about feature
1866 # attributions.](/ml-engine/docs/ai-explanations/overview)
1867 "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage
1868 # of the model's fully differentiable structure. Refer to this paper for
1869 # more details: https://arxiv.org/abs/1906.02825
1870 # Currently only implemented for models with natural image inputs.
1874 "numIntegralSteps": 42, # Number of steps for approximating the path integral.
1875 # A good value to start is 50 and gradually increase until the
1876 # sum to diff property is met within the desired error range.
1877 },
1878 "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that
1879 # contribute to the label being predicted. A sampling strategy is used to
1880 # approximate the value rather than considering all subsets of features.
1883 "numPaths": 42, # The number of feature permutations to consider when approximating the
1884 # Shapley values.
1885 },
1886 "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage
1887 # of the model's fully differentiable structure. Refer to this paper for
1888 # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
1891 "numIntegralSteps": 42, # Number of steps for approximating the path integral.
1892 # A good value to start is 50 and gradually increase until the
1893 # sum to diff property is met within the desired error range.
1894 },
1895 },
1896 "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
1897 # requests that do not specify a version.
1898 #
1899 # You can change the default version by calling
1900 # projects.models.versions.setDefault.
1901 "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
1902 # applies to online prediction service. If this field is not specified, it
1903 # defaults to `mls1-c1-m2`.
1904 #
1905 # Online prediction supports the following machine types:
1906 #
1907 # * `mls1-c1-m2`
1908 # * `mls1-c4-m2`
1909 # * `n1-standard-2`
1910 # * `n1-standard-4`
1911 # * `n1-standard-8`
1912 # * `n1-standard-16`
1913 # * `n1-standard-32`
1914 # * `n1-highmem-2`
1915 # * `n1-highmem-4`
1916 # * `n1-highmem-8`
1917 # * `n1-highmem-16`
1918 # * `n1-highmem-32`
1919 # * `n1-highcpu-2`
1920 # * `n1-highcpu-4`
1921 # * `n1-highcpu-8`
1922 # * `n1-highcpu-16`
1923 # * `n1-highcpu-32`
1924 #
1925 # `mls1-c1-m2` is generally available. All other machine types are available
1926 # in beta. Learn more about the [differences between machine
1927 # types](/ml-engine/docs/machine-types-online-prediction).
1928 "description": "A String", # Optional. The description specified for the version when it was created.
1929 "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
1930 #
1931 # For more information, see the
1932 # [runtime version list](/ml-engine/docs/runtime-version-list) and
1933 # [how to manage runtime versions](/ml-engine/docs/versioning).
1934 "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
1935 # model. You should generally use `auto_scaling` with an appropriate
1936 # `min_nodes` instead, but this option is available if you want more
1937 # predictable billing. Beware that latency and error rates will increase
1938 # if the traffic exceeds the capacity of the system to serve it based
1939 # on the selected number of nodes.
1940 "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
1941 # starting from the time the model is deployed, so the cost of operating
1942 # this model will be proportional to `nodes` * number of hours since
1943 # last billing cycle plus the cost for each prediction performed.
1944 },
1945 "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
1946 "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
1947 # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
1948 # `XGBOOST`. If you do not specify a framework, AI Platform
1949 # will analyze files in the deployment_uri to determine a framework. If you
1950 # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
1951 # of the model to 1.4 or greater.
1952 #
1953 # Do **not** specify a framework if you're deploying a [custom
1954 # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
1955 #
1956 # If you specify a [Compute Engine (N1) machine
1957 # type](/ml-engine/docs/machine-types-online-prediction) in the
1958 # `machineType` field, you must specify `TENSORFLOW`
1959 # for the framework.
1960 "createTime": "A String", # Output only. The time the version was created.
1961 "name": "A String", # Required. The name specified for the version when it was created.
1962 #
1963 # The version name must be unique within the model it is created in.
1964 "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
1965 # response to increases and decreases in traffic. Care should be
1966 # taken to ramp up traffic according to the model's ability to scale
1967 # or you will start seeing increases in latency and 429 response codes.
1968 #
1969 # Note that you cannot use AutoScaling if your version uses
1970 # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
1971 # `manual_scaling`.
1972 "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
1973 # nodes are always up, starting from the time the model is deployed.
1974 # Therefore, the cost of operating this model will be at least
1975 # `rate` * `min_nodes` * number of hours since last billing cycle,
1976 # where `rate` is the cost per node-hour as documented in the
1977 # [pricing guide](/ml-engine/docs/pricing),
1978 # even if no predictions are performed. There is additional cost for each
1979 # prediction performed.
1980 #
1981 # Unlike manual scaling, if the load gets too heavy for the nodes
1982 # that are up, the service will automatically add nodes to handle the
1983 # increased load as well as scale back as traffic drops, always maintaining
1984 # at least `min_nodes`. You will be charged for the time in which additional
1985 # nodes are used.
1986 #
1987 # If `min_nodes` is not specified and AutoScaling is used with a [legacy
1988 # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
1989 # `min_nodes` defaults to 0, in which case, when traffic to a model stops
1990 # (and after a cool-down period), nodes will be shut down and no charges will
1991 # be incurred until traffic to the model resumes.
1992 #
1993 # If `min_nodes` is not specified and AutoScaling is used with a [Compute
1994 # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
1995 # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
1996 # Compute Engine machine type.
1997 #
1998 # Note that you cannot use AutoScaling if your version uses
1999 # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
2000 # ManualScaling.
2001 #
2002 # You can set `min_nodes` when creating the model version, and you can also
2003 # update `min_nodes` for an existing version:
2004 # &lt;pre&gt;
2005 # update_body.json:
2006 # {
2007 # 'autoScaling': {
2008 # 'minNodes': 5
2009 # }
2010 # }
2011 # &lt;/pre&gt;
2012 # HTTP request:
2013 # &lt;pre style="max-width: 626px;"&gt;
2014 # PATCH
2015 # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
2016 # -d @./update_body.json
2017 # &lt;/pre&gt;
2018 },
2019 "pythonVersion": "A String", # Required. The version of Python used in prediction.
2020 #
2021 # The following Python versions are available:
2022 #
2023 # * Python '3.7' is available when `runtime_version` is set to '1.15' or
2024 # later.
2025 # * Python '3.5' is available when `runtime_version` is set to a version
2026 # from '1.4' to '1.14'.
2027 # * Python '2.7' is available when `runtime_version` is set to '1.15' or
2028 # earlier.
2029 #
2030 # Read more about the Python versions available for [each runtime
2031 # version](/ml-engine/docs/runtime-version-list).
2032 "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
2033 # projects.models.versions.patch
2034 # request. Specifying it in a
2035 # projects.models.versions.create
2036 # request has no effect.
2037 #
2038 # Configures the request-response pair logging on predictions from this
2039 # Version.
2040 # Online prediction requests to a model version and the responses to these
2041 # requests are converted to raw strings and saved to the specified BigQuery
2042 # table. Logging is constrained by [BigQuery quotas and
2043 # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
2044 # AI Platform Prediction does not log request-response pairs, but it continues
2045 # to serve predictions.
2046 #
2047 # If you are using [continuous
2048 # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
2049 # specify this configuration manually. Setting up continuous evaluation
2050 # automatically enables logging of request-response pairs.
2051 "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
2052 # For example, if you want to log 10% of requests, enter `0.1`. The sampling
2053 # window is the lifetime of the model version. Defaults to 0.
2054 "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
2055 # "&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;"
2056 #
2057 # The specified table must already exist, and the "Cloud ML Service Agent"
2058 # for your project must have permission to write to it. The table must have
2059 # the following [schema](/bigquery/docs/schemas):
2060 #
2061 # &lt;table&gt;
2062 # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style="display: table-cell"&gt;Type&lt;/th&gt;
2063 # &lt;th style="display: table-cell"&gt;Mode&lt;/th&gt;&lt;/tr&gt;
2064 # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
2065 # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
2066 # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
2067 # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
2068 # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
2069 # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
2070 # &lt;/table&gt;
2071 },
2072 }</pre>
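The `autoScaling.minNodes` pricing note above (`rate` * `min_nodes` * number of hours since the last billing cycle) can be sketched as a quick lower-bound estimate. The node-hour rate used here is a made-up placeholder; see the pricing guide for real rates:

```python
def min_serving_cost(rate_per_node_hour, min_nodes, hours):
    """Lower bound on serving cost while min_nodes stay up:
    rate * min_nodes * hours, excluding per-prediction charges."""
    return rate_per_node_hour * min_nodes * hours

# Two always-on nodes for a 24-hour day at a hypothetical $0.05/node-hour.
cost = min_serving_cost(0.05, 2, 24)
# cost is approximately 2.4, before per-prediction charges
```

Note that with a legacy (MLS1) machine type and `minNodes` left at its default of 0, this lower bound is 0: nodes are shut down after traffic stops, as described above.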
2073</div>
2074
2075</body></html>