<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="ml_v1.html">AI Platform Training &amp; Prediction API</a> . <a href="ml_v1.projects.html">projects</a> . <a href="ml_v1.projects.models.html">models</a> . <a href="ml_v1.projects.models.versions.html">versions</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#create">create(parent, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a new version of a model from a trained TensorFlow model.</p>
<p class="toc_element">
  <code><a href="#delete">delete(name, x__xgafv=None)</a></code></p>
<p class="firstline">Deletes a model version.</p>
<p class="toc_element">
  <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets information about a model version.</p>
<p class="toc_element">
  <code><a href="#list">list(parent, filter=None, pageToken=None, pageSize=None, x__xgafv=None)</a></code></p>
<p class="firstline">Gets basic information about all the versions of a model.</p>
<p class="toc_element">
  <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<p class="toc_element">
  <code><a href="#patch">patch(name, body=None, updateMask=None, x__xgafv=None)</a></code></p>
<p class="firstline">Updates the specified Version resource.</p>
<p class="toc_element">
  <code><a href="#setDefault">setDefault(name, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Designates a version to be the default for the model.</p>
<h3>Method Details</h3>
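<p>The methods below are invoked through the generated Python client. As a quick orientation, the following sketch shows how the <code>parent</code> and <code>name</code> arguments are typically formatted (the project and model identifiers are hypothetical placeholders, and the commented client calls assume the <code>google-api-python-client</code> package with application-default credentials):</p>

```python
# Helpers for building the resource-name strings these methods expect.
# All project/model/version values used here are hypothetical placeholders.

def model_parent(project, model):
    """Builds the `parent` argument for versions.create and versions.list."""
    return "projects/{}/models/{}".format(project, model)


def version_name(project, model, version):
    """Builds the `name` argument for versions.get/delete/patch/setDefault."""
    return "projects/{}/models/{}/versions/{}".format(project, model, version)


# With the client library, these would be used roughly like:
#   from googleapiclient import discovery
#   ml = discovery.build("ml", "v1")
#   ml.projects().models().versions().list(
#       parent=model_parent("my-project", "my_model")).execute()

print(model_parent("my-project", "my_model"))
print(version_name("my-project", "my_model", "v1"))
```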
<div class="method">
    <code class="details" id="create">create(parent, body=None, x__xgafv=None)</code>
  <pre>Creates a new version of a model from a trained TensorFlow model.

If the version created in the cloud by this call is the first deployed
version of the specified model, it will be made the default version of the
model. When you add a version to a model that already has one or more
versions, the default version does not automatically change. If you want a
new version to be the default, you must call
projects.models.versions.setDefault.

Args:
  parent: string, Required. The name of the model. (required)
  body: object, The request body.
    The object takes the form of:

{ # Represents a version of the model.
    #
    # Each version is a trained model deployed in the cloud, ready to handle
    # prediction requests. A model can have multiple versions. You can get
    # information about all of the versions of a given model by calling
    # projects.models.versions.list.
  &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your model
      # versions. Each label is a key-value pair, where both the key and the value
      # are arbitrary strings that you supply.
      # For more information, see the documentation on
      # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
    &quot;a_key&quot;: &quot;A String&quot;,
  },
  &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
      # applies to online prediction service. If this field is not specified, it
      # defaults to `mls1-c1-m2`.
      #
      # Online prediction supports the following machine types:
      #
      # * `mls1-c1-m2`
      # * `mls1-c4-m2`
      # * `n1-standard-2`
      # * `n1-standard-4`
      # * `n1-standard-8`
      # * `n1-standard-16`
      # * `n1-standard-32`
      # * `n1-highmem-2`
      # * `n1-highmem-4`
      # * `n1-highmem-8`
      # * `n1-highmem-16`
      # * `n1-highmem-32`
      # * `n1-highcpu-2`
      # * `n1-highcpu-4`
      # * `n1-highcpu-8`
      # * `n1-highcpu-16`
      # * `n1-highcpu-32`
      #
      # `mls1-c1-m2` is generally available. All other machine types are available
      # in beta. Learn more about the [differences between machine
      # types](/ml-engine/docs/machine-types-online-prediction).
  &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
      # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
      # or [scikit-learn pipelines with custom
      # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
      #
      # For a custom prediction routine, one of these packages must contain your
      # Predictor class (see
      # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
      # include any dependencies your Predictor or scikit-learn pipeline
      # uses that are not already included in your selected [runtime
      # version](/ml-engine/docs/tensorflow/runtime-version-list).
      #
      # If you specify this field, you must also set
      # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
    &quot;A String&quot;,
  ],
  &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
      # Only specify this field if you have specified a Compute Engine (N1) machine
      # type in the `machineType` field. Learn more about [using GPUs for online
      # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
      # Note that the AcceleratorConfig can be used in both Jobs and Versions.
      # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
      # [accelerators for online
      # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
    &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
    &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
  },
  &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
  &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
      #
      # The version name must be unique within the model it is created in.
  &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
      # response to increases and decreases in traffic. Care should be
      # taken to ramp up traffic according to the model&#x27;s ability to scale
      # or you will start seeing increases in latency and 429 response codes.
      #
      # Note that you cannot use AutoScaling if your version uses
      # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
      # `manual_scaling`.
    &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
        # nodes are always up, starting from the time the model is deployed.
        # Therefore, the cost of operating this model will be at least
        # `rate` * `min_nodes` * number of hours since last billing cycle,
        # where `rate` is the cost per node-hour as documented in the
        # [pricing guide](/ml-engine/docs/pricing),
        # even if no predictions are performed. There is additional cost for each
        # prediction performed.
        #
        # Unlike manual scaling, if the load gets too heavy for the nodes
        # that are up, the service will automatically add nodes to handle the
        # increased load as well as scale back as traffic drops, always maintaining
        # at least `min_nodes`. You will be charged for the time in which additional
        # nodes are used.
        #
        # If `min_nodes` is not specified and AutoScaling is used with a [legacy
        # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
        # `min_nodes` defaults to 0, in which case, when traffic to a model stops
        # (and after a cool-down period), nodes will be shut down and no charges will
        # be incurred until traffic to the model resumes.
        #
        # If `min_nodes` is not specified and AutoScaling is used with a [Compute
        # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
        # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
        # Compute Engine machine type.
        #
        # Note that you cannot use AutoScaling if your version uses
        # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
        # ManualScaling.
        #
        # You can set `min_nodes` when creating the model version, and you can also
        # update `min_nodes` for an existing version:
        # &lt;pre&gt;
        # update_body.json:
        # {
        #   &#x27;autoScaling&#x27;: {
        #     &#x27;minNodes&#x27;: 5
        #   }
        # }
        # &lt;/pre&gt;
        # HTTP request:
        # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
        # PATCH
        # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
        # -d @./update_body.json
        # &lt;/pre&gt;
  },
  &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
      # Some explanation features require additional metadata to be loaded
      # as part of the model payload.
      # There are two feature attribution methods supported for TensorFlow models:
      # integrated gradients and sampled Shapley.
      # [Learn more about feature
      # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
    &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value, taking advantage
        # of the model&#x27;s fully differentiable structure. Refer to these papers for
        # more details: http://proceedings.mlr.press/v70/sundararajan17a.html and
        # https://arxiv.org/abs/1703.01365
      &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
          # A good value to start is 50 and gradually increase until the
          # sum to diff property is met within the desired error range.
    },
    &quot;xraiAttribution&quot;: { # Attributes credit by computing the XRAI attribution, taking advantage
        # of the model&#x27;s fully differentiable structure. Refer to this paper for
        # more details: https://arxiv.org/abs/1906.02825
        # Currently only implemented for models with natural image inputs.
      &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
          # A good value to start is 50 and gradually increase until the
          # sum to diff property is met within the desired error range.
    },
    &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
        # contribute to the label being predicted. A sampling strategy is used to
        # approximate the value rather than considering all subsets of features.
      &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
          # Shapley values.
    },
  },
  &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
      #
      # The following Python versions are available:
      #
      # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
      #   later.
      # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
      #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
      # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
      #   earlier.
      #
      # Read more about the Python versions available for [each runtime
      # version](/ml-engine/docs/runtime-version-list).
  &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
      # projects.models.versions.patch
      # request. Specifying it in a
      # projects.models.versions.create
      # request has no effect.
      #
      # Configures the request-response pair logging on predictions from this
      # Version.
      # Online prediction requests to a model version and the responses to these
      # requests are converted to raw strings and saved to the specified BigQuery
      # table. Logging is constrained by [BigQuery quotas and
      # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
      # AI Platform Prediction does not log request-response pairs, but it continues
      # to serve predictions.
      #
      # If you are using [continuous
      # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
      # specify this configuration manually. Setting up continuous evaluation
      # automatically enables logging of request-response pairs.
    &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
        # For example, if you want to log 10% of requests, enter `0.1`. The sampling
        # window is the lifetime of the model version. Defaults to 0.
    &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
        # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
        #
        # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
        # for your project must have permission to write to it. The table must have
        # the following [schema](/bigquery/docs/schemas):
        #
        # &lt;table&gt;
        # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
        # &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
        # &lt;/table&gt;
  },
  &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
      # model. You should generally use `auto_scaling` with an appropriate
      # `min_nodes` instead, but this option is available if you want more
      # predictable billing. Beware that latency and error rates will increase
      # if the traffic exceeds the capacity of the system to serve it based
      # on the selected number of nodes.
    &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
        # starting from the time the model is deployed, so the cost of operating
        # this model will be proportional to `nodes` * number of hours since
        # last billing cycle plus the cost for each prediction performed.
  },
  &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
  &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
  &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
      # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
      # `XGBOOST`. If you do not specify a framework, AI Platform
      # will analyze files in the deployment_uri to determine a framework. If you
      # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
      # of the model to 1.4 or greater.
      #
      # Do **not** specify a framework if you&#x27;re deploying a [custom
      # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
      #
      # If you specify a [Compute Engine (N1) machine
      # type](/ml-engine/docs/machine-types-online-prediction) in the
      # `machineType` field, you must specify `TENSORFLOW`
      # for the framework.
  &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
      # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
      # the Predictor interface described in this reference field. The module
      # containing this class should be included in a package provided to the
      # [`packageUris` field](#Version.FIELDS.package_uris).
      #
      # Specify this field if and only if you are deploying a [custom prediction
      # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
      # If you specify this field, you must set
      # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
      # you must set `machineType` to a [legacy (MLS1)
      # machine type](/ml-engine/docs/machine-types-online-prediction).
      #
      # The following code sample provides the Predictor interface:
      #
      # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
      # class Predictor(object):
      #   &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
      #
      #   def predict(self, instances, **kwargs):
      #     &quot;&quot;&quot;Performs custom prediction.
      #
      #     Instances are the decoded values from the request. They have already
      #     been deserialized from JSON.
      #
      #     Args:
      #       instances: A list of prediction input instances.
      #       **kwargs: A dictionary of keyword args provided as additional
      #         fields on the predict request body.
      #
      #     Returns:
      #       A list of outputs containing the prediction results. This list must
      #       be JSON serializable.
      #     &quot;&quot;&quot;
      #     raise NotImplementedError()
      #
      #   @classmethod
      #   def from_path(cls, model_dir):
      #     &quot;&quot;&quot;Creates an instance of Predictor using the given path.
      #
      #     Loading of the predictor should be done in this method.
      #
      #     Args:
      #       model_dir: The local directory that contains the exported model
      #         file along with any additional files uploaded when creating the
      #         version resource.
      #
      #     Returns:
      #       An instance implementing this Predictor class.
      #     &quot;&quot;&quot;
      #     raise NotImplementedError()
      # &lt;/pre&gt;
      #
      # Learn more about [the Predictor interface and custom prediction
      # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
  &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
      # requests that do not specify a version.
      #
      # You can change the default version by calling
      # projects.models.versions.setDefault.
  &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
      # prevent simultaneous updates of a model from overwriting each other.
      # It is strongly suggested that systems make use of the `etag` in the
      # read-modify-write cycle to perform model updates in order to avoid race
      # conditions: An `etag` is returned in the response to `GetVersion`, and
      # systems are expected to put that etag in the request to `UpdateVersion` to
      # ensure that their change will be applied to the model as intended.
  &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
  &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
  &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
      # create the version. See the
      # [guide to model
      # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
      # information.
      #
      # When passing Version to
      # projects.models.versions.create
      # the model service uses the specified location as the source of the model.
      # Once deployed, the model version is hosted by the prediction service, so
      # this location is useful only as a historical record.
      # The total number of model files can&#x27;t exceed 1000.
  &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
      #
      # For more information, see the
      # [runtime version list](/ml-engine/docs/runtime-version-list) and
      # [how to manage runtime versions](/ml-engine/docs/versioning).
  &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a
        # network API call.
      &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
          # different programming environments, including REST APIs and RPC APIs. It is
          # used by [gRPC](https://github.com/grpc). Each `Status` message contains
          # three pieces of data: error code, error message, and error details.
          #
          # You can find out more about this error model and how to work with it in the
          # [API Design Guide](https://cloud.google.com/apis/design/errors).
        &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of
            # message types for APIs to use.
          {
            &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
          },
        ],
        &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
            # user-facing error message should be localized and sent in the
            # google.rpc.Status.details field, or localized by the client.
        &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
      },
      &quot;done&quot;: True or False, # If the value is `false`, it means the operation is still in progress.
          # If `true`, the operation is completed, and either `error` or `response` is
          # available.
      &quot;response&quot;: { # The normal response of the operation in case of success. If the original
          # method returns no data on success, such as `Delete`, the response is
          # `google.protobuf.Empty`. If the original method is standard
          # `Get`/`Create`/`Update`, the response should be the resource. For other
          # methods, the response should have the type `XxxResponse`, where `Xxx`
          # is the original method name. For example, if the original method name
          # is `TakeSnapshot()`, the inferred response type is
          # `TakeSnapshotResponse`.
        &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
      },
      &quot;metadata&quot;: { # Service-specific metadata associated with the operation. It typically
          # contains progress information and common metadata such as create time.
          # Some services might not provide such metadata. Any method that returns a
          # long-running operation should document the metadata type, if any.
        &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
      },
      &quot;name&quot;: &quot;A String&quot;, # The server-assigned name, which is only unique within the same service that
          # originally returns it. If you use the default HTTP mapping, the
          # `name` should be a resource name ending with `operations/{unique_id}`.
    }</pre>
</div>
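<p>To make the <code>create()</code> call concrete, the sketch below builds a minimal Version body of the form documented above, using only the required fields plus a machine type. All names and the Cloud Storage path are hypothetical placeholders; the actual request (shown in comments) assumes an <code>ml</code> client built with <code>googleapiclient.discovery</code>:</p>

```python
# Builds a minimal Version body for versions.create().
# Every value used below is a placeholder, not a real deployment.

def make_version_body(name, deployment_uri,
                      runtime_version="1.15",
                      python_version="3.7",
                      machine_type="mls1-c1-m2"):
    """Returns a dict matching the Version schema documented above."""
    return {
        "name": name,
        "deploymentUri": deployment_uri,
        "runtimeVersion": runtime_version,
        "pythonVersion": python_version,
        "machineType": machine_type,
    }


body = make_version_body("v1", "gs://my-bucket/model/")

# The actual request would look roughly like (not executed here):
#   op = ml.projects().models().versions().create(
#       parent="projects/my-project/models/my_model", body=body).execute()
# `op` is the long-running Operation shown in the Returns section above.

print(body["machineType"])
```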
498
499<div class="method">
Thomas Coffee2f245372017-03-27 10:39:26 -0700500 <code class="details" id="delete">delete(name, x__xgafv=None)</code>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400501 <pre>Deletes a model version.
502
503Each model can have multiple versions deployed and in use at any given
504time. Use this method to remove a single version.
505
506Note: You cannot delete the version that is set as the default version
507of the model unless it is the only remaining version.
508
509Args:
510 name: string, Required. The name of the version. You can get the names of all the
511versions of a model by calling
Dan O'Mearadd494642020-05-01 07:42:23 -0700512projects.models.versions.list. (required)
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400513 x__xgafv: string, V1 error format.
514 Allowed values
515 1 - v1 error format
516 2 - v2 error format
517
518Returns:
519 An object of the form:
520
521 { # This resource represents a long-running operation that is the result of a
522 # network API call.
Bu Sun Kim65020912020-05-20 12:08:20 -0700523 &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
524 # different programming environments, including REST APIs and RPC APIs. It is
525 # used by [gRPC](https://github.com/grpc). Each `Status` message contains
526 # three pieces of data: error code, error message, and error details.
527 #
528 # You can find out more about this error model and how to work with it in the
529 # [API Design Guide](https://cloud.google.com/apis/design/errors).
530 &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of
531 # message types for APIs to use.
532 {
533 &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
534 },
535 ],
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700536 &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
537 # user-facing error message should be localized and sent in the
538 # google.rpc.Status.details field, or localized by the client.
539 &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
Bu Sun Kim65020912020-05-20 12:08:20 -0700540 },
541 &quot;done&quot;: True or False, # If the value is `false`, it means the operation is still in progress.
542 # If `true`, the operation is completed, and either `error` or `response` is
543 # available.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700544 &quot;response&quot;: { # The normal response of the operation in case of success. If the original
545 # method returns no data on success, such as `Delete`, the response is
546 # `google.protobuf.Empty`. If the original method is standard
547 # `Get`/`Create`/`Update`, the response should be the resource. For other
548 # methods, the response should have the type `XxxResponse`, where `Xxx`
549 # is the original method name. For example, if the original method name
550 # is `TakeSnapshot()`, the inferred response type is
551 # `TakeSnapshotResponse`.
552 &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
553 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700554 &quot;metadata&quot;: { # Service-specific metadata associated with the operation. It typically
555 # contains progress information and common metadata such as create time.
556 # Some services might not provide such metadata. Any method that returns a
557 # long-running operation should document the metadata type, if any.
558 &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
559 },
560 &quot;name&quot;: &quot;A String&quot;, # The server-assigned name, which is only unique within the same service that
561 # originally returns it. If you use the default HTTP mapping, the
562 # `name` should be a resource name ending with `operations/{unique_id}`.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400563 }</pre>
564</div>
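The method above returns a long-running Operation resource. As a minimal sketch of how a client might interpret its `done`, `error`, and `response` fields (the dicts below are hand-written illustrations, not real API responses):

```python
# Minimal sketch of interpreting a google.longrunning.Operation resource,
# using only the fields documented above. Sample data is illustrative.

def operation_result(op):
    """Return the operation's response if it finished, or None if in progress.

    Raises RuntimeError if the operation finished with an error, mirroring
    the documented rule that exactly one of `error` or `response` is set
    once `done` is true.
    """
    if not op.get("done"):
        return None  # still in progress; poll again later
    if "error" in op:
        err = op["error"]
        raise RuntimeError(
            "operation failed with code {}: {}".format(
                err.get("code"), err.get("message")
            )
        )
    return op.get("response", {})

# A finished delete operation typically carries an empty response body.
finished = {"name": "projects/p/operations/123", "done": True, "response": {}}
pending = {"name": "projects/p/operations/123", "done": False}

assert operation_result(finished) == {}
assert operation_result(pending) is None
```

In the generated Python client, such an Operation is the dict returned by `execute()` on a create or delete call; the polling loop itself is left to the caller.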
565
566<div class="method">
Thomas Coffee2f245372017-03-27 10:39:26 -0700567 <code class="details" id="get">get(name, x__xgafv=None)</code>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400568 <pre>Gets information about a model version.
569
570Models can have multiple versions. You can call
Dan O'Mearadd494642020-05-01 07:42:23 -0700571projects.models.versions.list
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400572to get the same information that this method returns for all of the
573versions of a model.
574
575Args:
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700576 name: string, Required. The name of the version. (required)
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400577 x__xgafv: string, V1 error format.
578 Allowed values
579 1 - v1 error format
580 2 - v2 error format
581
582Returns:
583 An object of the form:
584
585 { # Represents a version of the model.
586 #
587 # Each version is a trained model deployed in the cloud, ready to handle
588 # prediction requests. A model can have multiple versions. You can get
589 # information about all of the versions of a given model by calling
Dan O'Mearadd494642020-05-01 07:42:23 -0700590 # projects.models.versions.list.
591 &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your model
592 # versions. Each label is a key-value pair, where both the key and the value
593 # are arbitrary strings that you supply.
594 # For more information, see the documentation on
595 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
596 &quot;a_key&quot;: &quot;A String&quot;,
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700597 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700598 &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
599 # applies to online prediction service. If this field is not specified, it
600 # defaults to `mls1-c1-m2`.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700601 #
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700602 # Online prediction supports the following machine types:
Bu Sun Kim65020912020-05-20 12:08:20 -0700603 #
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700604 # * `mls1-c1-m2`
605 # * `mls1-c4-m2`
606 # * `n1-standard-2`
607 # * `n1-standard-4`
608 # * `n1-standard-8`
609 # * `n1-standard-16`
610 # * `n1-standard-32`
611 # * `n1-highmem-2`
612 # * `n1-highmem-4`
613 # * `n1-highmem-8`
614 # * `n1-highmem-16`
615 # * `n1-highmem-32`
616 # * `n1-highcpu-2`
617 # * `n1-highcpu-4`
618 # * `n1-highcpu-8`
619 # * `n1-highcpu-16`
620 # * `n1-highcpu-32`
Bu Sun Kim65020912020-05-20 12:08:20 -0700621 #
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700622 # `mls1-c1-m2` is generally available. All other machine types are available
623 # in beta. Learn more about the [differences between machine
624 # types](/ml-engine/docs/machine-types-online-prediction).
Bu Sun Kim65020912020-05-20 12:08:20 -0700625 &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700626 # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
627 # or [scikit-learn pipelines with custom
628 # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
629 #
630 # For a custom prediction routine, one of these packages must contain your
631 # Predictor class (see
632 # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
633 # include any dependencies used by your Predictor or scikit-learn pipeline
634 # uses that are not already included in your selected [runtime
635 # version](/ml-engine/docs/tensorflow/runtime-version-list).
636 #
637 # If you specify this field, you must also set
638 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
Bu Sun Kim65020912020-05-20 12:08:20 -0700639 &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700640 ],
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700641 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
642 # Only specify this field if you have specified a Compute Engine (N1) machine
643 # type in the `machineType` field. Learn more about [using GPUs for online
644 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
645 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
646 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
647 # [accelerators for online
648 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
649 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
650 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
Bu Sun Kim65020912020-05-20 12:08:20 -0700651 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700652 &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
653 &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
654 #
655 # The version name must be unique within the model it is created in.
Bu Sun Kim65020912020-05-20 12:08:20 -0700656 &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
Dan O'Mearadd494642020-05-01 07:42:23 -0700657 # response to increases and decreases in traffic. Care should be
Bu Sun Kim65020912020-05-20 12:08:20 -0700658 # taken to ramp up traffic according to the model&#x27;s ability to scale
Dan O'Mearadd494642020-05-01 07:42:23 -0700659 # or you will start seeing increases in latency and 429 response codes.
660 #
661 # Note that you cannot use AutoScaling if your version uses
662 # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
663 # `manual_scaling`.
Bu Sun Kim65020912020-05-20 12:08:20 -0700664 &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
Dan O'Mearadd494642020-05-01 07:42:23 -0700665 # nodes are always up, starting from the time the model is deployed.
666 # Therefore, the cost of operating this model will be at least
667 # `rate` * `min_nodes` * number of hours since last billing cycle,
668 # where `rate` is the cost per node-hour as documented in the
669 # [pricing guide](/ml-engine/docs/pricing),
670 # even if no predictions are performed. There is additional cost for each
671 # prediction performed.
672 #
673 # Unlike manual scaling, if the load gets too heavy for the nodes
674 # that are up, the service will automatically add nodes to handle the
675 # increased load as well as scale back as traffic drops, always maintaining
676 # at least `min_nodes`. You will be charged for the time in which additional
677 # nodes are used.
678 #
679 # If `min_nodes` is not specified and AutoScaling is used with a [legacy
680 # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
681 # `min_nodes` defaults to 0, in which case, when traffic to a model stops
682 # (and after a cool-down period), nodes will be shut down and no charges will
683 # be incurred until traffic to the model resumes.
684 #
685 # If `min_nodes` is not specified and AutoScaling is used with a [Compute
686 # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
687 # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
688 # Compute Engine machine type.
689 #
690 # Note that you cannot use AutoScaling if your version uses
691 # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
692 # ManualScaling.
693 #
694 # You can set `min_nodes` when creating the model version, and you can also
695 # update `min_nodes` for an existing version:
696 # &lt;pre&gt;
697 # update_body.json:
698 # {
Bu Sun Kim65020912020-05-20 12:08:20 -0700699 # &#x27;autoScaling&#x27;: {
700 # &#x27;minNodes&#x27;: 5
Dan O'Mearadd494642020-05-01 07:42:23 -0700701 # }
702 # }
703 # &lt;/pre&gt;
704 # HTTP request:
Bu Sun Kim65020912020-05-20 12:08:20 -0700705 # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
Dan O'Mearadd494642020-05-01 07:42:23 -0700706 # PATCH
707 # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
708 # -d @./update_body.json
709 # &lt;/pre&gt;
710 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700711 &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
712 # Some explanation features require additional metadata to be loaded
713 # as part of the model payload.
714 # There are two feature attribution methods supported for TensorFlow models:
715 # integrated gradients and sampled Shapley.
716 # [Learn more about feature
717 # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
718 &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value, taking advantage
719 # of the model&#x27;s fully differentiable structure. Refer to this paper for
720 # more details: https://arxiv.org/abs/1703.01365
723 &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
724 # A good value to start with is 50; gradually increase it until the
725 # sum-to-diff property is met within the desired error range.
726 },
727 &quot;xraiAttribution&quot;: { # Attributes credit by computing XRAI, taking advantage
728 # of the model&#x27;s fully differentiable structure. Refer to this paper for
729 # more details: https://arxiv.org/abs/1906.02825
730 # Currently only implemented for models with natural image inputs.
734 &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
735 # A good value to start with is 50; gradually increase it until the
736 # sum-to-diff property is met within the desired error range.
737 },
738 &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
739 # contribute to the label being predicted. A sampling strategy is used to
740 # approximate the value rather than considering all subsets of features.
743 &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
744 # Shapley values.
745 },
Bu Sun Kim65020912020-05-20 12:08:20 -0700746 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700747 &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
748 #
749 # The following Python versions are available:
750 #
751 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
752 # later.
753 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
754 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
755 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
756 # earlier.
757 #
758 # Read more about the Python versions available for [each runtime
759 # version](/ml-engine/docs/runtime-version-list).
Bu Sun Kim65020912020-05-20 12:08:20 -0700760 &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
Dan O'Mearadd494642020-05-01 07:42:23 -0700761 # projects.models.versions.patch
762 # request. Specifying it in a
763 # projects.models.versions.create
764 # request has no effect.
765 #
766 # Configures the request-response pair logging on predictions from this
767 # Version.
768 # Online prediction requests to a model version and the responses to these
769 # requests are converted to raw strings and saved to the specified BigQuery
770 # table. Logging is constrained by [BigQuery quotas and
771 # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
772 # AI Platform Prediction does not log request-response pairs, but it continues
773 # to serve predictions.
774 #
775 # If you are using [continuous
776 # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
777 # specify this configuration manually. Setting up continuous evaluation
778 # automatically enables logging of request-response pairs.
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700779 &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
780 # For example, if you want to log 10% of requests, enter `0.1`. The sampling
781 # window is the lifetime of the model version. Defaults to 0.
Bu Sun Kim65020912020-05-20 12:08:20 -0700782 &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
783 # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
Dan O'Mearadd494642020-05-01 07:42:23 -0700784 #
Bu Sun Kim65020912020-05-20 12:08:20 -0700785 # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
Dan O'Mearadd494642020-05-01 07:42:23 -0700786 # for your project must have permission to write to it. The table must have
787 # the following [schema](/bigquery/docs/schemas):
788 #
789 # &lt;table&gt;
Bu Sun Kim65020912020-05-20 12:08:20 -0700790 # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
791 # &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
Dan O'Mearadd494642020-05-01 07:42:23 -0700792 # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
793 # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
794 # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
795 # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
796 # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
797 # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
798 # &lt;/table&gt;
Bu Sun Kim65020912020-05-20 12:08:20 -0700799 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700800 &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
801 # model. You should generally use `auto_scaling` with an appropriate
802 # `min_nodes` instead, but this option is available if you want more
803 # predictable billing. Beware that latency and error rates will increase
804 # if the traffic exceeds the capacity of the system to serve it based
805 # on the selected number of nodes.
806 &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
807 # starting from the time the model is deployed, so the cost of operating
808 # this model will be proportional to `nodes` * number of hours since
809 # last billing cycle plus the cost for each prediction performed.
810 },
811 &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
812 &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
Bu Sun Kim65020912020-05-20 12:08:20 -0700813 &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
814 # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
815 # `XGBOOST`. If you do not specify a framework, AI Platform
816 # will analyze files in the deployment_uri to determine a framework. If you
817 # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
818 # of the model to 1.4 or greater.
819 #
820 # Do **not** specify a framework if you&#x27;re deploying a [custom
821 # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
822 #
823 # If you specify a [Compute Engine (N1) machine
824 # type](/ml-engine/docs/machine-types-online-prediction) in the
825 # `machineType` field, you must specify `TENSORFLOW`
826 # for the framework.
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700827 &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
828 # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
829 # the Predictor interface described in this reference field. The module
830 # containing this class should be included in a package provided to the
831 # [`packageUris` field](#Version.FIELDS.package_uris).
832 #
833 # Specify this field if and only if you are deploying a [custom prediction
834 # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
835 # If you specify this field, you must set
836 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
837 # you must set `machineType` to a [legacy (MLS1)
838 # machine type](/ml-engine/docs/machine-types-online-prediction).
839 #
840 # The following code sample provides the Predictor interface:
841 #
842 # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
843 # class Predictor(object):
844 # &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
845 #
846 # def predict(self, instances, **kwargs):
847 # &quot;&quot;&quot;Performs custom prediction.
848 #
849 # Instances are the decoded values from the request. They have already
850 # been deserialized from JSON.
851 #
852 # Args:
853 # instances: A list of prediction input instances.
854 # **kwargs: A dictionary of keyword args provided as additional
855 # fields on the predict request body.
856 #
857 # Returns:
858 # A list of outputs containing the prediction results. This list must
859 # be JSON serializable.
860 # &quot;&quot;&quot;
861 # raise NotImplementedError()
862 #
863 # @classmethod
864 # def from_path(cls, model_dir):
865 # &quot;&quot;&quot;Creates an instance of Predictor using the given path.
866 #
867 # Loading of the predictor should be done in this method.
868 #
869 # Args:
870 # model_dir: The local directory that contains the exported model
871 # file along with any additional files uploaded when creating the
872 # version resource.
873 #
874 # Returns:
875 # An instance implementing this Predictor class.
876 # &quot;&quot;&quot;
877 # raise NotImplementedError()
878 # &lt;/pre&gt;
879 #
880 # Learn more about [the Predictor interface and custom prediction
881 # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
882 &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
883 # requests that do not specify a version.
884 #
885 # You can change the default version by calling
886 # projects.models.versions.setDefault.
Bu Sun Kim65020912020-05-20 12:08:20 -0700887 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
888 # prevent simultaneous updates of a model from overwriting each other.
889 # It is strongly suggested that systems make use of the `etag` in the
890 # read-modify-write cycle to perform model updates in order to avoid race
891 # conditions: An `etag` is returned in the response to `GetVersion`, and
892 # systems are expected to put that etag in the request to `UpdateVersion` to
893 # ensure that their change will be applied to the model as intended.
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700894 &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
895 &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
896 &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
897 # create the version. See the
898 # [guide to model
899 # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
900 # information.
901 #
902 # When passing Version to
903 # projects.models.versions.create
904 # the model service uses the specified location as the source of the model.
905 # Once deployed, the model version is hosted by the prediction service, so
906 # this location is useful only as a historical record.
907 # The total number of model files can&#x27;t exceed 1000.
908 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
909 #
910 # For more information, see the
911 # [runtime version list](/ml-engine/docs/runtime-version-list) and
912 # [how to manage runtime versions](/ml-engine/docs/versioning).
913 &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400914 }</pre>
915</div>
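The Version object returned by `get` is a plain Python dict with the fields documented above. A minimal sketch of inspecting one (the resource values here are made up, not from a real deployment; in the generated client the dict would come from `ml.projects().models().versions().get(name=...).execute()`):

```python
# Hand-written sample Version resource using fields documented above;
# all values are illustrative, not from a real deployment.
version = {
    "name": "projects/my-project/models/my_model/versions/v1",
    "deploymentUri": "gs://my-bucket/model-dir",
    "runtimeVersion": "1.15",
    "pythonVersion": "3.7",
    "framework": "TENSORFLOW",
    "machineType": "mls1-c1-m2",
    "state": "READY",
    "isDefault": True,
}

def short_version_name(version_resource):
    """Extract the trailing version id from the full resource name."""
    return version_resource["name"].rsplit("/", 1)[-1]

assert short_version_name(version) == "v1"
assert version["framework"] in ("TENSORFLOW", "SCIKIT_LEARN", "XGBOOST")
```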
916
917<div class="method">
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700918 <code class="details" id="list">list(parent, filter=None, pageToken=None, pageSize=None, x__xgafv=None)</code>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400919 <pre>Gets basic information about all the versions of a model.
920
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700921If you expect that a model has many versions, or if you need to handle
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400922only a limited number of results at a time, you can request that the list
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700923be retrieved in batches (called pages).
924
925If there are no versions that match the request parameters, the list
926request returns an empty response body: {}.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400927
928Args:
929 parent: string, Required. The name of the model for which to list versions. (required)
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700930 filter: string, Optional. Specifies the subset of versions to retrieve.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400931 pageToken: string, Optional. A page token to request the next page of results.
932
933You get the token from the `next_page_token` field of the response from
934the previous call.
Bu Sun Kim65020912020-05-20 12:08:20 -0700935 pageSize: integer, Optional. The number of versions to retrieve per &quot;page&quot; of results. If
Sai Cheemalapati4ba8c232017-06-06 18:46:08 -0400936there are more remaining results than this number, the response message
937will contain a valid value in the `next_page_token` field.
938
939The default value is 20, and the maximum page size is 100.
Bu Sun Kim65020912020-05-20 12:08:20 -0700940 x__xgafv: string, V1 error format.
941 Allowed values
942 1 - v1 error format
943 2 - v2 error format
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400944
945Returns:
946 An object of the form:
947
948 { # Response message for the ListVersions method.
Bu Sun Kim65020912020-05-20 12:08:20 -0700949 &quot;nextPageToken&quot;: &quot;A String&quot;, # Optional. Pass this token as the `page_token` field of the request for a
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400950 # subsequent call.
Bu Sun Kim65020912020-05-20 12:08:20 -0700951 &quot;versions&quot;: [ # The list of versions.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400952 { # Represents a version of the model.
953 #
954 # Each version is a trained model deployed in the cloud, ready to handle
955 # prediction requests. A model can have multiple versions. You can get
956 # information about all of the versions of a given model by calling
Dan O'Mearadd494642020-05-01 07:42:23 -0700957 # projects.models.versions.list.
958 &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your model
959 # versions. Each label is a key-value pair, where both the key and the value
960 # are arbitrary strings that you supply.
961 # For more information, see the documentation on
962 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
963 &quot;a_key&quot;: &quot;A String&quot;,
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700964 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700965 &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
966 # applies to online prediction service. If this field is not specified, it
967 # defaults to `mls1-c1-m2`.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700968 #
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700969 # Online prediction supports the following machine types:
Bu Sun Kim65020912020-05-20 12:08:20 -0700970 #
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700971 # * `mls1-c1-m2`
972 # * `mls1-c4-m2`
973 # * `n1-standard-2`
974 # * `n1-standard-4`
975 # * `n1-standard-8`
976 # * `n1-standard-16`
977 # * `n1-standard-32`
978 # * `n1-highmem-2`
979 # * `n1-highmem-4`
980 # * `n1-highmem-8`
981 # * `n1-highmem-16`
982 # * `n1-highmem-32`
983 # * `n1-highcpu-2`
984 # * `n1-highcpu-4`
985 # * `n1-highcpu-8`
986 # * `n1-highcpu-16`
987 # * `n1-highcpu-32`
Bu Sun Kim65020912020-05-20 12:08:20 -0700988 #
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700989 # `mls1-c1-m2` is generally available. All other machine types are available
990 # in beta. Learn more about the [differences between machine
991 # types](/ml-engine/docs/machine-types-online-prediction).
Bu Sun Kim65020912020-05-20 12:08:20 -0700992 &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700993 # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
994 # or [scikit-learn pipelines with custom
995 # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
996 #
997 # For a custom prediction routine, one of these packages must contain your
998 # Predictor class (see
999 # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
1000 # include any dependencies that your Predictor or scikit-learn pipeline
1001 # uses and that are not already included in your selected [runtime
1002 # version](/ml-engine/docs/tensorflow/runtime-version-list).
1003 #
1004 # If you specify this field, you must also set
1005 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
Bu Sun Kim65020912020-05-20 12:08:20 -07001006 &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001007 ],
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001008 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
1009 # Only specify this field if you have specified a Compute Engine (N1) machine
1010 # type in the `machineType` field. Learn more about [using GPUs for online
1011 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1012 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1013 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1014 # [accelerators for online
1015 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1016 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
1017 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
Bu Sun Kim65020912020-05-20 12:08:20 -07001018 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001019 &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
1020 &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
1021 #
1022 # The version name must be unique within the model it is created in.
Bu Sun Kim65020912020-05-20 12:08:20 -07001023 &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
Dan O'Mearadd494642020-05-01 07:42:23 -07001024 # response to increases and decreases in traffic. Care should be
Bu Sun Kim65020912020-05-20 12:08:20 -07001025 # taken to ramp up traffic according to the model&#x27;s ability to scale
Dan O'Mearadd494642020-05-01 07:42:23 -07001026 # or you will start seeing increases in latency and 429 response codes.
1027 #
1028 # Note that you cannot use AutoScaling if your version uses
1029 # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
1030 # `manual_scaling`.
Bu Sun Kim65020912020-05-20 12:08:20 -07001031 &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
Dan O'Mearadd494642020-05-01 07:42:23 -07001032 # nodes are always up, starting from the time the model is deployed.
1033 # Therefore, the cost of operating this model will be at least
1034 # `rate` * `min_nodes` * number of hours since last billing cycle,
1035 # where `rate` is the cost per node-hour as documented in the
1036 # [pricing guide](/ml-engine/docs/pricing),
1037 # even if no predictions are performed. There is additional cost for each
1038 # prediction performed.
1039 #
1040 # Unlike manual scaling, if the load gets too heavy for the nodes
1041 # that are up, the service will automatically add nodes to handle the
1042 # increased load as well as scale back as traffic drops, always maintaining
1043 # at least `min_nodes`. You will be charged for the time in which additional
1044 # nodes are used.
1045 #
1046 # If `min_nodes` is not specified and AutoScaling is used with a [legacy
1047 # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
1048 # `min_nodes` defaults to 0, in which case, when traffic to a model stops
1049 # (and after a cool-down period), nodes will be shut down and no charges will
1050 # be incurred until traffic to the model resumes.
1051 #
1052 # If `min_nodes` is not specified and AutoScaling is used with a [Compute
1053 # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
1054 # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
1055 # Compute Engine machine type.
1056 #
1057 # Note that you cannot use AutoScaling if your version uses
1058 # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
1059 # ManualScaling.
1060 #
1061 # You can set `min_nodes` when creating the model version, and you can also
1062 # update `min_nodes` for an existing version:
1063 # &lt;pre&gt;
1064 # update_body.json:
1065 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07001066 # &#x27;autoScaling&#x27;: {
1067 # &#x27;minNodes&#x27;: 5
Dan O'Mearadd494642020-05-01 07:42:23 -07001068 # }
1069 # }
1070 # &lt;/pre&gt;
1071 # HTTP request:
Bu Sun Kim65020912020-05-20 12:08:20 -07001072 # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
Dan O'Mearadd494642020-05-01 07:42:23 -07001073 # PATCH
1074 # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
1075 # -d @./update_body.json
1076 # &lt;/pre&gt;
1077 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001078 &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
1079 # Some explanation features require additional metadata to be loaded
1080 # as part of the model payload.
1081 # There are two feature attribution methods supported for TensorFlow models:
1082 # integrated gradients and sampled Shapley.
1083 # [Learn more about feature
1084 # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
1085          &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value taking advantage
1086              # of the model&#x27;s fully differentiable structure. Refer to this paper for
1087              # more details: https://arxiv.org/abs/1703.01365
1090 &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
1091 # A good value to start is 50 and gradually increase until the
1092 # sum to diff property is met within the desired error range.
1093 },
1094          &quot;xraiAttribution&quot;: { # Attributes credit by computing the XRAI taking advantage
1095              # of the model&#x27;s fully differentiable structure. Refer to this paper for
1096              # more details: https://arxiv.org/abs/1906.02825
1097              # Currently only implemented for models with natural image inputs.
1101 &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
1102 # A good value to start is 50 and gradually increase until the
1103 # sum to diff property is met within the desired error range.
1104 },
1105          &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
1106              # contribute to the label being predicted. A sampling strategy is used to
1107              # approximate the value rather than considering all subsets of features.
1110 &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
1111 # Shapley values.
1112 },
Bu Sun Kim65020912020-05-20 12:08:20 -07001113 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001114 &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
1115 #
1116 # The following Python versions are available:
1117 #
1118 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
1119 # later.
1120 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
1121 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
1122 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
1123 # earlier.
1124 #
1125 # Read more about the Python versions available for [each runtime
1126 # version](/ml-engine/docs/runtime-version-list).
Bu Sun Kim65020912020-05-20 12:08:20 -07001127 &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
Dan O'Mearadd494642020-05-01 07:42:23 -07001128 # projects.models.versions.patch
1129 # request. Specifying it in a
1130 # projects.models.versions.create
1131 # request has no effect.
1132 #
1133 # Configures the request-response pair logging on predictions from this
1134 # Version.
1135 # Online prediction requests to a model version and the responses to these
1136 # requests are converted to raw strings and saved to the specified BigQuery
1137 # table. Logging is constrained by [BigQuery quotas and
1138 # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
1139 # AI Platform Prediction does not log request-response pairs, but it continues
1140 # to serve predictions.
1141 #
1142 # If you are using [continuous
1143 # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
1144 # specify this configuration manually. Setting up continuous evaluation
1145 # automatically enables logging of request-response pairs.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001146 &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
1147 # For example, if you want to log 10% of requests, enter `0.1`. The sampling
1148 # window is the lifetime of the model version. Defaults to 0.
Bu Sun Kim65020912020-05-20 12:08:20 -07001149 &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
1150 # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
Dan O'Mearadd494642020-05-01 07:42:23 -07001151 #
Bu Sun Kim65020912020-05-20 12:08:20 -07001152 # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
Dan O'Mearadd494642020-05-01 07:42:23 -07001153 # for your project must have permission to write to it. The table must have
1154 # the following [schema](/bigquery/docs/schemas):
1155 #
1156 # &lt;table&gt;
Bu Sun Kim65020912020-05-20 12:08:20 -07001157 # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
1158 # &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
Dan O'Mearadd494642020-05-01 07:42:23 -07001159 # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1160 # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1161 # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1162 # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1163 # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
1164 # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
1165 # &lt;/table&gt;
Bu Sun Kim65020912020-05-20 12:08:20 -07001166 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001167 &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
1168 # model. You should generally use `auto_scaling` with an appropriate
1169 # `min_nodes` instead, but this option is available if you want more
1170 # predictable billing. Beware that latency and error rates will increase
1171          # if the traffic exceeds the capacity of the system to serve it based
1172 # on the selected number of nodes.
1173 &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
1174 # starting from the time the model is deployed, so the cost of operating
1175 # this model will be proportional to `nodes` * number of hours since
1176 # last billing cycle plus the cost for each prediction performed.
1177 },
1178 &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
1179 &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
Bu Sun Kim65020912020-05-20 12:08:20 -07001180 &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
1181 # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
1182 # `XGBOOST`. If you do not specify a framework, AI Platform
1183 # will analyze files in the deployment_uri to determine a framework. If you
1184 # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
1185 # of the model to 1.4 or greater.
1186 #
1187 # Do **not** specify a framework if you&#x27;re deploying a [custom
1188 # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
1189 #
1190 # If you specify a [Compute Engine (N1) machine
1191 # type](/ml-engine/docs/machine-types-online-prediction) in the
1192 # `machineType` field, you must specify `TENSORFLOW`
1193 # for the framework.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001194 &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
1195 # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
1196 # the Predictor interface described in this reference field. The module
1197 # containing this class should be included in a package provided to the
1198 # [`packageUris` field](#Version.FIELDS.package_uris).
1199 #
1200 # Specify this field if and only if you are deploying a [custom prediction
1201 # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
1202 # If you specify this field, you must set
1203 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
1204 # you must set `machineType` to a [legacy (MLS1)
1205 # machine type](/ml-engine/docs/machine-types-online-prediction).
1206 #
1207 # The following code sample provides the Predictor interface:
1208 #
1209 # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
1210 # class Predictor(object):
1211 # &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
1212 #
1213 # def predict(self, instances, **kwargs):
1214 # &quot;&quot;&quot;Performs custom prediction.
1215 #
1216 # Instances are the decoded values from the request. They have already
1217 # been deserialized from JSON.
1218 #
1219 # Args:
1220 # instances: A list of prediction input instances.
1221 # **kwargs: A dictionary of keyword args provided as additional
1222 # fields on the predict request body.
1223 #
1224 # Returns:
1225 # A list of outputs containing the prediction results. This list must
1226 # be JSON serializable.
1227 # &quot;&quot;&quot;
1228 # raise NotImplementedError()
1229 #
1230 # @classmethod
1231 # def from_path(cls, model_dir):
1232 # &quot;&quot;&quot;Creates an instance of Predictor using the given path.
1233 #
1234 # Loading of the predictor should be done in this method.
1235 #
1236 # Args:
1237 # model_dir: The local directory that contains the exported model
1238 # file along with any additional files uploaded when creating the
1239 # version resource.
1240 #
1241 # Returns:
1242 # An instance implementing this Predictor class.
1243 # &quot;&quot;&quot;
1244 # raise NotImplementedError()
1245 # &lt;/pre&gt;
1246 #
1247 # Learn more about [the Predictor interface and custom prediction
1248 # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
1249 &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
1250 # requests that do not specify a version.
1251 #
1252 # You can change the default version by calling
1253 # projects.methods.versions.setDefault.
Bu Sun Kim65020912020-05-20 12:08:20 -07001254 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
1255 # prevent simultaneous updates of a model from overwriting each other.
1256 # It is strongly suggested that systems make use of the `etag` in the
1257 # read-modify-write cycle to perform model updates in order to avoid race
1258 # conditions: An `etag` is returned in the response to `GetVersion`, and
1259 # systems are expected to put that etag in the request to `UpdateVersion` to
1260 # ensure that their change will be applied to the model as intended.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001261 &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
1262 &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
1263 &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
1264 # create the version. See the
1265 # [guide to model
1266 # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
1267 # information.
1268 #
1269 # When passing Version to
1270 # projects.models.versions.create
1271 # the model service uses the specified location as the source of the model.
1272 # Once deployed, the model version is hosted by the prediction service, so
1273 # this location is useful only as a historical record.
1274 # The total number of model files can&#x27;t exceed 1000.
1275 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
1276 #
1277 # For more information, see the
1278 # [runtime version list](/ml-engine/docs/runtime-version-list) and
1279 # [how to manage runtime versions](/ml-engine/docs/versioning).
1280 &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001281 },
1282 ],
1283 }</pre>
1284</div>
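<p>The Predictor interface quoted in the <code>predictionClass</code> docstring above can be implemented in a few lines. A minimal sketch — the <code>ZeroCenteredPredictor</code> class, its <code>params.json</code> artifact, and the <code>mean</code> parameter are hypothetical illustrations, not part of the API:</p>

```python
import json
import os


class Predictor(object):
    """Interface for constructing custom predictors, as shown in the docstring."""

    def predict(self, instances, **kwargs):
        raise NotImplementedError()

    @classmethod
    def from_path(cls, model_dir):
        raise NotImplementedError()


class ZeroCenteredPredictor(Predictor):
    """Hypothetical predictor that subtracts a stored mean from each instance."""

    def __init__(self, mean):
        self._mean = mean

    def predict(self, instances, **kwargs):
        # Instances arrive already deserialized from JSON; the return value
        # must be a JSON-serializable list.
        return [x - self._mean for x in instances]

    @classmethod
    def from_path(cls, model_dir):
        # Load artifacts uploaded alongside the model when the version was
        # created, e.g. a hypothetical params.json.
        with open(os.path.join(model_dir, "params.json")) as f:
            params = json.load(f)
        return cls(params["mean"])
```

<p>The class would be packaged via <code>packageUris</code> and named in <code>predictionClass</code> as <code>module_name.ZeroCenteredPredictor</code>.</p>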
1285
1286<div class="method">
1287 <code class="details" id="list_next">list_next(previous_request, previous_response)</code>
1288 <pre>Retrieves the next page of results.
1289
1290Args:
1291 previous_request: The request for the previous page. (required)
1292 previous_response: The response from the request for the previous page. (required)
1293
1294Returns:
Bu Sun Kim65020912020-05-20 12:08:20 -07001295 A request object that you can call &#x27;execute()&#x27; on to request the next
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001296 page. Returns None if there are no more items in the collection.
1297 </pre>
1298</div>
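<p>The <code>list</code>/<code>list_next</code> contract above can be sketched with a stub in place of the real client — <code>FakeVersionsApi</code> and its canned pages are hypothetical stand-ins for <code>ml.projects().models().versions()</code>, which requires credentials and a live service:</p>

```python
class FakeRequest(object):
    """Stands in for the request object returned by versions.list()."""

    def __init__(self, api, page):
        self._api, self._page = api, page

    def execute(self):
        return self._api.pages[self._page]


class FakeVersionsApi(object):
    """Serves canned pages the way versions.list / versions.list_next do."""

    def __init__(self, pages):
        self.pages = pages

    def list(self):
        return FakeRequest(self, 0)

    def list_next(self, previous_request, previous_response):
        # Per the docstring: returns None when there are no more items.
        if "nextPageToken" not in previous_response:
            return None
        return FakeRequest(self, previous_request._page + 1)


def all_versions(api):
    """The standard pagination loop: call execute(), then list_next()."""
    versions, request = [], api.list()
    while request is not None:
        response = request.execute()
        versions.extend(response.get("versions", []))
        request = api.list_next(request, response)
    return versions
```

<p>With the real client, only <code>all_versions</code> changes shape: pass <code>ml.projects().models().versions()</code> and a <code>parent</code> argument to <code>list()</code>.</p>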
1299
1300<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -07001301 <code class="details" id="patch">patch(name, body=None, updateMask=None, x__xgafv=None)</code>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001302 <pre>Updates the specified Version resource.
1303
Dan O'Mearadd494642020-05-01 07:42:23 -07001304Currently the only updatable fields are `description`,
1305`requestLoggingConfig`, `autoScaling.minNodes`, and `manualScaling.nodes`.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001306
1307Args:
1308  name: string, Required. The name of the model version to update. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -07001309 body: object, The request body.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001310 The object takes the form of:
1311
1312{ # Represents a version of the model.
1313 #
1314 # Each version is a trained model deployed in the cloud, ready to handle
1315 # prediction requests. A model can have multiple versions. You can get
1316 # information about all of the versions of a given model by calling
Dan O'Mearadd494642020-05-01 07:42:23 -07001317 # projects.models.versions.list.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001318 &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your model
1319 # versions. Each label is a key-value pair, where both the key and the value
1320 # are arbitrary strings that you supply.
1321 # For more information, see the documentation on
1322 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
1323 &quot;a_key&quot;: &quot;A String&quot;,
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001324 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001325 &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
1326 # applies to online prediction service. If this field is not specified, it
1327 # defaults to `mls1-c1-m2`.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001328 #
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001329 # Online prediction supports the following machine types:
Bu Sun Kim65020912020-05-20 12:08:20 -07001330 #
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001331 # * `mls1-c1-m2`
1332 # * `mls1-c4-m2`
1333 # * `n1-standard-2`
1334 # * `n1-standard-4`
1335 # * `n1-standard-8`
1336 # * `n1-standard-16`
1337 # * `n1-standard-32`
1338 # * `n1-highmem-2`
1339 # * `n1-highmem-4`
1340 # * `n1-highmem-8`
1341 # * `n1-highmem-16`
1342 # * `n1-highmem-32`
1343 # * `n1-highcpu-2`
1344 # * `n1-highcpu-4`
1345 # * `n1-highcpu-8`
1346 # * `n1-highcpu-16`
1347 # * `n1-highcpu-32`
Bu Sun Kim65020912020-05-20 12:08:20 -07001348 #
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001349 # `mls1-c1-m2` is generally available. All other machine types are available
1350 # in beta. Learn more about the [differences between machine
1351 # types](/ml-engine/docs/machine-types-online-prediction).
Bu Sun Kim65020912020-05-20 12:08:20 -07001352 &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001353 # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
1354 # or [scikit-learn pipelines with custom
1355 # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
1356 #
1357 # For a custom prediction routine, one of these packages must contain your
1358 # Predictor class (see
1359 # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
1360      # include any dependencies that your Predictor or scikit-learn pipeline
1361      # uses and that are not already included in your selected [runtime
1362 # version](/ml-engine/docs/tensorflow/runtime-version-list).
1363 #
1364 # If you specify this field, you must also set
1365 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
Bu Sun Kim65020912020-05-20 12:08:20 -07001366 &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001367 ],
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001368 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
1369 # Only specify this field if you have specified a Compute Engine (N1) machine
1370 # type in the `machineType` field. Learn more about [using GPUs for online
1371 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1372 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1373 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1374 # [accelerators for online
1375 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1376 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
1377 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
Bu Sun Kim65020912020-05-20 12:08:20 -07001378 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001379 &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
1380 &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
1381 #
1382 # The version name must be unique within the model it is created in.
Bu Sun Kim65020912020-05-20 12:08:20 -07001383 &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
Dan O'Mearadd494642020-05-01 07:42:23 -07001384 # response to increases and decreases in traffic. Care should be
Bu Sun Kim65020912020-05-20 12:08:20 -07001385 # taken to ramp up traffic according to the model&#x27;s ability to scale
Dan O'Mearadd494642020-05-01 07:42:23 -07001386 # or you will start seeing increases in latency and 429 response codes.
1387 #
1388 # Note that you cannot use AutoScaling if your version uses
1389      # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
1390 # `manual_scaling`.
Bu Sun Kim65020912020-05-20 12:08:20 -07001391 &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
Dan O'Mearadd494642020-05-01 07:42:23 -07001392 # nodes are always up, starting from the time the model is deployed.
1393 # Therefore, the cost of operating this model will be at least
1394 # `rate` * `min_nodes` * number of hours since last billing cycle,
1395 # where `rate` is the cost per node-hour as documented in the
1396 # [pricing guide](/ml-engine/docs/pricing),
1397 # even if no predictions are performed. There is additional cost for each
1398 # prediction performed.
1399 #
1400 # Unlike manual scaling, if the load gets too heavy for the nodes
1401 # that are up, the service will automatically add nodes to handle the
1402 # increased load as well as scale back as traffic drops, always maintaining
1403 # at least `min_nodes`. You will be charged for the time in which additional
1404 # nodes are used.
1405 #
1406 # If `min_nodes` is not specified and AutoScaling is used with a [legacy
1407 # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
1408 # `min_nodes` defaults to 0, in which case, when traffic to a model stops
1409 # (and after a cool-down period), nodes will be shut down and no charges will
1410 # be incurred until traffic to the model resumes.
1411 #
1412 # If `min_nodes` is not specified and AutoScaling is used with a [Compute
1413 # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
1414 # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
1415 # Compute Engine machine type.
1416 #
1417 # Note that you cannot use AutoScaling if your version uses
1418 # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
1419 # ManualScaling.
1420 #
1421 # You can set `min_nodes` when creating the model version, and you can also
1422 # update `min_nodes` for an existing version:
1423 # &lt;pre&gt;
1424 # update_body.json:
1425 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07001426 # &#x27;autoScaling&#x27;: {
1427 # &#x27;minNodes&#x27;: 5
Dan O'Mearadd494642020-05-01 07:42:23 -07001428 # }
1429 # }
1430 # &lt;/pre&gt;
1431 # HTTP request:
Bu Sun Kim65020912020-05-20 12:08:20 -07001432 # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
Dan O'Mearadd494642020-05-01 07:42:23 -07001433 # PATCH
1434 # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
1435 # -d @./update_body.json
1436 # &lt;/pre&gt;
1437 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001438 &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
1439 # Some explanation features require additional metadata to be loaded
1440 # as part of the model payload.
1441 # There are two feature attribution methods supported for TensorFlow models:
1442 # integrated gradients and sampled Shapley.
1443 # [Learn more about feature
1444 # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
1445        &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value taking advantage
1446            # of the model&#x27;s fully differentiable structure. Refer to this paper for
1447            # more details: https://arxiv.org/abs/1703.01365
1450 &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
1451 # A good value to start is 50 and gradually increase until the
1452 # sum to diff property is met within the desired error range.
1453 },
1454        &quot;xraiAttribution&quot;: { # Attributes credit by computing the XRAI taking advantage
1455            # of the model&#x27;s fully differentiable structure. Refer to this paper for
1456            # more details: https://arxiv.org/abs/1906.02825
1457            # Currently only implemented for models with natural image inputs.
1461 &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
1462 # A good value to start is 50 and gradually increase until the
1463 # sum to diff property is met within the desired error range.
1464 },
1465        &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
1466            # contribute to the label being predicted. A sampling strategy is used to
1467            # approximate the value rather than considering all subsets of features.
1470 &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
1471 # Shapley values.
1472 },
Bu Sun Kim65020912020-05-20 12:08:20 -07001473 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001474 &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
1475 #
1476 # The following Python versions are available:
1477 #
1478 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
1479 # later.
1480 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
1481 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
1482 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
1483 # earlier.
1484 #
1485 # Read more about the Python versions available for [each runtime
1486 # version](/ml-engine/docs/runtime-version-list).
Bu Sun Kim65020912020-05-20 12:08:20 -07001487 &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
Dan O'Mearadd494642020-05-01 07:42:23 -07001488 # projects.models.versions.patch
1489 # request. Specifying it in a
1490 # projects.models.versions.create
1491 # request has no effect.
1492 #
1493 # Configures the request-response pair logging on predictions from this
1494 # Version.
1495 # Online prediction requests to a model version and the responses to these
1496 # requests are converted to raw strings and saved to the specified BigQuery
1497 # table. Logging is constrained by [BigQuery quotas and
1498 # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
1499 # AI Platform Prediction does not log request-response pairs, but it continues
1500 # to serve predictions.
1501 #
1502 # If you are using [continuous
1503 # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
1504 # specify this configuration manually. Setting up continuous evaluation
1505 # automatically enables logging of request-response pairs.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001506 &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
1507 # For example, if you want to log 10% of requests, enter `0.1`. The sampling
1508 # window is the lifetime of the model version. Defaults to 0.
Bu Sun Kim65020912020-05-20 12:08:20 -07001509 &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
1510 # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
Dan O'Mearadd494642020-05-01 07:42:23 -07001511 #
Bu Sun Kim65020912020-05-20 12:08:20 -07001512 # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
Dan O'Mearadd494642020-05-01 07:42:23 -07001513 # for your project must have permission to write to it. The table must have
1514 # the following [schema](/bigquery/docs/schemas):
1515 #
1516 # &lt;table&gt;
Bu Sun Kim65020912020-05-20 12:08:20 -07001517 # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
1518 # &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
Dan O'Mearadd494642020-05-01 07:42:23 -07001519 # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1520 # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1521 # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1522 # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1523 # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
1524 # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
1525 # &lt;/table&gt;
Bu Sun Kim65020912020-05-20 12:08:20 -07001526 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001527 &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
1528 # model. You should generally use `auto_scaling` with an appropriate
1529 # `min_nodes` instead, but this option is available if you want more
1530 # predictable billing. Beware that latency and error rates will increase
1531        # if the traffic exceeds the capacity of the system to serve it based
1532 # on the selected number of nodes.
    &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
        # starting from the time the model is deployed, so the cost of operating
        # this model will be proportional to `nodes` * number of hours since
        # last billing cycle plus the cost for each prediction performed.
  },
  &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
  &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
  &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
      # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
      # `XGBOOST`. If you do not specify a framework, AI Platform
      # will analyze files in the deployment_uri to determine a framework. If you
      # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
      # of the model to 1.4 or greater.
      #
      # Do **not** specify a framework if you&#x27;re deploying a [custom
      # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
      #
      # If you specify a [Compute Engine (N1) machine
      # type](/ml-engine/docs/machine-types-online-prediction) in the
      # `machineType` field, you must specify `TENSORFLOW`
      # for the framework.
  &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
      # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
      # the Predictor interface described in this reference field. The module
      # containing this class should be included in a package provided to the
      # [`packageUris` field](#Version.FIELDS.package_uris).
      #
      # Specify this field if and only if you are deploying a [custom prediction
      # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
      # If you specify this field, you must set
      # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
      # you must set `machineType` to a [legacy (MLS1)
      # machine type](/ml-engine/docs/machine-types-online-prediction).
      #
      # The following code sample provides the Predictor interface:
      #
      # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
      # class Predictor(object):
      #   &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
      #
      #   def predict(self, instances, **kwargs):
      #     &quot;&quot;&quot;Performs custom prediction.
      #
      #     Instances are the decoded values from the request. They have already
      #     been deserialized from JSON.
      #
      #     Args:
      #       instances: A list of prediction input instances.
      #       **kwargs: A dictionary of keyword args provided as additional
      #         fields on the predict request body.
      #
      #     Returns:
      #       A list of outputs containing the prediction results. This list must
      #       be JSON serializable.
      #     &quot;&quot;&quot;
      #     raise NotImplementedError()
      #
      #   @classmethod
      #   def from_path(cls, model_dir):
      #     &quot;&quot;&quot;Creates an instance of Predictor using the given path.
      #
      #     Loading of the predictor should be done in this method.
      #
      #     Args:
      #       model_dir: The local directory that contains the exported model
      #         file along with any additional files uploaded when creating the
      #         version resource.
      #
      #     Returns:
      #       An instance implementing this Predictor class.
      #     &quot;&quot;&quot;
      #     raise NotImplementedError()
      # &lt;/pre&gt;
      #
      # Learn more about [the Predictor interface and custom prediction
      # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
  &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
      # requests that do not specify a version.
      #
      # You can change the default version by calling
      # projects.models.versions.setDefault.
  &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
      # prevent simultaneous updates of a model from overwriting each other.
      # It is strongly suggested that systems make use of the `etag` in the
      # read-modify-write cycle to perform model updates in order to avoid race
      # conditions: An `etag` is returned in the response to `GetVersion`, and
      # systems are expected to put that etag in the request to `UpdateVersion` to
      # ensure that their change will be applied to the model as intended.
  &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
  &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
  &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
      # create the version. See the
      # [guide to model
      # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
      # information.
      #
      # When passing Version to
      # projects.models.versions.create
      # the model service uses the specified location as the source of the model.
      # Once deployed, the model version is hosted by the prediction service, so
      # this location is useful only as a historical record.
      # The total number of model files can&#x27;t exceed 1000.
  &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
      #
      # For more information, see the
      # [runtime version list](/ml-engine/docs/runtime-version-list) and
      # [how to manage runtime versions](/ml-engine/docs/versioning).
  &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
}

  updateMask: string, Required. Specifies the path, relative to `Version`, of the field to
update. Must be present and non-empty.

For example, to change the description of a version to &quot;foo&quot;, the
`update_mask` parameter would be specified as `description`, and the
`PATCH` request body would specify the new value, as follows:

```
{
  &quot;description&quot;: &quot;foo&quot;
}
```

Currently the only supported update mask fields are `description`,
`requestLoggingConfig`, `autoScaling.minNodes`, and `manualScaling.nodes`.
However, you can only update `manualScaling.nodes` if the version uses a
[Compute Engine (N1)
machine type](/ml-engine/docs/machine-types-online-prediction).
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a
        # network API call.
      &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
          # different programming environments, including REST APIs and RPC APIs. It is
          # used by [gRPC](https://github.com/grpc). Each `Status` message contains
          # three pieces of data: error code, error message, and error details.
          #
          # You can find out more about this error model and how to work with it in the
          # [API Design Guide](https://cloud.google.com/apis/design/errors).
        &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of
            # message types for APIs to use.
          {
            &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
          },
        ],
        &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
            # user-facing error message should be localized and sent in the
            # google.rpc.Status.details field, or localized by the client.
        &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
      },
      &quot;done&quot;: True or False, # If the value is `false`, it means the operation is still in progress.
          # If `true`, the operation is completed, and either `error` or `response` is
          # available.
      &quot;response&quot;: { # The normal response of the operation in case of success. If the original
          # method returns no data on success, such as `Delete`, the response is
          # `google.protobuf.Empty`. If the original method is standard
          # `Get`/`Create`/`Update`, the response should be the resource. For other
          # methods, the response should have the type `XxxResponse`, where `Xxx`
          # is the original method name. For example, if the original method name
          # is `TakeSnapshot()`, the inferred response type is
          # `TakeSnapshotResponse`.
        &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
      },
      &quot;metadata&quot;: { # Service-specific metadata associated with the operation. It typically
          # contains progress information and common metadata such as create time.
          # Some services might not provide such metadata. Any method that returns a
          # long-running operation should document the metadata type, if any.
        &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
      },
      &quot;name&quot;: &quot;A String&quot;, # The server-assigned name, which is only unique within the same service that
          # originally returns it. If you use the default HTTP mapping, the
          # `name` should be a resource name ending with `operations/{unique_id}`.
    }</pre>
</div>
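A hypothetical sketch of preparing a `versions.patch` call from Python, matching the `update_mask` example above. The helper only assembles the request pieces; the network call with the google-api-python-client `discovery` module is left commented out because it requires credentials and the installed package, and the project/model names are placeholders.

```python
# Builds the (name, update_mask, body) triple for a versions.patch
# request that changes only a version's description.

def build_patch_request(project, model, version, description):
    """Returns (name, update_mask, body) for a versions.patch call."""
    name = "projects/{}/models/{}/versions/{}".format(project, model, version)
    body = {"description": description}
    # Only `description` changes, so the update mask names just that field.
    return name, "description", body

name, mask, body = build_patch_request("my-project", "my_model", "v1", "foo")
print(name)  # projects/my-project/models/my_model/versions/v1
print(body)  # {'description': 'foo'}

# With credentials configured, the call itself would look like:
# from googleapiclient import discovery
# ml = discovery.build("ml", "v1")
# op = ml.projects().models().versions().patch(
#     name=name, updateMask=mask, body=body).execute()
```

The returned object is a long-running Operation, as documented above, so callers typically poll it until `done` is true.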

<div class="method">
    <code class="details" id="setDefault">setDefault(name, body=None, x__xgafv=None)</code>
  <pre>Designates a version to be the default for the model.

The default version is used for prediction requests made against the model
that don&#x27;t specify a version.

The first version to be created for a model is automatically set as the
default. You must make any subsequent changes to the default version
setting manually using this method.

Args:
  name: string, Required. The name of the version to make the default for the model. You
can get the names of all the versions of a model by calling
projects.models.versions.list. (required)
  body: object, The request body.
    The object takes the form of:

{ # Request message for the SetDefaultVersion request.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Represents a version of the model.
        #
        # Each version is a trained model deployed in the cloud, ready to handle
        # prediction requests. A model can have multiple versions. You can get
        # information about all of the versions of a given model by calling
        # projects.models.versions.list.
      &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your model
          # versions. Each label is a key-value pair, where both the key and the value
          # are arbitrary strings that you supply.
          # For more information, see the documentation on
          # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
        &quot;a_key&quot;: &quot;A String&quot;,
      },
      &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
          # applies to the online prediction service. If this field is not specified, it
          # defaults to `mls1-c1-m2`.
          #
          # Online prediction supports the following machine types:
          #
          # * `mls1-c1-m2`
          # * `mls1-c4-m2`
          # * `n1-standard-2`
          # * `n1-standard-4`
          # * `n1-standard-8`
          # * `n1-standard-16`
          # * `n1-standard-32`
          # * `n1-highmem-2`
          # * `n1-highmem-4`
          # * `n1-highmem-8`
          # * `n1-highmem-16`
          # * `n1-highmem-32`
          # * `n1-highcpu-2`
          # * `n1-highcpu-4`
          # * `n1-highcpu-8`
          # * `n1-highcpu-16`
          # * `n1-highcpu-32`
          #
          # `mls1-c1-m2` is generally available. All other machine types are available
          # in beta. Learn more about the [differences between machine
          # types](/ml-engine/docs/machine-types-online-prediction).
      &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
          # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
          # or [scikit-learn pipelines with custom
          # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
          #
          # For a custom prediction routine, one of these packages must contain your
          # Predictor class (see
          # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
          # include any dependencies your Predictor or scikit-learn pipeline
          # uses that are not already included in your selected [runtime
          # version](/ml-engine/docs/tensorflow/runtime-version-list).
          #
          # If you specify this field, you must also set
          # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
        &quot;A String&quot;,
      ],
      &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
          # Only specify this field if you have specified a Compute Engine (N1) machine
          # type in the `machineType` field. Learn more about [using GPUs for online
          # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
          # Note that the AcceleratorConfig can be used in both Jobs and Versions.
          # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
          # [accelerators for online
          # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
        &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
        &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
      },
      &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
      &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
          #
          # The version name must be unique within the model it is created in.
      &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
          # response to increases and decreases in traffic. Care should be
          # taken to ramp up traffic according to the model&#x27;s ability to scale
          # or you will start seeing increases in latency and 429 response codes.
          #
          # Note that you cannot use AutoScaling if your version uses
          # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
          # `manual_scaling`.
        &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
            # nodes are always up, starting from the time the model is deployed.
            # Therefore, the cost of operating this model will be at least
            # `rate` * `min_nodes` * number of hours since last billing cycle,
            # where `rate` is the cost per node-hour as documented in the
            # [pricing guide](/ml-engine/docs/pricing),
            # even if no predictions are performed. There is additional cost for each
            # prediction performed.
            #
            # Unlike manual scaling, if the load gets too heavy for the nodes
            # that are up, the service will automatically add nodes to handle the
            # increased load as well as scale back as traffic drops, always maintaining
            # at least `min_nodes`. You will be charged for the time in which additional
            # nodes are used.
            #
            # If `min_nodes` is not specified and AutoScaling is used with a [legacy
            # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
            # `min_nodes` defaults to 0, in which case, when traffic to a model stops
            # (and after a cool-down period), nodes will be shut down and no charges will
            # be incurred until traffic to the model resumes.
            #
            # If `min_nodes` is not specified and AutoScaling is used with a [Compute
            # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
            # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
            # Compute Engine machine type.
            #
            # Note that you cannot use AutoScaling if your version uses
            # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
            # ManualScaling.
            #
            # You can set `min_nodes` when creating the model version, and you can also
            # update `min_nodes` for an existing version:
            # &lt;pre&gt;
            # update_body.json:
            # {
            #   &#x27;autoScaling&#x27;: {
            #     &#x27;minNodes&#x27;: 5
            #   }
            # }
            # &lt;/pre&gt;
            # HTTP request:
            # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
            # PATCH
            # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
            # -d @./update_body.json
            # &lt;/pre&gt;
      },
      &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
          # Some explanation features require additional metadata to be loaded
          # as part of the model payload.
          # There are two feature attribution methods supported for TensorFlow models:
          # integrated gradients and sampled Shapley.
          # [Learn more about feature
          # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
        &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value, taking advantage
            # of the model&#x27;s fully differentiable structure. Refer to this paper for
            # more details: https://arxiv.org/abs/1703.01365
          &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
              # A good value to start is 50 and gradually increase until the
              # sum to diff property is met within the desired error range.
        },
        &quot;xraiAttribution&quot;: { # Attributes credit by computing the XRAI, taking advantage
            # of the model&#x27;s fully differentiable structure. Refer to this paper for
            # more details: https://arxiv.org/abs/1906.02825
            # Currently only implemented for models with natural image inputs.
          &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
              # A good value to start is 50 and gradually increase until the
              # sum to diff property is met within the desired error range.
        },
        &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
            # contribute to the label being predicted. A sampling strategy is used to
            # approximate the value rather than considering all subsets of features.
          &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
              # Shapley values.
        },
      },
      &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
          #
          # The following Python versions are available:
          #
          # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
          #   later.
          # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
          #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
          # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
          #   earlier.
          #
          # Read more about the Python versions available for [each runtime
          # version](/ml-engine/docs/runtime-version-list).
      &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
          # projects.models.versions.patch
          # request. Specifying it in a
          # projects.models.versions.create
          # request has no effect.
          #
          # Configures the request-response pair logging on predictions from this
          # Version.
          # Online prediction requests to a model version and the responses to these
          # requests are converted to raw strings and saved to the specified BigQuery
          # table. Logging is constrained by [BigQuery quotas and
          # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
          # AI Platform Prediction does not log request-response pairs, but it continues
          # to serve predictions.
          #
          # If you are using [continuous
          # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
          # specify this configuration manually. Setting up continuous evaluation
          # automatically enables logging of request-response pairs.
        &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
            # For example, if you want to log 10% of requests, enter `0.1`. The sampling
            # window is the lifetime of the model version. Defaults to 0.
        &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
            # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
            #
            # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
            # for your project must have permission to write to it. The table must have
            # the following [schema](/bigquery/docs/schemas):
            #
            # &lt;table&gt;
            # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
            # &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
            # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
            # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
            # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
            # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
            # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
            # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
            # &lt;/table&gt;
      },
      &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
          # model. You should generally use `auto_scaling` with an appropriate
          # `min_nodes` instead, but this option is available if you want more
          # predictable billing. Beware that latency and error rates will increase
          # if the traffic exceeds the capacity of the system to serve it based
          # on the selected number of nodes.
        &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
            # starting from the time the model is deployed, so the cost of operating
            # this model will be proportional to `nodes` * number of hours since
            # last billing cycle plus the cost for each prediction performed.
      },
      &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
      &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
      &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
          # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
          # `XGBOOST`. If you do not specify a framework, AI Platform
          # will analyze files in the deployment_uri to determine a framework. If you
          # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
          # of the model to 1.4 or greater.
          #
          # Do **not** specify a framework if you&#x27;re deploying a [custom
          # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
          #
          # If you specify a [Compute Engine (N1) machine
          # type](/ml-engine/docs/machine-types-online-prediction) in the
          # `machineType` field, you must specify `TENSORFLOW`
          # for the framework.
      &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
          # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
          # the Predictor interface described in this reference field. The module
          # containing this class should be included in a package provided to the
          # [`packageUris` field](#Version.FIELDS.package_uris).
          #
          # Specify this field if and only if you are deploying a [custom prediction
          # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
          # If you specify this field, you must set
          # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
          # you must set `machineType` to a [legacy (MLS1)
          # machine type](/ml-engine/docs/machine-types-online-prediction).
          #
          # The following code sample provides the Predictor interface:
          #
          # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
          # class Predictor(object):
          #   &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
          #
          #   def predict(self, instances, **kwargs):
          #     &quot;&quot;&quot;Performs custom prediction.
          #
          #     Instances are the decoded values from the request. They have already
          #     been deserialized from JSON.
          #
          #     Args:
          #       instances: A list of prediction input instances.
          #       **kwargs: A dictionary of keyword args provided as additional
          #         fields on the predict request body.
          #
          #     Returns:
          #       A list of outputs containing the prediction results. This list must
          #       be JSON serializable.
          #     &quot;&quot;&quot;
          #     raise NotImplementedError()
          #
          #   @classmethod
          #   def from_path(cls, model_dir):
          #     &quot;&quot;&quot;Creates an instance of Predictor using the given path.
          #
          #     Loading of the predictor should be done in this method.
          #
          #     Args:
          #       model_dir: The local directory that contains the exported model
          #         file along with any additional files uploaded when creating the
          #         version resource.
          #
          #     Returns:
          #       An instance implementing this Predictor class.
          #     &quot;&quot;&quot;
          #     raise NotImplementedError()
          # &lt;/pre&gt;
          #
          # Learn more about [the Predictor interface and custom prediction
          # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
      &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
          # requests that do not specify a version.
          #
          # You can change the default version by calling
          # projects.models.versions.setDefault.
      &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
          # prevent simultaneous updates of a model from overwriting each other.
          # It is strongly suggested that systems make use of the `etag` in the
          # read-modify-write cycle to perform model updates in order to avoid race
          # conditions: An `etag` is returned in the response to `GetVersion`, and
          # systems are expected to put that etag in the request to `UpdateVersion` to
          # ensure that their change will be applied to the model as intended.
      &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
      &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
      &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
          # create the version. See the
          # [guide to model
          # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
          # information.
          #
          # When passing Version to
          # projects.models.versions.create
          # the model service uses the specified location as the source of the model.
          # Once deployed, the model version is hosted by the prediction service, so
          # this location is useful only as a historical record.
          # The total number of model files can&#x27;t exceed 1000.
      &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
          #
          # For more information, see the
          # [runtime version list](/ml-engine/docs/runtime-version-list) and
          # [how to manage runtime versions](/ml-engine/docs/versioning).
      &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
    }</pre>
</div>
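The `predictionClass` field above documents the Predictor interface only as abstract methods. Below is a minimal concrete sketch implementing that interface; the "model" here is a hypothetical scaling factor rather than a real artifact loaded from `model_dir`, purely for illustration.

```python
# A toy Predictor: multiplies each numeric instance by a factor that a
# real implementation would load from the exported model directory.

class ScalePredictor(object):
    """Multiplies each numeric instance by a factor fixed at load time."""

    def __init__(self, factor):
        self._factor = factor

    def predict(self, instances, **kwargs):
        # Instances arrive already deserialized from JSON; the return
        # value must be a JSON-serializable list of outputs.
        return [x * self._factor for x in instances]

    @classmethod
    def from_path(cls, model_dir):
        # A real predictor would deserialize model files found under
        # model_dir; a fixed factor stands in for that step here.
        return cls(factor=2)

predictor = ScalePredictor.from_path("/tmp/model")
print(predictor.predict([1, 2, 3]))  # [2, 4, 6]
```

The class would be packaged via `packageUris` and named in `predictionClass` (for example `my_package.ScalePredictor`, a hypothetical module path) when creating the version.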

</body></html>