<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="ml_v1.html">AI Platform Training &amp; Prediction API</a> . <a href="ml_v1.projects.html">projects</a> . <a href="ml_v1.projects.models.html">models</a> . <a href="ml_v1.projects.models.versions.html">versions</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#create">create(parent, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a new version of a model from a trained TensorFlow model.</p>
<p class="toc_element">
  <code><a href="#delete">delete(name, x__xgafv=None)</a></code></p>
<p class="firstline">Deletes a model version.</p>
<p class="toc_element">
  <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets information about a model version.</p>
<p class="toc_element">
  <code><a href="#list">list(parent, pageToken=None, pageSize=None, filter=None, x__xgafv=None)</a></code></p>
<p class="firstline">Gets basic information about all the versions of a model.</p>
<p class="toc_element">
  <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<p class="toc_element">
  <code><a href="#patch">patch(name, body=None, updateMask=None, x__xgafv=None)</a></code></p>
<p class="firstline">Updates the specified Version resource.</p>
<p class="toc_element">
  <code><a href="#setDefault">setDefault(name, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Designates a version to be the default for the model.</p>
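Together, `list` and `list_next` follow the client library's standard request/response pagination pattern: `list_next` builds the follow-up request from the previous request and response, returning `None` when no pages remain. The helper below is a minimal sketch of that loop; it assumes a `service` object with the method surface shown here (such as one built with `googleapiclient.discovery.build('ml', 'v1')`), and the `pageSize` value is an arbitrary illustration.

```python
def iter_versions(service, parent):
    """Yield every Version of a model, walking all pages via list/list_next.

    `service` is assumed to expose projects().models().versions() with
    list() and list_next() methods, as the discovery client does.
    """
    versions = service.projects().models().versions()
    request = versions.list(parent=parent, pageSize=100)
    while request is not None:
        response = request.execute()
        for version in response.get('versions', []):
            yield version
        # list_next returns None once the last page has been consumed.
        request = versions.list_next(request, response)
```

Because the helper only touches the documented method surface, it can be exercised against any object that mimics it, which also makes the pagination logic easy to test offline.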
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="create">create(parent, body=None, x__xgafv=None)</code>
  <pre>Creates a new version of a model from a trained TensorFlow model.

If the version created in the cloud by this call is the first deployed
version of the specified model, it will be made the default version of the
model. When you add a version to a model that already has one or more
versions, the default version does not automatically change. If you want a
new version to be the default, you must call
projects.models.versions.setDefault.

Args:
  parent: string, Required. The name of the model. (required)
  body: object, The request body.
    The object takes the form of:

{ # Represents a version of the model.
    #
    # Each version is a trained model deployed in the cloud, ready to handle
    # prediction requests. A model can have multiple versions. You can get
    # information about all of the versions of a given model by calling
    # projects.models.versions.list.
  &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
      # Only specify this field if you have specified a Compute Engine (N1) machine
      # type in the `machineType` field. Learn more about [using GPUs for online
      # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
      # Note that the AcceleratorConfig can be used in both Jobs and Versions.
      # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
      # [accelerators for online
      # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
    &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
    &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
  },
  &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
      # requests that do not specify a version.
      #
      # You can change the default version by calling
      # projects.models.versions.setDefault.
  &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
      # model. You should generally use `auto_scaling` with an appropriate
      # `min_nodes` instead, but this option is available if you want more
      # predictable billing. Beware that latency and error rates will increase
      # if the traffic exceeds the capability of the system to serve it based
      # on the selected number of nodes.
    &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
        # starting from the time the model is deployed, so the cost of operating
        # this model will be proportional to `nodes` * number of hours since
        # last billing cycle plus the cost for each prediction performed.
  },
  &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
  &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
      #
      # The version name must be unique within the model it is created in.
  &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
  &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
      #
      # The following Python versions are available:
      #
      # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
      #   later.
      # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
      #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
      # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
      #   earlier.
      #
      # Read more about the Python versions available for [each runtime
      # version](/ml-engine/docs/runtime-version-list).
  &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
  &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
      # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
      # the Predictor interface described in this reference field. The module
      # containing this class should be included in a package provided to the
      # [`packageUris` field](#Version.FIELDS.package_uris).
      #
      # Specify this field if and only if you are deploying a [custom prediction
      # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
      # If you specify this field, you must set
      # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
      # you must set `machineType` to a [legacy (MLS1)
      # machine type](/ml-engine/docs/machine-types-online-prediction).
      #
      # The following code sample provides the Predictor interface:
      #
      # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
      # class Predictor(object):
      #   &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
      #
      #   def predict(self, instances, **kwargs):
      #     &quot;&quot;&quot;Performs custom prediction.
      #
      #     Instances are the decoded values from the request. They have already
      #     been deserialized from JSON.
      #
      #     Args:
      #       instances: A list of prediction input instances.
      #       **kwargs: A dictionary of keyword args provided as additional
      #         fields on the predict request body.
      #
      #     Returns:
      #       A list of outputs containing the prediction results. This list must
      #       be JSON serializable.
      #     &quot;&quot;&quot;
      #     raise NotImplementedError()
      #
      #   @classmethod
      #   def from_path(cls, model_dir):
      #     &quot;&quot;&quot;Creates an instance of Predictor using the given path.
      #
      #     Loading of the predictor should be done in this method.
      #
      #     Args:
      #       model_dir: The local directory that contains the exported model
      #         file along with any additional files uploaded when creating the
      #         version resource.
      #
      #     Returns:
      #       An instance implementing this Predictor class.
      #     &quot;&quot;&quot;
      #     raise NotImplementedError()
      # &lt;/pre&gt;
      #
      # Learn more about [the Predictor interface and custom prediction
      # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
  &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
      # create the version. See the
      # [guide to model
      # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
      # information.
      #
      # When passing Version to
      # projects.models.versions.create
      # the model service uses the specified location as the source of the model.
      # Once deployed, the model version is hosted by the prediction service, so
      # this location is useful only as a historical record.
      # The total number of model files can&#x27;t exceed 1000.
  &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
      # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
      # or [scikit-learn pipelines with custom
      # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
      #
      # For a custom prediction routine, one of these packages must contain your
      # Predictor class (see
      # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
      # include any dependencies that your Predictor or scikit-learn pipeline
      # uses and that are not already included in your selected [runtime
      # version](/ml-engine/docs/tensorflow/runtime-version-list).
      #
      # If you specify this field, you must also set
      # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
    &quot;A String&quot;,
  ],
  &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
      # Some explanation features require additional metadata to be loaded
      # as part of the model payload.
      # There are two feature attribution methods supported for TensorFlow models:
      # integrated gradients and sampled Shapley.
      # [Learn more about feature
      # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
    &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value, taking advantage
        # of the model&#x27;s fully differentiable structure. Refer to this paper for
        # more details: https://arxiv.org/abs/1703.01365
        # (also published at http://proceedings.mlr.press/v70/sundararajan17a.html)
      &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
          # A good value to start is 50; gradually increase it until the
          # sum-to-diff property is met within the desired error range.
    },
    &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
        # contribute to the label being predicted. A sampling strategy is used to
        # approximate the value rather than considering all subsets of features.
      &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
          # Shapley values.
    },
    &quot;xraiAttribution&quot;: { # Attributes credit by computing XRAI, taking advantage
        # of the model&#x27;s fully differentiable structure. Refer to this paper for
        # more details: https://arxiv.org/abs/1906.02825
        # Currently only implemented for models with natural image inputs.
      &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
          # A good value to start is 50; gradually increase it until the
          # sum-to-diff property is met within the desired error range.
    },
  },
  &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
      # response to increases and decreases in traffic. Care should be
      # taken to ramp up traffic according to the model&#x27;s ability to scale
      # or you will start seeing increases in latency and 429 response codes.
      #
      # Note that you cannot use AutoScaling if your version uses
      # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
      # `manual_scaling`.
    &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
        # nodes are always up, starting from the time the model is deployed.
        # Therefore, the cost of operating this model will be at least
        # `rate` * `min_nodes` * number of hours since last billing cycle,
        # where `rate` is the cost per node-hour as documented in the
        # [pricing guide](/ml-engine/docs/pricing),
        # even if no predictions are performed. There is additional cost for each
        # prediction performed.
        #
        # Unlike manual scaling, if the load gets too heavy for the nodes
        # that are up, the service will automatically add nodes to handle the
        # increased load as well as scale back as traffic drops, always maintaining
        # at least `min_nodes`. You will be charged for the time in which additional
        # nodes are used.
        #
        # If `min_nodes` is not specified and AutoScaling is used with a [legacy
        # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
        # `min_nodes` defaults to 0, in which case, when traffic to a model stops
        # (and after a cool-down period), nodes will be shut down and no charges will
        # be incurred until traffic to the model resumes.
        #
        # If `min_nodes` is not specified and AutoScaling is used with a [Compute
        # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
        # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
        # Compute Engine machine type.
        #
        # Note that you cannot use AutoScaling if your version uses
        # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
        # ManualScaling.
        #
        # You can set `min_nodes` when creating the model version, and you can also
        # update `min_nodes` for an existing version:
        # &lt;pre&gt;
        # update_body.json:
        # {
        #   &#x27;autoScaling&#x27;: {
        #     &#x27;minNodes&#x27;: 5
        #   }
        # }
        # &lt;/pre&gt;
        # HTTP request:
        # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
        # PATCH
        # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
        # -d @./update_body.json
        # &lt;/pre&gt;
  },
  &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
  &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your model
      # versions. Each label is a key-value pair, where both the key and the value
      # are arbitrary strings that you supply.
      # For more information, see the documentation on
      # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
    &quot;a_key&quot;: &quot;A String&quot;,
  },
  &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
      # projects.models.versions.patch
      # request. Specifying it in a
      # projects.models.versions.create
      # request has no effect.
      #
      # Configures the request-response pair logging on predictions from this
      # Version.
      # Online prediction requests to a model version and the responses to these
      # requests are converted to raw strings and saved to the specified BigQuery
      # table. Logging is constrained by [BigQuery quotas and
      # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
      # AI Platform Prediction does not log request-response pairs, but it continues
      # to serve predictions.
      #
      # If you are using [continuous
      # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
      # specify this configuration manually. Setting up continuous evaluation
      # automatically enables logging of request-response pairs.
    &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
        # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
        #
        # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
        # for your project must have permission to write to it. The table must have
        # the following [schema](/bigquery/docs/schemas):
        #
        # &lt;table&gt;
        # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
        # &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
        # &lt;/table&gt;
    &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
        # For example, if you want to log 10% of requests, enter `0.1`. The sampling
        # window is the lifetime of the model version. Defaults to 0.
  },
  &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
  &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
      # applies to the online prediction service. If this field is not specified, it
      # defaults to `mls1-c1-m2`.
      #
      # Online prediction supports the following machine types:
      #
      # * `mls1-c1-m2`
      # * `mls1-c4-m2`
      # * `n1-standard-2`
      # * `n1-standard-4`
      # * `n1-standard-8`
      # * `n1-standard-16`
      # * `n1-standard-32`
      # * `n1-highmem-2`
      # * `n1-highmem-4`
      # * `n1-highmem-8`
      # * `n1-highmem-16`
      # * `n1-highmem-32`
      # * `n1-highcpu-2`
      # * `n1-highcpu-4`
      # * `n1-highcpu-8`
      # * `n1-highcpu-16`
      # * `n1-highcpu-32`
      #
      # `mls1-c1-m2` is generally available. All other machine types are available
      # in beta. Learn more about the [differences between machine
      # types](/ml-engine/docs/machine-types-online-prediction).
  &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
      #
      # For more information, see the
      # [runtime version list](/ml-engine/docs/runtime-version-list) and
      # [how to manage runtime versions](/ml-engine/docs/versioning).
  &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
  &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
      # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
      # and `XGBOOST`. If you do not specify a framework, AI Platform
      # will analyze files in the deployment_uri to determine a framework. If you
      # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
      # of the model to 1.4 or greater.
      #
      # Do **not** specify a framework if you&#x27;re deploying a [custom
      # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
      #
      # If you specify a [Compute Engine (N1) machine
      # type](/ml-engine/docs/machine-types-online-prediction) in the
      # `machineType` field, you must specify `TENSORFLOW`
      # for the framework.
  &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
      # prevent simultaneous updates of a model from overwriting each other.
      # It is strongly suggested that systems make use of the `etag` in the
      # read-modify-write cycle to perform model updates in order to avoid race
      # conditions: an `etag` is returned in the response to `GetVersion`, and
      # systems are expected to put that etag in the request to `UpdateVersion` to
      # ensure that their change will be applied to the model as intended.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a
        # network API call.
      &quot;name&quot;: &quot;A String&quot;, # The server-assigned name, which is only unique within the same service that
          # originally returns it. If you use the default HTTP mapping, the
          # `name` should be a resource name ending with `operations/{unique_id}`.
      &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
          # different programming environments, including REST APIs and RPC APIs. It is
          # used by [gRPC](https://github.com/grpc). Each `Status` message contains
          # three pieces of data: error code, error message, and error details.
          #
          # You can find out more about this error model and how to work with it in the
          # [API Design Guide](https://cloud.google.com/apis/design/errors).
        &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
        &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
            # user-facing error message should be localized and sent in the
            # google.rpc.Status.details field, or localized by the client.
        &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of
            # message types for APIs to use.
          {
            &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
          },
        ],
      },
      &quot;metadata&quot;: { # Service-specific metadata associated with the operation. It typically
          # contains progress information and common metadata such as create time.
          # Some services might not provide such metadata. Any method that returns a
          # long-running operation should document the metadata type, if any.
        &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
      },
      &quot;done&quot;: True or False, # If the value is `false`, it means the operation is still in progress.
          # If `true`, the operation is completed, and either `error` or `response` is
          # available.
      &quot;response&quot;: { # The normal response of the operation in case of success. If the original
          # method returns no data on success, such as `Delete`, the response is
          # `google.protobuf.Empty`. If the original method is standard
          # `Get`/`Create`/`Update`, the response should be the resource. For other
          # methods, the response should have the type `XxxResponse`, where `Xxx`
          # is the original method name. For example, if the original method name
          # is `TakeSnapshot()`, the inferred response type is
          # `TakeSnapshotResponse`.
        &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
      },
    }</pre>
</div>
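As a worked example of the `body` argument documented above, the helper below assembles a minimal Version dict using only fields from this reference; the runtime and machine-type defaults are illustrative choices, and the project, model, and bucket names in the usage comment are hypothetical.

```python
def make_version_body(name, deployment_uri, runtime_version='1.15',
                      python_version='3.7', machine_type='mls1-c1-m2'):
    """Build a minimal body for projects.models.versions.create.

    Only a handful of the documented fields are set here; others
    (labels, autoScaling, framework, ...) can be added the same way.
    """
    return {
        'name': name,
        'deploymentUri': deployment_uri,
        'runtimeVersion': runtime_version,
        'pythonVersion': python_version,
        'machineType': machine_type,
    }

# With a discovery client the body would be passed as (hypothetical names):
#   ml.projects().models().versions().create(
#       parent='projects/my-project/models/my_model',
#       body=make_version_body('v1', 'gs://my-bucket/model/')).execute()
```

Keeping body construction in one place makes it easy to validate the dict locally before issuing the (long-running) create call.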
498
499<div class="method">
Thomas Coffee2f245372017-03-27 10:39:26 -0700500 <code class="details" id="delete">delete(name, x__xgafv=None)</code>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400501 <pre>Deletes a model version.
502
503Each model can have multiple versions deployed and in use at any given
504time. Use this method to remove a single version.
505
506Note: You cannot delete the version that is set as the default version
507of the model unless it is the only remaining version.
508
509Args:
510 name: string, Required. The name of the version. You can get the names of all the
511versions of a model by calling
Dan O'Mearadd494642020-05-01 07:42:23 -0700512projects.models.versions.list. (required)
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400513 x__xgafv: string, V1 error format.
514 Allowed values
515 1 - v1 error format
516 2 - v2 error format
517
518Returns:
519 An object of the form:
520
521 { # This resource represents a long-running operation that is the result of a
522 # network API call.
    &quot;name&quot;: &quot;A String&quot;, # The server-assigned name, which is only unique within the same service that
        # originally returns it. If you use the default HTTP mapping, the
        # `name` should be a resource name ending with `operations/{unique_id}`.
    &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
        # different programming environments, including REST APIs and RPC APIs. It is
        # used by [gRPC](https://github.com/grpc). Each `Status` message contains
        # three pieces of data: error code, error message, and error details.
        #
        # You can find out more about this error model and how to work with it in the
        # [API Design Guide](https://cloud.google.com/apis/design/errors).
      &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
      &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
          # user-facing error message should be localized and sent in the
          # google.rpc.Status.details field, or localized by the client.
      &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of
          # message types for APIs to use.
        {
          &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
        },
      ],
    },
    &quot;metadata&quot;: { # Service-specific metadata associated with the operation. It typically
        # contains progress information and common metadata such as create time.
        # Some services might not provide such metadata. Any method that returns a
        # long-running operation should document the metadata type, if any.
      &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
    },
    &quot;done&quot;: True or False, # If the value is `false`, it means the operation is still in progress.
        # If `true`, the operation is completed, and either `error` or `response` is
        # available.
    &quot;response&quot;: { # The normal response of the operation in case of success. If the original
        # method returns no data on success, such as `Delete`, the response is
        # `google.protobuf.Empty`. If the original method is standard
        # `Get`/`Create`/`Update`, the response should be the resource. For other
        # methods, the response should have the type `XxxResponse`, where `Xxx`
        # is the original method name. For example, if the original method name
        # is `TakeSnapshot()`, the inferred response type is
        # `TakeSnapshotResponse`.
      &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
    },
  }</pre>
</div>

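<p>Deleting a version returns the long-running Operation documented above. The sketch below polls such an Operation until it completes; it is illustrative only. `get_operation` is a hypothetical callable that you would wire to `ml.projects().operations().get(name=...).execute()` on a googleapiclient service object, and the field names (`done`, `error`, `response`) come from the Operation message shown above.</p>

```python
import time

def wait_for_operation(get_operation, poll_seconds=0):
    # Polls a long-running Operation dict until `done` is True.
    # Raises if the Operation finished with its `error` field set.
    while True:
        op = get_operation()
        if op.get('done'):
            if 'error' in op:
                raise RuntimeError(op['error'].get('message', 'operation failed'))
            return op
        time.sleep(poll_seconds)

# Simulated sequence of Operation responses, shaped like the message above.
_responses = iter([
    {'done': False},
    {'done': True, 'response': {'a_key': ''}},
])
final = wait_for_operation(lambda: next(_responses))
```

<p>In real use you would pass a nonzero `poll_seconds` so the loop does not hammer the API between polls.</p>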
<div class="method">
    <code class="details" id="get">get(name, x__xgafv=None)</code>
  <pre>Gets information about a model version.

Models can have multiple versions. You can call
projects.models.versions.list
to get the same information that this method returns for all of the
versions of a model.

Args:
  name: string, Required. The name of the version. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Represents a version of the model.
      #
      # Each version is a trained model deployed in the cloud, ready to handle
      # prediction requests. A model can have multiple versions. You can get
      # information about all of the versions of a given model by calling
      # projects.models.versions.list.
    &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
        # Only specify this field if you have specified a Compute Engine (N1) machine
        # type in the `machineType` field. Learn more about [using GPUs for online
        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
        # Note that the AcceleratorConfig can be used in both Jobs and Versions.
        # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
        # [accelerators for online
        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
      &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
      &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
    },
    &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
        # requests that do not specify a version.
        #
        # You can change the default version by calling
        # projects.models.versions.setDefault.
    &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
        # model. You should generally use `auto_scaling` with an appropriate
        # `min_nodes` instead, but this option is available if you want more
        # predictable billing. Beware that latency and error rates will increase
        # if the traffic exceeds the capability of the system to serve it based
        # on the selected number of nodes.
      &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
          # starting from the time the model is deployed, so the cost of operating
          # this model will be proportional to `nodes` * number of hours since
          # last billing cycle plus the cost for each prediction performed.
    },
    &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
    &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
        #
        # The version name must be unique within the model it is created in.
    &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
    &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
        #
        # The following Python versions are available:
        #
        # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
        #   later.
        # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
        #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
        # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
        #   earlier.
        #
        # Read more about the Python versions available for [each runtime
        # version](/ml-engine/docs/runtime-version-list).
    &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
    &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
        # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
        # the Predictor interface described in this reference field. The module
        # containing this class should be included in a package provided to the
        # [`packageUris` field](#Version.FIELDS.package_uris).
        #
        # Specify this field if and only if you are deploying a [custom prediction
        # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
        # If you specify this field, you must set
        # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
        # you must set `machineType` to a [legacy (MLS1)
        # machine type](/ml-engine/docs/machine-types-online-prediction).
        #
        # The following code sample provides the Predictor interface:
        #
        # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
        # class Predictor(object):
        #     &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
        #
        #     def predict(self, instances, **kwargs):
        #         &quot;&quot;&quot;Performs custom prediction.
        #
        #         Instances are the decoded values from the request. They have already
        #         been deserialized from JSON.
        #
        #         Args:
        #             instances: A list of prediction input instances.
        #             **kwargs: A dictionary of keyword args provided as additional
        #                 fields on the predict request body.
        #
        #         Returns:
        #             A list of outputs containing the prediction results. This list must
        #             be JSON serializable.
        #         &quot;&quot;&quot;
        #         raise NotImplementedError()
        #
        #     @classmethod
        #     def from_path(cls, model_dir):
        #         &quot;&quot;&quot;Creates an instance of Predictor using the given path.
        #
        #         Loading of the predictor should be done in this method.
        #
        #         Args:
        #             model_dir: The local directory that contains the exported model
        #                 file along with any additional files uploaded when creating the
        #                 version resource.
        #
        #         Returns:
        #             An instance implementing this Predictor class.
        #         &quot;&quot;&quot;
        #         raise NotImplementedError()
        # &lt;/pre&gt;
        #
        # Learn more about [the Predictor interface and custom prediction
        # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
    &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
        # create the version. See the
        # [guide to model
        # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
        # information.
        #
        # When passing Version to
        # projects.models.versions.create
        # the model service uses the specified location as the source of the model.
        # Once deployed, the model version is hosted by the prediction service, so
        # this location is useful only as a historical record.
        # The total number of model files can&#x27;t exceed 1000.
    &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
        # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
        # or [scikit-learn pipelines with custom
        # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
        #
        # For a custom prediction routine, one of these packages must contain your
        # Predictor class (see
        # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
        # include any dependencies that your Predictor or scikit-learn pipeline
        # uses that are not already included in your selected [runtime
        # version](/ml-engine/docs/tensorflow/runtime-version-list).
        #
        # If you specify this field, you must also set
        # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
      &quot;A String&quot;,
    ],
    &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
        # Some explanation features require additional metadata to be loaded
        # as part of the model payload.
        # There are two feature attribution methods supported for TensorFlow models:
        # integrated gradients and sampled Shapley.
        # [Learn more about feature
        # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
      &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value, taking advantage
          # of the model&#x27;s fully differentiable structure. Refer to this paper for
          # more details: https://arxiv.org/abs/1703.01365
        &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
            # A good value to start is 50 and gradually increase until the
            # sum to diff property is met within the desired error range.
      },
      &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
          # contribute to the label being predicted. A sampling strategy is used to
          # approximate the value rather than considering all subsets of features.
        &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
            # Shapley values.
      },
      &quot;xraiAttribution&quot;: { # Attributes credit by computing the XRAI, taking advantage
          # of the model&#x27;s fully differentiable structure. Refer to this paper for
          # more details: https://arxiv.org/abs/1906.02825
          # Currently only implemented for models with natural image inputs.
        &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
            # A good value to start is 50 and gradually increase until the
            # sum to diff property is met within the desired error range.
      },
    },
    &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
        # response to increases and decreases in traffic. Care should be
        # taken to ramp up traffic according to the model&#x27;s ability to scale
        # or you will start seeing increases in latency and 429 response codes.
        #
        # Note that you cannot use AutoScaling if your version uses
        # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
        # `manual_scaling`.
      &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
          # nodes are always up, starting from the time the model is deployed.
          # Therefore, the cost of operating this model will be at least
          # `rate` * `min_nodes` * number of hours since last billing cycle,
          # where `rate` is the cost per node-hour as documented in the
          # [pricing guide](/ml-engine/docs/pricing),
          # even if no predictions are performed. There is additional cost for each
          # prediction performed.
          #
          # Unlike manual scaling, if the load gets too heavy for the nodes
          # that are up, the service will automatically add nodes to handle the
          # increased load as well as scale back as traffic drops, always maintaining
          # at least `min_nodes`. You will be charged for the time in which additional
          # nodes are used.
          #
          # If `min_nodes` is not specified and AutoScaling is used with a [legacy
          # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
          # `min_nodes` defaults to 0, in which case, when traffic to a model stops
          # (and after a cool-down period), nodes will be shut down and no charges will
          # be incurred until traffic to the model resumes.
          #
          # If `min_nodes` is not specified and AutoScaling is used with a [Compute
          # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
          # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
          # Compute Engine machine type.
          #
          # Note that you cannot use AutoScaling if your version uses
          # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
          # ManualScaling.
          #
          # You can set `min_nodes` when creating the model version, and you can also
          # update `min_nodes` for an existing version:
          # &lt;pre&gt;
          # update_body.json:
          # {
          #   &#x27;autoScaling&#x27;: {
          #     &#x27;minNodes&#x27;: 5
          #   }
          # }
          # &lt;/pre&gt;
          # HTTP request:
          # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
          # PATCH
          # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
          # -d @./update_body.json
          # &lt;/pre&gt;
    },
    &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
    &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your model
        # versions. Each label is a key-value pair, where both the key and the value
        # are arbitrary strings that you supply.
        # For more information, see the documentation on
        # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
      &quot;a_key&quot;: &quot;A String&quot;,
    },
    &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
        # projects.models.versions.patch
        # request. Specifying it in a
        # projects.models.versions.create
        # request has no effect.
        #
        # Configures the request-response pair logging on predictions from this
        # Version.
        # Online prediction requests to a model version and the responses to these
        # requests are converted to raw strings and saved to the specified BigQuery
        # table. Logging is constrained by [BigQuery quotas and
        # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
        # AI Platform Prediction does not log request-response pairs, but it continues
        # to serve predictions.
        #
        # If you are using [continuous
        # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
        # specify this configuration manually. Setting up continuous evaluation
        # automatically enables logging of request-response pairs.
      &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
          # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
          #
          # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
          # for your project must have permission to write to it. The table must have
          # the following [schema](/bigquery/docs/schemas):
          #
          # &lt;table&gt;
          # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
          # &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
          # &lt;/table&gt;
      &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
          # For example, if you want to log 10% of requests, enter `0.1`. The sampling
          # window is the lifetime of the model version. Defaults to 0.
    },
    &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
    &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
        # applies to online prediction service. If this field is not specified, it
        # defaults to `mls1-c1-m2`.
        #
        # Online prediction supports the following machine types:
        #
        # * `mls1-c1-m2`
        # * `mls1-c4-m2`
        # * `n1-standard-2`
        # * `n1-standard-4`
        # * `n1-standard-8`
        # * `n1-standard-16`
        # * `n1-standard-32`
        # * `n1-highmem-2`
        # * `n1-highmem-4`
        # * `n1-highmem-8`
        # * `n1-highmem-16`
        # * `n1-highmem-32`
        # * `n1-highcpu-2`
        # * `n1-highcpu-4`
        # * `n1-highcpu-8`
        # * `n1-highcpu-16`
        # * `n1-highcpu-32`
        #
        # `mls1-c1-m2` is generally available. All other machine types are available
        # in beta. Learn more about the [differences between machine
        # types](/ml-engine/docs/machine-types-online-prediction).
    &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
        #
        # For more information, see the
        # [runtime version list](/ml-engine/docs/runtime-version-list) and
        # [how to manage runtime versions](/ml-engine/docs/versioning).
    &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
    &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
        # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
        # `XGBOOST`. If you do not specify a framework, AI Platform
        # will analyze files in the deployment_uri to determine a framework. If you
        # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
        # of the model to 1.4 or greater.
        #
        # Do **not** specify a framework if you&#x27;re deploying a [custom
        # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
        #
        # If you specify a [Compute Engine (N1) machine
        # type](/ml-engine/docs/machine-types-online-prediction) in the
        # `machineType` field, you must specify `TENSORFLOW`
        # for the framework.
    &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
        # prevent simultaneous updates of a model from overwriting each other.
        # It is strongly suggested that systems make use of the `etag` in the
        # read-modify-write cycle to perform model updates in order to avoid race
        # conditions: An `etag` is returned in the response to `GetVersion`, and
        # systems are expected to put that etag in the request to `UpdateVersion` to
        # ensure that their change will be applied to the model as intended.
  }</pre>
</div>

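<p>As a hedged sketch of reading the scaling configuration from the Version object that `get` returns: the field names (`manualScaling`, `autoScaling`, `nodes`, `minNodes`) are taken from the message documented above, but the helper itself is illustrative and not part of the client library.</p>

```python
def describe_scaling(version):
    # Summarizes how a Version resource (as returned by
    # projects.models.versions.get) is scaled. Exactly one of
    # manualScaling / autoScaling is normally present.
    if 'manualScaling' in version:
        return 'manual: %s node(s)' % version['manualScaling'].get('nodes', 0)
    if 'autoScaling' in version:
        return 'auto: min %s node(s)' % version['autoScaling'].get('minNodes', 0)
    return 'no scaling config (service default)'

# Example Version payloads shaped like the response documented above.
print(describe_scaling({'name': 'v1', 'manualScaling': {'nodes': 2}}))
print(describe_scaling({'name': 'v2', 'autoScaling': {'minNodes': 1}}))
```

<p>With a real client you would obtain `version` from `ml.projects().models().versions().get(name=...).execute()` rather than a literal dict.</p>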
<div class="method">
    <code class="details" id="list">list(parent, pageToken=None, pageSize=None, filter=None, x__xgafv=None)</code>
  <pre>Gets basic information about all the versions of a model.

If you expect that a model has many versions, or if you need to handle
only a limited number of results at a time, you can request that the list
be retrieved in batches (called pages).

If there are no versions that match the request parameters, the list
request returns an empty response body: {}.

Args:
  parent: string, Required. The name of the model for which to list versions. (required)
  pageToken: string, Optional. A page token to request the next page of results.

You get the token from the `next_page_token` field of the response from
the previous call.
  pageSize: integer, Optional. The number of versions to retrieve per &quot;page&quot; of results. If
there are more remaining results than this number, the response message
will contain a valid value in the `next_page_token` field.

The default value is 20, and the maximum page size is 100.
  filter: string, Optional. Specifies the subset of versions to retrieve.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Response message for the ListVersions method.
    &quot;nextPageToken&quot;: &quot;A String&quot;, # Optional. Pass this token as the `page_token` field of the request for a
        # subsequent call.
    &quot;versions&quot;: [ # The list of versions.
      { # Represents a version of the model.
          #
          # Each version is a trained model deployed in the cloud, ready to handle
          # prediction requests. A model can have multiple versions. You can get
          # information about all of the versions of a given model by calling
          # projects.models.versions.list.
        &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
            # Only specify this field if you have specified a Compute Engine (N1) machine
            # type in the `machineType` field. Learn more about [using GPUs for online
            # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
            # Note that the AcceleratorConfig can be used in both Jobs and Versions.
            # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
            # [accelerators for online
            # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
          &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
          &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
        },
        &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
            # requests that do not specify a version.
            #
            # You can change the default version by calling
            # projects.models.versions.setDefault.
        &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
            # model. You should generally use `auto_scaling` with an appropriate
            # `min_nodes` instead, but this option is available if you want more
            # predictable billing. Beware that latency and error rates will increase
            # if the traffic exceeds the capability of the system to serve it based
            # on the selected number of nodes.
          &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
              # starting from the time the model is deployed, so the cost of operating
              # this model will be proportional to `nodes` * number of hours since
              # last billing cycle plus the cost for each prediction performed.
        },
        &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
        &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
            #
            # The version name must be unique within the model it is created in.
        &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
        &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
            #
            # The following Python versions are available:
            #
            # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
            #   later.
            # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
            #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
            # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
            #   earlier.
            #
            # Read more about the Python versions available for [each runtime
            # version](/ml-engine/docs/runtime-version-list).
        &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
        &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
            # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
            # the Predictor interface described in this reference field. The module
            # containing this class should be included in a package provided to the
            # [`packageUris` field](#Version.FIELDS.package_uris).
            #
            # Specify this field if and only if you are deploying a [custom prediction
            # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
            # If you specify this field, you must set
            # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
            # you must set `machineType` to a [legacy (MLS1)
            # machine type](/ml-engine/docs/machine-types-online-prediction).
            #
            # The following code sample provides the Predictor interface:
            #
            # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
            # class Predictor(object):
            #     &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
            #
            #     def predict(self, instances, **kwargs):
            #         &quot;&quot;&quot;Performs custom prediction.
            #
            #         Instances are the decoded values from the request. They have already
            #         been deserialized from JSON.
            #
            #         Args:
            #             instances: A list of prediction input instances.
            #             **kwargs: A dictionary of keyword args provided as additional
            #                 fields on the predict request body.
            #
            #         Returns:
            #             A list of outputs containing the prediction results. This list must
            #             be JSON serializable.
            #         &quot;&quot;&quot;
            #         raise NotImplementedError()
            #
            #     @classmethod
            #     def from_path(cls, model_dir):
            #         &quot;&quot;&quot;Creates an instance of Predictor using the given path.
            #
            #         Loading of the predictor should be done in this method.
            #
            #         Args:
            #             model_dir: The local directory that contains the exported model
            #                 file along with any additional files uploaded when creating the
            #                 version resource.
            #
            #         Returns:
            #             An instance implementing this Predictor class.
            #         &quot;&quot;&quot;
            #         raise NotImplementedError()
            # &lt;/pre&gt;
            #
            # Learn more about [the Predictor interface and custom prediction
            # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001059 &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
1060 # create the version. See the
1061 # [guide to model
1062 # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
1063 # information.
1064 #
1065 # When passing Version to
1066 # projects.models.versions.create
1067 # the model service uses the specified location as the source of the model.
1068 # Once deployed, the model version is hosted by the prediction service, so
1069 # this location is useful only as a historical record.
1070 # The total number of model files can&#x27;t exceed 1000.
Bu Sun Kim65020912020-05-20 12:08:20 -07001071 &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001072 # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
1073 # or [scikit-learn pipelines with custom
1074 # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
1075 #
1076 # For a custom prediction routine, one of these packages must contain your
1077 # Predictor class (see
1078 # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
1079 # include any dependencies used by your Predictor or scikit-learn pipeline
1080 # uses that are not already included in your selected [runtime
1081 # version](/ml-engine/docs/tensorflow/runtime-version-list).
1082 #
1083 # If you specify this field, you must also set
1084 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
Bu Sun Kim65020912020-05-20 12:08:20 -07001085 &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001086 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07001087 &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
1088 # Some explanation features require additional metadata to be loaded
1089 # as part of the model payload.
1090 # There are two feature attribution methods supported for TensorFlow models:
1091 # integrated gradients and sampled Shapley.
1092 # [Learn more about feature
1093 # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
        &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value, taking advantage
            # of the model&#x27;s fully differentiable structure. Refer to these papers for
            # more details: http://proceedings.mlr.press/v70/sundararajan17a.html and
            # https://arxiv.org/abs/1703.01365
          &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
              # A good value to start with is 50; gradually increase it until the
              # sum-to-diff property is met within the desired error range.
        },
        &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
            # contribute to the label being predicted. A sampling strategy is used to
            # approximate the value rather than considering all subsets of features.
          &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
              # Shapley values.
        },
        &quot;xraiAttribution&quot;: { # Attributes credit by computing XRAI, taking advantage
            # of the model&#x27;s fully differentiable structure. Refer to this paper for
            # more details: https://arxiv.org/abs/1906.02825
            # Currently only implemented for models with natural image inputs.
          &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
              # A good value to start with is 50; gradually increase it until the
              # sum-to-diff property is met within the desired error range.
        },
      },
      &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
          # response to increases and decreases in traffic. Care should be
          # taken to ramp up traffic according to the model&#x27;s ability to scale
          # or you will start seeing increases in latency and 429 response codes.
          #
          # Note that you cannot use AutoScaling if your version uses
          # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
          # `manual_scaling`.
        &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
            # nodes are always up, starting from the time the model is deployed.
            # Therefore, the cost of operating this model will be at least
            # `rate` * `min_nodes` * number of hours since last billing cycle,
            # where `rate` is the cost per node-hour as documented in the
            # [pricing guide](/ml-engine/docs/pricing),
            # even if no predictions are performed. There is additional cost for each
            # prediction performed.
            #
            # Unlike manual scaling, if the load gets too heavy for the nodes
            # that are up, the service will automatically add nodes to handle the
            # increased load as well as scale back as traffic drops, always maintaining
            # at least `min_nodes`. You will be charged for the time in which additional
            # nodes are used.
            #
            # If `min_nodes` is not specified and AutoScaling is used with a [legacy
            # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
            # `min_nodes` defaults to 0, in which case, when traffic to a model stops
            # (and after a cool-down period), nodes will be shut down and no charges will
            # be incurred until traffic to the model resumes.
            #
            # If `min_nodes` is not specified and AutoScaling is used with a [Compute
            # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
            # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
            # Compute Engine machine type.
            #
            # Note that you cannot use AutoScaling if your version uses
            # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
            # ManualScaling.
            #
            # You can set `min_nodes` when creating the model version, and you can also
            # update `min_nodes` for an existing version:
            # &lt;pre&gt;
            # update_body.json:
            # {
            #   &#x27;autoScaling&#x27;: {
            #     &#x27;minNodes&#x27;: 5
            #   }
            # }
            # &lt;/pre&gt;
            # HTTP request:
            # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
            # PATCH
            # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
            # -d @./update_body.json
            # &lt;/pre&gt;
      },
      &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
      &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your model
          # versions. Each label is a key-value pair, where both the key and the value
          # are arbitrary strings that you supply.
          # For more information, see the documentation on
          # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
        &quot;a_key&quot;: &quot;A String&quot;,
      },
      &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
          # projects.models.versions.patch
          # request. Specifying it in a
          # projects.models.versions.create
          # request has no effect.
          #
          # Configures the request-response pair logging on predictions from this
          # Version.
          # Online prediction requests to a model version and the responses to these
          # requests are converted to raw strings and saved to the specified BigQuery
          # table. Logging is constrained by [BigQuery quotas and
          # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
          # AI Platform Prediction does not log request-response pairs, but it continues
          # to serve predictions.
          #
          # If you are using [continuous
          # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
          # specify this configuration manually. Setting up continuous evaluation
          # automatically enables logging of request-response pairs.
        &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
            # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
            #
            # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
            # for your project must have permission to write to it. The table must have
            # the following [schema](/bigquery/docs/schemas):
            #
            # &lt;table&gt;
            # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
            # &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
            # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
            # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
            # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
            # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
            # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
            # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
            # &lt;/table&gt;
        &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
            # For example, if you want to log 10% of requests, enter `0.1`. The sampling
            # window is the lifetime of the model version. Defaults to 0.
      },
      &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
      &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
          # applies to the online prediction service. If this field is not specified,
          # it defaults to `mls1-c1-m2`.
          #
          # Online prediction supports the following machine types:
          #
          # * `mls1-c1-m2`
          # * `mls1-c4-m2`
          # * `n1-standard-2`
          # * `n1-standard-4`
          # * `n1-standard-8`
          # * `n1-standard-16`
          # * `n1-standard-32`
          # * `n1-highmem-2`
          # * `n1-highmem-4`
          # * `n1-highmem-8`
          # * `n1-highmem-16`
          # * `n1-highmem-32`
          # * `n1-highcpu-2`
          # * `n1-highcpu-4`
          # * `n1-highcpu-8`
          # * `n1-highcpu-16`
          # * `n1-highcpu-32`
          #
          # `mls1-c1-m2` is generally available. All other machine types are available
          # in beta. Learn more about the [differences between machine
          # types](/ml-engine/docs/machine-types-online-prediction).
      &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
          #
          # For more information, see the
          # [runtime version list](/ml-engine/docs/runtime-version-list) and
          # [how to manage runtime versions](/ml-engine/docs/versioning).
      &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
      &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
          # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
          # and `XGBOOST`. If you do not specify a framework, AI Platform
          # will analyze files in the deployment_uri to determine a framework. If you
          # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
          # of the model to 1.4 or greater.
          #
          # Do **not** specify a framework if you&#x27;re deploying a [custom
          # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
          #
          # If you specify a [Compute Engine (N1) machine
          # type](/ml-engine/docs/machine-types-online-prediction) in the
          # `machineType` field, you must specify `TENSORFLOW`
          # for the framework.
      &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
          # prevent simultaneous updates of a model from overwriting each other.
          # It is strongly suggested that systems make use of the `etag` in the
          # read-modify-write cycle to perform model updates in order to avoid race
          # conditions: an `etag` is returned in the response to `GetVersion`, and
          # systems are expected to put that etag in the request to `UpdateVersion` to
          # ensure that their change will be applied to the model as intended.
    },
  ],
}</pre>
</div>

<div class="method">
    <code class="details" id="list_next">list_next(previous_request, previous_response)</code>
  <pre>Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call &#x27;execute()&#x27; on to request the next
  page. Returns None if there are no more items in the collection.
    </pre>
</div>

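The `autoScaling.minNodes` update example embedded in the field documentation can be expressed as a small helper that prepares the two pieces a PATCH needs: a body containing only the changed fields, and the matching `updateMask`. The helper name `make_min_nodes_patch` is invented for illustration; the commented-out client call shows where the pieces would go.

```python
import json


def make_min_nodes_patch(min_nodes):
    """Returns (body, update_mask) for updating autoScaling.minNodes."""
    body = {"autoScaling": {"minNodes": min_nodes}}
    update_mask = "autoScaling.minNodes"
    return body, update_mask


body, mask = make_min_nodes_patch(5)
print(json.dumps(body))  # {"autoScaling": {"minNodes": 5}}
print(mask)              # autoScaling.minNodes

# With an authorized discovery client this would be sent as, e.g.:
# ml.projects().models().versions().patch(
#     name="projects/PROJECT/models/MODEL/versions/VERSION",
#     body=body, updateMask=mask).execute()
```

Keeping the body restricted to the masked fields mirrors the documented `update_body.json` example: fields outside the mask are ignored by the service.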
<div class="method">
    <code class="details" id="patch">patch(name, body=None, updateMask=None, x__xgafv=None)</code>
  <pre>Updates the specified Version resource.

Currently the only updatable fields are `description`,
`requestLoggingConfig`, `autoScaling.minNodes`, and `manualScaling.nodes`.

Args:
  name: string, Required. The name of the model. (required)
  body: object, The request body.
    The object takes the form of:

{ # Represents a version of the model.
    #
    # Each version is a trained model deployed in the cloud, ready to handle
    # prediction requests. A model can have multiple versions. You can get
    # information about all of the versions of a given model by calling
    # projects.models.versions.list.
  &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
      # Only specify this field if you have specified a Compute Engine (N1) machine
      # type in the `machineType` field. Learn more about [using GPUs for online
      # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
      # Note that the AcceleratorConfig can be used in both Jobs and Versions.
      # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
      # [accelerators for online
      # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
    &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
    &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
  },
  &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
      # requests that do not specify a version.
      #
      # You can change the default version by calling
      # projects.models.versions.setDefault.
  &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
      # model. You should generally use `auto_scaling` with an appropriate
      # `min_nodes` instead, but this option is available if you want more
      # predictable billing. Beware that latency and error rates will increase
      # if the traffic exceeds the capacity of the system to serve it based
      # on the selected number of nodes.
    &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
        # starting from the time the model is deployed, so the cost of operating
        # this model will be proportional to `nodes` * number of hours since
        # last billing cycle plus the cost for each prediction performed.
  },
  &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
  &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
      #
      # The version name must be unique within the model it is created in.
  &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
  &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
      #
      # The following Python versions are available:
      #
      # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
      #   later.
      # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
      #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
      # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
      #   earlier.
      #
      # Read more about the Python versions available for [each runtime
      # version](/ml-engine/docs/runtime-version-list).
  &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
  &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
      # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
      # the Predictor interface described in this reference field. The module
      # containing this class should be included in a package provided to the
      # [`packageUris` field](#Version.FIELDS.package_uris).
      #
      # Specify this field if and only if you are deploying a [custom prediction
      # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
      # If you specify this field, you must set
      # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
      # you must set `machineType` to a [legacy (MLS1)
      # machine type](/ml-engine/docs/machine-types-online-prediction).
      #
      # The following code sample provides the Predictor interface:
      #
      # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
      # class Predictor(object):
      #     &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
      #
      #     def predict(self, instances, **kwargs):
      #         &quot;&quot;&quot;Performs custom prediction.
      #
      #         Instances are the decoded values from the request. They have already
      #         been deserialized from JSON.
      #
      #         Args:
      #             instances: A list of prediction input instances.
      #             **kwargs: A dictionary of keyword args provided as additional
      #                 fields on the predict request body.
      #
      #         Returns:
      #             A list of outputs containing the prediction results. This list must
      #             be JSON serializable.
      #         &quot;&quot;&quot;
      #         raise NotImplementedError()
      #
      #     @classmethod
      #     def from_path(cls, model_dir):
      #         &quot;&quot;&quot;Creates an instance of Predictor using the given path.
      #
      #         Loading of the predictor should be done in this method.
      #
      #         Args:
      #             model_dir: The local directory that contains the exported model
      #                 file along with any additional files uploaded when creating the
      #                 version resource.
      #
      #         Returns:
      #             An instance implementing this Predictor class.
      #         &quot;&quot;&quot;
      #         raise NotImplementedError()
      # &lt;/pre&gt;
      #
      # Learn more about [the Predictor interface and custom prediction
      # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
  &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
      # create the version. See the
      # [guide to model
      # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
      # information.
      #
      # When passing Version to
      # projects.models.versions.create
      # the model service uses the specified location as the source of the model.
      # Once deployed, the model version is hosted by the prediction service, so
      # this location is useful only as a historical record.
      # The total number of model files can&#x27;t exceed 1000.
  &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
      # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
      # or [scikit-learn pipelines with custom
      # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
      #
      # For a custom prediction routine, one of these packages must contain your
      # Predictor class (see
      # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
      # include any dependencies your Predictor or scikit-learn pipeline
      # uses that are not already included in your selected [runtime
      # version](/ml-engine/docs/tensorflow/runtime-version-list).
      #
      # If you specify this field, you must also set
      # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
    &quot;A String&quot;,
  ],
  &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
      # Some explanation features require additional metadata to be loaded
      # as part of the model payload.
      # There are two feature attribution methods supported for TensorFlow models:
      # integrated gradients and sampled Shapley.
      # [Learn more about feature
      # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
    &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value, taking advantage
        # of the model&#x27;s fully differentiable structure. Refer to these papers for
        # more details: http://proceedings.mlr.press/v70/sundararajan17a.html and
        # https://arxiv.org/abs/1703.01365
      &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
          # A good value to start with is 50; gradually increase it until the
          # sum-to-diff property is met within the desired error range.
    },
    &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
        # contribute to the label being predicted. A sampling strategy is used to
        # approximate the value rather than considering all subsets of features.
      &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
          # Shapley values.
    },
    &quot;xraiAttribution&quot;: { # Attributes credit by computing XRAI, taking advantage
        # of the model&#x27;s fully differentiable structure. Refer to this paper for
        # more details: https://arxiv.org/abs/1906.02825
        # Currently only implemented for models with natural image inputs.
      &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
          # A good value to start with is 50; gradually increase it until the
          # sum-to-diff property is met within the desired error range.
    },
  },
  &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
      # response to increases and decreases in traffic. Care should be
      # taken to ramp up traffic according to the model&#x27;s ability to scale
      # or you will start seeing increases in latency and 429 response codes.
      #
      # Note that you cannot use AutoScaling if your version uses
      # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
      # `manual_scaling`.
    &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
        # nodes are always up, starting from the time the model is deployed.
        # Therefore, the cost of operating this model will be at least
        # `rate` * `min_nodes` * number of hours since last billing cycle,
        # where `rate` is the cost per node-hour as documented in the
        # [pricing guide](/ml-engine/docs/pricing),
        # even if no predictions are performed. There is additional cost for each
        # prediction performed.
        #
        # Unlike manual scaling, if the load gets too heavy for the nodes
        # that are up, the service will automatically add nodes to handle the
        # increased load as well as scale back as traffic drops, always maintaining
        # at least `min_nodes`. You will be charged for the time in which additional
        # nodes are used.
        #
        # If `min_nodes` is not specified and AutoScaling is used with a [legacy
        # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
        # `min_nodes` defaults to 0, in which case, when traffic to a model stops
        # (and after a cool-down period), nodes will be shut down and no charges will
        # be incurred until traffic to the model resumes.
        #
        # If `min_nodes` is not specified and AutoScaling is used with a [Compute
        # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
        # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
        # Compute Engine machine type.
        #
        # Note that you cannot use AutoScaling if your version uses
        # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
        # ManualScaling.
        #
        # You can set `min_nodes` when creating the model version, and you can also
        # update `min_nodes` for an existing version:
        # &lt;pre&gt;
        # update_body.json:
        # {
        #   &#x27;autoScaling&#x27;: {
        #     &#x27;minNodes&#x27;: 5
        #   }
        # }
        # &lt;/pre&gt;
        # HTTP request:
        # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
        # PATCH
        # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
        # -d @./update_body.json
        # &lt;/pre&gt;
  },
  &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
  &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your model
      # versions. Each label is a key-value pair, where both the key and the value
      # are arbitrary strings that you supply.
      # For more information, see the documentation on
      # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
    &quot;a_key&quot;: &quot;A String&quot;,
  },
  &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
      # projects.models.versions.patch
      # request. Specifying it in a
      # projects.models.versions.create
      # request has no effect.
      #
      # Configures the request-response pair logging on predictions from this
      # Version.
      # Online prediction requests to a model version and the responses to these
      # requests are converted to raw strings and saved to the specified BigQuery
      # table. Logging is constrained by [BigQuery quotas and
      # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
      # AI Platform Prediction does not log request-response pairs, but it continues
      # to serve predictions.
      #
      # If you are using [continuous
      # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
      # specify this configuration manually. Setting up continuous evaluation
      # automatically enables logging of request-response pairs.
    &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
        # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
        #
        # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
        # for your project must have permission to write to it. The table must have
        # the following [schema](/bigquery/docs/schemas):
        #
        # &lt;table&gt;
        # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
        # &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1579 # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
1580 # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
1581 # &lt;/table&gt;
Bu Sun Kim65020912020-05-20 12:08:20 -07001582 &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
1583 # For example, if you want to log 10% of requests, enter `0.1`. The sampling
1584 # window is the lifetime of the model version. Defaults to 0.
1585 },
1586 &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
1587 &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
1588 # applies to online prediction service. If this field is not specified, it
1589 # defaults to `mls1-c1-m2`.
1590 #
1591 # Online prediction supports the following machine types:
1592 #
1593 # * `mls1-c1-m2`
1594 # * `mls1-c4-m2`
1595 # * `n1-standard-2`
1596 # * `n1-standard-4`
1597 # * `n1-standard-8`
1598 # * `n1-standard-16`
1599 # * `n1-standard-32`
1600 # * `n1-highmem-2`
1601 # * `n1-highmem-4`
1602 # * `n1-highmem-8`
1603 # * `n1-highmem-16`
1604 # * `n1-highmem-32`
1605 # * `n1-highcpu-2`
1606 # * `n1-highcpu-4`
1607 # * `n1-highcpu-8`
1608 # * `n1-highcpu-16`
1609 # * `n1-highcpu-32`
1610 #
1611 # `mls1-c1-m2` is generally available. All other machine types are available
1612 # in beta. Learn more about the [differences between machine
1613 # types](/ml-engine/docs/machine-types-online-prediction).
1614 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
1615 #
1616 # For more information, see the
1617 # [runtime version list](/ml-engine/docs/runtime-version-list) and
1618 # [how to manage runtime versions](/ml-engine/docs/versioning).
1619 &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
1620 &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
1621 # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
1622 # `XGBOOST`. If you do not specify a framework, AI Platform
1623 # will analyze files in the deployment_uri to determine a framework. If you
1624 # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
1625 # of the model to 1.4 or greater.
1626 #
1627 # Do **not** specify a framework if you&#x27;re deploying a [custom
1628 # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
1629 #
1630 # If you specify a [Compute Engine (N1) machine
1631 # type](/ml-engine/docs/machine-types-online-prediction) in the
1632 # `machineType` field, you must specify `TENSORFLOW`
1633 # for the framework.
1634 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
1635 # prevent simultaneous updates of a model from overwriting each other.
1636 # It is strongly suggested that systems make use of the `etag` in the
1637 # read-modify-write cycle to perform model updates in order to avoid race
1638 # conditions: An `etag` is returned in the response to `GetVersion`, and
1639 # systems are expected to put that etag in the request to `UpdateVersion` to
1640 # ensure that their change will be applied to the model as intended.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001641}

  updateMask: string, Required. Specifies the path, relative to `Version`, of the field to
update. Must be present and non-empty.

For example, to change the description of a version to &quot;foo&quot;, the
`update_mask` parameter would be specified as `description`, and the
`PATCH` request body would specify the new value, as follows:

```
{
  &quot;description&quot;: &quot;foo&quot;
}
```

Currently the only supported update mask fields are `description`,
`requestLoggingConfig`, `autoScaling.minNodes`, and `manualScaling.nodes`.
However, you can only update `manualScaling.nodes` if the version uses a
[Compute Engine (N1)
machine type](/ml-engine/docs/machine-types-online-prediction).
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a
      # network API call.
    &quot;name&quot;: &quot;A String&quot;, # The server-assigned name, which is only unique within the same service that
        # originally returns it. If you use the default HTTP mapping, the
        # `name` should be a resource name ending with `operations/{unique_id}`.
    &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
        # different programming environments, including REST APIs and RPC APIs. It is
        # used by [gRPC](https://github.com/grpc). Each `Status` message contains
        # three pieces of data: error code, error message, and error details.
        #
        # You can find out more about this error model and how to work with it in the
        # [API Design Guide](https://cloud.google.com/apis/design/errors).
      &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
      &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
          # user-facing error message should be localized and sent in the
          # google.rpc.Status.details field, or localized by the client.
      &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of
          # message types for APIs to use.
        {
          &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
        },
      ],
    },
    &quot;metadata&quot;: { # Service-specific metadata associated with the operation. It typically
        # contains progress information and common metadata such as create time.
        # Some services might not provide such metadata. Any method that returns a
        # long-running operation should document the metadata type, if any.
      &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
    },
    &quot;done&quot;: True or False, # If the value is `false`, it means the operation is still in progress.
        # If `true`, the operation is completed, and either `error` or `response` is
        # available.
    &quot;response&quot;: { # The normal response of the operation in case of success. If the original
        # method returns no data on success, such as `Delete`, the response is
        # `google.protobuf.Empty`. If the original method is standard
        # `Get`/`Create`/`Update`, the response should be the resource. For other
        # methods, the response should have the type `XxxResponse`, where `Xxx`
        # is the original method name. For example, if the original method name
        # is `TakeSnapshot()`, the inferred response type is
        # `TakeSnapshotResponse`.
      &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
    },
  }</pre>
</div>
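As a sketch only (not part of the generated reference), the `PATCH` example above can be driven from the Python client library. The helper below just assembles the resource name, update mask, and request body; the project, model, and version names are placeholders, and the commented-out lines show how the assembled pieces would be passed to `projects().models().versions().patch(...)` if `google-api-python-client` is installed and credentials are configured.

```python
def build_patch_request(project, model, version, min_nodes):
    """Builds the (name, update_mask, body) triple for an
    autoScaling.minNodes patch on a model version."""
    name = "projects/{}/models/{}/versions/{}".format(project, model, version)
    update_mask = "autoScaling.minNodes"
    body = {"autoScaling": {"minNodes": min_nodes}}
    return name, update_mask, body

# Placeholder names; substitute your own project/model/version.
name, mask, body = build_patch_request("my-project", "my_model", "v1", 5)

# Sending the request (assumes google-api-python-client and default credentials):
# from googleapiclient import discovery
# ml = discovery.build("ml", "v1")
# op = ml.projects().models().versions().patch(
#     name=name, updateMask=mask, body=body).execute()
# `op` is the long-running Operation resource described above.
```

Because `patch` returns a long-running operation, callers typically poll `projects.operations.get` (or inspect the returned Operation's `done` field) before relying on the new setting.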

<div class="method">
    <code class="details" id="setDefault">setDefault(name, body=None, x__xgafv=None)</code>
  <pre>Designates a version to be the default for the model.

The default version is used for prediction requests made against the model
that don&#x27;t specify a version.

The first version to be created for a model is automatically set as the
default. You must make any subsequent changes to the default version
setting manually using this method.

Args:
  name: string, Required. The name of the version to make the default for the model. You
can get the names of all the versions of a model by calling
projects.models.versions.list. (required)
  body: object, The request body.
    The object takes the form of:

{ # Request message for the SetDefaultVersion request.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Represents a version of the model.
      #
      # Each version is a trained model deployed in the cloud, ready to handle
      # prediction requests. A model can have multiple versions. You can get
      # information about all of the versions of a given model by calling
      # projects.models.versions.list.
    &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
        # Only specify this field if you have specified a Compute Engine (N1) machine
        # type in the `machineType` field. Learn more about [using GPUs for online
        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
        # Note that the AcceleratorConfig can be used in both Jobs and Versions.
        # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
        # [accelerators for online
        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
      &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
      &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
    },
    &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
        # requests that do not specify a version.
        #
        # You can change the default version by calling
        # projects.models.versions.setDefault.
    &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
        # model. You should generally use `auto_scaling` with an appropriate
        # `min_nodes` instead, but this option is available if you want more
        # predictable billing. Beware that latency and error rates will increase
        # if the traffic exceeds the capacity of the system to serve it based
        # on the selected number of nodes.
      &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
          # starting from the time the model is deployed, so the cost of operating
          # this model will be proportional to `nodes` * number of hours since
          # last billing cycle plus the cost for each prediction performed.
    },
    &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
    &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
        #
        # The version name must be unique within the model it is created in.
    &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
    &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
        #
        # The following Python versions are available:
        #
        # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
        #   later.
        # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
        #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
        # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
        #   earlier.
        #
        # Read more about the Python versions available for [each runtime
        # version](/ml-engine/docs/runtime-version-list).
    &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
    &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
        # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
        # the Predictor interface described in this reference field. The module
        # containing this class should be included in a package provided to the
        # [`packageUris` field](#Version.FIELDS.package_uris).
        #
        # Specify this field if and only if you are deploying a [custom prediction
        # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
        # If you specify this field, you must set
        # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
        # you must set `machineType` to a [legacy (MLS1)
        # machine type](/ml-engine/docs/machine-types-online-prediction).
        #
        # The following code sample provides the Predictor interface:
        #
        # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
        # class Predictor(object):
        #     &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
        #
        #     def predict(self, instances, **kwargs):
        #         &quot;&quot;&quot;Performs custom prediction.
        #
        #         Instances are the decoded values from the request. They have already
        #         been deserialized from JSON.
        #
        #         Args:
        #             instances: A list of prediction input instances.
        #             **kwargs: A dictionary of keyword args provided as additional
        #                 fields on the predict request body.
        #
        #         Returns:
        #             A list of outputs containing the prediction results. This list must
        #             be JSON serializable.
        #         &quot;&quot;&quot;
        #         raise NotImplementedError()
        #
        #     @classmethod
        #     def from_path(cls, model_dir):
        #         &quot;&quot;&quot;Creates an instance of Predictor using the given path.
        #
        #         Loading of the predictor should be done in this method.
        #
        #         Args:
        #             model_dir: The local directory that contains the exported model
        #                 file along with any additional files uploaded when creating the
        #                 version resource.
        #
        #         Returns:
        #             An instance implementing this Predictor class.
        #         &quot;&quot;&quot;
        #         raise NotImplementedError()
        # &lt;/pre&gt;
        #
        # Learn more about [the Predictor interface and custom prediction
        # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
    &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
        # create the version. See the
        # [guide to model
        # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
        # information.
        #
        # When passing Version to
        # projects.models.versions.create
        # the model service uses the specified location as the source of the model.
        # Once deployed, the model version is hosted by the prediction service, so
        # this location is useful only as a historical record.
        # The total number of model files can&#x27;t exceed 1000.
    &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
        # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
        # or [scikit-learn pipelines with custom
        # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
        #
        # For a custom prediction routine, one of these packages must contain your
        # Predictor class (see
        # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
        # include any dependencies that your Predictor or scikit-learn pipeline
        # uses and that are not already included in your selected [runtime
        # version](/ml-engine/docs/tensorflow/runtime-version-list).
        #
        # If you specify this field, you must also set
        # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
      &quot;A String&quot;,
    ],
    &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
        # Some explanation features require additional metadata to be loaded
        # as part of the model payload.
        # There are two feature attribution methods supported for TensorFlow models:
        # integrated gradients and sampled Shapley.
        # [Learn more about feature
        # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
      &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value, taking advantage
          # of the model&#x27;s fully differentiable structure. Refer to this paper for
          # more details: https://arxiv.org/abs/1703.01365
          # (also published at http://proceedings.mlr.press/v70/sundararajan17a.html).
        &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
            # A good value to start with is 50; gradually increase it until the
            # sum-to-diff property is met within the desired error range.
      },
      &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
          # contribute to the label being predicted. A sampling strategy is used to
          # approximate the value rather than considering all subsets of features.
        &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
            # Shapley values.
      },
      &quot;xraiAttribution&quot;: { # Attributes credit by computing XRAI, taking advantage
          # of the model&#x27;s fully differentiable structure. Refer to this paper for
          # more details: https://arxiv.org/abs/1906.02825
          # Currently only implemented for models with natural image inputs.
        &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
            # A good value to start with is 50; gradually increase it until the
            # sum-to-diff property is met within the desired error range.
      },
    },
    &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
        # response to increases and decreases in traffic. Care should be
        # taken to ramp up traffic according to the model&#x27;s ability to scale
        # or you will start seeing increases in latency and 429 response codes.
        #
        # Note that you cannot use AutoScaling if your version uses
        # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
        # `manual_scaling`.
      &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
          # nodes are always up, starting from the time the model is deployed.
          # Therefore, the cost of operating this model will be at least
          # `rate` * `min_nodes` * number of hours since last billing cycle,
          # where `rate` is the cost per node-hour as documented in the
          # [pricing guide](/ml-engine/docs/pricing),
          # even if no predictions are performed. There is additional cost for each
          # prediction performed.
          #
          # Unlike manual scaling, if the load gets too heavy for the nodes
          # that are up, the service will automatically add nodes to handle the
          # increased load as well as scale back as traffic drops, always maintaining
          # at least `min_nodes`. You will be charged for the time in which additional
          # nodes are used.
          #
          # If `min_nodes` is not specified and AutoScaling is used with a [legacy
          # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
          # `min_nodes` defaults to 0, in which case, when traffic to a model stops
          # (and after a cool-down period), nodes will be shut down and no charges will
          # be incurred until traffic to the model resumes.
          #
          # If `min_nodes` is not specified and AutoScaling is used with a [Compute
          # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
          # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
          # Compute Engine machine type.
          #
          # Note that you cannot use AutoScaling if your version uses
          # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
          # ManualScaling.
          #
          # You can set `min_nodes` when creating the model version, and you can also
          # update `min_nodes` for an existing version:
          # &lt;pre&gt;
          # update_body.json:
          # {
          #   &#x27;autoScaling&#x27;: {
          #     &#x27;minNodes&#x27;: 5
          #   }
          # }
          # &lt;/pre&gt;
          # HTTP request:
          # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
          # PATCH
          # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
          # -d @./update_body.json
          # &lt;/pre&gt;
    },
    &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
    &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your model
        # versions. Each label is a key-value pair, where both the key and the value
        # are arbitrary strings that you supply.
        # For more information, see the documentation on
        # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
      &quot;a_key&quot;: &quot;A String&quot;,
    },
    &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
        # projects.models.versions.patch
        # request. Specifying it in a
        # projects.models.versions.create
        # request has no effect.
        #
        # Configures the request-response pair logging on predictions from this
        # Version.
        # Online prediction requests to a model version and the responses to these
        # requests are converted to raw strings and saved to the specified BigQuery
        # table. Logging is constrained by [BigQuery quotas and
        # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
        # AI Platform Prediction does not log request-response pairs, but it continues
        # to serve predictions.
        #
        # If you are using [continuous
        # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
        # specify this configuration manually. Setting up continuous evaluation
        # automatically enables logging of request-response pairs.
      &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
          # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
          #
          # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
          # for your project must have permission to write to it. The table must have
          # the following [schema](/bigquery/docs/schemas):
          #
          # &lt;table&gt;
          # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
          # &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
          # &lt;/table&gt;
      &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
          # For example, if you want to log 10% of requests, enter `0.1`. The sampling
          # window is the lifetime of the model version. Defaults to 0.
    },
    &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
    &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently this
        # only applies to the online prediction service. If this field is not
        # specified, it defaults to `mls1-c1-m2`.
        #
        # Online prediction supports the following machine types:
        #
        # * `mls1-c1-m2`
        # * `mls1-c4-m2`
        # * `n1-standard-2`
        # * `n1-standard-4`
        # * `n1-standard-8`
        # * `n1-standard-16`
        # * `n1-standard-32`
        # * `n1-highmem-2`
        # * `n1-highmem-4`
        # * `n1-highmem-8`
        # * `n1-highmem-16`
        # * `n1-highmem-32`
        # * `n1-highcpu-2`
        # * `n1-highcpu-4`
        # * `n1-highcpu-8`
        # * `n1-highcpu-16`
        # * `n1-highcpu-32`
        #
        # `mls1-c1-m2` is generally available. All other machine types are available
        # in beta. Learn more about the [differences between machine
        # types](/ml-engine/docs/machine-types-online-prediction).
    &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
        #
        # For more information, see the
        # [runtime version list](/ml-engine/docs/runtime-version-list) and
        # [how to manage runtime versions](/ml-engine/docs/versioning).
    &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
    &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
        # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
        # and `XGBOOST`. If you do not specify a framework, AI Platform
        # will analyze files in the deployment_uri to determine a framework. If you
        # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
        # of the model to 1.4 or greater.
        #
        # Do **not** specify a framework if you&#x27;re deploying a [custom
        # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
        #
        # If you specify a [Compute Engine (N1) machine
        # type](/ml-engine/docs/machine-types-online-prediction) in the
        # `machineType` field, you must specify `TENSORFLOW`
        # for the framework.
    &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
        # prevent simultaneous updates of a model from overwriting each other.
        # It is strongly suggested that systems make use of the `etag` in the
        # read-modify-write cycle to perform model updates in order to avoid race
        # conditions: an `etag` is returned in the response to `GetVersion`, and
        # systems are expected to put that etag in the request to `UpdateVersion` to
        # ensure that their change will be applied to the model as intended.
  }</pre>
</div>
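As a sketch only (not part of the generated reference), promoting a version to the model default follows the same pattern: build the fully qualified version name and call `setDefault` with an empty request body, since the SetDefaultVersion request message has no fields. The project, model, and version names below are placeholders, and the commented-out lines assume `google-api-python-client` with default credentials.

```python
def version_resource_name(project, model, version):
    """Builds the fully qualified resource name that setDefault expects,
    i.e. projects/*/models/*/versions/*."""
    return "projects/{}/models/{}/versions/{}".format(project, model, version)

# Placeholder names; substitute your own project/model/version.
name = version_resource_name("my-project", "my_model", "v2")

# Sending the request (assumes google-api-python-client and default credentials):
# from googleapiclient import discovery
# ml = discovery.build("ml", "v1")
# version = ml.projects().models().versions().setDefault(
#     name=name, body={}).execute()
# setDefault returns the Version resource described above; once the call
# succeeds, version["isDefault"] is True and subsequent prediction requests
# that omit a version are routed to it.
```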

</body></html>