<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="ml_v1.html">AI Platform Training &amp; Prediction API</a> . <a href="ml_v1.projects.html">projects</a> . <a href="ml_v1.projects.models.html">models</a> . <a href="ml_v1.projects.models.versions.html">versions</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#create">create(parent, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a new version of a model from a trained TensorFlow model.</p>
<p class="toc_element">
  <code><a href="#delete">delete(name, x__xgafv=None)</a></code></p>
<p class="firstline">Deletes a model version.</p>
<p class="toc_element">
  <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets information about a model version.</p>
<p class="toc_element">
  <code><a href="#list">list(parent, pageToken=None, pageSize=None, filter=None, x__xgafv=None)</a></code></p>
<p class="firstline">Gets basic information about all the versions of a model.</p>
<p class="toc_element">
  <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<p class="toc_element">
  <code><a href="#patch">patch(name, body=None, updateMask=None, x__xgafv=None)</a></code></p>
<p class="firstline">Updates the specified Version resource.</p>
<p class="toc_element">
  <code><a href="#setDefault">setDefault(name, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Designates a version to be the default for the model.</p>
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="create">create(parent, body=None, x__xgafv=None)</code>
  <pre>Creates a new version of a model from a trained TensorFlow model.

If the version created in the cloud by this call is the first deployed
version of the specified model, it will be made the default version of the
model. When you add a version to a model that already has one or more
versions, the default version does not automatically change. If you want a
new version to be the default, you must call
projects.models.versions.setDefault.

Args:
  parent: string, Required. The name of the model. (required)
  body: object, The request body.
    The object takes the form of:

{ # Represents a version of the model.
    #
    # Each version is a trained model deployed in the cloud, ready to handle
    # prediction requests. A model can have multiple versions. You can get
    # information about all of the versions of a given model by calling
    # projects.models.versions.list.
  &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
  &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
      # model. You should generally use `auto_scaling` with an appropriate
      # `min_nodes` instead, but this option is available if you want more
      # predictable billing. Beware that latency and error rates will increase
      # if the traffic exceeds the capability of the system to serve it based
      # on the selected number of nodes.
    &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
        # starting from the time the model is deployed, so the cost of operating
        # this model will be proportional to `nodes` * number of hours since
        # last billing cycle plus the cost for each prediction performed.
  },
  &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
      #
      # The version name must be unique within the model it is created in.
  &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
  &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
      #
      # The following Python versions are available:
      #
      # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
      #   later.
      # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
      #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
      # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
      #   earlier.
      #
      # Read more about the Python versions available for [each runtime
      # version](/ml-engine/docs/runtime-version-list).
  &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
  &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
      # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
      # the Predictor interface described in this reference field. The module
      # containing this class should be included in a package provided to the
      # [`packageUris` field](#Version.FIELDS.package_uris).
      #
      # Specify this field if and only if you are deploying a [custom prediction
      # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
      # If you specify this field, you must set
      # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
      # you must set `machineType` to a [legacy (MLS1)
      # machine type](/ml-engine/docs/machine-types-online-prediction).
      #
      # The following code sample provides the Predictor interface:
      #
      # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
      # class Predictor(object):
      #     &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
      #
      #     def predict(self, instances, **kwargs):
      #         &quot;&quot;&quot;Performs custom prediction.
      #
      #         Instances are the decoded values from the request. They have already
      #         been deserialized from JSON.
      #
      #         Args:
      #             instances: A list of prediction input instances.
      #             **kwargs: A dictionary of keyword args provided as additional
      #                 fields on the predict request body.
      #
      #         Returns:
      #             A list of outputs containing the prediction results. This list must
      #             be JSON serializable.
      #         &quot;&quot;&quot;
      #         raise NotImplementedError()
      #
      #     @classmethod
      #     def from_path(cls, model_dir):
      #         &quot;&quot;&quot;Creates an instance of Predictor using the given path.
      #
      #         Loading of the predictor should be done in this method.
      #
      #         Args:
      #             model_dir: The local directory that contains the exported model
      #                 file along with any additional files uploaded when creating the
      #                 version resource.
      #
      #         Returns:
      #             An instance implementing this Predictor class.
      #         &quot;&quot;&quot;
      #         raise NotImplementedError()
      # &lt;/pre&gt;
      #
      # Learn more about [the Predictor interface and custom prediction
      # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
  &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
      # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
      # or [scikit-learn pipelines with custom
      # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
      #
      # For a custom prediction routine, one of these packages must contain your
      # Predictor class (see
      # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
      # include any dependencies that your Predictor or scikit-learn pipeline
      # uses and that are not already included in your selected [runtime
      # version](/ml-engine/docs/tensorflow/runtime-version-list).
      #
      # If you specify this field, you must also set
      # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
    &quot;A String&quot;,
  ],
  &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
      # Some explanation features require additional metadata to be loaded
      # as part of the model payload.
      # There are two feature attribution methods supported for TensorFlow models:
      # integrated gradients and sampled Shapley.
      # [Learn more about feature
      # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
    &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value, taking advantage
        # of the model&#x27;s fully differentiable structure. Refer to this paper for
        # more details: https://arxiv.org/abs/1703.01365
      &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
          # A good value to start with is 50; gradually increase it until the
          # sum-to-diff property is met within the desired error range.
    },
    &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
        # contribute to the label being predicted. A sampling strategy is used to
        # approximate the value rather than considering all subsets of features.
      &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
          # Shapley values.
    },
    &quot;xraiAttribution&quot;: { # Attributes credit by computing the XRAI, taking advantage
        # of the model&#x27;s fully differentiable structure. Refer to this paper for
        # more details: https://arxiv.org/abs/1906.02825
        # Currently only implemented for models with natural image inputs.
      &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
          # A good value to start with is 50; gradually increase it until the
          # sum-to-diff property is met within the desired error range.
    },
  },
  &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
      # create the version. See the
      # [guide to model
      # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
      # information.
      #
      # When passing Version to
      # projects.models.versions.create
      # the model service uses the specified location as the source of the model.
      # Once deployed, the model version is hosted by the prediction service, so
      # this location is useful only as a historical record.
      # The total number of model files can&#x27;t exceed 1000.
  &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
      # response to increases and decreases in traffic. Care should be
      # taken to ramp up traffic according to the model&#x27;s ability to scale
      # or you will start seeing increases in latency and 429 response codes.
      #
      # Note that you cannot use AutoScaling if your version uses
      # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
      # `manual_scaling`.
    &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
        # nodes are always up, starting from the time the model is deployed.
        # Therefore, the cost of operating this model will be at least
        # `rate` * `min_nodes` * number of hours since last billing cycle,
        # where `rate` is the cost per node-hour as documented in the
        # [pricing guide](/ml-engine/docs/pricing),
        # even if no predictions are performed. There is additional cost for each
        # prediction performed.
        #
        # Unlike manual scaling, if the load gets too heavy for the nodes
        # that are up, the service will automatically add nodes to handle the
        # increased load as well as scale back as traffic drops, always maintaining
        # at least `min_nodes`. You will be charged for the time in which additional
        # nodes are used.
        #
        # If `min_nodes` is not specified and AutoScaling is used with a [legacy
        # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
        # `min_nodes` defaults to 0, in which case, when traffic to a model stops
        # (and after a cool-down period), nodes will be shut down and no charges will
        # be incurred until traffic to the model resumes.
        #
        # If `min_nodes` is not specified and AutoScaling is used with a [Compute
        # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
        # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
        # Compute Engine machine type.
        #
        # Note that you cannot use AutoScaling if your version uses
        # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
        # ManualScaling.
        #
        # You can set `min_nodes` when creating the model version, and you can also
        # update `min_nodes` for an existing version:
        # &lt;pre&gt;
        # update_body.json:
        # {
        #   &#x27;autoScaling&#x27;: {
        #     &#x27;minNodes&#x27;: 5
        #   }
        # }
        # &lt;/pre&gt;
        # HTTP request:
        # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
        # PATCH
        # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
        # -d @./update_body.json
        # &lt;/pre&gt;
  },
  &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your model
      # versions. Each label is a key-value pair, where both the key and the value
      # are arbitrary strings that you supply.
      # For more information, see the documentation on
      # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
    &quot;a_key&quot;: &quot;A String&quot;,
  },
  &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
  &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
      # projects.models.versions.patch
      # request. Specifying it in a
      # projects.models.versions.create
      # request has no effect.
      #
      # Configures the request-response pair logging on predictions from this
      # Version.
      # Online prediction requests to a model version and the responses to these
      # requests are converted to raw strings and saved to the specified BigQuery
      # table. Logging is constrained by [BigQuery quotas and
      # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
      # AI Platform Prediction does not log request-response pairs, but it continues
      # to serve predictions.
      #
      # If you are using [continuous
      # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
      # specify this configuration manually. Setting up continuous evaluation
      # automatically enables logging of request-response pairs.
    &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
        # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
        #
        # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
        # for your project must have permission to write to it. The table must have
        # the following [schema](/bigquery/docs/schemas):
        #
        # &lt;table&gt;
        # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
        # &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
        # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
        # &lt;/table&gt;
    &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
        # For example, if you want to log 10% of requests, enter `0.1`. The sampling
        # window is the lifetime of the model version. Defaults to 0.
  },
  &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
  &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
      # applies to online prediction service. If this field is not specified, it
      # defaults to `mls1-c1-m2`.
      #
      # Online prediction supports the following machine types:
      #
      # * `mls1-c1-m2`
      # * `mls1-c4-m2`
      # * `n1-standard-2`
      # * `n1-standard-4`
      # * `n1-standard-8`
      # * `n1-standard-16`
      # * `n1-standard-32`
      # * `n1-highmem-2`
      # * `n1-highmem-4`
      # * `n1-highmem-8`
      # * `n1-highmem-16`
      # * `n1-highmem-32`
      # * `n1-highcpu-2`
      # * `n1-highcpu-4`
      # * `n1-highcpu-8`
      # * `n1-highcpu-16`
      # * `n1-highcpu-32`
      #
      # `mls1-c1-m2` is generally available. All other machine types are available
      # in beta. Learn more about the [differences between machine
      # types](/ml-engine/docs/machine-types-online-prediction).
  &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
      #
      # For more information, see the
      # [runtime version list](/ml-engine/docs/runtime-version-list) and
      # [how to manage runtime versions](/ml-engine/docs/versioning).
  &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
  &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
      # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
      # `XGBOOST`. If you do not specify a framework, AI Platform
      # will analyze files in the deployment_uri to determine a framework. If you
      # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
      # of the model to 1.4 or greater.
      #
      # Do **not** specify a framework if you&#x27;re deploying a [custom
      # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
      #
      # If you specify a [Compute Engine (N1) machine
      # type](/ml-engine/docs/machine-types-online-prediction) in the
      # `machineType` field, you must specify `TENSORFLOW`
      # for the framework.
  &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
      # prevent simultaneous updates of a model from overwriting each other.
      # It is strongly suggested that systems make use of the `etag` in the
      # read-modify-write cycle to perform model updates in order to avoid race
      # conditions: An `etag` is returned in the response to `GetVersion`, and
      # systems are expected to put that etag in the request to `UpdateVersion` to
      # ensure that their change will be applied to the model as intended.
  &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
      # requests that do not specify a version.
      #
      # You can change the default version by calling
      # projects.models.versions.setDefault.
  &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
      # Only specify this field if you have specified a Compute Engine (N1) machine
      # type in the `machineType` field. Learn more about [using GPUs for online
      # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
      # Note that the AcceleratorConfig can be used in both Jobs and Versions.
      # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
      # [accelerators for online
      # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
    &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
    &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
  },
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a
        # network API call.
      &quot;response&quot;: { # The normal response of the operation in case of success. If the original
          # method returns no data on success, such as `Delete`, the response is
          # `google.protobuf.Empty`. If the original method is standard
          # `Get`/`Create`/`Update`, the response should be the resource. For other
          # methods, the response should have the type `XxxResponse`, where `Xxx`
          # is the original method name. For example, if the original method name
          # is `TakeSnapshot()`, the inferred response type is
          # `TakeSnapshotResponse`.
        &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
      },
      &quot;name&quot;: &quot;A String&quot;, # The server-assigned name, which is only unique within the same service that
          # originally returns it. If you use the default HTTP mapping, the
          # `name` should be a resource name ending with `operations/{unique_id}`.
      &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
          # different programming environments, including REST APIs and RPC APIs. It is
          # used by [gRPC](https://github.com/grpc). Each `Status` message contains
          # three pieces of data: error code, error message, and error details.
          #
          # You can find out more about this error model and how to work with it in the
          # [API Design Guide](https://cloud.google.com/apis/design/errors).
        &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of
            # message types for APIs to use.
          {
            &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
          },
        ],
        &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
        &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
            # user-facing error message should be localized and sent in the
            # google.rpc.Status.details field, or localized by the client.
      },
      &quot;metadata&quot;: { # Service-specific metadata associated with the operation. It typically
          # contains progress information and common metadata such as create time.
          # Some services might not provide such metadata. Any method that returns a
          # long-running operation should document the metadata type, if any.
        &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
      },
      &quot;done&quot;: True or False, # If the value is `false`, it means the operation is still in progress.
          # If `true`, the operation is completed, and either `error` or `response` is
          # available.
    }</pre>
</div>
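<p>As a practical orientation, the sketch below calls this method through the google-api-python-client library. The project, model, bucket path, and version name are hypothetical placeholders, and the defaults in the helper are only one plausible configuration, not the library's prescribed usage.</p>

```python
def make_version_body(name, deployment_uri, runtime_version="1.15",
                      python_version="3.7", framework="TENSORFLOW"):
    """Builds a minimal Version resource dict for versions().create().

    Only the required fields plus `framework` are filled in; the field
    reference above documents the full set of options.
    """
    return {
        "name": name,
        "deploymentUri": deployment_uri,
        "runtimeVersion": runtime_version,
        "pythonVersion": python_version,
        "framework": framework,
    }


def create_version(project, model, body):
    """Issues the create call; returns a long-running Operation resource."""
    # Imported here so make_version_body stays dependency-free.
    from googleapiclient import discovery

    ml = discovery.build("ml", "v1")
    parent = "projects/{}/models/{}".format(project, model)
    return ml.projects().models().versions().create(
        parent=parent, body=body).execute()
```

<p>Because create returns an Operation, the new version is not ready until that operation completes; poll projects.operations.get (or re-fetch the version and check its state) before sending prediction traffic.</p>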

<div class="method">
    <code class="details" id="delete">delete(name, x__xgafv=None)</code>
  <pre>Deletes a model version.

Each model can have multiple versions deployed and in use at any given
time. Use this method to remove a single version.

Note: You cannot delete the version that is set as the default version
of the model unless it is the only remaining version.

Args:
  name: string, Required. The name of the version. You can get the names of all the
versions of a model by calling
projects.models.versions.list. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a
        # network API call.
      &quot;response&quot;: { # The normal response of the operation in case of success. If the original
          # method returns no data on success, such as `Delete`, the response is
          # `google.protobuf.Empty`. If the original method is standard
          # `Get`/`Create`/`Update`, the response should be the resource. For other
          # methods, the response should have the type `XxxResponse`, where `Xxx`
          # is the original method name. For example, if the original method name
          # is `TakeSnapshot()`, the inferred response type is
          # `TakeSnapshotResponse`.
        &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
      },
      &quot;name&quot;: &quot;A String&quot;, # The server-assigned name, which is only unique within the same service that
          # originally returns it. If you use the default HTTP mapping, the
          # `name` should be a resource name ending with `operations/{unique_id}`.
      &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
          # different programming environments, including REST APIs and RPC APIs. It is
          # used by [gRPC](https://github.com/grpc). Each `Status` message contains
          # three pieces of data: error code, error message, and error details.
          #
          # You can find out more about this error model and how to work with it in the
          # [API Design Guide](https://cloud.google.com/apis/design/errors).
        &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of
            # message types for APIs to use.
          {
            &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
          },
        ],
        &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
        &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
            # user-facing error message should be localized and sent in the
            # google.rpc.Status.details field, or localized by the client.
      },
      &quot;metadata&quot;: { # Service-specific metadata associated with the operation. It typically
          # contains progress information and common metadata such as create time.
          # Some services might not provide such metadata. Any method that returns a
          # long-running operation should document the metadata type, if any.
        &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
      },
      &quot;done&quot;: True or False, # If the value is `false`, it means the operation is still in progress.
          # If `true`, the operation is completed, and either `error` or `response` is
          # available.
    }</pre>
</div>
565
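Methods on this resource that return a long-running Operation signal completion through the `done`, `error`, and `response` fields shown in the schema above. A minimal polling sketch under that schema; `get_operation` is a hypothetical stand-in for the `projects.operations.get` call:

```python
import time

def wait_for_operation(get_operation, name, poll_seconds=2.0, max_polls=100):
    """Polls a long-running operation until `done`, then returns its result.

    get_operation: callable taking the operation name and returning an
        Operation dict of the shape documented above (a hypothetical
        stand-in for the projects.operations.get API call).
    """
    for _ in range(max_polls):
        op = get_operation(name)
        if op.get("done"):
            # Exactly one of `error` or `response` is set once done is true.
            if "error" in op:
                raise RuntimeError(op["error"].get("message", "operation failed"))
            return op.get("response", {})
        time.sleep(poll_seconds)
    raise TimeoutError(f"operation {name} did not finish")
```

This is a sketch, not part of the generated client; in practice the interval and retry budget should match your deployment's expected operation latency.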
<div class="method">
    <code class="details" id="get">get(name, x__xgafv=None)</code>
  <pre>Gets information about a model version.

Models can have multiple versions. You can call
projects.models.versions.list
to get the same information that this method returns for all of the
versions of a model.

Args:
  name: string, Required. The name of the version. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Represents a version of the model.
      #
      # Each version is a trained model deployed in the cloud, ready to handle
      # prediction requests. A model can have multiple versions. You can get
      # information about all of the versions of a given model by calling
      # projects.models.versions.list.
    &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
    &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
        # model. You should generally use `auto_scaling` with an appropriate
        # `min_nodes` instead, but this option is available if you want more
        # predictable billing. Beware that latency and error rates will increase
        # if the traffic exceeds the capacity of the system to serve it based
        # on the selected number of nodes.
      &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
          # starting from the time the model is deployed, so the cost of operating
          # this model will be proportional to `nodes` * number of hours since
          # last billing cycle plus the cost for each prediction performed.
    },
    &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
        #
        # The version name must be unique within the model it is created in.
    &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
    &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
        #
        # The following Python versions are available:
        #
        # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
        #   later.
        # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
        #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
        # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
        #   earlier.
        #
        # Read more about the Python versions available for [each runtime
        # version](/ml-engine/docs/runtime-version-list).
    &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
    &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
        # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
        # the Predictor interface described in this reference field. The module
        # containing this class should be included in a package provided to the
        # [`packageUris` field](#Version.FIELDS.package_uris).
        #
        # Specify this field if and only if you are deploying a [custom prediction
        # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
        # If you specify this field, you must set
        # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
        # you must set `machineType` to a [legacy (MLS1)
        # machine type](/ml-engine/docs/machine-types-online-prediction).
        #
        # The following code sample provides the Predictor interface:
        #
        # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
        # class Predictor(object):
        #   &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
        #
        #   def predict(self, instances, **kwargs):
        #     &quot;&quot;&quot;Performs custom prediction.
        #
        #     Instances are the decoded values from the request. They have already
        #     been deserialized from JSON.
        #
        #     Args:
        #       instances: A list of prediction input instances.
        #       **kwargs: A dictionary of keyword args provided as additional
        #         fields on the predict request body.
        #
        #     Returns:
        #       A list of outputs containing the prediction results. This list must
        #       be JSON serializable.
        #     &quot;&quot;&quot;
        #     raise NotImplementedError()
        #
        #   @classmethod
        #   def from_path(cls, model_dir):
        #     &quot;&quot;&quot;Creates an instance of Predictor using the given path.
        #
        #     Loading of the predictor should be done in this method.
        #
        #     Args:
        #       model_dir: The local directory that contains the exported model
        #         file along with any additional files uploaded when creating the
        #         version resource.
        #
        #     Returns:
        #       An instance implementing this Predictor class.
        #     &quot;&quot;&quot;
        #     raise NotImplementedError()
        # &lt;/pre&gt;
        #
        # Learn more about [the Predictor interface and custom prediction
        # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
    &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
        # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
        # or [scikit-learn pipelines with custom
        # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
        #
        # For a custom prediction routine, one of these packages must contain your
        # Predictor class (see
        # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
        # include any dependencies that your Predictor or scikit-learn pipeline
        # uses that are not already included in your selected [runtime
        # version](/ml-engine/docs/tensorflow/runtime-version-list).
        #
        # If you specify this field, you must also set
        # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
      &quot;A String&quot;,
    ],
    &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
        # Some explanation features require additional metadata to be loaded
        # as part of the model payload.
        # There are two feature attribution methods supported for TensorFlow models:
        # integrated gradients and sampled Shapley.
        # [Learn more about feature
        # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
      &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value, taking advantage
          # of the model&#x27;s fully differentiable structure. Refer to this paper for
          # more details: https://arxiv.org/abs/1703.01365
        &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
            # A good value to start with is 50; gradually increase it until the
            # sum-to-diff property is met within the desired error range.
      },
      &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
          # contribute to the label being predicted. A sampling strategy is used to
          # approximate the value rather than considering all subsets of features.
        &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
            # Shapley values.
      },
      &quot;xraiAttribution&quot;: { # Attributes credit by computing the XRAI attribution, taking advantage
          # of the model&#x27;s fully differentiable structure. Refer to this paper for
          # more details: https://arxiv.org/abs/1906.02825
          # Currently only implemented for models with natural image inputs.
        &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
            # A good value to start with is 50; gradually increase it until the
            # sum-to-diff property is met within the desired error range.
      },
    },
    &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
        # create the version. See the
        # [guide to model
        # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
        # information.
        #
        # When passing Version to
        # projects.models.versions.create
        # the model service uses the specified location as the source of the model.
        # Once deployed, the model version is hosted by the prediction service, so
        # this location is useful only as a historical record.
        # The total number of model files can&#x27;t exceed 1000.
    &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
        # response to increases and decreases in traffic. Care should be
        # taken to ramp up traffic according to the model&#x27;s ability to scale,
        # or you will start seeing increases in latency and 429 response codes.
        #
        # Note that you cannot use AutoScaling if your version uses
        # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
        # `manual_scaling`.
      &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
          # nodes are always up, starting from the time the model is deployed.
          # Therefore, the cost of operating this model will be at least
          # `rate` * `min_nodes` * number of hours since last billing cycle,
          # where `rate` is the cost per node-hour as documented in the
          # [pricing guide](/ml-engine/docs/pricing),
          # even if no predictions are performed. There is additional cost for each
          # prediction performed.
          #
          # Unlike manual scaling, if the load gets too heavy for the nodes
          # that are up, the service will automatically add nodes to handle the
          # increased load as well as scale back as traffic drops, always maintaining
          # at least `min_nodes`. You will be charged for the time in which additional
          # nodes are used.
          #
          # If `min_nodes` is not specified and AutoScaling is used with a [legacy
          # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
          # `min_nodes` defaults to 0, in which case, when traffic to a model stops
          # (and after a cool-down period), nodes will be shut down and no charges will
          # be incurred until traffic to the model resumes.
          #
          # If `min_nodes` is not specified and AutoScaling is used with a [Compute
          # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
          # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
          # Compute Engine machine type.
          #
          # Note that you cannot use AutoScaling if your version uses
          # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
          # ManualScaling.
          #
          # You can set `min_nodes` when creating the model version, and you can also
          # update `min_nodes` for an existing version:
          # &lt;pre&gt;
          # update_body.json:
          # {
          #   &#x27;autoScaling&#x27;: {
          #     &#x27;minNodes&#x27;: 5
          #   }
          # }
          # &lt;/pre&gt;
          # HTTP request:
          # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
          # PATCH
          # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
          # -d @./update_body.json
          # &lt;/pre&gt;
    },
    &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your model
        # versions. Each label is a key-value pair, where both the key and the value
        # are arbitrary strings that you supply.
        # For more information, see the documentation on
        # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
      &quot;a_key&quot;: &quot;A String&quot;,
    },
    &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
    &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
        # projects.models.versions.patch
        # request. Specifying it in a
        # projects.models.versions.create
        # request has no effect.
        #
        # Configures the request-response pair logging on predictions from this
        # Version.
        # Online prediction requests to a model version and the responses to these
        # requests are converted to raw strings and saved to the specified BigQuery
        # table. Logging is constrained by [BigQuery quotas and
        # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
        # AI Platform Prediction does not log request-response pairs, but it continues
        # to serve predictions.
        #
        # If you are using [continuous
        # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
        # specify this configuration manually. Setting up continuous evaluation
        # automatically enables logging of request-response pairs.
      &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
          # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
          #
          # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
          # for your project must have permission to write to it. The table must have
          # the following [schema](/bigquery/docs/schemas):
          #
          # &lt;table&gt;
          # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
          # &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
          # &lt;/table&gt;
      &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
          # For example, if you want to log 10% of requests, enter `0.1`. The sampling
          # window is the lifetime of the model version. Defaults to 0.
    },
    &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
    &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
        # applies to online prediction service. If this field is not specified, it
        # defaults to `mls1-c1-m2`.
        #
        # Online prediction supports the following machine types:
        #
        # * `mls1-c1-m2`
        # * `mls1-c4-m2`
        # * `n1-standard-2`
        # * `n1-standard-4`
        # * `n1-standard-8`
        # * `n1-standard-16`
        # * `n1-standard-32`
        # * `n1-highmem-2`
        # * `n1-highmem-4`
        # * `n1-highmem-8`
        # * `n1-highmem-16`
        # * `n1-highmem-32`
        # * `n1-highcpu-2`
        # * `n1-highcpu-4`
        # * `n1-highcpu-8`
        # * `n1-highcpu-16`
        # * `n1-highcpu-32`
        #
        # `mls1-c1-m2` is generally available. All other machine types are available
        # in beta. Learn more about the [differences between machine
        # types](/ml-engine/docs/machine-types-online-prediction).
    &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
        #
        # For more information, see the
        # [runtime version list](/ml-engine/docs/runtime-version-list) and
        # [how to manage runtime versions](/ml-engine/docs/versioning).
    &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
    &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
        # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
        # `XGBOOST`. If you do not specify a framework, AI Platform
        # will analyze files in the deployment_uri to determine a framework. If you
        # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
        # of the model to 1.4 or greater.
        #
        # Do **not** specify a framework if you&#x27;re deploying a [custom
        # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
        #
        # If you specify a [Compute Engine (N1) machine
        # type](/ml-engine/docs/machine-types-online-prediction) in the
        # `machineType` field, you must specify `TENSORFLOW`
        # for the framework.
    &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
        # prevent simultaneous updates of a model from overwriting each other.
        # It is strongly suggested that systems make use of the `etag` in the
        # read-modify-write cycle to perform model updates in order to avoid race
        # conditions: An `etag` is returned in the response to `GetVersion`, and
        # systems are expected to put that etag in the request to `UpdateVersion` to
        # ensure that their change will be applied to the model as intended.
    &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
        # requests that do not specify a version.
        #
        # You can change the default version by calling
        # projects.models.versions.setDefault.
    &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
        # Only specify this field if you have specified a Compute Engine (N1) machine
        # type in the `machineType` field. Learn more about [using GPUs for online
        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
        # Note that the AcceleratorConfig can be used in both Jobs and Versions.
        # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
        # [accelerators for online
        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
      &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
      &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
    },
  }</pre>
</div>

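As a sketch of how this method is typically reached from the Google API Python client: the `name` argument is the full resource name of the version. The project, model, and version IDs below are placeholders, and the commented-out call requires credentials, so only the name helper is shown as runnable code.

```python
def version_resource_name(project, model, version):
    # Builds the `name` argument expected by projects.models.versions.get,
    # e.g. "projects/my-project/models/my-model/versions/v1".
    return f"projects/{project}/models/{model}/versions/{version}"

# Hypothetical usage with the discovery-based client (needs credentials):
#   from googleapiclient import discovery
#   ml = discovery.build("ml", "v1")
#   request = ml.projects().models().versions().get(
#       name=version_resource_name("my-project", "my-model", "v1"))
#   version = request.execute()
#   print(version["state"])
```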
<div class="method">
    <code class="details" id="list">list(parent, pageToken=None, pageSize=None, filter=None, x__xgafv=None)</code>
  <pre>Gets basic information about all the versions of a model.

If you expect that a model has many versions, or if you need to handle
only a limited number of results at a time, you can request that the list
be retrieved in batches (called pages).

If there are no versions that match the request parameters, the list
request returns an empty response body: {}.

Args:
  parent: string, Required. The name of the model for which to list versions. (required)
  pageToken: string, Optional. A page token to request the next page of results.

You get the token from the `next_page_token` field of the response from
the previous call.
  pageSize: integer, Optional. The number of versions to retrieve per &quot;page&quot; of results. If
there are more remaining results than this number, the response message
will contain a valid value in the `next_page_token` field.

The default value is 20, and the maximum page size is 100.
  filter: string, Optional. Specifies the subset of versions to retrieve.
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

948 { # Response message for the ListVersions method.
Bu Sun Kim65020912020-05-20 12:08:20 -0700949 &quot;nextPageToken&quot;: &quot;A String&quot;, # Optional. Pass this token as the `page_token` field of the request for a
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400950 # subsequent call.
Bu Sun Kim65020912020-05-20 12:08:20 -0700951 &quot;versions&quot;: [ # The list of versions.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400952 { # Represents a version of the model.
953 #
954 # Each version is a trained model deployed in the cloud, ready to handle
955 # prediction requests. A model can have multiple versions. You can get
956 # information about all of the versions of a given model by calling
Dan O'Mearadd494642020-05-01 07:42:23 -0700957 # projects.models.versions.list.
Bu Sun Kim65020912020-05-20 12:08:20 -0700958 &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
959 &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
960 # model. You should generally use `auto_scaling` with an appropriate
961 # `min_nodes` instead, but this option is available if you want more
962 # predictable billing. Beware that latency and error rates will increase
963 # if the traffic exceeds that capability of the system to serve it based
964 # on the selected number of nodes.
965 &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
966 # starting from the time the model is deployed, so the cost of operating
967 # this model will be proportional to `nodes` * number of hours since
968 # last billing cycle plus the cost for each prediction performed.
Dan O'Mearadd494642020-05-01 07:42:23 -0700969 },
Bu Sun Kim65020912020-05-20 12:08:20 -0700970 &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
971 #
972 # The version name must be unique within the model it is created in.
973 &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
974 &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
975 #
976 # The following Python versions are available:
977 #
978 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
979 # later.
980 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
981 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
982 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
983 # earlier.
984 #
985 # Read more about the Python versions available for [each runtime
986 # version](/ml-engine/docs/runtime-version-list).
987 &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
988 &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
Dan O'Mearadd494642020-05-01 07:42:23 -0700989 # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700990 # the Predictor interface described in this reference field. The module
991 # containing this class should be included in a package provided to the
992 # [`packageUris` field](#Version.FIELDS.package_uris).
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400993 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700994 # Specify this field if and only if you are deploying a [custom prediction
995 # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
996 # If you specify this field, you must set
Dan O'Mearadd494642020-05-01 07:42:23 -0700997 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
998 # you must set `machineType` to a [legacy (MLS1)
999 # machine type](/ml-engine/docs/machine-types-online-prediction).
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001000 #
1001 # The following code sample provides the Predictor interface:
1002 #
Bu Sun Kim65020912020-05-20 12:08:20 -07001003 # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001004 # class Predictor(object):
Bu Sun Kim65020912020-05-20 12:08:20 -07001005 # &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001006 #
1007 # def predict(self, instances, **kwargs):
Bu Sun Kim65020912020-05-20 12:08:20 -07001008 # &quot;&quot;&quot;Performs custom prediction.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001009 #
1010 # Instances are the decoded values from the request. They have already
1011 # been deserialized from JSON.
1012 #
1013 # Args:
1014 # instances: A list of prediction input instances.
1015 # **kwargs: A dictionary of keyword args provided as additional
1016 # fields on the predict request body.
1017 #
1018 # Returns:
1019 # A list of outputs containing the prediction results. This list must
1020 # be JSON serializable.
Bu Sun Kim65020912020-05-20 12:08:20 -07001021 # &quot;&quot;&quot;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001022 # raise NotImplementedError()
1023 #
1024 # @classmethod
1025 # def from_path(cls, model_dir):
Bu Sun Kim65020912020-05-20 12:08:20 -07001026 # &quot;&quot;&quot;Creates an instance of Predictor using the given path.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001027 #
1028 # Loading of the predictor should be done in this method.
1029 #
1030 # Args:
1031 # model_dir: The local directory that contains the exported model
1032 # file along with any additional files uploaded when creating the
1033 # version resource.
1034 #
1035 # Returns:
1036 # An instance implementing this Predictor class.
Bu Sun Kim65020912020-05-20 12:08:20 -07001037 # &quot;&quot;&quot;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001038 # raise NotImplementedError()
Dan O'Mearadd494642020-05-01 07:42:23 -07001039 # &lt;/pre&gt;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001040 #
1041 # Learn more about [the Predictor interface and custom prediction
1042 # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
Bu Sun Kim65020912020-05-20 12:08:20 -07001043 &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001044 # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
1045 # or [scikit-learn pipelines with custom
1046 # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
1047 #
1048 # For a custom prediction routine, one of these packages must contain your
1049 # Predictor class (see
1050 # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
1051 # include any dependencies used by your Predictor or scikit-learn pipeline
1052 # uses that are not already included in your selected [runtime
1053 # version](/ml-engine/docs/tensorflow/runtime-version-list).
1054 #
1055 # If you specify this field, you must also set
1056 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
Bu Sun Kim65020912020-05-20 12:08:20 -07001057 &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001058 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07001059 &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
1060 # Some explanation features require additional metadata to be loaded
1061 # as part of the model payload.
1062 # There are two feature attribution methods supported for TensorFlow models:
1063 # integrated gradients and sampled Shapley.
1064 # [Learn more about feature
1065 # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
      &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value, taking advantage
          # of the model&#x27;s fully differentiable structure. Refer to this paper for
          # more details: https://arxiv.org/abs/1703.01365
        &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
            # A good starting value is 50; gradually increase it until the
            # sum-to-diff property is met within the desired error range.
      },
      &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
          # contribute to the label being predicted. A sampling strategy is used to
          # approximate the value rather than considering all subsets of features.
        &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
            # Shapley values.
      },
      &quot;xraiAttribution&quot;: { # Attributes credit by computing XRAI, taking advantage
          # of the model&#x27;s fully differentiable structure. Refer to this paper for
          # more details: https://arxiv.org/abs/1906.02825
          # Currently only implemented for models with natural image inputs.
        &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
            # A good starting value is 50; gradually increase it until the
            # sum-to-diff property is met within the desired error range.
      },
1094 },
1095 &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001096 # create the version. See the
1097 # [guide to model
1098 # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
1099 # information.
1100 #
1101 # When passing Version to
Dan O'Mearadd494642020-05-01 07:42:23 -07001102 # projects.models.versions.create
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001103 # the model service uses the specified location as the source of the model.
1104 # Once deployed, the model version is hosted by the prediction service, so
1105 # this location is useful only as a historical record.
Bu Sun Kim65020912020-05-20 12:08:20 -07001106 # The total number of model files can&#x27;t exceed 1000.
1107 &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
Dan O'Mearadd494642020-05-01 07:42:23 -07001108 # response to increases and decreases in traffic. Care should be
Bu Sun Kim65020912020-05-20 12:08:20 -07001109 # taken to ramp up traffic according to the model&#x27;s ability to scale
Dan O'Mearadd494642020-05-01 07:42:23 -07001110 # or you will start seeing increases in latency and 429 response codes.
1111 #
1112 # Note that you cannot use AutoScaling if your version uses
          # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
1114 # `manual_scaling`.
Bu Sun Kim65020912020-05-20 12:08:20 -07001115 &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
Dan O'Mearadd494642020-05-01 07:42:23 -07001116 # nodes are always up, starting from the time the model is deployed.
1117 # Therefore, the cost of operating this model will be at least
1118 # `rate` * `min_nodes` * number of hours since last billing cycle,
1119 # where `rate` is the cost per node-hour as documented in the
1120 # [pricing guide](/ml-engine/docs/pricing),
1121 # even if no predictions are performed. There is additional cost for each
1122 # prediction performed.
1123 #
1124 # Unlike manual scaling, if the load gets too heavy for the nodes
1125 # that are up, the service will automatically add nodes to handle the
1126 # increased load as well as scale back as traffic drops, always maintaining
1127 # at least `min_nodes`. You will be charged for the time in which additional
1128 # nodes are used.
1129 #
1130 # If `min_nodes` is not specified and AutoScaling is used with a [legacy
1131 # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
1132 # `min_nodes` defaults to 0, in which case, when traffic to a model stops
1133 # (and after a cool-down period), nodes will be shut down and no charges will
1134 # be incurred until traffic to the model resumes.
1135 #
1136 # If `min_nodes` is not specified and AutoScaling is used with a [Compute
1137 # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
1138 # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
1139 # Compute Engine machine type.
1140 #
1141 # Note that you cannot use AutoScaling if your version uses
1142 # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
1143 # ManualScaling.
1144 #
1145 # You can set `min_nodes` when creating the model version, and you can also
1146 # update `min_nodes` for an existing version:
1147 # &lt;pre&gt;
1148 # update_body.json:
1149 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07001150 # &#x27;autoScaling&#x27;: {
1151 # &#x27;minNodes&#x27;: 5
Dan O'Mearadd494642020-05-01 07:42:23 -07001152 # }
1153 # }
1154 # &lt;/pre&gt;
1155 # HTTP request:
Bu Sun Kim65020912020-05-20 12:08:20 -07001156 # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
Dan O'Mearadd494642020-05-01 07:42:23 -07001157 # PATCH
1158 # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
1159 # -d @./update_body.json
1160 # &lt;/pre&gt;
1161 },
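The cost floor described for `min_nodes` above can be sketched numerically; the node-hour rate below is a hypothetical placeholder, not a published price:

```python
# Sketch of the autoscaling cost floor described above. The rate is a
# hypothetical placeholder (in cents); real node-hour prices are in the
# pricing guide.

def min_autoscaling_cost(rate_cents_per_node_hour, min_nodes, hours):
    # Cost is at least `rate` * `min_nodes` * hours since the last billing
    # cycle, even if no predictions are performed.
    return rate_cents_per_node_hour * min_nodes * hours

# Hypothetical 5 cents/node-hour, 2 always-on nodes, 24 hours:
floor = min_autoscaling_cost(5, 2, 24)  # 240 cents
```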
    &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your model
1163 # versions. Each label is a key-value pair, where both the key and the value
1164 # are arbitrary strings that you supply.
1165 # For more information, see the documentation on
1166 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
1167 &quot;a_key&quot;: &quot;A String&quot;,
1168 },
1169 &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
1170 &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
Dan O'Mearadd494642020-05-01 07:42:23 -07001171 # projects.models.versions.patch
1172 # request. Specifying it in a
1173 # projects.models.versions.create
1174 # request has no effect.
1175 #
1176 # Configures the request-response pair logging on predictions from this
1177 # Version.
1178 # Online prediction requests to a model version and the responses to these
1179 # requests are converted to raw strings and saved to the specified BigQuery
1180 # table. Logging is constrained by [BigQuery quotas and
1181 # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
1182 # AI Platform Prediction does not log request-response pairs, but it continues
1183 # to serve predictions.
1184 #
1185 # If you are using [continuous
1186 # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
1187 # specify this configuration manually. Setting up continuous evaluation
1188 # automatically enables logging of request-response pairs.
Bu Sun Kim65020912020-05-20 12:08:20 -07001189 &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
1190 # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
Dan O'Mearadd494642020-05-01 07:42:23 -07001191 #
Bu Sun Kim65020912020-05-20 12:08:20 -07001192 # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
Dan O'Mearadd494642020-05-01 07:42:23 -07001193 # for your project must have permission to write to it. The table must have
1194 # the following [schema](/bigquery/docs/schemas):
1195 #
1196 # &lt;table&gt;
Bu Sun Kim65020912020-05-20 12:08:20 -07001197 # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
1198 # &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
Dan O'Mearadd494642020-05-01 07:42:23 -07001199 # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1200 # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1201 # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1202 # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
1203 # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
1204 # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
1205 # &lt;/table&gt;
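A sketch of one logged row matching the schema above; the values are illustrative, and in practice the prediction service writes these rows itself:

```python
# Sketch of a request-response log row matching the documented BigQuery
# schema. Values are illustrative; the service populates real rows.
import json
import time

def make_log_row(model, version, request_body, response_body):
    return {
        "model": model,                               # STRING, REQUIRED
        "model_version": version,                     # STRING, REQUIRED
        "time": time.time(),                          # TIMESTAMP, REQUIRED
        "raw_data": json.dumps(request_body),         # STRING, REQUIRED
        "raw_prediction": json.dumps(response_body),  # STRING, NULLABLE
        "groundtruth": None,                          # STRING, NULLABLE
    }

row = make_log_row("m", "v1", {"instances": [[1, 2]]}, {"predictions": [0.9]})
```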
Bu Sun Kim65020912020-05-20 12:08:20 -07001206 &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
1207 # For example, if you want to log 10% of requests, enter `0.1`. The sampling
1208 # window is the lifetime of the model version. Defaults to 0.
1209 },
1210 &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
1211 &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
1212 # applies to online prediction service. If this field is not specified, it
1213 # defaults to `mls1-c1-m2`.
1214 #
1215 # Online prediction supports the following machine types:
1216 #
1217 # * `mls1-c1-m2`
1218 # * `mls1-c4-m2`
1219 # * `n1-standard-2`
1220 # * `n1-standard-4`
1221 # * `n1-standard-8`
1222 # * `n1-standard-16`
1223 # * `n1-standard-32`
1224 # * `n1-highmem-2`
1225 # * `n1-highmem-4`
1226 # * `n1-highmem-8`
1227 # * `n1-highmem-16`
1228 # * `n1-highmem-32`
1229 # * `n1-highcpu-2`
1230 # * `n1-highcpu-4`
1231 # * `n1-highcpu-8`
1232 # * `n1-highcpu-16`
1233 # * `n1-highcpu-32`
1234 #
1235 # `mls1-c1-m2` is generally available. All other machine types are available
1236 # in beta. Learn more about the [differences between machine
1237 # types](/ml-engine/docs/machine-types-online-prediction).
1238 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
1239 #
1240 # For more information, see the
1241 # [runtime version list](/ml-engine/docs/runtime-version-list) and
1242 # [how to manage runtime versions](/ml-engine/docs/versioning).
1243 &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
1244 &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
1245 # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
1246 # `XGBOOST`. If you do not specify a framework, AI Platform
1247 # will analyze files in the deployment_uri to determine a framework. If you
1248 # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
1249 # of the model to 1.4 or greater.
1250 #
1251 # Do **not** specify a framework if you&#x27;re deploying a [custom
1252 # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
1253 #
1254 # If you specify a [Compute Engine (N1) machine
1255 # type](/ml-engine/docs/machine-types-online-prediction) in the
1256 # `machineType` field, you must specify `TENSORFLOW`
1257 # for the framework.
1258 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
1259 # prevent simultaneous updates of a model from overwriting each other.
1260 # It is strongly suggested that systems make use of the `etag` in the
1261 # read-modify-write cycle to perform model updates in order to avoid race
1262 # conditions: An `etag` is returned in the response to `GetVersion`, and
1263 # systems are expected to put that etag in the request to `UpdateVersion` to
1264 # ensure that their change will be applied to the model as intended.
1265 &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
1266 # requests that do not specify a version.
1267 #
1268 # You can change the default version by calling
      # projects.models.versions.setDefault.
1270 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
1271 # Only specify this field if you have specified a Compute Engine (N1) machine
1272 # type in the `machineType` field. Learn more about [using GPUs for online
1273 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1274 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1275 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1276 # [accelerators for online
1277 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1278 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
1279 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
Dan O'Mearadd494642020-05-01 07:42:23 -07001280 },
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001281 },
1282 ],
1283 }</pre>
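<p>The <code>etag</code> field in the Version schema above supports an optimistic-concurrency read-modify-write cycle. A minimal sketch of that loop, using a hypothetical in-memory stand-in for the service rather than the real client library:</p>

```python
# Sketch of the etag read-modify-write cycle described for Version.etag.
# `FakeVersionService` is a hypothetical in-memory stand-in for the API.
import uuid

class ConflictError(Exception):
    pass

class FakeVersionService:
    def __init__(self, version):
        self._version = dict(version, etag=uuid.uuid4().hex)

    def get(self):
        return dict(self._version)

    def update(self, body):
        # Reject the write if the caller's etag is stale, as the API would.
        if body.get("etag") != self._version["etag"]:
            raise ConflictError("etag mismatch; re-read and retry")
        self._version = dict(body, etag=uuid.uuid4().hex)
        return dict(self._version)

def update_description(service, text, retries=3):
    """Read-modify-write with retry on etag conflicts."""
    for _ in range(retries):
        version = service.get()        # read: includes the current etag
        version["description"] = text  # modify
        try:
            return service.update(version)  # write: etag checked server-side
        except ConflictError:
            continue
    raise ConflictError("gave up after retries")

svc = FakeVersionService({"name": "projects/p/models/m/versions/v1",
                          "description": ""})
updated = update_description(svc, "canary build")
```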
1284</div>
1285
1286<div class="method">
1287 <code class="details" id="list_next">list_next(previous_request, previous_response)</code>
1288 <pre>Retrieves the next page of results.
1289
1290Args:
1291 previous_request: The request for the previous page. (required)
1292 previous_response: The response from the request for the previous page. (required)
1293
1294Returns:
Bu Sun Kim65020912020-05-20 12:08:20 -07001295 A request object that you can call &#x27;execute()&#x27; on to request the next
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001296 page. Returns None if there are no more items in the collection.
1297 </pre>
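<p>The pagination contract above can be sketched as a loop; the stand-in request objects below are hypothetical, while the real ones come from <code>projects().models().versions().list(...)</code> and <code>list_next(...)</code> on a built service object:</p>

```python
# Sketch of paging through versions with list / list_next. FakePageRequest is
# a hypothetical stand-in for the request objects the client library returns.

class FakePageRequest:
    """Stand-in for the request object returned by list()/list_next()."""
    def __init__(self, pages, index=0):
        self._pages, self._index = pages, index

    def execute(self):
        return self._pages[self._index]

    def next(self):
        # Mirrors list_next(): returns None once the collection is exhausted.
        if self._index + 1 >= len(self._pages):
            return None
        return FakePageRequest(self._pages, self._index + 1)

def all_versions(request, list_next):
    """Collect every version across pages, mirroring the documented loop."""
    versions = []
    while request is not None:
        response = request.execute()
        versions.extend(response.get("versions", []))
        request = list_next(request, response)
    return versions

pages = [{"versions": [{"name": "v1"}, {"name": "v2"}]},
         {"versions": [{"name": "v3"}]}]
names = [v["name"] for v in all_versions(
    FakePageRequest(pages), lambda req, resp: req.next())]
# names == ["v1", "v2", "v3"]
```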
1298</div>
1299
1300<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -07001301 <code class="details" id="patch">patch(name, body=None, updateMask=None, x__xgafv=None)</code>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001302 <pre>Updates the specified Version resource.
1303
Dan O'Mearadd494642020-05-01 07:42:23 -07001304Currently the only update-able fields are `description`,
1305`requestLoggingConfig`, `autoScaling.minNodes`, and `manualScaling.nodes`.
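For example, a sketch of assembling the `update_mask` query parameter from the fields being changed (pure string handling; the request itself would go through the client library):

```python
# Sketch: build the comma-separated updateMask for versions.patch from the
# mutable field paths listed above.

MUTABLE_FIELDS = {"description", "requestLoggingConfig",
                  "autoScaling.minNodes", "manualScaling.nodes"}

def build_update_mask(changed_fields):
    """Return an updateMask string, rejecting non-updatable field paths."""
    bad = set(changed_fields) - MUTABLE_FIELDS
    if bad:
        raise ValueError(f"not updatable via patch: {sorted(bad)}")
    return ",".join(sorted(changed_fields))

mask = build_update_mask(["description", "autoScaling.minNodes"])
# mask == "autoScaling.minNodes,description"
```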
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001306
1307Args:
  name: string, Required. The name of the model version. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -07001309 body: object, The request body.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001310 The object takes the form of:
1311
1312{ # Represents a version of the model.
1313 #
1314 # Each version is a trained model deployed in the cloud, ready to handle
1315 # prediction requests. A model can have multiple versions. You can get
1316 # information about all of the versions of a given model by calling
Dan O'Mearadd494642020-05-01 07:42:23 -07001317 # projects.models.versions.list.
Bu Sun Kim65020912020-05-20 12:08:20 -07001318 &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
1319 &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
1320 # model. You should generally use `auto_scaling` with an appropriate
1321 # `min_nodes` instead, but this option is available if you want more
1322 # predictable billing. Beware that latency and error rates will increase
      # if the traffic exceeds the capacity of the system to serve it based
1324 # on the selected number of nodes.
1325 &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
1326 # starting from the time the model is deployed, so the cost of operating
1327 # this model will be proportional to `nodes` * number of hours since
1328 # last billing cycle plus the cost for each prediction performed.
Dan O'Mearadd494642020-05-01 07:42:23 -07001329 },
Bu Sun Kim65020912020-05-20 12:08:20 -07001330 &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
1331 #
1332 # The version name must be unique within the model it is created in.
1333 &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
1334 &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
1335 #
1336 # The following Python versions are available:
1337 #
1338 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
1339 # later.
1340 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
1341 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
1342 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
1343 # earlier.
1344 #
1345 # Read more about the Python versions available for [each runtime
1346 # version](/ml-engine/docs/runtime-version-list).
1347 &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
1348 &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
Dan O'Mearadd494642020-05-01 07:42:23 -07001349 # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001350 # the Predictor interface described in this reference field. The module
1351 # containing this class should be included in a package provided to the
1352 # [`packageUris` field](#Version.FIELDS.package_uris).
1353 #
1354 # Specify this field if and only if you are deploying a [custom prediction
1355 # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
1356 # If you specify this field, you must set
Dan O'Mearadd494642020-05-01 07:42:23 -07001357 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
1358 # you must set `machineType` to a [legacy (MLS1)
1359 # machine type](/ml-engine/docs/machine-types-online-prediction).
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001360 #
1361 # The following code sample provides the Predictor interface:
1362 #
Bu Sun Kim65020912020-05-20 12:08:20 -07001363 # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001364 # class Predictor(object):
Bu Sun Kim65020912020-05-20 12:08:20 -07001365 # &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001366 #
1367 # def predict(self, instances, **kwargs):
Bu Sun Kim65020912020-05-20 12:08:20 -07001368 # &quot;&quot;&quot;Performs custom prediction.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001369 #
1370 # Instances are the decoded values from the request. They have already
1371 # been deserialized from JSON.
1372 #
1373 # Args:
1374 # instances: A list of prediction input instances.
1375 # **kwargs: A dictionary of keyword args provided as additional
1376 # fields on the predict request body.
1377 #
1378 # Returns:
1379 # A list of outputs containing the prediction results. This list must
1380 # be JSON serializable.
Bu Sun Kim65020912020-05-20 12:08:20 -07001381 # &quot;&quot;&quot;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001382 # raise NotImplementedError()
1383 #
1384 # @classmethod
1385 # def from_path(cls, model_dir):
Bu Sun Kim65020912020-05-20 12:08:20 -07001386 # &quot;&quot;&quot;Creates an instance of Predictor using the given path.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001387 #
1388 # Loading of the predictor should be done in this method.
1389 #
1390 # Args:
1391 # model_dir: The local directory that contains the exported model
1392 # file along with any additional files uploaded when creating the
1393 # version resource.
1394 #
1395 # Returns:
1396 # An instance implementing this Predictor class.
Bu Sun Kim65020912020-05-20 12:08:20 -07001397 # &quot;&quot;&quot;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001398 # raise NotImplementedError()
Dan O'Mearadd494642020-05-01 07:42:23 -07001399 # &lt;/pre&gt;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001400 #
1401 # Learn more about [the Predictor interface and custom prediction
1402 # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
Bu Sun Kim65020912020-05-20 12:08:20 -07001403 &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001404 # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
1405 # or [scikit-learn pipelines with custom
1406 # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
1407 #
1408 # For a custom prediction routine, one of these packages must contain your
1409 # Predictor class (see
1410 # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
      # include any dependencies that your Predictor or scikit-learn pipeline
      # uses and that are not already included in your selected [runtime
1413 # version](/ml-engine/docs/tensorflow/runtime-version-list).
1414 #
1415 # If you specify this field, you must also set
1416 # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
Bu Sun Kim65020912020-05-20 12:08:20 -07001417 &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001418 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07001419 &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
1420 # Some explanation features require additional metadata to be loaded
1421 # as part of the model payload.
1422 # There are two feature attribution methods supported for TensorFlow models:
1423 # integrated gradients and sampled Shapley.
1424 # [Learn more about feature
1425 # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
      &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value, taking advantage
          # of the model&#x27;s fully differentiable structure. Refer to this paper for
          # more details: https://arxiv.org/abs/1703.01365
        &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
            # A good starting value is 50; gradually increase it until the
            # sum-to-diff property is met within the desired error range.
      },
      &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
          # contribute to the label being predicted. A sampling strategy is used to
          # approximate the value rather than considering all subsets of features.
        &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
            # Shapley values.
      },
      &quot;xraiAttribution&quot;: { # Attributes credit by computing XRAI, taking advantage
          # of the model&#x27;s fully differentiable structure. Refer to this paper for
          # more details: https://arxiv.org/abs/1906.02825
          # Currently only implemented for models with natural image inputs.
        &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
            # A good starting value is 50; gradually increase it until the
            # sum-to-diff property is met within the desired error range.
      },
1454 },
1455 &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001456 # create the version. See the
1457 # [guide to model
1458 # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
1459 # information.
1460 #
1461 # When passing Version to
Dan O'Mearadd494642020-05-01 07:42:23 -07001462 # projects.models.versions.create
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001463 # the model service uses the specified location as the source of the model.
1464 # Once deployed, the model version is hosted by the prediction service, so
1465 # this location is useful only as a historical record.
Bu Sun Kim65020912020-05-20 12:08:20 -07001466 # The total number of model files can&#x27;t exceed 1000.
1467 &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
Dan O'Mearadd494642020-05-01 07:42:23 -07001468 # response to increases and decreases in traffic. Care should be
Bu Sun Kim65020912020-05-20 12:08:20 -07001469 # taken to ramp up traffic according to the model&#x27;s ability to scale
Dan O'Mearadd494642020-05-01 07:42:23 -07001470 # or you will start seeing increases in latency and 429 response codes.
1471 #
1472 # Note that you cannot use AutoScaling if your version uses
          # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
1474 # `manual_scaling`.
Bu Sun Kim65020912020-05-20 12:08:20 -07001475 &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
Dan O'Mearadd494642020-05-01 07:42:23 -07001476 # nodes are always up, starting from the time the model is deployed.
1477 # Therefore, the cost of operating this model will be at least
1478 # `rate` * `min_nodes` * number of hours since last billing cycle,
1479 # where `rate` is the cost per node-hour as documented in the
1480 # [pricing guide](/ml-engine/docs/pricing),
1481 # even if no predictions are performed. There is additional cost for each
1482 # prediction performed.
1483 #
1484 # Unlike manual scaling, if the load gets too heavy for the nodes
1485 # that are up, the service will automatically add nodes to handle the
1486 # increased load as well as scale back as traffic drops, always maintaining
1487 # at least `min_nodes`. You will be charged for the time in which additional
1488 # nodes are used.
1489 #
1490 # If `min_nodes` is not specified and AutoScaling is used with a [legacy
1491 # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
1492 # `min_nodes` defaults to 0, in which case, when traffic to a model stops
1493 # (and after a cool-down period), nodes will be shut down and no charges will
1494 # be incurred until traffic to the model resumes.
1495 #
1496 # If `min_nodes` is not specified and AutoScaling is used with a [Compute
1497 # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
1498 # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
1499 # Compute Engine machine type.
1500 #
1501 # Note that you cannot use AutoScaling if your version uses
1502 # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
1503 # ManualScaling.
1504 #
1505 # You can set `min_nodes` when creating the model version, and you can also
1506 # update `min_nodes` for an existing version:
1507 # &lt;pre&gt;
1508 # update_body.json:
1509 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07001510 # &#x27;autoScaling&#x27;: {
1511 # &#x27;minNodes&#x27;: 5
Dan O'Mearadd494642020-05-01 07:42:23 -07001512 # }
1513 # }
1514 # &lt;/pre&gt;
1515 # HTTP request:
Bu Sun Kim65020912020-05-20 12:08:20 -07001516 # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
Dan O'Mearadd494642020-05-01 07:42:23 -07001517 # PATCH
1518 # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
1519 # -d @./update_body.json
1520 # &lt;/pre&gt;
1521 },
    &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your model
1523 # versions. Each label is a key-value pair, where both the key and the value
1524 # are arbitrary strings that you supply.
1525 # For more information, see the documentation on
        # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
      &quot;a_key&quot;: &quot;A String&quot;,
    },
    &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
    &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
        # projects.models.versions.patch
        # request. Specifying it in a
        # projects.models.versions.create
        # request has no effect.
        #
        # Configures the request-response pair logging on predictions from this
        # Version.
        # Online prediction requests to a model version and the responses to these
        # requests are converted to raw strings and saved to the specified BigQuery
        # table. Logging is constrained by [BigQuery quotas and
        # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
        # AI Platform Prediction does not log request-response pairs, but it continues
        # to serve predictions.
        #
        # If you are using [continuous
        # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
        # specify this configuration manually. Setting up continuous evaluation
        # automatically enables logging of request-response pairs.
      &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
          # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
          #
          # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
          # for your project must have permission to write to it. The table must have
          # the following [schema](/bigquery/docs/schemas):
          #
          # &lt;table&gt;
          # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
          # &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
          # &lt;/table&gt;
      &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
          # For example, if you want to log 10% of requests, enter `0.1`. The sampling
          # window is the lifetime of the model version. Defaults to 0.
    },
    &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
    &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
        # applies to online prediction service. If this field is not specified, it
        # defaults to `mls1-c1-m2`.
        #
        # Online prediction supports the following machine types:
        #
        # * `mls1-c1-m2`
        # * `mls1-c4-m2`
        # * `n1-standard-2`
        # * `n1-standard-4`
        # * `n1-standard-8`
        # * `n1-standard-16`
        # * `n1-standard-32`
        # * `n1-highmem-2`
        # * `n1-highmem-4`
        # * `n1-highmem-8`
        # * `n1-highmem-16`
        # * `n1-highmem-32`
        # * `n1-highcpu-2`
        # * `n1-highcpu-4`
        # * `n1-highcpu-8`
        # * `n1-highcpu-16`
        # * `n1-highcpu-32`
        #
        # `mls1-c1-m2` is generally available. All other machine types are available
        # in beta. Learn more about the [differences between machine
        # types](/ml-engine/docs/machine-types-online-prediction).
    &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
        #
        # For more information, see the
        # [runtime version list](/ml-engine/docs/runtime-version-list) and
        # [how to manage runtime versions](/ml-engine/docs/versioning).
    &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
    &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
        # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
        # `XGBOOST`. If you do not specify a framework, AI Platform
        # will analyze files in the deployment_uri to determine a framework. If you
        # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
        # of the model to 1.4 or greater.
        #
        # Do **not** specify a framework if you&#x27;re deploying a [custom
        # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
        #
        # If you specify a [Compute Engine (N1) machine
        # type](/ml-engine/docs/machine-types-online-prediction) in the
        # `machineType` field, you must specify `TENSORFLOW`
        # for the framework.
    &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
        # prevent simultaneous updates of a model from overwriting each other.
        # It is strongly suggested that systems make use of the `etag` in the
        # read-modify-write cycle to perform model updates in order to avoid race
        # conditions: An `etag` is returned in the response to `GetVersion`, and
        # systems are expected to put that etag in the request to `UpdateVersion` to
        # ensure that their change will be applied to the model as intended.
    &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
        # requests that do not specify a version.
        #
        # You can change the default version by calling
        # projects.models.versions.setDefault.
    &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
        # Only specify this field if you have specified a Compute Engine (N1) machine
        # type in the `machineType` field. Learn more about [using GPUs for online
        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
        # Note that the AcceleratorConfig can be used in both Jobs and Versions.
        # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
        # [accelerators for online
        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
      &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
      &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
    },
  }

  updateMask: string, Required. Specifies the path, relative to `Version`, of the field to
update. Must be present and non-empty.

For example, to change the description of a version to &quot;foo&quot;, the
`update_mask` parameter would be specified as `description`, and the
`PATCH` request body would specify the new value, as follows:

```
{
  &quot;description&quot;: &quot;foo&quot;
}
```

Currently the only supported update mask fields are `description`,
`requestLoggingConfig`, `autoScaling.minNodes`, and `manualScaling.nodes`.
However, you can only update `manualScaling.nodes` if the version uses a
[Compute Engine (N1)
machine type](/ml-engine/docs/machine-types-online-prediction).
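The pairing of body fields with update mask entries can be sketched in Python. This is an illustrative helper only (the function name and parameters are hypothetical, not part of this client library):

```python
def build_version_patch(description=None, min_nodes=None):
    """Builds (body, update_mask) for a versions patch request.

    Only `description` and `autoScaling.minNodes` are shown here; the
    dictionary keys mirror the supported update mask fields listed above.
    """
    body = {}
    mask = []
    if description is not None:
        body["description"] = description
        mask.append("description")
    if min_nodes is not None:
        # Nested fields use a dotted path in the mask but nested dicts in the body.
        body["autoScaling"] = {"minNodes": min_nodes}
        mask.append("autoScaling.minNodes")
    return body, ",".join(mask)
```

For instance, `build_version_patch(description="foo")` produces the body shown in the example above together with the mask string `description`.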
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # This resource represents a long-running operation that is the result of a
      # network API call.
    &quot;response&quot;: { # The normal response of the operation in case of success. If the original
        # method returns no data on success, such as `Delete`, the response is
        # `google.protobuf.Empty`. If the original method is standard
        # `Get`/`Create`/`Update`, the response should be the resource. For other
        # methods, the response should have the type `XxxResponse`, where `Xxx`
        # is the original method name. For example, if the original method name
        # is `TakeSnapshot()`, the inferred response type is
        # `TakeSnapshotResponse`.
      &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
    },
    &quot;name&quot;: &quot;A String&quot;, # The server-assigned name, which is only unique within the same service that
        # originally returns it. If you use the default HTTP mapping, the
        # `name` should be a resource name ending with `operations/{unique_id}`.
    &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
        # different programming environments, including REST APIs and RPC APIs. It is
        # used by [gRPC](https://github.com/grpc). Each `Status` message contains
        # three pieces of data: error code, error message, and error details.
        #
        # You can find out more about this error model and how to work with it in the
        # [API Design Guide](https://cloud.google.com/apis/design/errors).
      &quot;details&quot;: [ # A list of messages that carry the error details. There is a common set of
          # message types for APIs to use.
        {
          &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
        },
      ],
      &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
      &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
          # user-facing error message should be localized and sent in the
          # google.rpc.Status.details field, or localized by the client.
    },
    &quot;metadata&quot;: { # Service-specific metadata associated with the operation. It typically
        # contains progress information and common metadata such as create time.
        # Some services might not provide such metadata. Any method that returns a
        # long-running operation should document the metadata type, if any.
      &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
    },
    &quot;done&quot;: True or False, # If the value is `false`, it means the operation is still in progress.
        # If `true`, the operation is completed, and either `error` or `response` is
        # available.
  }</pre>
</div>

<div class="method">
    <code class="details" id="setDefault">setDefault(name, body=None, x__xgafv=None)</code>
  <pre>Designates a version to be the default for the model.

The default version is used for prediction requests made against the model
that don&#x27;t specify a version.

The first version to be created for a model is automatically set as the
default. You must make any subsequent changes to the default version
setting manually using this method.

Args:
  name: string, Required. The name of the version to make the default for the model. You
can get the names of all the versions of a model by calling
projects.models.versions.list. (required)
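As a sketch of the expected `name` format (a hypothetical helper, not part of the client library):

```python
def version_resource_name(project, model, version):
    """Formats the fully qualified resource name used by versions methods."""
    return "projects/{}/models/{}/versions/{}".format(project, model, version)
```

For example, `version_resource_name("my-project", "my-model", "v1")` yields `projects/my-project/models/my-model/versions/v1`.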
  body: object, The request body.
    The object takes the form of:

{ # Request message for the SetDefaultVersion request.
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Represents a version of the model.
      #
      # Each version is a trained model deployed in the cloud, ready to handle
      # prediction requests. A model can have multiple versions. You can get
      # information about all of the versions of a given model by calling
      # projects.models.versions.list.
    &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
    &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
        # model. You should generally use `auto_scaling` with an appropriate
        # `min_nodes` instead, but this option is available if you want more
        # predictable billing. Beware that latency and error rates will increase
        # if the traffic exceeds the capability of the system to serve it based
        # on the selected number of nodes.
      &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
          # starting from the time the model is deployed, so the cost of operating
          # this model will be proportional to `nodes` * number of hours since
          # last billing cycle plus the cost for each prediction performed.
    },
    &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
        #
        # The version name must be unique within the model it is created in.
    &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
    &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
        #
        # The following Python versions are available:
        #
        # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
        #   later.
        # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
        #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
        # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
        #   earlier.
        #
        # Read more about the Python versions available for [each runtime
        # version](/ml-engine/docs/runtime-version-list).
    &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
    &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
        # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
        # the Predictor interface described in this reference field. The module
        # containing this class should be included in a package provided to the
        # [`packageUris` field](#Version.FIELDS.package_uris).
        #
        # Specify this field if and only if you are deploying a [custom prediction
        # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
        # If you specify this field, you must set
        # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
        # you must set `machineType` to a [legacy (MLS1)
        # machine type](/ml-engine/docs/machine-types-online-prediction).
        #
        # The following code sample provides the Predictor interface:
        #
        # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
        # class Predictor(object):
        #   &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
        #
        #   def predict(self, instances, **kwargs):
        #     &quot;&quot;&quot;Performs custom prediction.
        #
        #     Instances are the decoded values from the request. They have already
        #     been deserialized from JSON.
        #
        #     Args:
        #       instances: A list of prediction input instances.
        #       **kwargs: A dictionary of keyword args provided as additional
        #         fields on the predict request body.
        #
        #     Returns:
        #       A list of outputs containing the prediction results. This list must
        #       be JSON serializable.
        #     &quot;&quot;&quot;
        #     raise NotImplementedError()
        #
        #   @classmethod
        #   def from_path(cls, model_dir):
        #     &quot;&quot;&quot;Creates an instance of Predictor using the given path.
        #
        #     Loading of the predictor should be done in this method.
        #
        #     Args:
        #       model_dir: The local directory that contains the exported model
        #         file along with any additional files uploaded when creating the
        #         version resource.
        #
        #     Returns:
        #       An instance implementing this Predictor class.
        #     &quot;&quot;&quot;
        #     raise NotImplementedError()
        # &lt;/pre&gt;
        #
        # Learn more about [the Predictor interface and custom prediction
        # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
    &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
        # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
        # or [scikit-learn pipelines with custom
        # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
        #
        # For a custom prediction routine, one of these packages must contain your
        # Predictor class (see
        # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
        # include any dependencies your Predictor or scikit-learn pipeline
        # uses that are not already included in your selected [runtime
        # version](/ml-engine/docs/tensorflow/runtime-version-list).
        #
        # If you specify this field, you must also set
        # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
      &quot;A String&quot;,
    ],
    &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
        # Some explanation features require additional metadata to be loaded
        # as part of the model payload.
        # There are two feature attribution methods supported for TensorFlow models:
        # integrated gradients and sampled Shapley.
        # [Learn more about feature
        # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
      &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value, taking advantage
          # of the model&#x27;s fully differentiable structure. Refer to this paper for
          # more details: https://arxiv.org/abs/1703.01365
        &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
            # A good value to start is 50 and gradually increase until the
            # sum to diff property is met within the desired error range.
      },
      &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
          # contribute to the label being predicted. A sampling strategy is used to
          # approximate the value rather than considering all subsets of features.
        &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
            # Shapley values.
      },
      &quot;xraiAttribution&quot;: { # Attributes credit by computing the XRAI attribution, taking advantage
          # of the model&#x27;s fully differentiable structure. Refer to this paper for
          # more details: https://arxiv.org/abs/1906.02825
          # Currently only implemented for models with natural image inputs.
        &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
            # A good value to start is 50 and gradually increase until the
            # sum to diff property is met within the desired error range.
      },
    },
    &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
        # create the version. See the
        # [guide to model
        # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
        # information.
        #
        # When passing Version to
        # projects.models.versions.create
        # the model service uses the specified location as the source of the model.
        # Once deployed, the model version is hosted by the prediction service, so
        # this location is useful only as a historical record.
        # The total number of model files can&#x27;t exceed 1000.
    &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
        # response to increases and decreases in traffic. Care should be
        # taken to ramp up traffic according to the model&#x27;s ability to scale
        # or you will start seeing increases in latency and 429 response codes.
        #
        # Note that you cannot use AutoScaling if your version uses
        # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
        # `manual_scaling`.
      &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
          # nodes are always up, starting from the time the model is deployed.
          # Therefore, the cost of operating this model will be at least
          # `rate` * `min_nodes` * number of hours since last billing cycle,
          # where `rate` is the cost per node-hour as documented in the
          # [pricing guide](/ml-engine/docs/pricing),
          # even if no predictions are performed. There is additional cost for each
          # prediction performed.
          #
          # Unlike manual scaling, if the load gets too heavy for the nodes
          # that are up, the service will automatically add nodes to handle the
          # increased load as well as scale back as traffic drops, always maintaining
          # at least `min_nodes`. You will be charged for the time in which additional
          # nodes are used.
          #
          # If `min_nodes` is not specified and AutoScaling is used with a [legacy
          # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
          # `min_nodes` defaults to 0, in which case, when traffic to a model stops
          # (and after a cool-down period), nodes will be shut down and no charges will
          # be incurred until traffic to the model resumes.
          #
          # If `min_nodes` is not specified and AutoScaling is used with a [Compute
          # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
          # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
          # Compute Engine machine type.
          #
          # Note that you cannot use AutoScaling if your version uses
          # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
          # ManualScaling.
          #
          # You can set `min_nodes` when creating the model version, and you can also
          # update `min_nodes` for an existing version:
          # &lt;pre&gt;
          # update_body.json:
          # {
          #   &#x27;autoScaling&#x27;: {
          #     &#x27;minNodes&#x27;: 5
          #   }
          # }
          # &lt;/pre&gt;
          # HTTP request:
          # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
          # PATCH
          # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
          # -d @./update_body.json
          # &lt;/pre&gt;
    },
    &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your model
        # versions. Each label is a key-value pair, where both the key and the value
        # are arbitrary strings that you supply.
        # For more information, see the documentation on
        # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
      &quot;a_key&quot;: &quot;A String&quot;,
    },
    &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
    &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
        # projects.models.versions.patch
        # request. Specifying it in a
        # projects.models.versions.create
        # request has no effect.
        #
        # Configures the request-response pair logging on predictions from this
        # Version.
        # Online prediction requests to a model version and the responses to these
        # requests are converted to raw strings and saved to the specified BigQuery
        # table. Logging is constrained by [BigQuery quotas and
        # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
        # AI Platform Prediction does not log request-response pairs, but it continues
        # to serve predictions.
        #
        # If you are using [continuous
        # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
        # specify this configuration manually. Setting up continuous evaluation
        # automatically enables logging of request-response pairs.
      &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
          # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
          #
          # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
          # for your project must have permission to write to it. The table must have
          # the following [schema](/bigquery/docs/schemas):
          #
          # &lt;table&gt;
          # &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
          # &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
          # &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
          # &lt;/table&gt;
      &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
          # For example, if you want to log 10% of requests, enter `0.1`. The sampling
          # window is the lifetime of the model version. Defaults to 0.
    },
    &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
    &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
        # applies to online prediction service. If this field is not specified, it
        # defaults to `mls1-c1-m2`.
        #
        # Online prediction supports the following machine types:
        #
        # * `mls1-c1-m2`
        # * `mls1-c4-m2`
        # * `n1-standard-2`
        # * `n1-standard-4`
        # * `n1-standard-8`
        # * `n1-standard-16`
        # * `n1-standard-32`
        # * `n1-highmem-2`
        # * `n1-highmem-4`
        # * `n1-highmem-8`
        # * `n1-highmem-16`
        # * `n1-highmem-32`
        # * `n1-highcpu-2`
        # * `n1-highcpu-4`
        # * `n1-highcpu-8`
        # * `n1-highcpu-16`
        # * `n1-highcpu-32`
        #
        # `mls1-c1-m2` is generally available. All other machine types are available
        # in beta. Learn more about the [differences between machine
        # types](/ml-engine/docs/machine-types-online-prediction).
    &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
        #
        # For more information, see the
        # [runtime version list](/ml-engine/docs/runtime-version-list) and
        # [how to manage runtime versions](/ml-engine/docs/versioning).
    &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
    &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
        # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
        # `XGBOOST`. If you do not specify a framework, AI Platform
        # will analyze files in the deployment_uri to determine a framework. If you
        # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
        # of the model to 1.4 or greater.
        #
        # Do **not** specify a framework if you&#x27;re deploying a [custom
        # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
        #
        # If you specify a [Compute Engine (N1) machine
        # type](/ml-engine/docs/machine-types-online-prediction) in the
        # `machineType` field, you must specify `TENSORFLOW`
        # for the framework.
    &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
        # prevent simultaneous updates of a model from overwriting each other.
        # It is strongly suggested that systems make use of the `etag` in the
        # read-modify-write cycle to perform model updates in order to avoid race
        # conditions: An `etag` is returned in the response to `GetVersion`, and
        # systems are expected to put that etag in the request to `UpdateVersion` to
        # ensure that their change will be applied to the model as intended.
    &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
        # requests that do not specify a version.
        #
        # You can change the default version by calling
        # projects.models.versions.setDefault.
    &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
        # Only specify this field if you have specified a Compute Engine (N1) machine
        # type in the `machineType` field. Learn more about [using GPUs for online
        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
        # Note that the AcceleratorConfig can be used in both Jobs and Versions.
        # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
        # [accelerators for online
        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
      &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
      &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
    },
  }</pre>
</div>

</body></html>