chore: regens API reference docs (#889)
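The most visible change in this regen is that request bodies became optional keyword arguments (`create(parent, body=None, ...)`), and the Model resource's `name` field is now documented as required. A minimal sketch of building a Model body matching the regenerated reference (the project ID, model name, and label values are hypothetical; the client call at the end is shown commented out because it needs credentials and network access):

```python
# Sketch of a Model resource body for projects.models.create, per the
# regenerated reference: "name" is required; the rest is optional.

def make_model_body(name, description=None, regions=None, labels=None):
    """Build a Model resource dict for projects.models.create."""
    body = {"name": name}
    if description is not None:
        body["description"] = description
    if regions is not None:
        # Only one region per model is supported; defaults to 'us-central1'.
        body["regions"] = regions
    if labels is not None:
        # Labels are arbitrary key-value string pairs you supply.
        body["labels"] = labels
    return body

body = make_model_body(
    "my_model",                       # hypothetical model name
    description="demo model",
    regions=["us-central1"],
    labels={"team": "research"},      # hypothetical label
)

# With an authenticated client, the body would be passed by keyword:
#   ml = googleapiclient.discovery.build("ml", "v1")
#   ml.projects().models().create(
#       parent="projects/my-project", body=body).execute()
```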
diff --git a/docs/dyn/ml_v1.projects.models.html b/docs/dyn/ml_v1.projects.models.html
index e13fcf2..d7d8c66 100644
--- a/docs/dyn/ml_v1.projects.models.html
+++ b/docs/dyn/ml_v1.projects.models.html
@@ -72,7 +72,7 @@
</style>
-<h1><a href="ml_v1.html">Cloud Machine Learning Engine</a> . <a href="ml_v1.projects.html">projects</a> . <a href="ml_v1.projects.models.html">models</a></h1>
+<h1><a href="ml_v1.html">AI Platform Training & Prediction API</a> . <a href="ml_v1.projects.html">projects</a> . <a href="ml_v1.projects.models.html">models</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
<code><a href="ml_v1.projects.models.versions.html">versions()</a></code>
@@ -80,7 +80,7 @@
<p class="firstline">Returns the versions Resource.</p>
<p class="toc_element">
- <code><a href="#create">create(parent, body, x__xgafv=None)</a></code></p>
+ <code><a href="#create">create(parent, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a model which will later contain one or more versions.</p>
<p class="toc_element">
<code><a href="#delete">delete(name, x__xgafv=None)</a></code></p>
@@ -89,35 +89,35 @@
<code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets information about a model, including its name, the description (if</p>
<p class="toc_element">
- <code><a href="#getIamPolicy">getIamPolicy(resource, x__xgafv=None)</a></code></p>
+ <code><a href="#getIamPolicy">getIamPolicy(resource, options_requestedPolicyVersion=None, x__xgafv=None)</a></code></p>
<p class="firstline">Gets the access control policy for a resource.</p>
<p class="toc_element">
- <code><a href="#list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</a></code></p>
+ <code><a href="#list">list(parent, pageSize=None, pageToken=None, x__xgafv=None, filter=None)</a></code></p>
<p class="firstline">Lists the models in a project.</p>
<p class="toc_element">
<code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<p class="toc_element">
- <code><a href="#patch">patch(name, body, updateMask=None, x__xgafv=None)</a></code></p>
+ <code><a href="#patch">patch(name, body=None, updateMask=None, x__xgafv=None)</a></code></p>
<p class="firstline">Updates a specific model resource.</p>
<p class="toc_element">
- <code><a href="#setIamPolicy">setIamPolicy(resource, body, x__xgafv=None)</a></code></p>
+ <code><a href="#setIamPolicy">setIamPolicy(resource, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Sets the access control policy on the specified resource. Replaces any</p>
<p class="toc_element">
- <code><a href="#testIamPermissions">testIamPermissions(resource, body, x__xgafv=None)</a></code></p>
+ <code><a href="#testIamPermissions">testIamPermissions(resource, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Returns permissions that a caller has on the specified resource.</p>
<h3>Method Details</h3>
<div class="method">
- <code class="details" id="create">create(parent, body, x__xgafv=None)</code>
+ <code class="details" id="create">create(parent, body=None, x__xgafv=None)</code>
<pre>Creates a model which will later contain one or more versions.
You must add at least one version before you can request predictions from
the model. Add versions by calling
-[projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create).
+projects.models.versions.create.
Args:
parent: string, Required. The project name. (required)
- body: object, The request body. (required)
+ body: object, The request body.
The object takes the form of:
{ # Represents a machine learning solution.
@@ -125,7 +125,9 @@
# A model can have multiple versions, each of which is a deployed, trained
# model ready to receive prediction requests. The model itself is just a
# container.
- "description": "A String", # Optional. The description specified for the model when it was created.
+ "name": "A String", # Required. The name specified for the model when it was created.
+ #
+ # The model name must be unique within the project it is created in.
"onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
# streams to Stackdriver Logging. These can be more verbose than the standard
# access logs (see `onlinePredictionLogging`) and can incur higher cost.
@@ -139,13 +141,13 @@
# Each label is a key-value pair, where both the key and the value are
# arbitrary strings that you supply.
# For more information, see the documentation on
- # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
"a_key": "A String",
},
"regions": [ # Optional. The list of regions where the model is going to be deployed.
- # Currently only one region per model is supported.
+ # Only one region per model is supported.
# Defaults to 'us-central1' if nothing is set.
- # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+ # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
# for AI Platform services.
# Note:
# * No matter where a model is deployed, it can always be accessed by
@@ -166,53 +168,32 @@
# handle prediction requests that do not specify a version.
#
# You can change the default version by calling
- # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
+ # projects.models.versions.setDefault.
#
# Each version is a trained model deployed in the cloud, ready to handle
# prediction requests. A model can have multiple versions. You can get
# information about all of the versions of a given model by calling
- # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
- "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+ # projects.models.versions.list.
+ "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+ # Only specify this field if you have specified a Compute Engine (N1) machine
+ # type in the `machineType` field. Learn more about [using GPUs for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+ # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+ # [accelerators for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ "count": "A String", # The number of accelerators to attach to each machine running the job.
+ "type": "A String", # The type of accelerator to use.
+ },
"labels": { # Optional. One or more labels that you can add, to organize your model
# versions. Each label is a key-value pair, where both the key and the value
# are arbitrary strings that you supply.
# For more information, see the documentation on
- # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
"a_key": "A String",
},
- "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
- # applies to online prediction service.
- # <dl>
- # <dt>mls1-c1-m2</dt>
- # <dd>
- # The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated
- # name for this machine type is "mls1-highmem-1".
- # </dd>
- # <dt>mls1-c4-m2</dt>
- # <dd>
- # In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The
- # deprecated name for this machine type is "mls1-highcpu-4".
- # </dd>
- # </dl>
- "description": "A String", # Optional. The description specified for the version when it was created.
- "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment.
- # If not set, AI Platform uses the default stable version, 1.0. For more
- # information, see the
- # [runtime version list](/ml-engine/docs/runtime-version-list) and
- # [how to manage runtime versions](/ml-engine/docs/versioning).
- "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
- # model. You should generally use `auto_scaling` with an appropriate
- # `min_nodes` instead, but this option is available if you want more
- # predictable billing. Beware that latency and error rates will increase
- # if the traffic exceeds that capability of the system to serve it based
- # on the selected number of nodes.
- "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
- # starting from the time the model is deployed, so the cost of operating
- # this model will be proportional to `nodes` * number of hours since
- # last billing cycle plus the cost for each prediction performed.
- },
"predictionClass": "A String", # Optional. The fully qualified name
- # (<var>module_name</var>.<var>class_name</var>) of a class that implements
+ # (<var>module_name</var>.<var>class_name</var>) of a class that implements
# the Predictor interface described in this reference field. The module
# containing this class should be included in a package provided to the
# [`packageUris` field](#Version.FIELDS.package_uris).
@@ -220,11 +201,13 @@
# Specify this field if and only if you are deploying a [custom prediction
# routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
# If you specify this field, you must set
- # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
+ # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
+ # you must set `machineType` to a [legacy (MLS1)
+ # machine type](/ml-engine/docs/machine-types-online-prediction).
#
# The following code sample provides the Predictor interface:
#
- # ```py
+ # <pre style="max-width: 626px;">
# class Predictor(object):
# """Interface for constructing custom predictors."""
#
@@ -260,64 +243,12 @@
# An instance implementing this Predictor class.
# """
# raise NotImplementedError()
- # ```
+ # </pre>
#
# Learn more about [the Predictor interface and custom prediction
# routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
- "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
- # response to increases and decreases in traffic. Care should be
- # taken to ramp up traffic according to the model's ability to scale
- # or you will start seeing increases in latency and 429 response codes.
- "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
- # nodes are always up, starting from the time the model is deployed.
- # Therefore, the cost of operating this model will be at least
- # `rate` * `min_nodes` * number of hours since last billing cycle,
- # where `rate` is the cost per node-hour as documented in the
- # [pricing guide](/ml-engine/docs/pricing),
- # even if no predictions are performed. There is additional cost for each
- # prediction performed.
- #
- # Unlike manual scaling, if the load gets too heavy for the nodes
- # that are up, the service will automatically add nodes to handle the
- # increased load as well as scale back as traffic drops, always maintaining
- # at least `min_nodes`. You will be charged for the time in which additional
- # nodes are used.
- #
- # If not specified, `min_nodes` defaults to 0, in which case, when traffic
- # to a model stops (and after a cool-down period), nodes will be shut down
- # and no charges will be incurred until traffic to the model resumes.
- #
- # You can set `min_nodes` when creating the model version, and you can also
- # update `min_nodes` for an existing version:
- # <pre>
- # update_body.json:
- # {
- # 'autoScaling': {
- # 'minNodes': 5
- # }
- # }
- # </pre>
- # HTTP request:
- # <pre>
- # PATCH
- # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
- # -d @./update_body.json
- # </pre>
- },
"serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
"state": "A String", # Output only. The state of a version.
- "pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default
- # version is '2.7'. Python '3.5' is available when `runtime_version` is set
- # to '1.4' and above. Python '2.7' works with all supported runtime versions.
- "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
- # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
- # `XGBOOST`. If you do not specify a framework, AI Platform
- # will analyze files in the deployment_uri to determine a framework. If you
- # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
- # of the model to 1.4 or greater.
- #
- # Do **not** specify a framework if you're deploying a [custom
- # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
"packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
# prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
# or [scikit-learn pipelines with custom
@@ -349,20 +280,223 @@
# information.
#
# When passing Version to
- # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
+ # projects.models.versions.create
# the model service uses the specified location as the source of the model.
# Once deployed, the model version is hosted by the prediction service, so
# this location is useful only as a historical record.
# The total number of model files can't exceed 1000.
- "createTime": "A String", # Output only. The time the version was created.
+ "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
+ # Some explanation features require additional metadata to be loaded
+ # as part of the model payload.
+ # There are two feature attribution methods supported for TensorFlow models:
+ # integrated gradients and sampled Shapley.
+ # [Learn more about feature
+ # attributions.](/ml-engine/docs/ai-explanations/overview)
+      "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage
+          # of the model's fully differentiable structure. Refer to this paper for
+          # more details: https://arxiv.org/abs/1906.02825
+          # Currently only implemented for models with natural image inputs.
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+ # A good value to start is 50 and gradually increase until the
+ # sum to diff property is met within the desired error range.
+ },
+      "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that
+          # contribute to the label being predicted. A sampling strategy is used to
+          # approximate the value rather than considering all subsets of features.
+ "numPaths": 42, # The number of feature permutations to consider when approximating the
+ # Shapley values.
+ },
+      "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage
+          # of the model's fully differentiable structure. Refer to this paper for
+          # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+ # A good value to start is 50 and gradually increase until the
+ # sum to diff property is met within the desired error range.
+ },
+ },
"isDefault": True or False, # Output only. If true, this version will be used to handle prediction
# requests that do not specify a version.
#
# You can change the default version by calling
- # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
- "name": "A String", # Required.The name specified for the version when it was created.
+  # projects.models.versions.setDefault.
+ "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
+ # applies to online prediction service. If this field is not specified, it
+ # defaults to `mls1-c1-m2`.
+ #
+ # Online prediction supports the following machine types:
+ #
+ # * `mls1-c1-m2`
+ # * `mls1-c4-m2`
+ # * `n1-standard-2`
+ # * `n1-standard-4`
+ # * `n1-standard-8`
+ # * `n1-standard-16`
+ # * `n1-standard-32`
+ # * `n1-highmem-2`
+ # * `n1-highmem-4`
+ # * `n1-highmem-8`
+ # * `n1-highmem-16`
+ # * `n1-highmem-32`
+ # * `n1-highcpu-2`
+ # * `n1-highcpu-4`
+ # * `n1-highcpu-8`
+ # * `n1-highcpu-16`
+ # * `n1-highcpu-32`
+ #
+ # `mls1-c1-m2` is generally available. All other machine types are available
+ # in beta. Learn more about the [differences between machine
+ # types](/ml-engine/docs/machine-types-online-prediction).
+ "description": "A String", # Optional. The description specified for the version when it was created.
+ "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
+ #
+ # For more information, see the
+ # [runtime version list](/ml-engine/docs/runtime-version-list) and
+ # [how to manage runtime versions](/ml-engine/docs/versioning).
+ "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+ # model. You should generally use `auto_scaling` with an appropriate
+ # `min_nodes` instead, but this option is available if you want more
+ # predictable billing. Beware that latency and error rates will increase
+ # if the traffic exceeds that capability of the system to serve it based
+ # on the selected number of nodes.
+ "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
+ # starting from the time the model is deployed, so the cost of operating
+ # this model will be proportional to `nodes` * number of hours since
+ # last billing cycle plus the cost for each prediction performed.
+ },
+ "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+ "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
+ # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+ # `XGBOOST`. If you do not specify a framework, AI Platform
+ # will analyze files in the deployment_uri to determine a framework. If you
+ # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+ # of the model to 1.4 or greater.
+ #
+ # Do **not** specify a framework if you're deploying a [custom
+ # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ #
+ # If you specify a [Compute Engine (N1) machine
+ # type](/ml-engine/docs/machine-types-online-prediction) in the
+ # `machineType` field, you must specify `TENSORFLOW`
+ # for the framework.
+ "createTime": "A String", # Output only. The time the version was created.
+ "name": "A String", # Required. The name specified for the version when it was created.
#
# The version name must be unique within the model it is created in.
+ "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+ # response to increases and decreases in traffic. Care should be
+ # taken to ramp up traffic according to the model's ability to scale
+ # or you will start seeing increases in latency and 429 response codes.
+ #
+ # Note that you cannot use AutoScaling if your version uses
+ # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
+ # `manual_scaling`.
+ "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+ # nodes are always up, starting from the time the model is deployed.
+ # Therefore, the cost of operating this model will be at least
+ # `rate` * `min_nodes` * number of hours since last billing cycle,
+ # where `rate` is the cost per node-hour as documented in the
+ # [pricing guide](/ml-engine/docs/pricing),
+ # even if no predictions are performed. There is additional cost for each
+ # prediction performed.
+ #
+ # Unlike manual scaling, if the load gets too heavy for the nodes
+ # that are up, the service will automatically add nodes to handle the
+ # increased load as well as scale back as traffic drops, always maintaining
+ # at least `min_nodes`. You will be charged for the time in which additional
+ # nodes are used.
+ #
+ # If `min_nodes` is not specified and AutoScaling is used with a [legacy
+ # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
+ # `min_nodes` defaults to 0, in which case, when traffic to a model stops
+ # (and after a cool-down period), nodes will be shut down and no charges will
+ # be incurred until traffic to the model resumes.
+ #
+ # If `min_nodes` is not specified and AutoScaling is used with a [Compute
+ # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
+ # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
+ # Compute Engine machine type.
+ #
+ # Note that you cannot use AutoScaling if your version uses
+ # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
+ # ManualScaling.
+ #
+ # You can set `min_nodes` when creating the model version, and you can also
+ # update `min_nodes` for an existing version:
+ # <pre>
+ # update_body.json:
+ # {
+ # 'autoScaling': {
+ # 'minNodes': 5
+ # }
+ # }
+ # </pre>
+ # HTTP request:
+ # <pre style="max-width: 626px;">
+ # PATCH
+ # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
+ # -d @./update_body.json
+ # </pre>
+ },
+ "pythonVersion": "A String", # Required. The version of Python used in prediction.
+ #
+ # The following Python versions are available:
+ #
+ # * Python '3.7' is available when `runtime_version` is set to '1.15' or
+ # later.
+ # * Python '3.5' is available when `runtime_version` is set to a version
+ # from '1.4' to '1.14'.
+ # * Python '2.7' is available when `runtime_version` is set to '1.15' or
+ # earlier.
+ #
+ # Read more about the Python versions available for [each runtime
+ # version](/ml-engine/docs/runtime-version-list).
+ "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+ # projects.models.versions.patch
+ # request. Specifying it in a
+ # projects.models.versions.create
+ # request has no effect.
+ #
+ # Configures the request-response pair logging on predictions from this
+ # Version.
+ # Online prediction requests to a model version and the responses to these
+ # requests are converted to raw strings and saved to the specified BigQuery
+ # table. Logging is constrained by [BigQuery quotas and
+ # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
+ # AI Platform Prediction does not log request-response pairs, but it continues
+ # to serve predictions.
+ #
+ # If you are using [continuous
+ # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
+ # specify this configuration manually. Setting up continuous evaluation
+ # automatically enables logging of request-response pairs.
+ "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+ # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+ # window is the lifetime of the model version. Defaults to 0.
+ "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
+ # "<var>project_id</var>.<var>dataset_name</var>.<var>table_name</var>"
+ #
+ # The specified table must already exist, and the "Cloud ML Service Agent"
+ # for your project must have permission to write to it. The table must have
+ # the following [schema](/bigquery/docs/schemas):
+ #
+ # <table>
+ # <tr><th>Field name</th><th style="display: table-cell">Type</th>
+ # <th style="display: table-cell">Mode</th></tr>
+ # <tr><td>model</td><td>STRING</td><td>REQUIRED</td></tr>
+ # <tr><td>model_version</td><td>STRING</td><td>REQUIRED</td></tr>
+ # <tr><td>time</td><td>TIMESTAMP</td><td>REQUIRED</td></tr>
+ # <tr><td>raw_data</td><td>STRING</td><td>REQUIRED</td></tr>
+ # <tr><td>raw_prediction</td><td>STRING</td><td>NULLABLE</td></tr>
+ # <tr><td>groundtruth</td><td>STRING</td><td>NULLABLE</td></tr>
+ # </table>
+ },
},
"onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to StackDriver
# Logging. These logs are like standard server access logs, containing
@@ -372,9 +506,7 @@
# (QPS). Estimate your costs before enabling this option.
#
# Default is false.
- "name": "A String", # Required. The name specified for the model when it was created.
- #
- # The model name must be unique within the project it is created in.
+ "description": "A String", # Optional. The description specified for the model when it was created.
}
x__xgafv: string, V1 error format.
@@ -390,7 +522,9 @@
# A model can have multiple versions, each of which is a deployed, trained
# model ready to receive prediction requests. The model itself is just a
# container.
- "description": "A String", # Optional. The description specified for the model when it was created.
+ "name": "A String", # Required. The name specified for the model when it was created.
+ #
+ # The model name must be unique within the project it is created in.
"onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
# streams to Stackdriver Logging. These can be more verbose than the standard
# access logs (see `onlinePredictionLogging`) and can incur higher cost.
@@ -404,13 +538,13 @@
# Each label is a key-value pair, where both the key and the value are
# arbitrary strings that you supply.
# For more information, see the documentation on
- # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
"a_key": "A String",
},
"regions": [ # Optional. The list of regions where the model is going to be deployed.
- # Currently only one region per model is supported.
+ # Only one region per model is supported.
# Defaults to 'us-central1' if nothing is set.
- # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+ # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
# for AI Platform services.
# Note:
# * No matter where a model is deployed, it can always be accessed by
@@ -431,53 +565,32 @@
# handle prediction requests that do not specify a version.
#
# You can change the default version by calling
- # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
+ # projects.models.versions.setDefault.
#
# Each version is a trained model deployed in the cloud, ready to handle
# prediction requests. A model can have multiple versions. You can get
# information about all of the versions of a given model by calling
- # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
- "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+ # projects.models.versions.list.
+ "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+ # Only specify this field if you have specified a Compute Engine (N1) machine
+ # type in the `machineType` field. Learn more about [using GPUs for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+ # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+ # [accelerators for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ "count": "A String", # The number of accelerators to attach to each machine running the job.
+ "type": "A String", # The type of accelerator to use.
+ },
"labels": { # Optional. One or more labels that you can add, to organize your model
# versions. Each label is a key-value pair, where both the key and the value
# are arbitrary strings that you supply.
# For more information, see the documentation on
- # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
"a_key": "A String",
},
- "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
- # applies to online prediction service.
- # <dl>
- # <dt>mls1-c1-m2</dt>
- # <dd>
- # The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated
- # name for this machine type is "mls1-highmem-1".
- # </dd>
- # <dt>mls1-c4-m2</dt>
- # <dd>
- # In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The
- # deprecated name for this machine type is "mls1-highcpu-4".
- # </dd>
- # </dl>
- "description": "A String", # Optional. The description specified for the version when it was created.
- "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment.
- # If not set, AI Platform uses the default stable version, 1.0. For more
- # information, see the
- # [runtime version list](/ml-engine/docs/runtime-version-list) and
- # [how to manage runtime versions](/ml-engine/docs/versioning).
- "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
- # model. You should generally use `auto_scaling` with an appropriate
- # `min_nodes` instead, but this option is available if you want more
- # predictable billing. Beware that latency and error rates will increase
- # if the traffic exceeds that capability of the system to serve it based
- # on the selected number of nodes.
- "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
- # starting from the time the model is deployed, so the cost of operating
- # this model will be proportional to `nodes` * number of hours since
- # last billing cycle plus the cost for each prediction performed.
- },
"predictionClass": "A String", # Optional. The fully qualified name
- # (<var>module_name</var>.<var>class_name</var>) of a class that implements
+ # (<var>module_name</var>.<var>class_name</var>) of a class that implements
# the Predictor interface described in this reference field. The module
# containing this class should be included in a package provided to the
# [`packageUris` field](#Version.FIELDS.package_uris).
@@ -485,11 +598,13 @@
# Specify this field if and only if you are deploying a [custom prediction
# routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
# If you specify this field, you must set
- # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
+ # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
+ # you must set `machineType` to a [legacy (MLS1)
+ # machine type](/ml-engine/docs/machine-types-online-prediction).
#
# The following code sample provides the Predictor interface:
#
- # ```py
+ # <pre style="max-width: 626px;">
# class Predictor(object):
# """Interface for constructing custom predictors."""
#
@@ -525,64 +640,12 @@
# An instance implementing this Predictor class.
# """
# raise NotImplementedError()
- # ```
+ # </pre>
#
# Learn more about [the Predictor interface and custom prediction
# routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
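The code sample above is truncated in this excerpt; as a minimal, hypothetical sketch of a class implementing that interface (the `model.pkl` filename and the toy model object are assumptions for illustration, not part of the API):

```python
import os
import pickle


class MyPredictor(object):
    """Toy implementation of the Predictor interface described above."""

    def __init__(self, model):
        # Any object exposing a `predict` method works for this sketch.
        self._model = model

    def predict(self, instances, **kwargs):
        # `instances` is the decoded list of inputs from the request body.
        return self._model.predict(instances)

    @classmethod
    def from_path(cls, model_dir):
        # AI Platform calls this with the directory populated from
        # `deploymentUri`; load your serialized model artifact from it.
        with open(os.path.join(model_dir, 'model.pkl'), 'rb') as f:
            return cls(pickle.load(f))
```

Such a class would be referenced in `predictionClass` as, e.g., `my_package.MyPredictor`, with the package containing it supplied via `packageUris`.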
- "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
- # response to increases and decreases in traffic. Care should be
- # taken to ramp up traffic according to the model's ability to scale
- # or you will start seeing increases in latency and 429 response codes.
- "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
- # nodes are always up, starting from the time the model is deployed.
- # Therefore, the cost of operating this model will be at least
- # `rate` * `min_nodes` * number of hours since last billing cycle,
- # where `rate` is the cost per node-hour as documented in the
- # [pricing guide](/ml-engine/docs/pricing),
- # even if no predictions are performed. There is additional cost for each
- # prediction performed.
- #
- # Unlike manual scaling, if the load gets too heavy for the nodes
- # that are up, the service will automatically add nodes to handle the
- # increased load as well as scale back as traffic drops, always maintaining
- # at least `min_nodes`. You will be charged for the time in which additional
- # nodes are used.
- #
- # If not specified, `min_nodes` defaults to 0, in which case, when traffic
- # to a model stops (and after a cool-down period), nodes will be shut down
- # and no charges will be incurred until traffic to the model resumes.
- #
- # You can set `min_nodes` when creating the model version, and you can also
- # update `min_nodes` for an existing version:
- # <pre>
- # update_body.json:
- # {
- # 'autoScaling': {
- # 'minNodes': 5
- # }
- # }
- # </pre>
- # HTTP request:
- # <pre>
- # PATCH
- # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
- # -d @./update_body.json
- # </pre>
- },
"serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
"state": "A String", # Output only. The state of a version.
- "pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default
- # version is '2.7'. Python '3.5' is available when `runtime_version` is set
- # to '1.4' and above. Python '2.7' works with all supported runtime versions.
- "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
- # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
- # `XGBOOST`. If you do not specify a framework, AI Platform
- # will analyze files in the deployment_uri to determine a framework. If you
- # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
- # of the model to 1.4 or greater.
- #
- # Do **not** specify a framework if you're deploying a [custom
- # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
"packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
# prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
# or [scikit-learn pipelines with custom
@@ -614,20 +677,223 @@
# information.
#
# When passing Version to
- # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
+ # projects.models.versions.create
# the model service uses the specified location as the source of the model.
# Once deployed, the model version is hosted by the prediction service, so
# this location is useful only as a historical record.
# The total number of model files can't exceed 1000.
- "createTime": "A String", # Output only. The time the version was created.
+ "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
+ # Some explanation features require additional metadata to be loaded
+ # as part of the model payload.
+ # There are two feature attribution methods supported for TensorFlow models:
+ # integrated gradients and sampled Shapley.
+ # [Learn more about feature
+ # attributions.](/ml-engine/docs/ai-explanations/overview)
+      "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage
+          # of the model's fully differentiable structure. Refer to this paper for
+          # more details: https://arxiv.org/abs/1906.02825
+          # Currently only implemented for models with natural image inputs.
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+          # A good value to start with is 50; increase it gradually until the
+          # sum to diff property is met within the desired error range.
+ },
+      "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that
+          # contribute to the label being predicted. A sampling strategy is used to
+          # approximate the value rather than considering all subsets of features.
+ "numPaths": 42, # The number of feature permutations to consider when approximating the
+ # Shapley values.
+ },
+      "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage
+          # of the model's fully differentiable structure. Refer to this paper for
+          # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+          # A good value to start with is 50; increase it gradually until the
+          # sum to diff property is met within the desired error range.
+ },
+ },
"isDefault": True or False, # Output only. If true, this version will be used to handle prediction
# requests that do not specify a version.
#
# You can change the default version by calling
- # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
- "name": "A String", # Required.The name specified for the version when it was created.
+      # projects.models.versions.setDefault.
+ "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
+ # applies to online prediction service. If this field is not specified, it
+ # defaults to `mls1-c1-m2`.
+ #
+ # Online prediction supports the following machine types:
+ #
+ # * `mls1-c1-m2`
+ # * `mls1-c4-m2`
+ # * `n1-standard-2`
+ # * `n1-standard-4`
+ # * `n1-standard-8`
+ # * `n1-standard-16`
+ # * `n1-standard-32`
+ # * `n1-highmem-2`
+ # * `n1-highmem-4`
+ # * `n1-highmem-8`
+ # * `n1-highmem-16`
+ # * `n1-highmem-32`
+ # * `n1-highcpu-2`
+ # * `n1-highcpu-4`
+ # * `n1-highcpu-8`
+ # * `n1-highcpu-16`
+ # * `n1-highcpu-32`
+ #
+ # `mls1-c1-m2` is generally available. All other machine types are available
+ # in beta. Learn more about the [differences between machine
+ # types](/ml-engine/docs/machine-types-online-prediction).
+ "description": "A String", # Optional. The description specified for the version when it was created.
+ "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
+ #
+ # For more information, see the
+ # [runtime version list](/ml-engine/docs/runtime-version-list) and
+ # [how to manage runtime versions](/ml-engine/docs/versioning).
+ "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+ # model. You should generally use `auto_scaling` with an appropriate
+ # `min_nodes` instead, but this option is available if you want more
+      # predictable billing. Beware that latency and error rates will increase
+      # if the traffic exceeds the capacity of the selected number of nodes to
+      # serve it.
+ "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
+ # starting from the time the model is deployed, so the cost of operating
+ # this model will be proportional to `nodes` * number of hours since
+ # last billing cycle plus the cost for each prediction performed.
+ },
+ "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+ "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
+ # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+ # `XGBOOST`. If you do not specify a framework, AI Platform
+ # will analyze files in the deployment_uri to determine a framework. If you
+ # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+ # of the model to 1.4 or greater.
+ #
+ # Do **not** specify a framework if you're deploying a [custom
+ # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ #
+ # If you specify a [Compute Engine (N1) machine
+ # type](/ml-engine/docs/machine-types-online-prediction) in the
+ # `machineType` field, you must specify `TENSORFLOW`
+ # for the framework.
+ "createTime": "A String", # Output only. The time the version was created.
+ "name": "A String", # Required. The name specified for the version when it was created.
#
# The version name must be unique within the model it is created in.
+ "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+ # response to increases and decreases in traffic. Care should be
+ # taken to ramp up traffic according to the model's ability to scale
+ # or you will start seeing increases in latency and 429 response codes.
+ #
+ # Note that you cannot use AutoScaling if your version uses
+      # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must
+      # specify `manual_scaling`.
+ "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+ # nodes are always up, starting from the time the model is deployed.
+ # Therefore, the cost of operating this model will be at least
+ # `rate` * `min_nodes` * number of hours since last billing cycle,
+ # where `rate` is the cost per node-hour as documented in the
+ # [pricing guide](/ml-engine/docs/pricing),
+ # even if no predictions are performed. There is additional cost for each
+ # prediction performed.
+ #
+ # Unlike manual scaling, if the load gets too heavy for the nodes
+ # that are up, the service will automatically add nodes to handle the
+ # increased load as well as scale back as traffic drops, always maintaining
+ # at least `min_nodes`. You will be charged for the time in which additional
+ # nodes are used.
+ #
+ # If `min_nodes` is not specified and AutoScaling is used with a [legacy
+ # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
+ # `min_nodes` defaults to 0, in which case, when traffic to a model stops
+ # (and after a cool-down period), nodes will be shut down and no charges will
+ # be incurred until traffic to the model resumes.
+ #
+ # If `min_nodes` is not specified and AutoScaling is used with a [Compute
+ # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
+ # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
+ # Compute Engine machine type.
+ #
+ # Note that you cannot use AutoScaling if your version uses
+ # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
+ # ManualScaling.
+ #
+ # You can set `min_nodes` when creating the model version, and you can also
+ # update `min_nodes` for an existing version:
+ # <pre>
+ # update_body.json:
+ # {
+ # 'autoScaling': {
+ # 'minNodes': 5
+ # }
+ # }
+ # </pre>
+ # HTTP request:
+ # <pre style="max-width: 626px;">
+ # PATCH
+ # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
+ # -d @./update_body.json
+ # </pre>
+ },
+ "pythonVersion": "A String", # Required. The version of Python used in prediction.
+ #
+ # The following Python versions are available:
+ #
+ # * Python '3.7' is available when `runtime_version` is set to '1.15' or
+ # later.
+ # * Python '3.5' is available when `runtime_version` is set to a version
+ # from '1.4' to '1.14'.
+ # * Python '2.7' is available when `runtime_version` is set to '1.15' or
+ # earlier.
+ #
+ # Read more about the Python versions available for [each runtime
+ # version](/ml-engine/docs/runtime-version-list).
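The compatibility rules above can be condensed into a small helper; this is an illustrative sketch under the assumption that only the 1.x runtime versions listed are in play (`allowed_python_versions` is a made-up name, not an API function):

```python
def allowed_python_versions(runtime_version):
    """Return the Python versions usable with a given 1.x runtime version."""
    major, minor = (int(part) for part in runtime_version.split('.')[:2])
    allowed = []
    if (major, minor) >= (1, 15):
        allowed.append('3.7')  # available for 1.15 and later
    if (1, 4) <= (major, minor) <= (1, 14):
        allowed.append('3.5')  # available for 1.4 through 1.14
    if (major, minor) <= (1, 15):
        allowed.append('2.7')  # available for 1.15 and earlier
    return allowed
```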
+ "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+ # projects.models.versions.patch
+ # request. Specifying it in a
+ # projects.models.versions.create
+ # request has no effect.
+ #
+ # Configures the request-response pair logging on predictions from this
+ # Version.
+ # Online prediction requests to a model version and the responses to these
+ # requests are converted to raw strings and saved to the specified BigQuery
+ # table. Logging is constrained by [BigQuery quotas and
+ # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
+ # AI Platform Prediction does not log request-response pairs, but it continues
+ # to serve predictions.
+ #
+ # If you are using [continuous
+ # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
+ # specify this configuration manually. Setting up continuous evaluation
+ # automatically enables logging of request-response pairs.
+ "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+ # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+ # window is the lifetime of the model version. Defaults to 0.
+ "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
+ # "<var>project_id</var>.<var>dataset_name</var>.<var>table_name</var>"
+ #
+ # The specified table must already exist, and the "Cloud ML Service Agent"
+ # for your project must have permission to write to it. The table must have
+ # the following [schema](/bigquery/docs/schemas):
+ #
+ # <table>
+ # <tr><th>Field name</th><th style="display: table-cell">Type</th>
+ # <th style="display: table-cell">Mode</th></tr>
+ # <tr><td>model</td><td>STRING</td><td>REQUIRED</td></tr>
+ # <tr><td>model_version</td><td>STRING</td><td>REQUIRED</td></tr>
+ # <tr><td>time</td><td>TIMESTAMP</td><td>REQUIRED</td></tr>
+ # <tr><td>raw_data</td><td>STRING</td><td>REQUIRED</td></tr>
+ # <tr><td>raw_prediction</td><td>STRING</td><td>NULLABLE</td></tr>
+ # <tr><td>groundtruth</td><td>STRING</td><td>NULLABLE</td></tr>
+ # </table>
+ },
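As an illustration of wiring this field into a patch request body (the project, dataset, and table names below are placeholders):

```python
# Body for a projects.models.versions.patch request that turns on
# request-response logging; pair it with update_mask=requestLoggingConfig.
version_patch = {
    'requestLoggingConfig': {
        # Log 10% of online prediction requests.
        'samplingPercentage': 0.1,
        # Must name an existing table with the schema documented above.
        'bigqueryTableName': 'my-project.my_dataset.prediction_log',
    },
}
```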
},
"onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to StackDriver
# Logging. These logs are like standard server access logs, containing
@@ -637,9 +903,7 @@
# (QPS). Estimate your costs before enabling this option.
#
# Default is false.
- "name": "A String", # Required. The name specified for the model when it was created.
- #
- # The model name must be unique within the project it is created in.
+ "description": "A String", # Optional. The description specified for the model when it was created.
}</pre>
</div>
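For orientation, these methods are typically invoked through the Python client library. A hedged sketch: `model_resource_name` is an illustrative helper, and the `discovery.build` call (shown only in comments) requires google-api-python-client plus credentials:

```python
def model_resource_name(project_id, model_name):
    """Build the `name` argument expected by projects.models methods."""
    return 'projects/{}/models/{}'.format(project_id, model_name)


# With google-api-python-client (commented out; needs credentials):
#   from googleapiclient import discovery
#   ml = discovery.build('ml', 'v1')
#   ml.projects().models().delete(
#       name=model_resource_name('my-project', 'my_model')).execute()
# The delete succeeds only after every version has been removed via
# projects.models.versions.delete.
```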
@@ -649,7 +913,7 @@
You can only delete a model if there are no versions in it. You can delete
versions by calling
-[projects.models.versions.delete](/ml-engine/reference/rest/v1/projects.models.versions/delete).
+projects.models.versions.delete.
Args:
name: string, Required. The name of the model. (required)
@@ -727,7 +991,9 @@
# A model can have multiple versions, each of which is a deployed, trained
# model ready to receive prediction requests. The model itself is just a
# container.
- "description": "A String", # Optional. The description specified for the model when it was created.
+ "name": "A String", # Required. The name specified for the model when it was created.
+ #
+ # The model name must be unique within the project it is created in.
"onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
# streams to Stackdriver Logging. These can be more verbose than the standard
# access logs (see `onlinePredictionLogging`) and can incur higher cost.
@@ -741,13 +1007,13 @@
# Each label is a key-value pair, where both the key and the value are
# arbitrary strings that you supply.
# For more information, see the documentation on
- # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
"a_key": "A String",
},
"regions": [ # Optional. The list of regions where the model is going to be deployed.
- # Currently only one region per model is supported.
+ # Only one region per model is supported.
# Defaults to 'us-central1' if nothing is set.
- # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+ # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
# for AI Platform services.
# Note:
# * No matter where a model is deployed, it can always be accessed by
@@ -768,53 +1034,32 @@
# handle prediction requests that do not specify a version.
#
# You can change the default version by calling
- # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
+ # projects.models.versions.setDefault.
#
# Each version is a trained model deployed in the cloud, ready to handle
# prediction requests. A model can have multiple versions. You can get
# information about all of the versions of a given model by calling
- # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
- "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+ # projects.models.versions.list.
+ "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+ # Only specify this field if you have specified a Compute Engine (N1) machine
+ # type in the `machineType` field. Learn more about [using GPUs for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+ # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+ # [accelerators for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ "count": "A String", # The number of accelerators to attach to each machine running the job.
+ "type": "A String", # The type of accelerator to use.
+ },
"labels": { # Optional. One or more labels that you can add, to organize your model
# versions. Each label is a key-value pair, where both the key and the value
# are arbitrary strings that you supply.
# For more information, see the documentation on
- # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
"a_key": "A String",
},
- "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
- # applies to online prediction service.
- # <dl>
- # <dt>mls1-c1-m2</dt>
- # <dd>
- # The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated
- # name for this machine type is "mls1-highmem-1".
- # </dd>
- # <dt>mls1-c4-m2</dt>
- # <dd>
- # In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The
- # deprecated name for this machine type is "mls1-highcpu-4".
- # </dd>
- # </dl>
- "description": "A String", # Optional. The description specified for the version when it was created.
- "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment.
- # If not set, AI Platform uses the default stable version, 1.0. For more
- # information, see the
- # [runtime version list](/ml-engine/docs/runtime-version-list) and
- # [how to manage runtime versions](/ml-engine/docs/versioning).
- "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
- # model. You should generally use `auto_scaling` with an appropriate
- # `min_nodes` instead, but this option is available if you want more
- # predictable billing. Beware that latency and error rates will increase
- # if the traffic exceeds that capability of the system to serve it based
- # on the selected number of nodes.
- "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
- # starting from the time the model is deployed, so the cost of operating
- # this model will be proportional to `nodes` * number of hours since
- # last billing cycle plus the cost for each prediction performed.
- },
"predictionClass": "A String", # Optional. The fully qualified name
- # (<var>module_name</var>.<var>class_name</var>) of a class that implements
+ # (<var>module_name</var>.<var>class_name</var>) of a class that implements
# the Predictor interface described in this reference field. The module
# containing this class should be included in a package provided to the
# [`packageUris` field](#Version.FIELDS.package_uris).
@@ -822,11 +1067,13 @@
# Specify this field if and only if you are deploying a [custom prediction
# routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
# If you specify this field, you must set
- # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
+ # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
+ # you must set `machineType` to a [legacy (MLS1)
+ # machine type](/ml-engine/docs/machine-types-online-prediction).
#
# The following code sample provides the Predictor interface:
#
- # ```py
+ # <pre style="max-width: 626px;">
# class Predictor(object):
# """Interface for constructing custom predictors."""
#
@@ -862,64 +1109,12 @@
# An instance implementing this Predictor class.
# """
# raise NotImplementedError()
- # ```
+ # </pre>
#
# Learn more about [the Predictor interface and custom prediction
# routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
- "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
- # response to increases and decreases in traffic. Care should be
- # taken to ramp up traffic according to the model's ability to scale
- # or you will start seeing increases in latency and 429 response codes.
- "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
- # nodes are always up, starting from the time the model is deployed.
- # Therefore, the cost of operating this model will be at least
- # `rate` * `min_nodes` * number of hours since last billing cycle,
- # where `rate` is the cost per node-hour as documented in the
- # [pricing guide](/ml-engine/docs/pricing),
- # even if no predictions are performed. There is additional cost for each
- # prediction performed.
- #
- # Unlike manual scaling, if the load gets too heavy for the nodes
- # that are up, the service will automatically add nodes to handle the
- # increased load as well as scale back as traffic drops, always maintaining
- # at least `min_nodes`. You will be charged for the time in which additional
- # nodes are used.
- #
- # If not specified, `min_nodes` defaults to 0, in which case, when traffic
- # to a model stops (and after a cool-down period), nodes will be shut down
- # and no charges will be incurred until traffic to the model resumes.
- #
- # You can set `min_nodes` when creating the model version, and you can also
- # update `min_nodes` for an existing version:
- # <pre>
- # update_body.json:
- # {
- # 'autoScaling': {
- # 'minNodes': 5
- # }
- # }
- # </pre>
- # HTTP request:
- # <pre>
- # PATCH
- # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
- # -d @./update_body.json
- # </pre>
- },
"serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
"state": "A String", # Output only. The state of a version.
- "pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default
- # version is '2.7'. Python '3.5' is available when `runtime_version` is set
- # to '1.4' and above. Python '2.7' works with all supported runtime versions.
- "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
- # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
- # `XGBOOST`. If you do not specify a framework, AI Platform
- # will analyze files in the deployment_uri to determine a framework. If you
- # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
- # of the model to 1.4 or greater.
- #
- # Do **not** specify a framework if you're deploying a [custom
- # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
"packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
# prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
# or [scikit-learn pipelines with custom
@@ -951,20 +1146,223 @@
# information.
#
# When passing Version to
- # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
+ # projects.models.versions.create
# the model service uses the specified location as the source of the model.
# Once deployed, the model version is hosted by the prediction service, so
# this location is useful only as a historical record.
# The total number of model files can't exceed 1000.
- "createTime": "A String", # Output only. The time the version was created.
+ "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
+ # Some explanation features require additional metadata to be loaded
+ # as part of the model payload.
+ # There are two feature attribution methods supported for TensorFlow models:
+ # integrated gradients and sampled Shapley.
+ # [Learn more about feature
+ # attributions.](/ml-engine/docs/ai-explanations/overview)
+      "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage
+          # of the model's fully differentiable structure. Refer to this paper for
+          # more details: https://arxiv.org/abs/1906.02825
+          # Currently only implemented for models with natural image inputs.
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+          # A good value to start with is 50; increase it gradually until the
+          # sum to diff property is met within the desired error range.
+ },
+      "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that
+          # contribute to the label being predicted. A sampling strategy is used to
+          # approximate the value rather than considering all subsets of features.
+ "numPaths": 42, # The number of feature permutations to consider when approximating the
+ # Shapley values.
+ },
+      "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage
+          # of the model's fully differentiable structure. Refer to this paper for
+          # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+          # A good value to start with is 50; increase it gradually until the
+          # sum to diff property is met within the desired error range.
+ },
+ },
"isDefault": True or False, # Output only. If true, this version will be used to handle prediction
# requests that do not specify a version.
#
# You can change the default version by calling
- # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
- "name": "A String", # Required.The name specified for the version when it was created.
+      # projects.models.versions.setDefault.
+ "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
+ # applies to online prediction service. If this field is not specified, it
+ # defaults to `mls1-c1-m2`.
+ #
+ # Online prediction supports the following machine types:
+ #
+ # * `mls1-c1-m2`
+ # * `mls1-c4-m2`
+ # * `n1-standard-2`
+ # * `n1-standard-4`
+ # * `n1-standard-8`
+ # * `n1-standard-16`
+ # * `n1-standard-32`
+ # * `n1-highmem-2`
+ # * `n1-highmem-4`
+ # * `n1-highmem-8`
+ # * `n1-highmem-16`
+ # * `n1-highmem-32`
+ # * `n1-highcpu-2`
+ # * `n1-highcpu-4`
+ # * `n1-highcpu-8`
+ # * `n1-highcpu-16`
+ # * `n1-highcpu-32`
+ #
+ # `mls1-c1-m2` is generally available. All other machine types are available
+ # in beta. Learn more about the [differences between machine
+ # types](/ml-engine/docs/machine-types-online-prediction).
+ "description": "A String", # Optional. The description specified for the version when it was created.
+ "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
+ #
+ # For more information, see the
+ # [runtime version list](/ml-engine/docs/runtime-version-list) and
+ # [how to manage runtime versions](/ml-engine/docs/versioning).
+ "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+ # model. You should generally use `auto_scaling` with an appropriate
+ # `min_nodes` instead, but this option is available if you want more
+      # predictable billing. Beware that latency and error rates will increase
+      # if the traffic exceeds the capacity of the selected number of nodes to
+      # serve it.
+ "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
+ # starting from the time the model is deployed, so the cost of operating
+ # this model will be proportional to `nodes` * number of hours since
+ # last billing cycle plus the cost for each prediction performed.
+ },
+ "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+ "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
+ # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+ # `XGBOOST`. If you do not specify a framework, AI Platform
+ # will analyze files in the deployment_uri to determine a framework. If you
+ # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+ # of the model to 1.4 or greater.
+ #
+ # Do **not** specify a framework if you're deploying a [custom
+ # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ #
+ # If you specify a [Compute Engine (N1) machine
+ # type](/ml-engine/docs/machine-types-online-prediction) in the
+ # `machineType` field, you must specify `TENSORFLOW`
+ # for the framework.
+ "createTime": "A String", # Output only. The time the version was created.
+ "name": "A String", # Required. The name specified for the version when it was created.
#
# The version name must be unique within the model it is created in.
+ "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+ # response to increases and decreases in traffic. Care should be
+ # taken to ramp up traffic according to the model's ability to scale
+ # or you will start seeing increases in latency and 429 response codes.
+ #
+ # Note that you cannot use AutoScaling if your version uses
+ # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
+ # `manual_scaling`.
+ "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+ # nodes are always up, starting from the time the model is deployed.
+ # Therefore, the cost of operating this model will be at least
+ # `rate` * `min_nodes` * number of hours since last billing cycle,
+ # where `rate` is the cost per node-hour as documented in the
+ # [pricing guide](/ml-engine/docs/pricing),
+ # even if no predictions are performed. There is additional cost for each
+ # prediction performed.
+ #
+ # Unlike manual scaling, if the load gets too heavy for the nodes
+ # that are up, the service will automatically add nodes to handle the
+ # increased load as well as scale back as traffic drops, always maintaining
+ # at least `min_nodes`. You will be charged for the time in which additional
+ # nodes are used.
+ #
+ # If `min_nodes` is not specified and AutoScaling is used with a [legacy
+ # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
+ # `min_nodes` defaults to 0, in which case, when traffic to a model stops
+ # (and after a cool-down period), nodes will be shut down and no charges will
+ # be incurred until traffic to the model resumes.
+ #
+ # If `min_nodes` is not specified and AutoScaling is used with a [Compute
+ # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
+ # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
+ # Compute Engine machine type.
+ #
+ # Note that you cannot use AutoScaling if your version uses
+ # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
+ # ManualScaling.
+ #
+ # You can set `min_nodes` when creating the model version, and you can also
+ # update `min_nodes` for an existing version:
+ # <pre>
+ # update_body.json:
+ # {
+ # 'autoScaling': {
+ # 'minNodes': 5
+ # }
+ # }
+ # </pre>
+ # HTTP request:
+ # <pre style="max-width: 626px;">
+ # PATCH
+ # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
+ # -d @./update_body.json
+ # </pre>
+ },
+ "pythonVersion": "A String", # Required. The version of Python used in prediction.
+ #
+ # The following Python versions are available:
+ #
+ # * Python '3.7' is available when `runtime_version` is set to '1.15' or
+ # later.
+ # * Python '3.5' is available when `runtime_version` is set to a version
+ # from '1.4' to '1.14'.
+ # * Python '2.7' is available when `runtime_version` is set to '1.15' or
+ # earlier.
+ #
+ # Read more about the Python versions available for [each runtime
+ # version](/ml-engine/docs/runtime-version-list).
+ "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+ # projects.models.versions.patch
+ # request. Specifying it in a
+ # projects.models.versions.create
+ # request has no effect.
+ #
+ # Configures the request-response pair logging on predictions from this
+ # Version.
+ # Online prediction requests to a model version and the responses to these
+ # requests are converted to raw strings and saved to the specified BigQuery
+ # table. Logging is constrained by [BigQuery quotas and
+ # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
+ # AI Platform Prediction does not log request-response pairs, but it continues
+ # to serve predictions.
+ #
+ # If you are using [continuous
+ # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
+ # specify this configuration manually. Setting up continuous evaluation
+ # automatically enables logging of request-response pairs.
+ "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+ # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+ # window is the lifetime of the model version. Defaults to 0.
+ "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
+ # "<var>project_id</var>.<var>dataset_name</var>.<var>table_name</var>"
+ #
+ # The specified table must already exist, and the "Cloud ML Service Agent"
+ # for your project must have permission to write to it. The table must have
+ # the following [schema](/bigquery/docs/schemas):
+ #
+ # <table>
+ # <tr><th>Field name</th><th style="display: table-cell">Type</th>
+ # <th style="display: table-cell">Mode</th></tr>
+ # <tr><td>model</td><td>STRING</td><td>REQUIRED</td></tr>
+ # <tr><td>model_version</td><td>STRING</td><td>REQUIRED</td></tr>
+ # <tr><td>time</td><td>TIMESTAMP</td><td>REQUIRED</td></tr>
+ # <tr><td>raw_data</td><td>STRING</td><td>REQUIRED</td></tr>
+ # <tr><td>raw_prediction</td><td>STRING</td><td>NULLABLE</td></tr>
+ # <tr><td>groundtruth</td><td>STRING</td><td>NULLABLE</td></tr>
+ # </table>
+ },
},
"onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to Stackdriver
# Logging. These logs are like standard server access logs, containing
@@ -974,14 +1372,12 @@
# (QPS). Estimate your costs before enabling this option.
#
# Default is false.
- "name": "A String", # Required. The name specified for the model when it was created.
- #
- # The model name must be unique within the project it is created in.
+ "description": "A String", # Optional. The description specified for the model when it was created.
}</pre>
</div>
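The autoscaling update shown above (a PATCH with `update_mask=autoScaling.minNodes` and the `update_body.json` payload) can be sketched as a small helper that builds the request body and field mask. The project, model, and version names below are hypothetical, and the client call is commented out because it requires credentials; this is a sketch of the documented request shape, not part of the library.

```python
def min_nodes_update(min_nodes):
    """Build the body and update mask for patching a version's autoscaling.

    Mirrors the documented request:
      PATCH .../versions/*?update_mask=autoScaling.minNodes -d @./update_body.json
    """
    body = {'autoScaling': {'minNodes': min_nodes}}
    update_mask = 'autoScaling.minNodes'
    return body, update_mask

body, mask = min_nodes_update(5)
# With google-api-python-client (hypothetical names, requires credentials):
# from googleapiclient import discovery
# ml = discovery.build('ml', 'v1')
# ml.projects().models().versions().patch(
#     name='projects/my-project/models/my_model/versions/v1',
#     body=body, updateMask=mask).execute()
```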
<div class="method">
- <code class="details" id="getIamPolicy">getIamPolicy(resource, x__xgafv=None)</code>
+ <code class="details" id="getIamPolicy">getIamPolicy(resource, options_requestedPolicyVersion=None, x__xgafv=None)</code>
<pre>Gets the access control policy for a resource.
Returns an empty policy if the resource exists and does not have a policy
set.
@@ -989,6 +1385,14 @@
Args:
resource: string, REQUIRED: The resource for which the policy is being requested.
See the operation documentation for the appropriate value for this field. (required)
+ options_requestedPolicyVersion: integer, Optional. The policy format version to be returned.
+
+Valid values are 0, 1, and 3. Requests specifying an invalid value will be
+rejected.
+
+Requests for policies with any conditional bindings must specify version 3.
+Policies without any conditional bindings may specify any valid value or
+leave the field unset.
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
@@ -997,53 +1401,72 @@
Returns:
An object of the form:
- { # Defines an Identity and Access Management (IAM) policy. It is used to
- # specify access control policies for Cloud Platform resources.
+ { # An Identity and Access Management (IAM) policy, which specifies access
+ # controls for Google Cloud resources.
#
#
- # A `Policy` consists of a list of `bindings`. A `binding` binds a list of
- # `members` to a `role`, where the members can be user accounts, Google groups,
- # Google domains, and service accounts. A `role` is a named list of permissions
- # defined by IAM.
+ # A `Policy` is a collection of `bindings`. A `binding` binds one or more
+ # `members` to a single `role`. Members can be user accounts, service accounts,
+ # Google groups, and domains (such as G Suite). A `role` is a named list of
+ # permissions; each `role` can be an IAM predefined role or a user-created
+ # custom role.
#
- # **JSON Example**
+ # Optionally, a `binding` can specify a `condition`, which is a logical
+ # expression that allows access to a resource only if the expression evaluates
+ # to `true`. A condition can add constraints based on attributes of the
+ # request, the resource, or both.
+ #
+ # **JSON example:**
#
# {
# "bindings": [
# {
- # "role": "roles/owner",
+ # "role": "roles/resourcemanager.organizationAdmin",
# "members": [
# "user:mike@example.com",
# "group:admins@example.com",
# "domain:google.com",
- # "serviceAccount:my-other-app@appspot.gserviceaccount.com"
+ # "serviceAccount:my-project-id@appspot.gserviceaccount.com"
# ]
# },
# {
- # "role": "roles/viewer",
- # "members": ["user:sean@example.com"]
+ # "role": "roles/resourcemanager.organizationViewer",
+ # "members": ["user:eve@example.com"],
+ # "condition": {
+ # "title": "expirable access",
+ # "description": "Does not grant access after Sep 2020",
+ # "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')",
+ # }
# }
- # ]
+ # ],
+ # "etag": "BwWWja0YfJA=",
+ # "version": 3
# }
#
- # **YAML Example**
+ # **YAML example:**
#
# bindings:
# - members:
# - user:mike@example.com
# - group:admins@example.com
# - domain:google.com
- # - serviceAccount:my-other-app@appspot.gserviceaccount.com
- # role: roles/owner
+ # - serviceAccount:my-project-id@appspot.gserviceaccount.com
+ # role: roles/resourcemanager.organizationAdmin
# - members:
- # - user:sean@example.com
- # role: roles/viewer
- #
+ # - user:eve@example.com
+ # role: roles/resourcemanager.organizationViewer
+ # condition:
+ # title: expirable access
+ # description: Does not grant access after Sep 2020
+ # expression: request.time < timestamp('2020-10-01T00:00:00.000Z')
+ # etag: BwWWja0YfJA=
+ # version: 3
#
# For a description of IAM and its features, see the
- # [IAM developer's guide](https://cloud.google.com/iam/docs).
- "bindings": [ # Associates a list of `members` to a `role`.
- # `bindings` with no members will result in an error.
+ # [IAM documentation](https://cloud.google.com/iam/docs/).
+ "bindings": [ # Associates a list of `members` to a `role`. Optionally, may specify a
+ # `condition` that determines how and when the `bindings` are applied. Each
+ # of the `bindings` must contain at least one member.
{ # Associates `members` with a `role`.
"role": "A String", # Role that is assigned to `members`.
# For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
@@ -1057,7 +1480,7 @@
# who is authenticated with a Google account or a service account.
#
# * `user:{emailid}`: An email address that represents a specific Google
- # account. For example, `alice@gmail.com` .
+ # account. For example, `alice@example.com`.
#
#
# * `serviceAccount:{emailid}`: An email address that represents a service
@@ -1066,46 +1489,78 @@
# * `group:{emailid}`: An email address that represents a Google group.
# For example, `admins@example.com`.
#
+ # * `deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique
+ # identifier) representing a user that has been recently deleted. For
+ # example, `alice@example.com?uid=123456789012345678901`. If the user is
+ # recovered, this value reverts to `user:{emailid}` and the recovered user
+ # retains the role in the binding.
+ #
+ # * `deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address (plus
+ # unique identifier) representing a service account that has been recently
+ # deleted. For example,
+ # `my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901`.
+ # If the service account is undeleted, this value reverts to
+ # `serviceAccount:{emailid}` and the undeleted service account retains the
+ # role in the binding.
+ #
+ # * `deleted:group:{emailid}?uid={uniqueid}`: An email address (plus unique
+ # identifier) representing a Google group that has been recently
+ # deleted. For example, `admins@example.com?uid=123456789012345678901`. If
+ # the group is recovered, this value reverts to `group:{emailid}` and the
+ # recovered group retains the role in the binding.
+ #
#
# * `domain:{domain}`: The G Suite domain (primary) that represents all the
# users of that domain. For example, `google.com` or `example.com`.
#
"A String",
],
- "condition": { # Represents an expression text. Example: # The condition that is associated with this binding.
+ "condition": { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
# NOTE: An unsatisfied condition will not allow user access via current
# binding. Different bindings, including their conditions, are examined
# independently.
+ # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
+ # are documented at https://github.com/google/cel-spec.
#
- # title: "User account presence"
- # description: "Determines whether the request has a user account"
- # expression: "size(request.user) > 0"
- "description": "A String", # An optional description of the expression. This is a longer text which
+ # Example (Comparison):
+ #
+ # title: "Summary size limit"
+ # description: "Determines if a summary is less than 100 chars"
+ # expression: "document.summary.size() < 100"
+ #
+ # Example (Equality):
+ #
+ # title: "Requestor is owner"
+ # description: "Determines if requestor is the document owner"
+ # expression: "document.owner == request.auth.claims.email"
+ #
+ # Example (Logic):
+ #
+ # title: "Public documents"
+ # description: "Determine whether the document should be publicly visible"
+ # expression: "document.type != 'private' && document.type != 'internal'"
+ #
+ # Example (Data Manipulation):
+ #
+ # title: "Notification string"
+ # description: "Create a notification string with a timestamp."
+ # expression: "'New message received at ' + string(document.create_time)"
+ #
+ # The exact variables and functions that may be referenced within an expression
+ # are determined by the service that evaluates it. See the service
+ # documentation for additional information.
+ "description": "A String", # Optional. Description of the expression. This is a longer text which
# describes the expression, e.g. when hovered over it in a UI.
- "expression": "A String", # Textual representation of an expression in
- # Common Expression Language syntax.
- #
- # The application context of the containing message determines which
- # well-known feature set of CEL is supported.
- "location": "A String", # An optional string indicating the location of the expression for error
+ "expression": "A String", # Textual representation of an expression in Common Expression Language
+ # syntax.
+ "location": "A String", # Optional. String indicating the location of the expression for error
# reporting, e.g. a file name and a position in the file.
- "title": "A String", # An optional title for the expression, i.e. a short string describing
+ "title": "A String", # Optional. Title for the expression, i.e. a short string describing
# its purpose. This can be used e.g. in UIs which allow to enter the
# expression.
},
},
],
- "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
- # prevent simultaneous updates of a policy from overwriting each other.
- # It is strongly suggested that systems make use of the `etag` in the
- # read-modify-write cycle to perform policy updates in order to avoid race
- # conditions: An `etag` is returned in the response to `getIamPolicy`, and
- # systems are expected to put that etag in the request to `setIamPolicy` to
- # ensure that their change will be applied to the same version of the policy.
- #
- # If no `etag` is provided in the call to `setIamPolicy`, then the existing
- # policy is overwritten blindly.
- "version": 42, # Deprecated.
"auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
{ # Specifies the audit configuration for a service.
# The configuration determines which permission types are logged, and what
@@ -1127,7 +1582,7 @@
# {
# "log_type": "DATA_READ",
# "exempted_members": [
- # "user:foo@gmail.com"
+ # "user:jose@example.com"
# ]
# },
# {
@@ -1139,7 +1594,7 @@
# ]
# },
# {
- # "service": "fooservice.googleapis.com"
+ # "service": "sampleservice.googleapis.com"
# "audit_log_configs": [
# {
# "log_type": "DATA_READ",
@@ -1147,7 +1602,7 @@
# {
# "log_type": "DATA_WRITE",
# "exempted_members": [
- # "user:bar@gmail.com"
+ # "user:aliya@example.com"
# ]
# }
# ]
@@ -1155,9 +1610,9 @@
# ]
# }
#
- # For fooservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
- # logging. It also exempts foo@gmail.com from DATA_READ logging, and
- # bar@gmail.com from DATA_WRITE logging.
+ # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
+ # logging. It also exempts jose@example.com from DATA_READ logging, and
+ # aliya@example.com from DATA_WRITE logging.
"auditLogConfigs": [ # The configuration for logging of each type of permission.
{ # Provides the configuration for logging a type of permissions.
# Example:
@@ -1167,7 +1622,7 @@
# {
# "log_type": "DATA_READ",
# "exempted_members": [
- # "user:foo@gmail.com"
+ # "user:jose@example.com"
# ]
# },
# {
@@ -1177,7 +1632,7 @@
# }
#
# This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
- # foo@gmail.com from DATA_READ logging.
+ # jose@example.com from DATA_READ logging.
"exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
# permission.
# Follows the same format of Binding.members.
@@ -1191,11 +1646,44 @@
# `allServices` is a special value that covers all services.
},
],
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a policy from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform policy updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `getIamPolicy`, and
+ # systems are expected to put that etag in the request to `setIamPolicy` to
+ # ensure that their change will be applied to the same version of the policy.
+ #
+ # **Important:** If you use IAM Conditions, you must include the `etag` field
+ # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
+ # you to overwrite a version `3` policy with a version `1` policy, and all of
+ # the conditions in the version `3` policy are lost.
+ "version": 42, # Specifies the format of the policy.
+ #
+ # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
+ # are rejected.
+ #
+ # Any operation that affects conditional role bindings must specify version
+ # `3`. This requirement applies to the following operations:
+ #
+ # * Getting a policy that includes a conditional role binding
+ # * Adding a conditional role binding to a policy
+ # * Changing a conditional role binding in a policy
+ # * Removing any role binding, with or without a condition, from a policy
+ # that includes conditions
+ #
+ # **Important:** If you use IAM Conditions, you must include the `etag` field
+ # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
+ # you to overwrite a version `3` policy with a version `1` policy, and all of
+ # the conditions in the version `3` policy are lost.
+ #
+ # If a policy does not include any conditions, operations on that policy may
+ # specify any valid version or leave the field unset.
}</pre>
</div>
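The `version` and `etag` rules described above can be illustrated with a small read-modify-write helper: it carries the `etag` forward so `setIamPolicy` can reject concurrent updates, and it sets the policy to version 3 whenever any binding has a condition. This is a sketch of the documented semantics, not part of the client library.

```python
def add_binding(policy, role, members, condition=None):
    """Return a new policy dict with one binding appended (read-modify-write).

    `policy` is assumed to come from getIamPolicy; the caller sends the result
    back via setIamPolicy. The etag is preserved so simultaneous updates fail
    instead of silently overwriting each other.
    """
    binding = {'role': role, 'members': list(members)}
    if condition:
        binding['condition'] = condition
    updated = {
        'bindings': list(policy.get('bindings', [])) + [binding],
        'etag': policy.get('etag'),  # required with IAM Conditions
    }
    # Any policy that contains a conditional binding must specify version 3.
    has_condition = any('condition' in b for b in updated['bindings'])
    updated['version'] = 3 if has_condition else policy.get('version', 1)
    return updated
```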
<div class="method">
- <code class="details" id="list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</code>
+ <code class="details" id="list">list(parent, pageSize=None, pageToken=None, x__xgafv=None, filter=None)</code>
<pre>Lists the models in a project.
Each project can contain multiple models, and each model can have multiple
@@ -1206,6 +1694,11 @@
Args:
parent: string, Required. The name of the project whose models are to be listed. (required)
+ pageSize: integer, Optional. The number of models to retrieve per "page" of results. If there
+are more remaining results than this number, the response message will
+contain a valid value in the `next_page_token` field.
+
+The default value is 20, and the maximum page size is 100.
pageToken: string, Optional. A page token to request the next page of results.
You get the token from the `next_page_token` field of the response from
@@ -1214,11 +1707,6 @@
Allowed values
1 - v1 error format
2 - v2 error format
- pageSize: integer, Optional. The number of models to retrieve per "page" of results. If there
-are more remaining results than this number, the response message will
-contain a valid value in the `next_page_token` field.
-
-The default value is 20, and the maximum page size is 100.
filter: string, Optional. Specifies the subset of models to retrieve.
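The `pageSize`/`pageToken` contract described in the arguments above can be exercised with a generic pager. `fetch_page` below stands in for the `list()`/`list_next()` pair, and the response field names follow the documented schema (`models`, `nextPageToken`); a sketch, not the client library's own paging logic.

```python
def list_all_models(fetch_page, page_size=20):
    """Collect models across pages, following nextPageToken until exhausted."""
    models, token = [], None
    while True:
        resp = fetch_page(pageSize=page_size, pageToken=token)
        models.extend(resp.get('models', []))
        token = resp.get('nextPageToken')
        if not token:  # an absent or empty token marks the last page
            return models
```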
Returns:
@@ -1233,7 +1721,9 @@
# A model can have multiple versions, each of which is a deployed, trained
# model ready to receive prediction requests. The model itself is just a
# container.
- "description": "A String", # Optional. The description specified for the model when it was created.
+ "name": "A String", # Required. The name specified for the model when it was created.
+ #
+ # The model name must be unique within the project it is created in.
"onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
# streams to Stackdriver Logging. These can be more verbose than the standard
# access logs (see `onlinePredictionLogging`) and can incur higher cost.
@@ -1247,13 +1737,13 @@
# Each label is a key-value pair, where both the key and the value are
# arbitrary strings that you supply.
# For more information, see the documentation on
- # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
"a_key": "A String",
},
"regions": [ # Optional. The list of regions where the model is going to be deployed.
- # Currently only one region per model is supported.
+ # Only one region per model is supported.
# Defaults to 'us-central1' if nothing is set.
- # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+ # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
# for AI Platform services.
# Note:
# * No matter where a model is deployed, it can always be accessed by
@@ -1274,53 +1764,32 @@
# handle prediction requests that do not specify a version.
#
# You can change the default version by calling
- # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
+ # projects.models.versions.setDefault.
#
# Each version is a trained model deployed in the cloud, ready to handle
# prediction requests. A model can have multiple versions. You can get
# information about all of the versions of a given model by calling
- # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
- "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+ # projects.models.versions.list.
+ "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+ # Only specify this field if you have specified a Compute Engine (N1) machine
+ # type in the `machineType` field. Learn more about [using GPUs for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+ # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+ # [accelerators for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ "count": "A String", # The number of accelerators to attach to each machine running the job.
+ "type": "A String", # The type of accelerator to use.
+ },
"labels": { # Optional. One or more labels that you can add, to organize your model
# versions. Each label is a key-value pair, where both the key and the value
# are arbitrary strings that you supply.
# For more information, see the documentation on
- # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
"a_key": "A String",
},
- "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
- # applies to online prediction service.
- # <dl>
- # <dt>mls1-c1-m2</dt>
- # <dd>
- # The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated
- # name for this machine type is "mls1-highmem-1".
- # </dd>
- # <dt>mls1-c4-m2</dt>
- # <dd>
- # In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The
- # deprecated name for this machine type is "mls1-highcpu-4".
- # </dd>
- # </dl>
- "description": "A String", # Optional. The description specified for the version when it was created.
- "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment.
- # If not set, AI Platform uses the default stable version, 1.0. For more
- # information, see the
- # [runtime version list](/ml-engine/docs/runtime-version-list) and
- # [how to manage runtime versions](/ml-engine/docs/versioning).
- "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
- # model. You should generally use `auto_scaling` with an appropriate
- # `min_nodes` instead, but this option is available if you want more
- # predictable billing. Beware that latency and error rates will increase
- # if the traffic exceeds that capability of the system to serve it based
- # on the selected number of nodes.
- "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
- # starting from the time the model is deployed, so the cost of operating
- # this model will be proportional to `nodes` * number of hours since
- # last billing cycle plus the cost for each prediction performed.
- },
"predictionClass": "A String", # Optional. The fully qualified name
- # (<var>module_name</var>.<var>class_name</var>) of a class that implements
+ # (<var>module_name</var>.<var>class_name</var>) of a class that implements
# the Predictor interface described in this reference field. The module
# containing this class should be included in a package provided to the
# [`packageUris` field](#Version.FIELDS.package_uris).
@@ -1328,11 +1797,13 @@
# Specify this field if and only if you are deploying a [custom prediction
# routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
# If you specify this field, you must set
- # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
+ # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
+ # you must set `machineType` to a [legacy (MLS1)
+ # machine type](/ml-engine/docs/machine-types-online-prediction).
#
# The following code sample provides the Predictor interface:
#
- # ```py
+ # <pre style="max-width: 626px;">
# class Predictor(object):
# """Interface for constructing custom predictors."""
#
@@ -1368,64 +1839,12 @@
# An instance implementing this Predictor class.
# """
# raise NotImplementedError()
- # ```
+ # </pre>
#
# Learn more about [the Predictor interface and custom prediction
# routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
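As a companion to the Predictor interface referenced above, a minimal implementation might look like the following. The model-loading step is purely illustrative: a real `from_path` would load serialized artifacts (for example a pickled estimator) from `model_dir`, whereas this one installs a stand-in function.

```python
class TimesTwoPredictor(object):
    """Illustrative Predictor: returns double of each numeric instance."""

    def __init__(self, model):
        self._model = model

    def predict(self, instances, **kwargs):
        """Return one prediction per input instance."""
        return [self._model(instance) for instance in instances]

    @classmethod
    def from_path(cls, model_dir):
        """Construct the predictor. A real implementation would load
        artifacts from model_dir (e.g. with pickle or joblib); here a
        stand-in callable is used instead."""
        return cls(model=lambda x: 2 * x)
```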
- "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
- # response to increases and decreases in traffic. Care should be
- # taken to ramp up traffic according to the model's ability to scale
- # or you will start seeing increases in latency and 429 response codes.
- "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
- # nodes are always up, starting from the time the model is deployed.
- # Therefore, the cost of operating this model will be at least
- # `rate` * `min_nodes` * number of hours since last billing cycle,
- # where `rate` is the cost per node-hour as documented in the
- # [pricing guide](/ml-engine/docs/pricing),
- # even if no predictions are performed. There is additional cost for each
- # prediction performed.
- #
- # Unlike manual scaling, if the load gets too heavy for the nodes
- # that are up, the service will automatically add nodes to handle the
- # increased load as well as scale back as traffic drops, always maintaining
- # at least `min_nodes`. You will be charged for the time in which additional
- # nodes are used.
- #
- # If not specified, `min_nodes` defaults to 0, in which case, when traffic
- # to a model stops (and after a cool-down period), nodes will be shut down
- # and no charges will be incurred until traffic to the model resumes.
- #
- # You can set `min_nodes` when creating the model version, and you can also
- # update `min_nodes` for an existing version:
- # <pre>
- # update_body.json:
- # {
- # 'autoScaling': {
- # 'minNodes': 5
- # }
- # }
- # </pre>
- # HTTP request:
- # <pre>
- # PATCH
- # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
- # -d @./update_body.json
- # </pre>
- },
"serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
"state": "A String", # Output only. The state of a version.
- "pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default
- # version is '2.7'. Python '3.5' is available when `runtime_version` is set
- # to '1.4' and above. Python '2.7' works with all supported runtime versions.
- "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
- # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
- # `XGBOOST`. If you do not specify a framework, AI Platform
- # will analyze files in the deployment_uri to determine a framework. If you
- # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
- # of the model to 1.4 or greater.
- #
- # Do **not** specify a framework if you're deploying a [custom
- # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
"packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
# prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
# or [scikit-learn pipelines with custom
@@ -1457,20 +1876,223 @@
# information.
#
# When passing Version to
- # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
+ # projects.models.versions.create
# the model service uses the specified location as the source of the model.
# Once deployed, the model version is hosted by the prediction service, so
# this location is useful only as a historical record.
# The total number of model files can't exceed 1000.
- "createTime": "A String", # Output only. The time the version was created.
+ "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
+ # Some explanation features require additional metadata to be loaded
+ # as part of the model payload.
+ # There are two feature attribution methods supported for TensorFlow models:
+ # integrated gradients and sampled Shapley.
+ # [Learn more about feature
+ # attributions.](/ml-engine/docs/ai-explanations/overview)
+ "xraiAttribution": { # Attributes credit by computing the XRAI, taking advantage
+ # of the model's fully differentiable structure. Refer to this paper for
+ # more details: https://arxiv.org/abs/1906.02825
+ # Currently only implemented for models with natural image inputs.
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+            # A good starting value is 50; increase it gradually until the
+            # sum-to-diff property is met within the desired error range.
+ },
+        "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that
+            # contribute to the label being predicted. A sampling strategy is used to
+            # approximate the value rather than considering all subsets of features.
+ "numPaths": 42, # The number of feature permutations to consider when approximating the
+ # Shapley values.
+ },
+        "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage
+            # of the model's fully differentiable structure. Refer to this paper for
+            # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+            # A good starting value is 50; increase it gradually until the
+            # sum-to-diff property is met within the desired error range.
+ },
+ },
"isDefault": True or False, # Output only. If true, this version will be used to handle prediction
# requests that do not specify a version.
#
# You can change the default version by calling
- # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
- "name": "A String", # Required.The name specified for the version when it was created.
+      # projects.models.versions.setDefault.
+ "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
+ # applies to online prediction service. If this field is not specified, it
+ # defaults to `mls1-c1-m2`.
+ #
+ # Online prediction supports the following machine types:
+ #
+ # * `mls1-c1-m2`
+ # * `mls1-c4-m2`
+ # * `n1-standard-2`
+ # * `n1-standard-4`
+ # * `n1-standard-8`
+ # * `n1-standard-16`
+ # * `n1-standard-32`
+ # * `n1-highmem-2`
+ # * `n1-highmem-4`
+ # * `n1-highmem-8`
+ # * `n1-highmem-16`
+ # * `n1-highmem-32`
+ # * `n1-highcpu-2`
+ # * `n1-highcpu-4`
+ # * `n1-highcpu-8`
+ # * `n1-highcpu-16`
+ # * `n1-highcpu-32`
+ #
+ # `mls1-c1-m2` is generally available. All other machine types are available
+ # in beta. Learn more about the [differences between machine
+ # types](/ml-engine/docs/machine-types-online-prediction).
+ "description": "A String", # Optional. The description specified for the version when it was created.
+ "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
+ #
+ # For more information, see the
+ # [runtime version list](/ml-engine/docs/runtime-version-list) and
+ # [how to manage runtime versions](/ml-engine/docs/versioning).
+ "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+ # model. You should generally use `auto_scaling` with an appropriate
+ # `min_nodes` instead, but this option is available if you want more
+ # predictable billing. Beware that latency and error rates will increase
+        # if the traffic exceeds the capability of the system to serve it based
+ # on the selected number of nodes.
+ "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
+ # starting from the time the model is deployed, so the cost of operating
+ # this model will be proportional to `nodes` * number of hours since
+ # last billing cycle plus the cost for each prediction performed.
+ },
+ "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+ "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
+ # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+ # `XGBOOST`. If you do not specify a framework, AI Platform
+ # will analyze files in the deployment_uri to determine a framework. If you
+ # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+ # of the model to 1.4 or greater.
+ #
+ # Do **not** specify a framework if you're deploying a [custom
+ # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ #
+ # If you specify a [Compute Engine (N1) machine
+ # type](/ml-engine/docs/machine-types-online-prediction) in the
+ # `machineType` field, you must specify `TENSORFLOW`
+ # for the framework.
+ "createTime": "A String", # Output only. The time the version was created.
+ "name": "A String", # Required. The name specified for the version when it was created.
#
# The version name must be unique within the model it is created in.
+ "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+ # response to increases and decreases in traffic. Care should be
+ # taken to ramp up traffic according to the model's ability to scale
+ # or you will start seeing increases in latency and 429 response codes.
+ #
+ # Note that you cannot use AutoScaling if your version uses
+      # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
+ # `manual_scaling`.
+ "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+ # nodes are always up, starting from the time the model is deployed.
+ # Therefore, the cost of operating this model will be at least
+ # `rate` * `min_nodes` * number of hours since last billing cycle,
+ # where `rate` is the cost per node-hour as documented in the
+ # [pricing guide](/ml-engine/docs/pricing),
+ # even if no predictions are performed. There is additional cost for each
+ # prediction performed.
+ #
+ # Unlike manual scaling, if the load gets too heavy for the nodes
+ # that are up, the service will automatically add nodes to handle the
+ # increased load as well as scale back as traffic drops, always maintaining
+ # at least `min_nodes`. You will be charged for the time in which additional
+ # nodes are used.
+ #
+ # If `min_nodes` is not specified and AutoScaling is used with a [legacy
+ # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
+ # `min_nodes` defaults to 0, in which case, when traffic to a model stops
+ # (and after a cool-down period), nodes will be shut down and no charges will
+ # be incurred until traffic to the model resumes.
+ #
+ # If `min_nodes` is not specified and AutoScaling is used with a [Compute
+ # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
+ # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
+ # Compute Engine machine type.
+ #
+ # Note that you cannot use AutoScaling if your version uses
+ # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
+ # ManualScaling.
+ #
+ # You can set `min_nodes` when creating the model version, and you can also
+ # update `min_nodes` for an existing version:
+ # <pre>
+ # update_body.json:
+      # {
+      #   "autoScaling": {
+      #     "minNodes": 5
+ # }
+ # }
+ # </pre>
+ # HTTP request:
+ # <pre style="max-width: 626px;">
+ # PATCH
+ # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
+ # -d @./update_body.json
+ # </pre>
+ },
+ "pythonVersion": "A String", # Required. The version of Python used in prediction.
+ #
+ # The following Python versions are available:
+ #
+ # * Python '3.7' is available when `runtime_version` is set to '1.15' or
+ # later.
+ # * Python '3.5' is available when `runtime_version` is set to a version
+ # from '1.4' to '1.14'.
+ # * Python '2.7' is available when `runtime_version` is set to '1.15' or
+ # earlier.
+ #
+ # Read more about the Python versions available for [each runtime
+ # version](/ml-engine/docs/runtime-version-list).
+ "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+ # projects.models.versions.patch
+ # request. Specifying it in a
+ # projects.models.versions.create
+ # request has no effect.
+ #
+ # Configures the request-response pair logging on predictions from this
+ # Version.
+ # Online prediction requests to a model version and the responses to these
+ # requests are converted to raw strings and saved to the specified BigQuery
+ # table. Logging is constrained by [BigQuery quotas and
+ # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
+ # AI Platform Prediction does not log request-response pairs, but it continues
+ # to serve predictions.
+ #
+ # If you are using [continuous
+ # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
+ # specify this configuration manually. Setting up continuous evaluation
+ # automatically enables logging of request-response pairs.
+ "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+ # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+ # window is the lifetime of the model version. Defaults to 0.
+ "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
+ # "<var>project_id</var>.<var>dataset_name</var>.<var>table_name</var>"
+ #
+ # The specified table must already exist, and the "Cloud ML Service Agent"
+ # for your project must have permission to write to it. The table must have
+ # the following [schema](/bigquery/docs/schemas):
+ #
+ # <table>
+ # <tr><th>Field name</th><th style="display: table-cell">Type</th>
+ # <th style="display: table-cell">Mode</th></tr>
+ # <tr><td>model</td><td>STRING</td><td>REQUIRED</td></tr>
+ # <tr><td>model_version</td><td>STRING</td><td>REQUIRED</td></tr>
+ # <tr><td>time</td><td>TIMESTAMP</td><td>REQUIRED</td></tr>
+ # <tr><td>raw_data</td><td>STRING</td><td>REQUIRED</td></tr>
+ # <tr><td>raw_prediction</td><td>STRING</td><td>NULLABLE</td></tr>
+ # <tr><td>groundtruth</td><td>STRING</td><td>NULLABLE</td></tr>
+ # </table>
+ },
},
    "onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to Stackdriver
# Logging. These logs are like standard server access logs, containing
@@ -1480,9 +2102,7 @@
# (QPS). Estimate your costs before enabling this option.
#
# Default is false.
- "name": "A String", # Required. The name specified for the model when it was created.
- #
- # The model name must be unique within the project it is created in.
+ "description": "A String", # Optional. The description specified for the model when it was created.
},
],
}</pre>
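The `autoScaling.minNodes` PATCH example shown in the Version schema above can also be issued through the Python client library. The following is a minimal sketch, assuming `google-api-python-client` is installed and application-default credentials are configured; the project, model, and version names are placeholders:

```python
def build_min_nodes_patch(min_nodes):
    """Build the request body and update mask for an autoScaling.minNodes update.

    Mirrors the update_body.json / PATCH example in the reference above.
    """
    body = {"autoScaling": {"minNodes": min_nodes}}
    update_mask = "autoScaling.minNodes"
    return body, update_mask


def patch_version(project, model, version, min_nodes):
    """Sketch: apply the patch via the discovery-based client (placeholder names)."""
    from googleapiclient import discovery  # imported here; requires credentials to run

    ml = discovery.build("ml", "v1")
    body, update_mask = build_min_nodes_patch(min_nodes)
    name = "projects/{}/models/{}/versions/{}".format(project, model, version)
    # Returns a long-running Operation resource describing the update.
    return ml.projects().models().versions().patch(
        name=name, body=body, updateMask=update_mask).execute()
```

`build_min_nodes_patch` is pure and can be reused for other single-field masks; only `patch_version` touches the network.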
@@ -1503,7 +2123,7 @@
</div>
<div class="method">
- <code class="details" id="patch">patch(name, body, updateMask=None, x__xgafv=None)</code>
+ <code class="details" id="patch">patch(name, body=None, updateMask=None, x__xgafv=None)</code>
<pre>Updates a specific model resource.
Currently the only supported fields to update are `description` and
@@ -1511,7 +2131,7 @@
Args:
name: string, Required. The project name. (required)
- body: object, The request body. (required)
+ body: object, The request body.
The object takes the form of:
{ # Represents a machine learning solution.
@@ -1519,7 +2139,9 @@
# A model can have multiple versions, each of which is a deployed, trained
# model ready to receive prediction requests. The model itself is just a
# container.
- "description": "A String", # Optional. The description specified for the model when it was created.
+ "name": "A String", # Required. The name specified for the model when it was created.
+ #
+ # The model name must be unique within the project it is created in.
"onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
# streams to Stackdriver Logging. These can be more verbose than the standard
# access logs (see `onlinePredictionLogging`) and can incur higher cost.
@@ -1533,13 +2155,13 @@
# Each label is a key-value pair, where both the key and the value are
# arbitrary strings that you supply.
# For more information, see the documentation on
- # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
"a_key": "A String",
},
"regions": [ # Optional. The list of regions where the model is going to be deployed.
- # Currently only one region per model is supported.
+ # Only one region per model is supported.
# Defaults to 'us-central1' if nothing is set.
- # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+ # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
# for AI Platform services.
# Note:
# * No matter where a model is deployed, it can always be accessed by
@@ -1560,53 +2182,32 @@
# handle prediction requests that do not specify a version.
#
# You can change the default version by calling
- # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
+ # projects.models.versions.setDefault.
#
# Each version is a trained model deployed in the cloud, ready to handle
# prediction requests. A model can have multiple versions. You can get
# information about all of the versions of a given model by calling
- # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
- "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+ # projects.models.versions.list.
+ "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+ # Only specify this field if you have specified a Compute Engine (N1) machine
+ # type in the `machineType` field. Learn more about [using GPUs for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+ # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+ # [accelerators for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ "count": "A String", # The number of accelerators to attach to each machine running the job.
+ "type": "A String", # The type of accelerator to use.
+ },
"labels": { # Optional. One or more labels that you can add, to organize your model
# versions. Each label is a key-value pair, where both the key and the value
# are arbitrary strings that you supply.
# For more information, see the documentation on
- # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
"a_key": "A String",
},
- "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
- # applies to online prediction service.
- # <dl>
- # <dt>mls1-c1-m2</dt>
- # <dd>
- # The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated
- # name for this machine type is "mls1-highmem-1".
- # </dd>
- # <dt>mls1-c4-m2</dt>
- # <dd>
- # In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The
- # deprecated name for this machine type is "mls1-highcpu-4".
- # </dd>
- # </dl>
- "description": "A String", # Optional. The description specified for the version when it was created.
- "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment.
- # If not set, AI Platform uses the default stable version, 1.0. For more
- # information, see the
- # [runtime version list](/ml-engine/docs/runtime-version-list) and
- # [how to manage runtime versions](/ml-engine/docs/versioning).
- "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
- # model. You should generally use `auto_scaling` with an appropriate
- # `min_nodes` instead, but this option is available if you want more
- # predictable billing. Beware that latency and error rates will increase
- # if the traffic exceeds that capability of the system to serve it based
- # on the selected number of nodes.
- "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
- # starting from the time the model is deployed, so the cost of operating
- # this model will be proportional to `nodes` * number of hours since
- # last billing cycle plus the cost for each prediction performed.
- },
"predictionClass": "A String", # Optional. The fully qualified name
- # (<var>module_name</var>.<var>class_name</var>) of a class that implements
+ # (<var>module_name</var>.<var>class_name</var>) of a class that implements
# the Predictor interface described in this reference field. The module
# containing this class should be included in a package provided to the
# [`packageUris` field](#Version.FIELDS.package_uris).
@@ -1614,11 +2215,13 @@
# Specify this field if and only if you are deploying a [custom prediction
# routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
# If you specify this field, you must set
- # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
+ # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
+ # you must set `machineType` to a [legacy (MLS1)
+ # machine type](/ml-engine/docs/machine-types-online-prediction).
#
# The following code sample provides the Predictor interface:
#
- # ```py
+ # <pre style="max-width: 626px;">
# class Predictor(object):
# """Interface for constructing custom predictors."""
#
@@ -1654,64 +2257,12 @@
# An instance implementing this Predictor class.
# """
# raise NotImplementedError()
- # ```
+ # </pre>
#
# Learn more about [the Predictor interface and custom prediction
# routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
- "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
- # response to increases and decreases in traffic. Care should be
- # taken to ramp up traffic according to the model's ability to scale
- # or you will start seeing increases in latency and 429 response codes.
- "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
- # nodes are always up, starting from the time the model is deployed.
- # Therefore, the cost of operating this model will be at least
- # `rate` * `min_nodes` * number of hours since last billing cycle,
- # where `rate` is the cost per node-hour as documented in the
- # [pricing guide](/ml-engine/docs/pricing),
- # even if no predictions are performed. There is additional cost for each
- # prediction performed.
- #
- # Unlike manual scaling, if the load gets too heavy for the nodes
- # that are up, the service will automatically add nodes to handle the
- # increased load as well as scale back as traffic drops, always maintaining
- # at least `min_nodes`. You will be charged for the time in which additional
- # nodes are used.
- #
- # If not specified, `min_nodes` defaults to 0, in which case, when traffic
- # to a model stops (and after a cool-down period), nodes will be shut down
- # and no charges will be incurred until traffic to the model resumes.
- #
- # You can set `min_nodes` when creating the model version, and you can also
- # update `min_nodes` for an existing version:
- # <pre>
- # update_body.json:
- # {
- # 'autoScaling': {
- # 'minNodes': 5
- # }
- # }
- # </pre>
- # HTTP request:
- # <pre>
- # PATCH
- # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
- # -d @./update_body.json
- # </pre>
- },
"serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
"state": "A String", # Output only. The state of a version.
- "pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default
- # version is '2.7'. Python '3.5' is available when `runtime_version` is set
- # to '1.4' and above. Python '2.7' works with all supported runtime versions.
- "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
- # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
- # `XGBOOST`. If you do not specify a framework, AI Platform
- # will analyze files in the deployment_uri to determine a framework. If you
- # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
- # of the model to 1.4 or greater.
- #
- # Do **not** specify a framework if you're deploying a [custom
- # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
"packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
# prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
# or [scikit-learn pipelines with custom
@@ -1743,20 +2294,223 @@
# information.
#
# When passing Version to
- # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
+ # projects.models.versions.create
# the model service uses the specified location as the source of the model.
# Once deployed, the model version is hosted by the prediction service, so
# this location is useful only as a historical record.
# The total number of model files can't exceed 1000.
- "createTime": "A String", # Output only. The time the version was created.
+ "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
+ # Some explanation features require additional metadata to be loaded
+ # as part of the model payload.
+ # There are two feature attribution methods supported for TensorFlow models:
+ # integrated gradients and sampled Shapley.
+ # [Learn more about feature
+ # attributions.](/ml-engine/docs/ai-explanations/overview)
+        "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage
+            # of the model's fully differentiable structure. Refer to this paper for
+            # more details: https://arxiv.org/abs/1906.02825
+            # Currently only implemented for models with natural image inputs.
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+            # A good starting value is 50; increase it gradually until the
+            # sum-to-diff property is met within the desired error range.
+ },
+        "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that
+            # contribute to the label being predicted. A sampling strategy is used to
+            # approximate the value rather than considering all subsets of features.
+ "numPaths": 42, # The number of feature permutations to consider when approximating the
+ # Shapley values.
+ },
+        "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage
+            # of the model's fully differentiable structure. Refer to this paper for
+            # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+            # A good starting value is 50; increase it gradually until the
+            # sum-to-diff property is met within the desired error range.
+ },
+ },
"isDefault": True or False, # Output only. If true, this version will be used to handle prediction
# requests that do not specify a version.
#
# You can change the default version by calling
- # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
- "name": "A String", # Required.The name specified for the version when it was created.
+      # projects.models.versions.setDefault.
+ "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
+ # applies to online prediction service. If this field is not specified, it
+ # defaults to `mls1-c1-m2`.
+ #
+ # Online prediction supports the following machine types:
+ #
+ # * `mls1-c1-m2`
+ # * `mls1-c4-m2`
+ # * `n1-standard-2`
+ # * `n1-standard-4`
+ # * `n1-standard-8`
+ # * `n1-standard-16`
+ # * `n1-standard-32`
+ # * `n1-highmem-2`
+ # * `n1-highmem-4`
+ # * `n1-highmem-8`
+ # * `n1-highmem-16`
+ # * `n1-highmem-32`
+ # * `n1-highcpu-2`
+ # * `n1-highcpu-4`
+ # * `n1-highcpu-8`
+ # * `n1-highcpu-16`
+ # * `n1-highcpu-32`
+ #
+ # `mls1-c1-m2` is generally available. All other machine types are available
+ # in beta. Learn more about the [differences between machine
+ # types](/ml-engine/docs/machine-types-online-prediction).
+ "description": "A String", # Optional. The description specified for the version when it was created.
+ "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
+ #
+ # For more information, see the
+ # [runtime version list](/ml-engine/docs/runtime-version-list) and
+ # [how to manage runtime versions](/ml-engine/docs/versioning).
+ "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+ # model. You should generally use `auto_scaling` with an appropriate
+ # `min_nodes` instead, but this option is available if you want more
+ # predictable billing. Beware that latency and error rates will increase
+        # if the traffic exceeds the capability of the system to serve it based
+ # on the selected number of nodes.
+ "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
+ # starting from the time the model is deployed, so the cost of operating
+ # this model will be proportional to `nodes` * number of hours since
+ # last billing cycle plus the cost for each prediction performed.
+ },
+ "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+ "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
+ # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+ # `XGBOOST`. If you do not specify a framework, AI Platform
+ # will analyze files in the deployment_uri to determine a framework. If you
+ # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+ # of the model to 1.4 or greater.
+ #
+ # Do **not** specify a framework if you're deploying a [custom
+ # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ #
+ # If you specify a [Compute Engine (N1) machine
+ # type](/ml-engine/docs/machine-types-online-prediction) in the
+ # `machineType` field, you must specify `TENSORFLOW`
+ # for the framework.
+ "createTime": "A String", # Output only. The time the version was created.
+ "name": "A String", # Required. The name specified for the version when it was created.
#
# The version name must be unique within the model it is created in.
+ "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+ # response to increases and decreases in traffic. Care should be
+ # taken to ramp up traffic according to the model's ability to scale
+ # or you will start seeing increases in latency and 429 response codes.
+ #
+ # Note that you cannot use AutoScaling if your version uses
+      # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
+ # `manual_scaling`.
+ "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+ # nodes are always up, starting from the time the model is deployed.
+ # Therefore, the cost of operating this model will be at least
+ # `rate` * `min_nodes` * number of hours since last billing cycle,
+ # where `rate` is the cost per node-hour as documented in the
+ # [pricing guide](/ml-engine/docs/pricing),
+ # even if no predictions are performed. There is additional cost for each
+ # prediction performed.
+ #
+ # Unlike manual scaling, if the load gets too heavy for the nodes
+ # that are up, the service will automatically add nodes to handle the
+ # increased load as well as scale back as traffic drops, always maintaining
+ # at least `min_nodes`. You will be charged for the time in which additional
+ # nodes are used.
+ #
+ # If `min_nodes` is not specified and AutoScaling is used with a [legacy
+ # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
+ # `min_nodes` defaults to 0, in which case, when traffic to a model stops
+ # (and after a cool-down period), nodes will be shut down and no charges will
+ # be incurred until traffic to the model resumes.
+ #
+ # If `min_nodes` is not specified and AutoScaling is used with a [Compute
+ # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
+ # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
+ # Compute Engine machine type.
+ #
+ # Note that you cannot use AutoScaling if your version uses
+ # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
+ # ManualScaling.
+ #
+ # You can set `min_nodes` when creating the model version, and you can also
+ # update `min_nodes` for an existing version:
+ # <pre>
+ # update_body.json:
+      # {
+      #   "autoScaling": {
+      #     "minNodes": 5
+      #   }
+      # }
+ # </pre>
+ # HTTP request:
+ # <pre style="max-width: 626px;">
+ # PATCH
+ # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
+ # -d @./update_body.json
+ # </pre>
+ },
+ "pythonVersion": "A String", # Required. The version of Python used in prediction.
+ #
+ # The following Python versions are available:
+ #
+ # * Python '3.7' is available when `runtime_version` is set to '1.15' or
+ # later.
+ # * Python '3.5' is available when `runtime_version` is set to a version
+ # from '1.4' to '1.14'.
+ # * Python '2.7' is available when `runtime_version` is set to '1.15' or
+ # earlier.
+ #
+ # Read more about the Python versions available for [each runtime
+ # version](/ml-engine/docs/runtime-version-list).
+ "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+ # projects.models.versions.patch
+ # request. Specifying it in a
+ # projects.models.versions.create
+ # request has no effect.
+ #
+ # Configures the request-response pair logging on predictions from this
+ # Version.
+ # Online prediction requests to a model version and the responses to these
+ # requests are converted to raw strings and saved to the specified BigQuery
+ # table. Logging is constrained by [BigQuery quotas and
+ # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
+ # AI Platform Prediction does not log request-response pairs, but it continues
+ # to serve predictions.
+ #
+ # If you are using [continuous
+ # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
+ # specify this configuration manually. Setting up continuous evaluation
+ # automatically enables logging of request-response pairs.
+ "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+ # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+ # window is the lifetime of the model version. Defaults to 0.
+ "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
+ # "<var>project_id</var>.<var>dataset_name</var>.<var>table_name</var>"
+ #
+ # The specified table must already exist, and the "Cloud ML Service Agent"
+ # for your project must have permission to write to it. The table must have
+ # the following [schema](/bigquery/docs/schemas):
+ #
+ # <table>
+ # <tr><th>Field name</th><th style="display: table-cell">Type</th>
+ # <th style="display: table-cell">Mode</th></tr>
+ # <tr><td>model</td><td>STRING</td><td>REQUIRED</td></tr>
+ # <tr><td>model_version</td><td>STRING</td><td>REQUIRED</td></tr>
+ # <tr><td>time</td><td>TIMESTAMP</td><td>REQUIRED</td></tr>
+ # <tr><td>raw_data</td><td>STRING</td><td>REQUIRED</td></tr>
+ # <tr><td>raw_prediction</td><td>STRING</td><td>NULLABLE</td></tr>
+ # <tr><td>groundtruth</td><td>STRING</td><td>NULLABLE</td></tr>
+ # </table>
+ },
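A `requestLoggingConfig` fragment for a versions.patch body can be assembled as below. This is a sketch: the validation helper is not part of the client library, and the table name used in the example is hypothetical (the table itself must already exist with the schema shown above).

```python
import re

def request_logging_config(table_name, sampling_percentage=0.1):
    """Validate inputs and build the requestLoggingConfig fragment."""
    # Fully qualified form: project_id.dataset_name.table_name
    if not re.fullmatch(r"[^.]+\.[^.]+\.[^.]+", table_name):
        raise ValueError("expected project_id.dataset_name.table_name")
    if not 0 <= sampling_percentage <= 1:
        raise ValueError("samplingPercentage is a fraction from 0 to 1")
    return {
        "requestLoggingConfig": {
            "bigqueryTableName": table_name,
            "samplingPercentage": sampling_percentage,
        }
    }

cfg = request_logging_config("my-project.prediction_logs.requests", 0.1)
```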
},
    "onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to Stackdriver
# Logging. These logs are like standard server access logs, containing
@@ -1766,9 +2520,7 @@
# (QPS). Estimate your costs before enabling this option.
#
# Default is false.
- "name": "A String", # Required. The name specified for the model when it was created.
- #
- # The model name must be unique within the project it is created in.
+ "description": "A String", # Optional. The description specified for the model when it was created.
}
updateMask: string, Required. Specifies the path, relative to `Model`, of the field to update.
@@ -1840,67 +2592,88 @@
</div>
<div class="method">
- <code class="details" id="setIamPolicy">setIamPolicy(resource, body, x__xgafv=None)</code>
+ <code class="details" id="setIamPolicy">setIamPolicy(resource, body=None, x__xgafv=None)</code>
<pre>Sets the access control policy on the specified resource. Replaces any
existing policy.
+Can return `NOT_FOUND`, `INVALID_ARGUMENT`, and `PERMISSION_DENIED` errors.
+
Args:
resource: string, REQUIRED: The resource for which the policy is being specified.
See the operation documentation for the appropriate value for this field. (required)
- body: object, The request body. (required)
+ body: object, The request body.
The object takes the form of:
{ # Request message for `SetIamPolicy` method.
- "policy": { # Defines an Identity and Access Management (IAM) policy. It is used to # REQUIRED: The complete policy to be applied to the `resource`. The size of
+ "policy": { # An Identity and Access Management (IAM) policy, which specifies access # REQUIRED: The complete policy to be applied to the `resource`. The size of
+      # the policy is limited to a few tens of kilobytes. An empty policy is a
# valid policy but certain Cloud Platform services (such as Projects)
# might reject them.
- # specify access control policies for Cloud Platform resources.
+ # controls for Google Cloud resources.
#
#
- # A `Policy` consists of a list of `bindings`. A `binding` binds a list of
- # `members` to a `role`, where the members can be user accounts, Google groups,
- # Google domains, and service accounts. A `role` is a named list of permissions
- # defined by IAM.
+ # A `Policy` is a collection of `bindings`. A `binding` binds one or more
+ # `members` to a single `role`. Members can be user accounts, service accounts,
+ # Google groups, and domains (such as G Suite). A `role` is a named list of
+ # permissions; each `role` can be an IAM predefined role or a user-created
+ # custom role.
#
- # **JSON Example**
+ # Optionally, a `binding` can specify a `condition`, which is a logical
+ # expression that allows access to a resource only if the expression evaluates
+ # to `true`. A condition can add constraints based on attributes of the
+ # request, the resource, or both.
+ #
+ # **JSON example:**
#
# {
# "bindings": [
# {
- # "role": "roles/owner",
+ # "role": "roles/resourcemanager.organizationAdmin",
# "members": [
# "user:mike@example.com",
# "group:admins@example.com",
# "domain:google.com",
- # "serviceAccount:my-other-app@appspot.gserviceaccount.com"
+ # "serviceAccount:my-project-id@appspot.gserviceaccount.com"
# ]
# },
# {
- # "role": "roles/viewer",
- # "members": ["user:sean@example.com"]
+ # "role": "roles/resourcemanager.organizationViewer",
+ # "members": ["user:eve@example.com"],
+ # "condition": {
+ # "title": "expirable access",
+ # "description": "Does not grant access after Sep 2020",
+ # "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')",
+ # }
# }
- # ]
+ # ],
+ # "etag": "BwWWja0YfJA=",
+ # "version": 3
# }
#
- # **YAML Example**
+ # **YAML example:**
#
# bindings:
# - members:
# - user:mike@example.com
# - group:admins@example.com
# - domain:google.com
- # - serviceAccount:my-other-app@appspot.gserviceaccount.com
- # role: roles/owner
+ # - serviceAccount:my-project-id@appspot.gserviceaccount.com
+ # role: roles/resourcemanager.organizationAdmin
# - members:
- # - user:sean@example.com
- # role: roles/viewer
- #
+ # - user:eve@example.com
+ # role: roles/resourcemanager.organizationViewer
+ # condition:
+ # title: expirable access
+ # description: Does not grant access after Sep 2020
+ # expression: request.time < timestamp('2020-10-01T00:00:00.000Z')
+      # etag: BwWWja0YfJA=
+      # version: 3
#
# For a description of IAM and its features, see the
- # [IAM developer's guide](https://cloud.google.com/iam/docs).
- "bindings": [ # Associates a list of `members` to a `role`.
- # `bindings` with no members will result in an error.
+ # [IAM documentation](https://cloud.google.com/iam/docs/).
+ "bindings": [ # Associates a list of `members` to a `role`. Optionally, may specify a
+ # `condition` that determines how and when the `bindings` are applied. Each
+ # of the `bindings` must contain at least one member.
{ # Associates `members` with a `role`.
"role": "A String", # Role that is assigned to `members`.
# For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
@@ -1914,7 +2687,7 @@
# who is authenticated with a Google account or a service account.
#
# * `user:{emailid}`: An email address that represents a specific Google
- # account. For example, `alice@gmail.com` .
+ # account. For example, `alice@example.com` .
#
#
# * `serviceAccount:{emailid}`: An email address that represents a service
@@ -1923,46 +2696,78 @@
# * `group:{emailid}`: An email address that represents a Google group.
# For example, `admins@example.com`.
#
+ # * `deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique
+ # identifier) representing a user that has been recently deleted. For
+ # example, `alice@example.com?uid=123456789012345678901`. If the user is
+ # recovered, this value reverts to `user:{emailid}` and the recovered user
+ # retains the role in the binding.
+ #
+ # * `deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address (plus
+ # unique identifier) representing a service account that has been recently
+ # deleted. For example,
+ # `my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901`.
+ # If the service account is undeleted, this value reverts to
+ # `serviceAccount:{emailid}` and the undeleted service account retains the
+ # role in the binding.
+ #
+ # * `deleted:group:{emailid}?uid={uniqueid}`: An email address (plus unique
+ # identifier) representing a Google group that has been recently
+ # deleted. For example, `admins@example.com?uid=123456789012345678901`. If
+ # the group is recovered, this value reverts to `group:{emailid}` and the
+ # recovered group retains the role in the binding.
+ #
#
# * `domain:{domain}`: The G Suite domain (primary) that represents all the
# users of that domain. For example, `google.com` or `example.com`.
#
"A String",
],
- "condition": { # Represents an expression text. Example: # The condition that is associated with this binding.
+ "condition": { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
# NOTE: An unsatisfied condition will not allow user access via current
# binding. Different bindings, including their conditions, are examined
# independently.
+ # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
+ # are documented at https://github.com/google/cel-spec.
#
- # title: "User account presence"
- # description: "Determines whether the request has a user account"
- # expression: "size(request.user) > 0"
- "description": "A String", # An optional description of the expression. This is a longer text which
+ # Example (Comparison):
+ #
+ # title: "Summary size limit"
+ # description: "Determines if a summary is less than 100 chars"
+ # expression: "document.summary.size() < 100"
+ #
+ # Example (Equality):
+ #
+ # title: "Requestor is owner"
+ # description: "Determines if requestor is the document owner"
+ # expression: "document.owner == request.auth.claims.email"
+ #
+ # Example (Logic):
+ #
+ # title: "Public documents"
+ # description: "Determine whether the document should be publicly visible"
+ # expression: "document.type != 'private' && document.type != 'internal'"
+ #
+ # Example (Data Manipulation):
+ #
+ # title: "Notification string"
+ # description: "Create a notification string with a timestamp."
+ # expression: "'New message received at ' + string(document.create_time)"
+ #
+ # The exact variables and functions that may be referenced within an expression
+ # are determined by the service that evaluates it. See the service
+ # documentation for additional information.
+ "description": "A String", # Optional. Description of the expression. This is a longer text which
# describes the expression, e.g. when hovered over it in a UI.
- "expression": "A String", # Textual representation of an expression in
- # Common Expression Language syntax.
- #
- # The application context of the containing message determines which
- # well-known feature set of CEL is supported.
- "location": "A String", # An optional string indicating the location of the expression for error
+ "expression": "A String", # Textual representation of an expression in Common Expression Language
+ # syntax.
+ "location": "A String", # Optional. String indicating the location of the expression for error
# reporting, e.g. a file name and a position in the file.
- "title": "A String", # An optional title for the expression, i.e. a short string describing
+ "title": "A String", # Optional. Title for the expression, i.e. a short string describing
          # its purpose. This can be used e.g. in UIs which allow entering the
          # expression.
},
},
],
- "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
- # prevent simultaneous updates of a policy from overwriting each other.
- # It is strongly suggested that systems make use of the `etag` in the
- # read-modify-write cycle to perform policy updates in order to avoid race
- # conditions: An `etag` is returned in the response to `getIamPolicy`, and
- # systems are expected to put that etag in the request to `setIamPolicy` to
- # ensure that their change will be applied to the same version of the policy.
- #
- # If no `etag` is provided in the call to `setIamPolicy`, then the existing
- # policy is overwritten blindly.
- "version": 42, # Deprecated.
"auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
{ # Specifies the audit configuration for a service.
# The configuration determines which permission types are logged, and what
@@ -1984,7 +2789,7 @@
# {
# "log_type": "DATA_READ",
# "exempted_members": [
- # "user:foo@gmail.com"
+ # "user:jose@example.com"
# ]
# },
# {
@@ -1996,7 +2801,7 @@
# ]
# },
# {
- # "service": "fooservice.googleapis.com"
+ # "service": "sampleservice.googleapis.com"
# "audit_log_configs": [
# {
# "log_type": "DATA_READ",
@@ -2004,7 +2809,7 @@
# {
# "log_type": "DATA_WRITE",
# "exempted_members": [
- # "user:bar@gmail.com"
+ # "user:aliya@example.com"
# ]
# }
# ]
@@ -2012,9 +2817,9 @@
# ]
# }
#
- # For fooservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
- # logging. It also exempts foo@gmail.com from DATA_READ logging, and
- # bar@gmail.com from DATA_WRITE logging.
+ # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
+ # logging. It also exempts jose@example.com from DATA_READ logging, and
+ # aliya@example.com from DATA_WRITE logging.
"auditLogConfigs": [ # The configuration for logging of each type of permission.
{ # Provides the configuration for logging a type of permissions.
# Example:
@@ -2024,7 +2829,7 @@
# {
# "log_type": "DATA_READ",
# "exempted_members": [
- # "user:foo@gmail.com"
+ # "user:jose@example.com"
# ]
# },
# {
@@ -2034,7 +2839,7 @@
# }
#
# This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
- # foo@gmail.com from DATA_READ logging.
+ # jose@example.com from DATA_READ logging.
"exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
# permission.
# Follows the same format of Binding.members.
@@ -2048,6 +2853,39 @@
# `allServices` is a special value that covers all services.
},
],
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a policy from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform policy updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `getIamPolicy`, and
+ # systems are expected to put that etag in the request to `setIamPolicy` to
+ # ensure that their change will be applied to the same version of the policy.
+ #
+ # **Important:** If you use IAM Conditions, you must include the `etag` field
+ # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
+ # you to overwrite a version `3` policy with a version `1` policy, and all of
+ # the conditions in the version `3` policy are lost.
+ "version": 42, # Specifies the format of the policy.
+ #
+ # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
+ # are rejected.
+ #
+ # Any operation that affects conditional role bindings must specify version
+ # `3`. This requirement applies to the following operations:
+ #
+ # * Getting a policy that includes a conditional role binding
+ # * Adding a conditional role binding to a policy
+ # * Changing a conditional role binding in a policy
+ # * Removing any role binding, with or without a condition, from a policy
+ # that includes conditions
+ #
+ # **Important:** If you use IAM Conditions, you must include the `etag` field
+ # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
+ # you to overwrite a version `3` policy with a version `1` policy, and all of
+ # the conditions in the version `3` policy are lost.
+ #
+ # If a policy does not include any conditions, operations on that policy may
+ # specify any valid version or leave the field unset.
},
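The etag-based read-modify-write cycle described above can be sketched as a pure helper that edits bindings locally while echoing the etag back. The function name is an assumption, and the client calls in the trailing comment assume hypothetical resource names and an authenticated service handle.

```python
def add_member(policy, role, member):
    """Return a copy of `policy` with `member` granted `role`,
    preserving the etag for optimistic concurrency control."""
    updated = {
        "etag": policy.get("etag"),          # must be echoed back to setIamPolicy
        "version": policy.get("version", 1),
        "bindings": [dict(b, members=list(b["members"]))
                     for b in policy.get("bindings", [])],
    }
    for binding in updated["bindings"]:
        if binding["role"] == role:
            if member not in binding["members"]:
                binding["members"].append(member)
            break
    else:
        updated["bindings"].append({"role": role, "members": [member]})
    return updated

# With an authenticated client (hypothetical names):
#   policy = ml.projects().models().getIamPolicy(resource=name).execute()
#   new_policy = add_member(policy, "roles/ml.viewer", "user:eve@example.com")
#   ml.projects().models().setIamPolicy(
#       resource=name, body={"policy": new_policy}).execute()
```

If `setIamPolicy` rejects the request because the etag no longer matches, re-fetch the policy and repeat the cycle rather than retrying blindly.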
"updateMask": "A String", # OPTIONAL: A FieldMask specifying which fields of the policy to modify. Only
# the fields in the mask will be modified. If no mask is provided, the
@@ -2064,53 +2902,72 @@
Returns:
An object of the form:
- { # Defines an Identity and Access Management (IAM) policy. It is used to
- # specify access control policies for Cloud Platform resources.
+ { # An Identity and Access Management (IAM) policy, which specifies access
+ # controls for Google Cloud resources.
#
#
- # A `Policy` consists of a list of `bindings`. A `binding` binds a list of
- # `members` to a `role`, where the members can be user accounts, Google groups,
- # Google domains, and service accounts. A `role` is a named list of permissions
- # defined by IAM.
+ # A `Policy` is a collection of `bindings`. A `binding` binds one or more
+ # `members` to a single `role`. Members can be user accounts, service accounts,
+ # Google groups, and domains (such as G Suite). A `role` is a named list of
+ # permissions; each `role` can be an IAM predefined role or a user-created
+ # custom role.
#
- # **JSON Example**
+ # Optionally, a `binding` can specify a `condition`, which is a logical
+ # expression that allows access to a resource only if the expression evaluates
+ # to `true`. A condition can add constraints based on attributes of the
+ # request, the resource, or both.
+ #
+ # **JSON example:**
#
# {
# "bindings": [
# {
- # "role": "roles/owner",
+ # "role": "roles/resourcemanager.organizationAdmin",
# "members": [
# "user:mike@example.com",
# "group:admins@example.com",
# "domain:google.com",
- # "serviceAccount:my-other-app@appspot.gserviceaccount.com"
+ # "serviceAccount:my-project-id@appspot.gserviceaccount.com"
# ]
# },
# {
- # "role": "roles/viewer",
- # "members": ["user:sean@example.com"]
+ # "role": "roles/resourcemanager.organizationViewer",
+ # "members": ["user:eve@example.com"],
+ # "condition": {
+ # "title": "expirable access",
+ # "description": "Does not grant access after Sep 2020",
+ # "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')",
+ # }
# }
- # ]
+ # ],
+ # "etag": "BwWWja0YfJA=",
+ # "version": 3
# }
#
- # **YAML Example**
+ # **YAML example:**
#
# bindings:
# - members:
# - user:mike@example.com
# - group:admins@example.com
# - domain:google.com
- # - serviceAccount:my-other-app@appspot.gserviceaccount.com
- # role: roles/owner
+ # - serviceAccount:my-project-id@appspot.gserviceaccount.com
+ # role: roles/resourcemanager.organizationAdmin
# - members:
- # - user:sean@example.com
- # role: roles/viewer
- #
+ # - user:eve@example.com
+ # role: roles/resourcemanager.organizationViewer
+ # condition:
+ # title: expirable access
+ # description: Does not grant access after Sep 2020
+ # expression: request.time < timestamp('2020-10-01T00:00:00.000Z')
+      # etag: BwWWja0YfJA=
+      # version: 3
#
# For a description of IAM and its features, see the
- # [IAM developer's guide](https://cloud.google.com/iam/docs).
- "bindings": [ # Associates a list of `members` to a `role`.
- # `bindings` with no members will result in an error.
+ # [IAM documentation](https://cloud.google.com/iam/docs/).
+ "bindings": [ # Associates a list of `members` to a `role`. Optionally, may specify a
+ # `condition` that determines how and when the `bindings` are applied. Each
+ # of the `bindings` must contain at least one member.
{ # Associates `members` with a `role`.
"role": "A String", # Role that is assigned to `members`.
# For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
@@ -2124,7 +2981,7 @@
# who is authenticated with a Google account or a service account.
#
# * `user:{emailid}`: An email address that represents a specific Google
- # account. For example, `alice@gmail.com` .
+ # account. For example, `alice@example.com` .
#
#
# * `serviceAccount:{emailid}`: An email address that represents a service
@@ -2133,46 +2990,78 @@
# * `group:{emailid}`: An email address that represents a Google group.
# For example, `admins@example.com`.
#
+ # * `deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique
+ # identifier) representing a user that has been recently deleted. For
+ # example, `alice@example.com?uid=123456789012345678901`. If the user is
+ # recovered, this value reverts to `user:{emailid}` and the recovered user
+ # retains the role in the binding.
+ #
+ # * `deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address (plus
+ # unique identifier) representing a service account that has been recently
+ # deleted. For example,
+ # `my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901`.
+ # If the service account is undeleted, this value reverts to
+ # `serviceAccount:{emailid}` and the undeleted service account retains the
+ # role in the binding.
+ #
+ # * `deleted:group:{emailid}?uid={uniqueid}`: An email address (plus unique
+ # identifier) representing a Google group that has been recently
+ # deleted. For example, `admins@example.com?uid=123456789012345678901`. If
+ # the group is recovered, this value reverts to `group:{emailid}` and the
+ # recovered group retains the role in the binding.
+ #
#
# * `domain:{domain}`: The G Suite domain (primary) that represents all the
# users of that domain. For example, `google.com` or `example.com`.
#
"A String",
],
- "condition": { # Represents an expression text. Example: # The condition that is associated with this binding.
+ "condition": { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
# NOTE: An unsatisfied condition will not allow user access via current
# binding. Different bindings, including their conditions, are examined
# independently.
+ # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
+ # are documented at https://github.com/google/cel-spec.
#
- # title: "User account presence"
- # description: "Determines whether the request has a user account"
- # expression: "size(request.user) > 0"
- "description": "A String", # An optional description of the expression. This is a longer text which
+ # Example (Comparison):
+ #
+ # title: "Summary size limit"
+ # description: "Determines if a summary is less than 100 chars"
+ # expression: "document.summary.size() < 100"
+ #
+ # Example (Equality):
+ #
+ # title: "Requestor is owner"
+ # description: "Determines if requestor is the document owner"
+ # expression: "document.owner == request.auth.claims.email"
+ #
+ # Example (Logic):
+ #
+ # title: "Public documents"
+ # description: "Determine whether the document should be publicly visible"
+ # expression: "document.type != 'private' && document.type != 'internal'"
+ #
+ # Example (Data Manipulation):
+ #
+ # title: "Notification string"
+ # description: "Create a notification string with a timestamp."
+ # expression: "'New message received at ' + string(document.create_time)"
+ #
+ # The exact variables and functions that may be referenced within an expression
+ # are determined by the service that evaluates it. See the service
+ # documentation for additional information.
+ "description": "A String", # Optional. Description of the expression. This is a longer text which
# describes the expression, e.g. when hovered over it in a UI.
- "expression": "A String", # Textual representation of an expression in
- # Common Expression Language syntax.
- #
- # The application context of the containing message determines which
- # well-known feature set of CEL is supported.
- "location": "A String", # An optional string indicating the location of the expression for error
+ "expression": "A String", # Textual representation of an expression in Common Expression Language
+ # syntax.
+ "location": "A String", # Optional. String indicating the location of the expression for error
# reporting, e.g. a file name and a position in the file.
- "title": "A String", # An optional title for the expression, i.e. a short string describing
+ "title": "A String", # Optional. Title for the expression, i.e. a short string describing
          # its purpose. This can be used e.g. in UIs which allow entering the
          # expression.
},
},
],
- "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
- # prevent simultaneous updates of a policy from overwriting each other.
- # It is strongly suggested that systems make use of the `etag` in the
- # read-modify-write cycle to perform policy updates in order to avoid race
- # conditions: An `etag` is returned in the response to `getIamPolicy`, and
- # systems are expected to put that etag in the request to `setIamPolicy` to
- # ensure that their change will be applied to the same version of the policy.
- #
- # If no `etag` is provided in the call to `setIamPolicy`, then the existing
- # policy is overwritten blindly.
- "version": 42, # Deprecated.
"auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
{ # Specifies the audit configuration for a service.
# The configuration determines which permission types are logged, and what
@@ -2194,7 +3083,7 @@
# {
# "log_type": "DATA_READ",
# "exempted_members": [
- # "user:foo@gmail.com"
+ # "user:jose@example.com"
# ]
# },
# {
@@ -2206,7 +3095,7 @@
# ]
# },
# {
- # "service": "fooservice.googleapis.com"
+ # "service": "sampleservice.googleapis.com"
# "audit_log_configs": [
# {
# "log_type": "DATA_READ",
@@ -2214,7 +3103,7 @@
# {
# "log_type": "DATA_WRITE",
# "exempted_members": [
- # "user:bar@gmail.com"
+ # "user:aliya@example.com"
# ]
# }
# ]
@@ -2222,9 +3111,9 @@
# ]
# }
#
- # For fooservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
- # logging. It also exempts foo@gmail.com from DATA_READ logging, and
- # bar@gmail.com from DATA_WRITE logging.
+ # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
+ # logging. It also exempts jose@example.com from DATA_READ logging, and
+ # aliya@example.com from DATA_WRITE logging.
"auditLogConfigs": [ # The configuration for logging of each type of permission.
{ # Provides the configuration for logging a type of permissions.
# Example:
@@ -2234,7 +3123,7 @@
# {
# "log_type": "DATA_READ",
# "exempted_members": [
- # "user:foo@gmail.com"
+ # "user:jose@example.com"
# ]
# },
# {
@@ -2244,7 +3133,7 @@
# }
#
# This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
- # foo@gmail.com from DATA_READ logging.
+ # jose@example.com from DATA_READ logging.
"exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
# permission.
# Follows the same format of Binding.members.
@@ -2258,11 +3147,44 @@
# `allServices` is a special value that covers all services.
},
],
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a policy from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform policy updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `getIamPolicy`, and
+ # systems are expected to put that etag in the request to `setIamPolicy` to
+ # ensure that their change will be applied to the same version of the policy.
+ #
+ # **Important:** If you use IAM Conditions, you must include the `etag` field
+ # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
+ # you to overwrite a version `3` policy with a version `1` policy, and all of
+ # the conditions in the version `3` policy are lost.
+ "version": 42, # Specifies the format of the policy.
+ #
+ # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
+ # are rejected.
+ #
+ # Any operation that affects conditional role bindings must specify version
+ # `3`. This requirement applies to the following operations:
+ #
+ # * Getting a policy that includes a conditional role binding
+ # * Adding a conditional role binding to a policy
+ # * Changing a conditional role binding in a policy
+ # * Removing any role binding, with or without a condition, from a policy
+ # that includes conditions
+ #
+ # **Important:** If you use IAM Conditions, you must include the `etag` field
+ # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
+ # you to overwrite a version `3` policy with a version `1` policy, and all of
+ # the conditions in the version `3` policy are lost.
+ #
+ # If a policy does not include any conditions, operations on that policy may
+ # specify any valid version or leave the field unset.
}</pre>
</div>
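A conditional role binding of the form documented above can be assembled as below. This is a sketch; the helper name is an assumption, and the role, member, and CEL expression simply echo the examples in the reference. Any policy that carries a condition must set `version` to `3`.

```python
def conditional_binding(role, members, title, description, expression):
    """Build a binding dict with an attached CEL condition."""
    return {
        "role": role,
        "members": list(members),
        "condition": {
            "title": title,
            "description": description,
            "expression": expression,
        },
    }

binding = conditional_binding(
    "roles/resourcemanager.organizationViewer",
    ["user:eve@example.com"],
    "expirable access",
    "Does not grant access after Sep 2020",
    "request.time < timestamp('2020-10-01T00:00:00.000Z')",
)
# version 3 is required whenever any binding has a condition
policy = {"version": 3, "bindings": [binding]}
```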
<div class="method">
- <code class="details" id="testIamPermissions">testIamPermissions(resource, body, x__xgafv=None)</code>
+ <code class="details" id="testIamPermissions">testIamPermissions(resource, body=None, x__xgafv=None)</code>
<pre>Returns permissions that a caller has on the specified resource.
If the resource does not exist, this will return an empty set of
permissions, not a NOT_FOUND error.
@@ -2274,7 +3196,7 @@
Args:
resource: string, REQUIRED: The resource for which the policy detail is being requested.
See the operation documentation for the appropriate value for this field. (required)
- body: object, The request body. (required)
+ body: object, The request body.
The object takes the form of:
{ # Request message for `TestIamPermissions` method.