Regen all docs. (#700)
* Stop recursing if discovery == {}
* Generate docs with 'make docs'.
diff --git a/docs/dyn/ml_v1.projects.models.html b/docs/dyn/ml_v1.projects.models.html
index 830f7aa..e13fcf2 100644
--- a/docs/dyn/ml_v1.projects.models.html
+++ b/docs/dyn/ml_v1.projects.models.html
@@ -72,7 +72,7 @@
</style>
-<h1><a href="ml_v1.html">Google Cloud Machine Learning Engine</a> . <a href="ml_v1.projects.html">projects</a> . <a href="ml_v1.projects.models.html">models</a></h1>
+<h1><a href="ml_v1.html">Cloud Machine Learning Engine</a> . <a href="ml_v1.projects.html">projects</a> . <a href="ml_v1.projects.models.html">models</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
<code><a href="ml_v1.projects.models.versions.html">versions()</a></code>
@@ -89,11 +89,23 @@
<code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets information about a model, including its name, the description (if</p>
<p class="toc_element">
- <code><a href="#list">list(parent, pageToken=None, x__xgafv=None, pageSize=None)</a></code></p>
+ <code><a href="#getIamPolicy">getIamPolicy(resource, x__xgafv=None)</a></code></p>
+<p class="firstline">Gets the access control policy for a resource.</p>
+<p class="toc_element">
+ <code><a href="#list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</a></code></p>
<p class="firstline">Lists the models in a project.</p>
<p class="toc_element">
<code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
+<p class="toc_element">
+ <code><a href="#patch">patch(name, body, updateMask=None, x__xgafv=None)</a></code></p>
+<p class="firstline">Updates a specific model resource.</p>
+<p class="toc_element">
+ <code><a href="#setIamPolicy">setIamPolicy(resource, body, x__xgafv=None)</a></code></p>
+<p class="firstline">Sets the access control policy on the specified resource. Replaces any</p>
+<p class="toc_element">
+ <code><a href="#testIamPermissions">testIamPermissions(resource, body, x__xgafv=None)</a></code></p>
+<p class="firstline">Returns permissions that a caller has on the specified resource.</p>
<h3>Method Details</h3>
<div class="method">
<code class="details" id="create">create(parent, body, x__xgafv=None)</code>
@@ -104,20 +116,302 @@
[projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create).
Args:
- parent: string, Required. The project name.
-
-Authorization: requires `Editor` role on the specified project. (required)
+ parent: string, Required. The project name. (required)
body: object, The request body. (required)
The object takes the form of:
{ # Represents a machine learning solution.
+ #
+ # A model can have multiple versions, each of which is a deployed, trained
+ # model ready to receive prediction requests. The model itself is just a
+ # container.
+ "description": "A String", # Optional. The description specified for the model when it was created.
+ "onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
+ # streams to Stackdriver Logging. These can be more verbose than the standard
+ # access logs (see `onlinePredictionLogging`) and can incur higher cost.
+ # However, they are helpful for debugging. Note that
+ # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
+ # your project receives prediction requests at a high QPS. Estimate your
+ # costs before enabling this option.
#
+ # Default is false.
+ "labels": { # Optional. One or more labels that you can add, to organize your models.
+ # Each label is a key-value pair, where both the key and the value are
+ # arbitrary strings that you supply.
+ # For more information, see the documentation on
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ "a_key": "A String",
+ },
+ "regions": [ # Optional. The list of regions where the model is going to be deployed.
+ # Currently only one region per model is supported.
+ # Defaults to 'us-central1' if nothing is set.
+ # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+ # for AI Platform services.
+ # Note:
+ # * No matter where a model is deployed, it can always be accessed by
+ # users from anywhere, both for online and batch prediction.
+ # * The region for a batch prediction job is set by the region field when
+ # submitting the batch prediction job and does not take its value from
+ # this field.
+ "A String",
+ ],
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a model from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform model updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `GetModel`, and
+ # systems are expected to put that etag in the request to `UpdateModel` to
+ # ensure that their change will be applied to the model as intended.
+ "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
+ # handle prediction requests that do not specify a version.
+ #
+ # You can change the default version by calling
+ # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
+ #
+ # Each version is a trained model deployed in the cloud, ready to handle
+ # prediction requests. A model can have multiple versions. You can get
+ # information about all of the versions of a given model by calling
+ # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
+ "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+ "labels": { # Optional. One or more labels that you can add, to organize your model
+ # versions. Each label is a key-value pair, where both the key and the value
+ # are arbitrary strings that you supply.
+ # For more information, see the documentation on
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ "a_key": "A String",
+ },
+ "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
+ # applies to online prediction service.
+ # <dl>
+ # <dt>mls1-c1-m2</dt>
+ # <dd>
+ # The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated
+ # name for this machine type is "mls1-highmem-1".
+ # </dd>
+ # <dt>mls1-c4-m2</dt>
+ # <dd>
+ # In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The
+ # deprecated name for this machine type is "mls1-highcpu-4".
+ # </dd>
+ # </dl>
+ "description": "A String", # Optional. The description specified for the version when it was created.
+ "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment.
+ # If not set, AI Platform uses the default stable version, 1.0. For more
+ # information, see the
+ # [runtime version list](/ml-engine/docs/runtime-version-list) and
+ # [how to manage runtime versions](/ml-engine/docs/versioning).
+ "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+ # model. You should generally use `auto_scaling` with an appropriate
+ # `min_nodes` instead, but this option is available if you want more
+ # predictable billing. Beware that latency and error rates will increase
+      # if the traffic exceeds the capacity of the system to serve it based
+ # on the selected number of nodes.
+ "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
+ # starting from the time the model is deployed, so the cost of operating
+ # this model will be proportional to `nodes` * number of hours since
+ # last billing cycle plus the cost for each prediction performed.
+ },
+ "predictionClass": "A String", # Optional. The fully qualified name
+ # (<var>module_name</var>.<var>class_name</var>) of a class that implements
+ # the Predictor interface described in this reference field. The module
+ # containing this class should be included in a package provided to the
+ # [`packageUris` field](#Version.FIELDS.package_uris).
+ #
+ # Specify this field if and only if you are deploying a [custom prediction
+ # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ # If you specify this field, you must set
+ # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
+ #
+ # The following code sample provides the Predictor interface:
+ #
+ # ```py
+ # class Predictor(object):
+ # """Interface for constructing custom predictors."""
+ #
+ # def predict(self, instances, **kwargs):
+ # """Performs custom prediction.
+ #
+ # Instances are the decoded values from the request. They have already
+ # been deserialized from JSON.
+ #
+ # Args:
+ # instances: A list of prediction input instances.
+ # **kwargs: A dictionary of keyword args provided as additional
+ # fields on the predict request body.
+ #
+ # Returns:
+ # A list of outputs containing the prediction results. This list must
+ # be JSON serializable.
+ # """
+ # raise NotImplementedError()
+ #
+ # @classmethod
+ # def from_path(cls, model_dir):
+ # """Creates an instance of Predictor using the given path.
+ #
+ # Loading of the predictor should be done in this method.
+ #
+ # Args:
+ # model_dir: The local directory that contains the exported model
+ # file along with any additional files uploaded when creating the
+ # version resource.
+ #
+ # Returns:
+ # An instance implementing this Predictor class.
+ # """
+ # raise NotImplementedError()
+ # ```
+ #
+ # Learn more about [the Predictor interface and custom prediction
+ # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+ # response to increases and decreases in traffic. Care should be
+ # taken to ramp up traffic according to the model's ability to scale
+ # or you will start seeing increases in latency and 429 response codes.
+ "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+ # nodes are always up, starting from the time the model is deployed.
+ # Therefore, the cost of operating this model will be at least
+ # `rate` * `min_nodes` * number of hours since last billing cycle,
+ # where `rate` is the cost per node-hour as documented in the
+ # [pricing guide](/ml-engine/docs/pricing),
+ # even if no predictions are performed. There is additional cost for each
+ # prediction performed.
+ #
+ # Unlike manual scaling, if the load gets too heavy for the nodes
+ # that are up, the service will automatically add nodes to handle the
+ # increased load as well as scale back as traffic drops, always maintaining
+ # at least `min_nodes`. You will be charged for the time in which additional
+ # nodes are used.
+ #
+ # If not specified, `min_nodes` defaults to 0, in which case, when traffic
+ # to a model stops (and after a cool-down period), nodes will be shut down
+ # and no charges will be incurred until traffic to the model resumes.
+ #
+ # You can set `min_nodes` when creating the model version, and you can also
+ # update `min_nodes` for an existing version:
+ # <pre>
+ # update_body.json:
+ # {
+ # 'autoScaling': {
+ # 'minNodes': 5
+ # }
+ # }
+ # </pre>
+ # HTTP request:
+ # <pre>
+ # PATCH
+ # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
+ # -d @./update_body.json
+ # </pre>
+ },
+ "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
+ "state": "A String", # Output only. The state of a version.
+ "pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default
+ # version is '2.7'. Python '3.5' is available when `runtime_version` is set
+ # to '1.4' and above. Python '2.7' works with all supported runtime versions.
+ "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
+ # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+ # `XGBOOST`. If you do not specify a framework, AI Platform
+ # will analyze files in the deployment_uri to determine a framework. If you
+ # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+ # of the model to 1.4 or greater.
+ #
+ # Do **not** specify a framework if you're deploying a [custom
+ # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+ # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
+ # or [scikit-learn pipelines with custom
+ # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
+ #
+ # For a custom prediction routine, one of these packages must contain your
+ # Predictor class (see
+ # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
+ # include any dependencies used by your Predictor or scikit-learn pipeline
+      # that are not already included in your selected [runtime
+ # version](/ml-engine/docs/tensorflow/runtime-version-list).
+ #
+ # If you specify this field, you must also set
+ # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
+ "A String",
+ ],
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a model from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform model updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `GetVersion`, and
+ # systems are expected to put that etag in the request to `UpdateVersion` to
+ # ensure that their change will be applied to the model as intended.
+ "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
+ "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
+ # create the version. See the
+ # [guide to model
+ # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
+ # information.
+ #
+ # When passing Version to
+ # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
+ # the model service uses the specified location as the source of the model.
+ # Once deployed, the model version is hosted by the prediction service, so
+ # this location is useful only as a historical record.
+ # The total number of model files can't exceed 1000.
+ "createTime": "A String", # Output only. The time the version was created.
+ "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
+ # requests that do not specify a version.
+ #
+ # You can change the default version by calling
+ # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
+    "name": "A String", # Required. The name specified for the version when it was created.
+ #
+ # The version name must be unique within the model it is created in.
+ },
+  "onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to Stackdriver
+ # Logging. These logs are like standard server access logs, containing
+ # information like timestamp and latency for each request. Note that
+ # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
+  # your project receives prediction requests at a high rate of queries per
+  # second (QPS). Estimate your costs before enabling this option.
+ #
+ # Default is false.
+ "name": "A String", # Required. The name specified for the model when it was created.
+ #
+ # The model name must be unique within the project it is created in.
+}
+
+ x__xgafv: string, V1 error format.
+ Allowed values
+ 1 - v1 error format
+ 2 - v2 error format
+
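Taken together, the `create` signature and the Model request-body schema above can be exercised through the google-api-python-client discovery interface. The sketch below is illustrative only: the project ID, model name, and labels are hypothetical, and the network call itself is left commented out because it requires credentials. Only `name` is required by the schema; it must be unique within the project.

```python
# Hypothetical request body for projects.models.create, following the
# Model schema documented above.
model_body = {
    "name": "census_model",           # hypothetical; unique within the project
    "description": "Demo model",
    "regions": ["us-central1"],       # currently only one region per model
    "onlinePredictionLogging": True,  # access logs to Stackdriver (may incur cost)
    "labels": {"team": "demo"},       # arbitrary key-value pairs you supply
}

# The actual call would look like this (needs credentials, so not executed here):
#   from googleapiclient import discovery
#   ml = discovery.build("ml", "v1")
#   response = ml.projects().models().create(
#       parent="projects/my-project", body=model_body).execute()
```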
+Returns:
+ An object of the form:
+
+ { # Represents a machine learning solution.
+ #
# A model can have multiple versions, each of which is a deployed, trained
# model ready to receive prediction requests. The model itself is just a
# container.
+ "description": "A String", # Optional. The description specified for the model when it was created.
+ "onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
+ # streams to Stackdriver Logging. These can be more verbose than the standard
+ # access logs (see `onlinePredictionLogging`) and can incur higher cost.
+ # However, they are helpful for debugging. Note that
+ # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
+ # your project receives prediction requests at a high QPS. Estimate your
+ # costs before enabling this option.
+ #
+ # Default is false.
+ "labels": { # Optional. One or more labels that you can add, to organize your models.
+ # Each label is a key-value pair, where both the key and the value are
+ # arbitrary strings that you supply.
+ # For more information, see the documentation on
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ "a_key": "A String",
+ },
"regions": [ # Optional. The list of regions where the model is going to be deployed.
# Currently only one region per model is supported.
# Defaults to 'us-central1' if nothing is set.
+ # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+ # for AI Platform services.
# Note:
# * No matter where a model is deployed, it can always be accessed by
# users from anywhere, both for online and batch prediction.
@@ -126,9 +420,16 @@
# this field.
"A String",
],
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a model from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform model updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `GetModel`, and
+ # systems are expected to put that etag in the request to `UpdateModel` to
+ # ensure that their change will be applied to the model as intended.
"defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
# handle prediction requests that do not specify a version.
- #
+ #
# You can change the default version by calling
# [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
#
@@ -136,11 +437,36 @@
# prediction requests. A model can have multiple versions. You can get
# information about all of the versions of a given model by calling
# [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
+ "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+ "labels": { # Optional. One or more labels that you can add, to organize your model
+ # versions. Each label is a key-value pair, where both the key and the value
+ # are arbitrary strings that you supply.
+ # For more information, see the documentation on
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ "a_key": "A String",
+ },
+ "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
+ # applies to online prediction service.
+ # <dl>
+ # <dt>mls1-c1-m2</dt>
+ # <dd>
+ # The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated
+ # name for this machine type is "mls1-highmem-1".
+ # </dd>
+ # <dt>mls1-c4-m2</dt>
+ # <dd>
+ # In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The
+ # deprecated name for this machine type is "mls1-highcpu-4".
+ # </dd>
+ # </dl>
"description": "A String", # Optional. The description specified for the version when it was created.
- "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for this deployment.
- # If not set, Google Cloud ML will choose a version.
+ "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment.
+ # If not set, AI Platform uses the default stable version, 1.0. For more
+ # information, see the
+ # [runtime version list](/ml-engine/docs/runtime-version-list) and
+ # [how to manage runtime versions](/ml-engine/docs/versioning).
"manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
- # model. You should generally use `automatic_scaling` with an appropriate
+ # model. You should generally use `auto_scaling` with an appropriate
# `min_nodes` instead, but this option is available if you want more
# predictable billing. Beware that latency and error rates will increase
# if the traffic exceeds that capability of the system to serve it based
@@ -150,29 +476,69 @@
# this model will be proportional to `nodes` * number of hours since
# last billing cycle plus the cost for each prediction performed.
},
- "deploymentUri": "A String", # Required. The Google Cloud Storage location of the trained model used to
- # create the version. See the
- # [overview of model
- # deployment](/ml-engine/docs/concepts/deployment-overview) for more
- # informaiton.
+ "predictionClass": "A String", # Optional. The fully qualified name
+ # (<var>module_name</var>.<var>class_name</var>) of a class that implements
+ # the Predictor interface described in this reference field. The module
+ # containing this class should be included in a package provided to the
+ # [`packageUris` field](#Version.FIELDS.package_uris).
#
- # When passing Version to
- # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
- # the model service uses the specified location as the source of the model.
- # Once deployed, the model version is hosted by the prediction service, so
- # this location is useful only as a historical record.
- # The total number of model files can't exceed 1000.
- "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
- "automaticScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+ # Specify this field if and only if you are deploying a [custom prediction
+ # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ # If you specify this field, you must set
+ # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
+ #
+ # The following code sample provides the Predictor interface:
+ #
+ # ```py
+ # class Predictor(object):
+ # """Interface for constructing custom predictors."""
+ #
+ # def predict(self, instances, **kwargs):
+ # """Performs custom prediction.
+ #
+ # Instances are the decoded values from the request. They have already
+ # been deserialized from JSON.
+ #
+ # Args:
+ # instances: A list of prediction input instances.
+ # **kwargs: A dictionary of keyword args provided as additional
+ # fields on the predict request body.
+ #
+ # Returns:
+ # A list of outputs containing the prediction results. This list must
+ # be JSON serializable.
+ # """
+ # raise NotImplementedError()
+ #
+ # @classmethod
+ # def from_path(cls, model_dir):
+ # """Creates an instance of Predictor using the given path.
+ #
+ # Loading of the predictor should be done in this method.
+ #
+ # Args:
+ # model_dir: The local directory that contains the exported model
+ # file along with any additional files uploaded when creating the
+ # version resource.
+ #
+ # Returns:
+ # An instance implementing this Predictor class.
+ # """
+ # raise NotImplementedError()
+ # ```
+ #
+ # Learn more about [the Predictor interface and custom prediction
+ # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
# response to increases and decreases in traffic. Care should be
# taken to ramp up traffic according to the model's ability to scale
# or you will start seeing increases in latency and 429 response codes.
"minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
- # nodes are always up, starting from the time the model is deployed, so the
- # cost of operating this model will be at least
+ # nodes are always up, starting from the time the model is deployed.
+ # Therefore, the cost of operating this model will be at least
# `rate` * `min_nodes` * number of hours since last billing cycle,
- # where `rate` is the cost per node-hour as documented in
- # [pricing](https://cloud.google.com/ml-engine/pricing#prediction_pricing),
+ # where `rate` is the cost per node-hour as documented in the
+ # [pricing guide](/ml-engine/docs/pricing),
# even if no predictions are performed. There is additional cost for each
# prediction performed.
#
@@ -185,7 +551,74 @@
# If not specified, `min_nodes` defaults to 0, in which case, when traffic
# to a model stops (and after a cool-down period), nodes will be shut down
# and no charges will be incurred until traffic to the model resumes.
+ #
+ # You can set `min_nodes` when creating the model version, and you can also
+ # update `min_nodes` for an existing version:
+ # <pre>
+ # update_body.json:
+ # {
+ # 'autoScaling': {
+ # 'minNodes': 5
+ # }
+ # }
+ # </pre>
+ # HTTP request:
+ # <pre>
+ # PATCH
+ # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
+ # -d @./update_body.json
+ # </pre>
},
+ "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
+ "state": "A String", # Output only. The state of a version.
+ "pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default
+ # version is '2.7'. Python '3.5' is available when `runtime_version` is set
+ # to '1.4' and above. Python '2.7' works with all supported runtime versions.
+ "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
+ # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+ # `XGBOOST`. If you do not specify a framework, AI Platform
+ # will analyze files in the deployment_uri to determine a framework. If you
+ # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+ # of the model to 1.4 or greater.
+ #
+ # Do **not** specify a framework if you're deploying a [custom
+ # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+ # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
+ # or [scikit-learn pipelines with custom
+ # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
+ #
+ # For a custom prediction routine, one of these packages must contain your
+ # Predictor class (see
+ # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
+ # include any dependencies used by your Predictor or scikit-learn pipeline
+      # that are not already included in your selected [runtime
+ # version](/ml-engine/docs/tensorflow/runtime-version-list).
+ #
+ # If you specify this field, you must also set
+ # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
+ "A String",
+ ],
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a model from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform model updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `GetVersion`, and
+ # systems are expected to put that etag in the request to `UpdateVersion` to
+ # ensure that their change will be applied to the model as intended.
+ "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
+ "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
+ # create the version. See the
+ # [guide to model
+ # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
+ # information.
+ #
+ # When passing Version to
+ # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
+ # the model service uses the specified location as the source of the model.
+ # Once deployed, the model version is hosted by the prediction service, so
+ # this location is useful only as a historical record.
+ # The total number of model files can't exceed 1000.
"createTime": "A String", # Output only. The time the version was created.
"isDefault": True or False, # Output only. If true, this version will be used to handle prediction
# requests that do not specify a version.
@@ -196,115 +629,18 @@
#
# The version name must be unique within the model it is created in.
},
- "name": "A String", # Required. The name specified for the model when it was created.
- #
- # The model name must be unique within the project it is created in.
- "onlinePredictionLogging": True or False, # Optional. If true, enables StackDriver Logging for online prediction.
- # Default is false.
- "description": "A String", # Optional. The description specified for the model when it was created.
- }
-
- x__xgafv: string, V1 error format.
- Allowed values
- 1 - v1 error format
- 2 - v2 error format
-
-Returns:
- An object of the form:
-
- { # Represents a machine learning solution.
+  "onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to Stackdriver
+ # Logging. These logs are like standard server access logs, containing
+ # information like timestamp and latency for each request. Note that
+ # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
+  # your project receives prediction requests at a high rate of queries per
+  # second (QPS). Estimate your costs before enabling this option.
#
- # A model can have multiple versions, each of which is a deployed, trained
- # model ready to receive prediction requests. The model itself is just a
- # container.
- "regions": [ # Optional. The list of regions where the model is going to be deployed.
- # Currently only one region per model is supported.
- # Defaults to 'us-central1' if nothing is set.
- # Note:
- # * No matter where a model is deployed, it can always be accessed by
- # users from anywhere, both for online and batch prediction.
- # * The region for a batch prediction job is set by the region field when
- # submitting the batch prediction job and does not take its value from
- # this field.
- "A String",
- ],
- "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
- # handle prediction requests that do not specify a version.
- #
- # You can change the default version by calling
- # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
- #
- # Each version is a trained model deployed in the cloud, ready to handle
- # prediction requests. A model can have multiple versions. You can get
- # information about all of the versions of a given model by calling
- # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
- "description": "A String", # Optional. The description specified for the version when it was created.
- "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for this deployment.
- # If not set, Google Cloud ML will choose a version.
- "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
- # model. You should generally use `automatic_scaling` with an appropriate
- # `min_nodes` instead, but this option is available if you want more
- # predictable billing. Beware that latency and error rates will increase
- # if the traffic exceeds that capability of the system to serve it based
- # on the selected number of nodes.
- "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
- # starting from the time the model is deployed, so the cost of operating
- # this model will be proportional to `nodes` * number of hours since
- # last billing cycle plus the cost for each prediction performed.
- },
- "deploymentUri": "A String", # Required. The Google Cloud Storage location of the trained model used to
- # create the version. See the
- # [overview of model
- # deployment](/ml-engine/docs/concepts/deployment-overview) for more
- # informaiton.
- #
- # When passing Version to
- # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
- # the model service uses the specified location as the source of the model.
- # Once deployed, the model version is hosted by the prediction service, so
- # this location is useful only as a historical record.
- # The total number of model files can't exceed 1000.
- "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
- "automaticScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
- # response to increases and decreases in traffic. Care should be
- # taken to ramp up traffic according to the model's ability to scale
- # or you will start seeing increases in latency and 429 response codes.
- "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
- # nodes are always up, starting from the time the model is deployed, so the
- # cost of operating this model will be at least
- # `rate` * `min_nodes` * number of hours since last billing cycle,
- # where `rate` is the cost per node-hour as documented in
- # [pricing](https://cloud.google.com/ml-engine/pricing#prediction_pricing),
- # even if no predictions are performed. There is additional cost for each
- # prediction performed.
- #
- # Unlike manual scaling, if the load gets too heavy for the nodes
- # that are up, the service will automatically add nodes to handle the
- # increased load as well as scale back as traffic drops, always maintaining
- # at least `min_nodes`. You will be charged for the time in which additional
- # nodes are used.
- #
- # If not specified, `min_nodes` defaults to 0, in which case, when traffic
- # to a model stops (and after a cool-down period), nodes will be shut down
- # and no charges will be incurred until traffic to the model resumes.
- },
- "createTime": "A String", # Output only. The time the version was created.
- "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
- # requests that do not specify a version.
- #
- # You can change the default version by calling
- # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
- "name": "A String", # Required.The name specified for the version when it was created.
- #
- # The version name must be unique within the model it is created in.
- },
- "name": "A String", # Required. The name specified for the model when it was created.
- #
- # The model name must be unique within the project it is created in.
- "onlinePredictionLogging": True or False, # Optional. If true, enables StackDriver Logging for online prediction.
- # Default is false.
- "description": "A String", # Optional. The description specified for the model when it was created.
- }</pre>
+ # Default is false.
+ "name": "A String", # Required. The name specified for the model when it was created.
+ #
+ # The model name must be unique within the project it is created in.
+ }</pre>
</div>
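As an editorial sketch (not part of the generated reference), the Model fields documented above can be assembled into a request body like this. The helper function and the model name `census` are hypothetical; only `name` is required per the reference, everything else is optional.

```python
import json


def make_model_body(name, description=None, regions=None,
                    online_prediction_logging=False):
    """Builds a minimal Model resource body from the fields documented above.

    Only `name` is required; the other fields are optional per the reference.
    """
    body = {"name": name, "onlinePredictionLogging": online_prediction_logging}
    if description is not None:
        body["description"] = description
    # Currently only one region per model is supported; if `regions` is left
    # unset, the service defaults to 'us-central1'.
    if regions is not None:
        body["regions"] = regions
    return body


body = make_model_body("census", description="A census income classifier")
print(json.dumps(body, sort_keys=True))
```

With the discovery-based Python client, such a dict would be passed as the `body=` argument of the create call; that wiring (and authentication) is omitted here.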
<div class="method">
@@ -316,9 +652,7 @@
[projects.models.versions.delete](/ml-engine/reference/rest/v1/projects.models.versions/delete).
Args:
- name: string, Required. The name of the model.
-
-Authorization: requires `Editor` role on the parent project. (required)
+ name: string, Required. The name of the model. (required)
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
@@ -335,71 +669,26 @@
# long-running operation should document the metadata type, if any.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
- "error": { # The `Status` type defines a logical error model that is suitable for different # The error result of the operation in case of failure or cancellation.
- # programming environments, including REST APIs and RPC APIs. It is used by
- # [gRPC](https://github.com/grpc). The error model is designed to be:
+ "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). Each `Status` message contains
+ # three pieces of data: error code, error message, and error details.
#
- # - Simple to use and understand for most users
- # - Flexible enough to meet unexpected needs
- #
- # # Overview
- #
- # The `Status` message contains three pieces of data: error code, error message,
- # and error details. The error code should be an enum value of
- # google.rpc.Code, but it may accept additional error codes if needed. The
- # error message should be a developer-facing English message that helps
- # developers *understand* and *resolve* the error. If a localized user-facing
- # error message is needed, put the localized message in the error details or
- # localize it in the client. The optional error details may contain arbitrary
- # information about the error. There is a predefined set of error detail types
- # in the package `google.rpc` that can be used for common error conditions.
- #
- # # Language mapping
- #
- # The `Status` message is the logical representation of the error model, but it
- # is not necessarily the actual wire format. When the `Status` message is
- # exposed in different client libraries and different wire protocols, it can be
- # mapped differently. For example, it will likely be mapped to some exceptions
- # in Java, but more likely mapped to some error codes in C.
- #
- # # Other uses
- #
- # The error model and the `Status` message can be used in a variety of
- # environments, either with or without APIs, to provide a
- # consistent developer experience across different environments.
- #
- # Example uses of this error model include:
- #
- # - Partial errors. If a service needs to return partial errors to the client,
- # it may embed the `Status` in the normal response to indicate the partial
- # errors.
- #
- # - Workflow errors. A typical workflow has multiple steps. Each step may
- # have a `Status` message for error reporting.
- #
- # - Batch operations. If a client uses batch request and batch response, the
- # `Status` message should be used directly inside batch response, one for
- # each error sub-response.
- #
- # - Asynchronous operations. If an API call embeds asynchronous operation
- # results in its response, the status of those operations should be
- # represented directly using the `Status` message.
- #
- # - Logging. If some API errors are stored in logs, the message `Status` could
- # be used directly after any stripping needed for security/privacy reasons.
+ # You can find out more about this error model and how to work with it in the
+ # [API Design Guide](https://cloud.google.com/apis/design/errors).
"message": "A String", # A developer-facing error message, which should be in English. Any
# user-facing error message should be localized and sent in the
# google.rpc.Status.details field, or localized by the client.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
- "details": [ # A list of messages that carry the error details. There will be a
- # common set of message types for APIs to use.
+ "details": [ # A list of messages that carry the error details. There is a common set of
+ # message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
},
"done": True or False, # If the value is `false`, it means the operation is still in progress.
- # If true, the operation is completed, and either `error` or `response` is
+ # If `true`, the operation is completed, and either `error` or `response` is
# available.
"response": { # The normal response of the operation in case of success. If the original
# method returns no data on success, such as `Delete`, the response is
@@ -413,7 +702,7 @@
},
"name": "A String", # The server-assigned name, which is only unique within the same service that
# originally returns it. If you use the default HTTP mapping, the
- # `name` should have the format of `operations/some/unique/name`.
+ # `name` should be a resource name ending with `operations/{unique_id}`.
}</pre>
</div>
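To make the Operation resource above concrete, here is a small illustrative helper (not generated reference text) that interprets the `done`, `error`, and `response` fields exactly as documented: `None` while in progress, an exception on failure, the `response` payload on success.

```python
def operation_outcome(op):
    """Interprets a long-running Operation resource as documented above.

    Returns None while `done` is false (still in progress), raises if the
    operation finished with an `error`, and otherwise returns `response`.
    """
    if not op.get("done", False):
        return None  # still running; poll again later
    error = op.get("error")
    if error is not None:
        # `code` is a google.rpc.Code enum value; `message` is the
        # developer-facing English message.
        raise RuntimeError("operation failed (code %s): %s"
                           % (error.get("code"), error.get("message")))
    return op.get("response", {})


# A delete that succeeded carries an empty response payload.
print(operation_outcome({"done": True, "response": {}}))
```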
@@ -424,9 +713,7 @@
been deployed).
Args:
- name: string, Required. The name of the model.
-
-Authorization: requires `Viewer` role on the parent project. (required)
+ name: string, Required. The name of the model. (required)
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
@@ -436,111 +723,489 @@
An object of the form:
{ # Represents a machine learning solution.
+ #
+ # A model can have multiple versions, each of which is a deployed, trained
+ # model ready to receive prediction requests. The model itself is just a
+ # container.
+ "description": "A String", # Optional. The description specified for the model when it was created.
+ "onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
+ # streams to Stackdriver Logging. These can be more verbose than the standard
+ # access logs (see `onlinePredictionLogging`) and can incur higher cost.
+ # However, they are helpful for debugging. Note that
+ # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
+ # your project receives prediction requests at a high QPS. Estimate your
+ # costs before enabling this option.
#
- # A model can have multiple versions, each of which is a deployed, trained
- # model ready to receive prediction requests. The model itself is just a
- # container.
- "regions": [ # Optional. The list of regions where the model is going to be deployed.
- # Currently only one region per model is supported.
- # Defaults to 'us-central1' if nothing is set.
- # Note:
- # * No matter where a model is deployed, it can always be accessed by
- # users from anywhere, both for online and batch prediction.
- # * The region for a batch prediction job is set by the region field when
- # submitting the batch prediction job and does not take its value from
- # this field.
+ # Default is false.
+    "labels": { # Optional. One or more labels that you can add to organize your models.
+ # Each label is a key-value pair, where both the key and the value are
+ # arbitrary strings that you supply.
+ # For more information, see the documentation on
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ "a_key": "A String",
+ },
+ "regions": [ # Optional. The list of regions where the model is going to be deployed.
+ # Currently only one region per model is supported.
+ # Defaults to 'us-central1' if nothing is set.
+ # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+ # for AI Platform services.
+ # Note:
+ # * No matter where a model is deployed, it can always be accessed by
+ # users from anywhere, both for online and batch prediction.
+ # * The region for a batch prediction job is set by the region field when
+ # submitting the batch prediction job and does not take its value from
+ # this field.
+ "A String",
+ ],
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a model from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform model updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `GetModel`, and
+ # systems are expected to put that etag in the request to `UpdateModel` to
+ # ensure that their change will be applied to the model as intended.
+ "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
+ # handle prediction requests that do not specify a version.
+ #
+ # You can change the default version by calling
+ # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
+ #
+ # Each version is a trained model deployed in the cloud, ready to handle
+ # prediction requests. A model can have multiple versions. You can get
+ # information about all of the versions of a given model by calling
+ # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
+ "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+      "labels": { # Optional. One or more labels that you can add to organize your model
+ # versions. Each label is a key-value pair, where both the key and the value
+ # are arbitrary strings that you supply.
+ # For more information, see the documentation on
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ "a_key": "A String",
+ },
+ "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
+ # applies to online prediction service.
+ # <dl>
+ # <dt>mls1-c1-m2</dt>
+ # <dd>
+ # The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated
+ # name for this machine type is "mls1-highmem-1".
+ # </dd>
+ # <dt>mls1-c4-m2</dt>
+ # <dd>
+ # In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The
+ # deprecated name for this machine type is "mls1-highcpu-4".
+ # </dd>
+ # </dl>
+ "description": "A String", # Optional. The description specified for the version when it was created.
+ "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment.
+ # If not set, AI Platform uses the default stable version, 1.0. For more
+ # information, see the
+ # [runtime version list](/ml-engine/docs/runtime-version-list) and
+ # [how to manage runtime versions](/ml-engine/docs/versioning).
+ "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+ # model. You should generally use `auto_scaling` with an appropriate
+ # `min_nodes` instead, but this option is available if you want more
+ # predictable billing. Beware that latency and error rates will increase
+        # if the traffic exceeds the capability of the system to serve it based
+ # on the selected number of nodes.
+ "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
+ # starting from the time the model is deployed, so the cost of operating
+ # this model will be proportional to `nodes` * number of hours since
+ # last billing cycle plus the cost for each prediction performed.
+ },
+ "predictionClass": "A String", # Optional. The fully qualified name
+ # (<var>module_name</var>.<var>class_name</var>) of a class that implements
+ # the Predictor interface described in this reference field. The module
+ # containing this class should be included in a package provided to the
+ # [`packageUris` field](#Version.FIELDS.package_uris).
+ #
+ # Specify this field if and only if you are deploying a [custom prediction
+ # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ # If you specify this field, you must set
+ # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
+ #
+ # The following code sample provides the Predictor interface:
+ #
+ # ```py
+ # class Predictor(object):
+ # """Interface for constructing custom predictors."""
+ #
+ # def predict(self, instances, **kwargs):
+ # """Performs custom prediction.
+ #
+ # Instances are the decoded values from the request. They have already
+ # been deserialized from JSON.
+ #
+ # Args:
+ # instances: A list of prediction input instances.
+ # **kwargs: A dictionary of keyword args provided as additional
+ # fields on the predict request body.
+ #
+ # Returns:
+ # A list of outputs containing the prediction results. This list must
+ # be JSON serializable.
+ # """
+ # raise NotImplementedError()
+ #
+ # @classmethod
+ # def from_path(cls, model_dir):
+ # """Creates an instance of Predictor using the given path.
+ #
+ # Loading of the predictor should be done in this method.
+ #
+ # Args:
+ # model_dir: The local directory that contains the exported model
+ # file along with any additional files uploaded when creating the
+ # version resource.
+ #
+ # Returns:
+ # An instance implementing this Predictor class.
+ # """
+ # raise NotImplementedError()
+ # ```
+ #
+ # Learn more about [the Predictor interface and custom prediction
+ # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+ # response to increases and decreases in traffic. Care should be
+ # taken to ramp up traffic according to the model's ability to scale
+ # or you will start seeing increases in latency and 429 response codes.
+ "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+ # nodes are always up, starting from the time the model is deployed.
+ # Therefore, the cost of operating this model will be at least
+ # `rate` * `min_nodes` * number of hours since last billing cycle,
+ # where `rate` is the cost per node-hour as documented in the
+ # [pricing guide](/ml-engine/docs/pricing),
+ # even if no predictions are performed. There is additional cost for each
+ # prediction performed.
+ #
+ # Unlike manual scaling, if the load gets too heavy for the nodes
+ # that are up, the service will automatically add nodes to handle the
+ # increased load as well as scale back as traffic drops, always maintaining
+ # at least `min_nodes`. You will be charged for the time in which additional
+ # nodes are used.
+ #
+ # If not specified, `min_nodes` defaults to 0, in which case, when traffic
+ # to a model stops (and after a cool-down period), nodes will be shut down
+ # and no charges will be incurred until traffic to the model resumes.
+ #
+ # You can set `min_nodes` when creating the model version, and you can also
+ # update `min_nodes` for an existing version:
+ # <pre>
+ # update_body.json:
+ # {
+ # 'autoScaling': {
+ # 'minNodes': 5
+ # }
+ # }
+ # </pre>
+ # HTTP request:
+ # <pre>
+ # PATCH
+ # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
+ # -d @./update_body.json
+ # </pre>
+ },
+ "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
+ "state": "A String", # Output only. The state of a version.
+ "pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default
+ # version is '2.7'. Python '3.5' is available when `runtime_version` is set
+ # to '1.4' and above. Python '2.7' works with all supported runtime versions.
+ "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
+ # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+ # `XGBOOST`. If you do not specify a framework, AI Platform
+ # will analyze files in the deployment_uri to determine a framework. If you
+ # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+ # of the model to 1.4 or greater.
+ #
+ # Do **not** specify a framework if you're deploying a [custom
+ # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+ # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
+ # or [scikit-learn pipelines with custom
+ # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
+ #
+ # For a custom prediction routine, one of these packages must contain your
+ # Predictor class (see
+ # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
+      # include any dependencies that your Predictor or scikit-learn pipeline
+ # uses that are not already included in your selected [runtime
+ # version](/ml-engine/docs/tensorflow/runtime-version-list).
+ #
+ # If you specify this field, you must also set
+ # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
"A String",
],
- "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
- # handle prediction requests that do not specify a version.
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a model from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform model updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `GetVersion`, and
+ # systems are expected to put that etag in the request to `UpdateVersion` to
+ # ensure that their change will be applied to the model as intended.
+ "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
+ "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
+ # create the version. See the
+ # [guide to model
+ # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
+ # information.
+ #
+ # When passing Version to
+ # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
+ # the model service uses the specified location as the source of the model.
+ # Once deployed, the model version is hosted by the prediction service, so
+ # this location is useful only as a historical record.
+ # The total number of model files can't exceed 1000.
+ "createTime": "A String", # Output only. The time the version was created.
+ "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
+ # requests that do not specify a version.
#
# You can change the default version by calling
# [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
+      "name": "A String", # Required. The name specified for the version when it was created.
#
- # Each version is a trained model deployed in the cloud, ready to handle
- # prediction requests. A model can have multiple versions. You can get
- # information about all of the versions of a given model by calling
- # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
- "description": "A String", # Optional. The description specified for the version when it was created.
- "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for this deployment.
- # If not set, Google Cloud ML will choose a version.
- "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
- # model. You should generally use `automatic_scaling` with an appropriate
- # `min_nodes` instead, but this option is available if you want more
- # predictable billing. Beware that latency and error rates will increase
- # if the traffic exceeds that capability of the system to serve it based
- # on the selected number of nodes.
- "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
- # starting from the time the model is deployed, so the cost of operating
- # this model will be proportional to `nodes` * number of hours since
- # last billing cycle plus the cost for each prediction performed.
- },
- "deploymentUri": "A String", # Required. The Google Cloud Storage location of the trained model used to
- # create the version. See the
- # [overview of model
- # deployment](/ml-engine/docs/concepts/deployment-overview) for more
- # informaiton.
- #
- # When passing Version to
- # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
- # the model service uses the specified location as the source of the model.
- # Once deployed, the model version is hosted by the prediction service, so
- # this location is useful only as a historical record.
- # The total number of model files can't exceed 1000.
- "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
- "automaticScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
- # response to increases and decreases in traffic. Care should be
- # taken to ramp up traffic according to the model's ability to scale
- # or you will start seeing increases in latency and 429 response codes.
- "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
- # nodes are always up, starting from the time the model is deployed, so the
- # cost of operating this model will be at least
- # `rate` * `min_nodes` * number of hours since last billing cycle,
- # where `rate` is the cost per node-hour as documented in
- # [pricing](https://cloud.google.com/ml-engine/pricing#prediction_pricing),
- # even if no predictions are performed. There is additional cost for each
- # prediction performed.
- #
- # Unlike manual scaling, if the load gets too heavy for the nodes
- # that are up, the service will automatically add nodes to handle the
- # increased load as well as scale back as traffic drops, always maintaining
- # at least `min_nodes`. You will be charged for the time in which additional
- # nodes are used.
- #
- # If not specified, `min_nodes` defaults to 0, in which case, when traffic
- # to a model stops (and after a cool-down period), nodes will be shut down
- # and no charges will be incurred until traffic to the model resumes.
- },
- "createTime": "A String", # Output only. The time the version was created.
- "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
- # requests that do not specify a version.
- #
- # You can change the default version by calling
- # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
- "name": "A String", # Required.The name specified for the version when it was created.
- #
- # The version name must be unique within the model it is created in.
- },
- "name": "A String", # Required. The name specified for the model when it was created.
- #
- # The model name must be unique within the project it is created in.
- "onlinePredictionLogging": True or False, # Optional. If true, enables StackDriver Logging for online prediction.
- # Default is false.
- "description": "A String", # Optional. The description specified for the model when it was created.
- }</pre>
+ # The version name must be unique within the model it is created in.
+ },
+ "onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to StackDriver
+ # Logging. These logs are like standard server access logs, containing
+ # information like timestamp and latency for each request. Note that
+ # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
+ # your project receives prediction requests at a high queries per second rate
+ # (QPS). Estimate your costs before enabling this option.
+ #
+ # Default is false.
+ "name": "A String", # Required. The name specified for the model when it was created.
+ #
+ # The model name must be unique within the project it is created in.
+ }</pre>
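To make the Predictor interface quoted in the `predictionClass` documentation above concrete, here is a toy implementation sketch. The class name and the echo behaviour are hypothetical stand-ins for a real model; a real version would load exported model files in `from_path`.

```python
class EchoPredictor(object):
    """A toy implementation of the Predictor interface shown above."""

    def __init__(self, model):
        self._model = model

    def predict(self, instances, **kwargs):
        # A real predictor would run self._model over the decoded instances;
        # the returned list must be JSON serializable.
        return list(instances)

    @classmethod
    def from_path(cls, model_dir):
        # A real implementation would load exported model files from
        # model_dir; nothing is loaded in this sketch.
        return cls(model=None)


predictor = EchoPredictor.from_path("/tmp/model")
print(predictor.predict([{"x": 1}, {"x": 2}]))  # echoes the instances back
```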
</div>
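The `autoScaling` documentation above shows updating `minNodes` for an existing version via PATCH with `update_mask=autoScaling.minNodes`. As a sketch, the request body and mask from that inline example can be built like this (the HTTP call itself is omitted):

```python
import json

# Mirrors the update_body.json example in the autoScaling reference above.
update_mask = "autoScaling.minNodes"
update_body = {"autoScaling": {"minNodes": 5}}

payload = json.dumps(update_body)
print(payload)
```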
<div class="method">
- <code class="details" id="list">list(parent, pageToken=None, x__xgafv=None, pageSize=None)</code>
+ <code class="details" id="getIamPolicy">getIamPolicy(resource, x__xgafv=None)</code>
+ <pre>Gets the access control policy for a resource.
+Returns an empty policy if the resource exists and does not have a policy
+set.
+
+Args:
+ resource: string, REQUIRED: The resource for which the policy is being requested.
+See the operation documentation for the appropriate value for this field. (required)
+ x__xgafv: string, V1 error format.
+ Allowed values
+ 1 - v1 error format
+ 2 - v2 error format
+
+Returns:
+ An object of the form:
+
+ { # Defines an Identity and Access Management (IAM) policy. It is used to
+ # specify access control policies for Cloud Platform resources.
+ #
+ #
+ # A `Policy` consists of a list of `bindings`. A `binding` binds a list of
+ # `members` to a `role`, where the members can be user accounts, Google groups,
+ # Google domains, and service accounts. A `role` is a named list of permissions
+ # defined by IAM.
+ #
+ # **JSON Example**
+ #
+ # {
+ # "bindings": [
+ # {
+ # "role": "roles/owner",
+ # "members": [
+ # "user:mike@example.com",
+ # "group:admins@example.com",
+ # "domain:google.com",
+ # "serviceAccount:my-other-app@appspot.gserviceaccount.com"
+ # ]
+ # },
+ # {
+ # "role": "roles/viewer",
+ # "members": ["user:sean@example.com"]
+ # }
+ # ]
+ # }
+ #
+ # **YAML Example**
+ #
+ # bindings:
+ # - members:
+ # - user:mike@example.com
+ # - group:admins@example.com
+ # - domain:google.com
+ # - serviceAccount:my-other-app@appspot.gserviceaccount.com
+ # role: roles/owner
+ # - members:
+ # - user:sean@example.com
+ # role: roles/viewer
+ #
+ #
+ # For a description of IAM and its features, see the
+ # [IAM developer's guide](https://cloud.google.com/iam/docs).
+ "bindings": [ # Associates a list of `members` to a `role`.
+ # `bindings` with no members will result in an error.
+ { # Associates `members` with a `role`.
+ "role": "A String", # Role that is assigned to `members`.
+ # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
+ "members": [ # Specifies the identities requesting access for a Cloud Platform resource.
+ # `members` can have the following values:
+ #
+ # * `allUsers`: A special identifier that represents anyone who is
+ # on the internet; with or without a Google account.
+ #
+ # * `allAuthenticatedUsers`: A special identifier that represents anyone
+ # who is authenticated with a Google account or a service account.
+ #
+ # * `user:{emailid}`: An email address that represents a specific Google
+ # account. For example, `alice@gmail.com` .
+ #
+ #
+ # * `serviceAccount:{emailid}`: An email address that represents a service
+ # account. For example, `my-other-app@appspot.gserviceaccount.com`.
+ #
+ # * `group:{emailid}`: An email address that represents a Google group.
+ # For example, `admins@example.com`.
+ #
+ #
+ # * `domain:{domain}`: The G Suite domain (primary) that represents all the
+ # users of that domain. For example, `google.com` or `example.com`.
+ #
+ "A String",
+ ],
+ "condition": { # Represents an expression text. Example: # The condition that is associated with this binding.
+ # NOTE: An unsatisfied condition will not allow user access via current
+ # binding. Different bindings, including their conditions, are examined
+ # independently.
+ #
+ # title: "User account presence"
+ # description: "Determines whether the request has a user account"
+ # expression: "size(request.user) > 0"
+ "description": "A String", # An optional description of the expression. This is a longer text which
+ # describes the expression, e.g. when hovered over it in a UI.
+ "expression": "A String", # Textual representation of an expression in
+ # Common Expression Language syntax.
+ #
+ # The application context of the containing message determines which
+ # well-known feature set of CEL is supported.
+ "location": "A String", # An optional string indicating the location of the expression for error
+ # reporting, e.g. a file name and a position in the file.
+ "title": "A String", # An optional title for the expression, i.e. a short string describing
+ # its purpose. This can be used, for example, in UIs that allow users to
+ # enter the expression.
+ },
+ },
+ ],
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a policy from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform policy updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `getIamPolicy`, and
+ # systems are expected to put that etag in the request to `setIamPolicy` to
+ # ensure that their change will be applied to the same version of the policy.
+ #
+ # If no `etag` is provided in the call to `setIamPolicy`, then the existing
+ # policy is overwritten blindly.
+ "version": 42, # Deprecated.
+ "auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
+ { # Specifies the audit configuration for a service.
+ # The configuration determines which permission types are logged, and what
+ # identities, if any, are exempted from logging.
+ # An AuditConfig must have one or more AuditLogConfigs.
+ #
+ # If there are AuditConfigs for both `allServices` and a specific service,
+ # the union of the two AuditConfigs is used for that service: the log_types
+ # specified in each AuditConfig are enabled, and the exempted_members in each
+ # AuditLogConfig are exempted.
+ #
+ # Example Policy with multiple AuditConfigs:
+ #
+ # {
+ # "audit_configs": [
+ # {
+ # "service": "allServices"
+ # "audit_log_configs": [
+ # {
+ # "log_type": "DATA_READ",
+ # "exempted_members": [
+ # "user:foo@gmail.com"
+ # ]
+ # },
+ # {
+ # "log_type": "DATA_WRITE",
+ # },
+ # {
+ # "log_type": "ADMIN_READ",
+ # }
+ # ]
+ # },
+ # {
+ # "service": "fooservice.googleapis.com"
+ # "audit_log_configs": [
+ # {
+ # "log_type": "DATA_READ",
+ # },
+ # {
+ # "log_type": "DATA_WRITE",
+ # "exempted_members": [
+ # "user:bar@gmail.com"
+ # ]
+ # }
+ # ]
+ # }
+ # ]
+ # }
+ #
+ # For fooservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
+ # logging. It also exempts foo@gmail.com from DATA_READ logging, and
+ # bar@gmail.com from DATA_WRITE logging.
+ "auditLogConfigs": [ # The configuration for logging of each type of permission.
+ { # Provides the configuration for logging a type of permissions.
+ # Example:
+ #
+ # {
+ # "audit_log_configs": [
+ # {
+ # "log_type": "DATA_READ",
+ # "exempted_members": [
+ # "user:foo@gmail.com"
+ # ]
+ # },
+ # {
+ # "log_type": "DATA_WRITE",
+ # }
+ # ]
+ # }
+ #
+ # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
+ # foo@gmail.com from DATA_READ logging.
+ "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
+ # permission.
+ # Follows the same format of Binding.members.
+ "A String",
+ ],
+ "logType": "A String", # The log type that this config enables.
+ },
+ ],
+ "service": "A String", # Specifies a service that will be enabled for audit logging.
+ # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
+ # `allServices` is a special value that covers all services.
+ },
+ ],
+ }</pre>
+</div>
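The `etag` contract described above (read the policy with `getIamPolicy`, then send the same `etag` back with `setIamPolicy`) can be sketched as a small helper. This is a sketch under assumptions: `get_policy`, `set_policy`, and `mutate` are hypothetical stand-ins for the generated client's `projects().models().getIamPolicy(...).execute()` and `setIamPolicy(...).execute()` calls.

```python
def update_policy_with_etag(get_policy, set_policy, mutate):
    """Read-modify-write cycle for an IAM policy.

    get_policy() returns the current policy dict (including its `etag`);
    mutate(policy) edits the bindings in place; set_policy(body) sends the
    modified policy back. Carrying the etag from the read lets the service
    reject the write if the policy changed in the meantime, instead of
    overwriting it blindly (which is what happens when no etag is sent).
    """
    policy = get_policy()  # the response carries the current `etag`
    mutate(policy)         # e.g. append a binding
    # Do NOT drop or rewrite policy["etag"]; send it back unchanged.
    return set_policy({"policy": policy})
```

With the generated client this would be wired up roughly as `get_policy = lambda: ml.projects().models().getIamPolicy(resource=name).execute()`, where `ml` and `name` are assumed to already exist.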
+
+<div class="method">
+ <code class="details" id="list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</code>
<pre>Lists the models in a project.
Each project can contain multiple models, and each model can have multiple
versions.
-Args:
- parent: string, Required. The name of the project whose models are to be listed.
+If there are no models that match the request parameters, the list request
+returns an empty response body: {}.
-Authorization: requires `Viewer` role on the specified project. (required)
+Args:
+ parent: string, Required. The name of the project whose models are to be listed. (required)
pageToken: string, Optional. A page token to request the next page of results.
You get the token from the `next_page_token` field of the response from
@@ -554,108 +1219,272 @@
contain a valid value in the `next_page_token` field.
The default value is 20, and the maximum page size is 100.
+ filter: string, Optional. Specifies the subset of models to retrieve.
Returns:
An object of the form:
{ # Response message for the ListModels method.
+ "nextPageToken": "A String", # Optional. Pass this token as the `page_token` field of the request for a
+ # subsequent call.
"models": [ # The list of models.
{ # Represents a machine learning solution.
+ #
+ # A model can have multiple versions, each of which is a deployed, trained
+ # model ready to receive prediction requests. The model itself is just a
+ # container.
+ "description": "A String", # Optional. The description specified for the model when it was created.
+ "onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
+ # streams to Stackdriver Logging. These can be more verbose than the standard
+ # access logs (see `onlinePredictionLogging`) and can incur higher cost.
+ # However, they are helpful for debugging. Note that
+ # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
+ # your project receives prediction requests at a high QPS. Estimate your
+ # costs before enabling this option.
#
- # A model can have multiple versions, each of which is a deployed, trained
- # model ready to receive prediction requests. The model itself is just a
- # container.
- "regions": [ # Optional. The list of regions where the model is going to be deployed.
- # Currently only one region per model is supported.
- # Defaults to 'us-central1' if nothing is set.
- # Note:
- # * No matter where a model is deployed, it can always be accessed by
- # users from anywhere, both for online and batch prediction.
- # * The region for a batch prediction job is set by the region field when
- # submitting the batch prediction job and does not take its value from
- # this field.
+ # Default is false.
+ "labels": { # Optional. One or more labels that you can add to organize your models.
+ # Each label is a key-value pair, where both the key and the value are
+ # arbitrary strings that you supply.
+ # For more information, see the documentation on
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ "a_key": "A String",
+ },
+ "regions": [ # Optional. The list of regions where the model is going to be deployed.
+ # Currently only one region per model is supported.
+ # Defaults to 'us-central1' if nothing is set.
+ # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+ # for AI Platform services.
+ # Note:
+ # * No matter where a model is deployed, it can always be accessed by
+ # users from anywhere, both for online and batch prediction.
+ # * The region for a batch prediction job is set by the region field when
+ # submitting the batch prediction job and does not take its value from
+ # this field.
+ "A String",
+ ],
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a model from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform model updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `GetModel`, and
+ # systems are expected to put that etag in the request to `UpdateModel` to
+ # ensure that their change will be applied to the model as intended.
+ "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
+ # handle prediction requests that do not specify a version.
+ #
+ # You can change the default version by calling
+ # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
+ #
+ # Each version is a trained model deployed in the cloud, ready to handle
+ # prediction requests. A model can have multiple versions. You can get
+ # information about all of the versions of a given model by calling
+ # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
+ "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+ "labels": { # Optional. One or more labels that you can add to organize your model
+ # versions. Each label is a key-value pair, where both the key and the value
+ # are arbitrary strings that you supply.
+ # For more information, see the documentation on
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ "a_key": "A String",
+ },
+ "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
+ # applies to the online prediction service.
+ # <dl>
+ # <dt>mls1-c1-m2</dt>
+ # <dd>
+ # The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated
+ # name for this machine type is "mls1-highmem-1".
+ # </dd>
+ # <dt>mls1-c4-m2</dt>
+ # <dd>
+ # In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The
+ # deprecated name for this machine type is "mls1-highcpu-4".
+ # </dd>
+ # </dl>
+ "description": "A String", # Optional. The description specified for the version when it was created.
+ "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment.
+ # If not set, AI Platform uses the default stable version, 1.0. For more
+ # information, see the
+ # [runtime version list](/ml-engine/docs/runtime-version-list) and
+ # [how to manage runtime versions](/ml-engine/docs/versioning).
+ "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+ # model. You should generally use `auto_scaling` with an appropriate
+ # `min_nodes` instead, but this option is available if you want more
+ # predictable billing. Beware that latency and error rates will increase
+ # if the traffic exceeds the capacity of the system to serve it based
+ # on the selected number of nodes.
+ "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
+ # starting from the time the model is deployed, so the cost of operating
+ # this model will be proportional to `nodes` * number of hours since
+ # last billing cycle plus the cost for each prediction performed.
+ },
+ "predictionClass": "A String", # Optional. The fully qualified name
+ # (<var>module_name</var>.<var>class_name</var>) of a class that implements
+ # the Predictor interface described in this reference field. The module
+ # containing this class should be included in a package provided to the
+ # [`packageUris` field](#Version.FIELDS.package_uris).
+ #
+ # Specify this field if and only if you are deploying a [custom prediction
+ # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ # If you specify this field, you must set
+ # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
+ #
+ # The following code sample provides the Predictor interface:
+ #
+ # ```py
+ # class Predictor(object):
+ # """Interface for constructing custom predictors."""
+ #
+ # def predict(self, instances, **kwargs):
+ # """Performs custom prediction.
+ #
+ # Instances are the decoded values from the request. They have already
+ # been deserialized from JSON.
+ #
+ # Args:
+ # instances: A list of prediction input instances.
+ # **kwargs: A dictionary of keyword args provided as additional
+ # fields on the predict request body.
+ #
+ # Returns:
+ # A list of outputs containing the prediction results. This list must
+ # be JSON serializable.
+ # """
+ # raise NotImplementedError()
+ #
+ # @classmethod
+ # def from_path(cls, model_dir):
+ # """Creates an instance of Predictor using the given path.
+ #
+ # Loading of the predictor should be done in this method.
+ #
+ # Args:
+ # model_dir: The local directory that contains the exported model
+ # file along with any additional files uploaded when creating the
+ # version resource.
+ #
+ # Returns:
+ # An instance implementing this Predictor class.
+ # """
+ # raise NotImplementedError()
+ # ```
+ #
+ # Learn more about [the Predictor interface and custom prediction
+ # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+ # response to increases and decreases in traffic. Care should be
+ # taken to ramp up traffic according to the model's ability to scale
+ # or you will start seeing increases in latency and 429 response codes.
+ "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+ # nodes are always up, starting from the time the model is deployed.
+ # Therefore, the cost of operating this model will be at least
+ # `rate` * `min_nodes` * number of hours since last billing cycle,
+ # where `rate` is the cost per node-hour as documented in the
+ # [pricing guide](/ml-engine/docs/pricing),
+ # even if no predictions are performed. There is additional cost for each
+ # prediction performed.
+ #
+ # Unlike manual scaling, if the load gets too heavy for the nodes
+ # that are up, the service will automatically add nodes to handle the
+ # increased load as well as scale back as traffic drops, always maintaining
+ # at least `min_nodes`. You will be charged for the time in which additional
+ # nodes are used.
+ #
+ # If not specified, `min_nodes` defaults to 0, in which case, when traffic
+ # to a model stops (and after a cool-down period), nodes will be shut down
+ # and no charges will be incurred until traffic to the model resumes.
+ #
+ # You can set `min_nodes` when creating the model version, and you can also
+ # update `min_nodes` for an existing version:
+ # <pre>
+ # update_body.json:
+ # {
+ # 'autoScaling': {
+ # 'minNodes': 5
+ # }
+ # }
+ # </pre>
+ # HTTP request:
+ # <pre>
+ # PATCH
+ # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
+ # -d @./update_body.json
+ # </pre>
+ },
+ "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
+ "state": "A String", # Output only. The state of a version.
+ "pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default
+ # version is '2.7'. Python '3.5' is available when `runtime_version` is set
+ # to '1.4' and above. Python '2.7' works with all supported runtime versions.
+ "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
+ # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+ # `XGBOOST`. If you do not specify a framework, AI Platform
+ # will analyze files in the deployment_uri to determine a framework. If you
+ # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+ # of the model to 1.4 or greater.
+ #
+ # Do **not** specify a framework if you're deploying a [custom
+ # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+ # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
+ # or [scikit-learn pipelines with custom
+ # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
+ #
+ # For a custom prediction routine, one of these packages must contain your
+ # Predictor class (see
+ # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
+ # include any dependencies that your Predictor or scikit-learn pipeline
+ # uses that are not already included in your selected [runtime
+ # version](/ml-engine/docs/tensorflow/runtime-version-list).
+ #
+ # If you specify this field, you must also set
+ # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
"A String",
],
- "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
- # handle prediction requests that do not specify a version.
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a model from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform model updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `GetVersion`, and
+ # systems are expected to put that etag in the request to `UpdateVersion` to
+ # ensure that their change will be applied to the model as intended.
+ "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
+ "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
+ # create the version. See the
+ # [guide to model
+ # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
+ # information.
+ #
+ # When passing Version to
+ # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
+ # the model service uses the specified location as the source of the model.
+ # Once deployed, the model version is hosted by the prediction service, so
+ # this location is useful only as a historical record.
+ # The total number of model files can't exceed 1000.
+ "createTime": "A String", # Output only. The time the version was created.
+ "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
+ # requests that do not specify a version.
#
# You can change the default version by calling
# [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
+ "name": "A String", # Required. The name specified for the version when it was created.
#
- # Each version is a trained model deployed in the cloud, ready to handle
- # prediction requests. A model can have multiple versions. You can get
- # information about all of the versions of a given model by calling
- # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
- "description": "A String", # Optional. The description specified for the version when it was created.
- "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for this deployment.
- # If not set, Google Cloud ML will choose a version.
- "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
- # model. You should generally use `automatic_scaling` with an appropriate
- # `min_nodes` instead, but this option is available if you want more
- # predictable billing. Beware that latency and error rates will increase
- # if the traffic exceeds that capability of the system to serve it based
- # on the selected number of nodes.
- "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
- # starting from the time the model is deployed, so the cost of operating
- # this model will be proportional to `nodes` * number of hours since
- # last billing cycle plus the cost for each prediction performed.
- },
- "deploymentUri": "A String", # Required. The Google Cloud Storage location of the trained model used to
- # create the version. See the
- # [overview of model
- # deployment](/ml-engine/docs/concepts/deployment-overview) for more
- # informaiton.
- #
- # When passing Version to
- # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
- # the model service uses the specified location as the source of the model.
- # Once deployed, the model version is hosted by the prediction service, so
- # this location is useful only as a historical record.
- # The total number of model files can't exceed 1000.
- "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
- "automaticScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
- # response to increases and decreases in traffic. Care should be
- # taken to ramp up traffic according to the model's ability to scale
- # or you will start seeing increases in latency and 429 response codes.
- "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
- # nodes are always up, starting from the time the model is deployed, so the
- # cost of operating this model will be at least
- # `rate` * `min_nodes` * number of hours since last billing cycle,
- # where `rate` is the cost per node-hour as documented in
- # [pricing](https://cloud.google.com/ml-engine/pricing#prediction_pricing),
- # even if no predictions are performed. There is additional cost for each
- # prediction performed.
- #
- # Unlike manual scaling, if the load gets too heavy for the nodes
- # that are up, the service will automatically add nodes to handle the
- # increased load as well as scale back as traffic drops, always maintaining
- # at least `min_nodes`. You will be charged for the time in which additional
- # nodes are used.
- #
- # If not specified, `min_nodes` defaults to 0, in which case, when traffic
- # to a model stops (and after a cool-down period), nodes will be shut down
- # and no charges will be incurred until traffic to the model resumes.
- },
- "createTime": "A String", # Output only. The time the version was created.
- "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
- # requests that do not specify a version.
- #
- # You can change the default version by calling
- # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
- "name": "A String", # Required.The name specified for the version when it was created.
- #
- # The version name must be unique within the model it is created in.
- },
- "name": "A String", # Required. The name specified for the model when it was created.
- #
- # The model name must be unique within the project it is created in.
- "onlinePredictionLogging": True or False, # Optional. If true, enables StackDriver Logging for online prediction.
- # Default is false.
- "description": "A String", # Optional. The description specified for the model when it was created.
+ # The version name must be unique within the model it is created in.
},
+ "onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to Stackdriver
+ # Logging. These logs are like standard server access logs, containing
+ # information like timestamp and latency for each request. Note that
+ # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
+ # your project receives prediction requests at a high queries-per-second
+ # (QPS) rate. Estimate your costs before enabling this option.
+ #
+ # Default is false.
+ "name": "A String", # Required. The name specified for the model when it was created.
+ #
+ # The model name must be unique within the project it is created in.
+ },
],
- "nextPageToken": "A String", # Optional. Pass this token as the `page_token` field of the request for a
- # subsequent call.
}</pre>
</div>
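The `list`/`list_next` pattern above can be wrapped in a generator that follows `nextPageToken` until the pages run out. A minimal sketch, assuming a `models_resource` object that exposes the `list` and `list_next` methods documented here; the `parent` value is a placeholder:

```python
def iter_models(models_resource, parent="projects/my-project"):
    """Yield every model in a project, one page at a time.

    Follows the documented pagination contract: each response may carry a
    `nextPageToken`, and `list_next` builds the follow-up request (or
    returns None when no more pages remain). A project with no matching
    models returns an empty body ({}), so the generator yields nothing.
    """
    request = models_resource.list(parent=parent)
    while request is not None:
        response = request.execute()
        for model in response.get("models", []):
            yield model
        request = models_resource.list_next(request, response)
```

Compared with passing `pageToken` by hand, letting `list_next` derive the follow-up request keeps the other parameters (`pageSize`, `filter`) consistent across pages.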
@@ -673,4 +1502,804 @@
</pre>
</div>
+<div class="method">
+ <code class="details" id="patch">patch(name, body, updateMask=None, x__xgafv=None)</code>
+ <pre>Updates a specific model resource.
+
+Currently the only supported fields to update are `description` and
+`default_version.name`.
+
+Args:
+ name: string, Required. The project name. (required)
+ body: object, The request body. (required)
+ The object takes the form of:
+
+{ # Represents a machine learning solution.
+ #
+ # A model can have multiple versions, each of which is a deployed, trained
+ # model ready to receive prediction requests. The model itself is just a
+ # container.
+ "description": "A String", # Optional. The description specified for the model when it was created.
+ "onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
+ # streams to Stackdriver Logging. These can be more verbose than the standard
+ # access logs (see `onlinePredictionLogging`) and can incur higher cost.
+ # However, they are helpful for debugging. Note that
+ # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
+ # your project receives prediction requests at a high QPS. Estimate your
+ # costs before enabling this option.
+ #
+ # Default is false.
+ "labels": { # Optional. One or more labels that you can add to organize your models.
+ # Each label is a key-value pair, where both the key and the value are
+ # arbitrary strings that you supply.
+ # For more information, see the documentation on
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ "a_key": "A String",
+ },
+ "regions": [ # Optional. The list of regions where the model is going to be deployed.
+ # Currently only one region per model is supported.
+ # Defaults to 'us-central1' if nothing is set.
+ # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+ # for AI Platform services.
+ # Note:
+ # * No matter where a model is deployed, it can always be accessed by
+ # users from anywhere, both for online and batch prediction.
+ # * The region for a batch prediction job is set by the region field when
+ # submitting the batch prediction job and does not take its value from
+ # this field.
+ "A String",
+ ],
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a model from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform model updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `GetModel`, and
+ # systems are expected to put that etag in the request to `UpdateModel` to
+ # ensure that their change will be applied to the model as intended.
+ "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
+ # handle prediction requests that do not specify a version.
+ #
+ # You can change the default version by calling
+ # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
+ #
+ # Each version is a trained model deployed in the cloud, ready to handle
+ # prediction requests. A model can have multiple versions. You can get
+ # information about all of the versions of a given model by calling
+ # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
+ "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+ "labels": { # Optional. One or more labels that you can add to organize your model
+ # versions. Each label is a key-value pair, where both the key and the value
+ # are arbitrary strings that you supply.
+ # For more information, see the documentation on
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ "a_key": "A String",
+ },
+ "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
+ # applies to the online prediction service.
+ # <dl>
+ # <dt>mls1-c1-m2</dt>
+ # <dd>
+ # The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated
+ # name for this machine type is "mls1-highmem-1".
+ # </dd>
+ # <dt>mls1-c4-m2</dt>
+ # <dd>
+ # In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The
+ # deprecated name for this machine type is "mls1-highcpu-4".
+ # </dd>
+ # </dl>
+ "description": "A String", # Optional. The description specified for the version when it was created.
+ "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment.
+ # If not set, AI Platform uses the default stable version, 1.0. For more
+ # information, see the
+ # [runtime version list](/ml-engine/docs/runtime-version-list) and
+ # [how to manage runtime versions](/ml-engine/docs/versioning).
+ "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+ # model. You should generally use `auto_scaling` with an appropriate
+ # `min_nodes` instead, but this option is available if you want more
+ # predictable billing. Beware that latency and error rates will increase
+ # if the traffic exceeds the capacity of the system to serve it based
+ # on the selected number of nodes.
+ "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
+ # starting from the time the model is deployed, so the cost of operating
+ # this model will be proportional to `nodes` * number of hours since
+ # last billing cycle plus the cost for each prediction performed.
+ },
+ "predictionClass": "A String", # Optional. The fully qualified name
+ # (<var>module_name</var>.<var>class_name</var>) of a class that implements
+ # the Predictor interface described in this reference field. The module
+ # containing this class should be included in a package provided to the
+ # [`packageUris` field](#Version.FIELDS.package_uris).
+ #
+ # Specify this field if and only if you are deploying a [custom prediction
+ # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ # If you specify this field, you must set
+ # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
+ #
+ # The following code sample provides the Predictor interface:
+ #
+ # ```py
+ # class Predictor(object):
+ # """Interface for constructing custom predictors."""
+ #
+ # def predict(self, instances, **kwargs):
+ # """Performs custom prediction.
+ #
+ # Instances are the decoded values from the request. They have already
+ # been deserialized from JSON.
+ #
+ # Args:
+ # instances: A list of prediction input instances.
+ # **kwargs: A dictionary of keyword args provided as additional
+ # fields on the predict request body.
+ #
+ # Returns:
+ # A list of outputs containing the prediction results. This list must
+ # be JSON serializable.
+ # """
+ # raise NotImplementedError()
+ #
+ # @classmethod
+ # def from_path(cls, model_dir):
+ # """Creates an instance of Predictor using the given path.
+ #
+ # Loading of the predictor should be done in this method.
+ #
+ # Args:
+ # model_dir: The local directory that contains the exported model
+ # file along with any additional files uploaded when creating the
+ # version resource.
+ #
+ # Returns:
+ # An instance implementing this Predictor class.
+ # """
+ # raise NotImplementedError()
+ # ```
+ #
+ # Learn more about [the Predictor interface and custom prediction
+ # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+ # response to increases and decreases in traffic. Care should be
+ # taken to ramp up traffic according to the model's ability to scale
+ # or you will start seeing increases in latency and 429 response codes.
+ "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+ # nodes are always up, starting from the time the model is deployed.
+ # Therefore, the cost of operating this model will be at least
+ # `rate` * `min_nodes` * number of hours since last billing cycle,
+ # where `rate` is the cost per node-hour as documented in the
+ # [pricing guide](/ml-engine/docs/pricing),
+ # even if no predictions are performed. There is additional cost for each
+ # prediction performed.
+ #
+ # Unlike manual scaling, if the load gets too heavy for the nodes
+ # that are up, the service will automatically add nodes to handle the
+ # increased load as well as scale back as traffic drops, always maintaining
+ # at least `min_nodes`. You will be charged for the time in which additional
+ # nodes are used.
+ #
+ # If not specified, `min_nodes` defaults to 0, in which case, when traffic
+ # to a model stops (and after a cool-down period), nodes will be shut down
+ # and no charges will be incurred until traffic to the model resumes.
+ #
+ # You can set `min_nodes` when creating the model version, and you can also
+ # update `min_nodes` for an existing version:
+ # <pre>
+ # update_body.json:
+ # {
+ # 'autoScaling': {
+ # 'minNodes': 5
+ # }
+ # }
+ # </pre>
+ # HTTP request:
+ # <pre>
+ # PATCH
+ # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
+ # -d @./update_body.json
+ # </pre>
+ },
+ "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
+ "state": "A String", # Output only. The state of a version.
+ "pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default
+ # version is '2.7'. Python '3.5' is available when `runtime_version` is set
+ # to '1.4' and above. Python '2.7' works with all supported runtime versions.
+ "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
+ # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+ # `XGBOOST`. If you do not specify a framework, AI Platform
+ # will analyze files in the deployment_uri to determine a framework. If you
+ # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+ # of the model to 1.4 or greater.
+ #
+ # Do **not** specify a framework if you're deploying a [custom
+ # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+ # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
+ # or [scikit-learn pipelines with custom
+ # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
+ #
+ # For a custom prediction routine, one of these packages must contain your
+ # Predictor class (see
+ # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
+      # include any dependencies that your Predictor or scikit-learn pipeline
+      # uses that are not already included in your selected [runtime
+ # version](/ml-engine/docs/tensorflow/runtime-version-list).
+ #
+ # If you specify this field, you must also set
+ # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
+ "A String",
+ ],
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a model from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform model updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `GetVersion`, and
+ # systems are expected to put that etag in the request to `UpdateVersion` to
+ # ensure that their change will be applied to the model as intended.
+ "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
+ "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
+ # create the version. See the
+ # [guide to model
+ # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
+ # information.
+ #
+ # When passing Version to
+ # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
+ # the model service uses the specified location as the source of the model.
+ # Once deployed, the model version is hosted by the prediction service, so
+ # this location is useful only as a historical record.
+ # The total number of model files can't exceed 1000.
+ "createTime": "A String", # Output only. The time the version was created.
+ "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
+ # requests that do not specify a version.
+ #
+ # You can change the default version by calling
+ # [projects.methods.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
+    "name": "A String", # Required. The name specified for the version when it was created.
+ #
+ # The version name must be unique within the model it is created in.
+ },
+  "onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to Stackdriver
+ # Logging. These logs are like standard server access logs, containing
+ # information like timestamp and latency for each request. Note that
+ # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
+ # your project receives prediction requests at a high queries per second rate
+ # (QPS). Estimate your costs before enabling this option.
+ #
+ # Default is false.
+ "name": "A String", # Required. The name specified for the model when it was created.
+ #
+ # The model name must be unique within the project it is created in.
+}
+
+ updateMask: string, Required. Specifies the path, relative to `Model`, of the field to update.
+
+For example, to change the description of a model to "foo" and set its
+default version to "version_1", the `update_mask` parameter would be
+specified as `description,default_version.name`, and the `PATCH`
+request body would specify the new values, as follows:
+ {
+ "description": "foo",
+ "defaultVersion": {
+ "name":"version_1"
+ }
+ }
+
+Currently the supported update masks are `description` and
+`default_version.name`.
+ x__xgafv: string, V1 error format.
+ Allowed values
+ 1 - v1 error format
+ 2 - v2 error format
+
+Returns:
+ An object of the form:
+
+ { # This resource represents a long-running operation that is the result of a
+ # network API call.
+ "metadata": { # Service-specific metadata associated with the operation. It typically
+ # contains progress information and common metadata such as create time.
+ # Some services might not provide such metadata. Any method that returns a
+ # long-running operation should document the metadata type, if any.
+ "a_key": "", # Properties of the object. Contains field @type with type URL.
+ },
+ "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). Each `Status` message contains
+ # three pieces of data: error code, error message, and error details.
+ #
+ # You can find out more about this error model and how to work with it in the
+ # [API Design Guide](https://cloud.google.com/apis/design/errors).
+ "message": "A String", # A developer-facing error message, which should be in English. Any
+ # user-facing error message should be localized and sent in the
+ # google.rpc.Status.details field, or localized by the client.
+ "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+ "details": [ # A list of messages that carry the error details. There is a common set of
+ # message types for APIs to use.
+ {
+ "a_key": "", # Properties of the object. Contains field @type with type URL.
+ },
+ ],
+ },
+ "done": True or False, # If the value is `false`, it means the operation is still in progress.
+ # If `true`, the operation is completed, and either `error` or `response` is
+ # available.
+ "response": { # The normal response of the operation in case of success. If the original
+ # method returns no data on success, such as `Delete`, the response is
+ # `google.protobuf.Empty`. If the original method is standard
+ # `Get`/`Create`/`Update`, the response should be the resource. For other
+ # methods, the response should have the type `XxxResponse`, where `Xxx`
+ # is the original method name. For example, if the original method name
+ # is `TakeSnapshot()`, the inferred response type is
+ # `TakeSnapshotResponse`.
+ "a_key": "", # Properties of the object. Contains field @type with type URL.
+ },
+ "name": "A String", # The server-assigned name, which is only unique within the same service that
+ # originally returns it. If you use the default HTTP mapping, the
+ # `name` should be a resource name ending with `operations/{unique_id}`.
+ }</pre>
+</div>
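As a rough sketch of how the patch method above might be used from the Python client library, the helper below assembles the resource name, update mask, and request body for a description update. The project and model names are placeholders, and the client-library call itself (which requires `googleapiclient` and credentials) is shown only in comments.

```python
def model_patch_args(project, model, new_description):
    """Returns (name, update_mask, body) for a models.patch request.

    Only `description` and `default_version.name` are documented as
    supported update-mask paths for this method.
    """
    return (
        'projects/{}/models/{}'.format(project, model),
        'description',
        {'description': new_description},
    )

name, mask, body = model_patch_args('my-project', 'my_model', 'foo')
# With an authorized client:
#   service = googleapiclient.discovery.build('ml', 'v1')
#   operation = service.projects().models().patch(
#       name=name, updateMask=mask, body=body).execute()
# The call returns a long-running Operation; check operation['done'] (or
# poll the operations API) to see when the update completes.
```

Because patch returns an Operation rather than the updated model, the new description is only guaranteed to be visible once the operation reports `done: true`.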
+
+<div class="method">
+ <code class="details" id="setIamPolicy">setIamPolicy(resource, body, x__xgafv=None)</code>
+ <pre>Sets the access control policy on the specified resource. Replaces any
+existing policy.
+
+Args:
+ resource: string, REQUIRED: The resource for which the policy is being specified.
+See the operation documentation for the appropriate value for this field. (required)
+ body: object, The request body. (required)
+ The object takes the form of:
+
+{ # Request message for `SetIamPolicy` method.
+ "policy": { # Defines an Identity and Access Management (IAM) policy. It is used to # REQUIRED: The complete policy to be applied to the `resource`. The size of
+        # the policy is limited to a few tens of KB. An empty policy is a
+ # valid policy but certain Cloud Platform services (such as Projects)
+ # might reject them.
+ # specify access control policies for Cloud Platform resources.
+ #
+ #
+ # A `Policy` consists of a list of `bindings`. A `binding` binds a list of
+ # `members` to a `role`, where the members can be user accounts, Google groups,
+ # Google domains, and service accounts. A `role` is a named list of permissions
+ # defined by IAM.
+ #
+ # **JSON Example**
+ #
+ # {
+ # "bindings": [
+ # {
+ # "role": "roles/owner",
+ # "members": [
+ # "user:mike@example.com",
+ # "group:admins@example.com",
+ # "domain:google.com",
+ # "serviceAccount:my-other-app@appspot.gserviceaccount.com"
+ # ]
+ # },
+ # {
+ # "role": "roles/viewer",
+ # "members": ["user:sean@example.com"]
+ # }
+ # ]
+ # }
+ #
+ # **YAML Example**
+ #
+ # bindings:
+ # - members:
+ # - user:mike@example.com
+ # - group:admins@example.com
+ # - domain:google.com
+ # - serviceAccount:my-other-app@appspot.gserviceaccount.com
+ # role: roles/owner
+ # - members:
+ # - user:sean@example.com
+ # role: roles/viewer
+ #
+ #
+ # For a description of IAM and its features, see the
+ # [IAM developer's guide](https://cloud.google.com/iam/docs).
+    "bindings": [ # Associates a list of `members` with a `role`.
+ # `bindings` with no members will result in an error.
+ { # Associates `members` with a `role`.
+ "role": "A String", # Role that is assigned to `members`.
+ # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
+ "members": [ # Specifies the identities requesting access for a Cloud Platform resource.
+ # `members` can have the following values:
+ #
+ # * `allUsers`: A special identifier that represents anyone who is
+ # on the internet; with or without a Google account.
+ #
+ # * `allAuthenticatedUsers`: A special identifier that represents anyone
+ # who is authenticated with a Google account or a service account.
+ #
+ # * `user:{emailid}`: An email address that represents a specific Google
+ # account. For example, `alice@gmail.com` .
+ #
+ #
+ # * `serviceAccount:{emailid}`: An email address that represents a service
+ # account. For example, `my-other-app@appspot.gserviceaccount.com`.
+ #
+ # * `group:{emailid}`: An email address that represents a Google group.
+ # For example, `admins@example.com`.
+ #
+ #
+ # * `domain:{domain}`: The G Suite domain (primary) that represents all the
+ # users of that domain. For example, `google.com` or `example.com`.
+ #
+ "A String",
+ ],
+ "condition": { # Represents an expression text. Example: # The condition that is associated with this binding.
+        # NOTE: An unsatisfied condition will not allow user access via the current
+ # binding. Different bindings, including their conditions, are examined
+ # independently.
+ #
+ # title: "User account presence"
+ # description: "Determines whether the request has a user account"
+ # expression: "size(request.user) > 0"
+ "description": "A String", # An optional description of the expression. This is a longer text which
+            # describes the expression, e.g. when it is hovered over in a UI.
+ "expression": "A String", # Textual representation of an expression in
+ # Common Expression Language syntax.
+ #
+ # The application context of the containing message determines which
+ # well-known feature set of CEL is supported.
+ "location": "A String", # An optional string indicating the location of the expression for error
+ # reporting, e.g. a file name and a position in the file.
+ "title": "A String", # An optional title for the expression, i.e. a short string describing
+            # its purpose. This can be used e.g. in UIs which allow entering the
+ # expression.
+ },
+ },
+ ],
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a policy from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform policy updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `getIamPolicy`, and
+ # systems are expected to put that etag in the request to `setIamPolicy` to
+ # ensure that their change will be applied to the same version of the policy.
+ #
+ # If no `etag` is provided in the call to `setIamPolicy`, then the existing
+ # policy is overwritten blindly.
+ "version": 42, # Deprecated.
+ "auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
+ { # Specifies the audit configuration for a service.
+ # The configuration determines which permission types are logged, and what
+ # identities, if any, are exempted from logging.
+ # An AuditConfig must have one or more AuditLogConfigs.
+ #
+ # If there are AuditConfigs for both `allServices` and a specific service,
+ # the union of the two AuditConfigs is used for that service: the log_types
+ # specified in each AuditConfig are enabled, and the exempted_members in each
+ # AuditLogConfig are exempted.
+ #
+ # Example Policy with multiple AuditConfigs:
+ #
+ # {
+ # "audit_configs": [
+ # {
+ # "service": "allServices"
+ # "audit_log_configs": [
+ # {
+ # "log_type": "DATA_READ",
+ # "exempted_members": [
+ # "user:foo@gmail.com"
+ # ]
+ # },
+ # {
+ # "log_type": "DATA_WRITE",
+ # },
+ # {
+ # "log_type": "ADMIN_READ",
+ # }
+ # ]
+ # },
+ # {
+ # "service": "fooservice.googleapis.com"
+ # "audit_log_configs": [
+ # {
+ # "log_type": "DATA_READ",
+ # },
+ # {
+ # "log_type": "DATA_WRITE",
+ # "exempted_members": [
+ # "user:bar@gmail.com"
+ # ]
+ # }
+ # ]
+ # }
+ # ]
+ # }
+ #
+ # For fooservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
+ # logging. It also exempts foo@gmail.com from DATA_READ logging, and
+ # bar@gmail.com from DATA_WRITE logging.
+ "auditLogConfigs": [ # The configuration for logging of each type of permission.
+ { # Provides the configuration for logging a type of permissions.
+ # Example:
+ #
+ # {
+ # "audit_log_configs": [
+ # {
+ # "log_type": "DATA_READ",
+ # "exempted_members": [
+ # "user:foo@gmail.com"
+ # ]
+ # },
+ # {
+ # "log_type": "DATA_WRITE",
+ # }
+ # ]
+ # }
+ #
+ # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
+ # foo@gmail.com from DATA_READ logging.
+ "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
+ # permission.
+          # Follows the same format as Binding.members.
+ "A String",
+ ],
+ "logType": "A String", # The log type that this config enables.
+ },
+ ],
+ "service": "A String", # Specifies a service that will be enabled for audit logging.
+ # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
+ # `allServices` is a special value that covers all services.
+ },
+ ],
+ },
+ "updateMask": "A String", # OPTIONAL: A FieldMask specifying which fields of the policy to modify. Only
+ # the fields in the mask will be modified. If no mask is provided, the
+ # following default mask is used:
+ # paths: "bindings, etag"
+ # This field is only used by Cloud IAM.
+ }
+
+ x__xgafv: string, V1 error format.
+ Allowed values
+ 1 - v1 error format
+ 2 - v2 error format
+
+Returns:
+ An object of the form:
+
+ { # Defines an Identity and Access Management (IAM) policy. It is used to
+ # specify access control policies for Cloud Platform resources.
+ #
+ #
+ # A `Policy` consists of a list of `bindings`. A `binding` binds a list of
+ # `members` to a `role`, where the members can be user accounts, Google groups,
+ # Google domains, and service accounts. A `role` is a named list of permissions
+ # defined by IAM.
+ #
+ # **JSON Example**
+ #
+ # {
+ # "bindings": [
+ # {
+ # "role": "roles/owner",
+ # "members": [
+ # "user:mike@example.com",
+ # "group:admins@example.com",
+ # "domain:google.com",
+ # "serviceAccount:my-other-app@appspot.gserviceaccount.com"
+ # ]
+ # },
+ # {
+ # "role": "roles/viewer",
+ # "members": ["user:sean@example.com"]
+ # }
+ # ]
+ # }
+ #
+ # **YAML Example**
+ #
+ # bindings:
+ # - members:
+ # - user:mike@example.com
+ # - group:admins@example.com
+ # - domain:google.com
+ # - serviceAccount:my-other-app@appspot.gserviceaccount.com
+ # role: roles/owner
+ # - members:
+ # - user:sean@example.com
+ # role: roles/viewer
+ #
+ #
+ # For a description of IAM and its features, see the
+ # [IAM developer's guide](https://cloud.google.com/iam/docs).
+    "bindings": [ # Associates a list of `members` with a `role`.
+ # `bindings` with no members will result in an error.
+ { # Associates `members` with a `role`.
+ "role": "A String", # Role that is assigned to `members`.
+ # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
+ "members": [ # Specifies the identities requesting access for a Cloud Platform resource.
+ # `members` can have the following values:
+ #
+ # * `allUsers`: A special identifier that represents anyone who is
+ # on the internet; with or without a Google account.
+ #
+ # * `allAuthenticatedUsers`: A special identifier that represents anyone
+ # who is authenticated with a Google account or a service account.
+ #
+ # * `user:{emailid}`: An email address that represents a specific Google
+ # account. For example, `alice@gmail.com` .
+ #
+ #
+ # * `serviceAccount:{emailid}`: An email address that represents a service
+ # account. For example, `my-other-app@appspot.gserviceaccount.com`.
+ #
+ # * `group:{emailid}`: An email address that represents a Google group.
+ # For example, `admins@example.com`.
+ #
+ #
+ # * `domain:{domain}`: The G Suite domain (primary) that represents all the
+ # users of that domain. For example, `google.com` or `example.com`.
+ #
+ "A String",
+ ],
+ "condition": { # Represents an expression text. Example: # The condition that is associated with this binding.
+        # NOTE: An unsatisfied condition will not allow user access via the current
+ # binding. Different bindings, including their conditions, are examined
+ # independently.
+ #
+ # title: "User account presence"
+ # description: "Determines whether the request has a user account"
+ # expression: "size(request.user) > 0"
+ "description": "A String", # An optional description of the expression. This is a longer text which
+            # describes the expression, e.g. when it is hovered over in a UI.
+ "expression": "A String", # Textual representation of an expression in
+ # Common Expression Language syntax.
+ #
+ # The application context of the containing message determines which
+ # well-known feature set of CEL is supported.
+ "location": "A String", # An optional string indicating the location of the expression for error
+ # reporting, e.g. a file name and a position in the file.
+ "title": "A String", # An optional title for the expression, i.e. a short string describing
+            # its purpose. This can be used e.g. in UIs which allow entering the
+ # expression.
+ },
+ },
+ ],
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a policy from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform policy updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `getIamPolicy`, and
+ # systems are expected to put that etag in the request to `setIamPolicy` to
+ # ensure that their change will be applied to the same version of the policy.
+ #
+ # If no `etag` is provided in the call to `setIamPolicy`, then the existing
+ # policy is overwritten blindly.
+ "version": 42, # Deprecated.
+ "auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
+ { # Specifies the audit configuration for a service.
+ # The configuration determines which permission types are logged, and what
+ # identities, if any, are exempted from logging.
+ # An AuditConfig must have one or more AuditLogConfigs.
+ #
+ # If there are AuditConfigs for both `allServices` and a specific service,
+ # the union of the two AuditConfigs is used for that service: the log_types
+ # specified in each AuditConfig are enabled, and the exempted_members in each
+ # AuditLogConfig are exempted.
+ #
+ # Example Policy with multiple AuditConfigs:
+ #
+ # {
+ # "audit_configs": [
+ # {
+ # "service": "allServices"
+ # "audit_log_configs": [
+ # {
+ # "log_type": "DATA_READ",
+ # "exempted_members": [
+ # "user:foo@gmail.com"
+ # ]
+ # },
+ # {
+ # "log_type": "DATA_WRITE",
+ # },
+ # {
+ # "log_type": "ADMIN_READ",
+ # }
+ # ]
+ # },
+ # {
+ # "service": "fooservice.googleapis.com"
+ # "audit_log_configs": [
+ # {
+ # "log_type": "DATA_READ",
+ # },
+ # {
+ # "log_type": "DATA_WRITE",
+ # "exempted_members": [
+ # "user:bar@gmail.com"
+ # ]
+ # }
+ # ]
+ # }
+ # ]
+ # }
+ #
+ # For fooservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
+ # logging. It also exempts foo@gmail.com from DATA_READ logging, and
+ # bar@gmail.com from DATA_WRITE logging.
+ "auditLogConfigs": [ # The configuration for logging of each type of permission.
+ { # Provides the configuration for logging a type of permissions.
+ # Example:
+ #
+ # {
+ # "audit_log_configs": [
+ # {
+ # "log_type": "DATA_READ",
+ # "exempted_members": [
+ # "user:foo@gmail.com"
+ # ]
+ # },
+ # {
+ # "log_type": "DATA_WRITE",
+ # }
+ # ]
+ # }
+ #
+ # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
+ # foo@gmail.com from DATA_READ logging.
+ "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
+ # permission.
+          # Follows the same format as Binding.members.
+ "A String",
+ ],
+ "logType": "A String", # The log type that this config enables.
+ },
+ ],
+ "service": "A String", # Specifies a service that will be enabled for audit logging.
+ # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
+ # `allServices` is a special value that covers all services.
+ },
+ ],
+ }</pre>
+</div>
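The `etag` field described above exists to support a read-modify-write cycle: fetch the policy with getIamPolicy, modify the bindings locally, and send the whole policy back with setIamPolicy. The sketch below shows one local-modification step, with the role and member values purely illustrative; the actual API calls are shown in comments since they need an authorized client.

```python
def add_member(policy, role, member):
    """Adds `member` to the binding for `role`, creating the binding if
    it does not exist yet.

    The policy's `etag` is deliberately left untouched so that a
    subsequent setIamPolicy call can detect concurrent modifications.
    """
    for binding in policy.setdefault('bindings', []):
        if binding['role'] == role:
            if member not in binding['members']:
                binding['members'].append(member)
            return policy
    policy['bindings'].append({'role': role, 'members': [member]})
    return policy

# With an authorized client:
#   policy = service.projects().models().getIamPolicy(
#       resource=resource).execute()
#   add_member(policy, 'roles/viewer', 'user:sean@example.com')
#   service.projects().models().setIamPolicy(
#       resource=resource, body={'policy': policy}).execute()
```

If the policy changed on the server between the get and the set, the stale `etag` causes the set to fail rather than silently overwriting the other update.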
+
+<div class="method">
+ <code class="details" id="testIamPermissions">testIamPermissions(resource, body, x__xgafv=None)</code>
+ <pre>Returns permissions that a caller has on the specified resource.
+If the resource does not exist, this will return an empty set of
+permissions, not a NOT_FOUND error.
+
+Note: This operation is designed to be used for building permission-aware
+UIs and command-line tools, not for authorization checking. This operation
+may "fail open" without warning.
+
+Args:
+ resource: string, REQUIRED: The resource for which the policy detail is being requested.
+See the operation documentation for the appropriate value for this field. (required)
+ body: object, The request body. (required)
+ The object takes the form of:
+
+{ # Request message for `TestIamPermissions` method.
+ "permissions": [ # The set of permissions to check for the `resource`. Permissions with
+ # wildcards (such as '*' or 'storage.*') are not allowed. For more
+ # information see
+ # [IAM Overview](https://cloud.google.com/iam/docs/overview#permissions).
+ "A String",
+ ],
+ }
+
+ x__xgafv: string, V1 error format.
+ Allowed values
+ 1 - v1 error format
+ 2 - v2 error format
+
+Returns:
+ An object of the form:
+
+ { # Response message for `TestIamPermissions` method.
+ "permissions": [ # A subset of `TestPermissionsRequest.permissions` that the caller is
+ # allowed.
+ "A String",
+ ],
+ }</pre>
+</div>
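Since testIamPermissions is intended for permission-aware UIs rather than authorization checks, a typical pattern is to request a set of permissions and diff it against the subset the response reports as granted. A minimal sketch, with placeholder permission strings and the client call left in comments:

```python
def missing_permissions(requested, granted):
    """Returns the requested permissions the caller does NOT hold,
    preserving the order in which they were requested."""
    granted = set(granted)
    return [p for p in requested if p not in granted]

wanted = ['ml.models.get', 'ml.models.update']
# With an authorized client:
#   response = service.projects().models().testIamPermissions(
#       resource=resource, body={'permissions': wanted}).execute()
#   granted = response.get('permissions', [])
granted = ['ml.models.get']  # stand-in for response['permissions']
# A UI could now disable the controls tied to the missing permissions.
```

Note that `permissions` may be absent from the response entirely when nothing is granted, hence the `response.get(...)` with a default in the commented call.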
+
</body></html>
\ No newline at end of file