docs: docs update (#911)
Thank you for opening a Pull Request! Before submitting your PR, there are a few things you can do to make sure it goes smoothly:
- [ ] Make sure to open an issue as a [bug/issue](https://github.com/googleapis/google-api-python-client/issues/new/choose) before writing your code! That way we can discuss the change, evaluate designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)
Fixes #<issue_number_goes_here> 🦕
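The regenerated docs below describe, among other things, the custom Predictor interface that AI Platform expects for custom prediction routines. As a minimal sketch of what a class satisfying that documented contract looks like: the pickle artifact name (`params.pkl`) and the `DoublingPredictor` class are hypothetical, chosen only for illustration; a real predictor would load whatever model artifacts were exported alongside it.

```python
import os
import pickle


class Predictor(object):
    """Interface for constructing custom predictors (as shown in the docs below)."""

    def predict(self, instances, **kwargs):
        raise NotImplementedError()

    @classmethod
    def from_path(cls, model_dir):
        raise NotImplementedError()


class DoublingPredictor(Predictor):
    """Toy predictor that multiplies each numeric instance by a stored factor."""

    def __init__(self, factor):
        self._factor = factor

    def predict(self, instances, **kwargs):
        # Instances arrive already deserialized from JSON; the return value
        # must be a JSON-serializable list of outputs.
        return [x * self._factor for x in instances]

    @classmethod
    def from_path(cls, model_dir):
        # Loading should happen here, once at deployment time, not on every
        # predict() call. "params.pkl" is a hypothetical artifact name.
        with open(os.path.join(model_dir, "params.pkl"), "rb") as f:
            factor = pickle.load(f)
        return cls(factor)


predictor = DoublingPredictor(2)
print(predictor.predict([1, 2, 3]))  # → [2, 4, 6]
```

Note that `from_path` is the hook the prediction service calls with the directory containing the deployed model files, so expensive loading belongs there rather than in `predict`.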
diff --git a/docs/dyn/ml_v1.projects.models.versions.html b/docs/dyn/ml_v1.projects.models.versions.html
index d948048..9cc9ec9 100644
--- a/docs/dyn/ml_v1.projects.models.versions.html
+++ b/docs/dyn/ml_v1.projects.models.versions.html
@@ -84,7 +84,7 @@
<code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets information about a model version.</p>
<p class="toc_element">
- <code><a href="#list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</a></code></p>
+ <code><a href="#list">list(parent, pageToken=None, pageSize=None, filter=None, x__xgafv=None)</a></code></p>
<p class="firstline">Gets basic information about all the versions of a model.</p>
<p class="toc_element">
<code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
@@ -118,25 +118,37 @@
# prediction requests. A model can have multiple versions. You can get
# information about all of the versions of a given model by calling
# projects.models.versions.list.
- "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
- # Only specify this field if you have specified a Compute Engine (N1) machine
- # type in the `machineType` field. Learn more about [using GPUs for online
- # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
- # Note that the AcceleratorConfig can be used in both Jobs and Versions.
- # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
- # [accelerators for online
- # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
- "count": "A String", # The number of accelerators to attach to each machine running the job.
- "type": "A String", # The type of accelerator to use.
+ "state": "A String", # Output only. The state of a version.
+ "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+ # model. You should generally use `auto_scaling` with an appropriate
+ # `min_nodes` instead, but this option is available if you want more
+ # predictable billing. Beware that latency and error rates will increase
+ # if the traffic exceeds that capability of the system to serve it based
+ # on the selected number of nodes.
+ "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
+ # starting from the time the model is deployed, so the cost of operating
+ # this model will be proportional to `nodes` * number of hours since
+ # last billing cycle plus the cost for each prediction performed.
},
- "labels": { # Optional. One or more labels that you can add, to organize your model
- # versions. Each label is a key-value pair, where both the key and the value
- # are arbitrary strings that you supply.
- # For more information, see the documentation on
- # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
- "a_key": "A String",
- },
- "predictionClass": "A String", # Optional. The fully qualified name
+ "name": "A String", # Required. The name specified for the version when it was created.
+ #
+ # The version name must be unique within the model it is created in.
+ "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
+ "pythonVersion": "A String", # Required. The version of Python used in prediction.
+ #
+ # The following Python versions are available:
+ #
+ # * Python '3.7' is available when `runtime_version` is set to '1.15' or
+ # later.
+ # * Python '3.5' is available when `runtime_version` is set to a version
+ # from '1.4' to '1.14'.
+ # * Python '2.7' is available when `runtime_version` is set to '1.15' or
+ # earlier.
+ #
+ # Read more about the Python versions available for [each runtime
+ # version](/ml-engine/docs/runtime-version-list).
+ "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
+ "predictionClass": "A String", # Optional. The fully qualified name
# (<var>module_name</var>.<var>class_name</var>) of a class that implements
# the Predictor interface described in this reference field. The module
# containing this class should be included in a package provided to the
@@ -151,12 +163,12 @@
#
# The following code sample provides the Predictor interface:
#
- # <pre style="max-width: 626px;">
+ # <pre style="max-width: 626px;">
# class Predictor(object):
- # """Interface for constructing custom predictors."""
+ # """Interface for constructing custom predictors."""
#
# def predict(self, instances, **kwargs):
- # """Performs custom prediction.
+ # """Performs custom prediction.
#
# Instances are the decoded values from the request. They have already
# been deserialized from JSON.
@@ -169,12 +181,12 @@
# Returns:
# A list of outputs containing the prediction results. This list must
# be JSON serializable.
- # """
+ # """
# raise NotImplementedError()
#
# @classmethod
# def from_path(cls, model_dir):
- # """Creates an instance of Predictor using the given path.
+ # """Creates an instance of Predictor using the given path.
#
# Loading of the predictor should be done in this method.
#
@@ -185,15 +197,13 @@
#
# Returns:
# An instance implementing this Predictor class.
- # """
+ # """
# raise NotImplementedError()
# </pre>
#
# Learn more about [the Predictor interface and custom prediction
# routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
- "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
- "state": "A String", # Output only. The state of a version.
- "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+ "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
# prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
# or [scikit-learn pipelines with custom
# code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
@@ -207,17 +217,45 @@
#
# If you specify this field, you must also set
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
- "A String",
+ "A String",
],
- "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
- # prevent simultaneous updates of a model from overwriting each other.
- # It is strongly suggested that systems make use of the `etag` in the
- # read-modify-write cycle to perform model updates in order to avoid race
- # conditions: An `etag` is returned in the response to `GetVersion`, and
- # systems are expected to put that etag in the request to `UpdateVersion` to
- # ensure that their change will be applied to the model as intended.
- "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
- "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
+ "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
+ # Some explanation features require additional metadata to be loaded
+ # as part of the model payload.
+ # There are two feature attribution methods supported for TensorFlow models:
+ # integrated gradients and sampled Shapley.
+ # [Learn more about feature
+ # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
+ "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
+ # of the model's fully differentiable structure. Refer to this paper for
+ # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
+ # of the model's fully differentiable structure. Refer to this paper for
+ # more details: https://arxiv.org/abs/1703.01365
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+ # A good value to start is 50 and gradually increase until the
+ # sum to diff property is met within the desired error range.
+ },
+ "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
+ # contribute to the label being predicted. A sampling strategy is used to
+ # approximate the value rather than considering all subsets of features.
+ # contribute to the label being predicted. A sampling strategy is used to
+ # approximate the value rather than considering all subsets of features.
+ "numPaths": 42, # The number of feature permutations to consider when approximating the
+ # Shapley values.
+ },
+ "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
+ # of the model's fully differentiable structure. Refer to this paper for
+ # more details: https://arxiv.org/abs/1906.02825
+ # Currently only implemented for models with natural image inputs.
+ # of the model's fully differentiable structure. Refer to this paper for
+ # more details: https://arxiv.org/abs/1906.02825
+ # Currently only implemented for models with natural image inputs.
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+ # A good value to start is 50 and gradually increase until the
+ # sum to diff property is met within the desired error range.
+ },
+ },
+ "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
# create the version. See the
# [guide to model
# deployment](/ml-engine/docs/tensorflow/deploying-models) for more
@@ -228,120 +266,16 @@
# the model service uses the specified location as the source of the model.
# Once deployed, the model version is hosted by the prediction service, so
# this location is useful only as a historical record.
- # The total number of model files can't exceed 1000.
- "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
- # Some explanation features require additional metadata to be loaded
- # as part of the model payload.
- # There are two feature attribution methods supported for TensorFlow models:
- # integrated gradients and sampled Shapley.
- # [Learn more about feature
- # attributions.](/ml-engine/docs/ai-explanations/overview)
- "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: https://arxiv.org/abs/1906.02825
- # Currently only implemented for models with natural image inputs.
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: https://arxiv.org/abs/1906.02825
- # Currently only implemented for models with natural image inputs.
- "numIntegralSteps": 42, # Number of steps for approximating the path integral.
- # A good value to start is 50 and gradually increase until the
- # sum to diff property is met within the desired error range.
- },
- "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
- # contribute to the label being predicted. A sampling strategy is used to
- # approximate the value rather than considering all subsets of features.
- # contribute to the label being predicted. A sampling strategy is used to
- # approximate the value rather than considering all subsets of features.
- "numPaths": 42, # The number of feature permutations to consider when approximating the
- # Shapley values.
- },
- "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
- "numIntegralSteps": 42, # Number of steps for approximating the path integral.
- # A good value to start is 50 and gradually increase until the
- # sum to diff property is met within the desired error range.
- },
- },
- "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
- # requests that do not specify a version.
- #
- # You can change the default version by calling
- # projects.methods.versions.setDefault.
- "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
- # applies to online prediction service. If this field is not specified, it
- # defaults to `mls1-c1-m2`.
- #
- # Online prediction supports the following machine types:
- #
- # * `mls1-c1-m2`
- # * `mls1-c4-m2`
- # * `n1-standard-2`
- # * `n1-standard-4`
- # * `n1-standard-8`
- # * `n1-standard-16`
- # * `n1-standard-32`
- # * `n1-highmem-2`
- # * `n1-highmem-4`
- # * `n1-highmem-8`
- # * `n1-highmem-16`
- # * `n1-highmem-32`
- # * `n1-highcpu-2`
- # * `n1-highcpu-4`
- # * `n1-highcpu-8`
- # * `n1-highcpu-16`
- # * `n1-highcpu-32`
- #
- # `mls1-c1-m2` is generally available. All other machine types are available
- # in beta. Learn more about the [differences between machine
- # types](/ml-engine/docs/machine-types-online-prediction).
- "description": "A String", # Optional. The description specified for the version when it was created.
- "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
- #
- # For more information, see the
- # [runtime version list](/ml-engine/docs/runtime-version-list) and
- # [how to manage runtime versions](/ml-engine/docs/versioning).
- "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
- # model. You should generally use `auto_scaling` with an appropriate
- # `min_nodes` instead, but this option is available if you want more
- # predictable billing. Beware that latency and error rates will increase
- # if the traffic exceeds that capability of the system to serve it based
- # on the selected number of nodes.
- "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
- # starting from the time the model is deployed, so the cost of operating
- # this model will be proportional to `nodes` * number of hours since
- # last billing cycle plus the cost for each prediction performed.
- },
- "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
- "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
- # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
- # `XGBOOST`. If you do not specify a framework, AI Platform
- # will analyze files in the deployment_uri to determine a framework. If you
- # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
- # of the model to 1.4 or greater.
- #
- # Do **not** specify a framework if you're deploying a [custom
- # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
- #
- # If you specify a [Compute Engine (N1) machine
- # type](/ml-engine/docs/machine-types-online-prediction) in the
- # `machineType` field, you must specify `TENSORFLOW`
- # for the framework.
- "createTime": "A String", # Output only. The time the version was created.
- "name": "A String", # Required. The name specified for the version when it was created.
- #
- # The version name must be unique within the model it is created in.
- "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+ # The total number of model files can't exceed 1000.
+ "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
# response to increases and decreases in traffic. Care should be
- # taken to ramp up traffic according to the model's ability to scale
+ # taken to ramp up traffic according to the model's ability to scale
# or you will start seeing increases in latency and 429 response codes.
#
# Note that you cannot use AutoScaling if your version uses
# [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use specify
# `manual_scaling`.
- "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+ "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
# nodes are always up, starting from the time the model is deployed.
# Therefore, the cost of operating this model will be at least
# `rate` * `min_nodes` * number of hours since last billing cycle,
@@ -376,32 +310,27 @@
# <pre>
# update_body.json:
# {
- # 'autoScaling': {
- # 'minNodes': 5
+ # 'autoScaling': {
+ # 'minNodes': 5
# }
# }
# </pre>
# HTTP request:
- # <pre style="max-width: 626px;">
+ # <pre style="max-width: 626px;">
# PATCH
# https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
# -d @./update_body.json
# </pre>
},
- "pythonVersion": "A String", # Required. The version of Python used in prediction.
- #
- # The following Python versions are available:
- #
- # * Python '3.7' is available when `runtime_version` is set to '1.15' or
- # later.
- # * Python '3.5' is available when `runtime_version` is set to a version
- # from '1.4' to '1.14'.
- # * Python '2.7' is available when `runtime_version` is set to '1.15' or
- # earlier.
- #
- # Read more about the Python versions available for [each runtime
- # version](/ml-engine/docs/runtime-version-list).
- "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+ "labels": { # Optional. One or more labels that you can add, to organize your model
+ # versions. Each label is a key-value pair, where both the key and the value
+ # are arbitrary strings that you supply.
+ # For more information, see the documentation on
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ "a_key": "A String",
+ },
+ "createTime": "A String", # Output only. The time the version was created.
+ "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
# projects.models.versions.patch
# request. Specifying it in a
# projects.models.versions.create
@@ -420,19 +349,16 @@
# evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
# specify this configuration manually. Setting up continuous evaluation
# automatically enables logging of request-response pairs.
- "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
- # For example, if you want to log 10% of requests, enter `0.1`. The sampling
- # window is the lifetime of the model version. Defaults to 0.
- "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
- # "<var>project_id</var>.<var>dataset_name</var>.<var>table_name</var>"
+ "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
+ # "<var>project_id</var>.<var>dataset_name</var>.<var>table_name</var>"
#
- # The specified table must already exist, and the "Cloud ML Service Agent"
+ # The specified table must already exist, and the "Cloud ML Service Agent"
# for your project must have permission to write to it. The table must have
# the following [schema](/bigquery/docs/schemas):
#
# <table>
- # <tr><th>Field name</th><th style="display: table-cell">Type</th>
- # <th style="display: table-cell">Mode</th></tr>
+ # <tr><th>Field name</th><th style="display: table-cell">Type</th>
+ # <th style="display: table-cell">Mode</th></tr>
# <tr><td>model</td><td>STRING</td><td>REQUIRED</td></tr>
# <tr><td>model_version</td><td>STRING</td><td>REQUIRED</td></tr>
# <tr><td>time</td><td>TIMESTAMP</td><td>REQUIRED</td></tr>
@@ -440,6 +366,80 @@
# <tr><td>raw_prediction</td><td>STRING</td><td>NULLABLE</td></tr>
# <tr><td>groundtruth</td><td>STRING</td><td>NULLABLE</td></tr>
# </table>
+ "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+ # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+ # window is the lifetime of the model version. Defaults to 0.
+ },
+ "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+ "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
+ # applies to online prediction service. If this field is not specified, it
+ # defaults to `mls1-c1-m2`.
+ #
+ # Online prediction supports the following machine types:
+ #
+ # * `mls1-c1-m2`
+ # * `mls1-c4-m2`
+ # * `n1-standard-2`
+ # * `n1-standard-4`
+ # * `n1-standard-8`
+ # * `n1-standard-16`
+ # * `n1-standard-32`
+ # * `n1-highmem-2`
+ # * `n1-highmem-4`
+ # * `n1-highmem-8`
+ # * `n1-highmem-16`
+ # * `n1-highmem-32`
+ # * `n1-highcpu-2`
+ # * `n1-highcpu-4`
+ # * `n1-highcpu-8`
+ # * `n1-highcpu-16`
+ # * `n1-highcpu-32`
+ #
+ # `mls1-c1-m2` is generally available. All other machine types are available
+ # in beta. Learn more about the [differences between machine
+ # types](/ml-engine/docs/machine-types-online-prediction).
+ "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
+ #
+ # For more information, see the
+ # [runtime version list](/ml-engine/docs/runtime-version-list) and
+ # [how to manage runtime versions](/ml-engine/docs/versioning).
+ "description": "A String", # Optional. The description specified for the version when it was created.
+ "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
+ # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+ # `XGBOOST`. If you do not specify a framework, AI Platform
+ # will analyze files in the deployment_uri to determine a framework. If you
+ # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+ # of the model to 1.4 or greater.
+ #
+ # Do **not** specify a framework if you're deploying a [custom
+ # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ #
+ # If you specify a [Compute Engine (N1) machine
+ # type](/ml-engine/docs/machine-types-online-prediction) in the
+ # `machineType` field, you must specify `TENSORFLOW`
+ # for the framework.
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a model from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform model updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `GetVersion`, and
+ # systems are expected to put that etag in the request to `UpdateVersion` to
+ # ensure that their change will be applied to the model as intended.
+ "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
+ # requests that do not specify a version.
+ #
+ # You can change the default version by calling
+ # projects.methods.versions.setDefault.
+ "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+ # Only specify this field if you have specified a Compute Engine (N1) machine
+ # type in the `machineType` field. Learn more about [using GPUs for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+ # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+ # [accelerators for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ "count": "A String", # The number of accelerators to attach to each machine running the job.
+ "type": "A String", # The type of accelerator to use.
},
}
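The `autoScaling.minNodes` patch example embedded in the docs above (an `update_body.json` plus a `PATCH` with `update_mask=autoScaling.minNodes`) can be sketched as follows. This is only an illustration of building that request body; the endpoint URL is the one shown in the generated docs, and actually sending the request would additionally require authenticated credentials.

```python
import json

# Mirror of the update_body.json shown in the docs: with an update_mask,
# the body carries only the field being changed.
update_body = {
    "autoScaling": {
        "minNodes": 5,
    }
}

# Serialized payload for:
# PATCH https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
payload = json.dumps(update_body)
print(payload)
```

Fields outside the `update_mask` are left untouched by the service, which is why the body above omits the rest of the Version resource.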
@@ -453,34 +453,7 @@
{ # This resource represents a long-running operation that is the result of a
# network API call.
- "metadata": { # Service-specific metadata associated with the operation. It typically
- # contains progress information and common metadata such as create time.
- # Some services might not provide such metadata. Any method that returns a
- # long-running operation should document the metadata type, if any.
- "a_key": "", # Properties of the object. Contains field @type with type URL.
- },
- "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
- # different programming environments, including REST APIs and RPC APIs. It is
- # used by [gRPC](https://github.com/grpc). Each `Status` message contains
- # three pieces of data: error code, error message, and error details.
- #
- # You can find out more about this error model and how to work with it in the
- # [API Design Guide](https://cloud.google.com/apis/design/errors).
- "message": "A String", # A developer-facing error message, which should be in English. Any
- # user-facing error message should be localized and sent in the
- # google.rpc.Status.details field, or localized by the client.
- "code": 42, # The status code, which should be an enum value of google.rpc.Code.
- "details": [ # A list of messages that carry the error details. There is a common set of
- # message types for APIs to use.
- {
- "a_key": "", # Properties of the object. Contains field @type with type URL.
- },
- ],
- },
- "done": True or False, # If the value is `false`, it means the operation is still in progress.
- # If `true`, the operation is completed, and either `error` or `response` is
- # available.
- "response": { # The normal response of the operation in case of success. If the original
+ "response": { # The normal response of the operation in case of success. If the original
# method returns no data on success, such as `Delete`, the response is
# `google.protobuf.Empty`. If the original method is standard
# `Get`/`Create`/`Update`, the response should be the resource. For other
@@ -488,11 +461,38 @@
# is the original method name. For example, if the original method name
# is `TakeSnapshot()`, the inferred response type is
# `TakeSnapshotResponse`.
- "a_key": "", # Properties of the object. Contains field @type with type URL.
+ "a_key": "", # Properties of the object. Contains field @type with type URL.
},
- "name": "A String", # The server-assigned name, which is only unique within the same service that
+ "name": "A String", # The server-assigned name, which is only unique within the same service that
# originally returns it. If you use the default HTTP mapping, the
# `name` should be a resource name ending with `operations/{unique_id}`.
+ "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). Each `Status` message contains
+ # three pieces of data: error code, error message, and error details.
+ #
+ # You can find out more about this error model and how to work with it in the
+ # [API Design Guide](https://cloud.google.com/apis/design/errors).
+ "details": [ # A list of messages that carry the error details. There is a common set of
+ # message types for APIs to use.
+ {
+ "a_key": "", # Properties of the object. Contains field @type with type URL.
+ },
+ ],
+ "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+ "message": "A String", # A developer-facing error message, which should be in English. Any
+ # user-facing error message should be localized and sent in the
+ # google.rpc.Status.details field, or localized by the client.
+ },
+ "metadata": { # Service-specific metadata associated with the operation. It typically
+ # contains progress information and common metadata such as create time.
+ # Some services might not provide such metadata. Any method that returns a
+ # long-running operation should document the metadata type, if any.
+ "a_key": "", # Properties of the object. Contains field @type with type URL.
+ },
+ "done": True or False, # If the value is `false`, it means the operation is still in progress.
+ # If `true`, the operation is completed, and either `error` or `response` is
+ # available.
}</pre>
</div>
@@ -520,34 +520,7 @@
{ # This resource represents a long-running operation that is the result of a
# network API call.
- "metadata": { # Service-specific metadata associated with the operation. It typically
- # contains progress information and common metadata such as create time.
- # Some services might not provide such metadata. Any method that returns a
- # long-running operation should document the metadata type, if any.
- "a_key": "", # Properties of the object. Contains field @type with type URL.
- },
- "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
- # different programming environments, including REST APIs and RPC APIs. It is
- # used by [gRPC](https://github.com/grpc). Each `Status` message contains
- # three pieces of data: error code, error message, and error details.
- #
- # You can find out more about this error model and how to work with it in the
- # [API Design Guide](https://cloud.google.com/apis/design/errors).
- "message": "A String", # A developer-facing error message, which should be in English. Any
- # user-facing error message should be localized and sent in the
- # google.rpc.Status.details field, or localized by the client.
- "code": 42, # The status code, which should be an enum value of google.rpc.Code.
- "details": [ # A list of messages that carry the error details. There is a common set of
- # message types for APIs to use.
- {
- "a_key": "", # Properties of the object. Contains field @type with type URL.
- },
- ],
- },
- "done": True or False, # If the value is `false`, it means the operation is still in progress.
- # If `true`, the operation is completed, and either `error` or `response` is
- # available.
- "response": { # The normal response of the operation in case of success. If the original
+ "response": { # The normal response of the operation in case of success. If the original
# method returns no data on success, such as `Delete`, the response is
# `google.protobuf.Empty`. If the original method is standard
# `Get`/`Create`/`Update`, the response should be the resource. For other
@@ -555,11 +528,38 @@
# is the original method name. For example, if the original method name
# is `TakeSnapshot()`, the inferred response type is
# `TakeSnapshotResponse`.
- "a_key": "", # Properties of the object. Contains field @type with type URL.
+ "a_key": "", # Properties of the object. Contains field @type with type URL.
},
- "name": "A String", # The server-assigned name, which is only unique within the same service that
+ "name": "A String", # The server-assigned name, which is only unique within the same service that
# originally returns it. If you use the default HTTP mapping, the
# `name` should be a resource name ending with `operations/{unique_id}`.
+ "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). Each `Status` message contains
+ # three pieces of data: error code, error message, and error details.
+ #
+ # You can find out more about this error model and how to work with it in the
+ # [API Design Guide](https://cloud.google.com/apis/design/errors).
+ "details": [ # A list of messages that carry the error details. There is a common set of
+ # message types for APIs to use.
+ {
+ "a_key": "", # Properties of the object. Contains field @type with type URL.
+ },
+ ],
+ "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+ "message": "A String", # A developer-facing error message, which should be in English. Any
+ # user-facing error message should be localized and sent in the
+ # google.rpc.Status.details field, or localized by the client.
+ },
+ "metadata": { # Service-specific metadata associated with the operation. It typically
+ # contains progress information and common metadata such as create time.
+ # Some services might not provide such metadata. Any method that returns a
+ # long-running operation should document the metadata type, if any.
+ "a_key": "", # Properties of the object. Contains field @type with type URL.
+ },
+ "done": True or False, # If the value is `false`, it means the operation is still in progress.
+ # If `true`, the operation is completed, and either `error` or `response` is
+ # available.
}</pre>
</div>
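The Operation schema above (`done`, `error`, `response`, `metadata`) is what version create and delete calls return. A minimal sketch of interpreting such a dict client-side; `operation_outcome` and the sample dicts are illustrative, not part of the client library:

```python
# Minimal sketch: classifying a long-running Operation dict of the shape
# documented above. `operation_outcome` is an illustrative helper, not part
# of the google-api-python-client surface; in practice you would re-fetch
# the Operation via projects().operations().get(name=...) until `done` is True.

def operation_outcome(op):
    """Return ('pending'|'error'|'ok', payload) for an Operation dict."""
    if not op.get("done"):
        # Operation still in progress; neither `error` nor `response` is set.
        return "pending", None
    if "error" in op:
        # google.rpc.Status: `code` should be a google.rpc.Code enum value.
        err = op["error"]
        return "error", (err.get("code"), err.get("message"))
    return "ok", op.get("response")

# Sample dicts shaped like the schema above (all values made up):
running = {"name": "projects/p/operations/op1", "done": False}
failed = {"name": "projects/p/operations/op2", "done": True,
          "error": {"code": 3, "message": "Version name is invalid", "details": []}}

print(operation_outcome(running)[0])  # pending
```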
@@ -588,25 +588,37 @@
# prediction requests. A model can have multiple versions. You can get
# information about all of the versions of a given model by calling
# projects.models.versions.list.
- "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
- # Only specify this field if you have specified a Compute Engine (N1) machine
- # type in the `machineType` field. Learn more about [using GPUs for online
- # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
- # Note that the AcceleratorConfig can be used in both Jobs and Versions.
- # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
- # [accelerators for online
- # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
- "count": "A String", # The number of accelerators to attach to each machine running the job.
- "type": "A String", # The type of accelerator to use.
+ "state": "A String", # Output only. The state of a version.
+ "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+ # model. You should generally use `auto_scaling` with an appropriate
+ # `min_nodes` instead, but this option is available if you want more
+ # predictable billing. Beware that latency and error rates will increase
+      # if the traffic exceeds the capability of the system to serve it based
+ # on the selected number of nodes.
+ "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
+ # starting from the time the model is deployed, so the cost of operating
+ # this model will be proportional to `nodes` * number of hours since
+ # last billing cycle plus the cost for each prediction performed.
},
- "labels": { # Optional. One or more labels that you can add, to organize your model
- # versions. Each label is a key-value pair, where both the key and the value
- # are arbitrary strings that you supply.
- # For more information, see the documentation on
- # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
- "a_key": "A String",
- },
- "predictionClass": "A String", # Optional. The fully qualified name
+ "name": "A String", # Required. The name specified for the version when it was created.
+ #
+ # The version name must be unique within the model it is created in.
+ "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
+ "pythonVersion": "A String", # Required. The version of Python used in prediction.
+ #
+ # The following Python versions are available:
+ #
+ # * Python '3.7' is available when `runtime_version` is set to '1.15' or
+ # later.
+ # * Python '3.5' is available when `runtime_version` is set to a version
+ # from '1.4' to '1.14'.
+ # * Python '2.7' is available when `runtime_version` is set to '1.15' or
+ # earlier.
+ #
+ # Read more about the Python versions available for [each runtime
+ # version](/ml-engine/docs/runtime-version-list).
+ "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
+ "predictionClass": "A String", # Optional. The fully qualified name
# (<var>module_name</var>.<var>class_name</var>) of a class that implements
# the Predictor interface described in this reference field. The module
# containing this class should be included in a package provided to the
@@ -621,12 +633,12 @@
#
# The following code sample provides the Predictor interface:
#
- # <pre style="max-width: 626px;">
+ # <pre style="max-width: 626px;">
# class Predictor(object):
- # """Interface for constructing custom predictors."""
+ # """Interface for constructing custom predictors."""
#
# def predict(self, instances, **kwargs):
- # """Performs custom prediction.
+ # """Performs custom prediction.
#
# Instances are the decoded values from the request. They have already
# been deserialized from JSON.
@@ -639,12 +651,12 @@
# Returns:
# A list of outputs containing the prediction results. This list must
# be JSON serializable.
- # """
+ # """
# raise NotImplementedError()
#
# @classmethod
# def from_path(cls, model_dir):
- # """Creates an instance of Predictor using the given path.
+ # """Creates an instance of Predictor using the given path.
#
# Loading of the predictor should be done in this method.
#
@@ -655,15 +667,13 @@
#
# Returns:
# An instance implementing this Predictor class.
- # """
+ # """
# raise NotImplementedError()
# </pre>
#
# Learn more about [the Predictor interface and custom prediction
# routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
- "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
- "state": "A String", # Output only. The state of a version.
- "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+ "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
# prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
# or [scikit-learn pipelines with custom
# code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
@@ -677,17 +687,45 @@
#
# If you specify this field, you must also set
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
- "A String",
+ "A String",
],
- "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
- # prevent simultaneous updates of a model from overwriting each other.
- # It is strongly suggested that systems make use of the `etag` in the
- # read-modify-write cycle to perform model updates in order to avoid race
- # conditions: An `etag` is returned in the response to `GetVersion`, and
- # systems are expected to put that etag in the request to `UpdateVersion` to
- # ensure that their change will be applied to the model as intended.
- "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
- "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
+ "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
+ # Some explanation features require additional metadata to be loaded
+ # as part of the model payload.
+ # There are two feature attribution methods supported for TensorFlow models:
+ # integrated gradients and sampled Shapley.
+ # [Learn more about feature
+ # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
+      "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage
+          # of the model's fully differentiable structure. Refer to this paper for
+          # more details: https://arxiv.org/abs/1703.01365
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+ # A good value to start is 50 and gradually increase until the
+ # sum to diff property is met within the desired error range.
+ },
+      "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that
+          # contribute to the label being predicted. A sampling strategy is used to
+          # approximate the value rather than considering all subsets of features.
+ "numPaths": 42, # The number of feature permutations to consider when approximating the
+ # Shapley values.
+ },
+      "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage
+          # of the model's fully differentiable structure. Refer to this paper for
+          # more details: https://arxiv.org/abs/1906.02825
+          # Currently only implemented for models with natural image inputs.
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+ # A good value to start is 50 and gradually increase until the
+ # sum to diff property is met within the desired error range.
+ },
+ },
+ "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
# create the version. See the
# [guide to model
# deployment](/ml-engine/docs/tensorflow/deploying-models) for more
@@ -698,120 +736,16 @@
# the model service uses the specified location as the source of the model.
# Once deployed, the model version is hosted by the prediction service, so
# this location is useful only as a historical record.
- # The total number of model files can't exceed 1000.
- "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
- # Some explanation features require additional metadata to be loaded
- # as part of the model payload.
- # There are two feature attribution methods supported for TensorFlow models:
- # integrated gradients and sampled Shapley.
- # [Learn more about feature
- # attributions.](/ml-engine/docs/ai-explanations/overview)
- "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: https://arxiv.org/abs/1906.02825
- # Currently only implemented for models with natural image inputs.
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: https://arxiv.org/abs/1906.02825
- # Currently only implemented for models with natural image inputs.
- "numIntegralSteps": 42, # Number of steps for approximating the path integral.
- # A good value to start is 50 and gradually increase until the
- # sum to diff property is met within the desired error range.
- },
- "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
- # contribute to the label being predicted. A sampling strategy is used to
- # approximate the value rather than considering all subsets of features.
- # contribute to the label being predicted. A sampling strategy is used to
- # approximate the value rather than considering all subsets of features.
- "numPaths": 42, # The number of feature permutations to consider when approximating the
- # Shapley values.
- },
- "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
- "numIntegralSteps": 42, # Number of steps for approximating the path integral.
- # A good value to start is 50 and gradually increase until the
- # sum to diff property is met within the desired error range.
- },
- },
- "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
- # requests that do not specify a version.
- #
- # You can change the default version by calling
- # projects.methods.versions.setDefault.
- "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
- # applies to online prediction service. If this field is not specified, it
- # defaults to `mls1-c1-m2`.
- #
- # Online prediction supports the following machine types:
- #
- # * `mls1-c1-m2`
- # * `mls1-c4-m2`
- # * `n1-standard-2`
- # * `n1-standard-4`
- # * `n1-standard-8`
- # * `n1-standard-16`
- # * `n1-standard-32`
- # * `n1-highmem-2`
- # * `n1-highmem-4`
- # * `n1-highmem-8`
- # * `n1-highmem-16`
- # * `n1-highmem-32`
- # * `n1-highcpu-2`
- # * `n1-highcpu-4`
- # * `n1-highcpu-8`
- # * `n1-highcpu-16`
- # * `n1-highcpu-32`
- #
- # `mls1-c1-m2` is generally available. All other machine types are available
- # in beta. Learn more about the [differences between machine
- # types](/ml-engine/docs/machine-types-online-prediction).
- "description": "A String", # Optional. The description specified for the version when it was created.
- "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
- #
- # For more information, see the
- # [runtime version list](/ml-engine/docs/runtime-version-list) and
- # [how to manage runtime versions](/ml-engine/docs/versioning).
- "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
- # model. You should generally use `auto_scaling` with an appropriate
- # `min_nodes` instead, but this option is available if you want more
- # predictable billing. Beware that latency and error rates will increase
- # if the traffic exceeds that capability of the system to serve it based
- # on the selected number of nodes.
- "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
- # starting from the time the model is deployed, so the cost of operating
- # this model will be proportional to `nodes` * number of hours since
- # last billing cycle plus the cost for each prediction performed.
- },
- "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
- "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
- # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
- # `XGBOOST`. If you do not specify a framework, AI Platform
- # will analyze files in the deployment_uri to determine a framework. If you
- # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
- # of the model to 1.4 or greater.
- #
- # Do **not** specify a framework if you're deploying a [custom
- # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
- #
- # If you specify a [Compute Engine (N1) machine
- # type](/ml-engine/docs/machine-types-online-prediction) in the
- # `machineType` field, you must specify `TENSORFLOW`
- # for the framework.
- "createTime": "A String", # Output only. The time the version was created.
- "name": "A String", # Required. The name specified for the version when it was created.
- #
- # The version name must be unique within the model it is created in.
- "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+ # The total number of model files can't exceed 1000.
+ "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
# response to increases and decreases in traffic. Care should be
- # taken to ramp up traffic according to the model's ability to scale
+ # taken to ramp up traffic according to the model's ability to scale
# or you will start seeing increases in latency and 429 response codes.
#
# Note that you cannot use AutoScaling if your version uses
      # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
# `manual_scaling`.
- "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+ "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
# nodes are always up, starting from the time the model is deployed.
# Therefore, the cost of operating this model will be at least
# `rate` * `min_nodes` * number of hours since last billing cycle,
@@ -846,32 +780,27 @@
# <pre>
# update_body.json:
# {
- # 'autoScaling': {
- # 'minNodes': 5
+ # 'autoScaling': {
+ # 'minNodes': 5
# }
# }
# </pre>
# HTTP request:
- # <pre style="max-width: 626px;">
+ # <pre style="max-width: 626px;">
# PATCH
# https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
# -d @./update_body.json
# </pre>
},
- "pythonVersion": "A String", # Required. The version of Python used in prediction.
- #
- # The following Python versions are available:
- #
- # * Python '3.7' is available when `runtime_version` is set to '1.15' or
- # later.
- # * Python '3.5' is available when `runtime_version` is set to a version
- # from '1.4' to '1.14'.
- # * Python '2.7' is available when `runtime_version` is set to '1.15' or
- # earlier.
- #
- # Read more about the Python versions available for [each runtime
- # version](/ml-engine/docs/runtime-version-list).
- "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+      "labels": { # Optional. One or more labels that you can add to organize your model
+ # versions. Each label is a key-value pair, where both the key and the value
+ # are arbitrary strings that you supply.
+ # For more information, see the documentation on
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ "a_key": "A String",
+ },
+ "createTime": "A String", # Output only. The time the version was created.
+ "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
# projects.models.versions.patch
# request. Specifying it in a
# projects.models.versions.create
@@ -890,19 +819,16 @@
# evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
# specify this configuration manually. Setting up continuous evaluation
# automatically enables logging of request-response pairs.
- "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
- # For example, if you want to log 10% of requests, enter `0.1`. The sampling
- # window is the lifetime of the model version. Defaults to 0.
- "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
- # "<var>project_id</var>.<var>dataset_name</var>.<var>table_name</var>"
+ "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
+ # "<var>project_id</var>.<var>dataset_name</var>.<var>table_name</var>"
#
- # The specified table must already exist, and the "Cloud ML Service Agent"
+ # The specified table must already exist, and the "Cloud ML Service Agent"
# for your project must have permission to write to it. The table must have
# the following [schema](/bigquery/docs/schemas):
#
# <table>
- # <tr><th>Field name</th><th style="display: table-cell">Type</th>
- # <th style="display: table-cell">Mode</th></tr>
+ # <tr><th>Field name</th><th style="display: table-cell">Type</th>
+ # <th style="display: table-cell">Mode</th></tr>
# <tr><td>model</td><td>STRING</td><td>REQUIRED</td></tr>
# <tr><td>model_version</td><td>STRING</td><td>REQUIRED</td></tr>
# <tr><td>time</td><td>TIMESTAMP</td><td>REQUIRED</td></tr>
@@ -910,12 +836,86 @@
# <tr><td>raw_prediction</td><td>STRING</td><td>NULLABLE</td></tr>
# <tr><td>groundtruth</td><td>STRING</td><td>NULLABLE</td></tr>
# </table>
+ "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+ # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+ # window is the lifetime of the model version. Defaults to 0.
+ },
+ "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+ "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
+ # applies to online prediction service. If this field is not specified, it
+ # defaults to `mls1-c1-m2`.
+ #
+ # Online prediction supports the following machine types:
+ #
+ # * `mls1-c1-m2`
+ # * `mls1-c4-m2`
+ # * `n1-standard-2`
+ # * `n1-standard-4`
+ # * `n1-standard-8`
+ # * `n1-standard-16`
+ # * `n1-standard-32`
+ # * `n1-highmem-2`
+ # * `n1-highmem-4`
+ # * `n1-highmem-8`
+ # * `n1-highmem-16`
+ # * `n1-highmem-32`
+ # * `n1-highcpu-2`
+ # * `n1-highcpu-4`
+ # * `n1-highcpu-8`
+ # * `n1-highcpu-16`
+ # * `n1-highcpu-32`
+ #
+ # `mls1-c1-m2` is generally available. All other machine types are available
+ # in beta. Learn more about the [differences between machine
+ # types](/ml-engine/docs/machine-types-online-prediction).
+ "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
+ #
+ # For more information, see the
+ # [runtime version list](/ml-engine/docs/runtime-version-list) and
+ # [how to manage runtime versions](/ml-engine/docs/versioning).
+ "description": "A String", # Optional. The description specified for the version when it was created.
+ "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
+ # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+ # `XGBOOST`. If you do not specify a framework, AI Platform
+ # will analyze files in the deployment_uri to determine a framework. If you
+ # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+ # of the model to 1.4 or greater.
+ #
+ # Do **not** specify a framework if you're deploying a [custom
+ # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ #
+ # If you specify a [Compute Engine (N1) machine
+ # type](/ml-engine/docs/machine-types-online-prediction) in the
+ # `machineType` field, you must specify `TENSORFLOW`
+ # for the framework.
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a model from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform model updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `GetVersion`, and
+ # systems are expected to put that etag in the request to `UpdateVersion` to
+ # ensure that their change will be applied to the model as intended.
+ "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
+ # requests that do not specify a version.
+ #
+ # You can change the default version by calling
+      # projects.models.versions.setDefault.
+ "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+ # Only specify this field if you have specified a Compute Engine (N1) machine
+ # type in the `machineType` field. Learn more about [using GPUs for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+ # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+ # [accelerators for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ "count": "A String", # The number of accelerators to attach to each machine running the job.
+ "type": "A String", # The type of accelerator to use.
},
}</pre>
</div>
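The Version fields documented above translate into a plain dict passed as `body` to `versions().create()`. A hedged sketch: `validate_version_body` is an illustrative pre-flight check derived from the Required/Optional annotations above (not part of the client library), and the project, model, and bucket names are placeholders:

```python
# Illustrative pre-flight check for a Version body, derived from the field
# docs above: name, deploymentUri, runtimeVersion, and pythonVersion are
# Required, and a Compute Engine (N1) machineType additionally requires
# framework TENSORFLOW. This helper is not part of google-api-python-client.

REQUIRED_FIELDS = ("name", "deploymentUri", "runtimeVersion", "pythonVersion")

def validate_version_body(body):
    """Return a list of problems found in a Version dict (empty if none)."""
    problems = [field + " is required" for field in REQUIRED_FIELDS
                if field not in body]
    machine_type = body.get("machineType", "")
    # Only flags an explicitly set non-TensorFlow framework; when framework
    # is omitted, the service infers it from the files in deploymentUri.
    if machine_type.startswith("n1-") and body.get("framework", "TENSORFLOW") != "TENSORFLOW":
        problems.append("N1 machine types require framework TENSORFLOW")
    return problems

body = {
    "name": "v1",                              # placeholder version name
    "deploymentUri": "gs://my-bucket/model/",  # placeholder Cloud Storage path
    "runtimeVersion": "1.15",
    "pythonVersion": "3.7",                    # 3.7 needs runtime_version 1.15+
}
print(validate_version_body(body))  # []

# With a clean body, the create call would look like (not executed here):
#   ml.projects().models().versions().create(
#       parent="projects/my-project/models/my_model", body=body).execute()
```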
<div class="method">
- <code class="details" id="list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</code>
+ <code class="details" id="list">list(parent, pageToken=None, pageSize=None, filter=None, x__xgafv=None)</code>
<pre>Gets basic information about all the versions of a model.
If you expect that a model has many versions, or if you need to handle
@@ -931,49 +931,61 @@
You get the token from the `next_page_token` field of the response from
the previous call.
- x__xgafv: string, V1 error format.
- Allowed values
- 1 - v1 error format
- 2 - v2 error format
- pageSize: integer, Optional. The number of versions to retrieve per "page" of results. If
+ pageSize: integer, Optional. The number of versions to retrieve per "page" of results. If
there are more remaining results than this number, the response message
will contain a valid value in the `next_page_token` field.
The default value is 20, and the maximum page size is 100.
filter: string, Optional. Specifies the subset of versions to retrieve.
+ x__xgafv: string, V1 error format.
+ Allowed values
+ 1 - v1 error format
+ 2 - v2 error format
Returns:
An object of the form:
{ # Response message for the ListVersions method.
- "nextPageToken": "A String", # Optional. Pass this token as the `page_token` field of the request for a
+ "nextPageToken": "A String", # Optional. Pass this token as the `page_token` field of the request for a
# subsequent call.
- "versions": [ # The list of versions.
+ "versions": [ # The list of versions.
{ # Represents a version of the model.
#
# Each version is a trained model deployed in the cloud, ready to handle
# prediction requests. A model can have multiple versions. You can get
# information about all of the versions of a given model by calling
# projects.models.versions.list.
- "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
- # Only specify this field if you have specified a Compute Engine (N1) machine
- # type in the `machineType` field. Learn more about [using GPUs for online
- # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
- # Note that the AcceleratorConfig can be used in both Jobs and Versions.
- # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
- # [accelerators for online
- # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
- "count": "A String", # The number of accelerators to attach to each machine running the job.
- "type": "A String", # The type of accelerator to use.
+ "state": "A String", # Output only. The state of a version.
+ "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+ # model. You should generally use `auto_scaling` with an appropriate
+ # `min_nodes` instead, but this option is available if you want more
+ # predictable billing. Beware that latency and error rates will increase
+      # if the traffic exceeds the capability of the system to serve it based
+ # on the selected number of nodes.
+ "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
+ # starting from the time the model is deployed, so the cost of operating
+ # this model will be proportional to `nodes` * number of hours since
+ # last billing cycle plus the cost for each prediction performed.
},
- "labels": { # Optional. One or more labels that you can add, to organize your model
- # versions. Each label is a key-value pair, where both the key and the value
- # are arbitrary strings that you supply.
- # For more information, see the documentation on
- # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
- "a_key": "A String",
- },
- "predictionClass": "A String", # Optional. The fully qualified name
+ "name": "A String", # Required. The name specified for the version when it was created.
+ #
+ # The version name must be unique within the model it is created in.
+ "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
+ "pythonVersion": "A String", # Required. The version of Python used in prediction.
+ #
+ # The following Python versions are available:
+ #
+ # * Python '3.7' is available when `runtime_version` is set to '1.15' or
+ # later.
+ # * Python '3.5' is available when `runtime_version` is set to a version
+ # from '1.4' to '1.14'.
+ # * Python '2.7' is available when `runtime_version` is set to '1.15' or
+ # earlier.
+ #
+ # Read more about the Python versions available for [each runtime
+ # version](/ml-engine/docs/runtime-version-list).
+ "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
+ "predictionClass": "A String", # Optional. The fully qualified name
# (<var>module_name</var>.<var>class_name</var>) of a class that implements
# the Predictor interface described in this reference field. The module
# containing this class should be included in a package provided to the
@@ -988,12 +1000,12 @@
#
# The following code sample provides the Predictor interface:
#
- # <pre style="max-width: 626px;">
+ # <pre style="max-width: 626px;">
# class Predictor(object):
- # """Interface for constructing custom predictors."""
+ # """Interface for constructing custom predictors."""
#
# def predict(self, instances, **kwargs):
- # """Performs custom prediction.
+ # """Performs custom prediction.
#
# Instances are the decoded values from the request. They have already
# been deserialized from JSON.
@@ -1006,12 +1018,12 @@
# Returns:
# A list of outputs containing the prediction results. This list must
# be JSON serializable.
- # """
+ # """
# raise NotImplementedError()
#
# @classmethod
# def from_path(cls, model_dir):
- # """Creates an instance of Predictor using the given path.
+ # """Creates an instance of Predictor using the given path.
#
# Loading of the predictor should be done in this method.
#
@@ -1022,15 +1034,13 @@
#
# Returns:
# An instance implementing this Predictor class.
- # """
+ # """
# raise NotImplementedError()
# </pre>
#
# Learn more about [the Predictor interface and custom prediction
# routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
- "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
- "state": "A String", # Output only. The state of a version.
- "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+ "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
# prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
# or [scikit-learn pipelines with custom
# code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
@@ -1044,17 +1054,45 @@
#
# If you specify this field, you must also set
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
- "A String",
+ "A String",
],
- "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
- # prevent simultaneous updates of a model from overwriting each other.
- # It is strongly suggested that systems make use of the `etag` in the
- # read-modify-write cycle to perform model updates in order to avoid race
- # conditions: An `etag` is returned in the response to `GetVersion`, and
- # systems are expected to put that etag in the request to `UpdateVersion` to
- # ensure that their change will be applied to the model as intended.
- "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
- "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
+ "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
+ # Some explanation features require additional metadata to be loaded
+ # as part of the model payload.
+ # There are two feature attribution methods supported for TensorFlow models:
+ # integrated gradients and sampled Shapley.
+ # [Learn more about feature
+ # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
+ "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
+ # of the model's fully differentiable structure. Refer to this paper for
+ # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
+ # of the model's fully differentiable structure. Refer to this paper for
+ # more details: https://arxiv.org/abs/1703.01365
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+          # A good starting value is 50; gradually increase it until the
+          # sum-to-diff property is met within the desired error range.
+ },
+ "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
+ # contribute to the label being predicted. A sampling strategy is used to
+ # approximate the value rather than considering all subsets of features.
+ # contribute to the label being predicted. A sampling strategy is used to
+ # approximate the value rather than considering all subsets of features.
+ "numPaths": 42, # The number of feature permutations to consider when approximating the
+ # Shapley values.
+ },
+ "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
+ # of the model's fully differentiable structure. Refer to this paper for
+ # more details: https://arxiv.org/abs/1906.02825
+ # Currently only implemented for models with natural image inputs.
+ # of the model's fully differentiable structure. Refer to this paper for
+ # more details: https://arxiv.org/abs/1906.02825
+ # Currently only implemented for models with natural image inputs.
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+          # A good starting value is 50; gradually increase it until the
+          # sum-to-diff property is met within the desired error range.
+ },
+ },
+ "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
# create the version. See the
# [guide to model
# deployment](/ml-engine/docs/tensorflow/deploying-models) for more
@@ -1065,120 +1103,16 @@
# the model service uses the specified location as the source of the model.
# Once deployed, the model version is hosted by the prediction service, so
# this location is useful only as a historical record.
- # The total number of model files can't exceed 1000.
- "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
- # Some explanation features require additional metadata to be loaded
- # as part of the model payload.
- # There are two feature attribution methods supported for TensorFlow models:
- # integrated gradients and sampled Shapley.
- # [Learn more about feature
- # attributions.](/ml-engine/docs/ai-explanations/overview)
- "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: https://arxiv.org/abs/1906.02825
- # Currently only implemented for models with natural image inputs.
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: https://arxiv.org/abs/1906.02825
- # Currently only implemented for models with natural image inputs.
- "numIntegralSteps": 42, # Number of steps for approximating the path integral.
- # A good value to start is 50 and gradually increase until the
- # sum to diff property is met within the desired error range.
- },
- "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
- # contribute to the label being predicted. A sampling strategy is used to
- # approximate the value rather than considering all subsets of features.
- # contribute to the label being predicted. A sampling strategy is used to
- # approximate the value rather than considering all subsets of features.
- "numPaths": 42, # The number of feature permutations to consider when approximating the
- # Shapley values.
- },
- "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
- "numIntegralSteps": 42, # Number of steps for approximating the path integral.
- # A good value to start is 50 and gradually increase until the
- # sum to diff property is met within the desired error range.
- },
- },
- "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
- # requests that do not specify a version.
- #
- # You can change the default version by calling
- # projects.methods.versions.setDefault.
- "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
- # applies to online prediction service. If this field is not specified, it
- # defaults to `mls1-c1-m2`.
- #
- # Online prediction supports the following machine types:
- #
- # * `mls1-c1-m2`
- # * `mls1-c4-m2`
- # * `n1-standard-2`
- # * `n1-standard-4`
- # * `n1-standard-8`
- # * `n1-standard-16`
- # * `n1-standard-32`
- # * `n1-highmem-2`
- # * `n1-highmem-4`
- # * `n1-highmem-8`
- # * `n1-highmem-16`
- # * `n1-highmem-32`
- # * `n1-highcpu-2`
- # * `n1-highcpu-4`
- # * `n1-highcpu-8`
- # * `n1-highcpu-16`
- # * `n1-highcpu-32`
- #
- # `mls1-c1-m2` is generally available. All other machine types are available
- # in beta. Learn more about the [differences between machine
- # types](/ml-engine/docs/machine-types-online-prediction).
- "description": "A String", # Optional. The description specified for the version when it was created.
- "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
- #
- # For more information, see the
- # [runtime version list](/ml-engine/docs/runtime-version-list) and
- # [how to manage runtime versions](/ml-engine/docs/versioning).
- "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
- # model. You should generally use `auto_scaling` with an appropriate
- # `min_nodes` instead, but this option is available if you want more
- # predictable billing. Beware that latency and error rates will increase
- # if the traffic exceeds that capability of the system to serve it based
- # on the selected number of nodes.
- "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
- # starting from the time the model is deployed, so the cost of operating
- # this model will be proportional to `nodes` * number of hours since
- # last billing cycle plus the cost for each prediction performed.
- },
- "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
- "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
- # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
- # `XGBOOST`. If you do not specify a framework, AI Platform
- # will analyze files in the deployment_uri to determine a framework. If you
- # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
- # of the model to 1.4 or greater.
- #
- # Do **not** specify a framework if you're deploying a [custom
- # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
- #
- # If you specify a [Compute Engine (N1) machine
- # type](/ml-engine/docs/machine-types-online-prediction) in the
- # `machineType` field, you must specify `TENSORFLOW`
- # for the framework.
- "createTime": "A String", # Output only. The time the version was created.
- "name": "A String", # Required. The name specified for the version when it was created.
- #
- # The version name must be unique within the model it is created in.
- "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+ # The total number of model files can't exceed 1000.
+ "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
# response to increases and decreases in traffic. Care should be
- # taken to ramp up traffic according to the model's ability to scale
+ # taken to ramp up traffic according to the model's ability to scale
# or you will start seeing increases in latency and 429 response codes.
#
# Note that you cannot use AutoScaling if your version uses
          # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
          # `manual_scaling`.
- "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+ "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
# nodes are always up, starting from the time the model is deployed.
# Therefore, the cost of operating this model will be at least
# `rate` * `min_nodes` * number of hours since last billing cycle,
@@ -1213,32 +1147,27 @@
# <pre>
# update_body.json:
# {
- # 'autoScaling': {
- # 'minNodes': 5
+ # 'autoScaling': {
+ # 'minNodes': 5
# }
# }
# </pre>
# HTTP request:
- # <pre style="max-width: 626px;">
+ # <pre style="max-width: 626px;">
# PATCH
# https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
# -d @./update_body.json
# </pre>
},
- "pythonVersion": "A String", # Required. The version of Python used in prediction.
- #
- # The following Python versions are available:
- #
- # * Python '3.7' is available when `runtime_version` is set to '1.15' or
- # later.
- # * Python '3.5' is available when `runtime_version` is set to a version
- # from '1.4' to '1.14'.
- # * Python '2.7' is available when `runtime_version` is set to '1.15' or
- # earlier.
- #
- # Read more about the Python versions available for [each runtime
- # version](/ml-engine/docs/runtime-version-list).
- "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+    &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your model
+        # versions. Each label is a key-value pair, where both the key and the value
+ # are arbitrary strings that you supply.
+ # For more information, see the documentation on
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ "a_key": "A String",
+ },
+ "createTime": "A String", # Output only. The time the version was created.
+ "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
# projects.models.versions.patch
# request. Specifying it in a
# projects.models.versions.create
@@ -1257,19 +1186,16 @@
# evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
# specify this configuration manually. Setting up continuous evaluation
# automatically enables logging of request-response pairs.
- "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
- # For example, if you want to log 10% of requests, enter `0.1`. The sampling
- # window is the lifetime of the model version. Defaults to 0.
- "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
- # "<var>project_id</var>.<var>dataset_name</var>.<var>table_name</var>"
+ "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
+ # "<var>project_id</var>.<var>dataset_name</var>.<var>table_name</var>"
#
- # The specified table must already exist, and the "Cloud ML Service Agent"
+ # The specified table must already exist, and the "Cloud ML Service Agent"
# for your project must have permission to write to it. The table must have
# the following [schema](/bigquery/docs/schemas):
#
# <table>
- # <tr><th>Field name</th><th style="display: table-cell">Type</th>
- # <th style="display: table-cell">Mode</th></tr>
+ # <tr><th>Field name</th><th style="display: table-cell">Type</th>
+ # <th style="display: table-cell">Mode</th></tr>
# <tr><td>model</td><td>STRING</td><td>REQUIRED</td></tr>
# <tr><td>model_version</td><td>STRING</td><td>REQUIRED</td></tr>
# <tr><td>time</td><td>TIMESTAMP</td><td>REQUIRED</td></tr>
@@ -1277,6 +1203,80 @@
# <tr><td>raw_prediction</td><td>STRING</td><td>NULLABLE</td></tr>
# <tr><td>groundtruth</td><td>STRING</td><td>NULLABLE</td></tr>
# </table>
+ "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+ # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+ # window is the lifetime of the model version. Defaults to 0.
+ },
+ "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+ "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
+ # applies to online prediction service. If this field is not specified, it
+ # defaults to `mls1-c1-m2`.
+ #
+ # Online prediction supports the following machine types:
+ #
+ # * `mls1-c1-m2`
+ # * `mls1-c4-m2`
+ # * `n1-standard-2`
+ # * `n1-standard-4`
+ # * `n1-standard-8`
+ # * `n1-standard-16`
+ # * `n1-standard-32`
+ # * `n1-highmem-2`
+ # * `n1-highmem-4`
+ # * `n1-highmem-8`
+ # * `n1-highmem-16`
+ # * `n1-highmem-32`
+ # * `n1-highcpu-2`
+ # * `n1-highcpu-4`
+ # * `n1-highcpu-8`
+ # * `n1-highcpu-16`
+ # * `n1-highcpu-32`
+ #
+ # `mls1-c1-m2` is generally available. All other machine types are available
+ # in beta. Learn more about the [differences between machine
+ # types](/ml-engine/docs/machine-types-online-prediction).
+ "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
+ #
+ # For more information, see the
+ # [runtime version list](/ml-engine/docs/runtime-version-list) and
+ # [how to manage runtime versions](/ml-engine/docs/versioning).
+ "description": "A String", # Optional. The description specified for the version when it was created.
+ "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
+ # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+ # `XGBOOST`. If you do not specify a framework, AI Platform
+ # will analyze files in the deployment_uri to determine a framework. If you
+ # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+ # of the model to 1.4 or greater.
+ #
+ # Do **not** specify a framework if you're deploying a [custom
+ # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ #
+ # If you specify a [Compute Engine (N1) machine
+ # type](/ml-engine/docs/machine-types-online-prediction) in the
+ # `machineType` field, you must specify `TENSORFLOW`
+ # for the framework.
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a model from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform model updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `GetVersion`, and
+ # systems are expected to put that etag in the request to `UpdateVersion` to
+ # ensure that their change will be applied to the model as intended.
+ "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
+ # requests that do not specify a version.
+ #
+ # You can change the default version by calling
+        # projects.models.versions.setDefault.
+ "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+ # Only specify this field if you have specified a Compute Engine (N1) machine
+ # type in the `machineType` field. Learn more about [using GPUs for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+ # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+ # [accelerators for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ "count": "A String", # The number of accelerators to attach to each machine running the job.
+ "type": "A String", # The type of accelerator to use.
},
},
],
@@ -1292,7 +1292,7 @@
previous_response: The response from the request for the previous page. (required)
Returns:
- A request object that you can call 'execute()' on to request the next
+ A request object that you can call 'execute()' on to request the next
page. Returns None if there are no more items in the collection.
</pre>
</div>
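
The `list`/`list_next` contract documented above (request the next page from the previous request and response; `None` once the collection is exhausted) can be sketched without network access. `FakeVersions` below is a hypothetical in-memory stand-in for the real `projects().models().versions()` resource, assumed only for illustration; it models just the `nextPageToken` handshake.

```python
# Sketch of the list/list_next pagination loop. FakeVersions is a
# hypothetical stand-in for projects().models().versions(): list_next
# returns None once the previous response carries no nextPageToken.

class FakeRequest:
    def __init__(self, api, token):
        self.api, self.token = api, token

    def execute(self):
        return self.api.pages[self.token]


class FakeVersions:
    def __init__(self, pages):
        self.pages = pages  # page token -> response dict

    def list(self, parent, pageSize=None):
        return FakeRequest(self, "")  # first page has an empty token

    def list_next(self, previous_request, previous_response):
        token = previous_response.get("nextPageToken")
        if token is None:
            return None  # no more items in the collection
        return FakeRequest(previous_request.api, token)


def all_versions(versions, parent):
    # Accumulate every version across pages, mirroring the usual
    # request -> execute -> list_next loop for this client library.
    out, request = [], versions.list(parent=parent)
    while request is not None:
        response = request.execute()
        out.extend(response.get("versions", []))
        request = versions.list_next(request, response)
    return out


api = FakeVersions({
    "": {"versions": [{"name": "v1"}], "nextPageToken": "p2"},
    "p2": {"versions": [{"name": "v2"}]},
})
collected = all_versions(api, "projects/p/models/m")
```

With a real service object the same loop applies unchanged; only the resource construction differs.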
@@ -1315,25 +1315,37 @@
# prediction requests. A model can have multiple versions. You can get
# information about all of the versions of a given model by calling
# projects.models.versions.list.
- "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
- # Only specify this field if you have specified a Compute Engine (N1) machine
- # type in the `machineType` field. Learn more about [using GPUs for online
- # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
- # Note that the AcceleratorConfig can be used in both Jobs and Versions.
- # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
- # [accelerators for online
- # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
- "count": "A String", # The number of accelerators to attach to each machine running the job.
- "type": "A String", # The type of accelerator to use.
+ "state": "A String", # Output only. The state of a version.
+ "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+ # model. You should generally use `auto_scaling` with an appropriate
+ # `min_nodes` instead, but this option is available if you want more
+ # predictable billing. Beware that latency and error rates will increase
+        # if the traffic exceeds the capability of the system to serve it based
+ # on the selected number of nodes.
+ "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
+ # starting from the time the model is deployed, so the cost of operating
+ # this model will be proportional to `nodes` * number of hours since
+ # last billing cycle plus the cost for each prediction performed.
},
- "labels": { # Optional. One or more labels that you can add, to organize your model
- # versions. Each label is a key-value pair, where both the key and the value
- # are arbitrary strings that you supply.
- # For more information, see the documentation on
- # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
- "a_key": "A String",
- },
- "predictionClass": "A String", # Optional. The fully qualified name
+ "name": "A String", # Required. The name specified for the version when it was created.
+ #
+ # The version name must be unique within the model it is created in.
+ "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
+ "pythonVersion": "A String", # Required. The version of Python used in prediction.
+ #
+ # The following Python versions are available:
+ #
+ # * Python '3.7' is available when `runtime_version` is set to '1.15' or
+ # later.
+ # * Python '3.5' is available when `runtime_version` is set to a version
+ # from '1.4' to '1.14'.
+ # * Python '2.7' is available when `runtime_version` is set to '1.15' or
+ # earlier.
+ #
+ # Read more about the Python versions available for [each runtime
+ # version](/ml-engine/docs/runtime-version-list).
+ "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
+ "predictionClass": "A String", # Optional. The fully qualified name
# (<var>module_name</var>.<var>class_name</var>) of a class that implements
# the Predictor interface described in this reference field. The module
# containing this class should be included in a package provided to the
@@ -1348,12 +1360,12 @@
#
# The following code sample provides the Predictor interface:
#
- # <pre style="max-width: 626px;">
+ # <pre style="max-width: 626px;">
# class Predictor(object):
- # """Interface for constructing custom predictors."""
+ # """Interface for constructing custom predictors."""
#
# def predict(self, instances, **kwargs):
- # """Performs custom prediction.
+ # """Performs custom prediction.
#
# Instances are the decoded values from the request. They have already
# been deserialized from JSON.
@@ -1366,12 +1378,12 @@
# Returns:
# A list of outputs containing the prediction results. This list must
# be JSON serializable.
- # """
+ # """
# raise NotImplementedError()
#
# @classmethod
# def from_path(cls, model_dir):
- # """Creates an instance of Predictor using the given path.
+ # """Creates an instance of Predictor using the given path.
#
# Loading of the predictor should be done in this method.
#
@@ -1382,15 +1394,13 @@
#
# Returns:
# An instance implementing this Predictor class.
- # """
+ # """
# raise NotImplementedError()
# </pre>
#
# Learn more about [the Predictor interface and custom prediction
# routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
- "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
- "state": "A String", # Output only. The state of a version.
- "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+ "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
# prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
# or [scikit-learn pipelines with custom
# code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
@@ -1404,17 +1414,45 @@
#
# If you specify this field, you must also set
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
- "A String",
+ "A String",
],
- "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
- # prevent simultaneous updates of a model from overwriting each other.
- # It is strongly suggested that systems make use of the `etag` in the
- # read-modify-write cycle to perform model updates in order to avoid race
- # conditions: An `etag` is returned in the response to `GetVersion`, and
- # systems are expected to put that etag in the request to `UpdateVersion` to
- # ensure that their change will be applied to the model as intended.
- "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
- "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
+ "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
+ # Some explanation features require additional metadata to be loaded
+ # as part of the model payload.
+ # There are two feature attribution methods supported for TensorFlow models:
+ # integrated gradients and sampled Shapley.
+ # [Learn more about feature
+ # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
+ "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
+ # of the model's fully differentiable structure. Refer to this paper for
+ # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
+ # of the model's fully differentiable structure. Refer to this paper for
+ # more details: https://arxiv.org/abs/1703.01365
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+          # A good starting value is 50; gradually increase it until the
+          # sum-to-diff property is met within the desired error range.
+ },
+ "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
+ # contribute to the label being predicted. A sampling strategy is used to
+ # approximate the value rather than considering all subsets of features.
+ # contribute to the label being predicted. A sampling strategy is used to
+ # approximate the value rather than considering all subsets of features.
+ "numPaths": 42, # The number of feature permutations to consider when approximating the
+ # Shapley values.
+ },
+ "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
+ # of the model's fully differentiable structure. Refer to this paper for
+ # more details: https://arxiv.org/abs/1906.02825
+ # Currently only implemented for models with natural image inputs.
+ # of the model's fully differentiable structure. Refer to this paper for
+ # more details: https://arxiv.org/abs/1906.02825
+ # Currently only implemented for models with natural image inputs.
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+          # A good starting value is 50; gradually increase it until the
+          # sum-to-diff property is met within the desired error range.
+ },
+ },
+ "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
# create the version. See the
# [guide to model
# deployment](/ml-engine/docs/tensorflow/deploying-models) for more
@@ -1425,120 +1463,16 @@
# the model service uses the specified location as the source of the model.
# Once deployed, the model version is hosted by the prediction service, so
# this location is useful only as a historical record.
- # The total number of model files can't exceed 1000.
- "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
- # Some explanation features require additional metadata to be loaded
- # as part of the model payload.
- # There are two feature attribution methods supported for TensorFlow models:
- # integrated gradients and sampled Shapley.
- # [Learn more about feature
- # attributions.](/ml-engine/docs/ai-explanations/overview)
- "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: https://arxiv.org/abs/1906.02825
- # Currently only implemented for models with natural image inputs.
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: https://arxiv.org/abs/1906.02825
- # Currently only implemented for models with natural image inputs.
- "numIntegralSteps": 42, # Number of steps for approximating the path integral.
- # A good value to start is 50 and gradually increase until the
- # sum to diff property is met within the desired error range.
- },
- "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
- # contribute to the label being predicted. A sampling strategy is used to
- # approximate the value rather than considering all subsets of features.
- # contribute to the label being predicted. A sampling strategy is used to
- # approximate the value rather than considering all subsets of features.
- "numPaths": 42, # The number of feature permutations to consider when approximating the
- # Shapley values.
- },
- "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
- "numIntegralSteps": 42, # Number of steps for approximating the path integral.
- # A good value to start is 50 and gradually increase until the
- # sum to diff property is met within the desired error range.
- },
- },
- "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
- # requests that do not specify a version.
- #
- # You can change the default version by calling
- # projects.methods.versions.setDefault.
- "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
- # applies to online prediction service. If this field is not specified, it
- # defaults to `mls1-c1-m2`.
- #
- # Online prediction supports the following machine types:
- #
- # * `mls1-c1-m2`
- # * `mls1-c4-m2`
- # * `n1-standard-2`
- # * `n1-standard-4`
- # * `n1-standard-8`
- # * `n1-standard-16`
- # * `n1-standard-32`
- # * `n1-highmem-2`
- # * `n1-highmem-4`
- # * `n1-highmem-8`
- # * `n1-highmem-16`
- # * `n1-highmem-32`
- # * `n1-highcpu-2`
- # * `n1-highcpu-4`
- # * `n1-highcpu-8`
- # * `n1-highcpu-16`
- # * `n1-highcpu-32`
- #
- # `mls1-c1-m2` is generally available. All other machine types are available
- # in beta. Learn more about the [differences between machine
- # types](/ml-engine/docs/machine-types-online-prediction).
- "description": "A String", # Optional. The description specified for the version when it was created.
- "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
- #
- # For more information, see the
- # [runtime version list](/ml-engine/docs/runtime-version-list) and
- # [how to manage runtime versions](/ml-engine/docs/versioning).
- "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
- # model. You should generally use `auto_scaling` with an appropriate
- # `min_nodes` instead, but this option is available if you want more
- # predictable billing. Beware that latency and error rates will increase
- # if the traffic exceeds that capability of the system to serve it based
- # on the selected number of nodes.
- "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
- # starting from the time the model is deployed, so the cost of operating
- # this model will be proportional to `nodes` * number of hours since
- # last billing cycle plus the cost for each prediction performed.
- },
- "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
- "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
- # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
- # `XGBOOST`. If you do not specify a framework, AI Platform
- # will analyze files in the deployment_uri to determine a framework. If you
- # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
- # of the model to 1.4 or greater.
- #
- # Do **not** specify a framework if you're deploying a [custom
- # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
- #
- # If you specify a [Compute Engine (N1) machine
- # type](/ml-engine/docs/machine-types-online-prediction) in the
- # `machineType` field, you must specify `TENSORFLOW`
- # for the framework.
- "createTime": "A String", # Output only. The time the version was created.
- "name": "A String", # Required. The name specified for the version when it was created.
- #
- # The version name must be unique within the model it is created in.
- "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+ # The total number of model files can't exceed 1000.
+ "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
# response to increases and decreases in traffic. Care should be
- # taken to ramp up traffic according to the model's ability to scale
+ # taken to ramp up traffic according to the model's ability to scale
# or you will start seeing increases in latency and 429 response codes.
#
# Note that you cannot use AutoScaling if your version uses
      # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
# `manual_scaling`.
- "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+ "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
# nodes are always up, starting from the time the model is deployed.
# Therefore, the cost of operating this model will be at least
# `rate` * `min_nodes` * number of hours since last billing cycle,
@@ -1573,32 +1507,27 @@
# <pre>
# update_body.json:
# {
- # 'autoScaling': {
- # 'minNodes': 5
+ # 'autoScaling': {
+ # 'minNodes': 5
# }
# }
# </pre>
# HTTP request:
- # <pre style="max-width: 626px;">
+ # <pre style="max-width: 626px;">
# PATCH
# https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
# -d @./update_body.json
# </pre>
},
- "pythonVersion": "A String", # Required. The version of Python used in prediction.
- #
- # The following Python versions are available:
- #
- # * Python '3.7' is available when `runtime_version` is set to '1.15' or
- # later.
- # * Python '3.5' is available when `runtime_version` is set to a version
- # from '1.4' to '1.14'.
- # * Python '2.7' is available when `runtime_version` is set to '1.15' or
- # earlier.
- #
- # Read more about the Python versions available for [each runtime
- # version](/ml-engine/docs/runtime-version-list).
- "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+ "labels": { # Optional. One or more labels that you can add, to organize your model
+ # versions. Each label is a key-value pair, where both the key and the value
+ # are arbitrary strings that you supply.
+ # For more information, see the documentation on
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ "a_key": "A String",
+ },
+ "createTime": "A String", # Output only. The time the version was created.
+ "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
# projects.models.versions.patch
# request. Specifying it in a
# projects.models.versions.create
@@ -1617,19 +1546,16 @@
# evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
# specify this configuration manually. Setting up continuous evaluation
# automatically enables logging of request-response pairs.
- "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
- # For example, if you want to log 10% of requests, enter `0.1`. The sampling
- # window is the lifetime of the model version. Defaults to 0.
- "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
- # "<var>project_id</var>.<var>dataset_name</var>.<var>table_name</var>"
+ "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
+ # "<var>project_id</var>.<var>dataset_name</var>.<var>table_name</var>"
#
- # The specified table must already exist, and the "Cloud ML Service Agent"
+ # The specified table must already exist, and the "Cloud ML Service Agent"
# for your project must have permission to write to it. The table must have
# the following [schema](/bigquery/docs/schemas):
#
# <table>
- # <tr><th>Field name</th><th style="display: table-cell">Type</th>
- # <th style="display: table-cell">Mode</th></tr>
+ # <tr><th>Field name</th><th style="display: table-cell">Type</th>
+ # <th style="display: table-cell">Mode</th></tr>
# <tr><td>model</td><td>STRING</td><td>REQUIRED</td></tr>
# <tr><td>model_version</td><td>STRING</td><td>REQUIRED</td></tr>
# <tr><td>time</td><td>TIMESTAMP</td><td>REQUIRED</td></tr>
@@ -1637,19 +1563,93 @@
# <tr><td>raw_prediction</td><td>STRING</td><td>NULLABLE</td></tr>
# <tr><td>groundtruth</td><td>STRING</td><td>NULLABLE</td></tr>
# </table>
+ "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+ # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+ # window is the lifetime of the model version. Defaults to 0.
+ },
+ "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+ "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
+ # applies to online prediction service. If this field is not specified, it
+ # defaults to `mls1-c1-m2`.
+ #
+ # Online prediction supports the following machine types:
+ #
+ # * `mls1-c1-m2`
+ # * `mls1-c4-m2`
+ # * `n1-standard-2`
+ # * `n1-standard-4`
+ # * `n1-standard-8`
+ # * `n1-standard-16`
+ # * `n1-standard-32`
+ # * `n1-highmem-2`
+ # * `n1-highmem-4`
+ # * `n1-highmem-8`
+ # * `n1-highmem-16`
+ # * `n1-highmem-32`
+ # * `n1-highcpu-2`
+ # * `n1-highcpu-4`
+ # * `n1-highcpu-8`
+ # * `n1-highcpu-16`
+ # * `n1-highcpu-32`
+ #
+ # `mls1-c1-m2` is generally available. All other machine types are available
+ # in beta. Learn more about the [differences between machine
+ # types](/ml-engine/docs/machine-types-online-prediction).
+ "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
+ #
+ # For more information, see the
+ # [runtime version list](/ml-engine/docs/runtime-version-list) and
+ # [how to manage runtime versions](/ml-engine/docs/versioning).
+ "description": "A String", # Optional. The description specified for the version when it was created.
+ "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
+ # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+ # `XGBOOST`. If you do not specify a framework, AI Platform
+ # will analyze files in the deployment_uri to determine a framework. If you
+ # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+ # of the model to 1.4 or greater.
+ #
+ # Do **not** specify a framework if you're deploying a [custom
+ # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ #
+ # If you specify a [Compute Engine (N1) machine
+ # type](/ml-engine/docs/machine-types-online-prediction) in the
+ # `machineType` field, you must specify `TENSORFLOW`
+ # for the framework.
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a model from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform model updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `GetVersion`, and
+ # systems are expected to put that etag in the request to `UpdateVersion` to
+ # ensure that their change will be applied to the model as intended.
+ "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
+ # requests that do not specify a version.
+ #
+ # You can change the default version by calling
+      # projects.models.versions.setDefault.
+ "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+ # Only specify this field if you have specified a Compute Engine (N1) machine
+ # type in the `machineType` field. Learn more about [using GPUs for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+ # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+ # [accelerators for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ "count": "A String", # The number of accelerators to attach to each machine running the job.
+ "type": "A String", # The type of accelerator to use.
},
}
updateMask: string, Required. Specifies the path, relative to `Version`, of the field to
update. Must be present and non-empty.
-For example, to change the description of a version to "foo", the
+For example, to change the description of a version to "foo", the
`update_mask` parameter would be specified as `description`, and the
`PATCH` request body would specify the new value, as follows:
```
{
- "description": "foo"
+ "description": "foo"
}
```
@@ -1668,34 +1668,7 @@
{ # This resource represents a long-running operation that is the result of a
# network API call.
- "metadata": { # Service-specific metadata associated with the operation. It typically
- # contains progress information and common metadata such as create time.
- # Some services might not provide such metadata. Any method that returns a
- # long-running operation should document the metadata type, if any.
- "a_key": "", # Properties of the object. Contains field @type with type URL.
- },
- "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
- # different programming environments, including REST APIs and RPC APIs. It is
- # used by [gRPC](https://github.com/grpc). Each `Status` message contains
- # three pieces of data: error code, error message, and error details.
- #
- # You can find out more about this error model and how to work with it in the
- # [API Design Guide](https://cloud.google.com/apis/design/errors).
- "message": "A String", # A developer-facing error message, which should be in English. Any
- # user-facing error message should be localized and sent in the
- # google.rpc.Status.details field, or localized by the client.
- "code": 42, # The status code, which should be an enum value of google.rpc.Code.
- "details": [ # A list of messages that carry the error details. There is a common set of
- # message types for APIs to use.
- {
- "a_key": "", # Properties of the object. Contains field @type with type URL.
- },
- ],
- },
- "done": True or False, # If the value is `false`, it means the operation is still in progress.
- # If `true`, the operation is completed, and either `error` or `response` is
- # available.
- "response": { # The normal response of the operation in case of success. If the original
+ "response": { # The normal response of the operation in case of success. If the original
# method returns no data on success, such as `Delete`, the response is
# `google.protobuf.Empty`. If the original method is standard
# `Get`/`Create`/`Update`, the response should be the resource. For other
@@ -1703,11 +1676,38 @@
# is the original method name. For example, if the original method name
# is `TakeSnapshot()`, the inferred response type is
# `TakeSnapshotResponse`.
- "a_key": "", # Properties of the object. Contains field @type with type URL.
+ "a_key": "", # Properties of the object. Contains field @type with type URL.
},
- "name": "A String", # The server-assigned name, which is only unique within the same service that
+ "name": "A String", # The server-assigned name, which is only unique within the same service that
# originally returns it. If you use the default HTTP mapping, the
# `name` should be a resource name ending with `operations/{unique_id}`.
+ "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). Each `Status` message contains
+ # three pieces of data: error code, error message, and error details.
+ #
+ # You can find out more about this error model and how to work with it in the
+ # [API Design Guide](https://cloud.google.com/apis/design/errors).
+ "details": [ # A list of messages that carry the error details. There is a common set of
+ # message types for APIs to use.
+ {
+ "a_key": "", # Properties of the object. Contains field @type with type URL.
+ },
+ ],
+ "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+ "message": "A String", # A developer-facing error message, which should be in English. Any
+ # user-facing error message should be localized and sent in the
+ # google.rpc.Status.details field, or localized by the client.
+ },
+ "metadata": { # Service-specific metadata associated with the operation. It typically
+ # contains progress information and common metadata such as create time.
+ # Some services might not provide such metadata. Any method that returns a
+ # long-running operation should document the metadata type, if any.
+ "a_key": "", # Properties of the object. Contains field @type with type URL.
+ },
+ "done": True or False, # If the value is `false`, it means the operation is still in progress.
+ # If `true`, the operation is completed, and either `error` or `response` is
+ # available.
}</pre>
</div>
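The `patch` method documented above returns a long-running Operation, and the caller is expected to poll it until `done` is true, then inspect `response` or `error`. A minimal sketch of that flow with the Python client (the project, model, and version names are hypothetical, and the live calls are shown only as comments):

```python
import time


def make_patch_args(version_name, new_description):
    # updateMask names the field being changed; the body carries the new
    # value, mirroring the update_mask example in the docs above.
    return {
        "name": version_name,
        "updateMask": "description",
        "body": {"description": new_description},
    }


def wait_for_operation(get_operation, poll_seconds=2.0):
    # Poll a long-running operation (a dict shaped like the Operation
    # resource documented above) until done is True; return it on success,
    # raise on failure.
    while True:
        op = get_operation()
        if op.get("done"):
            if "error" in op:
                raise RuntimeError(op["error"].get("message", "operation failed"))
            return op
        time.sleep(poll_seconds)


# With an authorized client (resource names here are hypothetical):
#   from googleapiclient import discovery
#   ml = discovery.build("ml", "v1")
#   args = make_patch_args("projects/my-project/models/my_model/versions/v1", "foo")
#   op = ml.projects().models().versions().patch(**args).execute()
#   op = wait_for_operation(
#       lambda: ml.projects().operations().get(name=op["name"]).execute())
```

The helpers are pure functions so the polling logic can be exercised without network access; only the commented-out lines touch the service.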
@@ -1716,7 +1716,7 @@
<pre>Designates a version to be the default for the model.
The default version is used for prediction requests made against the model
-that don't specify a version.
+that don't specify a version.
The first version to be created for a model is automatically set as the
default. You must make any subsequent changes to the default version
@@ -1746,25 +1746,37 @@
# prediction requests. A model can have multiple versions. You can get
# information about all of the versions of a given model by calling
# projects.models.versions.list.
- "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
- # Only specify this field if you have specified a Compute Engine (N1) machine
- # type in the `machineType` field. Learn more about [using GPUs for online
- # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
- # Note that the AcceleratorConfig can be used in both Jobs and Versions.
- # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
- # [accelerators for online
- # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
- "count": "A String", # The number of accelerators to attach to each machine running the job.
- "type": "A String", # The type of accelerator to use.
+ "state": "A String", # Output only. The state of a version.
+ "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+ # model. You should generally use `auto_scaling` with an appropriate
+ # `min_nodes` instead, but this option is available if you want more
+ # predictable billing. Beware that latency and error rates will increase
+      # if the traffic exceeds the capability of the system to serve it based
+ # on the selected number of nodes.
+ "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
+ # starting from the time the model is deployed, so the cost of operating
+ # this model will be proportional to `nodes` * number of hours since
+ # last billing cycle plus the cost for each prediction performed.
},
- "labels": { # Optional. One or more labels that you can add, to organize your model
- # versions. Each label is a key-value pair, where both the key and the value
- # are arbitrary strings that you supply.
- # For more information, see the documentation on
- # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
- "a_key": "A String",
- },
- "predictionClass": "A String", # Optional. The fully qualified name
+ "name": "A String", # Required. The name specified for the version when it was created.
+ #
+ # The version name must be unique within the model it is created in.
+ "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
+ "pythonVersion": "A String", # Required. The version of Python used in prediction.
+ #
+ # The following Python versions are available:
+ #
+ # * Python '3.7' is available when `runtime_version` is set to '1.15' or
+ # later.
+ # * Python '3.5' is available when `runtime_version` is set to a version
+ # from '1.4' to '1.14'.
+ # * Python '2.7' is available when `runtime_version` is set to '1.15' or
+ # earlier.
+ #
+ # Read more about the Python versions available for [each runtime
+ # version](/ml-engine/docs/runtime-version-list).
+ "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
+ "predictionClass": "A String", # Optional. The fully qualified name
# (<var>module_name</var>.<var>class_name</var>) of a class that implements
# the Predictor interface described in this reference field. The module
# containing this class should be included in a package provided to the
@@ -1779,12 +1791,12 @@
#
# The following code sample provides the Predictor interface:
#
- # <pre style="max-width: 626px;">
+ # <pre style="max-width: 626px;">
# class Predictor(object):
- # """Interface for constructing custom predictors."""
+ # """Interface for constructing custom predictors."""
#
# def predict(self, instances, **kwargs):
- # """Performs custom prediction.
+ # """Performs custom prediction.
#
# Instances are the decoded values from the request. They have already
# been deserialized from JSON.
@@ -1797,12 +1809,12 @@
# Returns:
# A list of outputs containing the prediction results. This list must
# be JSON serializable.
- # """
+ # """
# raise NotImplementedError()
#
# @classmethod
# def from_path(cls, model_dir):
- # """Creates an instance of Predictor using the given path.
+ # """Creates an instance of Predictor using the given path.
#
# Loading of the predictor should be done in this method.
#
@@ -1813,15 +1825,13 @@
#
# Returns:
# An instance implementing this Predictor class.
- # """
+ # """
# raise NotImplementedError()
# </pre>
#
# Learn more about [the Predictor interface and custom prediction
# routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
- "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
- "state": "A String", # Output only. The state of a version.
- "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+ "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
# prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
# or [scikit-learn pipelines with custom
# code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
@@ -1835,17 +1845,45 @@
#
# If you specify this field, you must also set
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
- "A String",
+ "A String",
],
- "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
- # prevent simultaneous updates of a model from overwriting each other.
- # It is strongly suggested that systems make use of the `etag` in the
- # read-modify-write cycle to perform model updates in order to avoid race
- # conditions: An `etag` is returned in the response to `GetVersion`, and
- # systems are expected to put that etag in the request to `UpdateVersion` to
- # ensure that their change will be applied to the model as intended.
- "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
- "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
+ "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
+ # Some explanation features require additional metadata to be loaded
+ # as part of the model payload.
+ # There are two feature attribution methods supported for TensorFlow models:
+ # integrated gradients and sampled Shapley.
+ # [Learn more about feature
+ # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
+ "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
+      # of the model's fully differentiable structure. Refer to this paper for
+      # more details: https://arxiv.org/abs/1703.01365
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+ # A good value to start is 50 and gradually increase until the
+ # sum to diff property is met within the desired error range.
+ },
+ "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
+ # contribute to the label being predicted. A sampling strategy is used to
+ # approximate the value rather than considering all subsets of features.
+ "numPaths": 42, # The number of feature permutations to consider when approximating the
+ # Shapley values.
+ },
+ "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
+ # of the model's fully differentiable structure. Refer to this paper for
+ # more details: https://arxiv.org/abs/1906.02825
+ # Currently only implemented for models with natural image inputs.
+ "numIntegralSteps": 42, # Number of steps for approximating the path integral.
+ # A good value to start is 50 and gradually increase until the
+ # sum to diff property is met within the desired error range.
+ },
+ },
+ "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
# create the version. See the
# [guide to model
# deployment](/ml-engine/docs/tensorflow/deploying-models) for more
@@ -1856,120 +1894,16 @@
# the model service uses the specified location as the source of the model.
# Once deployed, the model version is hosted by the prediction service, so
# this location is useful only as a historical record.
- # The total number of model files can't exceed 1000.
- "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
- # Some explanation features require additional metadata to be loaded
- # as part of the model payload.
- # There are two feature attribution methods supported for TensorFlow models:
- # integrated gradients and sampled Shapley.
- # [Learn more about feature
- # attributions.](/ml-engine/docs/ai-explanations/overview)
- "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: https://arxiv.org/abs/1906.02825
- # Currently only implemented for models with natural image inputs.
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: https://arxiv.org/abs/1906.02825
- # Currently only implemented for models with natural image inputs.
- "numIntegralSteps": 42, # Number of steps for approximating the path integral.
- # A good value to start is 50 and gradually increase until the
- # sum to diff property is met within the desired error range.
- },
- "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
- # contribute to the label being predicted. A sampling strategy is used to
- # approximate the value rather than considering all subsets of features.
- # contribute to the label being predicted. A sampling strategy is used to
- # approximate the value rather than considering all subsets of features.
- "numPaths": 42, # The number of feature permutations to consider when approximating the
- # Shapley values.
- },
- "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
- # of the model's fully differentiable structure. Refer to this paper for
- # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
- "numIntegralSteps": 42, # Number of steps for approximating the path integral.
- # A good value to start is 50 and gradually increase until the
- # sum to diff property is met within the desired error range.
- },
- },
- "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
- # requests that do not specify a version.
- #
- # You can change the default version by calling
- # projects.methods.versions.setDefault.
- "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
- # applies to online prediction service. If this field is not specified, it
- # defaults to `mls1-c1-m2`.
- #
- # Online prediction supports the following machine types:
- #
- # * `mls1-c1-m2`
- # * `mls1-c4-m2`
- # * `n1-standard-2`
- # * `n1-standard-4`
- # * `n1-standard-8`
- # * `n1-standard-16`
- # * `n1-standard-32`
- # * `n1-highmem-2`
- # * `n1-highmem-4`
- # * `n1-highmem-8`
- # * `n1-highmem-16`
- # * `n1-highmem-32`
- # * `n1-highcpu-2`
- # * `n1-highcpu-4`
- # * `n1-highcpu-8`
- # * `n1-highcpu-16`
- # * `n1-highcpu-32`
- #
- # `mls1-c1-m2` is generally available. All other machine types are available
- # in beta. Learn more about the [differences between machine
- # types](/ml-engine/docs/machine-types-online-prediction).
- "description": "A String", # Optional. The description specified for the version when it was created.
- "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
- #
- # For more information, see the
- # [runtime version list](/ml-engine/docs/runtime-version-list) and
- # [how to manage runtime versions](/ml-engine/docs/versioning).
- "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
- # model. You should generally use `auto_scaling` with an appropriate
- # `min_nodes` instead, but this option is available if you want more
- # predictable billing. Beware that latency and error rates will increase
- # if the traffic exceeds that capability of the system to serve it based
- # on the selected number of nodes.
- "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
- # starting from the time the model is deployed, so the cost of operating
- # this model will be proportional to `nodes` * number of hours since
- # last billing cycle plus the cost for each prediction performed.
- },
- "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
- "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
- # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
- # `XGBOOST`. If you do not specify a framework, AI Platform
- # will analyze files in the deployment_uri to determine a framework. If you
- # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
- # of the model to 1.4 or greater.
- #
- # Do **not** specify a framework if you're deploying a [custom
- # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
- #
- # If you specify a [Compute Engine (N1) machine
- # type](/ml-engine/docs/machine-types-online-prediction) in the
- # `machineType` field, you must specify `TENSORFLOW`
- # for the framework.
- "createTime": "A String", # Output only. The time the version was created.
- "name": "A String", # Required. The name specified for the version when it was created.
- #
- # The version name must be unique within the model it is created in.
- "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+ # The total number of model files can't exceed 1000.
+ "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
# response to increases and decreases in traffic. Care should be
- # taken to ramp up traffic according to the model's ability to scale
+ # taken to ramp up traffic according to the model's ability to scale
# or you will start seeing increases in latency and 429 response codes.
#
# Note that you cannot use AutoScaling if your version uses
      # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
# `manual_scaling`.
- "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+ "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
# nodes are always up, starting from the time the model is deployed.
# Therefore, the cost of operating this model will be at least
# `rate` * `min_nodes` * number of hours since last billing cycle,
@@ -2004,32 +1938,27 @@
# <pre>
# update_body.json:
# {
- # 'autoScaling': {
- # 'minNodes': 5
+ # 'autoScaling': {
+ # 'minNodes': 5
# }
# }
# </pre>
# HTTP request:
- # <pre style="max-width: 626px;">
+ # <pre style="max-width: 626px;">
# PATCH
# https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
# -d @./update_body.json
# </pre>
},
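The `update_body.json` / PATCH example above can be sketched with the Python discovery client. A minimal sketch, assuming an already-authorized service object; the project, model, and version names are placeholders, and the network call itself is left commented out:

```python
# Sketch of the PATCH example above (resource names are placeholders).
# Only fields named in update_mask are applied by the server, so the
# body carries just the path being changed.
update_body = {"autoScaling": {"minNodes": 5}}
update_mask = "autoScaling.minNodes"

# With an authorized google-api-python-client service object this would be:
#   ml = googleapiclient.discovery.build("ml", "v1")
#   ml.projects().models().versions().patch(
#       name="projects/my-project/models/my-model/versions/v1",
#       body=update_body,
#       updateMask=update_mask,
#   ).execute()
```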
- "pythonVersion": "A String", # Required. The version of Python used in prediction.
- #
- # The following Python versions are available:
- #
- # * Python '3.7' is available when `runtime_version` is set to '1.15' or
- # later.
- # * Python '3.5' is available when `runtime_version` is set to a version
- # from '1.4' to '1.14'.
- # * Python '2.7' is available when `runtime_version` is set to '1.15' or
- # earlier.
- #
- # Read more about the Python versions available for [each runtime
- # version](/ml-engine/docs/runtime-version-list).
- "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+ "labels": { # Optional. One or more labels that you can add, to organize your model
+ # versions. Each label is a key-value pair, where both the key and the value
+ # are arbitrary strings that you supply.
+ # For more information, see the documentation on
+ # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+ "a_key": "A String",
+ },
+ "createTime": "A String", # Output only. The time the version was created.
+ "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
# projects.models.versions.patch
# request. Specifying it in a
# projects.models.versions.create
@@ -2048,19 +1977,16 @@
# evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
# specify this configuration manually. Setting up continuous evaluation
# automatically enables logging of request-response pairs.
- "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
- # For example, if you want to log 10% of requests, enter `0.1`. The sampling
- # window is the lifetime of the model version. Defaults to 0.
- "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
- # "<var>project_id</var>.<var>dataset_name</var>.<var>table_name</var>"
+ "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
+ # "<var>project_id</var>.<var>dataset_name</var>.<var>table_name</var>"
#
- # The specified table must already exist, and the "Cloud ML Service Agent"
+ # The specified table must already exist, and the "Cloud ML Service Agent"
# for your project must have permission to write to it. The table must have
# the following [schema](/bigquery/docs/schemas):
#
# <table>
- # <tr><th>Field name</th><th style="display: table-cell">Type</th>
- # <th style="display: table-cell">Mode</th></tr>
+ # <tr><th>Field name</th><th style="display: table-cell">Type</th>
+ # <th style="display: table-cell">Mode</th></tr>
# <tr><td>model</td><td>STRING</td><td>REQUIRED</td></tr>
# <tr><td>model_version</td><td>STRING</td><td>REQUIRED</td></tr>
# <tr><td>time</td><td>TIMESTAMP</td><td>REQUIRED</td></tr>
@@ -2068,6 +1994,80 @@
# <tr><td>raw_prediction</td><td>STRING</td><td>NULLABLE</td></tr>
# <tr><td>groundtruth</td><td>STRING</td><td>NULLABLE</td></tr>
# </table>
+ "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+ # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+ # window is the lifetime of the model version. Defaults to 0.
+ },
+ "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+ "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
+ # applies to online prediction service. If this field is not specified, it
+ # defaults to `mls1-c1-m2`.
+ #
+ # Online prediction supports the following machine types:
+ #
+ # * `mls1-c1-m2`
+ # * `mls1-c4-m2`
+ # * `n1-standard-2`
+ # * `n1-standard-4`
+ # * `n1-standard-8`
+ # * `n1-standard-16`
+ # * `n1-standard-32`
+ # * `n1-highmem-2`
+ # * `n1-highmem-4`
+ # * `n1-highmem-8`
+ # * `n1-highmem-16`
+ # * `n1-highmem-32`
+ # * `n1-highcpu-2`
+ # * `n1-highcpu-4`
+ # * `n1-highcpu-8`
+ # * `n1-highcpu-16`
+ # * `n1-highcpu-32`
+ #
+ # `mls1-c1-m2` is generally available. All other machine types are available
+ # in beta. Learn more about the [differences between machine
+ # types](/ml-engine/docs/machine-types-online-prediction).
+ "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
+ #
+ # For more information, see the
+ # [runtime version list](/ml-engine/docs/runtime-version-list) and
+ # [how to manage runtime versions](/ml-engine/docs/versioning).
+ "description": "A String", # Optional. The description specified for the version when it was created.
+ "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
+ # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+ # `XGBOOST`. If you do not specify a framework, AI Platform
+ # will analyze files in the deployment_uri to determine a framework. If you
+ # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+ # of the model to 1.4 or greater.
+ #
+ # Do **not** specify a framework if you're deploying a [custom
+ # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+ #
+ # If you specify a [Compute Engine (N1) machine
+ # type](/ml-engine/docs/machine-types-online-prediction) in the
+ # `machineType` field, you must specify `TENSORFLOW`
+ # for the framework.
+ "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+ # prevent simultaneous updates of a model from overwriting each other.
+ # It is strongly suggested that systems make use of the `etag` in the
+ # read-modify-write cycle to perform model updates in order to avoid race
+ # conditions: An `etag` is returned in the response to `GetVersion`, and
+ # systems are expected to put that etag in the request to `UpdateVersion` to
+ # ensure that their change will be applied to the model as intended.
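The read-modify-write cycle described for `etag` can be sketched without any network calls; here `current` stands in for a `GetVersion` response, and the etag value is a placeholder:

```python
# `current` stands in for the response to GetVersion (placeholder values).
current = {
    "name": "projects/my-project/models/my-model/versions/v1",
    "description": "old description",
    "etag": "BwWWja0YfJA=",
}

# Carry the etag from the read into the update, so that if another writer
# modified the version in between, the server can reject this request
# instead of silently overwriting the concurrent change.
update_body = {"description": "new description", "etag": current["etag"]}
```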
+ "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
+ # requests that do not specify a version.
+ #
+ # You can change the default version by calling
+      # projects.models.versions.setDefault.
+ "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+ # Only specify this field if you have specified a Compute Engine (N1) machine
+ # type in the `machineType` field. Learn more about [using GPUs for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+ # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+ # [accelerators for online
+ # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+ "count": "A String", # The number of accelerators to attach to each machine running the job.
+ "type": "A String", # The type of accelerator to use.
},
}</pre>
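Putting the field constraints above together, a sketch of a Version body that requests GPUs: the docs require a Compute Engine (N1) `machineType`, `framework` set to `TENSORFLOW`, and an `acceleratorConfig` (with `manualScaling` rather than `autoScaling`). All concrete values here (bucket path, machine type, accelerator type, count) are illustrative only:

```python
# Illustrative Version body pairing an N1 machine type with GPUs, per the
# field docs above (values are placeholders, not recommendations).
version_body = {
    "name": "v_gpu",
    "deploymentUri": "gs://my-bucket/model/",   # placeholder bucket
    "runtimeVersion": "1.15",
    "framework": "TENSORFLOW",       # required when using an N1 machine type
    "machineType": "n1-standard-4",  # Compute Engine (N1) type, beta
    "acceleratorConfig": {
        "count": "1",                # int64 fields arrive as JSON strings
        "type": "NVIDIA_TESLA_K80",  # illustrative accelerator type
    },
}
```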
</div>