docs: docs update (#911)

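For context, the regenerated reference below documents the `Model` resource used by `projects.models.create` in the AI Platform Training and Prediction API (`ml`, v1). A minimal request body matching the documented fields might look like this; all values are illustrative, and only `name` is required:

```python
# Minimal Model resource body as documented for projects.models.create.
# Field values are illustrative placeholders, not real project data.
model_body = {
    "name": "my_model",                       # must be unique within the project
    "description": "Example model",
    "regions": ["us-central1"],               # only one region per model is supported
    "labels": {"team": "research"},           # arbitrary key-value strings you supply
    "onlinePredictionLogging": False,         # access logs to Stackdriver; default False
    "onlinePredictionConsoleLogging": False,  # stderr/stdout logs; default False
}

# With the discovery-based client, this body would typically be passed as:
#   ml = googleapiclient.discovery.build("ml", "v1")
#   ml.projects().models().create(
#       parent="projects/PROJECT_ID", body=model_body
#   ).execute()
```

Note that the `list` signature change in this diff only reorders keyword parameters (`filter`, `pageToken`, `pageSize`), so existing callers that pass them by keyword are unaffected.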
diff --git a/docs/dyn/ml_v1.projects.models.html b/docs/dyn/ml_v1.projects.models.html
index d7d8c66..fbb4b05 100644
--- a/docs/dyn/ml_v1.projects.models.html
+++ b/docs/dyn/ml_v1.projects.models.html
@@ -92,7 +92,7 @@
   <code><a href="#getIamPolicy">getIamPolicy(resource, options_requestedPolicyVersion=None, x__xgafv=None)</a></code></p>
 <p class="firstline">Gets the access control policy for a resource.</p>
 <p class="toc_element">
-  <code><a href="#list">list(parent, pageSize=None, pageToken=None, x__xgafv=None, filter=None)</a></code></p>
+  <code><a href="#list">list(parent, filter=None, pageToken=None, pageSize=None, x__xgafv=None)</a></code></p>
 <p class="firstline">Lists the models in a project.</p>
 <p class="toc_element">
   <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
@@ -121,449 +121,23 @@
     The object takes the form of:
 
 { # Represents a machine learning solution.
-    # 
-    # A model can have multiple versions, each of which is a deployed, trained
-    # model ready to receive prediction requests. The model itself is just a
-    # container.
-  "name": "A String", # Required. The name specified for the model when it was created.
       # 
-      # The model name must be unique within the project it is created in.
-  "onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
-      # streams to Stackdriver Logging. These can be more verbose than the standard
-      # access logs (see `onlinePredictionLogging`) and can incur higher cost.
-      # However, they are helpful for debugging. Note that
-      # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
-      # your project receives prediction requests at a high QPS. Estimate your
-      # costs before enabling this option.
-      # 
-      # Default is false.
-  "labels": { # Optional. One or more labels that you can add, to organize your models.
-      # Each label is a key-value pair, where both the key and the value are
-      # arbitrary strings that you supply.
-      # For more information, see the documentation on
-      # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
-    "a_key": "A String",
-  },
-  "regions": [ # Optional. The list of regions where the model is going to be deployed.
-      # Only one region per model is supported.
-      # Defaults to 'us-central1' if nothing is set.
-      # See the &lt;a href="/ml-engine/docs/tensorflow/regions"&gt;available regions&lt;/a&gt;
-      # for AI Platform services.
-      # Note:
-      # *   No matter where a model is deployed, it can always be accessed by
-      #     users from anywhere, both for online and batch prediction.
-      # *   The region for a batch prediction job is set by the region field when
-      #     submitting the batch prediction job and does not take its value from
-      #     this field.
-    "A String",
-  ],
-  "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
-      # prevent simultaneous updates of a model from overwriting each other.
-      # It is strongly suggested that systems make use of the `etag` in the
-      # read-modify-write cycle to perform model updates in order to avoid race
-      # conditions: An `etag` is returned in the response to `GetModel`, and
-      # systems are expected to put that etag in the request to `UpdateModel` to
-      # ensure that their change will be applied to the model as intended.
-  "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
-      # handle prediction requests that do not specify a version.
-      # 
-      # You can change the default version by calling
-      # projects.models.versions.setDefault.
-      #
-      # Each version is a trained model deployed in the cloud, ready to handle
-      # prediction requests. A model can have multiple versions. You can get
-      # information about all of the versions of a given model by calling
-      # projects.models.versions.list.
-    "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
-        # Only specify this field if you have specified a Compute Engine (N1) machine
-        # type in the `machineType` field. Learn more about [using GPUs for online
-        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-        # Note that the AcceleratorConfig can be used in both Jobs and Versions.
-        # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
-        # [accelerators for online
-        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-      "count": "A String", # The number of accelerators to attach to each machine running the job.
-      "type": "A String", # The type of accelerator to use.
-    },
-    "labels": { # Optional. One or more labels that you can add, to organize your model
-        # versions. Each label is a key-value pair, where both the key and the value
-        # are arbitrary strings that you supply.
-        # For more information, see the documentation on
-        # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
-      "a_key": "A String",
-    },
-    "predictionClass": "A String", # Optional. The fully qualified name
-        # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
-        # the Predictor interface described in this reference field. The module
-        # containing this class should be included in a package provided to the
-        # [`packageUris` field](#Version.FIELDS.package_uris).
-        #
-        # Specify this field if and only if you are deploying a [custom prediction
-        # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
-        # If you specify this field, you must set
-        # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
-        # you must set `machineType` to a [legacy (MLS1)
-        # machine type](/ml-engine/docs/machine-types-online-prediction).
-        #
-        # The following code sample provides the Predictor interface:
-        #
-        # &lt;pre style="max-width: 626px;"&gt;
-        # class Predictor(object):
-        # """Interface for constructing custom predictors."""
-        #
-        # def predict(self, instances, **kwargs):
-        #     """Performs custom prediction.
-        #
-        #     Instances are the decoded values from the request. They have already
-        #     been deserialized from JSON.
-        #
-        #     Args:
-        #         instances: A list of prediction input instances.
-        #         **kwargs: A dictionary of keyword args provided as additional
-        #             fields on the predict request body.
-        #
-        #     Returns:
-        #         A list of outputs containing the prediction results. This list must
-        #         be JSON serializable.
-        #     """
-        #     raise NotImplementedError()
-        #
-        # @classmethod
-        # def from_path(cls, model_dir):
-        #     """Creates an instance of Predictor using the given path.
-        #
-        #     Loading of the predictor should be done in this method.
-        #
-        #     Args:
-        #         model_dir: The local directory that contains the exported model
-        #             file along with any additional files uploaded when creating the
-        #             version resource.
-        #
-        #     Returns:
-        #         An instance implementing this Predictor class.
-        #     """
-        #     raise NotImplementedError()
-        # &lt;/pre&gt;
-        #
-        # Learn more about [the Predictor interface and custom prediction
-        # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
-    "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
-    "state": "A String", # Output only. The state of a version.
-    "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
-        # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
-        # or [scikit-learn pipelines with custom
-        # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
-        #
-        # For a custom prediction routine, one of these packages must contain your
-        # Predictor class (see
-        # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
-        # include any dependencies used by your Predictor or scikit-learn pipeline
-        # uses that are not already included in your selected [runtime
-        # version](/ml-engine/docs/tensorflow/runtime-version-list).
-        #
-        # If you specify this field, you must also set
-        # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
-      "A String",
-    ],
-    "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
-        # prevent simultaneous updates of a model from overwriting each other.
-        # It is strongly suggested that systems make use of the `etag` in the
-        # read-modify-write cycle to perform model updates in order to avoid race
-        # conditions: An `etag` is returned in the response to `GetVersion`, and
-        # systems are expected to put that etag in the request to `UpdateVersion` to
-        # ensure that their change will be applied to the model as intended.
-    "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
-    "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
-        # create the version. See the
-        # [guide to model
-        # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
-        # information.
-        #
-        # When passing Version to
-        # projects.models.versions.create
-        # the model service uses the specified location as the source of the model.
-        # Once deployed, the model version is hosted by the prediction service, so
-        # this location is useful only as a historical record.
-        # The total number of model files can't exceed 1000.
-    "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
-        # Some explanation features require additional metadata to be loaded
-        # as part of the model payload.
-        # There are two feature attribution methods supported for TensorFlow models:
-        # integrated gradients and sampled Shapley.
-        # [Learn more about feature
-        # attributions.](/ml-engine/docs/ai-explanations/overview)
-      "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
-          # of the model's fully differentiable structure. Refer to this paper for
-          # more details: https://arxiv.org/abs/1906.02825
-          # Currently only implemented for models with natural image inputs.
-          # of the model's fully differentiable structure. Refer to this paper for
-          # more details: https://arxiv.org/abs/1906.02825
-          # Currently only implemented for models with natural image inputs.
-        "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-            # A good value to start is 50 and gradually increase until the
-            # sum to diff property is met within the desired error range.
-      },
-      "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
-          # contribute to the label being predicted. A sampling strategy is used to
-          # approximate the value rather than considering all subsets of features.
-          # contribute to the label being predicted. A sampling strategy is used to
-          # approximate the value rather than considering all subsets of features.
-        "numPaths": 42, # The number of feature permutations to consider when approximating the
-            # Shapley values.
-      },
-      "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
-          # of the model's fully differentiable structure. Refer to this paper for
-          # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
-          # of the model's fully differentiable structure. Refer to this paper for
-          # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
-        "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-            # A good value to start is 50 and gradually increase until the
-            # sum to diff property is met within the desired error range.
-      },
-    },
-    "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
-        # requests that do not specify a version.
-        #
-        # You can change the default version by calling
-        # projects.methods.versions.setDefault.
-    "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
-        # applies to online prediction service. If this field is not specified, it
-        # defaults to `mls1-c1-m2`.
-        #
-        # Online prediction supports the following machine types:
-        #
-        # * `mls1-c1-m2`
-        # * `mls1-c4-m2`
-        # * `n1-standard-2`
-        # * `n1-standard-4`
-        # * `n1-standard-8`
-        # * `n1-standard-16`
-        # * `n1-standard-32`
-        # * `n1-highmem-2`
-        # * `n1-highmem-4`
-        # * `n1-highmem-8`
-        # * `n1-highmem-16`
-        # * `n1-highmem-32`
-        # * `n1-highcpu-2`
-        # * `n1-highcpu-4`
-        # * `n1-highcpu-8`
-        # * `n1-highcpu-16`
-        # * `n1-highcpu-32`
-        #
-        # `mls1-c1-m2` is generally available. All other machine types are available
-        # in beta. Learn more about the [differences between machine
-        # types](/ml-engine/docs/machine-types-online-prediction).
-    "description": "A String", # Optional. The description specified for the version when it was created.
-    "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
-        #
-        # For more information, see the
-        # [runtime version list](/ml-engine/docs/runtime-version-list) and
-        # [how to manage runtime versions](/ml-engine/docs/versioning).
-    "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
-        # model. You should generally use `auto_scaling` with an appropriate
-        # `min_nodes` instead, but this option is available if you want more
-        # predictable billing. Beware that latency and error rates will increase
-        # if the traffic exceeds that capability of the system to serve it based
-        # on the selected number of nodes.
-      "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
-          # starting from the time the model is deployed, so the cost of operating
-          # this model will be proportional to `nodes` * number of hours since
-          # last billing cycle plus the cost for each prediction performed.
-    },
-    "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
-    "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
-        # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
-        # `XGBOOST`. If you do not specify a framework, AI Platform
-        # will analyze files in the deployment_uri to determine a framework. If you
-        # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
-        # of the model to 1.4 or greater.
-        #
-        # Do **not** specify a framework if you're deploying a [custom
-        # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
-        #
-        # If you specify a [Compute Engine (N1) machine
-        # type](/ml-engine/docs/machine-types-online-prediction) in the
-        # `machineType` field, you must specify `TENSORFLOW`
-        # for the framework.
-    "createTime": "A String", # Output only. The time the version was created.
-    "name": "A String", # Required. The name specified for the version when it was created.
-        #
-        # The version name must be unique within the model it is created in.
-    "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
-        # response to increases and decreases in traffic. Care should be
-        # taken to ramp up traffic according to the model's ability to scale
-        # or you will start seeing increases in latency and 429 response codes.
-        #
-        # Note that you cannot use AutoScaling if your version uses
-        # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use specify
-        # `manual_scaling`.
-      "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
-          # nodes are always up, starting from the time the model is deployed.
-          # Therefore, the cost of operating this model will be at least
-          # `rate` * `min_nodes` * number of hours since last billing cycle,
-          # where `rate` is the cost per node-hour as documented in the
-          # [pricing guide](/ml-engine/docs/pricing),
-          # even if no predictions are performed. There is additional cost for each
-          # prediction performed.
-          #
-          # Unlike manual scaling, if the load gets too heavy for the nodes
-          # that are up, the service will automatically add nodes to handle the
-          # increased load as well as scale back as traffic drops, always maintaining
-          # at least `min_nodes`. You will be charged for the time in which additional
-          # nodes are used.
-          #
-          # If `min_nodes` is not specified and AutoScaling is used with a [legacy
-          # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
-          # `min_nodes` defaults to 0, in which case, when traffic to a model stops
-          # (and after a cool-down period), nodes will be shut down and no charges will
-          # be incurred until traffic to the model resumes.
-          #
-          # If `min_nodes` is not specified and AutoScaling is used with a [Compute
-          # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
-          # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
-          # Compute Engine machine type.
-          #
-          # Note that you cannot use AutoScaling if your version uses
-          # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
-          # ManualScaling.
-          #
-          # You can set `min_nodes` when creating the model version, and you can also
-          # update `min_nodes` for an existing version:
-          # &lt;pre&gt;
-          # update_body.json:
-          # {
-          #   'autoScaling': {
-          #     'minNodes': 5
-          #   }
-          # }
-          # &lt;/pre&gt;
-          # HTTP request:
-          # &lt;pre style="max-width: 626px;"&gt;
-          # PATCH
-          # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
-          # -d @./update_body.json
-          # &lt;/pre&gt;
-    },
-    "pythonVersion": "A String", # Required. The version of Python used in prediction.
-        #
-        # The following Python versions are available:
-        #
-        # * Python '3.7' is available when `runtime_version` is set to '1.15' or
-        #   later.
-        # * Python '3.5' is available when `runtime_version` is set to a version
-        #   from '1.4' to '1.14'.
-        # * Python '2.7' is available when `runtime_version` is set to '1.15' or
-        #   earlier.
-        #
-        # Read more about the Python versions available for [each runtime
-        # version](/ml-engine/docs/runtime-version-list).
-    "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
-        # projects.models.versions.patch
-        # request. Specifying it in a
-        # projects.models.versions.create
-        # request has no effect.
-        #
-        # Configures the request-response pair logging on predictions from this
-        # Version.
-        # Online prediction requests to a model version and the responses to these
-        # requests are converted to raw strings and saved to the specified BigQuery
-        # table. Logging is constrained by [BigQuery quotas and
-        # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
-        # AI Platform Prediction does not log request-response pairs, but it continues
-        # to serve predictions.
-        #
-        # If you are using [continuous
-        # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
-        # specify this configuration manually. Setting up continuous evaluation
-        # automatically enables logging of request-response pairs.
-      "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
-          # For example, if you want to log 10% of requests, enter `0.1`. The sampling
-          # window is the lifetime of the model version. Defaults to 0.
-      "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
-          # "&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;"
-          #
-          # The specified table must already exist, and the "Cloud ML Service Agent"
-          # for your project must have permission to write to it. The table must have
-          # the following [schema](/bigquery/docs/schemas):
-          #
-          # &lt;table&gt;
-          #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style="display: table-cell"&gt;Type&lt;/th&gt;
-          #     &lt;th style="display: table-cell"&gt;Mode&lt;/th&gt;&lt;/tr&gt;
-          #   &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
-          #   &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
-          #   &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
-          #   &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
-          #   &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
-          #   &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
-          # &lt;/table&gt;
-    },
-  },
-  "onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to StackDriver
-      # Logging. These logs are like standard server access logs, containing
-      # information like timestamp and latency for each request. Note that
-      # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
-      # your project receives prediction requests at a high queries per second rate
-      # (QPS). Estimate your costs before enabling this option.
-      # 
-      # Default is false.
-  "description": "A String", # Optional. The description specified for the model when it was created.
-}
-
-  x__xgafv: string, V1 error format.
-    Allowed values
-      1 - v1 error format
-      2 - v2 error format
-
-Returns:
-  An object of the form:
-
-    { # Represents a machine learning solution.
-      #
       # A model can have multiple versions, each of which is a deployed, trained
       # model ready to receive prediction requests. The model itself is just a
       # container.
-    "name": "A String", # Required. The name specified for the model when it was created.
-        #
-        # The model name must be unique within the project it is created in.
-    "onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
-        # streams to Stackdriver Logging. These can be more verbose than the standard
-        # access logs (see `onlinePredictionLogging`) and can incur higher cost.
-        # However, they are helpful for debugging. Note that
-        # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
-        # your project receives prediction requests at a high QPS. Estimate your
-        # costs before enabling this option.
-        #
-        # Default is false.
-    "labels": { # Optional. One or more labels that you can add, to organize your models.
+    &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your models.
         # Each label is a key-value pair, where both the key and the value are
         # arbitrary strings that you supply.
         # For more information, see the documentation on
-        # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
-      "a_key": "A String",
+        # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
+      &quot;a_key&quot;: &quot;A String&quot;,
     },
-    "regions": [ # Optional. The list of regions where the model is going to be deployed.
-        # Only one region per model is supported.
-        # Defaults to 'us-central1' if nothing is set.
-        # See the &lt;a href="/ml-engine/docs/tensorflow/regions"&gt;available regions&lt;/a&gt;
-        # for AI Platform services.
-        # Note:
-        # *   No matter where a model is deployed, it can always be accessed by
-        #     users from anywhere, both for online and batch prediction.
-        # *   The region for a batch prediction job is set by the region field when
-        #     submitting the batch prediction job and does not take its value from
-        #     this field.
-      "A String",
-    ],
-    "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
-        # prevent simultaneous updates of a model from overwriting each other.
-        # It is strongly suggested that systems make use of the `etag` in the
-        # read-modify-write cycle to perform model updates in order to avoid race
-        # conditions: An `etag` is returned in the response to `GetModel`, and
-        # systems are expected to put that etag in the request to `UpdateModel` to
-        # ensure that their change will be applied to the model as intended.
-    "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
+    &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the model when it was created.
+        # 
+        # The model name must be unique within the project it is created in.
+    &quot;defaultVersion&quot;: { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
         # handle prediction requests that do not specify a version.
-        #
+        # 
         # You can change the default version by calling
         # projects.models.versions.setDefault.
         #
@@ -571,25 +145,37 @@
         # prediction requests. A model can have multiple versions. You can get
         # information about all of the versions of a given model by calling
         # projects.models.versions.list.
-      "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
-          # Only specify this field if you have specified a Compute Engine (N1) machine
-          # type in the `machineType` field. Learn more about [using GPUs for online
-          # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-          # Note that the AcceleratorConfig can be used in both Jobs and Versions.
-          # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
-          # [accelerators for online
-          # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-        "count": "A String", # The number of accelerators to attach to each machine running the job.
-        "type": "A String", # The type of accelerator to use.
+      &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
+      &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+          # model. You should generally use `auto_scaling` with an appropriate
+          # `min_nodes` instead, but this option is available if you want more
+          # predictable billing. Beware that latency and error rates will increase
+          # if the traffic exceeds that capability of the system to serve it based
+          # on the selected number of nodes.
+        &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
+            # starting from the time the model is deployed, so the cost of operating
+            # this model will be proportional to `nodes` * number of hours since
+            # last billing cycle plus the cost for each prediction performed.
       },
-      "labels": { # Optional. One or more labels that you can add, to organize your model
-          # versions. Each label is a key-value pair, where both the key and the value
-          # are arbitrary strings that you supply.
-          # For more information, see the documentation on
-          # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
-        "a_key": "A String",
-      },
-      "predictionClass": "A String", # Optional. The fully qualified name
+      &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
+          #
+          # The version name must be unique within the model it is created in.
+      &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
+      &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
+          #
+          # The following Python versions are available:
+          #
+          # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+          #   later.
+          # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
+          #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
+          # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+          #   earlier.
+          #
+          # Read more about the Python versions available for [each runtime
+          # version](/ml-engine/docs/runtime-version-list).
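The runtime/Python compatibility rules listed above can be mirrored in a small helper. This is an illustrative sketch of the table, not an API call:

```python
# Hedged helper mirroring the Python-version availability rules above:
# 3.7 needs runtime 1.15+, 3.5 needs runtime 1.4-1.14, 2.7 needs <= 1.15.
def python_versions_for(runtime_version):
    """Return the Python versions available for a given runtime version."""
    major, minor = (int(p) for p in runtime_version.split(".")[:2])
    available = []
    if (major, minor) >= (1, 15):
        available.append("3.7")
    if (1, 4) <= (major, minor) <= (1, 14):
        available.append("3.5")
    if (major, minor) <= (1, 15):
        available.append("2.7")
    return available
```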
+      &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
+      &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
           # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
           # the Predictor interface described in this reference field. The module
           # containing this class should be included in a package provided to the
@@ -604,12 +190,12 @@
           #
           # The following code sample provides the Predictor interface:
           #
-          # &lt;pre style="max-width: 626px;"&gt;
+          # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
           # class Predictor(object):
-          # """Interface for constructing custom predictors."""
+          # &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
           #
           # def predict(self, instances, **kwargs):
-          #     """Performs custom prediction.
+          #     &quot;&quot;&quot;Performs custom prediction.
           #
           #     Instances are the decoded values from the request. They have already
           #     been deserialized from JSON.
@@ -622,12 +208,12 @@
           #     Returns:
           #         A list of outputs containing the prediction results. This list must
           #         be JSON serializable.
-          #     """
+          #     &quot;&quot;&quot;
           #     raise NotImplementedError()
           #
           # @classmethod
           # def from_path(cls, model_dir):
-          #     """Creates an instance of Predictor using the given path.
+          #     &quot;&quot;&quot;Creates an instance of Predictor using the given path.
           #
           #     Loading of the predictor should be done in this method.
           #
@@ -638,15 +224,13 @@
           #
           #     Returns:
           #         An instance implementing this Predictor class.
-          #     """
+          #     &quot;&quot;&quot;
           #     raise NotImplementedError()
           # &lt;/pre&gt;
           #
           # Learn more about [the Predictor interface and custom prediction
           # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
-      "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
-      "state": "A String", # Output only. The state of a version.
-      "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+      &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
           # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
           # or [scikit-learn pipelines with custom
           # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
@@ -660,17 +244,45 @@
           #
           # If you specify this field, you must also set
           # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
-        "A String",
+        &quot;A String&quot;,
       ],
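Putting `predictionClass` and `packageUris` together, a hedged sketch of a create-version body for a custom prediction routine. Every path, package, and class name below is hypothetical:

```python
# Hedged sketch only: predictionClass names a class inside one of the
# packages in packageUris; runtimeVersion must be 1.4 or greater, and
# machineType must be a legacy (MLS1) type. All names are hypothetical.
custom_routine_body = {
    "name": "v1",
    "deploymentUri": "gs://my-bucket/model/",
    "runtimeVersion": "1.15",
    "pythonVersion": "3.7",
    "machineType": "mls1-c1-m2",
    "predictionClass": "my_package.MyPredictor",
    "packageUris": ["gs://my-bucket/packages/my_package-0.1.tar.gz"],
}
```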
-      "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
-          # prevent simultaneous updates of a model from overwriting each other.
-          # It is strongly suggested that systems make use of the `etag` in the
-          # read-modify-write cycle to perform model updates in order to avoid race
-          # conditions: An `etag` is returned in the response to `GetVersion`, and
-          # systems are expected to put that etag in the request to `UpdateVersion` to
-          # ensure that their change will be applied to the model as intended.
-      "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
-      "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
+      &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
+          # Some explanation features require additional metadata to be loaded
+          # as part of the model payload.
+          # There are two feature attribution methods supported for TensorFlow models:
+          # integrated gradients and sampled Shapley.
+          # [Learn more about feature
+          # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
+        &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value, taking advantage
+            # of the model&#x27;s fully differentiable structure. Refer to this paper for
+            # more details: https://arxiv.org/abs/1703.01365
+          &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+              # A good starting value is 50; gradually increase it until the
+              # sum-to-diff property is met within the desired error range.
+        },
+        &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
+            # contribute to the label being predicted. A sampling strategy is used to
+            # approximate the value rather than considering all subsets of features.
+          &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
+              # Shapley values.
+        },
+        &quot;xraiAttribution&quot;: { # Attributes credit by applying the XRAI method, taking advantage
+            # of the model&#x27;s fully differentiable structure. Refer to this paper for
+            # more details: https://arxiv.org/abs/1906.02825
+            # Currently only implemented for models with natural image inputs.
+          &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+              # A good starting value is 50; gradually increase it until the
+              # sum-to-diff property is met within the desired error range.
+        },
+      },
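Exactly one of the three attribution methods is set on `explanationConfig`. A hedged illustration of building the corresponding dicts (the factory function is hypothetical, not part of the library):

```python
# Hedged sketch: build an explanationConfig dict for one attribution method.
def make_explanation_config(method, value):
    """Return an explanationConfig body; `value` is numIntegralSteps or numPaths."""
    if method == "integratedGradients":
        return {"integratedGradientsAttribution": {"numIntegralSteps": value}}
    if method == "sampledShapley":
        return {"sampledShapleyAttribution": {"numPaths": value}}
    if method == "xrai":
        return {"xraiAttribution": {"numIntegralSteps": value}}
    raise ValueError("unknown attribution method: %s" % method)
```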
+      &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
           # create the version. See the
           # [guide to model
           # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
@@ -681,120 +293,16 @@
           # the model service uses the specified location as the source of the model.
           # Once deployed, the model version is hosted by the prediction service, so
           # this location is useful only as a historical record.
-          # The total number of model files can't exceed 1000.
-      "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
-          # Some explanation features require additional metadata to be loaded
-          # as part of the model payload.
-          # There are two feature attribution methods supported for TensorFlow models:
-          # integrated gradients and sampled Shapley.
-          # [Learn more about feature
-          # attributions.](/ml-engine/docs/ai-explanations/overview)
-        "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
-            # of the model's fully differentiable structure. Refer to this paper for
-            # more details: https://arxiv.org/abs/1906.02825
-            # Currently only implemented for models with natural image inputs.
-            # of the model's fully differentiable structure. Refer to this paper for
-            # more details: https://arxiv.org/abs/1906.02825
-            # Currently only implemented for models with natural image inputs.
-          "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-              # A good value to start is 50 and gradually increase until the
-              # sum to diff property is met within the desired error range.
-        },
-        "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
-            # contribute to the label being predicted. A sampling strategy is used to
-            # approximate the value rather than considering all subsets of features.
-            # contribute to the label being predicted. A sampling strategy is used to
-            # approximate the value rather than considering all subsets of features.
-          "numPaths": 42, # The number of feature permutations to consider when approximating the
-              # Shapley values.
-        },
-        "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
-            # of the model's fully differentiable structure. Refer to this paper for
-            # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
-            # of the model's fully differentiable structure. Refer to this paper for
-            # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
-          "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-              # A good value to start is 50 and gradually increase until the
-              # sum to diff property is met within the desired error range.
-        },
-      },
-      "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
-          # requests that do not specify a version.
-          #
-          # You can change the default version by calling
-          # projects.methods.versions.setDefault.
-      "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
-          # applies to online prediction service. If this field is not specified, it
-          # defaults to `mls1-c1-m2`.
-          #
-          # Online prediction supports the following machine types:
-          #
-          # * `mls1-c1-m2`
-          # * `mls1-c4-m2`
-          # * `n1-standard-2`
-          # * `n1-standard-4`
-          # * `n1-standard-8`
-          # * `n1-standard-16`
-          # * `n1-standard-32`
-          # * `n1-highmem-2`
-          # * `n1-highmem-4`
-          # * `n1-highmem-8`
-          # * `n1-highmem-16`
-          # * `n1-highmem-32`
-          # * `n1-highcpu-2`
-          # * `n1-highcpu-4`
-          # * `n1-highcpu-8`
-          # * `n1-highcpu-16`
-          # * `n1-highcpu-32`
-          #
-          # `mls1-c1-m2` is generally available. All other machine types are available
-          # in beta. Learn more about the [differences between machine
-          # types](/ml-engine/docs/machine-types-online-prediction).
-      "description": "A String", # Optional. The description specified for the version when it was created.
-      "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
-          #
-          # For more information, see the
-          # [runtime version list](/ml-engine/docs/runtime-version-list) and
-          # [how to manage runtime versions](/ml-engine/docs/versioning).
-      "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
-          # model. You should generally use `auto_scaling` with an appropriate
-          # `min_nodes` instead, but this option is available if you want more
-          # predictable billing. Beware that latency and error rates will increase
-          # if the traffic exceeds that capability of the system to serve it based
-          # on the selected number of nodes.
-        "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
-            # starting from the time the model is deployed, so the cost of operating
-            # this model will be proportional to `nodes` * number of hours since
-            # last billing cycle plus the cost for each prediction performed.
-      },
-      "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
-      "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
-          # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
-          # `XGBOOST`. If you do not specify a framework, AI Platform
-          # will analyze files in the deployment_uri to determine a framework. If you
-          # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
-          # of the model to 1.4 or greater.
-          #
-          # Do **not** specify a framework if you're deploying a [custom
-          # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
-          #
-          # If you specify a [Compute Engine (N1) machine
-          # type](/ml-engine/docs/machine-types-online-prediction) in the
-          # `machineType` field, you must specify `TENSORFLOW`
-          # for the framework.
-      "createTime": "A String", # Output only. The time the version was created.
-      "name": "A String", # Required. The name specified for the version when it was created.
-          #
-          # The version name must be unique within the model it is created in.
-      "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+          # The total number of model files can&#x27;t exceed 1000.
+      &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
           # response to increases and decreases in traffic. Care should be
-          # taken to ramp up traffic according to the model's ability to scale
+          # taken to ramp up traffic according to the model&#x27;s ability to scale
           # or you will start seeing increases in latency and 429 response codes.
           #
           # Note that you cannot use AutoScaling if your version uses
           # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
           # `manual_scaling`.
-        "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+        &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
             # nodes are always up, starting from the time the model is deployed.
             # Therefore, the cost of operating this model will be at least
             # `rate` * `min_nodes` * number of hours since last billing cycle,
@@ -829,32 +337,27 @@
             # &lt;pre&gt;
             # update_body.json:
             # {
-            #   'autoScaling': {
-            #     'minNodes': 5
+            #   &#x27;autoScaling&#x27;: {
+            #     &#x27;minNodes&#x27;: 5
             #   }
             # }
             # &lt;/pre&gt;
             # HTTP request:
-            # &lt;pre style="max-width: 626px;"&gt;
+            # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
             # PATCH
             # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
             # -d @./update_body.json
             # &lt;/pre&gt;
       },
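A Python-client sketch of the same `minNodes` PATCH shown above. The project and model names are hypothetical, and the API call is left commented out because it requires credentials; only the request body is executable here:

```python
# Hedged equivalent of the HTTP PATCH example above, using the discovery
# client. Names are hypothetical; the call is commented out since it needs
# authentication against a real project.
update_body = {"autoScaling": {"minNodes": 5}}
name = "projects/my-project/models/my-model/versions/v1"  # hypothetical

# from googleapiclient import discovery
# ml = discovery.build("ml", "v1")
# ml.projects().models().versions().patch(
#     name=name, updateMask="autoScaling.minNodes", body=update_body
# ).execute()
```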
-      "pythonVersion": "A String", # Required. The version of Python used in prediction.
-          #
-          # The following Python versions are available:
-          #
-          # * Python '3.7' is available when `runtime_version` is set to '1.15' or
-          #   later.
-          # * Python '3.5' is available when `runtime_version` is set to a version
-          #   from '1.4' to '1.14'.
-          # * Python '2.7' is available when `runtime_version` is set to '1.15' or
-          #   earlier.
-          #
-          # Read more about the Python versions available for [each runtime
-          # version](/ml-engine/docs/runtime-version-list).
-      "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+      &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your model
+          # versions. Each label is a key-value pair, where both the key and the value
+          # are arbitrary strings that you supply.
+          # For more information, see the documentation on
+          # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
+        &quot;a_key&quot;: &quot;A String&quot;,
+      },
+      &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
+      &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
           # projects.models.versions.patch
           # request. Specifying it in a
           # projects.models.versions.create
@@ -873,19 +376,16 @@
           # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
           # specify this configuration manually. Setting up continuous evaluation
           # automatically enables logging of request-response pairs.
-        "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
-            # For example, if you want to log 10% of requests, enter `0.1`. The sampling
-            # window is the lifetime of the model version. Defaults to 0.
-        "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
-            # "&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;"
+        &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
+            # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
             #
-            # The specified table must already exist, and the "Cloud ML Service Agent"
+            # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
             # for your project must have permission to write to it. The table must have
             # the following [schema](/bigquery/docs/schemas):
             #
             # &lt;table&gt;
-            #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style="display: table-cell"&gt;Type&lt;/th&gt;
-            #     &lt;th style="display: table-cell"&gt;Mode&lt;/th&gt;&lt;/tr&gt;
+            #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
+            #     &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
             #   &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
             #   &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
             #   &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
@@ -893,18 +393,518 @@
             #   &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
             #   &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
             # &lt;/table&gt;
+        &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+            # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+            # window is the lifetime of the model version. Defaults to 0.
+      },
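The sampling behavior of `requestLoggingConfig` can be sketched as follows. The table name and helper are hypothetical illustrations:

```python
# Hedged sketch: requestLoggingConfig logs a sampled fraction of
# request-response pairs to an existing BigQuery table. The table name
# below is hypothetical.
logging_config = {
    "bigqueryTableName": "my-project.prediction_logs.requests",
    "samplingPercentage": 0.1,  # log roughly 10% of requests
}

def expected_logged(total_requests, config):
    """Approximate number of requests that will be logged."""
    return int(total_requests * config["samplingPercentage"])
```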
+      &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
+      &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
+          # applies to online prediction service. If this field is not specified, it
+          # defaults to `mls1-c1-m2`.
+          #
+          # Online prediction supports the following machine types:
+          #
+          # * `mls1-c1-m2`
+          # * `mls1-c4-m2`
+          # * `n1-standard-2`
+          # * `n1-standard-4`
+          # * `n1-standard-8`
+          # * `n1-standard-16`
+          # * `n1-standard-32`
+          # * `n1-highmem-2`
+          # * `n1-highmem-4`
+          # * `n1-highmem-8`
+          # * `n1-highmem-16`
+          # * `n1-highmem-32`
+          # * `n1-highcpu-2`
+          # * `n1-highcpu-4`
+          # * `n1-highcpu-8`
+          # * `n1-highcpu-16`
+          # * `n1-highcpu-32`
+          #
+          # `mls1-c1-m2` is generally available. All other machine types are available
+          # in beta. Learn more about the [differences between machine
+          # types](/ml-engine/docs/machine-types-online-prediction).
+      &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
+          #
+          # For more information, see the
+          # [runtime version list](/ml-engine/docs/runtime-version-list) and
+          # [how to manage runtime versions](/ml-engine/docs/versioning).
+      &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
+      &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
+          # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+          # `XGBOOST`. If you do not specify a framework, AI Platform
+          # will analyze files in the deployment_uri to determine a framework. If you
+          # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+          # of the model to 1.4 or greater.
+          #
+          # Do **not** specify a framework if you&#x27;re deploying a [custom
+          # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+          #
+          # If you specify a [Compute Engine (N1) machine
+          # type](/ml-engine/docs/machine-types-online-prediction) in the
+          # `machineType` field, you must specify `TENSORFLOW`
+          # for the framework.
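The machine-type constraint stated above (Compute Engine N1 types require `TENSORFLOW`) can be expressed as a small check. Purely illustrative validation, not a library function:

```python
# Hedged check of the rule above: a Compute Engine (N1) machineType
# requires framework TENSORFLOW.
def check_framework(body):
    """Raise if an N1 machine type is paired with a non-TensorFlow framework."""
    machine = body.get("machineType", "mls1-c1-m2")  # API default
    if machine.startswith("n1-") and body.get("framework") != "TENSORFLOW":
        raise ValueError("N1 machine types require framework=TENSORFLOW")
    return True
```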
+      &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
+          # prevent simultaneous updates of a model from overwriting each other.
+          # It is strongly suggested that systems make use of the `etag` in the
+          # read-modify-write cycle to perform model updates in order to avoid race
+          # conditions: An `etag` is returned in the response to `GetVersion`, and
+          # systems are expected to put that etag in the request to `UpdateVersion` to
+          # ensure that their change will be applied to the model as intended.
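The etag read-modify-write cycle described above can be sketched as follows. The `get_version`/`update_version` callables are hypothetical stand-ins for `GetVersion`/`UpdateVersion`:

```python
# Hedged sketch of optimistic concurrency control with etags: read the
# version, copy its etag into the update body, so the update only applies
# to the revision that was read. Callables here are hypothetical stand-ins.
def safe_update(get_version, update_version, name, changes):
    current = get_version(name)  # response carries an etag
    body = {**current, **changes, "etag": current["etag"]}
    return update_version(name, body)
```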
+      &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
+          # requests that do not specify a version.
+          #
+          # You can change the default version by calling
+          # projects.methods.versions.setDefault.
+      &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+          # Only specify this field if you have specified a Compute Engine (N1) machine
+          # type in the `machineType` field. Learn more about [using GPUs for online
+          # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+          # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+          # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+          # [accelerators for online
+          # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+        &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
+        &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
       },
     },
-    "onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to StackDriver
+    &quot;onlinePredictionConsoleLogging&quot;: True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
+        # streams to Stackdriver Logging. These can be more verbose than the standard
+        # access logs (see `onlinePredictionLogging`) and can incur higher cost.
+        # However, they are helpful for debugging. Note that
+        # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
+        # your project receives prediction requests at a high QPS. Estimate your
+        # costs before enabling this option.
+        # 
+        # Default is false.
+    &quot;regions&quot;: [ # Optional. The list of regions where the model is going to be deployed.
+        # Only one region per model is supported.
+        # Defaults to &#x27;us-central1&#x27; if nothing is set.
+        # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
+        # for AI Platform services.
+        # Note:
+        # *   No matter where a model is deployed, it can always be accessed by
+        #     users from anywhere, both for online and batch prediction.
+        # *   The region for a batch prediction job is set by the region field when
+        #     submitting the batch prediction job and does not take its value from
+        #     this field.
+      &quot;A String&quot;,
+    ],
+    &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the model when it was created.
+    &quot;onlinePredictionLogging&quot;: True or False, # Optional. If true, online prediction access logs are sent to Stackdriver
         # Logging. These logs are like standard server access logs, containing
         # information like timestamp and latency for each request. Note that
         # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
         # your project receives prediction requests at a high queries per second rate
         # (QPS). Estimate your costs before enabling this option.
-        #
+        # 
         # Default is false.
-    "description": "A String", # Optional. The description specified for the model when it was created.
-  }</pre>
+    &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
+        # prevent simultaneous updates of a model from overwriting each other.
+        # It is strongly suggested that systems make use of the `etag` in the
+        # read-modify-write cycle to perform model updates in order to avoid race
+        # conditions: An `etag` is returned in the response to `GetModel`, and
+        # systems are expected to put that etag in the request to `UpdateModel` to
+        # ensure that their change will be applied to the model as intended.
+  }
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Represents a machine learning solution.
+        #
+        # A model can have multiple versions, each of which is a deployed, trained
+        # model ready to receive prediction requests. The model itself is just a
+        # container.
+      &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your models.
+          # Each label is a key-value pair, where both the key and the value are
+          # arbitrary strings that you supply.
+          # For more information, see the documentation on
+          # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
+        &quot;a_key&quot;: &quot;A String&quot;,
+      },
+      &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the model when it was created.
+          #
+          # The model name must be unique within the project it is created in.
+      &quot;defaultVersion&quot;: { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
+          # handle prediction requests that do not specify a version.
+          #
+          # You can change the default version by calling
+          # projects.models.versions.setDefault.
+          #
+          # Each version is a trained model deployed in the cloud, ready to handle
+          # prediction requests. A model can have multiple versions. You can get
+          # information about all of the versions of a given model by calling
+          # projects.models.versions.list.
+        &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
+        &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+            # model. You should generally use `auto_scaling` with an appropriate
+            # `min_nodes` instead, but this option is available if you want more
+            # predictable billing. Beware that latency and error rates will increase
+            # if the traffic exceeds the capacity of the system to serve it based
+            # on the selected number of nodes.
+          &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
+              # starting from the time the model is deployed, so the cost of operating
+              # this model will be proportional to `nodes` * number of hours since
+              # last billing cycle plus the cost for each prediction performed.
+        },
+        &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
+            #
+            # The version name must be unique within the model it is created in.
+        &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
+        &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
+            #
+            # The following Python versions are available:
+            #
+            # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+            #   later.
+            # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
+            #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
+            # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+            #   earlier.
+            #
+            # Read more about the Python versions available for [each runtime
+            # version](/ml-engine/docs/runtime-version-list).
+        &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
+        &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
+            # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
+            # the Predictor interface described in this reference field. The module
+            # containing this class should be included in a package provided to the
+            # [`packageUris` field](#Version.FIELDS.package_uris).
+            #
+            # Specify this field if and only if you are deploying a [custom prediction
+            # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
+            # If you specify this field, you must set
+            # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
+            # you must set `machineType` to a [legacy (MLS1)
+            # machine type](/ml-engine/docs/machine-types-online-prediction).
+            #
+            # The following code sample provides the Predictor interface:
+            #
+            # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
+            # class Predictor(object):
+            # &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
+            #
+            # def predict(self, instances, **kwargs):
+            #     &quot;&quot;&quot;Performs custom prediction.
+            #
+            #     Instances are the decoded values from the request. They have already
+            #     been deserialized from JSON.
+            #
+            #     Args:
+            #         instances: A list of prediction input instances.
+            #         **kwargs: A dictionary of keyword args provided as additional
+            #             fields on the predict request body.
+            #
+            #     Returns:
+            #         A list of outputs containing the prediction results. This list must
+            #         be JSON serializable.
+            #     &quot;&quot;&quot;
+            #     raise NotImplementedError()
+            #
+            # @classmethod
+            # def from_path(cls, model_dir):
+            #     &quot;&quot;&quot;Creates an instance of Predictor using the given path.
+            #
+            #     Loading of the predictor should be done in this method.
+            #
+            #     Args:
+            #         model_dir: The local directory that contains the exported model
+            #             file along with any additional files uploaded when creating the
+            #             version resource.
+            #
+            #     Returns:
+            #         An instance implementing this Predictor class.
+            #     &quot;&quot;&quot;
+            #     raise NotImplementedError()
+            # &lt;/pre&gt;
+            #
+            # Learn more about [the Predictor interface and custom prediction
+            # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
+        &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+            # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
+            # or [scikit-learn pipelines with custom
+            # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
+            #
+            # For a custom prediction routine, one of these packages must contain your
+            # Predictor class (see
+            # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
+            # include any dependencies that your Predictor or scikit-learn pipeline
+            # uses that are not already included in your selected [runtime
+            # version](/ml-engine/docs/tensorflow/runtime-version-list).
+            #
+            # If you specify this field, you must also set
+            # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
+          &quot;A String&quot;,
+        ],
+        &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
+            # Some explanation features require additional metadata to be loaded
+            # as part of the model payload.
+            # There are two feature attribution methods supported for TensorFlow models:
+            # integrated gradients and sampled Shapley.
+            # [Learn more about feature
+            # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
+          &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value taking advantage
+              # of the model&#x27;s fully differentiable structure. Refer to this paper for
+              # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
+            &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+                # A good value to start is 50 and gradually increase until the
+                # sum to diff property is met within the desired error range.
+          },
+          &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
+              # contribute to the label being predicted. A sampling strategy is used to
+              # approximate the value rather than considering all subsets of features.
+            &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
+                # Shapley values.
+          },
+          &quot;xraiAttribution&quot;: { # Attributes credit by computing the XRAI taking advantage
+              # of the model&#x27;s fully differentiable structure. Refer to this paper for
+              # more details: https://arxiv.org/abs/1906.02825
+              # Currently only implemented for models with natural image inputs.
+            &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+                # A good value to start is 50 and gradually increase until the
+                # sum to diff property is met within the desired error range.
+          },
+        },
+        &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
+            # create the version. See the
+            # [guide to model
+            # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
+            # information.
+            #
+            # When passing Version to
+            # projects.models.versions.create
+            # the model service uses the specified location as the source of the model.
+            # Once deployed, the model version is hosted by the prediction service, so
+            # this location is useful only as a historical record.
+            # The total number of model files can&#x27;t exceed 1000.
+        &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+            # response to increases and decreases in traffic. Care should be
+            # taken to ramp up traffic according to the model&#x27;s ability to scale
+            # or you will start seeing increases in latency and 429 response codes.
+            #
+            # Note that you cannot use AutoScaling if your version uses
+            # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
+            # `manual_scaling`.
+          &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
+              # nodes are always up, starting from the time the model is deployed.
+              # Therefore, the cost of operating this model will be at least
+              # `rate` * `min_nodes` * number of hours since last billing cycle,
+              # where `rate` is the cost per node-hour as documented in the
+              # [pricing guide](/ml-engine/docs/pricing),
+              # even if no predictions are performed. There is additional cost for each
+              # prediction performed.
+              #
+              # Unlike manual scaling, if the load gets too heavy for the nodes
+              # that are up, the service will automatically add nodes to handle the
+              # increased load as well as scale back as traffic drops, always maintaining
+              # at least `min_nodes`. You will be charged for the time in which additional
+              # nodes are used.
+              #
+              # If `min_nodes` is not specified and AutoScaling is used with a [legacy
+              # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
+              # `min_nodes` defaults to 0, in which case, when traffic to a model stops
+              # (and after a cool-down period), nodes will be shut down and no charges will
+              # be incurred until traffic to the model resumes.
+              #
+              # If `min_nodes` is not specified and AutoScaling is used with a [Compute
+              # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
+              # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
+              # Compute Engine machine type.
+              #
+              # Note that you cannot use AutoScaling if your version uses
+              # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
+              # ManualScaling.
+              #
+              # You can set `min_nodes` when creating the model version, and you can also
+              # update `min_nodes` for an existing version:
+              # &lt;pre&gt;
+              # update_body.json:
+              # {
+              #   &#x27;autoScaling&#x27;: {
+              #     &#x27;minNodes&#x27;: 5
+              #   }
+              # }
+              # &lt;/pre&gt;
+              # HTTP request:
+              # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
+              # PATCH
+              # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
+              # -d @./update_body.json
+              # &lt;/pre&gt;
+        },
+        &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your model
+            # versions. Each label is a key-value pair, where both the key and the value
+            # are arbitrary strings that you supply.
+            # For more information, see the documentation on
+            # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
+          &quot;a_key&quot;: &quot;A String&quot;,
+        },
+        &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
+        &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+            # projects.models.versions.patch
+            # request. Specifying it in a
+            # projects.models.versions.create
+            # request has no effect.
+            #
+            # Configures the request-response pair logging on predictions from this
+            # Version.
+            # Online prediction requests to a model version and the responses to these
+            # requests are converted to raw strings and saved to the specified BigQuery
+            # table. Logging is constrained by [BigQuery quotas and
+            # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
+            # AI Platform Prediction does not log request-response pairs, but it continues
+            # to serve predictions.
+            #
+            # If you are using [continuous
+            # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
+            # specify this configuration manually. Setting up continuous evaluation
+            # automatically enables logging of request-response pairs.
+          &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
+              # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
+              #
+              # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
+              # for your project must have permission to write to it. The table must have
+              # the following [schema](/bigquery/docs/schemas):
+              #
+              # &lt;table&gt;
+              #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
+              #     &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
+              #   &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
+              #   &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
+              #   &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
+              #   &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
+              #   &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
+              #   &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
+              # &lt;/table&gt;
+          &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+              # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+              # window is the lifetime of the model version. Defaults to 0.
+        },
+        &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
+        &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
+            # applies to online prediction service. If this field is not specified, it
+            # defaults to `mls1-c1-m2`.
+            #
+            # Online prediction supports the following machine types:
+            #
+            # * `mls1-c1-m2`
+            # * `mls1-c4-m2`
+            # * `n1-standard-2`
+            # * `n1-standard-4`
+            # * `n1-standard-8`
+            # * `n1-standard-16`
+            # * `n1-standard-32`
+            # * `n1-highmem-2`
+            # * `n1-highmem-4`
+            # * `n1-highmem-8`
+            # * `n1-highmem-16`
+            # * `n1-highmem-32`
+            # * `n1-highcpu-2`
+            # * `n1-highcpu-4`
+            # * `n1-highcpu-8`
+            # * `n1-highcpu-16`
+            # * `n1-highcpu-32`
+            #
+            # `mls1-c1-m2` is generally available. All other machine types are available
+            # in beta. Learn more about the [differences between machine
+            # types](/ml-engine/docs/machine-types-online-prediction).
+        &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
+            #
+            # For more information, see the
+            # [runtime version list](/ml-engine/docs/runtime-version-list) and
+            # [how to manage runtime versions](/ml-engine/docs/versioning).
+        &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
+        &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
+            # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+            # `XGBOOST`. If you do not specify a framework, AI Platform
+            # will analyze files in the deployment_uri to determine a framework. If you
+            # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+            # of the model to 1.4 or greater.
+            #
+            # Do **not** specify a framework if you&#x27;re deploying a [custom
+            # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+            #
+            # If you specify a [Compute Engine (N1) machine
+            # type](/ml-engine/docs/machine-types-online-prediction) in the
+            # `machineType` field, you must specify `TENSORFLOW`
+            # for the framework.
+        &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
+            # prevent simultaneous updates of a model from overwriting each other.
+            # It is strongly suggested that systems make use of the `etag` in the
+            # read-modify-write cycle to perform model updates in order to avoid race
+            # conditions: An `etag` is returned in the response to `GetVersion`, and
+            # systems are expected to put that etag in the request to `UpdateVersion` to
+            # ensure that their change will be applied to the model as intended.
+        &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
+            # requests that do not specify a version.
+            #
+            # You can change the default version by calling
+            # projects.models.versions.setDefault.
+        &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+            # Only specify this field if you have specified a Compute Engine (N1) machine
+            # type in the `machineType` field. Learn more about [using GPUs for online
+            # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+            # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+            # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+            # [accelerators for online
+            # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+          &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
+          &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
+        },
+      },
+      &quot;onlinePredictionConsoleLogging&quot;: True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
+          # streams to Stackdriver Logging. These can be more verbose than the standard
+          # access logs (see `onlinePredictionLogging`) and can incur higher cost.
+          # However, they are helpful for debugging. Note that
+          # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
+          # your project receives prediction requests at a high QPS. Estimate your
+          # costs before enabling this option.
+          #
+          # Default is false.
+      &quot;regions&quot;: [ # Optional. The list of regions where the model is going to be deployed.
+          # Only one region per model is supported.
+          # Defaults to &#x27;us-central1&#x27; if nothing is set.
+          # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
+          # for AI Platform services.
+          # Note:
+          # *   No matter where a model is deployed, it can always be accessed by
+          #     users from anywhere, both for online and batch prediction.
+          # *   The region for a batch prediction job is set by the region field when
+          #     submitting the batch prediction job and does not take its value from
+          #     this field.
+        &quot;A String&quot;,
+      ],
+      &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the model when it was created.
+      &quot;onlinePredictionLogging&quot;: True or False, # Optional. If true, online prediction access logs are sent to Stackdriver
+          # Logging. These logs are like standard server access logs, containing
+          # information like timestamp and latency for each request. Note that
+          # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
+          # your project receives prediction requests at a high queries per second rate
+          # (QPS). Estimate your costs before enabling this option.
+          #
+          # Default is false.
+      &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
+          # prevent simultaneous updates of a model from overwriting each other.
+          # It is strongly suggested that systems make use of the `etag` in the
+          # read-modify-write cycle to perform model updates in order to avoid race
+          # conditions: An `etag` is returned in the response to `GetModel`, and
+          # systems are expected to put that etag in the request to `UpdateModel` to
+          # ensure that their change will be applied to the model as intended.
+    }</pre>
 </div>
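The Model body documented above is a plain dict when used from the Python client. A minimal sketch of assembling such a body, using only fields from the reference above (the model name and project ID below are hypothetical, and the commented-out discovery call is an assumption about typical client usage, not part of this reference):

```python
def build_model_body(name, regions=None, online_logging=False):
    """Builds a dict shaped like the Model resource documented above.

    Only fields documented in this reference are used. Per the docs,
    only one region per model is supported and it defaults to
    'us-central1' when unset.
    """
    return {
        "name": name,
        "onlinePredictionLogging": online_logging,
        "regions": regions or ["us-central1"],
        "labels": {},
    }

# Hypothetical model name; "my-project" below is likewise illustrative.
body = build_model_body("census_model", online_logging=True)

# With the discovery-based client, this body would typically be passed as:
#   ml = googleapiclient.discovery.build("ml", "v1")
#   ml.projects().models().create(
#       parent="projects/my-project", body=body).execute()
```

Note that `onlinePredictionLogging` defaults to false server-side, so including it explicitly only matters when enabling logging.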
 
 <div class="method">
@@ -927,34 +927,7 @@
 
     { # This resource represents a long-running operation that is the result of a
       # network API call.
-    "metadata": { # Service-specific metadata associated with the operation.  It typically
-        # contains progress information and common metadata such as create time.
-        # Some services might not provide such metadata.  Any method that returns a
-        # long-running operation should document the metadata type, if any.
-      "a_key": "", # Properties of the object. Contains field @type with type URL.
-    },
-    "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
-        # different programming environments, including REST APIs and RPC APIs. It is
-        # used by [gRPC](https://github.com/grpc). Each `Status` message contains
-        # three pieces of data: error code, error message, and error details.
-        #
-        # You can find out more about this error model and how to work with it in the
-        # [API Design Guide](https://cloud.google.com/apis/design/errors).
-      "message": "A String", # A developer-facing error message, which should be in English. Any
-          # user-facing error message should be localized and sent in the
-          # google.rpc.Status.details field, or localized by the client.
-      "code": 42, # The status code, which should be an enum value of google.rpc.Code.
-      "details": [ # A list of messages that carry the error details.  There is a common set of
-          # message types for APIs to use.
-        {
-          "a_key": "", # Properties of the object. Contains field @type with type URL.
-        },
-      ],
-    },
-    "done": True or False, # If the value is `false`, it means the operation is still in progress.
-        # If `true`, the operation is completed, and either `error` or `response` is
-        # available.
-    "response": { # The normal response of the operation in case of success.  If the original
+    &quot;response&quot;: { # The normal response of the operation in case of success.  If the original
         # method returns no data on success, such as `Delete`, the response is
         # `google.protobuf.Empty`.  If the original method is standard
         # `Get`/`Create`/`Update`, the response should be the resource.  For other
@@ -962,11 +935,38 @@
         # is the original method name.  For example, if the original method name
         # is `TakeSnapshot()`, the inferred response type is
         # `TakeSnapshotResponse`.
-      "a_key": "", # Properties of the object. Contains field @type with type URL.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
     },
-    "name": "A String", # The server-assigned name, which is only unique within the same service that
+    &quot;name&quot;: &quot;A String&quot;, # The server-assigned name, which is only unique within the same service that
         # originally returns it. If you use the default HTTP mapping, the
         # `name` should be a resource name ending with `operations/{unique_id}`.
+    &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
+        # different programming environments, including REST APIs and RPC APIs. It is
+        # used by [gRPC](https://github.com/grpc). Each `Status` message contains
+        # three pieces of data: error code, error message, and error details.
+        #
+        # You can find out more about this error model and how to work with it in the
+        # [API Design Guide](https://cloud.google.com/apis/design/errors).
+      &quot;details&quot;: [ # A list of messages that carry the error details.  There is a common set of
+          # message types for APIs to use.
+        {
+          &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+        },
+      ],
+      &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
+      &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
+          # user-facing error message should be localized and sent in the
+          # google.rpc.Status.details field, or localized by the client.
+    },
+    &quot;metadata&quot;: { # Service-specific metadata associated with the operation.  It typically
+        # contains progress information and common metadata such as create time.
+        # Some services might not provide such metadata.  Any method that returns a
+        # long-running operation should document the metadata type, if any.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+    },
+    &quot;done&quot;: True or False, # If the value is `false`, it means the operation is still in progress.
+        # If `true`, the operation is completed, and either `error` or `response` is
+        # available.
   }</pre>
 </div>
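The Operation resource above follows the usual long-running-operation contract: the call is finished only when `done` is true, at which point exactly one of `error` or `response` is populated. A small sketch of interpreting an operation dict returned by this API (the helper name is our own, not part of the client):

```python
def operation_result(op):
    """Classifies a long-running Operation dict as documented above.

    Returns ("pending", None) while `done` is false; otherwise
    ("error", status_dict) or ("ok", response_dict), mirroring the
    rule that a completed operation carries either `error` or
    `response`.
    """
    if not op.get("done"):
        return ("pending", None)
    if "error" in op:
        return ("error", op["error"])
    return ("ok", op.get("response"))
```

In practice one would re-fetch the operation (for example via the service's operations `get` method using `op["name"]`) until `operation_result` stops returning `"pending"`; the exact polling call is an assumption about typical usage rather than something this page specifies.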
 
@@ -987,393 +987,393 @@
   An object of the form:
 
     { # Represents a machine learning solution.
-      #
-      # A model can have multiple versions, each of which is a deployed, trained
-      # model ready to receive prediction requests. The model itself is just a
-      # container.
-    "name": "A String", # Required. The name specified for the model when it was created.
         #
-        # The model name must be unique within the project it is created in.
-    "onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
-        # streams to Stackdriver Logging. These can be more verbose than the standard
-        # access logs (see `onlinePredictionLogging`) and can incur higher cost.
-        # However, they are helpful for debugging. Note that
-        # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
-        # your project receives prediction requests at a high QPS. Estimate your
-        # costs before enabling this option.
-        #
-        # Default is false.
-    "labels": { # Optional. One or more labels that you can add, to organize your models.
-        # Each label is a key-value pair, where both the key and the value are
-        # arbitrary strings that you supply.
-        # For more information, see the documentation on
-        # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
-      "a_key": "A String",
-    },
-    "regions": [ # Optional. The list of regions where the model is going to be deployed.
-        # Only one region per model is supported.
-        # Defaults to 'us-central1' if nothing is set.
-        # See the &lt;a href="/ml-engine/docs/tensorflow/regions"&gt;available regions&lt;/a&gt;
-        # for AI Platform services.
-        # Note:
-        # *   No matter where a model is deployed, it can always be accessed by
-        #     users from anywhere, both for online and batch prediction.
-        # *   The region for a batch prediction job is set by the region field when
-        #     submitting the batch prediction job and does not take its value from
-        #     this field.
-      "A String",
-    ],
-    "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
-        # prevent simultaneous updates of a model from overwriting each other.
-        # It is strongly suggested that systems make use of the `etag` in the
-        # read-modify-write cycle to perform model updates in order to avoid race
-        # conditions: An `etag` is returned in the response to `GetModel`, and
-        # systems are expected to put that etag in the request to `UpdateModel` to
-        # ensure that their change will be applied to the model as intended.
-    "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
-        # handle prediction requests that do not specify a version.
-        #
-        # You can change the default version by calling
-        # projects.models.versions.setDefault.
-        #
-        # Each version is a trained model deployed in the cloud, ready to handle
-        # prediction requests. A model can have multiple versions. You can get
-        # information about all of the versions of a given model by calling
-        # projects.models.versions.list.
-      "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
-          # Only specify this field if you have specified a Compute Engine (N1) machine
-          # type in the `machineType` field. Learn more about [using GPUs for online
-          # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-          # Note that the AcceleratorConfig can be used in both Jobs and Versions.
-          # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
-          # [accelerators for online
-          # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-        "count": "A String", # The number of accelerators to attach to each machine running the job.
-        "type": "A String", # The type of accelerator to use.
-      },
-      "labels": { # Optional. One or more labels that you can add, to organize your model
-          # versions. Each label is a key-value pair, where both the key and the value
-          # are arbitrary strings that you supply.
+        # A model can have multiple versions, each of which is a deployed, trained
+        # model ready to receive prediction requests. The model itself is just a
+        # container.
+      &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your models.
+          # Each label is a key-value pair, where both the key and the value are
+          # arbitrary strings that you supply.
           # For more information, see the documentation on
-          # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
-        "a_key": "A String",
+          # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
+        &quot;a_key&quot;: &quot;A String&quot;,
       },
-      "predictionClass": "A String", # Optional. The fully qualified name
-          # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
-          # the Predictor interface described in this reference field. The module
-          # containing this class should be included in a package provided to the
-          # [`packageUris` field](#Version.FIELDS.package_uris).
+      &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the model when it was created.
           #
-          # Specify this field if and only if you are deploying a [custom prediction
-          # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
-          # If you specify this field, you must set
-          # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
-          # you must set `machineType` to a [legacy (MLS1)
-          # machine type](/ml-engine/docs/machine-types-online-prediction).
+          # The model name must be unique within the project it is created in.
+      &quot;defaultVersion&quot;: { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
+          # handle prediction requests that do not specify a version.
           #
-          # The following code sample provides the Predictor interface:
+          # You can change the default version by calling
+          # projects.models.versions.setDefault.
           #
-          # &lt;pre style="max-width: 626px;"&gt;
-          # class Predictor(object):
-          # """Interface for constructing custom predictors."""
+          # Each version is a trained model deployed in the cloud, ready to handle
+          # prediction requests. A model can have multiple versions. You can get
+          # information about all of the versions of a given model by calling
+          # projects.models.versions.list.
+        &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
+        &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+            # model. You should generally use `auto_scaling` with an appropriate
+            # `min_nodes` instead, but this option is available if you want more
+            # predictable billing. Beware that latency and error rates will increase
+            # if the traffic exceeds the capability of the system to serve it based
+            # on the selected number of nodes.
+          &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
+              # starting from the time the model is deployed, so the cost of operating
+              # this model will be proportional to `nodes` * number of hours since
+              # last billing cycle plus the cost for each prediction performed.
+        },
+        &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
+            #
+            # The version name must be unique within the model it is created in.
+        &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
+        &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
+            #
+            # The following Python versions are available:
+            #
+            # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+            #   later.
+            # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
+            #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
+            # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+            #   earlier.
+            #
+            # Read more about the Python versions available for [each runtime
+            # version](/ml-engine/docs/runtime-version-list).
+        &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
+        &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
+            # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
+            # the Predictor interface described in this reference field. The module
+            # containing this class should be included in a package provided to the
+            # [`packageUris` field](#Version.FIELDS.package_uris).
+            #
+            # Specify this field if and only if you are deploying a [custom prediction
+            # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
+            # If you specify this field, you must set
+            # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
+            # you must set `machineType` to a [legacy (MLS1)
+            # machine type](/ml-engine/docs/machine-types-online-prediction).
+            #
+            # The following code sample provides the Predictor interface:
+            #
+            # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
+            # class Predictor(object):
+            # &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
+            #
+            # def predict(self, instances, **kwargs):
+            #     &quot;&quot;&quot;Performs custom prediction.
+            #
+            #     Instances are the decoded values from the request. They have already
+            #     been deserialized from JSON.
+            #
+            #     Args:
+            #         instances: A list of prediction input instances.
+            #         **kwargs: A dictionary of keyword args provided as additional
+            #             fields on the predict request body.
+            #
+            #     Returns:
+            #         A list of outputs containing the prediction results. This list must
+            #         be JSON serializable.
+            #     &quot;&quot;&quot;
+            #     raise NotImplementedError()
+            #
+            # @classmethod
+            # def from_path(cls, model_dir):
+            #     &quot;&quot;&quot;Creates an instance of Predictor using the given path.
+            #
+            #     Loading of the predictor should be done in this method.
+            #
+            #     Args:
+            #         model_dir: The local directory that contains the exported model
+            #             file along with any additional files uploaded when creating the
+            #             version resource.
+            #
+            #     Returns:
+            #         An instance implementing this Predictor class.
+            #     &quot;&quot;&quot;
+            #     raise NotImplementedError()
+            # &lt;/pre&gt;
+            #
+            # Learn more about [the Predictor interface and custom prediction
+            # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
+        &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+            # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
+            # or [scikit-learn pipelines with custom
+            # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
+            #
+            # For a custom prediction routine, one of these packages must contain your
+            # Predictor class (see
+            # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
+            # include any dependencies that your Predictor or scikit-learn
+            # pipeline uses that are not already included in your selected [runtime
+            # version](/ml-engine/docs/tensorflow/runtime-version-list).
+            #
+            # If you specify this field, you must also set
+            # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
+          &quot;A String&quot;,
+        ],
+        &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
+            # Some explanation features require additional metadata to be loaded
+            # as part of the model payload.
+            # There are two feature attribution methods supported for TensorFlow models:
+            # integrated gradients and sampled Shapley.
+            # [Learn more about feature
+            # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
+          &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value taking advantage
+              # of the model&#x27;s fully differentiable structure. Refer to this paper for
+              # more details: https://arxiv.org/abs/1703.01365
+            &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+                # A good starting value is 50; gradually increase it until the
+                # sum to diff property is met within the desired error range.
+          },
+          &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
+              # contribute to the label being predicted. A sampling strategy is used to
+              # approximate the value rather than considering all subsets of features.
+            &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
+                # Shapley values.
+          },
+          &quot;xraiAttribution&quot;: { # Attributes credit by computing the XRAI taking advantage
+              # of the model&#x27;s fully differentiable structure. Refer to this paper for
+              # more details: https://arxiv.org/abs/1906.02825
+              # Currently only implemented for models with natural image inputs.
+            &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+                # A good starting value is 50; gradually increase it until the
+                # sum to diff property is met within the desired error range.
+          },
+        },
+        &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
+            # create the version. See the
+            # [guide to model
+            # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
+            # information.
+            #
+            # When passing Version to
+            # projects.models.versions.create
+            # the model service uses the specified location as the source of the model.
+            # Once deployed, the model version is hosted by the prediction service, so
+            # this location is useful only as a historical record.
+            # The total number of model files can&#x27;t exceed 1000.
+        &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+            # response to increases and decreases in traffic. Care should be
+            # taken to ramp up traffic according to the model&#x27;s ability to scale
+            # or you will start seeing increases in latency and 429 response codes.
+            #
+            # Note that you cannot use AutoScaling if your version uses
+            # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
+            # `manual_scaling`.
+          &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
+              # nodes are always up, starting from the time the model is deployed.
+              # Therefore, the cost of operating this model will be at least
+              # `rate` * `min_nodes` * number of hours since last billing cycle,
+              # where `rate` is the cost per node-hour as documented in the
+              # [pricing guide](/ml-engine/docs/pricing),
+              # even if no predictions are performed. There is additional cost for each
+              # prediction performed.
+              #
+              # Unlike manual scaling, if the load gets too heavy for the nodes
+              # that are up, the service will automatically add nodes to handle the
+              # increased load as well as scale back as traffic drops, always maintaining
+              # at least `min_nodes`. You will be charged for the time in which additional
+              # nodes are used.
+              #
+              # If `min_nodes` is not specified and AutoScaling is used with a [legacy
+              # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
+              # `min_nodes` defaults to 0, in which case, when traffic to a model stops
+              # (and after a cool-down period), nodes will be shut down and no charges will
+              # be incurred until traffic to the model resumes.
+              #
+              # If `min_nodes` is not specified and AutoScaling is used with a [Compute
+              # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
+              # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
+              # Compute Engine machine type.
+              #
+              # Note that you cannot use AutoScaling if your version uses
+              # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
+              # ManualScaling.
+              #
+              # You can set `min_nodes` when creating the model version, and you can also
+              # update `min_nodes` for an existing version:
+              # &lt;pre&gt;
+              # update_body.json:
+              # {
+              #   &#x27;autoScaling&#x27;: {
+              #     &#x27;minNodes&#x27;: 5
+              #   }
+              # }
+              # &lt;/pre&gt;
+              # HTTP request:
+              # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
+              # PATCH
+              # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
+              # -d @./update_body.json
+              # &lt;/pre&gt;
+        },
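The `minNodes` update shown in the HTTP example above can also be sketched in Python. This is a minimal illustration that only builds the request URL and JSON body (no network call, no client library); the project, model, and version names are hypothetical placeholders.

```python
import json

def build_min_nodes_patch(version_name, min_nodes):
    """Builds the PATCH URL and JSON body for updating autoScaling.minNodes,
    mirroring the update_body.json example above."""
    url = ("https://ml.googleapis.com/v1/" + version_name
           + "?update_mask=autoScaling.minNodes")
    body = {"autoScaling": {"minNodes": min_nodes}}
    return url, json.dumps(body)

# Hypothetical resource name; substitute your own project/model/version.
url, body = build_min_nodes_patch(
    "projects/my-project/models/my-model/versions/v1", 5)
print(body)  # {"autoScaling": {"minNodes": 5}}
```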
+        &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your model
+            # versions. Each label is a key-value pair, where both the key and the value
+            # are arbitrary strings that you supply.
+            # For more information, see the documentation on
+            # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
+          &quot;a_key&quot;: &quot;A String&quot;,
+        },
+        &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
+        &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+            # projects.models.versions.patch
+            # request. Specifying it in a
+            # projects.models.versions.create
+            # request has no effect.
+            #
+            # Configures the request-response pair logging on predictions from this
+            # Version.
+            # Online prediction requests to a model version and the responses to these
+            # requests are converted to raw strings and saved to the specified BigQuery
+            # table. Logging is constrained by [BigQuery quotas and
+            # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
+            # AI Platform Prediction does not log request-response pairs, but it continues
+            # to serve predictions.
+            #
+            # If you are using [continuous
+            # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
+            # specify this configuration manually. Setting up continuous evaluation
+            # automatically enables logging of request-response pairs.
+          &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
+              # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
+              #
+              # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
+              # for your project must have permission to write to it. The table must have
+              # the following [schema](/bigquery/docs/schemas):
+              #
+              # &lt;table&gt;
+              #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
+              #     &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
+              #   &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
+              #   &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
+              #   &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
+              #   &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
+              #   &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
+              #   &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
+              # &lt;/table&gt;
+          &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+              # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+              # window is the lifetime of the model version. Defaults to 0.
+        },
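The `samplingPercentage` behavior amounts to an independent Bernoulli draw per request: each request is logged with the given probability. A minimal standalone sketch (the helper name is hypothetical, not part of the API):

```python
import random

def should_log(sampling_percentage, rng=random.random):
    # Each request is logged independently with probability
    # sampling_percentage, e.g. 0.1 logs roughly 10% of requests.
    return rng() < sampling_percentage

random.seed(0)
logged = sum(should_log(0.1) for _ in range(10000))
print(logged)  # roughly 1000 of the 10000 requests
```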
+        &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
+        &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
+            # applies to online prediction service. If this field is not specified, it
+            # defaults to `mls1-c1-m2`.
+            #
+            # Online prediction supports the following machine types:
+            #
+            # * `mls1-c1-m2`
+            # * `mls1-c4-m2`
+            # * `n1-standard-2`
+            # * `n1-standard-4`
+            # * `n1-standard-8`
+            # * `n1-standard-16`
+            # * `n1-standard-32`
+            # * `n1-highmem-2`
+            # * `n1-highmem-4`
+            # * `n1-highmem-8`
+            # * `n1-highmem-16`
+            # * `n1-highmem-32`
+            # * `n1-highcpu-2`
+            # * `n1-highcpu-4`
+            # * `n1-highcpu-8`
+            # * `n1-highcpu-16`
+            # * `n1-highcpu-32`
+            #
+            # `mls1-c1-m2` is generally available. All other machine types are available
+            # in beta. Learn more about the [differences between machine
+            # types](/ml-engine/docs/machine-types-online-prediction).
+        &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
+            #
+            # For more information, see the
+            # [runtime version list](/ml-engine/docs/runtime-version-list) and
+            # [how to manage runtime versions](/ml-engine/docs/versioning).
+        &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
+        &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
+            # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+            # `XGBOOST`. If you do not specify a framework, AI Platform
+            # will analyze files in the deployment_uri to determine a framework. If you
+            # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+            # of the model to 1.4 or greater.
+            #
+            # Do **not** specify a framework if you&#x27;re deploying a [custom
+            # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+            #
+            # If you specify a [Compute Engine (N1) machine
+            # type](/ml-engine/docs/machine-types-online-prediction) in the
+            # `machineType` field, you must specify `TENSORFLOW`
+            # for the framework.
+        &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
+            # prevent simultaneous updates of a model from overwriting each other.
+            # It is strongly suggested that systems make use of the `etag` in the
+            # read-modify-write cycle to perform model updates in order to avoid race
+            # conditions: An `etag` is returned in the response to `GetVersion`, and
+            # systems are expected to put that etag in the request to `UpdateVersion` to
+            # ensure that their change will be applied to the model as intended.
+        &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
+            # requests that do not specify a version.
+            #
+            # You can change the default version by calling
+            # projects.models.versions.setDefault.
+        &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+            # Only specify this field if you have specified a Compute Engine (N1) machine
+            # type in the `machineType` field. Learn more about [using GPUs for online
+            # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+            # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+            # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+            # [accelerators for online
+            # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+          &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
+          &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
+        },
+      },
+      &quot;onlinePredictionConsoleLogging&quot;: True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
+          # streams to Stackdriver Logging. These can be more verbose than the standard
+          # access logs (see `onlinePredictionLogging`) and can incur higher cost.
+          # However, they are helpful for debugging. Note that
+          # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
+          # your project receives prediction requests at a high QPS. Estimate your
+          # costs before enabling this option.
           #
-          # def predict(self, instances, **kwargs):
-          #     """Performs custom prediction.
-          #
-          #     Instances are the decoded values from the request. They have already
-          #     been deserialized from JSON.
-          #
-          #     Args:
-          #         instances: A list of prediction input instances.
-          #         **kwargs: A dictionary of keyword args provided as additional
-          #             fields on the predict request body.
-          #
-          #     Returns:
-          #         A list of outputs containing the prediction results. This list must
-          #         be JSON serializable.
-          #     """
-          #     raise NotImplementedError()
-          #
-          # @classmethod
-          # def from_path(cls, model_dir):
-          #     """Creates an instance of Predictor using the given path.
-          #
-          #     Loading of the predictor should be done in this method.
-          #
-          #     Args:
-          #         model_dir: The local directory that contains the exported model
-          #             file along with any additional files uploaded when creating the
-          #             version resource.
-          #
-          #     Returns:
-          #         An instance implementing this Predictor class.
-          #     """
-          #     raise NotImplementedError()
-          # &lt;/pre&gt;
-          #
-          # Learn more about [the Predictor interface and custom prediction
-          # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
-      "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
-      "state": "A String", # Output only. The state of a version.
-      "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
-          # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
-          # or [scikit-learn pipelines with custom
-          # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
-          #
-          # For a custom prediction routine, one of these packages must contain your
-          # Predictor class (see
-          # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
-          # include any dependencies used by your Predictor or scikit-learn pipeline
-          # uses that are not already included in your selected [runtime
-          # version](/ml-engine/docs/tensorflow/runtime-version-list).
-          #
-          # If you specify this field, you must also set
-          # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
-        "A String",
+          # Default is false.
+      &quot;regions&quot;: [ # Optional. The list of regions where the model is going to be deployed.
+          # Only one region per model is supported.
+          # Defaults to &#x27;us-central1&#x27; if nothing is set.
+          # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
+          # for AI Platform services.
+          # Note:
+          # *   No matter where a model is deployed, it can always be accessed by
+          #     users from anywhere, both for online and batch prediction.
+          # *   The region for a batch prediction job is set by the region field when
+          #     submitting the batch prediction job and does not take its value from
+          #     this field.
+        &quot;A String&quot;,
       ],
-      "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+      &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the model when it was created.
+      &quot;onlinePredictionLogging&quot;: True or False, # Optional. If true, online prediction access logs are sent to Stackdriver
+          # Logging. These logs are like standard server access logs, containing
+          # information like timestamp and latency for each request. Note that
+          # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
+          # your project receives prediction requests at a high rate of queries per
+          # second (QPS). Estimate your costs before enabling this option.
+          #
+          # Default is false.
+      &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
           # prevent simultaneous updates of a model from overwriting each other.
           # It is strongly suggested that systems make use of the `etag` in the
           # read-modify-write cycle to perform model updates in order to avoid race
-          # conditions: An `etag` is returned in the response to `GetVersion`, and
-          # systems are expected to put that etag in the request to `UpdateVersion` to
+          # conditions: An `etag` is returned in the response to `GetModel`, and
+          # systems are expected to put that etag in the request to `UpdateModel` to
           # ensure that their change will be applied to the model as intended.
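The read-modify-write cycle described for `etag` can be sketched with an in-memory stand-in for `GetModel`/`UpdateModel`; the store class and etag derivation below are illustrative assumptions, not the service's actual implementation.

```python
import hashlib
import json

class ConflictError(Exception):
    pass

class ModelStore:
    """Toy stand-in for the model service, enforcing etag-based
    optimistic concurrency control on updates."""

    def __init__(self, model):
        self._model = dict(model)

    @staticmethod
    def _etag(model):
        # Derive an etag from the model contents (a server-side detail).
        return hashlib.sha256(
            json.dumps(model, sort_keys=True).encode()).hexdigest()[:16]

    def get_model(self):
        model = dict(self._model)
        model["etag"] = self._etag(self._model)
        return model

    def update_model(self, model):
        # Reject the write if the caller's etag is stale, i.e. the model
        # changed since the caller read it.
        if model.get("etag") != self._etag(self._model):
            raise ConflictError("etag mismatch: model changed concurrently")
        self._model = {k: v for k, v in model.items() if k != "etag"}

store = ModelStore({"name": "projects/p/models/m", "description": "old"})

# Read-modify-write: fetch, edit, send the etag back with the update.
m = store.get_model()
m["description"] = "new"
store.update_model(m)
print(store.get_model()["description"])  # new
```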
-      "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
-      "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
-          # create the version. See the
-          # [guide to model
-          # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
-          # information.
-          #
-          # When passing Version to
-          # projects.models.versions.create
-          # the model service uses the specified location as the source of the model.
-          # Once deployed, the model version is hosted by the prediction service, so
-          # this location is useful only as a historical record.
-          # The total number of model files can't exceed 1000.
-      "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
-          # Some explanation features require additional metadata to be loaded
-          # as part of the model payload.
-          # There are two feature attribution methods supported for TensorFlow models:
-          # integrated gradients and sampled Shapley.
-          # [Learn more about feature
-          # attributions.](/ml-engine/docs/ai-explanations/overview)
-        "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
-            # of the model's fully differentiable structure. Refer to this paper for
-            # more details: https://arxiv.org/abs/1906.02825
-            # Currently only implemented for models with natural image inputs.
-            # of the model's fully differentiable structure. Refer to this paper for
-            # more details: https://arxiv.org/abs/1906.02825
-            # Currently only implemented for models with natural image inputs.
-          "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-              # A good value to start is 50 and gradually increase until the
-              # sum to diff property is met within the desired error range.
-        },
-        "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
-            # contribute to the label being predicted. A sampling strategy is used to
-            # approximate the value rather than considering all subsets of features.
-            # contribute to the label being predicted. A sampling strategy is used to
-            # approximate the value rather than considering all subsets of features.
-          "numPaths": 42, # The number of feature permutations to consider when approximating the
-              # Shapley values.
-        },
-        "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
-            # of the model's fully differentiable structure. Refer to this paper for
-            # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
-            # of the model's fully differentiable structure. Refer to this paper for
-            # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
-          "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-              # A good value to start is 50 and gradually increase until the
-              # sum to diff property is met within the desired error range.
-        },
-      },
-      "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
-          # requests that do not specify a version.
-          #
-          # You can change the default version by calling
-          # projects.methods.versions.setDefault.
-      "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
-          # applies to online prediction service. If this field is not specified, it
-          # defaults to `mls1-c1-m2`.
-          #
-          # Online prediction supports the following machine types:
-          #
-          # * `mls1-c1-m2`
-          # * `mls1-c4-m2`
-          # * `n1-standard-2`
-          # * `n1-standard-4`
-          # * `n1-standard-8`
-          # * `n1-standard-16`
-          # * `n1-standard-32`
-          # * `n1-highmem-2`
-          # * `n1-highmem-4`
-          # * `n1-highmem-8`
-          # * `n1-highmem-16`
-          # * `n1-highmem-32`
-          # * `n1-highcpu-2`
-          # * `n1-highcpu-4`
-          # * `n1-highcpu-8`
-          # * `n1-highcpu-16`
-          # * `n1-highcpu-32`
-          #
-          # `mls1-c1-m2` is generally available. All other machine types are available
-          # in beta. Learn more about the [differences between machine
-          # types](/ml-engine/docs/machine-types-online-prediction).
-      "description": "A String", # Optional. The description specified for the version when it was created.
-      "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
-          #
-          # For more information, see the
-          # [runtime version list](/ml-engine/docs/runtime-version-list) and
-          # [how to manage runtime versions](/ml-engine/docs/versioning).
-      "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
-          # model. You should generally use `auto_scaling` with an appropriate
-          # `min_nodes` instead, but this option is available if you want more
-          # predictable billing. Beware that latency and error rates will increase
-          # if the traffic exceeds that capability of the system to serve it based
-          # on the selected number of nodes.
-        "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
-            # starting from the time the model is deployed, so the cost of operating
-            # this model will be proportional to `nodes` * number of hours since
-            # last billing cycle plus the cost for each prediction performed.
-      },
-      "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
-      "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
-          # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
-          # `XGBOOST`. If you do not specify a framework, AI Platform
-          # will analyze files in the deployment_uri to determine a framework. If you
-          # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
-          # of the model to 1.4 or greater.
-          #
-          # Do **not** specify a framework if you're deploying a [custom
-          # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
-          #
-          # If you specify a [Compute Engine (N1) machine
-          # type](/ml-engine/docs/machine-types-online-prediction) in the
-          # `machineType` field, you must specify `TENSORFLOW`
-          # for the framework.
-      "createTime": "A String", # Output only. The time the version was created.
-      "name": "A String", # Required. The name specified for the version when it was created.
-          #
-          # The version name must be unique within the model it is created in.
-      "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
-          # response to increases and decreases in traffic. Care should be
-          # taken to ramp up traffic according to the model's ability to scale
-          # or you will start seeing increases in latency and 429 response codes.
-          #
-          # Note that you cannot use AutoScaling if your version uses
-          # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use specify
-          # `manual_scaling`.
-        "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
-            # nodes are always up, starting from the time the model is deployed.
-            # Therefore, the cost of operating this model will be at least
-            # `rate` * `min_nodes` * number of hours since last billing cycle,
-            # where `rate` is the cost per node-hour as documented in the
-            # [pricing guide](/ml-engine/docs/pricing),
-            # even if no predictions are performed. There is additional cost for each
-            # prediction performed.
-            #
-            # Unlike manual scaling, if the load gets too heavy for the nodes
-            # that are up, the service will automatically add nodes to handle the
-            # increased load as well as scale back as traffic drops, always maintaining
-            # at least `min_nodes`. You will be charged for the time in which additional
-            # nodes are used.
-            #
-            # If `min_nodes` is not specified and AutoScaling is used with a [legacy
-            # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
-            # `min_nodes` defaults to 0, in which case, when traffic to a model stops
-            # (and after a cool-down period), nodes will be shut down and no charges will
-            # be incurred until traffic to the model resumes.
-            #
-            # If `min_nodes` is not specified and AutoScaling is used with a [Compute
-            # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
-            # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
-            # Compute Engine machine type.
-            #
-            # Note that you cannot use AutoScaling if your version uses
-            # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
-            # ManualScaling.
-            #
-            # You can set `min_nodes` when creating the model version, and you can also
-            # update `min_nodes` for an existing version:
-            # &lt;pre&gt;
-            # update_body.json:
-            # {
-            #   'autoScaling': {
-            #     'minNodes': 5
-            #   }
-            # }
-            # &lt;/pre&gt;
-            # HTTP request:
-            # &lt;pre style="max-width: 626px;"&gt;
-            # PATCH
-            # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
-            # -d @./update_body.json
-            # &lt;/pre&gt;
-      },
-      "pythonVersion": "A String", # Required. The version of Python used in prediction.
-          #
-          # The following Python versions are available:
-          #
-          # * Python '3.7' is available when `runtime_version` is set to '1.15' or
-          #   later.
-          # * Python '3.5' is available when `runtime_version` is set to a version
-          #   from '1.4' to '1.14'.
-          # * Python '2.7' is available when `runtime_version` is set to '1.15' or
-          #   earlier.
-          #
-          # Read more about the Python versions available for [each runtime
-          # version](/ml-engine/docs/runtime-version-list).
-      "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
-          # projects.models.versions.patch
-          # request. Specifying it in a
-          # projects.models.versions.create
-          # request has no effect.
-          #
-          # Configures the request-response pair logging on predictions from this
-          # Version.
-          # Online prediction requests to a model version and the responses to these
-          # requests are converted to raw strings and saved to the specified BigQuery
-          # table. Logging is constrained by [BigQuery quotas and
-          # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
-          # AI Platform Prediction does not log request-response pairs, but it continues
-          # to serve predictions.
-          #
-          # If you are using [continuous
-          # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
-          # specify this configuration manually. Setting up continuous evaluation
-          # automatically enables logging of request-response pairs.
-        "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
-            # For example, if you want to log 10% of requests, enter `0.1`. The sampling
-            # window is the lifetime of the model version. Defaults to 0.
-        "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
-            # "&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;"
-            #
-            # The specified table must already exist, and the "Cloud ML Service Agent"
-            # for your project must have permission to write to it. The table must have
-            # the following [schema](/bigquery/docs/schemas):
-            #
-            # &lt;table&gt;
-            #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style="display: table-cell"&gt;Type&lt;/th&gt;
-            #     &lt;th style="display: table-cell"&gt;Mode&lt;/th&gt;&lt;/tr&gt;
-            #   &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
-            #   &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
-            #   &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
-            #   &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
-            #   &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
-            #   &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
-            # &lt;/table&gt;
-      },
-    },
-    "onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to StackDriver
-        # Logging. These logs are like standard server access logs, containing
-        # information like timestamp and latency for each request. Note that
-        # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
-        # your project receives prediction requests at a high queries per second rate
-        # (QPS). Estimate your costs before enabling this option.
-        #
-        # Default is false.
-    "description": "A String", # Optional. The description specified for the model when it was created.
-  }</pre>
+    }</pre>
 </div>
 
 <div class="method">
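The Version schema above documents updating `autoScaling.minNodes` with a PATCH carrying `update_mask=autoScaling.minNodes` and a body of `{'autoScaling': {'minNodes': 5}}`. As a hedged sketch (the project, model, and version names are placeholders, and the helper is our own, not part of the client library), the request pieces can be assembled like this:

```python
# Sketch only: assemble the name, update mask, and body for the
# documented min_nodes PATCH. An actual call would go through the
# discovery client, roughly:
#   ml.projects().models().versions().patch(
#       name=name, updateMask=mask, body=body).execute()

def min_nodes_patch(version_name, min_nodes):
    """Return (name, update_mask, body) matching the documented
    PATCH .../v1/{name}?update_mask=autoScaling.minNodes request."""
    body = {"autoScaling": {"minNodes": min_nodes}}
    return version_name, "autoScaling.minNodes", body

# Placeholder resource name, mirroring the update_body.json example.
name, mask, body = min_nodes_patch(
    "projects/my-project/models/my-model/versions/v1", 5)
```

The body matches the `update_body.json` shown in the `autoScaling` field docs; only the fields named in the update mask are changed by the service.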
@@ -1393,6 +1393,10 @@
 Requests for policies with any conditional bindings must specify version 3.
 Policies without any conditional bindings may specify any valid value or
 leave the field unset.
+
+To learn which resources support conditions in their IAM policies, see the
+[IAM
+documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
   x__xgafv: string, V1 error format.
     Allowed values
      2 - v2 error format
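The added note above states the rule for `options_requestedPolicyVersion`: policies with conditional bindings must be requested with version 3, while unconditional policies may use any valid value. A minimal sketch of that rule (the `ml` service handle and resource name in the comment are placeholders):

```python
# Sketch only: pick requestedPolicyVersion per the documented rule.
# Requests for policies with any conditional bindings must specify 3.

def requested_policy_version(expect_conditions):
    """Return the options_requestedPolicyVersion to send."""
    return 3 if expect_conditions else 1

version = requested_policy_version(expect_conditions=True)

# A real call would look roughly like:
# resource = "projects/my-project/models/my-model"   # placeholder
# policy = ml.projects().models().getIamPolicy(
#     resource=resource,
#     options_requestedPolicyVersion=version).execute()
```

Requesting version 3 is safe even when no conditions exist; the risk runs the other way, since a version 1 request against a conditional policy loses the conditions.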
@@ -1411,36 +1415,40 @@
       # permissions; each `role` can be an IAM predefined role or a user-created
       # custom role.
       #
-      # Optionally, a `binding` can specify a `condition`, which is a logical
-      # expression that allows access to a resource only if the expression evaluates
-      # to `true`. A condition can add constraints based on attributes of the
-      # request, the resource, or both.
+      # For some types of Google Cloud resources, a `binding` can also specify a
+      # `condition`, which is a logical expression that allows access to a resource
+      # only if the expression evaluates to `true`. A condition can add constraints
+      # based on attributes of the request, the resource, or both. To learn which
+      # resources support conditions in their IAM policies, see the
+      # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
       #
       # **JSON example:**
       #
       #     {
-      #       "bindings": [
+      #       &quot;bindings&quot;: [
       #         {
-      #           "role": "roles/resourcemanager.organizationAdmin",
-      #           "members": [
-      #             "user:mike@example.com",
-      #             "group:admins@example.com",
-      #             "domain:google.com",
-      #             "serviceAccount:my-project-id@appspot.gserviceaccount.com"
+      #           &quot;role&quot;: &quot;roles/resourcemanager.organizationAdmin&quot;,
+      #           &quot;members&quot;: [
+      #             &quot;user:mike@example.com&quot;,
+      #             &quot;group:admins@example.com&quot;,
+      #             &quot;domain:google.com&quot;,
+      #             &quot;serviceAccount:my-project-id@appspot.gserviceaccount.com&quot;
       #           ]
       #         },
       #         {
-      #           "role": "roles/resourcemanager.organizationViewer",
-      #           "members": ["user:eve@example.com"],
-      #           "condition": {
-      #             "title": "expirable access",
-      #             "description": "Does not grant access after Sep 2020",
-      #             "expression": "request.time &lt; timestamp('2020-10-01T00:00:00.000Z')",
+      #           &quot;role&quot;: &quot;roles/resourcemanager.organizationViewer&quot;,
+      #           &quot;members&quot;: [
+      #             &quot;user:eve@example.com&quot;
+      #           ],
+      #           &quot;condition&quot;: {
+      #             &quot;title&quot;: &quot;expirable access&quot;,
+      #             &quot;description&quot;: &quot;Does not grant access after Sep 2020&quot;,
+      #             &quot;expression&quot;: &quot;request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)&quot;,
       #           }
       #         }
       #       ],
-      #       "etag": "BwWWja0YfJA=",
-      #       "version": 3
+      #       &quot;etag&quot;: &quot;BwWWja0YfJA=&quot;,
+      #       &quot;version&quot;: 3
       #     }
       #
       # **YAML example:**
@@ -1458,19 +1466,190 @@
       #       condition:
       #         title: expirable access
       #         description: Does not grant access after Sep 2020
-      #         expression: request.time &lt; timestamp('2020-10-01T00:00:00.000Z')
+      #         expression: request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)
       #     - etag: BwWWja0YfJA=
       #     - version: 3
       #
       # For a description of IAM and its features, see the
       # [IAM documentation](https://cloud.google.com/iam/docs/).
-    "bindings": [ # Associates a list of `members` to a `role`. Optionally, may specify a
+    &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
+        # prevent simultaneous updates of a policy from overwriting each other.
+        # It is strongly suggested that systems make use of the `etag` in the
+        # read-modify-write cycle to perform policy updates in order to avoid race
+        # conditions: An `etag` is returned in the response to `getIamPolicy`, and
+        # systems are expected to put that etag in the request to `setIamPolicy` to
+        # ensure that their change will be applied to the same version of the policy.
+        #
+        # **Important:** If you use IAM Conditions, you must include the `etag` field
+        # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
+        # you to overwrite a version `3` policy with a version `1` policy, and all of
+        # the conditions in the version `3` policy are lost.
+    &quot;version&quot;: 42, # Specifies the format of the policy.
+        #
+        # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
+        # are rejected.
+        #
+        # Any operation that affects conditional role bindings must specify version
+        # `3`. This requirement applies to the following operations:
+        #
+        # * Getting a policy that includes a conditional role binding
+        # * Adding a conditional role binding to a policy
+        # * Changing a conditional role binding in a policy
+        # * Removing any role binding, with or without a condition, from a policy
+        #   that includes conditions
+        #
+        # **Important:** If you use IAM Conditions, you must include the `etag` field
+        # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
+        # you to overwrite a version `3` policy with a version `1` policy, and all of
+        # the conditions in the version `3` policy are lost.
+        #
+        # If a policy does not include any conditions, operations on that policy may
+        # specify any valid version or leave the field unset.
+        #
+        # To learn which resources support conditions in their IAM policies, see the
+        # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
+    &quot;auditConfigs&quot;: [ # Specifies cloud audit logging configuration for this policy.
+      { # Specifies the audit configuration for a service.
+          # The configuration determines which permission types are logged, and what
+          # identities, if any, are exempted from logging.
+          # An AuditConfig must have one or more AuditLogConfigs.
+          #
+          # If there are AuditConfigs for both `allServices` and a specific service,
+          # the union of the two AuditConfigs is used for that service: the log_types
+          # specified in each AuditConfig are enabled, and the exempted_members in each
+          # AuditLogConfig are exempted.
+          #
+          # Example Policy with multiple AuditConfigs:
+          #
+          #     {
+          #       &quot;audit_configs&quot;: [
+          #         {
+          #           &quot;service&quot;: &quot;allServices&quot;
+          #           &quot;audit_log_configs&quot;: [
+          #             {
+          #               &quot;log_type&quot;: &quot;DATA_READ&quot;,
+          #               &quot;exempted_members&quot;: [
+          #                 &quot;user:jose@example.com&quot;
+          #               ]
+          #             },
+          #             {
+          #               &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
+          #             },
+          #             {
+          #               &quot;log_type&quot;: &quot;ADMIN_READ&quot;,
+          #             }
+          #           ]
+          #         },
+          #         {
+          #           &quot;service&quot;: &quot;sampleservice.googleapis.com&quot;
+          #           &quot;audit_log_configs&quot;: [
+          #             {
+          #               &quot;log_type&quot;: &quot;DATA_READ&quot;,
+          #             },
+          #             {
+          #               &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
+          #               &quot;exempted_members&quot;: [
+          #                 &quot;user:aliya@example.com&quot;
+          #               ]
+          #             }
+          #           ]
+          #         }
+          #       ]
+          #     }
+          #
+          # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
+          # logging. It also exempts jose@example.com from DATA_READ logging, and
+          # aliya@example.com from DATA_WRITE logging.
+        &quot;service&quot;: &quot;A String&quot;, # Specifies a service that will be enabled for audit logging.
+            # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
+            # `allServices` is a special value that covers all services.
+        &quot;auditLogConfigs&quot;: [ # The configuration for logging of each type of permission.
+          { # Provides the configuration for logging a type of permissions.
+              # Example:
+              #
+              #     {
+              #       &quot;audit_log_configs&quot;: [
+              #         {
+              #           &quot;log_type&quot;: &quot;DATA_READ&quot;,
+              #           &quot;exempted_members&quot;: [
+              #             &quot;user:jose@example.com&quot;
+              #           ]
+              #         },
+              #         {
+              #           &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
+              #         }
+              #       ]
+              #     }
+              #
+              # This enables &#x27;DATA_READ&#x27; and &#x27;DATA_WRITE&#x27; logging, while exempting
+              # jose@example.com from DATA_READ logging.
+            &quot;exemptedMembers&quot;: [ # Specifies the identities that do not cause logging for this type of
+                # permission.
+                # Follows the same format of Binding.members.
+              &quot;A String&quot;,
+            ],
+            &quot;logType&quot;: &quot;A String&quot;, # The log type that this config enables.
+          },
+        ],
+      },
+    ],
+    &quot;bindings&quot;: [ # Associates a list of `members` to a `role`. Optionally, may specify a
         # `condition` that determines how and when the `bindings` are applied. Each
         # of the `bindings` must contain at least one member.
       { # Associates `members` with a `role`.
-        "role": "A String", # Role that is assigned to `members`.
-            # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
-        "members": [ # Specifies the identities requesting access for a Cloud Platform resource.
+        &quot;condition&quot;: { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
+            #
+            # If the condition evaluates to `true`, then this binding applies to the
+            # current request.
+            #
+            # If the condition evaluates to `false`, then this binding does not apply to
+            # the current request. However, a different role binding might grant the same
+            # role to one or more of the members in this binding.
+            #
+            # To learn which resources support conditions in their IAM policies, see the
+            # [IAM
+            # documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
+            # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
+            # are documented at https://github.com/google/cel-spec.
+            #
+            # Example (Comparison):
+            #
+            #     title: &quot;Summary size limit&quot;
+            #     description: &quot;Determines if a summary is less than 100 chars&quot;
+            #     expression: &quot;document.summary.size() &lt; 100&quot;
+            #
+            # Example (Equality):
+            #
+            #     title: &quot;Requestor is owner&quot;
+            #     description: &quot;Determines if requestor is the document owner&quot;
+            #     expression: &quot;document.owner == request.auth.claims.email&quot;
+            #
+            # Example (Logic):
+            #
+            #     title: &quot;Public documents&quot;
+            #     description: &quot;Determine whether the document should be publicly visible&quot;
+            #     expression: &quot;document.type != &#x27;private&#x27; &amp;&amp; document.type != &#x27;internal&#x27;&quot;
+            #
+            # Example (Data Manipulation):
+            #
+            #     title: &quot;Notification string&quot;
+            #     description: &quot;Create a notification string with a timestamp.&quot;
+            #     expression: &quot;&#x27;New message received at &#x27; + string(document.create_time)&quot;
+            #
+            # The exact variables and functions that may be referenced within an expression
+            # are determined by the service that evaluates it. See the service
+            # documentation for additional information.
+          &quot;title&quot;: &quot;A String&quot;, # Optional. Title for the expression, i.e. a short string describing
+              # its purpose. This can be used e.g. in UIs which allow to enter the
+              # expression.
+          &quot;location&quot;: &quot;A String&quot;, # Optional. String indicating the location of the expression for error
+              # reporting, e.g. a file name and a position in the file.
+          &quot;description&quot;: &quot;A String&quot;, # Optional. Description of the expression. This is a longer text which
+              # describes the expression, e.g. when hovered over it in a UI.
+          &quot;expression&quot;: &quot;A String&quot;, # Textual representation of an expression in Common Expression Language
+              # syntax.
+        },
+        &quot;members&quot;: [ # Specifies the identities requesting access for a Cloud Platform resource.
             # `members` can have the following values:
             #
             # * `allUsers`: A special identifier that represents anyone who is
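The hunk above spells out two rules worth combining: the `etag` from `getIamPolicy` must be echoed back in `setIamPolicy`, and any operation touching a conditional binding must use policy version 3. A hedged sketch of that read-modify-write cycle on plain dicts (the helper name is ours; `etag`, `version`, `bindings`, and `condition` are the documented field names, and the example values come from the JSON example in this hunk):

```python
import copy

# Sketch only: add a binding to a fetched policy, preserving etag and
# bumping version to 3 when the binding carries a CEL condition.

def add_binding(fetched_policy, role, members, condition=None):
    """Return a new policy body suitable for setIamPolicy."""
    policy = copy.deepcopy(fetched_policy)
    binding = {"role": role, "members": list(members)}
    if condition is not None:
        binding["condition"] = condition
        policy["version"] = 3  # conditional bindings require version 3
    policy.setdefault("bindings", []).append(binding)
    return policy  # keeps the fetched etag for optimistic concurrency

fetched = {"etag": "BwWWja0YfJA=", "version": 1, "bindings": []}
updated = add_binding(
    fetched,
    "roles/resourcemanager.organizationViewer",
    ["user:eve@example.com"],
    condition={
        "title": "expirable access",
        "description": "Does not grant access after Sep 2020",
        "expression":
            "request.time < timestamp('2020-10-01T00:00:00.000Z')",
    })
```

Omitting the `etag` here would let a version 3 policy be overwritten by a version 1 policy, dropping every condition, which is exactly the failure mode the **Important** notes above warn about.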
@@ -1513,177 +1692,17 @@
             # * `domain:{domain}`: The G Suite domain (primary) that represents all the
             #    users of that domain. For example, `google.com` or `example.com`.
             #
-          "A String",
+          &quot;A String&quot;,
         ],
-        "condition": { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
-            # NOTE: An unsatisfied condition will not allow user access via current
-            # binding. Different bindings, including their conditions, are examined
-            # independently.
-            # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
-            # are documented at https://github.com/google/cel-spec.
-            #
-            # Example (Comparison):
-            #
-            #     title: "Summary size limit"
-            #     description: "Determines if a summary is less than 100 chars"
-            #     expression: "document.summary.size() &lt; 100"
-            #
-            # Example (Equality):
-            #
-            #     title: "Requestor is owner"
-            #     description: "Determines if requestor is the document owner"
-            #     expression: "document.owner == request.auth.claims.email"
-            #
-            # Example (Logic):
-            #
-            #     title: "Public documents"
-            #     description: "Determine whether the document should be publicly visible"
-            #     expression: "document.type != 'private' &amp;&amp; document.type != 'internal'"
-            #
-            # Example (Data Manipulation):
-            #
-            #     title: "Notification string"
-            #     description: "Create a notification string with a timestamp."
-            #     expression: "'New message received at ' + string(document.create_time)"
-            #
-            # The exact variables and functions that may be referenced within an expression
-            # are determined by the service that evaluates it. See the service
-            # documentation for additional information.
-          "description": "A String", # Optional. Description of the expression. This is a longer text which
-              # describes the expression, e.g. when hovered over it in a UI.
-          "expression": "A String", # Textual representation of an expression in Common Expression Language
-              # syntax.
-          "location": "A String", # Optional. String indicating the location of the expression for error
-              # reporting, e.g. a file name and a position in the file.
-          "title": "A String", # Optional. Title for the expression, i.e. a short string describing
-              # its purpose. This can be used e.g. in UIs which allow to enter the
-              # expression.
-        },
+        &quot;role&quot;: &quot;A String&quot;, # Role that is assigned to `members`.
+            # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
       },
     ],
-    "auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
-      { # Specifies the audit configuration for a service.
-          # The configuration determines which permission types are logged, and what
-          # identities, if any, are exempted from logging.
-          # An AuditConfig must have one or more AuditLogConfigs.
-          #
-          # If there are AuditConfigs for both `allServices` and a specific service,
-          # the union of the two AuditConfigs is used for that service: the log_types
-          # specified in each AuditConfig are enabled, and the exempted_members in each
-          # AuditLogConfig are exempted.
-          #
-          # Example Policy with multiple AuditConfigs:
-          #
-          #     {
-          #       "audit_configs": [
-          #         {
-          #           "service": "allServices"
-          #           "audit_log_configs": [
-          #             {
-          #               "log_type": "DATA_READ",
-          #               "exempted_members": [
-          #                 "user:jose@example.com"
-          #               ]
-          #             },
-          #             {
-          #               "log_type": "DATA_WRITE",
-          #             },
-          #             {
-          #               "log_type": "ADMIN_READ",
-          #             }
-          #           ]
-          #         },
-          #         {
-          #           "service": "sampleservice.googleapis.com"
-          #           "audit_log_configs": [
-          #             {
-          #               "log_type": "DATA_READ",
-          #             },
-          #             {
-          #               "log_type": "DATA_WRITE",
-          #               "exempted_members": [
-          #                 "user:aliya@example.com"
-          #               ]
-          #             }
-          #           ]
-          #         }
-          #       ]
-          #     }
-          #
-          # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
-          # logging. It also exempts jose@example.com from DATA_READ logging, and
-          # aliya@example.com from DATA_WRITE logging.
-        "auditLogConfigs": [ # The configuration for logging of each type of permission.
-          { # Provides the configuration for logging a type of permissions.
-              # Example:
-              #
-              #     {
-              #       "audit_log_configs": [
-              #         {
-              #           "log_type": "DATA_READ",
-              #           "exempted_members": [
-              #             "user:jose@example.com"
-              #           ]
-              #         },
-              #         {
-              #           "log_type": "DATA_WRITE",
-              #         }
-              #       ]
-              #     }
-              #
-              # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
-              # jose@example.com from DATA_READ logging.
-            "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
-                # permission.
-                # Follows the same format of Binding.members.
-              "A String",
-            ],
-            "logType": "A String", # The log type that this config enables.
-          },
-        ],
-        "service": "A String", # Specifies a service that will be enabled for audit logging.
-            # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
-            # `allServices` is a special value that covers all services.
-      },
-    ],
-    "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
-        # prevent simultaneous updates of a policy from overwriting each other.
-        # It is strongly suggested that systems make use of the `etag` in the
-        # read-modify-write cycle to perform policy updates in order to avoid race
-        # conditions: An `etag` is returned in the response to `getIamPolicy`, and
-        # systems are expected to put that etag in the request to `setIamPolicy` to
-        # ensure that their change will be applied to the same version of the policy.
-        #
-        # **Important:** If you use IAM Conditions, you must include the `etag` field
-        # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
-        # you to overwrite a version `3` policy with a version `1` policy, and all of
-        # the conditions in the version `3` policy are lost.
-    "version": 42, # Specifies the format of the policy.
-        #
-        # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
-        # are rejected.
-        #
-        # Any operation that affects conditional role bindings must specify version
-        # `3`. This requirement applies to the following operations:
-        #
-        # * Getting a policy that includes a conditional role binding
-        # * Adding a conditional role binding to a policy
-        # * Changing a conditional role binding in a policy
-        # * Removing any role binding, with or without a condition, from a policy
-        #   that includes conditions
-        #
-        # **Important:** If you use IAM Conditions, you must include the `etag` field
-        # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
-        # you to overwrite a version `3` policy with a version `1` policy, and all of
-        # the conditions in the version `3` policy are lost.
-        #
-        # If a policy does not include any conditions, operations on that policy may
-        # specify any valid version or leave the field unset.
   }</pre>
 </div>
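The `list` method below returns one page of models at a time, and `list_next(previous_request, previous_response)` fetches the following page. As a rough sketch of how client code typically consumes the pair, the helper below iterates every model across pages; the `models_resource` handle (for example `service.projects().models()`) and the parent name are assumptions for illustration, not part of the generated API surface:

```python
def iter_models(models_resource, parent):
    """Yield every model in `parent`, transparently walking all pages.

    `models_resource` is assumed to expose the generated `list` /
    `list_next` methods; `list_next` returns None when no pages remain.
    """
    request = models_resource.list(parent=parent, pageSize=20)
    while request is not None:
        response = request.execute()
        # Each page carries its models under the "models" key.
        for model in response.get("models", []):
            yield model
        # list_next builds the follow-up request from next_page_token,
        # or returns None once the listing is exhausted.
        request = models_resource.list_next(request, response)
```

A caller would then write `for model in iter_models(service.projects().models(), "projects/my-project"): ...` instead of managing page tokens by hand.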
 
 <div class="method">
-    <code class="details" id="list">list(parent, pageSize=None, pageToken=None, x__xgafv=None, filter=None)</code>
+    <code class="details" id="list">list(parent, filter=None, pageToken=None, pageSize=None, x__xgafv=None)</code>
   <pre>Lists the models in a project.
 
 Each project can contain multiple models, and each model can have multiple
@@ -1694,417 +1713,417 @@
 
 Args:
   parent: string, Required. The name of the project whose models are to be listed. (required)
-  pageSize: integer, Optional. The number of models to retrieve per "page" of results. If there
-are more remaining results than this number, the response message will
-contain a valid value in the `next_page_token` field.
-
-The default value is 20, and the maximum page size is 100.
+  filter: string, Optional. Specifies the subset of models to retrieve.
   pageToken: string, Optional. A page token to request the next page of results.
 
 You get the token from the `next_page_token` field of the response from
 the previous call.
+  pageSize: integer, Optional. The number of models to retrieve per &quot;page&quot; of results. If there
+are more remaining results than this number, the response message will
+contain a valid value in the `next_page_token` field.
+
+The default value is 20, and the maximum page size is 100.
   x__xgafv: string, V1 error format.
     Allowed values
       1 - v1 error format
       2 - v2 error format
-  filter: string, Optional. Specifies the subset of models to retrieve.
 
 Returns:
   An object of the form:
 
     { # Response message for the ListModels method.
-    "nextPageToken": "A String", # Optional. Pass this token as the `page_token` field of the request for a
-        # subsequent call.
-    "models": [ # The list of models.
+    &quot;models&quot;: [ # The list of models.
       { # Represents a machine learning solution.
-          #
-          # A model can have multiple versions, each of which is a deployed, trained
-          # model ready to receive prediction requests. The model itself is just a
-          # container.
-        "name": "A String", # Required. The name specified for the model when it was created.
             #
-            # The model name must be unique within the project it is created in.
-        "onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
-            # streams to Stackdriver Logging. These can be more verbose than the standard
-            # access logs (see `onlinePredictionLogging`) and can incur higher cost.
-            # However, they are helpful for debugging. Note that
-            # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
-            # your project receives prediction requests at a high QPS. Estimate your
-            # costs before enabling this option.
-            #
-            # Default is false.
-        "labels": { # Optional. One or more labels that you can add, to organize your models.
-            # Each label is a key-value pair, where both the key and the value are
-            # arbitrary strings that you supply.
-            # For more information, see the documentation on
-            # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
-          "a_key": "A String",
-        },
-        "regions": [ # Optional. The list of regions where the model is going to be deployed.
-            # Only one region per model is supported.
-            # Defaults to 'us-central1' if nothing is set.
-            # See the &lt;a href="/ml-engine/docs/tensorflow/regions"&gt;available regions&lt;/a&gt;
-            # for AI Platform services.
-            # Note:
-            # *   No matter where a model is deployed, it can always be accessed by
-            #     users from anywhere, both for online and batch prediction.
-            # *   The region for a batch prediction job is set by the region field when
-            #     submitting the batch prediction job and does not take its value from
-            #     this field.
-          "A String",
-        ],
-        "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
-            # prevent simultaneous updates of a model from overwriting each other.
-            # It is strongly suggested that systems make use of the `etag` in the
-            # read-modify-write cycle to perform model updates in order to avoid race
-            # conditions: An `etag` is returned in the response to `GetModel`, and
-            # systems are expected to put that etag in the request to `UpdateModel` to
-            # ensure that their change will be applied to the model as intended.
-        "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
-            # handle prediction requests that do not specify a version.
-            #
-            # You can change the default version by calling
-            # projects.models.versions.setDefault.
-            #
-            # Each version is a trained model deployed in the cloud, ready to handle
-            # prediction requests. A model can have multiple versions. You can get
-            # information about all of the versions of a given model by calling
-            # projects.models.versions.list.
-          "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
-              # Only specify this field if you have specified a Compute Engine (N1) machine
-              # type in the `machineType` field. Learn more about [using GPUs for online
-              # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-              # Note that the AcceleratorConfig can be used in both Jobs and Versions.
-              # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
-              # [accelerators for online
-              # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-            "count": "A String", # The number of accelerators to attach to each machine running the job.
-            "type": "A String", # The type of accelerator to use.
-          },
-          "labels": { # Optional. One or more labels that you can add, to organize your model
-              # versions. Each label is a key-value pair, where both the key and the value
-              # are arbitrary strings that you supply.
+            # A model can have multiple versions, each of which is a deployed, trained
+            # model ready to receive prediction requests. The model itself is just a
+            # container.
+          &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your models.
+              # Each label is a key-value pair, where both the key and the value are
+              # arbitrary strings that you supply.
               # For more information, see the documentation on
-              # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
-            "a_key": "A String",
+              # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
+            &quot;a_key&quot;: &quot;A String&quot;,
           },
-          "predictionClass": "A String", # Optional. The fully qualified name
-              # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
-              # the Predictor interface described in this reference field. The module
-              # containing this class should be included in a package provided to the
-              # [`packageUris` field](#Version.FIELDS.package_uris).
+          &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the model when it was created.
               #
-              # Specify this field if and only if you are deploying a [custom prediction
-              # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
-              # If you specify this field, you must set
-              # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
-              # you must set `machineType` to a [legacy (MLS1)
-              # machine type](/ml-engine/docs/machine-types-online-prediction).
+              # The model name must be unique within the project it is created in.
+          &quot;defaultVersion&quot;: { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
+              # handle prediction requests that do not specify a version.
               #
-              # The following code sample provides the Predictor interface:
+              # You can change the default version by calling
+              # projects.models.versions.setDefault.
               #
-              # &lt;pre style="max-width: 626px;"&gt;
-              # class Predictor(object):
-              # """Interface for constructing custom predictors."""
+              # Each version is a trained model deployed in the cloud, ready to handle
+              # prediction requests. A model can have multiple versions. You can get
+              # information about all of the versions of a given model by calling
+              # projects.models.versions.list.
+            &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
+            &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+                # model. You should generally use `auto_scaling` with an appropriate
+                # `min_nodes` instead, but this option is available if you want more
+                # predictable billing. Beware that latency and error rates will increase
+                # if the traffic exceeds the capability of the system to serve it based
+                # on the selected number of nodes.
+              &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
+                  # starting from the time the model is deployed, so the cost of operating
+                  # this model will be proportional to `nodes` * number of hours since
+                  # last billing cycle plus the cost for each prediction performed.
+            },
+            &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
+                #
+                # The version name must be unique within the model it is created in.
+            &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
+            &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
+                #
+                # The following Python versions are available:
+                #
+                # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+                #   later.
+                # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
+                #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
+                # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+                #   earlier.
+                #
+                # Read more about the Python versions available for [each runtime
+                # version](/ml-engine/docs/runtime-version-list).
+            &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
+            &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
+                # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
+                # the Predictor interface described in this reference field. The module
+                # containing this class should be included in a package provided to the
+                # [`packageUris` field](#Version.FIELDS.package_uris).
+                #
+                # Specify this field if and only if you are deploying a [custom prediction
+                # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
+                # If you specify this field, you must set
+                # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
+                # you must set `machineType` to a [legacy (MLS1)
+                # machine type](/ml-engine/docs/machine-types-online-prediction).
+                #
+                # The following code sample provides the Predictor interface:
+                #
+                # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
+                # class Predictor(object):
+                # &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
+                #
+                # def predict(self, instances, **kwargs):
+                #     &quot;&quot;&quot;Performs custom prediction.
+                #
+                #     Instances are the decoded values from the request. They have already
+                #     been deserialized from JSON.
+                #
+                #     Args:
+                #         instances: A list of prediction input instances.
+                #         **kwargs: A dictionary of keyword args provided as additional
+                #             fields on the predict request body.
+                #
+                #     Returns:
+                #         A list of outputs containing the prediction results. This list must
+                #         be JSON serializable.
+                #     &quot;&quot;&quot;
+                #     raise NotImplementedError()
+                #
+                # @classmethod
+                # def from_path(cls, model_dir):
+                #     &quot;&quot;&quot;Creates an instance of Predictor using the given path.
+                #
+                #     Loading of the predictor should be done in this method.
+                #
+                #     Args:
+                #         model_dir: The local directory that contains the exported model
+                #             file along with any additional files uploaded when creating the
+                #             version resource.
+                #
+                #     Returns:
+                #         An instance implementing this Predictor class.
+                #     &quot;&quot;&quot;
+                #     raise NotImplementedError()
+                # &lt;/pre&gt;
+                #
+                # Learn more about [the Predictor interface and custom prediction
+                # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
+            &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+                # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
+                # or [scikit-learn pipelines with custom
+                # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
+                #
+                # For a custom prediction routine, one of these packages must contain your
+                # Predictor class (see
+                # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
+                # include any dependencies that your Predictor or scikit-learn pipeline
+                # uses that are not already included in your selected [runtime
+                # version](/ml-engine/docs/tensorflow/runtime-version-list).
+                #
+                # If you specify this field, you must also set
+                # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
+              &quot;A String&quot;,
+            ],
+            &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
+                # Some explanation features require additional metadata to be loaded
+                # as part of the model payload.
+                # There are two feature attribution methods supported for TensorFlow models:
+                # integrated gradients and sampled Shapley.
+                # [Learn more about feature
+                # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
+              &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
+                  # of the model&#x27;s fully differentiable structure. Refer to this paper for
+                  # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
+                  # of the model&#x27;s fully differentiable structure. Refer to this paper for
+                  # more details: https://arxiv.org/abs/1703.01365
+                &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+                    # A good value to start is 50 and gradually increase until the
+                    # sum to diff property is met within the desired error range.
+              },
+              &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
+                  # contribute to the label being predicted. A sampling strategy is used to
+                  # approximate the value rather than considering all subsets of features.
+                  # contribute to the label being predicted. A sampling strategy is used to
+                  # approximate the value rather than considering all subsets of features.
+                &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
+                    # Shapley values.
+              },
+              &quot;xraiAttribution&quot;: { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
+                  # of the model&#x27;s fully differentiable structure. Refer to this paper for
+                  # more details: https://arxiv.org/abs/1906.02825
+                  # Currently only implemented for models with natural image inputs.
+                  # of the model&#x27;s fully differentiable structure. Refer to this paper for
+                  # more details: https://arxiv.org/abs/1906.02825
+                  # Currently only implemented for models with natural image inputs.
+                &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+                    # A good value to start is 50 and gradually increase until the
+                    # sum to diff property is met within the desired error range.
+              },
+            },
+            &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
+                # create the version. See the
+                # [guide to model
+                # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
+                # information.
+                #
+                # When passing Version to
+                # projects.models.versions.create
+                # the model service uses the specified location as the source of the model.
+                # Once deployed, the model version is hosted by the prediction service, so
+                # this location is useful only as a historical record.
+                # The total number of model files can&#x27;t exceed 1000.
+            &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+                # response to increases and decreases in traffic. Care should be
+                # taken to ramp up traffic according to the model&#x27;s ability to scale
+                # or you will start seeing increases in latency and 429 response codes.
+                #
+                # Note that you cannot use AutoScaling if your version uses
+                # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
+                # `manual_scaling`.
+              &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
+                  # nodes are always up, starting from the time the model is deployed.
+                  # Therefore, the cost of operating this model will be at least
+                  # `rate` * `min_nodes` * number of hours since last billing cycle,
+                  # where `rate` is the cost per node-hour as documented in the
+                  # [pricing guide](/ml-engine/docs/pricing),
+                  # even if no predictions are performed. There is additional cost for each
+                  # prediction performed.
+                  #
+                  # Unlike manual scaling, if the load gets too heavy for the nodes
+                  # that are up, the service will automatically add nodes to handle the
+                  # increased load as well as scale back as traffic drops, always maintaining
+                  # at least `min_nodes`. You will be charged for the time in which additional
+                  # nodes are used.
+                  #
+                  # If `min_nodes` is not specified and AutoScaling is used with a [legacy
+                  # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
+                  # `min_nodes` defaults to 0, in which case, when traffic to a model stops
+                  # (and after a cool-down period), nodes will be shut down and no charges will
+                  # be incurred until traffic to the model resumes.
+                  #
+                  # If `min_nodes` is not specified and AutoScaling is used with a [Compute
+                  # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
+                  # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
+                  # Compute Engine machine type.
+                  #
+                  # Note that you cannot use AutoScaling if your version uses
+                  # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
+                  # ManualScaling.
+                  #
+                  # You can set `min_nodes` when creating the model version, and you can also
+                  # update `min_nodes` for an existing version:
+                  # &lt;pre&gt;
+                  # update_body.json:
+                  # {
+                  #   &#x27;autoScaling&#x27;: {
+                  #     &#x27;minNodes&#x27;: 5
+                  #   }
+                  # }
+                  # &lt;/pre&gt;
+                  # HTTP request:
+                  # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
+                  # PATCH
+                  # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
+                  # -d @./update_body.json
+                  # &lt;/pre&gt;
+            },
+            &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your model
+                # versions. Each label is a key-value pair, where both the key and the value
+                # are arbitrary strings that you supply.
+                # For more information, see the documentation on
+                # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
+              &quot;a_key&quot;: &quot;A String&quot;,
+            },
+            &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
+            &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+                # projects.models.versions.patch
+                # request. Specifying it in a
+                # projects.models.versions.create
+                # request has no effect.
+                #
+                # Configures the request-response pair logging on predictions from this
+                # Version.
+                # Online prediction requests to a model version and the responses to these
+                # requests are converted to raw strings and saved to the specified BigQuery
+                # table. Logging is constrained by [BigQuery quotas and
+                # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
+                # AI Platform Prediction does not log request-response pairs, but it continues
+                # to serve predictions.
+                #
+                # If you are using [continuous
+                # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
+                # specify this configuration manually. Setting up continuous evaluation
+                # automatically enables logging of request-response pairs.
+              &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
+                  # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
+                  #
+                  # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
+                  # for your project must have permission to write to it. The table must have
+                  # the following [schema](/bigquery/docs/schemas):
+                  #
+                  # &lt;table&gt;
+                  #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
+                  #     &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
+                  #   &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
+                  #   &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
+                  #   &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
+                  #   &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
+                  #   &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
+                  #   &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
+                  # &lt;/table&gt;
+              &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+                  # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+                  # window is the lifetime of the model version. Defaults to 0.
+            },
+            &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
+            &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
+                # applies to online prediction service. If this field is not specified, it
+                # defaults to `mls1-c1-m2`.
+                #
+                # Online prediction supports the following machine types:
+                #
+                # * `mls1-c1-m2`
+                # * `mls1-c4-m2`
+                # * `n1-standard-2`
+                # * `n1-standard-4`
+                # * `n1-standard-8`
+                # * `n1-standard-16`
+                # * `n1-standard-32`
+                # * `n1-highmem-2`
+                # * `n1-highmem-4`
+                # * `n1-highmem-8`
+                # * `n1-highmem-16`
+                # * `n1-highmem-32`
+                # * `n1-highcpu-2`
+                # * `n1-highcpu-4`
+                # * `n1-highcpu-8`
+                # * `n1-highcpu-16`
+                # * `n1-highcpu-32`
+                #
+                # `mls1-c1-m2` is generally available. All other machine types are available
+                # in beta. Learn more about the [differences between machine
+                # types](/ml-engine/docs/machine-types-online-prediction).
+            &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
+                #
+                # For more information, see the
+                # [runtime version list](/ml-engine/docs/runtime-version-list) and
+                # [how to manage runtime versions](/ml-engine/docs/versioning).
+            &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
+            &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
+                # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+                # `XGBOOST`. If you do not specify a framework, AI Platform
+                # will analyze files in the deployment_uri to determine a framework. If you
+                # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+                # of the model to 1.4 or greater.
+                #
+                # Do **not** specify a framework if you&#x27;re deploying a [custom
+                # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+                #
+                # If you specify a [Compute Engine (N1) machine
+                # type](/ml-engine/docs/machine-types-online-prediction) in the
+                # `machineType` field, you must specify `TENSORFLOW`
+                # for the framework.
+            &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
+                # prevent simultaneous updates of a model from overwriting each other.
+                # It is strongly suggested that systems make use of the `etag` in the
+                # read-modify-write cycle to perform model updates in order to avoid race
+                # conditions: An `etag` is returned in the response to `GetVersion`, and
+                # systems are expected to put that etag in the request to `UpdateVersion` to
+                # ensure that their change will be applied to the model as intended.
+            &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
+                # requests that do not specify a version.
+                #
+                # You can change the default version by calling
+                # projects.models.versions.setDefault.
+            &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+                # Only specify this field if you have specified a Compute Engine (N1) machine
+                # type in the `machineType` field. Learn more about [using GPUs for online
+                # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+                # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+                # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+                # [accelerators for online
+                # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+              &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
+              &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
+            },
+          },
+          &quot;onlinePredictionConsoleLogging&quot;: True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
+              # streams to Stackdriver Logging. These can be more verbose than the standard
+              # access logs (see `onlinePredictionLogging`) and can incur higher cost.
+              # However, they are helpful for debugging. Note that
+              # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
+              # your project receives prediction requests at a high QPS. Estimate your
+              # costs before enabling this option.
               #
-              # def predict(self, instances, **kwargs):
-              #     """Performs custom prediction.
-              #
-              #     Instances are the decoded values from the request. They have already
-              #     been deserialized from JSON.
-              #
-              #     Args:
-              #         instances: A list of prediction input instances.
-              #         **kwargs: A dictionary of keyword args provided as additional
-              #             fields on the predict request body.
-              #
-              #     Returns:
-              #         A list of outputs containing the prediction results. This list must
-              #         be JSON serializable.
-              #     """
-              #     raise NotImplementedError()
-              #
-              # @classmethod
-              # def from_path(cls, model_dir):
-              #     """Creates an instance of Predictor using the given path.
-              #
-              #     Loading of the predictor should be done in this method.
-              #
-              #     Args:
-              #         model_dir: The local directory that contains the exported model
-              #             file along with any additional files uploaded when creating the
-              #             version resource.
-              #
-              #     Returns:
-              #         An instance implementing this Predictor class.
-              #     """
-              #     raise NotImplementedError()
-              # &lt;/pre&gt;
-              #
-              # Learn more about [the Predictor interface and custom prediction
-              # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
-          "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
-          "state": "A String", # Output only. The state of a version.
-          "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
-              # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
-              # or [scikit-learn pipelines with custom
-              # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
-              #
-              # For a custom prediction routine, one of these packages must contain your
-              # Predictor class (see
-              # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
-              # include any dependencies that your Predictor or scikit-learn pipeline
-              # uses that are not already included in your selected [runtime
-              # version](/ml-engine/docs/tensorflow/runtime-version-list).
-              #
-              # If you specify this field, you must also set
-              # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
-            "A String",
+              # Default is false.
+          &quot;regions&quot;: [ # Optional. The list of regions where the model is going to be deployed.
+              # Only one region per model is supported.
+              # Defaults to &#x27;us-central1&#x27; if nothing is set.
+              # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
+              # for AI Platform services.
+              # Note:
+              # *   No matter where a model is deployed, it can always be accessed by
+              #     users from anywhere, both for online and batch prediction.
+              # *   The region for a batch prediction job is set by the region field when
+              #     submitting the batch prediction job and does not take its value from
+              #     this field.
+            &quot;A String&quot;,
           ],
-          "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+          &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the model when it was created.
+          &quot;onlinePredictionLogging&quot;: True or False, # Optional. If true, online prediction access logs are sent to Stackdriver
+              # Logging. These logs are like standard server access logs, containing
+              # information like timestamp and latency for each request. Note that
+              # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
+              # your project receives prediction requests at a high queries per second rate
+              # (QPS). Estimate your costs before enabling this option.
+              #
+              # Default is false.
+          &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
               # prevent simultaneous updates of a model from overwriting each other.
               # It is strongly suggested that systems make use of the `etag` in the
               # read-modify-write cycle to perform model updates in order to avoid race
-              # conditions: An `etag` is returned in the response to `GetVersion`, and
-              # systems are expected to put that etag in the request to `UpdateVersion` to
+              # conditions: An `etag` is returned in the response to `GetModel`, and
+              # systems are expected to put that etag in the request to `UpdateModel` to
               # ensure that their change will be applied to the model as intended.
-          "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
-          "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
-              # create the version. See the
-              # [guide to model
-              # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
-              # information.
-              #
-              # When passing Version to
-              # projects.models.versions.create
-              # the model service uses the specified location as the source of the model.
-              # Once deployed, the model version is hosted by the prediction service, so
-              # this location is useful only as a historical record.
-              # The total number of model files can't exceed 1000.
-          "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
-              # Some explanation features require additional metadata to be loaded
-              # as part of the model payload.
-              # There are two feature attribution methods supported for TensorFlow models:
-              # integrated gradients and sampled Shapley.
-              # [Learn more about feature
-              # attributions.](/ml-engine/docs/ai-explanations/overview)
-            "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
-                # of the model's fully differentiable structure. Refer to this paper for
-                # more details: https://arxiv.org/abs/1906.02825
-                # Currently only implemented for models with natural image inputs.
-                # of the model's fully differentiable structure. Refer to this paper for
-                # more details: https://arxiv.org/abs/1906.02825
-                # Currently only implemented for models with natural image inputs.
-              "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-                  # A good value to start is 50 and gradually increase until the
-                  # sum to diff property is met within the desired error range.
-            },
-            "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
-                # contribute to the label being predicted. A sampling strategy is used to
-                # approximate the value rather than considering all subsets of features.
-                # contribute to the label being predicted. A sampling strategy is used to
-                # approximate the value rather than considering all subsets of features.
-              "numPaths": 42, # The number of feature permutations to consider when approximating the
-                  # Shapley values.
-            },
-            "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
-                # of the model's fully differentiable structure. Refer to this paper for
-                # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
-                # of the model's fully differentiable structure. Refer to this paper for
-                # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
-              "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-                  # A good value to start is 50 and gradually increase until the
-                  # sum to diff property is met within the desired error range.
-            },
-          },
-          "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
-              # requests that do not specify a version.
-              #
-              # You can change the default version by calling
-              # projects.methods.versions.setDefault.
-          "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
-              # applies to online prediction service. If this field is not specified, it
-              # defaults to `mls1-c1-m2`.
-              #
-              # Online prediction supports the following machine types:
-              #
-              # * `mls1-c1-m2`
-              # * `mls1-c4-m2`
-              # * `n1-standard-2`
-              # * `n1-standard-4`
-              # * `n1-standard-8`
-              # * `n1-standard-16`
-              # * `n1-standard-32`
-              # * `n1-highmem-2`
-              # * `n1-highmem-4`
-              # * `n1-highmem-8`
-              # * `n1-highmem-16`
-              # * `n1-highmem-32`
-              # * `n1-highcpu-2`
-              # * `n1-highcpu-4`
-              # * `n1-highcpu-8`
-              # * `n1-highcpu-16`
-              # * `n1-highcpu-32`
-              #
-              # `mls1-c1-m2` is generally available. All other machine types are available
-              # in beta. Learn more about the [differences between machine
-              # types](/ml-engine/docs/machine-types-online-prediction).
-          "description": "A String", # Optional. The description specified for the version when it was created.
-          "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
-              #
-              # For more information, see the
-              # [runtime version list](/ml-engine/docs/runtime-version-list) and
-              # [how to manage runtime versions](/ml-engine/docs/versioning).
-          "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
-              # model. You should generally use `auto_scaling` with an appropriate
-              # `min_nodes` instead, but this option is available if you want more
-              # predictable billing. Beware that latency and error rates will increase
-              # if the traffic exceeds the capacity of the system to serve it based
-              # on the selected number of nodes.
-            "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
-                # starting from the time the model is deployed, so the cost of operating
-                # this model will be proportional to `nodes` * number of hours since
-                # last billing cycle plus the cost for each prediction performed.
-          },
-          "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
-          "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
-              # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
-              # `XGBOOST`. If you do not specify a framework, AI Platform
-              # will analyze files in the deployment_uri to determine a framework. If you
-              # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
-              # of the model to 1.4 or greater.
-              #
-              # Do **not** specify a framework if you're deploying a [custom
-              # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
-              #
-              # If you specify a [Compute Engine (N1) machine
-              # type](/ml-engine/docs/machine-types-online-prediction) in the
-              # `machineType` field, you must specify `TENSORFLOW`
-              # for the framework.
-          "createTime": "A String", # Output only. The time the version was created.
-          "name": "A String", # Required. The name specified for the version when it was created.
-              #
-              # The version name must be unique within the model it is created in.
-          "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
-              # response to increases and decreases in traffic. Care should be
-              # taken to ramp up traffic according to the model's ability to scale
-              # or you will start seeing increases in latency and 429 response codes.
-              #
-              # Note that you cannot use AutoScaling if your version uses
-              # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
-              # `manual_scaling`.
-            "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
-                # nodes are always up, starting from the time the model is deployed.
-                # Therefore, the cost of operating this model will be at least
-                # `rate` * `min_nodes` * number of hours since last billing cycle,
-                # where `rate` is the cost per node-hour as documented in the
-                # [pricing guide](/ml-engine/docs/pricing),
-                # even if no predictions are performed. There is additional cost for each
-                # prediction performed.
-                #
-                # Unlike manual scaling, if the load gets too heavy for the nodes
-                # that are up, the service will automatically add nodes to handle the
-                # increased load as well as scale back as traffic drops, always maintaining
-                # at least `min_nodes`. You will be charged for the time in which additional
-                # nodes are used.
-                #
-                # If `min_nodes` is not specified and AutoScaling is used with a [legacy
-                # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
-                # `min_nodes` defaults to 0, in which case, when traffic to a model stops
-                # (and after a cool-down period), nodes will be shut down and no charges will
-                # be incurred until traffic to the model resumes.
-                #
-                # If `min_nodes` is not specified and AutoScaling is used with a [Compute
-                # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
-                # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
-                # Compute Engine machine type.
-                #
-                # Note that you cannot use AutoScaling if your version uses
-                # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
-                # ManualScaling.
-                #
-                # You can set `min_nodes` when creating the model version, and you can also
-                # update `min_nodes` for an existing version:
-                # &lt;pre&gt;
-                # update_body.json:
-                # {
-                #   'autoScaling': {
-                #     'minNodes': 5
-                #   }
-                # }
-                # &lt;/pre&gt;
-                # HTTP request:
-                # &lt;pre style="max-width: 626px;"&gt;
-                # PATCH
-                # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
-                # -d @./update_body.json
-                # &lt;/pre&gt;
-          },
-          "pythonVersion": "A String", # Required. The version of Python used in prediction.
-              #
-              # The following Python versions are available:
-              #
-              # * Python '3.7' is available when `runtime_version` is set to '1.15' or
-              #   later.
-              # * Python '3.5' is available when `runtime_version` is set to a version
-              #   from '1.4' to '1.14'.
-              # * Python '2.7' is available when `runtime_version` is set to '1.15' or
-              #   earlier.
-              #
-              # Read more about the Python versions available for [each runtime
-              # version](/ml-engine/docs/runtime-version-list).
-          "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
-              # projects.models.versions.patch
-              # request. Specifying it in a
-              # projects.models.versions.create
-              # request has no effect.
-              #
-              # Configures the request-response pair logging on predictions from this
-              # Version.
-              # Online prediction requests to a model version and the responses to these
-              # requests are converted to raw strings and saved to the specified BigQuery
-              # table. Logging is constrained by [BigQuery quotas and
-              # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
-              # AI Platform Prediction does not log request-response pairs, but it continues
-              # to serve predictions.
-              #
-              # If you are using [continuous
-              # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
-              # specify this configuration manually. Setting up continuous evaluation
-              # automatically enables logging of request-response pairs.
-            "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
-                # For example, if you want to log 10% of requests, enter `0.1`. The sampling
-                # window is the lifetime of the model version. Defaults to 0.
-            "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
-                # "&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;"
-                #
-                # The specified table must already exist, and the "Cloud ML Service Agent"
-                # for your project must have permission to write to it. The table must have
-                # the following [schema](/bigquery/docs/schemas):
-                #
-                # &lt;table&gt;
-                #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style="display: table-cell"&gt;Type&lt;/th&gt;
-                #     &lt;th style="display: table-cell"&gt;Mode&lt;/th&gt;&lt;/tr&gt;
-                #   &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
-                #   &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
-                #   &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
-                #   &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
-                #   &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
-                #   &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
-                # &lt;/table&gt;
-          },
         },
-        "onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to StackDriver
-            # Logging. These logs are like standard server access logs, containing
-            # information like timestamp and latency for each request. Note that
-            # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
-            # your project receives prediction requests at a high queries per second rate
-            # (QPS). Estimate your costs before enabling this option.
-            #
-            # Default is false.
-        "description": "A String", # Optional. The description specified for the model when it was created.
-      },
     ],
+    &quot;nextPageToken&quot;: &quot;A String&quot;, # Optional. Pass this token as the `page_token` field of the request for a
+        # subsequent call.
   }</pre>
 </div>
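The `list` response above ends with a `nextPageToken` that a caller passes back as the `pageToken` parameter of the next call until no token is returned. A minimal sketch of that loop, assuming only the request/response shapes shown in this reference — `models_api` here is a hypothetical stand-in for `service.projects().models()`, and unlike the real client (where `list()` returns a request object you must `execute()`), the stand-in is assumed to return the response dict directly to keep the sketch self-contained:

```python
def iterate_models(models_api, parent):
    """Yield every model across pages by following nextPageToken.

    models_api is assumed to expose list(parent=..., pageToken=...)
    returning a dict shaped like the list response body shown above.
    """
    token = None
    while True:
        response = models_api.list(parent=parent, pageToken=token)
        # Each page carries a (possibly empty) "models" array.
        for model in response.get("models", []):
            yield model
        # Absence of nextPageToken marks the final page.
        token = response.get("nextPageToken")
        if not token:
            return
```

With the real client you would instead use the `list_next` helper documented below this section, which encapsulates the same token handling.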
 
@@ -2117,7 +2136,7 @@
   previous_response: The response from the request for the previous page. (required)
 
 Returns:
-  A request object that you can call 'execute()' on to request the next
+  A request object that you can call &#x27;execute()&#x27; on to request the next
   page. Returns None if there are no more items in the collection.
     </pre>
 </div>
@@ -2135,404 +2154,404 @@
     The object takes the form of:
 
 { # Represents a machine learning solution.
-    # 
-    # A model can have multiple versions, each of which is a deployed, trained
-    # model ready to receive prediction requests. The model itself is just a
-    # container.
-  "name": "A String", # Required. The name specified for the model when it was created.
       # 
-      # The model name must be unique within the project it is created in.
-  "onlinePredictionConsoleLogging": True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
-      # streams to Stackdriver Logging. These can be more verbose than the standard
-      # access logs (see `onlinePredictionLogging`) and can incur higher cost.
-      # However, they are helpful for debugging. Note that
-      # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
-      # your project receives prediction requests at a high QPS. Estimate your
-      # costs before enabling this option.
-      # 
-      # Default is false.
-  "labels": { # Optional. One or more labels that you can add, to organize your models.
-      # Each label is a key-value pair, where both the key and the value are
-      # arbitrary strings that you supply.
-      # For more information, see the documentation on
-      # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
-    "a_key": "A String",
-  },
-  "regions": [ # Optional. The list of regions where the model is going to be deployed.
-      # Only one region per model is supported.
-      # Defaults to 'us-central1' if nothing is set.
-      # See the &lt;a href="/ml-engine/docs/tensorflow/regions"&gt;available regions&lt;/a&gt;
-      # for AI Platform services.
-      # Note:
-      # *   No matter where a model is deployed, it can always be accessed by
-      #     users from anywhere, both for online and batch prediction.
-      # *   The region for a batch prediction job is set by the region field when
-      #     submitting the batch prediction job and does not take its value from
-      #     this field.
-    "A String",
-  ],
-  "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
-      # prevent simultaneous updates of a model from overwriting each other.
-      # It is strongly suggested that systems make use of the `etag` in the
-      # read-modify-write cycle to perform model updates in order to avoid race
-      # conditions: An `etag` is returned in the response to `GetModel`, and
-      # systems are expected to put that etag in the request to `UpdateModel` to
-      # ensure that their change will be applied to the model as intended.
-  "defaultVersion": { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
-      # handle prediction requests that do not specify a version.
-      # 
-      # You can change the default version by calling
-      # projects.models.versions.setDefault.
-      #
-      # Each version is a trained model deployed in the cloud, ready to handle
-      # prediction requests. A model can have multiple versions. You can get
-      # information about all of the versions of a given model by calling
-      # projects.models.versions.list.
-    "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
-        # Only specify this field if you have specified a Compute Engine (N1) machine
-        # type in the `machineType` field. Learn more about [using GPUs for online
-        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-        # Note that the AcceleratorConfig can be used in both Jobs and Versions.
-        # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
-        # [accelerators for online
-        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-      "count": "A String", # The number of accelerators to attach to each machine running the job.
-      "type": "A String", # The type of accelerator to use.
-    },
-    "labels": { # Optional. One or more labels that you can add, to organize your model
-        # versions. Each label is a key-value pair, where both the key and the value
-        # are arbitrary strings that you supply.
+      # A model can have multiple versions, each of which is a deployed, trained
+      # model ready to receive prediction requests. The model itself is just a
+      # container.
+    &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your models.
+        # Each label is a key-value pair, where both the key and the value are
+        # arbitrary strings that you supply.
         # For more information, see the documentation on
-        # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
-      "a_key": "A String",
+        # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
+      &quot;a_key&quot;: &quot;A String&quot;,
     },
-    "predictionClass": "A String", # Optional. The fully qualified name
-        # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
-        # the Predictor interface described in this reference field. The module
-        # containing this class should be included in a package provided to the
-        # [`packageUris` field](#Version.FIELDS.package_uris).
+    &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the model when it was created.
+        # 
+        # The model name must be unique within the project it is created in.
+    &quot;defaultVersion&quot;: { # Represents a version of the model. # Output only. The default version of the model. This version will be used to
+        # handle prediction requests that do not specify a version.
+        # 
+        # You can change the default version by calling
+        # projects.models.versions.setDefault.
         #
-        # Specify this field if and only if you are deploying a [custom prediction
-        # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
-        # If you specify this field, you must set
-        # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
-        # you must set `machineType` to a [legacy (MLS1)
-        # machine type](/ml-engine/docs/machine-types-online-prediction).
-        #
-        # The following code sample provides the Predictor interface:
-        #
-        # &lt;pre style="max-width: 626px;"&gt;
-        # class Predictor(object):
-        # """Interface for constructing custom predictors."""
-        #
-        # def predict(self, instances, **kwargs):
-        #     """Performs custom prediction.
-        #
-        #     Instances are the decoded values from the request. They have already
-        #     been deserialized from JSON.
-        #
-        #     Args:
-        #         instances: A list of prediction input instances.
-        #         **kwargs: A dictionary of keyword args provided as additional
-        #             fields on the predict request body.
-        #
-        #     Returns:
-        #         A list of outputs containing the prediction results. This list must
-        #         be JSON serializable.
-        #     """
-        #     raise NotImplementedError()
-        #
-        # @classmethod
-        # def from_path(cls, model_dir):
-        #     """Creates an instance of Predictor using the given path.
-        #
-        #     Loading of the predictor should be done in this method.
-        #
-        #     Args:
-        #         model_dir: The local directory that contains the exported model
-        #             file along with any additional files uploaded when creating the
-        #             version resource.
-        #
-        #     Returns:
-        #         An instance implementing this Predictor class.
-        #     """
-        #     raise NotImplementedError()
-        # &lt;/pre&gt;
-        #
-        # Learn more about [the Predictor interface and custom prediction
-        # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
-    "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
-    "state": "A String", # Output only. The state of a version.
-    "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
-        # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
-        # or [scikit-learn pipelines with custom
-        # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
-        #
-        # For a custom prediction routine, one of these packages must contain your
-        # Predictor class (see
-        # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
-        # include any dependencies used by your Predictor or scikit-learn pipeline
-        # uses that are not already included in your selected [runtime
-        # version](/ml-engine/docs/tensorflow/runtime-version-list).
-        #
-        # If you specify this field, you must also set
-        # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
-      "A String",
+        # Each version is a trained model deployed in the cloud, ready to handle
+        # prediction requests. A model can have multiple versions. You can get
+        # information about all of the versions of a given model by calling
+        # projects.models.versions.list.
+      &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
+      &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+          # model. You should generally use `auto_scaling` with an appropriate
+          # `min_nodes` instead, but this option is available if you want more
+          # predictable billing. Beware that latency and error rates will increase
+          # if the traffic exceeds the capability of the system to serve it based
+          # on the selected number of nodes.
+        &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
+            # starting from the time the model is deployed, so the cost of operating
+            # this model will be proportional to `nodes` * number of hours since
+            # last billing cycle plus the cost for each prediction performed.
+      },
+      &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
+          #
+          # The version name must be unique within the model it is created in.
+      &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
+      &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
+          #
+          # The following Python versions are available:
+          #
+          # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+          #   later.
+          # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
+          #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
+          # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+          #   earlier.
+          #
+          # Read more about the Python versions available for [each runtime
+          # version](/ml-engine/docs/runtime-version-list).
+      &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
+      &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
+          # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
+          # the Predictor interface described in this reference field. The module
+          # containing this class should be included in a package provided to the
+          # [`packageUris` field](#Version.FIELDS.package_uris).
+          #
+          # Specify this field if and only if you are deploying a [custom prediction
+          # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
+          # If you specify this field, you must set
+          # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater and
+          # you must set `machineType` to a [legacy (MLS1)
+          # machine type](/ml-engine/docs/machine-types-online-prediction).
+          #
+          # The following code sample provides the Predictor interface:
+          #
+          # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
+          # class Predictor(object):
+          # &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
+          #
+          # def predict(self, instances, **kwargs):
+          #     &quot;&quot;&quot;Performs custom prediction.
+          #
+          #     Instances are the decoded values from the request. They have already
+          #     been deserialized from JSON.
+          #
+          #     Args:
+          #         instances: A list of prediction input instances.
+          #         **kwargs: A dictionary of keyword args provided as additional
+          #             fields on the predict request body.
+          #
+          #     Returns:
+          #         A list of outputs containing the prediction results. This list must
+          #         be JSON serializable.
+          #     &quot;&quot;&quot;
+          #     raise NotImplementedError()
+          #
+          # @classmethod
+          # def from_path(cls, model_dir):
+          #     &quot;&quot;&quot;Creates an instance of Predictor using the given path.
+          #
+          #     Loading of the predictor should be done in this method.
+          #
+          #     Args:
+          #         model_dir: The local directory that contains the exported model
+          #             file along with any additional files uploaded when creating the
+          #             version resource.
+          #
+          #     Returns:
+          #         An instance implementing this Predictor class.
+          #     &quot;&quot;&quot;
+          #     raise NotImplementedError()
+          # &lt;/pre&gt;
+          #
+          # Learn more about [the Predictor interface and custom prediction
+          # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
+      &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+          # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
+          # or [scikit-learn pipelines with custom
+          # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
+          #
+          # For a custom prediction routine, one of these packages must contain your
+          # Predictor class (see
+          # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
+          # include any dependencies your Predictor or scikit-learn pipeline
+          # uses that are not already included in your selected [runtime
+          # version](/ml-engine/docs/tensorflow/runtime-version-list).
+          #
+          # If you specify this field, you must also set
+          # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
+        &quot;A String&quot;,
+      ],
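As a sketch of how the `predictionClass`, `packageUris`, `runtimeVersion`, and `machineType` constraints above fit together, the following builds a Version request body for a custom prediction routine (beta). The bucket, package, and class names are hypothetical, and this only constructs the dict; it does not call the API.

```python
# Sketch: assemble a Version body for a custom prediction routine (beta).
# Custom routines require runtimeVersion >= 1.4 and a legacy (MLS1)
# machine type, per the field descriptions above.

def make_custom_routine_version_body(name, deployment_uri, prediction_class,
                                     package_uris, runtime_version="1.15"):
    """Build the Version dict for projects.models.versions.create."""
    major, minor = (int(p) for p in runtime_version.split(".")[:2])
    if (major, minor) < (1, 4):
        raise ValueError("custom prediction routines need runtimeVersion >= 1.4")
    return {
        "name": name,
        "deploymentUri": deployment_uri,       # gs:// path to the exported model
        "predictionClass": prediction_class,   # module_name.class_name
        "packageUris": list(package_uris),     # gs:// paths to your packages
        "runtimeVersion": runtime_version,
        "machineType": "mls1-c1-m2",           # legacy MLS1 type, as required
        "pythonVersion": "3.7",
    }

# Hypothetical names, for illustration only:
body = make_custom_routine_version_body(
    "v1", "gs://my-bucket/model/", "my_package.MyPredictor",
    ["gs://my-bucket/packages/my_package-0.1.tar.gz"])
```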
+      &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
+          # Some explanation features require additional metadata to be loaded
+          # as part of the model payload.
+          # There are two feature attribution methods supported for TensorFlow models:
+          # integrated gradients and sampled Shapley.
+          # [Learn more about feature
+          # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
+        &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value taking advantage
+            # of the model&#x27;s fully differentiable structure. Refer to this paper for
+            # more details: https://arxiv.org/abs/1703.01365
+          &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+              # A good value to start is 50 and gradually increase until the
+              # sum to diff property is met within the desired error range.
+        },
+        &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
+            # contribute to the label being predicted. A sampling strategy is used to
+            # approximate the value rather than considering all subsets of features.
+          &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
+              # Shapley values.
+        },
+        &quot;xraiAttribution&quot;: { # Attributes credit by computing the XRAI taking advantage
+            # of the model&#x27;s fully differentiable structure. Refer to this paper for
+            # more details: https://arxiv.org/abs/1906.02825
+            # Currently only implemented for models with natural image inputs.
+          &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+              # A good value to start is 50 and gradually increase until the
+              # sum to diff property is met within the desired error range.
+        },
+      },
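The `explanationConfig` above takes exactly one of the three attribution methods. A minimal sketch of building that fragment, with default parameter values that are only illustrative:

```python
# Sketch: build the explanationConfig portion of a Version body.
# Exactly one attribution method is set; defaults here are illustrative,
# not service-mandated.

def explanation_config(method, **params):
    if method == "integratedGradients":
        return {"integratedGradientsAttribution":
                {"numIntegralSteps": params.get("num_integral_steps", 50)}}
    if method == "sampledShapley":
        return {"sampledShapleyAttribution":
                {"numPaths": params.get("num_paths", 10)}}
    if method == "xrai":
        return {"xraiAttribution":
                {"numIntegralSteps": params.get("num_integral_steps", 50)}}
    raise ValueError("unknown attribution method: %s" % method)

cfg = explanation_config("integratedGradients")
```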
+      &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
+          # create the version. See the
+          # [guide to model
+          # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
+          # information.
+          #
+          # When passing Version to
+          # projects.models.versions.create
+          # the model service uses the specified location as the source of the model.
+          # Once deployed, the model version is hosted by the prediction service, so
+          # this location is useful only as a historical record.
+          # The total number of model files can&#x27;t exceed 1000.
+      &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+          # response to increases and decreases in traffic. Care should be
+          # taken to ramp up traffic according to the model&#x27;s ability to scale
+          # or you will start seeing increases in latency and 429 response codes.
+          #
+          # Note that you cannot use AutoScaling if your version uses
+          # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
+          # `manual_scaling`.
+        &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
+            # nodes are always up, starting from the time the model is deployed.
+            # Therefore, the cost of operating this model will be at least
+            # `rate` * `min_nodes` * number of hours since last billing cycle,
+            # where `rate` is the cost per node-hour as documented in the
+            # [pricing guide](/ml-engine/docs/pricing),
+            # even if no predictions are performed. There is additional cost for each
+            # prediction performed.
+            #
+            # Unlike manual scaling, if the load gets too heavy for the nodes
+            # that are up, the service will automatically add nodes to handle the
+            # increased load as well as scale back as traffic drops, always maintaining
+            # at least `min_nodes`. You will be charged for the time in which additional
+            # nodes are used.
+            #
+            # If `min_nodes` is not specified and AutoScaling is used with a [legacy
+            # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
+            # `min_nodes` defaults to 0, in which case, when traffic to a model stops
+            # (and after a cool-down period), nodes will be shut down and no charges will
+            # be incurred until traffic to the model resumes.
+            #
+            # If `min_nodes` is not specified and AutoScaling is used with a [Compute
+            # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
+            # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
+            # Compute Engine machine type.
+            #
+            # Note that you cannot use AutoScaling if your version uses
+            # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
+            # ManualScaling.
+            #
+            # You can set `min_nodes` when creating the model version, and you can also
+            # update `min_nodes` for an existing version:
+            # &lt;pre&gt;
+            # update_body.json:
+            # {
+            #   &#x27;autoScaling&#x27;: {
+            #     &#x27;minNodes&#x27;: 5
+            #   }
+            # }
+            # &lt;/pre&gt;
+            # HTTP request:
+            # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
+            # PATCH
+            # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
+            # -d @./update_body.json
+            # &lt;/pre&gt;
+      },
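The HTTP PATCH example above can be expressed with the google-api-python-client discovery interface. The actual `versions.patch` call needs credentials, so this sketch only builds the request name, body, and update mask; the project and model names are hypothetical.

```python
# Sketch of the autoScaling.minNodes update shown in the HTTP example above,
# as a (name, body, updateMask) triple for projects.models.versions.patch.

def min_nodes_patch(version_name, min_nodes):
    """Return the pieces of a versions.patch request that updates
    autoScaling.minNodes, mirroring update_body.json above."""
    body = {"autoScaling": {"minNodes": min_nodes}}
    return version_name, body, "autoScaling.minNodes"

name, body, mask = min_nodes_patch(
    "projects/my-project/models/my_model/versions/v1", 5)

# With an authorized client, the call would look like:
#   ml = googleapiclient.discovery.build("ml", "v1")
#   ml.projects().models().versions().patch(
#       name=name, body=body, updateMask=mask).execute()
```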
+      &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your model
+          # versions. Each label is a key-value pair, where both the key and the value
+          # are arbitrary strings that you supply.
+          # For more information, see the documentation on
+          # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
+        &quot;a_key&quot;: &quot;A String&quot;,
+      },
+      &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
+      &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+          # projects.models.versions.patch
+          # request. Specifying it in a
+          # projects.models.versions.create
+          # request has no effect.
+          #
+          # Configures the request-response pair logging on predictions from this
+          # Version.
+          # Online prediction requests to a model version and the responses to these
+          # requests are converted to raw strings and saved to the specified BigQuery
+          # table. Logging is constrained by [BigQuery quotas and
+          # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
+          # AI Platform Prediction does not log request-response pairs, but it continues
+          # to serve predictions.
+          #
+          # If you are using [continuous
+          # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
+          # specify this configuration manually. Setting up continuous evaluation
+          # automatically enables logging of request-response pairs.
+        &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
+            # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
+            #
+            # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
+            # for your project must have permission to write to it. The table must have
+            # the following [schema](/bigquery/docs/schemas):
+            #
+            # &lt;table&gt;
+            #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
+            #     &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
+            #   &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
+            #   &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
+            #   &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
+            #   &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
+            #   &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
+            #   &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
+            # &lt;/table&gt;
+        &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+            # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+            # window is the lifetime of the model version. Defaults to 0.
+      },
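A small sketch of assembling the `requestLoggingConfig` described above, with light validation of the documented constraints (table name in `project_id.dataset_name.table_name` form, sampling as a fraction from 0 to 1). The project, dataset, and table names are hypothetical.

```python
# Sketch: build a requestLoggingConfig for versions.patch, validating the
# constraints documented above. Does not create the BigQuery table or grant
# the "Cloud ML Service Agent" write access; that must already be done.

def request_logging_config(table, sampling=0.1):
    if not 0 <= sampling <= 1:
        raise ValueError("samplingPercentage is a fraction from 0 to 1")
    if table.count(".") != 2:
        raise ValueError("expected project_id.dataset_name.table_name")
    return {"bigqueryTableName": table, "samplingPercentage": sampling}

cfg = request_logging_config("my-project.prediction_logs.requests", 0.1)
```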
+      &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
+      &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
+          # applies to online prediction service. If this field is not specified, it
+          # defaults to `mls1-c1-m2`.
+          #
+          # Online prediction supports the following machine types:
+          #
+          # * `mls1-c1-m2`
+          # * `mls1-c4-m2`
+          # * `n1-standard-2`
+          # * `n1-standard-4`
+          # * `n1-standard-8`
+          # * `n1-standard-16`
+          # * `n1-standard-32`
+          # * `n1-highmem-2`
+          # * `n1-highmem-4`
+          # * `n1-highmem-8`
+          # * `n1-highmem-16`
+          # * `n1-highmem-32`
+          # * `n1-highcpu-2`
+          # * `n1-highcpu-4`
+          # * `n1-highcpu-8`
+          # * `n1-highcpu-16`
+          # * `n1-highcpu-32`
+          #
+          # `mls1-c1-m2` is generally available. All other machine types are available
+          # in beta. Learn more about the [differences between machine
+          # types](/ml-engine/docs/machine-types-online-prediction).
+      &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
+          #
+          # For more information, see the
+          # [runtime version list](/ml-engine/docs/runtime-version-list) and
+          # [how to manage runtime versions](/ml-engine/docs/versioning).
+      &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
+      &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
+          # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+          # `XGBOOST`. If you do not specify a framework, AI Platform
+          # will analyze files in the deployment_uri to determine a framework. If you
+          # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+          # of the model to 1.4 or greater.
+          #
+          # Do **not** specify a framework if you&#x27;re deploying a [custom
+          # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+          #
+          # If you specify a [Compute Engine (N1) machine
+          # type](/ml-engine/docs/machine-types-online-prediction) in the
+          # `machineType` field, you must specify `TENSORFLOW`
+          # for the framework.
+      &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
+          # prevent simultaneous updates of a model from overwriting each other.
+          # It is strongly suggested that systems make use of the `etag` in the
+          # read-modify-write cycle to perform model updates in order to avoid race
+          # conditions: An `etag` is returned in the response to `GetVersion`, and
+          # systems are expected to put that etag in the request to `UpdateVersion` to
+          # ensure that their change will be applied to the model as intended.
+      &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
+          # requests that do not specify a version.
+          #
+          # You can change the default version by calling
+          # projects.models.versions.setDefault.
+      &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+          # Only specify this field if you have specified a Compute Engine (N1) machine
+          # type in the `machineType` field. Learn more about [using GPUs for online
+          # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+          # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+          # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+          # [accelerators for online
+          # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+        &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
+        &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
+      },
+    },
+    &quot;onlinePredictionConsoleLogging&quot;: True or False, # Optional. If true, online prediction nodes send `stderr` and `stdout`
+        # streams to Stackdriver Logging. These can be more verbose than the standard
+        # access logs (see `onlinePredictionLogging`) and can incur higher cost.
+        # However, they are helpful for debugging. Note that
+        # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
+        # your project receives prediction requests at a high QPS. Estimate your
+        # costs before enabling this option.
+        # 
+        # Default is false.
+    &quot;regions&quot;: [ # Optional. The list of regions where the model is going to be deployed.
+        # Only one region per model is supported.
+        # Defaults to &#x27;us-central1&#x27; if nothing is set.
+        # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
+        # for AI Platform services.
+        # Note:
+        # *   No matter where a model is deployed, it can always be accessed by
+        #     users from anywhere, both for online and batch prediction.
+        # *   The region for a batch prediction job is set by the region field when
+        #     submitting the batch prediction job and does not take its value from
+        #     this field.
+      &quot;A String&quot;,
     ],
-    "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+    &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the model when it was created.
+    &quot;onlinePredictionLogging&quot;: True or False, # Optional. If true, online prediction access logs are sent to Stackdriver
+        # Logging. These logs are like standard server access logs, containing
+        # information like timestamp and latency for each request. Note that
+        # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
+        # your project receives prediction requests at a high queries per second rate
+        # (QPS). Estimate your costs before enabling this option.
+        # 
+        # Default is false.
+    &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
         # prevent simultaneous updates of a model from overwriting each other.
         # It is strongly suggested that systems make use of the `etag` in the
         # read-modify-write cycle to perform model updates in order to avoid race
-        # conditions: An `etag` is returned in the response to `GetVersion`, and
-        # systems are expected to put that etag in the request to `UpdateVersion` to
+        # conditions: An `etag` is returned in the response to `GetModel`, and
+        # systems are expected to put that etag in the request to `UpdateModel` to
         # ensure that their change will be applied to the model as intended.
-    "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
-    "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
-        # create the version. See the
-        # [guide to model
-        # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
-        # information.
-        #
-        # When passing Version to
-        # projects.models.versions.create
-        # the model service uses the specified location as the source of the model.
-        # Once deployed, the model version is hosted by the prediction service, so
-        # this location is useful only as a historical record.
-        # The total number of model files can't exceed 1000.
-    "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
-        # Some explanation features require additional metadata to be loaded
-        # as part of the model payload.
-        # There are two feature attribution methods supported for TensorFlow models:
-        # integrated gradients and sampled Shapley.
-        # [Learn more about feature
-        # attributions.](/ml-engine/docs/ai-explanations/overview)
-      "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
-          # of the model's fully differentiable structure. Refer to this paper for
-          # more details: https://arxiv.org/abs/1906.02825
-          # Currently only implemented for models with natural image inputs.
-          # of the model's fully differentiable structure. Refer to this paper for
-          # more details: https://arxiv.org/abs/1906.02825
-          # Currently only implemented for models with natural image inputs.
-        "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-            # A good value to start is 50 and gradually increase until the
-            # sum to diff property is met within the desired error range.
-      },
-      "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that
-          # contribute to the label being predicted. A sampling strategy is used to
-          # approximate the value rather than considering all subsets of features.
-        "numPaths": 42, # The number of feature permutations to consider when approximating the
-            # Shapley values.
-      },
-      "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage
-          # of the model's fully differentiable structure. Refer to this paper for
-          # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
-        "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-            # A good starting value is 50; gradually increase it until the
-            # sum-to-diff property is met within the desired error range.
-      },
-    },
-    "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
-        # requests that do not specify a version.
-        #
-        # You can change the default version by calling
-        # projects.models.versions.setDefault.
-    "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
-        # applies to online prediction service. If this field is not specified, it
-        # defaults to `mls1-c1-m2`.
-        #
-        # Online prediction supports the following machine types:
-        #
-        # * `mls1-c1-m2`
-        # * `mls1-c4-m2`
-        # * `n1-standard-2`
-        # * `n1-standard-4`
-        # * `n1-standard-8`
-        # * `n1-standard-16`
-        # * `n1-standard-32`
-        # * `n1-highmem-2`
-        # * `n1-highmem-4`
-        # * `n1-highmem-8`
-        # * `n1-highmem-16`
-        # * `n1-highmem-32`
-        # * `n1-highcpu-2`
-        # * `n1-highcpu-4`
-        # * `n1-highcpu-8`
-        # * `n1-highcpu-16`
-        # * `n1-highcpu-32`
-        #
-        # `mls1-c1-m2` is generally available. All other machine types are available
-        # in beta. Learn more about the [differences between machine
-        # types](/ml-engine/docs/machine-types-online-prediction).
-    "description": "A String", # Optional. The description specified for the version when it was created.
-    "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
-        #
-        # For more information, see the
-        # [runtime version list](/ml-engine/docs/runtime-version-list) and
-        # [how to manage runtime versions](/ml-engine/docs/versioning).
-    "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
-        # model. You should generally use `auto_scaling` with an appropriate
-        # `min_nodes` instead, but this option is available if you want more
-        # predictable billing. Beware that latency and error rates will increase
-        # if the traffic exceeds the capacity of the system to serve it based
-        # on the selected number of nodes.
-      "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
-          # starting from the time the model is deployed, so the cost of operating
-          # this model will be proportional to `nodes` * number of hours since
-          # last billing cycle plus the cost for each prediction performed.
-    },
-    "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
-    "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
-        # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
-        # `XGBOOST`. If you do not specify a framework, AI Platform
-        # will analyze files in the deployment_uri to determine a framework. If you
-        # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
-        # of the model to 1.4 or greater.
-        #
-        # Do **not** specify a framework if you're deploying a [custom
-        # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
-        #
-        # If you specify a [Compute Engine (N1) machine
-        # type](/ml-engine/docs/machine-types-online-prediction) in the
-        # `machineType` field, you must specify `TENSORFLOW`
-        # for the framework.
-    "createTime": "A String", # Output only. The time the version was created.
-    "name": "A String", # Required. The name specified for the version when it was created.
-        #
-        # The version name must be unique within the model it is created in.
-    "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
-        # response to increases and decreases in traffic. Care should be
-        # taken to ramp up traffic according to the model's ability to scale
-        # or you will start seeing increases in latency and 429 response codes.
-        #
-        # Note that you cannot use AutoScaling if your version uses
-        # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
-        # `manual_scaling`.
-      "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
-          # nodes are always up, starting from the time the model is deployed.
-          # Therefore, the cost of operating this model will be at least
-          # `rate` * `min_nodes` * number of hours since last billing cycle,
-          # where `rate` is the cost per node-hour as documented in the
-          # [pricing guide](/ml-engine/docs/pricing),
-          # even if no predictions are performed. There is additional cost for each
-          # prediction performed.
-          #
-          # Unlike manual scaling, if the load gets too heavy for the nodes
-          # that are up, the service will automatically add nodes to handle the
-          # increased load as well as scale back as traffic drops, always maintaining
-          # at least `min_nodes`. You will be charged for the time in which additional
-          # nodes are used.
-          #
-          # If `min_nodes` is not specified and AutoScaling is used with a [legacy
-          # (MLS1) machine type](/ml-engine/docs/machine-types-online-prediction),
-          # `min_nodes` defaults to 0, in which case, when traffic to a model stops
-          # (and after a cool-down period), nodes will be shut down and no charges will
-          # be incurred until traffic to the model resumes.
-          #
-          # If `min_nodes` is not specified and AutoScaling is used with a [Compute
-          # Engine (N1) machine type](/ml-engine/docs/machine-types-online-prediction),
-          # `min_nodes` defaults to 1. `min_nodes` must be at least 1 for use with a
-          # Compute Engine machine type.
-          #
-          # Note that you cannot use AutoScaling if your version uses
-          # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must use
-          # ManualScaling.
-          #
-          # You can set `min_nodes` when creating the model version, and you can also
-          # update `min_nodes` for an existing version:
-          # &lt;pre&gt;
-          # update_body.json:
-          # {
-          #   'autoScaling': {
-          #     'minNodes': 5
-          #   }
-          # }
-          # &lt;/pre&gt;
-          # HTTP request:
-          # &lt;pre style="max-width: 626px;"&gt;
-          # PATCH
-          # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
-          # -d @./update_body.json
-          # &lt;/pre&gt;
-    },
-    "pythonVersion": "A String", # Required. The version of Python used in prediction.
-        #
-        # The following Python versions are available:
-        #
-        # * Python '3.7' is available when `runtime_version` is set to '1.15' or
-        #   later.
-        # * Python '3.5' is available when `runtime_version` is set to a version
-        #   from '1.4' to '1.14'.
-        # * Python '2.7' is available when `runtime_version` is set to '1.15' or
-        #   earlier.
-        #
-        # Read more about the Python versions available for [each runtime
-        # version](/ml-engine/docs/runtime-version-list).
-    "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
-        # projects.models.versions.patch
-        # request. Specifying it in a
-        # projects.models.versions.create
-        # request has no effect.
-        #
-        # Configures the request-response pair logging on predictions from this
-        # Version.
-        # Online prediction requests to a model version and the responses to these
-        # requests are converted to raw strings and saved to the specified BigQuery
-        # table. Logging is constrained by [BigQuery quotas and
-        # limits](/bigquery/quotas). If your project exceeds BigQuery quotas or limits,
-        # AI Platform Prediction does not log request-response pairs, but it continues
-        # to serve predictions.
-        #
-        # If you are using [continuous
-        # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
-        # specify this configuration manually. Setting up continuous evaluation
-        # automatically enables logging of request-response pairs.
-      "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
-          # For example, if you want to log 10% of requests, enter `0.1`. The sampling
-          # window is the lifetime of the model version. Defaults to 0.
-      "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
-          # "&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;"
-          #
-          # The specified table must already exist, and the "Cloud ML Service Agent"
-          # for your project must have permission to write to it. The table must have
-          # the following [schema](/bigquery/docs/schemas):
-          #
-          # &lt;table&gt;
-          #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style="display: table-cell"&gt;Type&lt;/th&gt;
-          #     &lt;th style="display: table-cell"&gt;Mode&lt;/th&gt;&lt;/tr&gt;
-          #   &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
-          #   &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
-          #   &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
-          #   &lt;tr&gt;&lt;td&gt;raw_data&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
-          #   &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
-          #   &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
-          # &lt;/table&gt;
-    },
-  },
-  "onlinePredictionLogging": True or False, # Optional. If true, online prediction access logs are sent to Stackdriver
-      # Logging. These logs are like standard server access logs, containing
-      # information like timestamp and latency for each request. Note that
-      # [Stackdriver logs may incur a cost](/stackdriver/pricing), especially if
-      # your project receives prediction requests at a high queries per second rate
-      # (QPS). Estimate your costs before enabling this option.
-      # 
-      # Default is false.
-  "description": "A String", # Optional. The description specified for the model when it was created.
-}
+  }
 
   updateMask: string, Required. Specifies the path, relative to `Model`, of the field to update.
 
-For example, to change the description of a model to "foo" and set its
-default version to "version_1", the `update_mask` parameter would be
+For example, to change the description of a model to &quot;foo&quot; and set its
+default version to &quot;version_1&quot;, the `update_mask` parameter would be
 specified as `description`, `default_version.name`, and the `PATCH`
 request body would specify the new value, as follows:
     {
-      "description": "foo",
-      "defaultVersion": {
-        "name":"version_1"
+      &quot;description&quot;: &quot;foo&quot;,
+      &quot;defaultVersion&quot;: {
+        &quot;name&quot;:&quot;version_1&quot;
       }
     }
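The read-modify-write shape of this `patch` request can be sketched in Python. The project and model names below are placeholders, and the commented-out call assumes a client built with `googleapiclient.discovery.build('ml', 'v1')`; the field paths in the mask follow the example above.

```python
# Sketch: updating a model's description and default version via patch.
# The body carries only the new values; update_mask names the fields to change.
# "my-project" and "my_model" are placeholder resource names.

body = {
    "description": "foo",
    "defaultVersion": {"name": "version_1"},
}
# Comma-separated paths, relative to `Model`, of the fields being updated.
update_mask = ",".join(["description", "default_version.name"])

# With a built client (assumed: service = googleapiclient.discovery.build('ml', 'v1')):
# operation = service.projects().models().patch(
#     name="projects/my-project/models/my_model",
#     body=body,
#     updateMask=update_mask,
# ).execute()  # returns a long-running Operation resource
```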
 
@@ -2548,34 +2567,7 @@
 
     { # This resource represents a long-running operation that is the result of a
       # network API call.
-    "metadata": { # Service-specific metadata associated with the operation.  It typically
-        # contains progress information and common metadata such as create time.
-        # Some services might not provide such metadata.  Any method that returns a
-        # long-running operation should document the metadata type, if any.
-      "a_key": "", # Properties of the object. Contains field @type with type URL.
-    },
-    "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
-        # different programming environments, including REST APIs and RPC APIs. It is
-        # used by [gRPC](https://github.com/grpc). Each `Status` message contains
-        # three pieces of data: error code, error message, and error details.
-        #
-        # You can find out more about this error model and how to work with it in the
-        # [API Design Guide](https://cloud.google.com/apis/design/errors).
-      "message": "A String", # A developer-facing error message, which should be in English. Any
-          # user-facing error message should be localized and sent in the
-          # google.rpc.Status.details field, or localized by the client.
-      "code": 42, # The status code, which should be an enum value of google.rpc.Code.
-      "details": [ # A list of messages that carry the error details.  There is a common set of
-          # message types for APIs to use.
-        {
-          "a_key": "", # Properties of the object. Contains field @type with type URL.
-        },
-      ],
-    },
-    "done": True or False, # If the value is `false`, it means the operation is still in progress.
-        # If `true`, the operation is completed, and either `error` or `response` is
-        # available.
-    "response": { # The normal response of the operation in case of success.  If the original
+    &quot;response&quot;: { # The normal response of the operation in case of success.  If the original
         # method returns no data on success, such as `Delete`, the response is
         # `google.protobuf.Empty`.  If the original method is standard
         # `Get`/`Create`/`Update`, the response should be the resource.  For other
@@ -2583,11 +2575,38 @@
         # is the original method name.  For example, if the original method name
         # is `TakeSnapshot()`, the inferred response type is
         # `TakeSnapshotResponse`.
-      "a_key": "", # Properties of the object. Contains field @type with type URL.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
     },
-    "name": "A String", # The server-assigned name, which is only unique within the same service that
+    &quot;name&quot;: &quot;A String&quot;, # The server-assigned name, which is only unique within the same service that
         # originally returns it. If you use the default HTTP mapping, the
         # `name` should be a resource name ending with `operations/{unique_id}`.
+    &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
+        # different programming environments, including REST APIs and RPC APIs. It is
+        # used by [gRPC](https://github.com/grpc). Each `Status` message contains
+        # three pieces of data: error code, error message, and error details.
+        #
+        # You can find out more about this error model and how to work with it in the
+        # [API Design Guide](https://cloud.google.com/apis/design/errors).
+      &quot;details&quot;: [ # A list of messages that carry the error details.  There is a common set of
+          # message types for APIs to use.
+        {
+          &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+        },
+      ],
+      &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
+      &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
+          # user-facing error message should be localized and sent in the
+          # google.rpc.Status.details field, or localized by the client.
+    },
+    &quot;metadata&quot;: { # Service-specific metadata associated with the operation.  It typically
+        # contains progress information and common metadata such as create time.
+        # Some services might not provide such metadata.  Any method that returns a
+        # long-running operation should document the metadata type, if any.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+    },
+    &quot;done&quot;: True or False, # If the value is `false`, it means the operation is still in progress.
+        # If `true`, the operation is completed, and either `error` or `response` is
+        # available.
   }</pre>
 </div>
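The `Operation` resource above can be inspected with a small helper once it is returned by `execute()`. This is a sketch over a plain dict, not part of the client library's API; `done` gates the result, and exactly one of `error` or `response` is then populated.

```python
# Sketch: resolving a long-running Operation resource.
def resolve_operation(op):
    """Return ('pending', None), ('error', status) or ('ok', response)."""
    if not op.get("done"):
        return ("pending", None)
    if "error" in op:
        # `error` is a google.rpc.Status-shaped dict: code, message, details.
        return ("error", op["error"])
    return ("ok", op.get("response", {}))

# Example with a failed operation (placeholder name and message):
failed = {
    "name": "projects/my-project/operations/op-123",
    "done": True,
    "error": {"code": 3, "message": "invalid argument"},
}
state, detail = resolve_operation(failed)
```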
 
@@ -2596,7 +2615,7 @@
   <pre>Sets the access control policy on the specified resource. Replaces any
 existing policy.
 
-Can return Public Errors: NOT_FOUND, INVALID_ARGUMENT and PERMISSION_DENIED
+Can return `NOT_FOUND`, `INVALID_ARGUMENT`, and `PERMISSION_DENIED` errors.
 
 Args:
   resource: string, REQUIRED: The resource for which the policy is being specified.
@@ -2605,7 +2624,7 @@
     The object takes the form of:
 
 { # Request message for `SetIamPolicy` method.
-    "policy": { # An Identity and Access Management (IAM) policy, which specifies access # REQUIRED: The complete policy to be applied to the `resource`. The size of
+    &quot;policy&quot;: { # An Identity and Access Management (IAM) policy, which specifies access # REQUIRED: The complete policy to be applied to the `resource`. The size of
         # the policy is limited to a few 10s of KB. An empty policy is a
         # valid policy but certain Cloud Platform services (such as Projects)
         # might reject them.
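Assembling the request body for `setIamPolicy` can be sketched as a read-modify-write cycle: the policy is normally fetched with `getIamPolicy` first, so its `etag` rides along and protects the update against concurrent writes. The role, member, and etag values below are placeholders for illustration.

```python
# Sketch: building a SetIamPolicy request body from a fetched policy.
fetched_policy = {  # stand-in for a getIamPolicy response
    "bindings": [],
    "etag": "BwWWja0YfJA=",
    "version": 1,
}

# Grant a role under a time-limited condition (placeholder role/member).
fetched_policy["bindings"].append({
    "role": "roles/ml.modelUser",
    "members": ["user:eve@example.com"],
    "condition": {
        "title": "expirable access",
        "expression": "request.time < timestamp('2020-10-01T00:00:00.000Z')",
    },
})
# Any policy that carries conditional bindings must be written as version 3.
fetched_policy["version"] = 3

set_iam_policy_body = {"policy": fetched_policy}
```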
@@ -2618,36 +2637,40 @@
         # permissions; each `role` can be an IAM predefined role or a user-created
         # custom role.
         #
-        # Optionally, a `binding` can specify a `condition`, which is a logical
-        # expression that allows access to a resource only if the expression evaluates
-        # to `true`. A condition can add constraints based on attributes of the
-        # request, the resource, or both.
+        # For some types of Google Cloud resources, a `binding` can also specify a
+        # `condition`, which is a logical expression that allows access to a resource
+        # only if the expression evaluates to `true`. A condition can add constraints
+        # based on attributes of the request, the resource, or both. To learn which
+        # resources support conditions in their IAM policies, see the
+        # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
         #
         # **JSON example:**
         #
         #     {
-        #       "bindings": [
+        #       &quot;bindings&quot;: [
         #         {
-        #           "role": "roles/resourcemanager.organizationAdmin",
-        #           "members": [
-        #             "user:mike@example.com",
-        #             "group:admins@example.com",
-        #             "domain:google.com",
-        #             "serviceAccount:my-project-id@appspot.gserviceaccount.com"
+        #           &quot;role&quot;: &quot;roles/resourcemanager.organizationAdmin&quot;,
+        #           &quot;members&quot;: [
+        #             &quot;user:mike@example.com&quot;,
+        #             &quot;group:admins@example.com&quot;,
+        #             &quot;domain:google.com&quot;,
+        #             &quot;serviceAccount:my-project-id@appspot.gserviceaccount.com&quot;
         #           ]
         #         },
         #         {
-        #           "role": "roles/resourcemanager.organizationViewer",
-        #           "members": ["user:eve@example.com"],
-        #           "condition": {
-        #             "title": "expirable access",
-        #             "description": "Does not grant access after Sep 2020",
-        #             "expression": "request.time &lt; timestamp('2020-10-01T00:00:00.000Z')",
+        #           &quot;role&quot;: &quot;roles/resourcemanager.organizationViewer&quot;,
+        #           &quot;members&quot;: [
+        #             &quot;user:eve@example.com&quot;
+        #           ],
+        #           &quot;condition&quot;: {
+        #             &quot;title&quot;: &quot;expirable access&quot;,
+        #             &quot;description&quot;: &quot;Does not grant access after Sep 2020&quot;,
+        #             &quot;expression&quot;: &quot;request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)&quot;,
         #           }
         #         }
         #       ],
-        #       "etag": "BwWWja0YfJA=",
-        #       "version": 3
+        #       &quot;etag&quot;: &quot;BwWWja0YfJA=&quot;,
+        #       &quot;version&quot;: 3
         #     }
         #
         # **YAML example:**
@@ -2665,19 +2688,190 @@
         #       condition:
         #         title: expirable access
         #         description: Does not grant access after Sep 2020
-        #         expression: request.time &lt; timestamp('2020-10-01T00:00:00.000Z')
+        #         expression: request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)
         #     - etag: BwWWja0YfJA=
         #     - version: 3
         #
         # For a description of IAM and its features, see the
         # [IAM documentation](https://cloud.google.com/iam/docs/).
-      "bindings": [ # Associates a list of `members` to a `role`. Optionally, may specify a
+      &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
+          # prevent simultaneous updates of a policy from overwriting each other.
+          # It is strongly suggested that systems make use of the `etag` in the
+          # read-modify-write cycle to perform policy updates in order to avoid race
+          # conditions: An `etag` is returned in the response to `getIamPolicy`, and
+          # systems are expected to put that etag in the request to `setIamPolicy` to
+          # ensure that their change will be applied to the same version of the policy.
+          #
+          # **Important:** If you use IAM Conditions, you must include the `etag` field
+          # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
+          # you to overwrite a version `3` policy with a version `1` policy, and all of
+          # the conditions in the version `3` policy are lost.
+      &quot;version&quot;: 42, # Specifies the format of the policy.
+          #
+          # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
+          # are rejected.
+          #
+          # Any operation that affects conditional role bindings must specify version
+          # `3`. This requirement applies to the following operations:
+          #
+          # * Getting a policy that includes a conditional role binding
+          # * Adding a conditional role binding to a policy
+          # * Changing a conditional role binding in a policy
+          # * Removing any role binding, with or without a condition, from a policy
+          #   that includes conditions
+          #
+          # **Important:** If you use IAM Conditions, you must include the `etag` field
+          # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
+          # you to overwrite a version `3` policy with a version `1` policy, and all of
+          # the conditions in the version `3` policy are lost.
+          #
+          # If a policy does not include any conditions, operations on that policy may
+          # specify any valid version or leave the field unset.
+          #
+          # To learn which resources support conditions in their IAM policies, see the
+          # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
+      &quot;auditConfigs&quot;: [ # Specifies cloud audit logging configuration for this policy.
+        { # Specifies the audit configuration for a service.
+            # The configuration determines which permission types are logged, and what
+            # identities, if any, are exempted from logging.
+            # An AuditConfig must have one or more AuditLogConfigs.
+            #
+            # If there are AuditConfigs for both `allServices` and a specific service,
+            # the union of the two AuditConfigs is used for that service: the log_types
+            # specified in each AuditConfig are enabled, and the exempted_members in each
+            # AuditLogConfig are exempted.
+            #
+            # Example Policy with multiple AuditConfigs:
+            #
+            #     {
+            #       &quot;audit_configs&quot;: [
+            #         {
+            #           &quot;service&quot;: &quot;allServices&quot;
+            #           &quot;audit_log_configs&quot;: [
+            #             {
+            #               &quot;log_type&quot;: &quot;DATA_READ&quot;,
+            #               &quot;exempted_members&quot;: [
+            #                 &quot;user:jose@example.com&quot;
+            #               ]
+            #             },
+            #             {
+            #               &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
+            #             },
+            #             {
+            #               &quot;log_type&quot;: &quot;ADMIN_READ&quot;,
+            #             }
+            #           ]
+            #         },
+            #         {
+            #           &quot;service&quot;: &quot;sampleservice.googleapis.com&quot;
+            #           &quot;audit_log_configs&quot;: [
+            #             {
+            #               &quot;log_type&quot;: &quot;DATA_READ&quot;,
+            #             },
+            #             {
+            #               &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
+            #               &quot;exempted_members&quot;: [
+            #                 &quot;user:aliya@example.com&quot;
+            #               ]
+            #             }
+            #           ]
+            #         }
+            #       ]
+            #     }
+            #
+            # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
+            # logging. It also exempts jose@example.com from DATA_READ logging, and
+            # aliya@example.com from DATA_WRITE logging.
+          &quot;service&quot;: &quot;A String&quot;, # Specifies a service that will be enabled for audit logging.
+              # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
+              # `allServices` is a special value that covers all services.
+          &quot;auditLogConfigs&quot;: [ # The configuration for logging of each type of permission.
+            { # Provides the configuration for logging a type of permissions.
+                # Example:
+                #
+                #     {
+                #       &quot;audit_log_configs&quot;: [
+                #         {
+                #           &quot;log_type&quot;: &quot;DATA_READ&quot;,
+                #           &quot;exempted_members&quot;: [
+                #             &quot;user:jose@example.com&quot;
+                #           ]
+                #         },
+                #         {
+                #           &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
+                #         }
+                #       ]
+                #     }
+                #
+                # This enables &#x27;DATA_READ&#x27; and &#x27;DATA_WRITE&#x27; logging, while exempting
+                # jose@example.com from DATA_READ logging.
+              &quot;exemptedMembers&quot;: [ # Specifies the identities that do not cause logging for this type of
+                  # permission.
+                  # Follows the same format of Binding.members.
+                &quot;A String&quot;,
+              ],
+              &quot;logType&quot;: &quot;A String&quot;, # The log type that this config enables.
+            },
+          ],
+        },
+      ],
+      &quot;bindings&quot;: [ # Associates a list of `members` to a `role`. Optionally, may specify a
           # `condition` that determines how and when the `bindings` are applied. Each
           # of the `bindings` must contain at least one member.
         { # Associates `members` with a `role`.
-          "role": "A String", # Role that is assigned to `members`.
-              # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
-          "members": [ # Specifies the identities requesting access for a Cloud Platform resource.
+          &quot;condition&quot;: { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
+              #
+              # If the condition evaluates to `true`, then this binding applies to the
+              # current request.
+              #
+              # If the condition evaluates to `false`, then this binding does not apply to
+              # the current request. However, a different role binding might grant the same
+              # role to one or more of the members in this binding.
+              #
+              # To learn which resources support conditions in their IAM policies, see the
+              # [IAM
+              # documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
+              # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
+              # are documented at https://github.com/google/cel-spec.
+              #
+              # Example (Comparison):
+              #
+              #     title: &quot;Summary size limit&quot;
+              #     description: &quot;Determines if a summary is less than 100 chars&quot;
+              #     expression: &quot;document.summary.size() &lt; 100&quot;
+              #
+              # Example (Equality):
+              #
+              #     title: &quot;Requestor is owner&quot;
+              #     description: &quot;Determines if requestor is the document owner&quot;
+              #     expression: &quot;document.owner == request.auth.claims.email&quot;
+              #
+              # Example (Logic):
+              #
+              #     title: &quot;Public documents&quot;
+              #     description: &quot;Determine whether the document should be publicly visible&quot;
+              #     expression: &quot;document.type != &#x27;private&#x27; &amp;&amp; document.type != &#x27;internal&#x27;&quot;
+              #
+              # Example (Data Manipulation):
+              #
+              #     title: &quot;Notification string&quot;
+              #     description: &quot;Create a notification string with a timestamp.&quot;
+              #     expression: &quot;&#x27;New message received at &#x27; + string(document.create_time)&quot;
+              #
+              # The exact variables and functions that may be referenced within an expression
+              # are determined by the service that evaluates it. See the service
+              # documentation for additional information.
+            &quot;title&quot;: &quot;A String&quot;, # Optional. Title for the expression, i.e. a short string describing
+                # its purpose. This can be used, e.g., in UIs which allow entering the
+                # expression.
+            &quot;location&quot;: &quot;A String&quot;, # Optional. String indicating the location of the expression for error
+                # reporting, e.g. a file name and a position in the file.
+            &quot;description&quot;: &quot;A String&quot;, # Optional. Description of the expression. This is a longer text which
+                # describes the expression, e.g. when hovered over it in a UI.
+            &quot;expression&quot;: &quot;A String&quot;, # Textual representation of an expression in Common Expression Language
+                # syntax.
+          },
+          &quot;members&quot;: [ # Specifies the identities requesting access for a Cloud Platform resource.
               # `members` can have the following values:
               #
               # * `allUsers`: A special identifier that represents anyone who is
@@ -2720,178 +2914,18 @@
               # * `domain:{domain}`: The G Suite domain (primary) that represents all the
               #    users of that domain. For example, `google.com` or `example.com`.
               #
-            "A String",
+            &quot;A String&quot;,
           ],
-          "condition": { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
-              # NOTE: An unsatisfied condition will not allow user access via current
-              # binding. Different bindings, including their conditions, are examined
-              # independently.
-              # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
-              # are documented at https://github.com/google/cel-spec.
-              #
-              # Example (Comparison):
-              #
-              #     title: "Summary size limit"
-              #     description: "Determines if a summary is less than 100 chars"
-              #     expression: "document.summary.size() &lt; 100"
-              #
-              # Example (Equality):
-              #
-              #     title: "Requestor is owner"
-              #     description: "Determines if requestor is the document owner"
-              #     expression: "document.owner == request.auth.claims.email"
-              #
-              # Example (Logic):
-              #
-              #     title: "Public documents"
-              #     description: "Determine whether the document should be publicly visible"
-              #     expression: "document.type != 'private' &amp;&amp; document.type != 'internal'"
-              #
-              # Example (Data Manipulation):
-              #
-              #     title: "Notification string"
-              #     description: "Create a notification string with a timestamp."
-              #     expression: "'New message received at ' + string(document.create_time)"
-              #
-              # The exact variables and functions that may be referenced within an expression
-              # are determined by the service that evaluates it. See the service
-              # documentation for additional information.
-            "description": "A String", # Optional. Description of the expression. This is a longer text which
-                # describes the expression, e.g. when hovered over it in a UI.
-            "expression": "A String", # Textual representation of an expression in Common Expression Language
-                # syntax.
-            "location": "A String", # Optional. String indicating the location of the expression for error
-                # reporting, e.g. a file name and a position in the file.
-            "title": "A String", # Optional. Title for the expression, i.e. a short string describing
-                # its purpose. This can be used e.g. in UIs which allow to enter the
-                # expression.
-          },
+          &quot;role&quot;: &quot;A String&quot;, # Role that is assigned to `members`.
+              # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
         },
       ],
-      "auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
-        { # Specifies the audit configuration for a service.
-            # The configuration determines which permission types are logged, and what
-            # identities, if any, are exempted from logging.
-            # An AuditConfig must have one or more AuditLogConfigs.
-            #
-            # If there are AuditConfigs for both `allServices` and a specific service,
-            # the union of the two AuditConfigs is used for that service: the log_types
-            # specified in each AuditConfig are enabled, and the exempted_members in each
-            # AuditLogConfig are exempted.
-            #
-            # Example Policy with multiple AuditConfigs:
-            #
-            #     {
-            #       "audit_configs": [
-            #         {
-            #           "service": "allServices"
-            #           "audit_log_configs": [
-            #             {
-            #               "log_type": "DATA_READ",
-            #               "exempted_members": [
-            #                 "user:jose@example.com"
-            #               ]
-            #             },
-            #             {
-            #               "log_type": "DATA_WRITE",
-            #             },
-            #             {
-            #               "log_type": "ADMIN_READ",
-            #             }
-            #           ]
-            #         },
-            #         {
-            #           "service": "sampleservice.googleapis.com"
-            #           "audit_log_configs": [
-            #             {
-            #               "log_type": "DATA_READ",
-            #             },
-            #             {
-            #               "log_type": "DATA_WRITE",
-            #               "exempted_members": [
-            #                 "user:aliya@example.com"
-            #               ]
-            #             }
-            #           ]
-            #         }
-            #       ]
-            #     }
-            #
-            # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
-            # logging. It also exempts jose@example.com from DATA_READ logging, and
-            # aliya@example.com from DATA_WRITE logging.
-          "auditLogConfigs": [ # The configuration for logging of each type of permission.
-            { # Provides the configuration for logging a type of permissions.
-                # Example:
-                #
-                #     {
-                #       "audit_log_configs": [
-                #         {
-                #           "log_type": "DATA_READ",
-                #           "exempted_members": [
-                #             "user:jose@example.com"
-                #           ]
-                #         },
-                #         {
-                #           "log_type": "DATA_WRITE",
-                #         }
-                #       ]
-                #     }
-                #
-                # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
-                # jose@example.com from DATA_READ logging.
-              "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
-                  # permission.
-                  # Follows the same format of Binding.members.
-                "A String",
-              ],
-              "logType": "A String", # The log type that this config enables.
-            },
-          ],
-          "service": "A String", # Specifies a service that will be enabled for audit logging.
-              # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
-              # `allServices` is a special value that covers all services.
-        },
-      ],
-      "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
-          # prevent simultaneous updates of a policy from overwriting each other.
-          # It is strongly suggested that systems make use of the `etag` in the
-          # read-modify-write cycle to perform policy updates in order to avoid race
-          # conditions: An `etag` is returned in the response to `getIamPolicy`, and
-          # systems are expected to put that etag in the request to `setIamPolicy` to
-          # ensure that their change will be applied to the same version of the policy.
-          #
-          # **Important:** If you use IAM Conditions, you must include the `etag` field
-          # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
-          # you to overwrite a version `3` policy with a version `1` policy, and all of
-          # the conditions in the version `3` policy are lost.
-      "version": 42, # Specifies the format of the policy.
-          #
-          # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
-          # are rejected.
-          #
-          # Any operation that affects conditional role bindings must specify version
-          # `3`. This requirement applies to the following operations:
-          #
-          # * Getting a policy that includes a conditional role binding
-          # * Adding a conditional role binding to a policy
-          # * Changing a conditional role binding in a policy
-          # * Removing any role binding, with or without a condition, from a policy
-          #   that includes conditions
-          #
-          # **Important:** If you use IAM Conditions, you must include the `etag` field
-          # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
-          # you to overwrite a version `3` policy with a version `1` policy, and all of
-          # the conditions in the version `3` policy are lost.
-          #
-          # If a policy does not include any conditions, operations on that policy may
-          # specify any valid version or leave the field unset.
     },
-    "updateMask": "A String", # OPTIONAL: A FieldMask specifying which fields of the policy to modify. Only
+    &quot;updateMask&quot;: &quot;A String&quot;, # OPTIONAL: A FieldMask specifying which fields of the policy to modify. Only
         # the fields in the mask will be modified. If no mask is provided, the
         # following default mask is used:
-        # paths: "bindings, etag"
-        # This field is only used by Cloud IAM.
+        # 
+        # `paths: &quot;bindings, etag&quot;`
   }
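The `etag` field above prescribes a read-modify-write cycle: fetch the policy with `getIamPolicy`, modify it locally, and send it back with `setIamPolicy`, carrying the `etag` forward so the server can reject the write if the policy changed in between. A minimal sketch of the local "modify" step on a plain policy dict is below; the helper name `add_binding` is hypothetical, and the commented client calls only indicate where it would sit in a get/set cycle (resource names are placeholders).

```python
def add_binding(policy, role, members):
    """Return a copy of `policy` with `members` granted `role`.

    Preserves the `etag` so the subsequent setIamPolicy call applies to
    the same policy version that was read.
    """
    updated = dict(policy)
    bindings = [dict(b) for b in policy.get("bindings", [])]
    for binding in bindings:
        # Merge into an existing unconditional binding for the same role.
        if binding.get("role") == role and "condition" not in binding:
            binding["members"] = sorted(
                set(binding.get("members", [])) | set(members))
            break
    else:
        bindings.append({"role": role, "members": list(members)})
    updated["bindings"] = bindings
    return updated

# Hypothetical usage with the discovery client (placeholders, not run here):
#
#   policy = ml.projects().models().getIamPolicy(
#       resource="projects/my-project/models/my-model").execute()
#   body = {"policy": add_binding(policy, "roles/viewer",
#                                 ["user:jose@example.com"])}
#   ml.projects().models().setIamPolicy(
#       resource="projects/my-project/models/my-model", body=body).execute()
```

Because the unchanged `etag` travels with the body, two concurrent writers cannot silently overwrite each other: the second `setIamPolicy` call fails and its caller re-reads and retries.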
 
   x__xgafv: string, V1 error format.
@@ -2912,36 +2946,40 @@
       # permissions; each `role` can be an IAM predefined role or a user-created
       # custom role.
       #
-      # Optionally, a `binding` can specify a `condition`, which is a logical
-      # expression that allows access to a resource only if the expression evaluates
-      # to `true`. A condition can add constraints based on attributes of the
-      # request, the resource, or both.
+      # For some types of Google Cloud resources, a `binding` can also specify a
+      # `condition`, which is a logical expression that allows access to a resource
+      # only if the expression evaluates to `true`. A condition can add constraints
+      # based on attributes of the request, the resource, or both. To learn which
+      # resources support conditions in their IAM policies, see the
+      # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
       #
       # **JSON example:**
       #
       #     {
-      #       "bindings": [
+      #       &quot;bindings&quot;: [
       #         {
-      #           "role": "roles/resourcemanager.organizationAdmin",
-      #           "members": [
-      #             "user:mike@example.com",
-      #             "group:admins@example.com",
-      #             "domain:google.com",
-      #             "serviceAccount:my-project-id@appspot.gserviceaccount.com"
+      #           &quot;role&quot;: &quot;roles/resourcemanager.organizationAdmin&quot;,
+      #           &quot;members&quot;: [
+      #             &quot;user:mike@example.com&quot;,
+      #             &quot;group:admins@example.com&quot;,
+      #             &quot;domain:google.com&quot;,
+      #             &quot;serviceAccount:my-project-id@appspot.gserviceaccount.com&quot;
       #           ]
       #         },
       #         {
-      #           "role": "roles/resourcemanager.organizationViewer",
-      #           "members": ["user:eve@example.com"],
-      #           "condition": {
-      #             "title": "expirable access",
-      #             "description": "Does not grant access after Sep 2020",
-      #             "expression": "request.time &lt; timestamp('2020-10-01T00:00:00.000Z')",
+      #           &quot;role&quot;: &quot;roles/resourcemanager.organizationViewer&quot;,
+      #           &quot;members&quot;: [
+      #             &quot;user:eve@example.com&quot;
+      #           ],
+      #           &quot;condition&quot;: {
+      #             &quot;title&quot;: &quot;expirable access&quot;,
+      #             &quot;description&quot;: &quot;Does not grant access after Sep 2020&quot;,
+      #             &quot;expression&quot;: &quot;request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)&quot;,
       #           }
       #         }
       #       ],
-      #       "etag": "BwWWja0YfJA=",
-      #       "version": 3
+      #       &quot;etag&quot;: &quot;BwWWja0YfJA=&quot;,
+      #       &quot;version&quot;: 3
       #     }
       #
       # **YAML example:**
@@ -2959,19 +2997,190 @@
       #       condition:
       #         title: expirable access
       #         description: Does not grant access after Sep 2020
-      #         expression: request.time &lt; timestamp('2020-10-01T00:00:00.000Z')
+      #         expression: request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)
       #     - etag: BwWWja0YfJA=
       #     - version: 3
       #
       # For a description of IAM and its features, see the
       # [IAM documentation](https://cloud.google.com/iam/docs/).
-    "bindings": [ # Associates a list of `members` to a `role`. Optionally, may specify a
+    &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
+        # prevent simultaneous updates of a policy from overwriting each other.
+        # It is strongly suggested that systems make use of the `etag` in the
+        # read-modify-write cycle to perform policy updates in order to avoid race
+        # conditions: An `etag` is returned in the response to `getIamPolicy`, and
+        # systems are expected to put that etag in the request to `setIamPolicy` to
+        # ensure that their change will be applied to the same version of the policy.
+        #
+        # **Important:** If you use IAM Conditions, you must include the `etag` field
+        # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
+        # you to overwrite a version `3` policy with a version `1` policy, and all of
+        # the conditions in the version `3` policy are lost.
+    &quot;version&quot;: 42, # Specifies the format of the policy.
+        #
+        # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
+        # are rejected.
+        #
+        # Any operation that affects conditional role bindings must specify version
+        # `3`. This requirement applies to the following operations:
+        #
+        # * Getting a policy that includes a conditional role binding
+        # * Adding a conditional role binding to a policy
+        # * Changing a conditional role binding in a policy
+        # * Removing any role binding, with or without a condition, from a policy
+        #   that includes conditions
+        #
+        # **Important:** If you use IAM Conditions, you must include the `etag` field
+        # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
+        # you to overwrite a version `3` policy with a version `1` policy, and all of
+        # the conditions in the version `3` policy are lost.
+        #
+        # If a policy does not include any conditions, operations on that policy may
+        # specify any valid version or leave the field unset.
+        #
+        # To learn which resources support conditions in their IAM policies, see the
+        # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
+    &quot;auditConfigs&quot;: [ # Specifies cloud audit logging configuration for this policy.
+      { # Specifies the audit configuration for a service.
+          # The configuration determines which permission types are logged, and what
+          # identities, if any, are exempted from logging.
+          # An AuditConfig must have one or more AuditLogConfigs.
+          #
+          # If there are AuditConfigs for both `allServices` and a specific service,
+          # the union of the two AuditConfigs is used for that service: the log_types
+          # specified in each AuditConfig are enabled, and the exempted_members in each
+          # AuditLogConfig are exempted.
+          #
+          # Example Policy with multiple AuditConfigs:
+          #
+          #     {
+          #       &quot;audit_configs&quot;: [
+          #         {
+          #           &quot;service&quot;: &quot;allServices&quot;
+          #           &quot;audit_log_configs&quot;: [
+          #             {
+          #               &quot;log_type&quot;: &quot;DATA_READ&quot;,
+          #               &quot;exempted_members&quot;: [
+          #                 &quot;user:jose@example.com&quot;
+          #               ]
+          #             },
+          #             {
+          #               &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
+          #             },
+          #             {
+          #               &quot;log_type&quot;: &quot;ADMIN_READ&quot;,
+          #             }
+          #           ]
+          #         },
+          #         {
+          #           &quot;service&quot;: &quot;sampleservice.googleapis.com&quot;
+          #           &quot;audit_log_configs&quot;: [
+          #             {
+          #               &quot;log_type&quot;: &quot;DATA_READ&quot;,
+          #             },
+          #             {
+          #               &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
+          #               &quot;exempted_members&quot;: [
+          #                 &quot;user:aliya@example.com&quot;
+          #               ]
+          #             }
+          #           ]
+          #         }
+          #       ]
+          #     }
+          #
+          # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
+          # logging. It also exempts jose@example.com from DATA_READ logging, and
+          # aliya@example.com from DATA_WRITE logging.
+        &quot;service&quot;: &quot;A String&quot;, # Specifies a service that will be enabled for audit logging.
+            # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
+            # `allServices` is a special value that covers all services.
+        &quot;auditLogConfigs&quot;: [ # The configuration for logging of each type of permission.
+          { # Provides the configuration for logging a type of permissions.
+              # Example:
+              #
+              #     {
+              #       &quot;audit_log_configs&quot;: [
+              #         {
+              #           &quot;log_type&quot;: &quot;DATA_READ&quot;,
+              #           &quot;exempted_members&quot;: [
+              #             &quot;user:jose@example.com&quot;
+              #           ]
+              #         },
+              #         {
+              #           &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
+              #         }
+              #       ]
+              #     }
+              #
+              # This enables &#x27;DATA_READ&#x27; and &#x27;DATA_WRITE&#x27; logging, while exempting
+              # jose@example.com from DATA_READ logging.
+            &quot;exemptedMembers&quot;: [ # Specifies the identities that do not cause logging for this type of
+                # permission.
+                # Follows the same format as Binding.members.
+              &quot;A String&quot;,
+            ],
+            &quot;logType&quot;: &quot;A String&quot;, # The log type that this config enables.
+          },
+        ],
+      },
+    ],
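The union semantics documented for `auditConfigs` (when both `allServices` and a specific service are configured, every log type from either config is enabled, and the exempted members of each AuditLogConfig are honored) can be sketched as a small helper. This is an illustrative reading of the documented behavior, not client-library code; the helper name is mine, and field names follow the camelCase request fields (`logType`, `exemptedMembers`) rather than the snake_case JSON in the example above.

```python
def effective_audit_log_configs(audit_configs, service):
    """Merge the `allServices` AuditConfig with the config for `service`.

    Per the union semantics documented above: every log type enabled in
    either config is enabled, and the exempted members of each
    AuditLogConfig are exempted for that log type.
    """
    merged = {}  # logType -> set of exempted members
    for config in audit_configs:
        if config.get("service") in ("allServices", service):
            for log_config in config.get("auditLogConfigs", []):
                exempted = merged.setdefault(log_config["logType"], set())
                exempted.update(log_config.get("exemptedMembers", []))
    return {log_type: sorted(members) for log_type, members in merged.items()}
```

Run against the `sampleservice.googleapis.com` example above, this yields DATA_READ, DATA_WRITE, and ADMIN_READ logging, with jose@example.com exempt from DATA_READ and aliya@example.com exempt from DATA_WRITE, matching the outcome stated in the field description.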
+    &quot;bindings&quot;: [ # Associates a list of `members` to a `role`. Optionally, may specify a
         # `condition` that determines how and when the `bindings` are applied. Each
         # of the `bindings` must contain at least one member.
       { # Associates `members` with a `role`.
-        "role": "A String", # Role that is assigned to `members`.
-            # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
-        "members": [ # Specifies the identities requesting access for a Cloud Platform resource.
+        &quot;condition&quot;: { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
+            #
+            # If the condition evaluates to `true`, then this binding applies to the
+            # current request.
+            #
+            # If the condition evaluates to `false`, then this binding does not apply to
+            # the current request. However, a different role binding might grant the same
+            # role to one or more of the members in this binding.
+            #
+            # To learn which resources support conditions in their IAM policies, see the
+            # [IAM
+            # documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
+            # CEL is a C-like expression language. The syntax and semantics of CEL
+            # are documented at https://github.com/google/cel-spec.
+            #
+            # Example (Comparison):
+            #
+            #     title: &quot;Summary size limit&quot;
+            #     description: &quot;Determines if a summary is less than 100 chars&quot;
+            #     expression: &quot;document.summary.size() &lt; 100&quot;
+            #
+            # Example (Equality):
+            #
+            #     title: &quot;Requestor is owner&quot;
+            #     description: &quot;Determines if requestor is the document owner&quot;
+            #     expression: &quot;document.owner == request.auth.claims.email&quot;
+            #
+            # Example (Logic):
+            #
+            #     title: &quot;Public documents&quot;
+            #     description: &quot;Determine whether the document should be publicly visible&quot;
+            #     expression: &quot;document.type != &#x27;private&#x27; &amp;&amp; document.type != &#x27;internal&#x27;&quot;
+            #
+            # Example (Data Manipulation):
+            #
+            #     title: &quot;Notification string&quot;
+            #     description: &quot;Create a notification string with a timestamp.&quot;
+            #     expression: &quot;&#x27;New message received at &#x27; + string(document.create_time)&quot;
+            #
+            # The exact variables and functions that may be referenced within an expression
+            # are determined by the service that evaluates it. See the service
+            # documentation for additional information.
+          &quot;title&quot;: &quot;A String&quot;, # Optional. Title for the expression, i.e. a short string describing
+              # its purpose. This can be used, e.g., in UIs which allow entering the
+              # expression.
+          &quot;location&quot;: &quot;A String&quot;, # Optional. String indicating the location of the expression for error
+              # reporting, e.g. a file name and a position in the file.
+          &quot;description&quot;: &quot;A String&quot;, # Optional. Description of the expression. This is a longer text which
+              # describes the expression, e.g. when hovered over it in a UI.
+          &quot;expression&quot;: &quot;A String&quot;, # Textual representation of an expression in Common Expression Language
+              # syntax.
+        },
+        &quot;members&quot;: [ # Specifies the identities requesting access for a Cloud Platform resource.
             # `members` can have the following values:
             #
             # * `allUsers`: A special identifier that represents anyone who is
@@ -3014,172 +3223,12 @@
             # * `domain:{domain}`: The G Suite domain (primary) that represents all the
             #    users of that domain. For example, `google.com` or `example.com`.
             #
-          "A String",
+          &quot;A String&quot;,
         ],
-        "condition": { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
-            # NOTE: An unsatisfied condition will not allow user access via current
-            # binding. Different bindings, including their conditions, are examined
-            # independently.
-            # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
-            # are documented at https://github.com/google/cel-spec.
-            #
-            # Example (Comparison):
-            #
-            #     title: "Summary size limit"
-            #     description: "Determines if a summary is less than 100 chars"
-            #     expression: "document.summary.size() &lt; 100"
-            #
-            # Example (Equality):
-            #
-            #     title: "Requestor is owner"
-            #     description: "Determines if requestor is the document owner"
-            #     expression: "document.owner == request.auth.claims.email"
-            #
-            # Example (Logic):
-            #
-            #     title: "Public documents"
-            #     description: "Determine whether the document should be publicly visible"
-            #     expression: "document.type != 'private' &amp;&amp; document.type != 'internal'"
-            #
-            # Example (Data Manipulation):
-            #
-            #     title: "Notification string"
-            #     description: "Create a notification string with a timestamp."
-            #     expression: "'New message received at ' + string(document.create_time)"
-            #
-            # The exact variables and functions that may be referenced within an expression
-            # are determined by the service that evaluates it. See the service
-            # documentation for additional information.
-          "description": "A String", # Optional. Description of the expression. This is a longer text which
-              # describes the expression, e.g. when hovered over it in a UI.
-          "expression": "A String", # Textual representation of an expression in Common Expression Language
-              # syntax.
-          "location": "A String", # Optional. String indicating the location of the expression for error
-              # reporting, e.g. a file name and a position in the file.
-          "title": "A String", # Optional. Title for the expression, i.e. a short string describing
-              # its purpose. This can be used e.g. in UIs which allow to enter the
-              # expression.
-        },
+        &quot;role&quot;: &quot;A String&quot;, # Role that is assigned to `members`.
+            # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
       },
     ],
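As the `version` field description notes, any operation that affects conditional role bindings must specify version `3`. A minimal sketch of that local edit, reusing the "expirable access" condition from the JSON example above; the helper name `add_conditional_binding` is hypothetical, and this only builds the policy dict for the request body, it does not call the API.

```python
def add_conditional_binding(policy, role, members,
                            title, description, expression):
    """Return a copy of `policy` with a condition-guarded binding appended.

    Bumps `version` to 3, which is required for any operation that
    affects conditional role bindings, while keeping the `etag` so the
    write targets the policy version that was read.
    """
    updated = dict(policy)
    updated["version"] = 3
    updated["bindings"] = list(policy.get("bindings", [])) + [{
        "role": role,
        "members": list(members),
        "condition": {
            "title": title,
            "description": description,
            "expression": expression,  # CEL, evaluated by the service
        },
    }]
    return updated
```

For example, granting `roles/resourcemanager.organizationViewer` to `user:eve@example.com` with the expression `request.time < timestamp('2020-10-01T00:00:00.000Z')` reproduces the conditional binding shown in the JSON example.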
-    "auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
-      { # Specifies the audit configuration for a service.
-          # The configuration determines which permission types are logged, and what
-          # identities, if any, are exempted from logging.
-          # An AuditConfig must have one or more AuditLogConfigs.
-          #
-          # If there are AuditConfigs for both `allServices` and a specific service,
-          # the union of the two AuditConfigs is used for that service: the log_types
-          # specified in each AuditConfig are enabled, and the exempted_members in each
-          # AuditLogConfig are exempted.
-          #
-          # Example Policy with multiple AuditConfigs:
-          #
-          #     {
-          #       "audit_configs": [
-          #         {
-          #           "service": "allServices"
-          #           "audit_log_configs": [
-          #             {
-          #               "log_type": "DATA_READ",
-          #               "exempted_members": [
-          #                 "user:jose@example.com"
-          #               ]
-          #             },
-          #             {
-          #               "log_type": "DATA_WRITE",
-          #             },
-          #             {
-          #               "log_type": "ADMIN_READ",
-          #             }
-          #           ]
-          #         },
-          #         {
-          #           "service": "sampleservice.googleapis.com"
-          #           "audit_log_configs": [
-          #             {
-          #               "log_type": "DATA_READ",
-          #             },
-          #             {
-          #               "log_type": "DATA_WRITE",
-          #               "exempted_members": [
-          #                 "user:aliya@example.com"
-          #               ]
-          #             }
-          #           ]
-          #         }
-          #       ]
-          #     }
-          #
-          # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
-          # logging. It also exempts jose@example.com from DATA_READ logging, and
-          # aliya@example.com from DATA_WRITE logging.
-        "auditLogConfigs": [ # The configuration for logging of each type of permission.
-          { # Provides the configuration for logging a type of permissions.
-              # Example:
-              #
-              #     {
-              #       "audit_log_configs": [
-              #         {
-              #           "log_type": "DATA_READ",
-              #           "exempted_members": [
-              #             "user:jose@example.com"
-              #           ]
-              #         },
-              #         {
-              #           "log_type": "DATA_WRITE",
-              #         }
-              #       ]
-              #     }
-              #
-              # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
-              # jose@example.com from DATA_READ logging.
-            "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
-                # permission.
-                # Follows the same format of Binding.members.
-              "A String",
-            ],
-            "logType": "A String", # The log type that this config enables.
-          },
-        ],
-        "service": "A String", # Specifies a service that will be enabled for audit logging.
-            # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
-            # `allServices` is a special value that covers all services.
-      },
-    ],
-    "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
-        # prevent simultaneous updates of a policy from overwriting each other.
-        # It is strongly suggested that systems make use of the `etag` in the
-        # read-modify-write cycle to perform policy updates in order to avoid race
-        # conditions: An `etag` is returned in the response to `getIamPolicy`, and
-        # systems are expected to put that etag in the request to `setIamPolicy` to
-        # ensure that their change will be applied to the same version of the policy.
-        #
-        # **Important:** If you use IAM Conditions, you must include the `etag` field
-        # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
-        # you to overwrite a version `3` policy with a version `1` policy, and all of
-        # the conditions in the version `3` policy are lost.
-    "version": 42, # Specifies the format of the policy.
-        #
-        # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
-        # are rejected.
-        #
-        # Any operation that affects conditional role bindings must specify version
-        # `3`. This requirement applies to the following operations:
-        #
-        # * Getting a policy that includes a conditional role binding
-        # * Adding a conditional role binding to a policy
-        # * Changing a conditional role binding in a policy
-        # * Removing any role binding, with or without a condition, from a policy
-        #   that includes conditions
-        #
-        # **Important:** If you use IAM Conditions, you must include the `etag` field
-        # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
-        # you to overwrite a version `3` policy with a version `1` policy, and all of
-        # the conditions in the version `3` policy are lost.
-        #
-        # If a policy does not include any conditions, operations on that policy may
-        # specify any valid version or leave the field unset.
   }</pre>
 </div>
 
@@ -3187,11 +3236,11 @@
     <code class="details" id="testIamPermissions">testIamPermissions(resource, body=None, x__xgafv=None)</code>
   <pre>Returns permissions that a caller has on the specified resource.
 If the resource does not exist, this will return an empty set of
-permissions, not a NOT_FOUND error.
+permissions, not a `NOT_FOUND` error.
 
 Note: This operation is designed to be used for building permission-aware
 UIs and command-line tools, not for authorization checking. This operation
-may "fail open" without warning.
+may &quot;fail open&quot; without warning.
 
 Args:
   resource: string, REQUIRED: The resource for which the policy detail is being requested.
@@ -3200,11 +3249,11 @@
     The object takes the form of:
 
 { # Request message for `TestIamPermissions` method.
-    "permissions": [ # The set of permissions to check for the `resource`. Permissions with
-        # wildcards (such as '*' or 'storage.*') are not allowed. For more
+    &quot;permissions&quot;: [ # The set of permissions to check for the `resource`. Permissions with
+        # wildcards (such as &#x27;*&#x27; or &#x27;storage.*&#x27;) are not allowed. For more
         # information see
         # [IAM Overview](https://cloud.google.com/iam/docs/overview#permissions).
-      "A String",
+      &quot;A String&quot;,
     ],
   }
 
@@ -3217,9 +3266,9 @@
   An object of the form:
 
     { # Response message for `TestIamPermissions` method.
-    "permissions": [ # A subset of `TestPermissionsRequest.permissions` that the caller is
+    &quot;permissions&quot;: [ # A subset of `TestPermissionsRequest.permissions` that the caller is
         # allowed.
-      "A String",
+      &quot;A String&quot;,
     ],
   }</pre>
 </div>
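The `TestIamPermissions` request and response shapes documented above can be sketched in plain Python. This is a minimal illustration, not part of the generated client: the helper names, the example resource path, and the `ml.models.*` permission strings are placeholders. The actual call through google-api-python-client is shown commented out because it requires credentials and network access.

```python
# Hypothetical helpers around the testIamPermissions request/response bodies.
# Everything here follows the dict shapes shown in the docs above.

def make_test_iam_request(permissions):
    """Build the TestIamPermissions request body.

    Per the docs, wildcard permissions such as '*' or 'storage.*'
    are not allowed, so reject them before sending the request.
    """
    for p in permissions:
        if "*" in p:
            raise ValueError(f"wildcard permission not allowed: {p}")
    return {"permissions": permissions}


def granted(response):
    """Extract the granted subset from a TestIamPermissions response.

    An empty list can mean either that the caller holds none of the
    permissions or that the resource does not exist (the API returns an
    empty set instead of NOT_FOUND), and the operation may "fail open",
    so this must not be used for authorization checks.
    """
    return response.get("permissions", [])


body = make_test_iam_request(["ml.models.get", "ml.models.predict"])

# With google-api-python-client (placeholder resource name):
# from googleapiclient import discovery
# resp = (discovery.build("ml", "v1").projects().models()
#         .testIamPermissions(resource="projects/my-project/models/my-model",
#                             body=body)
#         .execute())
# allowed = granted(resp)
```

The wildcard check mirrors the documented restriction so that an invalid request fails locally rather than with a server-side error.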