docs: docs update (#911)

diff --git a/docs/dyn/ml_v1.projects.models.versions.html b/docs/dyn/ml_v1.projects.models.versions.html
index d948048..9cc9ec9 100644
--- a/docs/dyn/ml_v1.projects.models.versions.html
+++ b/docs/dyn/ml_v1.projects.models.versions.html
@@ -84,7 +84,7 @@
   <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
 <p class="firstline">Gets information about a model version.</p>
 <p class="toc_element">
-  <code><a href="#list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</a></code></p>
+  <code><a href="#list">list(parent, pageToken=None, pageSize=None, filter=None, x__xgafv=None)</a></code></p>
 <p class="firstline">Gets basic information about all the versions of a model.</p>
 <p class="toc_element">
   <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
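A minimal sketch of driving the reordered `list` signature above from the Python client, paging with `list_next`. Arguments are passed by keyword, so the parameter reordering in this diff does not affect callers; the project and model IDs below are placeholders and application-default credentials are assumed.

```python
# Hedged sketch: page through all versions of a model.
from googleapiclient import discovery

ml = discovery.build("ml", "v1")
versions = ml.projects().models().versions()

request = versions.list(
    parent="projects/my-project/models/my_model",  # placeholder IDs
    pageSize=50,
)
while request is not None:
    response = request.execute()
    for version in response.get("versions", []):
        print(version["name"], version.get("state"))
    # list_next returns None once the last page has been consumed.
    request = versions.list_next(
        previous_request=request, previous_response=response
    )
```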
@@ -118,25 +118,37 @@
     # prediction requests. A model can have multiple versions. You can get
     # information about all of the versions of a given model by calling
     # projects.models.versions.list.
-  "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
-      # Only specify this field if you have specified a Compute Engine (N1) machine
-      # type in the `machineType` field. Learn more about [using GPUs for online
-      # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-      # Note that the AcceleratorConfig can be used in both Jobs and Versions.
-      # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
-      # [accelerators for online
-      # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-    "count": "A String", # The number of accelerators to attach to each machine running the job.
-    "type": "A String", # The type of accelerator to use.
+  &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
+  &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+      # model. You should generally use `auto_scaling` with an appropriate
+      # `min_nodes` instead, but this option is available if you want more
+      # predictable billing. Beware that latency and error rates will increase
+      # if the traffic exceeds the capability of the system to serve it based
+      # on the selected number of nodes.
+    &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
+        # starting from the time the model is deployed, so the cost of operating
+        # this model will be proportional to `nodes` * number of hours since
+        # last billing cycle plus the cost for each prediction performed.
   },
-  "labels": { # Optional. One or more labels that you can add, to organize your model
-      # versions. Each label is a key-value pair, where both the key and the value
-      # are arbitrary strings that you supply.
-      # For more information, see the documentation on
-      # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
-    "a_key": "A String",
-  },
-  "predictionClass": "A String", # Optional. The fully qualified name
+  &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
+      # 
+      # The version name must be unique within the model it is created in.
+  &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
+  &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
+      # 
+      # The following Python versions are available:
+      # 
+      # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+      #   later.
+      # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
+      #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
+      # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+      #   earlier.
+      # 
+      # Read more about the Python versions available for [each runtime
+      # version](/ml-engine/docs/runtime-version-list).
+  &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
+  &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
       # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
       # the Predictor interface described in this reference field. The module
       # containing this class should be included in a package provided to the
@@ -151,12 +163,12 @@
       # 
       # The following code sample provides the Predictor interface:
       # 
-      # &lt;pre style="max-width: 626px;"&gt;
+      # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
       # class Predictor(object):
-      # """Interface for constructing custom predictors."""
+      # &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
       # 
       # def predict(self, instances, **kwargs):
-      #     """Performs custom prediction.
+      #     &quot;&quot;&quot;Performs custom prediction.
       # 
       #     Instances are the decoded values from the request. They have already
       #     been deserialized from JSON.
@@ -169,12 +181,12 @@
       #     Returns:
       #         A list of outputs containing the prediction results. This list must
       #         be JSON serializable.
-      #     """
+      #     &quot;&quot;&quot;
       #     raise NotImplementedError()
       # 
       # @classmethod
       # def from_path(cls, model_dir):
-      #     """Creates an instance of Predictor using the given path.
+      #     &quot;&quot;&quot;Creates an instance of Predictor using the given path.
       # 
       #     Loading of the predictor should be done in this method.
       # 
@@ -185,15 +197,13 @@
       # 
       #     Returns:
       #         An instance implementing this Predictor class.
-      #     """
+      #     &quot;&quot;&quot;
       #     raise NotImplementedError()
       # &lt;/pre&gt;
       # 
       # Learn more about [the Predictor interface and custom prediction
       # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
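To make the documented interface concrete, here is a toy implementation of the kind of class `predictionClass` would name (e.g. `my_package.MyPredictor`, a hypothetical identifier, shipped via `packageUris`). The pickled scikit-learn model and the `model.pkl` file name are illustrative assumptions, not part of the contract.

```python
# Illustrative Predictor: loads a pickled model and returns JSON-serializable
# predictions, matching the predict()/from_path() contract documented above.
import os
import pickle


class MyPredictor(object):
    """Serves predictions from a model unpickled out of model_dir."""

    def __init__(self, model):
        self._model = model

    def predict(self, instances, **kwargs):
        """Performs custom prediction on already-deserialized instances."""
        outputs = self._model.predict(instances)
        return outputs.tolist()  # lists are JSON serializable

    @classmethod
    def from_path(cls, model_dir):
        """Loads the model; 'model.pkl' is an assumed file name."""
        with open(os.path.join(model_dir, "model.pkl"), "rb") as f:
            return cls(pickle.load(f))
```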
-  "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
-  "state": "A String", # Output only. The state of a version.
-  "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+  &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
       # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
       # or [scikit-learn pipelines with custom
       # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
@@ -207,17 +217,45 @@
       # 
       # If you specify this field, you must also set
       # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
-    "A String",
+    &quot;A String&quot;,
   ],
-  "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
-      # prevent simultaneous updates of a model from overwriting each other.
-      # It is strongly suggested that systems make use of the `etag` in the
-      # read-modify-write cycle to perform model updates in order to avoid race
-      # conditions: An `etag` is returned in the response to `GetVersion`, and
-      # systems are expected to put that etag in the request to `UpdateVersion` to
-      # ensure that their change will be applied to the model as intended.
-  "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
-  "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
+  &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
+      # Some explanation features require additional metadata to be loaded
+      # as part of the model payload.
+      # There are two feature attribution methods supported for TensorFlow models:
+      # integrated gradients and sampled Shapley.
+      # [Learn more about feature
+      # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
+    &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
+        # of the model&#x27;s fully differentiable structure. Refer to this paper for
+        # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
+        # of the model&#x27;s fully differentiable structure. Refer to this paper for
+        # more details: https://arxiv.org/abs/1703.01365
+      &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+          # A good value to start is 50 and gradually increase until the
+          # sum to diff property is met within the desired error range.
+    },
+    &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
+        # contribute to the label being predicted. A sampling strategy is used to
+        # approximate the value rather than considering all subsets of features.
+        # contribute to the label being predicted. A sampling strategy is used to
+        # approximate the value rather than considering all subsets of features.
+      &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
+          # Shapley values.
+    },
+    &quot;xraiAttribution&quot;: { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
+        # of the model&#x27;s fully differentiable structure. Refer to this paper for
+        # more details: https://arxiv.org/abs/1906.02825
+        # Currently only implemented for models with natural image inputs.
+        # of the model&#x27;s fully differentiable structure. Refer to this paper for
+        # more details: https://arxiv.org/abs/1906.02825
+        # Currently only implemented for models with natural image inputs.
+      &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+          # A good value to start is 50 and gradually increase until the
+          # sum to diff property is met within the desired error range.
+    },
+  },
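As a sketch, the `explanationConfig` stanza of a Version body might look like the following. The choice of sampled Shapley, the assumption that one attribution method is set at a time, and the surrounding field values are all illustrative.

```python
# Hypothetical Version body fragment enabling sampled Shapley attributions.
version_body = {
    "name": "v_explained",                     # placeholder version name
    "deploymentUri": "gs://my-bucket/model/",  # placeholder model location
    "runtimeVersion": "1.15",
    "pythonVersion": "3.7",
    "explanationConfig": {
        "sampledShapleyAttribution": {
            "numPaths": 10,  # feature permutations to sample
        },
    },
}
```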
+  &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
       # create the version. See the
       # [guide to model
       # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
@@ -228,120 +266,16 @@
       # the model service uses the specified location as the source of the model.
       # Once deployed, the model version is hosted by the prediction service, so
       # this location is useful only as a historical record.
-      # The total number of model files can't exceed 1000.
-  "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
-      # Some explanation features require additional metadata to be loaded
-      # as part of the model payload.
-      # There are two feature attribution methods supported for TensorFlow models:
-      # integrated gradients and sampled Shapley.
-      # [Learn more about feature
-      # attributions.](/ml-engine/docs/ai-explanations/overview)
-    "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
-        # of the model's fully differentiable structure. Refer to this paper for
-        # more details: https://arxiv.org/abs/1906.02825
-        # Currently only implemented for models with natural image inputs.
-        # of the model's fully differentiable structure. Refer to this paper for
-        # more details: https://arxiv.org/abs/1906.02825
-        # Currently only implemented for models with natural image inputs.
-      "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-          # A good value to start is 50 and gradually increase until the
-          # sum to diff property is met within the desired error range.
-    },
-    "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
-        # contribute to the label being predicted. A sampling strategy is used to
-        # approximate the value rather than considering all subsets of features.
-        # contribute to the label being predicted. A sampling strategy is used to
-        # approximate the value rather than considering all subsets of features.
-      "numPaths": 42, # The number of feature permutations to consider when approximating the
-          # Shapley values.
-    },
-    "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
-        # of the model's fully differentiable structure. Refer to this paper for
-        # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
-        # of the model's fully differentiable structure. Refer to this paper for
-        # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
-      "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-          # A good value to start is 50 and gradually increase until the
-          # sum to diff property is met within the desired error range.
-    },
-  },
-  "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
-      # requests that do not specify a version.
-      # 
-      # You can change the default version by calling
-      # projects.methods.versions.setDefault.
-  "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
-      # applies to online prediction service. If this field is not specified, it
-      # defaults to `mls1-c1-m2`.
-      # 
-      # Online prediction supports the following machine types:
-      # 
-      # * `mls1-c1-m2`
-      # * `mls1-c4-m2`
-      # * `n1-standard-2`
-      # * `n1-standard-4`
-      # * `n1-standard-8`
-      # * `n1-standard-16`
-      # * `n1-standard-32`
-      # * `n1-highmem-2`
-      # * `n1-highmem-4`
-      # * `n1-highmem-8`
-      # * `n1-highmem-16`
-      # * `n1-highmem-32`
-      # * `n1-highcpu-2`
-      # * `n1-highcpu-4`
-      # * `n1-highcpu-8`
-      # * `n1-highcpu-16`
-      # * `n1-highcpu-32`
-      # 
-      # `mls1-c1-m2` is generally available. All other machine types are available
-      # in beta. Learn more about the [differences between machine
-      # types](/ml-engine/docs/machine-types-online-prediction).
-  "description": "A String", # Optional. The description specified for the version when it was created.
-  "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
-      # 
-      # For more information, see the
-      # [runtime version list](/ml-engine/docs/runtime-version-list) and
-      # [how to manage runtime versions](/ml-engine/docs/versioning).
-  "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
-      # model. You should generally use `auto_scaling` with an appropriate
-      # `min_nodes` instead, but this option is available if you want more
-      # predictable billing. Beware that latency and error rates will increase
-      # if the traffic exceeds that capability of the system to serve it based
-      # on the selected number of nodes.
-    "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
-        # starting from the time the model is deployed, so the cost of operating
-        # this model will be proportional to `nodes` * number of hours since
-        # last billing cycle plus the cost for each prediction performed.
-  },
-  "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
-  "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
-      # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
-      # `XGBOOST`. If you do not specify a framework, AI Platform
-      # will analyze files in the deployment_uri to determine a framework. If you
-      # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
-      # of the model to 1.4 or greater.
-      # 
-      # Do **not** specify a framework if you're deploying a [custom
-      # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
-      # 
-      # If you specify a [Compute Engine (N1) machine
-      # type](/ml-engine/docs/machine-types-online-prediction) in the
-      # `machineType` field, you must specify `TENSORFLOW`
-      # for the framework.
-  "createTime": "A String", # Output only. The time the version was created.
-  "name": "A String", # Required. The name specified for the version when it was created.
-      # 
-      # The version name must be unique within the model it is created in.
-  "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+      # The total number of model files can&#x27;t exceed 1000.
+  &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
       # response to increases and decreases in traffic. Care should be
-      # taken to ramp up traffic according to the model's ability to scale
+      # taken to ramp up traffic according to the model&#x27;s ability to scale
       # or you will start seeing increases in latency and 429 response codes.
       # 
       # Note that you cannot use AutoScaling if your version uses
       # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
       # `manual_scaling`.
-    "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+    &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
         # nodes are always up, starting from the time the model is deployed.
         # Therefore, the cost of operating this model will be at least
         # `rate` * `min_nodes` * number of hours since last billing cycle,
@@ -376,32 +310,27 @@
         # &lt;pre&gt;
         # update_body.json:
         # {
-        #   'autoScaling': {
-        #     'minNodes': 5
+        #   &#x27;autoScaling&#x27;: {
+        #     &#x27;minNodes&#x27;: 5
         #   }
         # }
         # &lt;/pre&gt;
         # HTTP request:
-        # &lt;pre style="max-width: 626px;"&gt;
+        # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
         # PATCH
         # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
         # -d @./update_body.json
         # &lt;/pre&gt;
   },
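The documented `update_body.json`/PATCH pair translates directly to the Python client; a sketch under the usual assumptions (default credentials, placeholder resource name).

```python
# Same update as the documented PATCH example, via the generated client.
from googleapiclient import discovery

ml = discovery.build("ml", "v1")
operation = ml.projects().models().versions().patch(
    name="projects/my-project/models/my_model/versions/v1",  # placeholder
    updateMask="autoScaling.minNodes",
    body={"autoScaling": {"minNodes": 5}},
).execute()
print(operation["name"])  # patch returns a long-running Operation
```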
-  "pythonVersion": "A String", # Required. The version of Python used in prediction.
-      # 
-      # The following Python versions are available:
-      # 
-      # * Python '3.7' is available when `runtime_version` is set to '1.15' or
-      #   later.
-      # * Python '3.5' is available when `runtime_version` is set to a version
-      #   from '1.4' to '1.14'.
-      # * Python '2.7' is available when `runtime_version` is set to '1.15' or
-      #   earlier.
-      # 
-      # Read more about the Python versions available for [each runtime
-      # version](/ml-engine/docs/runtime-version-list).
-  "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+  &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your model
+      # versions. Each label is a key-value pair, where both the key and the value
+      # are arbitrary strings that you supply.
+      # For more information, see the documentation on
+      # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
+    &quot;a_key&quot;: &quot;A String&quot;,
+  },
+  &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
+  &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
       # projects.models.versions.patch
       # request. Specifying it in a
       # projects.models.versions.create
@@ -420,19 +349,16 @@
       # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
       # specify this configuration manually. Setting up continuous evaluation
       # automatically enables logging of request-response pairs.
-    "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
-        # For example, if you want to log 10% of requests, enter `0.1`. The sampling
-        # window is the lifetime of the model version. Defaults to 0.
-    "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
-        # "&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;"
+    &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
+        # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
         #
-        # The specified table must already exist, and the "Cloud ML Service Agent"
+        # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
         # for your project must have permission to write to it. The table must have
         # the following [schema](/bigquery/docs/schemas):
         #
         # &lt;table&gt;
-        #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style="display: table-cell"&gt;Type&lt;/th&gt;
-        #     &lt;th style="display: table-cell"&gt;Mode&lt;/th&gt;&lt;/tr&gt;
+        #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
+        #     &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
         #   &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
         #   &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
         #   &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
@@ -440,6 +366,80 @@
         #   &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
         #   &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
         # &lt;/table&gt;
+    &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+        # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+        # window is the lifetime of the model version. Defaults to 0.
+  },
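Since the schema notes that `requestLoggingConfig` may only be set in a patch request, a hedged sketch of enabling it follows; the BigQuery table name, sampling rate, and update-mask value are assumptions.

```python
# Hypothetical patch enabling request-response logging for a version.
from googleapiclient import discovery

ml = discovery.build("ml", "v1")
operation = ml.projects().models().versions().patch(
    name="projects/my-project/models/my_model/versions/v1",  # placeholder
    updateMask="requestLoggingConfig",  # assumed mask value
    body={
        "requestLoggingConfig": {
            "bigqueryTableName": "my-project.ml_logs.requests",  # placeholder
            "samplingPercentage": 0.1,  # log roughly 10% of requests
        },
    },
).execute()
```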
+  &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
+  &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
+      # applies to online prediction service. If this field is not specified, it
+      # defaults to `mls1-c1-m2`.
+      # 
+      # Online prediction supports the following machine types:
+      # 
+      # * `mls1-c1-m2`
+      # * `mls1-c4-m2`
+      # * `n1-standard-2`
+      # * `n1-standard-4`
+      # * `n1-standard-8`
+      # * `n1-standard-16`
+      # * `n1-standard-32`
+      # * `n1-highmem-2`
+      # * `n1-highmem-4`
+      # * `n1-highmem-8`
+      # * `n1-highmem-16`
+      # * `n1-highmem-32`
+      # * `n1-highcpu-2`
+      # * `n1-highcpu-4`
+      # * `n1-highcpu-8`
+      # * `n1-highcpu-16`
+      # * `n1-highcpu-32`
+      # 
+      # `mls1-c1-m2` is generally available. All other machine types are available
+      # in beta. Learn more about the [differences between machine
+      # types](/ml-engine/docs/machine-types-online-prediction).
+  &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
+      # 
+      # For more information, see the
+      # [runtime version list](/ml-engine/docs/runtime-version-list) and
+      # [how to manage runtime versions](/ml-engine/docs/versioning).
+  &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
+  &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
+      # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+      # `XGBOOST`. If you do not specify a framework, AI Platform
+      # will analyze files in the deployment_uri to determine a framework. If you
+      # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+      # of the model to 1.4 or greater.
+      # 
+      # Do **not** specify a framework if you&#x27;re deploying a [custom
+      # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+      # 
+      # If you specify a [Compute Engine (N1) machine
+      # type](/ml-engine/docs/machine-types-online-prediction) in the
+      # `machineType` field, you must specify `TENSORFLOW`
+      # for the framework.
+  &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
+      # prevent simultaneous updates of a model from overwriting each other.
+      # It is strongly suggested that systems make use of the `etag` in the
+      # read-modify-write cycle to perform model updates in order to avoid race
+      # conditions: An `etag` is returned in the response to `GetVersion`, and
+      # systems are expected to put that etag in the request to `UpdateVersion` to
+      # ensure that their change will be applied to the model as intended.
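The etag read-modify-write cycle described above, sketched with the Python client. Resource names are placeholders, and how `etag` interacts with `update_mask` is my assumption; the field docs only say the etag from `GetVersion` should be echoed back on update.

```python
# Read-modify-write: fetch the current etag, send it back with the update
# so a concurrent change fails the patch instead of being clobbered.
from googleapiclient import discovery

ml = discovery.build("ml", "v1")
name = "projects/my-project/models/my_model/versions/v1"  # placeholder
current = ml.projects().models().versions().get(name=name).execute()
operation = ml.projects().models().versions().patch(
    name=name,
    updateMask="description",
    body={
        "description": "promoted to production",
        "etag": current["etag"],  # echoed back per the field docs
    },
).execute()
```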
+  &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
+      # requests that do not specify a version.
+      # 
+      # You can change the default version by calling
+      # projects.models.versions.setDefault.
+  &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+      # Only specify this field if you have specified a Compute Engine (N1) machine
+      # type in the `machineType` field. Learn more about [using GPUs for online
+      # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+      # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+      # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+      # [accelerators for online
+      # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+    &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
+    &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
   },
 }
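Pulling the body fields above together, a minimal `create` call might look like this; bucket, project, and model names are placeholders, and the runtime/Python pairing follows the compatibility list under `pythonVersion`.

```python
# Hedged sketch: deploy a new version of an existing model.
from googleapiclient import discovery

ml = discovery.build("ml", "v1")
operation = ml.projects().models().versions().create(
    parent="projects/my-project/models/my_model",  # placeholder
    body={
        "name": "v1",
        "deploymentUri": "gs://my-bucket/model/",  # placeholder
        "runtimeVersion": "1.15",
        "pythonVersion": "3.7",  # valid with runtime_version 1.15
        "machineType": "mls1-c1-m2",
        "autoScaling": {"minNodes": 1},
    },
).execute()
```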
 
@@ -453,34 +453,7 @@
 
     { # This resource represents a long-running operation that is the result of a
       # network API call.
-    "metadata": { # Service-specific metadata associated with the operation.  It typically
-        # contains progress information and common metadata such as create time.
-        # Some services might not provide such metadata.  Any method that returns a
-        # long-running operation should document the metadata type, if any.
-      "a_key": "", # Properties of the object. Contains field @type with type URL.
-    },
-    "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
-        # different programming environments, including REST APIs and RPC APIs. It is
-        # used by [gRPC](https://github.com/grpc). Each `Status` message contains
-        # three pieces of data: error code, error message, and error details.
-        #
-        # You can find out more about this error model and how to work with it in the
-        # [API Design Guide](https://cloud.google.com/apis/design/errors).
-      "message": "A String", # A developer-facing error message, which should be in English. Any
-          # user-facing error message should be localized and sent in the
-          # google.rpc.Status.details field, or localized by the client.
-      "code": 42, # The status code, which should be an enum value of google.rpc.Code.
-      "details": [ # A list of messages that carry the error details.  There is a common set of
-          # message types for APIs to use.
-        {
-          "a_key": "", # Properties of the object. Contains field @type with type URL.
-        },
-      ],
-    },
-    "done": True or False, # If the value is `false`, it means the operation is still in progress.
-        # If `true`, the operation is completed, and either `error` or `response` is
-        # available.
-    "response": { # The normal response of the operation in case of success.  If the original
+    &quot;response&quot;: { # The normal response of the operation in case of success.  If the original
         # method returns no data on success, such as `Delete`, the response is
         # `google.protobuf.Empty`.  If the original method is standard
         # `Get`/`Create`/`Update`, the response should be the resource.  For other
@@ -488,11 +461,38 @@
         # is the original method name.  For example, if the original method name
         # is `TakeSnapshot()`, the inferred response type is
         # `TakeSnapshotResponse`.
-      "a_key": "", # Properties of the object. Contains field @type with type URL.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
     },
-    "name": "A String", # The server-assigned name, which is only unique within the same service that
+    &quot;name&quot;: &quot;A String&quot;, # The server-assigned name, which is only unique within the same service that
         # originally returns it. If you use the default HTTP mapping, the
         # `name` should be a resource name ending with `operations/{unique_id}`.
+    &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
+        # different programming environments, including REST APIs and RPC APIs. It is
+        # used by [gRPC](https://github.com/grpc). Each `Status` message contains
+        # three pieces of data: error code, error message, and error details.
+        #
+        # You can find out more about this error model and how to work with it in the
+        # [API Design Guide](https://cloud.google.com/apis/design/errors).
+      &quot;details&quot;: [ # A list of messages that carry the error details.  There is a common set of
+          # message types for APIs to use.
+        {
+          &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+        },
+      ],
+      &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
+      &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
+          # user-facing error message should be localized and sent in the
+          # google.rpc.Status.details field, or localized by the client.
+    },
+    &quot;metadata&quot;: { # Service-specific metadata associated with the operation.  It typically
+        # contains progress information and common metadata such as create time.
+        # Some services might not provide such metadata.  Any method that returns a
+        # long-running operation should document the metadata type, if any.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+    },
+    &quot;done&quot;: True or False, # If the value is `false`, it means the operation is still in progress.
+        # If `true`, the operation is completed, and either `error` or `response` is
+        # available.
   }</pre>
 </div>
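Callers typically poll the returned Operation until `done` is true; a sketch using `projects.operations.get`, with a hypothetical operation name.

```python
# Poll a long-running operation, then surface its error or response.
import time

from googleapiclient import discovery

ml = discovery.build("ml", "v1")
op_name = "projects/my-project/operations/create_my_model_v1"  # placeholder
while True:
    op = ml.projects().operations().get(name=op_name).execute()
    if op.get("done"):
        break
    time.sleep(10)  # simple fixed backoff between polls

if "error" in op:
    raise RuntimeError(op["error"].get("message", "operation failed"))
result = op.get("response")  # e.g. the created Version on success
```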
 
@@ -520,34 +520,7 @@
 
     { # This resource represents a long-running operation that is the result of a
       # network API call.
-    "metadata": { # Service-specific metadata associated with the operation.  It typically
-        # contains progress information and common metadata such as create time.
-        # Some services might not provide such metadata.  Any method that returns a
-        # long-running operation should document the metadata type, if any.
-      "a_key": "", # Properties of the object. Contains field @type with type URL.
-    },
-    "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
-        # different programming environments, including REST APIs and RPC APIs. It is
-        # used by [gRPC](https://github.com/grpc). Each `Status` message contains
-        # three pieces of data: error code, error message, and error details.
-        #
-        # You can find out more about this error model and how to work with it in the
-        # [API Design Guide](https://cloud.google.com/apis/design/errors).
-      "message": "A String", # A developer-facing error message, which should be in English. Any
-          # user-facing error message should be localized and sent in the
-          # google.rpc.Status.details field, or localized by the client.
-      "code": 42, # The status code, which should be an enum value of google.rpc.Code.
-      "details": [ # A list of messages that carry the error details.  There is a common set of
-          # message types for APIs to use.
-        {
-          "a_key": "", # Properties of the object. Contains field @type with type URL.
-        },
-      ],
-    },
-    "done": True or False, # If the value is `false`, it means the operation is still in progress.
-        # If `true`, the operation is completed, and either `error` or `response` is
-        # available.
-    "response": { # The normal response of the operation in case of success.  If the original
+    &quot;response&quot;: { # The normal response of the operation in case of success.  If the original
         # method returns no data on success, such as `Delete`, the response is
         # `google.protobuf.Empty`.  If the original method is standard
         # `Get`/`Create`/`Update`, the response should be the resource.  For other
@@ -555,11 +528,38 @@
         # is the original method name.  For example, if the original method name
         # is `TakeSnapshot()`, the inferred response type is
         # `TakeSnapshotResponse`.
-      "a_key": "", # Properties of the object. Contains field @type with type URL.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
     },
-    "name": "A String", # The server-assigned name, which is only unique within the same service that
+    &quot;name&quot;: &quot;A String&quot;, # The server-assigned name, which is only unique within the same service that
         # originally returns it. If you use the default HTTP mapping, the
         # `name` should be a resource name ending with `operations/{unique_id}`.
+    &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
+        # different programming environments, including REST APIs and RPC APIs. It is
+        # used by [gRPC](https://github.com/grpc). Each `Status` message contains
+        # three pieces of data: error code, error message, and error details.
+        #
+        # You can find out more about this error model and how to work with it in the
+        # [API Design Guide](https://cloud.google.com/apis/design/errors).
+      &quot;details&quot;: [ # A list of messages that carry the error details.  There is a common set of
+          # message types for APIs to use.
+        {
+          &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+        },
+      ],
+      &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
+      &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
+          # user-facing error message should be localized and sent in the
+          # google.rpc.Status.details field, or localized by the client.
+    },
+    &quot;metadata&quot;: { # Service-specific metadata associated with the operation.  It typically
+        # contains progress information and common metadata such as create time.
+        # Some services might not provide such metadata.  Any method that returns a
+        # long-running operation should document the metadata type, if any.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+    },
+    &quot;done&quot;: True or False, # If the value is `false`, it means the operation is still in progress.
+        # If `true`, the operation is completed, and either `error` or `response` is
+        # available.
   }</pre>
 </div>
 
@@ -588,25 +588,37 @@
       # prediction requests. A model can have multiple versions. You can get
       # information about all of the versions of a given model by calling
       # projects.models.versions.list.
-    "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
-        # Only specify this field if you have specified a Compute Engine (N1) machine
-        # type in the `machineType` field. Learn more about [using GPUs for online
-        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-        # Note that the AcceleratorConfig can be used in both Jobs and Versions.
-        # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
-        # [accelerators for online
-        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-      "count": "A String", # The number of accelerators to attach to each machine running the job.
-      "type": "A String", # The type of accelerator to use.
+    &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
+    &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+        # model. You should generally use `auto_scaling` with an appropriate
+        # `min_nodes` instead, but this option is available if you want more
+        # predictable billing. Beware that latency and error rates will increase
+        # if the traffic exceeds the capability of the system to serve it based
+        # on the selected number of nodes.
+      &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
+          # starting from the time the model is deployed, so the cost of operating
+          # this model will be proportional to `nodes` * number of hours since
+          # last billing cycle plus the cost for each prediction performed.
     },
-    "labels": { # Optional. One or more labels that you can add, to organize your model
-        # versions. Each label is a key-value pair, where both the key and the value
-        # are arbitrary strings that you supply.
-        # For more information, see the documentation on
-        # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
-      "a_key": "A String",
-    },
-    "predictionClass": "A String", # Optional. The fully qualified name
+    &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
+        #
+        # The version name must be unique within the model it is created in.
+    &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
+    &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
+        #
+        # The following Python versions are available:
+        #
+        # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+        #   later.
+        # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
+        #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
+        # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+        #   earlier.
+        #
+        # Read more about the Python versions available for [each runtime
+        # version](/ml-engine/docs/runtime-version-list).
+    &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
+    &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
         # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
         # the Predictor interface described in this reference field. The module
         # containing this class should be included in a package provided to the
@@ -621,12 +633,12 @@
         #
         # The following code sample provides the Predictor interface:
         #
-        # &lt;pre style="max-width: 626px;"&gt;
+        # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
         # class Predictor(object):
-        # """Interface for constructing custom predictors."""
+        # &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
         #
         # def predict(self, instances, **kwargs):
-        #     """Performs custom prediction.
+        #     &quot;&quot;&quot;Performs custom prediction.
         #
         #     Instances are the decoded values from the request. They have already
         #     been deserialized from JSON.
@@ -639,12 +651,12 @@
         #     Returns:
         #         A list of outputs containing the prediction results. This list must
         #         be JSON serializable.
-        #     """
+        #     &quot;&quot;&quot;
         #     raise NotImplementedError()
         #
         # @classmethod
         # def from_path(cls, model_dir):
-        #     """Creates an instance of Predictor using the given path.
+        #     &quot;&quot;&quot;Creates an instance of Predictor using the given path.
         #
         #     Loading of the predictor should be done in this method.
         #
@@ -655,15 +667,13 @@
         #
         #     Returns:
         #         An instance implementing this Predictor class.
-        #     """
+        #     &quot;&quot;&quot;
         #     raise NotImplementedError()
         # &lt;/pre&gt;
         #
         # Learn more about [the Predictor interface and custom prediction
         # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
-    "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
-    "state": "A String", # Output only. The state of a version.
-    "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+    &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
         # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
         # or [scikit-learn pipelines with custom
         # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
@@ -677,17 +687,45 @@
         #
         # If you specify this field, you must also set
         # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
-      "A String",
+      &quot;A String&quot;,
     ],
-    "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
-        # prevent simultaneous updates of a model from overwriting each other.
-        # It is strongly suggested that systems make use of the `etag` in the
-        # read-modify-write cycle to perform model updates in order to avoid race
-        # conditions: An `etag` is returned in the response to `GetVersion`, and
-        # systems are expected to put that etag in the request to `UpdateVersion` to
-        # ensure that their change will be applied to the model as intended.
-    "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
-    "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
+    &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
+        # Some explanation features require additional metadata to be loaded
+        # as part of the model payload.
+        # There are two feature attribution methods supported for TensorFlow models:
+        # integrated gradients and sampled Shapley.
+        # [Learn more about feature
+        # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
+      &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
+          # of the model&#x27;s fully differentiable structure. Refer to this paper for
+          # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
+          # of the model&#x27;s fully differentiable structure. Refer to this paper for
+          # more details: https://arxiv.org/abs/1703.01365
+        &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+            # A good value to start is 50 and gradually increase until the
+            # sum to diff property is met within the desired error range.
+      },
+      &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
+          # contribute to the label being predicted. A sampling strategy is used to
+          # approximate the value rather than considering all subsets of features.
+          # contribute to the label being predicted. A sampling strategy is used to
+          # approximate the value rather than considering all subsets of features.
+        &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
+            # Shapley values.
+      },
+      &quot;xraiAttribution&quot;: { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
+          # of the model&#x27;s fully differentiable structure. Refer to this paper for
+          # more details: https://arxiv.org/abs/1906.02825
+          # Currently only implemented for models with natural image inputs.
+          # of the model&#x27;s fully differentiable structure. Refer to this paper for
+          # more details: https://arxiv.org/abs/1906.02825
+          # Currently only implemented for models with natural image inputs.
+        &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+            # A good value to start is 50 and gradually increase until the
+            # sum to diff property is met within the desired error range.
+      },
+    },
+    &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
         # create the version. See the
         # [guide to model
         # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
@@ -698,120 +736,16 @@
         # the model service uses the specified location as the source of the model.
         # Once deployed, the model version is hosted by the prediction service, so
         # this location is useful only as a historical record.
-        # The total number of model files can't exceed 1000.
-    "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
-        # Some explanation features require additional metadata to be loaded
-        # as part of the model payload.
-        # There are two feature attribution methods supported for TensorFlow models:
-        # integrated gradients and sampled Shapley.
-        # [Learn more about feature
-        # attributions.](/ml-engine/docs/ai-explanations/overview)
-      "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
-          # of the model's fully differentiable structure. Refer to this paper for
-          # more details: https://arxiv.org/abs/1906.02825
-          # Currently only implemented for models with natural image inputs.
-          # of the model's fully differentiable structure. Refer to this paper for
-          # more details: https://arxiv.org/abs/1906.02825
-          # Currently only implemented for models with natural image inputs.
-        "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-            # A good value to start is 50 and gradually increase until the
-            # sum to diff property is met within the desired error range.
-      },
-      "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
-          # contribute to the label being predicted. A sampling strategy is used to
-          # approximate the value rather than considering all subsets of features.
-          # contribute to the label being predicted. A sampling strategy is used to
-          # approximate the value rather than considering all subsets of features.
-        "numPaths": 42, # The number of feature permutations to consider when approximating the
-            # Shapley values.
-      },
-      "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
-          # of the model's fully differentiable structure. Refer to this paper for
-          # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
-          # of the model's fully differentiable structure. Refer to this paper for
-          # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
-        "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-            # A good value to start is 50 and gradually increase until the
-            # sum to diff property is met within the desired error range.
-      },
-    },
-    "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
-        # requests that do not specify a version.
-        #
-        # You can change the default version by calling
-        # projects.methods.versions.setDefault.
-    "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
-        # applies to online prediction service. If this field is not specified, it
-        # defaults to `mls1-c1-m2`.
-        #
-        # Online prediction supports the following machine types:
-        #
-        # * `mls1-c1-m2`
-        # * `mls1-c4-m2`
-        # * `n1-standard-2`
-        # * `n1-standard-4`
-        # * `n1-standard-8`
-        # * `n1-standard-16`
-        # * `n1-standard-32`
-        # * `n1-highmem-2`
-        # * `n1-highmem-4`
-        # * `n1-highmem-8`
-        # * `n1-highmem-16`
-        # * `n1-highmem-32`
-        # * `n1-highcpu-2`
-        # * `n1-highcpu-4`
-        # * `n1-highcpu-8`
-        # * `n1-highcpu-16`
-        # * `n1-highcpu-32`
-        #
-        # `mls1-c1-m2` is generally available. All other machine types are available
-        # in beta. Learn more about the [differences between machine
-        # types](/ml-engine/docs/machine-types-online-prediction).
-    "description": "A String", # Optional. The description specified for the version when it was created.
-    "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
-        #
-        # For more information, see the
-        # [runtime version list](/ml-engine/docs/runtime-version-list) and
-        # [how to manage runtime versions](/ml-engine/docs/versioning).
-    "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
-        # model. You should generally use `auto_scaling` with an appropriate
-        # `min_nodes` instead, but this option is available if you want more
-        # predictable billing. Beware that latency and error rates will increase
-        # if the traffic exceeds that capability of the system to serve it based
-        # on the selected number of nodes.
-      "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
-          # starting from the time the model is deployed, so the cost of operating
-          # this model will be proportional to `nodes` * number of hours since
-          # last billing cycle plus the cost for each prediction performed.
-    },
-    "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
-    "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
-        # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
-        # `XGBOOST`. If you do not specify a framework, AI Platform
-        # will analyze files in the deployment_uri to determine a framework. If you
-        # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
-        # of the model to 1.4 or greater.
-        #
-        # Do **not** specify a framework if you're deploying a [custom
-        # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
-        #
-        # If you specify a [Compute Engine (N1) machine
-        # type](/ml-engine/docs/machine-types-online-prediction) in the
-        # `machineType` field, you must specify `TENSORFLOW`
-        # for the framework.
-    "createTime": "A String", # Output only. The time the version was created.
-    "name": "A String", # Required. The name specified for the version when it was created.
-        #
-        # The version name must be unique within the model it is created in.
-    "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+        # The total number of model files can&#x27;t exceed 1000.
+    &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
         # response to increases and decreases in traffic. Care should be
-        # taken to ramp up traffic according to the model's ability to scale
+        # taken to ramp up traffic according to the model&#x27;s ability to scale
         # or you will start seeing increases in latency and 429 response codes.
         #
         # Note that you cannot use AutoScaling if your version uses
         # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
         # `manual_scaling`.
-      "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+      &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
           # nodes are always up, starting from the time the model is deployed.
           # Therefore, the cost of operating this model will be at least
           # `rate` * `min_nodes` * number of hours since last billing cycle,
@@ -846,32 +780,27 @@
           # &lt;pre&gt;
           # update_body.json:
           # {
-          #   'autoScaling': {
-          #     'minNodes': 5
+          #   &#x27;autoScaling&#x27;: {
+          #     &#x27;minNodes&#x27;: 5
           #   }
           # }
           # &lt;/pre&gt;
           # HTTP request:
-          # &lt;pre style="max-width: 626px;"&gt;
+          # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
           # PATCH
           # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
           # -d @./update_body.json
           # &lt;/pre&gt;
     },
-    "pythonVersion": "A String", # Required. The version of Python used in prediction.
-        #
-        # The following Python versions are available:
-        #
-        # * Python '3.7' is available when `runtime_version` is set to '1.15' or
-        #   later.
-        # * Python '3.5' is available when `runtime_version` is set to a version
-        #   from '1.4' to '1.14'.
-        # * Python '2.7' is available when `runtime_version` is set to '1.15' or
-        #   earlier.
-        #
-        # Read more about the Python versions available for [each runtime
-        # version](/ml-engine/docs/runtime-version-list).
-    "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+    &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your model
+        # versions. Each label is a key-value pair, where both the key and the value
+        # are arbitrary strings that you supply.
+        # For more information, see the documentation on
+        # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
+      &quot;a_key&quot;: &quot;A String&quot;,
+    },
+    &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
+    &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
         # projects.models.versions.patch
         # request. Specifying it in a
         # projects.models.versions.create
@@ -890,19 +819,16 @@
         # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
         # specify this configuration manually. Setting up continuous evaluation
         # automatically enables logging of request-response pairs.
-      "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
-          # For example, if you want to log 10% of requests, enter `0.1`. The sampling
-          # window is the lifetime of the model version. Defaults to 0.
-      "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
-          # "&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;"
+      &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
+          # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
           #
-          # The specified table must already exist, and the "Cloud ML Service Agent"
+          # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
           # for your project must have permission to write to it. The table must have
           # the following [schema](/bigquery/docs/schemas):
           #
           # &lt;table&gt;
-          #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style="display: table-cell"&gt;Type&lt;/th&gt;
-          #     &lt;th style="display: table-cell"&gt;Mode&lt;/th&gt;&lt;/tr&gt;
+          #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
+          #     &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
           #   &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
           #   &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
           #   &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
@@ -910,12 +836,86 @@
           #   &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
           #   &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
           # &lt;/table&gt;
+      &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+          # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+          # window is the lifetime of the model version. Defaults to 0.
+    },
+    &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
+    &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
+        # applies to online prediction service. If this field is not specified, it
+        # defaults to `mls1-c1-m2`.
+        #
+        # Online prediction supports the following machine types:
+        #
+        # * `mls1-c1-m2`
+        # * `mls1-c4-m2`
+        # * `n1-standard-2`
+        # * `n1-standard-4`
+        # * `n1-standard-8`
+        # * `n1-standard-16`
+        # * `n1-standard-32`
+        # * `n1-highmem-2`
+        # * `n1-highmem-4`
+        # * `n1-highmem-8`
+        # * `n1-highmem-16`
+        # * `n1-highmem-32`
+        # * `n1-highcpu-2`
+        # * `n1-highcpu-4`
+        # * `n1-highcpu-8`
+        # * `n1-highcpu-16`
+        # * `n1-highcpu-32`
+        #
+        # `mls1-c1-m2` is generally available. All other machine types are available
+        # in beta. Learn more about the [differences between machine
+        # types](/ml-engine/docs/machine-types-online-prediction).
+    &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
+        #
+        # For more information, see the
+        # [runtime version list](/ml-engine/docs/runtime-version-list) and
+        # [how to manage runtime versions](/ml-engine/docs/versioning).
+    &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
+    &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
+        # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+        # `XGBOOST`. If you do not specify a framework, AI Platform
+        # will analyze files in the deployment_uri to determine a framework. If you
+        # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+        # of the model to 1.4 or greater.
+        #
+        # Do **not** specify a framework if you&#x27;re deploying a [custom
+        # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+        #
+        # If you specify a [Compute Engine (N1) machine
+        # type](/ml-engine/docs/machine-types-online-prediction) in the
+        # `machineType` field, you must specify `TENSORFLOW`
+        # for the framework.
+    &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
+        # prevent simultaneous updates of a model from overwriting each other.
+        # It is strongly suggested that systems make use of the `etag` in the
+        # read-modify-write cycle to perform model updates in order to avoid race
+        # conditions: An `etag` is returned in the response to `GetVersion`, and
+        # systems are expected to put that etag in the request to `UpdateVersion` to
+        # ensure that their change will be applied to the model as intended.
+    &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
+        # requests that do not specify a version.
+        #
+        # You can change the default version by calling
+        # projects.models.versions.setDefault.
+    &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+        # Only specify this field if you have specified a Compute Engine (N1) machine
+        # type in the `machineType` field. Learn more about [using GPUs for online
+        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+        # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+        # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+        # [accelerators for online
+        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+      &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
+      &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
     },
   }</pre>
 </div>
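
As a quick sanity check of the regenerated surface, here is a minimal sketch that reads a version and then applies the `autoScaling.minNodes` update shown in the HTTP PATCH example above, assuming a built `ml` v1 client; the project, model, and version names below are hypothetical placeholders.

```python
from googleapiclient import discovery

# Build the AI Platform (ml, v1) client; credentials come from the environment.
service = discovery.build('ml', 'v1')
versions = service.projects().models().versions()

# Hypothetical resource name -- substitute your own project, model, and version.
name = 'projects/my-project/models/my-model/versions/v1'

# get() returns the Version resource documented above.
version = versions.get(name=name).execute()

# Equivalent of the documented HTTP PATCH: update only autoScaling.minNodes.
# Echoing the etag back, as the field description suggests, helps avoid
# clobbering a concurrent update.
body = {'autoScaling': {'minNodes': 5}, 'etag': version.get('etag')}
operation = versions.patch(
    name=name, body=body, updateMask='autoScaling.minNodes'
).execute()
```

Note that `patch` returns a long-running operation rather than the updated Version, so poll the returned operation if you need the final state.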
 
 <div class="method">
-    <code class="details" id="list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</code>
+    <code class="details" id="list">list(parent, pageToken=None, pageSize=None, filter=None, x__xgafv=None)</code>
   <pre>Gets basic information about all the versions of a model.
 
 If you expect that a model has many versions, or if you need to handle
@@ -931,49 +931,61 @@
 
 You get the token from the `next_page_token` field of the response from
 the previous call.
-  x__xgafv: string, V1 error format.
-    Allowed values
-      1 - v1 error format
-      2 - v2 error format
-  pageSize: integer, Optional. The number of versions to retrieve per "page" of results. If
+  pageSize: integer, Optional. The number of versions to retrieve per &quot;page&quot; of results. If
 there are more remaining results than this number, the response message
 will contain a valid value in the `next_page_token` field.
 
 The default value is 20, and the maximum page size is 100.
   filter: string, Optional. Specifies the subset of versions to retrieve.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
 
 Returns:
   An object of the form:
 
     { # Response message for the ListVersions method.
-    "nextPageToken": "A String", # Optional. Pass this token as the `page_token` field of the request for a
+    &quot;nextPageToken&quot;: &quot;A String&quot;, # Optional. Pass this token as the `page_token` field of the request for a
         # subsequent call.
-    "versions": [ # The list of versions.
+    &quot;versions&quot;: [ # The list of versions.
       { # Represents a version of the model.
           #
           # Each version is a trained model deployed in the cloud, ready to handle
           # prediction requests. A model can have multiple versions. You can get
           # information about all of the versions of a given model by calling
           # projects.models.versions.list.
-        "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
-            # Only specify this field if you have specified a Compute Engine (N1) machine
-            # type in the `machineType` field. Learn more about [using GPUs for online
-            # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-            # Note that the AcceleratorConfig can be used in both Jobs and Versions.
-            # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
-            # [accelerators for online
-            # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-          "count": "A String", # The number of accelerators to attach to each machine running the job.
-          "type": "A String", # The type of accelerator to use.
+        &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
+        &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+            # model. You should generally use `auto_scaling` with an appropriate
+            # `min_nodes` instead, but this option is available if you want more
+            # predictable billing. Beware that latency and error rates will increase
+            # if the traffic exceeds the capability of the system to serve it based
+            # on the selected number of nodes.
+          &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
+              # starting from the time the model is deployed, so the cost of operating
+              # this model will be proportional to `nodes` * number of hours since
+              # last billing cycle plus the cost for each prediction performed.
         },
-        "labels": { # Optional. One or more labels that you can add, to organize your model
-            # versions. Each label is a key-value pair, where both the key and the value
-            # are arbitrary strings that you supply.
-            # For more information, see the documentation on
-            # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
-          "a_key": "A String",
-        },
-        "predictionClass": "A String", # Optional. The fully qualified name
+        &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
+            #
+            # The version name must be unique within the model it is created in.
+        &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
+        &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
+            #
+            # The following Python versions are available:
+            #
+            # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+            #   later.
+            # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
+            #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
+            # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+            #   earlier.
+            #
+            # Read more about the Python versions available for [each runtime
+            # version](/ml-engine/docs/runtime-version-list).
+        &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
+        &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
             # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
             # the Predictor interface described in this reference field. The module
             # containing this class should be included in a package provided to the
@@ -988,12 +1000,12 @@
             #
             # The following code sample provides the Predictor interface:
             #
-            # &lt;pre style="max-width: 626px;"&gt;
+            # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
             # class Predictor(object):
-            # """Interface for constructing custom predictors."""
+            # &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
             #
             # def predict(self, instances, **kwargs):
-            #     """Performs custom prediction.
+            #     &quot;&quot;&quot;Performs custom prediction.
             #
             #     Instances are the decoded values from the request. They have already
             #     been deserialized from JSON.
@@ -1006,12 +1018,12 @@
             #     Returns:
             #         A list of outputs containing the prediction results. This list must
             #         be JSON serializable.
-            #     """
+            #     &quot;&quot;&quot;
             #     raise NotImplementedError()
             #
             # @classmethod
             # def from_path(cls, model_dir):
-            #     """Creates an instance of Predictor using the given path.
+            #     &quot;&quot;&quot;Creates an instance of Predictor using the given path.
             #
             #     Loading of the predictor should be done in this method.
             #
@@ -1022,15 +1034,13 @@
             #
             #     Returns:
             #         An instance implementing this Predictor class.
-            #     """
+            #     &quot;&quot;&quot;
             #     raise NotImplementedError()
             # &lt;/pre&gt;
             #
             # Learn more about [the Predictor interface and custom prediction
             # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
-        "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
-        "state": "A String", # Output only. The state of a version.
-        "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+        &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
             # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
             # or [scikit-learn pipelines with custom
             # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
@@ -1044,17 +1054,45 @@
             #
             # If you specify this field, you must also set
             # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
-          "A String",
+          &quot;A String&quot;,
         ],
-        "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
-            # prevent simultaneous updates of a model from overwriting each other.
-            # It is strongly suggested that systems make use of the `etag` in the
-            # read-modify-write cycle to perform model updates in order to avoid race
-            # conditions: An `etag` is returned in the response to `GetVersion`, and
-            # systems are expected to put that etag in the request to `UpdateVersion` to
-            # ensure that their change will be applied to the model as intended.
-        "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
-        "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
+        &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
+            # Some explanation features require additional metadata to be loaded
+            # as part of the model payload.
+            # There are two feature attribution methods supported for TensorFlow models:
+            # integrated gradients and sampled Shapley.
+            # [Learn more about feature
+            # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
+          &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value taking advantage
+              # of the model&#x27;s fully differentiable structure. Refer to this paper for
+              # more details: https://arxiv.org/abs/1703.01365
+            &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+                # A good value to start is 50 and gradually increase until the
+                # sum to diff property is met within the desired error range.
+          },
+          &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
+              # contribute to the label being predicted. A sampling strategy is used to
+              # approximate the value rather than considering all subsets of features.
+            &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
+                # Shapley values.
+          },
+          &quot;xraiAttribution&quot;: { # Attributes credit by computing the XRAI taking advantage
+              # of the model&#x27;s fully differentiable structure. Refer to this paper for
+              # more details: https://arxiv.org/abs/1906.02825
+              # Currently only implemented for models with natural image inputs.
+            &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+                # A good value to start is 50 and gradually increase until the
+                # sum to diff property is met within the desired error range.
+          },
+        },
+        &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
             # create the version. See the
             # [guide to model
             # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
@@ -1065,120 +1103,16 @@
             # the model service uses the specified location as the source of the model.
             # Once deployed, the model version is hosted by the prediction service, so
             # this location is useful only as a historical record.
-            # The total number of model files can't exceed 1000.
-        "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
-            # Some explanation features require additional metadata to be loaded
-            # as part of the model payload.
-            # There are two feature attribution methods supported for TensorFlow models:
-            # integrated gradients and sampled Shapley.
-            # [Learn more about feature
-            # attributions.](/ml-engine/docs/ai-explanations/overview)
-          "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
-              # of the model's fully differentiable structure. Refer to this paper for
-              # more details: https://arxiv.org/abs/1906.02825
-              # Currently only implemented for models with natural image inputs.
-              # of the model's fully differentiable structure. Refer to this paper for
-              # more details: https://arxiv.org/abs/1906.02825
-              # Currently only implemented for models with natural image inputs.
-            "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-                # A good value to start is 50 and gradually increase until the
-                # sum to diff property is met within the desired error range.
-          },
-          "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
-              # contribute to the label being predicted. A sampling strategy is used to
-              # approximate the value rather than considering all subsets of features.
-              # contribute to the label being predicted. A sampling strategy is used to
-              # approximate the value rather than considering all subsets of features.
-            "numPaths": 42, # The number of feature permutations to consider when approximating the
-                # Shapley values.
-          },
-          "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
-              # of the model's fully differentiable structure. Refer to this paper for
-              # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
-              # of the model's fully differentiable structure. Refer to this paper for
-              # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
-            "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-                # A good value to start is 50 and gradually increase until the
-                # sum to diff property is met within the desired error range.
-          },
-        },
-        "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
-            # requests that do not specify a version.
-            #
-            # You can change the default version by calling
-            # projects.methods.versions.setDefault.
-        "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
-            # applies to online prediction service. If this field is not specified, it
-            # defaults to `mls1-c1-m2`.
-            #
-            # Online prediction supports the following machine types:
-            #
-            # * `mls1-c1-m2`
-            # * `mls1-c4-m2`
-            # * `n1-standard-2`
-            # * `n1-standard-4`
-            # * `n1-standard-8`
-            # * `n1-standard-16`
-            # * `n1-standard-32`
-            # * `n1-highmem-2`
-            # * `n1-highmem-4`
-            # * `n1-highmem-8`
-            # * `n1-highmem-16`
-            # * `n1-highmem-32`
-            # * `n1-highcpu-2`
-            # * `n1-highcpu-4`
-            # * `n1-highcpu-8`
-            # * `n1-highcpu-16`
-            # * `n1-highcpu-32`
-            #
-            # `mls1-c1-m2` is generally available. All other machine types are available
-            # in beta. Learn more about the [differences between machine
-            # types](/ml-engine/docs/machine-types-online-prediction).
-        "description": "A String", # Optional. The description specified for the version when it was created.
-        "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
-            #
-            # For more information, see the
-            # [runtime version list](/ml-engine/docs/runtime-version-list) and
-            # [how to manage runtime versions](/ml-engine/docs/versioning).
-        "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
-            # model. You should generally use `auto_scaling` with an appropriate
-            # `min_nodes` instead, but this option is available if you want more
-            # predictable billing. Beware that latency and error rates will increase
-            # if the traffic exceeds that capability of the system to serve it based
-            # on the selected number of nodes.
-          "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
-              # starting from the time the model is deployed, so the cost of operating
-              # this model will be proportional to `nodes` * number of hours since
-              # last billing cycle plus the cost for each prediction performed.
-        },
-        "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
-        "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
-            # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
-            # `XGBOOST`. If you do not specify a framework, AI Platform
-            # will analyze files in the deployment_uri to determine a framework. If you
-            # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
-            # of the model to 1.4 or greater.
-            #
-            # Do **not** specify a framework if you're deploying a [custom
-            # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
-            #
-            # If you specify a [Compute Engine (N1) machine
-            # type](/ml-engine/docs/machine-types-online-prediction) in the
-            # `machineType` field, you must specify `TENSORFLOW`
-            # for the framework.
-        "createTime": "A String", # Output only. The time the version was created.
-        "name": "A String", # Required. The name specified for the version when it was created.
-            #
-            # The version name must be unique within the model it is created in.
-        "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+            # The total number of model files can&#x27;t exceed 1000.
+        &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
             # response to increases and decreases in traffic. Care should be
-            # taken to ramp up traffic according to the model's ability to scale
+            # taken to ramp up traffic according to the model&#x27;s ability to scale
             # or you will start seeing increases in latency and 429 response codes.
             #
             # Note that you cannot use AutoScaling if your version uses
             # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
             # `manual_scaling`.
-          "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+          &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
               # nodes are always up, starting from the time the model is deployed.
               # Therefore, the cost of operating this model will be at least
               # `rate` * `min_nodes` * number of hours since last billing cycle,
@@ -1213,32 +1147,27 @@
               # &lt;pre&gt;
               # update_body.json:
               # {
-              #   'autoScaling': {
-              #     'minNodes': 5
+              #   &#x27;autoScaling&#x27;: {
+              #     &#x27;minNodes&#x27;: 5
               #   }
               # }
               # &lt;/pre&gt;
               # HTTP request:
-              # &lt;pre style="max-width: 626px;"&gt;
+              # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
               # PATCH
               # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
               # -d @./update_body.json
               # &lt;/pre&gt;
         },
-        "pythonVersion": "A String", # Required. The version of Python used in prediction.
-            #
-            # The following Python versions are available:
-            #
-            # * Python '3.7' is available when `runtime_version` is set to '1.15' or
-            #   later.
-            # * Python '3.5' is available when `runtime_version` is set to a version
-            #   from '1.4' to '1.14'.
-            # * Python '2.7' is available when `runtime_version` is set to '1.15' or
-            #   earlier.
-            #
-            # Read more about the Python versions available for [each runtime
-            # version](/ml-engine/docs/runtime-version-list).
-        "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+        &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your model
+            # versions. Each label is a key-value pair, where both the key and the value
+            # are arbitrary strings that you supply.
+            # For more information, see the documentation on
+            # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
+          &quot;a_key&quot;: &quot;A String&quot;,
+        },
+        &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
+        &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
             # projects.models.versions.patch
             # request. Specifying it in a
             # projects.models.versions.create
@@ -1257,19 +1186,16 @@
             # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
             # specify this configuration manually. Setting up continuous evaluation
             # automatically enables logging of request-response pairs.
-          "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
-              # For example, if you want to log 10% of requests, enter `0.1`. The sampling
-              # window is the lifetime of the model version. Defaults to 0.
-          "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
-              # "&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;"
+          &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
+              # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
               #
-              # The specified table must already exist, and the "Cloud ML Service Agent"
+              # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
               # for your project must have permission to write to it. The table must have
               # the following [schema](/bigquery/docs/schemas):
               #
               # &lt;table&gt;
-              #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style="display: table-cell"&gt;Type&lt;/th&gt;
-              #     &lt;th style="display: table-cell"&gt;Mode&lt;/th&gt;&lt;/tr&gt;
+              #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
+              #     &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
               #   &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
               #   &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
               #   &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
@@ -1277,6 +1203,80 @@
               #   &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
               #   &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
               # &lt;/table&gt;
+          &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+              # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+              # window is the lifetime of the model version. Defaults to 0.
+        },
+        &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
+        &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
+            # applies to online prediction service. If this field is not specified, it
+            # defaults to `mls1-c1-m2`.
+            #
+            # Online prediction supports the following machine types:
+            #
+            # * `mls1-c1-m2`
+            # * `mls1-c4-m2`
+            # * `n1-standard-2`
+            # * `n1-standard-4`
+            # * `n1-standard-8`
+            # * `n1-standard-16`
+            # * `n1-standard-32`
+            # * `n1-highmem-2`
+            # * `n1-highmem-4`
+            # * `n1-highmem-8`
+            # * `n1-highmem-16`
+            # * `n1-highmem-32`
+            # * `n1-highcpu-2`
+            # * `n1-highcpu-4`
+            # * `n1-highcpu-8`
+            # * `n1-highcpu-16`
+            # * `n1-highcpu-32`
+            #
+            # `mls1-c1-m2` is generally available. All other machine types are available
+            # in beta. Learn more about the [differences between machine
+            # types](/ml-engine/docs/machine-types-online-prediction).
+        &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
+            #
+            # For more information, see the
+            # [runtime version list](/ml-engine/docs/runtime-version-list) and
+            # [how to manage runtime versions](/ml-engine/docs/versioning).
+        &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
+        &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
+            # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+            # `XGBOOST`. If you do not specify a framework, AI Platform
+            # will analyze files in the deployment_uri to determine a framework. If you
+            # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+            # of the model to 1.4 or greater.
+            #
+            # Do **not** specify a framework if you&#x27;re deploying a [custom
+            # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+            #
+            # If you specify a [Compute Engine (N1) machine
+            # type](/ml-engine/docs/machine-types-online-prediction) in the
+            # `machineType` field, you must specify `TENSORFLOW`
+            # for the framework.
+        &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
+            # prevent simultaneous updates of a model from overwriting each other.
+            # It is strongly suggested that systems make use of the `etag` in the
+            # read-modify-write cycle to perform model updates in order to avoid race
+            # conditions: An `etag` is returned in the response to `GetVersion`, and
+            # systems are expected to put that etag in the request to `UpdateVersion` to
+            # ensure that their change will be applied to the model as intended.
+        &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
+            # requests that do not specify a version.
+            #
+            # You can change the default version by calling
+            # projects.models.versions.setDefault.
+        &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+            # Only specify this field if you have specified a Compute Engine (N1) machine
+            # type in the `machineType` field. Learn more about [using GPUs for online
+            # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+            # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+            # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+            # [accelerators for online
+            # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+          &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
+          &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
         },
       },
     ],
@@ -1292,7 +1292,7 @@
   previous_response: The response from the request for the previous page. (required)
 
 Returns:
-  A request object that you can call 'execute()' on to request the next
+  A request object that you can call &#x27;execute()&#x27; on to request the next
   page. Returns None if there are no more items in the collection.
     </pre>
 </div>
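
Because `list` is paginated and `list_next` returns `None` once the collection is exhausted (per the docstring above), walking every version of a model looks roughly like the sketch below; the parent name is a hypothetical placeholder.

```python
from googleapiclient import discovery

service = discovery.build('ml', 'v1')
versions = service.projects().models().versions()

# Hypothetical parent -- substitute your own project and model.
request = versions.list(parent='projects/my-project/models/my-model', pageSize=20)
while request is not None:
    response = request.execute()
    for version in response.get('versions', []):
        print(version['name'], version.get('state'))
    # list_next returns None when there are no more pages.
    request = versions.list_next(request, response)
```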
@@ -1315,25 +1315,37 @@
     # prediction requests. A model can have multiple versions. You can get
     # information about all of the versions of a given model by calling
     # projects.models.versions.list.
-  "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
-      # Only specify this field if you have specified a Compute Engine (N1) machine
-      # type in the `machineType` field. Learn more about [using GPUs for online
-      # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-      # Note that the AcceleratorConfig can be used in both Jobs and Versions.
-      # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
-      # [accelerators for online
-      # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-    "count": "A String", # The number of accelerators to attach to each machine running the job.
-    "type": "A String", # The type of accelerator to use.
+  &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
+  &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+      # model. You should generally use `auto_scaling` with an appropriate
+      # `min_nodes` instead, but this option is available if you want more
+      # predictable billing. Beware that latency and error rates will increase
+      # if the traffic exceeds the capability of the system to serve it based
+      # on the selected number of nodes.
+    &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
+        # starting from the time the model is deployed, so the cost of operating
+        # this model will be proportional to `nodes` * number of hours since
+        # last billing cycle plus the cost for each prediction performed.
   },
-  "labels": { # Optional. One or more labels that you can add, to organize your model
-      # versions. Each label is a key-value pair, where both the key and the value
-      # are arbitrary strings that you supply.
-      # For more information, see the documentation on
-      # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
-    "a_key": "A String",
-  },
-  "predictionClass": "A String", # Optional. The fully qualified name
+  &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
+      # 
+      # The version name must be unique within the model it is created in.
+  &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
+  &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
+      # 
+      # The following Python versions are available:
+      # 
+      # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+      #   later.
+      # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
+      #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
+      # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+      #   earlier.
+      # 
+      # Read more about the Python versions available for [each runtime
+      # version](/ml-engine/docs/runtime-version-list).
+  &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
+  &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
       # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
       # the Predictor interface described in this reference field. The module
       # containing this class should be included in a package provided to the
@@ -1348,12 +1360,12 @@
       # 
       # The following code sample provides the Predictor interface:
       # 
-      # &lt;pre style="max-width: 626px;"&gt;
+      # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
       # class Predictor(object):
-      # """Interface for constructing custom predictors."""
+      # &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
       # 
       # def predict(self, instances, **kwargs):
-      #     """Performs custom prediction.
+      #     &quot;&quot;&quot;Performs custom prediction.
       # 
       #     Instances are the decoded values from the request. They have already
       #     been deserialized from JSON.
@@ -1366,12 +1378,12 @@
       #     Returns:
       #         A list of outputs containing the prediction results. This list must
       #         be JSON serializable.
-      #     """
+      #     &quot;&quot;&quot;
       #     raise NotImplementedError()
       # 
       # @classmethod
       # def from_path(cls, model_dir):
-      #     """Creates an instance of Predictor using the given path.
+      #     &quot;&quot;&quot;Creates an instance of Predictor using the given path.
       # 
       #     Loading of the predictor should be done in this method.
       # 
@@ -1382,15 +1394,13 @@
       # 
       #     Returns:
       #         An instance implementing this Predictor class.
-      #     """
+      #     &quot;&quot;&quot;
       #     raise NotImplementedError()
       # &lt;/pre&gt;
       # 
       # Learn more about [the Predictor interface and custom prediction
       # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
-  "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
-  "state": "A String", # Output only. The state of a version.
-  "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+  &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
       # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
       # or [scikit-learn pipelines with custom
       # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
@@ -1404,17 +1414,45 @@
       # 
       # If you specify this field, you must also set
       # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
-    "A String",
+    &quot;A String&quot;,
   ],
-  "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
-      # prevent simultaneous updates of a model from overwriting each other.
-      # It is strongly suggested that systems make use of the `etag` in the
-      # read-modify-write cycle to perform model updates in order to avoid race
-      # conditions: An `etag` is returned in the response to `GetVersion`, and
-      # systems are expected to put that etag in the request to `UpdateVersion` to
-      # ensure that their change will be applied to the model as intended.
-  "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
-  "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
+  &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
+      # Some explanation features require additional metadata to be loaded
+      # as part of the model payload.
+      # There are two feature attribution methods supported for TensorFlow models:
+      # integrated gradients and sampled Shapley.
+      # [Learn more about feature
+      # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
+    &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value taking advantage
+        # of the model&#x27;s fully differentiable structure. Refer to this paper for
+        # more details: https://arxiv.org/abs/1703.01365
+      &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+          # A good value to start is 50 and gradually increase until the
+          # sum to diff property is met within the desired error range.
+    },
+    &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
+        # contribute to the label being predicted. A sampling strategy is used to
+        # approximate the value rather than considering all subsets of features.
+      &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
+          # Shapley values.
+    },
+    &quot;xraiAttribution&quot;: { # Attributes credit by computing the XRAI taking advantage
+        # of the model&#x27;s fully differentiable structure. Refer to this paper for
+        # more details: https://arxiv.org/abs/1906.02825
+        # Currently only implemented for models with natural image inputs.
+      &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+          # A good value to start is 50 and gradually increase until the
+          # sum to diff property is met within the desired error range.
+    },
+  },
+  &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
       # create the version. See the
       # [guide to model
       # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
@@ -1425,120 +1463,16 @@
       # the model service uses the specified location as the source of the model.
       # Once deployed, the model version is hosted by the prediction service, so
       # this location is useful only as a historical record.
-      # The total number of model files can't exceed 1000.
-  "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
-      # Some explanation features require additional metadata to be loaded
-      # as part of the model payload.
-      # There are two feature attribution methods supported for TensorFlow models:
-      # integrated gradients and sampled Shapley.
-      # [Learn more about feature
-      # attributions.](/ml-engine/docs/ai-explanations/overview)
-    "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
-        # of the model's fully differentiable structure. Refer to this paper for
-        # more details: https://arxiv.org/abs/1906.02825
-        # Currently only implemented for models with natural image inputs.
-        # of the model's fully differentiable structure. Refer to this paper for
-        # more details: https://arxiv.org/abs/1906.02825
-        # Currently only implemented for models with natural image inputs.
-      "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-          # A good value to start is 50 and gradually increase until the
-          # sum to diff property is met within the desired error range.
-    },
-    "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
-        # contribute to the label being predicted. A sampling strategy is used to
-        # approximate the value rather than considering all subsets of features.
-        # contribute to the label being predicted. A sampling strategy is used to
-        # approximate the value rather than considering all subsets of features.
-      "numPaths": 42, # The number of feature permutations to consider when approximating the
-          # Shapley values.
-    },
-    "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
-        # of the model's fully differentiable structure. Refer to this paper for
-        # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
-        # of the model's fully differentiable structure. Refer to this paper for
-        # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
-      "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-          # A good value to start is 50 and gradually increase until the
-          # sum to diff property is met within the desired error range.
-    },
-  },
-  "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
-      # requests that do not specify a version.
-      # 
-      # You can change the default version by calling
-      # projects.methods.versions.setDefault.
-  "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
-      # applies to online prediction service. If this field is not specified, it
-      # defaults to `mls1-c1-m2`.
-      # 
-      # Online prediction supports the following machine types:
-      # 
-      # * `mls1-c1-m2`
-      # * `mls1-c4-m2`
-      # * `n1-standard-2`
-      # * `n1-standard-4`
-      # * `n1-standard-8`
-      # * `n1-standard-16`
-      # * `n1-standard-32`
-      # * `n1-highmem-2`
-      # * `n1-highmem-4`
-      # * `n1-highmem-8`
-      # * `n1-highmem-16`
-      # * `n1-highmem-32`
-      # * `n1-highcpu-2`
-      # * `n1-highcpu-4`
-      # * `n1-highcpu-8`
-      # * `n1-highcpu-16`
-      # * `n1-highcpu-32`
-      # 
-      # `mls1-c1-m2` is generally available. All other machine types are available
-      # in beta. Learn more about the [differences between machine
-      # types](/ml-engine/docs/machine-types-online-prediction).
-  "description": "A String", # Optional. The description specified for the version when it was created.
-  "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
-      # 
-      # For more information, see the
-      # [runtime version list](/ml-engine/docs/runtime-version-list) and
-      # [how to manage runtime versions](/ml-engine/docs/versioning).
-  "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
-      # model. You should generally use `auto_scaling` with an appropriate
-      # `min_nodes` instead, but this option is available if you want more
-      # predictable billing. Beware that latency and error rates will increase
-      # if the traffic exceeds that capability of the system to serve it based
-      # on the selected number of nodes.
-    "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
-        # starting from the time the model is deployed, so the cost of operating
-        # this model will be proportional to `nodes` * number of hours since
-        # last billing cycle plus the cost for each prediction performed.
-  },
-  "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
-  "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
-      # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
-      # `XGBOOST`. If you do not specify a framework, AI Platform
-      # will analyze files in the deployment_uri to determine a framework. If you
-      # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
-      # of the model to 1.4 or greater.
-      # 
-      # Do **not** specify a framework if you're deploying a [custom
-      # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
-      # 
-      # If you specify a [Compute Engine (N1) machine
-      # type](/ml-engine/docs/machine-types-online-prediction) in the
-      # `machineType` field, you must specify `TENSORFLOW`
-      # for the framework.
-  "createTime": "A String", # Output only. The time the version was created.
-  "name": "A String", # Required. The name specified for the version when it was created.
-      # 
-      # The version name must be unique within the model it is created in.
-  "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+      # The total number of model files can&#x27;t exceed 1000.
+  &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
       # response to increases and decreases in traffic. Care should be
-      # taken to ramp up traffic according to the model's ability to scale
+      # taken to ramp up traffic according to the model&#x27;s ability to scale
       # or you will start seeing increases in latency and 429 response codes.
       # 
       # Note that you cannot use AutoScaling if your version uses
       # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
       # `manual_scaling`.
-    "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+    &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
         # nodes are always up, starting from the time the model is deployed.
         # Therefore, the cost of operating this model will be at least
         # `rate` * `min_nodes` * number of hours since last billing cycle,
@@ -1573,32 +1507,27 @@
         # &lt;pre&gt;
         # update_body.json:
         # {
-        #   'autoScaling': {
-        #     'minNodes': 5
+        #   &#x27;autoScaling&#x27;: {
+        #     &#x27;minNodes&#x27;: 5
         #   }
         # }
         # &lt;/pre&gt;
         # HTTP request:
-        # &lt;pre style="max-width: 626px;"&gt;
+        # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
         # PATCH
         # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
         # -d @./update_body.json
         # &lt;/pre&gt;
   },
-  "pythonVersion": "A String", # Required. The version of Python used in prediction.
-      # 
-      # The following Python versions are available:
-      # 
-      # * Python '3.7' is available when `runtime_version` is set to '1.15' or
-      #   later.
-      # * Python '3.5' is available when `runtime_version` is set to a version
-      #   from '1.4' to '1.14'.
-      # * Python '2.7' is available when `runtime_version` is set to '1.15' or
-      #   earlier.
-      # 
-      # Read more about the Python versions available for [each runtime
-      # version](/ml-engine/docs/runtime-version-list).
-  "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+  &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your model
+      # versions. Each label is a key-value pair, where both the key and the value
+      # are arbitrary strings that you supply.
+      # For more information, see the documentation on
+      # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
+    &quot;a_key&quot;: &quot;A String&quot;,
+  },
+  &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
+  &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
       # projects.models.versions.patch
       # request. Specifying it in a
       # projects.models.versions.create
@@ -1617,19 +1546,16 @@
       # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
       # specify this configuration manually. Setting up continuous evaluation
       # automatically enables logging of request-response pairs.
-    "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
-        # For example, if you want to log 10% of requests, enter `0.1`. The sampling
-        # window is the lifetime of the model version. Defaults to 0.
-    "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
-        # "&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;"
+    &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
+        # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
         #
-        # The specified table must already exist, and the "Cloud ML Service Agent"
+        # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
         # for your project must have permission to write to it. The table must have
         # the following [schema](/bigquery/docs/schemas):
         #
         # &lt;table&gt;
-        #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style="display: table-cell"&gt;Type&lt;/th&gt;
-        #     &lt;th style="display: table-cell"&gt;Mode&lt;/th&gt;&lt;/tr&gt;
+        #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
+        #     &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
         #   &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
         #   &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
         #   &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
@@ -1637,19 +1563,93 @@
         #   &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
         #   &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
         # &lt;/table&gt;
+    &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+        # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+        # window is the lifetime of the model version. Defaults to 0.
+  },
+  &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
+  &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
+      # applies to online prediction service. If this field is not specified, it
+      # defaults to `mls1-c1-m2`.
+      # 
+      # Online prediction supports the following machine types:
+      # 
+      # * `mls1-c1-m2`
+      # * `mls1-c4-m2`
+      # * `n1-standard-2`
+      # * `n1-standard-4`
+      # * `n1-standard-8`
+      # * `n1-standard-16`
+      # * `n1-standard-32`
+      # * `n1-highmem-2`
+      # * `n1-highmem-4`
+      # * `n1-highmem-8`
+      # * `n1-highmem-16`
+      # * `n1-highmem-32`
+      # * `n1-highcpu-2`
+      # * `n1-highcpu-4`
+      # * `n1-highcpu-8`
+      # * `n1-highcpu-16`
+      # * `n1-highcpu-32`
+      # 
+      # `mls1-c1-m2` is generally available. All other machine types are available
+      # in beta. Learn more about the [differences between machine
+      # types](/ml-engine/docs/machine-types-online-prediction).
+  &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
+      # 
+      # For more information, see the
+      # [runtime version list](/ml-engine/docs/runtime-version-list) and
+      # [how to manage runtime versions](/ml-engine/docs/versioning).
+  &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
+  &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
+      # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+      # `XGBOOST`. If you do not specify a framework, AI Platform
+      # will analyze files in the deployment_uri to determine a framework. If you
+      # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+      # of the model to 1.4 or greater.
+      # 
+      # Do **not** specify a framework if you&#x27;re deploying a [custom
+      # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+      # 
+      # If you specify a [Compute Engine (N1) machine
+      # type](/ml-engine/docs/machine-types-online-prediction) in the
+      # `machineType` field, you must specify `TENSORFLOW`
+      # for the framework.
+  &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
+      # prevent simultaneous updates of a model from overwriting each other.
+      # It is strongly suggested that systems make use of the `etag` in the
+      # read-modify-write cycle to perform model updates in order to avoid race
+      # conditions: An `etag` is returned in the response to `GetVersion`, and
+      # systems are expected to put that etag in the request to `UpdateVersion` to
+      # ensure that their change will be applied to the model as intended.
+  &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
+      # requests that do not specify a version.
+      # 
+      # You can change the default version by calling
+      # projects.models.versions.setDefault.
+  &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+      # Only specify this field if you have specified a Compute Engine (N1) machine
+      # type in the `machineType` field. Learn more about [using GPUs for online
+      # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+      # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+      # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+      # [accelerators for online
+      # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+    &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
+    &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
   },
 }
 
   updateMask: string, Required. Specifies the path, relative to `Version`, of the field to
 update. Must be present and non-empty.
 
-For example, to change the description of a version to "foo", the
+For example, to change the description of a version to &quot;foo&quot;, the
 `update_mask` parameter would be specified as `description`, and the
 `PATCH` request body would specify the new value, as follows:
 
 ```
 {
-  "description": "foo"
+  &quot;description&quot;: &quot;foo&quot;
 }
 ```
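
As a non-authoritative sketch of the same update issued through this Python client (placeholder project, model, and version names; assumes application-default credentials are available), the call might look like:

```
from googleapiclient import discovery

# Build the AI Platform (ml, v1) client.
ml = discovery.build('ml', 'v1')

# Placeholder resource name, for illustration only.
name = 'projects/my-project/models/my_model/versions/v1'

# updateMask names the field to change; the body carries its new value.
operation = ml.projects().models().versions().patch(
    name=name,
    updateMask='description',
    body={'description': 'foo'},
).execute()
```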
 
@@ -1668,34 +1668,7 @@
 
     { # This resource represents a long-running operation that is the result of a
       # network API call.
-    "metadata": { # Service-specific metadata associated with the operation.  It typically
-        # contains progress information and common metadata such as create time.
-        # Some services might not provide such metadata.  Any method that returns a
-        # long-running operation should document the metadata type, if any.
-      "a_key": "", # Properties of the object. Contains field @type with type URL.
-    },
-    "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
-        # different programming environments, including REST APIs and RPC APIs. It is
-        # used by [gRPC](https://github.com/grpc). Each `Status` message contains
-        # three pieces of data: error code, error message, and error details.
-        #
-        # You can find out more about this error model and how to work with it in the
-        # [API Design Guide](https://cloud.google.com/apis/design/errors).
-      "message": "A String", # A developer-facing error message, which should be in English. Any
-          # user-facing error message should be localized and sent in the
-          # google.rpc.Status.details field, or localized by the client.
-      "code": 42, # The status code, which should be an enum value of google.rpc.Code.
-      "details": [ # A list of messages that carry the error details.  There is a common set of
-          # message types for APIs to use.
-        {
-          "a_key": "", # Properties of the object. Contains field @type with type URL.
-        },
-      ],
-    },
-    "done": True or False, # If the value is `false`, it means the operation is still in progress.
-        # If `true`, the operation is completed, and either `error` or `response` is
-        # available.
-    "response": { # The normal response of the operation in case of success.  If the original
+    &quot;response&quot;: { # The normal response of the operation in case of success.  If the original
         # method returns no data on success, such as `Delete`, the response is
         # `google.protobuf.Empty`.  If the original method is standard
         # `Get`/`Create`/`Update`, the response should be the resource.  For other
@@ -1703,11 +1676,38 @@
         # is the original method name.  For example, if the original method name
         # is `TakeSnapshot()`, the inferred response type is
         # `TakeSnapshotResponse`.
-      "a_key": "", # Properties of the object. Contains field @type with type URL.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
     },
-    "name": "A String", # The server-assigned name, which is only unique within the same service that
+    &quot;name&quot;: &quot;A String&quot;, # The server-assigned name, which is only unique within the same service that
         # originally returns it. If you use the default HTTP mapping, the
         # `name` should be a resource name ending with `operations/{unique_id}`.
+    &quot;error&quot;: { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
+        # different programming environments, including REST APIs and RPC APIs. It is
+        # used by [gRPC](https://github.com/grpc). Each `Status` message contains
+        # three pieces of data: error code, error message, and error details.
+        #
+        # You can find out more about this error model and how to work with it in the
+        # [API Design Guide](https://cloud.google.com/apis/design/errors).
+      &quot;details&quot;: [ # A list of messages that carry the error details.  There is a common set of
+          # message types for APIs to use.
+        {
+          &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+        },
+      ],
+      &quot;code&quot;: 42, # The status code, which should be an enum value of google.rpc.Code.
+      &quot;message&quot;: &quot;A String&quot;, # A developer-facing error message, which should be in English. Any
+          # user-facing error message should be localized and sent in the
+          # google.rpc.Status.details field, or localized by the client.
+    },
+    &quot;metadata&quot;: { # Service-specific metadata associated with the operation.  It typically
+        # contains progress information and common metadata such as create time.
+        # Some services might not provide such metadata.  Any method that returns a
+        # long-running operation should document the metadata type, if any.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+    },
+    &quot;done&quot;: True or False, # If the value is `false`, it means the operation is still in progress.
+        # If `true`, the operation is completed, and either `error` or `response` is
+        # available.
   }</pre>
 </div>
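
The patch call returns a long-running Operation like the one documented above. A minimal polling sketch, assuming the `ml` client and `operation` dict from the previous example (the 10-second interval is an arbitrary choice for this sketch):

```
import time

# Poll until the Operation reports done, then inspect the outcome.
while not operation.get('done'):
    time.sleep(10)
    operation = ml.projects().operations().get(
        name=operation['name'],
    ).execute()

if 'error' in operation:
    # `error` is a google.rpc.Status dict: code, message, details.
    raise RuntimeError(operation['error'].get('message'))

version = operation.get('response')  # the updated Version on success
```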
 
@@ -1716,7 +1716,7 @@
   <pre>Designates a version to be the default for the model.
 
 The default version is used for prediction requests made against the model
-that don't specify a version.
+that don&#x27;t specify a version.
 
 The first version to be created for a model is automatically set as the
 default. You must make any subsequent changes to the default version
@@ -1746,25 +1746,37 @@
       # prediction requests. A model can have multiple versions. You can get
       # information about all of the versions of a given model by calling
       # projects.models.versions.list.
-    "acceleratorConfig": { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
-        # Only specify this field if you have specified a Compute Engine (N1) machine
-        # type in the `machineType` field. Learn more about [using GPUs for online
-        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-        # Note that the AcceleratorConfig can be used in both Jobs and Versions.
-        # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
-        # [accelerators for online
-        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
-      "count": "A String", # The number of accelerators to attach to each machine running the job.
-      "type": "A String", # The type of accelerator to use.
+    &quot;state&quot;: &quot;A String&quot;, # Output only. The state of a version.
+    &quot;manualScaling&quot;: { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
+        # model. You should generally use `auto_scaling` with an appropriate
+        # `min_nodes` instead, but this option is available if you want more
+        # predictable billing. Beware that latency and error rates will increase
+        # if the traffic exceeds the capacity of the system to serve it based
+        # on the selected number of nodes.
+      &quot;nodes&quot;: 42, # The number of nodes to allocate for this model. These nodes are always up,
+          # starting from the time the model is deployed, so the cost of operating
+          # this model will be proportional to `nodes` * number of hours since
+          # last billing cycle plus the cost for each prediction performed.
     },
-    "labels": { # Optional. One or more labels that you can add, to organize your model
-        # versions. Each label is a key-value pair, where both the key and the value
-        # are arbitrary strings that you supply.
-        # For more information, see the documentation on
-        # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
-      "a_key": "A String",
-    },
-    "predictionClass": "A String", # Optional. The fully qualified name
+    &quot;name&quot;: &quot;A String&quot;, # Required. The name specified for the version when it was created.
+        #
+        # The version name must be unique within the model it is created in.
+    &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for resource access control.
+    &quot;pythonVersion&quot;: &quot;A String&quot;, # Required. The version of Python used in prediction.
+        #
+        # The following Python versions are available:
+        #
+        # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+        #   later.
+        # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
+        #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
+        # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
+        #   earlier.
+        #
+        # Read more about the Python versions available for [each runtime
+        # version](/ml-engine/docs/runtime-version-list).
+    &quot;lastUseTime&quot;: &quot;A String&quot;, # Output only. The time the version was last used for prediction.
+    &quot;predictionClass&quot;: &quot;A String&quot;, # Optional. The fully qualified name
         # (&lt;var&gt;module_name&lt;/var&gt;.&lt;var&gt;class_name&lt;/var&gt;) of a class that implements
         # the Predictor interface described in this reference field. The module
         # containing this class should be included in a package provided to the
@@ -1779,12 +1791,12 @@
         #
         # The following code sample provides the Predictor interface:
         #
-        # &lt;pre style="max-width: 626px;"&gt;
+        # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
         # class Predictor(object):
-        # """Interface for constructing custom predictors."""
+        # &quot;&quot;&quot;Interface for constructing custom predictors.&quot;&quot;&quot;
         #
         # def predict(self, instances, **kwargs):
-        #     """Performs custom prediction.
+        #     &quot;&quot;&quot;Performs custom prediction.
         #
         #     Instances are the decoded values from the request. They have already
         #     been deserialized from JSON.
@@ -1797,12 +1809,12 @@
         #     Returns:
         #         A list of outputs containing the prediction results. This list must
         #         be JSON serializable.
-        #     """
+        #     &quot;&quot;&quot;
         #     raise NotImplementedError()
         #
         # @classmethod
         # def from_path(cls, model_dir):
-        #     """Creates an instance of Predictor using the given path.
+        #     &quot;&quot;&quot;Creates an instance of Predictor using the given path.
         #
         #     Loading of the predictor should be done in this method.
         #
@@ -1813,15 +1825,13 @@
         #
         #     Returns:
         #         An instance implementing this Predictor class.
-        #     """
+        #     &quot;&quot;&quot;
         #     raise NotImplementedError()
         # &lt;/pre&gt;
         #
         # Learn more about [the Predictor interface and custom prediction
         # routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
-    "serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
-    "state": "A String", # Output only. The state of a version.
-    "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
+    &quot;packageUris&quot;: [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
         # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
         # or [scikit-learn pipelines with custom
         # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
@@ -1835,17 +1845,45 @@
         #
         # If you specify this field, you must also set
         # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
-      "A String",
+      &quot;A String&quot;,
     ],
-    "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
-        # prevent simultaneous updates of a model from overwriting each other.
-        # It is strongly suggested that systems make use of the `etag` in the
-        # read-modify-write cycle to perform model updates in order to avoid race
-        # conditions: An `etag` is returned in the response to `GetVersion`, and
-        # systems are expected to put that etag in the request to `UpdateVersion` to
-        # ensure that their change will be applied to the model as intended.
-    "lastUseTime": "A String", # Output only. The time the version was last used for prediction.
-    "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
+    &quot;explanationConfig&quot;: { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model&#x27;s version.
+        # Some explanation features require additional metadata to be loaded
+        # as part of the model payload.
+        # There are two feature attribution methods supported for TensorFlow models:
+        # integrated gradients and sampled Shapley.
+        # [Learn more about feature
+        # attributions.](/ai-platform/prediction/docs/ai-explanations/overview)
+      &quot;integratedGradientsAttribution&quot;: { # Attributes credit by computing the Aumann-Shapley value, taking advantage
+          # of the model&#x27;s fully differentiable structure. Refer to this paper for
+          # more details: https://arxiv.org/abs/1703.01365 (also at
+          # http://proceedings.mlr.press/v70/sundararajan17a.html).
+        &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+            # A good starting value is 50; gradually increase it until the
+            # sum-to-diff property is met within the desired error range.
+      },
+      &quot;sampledShapleyAttribution&quot;: { # An attribution method that approximates Shapley values for features that
+          # contribute to the label being predicted. A sampling strategy is used to
+          # approximate the value rather than considering all subsets of features.
+        &quot;numPaths&quot;: 42, # The number of feature permutations to consider when approximating the
+            # Shapley values.
+      },
+      &quot;xraiAttribution&quot;: { # Attributes credit by computing XRAI, taking advantage
+          # of the model&#x27;s fully differentiable structure. Refer to this paper for
+          # more details: https://arxiv.org/abs/1906.02825
+          # Currently only implemented for models with natural image inputs.
+        &quot;numIntegralSteps&quot;: 42, # Number of steps for approximating the path integral.
+            # A good starting value is 50; gradually increase it until the
+            # sum-to-diff property is met within the desired error range.
+      },
+    },
+    &quot;deploymentUri&quot;: &quot;A String&quot;, # Required. The Cloud Storage location of the trained model used to
         # create the version. See the
         # [guide to model
         # deployment](/ml-engine/docs/tensorflow/deploying-models) for more
@@ -1856,120 +1894,16 @@
         # the model service uses the specified location as the source of the model.
         # Once deployed, the model version is hosted by the prediction service, so
         # this location is useful only as a historical record.
-        # The total number of model files can't exceed 1000.
-    "explanationConfig": { # Message holding configuration options for explaining model predictions. # Optional. Configures explainability features on the model's version.
-        # Some explanation features require additional metadata to be loaded
-        # as part of the model payload.
-        # There are two feature attribution methods supported for TensorFlow models:
-        # integrated gradients and sampled Shapley.
-        # [Learn more about feature
-        # attributions.](/ml-engine/docs/ai-explanations/overview)
-      "xraiAttribution": { # Attributes credit by computing the XRAI taking advantage # Attributes credit by computing the XRAI taking advantage
-          # of the model's fully differentiable structure. Refer to this paper for
-          # more details: https://arxiv.org/abs/1906.02825
-          # Currently only implemented for models with natural image inputs.
-          # of the model's fully differentiable structure. Refer to this paper for
-          # more details: https://arxiv.org/abs/1906.02825
-          # Currently only implemented for models with natural image inputs.
-        "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-            # A good value to start is 50 and gradually increase until the
-            # sum to diff property is met within the desired error range.
-      },
-      "sampledShapleyAttribution": { # An attribution method that approximates Shapley values for features that # An attribution method that approximates Shapley values for features that
-          # contribute to the label being predicted. A sampling strategy is used to
-          # approximate the value rather than considering all subsets of features.
-          # contribute to the label being predicted. A sampling strategy is used to
-          # approximate the value rather than considering all subsets of features.
-        "numPaths": 42, # The number of feature permutations to consider when approximating the
-            # Shapley values.
-      },
-      "integratedGradientsAttribution": { # Attributes credit by computing the Aumann-Shapley value taking advantage # Attributes credit by computing the Aumann-Shapley value taking advantage
-          # of the model's fully differentiable structure. Refer to this paper for
-          # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
-          # of the model's fully differentiable structure. Refer to this paper for
-          # more details: http://proceedings.mlr.press/v70/sundararajan17a.html
-        "numIntegralSteps": 42, # Number of steps for approximating the path integral.
-            # A good value to start is 50 and gradually increase until the
-            # sum to diff property is met within the desired error range.
-      },
-    },
-    "isDefault": True or False, # Output only. If true, this version will be used to handle prediction
-        # requests that do not specify a version.
-        #
-        # You can change the default version by calling
-        # projects.methods.versions.setDefault.
-    "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
-        # applies to online prediction service. If this field is not specified, it
-        # defaults to `mls1-c1-m2`.
-        #
-        # Online prediction supports the following machine types:
-        #
-        # * `mls1-c1-m2`
-        # * `mls1-c4-m2`
-        # * `n1-standard-2`
-        # * `n1-standard-4`
-        # * `n1-standard-8`
-        # * `n1-standard-16`
-        # * `n1-standard-32`
-        # * `n1-highmem-2`
-        # * `n1-highmem-4`
-        # * `n1-highmem-8`
-        # * `n1-highmem-16`
-        # * `n1-highmem-32`
-        # * `n1-highcpu-2`
-        # * `n1-highcpu-4`
-        # * `n1-highcpu-8`
-        # * `n1-highcpu-16`
-        # * `n1-highcpu-32`
-        #
-        # `mls1-c1-m2` is generally available. All other machine types are available
-        # in beta. Learn more about the [differences between machine
-        # types](/ml-engine/docs/machine-types-online-prediction).
-    "description": "A String", # Optional. The description specified for the version when it was created.
-    "runtimeVersion": "A String", # Required. The AI Platform runtime version to use for this deployment.
-        #
-        # For more information, see the
-        # [runtime version list](/ml-engine/docs/runtime-version-list) and
-        # [how to manage runtime versions](/ml-engine/docs/versioning).
-    "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
-        # model. You should generally use `auto_scaling` with an appropriate
-        # `min_nodes` instead, but this option is available if you want more
-        # predictable billing. Beware that latency and error rates will increase
-        # if the traffic exceeds that capability of the system to serve it based
-        # on the selected number of nodes.
-      "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
-          # starting from the time the model is deployed, so the cost of operating
-          # this model will be proportional to `nodes` * number of hours since
-          # last billing cycle plus the cost for each prediction performed.
-    },
-    "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
-    "framework": "A String", # Optional. The machine learning framework AI Platform uses to train
-        # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
-        # `XGBOOST`. If you do not specify a framework, AI Platform
-        # will analyze files in the deployment_uri to determine a framework. If you
-        # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
-        # of the model to 1.4 or greater.
-        #
-        # Do **not** specify a framework if you're deploying a [custom
-        # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
-        #
-        # If you specify a [Compute Engine (N1) machine
-        # type](/ml-engine/docs/machine-types-online-prediction) in the
-        # `machineType` field, you must specify `TENSORFLOW`
-        # for the framework.
-    "createTime": "A String", # Output only. The time the version was created.
-    "name": "A String", # Required. The name specified for the version when it was created.
-        #
-        # The version name must be unique within the model it is created in.
-    "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
+        # The total number of model files can&#x27;t exceed 1000.
+    &quot;autoScaling&quot;: { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
         # response to increases and decreases in traffic. Care should be
-        # taken to ramp up traffic according to the model's ability to scale
+        # taken to ramp up traffic according to the model&#x27;s ability to scale
         # or you will start seeing increases in latency and 429 response codes.
         #
         # Note that you cannot use AutoScaling if your version uses
         # [GPUs](#Version.FIELDS.accelerator_config). Instead, you must specify
         # `manual_scaling`.
-      "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
+      &quot;minNodes&quot;: 42, # Optional. The minimum number of nodes to allocate for this model. These
           # nodes are always up, starting from the time the model is deployed.
           # Therefore, the cost of operating this model will be at least
           # `rate` * `min_nodes` * number of hours since last billing cycle,
@@ -2004,32 +1938,27 @@
           # &lt;pre&gt;
           # update_body.json:
           # {
-          #   'autoScaling': {
-          #     'minNodes': 5
+          #   &#x27;autoScaling&#x27;: {
+          #     &#x27;minNodes&#x27;: 5
           #   }
           # }
           # &lt;/pre&gt;
           # HTTP request:
-          # &lt;pre style="max-width: 626px;"&gt;
+          # &lt;pre style=&quot;max-width: 626px;&quot;&gt;
           # PATCH
           # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
           # -d @./update_body.json
           # &lt;/pre&gt;
     },
-    "pythonVersion": "A String", # Required. The version of Python used in prediction.
-        #
-        # The following Python versions are available:
-        #
-        # * Python '3.7' is available when `runtime_version` is set to '1.15' or
-        #   later.
-        # * Python '3.5' is available when `runtime_version` is set to a version
-        #   from '1.4' to '1.14'.
-        # * Python '2.7' is available when `runtime_version` is set to '1.15' or
-        #   earlier.
-        #
-        # Read more about the Python versions available for [each runtime
-        # version](/ml-engine/docs/runtime-version-list).
-    "requestLoggingConfig": { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
+    &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your model
+        # versions. Each label is a key-value pair, where both the key and the value
+        # are arbitrary strings that you supply.
+        # For more information, see the documentation on
+        # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
+      &quot;a_key&quot;: &quot;A String&quot;,
+    },
+    &quot;createTime&quot;: &quot;A String&quot;, # Output only. The time the version was created.
+    &quot;requestLoggingConfig&quot;: { # Configuration for logging request-response pairs to a BigQuery table. # Optional. *Only* specify this field in a
         # projects.models.versions.patch
         # request. Specifying it in a
         # projects.models.versions.create
@@ -2048,19 +1977,16 @@
         # evaluation](/ml-engine/docs/continuous-evaluation/), you do not need to
         # specify this configuration manually. Setting up continuous evaluation
         # automatically enables logging of request-response pairs.
-      "samplingPercentage": 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
-          # For example, if you want to log 10% of requests, enter `0.1`. The sampling
-          # window is the lifetime of the model version. Defaults to 0.
-      "bigqueryTableName": "A String", # Required. Fully qualified BigQuery table name in the following format:
-          # "&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;"
+      &quot;bigqueryTableName&quot;: &quot;A String&quot;, # Required. Fully qualified BigQuery table name in the following format:
+          # &quot;&lt;var&gt;project_id&lt;/var&gt;.&lt;var&gt;dataset_name&lt;/var&gt;.&lt;var&gt;table_name&lt;/var&gt;&quot;
           #
-          # The specified table must already exist, and the "Cloud ML Service Agent"
+          # The specified table must already exist, and the &quot;Cloud ML Service Agent&quot;
           # for your project must have permission to write to it. The table must have
           # the following [schema](/bigquery/docs/schemas):
           #
           # &lt;table&gt;
-          #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style="display: table-cell"&gt;Type&lt;/th&gt;
-          #     &lt;th style="display: table-cell"&gt;Mode&lt;/th&gt;&lt;/tr&gt;
+          #   &lt;tr&gt;&lt;th&gt;Field name&lt;/th&gt;&lt;th style=&quot;display: table-cell&quot;&gt;Type&lt;/th&gt;
+          #     &lt;th style=&quot;display: table-cell&quot;&gt;Mode&lt;/th&gt;&lt;/tr&gt;
           #   &lt;tr&gt;&lt;td&gt;model&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
           #   &lt;tr&gt;&lt;td&gt;model_version&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
           #   &lt;tr&gt;&lt;td&gt;time&lt;/td&gt;&lt;td&gt;TIMESTAMP&lt;/td&gt;&lt;td&gt;REQUIRED&lt;/td&gt;&lt;/tr&gt;
@@ -2068,6 +1994,80 @@
           #   &lt;tr&gt;&lt;td&gt;raw_prediction&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
           #   &lt;tr&gt;&lt;td&gt;groundtruth&lt;/td&gt;&lt;td&gt;STRING&lt;/td&gt;&lt;td&gt;NULLABLE&lt;/td&gt;&lt;/tr&gt;
           # &lt;/table&gt;
+      &quot;samplingPercentage&quot;: 3.14, # Percentage of requests to be logged, expressed as a fraction from 0 to 1.
+          # For example, if you want to log 10% of requests, enter `0.1`. The sampling
+          # window is the lifetime of the model version. Defaults to 0.
+    },
+    &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
+    &quot;machineType&quot;: &quot;A String&quot;, # Optional. The type of machine on which to serve the model. Currently only
+        # applies to online prediction service. If this field is not specified, it
+        # defaults to `mls1-c1-m2`.
+        #
+        # Online prediction supports the following machine types:
+        #
+        # * `mls1-c1-m2`
+        # * `mls1-c4-m2`
+        # * `n1-standard-2`
+        # * `n1-standard-4`
+        # * `n1-standard-8`
+        # * `n1-standard-16`
+        # * `n1-standard-32`
+        # * `n1-highmem-2`
+        # * `n1-highmem-4`
+        # * `n1-highmem-8`
+        # * `n1-highmem-16`
+        # * `n1-highmem-32`
+        # * `n1-highcpu-2`
+        # * `n1-highcpu-4`
+        # * `n1-highcpu-8`
+        # * `n1-highcpu-16`
+        # * `n1-highcpu-32`
+        #
+        # `mls1-c1-m2` is generally available. All other machine types are available
+        # in beta. Learn more about the [differences between machine
+        # types](/ml-engine/docs/machine-types-online-prediction).
+    &quot;runtimeVersion&quot;: &quot;A String&quot;, # Required. The AI Platform runtime version to use for this deployment.
+        #
+        # For more information, see the
+        # [runtime version list](/ml-engine/docs/runtime-version-list) and
+        # [how to manage runtime versions](/ml-engine/docs/versioning).
+    &quot;description&quot;: &quot;A String&quot;, # Optional. The description specified for the version when it was created.
+    &quot;framework&quot;: &quot;A String&quot;, # Optional. The machine learning framework AI Platform uses to train
+        # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
+        # `XGBOOST`. If you do not specify a framework, AI Platform
+        # will analyze files in the deployment_uri to determine a framework. If you
+        # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
+        # of the model to 1.4 or greater.
+        #
+        # Do **not** specify a framework if you&#x27;re deploying a [custom
+        # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
+        #
+        # If you specify a [Compute Engine (N1) machine
+        # type](/ml-engine/docs/machine-types-online-prediction) in the
+        # `machineType` field, you must specify `TENSORFLOW`
+        # for the framework.
+    &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
+        # prevent simultaneous updates of a model from overwriting each other.
+        # It is strongly suggested that systems make use of the `etag` in the
+        # read-modify-write cycle to perform model updates in order to avoid race
+        # conditions: An `etag` is returned in the response to `GetVersion`, and
+        # systems are expected to put that etag in the request to `UpdateVersion` to
+        # ensure that their change will be applied to the model as intended.
+    &quot;isDefault&quot;: True or False, # Output only. If true, this version will be used to handle prediction
+        # requests that do not specify a version.
+        #
+        # You can change the default version by calling
+        # projects.models.versions.setDefault.
+    &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Optional. Accelerator config for using GPUs for online prediction (beta).
+        # Only specify this field if you have specified a Compute Engine (N1) machine
+        # type in the `machineType` field. Learn more about [using GPUs for online
+        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+        # Note that the AcceleratorConfig can be used in both Jobs and Versions.
+        # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
+        # [accelerators for online
+        # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
+      &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
+      &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
     },
   }</pre>
 </div>
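
For illustration only (placeholder names; assumes application-default credentials), designating a default version through this Python client might look like the following; the request body is an empty SetDefaultVersionRequest:

```
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')

version = ml.projects().models().versions().setDefault(
    name='projects/my-project/models/my_model/versions/v2',
    body={},  # SetDefaultVersionRequest has no fields
).execute()

# setDefault returns the Version resource, now marked as the default.
print(version.get('isDefault'))
```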