Regen all docs. (#700)

* Stop recursing if discovery == {} (see the sketch below)

* Generate docs with 'make docs'.
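For reference, the guard in the first bullet presumably amounts to something
like the sketch below. It is hypothetical (the real generator lives in
describe.py and its helper names differ), but the idea is the same: an empty
discovery document has no methods or sub-resources to document, so the walk
should bottom out instead of recursing.

```python
# Hypothetical sketch of the recursion guard; names are illustrative,
# not the actual describe.py internals.
def document_collection_recursive(discovery, path=""):
    if discovery == {}:
        # Nothing to document and no sub-resources to walk: stop here.
        return
    for method in discovery.get("methods", {}):
        print(f"documenting {path}{method}()")
    for name, sub in discovery.get("resources", {}).items():
        document_collection_recursive(sub, f"{path}{name}.")

document_collection_recursive({})  # no longer recurses on an empty doc
document_collection_recursive(
    {"resources": {"jobs": {"methods": {"cancel": {}, "create": {}}}}},
    "ml_v1.projects.",
)
```
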
diff --git a/docs/dyn/ml_v1.projects.jobs.html b/docs/dyn/ml_v1.projects.jobs.html
index 50e5559..d0a9ac2 100644
--- a/docs/dyn/ml_v1.projects.jobs.html
+++ b/docs/dyn/ml_v1.projects.jobs.html
@@ -72,10 +72,10 @@
 
 </style>
 
-<h1><a href="ml_v1.html">Google Cloud Machine Learning Engine</a> . <a href="ml_v1.projects.html">projects</a> . <a href="ml_v1.projects.jobs.html">jobs</a></h1>
+<h1><a href="ml_v1.html">Cloud Machine Learning Engine</a> . <a href="ml_v1.projects.html">projects</a> . <a href="ml_v1.projects.jobs.html">jobs</a></h1>
 <h2>Instance Methods</h2>
 <p class="toc_element">
-  <code><a href="#cancel">cancel(name, body, x__xgafv=None)</a></code></p>
+  <code><a href="#cancel">cancel(name, body=None, x__xgafv=None)</a></code></p>
 <p class="firstline">Cancels a running job.</p>
 <p class="toc_element">
   <code><a href="#create">create(parent, body, x__xgafv=None)</a></code></p>
@@ -84,21 +84,31 @@
   <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
 <p class="firstline">Describes a job.</p>
 <p class="toc_element">
-  <code><a href="#list">list(parent, pageSize=None, filter=None, pageToken=None, x__xgafv=None)</a></code></p>
+  <code><a href="#getIamPolicy">getIamPolicy(resource, x__xgafv=None)</a></code></p>
+<p class="firstline">Gets the access control policy for a resource.</p>
+<p class="toc_element">
+  <code><a href="#list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</a></code></p>
 <p class="firstline">Lists the jobs in the project.</p>
 <p class="toc_element">
   <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
 <p class="firstline">Retrieves the next page of results.</p>
+<p class="toc_element">
+  <code><a href="#patch">patch(name, body, updateMask=None, x__xgafv=None)</a></code></p>
+<p class="firstline">Updates a specific job resource.</p>
+<p class="toc_element">
+  <code><a href="#setIamPolicy">setIamPolicy(resource, body, x__xgafv=None)</a></code></p>
+<p class="firstline">Sets the access control policy on the specified resource. Replaces any</p>
+<p class="toc_element">
+  <code><a href="#testIamPermissions">testIamPermissions(resource, body, x__xgafv=None)</a></code></p>
+<p class="firstline">Returns permissions that a caller has on the specified resource.</p>
 <h3>Method Details</h3>
 <div class="method">
-    <code class="details" id="cancel">cancel(name, body, x__xgafv=None)</code>
+    <code class="details" id="cancel">cancel(name, body=None, x__xgafv=None)</code>
   <pre>Cancels a running job.
 
 Args:
-  name: string, Required. The name of the job to cancel.
-
-Authorization: requires `Editor` role on the parent project. (required)
-  body: object, The request body. (required)
+  name: string, Required. The name of the job to cancel. (required)
+  body: object, The request body.
     The object takes the form of:
 
 { # Request message for the CancelJob method.
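With body now optional (the signature change this hunk documents), cancelling
a job reduces to a single call. A sketch with a placeholder job name:

```python
from googleapiclient import discovery

ml = discovery.build("ml", "v1")

# body= may now be omitted; the job's resource name is enough.
ml.projects().jobs().cancel(
    name="projects/my-project/jobs/my_job"
).execute()
```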
@@ -129,14 +139,474 @@
   <pre>Creates a training or a batch prediction job.
 
 Args:
-  parent: string, Required. The project name.
-
-Authorization: requires `Editor` role on the specified project. (required)
+  parent: string, Required. The project name. (required)
   body: object, The request body. (required)
     The object takes the form of:
 
 { # Represents a training or prediction job.
+  "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+  "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
+    "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
+        # Only set for hyperparameter tuning jobs.
+    "trials": [ # Results for individual Hyperparameter trials.
+        # Only set for hyperparameter tuning jobs.
+      { # Represents the result of a single hyperparameter tuning trial from a
+          # training job. The TrainingOutput object that is returned on successful
+          # completion of a training job with hyperparameter tuning includes a list
+          # of HyperparameterOutput objects, one for each successful trial.
+        "hyperparameters": { # The hyperparameters given to this trial.
+          "a_key": "A String",
+        },
+        "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
+          "trainingStep": "A String", # The global training step for this metric.
+          "objectiveValue": 3.14, # The objective value at this training step.
+        },
+        "allMetrics": [ # All recorded object metrics for this trial. This field is not currently
+            # populated.
+          { # An observed value of a metric.
+            "trainingStep": "A String", # The global training step for this metric.
+            "objectiveValue": 3.14, # The objective value at this training step.
+          },
+        ],
+        "isTrialStoppedEarly": True or False, # True if the trial is stopped early.
+        "trialId": "A String", # The trial id for these results.
+        "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
+            # Only set for trials of built-in algorithms jobs that have succeeded.
+          "framework": "A String", # Framework on which the built-in algorithm was trained.
+          "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
+              # saves the trained model. Only set for successful jobs that don't use
+              # hyperparameter tuning.
+          "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
+              # trained.
+          "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
+        },
+      },
+    ],
+    "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
+    "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job.
+    "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
+    "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning
+        # trials. See
+        # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
+        # for more information. Only set for hyperparameter tuning jobs.
+    "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
+        # Only set for built-in algorithms jobs.
+      "framework": "A String", # Framework on which the built-in algorithm was trained.
+      "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
+          # saves the trained model. Only set for successful jobs that don't use
+          # hyperparameter tuning.
+      "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
+          # trained.
+      "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
+    },
+  },
+  "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
+    "modelName": "A String", # Use this field if you want to use the default version for the specified
+        # model. The string must use the following format:
+        #
+        # `"projects/YOUR_PROJECT/models/YOUR_MODEL"`
+    "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch
+        # prediction. If not set, AI Platform will pick the runtime version used
+        # during the CreateVersion request for this model version, or choose the
+        # latest stable version when model version information is not available
+        # such as when the model is specified by uri.
+    "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for
+        # this job. Please refer to
+        # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
+        # for information about how to use signatures.
+        #
+        # Defaults to
+        # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
+        # , which is "serving_default".
+    "batchSize": "A String", # Optional. Number of records per batch, defaults to 64.
+        # The service will buffer batch_size number of records in memory before
+        # invoking one TensorFlow prediction call internally. So take the record
+        # size and memory available into consideration when setting this parameter.
+    "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain
+        # <a href="/storage/docs/gsutil/addlhelp/WildcardNames">wildcards</a>.
+      "A String",
+    ],
+    "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
+        # Defaults to 10 if not specified.
+    "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
+        # the model to use.
+    "outputPath": "A String", # Required. The output Google Cloud Storage location.
+    "dataFormat": "A String", # Required. The format of the input data files.
+    "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
+        # string is formatted the same way as `model_version`, with the addition
+        # of the version information:
+        #
+        # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"`
+    "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
+        # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+        # for AI Platform services.
+    "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON.
+  },
+  "trainingInput": { # Represents input parameters for a training job. When using the # Input parameters to create a training job.
+      # gcloud command to submit your training job, you can specify
+      # the input parameters as command-line arguments and/or in a YAML configuration
+      # file referenced from the --config command-line argument. For
+      # details, see the guide to
+      # <a href="/ml-engine/docs/tensorflow/training-jobs">submitting a training
+      # job</a>.
+    "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
+        # job's worker nodes.
+        #
+        # The supported values are the same as those described in the entry for
+        # `masterType`.
+        #
+        # This value must be consistent with the category of machine type that
+        # `masterType` uses. In other words, both must be AI Platform machine
+        # types or both must be Compute Engine machine types.
+        #
+        # If you use `cloud_tpu` for this value, see special instructions for
+        # [configuring a custom TPU
+        # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
+        #
+        # This value must be present when `scaleTier` is set to `CUSTOM` and
+        # `workerCount` is greater than zero.
+    "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
+        #
+        # You should only set `parameterServerConfig.acceleratorConfig` if
+        # `parameterServerType` is set to a Compute Engine machine type. [Learn
+        # about restrictions on accelerator configurations for
+        # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+        #
+        # Set `parameterServerConfig.imageUri` only if you build a custom image for
+        # your parameter server. If `parameterServerConfig.imageUri` has not been
+        # set, AI Platform uses the value of `masterConfig.imageUri`.
+        # Learn more about [configuring custom
+        # containers](/ml-engine/docs/distributed-training-containers).
+      "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
+          # [Learn about restrictions on accelerator configurations for
+          # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+        "count": "A String", # The number of accelerators to attach to each machine running the job.
+        "type": "A String", # The type of accelerator to use.
+      },
+      "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
+          # Registry. Learn more about [configuring custom
+          # containers](/ml-engine/docs/distributed-training-containers).
+    },
+    "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. If not
+        # set, AI Platform uses the default stable version, 1.0. For more
+        # information, see the
+        # <a href="/ml-engine/docs/runtime-version-list">runtime version list</a>
+        # and
+        # <a href="/ml-engine/docs/versioning">how to manage runtime versions</a>.
+    "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
+        # and parameter servers.
+    "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
+        # job's master worker.
+        #
+        # The following types are supported:
+        #
+        # <dl>
+        #   <dt>standard</dt>
+        #   <dd>
+        #   A basic machine configuration suitable for training simple models with
+        #   small to moderate datasets.
+        #   </dd>
+        #   <dt>large_model</dt>
+        #   <dd>
+        #   A machine with a lot of memory, specially suited for parameter servers
+        #   when your model is large (having many hidden layers or layers with very
+        #   large numbers of nodes).
+        #   </dd>
+        #   <dt>complex_model_s</dt>
+        #   <dd>
+        #   A machine suitable for the master and workers of the cluster when your
+        #   model requires more computation than the standard machine can handle
+        #   satisfactorily.
+        #   </dd>
+        #   <dt>complex_model_m</dt>
+        #   <dd>
+        #   A machine with roughly twice the number of cores and roughly double the
+        #   memory of <i>complex_model_s</i>.
+        #   </dd>
+        #   <dt>complex_model_l</dt>
+        #   <dd>
+        #   A machine with roughly twice the number of cores and roughly double the
+        #   memory of <i>complex_model_m</i>.
+        #   </dd>
+        #   <dt>standard_gpu</dt>
+        #   <dd>
+        #   A machine equivalent to <i>standard</i> that
+        #   also includes a single NVIDIA Tesla K80 GPU. See more about
+        #   <a href="/ml-engine/docs/tensorflow/using-gpus">using GPUs to
+        #   train your model</a>.
+        #   </dd>
+        #   <dt>complex_model_m_gpu</dt>
+        #   <dd>
+        #   A machine equivalent to <i>complex_model_m</i> that also includes
+        #   four NVIDIA Tesla K80 GPUs.
+        #   </dd>
+        #   <dt>complex_model_l_gpu</dt>
+        #   <dd>
+        #   A machine equivalent to <i>complex_model_l</i> that also includes
+        #   eight NVIDIA Tesla K80 GPUs.
+        #   </dd>
+        #   <dt>standard_p100</dt>
+        #   <dd>
+        #   A machine equivalent to <i>standard</i> that
+        #   also includes a single NVIDIA Tesla P100 GPU.
+        #   </dd>
+        #   <dt>complex_model_m_p100</dt>
+        #   <dd>
+        #   A machine equivalent to <i>complex_model_m</i> that also includes
+        #   four NVIDIA Tesla P100 GPUs.
+        #   </dd>
+        #   <dt>standard_v100</dt>
+        #   <dd>
+        #   A machine equivalent to <i>standard</i> that
+        #   also includes a single NVIDIA Tesla V100 GPU.
+        #   </dd>
+        #   <dt>large_model_v100</dt>
+        #   <dd>
+        #   A machine equivalent to <i>large_model</i> that
+        #   also includes a single NVIDIA Tesla V100 GPU.
+        #   </dd>
+        #   <dt>complex_model_m_v100</dt>
+        #   <dd>
+        #   A machine equivalent to <i>complex_model_m</i> that
+        #   also includes four NVIDIA Tesla V100 GPUs.
+        #   </dd>
+        #   <dt>complex_model_l_v100</dt>
+        #   <dd>
+        #   A machine equivalent to <i>complex_model_l</i> that
+        #   also includes eight NVIDIA Tesla V100 GPUs.
+        #   </dd>
+        #   <dt>cloud_tpu</dt>
+        #   <dd>
+        #   A TPU VM including one Cloud TPU. See more about
+        #   <a href="/ml-engine/docs/tensorflow/using-tpus">using TPUs to train
+        #   your model</a>.
+        #   </dd>
+        # </dl>
+        #
+        # You may also use certain Compute Engine machine types directly in this
+        # field. The following types are supported:
+        #
+        # - `n1-standard-4`
+        # - `n1-standard-8`
+        # - `n1-standard-16`
+        # - `n1-standard-32`
+        # - `n1-standard-64`
+        # - `n1-standard-96`
+        # - `n1-highmem-2`
+        # - `n1-highmem-4`
+        # - `n1-highmem-8`
+        # - `n1-highmem-16`
+        # - `n1-highmem-32`
+        # - `n1-highmem-64`
+        # - `n1-highmem-96`
+        # - `n1-highcpu-16`
+        # - `n1-highcpu-32`
+        # - `n1-highcpu-64`
+        # - `n1-highcpu-96`
+        #
+        # See more about [using Compute Engine machine
+        # types](/ml-engine/docs/tensorflow/machine-types#compute-engine-machine-types).
+        #
+        # You must set this value when `scaleTier` is set to `CUSTOM`.
+    "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
+      "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
+          # the specified hyperparameters.
+          #
+          # Defaults to one.
+      "goal": "A String", # Required. The type of goal to use for tuning. Available types are
+          # `MAXIMIZE` and `MINIMIZE`.
+          #
+          # Defaults to `MAXIMIZE`.
+      "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter
+          # tuning job.
+          # Uses the default AI Platform hyperparameter tuning
+          # algorithm if unspecified.
+      "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing
+          # the hyperparameter tuning job. You can specify this field to override the
+          # default failing criteria for AI Platform hyperparameter tuning jobs.
+          #
+          # Defaults to zero, which means the service decides when a hyperparameter
+          # job should fail.
+      "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
+          # early stopping.
+      "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to
+          # continue with. The job id will be used to find the corresponding vizier
+          # study guid and resume the study.
+      "params": [ # Required. The set of parameters to tune.
+        { # Represents a single hyperparameter to optimize.
+          "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
+              # should be unset if type is `CATEGORICAL`. This value should be integers if
+              # type is `INTEGER`.
+          "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
+            "A String",
+          ],
+          "discreteValues": [ # Required if type is `DISCRETE`.
+              # A list of feasible points.
+              # The list should be in strictly increasing order. For instance, this
+              # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
+              # should not contain more than 1,000 values.
+            3.14,
+          ],
+          "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
+              # a HyperparameterSpec message. E.g., "learning_rate".
+          "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
+              # should be unset if type is `CATEGORICAL`. This value should be integers if
+              # type is INTEGER.
+          "type": "A String", # Required. The type of the parameter.
+          "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
+              # Leave unset for categorical parameters.
+              # Some kind of scaling is strongly recommended for real or integral
+              # parameters (e.g., `UNIT_LINEAR_SCALE`).
+        },
+      ],
+      "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For
+          # current versions of TensorFlow, this tag name should exactly match what is
+          # shown in TensorBoard, including all scopes.  For versions of TensorFlow
+          # prior to 0.12, this should be only the tag passed to tf.Summary.
+          # By default, "training/hptuning/metric" will be used.
+      "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
+          # You can reduce the time it takes to perform hyperparameter tuning by adding
+          # trials in parallel. However, each trial only benefits from the information
+          # gained in completed trials. That means that a trial does not get access to
+          # the results of trials running at the same time, which could reduce the
+          # quality of the overall optimization.
+          #
+          # Each trial will use the same scale tier and machine types.
+          #
+          # Defaults to one.
+    },
+    "region": "A String", # Required. The Google Compute Engine region to run the training job in.
+        # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+        # for AI Platform services.
+    "args": [ # Optional. Command line arguments to pass to the program.
+      "A String",
+    ],
+    "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
+    "pythonVersion": "A String", # Optional. The version of Python used in training. If not set, the default
+        # version is '2.7'. Python '3.5' is available when `runtime_version` is set
+        # to '1.4' and above. Python '2.7' works with all supported
+        # <a href="/ml-engine/docs/runtime-version-list">runtime versions</a>.
+    "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
+        # and other data needed for training. This path is passed to your TensorFlow
+        # program as the '--job-dir' command-line argument. The benefit of specifying
+        # this field is that Cloud ML validates the path for use in training.
+    "packageUris": [ # Required. The Google Cloud Storage location of the packages with
+        # the training program and any additional dependencies.
+        # The maximum number of package URIs is 100.
+      "A String",
+    ],
+    "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
+        # replica in the cluster will be of the type specified in `worker_type`.
+        #
+        # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
+        # set this value, you must also set `worker_type`.
+        #
+        # The default value is zero.
+    "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
+        # job's parameter server.
+        #
+        # The supported values are the same as those described in the entry for
+        # `master_type`.
+        #
+        # This value must be consistent with the category of machine type that
+        # `masterType` uses. In other words, both must be AI Platform machine
+        # types or both must be Compute Engine machine types.
+        #
+        # This value must be present when `scaleTier` is set to `CUSTOM` and
+        # `parameter_server_count` is greater than zero.
+    "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
+        #
+        # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
+        # to a Compute Engine machine type. [Learn about restrictions on accelerator
+        # configurations for
+        # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+        #
+        # Set `workerConfig.imageUri` only if you build a custom image for your
+        # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
+        # the value of `masterConfig.imageUri`. Learn more about
+        # [configuring custom
+        # containers](/ml-engine/docs/distributed-training-containers).
+      "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
+          # [Learn about restrictions on accelerator configurations for
+          # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+        "count": "A String", # The number of accelerators to attach to each machine running the job.
+        "type": "A String", # The type of accelerator to use.
+      },
+      "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
+          # Registry. Learn more about [configuring custom
+          # containers](/ml-engine/docs/distributed-training-containers).
+    },
+    "maxRunningTime": "A String", # Optional. The maximum job running time. The default is 7 days.
+    "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
+        #
+        # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
+        # to a Compute Engine machine type. Learn about [restrictions on accelerator
+        # configurations for
+        # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+        #
+        # Set `masterConfig.imageUri` only if you build a custom image. Only one of
+        # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more about
+        # [configuring custom
+        # containers](/ml-engine/docs/distributed-training-containers).
+      "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
+          # [Learn about restrictions on accelerator configurations for
+          # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+        "count": "A String", # The number of accelerators to attach to each machine running the job.
+        "type": "A String", # The type of accelerator to use.
+      },
+      "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
+          # Registry. Learn more about [configuring custom
+          # containers](/ml-engine/docs/distributed-training-containers).
+    },
+    "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
+        # job. Each replica in the cluster will be of the type specified in
+        # `parameter_server_type`.
+        #
+        # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
+        # set this value, you must also set `parameter_server_type`.
+        #
+        # The default value is zero.
+  },
+  "jobId": "A String", # Required. The user-specified id of the job.
+  "labels": { # Optional. One or more labels that you can add, to organize your jobs.
+      # Each label is a key-value pair, where both the key and the value are
+      # arbitrary strings that you supply.
+      # For more information, see the documentation on
+      # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+    "a_key": "A String",
+  },
+  "state": "A String", # Output only. The detailed state of a job.
+  "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+      # prevent simultaneous updates of a job from overwriting each other.
+      # It is strongly suggested that systems make use of the `etag` in the
+      # read-modify-write cycle to perform job updates in order to avoid race
+      # conditions: An `etag` is returned in the response to `GetJob`, and
+      # systems are expected to put that etag in the request to `UpdateJob` to
+      # ensure that their change will be applied to the same version of the job.
+  "startTime": "A String", # Output only. When the job processing was started.
+  "endTime": "A String", # Output only. When the job processing was completed.
+  "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
+    "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
+    "nodeHours": 3.14, # Node hours used by the batch prediction job.
+    "predictionCount": "A String", # The number of generated predictions.
+    "errorCount": "A String", # The number of data instances which resulted in errors.
+  },
+  "createTime": "A String", # Output only. When the job was created.
+}
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Represents a training or prediction job.
+    "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
     "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
+      "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
+          # Only set for hyperparameter tuning jobs.
       "trials": [ # Results for individual Hyperparameter trials.
           # Only set for hyperparameter tuning jobs.
         { # Represents the result of a single hyperparameter tuning trial from a
@@ -146,35 +616,142 @@
           "hyperparameters": { # The hyperparameters given to this trial.
             "a_key": "A String",
           },
-          "trialId": "A String", # The trial id for these results.
-          "allMetrics": [ # All recorded object metrics for this trial.
+          "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
+            "trainingStep": "A String", # The global training step for this metric.
+            "objectiveValue": 3.14, # The objective value at this training step.
+          },
+          "allMetrics": [ # All recorded object metrics for this trial. This field is not currently
+              # populated.
             { # An observed value of a metric.
               "trainingStep": "A String", # The global training step for this metric.
               "objectiveValue": 3.14, # The objective value at this training step.
             },
           ],
-          "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
-            "trainingStep": "A String", # The global training step for this metric.
-            "objectiveValue": 3.14, # The objective value at this training step.
+          "isTrialStoppedEarly": True or False, # True if the trial is stopped early.
+          "trialId": "A String", # The trial id for these results.
+          "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
+              # Only set for trials of built-in algorithms jobs that have succeeded.
+            "framework": "A String", # Framework on which the built-in algorithm was trained.
+            "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
+                # saves the trained model. Only set for successful jobs that don't use
+                # hyperparameter tuning.
+            "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
+                # trained.
+            "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
           },
         },
       ],
       "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
+      "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job.
       "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
-      "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
-          # Only set for hyperparameter tuning jobs.
+      "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning
+          # trials. See
+          # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
+          # for more information. Only set for hyperparameter tuning jobs.
+      "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
+          # Only set for built-in algorithms jobs.
+        "framework": "A String", # Framework on which the built-in algorithm was trained.
+        "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
+            # saves the trained model. Only set for successful jobs that don't use
+            # hyperparameter tuning.
+        "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
+            # trained.
+        "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
+      },
     },
-    "trainingInput": { # Represents input parameters for a training job. # Input parameters to create a training job.
+    "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
+      "modelName": "A String", # Use this field if you want to use the default version for the specified
+          # model. The string must use the following format:
+          #
+          # `"projects/YOUR_PROJECT/models/YOUR_MODEL"`
+      "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch
+          # prediction. If not set, AI Platform will pick the runtime version used
+          # during the CreateVersion request for this model version, or choose the
+          # latest stable version when model version information is not available
+          # such as when the model is specified by uri.
+      "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for
+          # this job. Please refer to
+          # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
+          # for information about how to use signatures.
+          #
+          # Defaults to
+          # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
+          # , which is "serving_default".
+      "batchSize": "A String", # Optional. Number of records per batch, defaults to 64.
+          # The service will buffer batch_size number of records in memory before
+          # invoking one TensorFlow prediction call internally. So take the record
+          # size and memory available into consideration when setting this parameter.
+      "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain
+          # <a href="/storage/docs/gsutil/addlhelp/WildcardNames">wildcards</a>.
+        "A String",
+      ],
+      "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
+          # Defaults to 10 if not specified.
+      "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
+          # the model to use.
+      "outputPath": "A String", # Required. The output Google Cloud Storage location.
+      "dataFormat": "A String", # Required. The format of the input data files.
+      "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
+          # string is formatted the same way as `model_version`, with the addition
+          # of the version information:
+          #
+          # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"`
+      "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
+          # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+          # for AI Platform services.
+      "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON.
+    },
+    "trainingInput": { # Represents input parameters for a training job. When using the # Input parameters to create a training job.
+        # gcloud command to submit your training job, you can specify
+        # the input parameters as command-line arguments and/or in a YAML configuration
+        # file referenced from the --config command-line argument. For
+        # details, see the guide to
+        # <a href="/ml-engine/docs/tensorflow/training-jobs">submitting a training
+        # job</a>.
       "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
           # job's worker nodes.
           #
           # The supported values are the same as those described in the entry for
           # `masterType`.
           #
+          # This value must be consistent with the category of machine type that
+          # `masterType` uses. In other words, both must be AI Platform machine
+          # types or both must be Compute Engine machine types.
+          #
+          # If you use `cloud_tpu` for this value, see special instructions for
+          # [configuring a custom TPU
+          # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
+          #
           # This value must be present when `scaleTier` is set to `CUSTOM` and
           # `workerCount` is greater than zero.
-      "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for training.  If not
-          # set, Google Cloud ML will choose the latest stable version.
+      "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
+          #
+          # You should only set `parameterServerConfig.acceleratorConfig` if
+          # `parameterServerType` is set to a Compute Engine machine type. [Learn
+          # about restrictions on accelerator configurations for
+          # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+          #
+          # Set `parameterServerConfig.imageUri` only if you build a custom image for
+          # your parameter server. If `parameterServerConfig.imageUri` has not been
+          # set, AI Platform uses the value of `masterConfig.imageUri`.
+          # Learn more about [configuring custom
+          # containers](/ml-engine/docs/distributed-training-containers).
+        "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
+            # [Learn about restrictions on accelerator configurations for
+            # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+          "count": "A String", # The number of accelerators to attach to each machine running the job.
+          "type": "A String", # The type of accelerator to use.
+        },
+        "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
+            # Registry. Learn more about [configuring custom
+            # containers](/ml-engine/docs/distributed-training-containers).
+      },
+      "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. If not
+          # set, AI Platform uses the default stable version, 1.0. For more
+          # information, see the
+          # <a href="/ml-engine/docs/runtime-version-list">runtime version list</a>
+          # and
+          # <a href="/ml-engine/docs/versioning">how to manage runtime versions</a>.
       "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
           # and parameter servers.
       "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
@@ -203,42 +780,120 @@
           #   <dt>complex_model_m</dt>
           #   <dd>
           #   A machine with roughly twice the number of cores and roughly double the
-          #   memory of <code suppresswarning="true">complex_model_s</code>.
+          #   memory of <i>complex_model_s</i>.
           #   </dd>
           #   <dt>complex_model_l</dt>
           #   <dd>
           #   A machine with roughly twice the number of cores and roughly double the
-          #   memory of <code suppresswarning="true">complex_model_m</code>.
+          #   memory of <i>complex_model_m</i>.
           #   </dd>
           #   <dt>standard_gpu</dt>
           #   <dd>
-          #   A machine equivalent to <code suppresswarning="true">standard</code> that
-          #   also includes a
-          #   <a href="/ml-engine/docs/how-tos/using-gpus">
-          #   GPU that you can use in your trainer</a>.
+          #   A machine equivalent to <i>standard</i> that
+          #   also includes a single NVIDIA Tesla K80 GPU. See more about
+          #   <a href="/ml-engine/docs/tensorflow/using-gpus">using GPUs to
+          #   train your model</a>.
           #   </dd>
           #   <dt>complex_model_m_gpu</dt>
           #   <dd>
-          #   A machine equivalent to
-          #   <code suppresswarning="true">complex_model_m</code> that also includes
-          #   four GPUs.
+          #   A machine equivalent to <i>complex_model_m</i> that also includes
+          #   four NVIDIA Tesla K80 GPUs.
+          #   </dd>
+          #   <dt>complex_model_l_gpu</dt>
+          #   <dd>
+          #   A machine equivalent to <i>complex_model_l</i> that also includes
+          #   eight NVIDIA Tesla K80 GPUs.
+          #   </dd>
+          #   <dt>standard_p100</dt>
+          #   <dd>
+          #   A machine equivalent to <i>standard</i> that
+          #   also includes a single NVIDIA Tesla P100 GPU.
+          #   </dd>
+          #   <dt>complex_model_m_p100</dt>
+          #   <dd>
+          #   A machine equivalent to <i>complex_model_m</i> that also includes
+          #   four NVIDIA Tesla P100 GPUs.
+          #   </dd>
+          #   <dt>standard_v100</dt>
+          #   <dd>
+          #   A machine equivalent to <i>standard</i> that
+          #   also includes a single NVIDIA Tesla V100 GPU.
+          #   </dd>
+          #   <dt>large_model_v100</dt>
+          #   <dd>
+          #   A machine equivalent to <i>large_model</i> that
+          #   also includes a single NVIDIA Tesla V100 GPU.
+          #   </dd>
+          #   <dt>complex_model_m_v100</dt>
+          #   <dd>
+          #   A machine equivalent to <i>complex_model_m</i> that
+          #   also includes four NVIDIA Tesla V100 GPUs.
+          #   </dd>
+          #   <dt>complex_model_l_v100</dt>
+          #   <dd>
+          #   A machine equivalent to <i>complex_model_l</i> that
+          #   also includes eight NVIDIA Tesla V100 GPUs.
+          #   </dd>
+          #   <dt>cloud_tpu</dt>
+          #   <dd>
+          #   A TPU VM including one Cloud TPU. See more about
+          #   <a href="/ml-engine/docs/tensorflow/using-tpus">using TPUs to train
+          #   your model</a>.
           #   </dd>
           # </dl>
           #
+          # You may also use certain Compute Engine machine types directly in this
+          # field. The following types are supported:
+          #
+          # - `n1-standard-4`
+          # - `n1-standard-8`
+          # - `n1-standard-16`
+          # - `n1-standard-32`
+          # - `n1-standard-64`
+          # - `n1-standard-96`
+          # - `n1-highmem-2`
+          # - `n1-highmem-4`
+          # - `n1-highmem-8`
+          # - `n1-highmem-16`
+          # - `n1-highmem-32`
+          # - `n1-highmem-64`
+          # - `n1-highmem-96`
+          # - `n1-highcpu-16`
+          # - `n1-highcpu-32`
+          # - `n1-highcpu-64`
+          # - `n1-highcpu-96`
+          #
+          # See more about [using Compute Engine machine
+          # types](/ml-engine/docs/tensorflow/machine-types#compute-engine-machine-types).
+          #
           # You must set this value when `scaleTier` is set to `CUSTOM`.
       "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
         "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
             # the specified hyperparameters.
             #
             # Defaults to one.
-        "hyperparameterMetricTag": "A String", # Optional. The Tensorflow summary tag name to use for optimizing trials. For
-            # current versions of Tensorflow, this tag name should exactly match what is
-            # shown in Tensorboard, including all scopes.  For versions of Tensorflow
-            # prior to 0.12, this should be only the tag passed to tf.Summary.
-            # By default, "training/hptuning/metric" will be used.
+        "goal": "A String", # Required. The type of goal to use for tuning. Available types are
+            # `MAXIMIZE` and `MINIMIZE`.
+            #
+            # Defaults to `MAXIMIZE`.
+        "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter
+            # tuning job.
+            # Uses the default AI Platform hyperparameter tuning
+            # algorithm if unspecified.
+        "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing
+            # the hyperparameter tuning job. You can specify this field to override the
+            # default failing criteria for AI Platform hyperparameter tuning jobs.
+            #
+            # Defaults to zero, which means the service decides when a hyperparameter
+            # job should fail.
+        "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
+            # early stopping.
+        "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to
+            # continue with. The job id will be used to find the corresponding vizier
+            # study guid and resume the study.
         "params": [ # Required. The set of parameters to tune.
           { # Represents a single hyperparameter to optimize.
-            "maxValue": 3.14, # Required if typeis `DOUBLE` or `INTEGER`. This field
+            "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
                 # should be unset if type is `CATEGORICAL`. This value should be integers if
                 # type is `INTEGER`.
             "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
@@ -263,10 +918,11 @@
                 # parameters (e.g., `UNIT_LINEAR_SCALE`).
           },
         ],
-        "goal": "A String", # Required. The type of goal to use for tuning. Available types are
-            # `MAXIMIZE` and `MINIMIZE`.
-            #
-            # Defaults to `MAXIMIZE`.
+        "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For
+            # current versions of TensorFlow, this tag name should exactly match what is
+            # shown in TensorBoard, including all scopes.  For versions of TensorFlow
+            # prior to 0.12, this should be only the tag passed to tf.Summary.
+            # By default, "training/hptuning/metric" will be used.
         "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
             # You can reduce the time it takes to perform hyperparameter tuning by adding
            # trials in parallel. However, each trial only benefits from the information
@@ -279,13 +935,19 @@
             # Defaults to one.
       },
       "region": "A String", # Required. The Google Compute Engine region to run the training job in.
+          # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+          # for AI Platform services.
       "args": [ # Optional. Command line arguments to pass to the program.
         "A String",
       ],
       "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
+      "pythonVersion": "A String", # Optional. The version of Python used in training. If not set, the default
+          # version is '2.7'. Python '3.5' is available when `runtime_version` is set
+          # to '1.4' and above. Python '2.7' works with all supported
+          # <a href="/ml-engine/docs/runtime-version-list">runtime versions</a>.
       "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
           # and other data needed for training. This path is passed to your TensorFlow
-          # program as the 'job_dir' command-line argument. The benefit of specifying
+          # program as the '--job-dir' command-line argument. The benefit of specifying
           # this field is that Cloud ML validates the path for use in training.
       "packageUris": [ # Required. The Google Cloud Storage location of the packages with
           # the training program and any additional dependencies.
@@ -297,32 +959,198 @@
           #
           # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
           # set this value, you must also set `worker_type`.
+          #
+          # The default value is zero.
       "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
           # job's parameter server.
           #
           # The supported values are the same as those described in the entry for
           # `master_type`.
           #
+          # This value must be consistent with the category of machine type that
+          # `masterType` uses. In other words, both must be AI Platform machine
+          # types or both must be Compute Engine machine types.
+          #
           # This value must be present when `scaleTier` is set to `CUSTOM` and
           # `parameter_server_count` is greater than zero.
+      "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
+          #
+          # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
+          # to a Compute Engine machine type. [Learn about restrictions on accelerator
+          # configurations for
+          # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+          #
+          # Set `workerConfig.imageUri` only if you build a custom image for your
+          # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
+          # the value of `masterConfig.imageUri`. Learn more about
+          # [configuring custom
+          # containers](/ml-engine/docs/distributed-training-containers).
+        "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
+            # [Learn about restrictions on accelerator configurations for
+            # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+          "count": "A String", # The number of accelerators to attach to each machine running the job.
+          "type": "A String", # The type of accelerator to use.
+        },
+        "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
+            # Registry. Learn more about [configuring custom
+            # containers](/ml-engine/docs/distributed-training-containers).
+      },
+      "maxRunningTime": "A String", # Optional. The maximum job running time. The default is 7 days.
+      "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
+          #
+          # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
+          # to a Compute Engine machine type. Learn about [restrictions on accelerator
+          # configurations for
+          # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+          #
+          # Set `masterConfig.imageUri` only if you build a custom image. Only one of
+          # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more about
+          # [configuring custom
+          # containers](/ml-engine/docs/distributed-training-containers).
+        "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
+            # [Learn about restrictions on accelerator configurations for
+            # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+          "count": "A String", # The number of accelerators to attach to each machine running the job.
+          "type": "A String", # The type of accelerator to use.
+        },
+        "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
+            # Registry. Learn more about [configuring custom
+            # containers](/ml-engine/docs/distributed-training-containers).
+      },
       "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
           # job. Each replica in the cluster will be of the type specified in
           # `parameter_server_type`.
           #
           # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
           # set this value, you must also set `parameter_server_type`.
+          #
+          # The default value is zero.
+    },
+    "jobId": "A String", # Required. The user-specified id of the job.
+    "labels": { # Optional. One or more labels that you can add, to organize your jobs.
+        # Each label is a key-value pair, where both the key and the value are
+        # arbitrary strings that you supply.
+        # For more information, see the documentation on
+        # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+      "a_key": "A String",
+    },
+    "state": "A String", # Output only. The detailed state of a job.
+    "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+        # prevent simultaneous updates of a job from overwriting each other.
+        # It is strongly suggested that systems make use of the `etag` in the
+        # read-modify-write cycle to perform job updates in order to avoid race
+        # conditions: An `etag` is returned in the response to `GetJob`, and
+        # systems are expected to put that etag in the request to `UpdateJob` to
+        # ensure that their change will be applied to the same version of the job.
+    "startTime": "A String", # Output only. When the job processing was started.
+    "endTime": "A String", # Output only. When the job processing was completed.
+    "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
+      "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
+      "nodeHours": 3.14, # Node hours used by the batch prediction job.
+      "predictionCount": "A String", # The number of generated predictions.
+      "errorCount": "A String", # The number of data instances which resulted in errors.
+    },
+    "createTime": "A String", # Output only. When the job was created.
+  }</pre>
+</div>
+
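+<p>A minimal, hypothetical sketch of submitting a training job with this
+method via the google-api-python-client library (the project id, bucket
+path, job id, and module name are placeholders, not values defined by this
+API):</p>
+<pre>
+from googleapiclient import discovery
+
+# Build a client for the ML Engine v1 API (uses ambient credentials).
+ml = discovery.build('ml', 'v1')
+
+body = {
+    'jobId': 'my_training_job',  # placeholder
+    'trainingInput': {
+        'scaleTier': 'BASIC',
+        'packageUris': ['gs://my-bucket/trainer-0.1.tar.gz'],  # placeholder
+        'pythonModule': 'trainer.task',  # placeholder
+        'region': 'us-central1',
+    },
+}
+response = ml.projects().jobs().create(
+    parent='projects/my-project', body=body).execute()
+</pre>
+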
+<div class="method">
+    <code class="details" id="get">get(name, x__xgafv=None)</code>
+  <pre>Describes a job.
+
+Args:
+  name: string, Required. The name of the job to get the description of. (required)
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Represents a training or prediction job.
+    "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+    "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
+      "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
+          # Only set for hyperparameter tuning jobs.
+      "trials": [ # Results for individual Hyperparameter trials.
+          # Only set for hyperparameter tuning jobs.
+        { # Represents the result of a single hyperparameter tuning trial from a
+            # training job. The TrainingOutput object that is returned on successful
+            # completion of a training job with hyperparameter tuning includes a list
+            # of HyperparameterOutput objects, one for each successful trial.
+          "hyperparameters": { # The hyperparameters given to this trial.
+            "a_key": "A String",
+          },
+          "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
+            "trainingStep": "A String", # The global training step for this metric.
+            "objectiveValue": 3.14, # The objective value at this training step.
+          },
+          "allMetrics": [ # All recorded object metrics for this trial. This field is not currently
+              # populated.
+            { # An observed value of a metric.
+              "trainingStep": "A String", # The global training step for this metric.
+              "objectiveValue": 3.14, # The objective value at this training step.
+            },
+          ],
+          "isTrialStoppedEarly": True or False, # True if the trial is stopped early.
+          "trialId": "A String", # The trial id for these results.
+          "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
+              # Only set for trials of built-in algorithms jobs that have succeeded.
+            "framework": "A String", # Framework on which the built-in algorithm was trained.
+            "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
+                # saves the trained model. Only set for successful jobs that don't use
+                # hyperparameter tuning.
+            "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
+                # trained.
+            "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
+          },
+        },
+      ],
+      "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
+      "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job.
+      "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
+      "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning
+          # trials. See
+          # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
+          # for more information. Only set for hyperparameter tuning jobs.
+      "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
+          # Only set for built-in algorithms jobs.
+        "framework": "A String", # Framework on which the built-in algorithm was trained.
+        "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
+            # saves the trained model. Only set for successful jobs that don't use
+            # hyperparameter tuning.
+        "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
+            # trained.
+        "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
+      },
     },
     "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
       "modelName": "A String", # Use this field if you want to use the default version for the specified
           # model. The string must use the following format:
           #
-          # `"projects/<var>[YOUR_PROJECT]</var>/models/<var>[YOUR_MODEL]</var>"`
-      "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for this batch
-          # prediction. If not set, Google Cloud ML will pick the runtime version used
+          # `"projects/YOUR_PROJECT/models/YOUR_MODEL"`
+      "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch
+          # prediction. If not set, AI Platform will pick the runtime version used
           # during the CreateVersion request for this model version, or choose the
           # latest stable version when model version information is not available
           # such as when the model is specified by uri.
-      "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
+      "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for
+          # this job. Please refer to
+          # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
+          # for information about how to use signatures.
+          #
+          # Defaults to
+          # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants),
+          # which is "serving_default".
+      "batchSize": "A String", # Optional. Number of records per batch, defaults to 64.
+          # The service buffers batch_size records in memory before invoking
+          # one internal TensorFlow prediction call, so take the record size
+          # and available memory into consideration when setting this parameter.
+      "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain
+          # <a href="/storage/docs/gsutil/addlhelp/WildcardNames">wildcards</a>.
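+          # For example (placeholder path): `gs://my-bucket/instances/*.json`.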
+        "A String",
+      ],
       "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
           # Defaults to 10 if not specified.
       "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
@@ -333,15 +1161,353 @@
           # string is formatted the same way as `model_version`, with the addition
           # of the version information:
           #
-          # `"projects/<var>[YOUR_PROJECT]</var>/models/<var>YOUR_MODEL/versions/<var>[YOUR_VERSION]</var>"`
-      "inputPaths": [ # Required. The Google Cloud Storage location of the input data files.
-          # May contain wildcards.
+          # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"`
+      "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
+          # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+          # for AI Platform services.
+      "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON.
+    },
+    "trainingInput": { # Represents input parameters for a training job. When using the # Input parameters to create a training job.
+        # gcloud command to submit your training job, you can specify
+        # the input parameters as command-line arguments and/or in a YAML configuration
+        # file referenced from the --config command-line argument. For
+        # details, see the guide to
+        # <a href="/ml-engine/docs/tensorflow/training-jobs">submitting a training
+        # job</a>.
+      "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
+          # job's worker nodes.
+          #
+          # The supported values are the same as those described in the entry for
+          # `masterType`.
+          #
+          # This value must be consistent with the category of machine type that
+          # `masterType` uses. In other words, both must be AI Platform machine
+          # types or both must be Compute Engine machine types.
+          #
+          # If you use `cloud_tpu` for this value, see special instructions for
+          # [configuring a custom TPU
+          # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
+          #
+          # This value must be present when `scaleTier` is set to `CUSTOM` and
+          # `workerCount` is greater than zero.
+      "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
+          #
+          # You should only set `parameterServerConfig.acceleratorConfig` if
+          # `parameterServerType` is set to a Compute Engine machine type. [Learn
+          # about restrictions on accelerator configurations for
+          # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+          #
+          # Set `parameterServerConfig.imageUri` only if you build a custom image for
+          # your parameter server. If `parameterServerConfig.imageUri` has not been
+          # set, AI Platform uses the value of `masterConfig.imageUri`.
+          # Learn more about [configuring custom
+          # containers](/ml-engine/docs/distributed-training-containers).
+        "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
+            # [Learn about restrictions on accelerator configurations for
+            # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+          "count": "A String", # The number of accelerators to attach to each machine running the job.
+          "type": "A String", # The type of accelerator to use.
+        },
+        "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
+            # Registry. Learn more about [configuring custom
+            # containers](/ml-engine/docs/distributed-training-containers).
+      },
+      "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. If not
+          # set, AI Platform uses the default stable version, 1.0. For more
+          # information, see the
+          # <a href="/ml-engine/docs/runtime-version-list">runtime version list</a>
+          # and
+          # <a href="/ml-engine/docs/versioning">how to manage runtime versions</a>.
+      "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
+          # and parameter servers.
+      "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
+          # job's master worker.
+          #
+          # The following types are supported:
+          #
+          # <dl>
+          #   <dt>standard</dt>
+          #   <dd>
+          #   A basic machine configuration suitable for training simple models with
+          #   small to moderate datasets.
+          #   </dd>
+          #   <dt>large_model</dt>
+          #   <dd>
+          #   A machine with a lot of memory, specially suited for parameter servers
+          #   when your model is large (having many hidden layers or layers with very
+          #   large numbers of nodes).
+          #   </dd>
+          #   <dt>complex_model_s</dt>
+          #   <dd>
+          #   A machine suitable for the master and workers of the cluster when your
+          #   model requires more computation than the standard machine can handle
+          #   satisfactorily.
+          #   </dd>
+          #   <dt>complex_model_m</dt>
+          #   <dd>
+          #   A machine with roughly twice the number of cores and roughly double the
+          #   memory of <i>complex_model_s</i>.
+          #   </dd>
+          #   <dt>complex_model_l</dt>
+          #   <dd>
+          #   A machine with roughly twice the number of cores and roughly double the
+          #   memory of <i>complex_model_m</i>.
+          #   </dd>
+          #   <dt>standard_gpu</dt>
+          #   <dd>
+          #   A machine equivalent to <i>standard</i> that
+          #   also includes a single NVIDIA Tesla K80 GPU. See more about
+          #   <a href="/ml-engine/docs/tensorflow/using-gpus">using GPUs to
+          #   train your model</a>.
+          #   </dd>
+          #   <dt>complex_model_m_gpu</dt>
+          #   <dd>
+          #   A machine equivalent to <i>complex_model_m</i> that also includes
+          #   four NVIDIA Tesla K80 GPUs.
+          #   </dd>
+          #   <dt>complex_model_l_gpu</dt>
+          #   <dd>
+          #   A machine equivalent to <i>complex_model_l</i> that also includes
+          #   eight NVIDIA Tesla K80 GPUs.
+          #   </dd>
+          #   <dt>standard_p100</dt>
+          #   <dd>
+          #   A machine equivalent to <i>standard</i> that
+          #   also includes a single NVIDIA Tesla P100 GPU.
+          #   </dd>
+          #   <dt>complex_model_m_p100</dt>
+          #   <dd>
+          #   A machine equivalent to <i>complex_model_m</i> that also includes
+          #   four NVIDIA Tesla P100 GPUs.
+          #   </dd>
+          #   <dt>standard_v100</dt>
+          #   <dd>
+          #   A machine equivalent to <i>standard</i> that
+          #   also includes a single NVIDIA Tesla V100 GPU.
+          #   </dd>
+          #   <dt>large_model_v100</dt>
+          #   <dd>
+          #   A machine equivalent to <i>large_model</i> that
+          #   also includes a single NVIDIA Tesla V100 GPU.
+          #   </dd>
+          #   <dt>complex_model_m_v100</dt>
+          #   <dd>
+          #   A machine equivalent to <i>complex_model_m</i> that
+          #   also includes four NVIDIA Tesla V100 GPUs.
+          #   </dd>
+          #   <dt>complex_model_l_v100</dt>
+          #   <dd>
+          #   A machine equivalent to <i>complex_model_l</i> that
+          #   also includes eight NVIDIA Tesla V100 GPUs.
+          #   </dd>
+          #   <dt>cloud_tpu</dt>
+          #   <dd>
+          #   A TPU VM including one Cloud TPU. See more about
+          #   <a href="/ml-engine/docs/tensorflow/using-tpus">using TPUs to train
+          #   your model</a>.
+          #   </dd>
+          # </dl>
+          #
+          # You may also use certain Compute Engine machine types directly in this
+          # field. The following types are supported:
+          #
+          # - `n1-standard-4`
+          # - `n1-standard-8`
+          # - `n1-standard-16`
+          # - `n1-standard-32`
+          # - `n1-standard-64`
+          # - `n1-standard-96`
+          # - `n1-highmem-2`
+          # - `n1-highmem-4`
+          # - `n1-highmem-8`
+          # - `n1-highmem-16`
+          # - `n1-highmem-32`
+          # - `n1-highmem-64`
+          # - `n1-highmem-96`
+          # - `n1-highcpu-16`
+          # - `n1-highcpu-32`
+          # - `n1-highcpu-64`
+          # - `n1-highcpu-96`
+          #
+          # See more about [using Compute Engine machine
+          # types](/ml-engine/docs/tensorflow/machine-types#compute-engine-machine-types).
+          #
+          # You must set this value when `scaleTier` is set to `CUSTOM`.
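+          #
+          # For example, a hypothetical custom configuration (placeholder
+          # choices, for illustration only) might set `scaleTier` to `CUSTOM`,
+          # `masterType` to `complex_model_m`, and `workerType` to
+          # `standard_gpu`.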
+      "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
+        "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
+            # the specified hyperparameters.
+            #
+            # Defaults to one.
+        "goal": "A String", # Required. The type of goal to use for tuning. Available types are
+            # `MAXIMIZE` and `MINIMIZE`.
+            #
+            # Defaults to `MAXIMIZE`.
+        "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter
+            # tuning job.
+            # Uses the default AI Platform hyperparameter tuning
+            # algorithm if unspecified.
+        "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing
+            # the hyperparameter tuning job. You can specify this field to override the
+            # default failing criteria for AI Platform hyperparameter tuning jobs.
+            #
+            # Defaults to zero, which means the service decides when a hyperparameter
+            # job should fail.
+        "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
+            # early stopping.
+        "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to
+            # continue with. The job id will be used to find the corresponding vizier
+            # study guid and resume the study.
+        "params": [ # Required. The set of parameters to tune.
+          { # Represents a single hyperparameter to optimize.
+            "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
+                # should be unset if type is `CATEGORICAL`. This value should be an
+                # integer if type is `INTEGER`.
+            "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
+              "A String",
+            ],
+            "discreteValues": [ # Required if type is `DISCRETE`.
+                # A list of feasible points.
+                # The list should be in strictly increasing order. For instance, this
+                # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
+                # should not contain more than 1,000 values.
+              3.14,
+            ],
+            "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
+                # a HyperparameterSpec message. E.g., "learning_rate".
+            "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
+                # should be unset if type is `CATEGORICAL`. This value should be an
+                # integer if type is `INTEGER`.
+            "type": "A String", # Required. The type of the parameter.
+            "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
+                # Leave unset for categorical parameters.
+                # Some kind of scaling is strongly recommended for real or integral
+                # parameters (e.g., `UNIT_LINEAR_SCALE`).
+          },
+        ],
+        "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For
+            # current versions of TensorFlow, this tag name should exactly match what is
+            # shown in TensorBoard, including all scopes.  For versions of TensorFlow
+            # prior to 0.12, this should be only the tag passed to tf.Summary.
+            # By default, "training/hptuning/metric" will be used.
+        "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
+            # You can reduce the time it takes to perform hyperparameter tuning by adding
+            # trials in parallel. However, each trial only benefits from the information
+            # gained in completed trials. That means that a trial does not get access to
+            # the results of trials running at the same time, which could reduce the
+            # quality of the overall optimization.
+            #
+            # Each trial will use the same scale tier and machine types.
+            #
+            # Defaults to one.
+      },
+      "region": "A String", # Required. The Google Compute Engine region to run the training job in.
+          # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+          # for AI Platform services.
+      "args": [ # Optional. Command line arguments to pass to the program.
         "A String",
       ],
+      "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
+      "pythonVersion": "A String", # Optional. The version of Python used in training. If not set, the default
+          # version is '2.7'. Python '3.5' is available when `runtime_version` is set
+          # to '1.4' and above. Python '2.7' works with all supported
+          # <a href="/ml-engine/docs/runtime-version-list">runtime versions</a>.
+      "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
+          # and other data needed for training. This path is passed to your TensorFlow
+          # program as the '--job-dir' command-line argument. The benefit of specifying
+          # this field is that AI Platform validates the path for use in training.
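+          # For example (placeholder path): `gs://my-bucket/my-job-dir`.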
+      "packageUris": [ # Required. The Google Cloud Storage location of the packages with
+          # the training program and any additional dependencies.
+          # The maximum number of package URIs is 100.
+        "A String",
+      ],
+      "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
+          # replica in the cluster will be of the type specified in `worker_type`.
+          #
+          # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
+          # set this value, you must also set `worker_type`.
+          #
+          # The default value is zero.
+      "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
+          # job's parameter server.
+          #
+          # The supported values are the same as those described in the entry for
+          # `master_type`.
+          #
+          # This value must be consistent with the category of machine type that
+          # `masterType` uses. In other words, both must be AI Platform machine
+          # types or both must be Compute Engine machine types.
+          #
+          # This value must be present when `scaleTier` is set to `CUSTOM` and
+          # `parameter_server_count` is greater than zero.
+      "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
+          #
+          # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
+          # to a Compute Engine machine type. [Learn about restrictions on accelerator
+          # configurations for
+          # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+          #
+          # Set `workerConfig.imageUri` only if you build a custom image for your
+          # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
+          # the value of `masterConfig.imageUri`. Learn more about
+          # [configuring custom
+          # containers](/ml-engine/docs/distributed-training-containers).
+        "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
+            # [Learn about restrictions on accelerator configurations for
+            # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+          "count": "A String", # The number of accelerators to attach to each machine running the job.
+          "type": "A String", # The type of accelerator to use.
+        },
+        "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
+            # Registry. Learn more about [configuring custom
+            # containers](/ml-engine/docs/distributed-training-containers).
+      },
+      "maxRunningTime": "A String", # Optional. The maximum job running time. The default is 7 days.
+      "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
+          #
+          # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
+          # to a Compute Engine machine type. Learn about [restrictions on accelerator
+          # configurations for
+          # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+          #
+          # Set `masterConfig.imageUri` only if you build a custom image. Only one of
+          # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more about
+          # [configuring custom
+          # containers](/ml-engine/docs/distributed-training-containers).
+        "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
+            # [Learn about restrictions on accelerator configurations for
+            # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+          "count": "A String", # The number of accelerators to attach to each machine running the job.
+          "type": "A String", # The type of accelerator to use.
+        },
+        "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
+            # Registry. Learn more about [configuring custom
+            # containers](/ml-engine/docs/distributed-training-containers).
+      },
+      "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
+          # job. Each replica in the cluster will be of the type specified in
+          # `parameter_server_type`.
+          #
+          # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
+          # set this value, you must also set `parameter_server_type`.
+          #
+          # The default value is zero.
     },
-    "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
     "jobId": "A String", # Required. The user-specified id of the job.
+    "labels": { # Optional. One or more labels that you can add, to organize your jobs.
+        # Each label is a key-value pair, where both the key and the value are
+        # arbitrary strings that you supply.
+        # For more information, see the documentation on
+        # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+      "a_key": "A String",
+    },
     "state": "A String", # Output only. The detailed state of a job.
+    "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+        # prevent simultaneous updates of a job from overwriting each other.
+        # It is strongly suggested that systems make use of the `etag` in the
+        # read-modify-write cycle to perform job updates in order to avoid race
+        # conditions: An `etag` is returned in the response to `GetJob`, and
+        # systems are expected to put that etag in the request to `UpdateJob` to
+        # ensure that their change will be applied to the same version of the job.
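+        #
+        # A sketch of that cycle: call `GetJob`, copy the returned `etag`
+        # into the Job you send to `UpdateJob`, and retry from the read if
+        # the update fails with a conflict.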
     "startTime": "A String", # Output only. When the job processing was started.
     "endTime": "A String", # Output only. When the job processing was completed.
     "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
@@ -351,243 +1517,18 @@
       "errorCount": "A String", # The number of data instances which resulted in errors.
     },
     "createTime": "A String", # Output only. When the job was created.
-  }
-
-  x__xgafv: string, V1 error format.
-    Allowed values
-      1 - v1 error format
-      2 - v2 error format
-
-Returns:
-  An object of the form:
-
-    { # Represents a training or prediction job.
-      "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
-        "trials": [ # Results for individual Hyperparameter trials.
-            # Only set for hyperparameter tuning jobs.
-          { # Represents the result of a single hyperparameter tuning trial from a
-              # training job. The TrainingOutput object that is returned on successful
-              # completion of a training job with hyperparameter tuning includes a list
-              # of HyperparameterOutput objects, one for each successful trial.
-            "hyperparameters": { # The hyperparameters given to this trial.
-              "a_key": "A String",
-            },
-            "trialId": "A String", # The trial id for these results.
-            "allMetrics": [ # All recorded object metrics for this trial.
-              { # An observed value of a metric.
-                "trainingStep": "A String", # The global training step for this metric.
-                "objectiveValue": 3.14, # The objective value at this training step.
-              },
-            ],
-            "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
-              "trainingStep": "A String", # The global training step for this metric.
-              "objectiveValue": 3.14, # The objective value at this training step.
-            },
-          },
-        ],
-        "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
-        "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
-        "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
-            # Only set for hyperparameter tuning jobs.
-      },
-      "trainingInput": { # Represents input parameters for a training job. # Input parameters to create a training job.
-        "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
-            # job's worker nodes.
-            #
-            # The supported values are the same as those described in the entry for
-            # `masterType`.
-            #
-            # This value must be present when `scaleTier` is set to `CUSTOM` and
-            # `workerCount` is greater than zero.
-        "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for training.  If not
-            # set, Google Cloud ML will choose the latest stable version.
-        "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
-            # and parameter servers.
-        "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
-            # job's master worker.
-            #
-            # The following types are supported:
-            #
-            # <dl>
-            #   <dt>standard</dt>
-            #   <dd>
-            #   A basic machine configuration suitable for training simple models with
-            #   small to moderate datasets.
-            #   </dd>
-            #   <dt>large_model</dt>
-            #   <dd>
-            #   A machine with a lot of memory, specially suited for parameter servers
-            #   when your model is large (having many hidden layers or layers with very
-            #   large numbers of nodes).
-            #   </dd>
-            #   <dt>complex_model_s</dt>
-            #   <dd>
-            #   A machine suitable for the master and workers of the cluster when your
-            #   model requires more computation than the standard machine can handle
-            #   satisfactorily.
-            #   </dd>
-            #   <dt>complex_model_m</dt>
-            #   <dd>
-            #   A machine with roughly twice the number of cores and roughly double the
-            #   memory of <code suppresswarning="true">complex_model_s</code>.
-            #   </dd>
-            #   <dt>complex_model_l</dt>
-            #   <dd>
-            #   A machine with roughly twice the number of cores and roughly double the
-            #   memory of <code suppresswarning="true">complex_model_m</code>.
-            #   </dd>
-            #   <dt>standard_gpu</dt>
-            #   <dd>
-            #   A machine equivalent to <code suppresswarning="true">standard</code> that
-            #   also includes a
-            #   <a href="/ml-engine/docs/how-tos/using-gpus">
-            #   GPU that you can use in your trainer</a>.
-            #   </dd>
-            #   <dt>complex_model_m_gpu</dt>
-            #   <dd>
-            #   A machine equivalent to
-            #   <code suppresswarning="true">complex_model_m</code> that also includes
-            #   four GPUs.
-            #   </dd>
-            # </dl>
-            #
-            # You must set this value when `scaleTier` is set to `CUSTOM`.
-        "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
-          "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
-              # the specified hyperparameters.
-              #
-              # Defaults to one.
-          "hyperparameterMetricTag": "A String", # Optional. The Tensorflow summary tag name to use for optimizing trials. For
-              # current versions of Tensorflow, this tag name should exactly match what is
-              # shown in Tensorboard, including all scopes.  For versions of Tensorflow
-              # prior to 0.12, this should be only the tag passed to tf.Summary.
-              # By default, "training/hptuning/metric" will be used.
-          "params": [ # Required. The set of parameters to tune.
-            { # Represents a single hyperparameter to optimize.
-              "maxValue": 3.14, # Required if typeis `DOUBLE` or `INTEGER`. This field
-                  # should be unset if type is `CATEGORICAL`. This value should be integers if
-                  # type is `INTEGER`.
-              "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
-                "A String",
-              ],
-              "discreteValues": [ # Required if type is `DISCRETE`.
-                  # A list of feasible points.
-                  # The list should be in strictly increasing order. For instance, this
-                  # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
-                  # should not contain more than 1,000 values.
-                3.14,
-              ],
-              "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
-                  # a HyperparameterSpec message. E.g., "learning_rate".
-              "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
-                  # should be unset if type is `CATEGORICAL`. This value should be integers if
-                  # type is INTEGER.
-              "type": "A String", # Required. The type of the parameter.
-              "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
-                  # Leave unset for categorical parameters.
-                  # Some kind of scaling is strongly recommended for real or integral
-                  # parameters (e.g., `UNIT_LINEAR_SCALE`).
-            },
-          ],
-          "goal": "A String", # Required. The type of goal to use for tuning. Available types are
-              # `MAXIMIZE` and `MINIMIZE`.
-              #
-              # Defaults to `MAXIMIZE`.
-          "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
-              # You can reduce the time it takes to perform hyperparameter tuning by adding
-              # trials in parallel. However, each trail only benefits from the information
-              # gained in completed trials. That means that a trial does not get access to
-              # the results of trials running at the same time, which could reduce the
-              # quality of the overall optimization.
-              #
-              # Each trial will use the same scale tier and machine types.
-              #
-              # Defaults to one.
-        },
-        "region": "A String", # Required. The Google Compute Engine region to run the training job in.
-        "args": [ # Optional. Command line arguments to pass to the program.
-          "A String",
-        ],
-        "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
-        "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
-            # and other data needed for training. This path is passed to your TensorFlow
-            # program as the 'job_dir' command-line argument. The benefit of specifying
-            # this field is that Cloud ML validates the path for use in training.
-        "packageUris": [ # Required. The Google Cloud Storage location of the packages with
-            # the training program and any additional dependencies.
-            # The maximum number of package URIs is 100.
-          "A String",
-        ],
-        "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
-            # replica in the cluster will be of the type specified in `worker_type`.
-            #
-            # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
-            # set this value, you must also set `worker_type`.
-        "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
-            # job's parameter server.
-            #
-            # The supported values are the same as those described in the entry for
-            # `master_type`.
-            #
-            # This value must be present when `scaleTier` is set to `CUSTOM` and
-            # `parameter_server_count` is greater than zero.
-        "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
-            # job. Each replica in the cluster will be of the type specified in
-            # `parameter_server_type`.
-            #
-            # This value can only be used when `scale_tier` is set to `CUSTOM`.If you
-            # set this value, you must also set `parameter_server_type`.
-      },
-      "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
-        "modelName": "A String", # Use this field if you want to use the default version for the specified
-            # model. The string must use the following format:
-            #
-            # `"projects/<var>[YOUR_PROJECT]</var>/models/<var>[YOUR_MODEL]</var>"`
-        "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for this batch
-            # prediction. If not set, Google Cloud ML will pick the runtime version used
-            # during the CreateVersion request for this model version, or choose the
-            # latest stable version when model version information is not available
-            # such as when the model is specified by uri.
-        "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
-        "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
-            # Defaults to 10 if not specified.
-        "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
-            # the model to use.
-        "outputPath": "A String", # Required. The output Google Cloud Storage location.
-        "dataFormat": "A String", # Required. The format of the input data files.
-        "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
-            # string is formatted the same way as `model_version`, with the addition
-            # of the version information:
-            #
-            # `"projects/<var>[YOUR_PROJECT]</var>/models/<var>YOUR_MODEL/versions/<var>[YOUR_VERSION]</var>"`
-        "inputPaths": [ # Required. The Google Cloud Storage location of the input data files.
-            # May contain wildcards.
-          "A String",
-        ],
-      },
-      "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
-      "jobId": "A String", # Required. The user-specified id of the job.
-      "state": "A String", # Output only. The detailed state of a job.
-      "startTime": "A String", # Output only. When the job processing was started.
-      "endTime": "A String", # Output only. When the job processing was completed.
-      "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
-        "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
-        "nodeHours": 3.14, # Node hours used by the batch prediction job.
-        "predictionCount": "A String", # The number of generated predictions.
-        "errorCount": "A String", # The number of data instances which resulted in errors.
-      },
-      "createTime": "A String", # Output only. When the job was created.
-    }</pre>
+  }</pre>
 </div>
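+
+<p>A hypothetical sketch of checking a job's state with this method (the
+project and job ids are placeholders):</p>
+<pre>
+from googleapiclient import discovery
+
+ml = discovery.build('ml', 'v1')
+job = ml.projects().jobs().get(
+    name='projects/my-project/jobs/my_training_job').execute()
+print(job['state'])  # e.g. QUEUED, RUNNING, SUCCEEDED
+</pre>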
 
 <div class="method">
-    <code class="details" id="get">get(name, x__xgafv=None)</code>
-  <pre>Describes a job.
+    <code class="details" id="getIamPolicy">getIamPolicy(resource, x__xgafv=None)</code>
+  <pre>Gets the access control policy for a resource.
+Returns an empty policy if the resource exists and does not have a policy
+set.
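+
+For example, a hypothetical call with a service object built via
+googleapiclient.discovery.build('ml', 'v1') (the resource path below is a
+placeholder):
+
+  policy = ml.projects().jobs().getIamPolicy(
+      resource='projects/my-project/jobs/my_job').execute()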
 
 Args:
-  name: string, Required. The name of the job to get the description of.
-
-Authorization: requires `Viewer` role on the parent project. (required)
+  resource: string, REQUIRED: The resource for which the policy is being requested.
+See the operation documentation for the appropriate value for this field. (required)
   x__xgafv: string, V1 error format.
     Allowed values
       1 - v1 error format
@@ -596,239 +1537,212 @@
 Returns:
   An object of the form:
 
-    { # Represents a training or prediction job.
-      "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
-        "trials": [ # Results for individual Hyperparameter trials.
-            # Only set for hyperparameter tuning jobs.
-          { # Represents the result of a single hyperparameter tuning trial from a
-              # training job. The TrainingOutput object that is returned on successful
-              # completion of a training job with hyperparameter tuning includes a list
-              # of HyperparameterOutput objects, one for each successful trial.
-            "hyperparameters": { # The hyperparameters given to this trial.
-              "a_key": "A String",
-            },
-            "trialId": "A String", # The trial id for these results.
-            "allMetrics": [ # All recorded object metrics for this trial.
-              { # An observed value of a metric.
-                "trainingStep": "A String", # The global training step for this metric.
-                "objectiveValue": 3.14, # The objective value at this training step.
-              },
+    { # Defines an Identity and Access Management (IAM) policy. It is used to
+      # specify access control policies for Cloud Platform resources.
+      #
+      #
+      # A `Policy` consists of a list of `bindings`. A `binding` binds a list of
+      # `members` to a `role`, where the members can be user accounts, Google groups,
+      # Google domains, and service accounts. A `role` is a named list of permissions
+      # defined by IAM.
+      #
+      # **JSON Example**
+      #
+      #     {
+      #       "bindings": [
+      #         {
+      #           "role": "roles/owner",
+      #           "members": [
+      #             "user:mike@example.com",
+      #             "group:admins@example.com",
+      #             "domain:google.com",
+      #             "serviceAccount:my-other-app@appspot.gserviceaccount.com"
+      #           ]
+      #         },
+      #         {
+      #           "role": "roles/viewer",
+      #           "members": ["user:sean@example.com"]
+      #         }
+      #       ]
+      #     }
+      #
+      # **YAML Example**
+      #
+      #     bindings:
+      #     - members:
+      #       - user:mike@example.com
+      #       - group:admins@example.com
+      #       - domain:google.com
+      #       - serviceAccount:my-other-app@appspot.gserviceaccount.com
+      #       role: roles/owner
+      #     - members:
+      #       - user:sean@example.com
+      #       role: roles/viewer
+      #
+      #
+      # For a description of IAM and its features, see the
+      # [IAM developer's guide](https://cloud.google.com/iam/docs).
+    "bindings": [ # Associates a list of `members` to a `role`.
+        # `bindings` with no members will result in an error.
+      { # Associates `members` with a `role`.
+        "role": "A String", # Role that is assigned to `members`.
+            # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
+        "members": [ # Specifies the identities requesting access for a Cloud Platform resource.
+            # `members` can have the following values:
+            #
+            # * `allUsers`: A special identifier that represents anyone who is
+            #    on the internet, with or without a Google account.
+            #
+            # * `allAuthenticatedUsers`: A special identifier that represents anyone
+            #    who is authenticated with a Google account or a service account.
+            #
+            # * `user:{emailid}`: An email address that represents a specific Google
+            #    account. For example, `alice@gmail.com`.
+            #
+            #
+            # * `serviceAccount:{emailid}`: An email address that represents a service
+            #    account. For example, `my-other-app@appspot.gserviceaccount.com`.
+            #
+            # * `group:{emailid}`: An email address that represents a Google group.
+            #    For example, `admins@example.com`.
+            #
+            #
+            # * `domain:{domain}`: The G Suite domain (primary) that represents all the
+            #    users of that domain. For example, `google.com` or `example.com`.
+            #
+          "A String",
+        ],
+        "condition": { # Represents an expression text. Example: # The condition that is associated with this binding.
+            # NOTE: An unsatisfied condition will not allow user access via current
+            # binding. Different bindings, including their conditions, are examined
+            # independently.
+            #
+            #     title: "User account presence"
+            #     description: "Determines whether the request has a user account"
+            #     expression: "size(request.user) > 0"
+          "description": "A String", # An optional description of the expression. This is a longer text which
+              # describes the expression, e.g. when it is hovered over in a UI.
+          "expression": "A String", # Textual representation of an expression in
+              # Common Expression Language syntax.
+              #
+              # The application context of the containing message determines which
+              # well-known feature set of CEL is supported.
+          "location": "A String", # An optional string indicating the location of the expression for error
+              # reporting, e.g. a file name and a position in the file.
+          "title": "A String", # An optional title for the expression, i.e. a short string describing
+              # its purpose. This can be used, e.g., in UIs that allow users to
+              # enter the expression.
+        },
+      },
+    ],
+    "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+        # prevent simultaneous updates of a policy from overwriting each other.
+        # It is strongly suggested that systems make use of the `etag` in the
+        # read-modify-write cycle to perform policy updates in order to avoid race
+        # conditions: An `etag` is returned in the response to `getIamPolicy`, and
+        # systems are expected to put that etag in the request to `setIamPolicy` to
+        # ensure that their change will be applied to the same version of the policy.
+        #
+        # If no `etag` is provided in the call to `setIamPolicy`, then the existing
+        # policy is overwritten blindly.
+    "version": 42, # Deprecated.
+    "auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
+      { # Specifies the audit configuration for a service.
+          # The configuration determines which permission types are logged, and what
+          # identities, if any, are exempted from logging.
+          # An AuditConfig must have one or more AuditLogConfigs.
+          #
+          # If there are AuditConfigs for both `allServices` and a specific service,
+          # the union of the two AuditConfigs is used for that service: the log_types
+          # specified in each AuditConfig are enabled, and the exempted_members in each
+          # AuditLogConfig are exempted.
+          #
+          # Example Policy with multiple AuditConfigs:
+          #
+          #     {
+          #       "audit_configs": [
+          #         {
+          #           "service": "allServices"
+          #           "audit_log_configs": [
+          #             {
+          #               "log_type": "DATA_READ",
+          #               "exempted_members": [
+          #                 "user:foo@gmail.com"
+          #               ]
+          #             },
+          #             {
+          #               "log_type": "DATA_WRITE",
+          #             },
+          #             {
+          #               "log_type": "ADMIN_READ",
+          #             }
+          #           ]
+          #         },
+          #         {
+          #           "service": "fooservice.googleapis.com"
+          #           "audit_log_configs": [
+          #             {
+          #               "log_type": "DATA_READ",
+          #             },
+          #             {
+          #               "log_type": "DATA_WRITE",
+          #               "exempted_members": [
+          #                 "user:bar@gmail.com"
+          #               ]
+          #             }
+          #           ]
+          #         }
+          #       ]
+          #     }
+          #
+          # For fooservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
+          # logging. It also exempts foo@gmail.com from DATA_READ logging, and
+          # bar@gmail.com from DATA_WRITE logging.
+        "auditLogConfigs": [ # The configuration for logging of each type of permission.
+          { # Provides the configuration for logging a type of permissions.
+              # Example:
+              #
+              #     {
+              #       "audit_log_configs": [
+              #         {
+              #           "log_type": "DATA_READ",
+              #           "exempted_members": [
+              #             "user:foo@gmail.com"
+              #           ]
+              #         },
+              #         {
+              #           "log_type": "DATA_WRITE",
+              #         }
+              #       ]
+              #     }
+              #
+              # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
+              # foo@gmail.com from DATA_READ logging.
+            "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
+                # permission.
+                # Follows the same format of Binding.members.
+              "A String",
             ],
-            "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
-              "trainingStep": "A String", # The global training step for this metric.
-              "objectiveValue": 3.14, # The objective value at this training step.
-            },
+            "logType": "A String", # The log type that this config enables.
           },
         ],
-        "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
-        "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
-        "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
-            # Only set for hyperparameter tuning jobs.
+        "service": "A String", # Specifies a service that will be enabled for audit logging.
+            # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
+            # `allServices` is a special value that covers all services.
       },
-      "trainingInput": { # Represents input parameters for a training job. # Input parameters to create a training job.
-        "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
-            # job's worker nodes.
-            #
-            # The supported values are the same as those described in the entry for
-            # `masterType`.
-            #
-            # This value must be present when `scaleTier` is set to `CUSTOM` and
-            # `workerCount` is greater than zero.
-        "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for training.  If not
-            # set, Google Cloud ML will choose the latest stable version.
-        "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
-            # and parameter servers.
-        "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
-            # job's master worker.
-            #
-            # The following types are supported:
-            #
-            # <dl>
-            #   <dt>standard</dt>
-            #   <dd>
-            #   A basic machine configuration suitable for training simple models with
-            #   small to moderate datasets.
-            #   </dd>
-            #   <dt>large_model</dt>
-            #   <dd>
-            #   A machine with a lot of memory, specially suited for parameter servers
-            #   when your model is large (having many hidden layers or layers with very
-            #   large numbers of nodes).
-            #   </dd>
-            #   <dt>complex_model_s</dt>
-            #   <dd>
-            #   A machine suitable for the master and workers of the cluster when your
-            #   model requires more computation than the standard machine can handle
-            #   satisfactorily.
-            #   </dd>
-            #   <dt>complex_model_m</dt>
-            #   <dd>
-            #   A machine with roughly twice the number of cores and roughly double the
-            #   memory of <code suppresswarning="true">complex_model_s</code>.
-            #   </dd>
-            #   <dt>complex_model_l</dt>
-            #   <dd>
-            #   A machine with roughly twice the number of cores and roughly double the
-            #   memory of <code suppresswarning="true">complex_model_m</code>.
-            #   </dd>
-            #   <dt>standard_gpu</dt>
-            #   <dd>
-            #   A machine equivalent to <code suppresswarning="true">standard</code> that
-            #   also includes a
-            #   <a href="/ml-engine/docs/how-tos/using-gpus">
-            #   GPU that you can use in your trainer</a>.
-            #   </dd>
-            #   <dt>complex_model_m_gpu</dt>
-            #   <dd>
-            #   A machine equivalent to
-            #   <code suppresswarning="true">complex_model_m</code> that also includes
-            #   four GPUs.
-            #   </dd>
-            # </dl>
-            #
-            # You must set this value when `scaleTier` is set to `CUSTOM`.
-        "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
-          "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
-              # the specified hyperparameters.
-              #
-              # Defaults to one.
-          "hyperparameterMetricTag": "A String", # Optional. The Tensorflow summary tag name to use for optimizing trials. For
-              # current versions of Tensorflow, this tag name should exactly match what is
-              # shown in Tensorboard, including all scopes.  For versions of Tensorflow
-              # prior to 0.12, this should be only the tag passed to tf.Summary.
-              # By default, "training/hptuning/metric" will be used.
-          "params": [ # Required. The set of parameters to tune.
-            { # Represents a single hyperparameter to optimize.
-              "maxValue": 3.14, # Required if typeis `DOUBLE` or `INTEGER`. This field
-                  # should be unset if type is `CATEGORICAL`. This value should be integers if
-                  # type is `INTEGER`.
-              "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
-                "A String",
-              ],
-              "discreteValues": [ # Required if type is `DISCRETE`.
-                  # A list of feasible points.
-                  # The list should be in strictly increasing order. For instance, this
-                  # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
-                  # should not contain more than 1,000 values.
-                3.14,
-              ],
-              "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
-                  # a HyperparameterSpec message. E.g., "learning_rate".
-              "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
-                  # should be unset if type is `CATEGORICAL`. This value should be integers if
-                  # type is INTEGER.
-              "type": "A String", # Required. The type of the parameter.
-              "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
-                  # Leave unset for categorical parameters.
-                  # Some kind of scaling is strongly recommended for real or integral
-                  # parameters (e.g., `UNIT_LINEAR_SCALE`).
-            },
-          ],
-          "goal": "A String", # Required. The type of goal to use for tuning. Available types are
-              # `MAXIMIZE` and `MINIMIZE`.
-              #
-              # Defaults to `MAXIMIZE`.
-          "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
-              # You can reduce the time it takes to perform hyperparameter tuning by adding
-              # trials in parallel. However, each trail only benefits from the information
-              # gained in completed trials. That means that a trial does not get access to
-              # the results of trials running at the same time, which could reduce the
-              # quality of the overall optimization.
-              #
-              # Each trial will use the same scale tier and machine types.
-              #
-              # Defaults to one.
-        },
-        "region": "A String", # Required. The Google Compute Engine region to run the training job in.
-        "args": [ # Optional. Command line arguments to pass to the program.
-          "A String",
-        ],
-        "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
-        "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
-            # and other data needed for training. This path is passed to your TensorFlow
-            # program as the 'job_dir' command-line argument. The benefit of specifying
-            # this field is that Cloud ML validates the path for use in training.
-        "packageUris": [ # Required. The Google Cloud Storage location of the packages with
-            # the training program and any additional dependencies.
-            # The maximum number of package URIs is 100.
-          "A String",
-        ],
-        "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
-            # replica in the cluster will be of the type specified in `worker_type`.
-            #
-            # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
-            # set this value, you must also set `worker_type`.
-        "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
-            # job's parameter server.
-            #
-            # The supported values are the same as those described in the entry for
-            # `master_type`.
-            #
-            # This value must be present when `scaleTier` is set to `CUSTOM` and
-            # `parameter_server_count` is greater than zero.
-        "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
-            # job. Each replica in the cluster will be of the type specified in
-            # `parameter_server_type`.
-            #
-            # This value can only be used when `scale_tier` is set to `CUSTOM`.If you
-            # set this value, you must also set `parameter_server_type`.
-      },
-      "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
-        "modelName": "A String", # Use this field if you want to use the default version for the specified
-            # model. The string must use the following format:
-            #
-            # `"projects/<var>[YOUR_PROJECT]</var>/models/<var>[YOUR_MODEL]</var>"`
-        "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for this batch
-            # prediction. If not set, Google Cloud ML will pick the runtime version used
-            # during the CreateVersion request for this model version, or choose the
-            # latest stable version when model version information is not available
-            # such as when the model is specified by uri.
-        "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
-        "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
-            # Defaults to 10 if not specified.
-        "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
-            # the model to use.
-        "outputPath": "A String", # Required. The output Google Cloud Storage location.
-        "dataFormat": "A String", # Required. The format of the input data files.
-        "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
-            # string is formatted the same way as `model_version`, with the addition
-            # of the version information:
-            #
-            # `"projects/<var>[YOUR_PROJECT]</var>/models/<var>YOUR_MODEL/versions/<var>[YOUR_VERSION]</var>"`
-        "inputPaths": [ # Required. The Google Cloud Storage location of the input data files.
-            # May contain wildcards.
-          "A String",
-        ],
-      },
-      "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
-      "jobId": "A String", # Required. The user-specified id of the job.
-      "state": "A String", # Output only. The detailed state of a job.
-      "startTime": "A String", # Output only. When the job processing was started.
-      "endTime": "A String", # Output only. When the job processing was completed.
-      "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
-        "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
-        "nodeHours": 3.14, # Node hours used by the batch prediction job.
-        "predictionCount": "A String", # The number of generated predictions.
-        "errorCount": "A String", # The number of data instances which resulted in errors.
-      },
-      "createTime": "A String", # Output only. When the job was created.
-    }</pre>
+    ],
+  }</pre>
 </div>
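As a quick sanity check on the regenerated `create` surface, here is a minimal sketch of submitting a training job with hyperparameter tuning from Python. The project ID, bucket path, module name, and job ID below are placeholders, not values taken from this diff:

    from googleapiclient import discovery

    # Build the ml/v1 client from the regenerated discovery document.
    ml = discovery.build('ml', 'v1')

    # Hypothetical job body; the fields mirror the trainingInput and
    # hyperparameters schema documented above.
    job = {
        'jobId': 'census_training_001',
        'trainingInput': {
            'scaleTier': 'BASIC',
            'packageUris': ['gs://my-bucket/trainer-0.1.tar.gz'],
            'pythonModule': 'trainer.task',
            'region': 'us-central1',
            'hyperparameters': {
                'goal': 'MAXIMIZE',
                'maxTrials': 10,
                'maxParallelTrials': 2,
                'params': [{
                    'parameterName': 'learning_rate',
                    'type': 'DOUBLE',
                    'minValue': 0.0001,
                    'maxValue': 0.1,
                    'scaleType': 'UNIT_LINEAR_SCALE',
                }],
            },
        },
    }
    response = ml.projects().jobs().create(
        parent='projects/my-project', body=job).execute()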
 
 <div class="method">
-    <code class="details" id="list">list(parent, pageSize=None, filter=None, pageToken=None, x__xgafv=None)</code>
+    <code class="details" id="list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</code>
   <pre>Lists the jobs in the project.
 
+If there are no jobs that match the request parameters, the list
+request returns an empty response body: {}.
+
 Args:
-  parent: string, Required. The name of the project for which to list jobs.
-
-Authorization: requires `Viewer` role on the specified project. (required)
-  pageSize: integer, Optional. The number of jobs to retrieve per "page" of results. If there
-are more remaining results than this number, the response message will
-contain a valid value in the `next_page_token` field.
-
-The default value is 20, and the maximum page size is 100.
-  filter: string, Optional. Specifies the subset of jobs to retrieve.
+  parent: string, Required. The name of the project for which to list jobs. (required)
   pageToken: string, Optional. A page token to request the next page of results.
 
 You get the token from the `next_page_token` field of the response from
@@ -837,6 +1751,20 @@
     Allowed values
       1 - v1 error format
       2 - v2 error format
+  pageSize: integer, Optional. The number of jobs to retrieve per "page" of results. If there
+are more remaining results than this number, the response message will
+contain a valid value in the `next_page_token` field.
+
+The default value is 20, and the maximum page size is 100.
+  filter: string, Optional. Specifies the subset of jobs to retrieve.
+You can filter on the value of one or more attributes of the job object.
+For example, retrieve jobs with a job identifier that starts with 'census':
+<p><code>gcloud ai-platform jobs list --filter='jobId:census*'</code>
+<p>List all failed jobs with names that start with 'rnn':
+<p><code>gcloud ai-platform jobs list --filter='jobId:rnn*
+AND state:FAILED'</code>
+<p>For more examples, see the guide to
+<a href="/ml-engine/docs/tensorflow/monitor-training">monitoring jobs</a>.
 
 Returns:
   An object of the form:
@@ -846,222 +1774,455 @@
         # subsequent call.
     "jobs": [ # The list of jobs.
       { # Represents a training or prediction job.
-          "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
-            "trials": [ # Results for individual Hyperparameter trials.
-                # Only set for hyperparameter tuning jobs.
-              { # Represents the result of a single hyperparameter tuning trial from a
-                  # training job. The TrainingOutput object that is returned on successful
-                  # completion of a training job with hyperparameter tuning includes a list
-                  # of HyperparameterOutput objects, one for each successful trial.
-                "hyperparameters": { # The hyperparameters given to this trial.
-                  "a_key": "A String",
-                },
-                "trialId": "A String", # The trial id for these results.
-                "allMetrics": [ # All recorded object metrics for this trial.
-                  { # An observed value of a metric.
-                    "trainingStep": "A String", # The global training step for this metric.
-                    "objectiveValue": 3.14, # The objective value at this training step.
-                  },
-                ],
-                "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
+        "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+        "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
+          "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
+              # Only set for hyperparameter tuning jobs.
+          "trials": [ # Results for individual Hyperparameter trials.
+              # Only set for hyperparameter tuning jobs.
+            { # Represents the result of a single hyperparameter tuning trial from a
+                # training job. The TrainingOutput object that is returned on successful
+                # completion of a training job with hyperparameter tuning includes a list
+                # of HyperparameterOutput objects, one for each successful trial.
+              "hyperparameters": { # The hyperparameters given to this trial.
+                "a_key": "A String",
+              },
+              "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
+                "trainingStep": "A String", # The global training step for this metric.
+                "objectiveValue": 3.14, # The objective value at this training step.
+              },
+              "allMetrics": [ # All recorded object metrics for this trial. This field is not currently
+                  # populated.
+                { # An observed value of a metric.
                   "trainingStep": "A String", # The global training step for this metric.
                   "objectiveValue": 3.14, # The objective value at this training step.
                 },
+              ],
+              "isTrialStoppedEarly": True or False, # True if the trial is stopped early.
+              "trialId": "A String", # The trial id for these results.
+              "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
+                  # Only set for trials of built-in algorithms jobs that have succeeded.
+                "framework": "A String", # Framework on which the built-in algorithm was trained.
+                "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
+                    # saves the trained model. Only set for successful jobs that don't use
+                    # hyperparameter tuning.
+                "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
+                    # trained.
+                "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
+              },
+            },
+          ],
+          "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
+          "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job.
+          "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
+          "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning
+              # trials. See
+              # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
+              # for more information. Only set for hyperparameter tuning jobs.
+          "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
+              # Only set for built-in algorithms jobs.
+            "framework": "A String", # Framework on which the built-in algorithm was trained.
+            "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
+                # saves the trained model. Only set for successful jobs that don't use
+                # hyperparameter tuning.
+            "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
+                # trained.
+            "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
+          },
+        },
+        "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
+          "modelName": "A String", # Use this field if you want to use the default version for the specified
+              # model. The string must use the following format:
+              #
+              # `"projects/YOUR_PROJECT/models/YOUR_MODEL"`
+          "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch
+              # prediction. If not set, AI Platform will pick the runtime version used
+              # during the CreateVersion request for this model version, or choose the
+              # latest stable version when model version information is not available
+              # such as when the model is specified by uri.
+          "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for
+              # this job. Please refer to
+              # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
+              # for information about how to use signatures.
+              #
+              # Defaults to
+              # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
+              # , which is "serving_default".
+          "batchSize": "A String", # Optional. Number of records per batch, defaults to 64.
+              # The service will buffer batch_size number of records in memory before
+              # invoking one TensorFlow prediction call internally. So take the record
+              # size and memory available into consideration when setting this parameter.
+          "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain
+              # <a href="/storage/docs/gsutil/addlhelp/WildcardNames">wildcards</a>.
+            "A String",
+          ],
+          "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
+              # Defaults to 10 if not specified.
+          "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
+              # the model to use.
+          "outputPath": "A String", # Required. The output Google Cloud Storage location.
+          "dataFormat": "A String", # Required. The format of the input data files.
+          "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
+              # string is formatted the same way as `model_version`, with the addition
+              # of the version information:
+              #
+              # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"`
+          "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
+              # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+              # for AI Platform services.
+          "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON.
+        },
+        "trainingInput": { # Represents input parameters for a training job. When using the # Input parameters to create a training job.
+            # gcloud command to submit your training job, you can specify
+            # the input parameters as command-line arguments and/or in a YAML configuration
+            # file referenced from the --config command-line argument. For
+            # details, see the guide to
+            # <a href="/ml-engine/docs/tensorflow/training-jobs">submitting a training
+            # job</a>.
+          "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
+              # job's worker nodes.
+              #
+              # The supported values are the same as those described in the entry for
+              # `masterType`.
+              #
+              # This value must be consistent with the category of machine type that
+              # `masterType` uses. In other words, both must be AI Platform machine
+              # types or both must be Compute Engine machine types.
+              #
+              # If you use `cloud_tpu` for this value, see special instructions for
+              # [configuring a custom TPU
+              # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
+              #
+              # This value must be present when `scaleTier` is set to `CUSTOM` and
+              # `workerCount` is greater than zero.
+          "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
+              #
+              # You should only set `parameterServerConfig.acceleratorConfig` if
+              # `parameterServerType` is set to a Compute Engine machine type. [Learn
+              # about restrictions on accelerator configurations for
+              # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+              #
+              # Set `parameterServerConfig.imageUri` only if you build a custom image for
+              # your parameter server. If `parameterServerConfig.imageUri` has not been
+              # set, AI Platform uses the value of `masterConfig.imageUri`.
+              # Learn more about [configuring custom
+              # containers](/ml-engine/docs/distributed-training-containers).
+            "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
+                # [Learn about restrictions on accelerator configurations for
+                # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+              "count": "A String", # The number of accelerators to attach to each machine running the job.
+              "type": "A String", # The type of accelerator to use.
+            },
+            "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
+                # Registry. Learn more about [configuring custom
+                # containers](/ml-engine/docs/distributed-training-containers).
+          },
+          "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. If not
+              # set, AI Platform uses the default stable version, 1.0. For more
+              # information, see the
+              # <a href="/ml-engine/docs/runtime-version-list">runtime version list</a>
+              # and
+              # <a href="/ml-engine/docs/versioning">how to manage runtime versions</a>.
+          "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
+              # and parameter servers.
+          "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
+              # job's master worker.
+              #
+              # The following types are supported:
+              #
+              # <dl>
+              #   <dt>standard</dt>
+              #   <dd>
+              #   A basic machine configuration suitable for training simple models with
+              #   small to moderate datasets.
+              #   </dd>
+              #   <dt>large_model</dt>
+              #   <dd>
+              #   A machine with a lot of memory, specially suited for parameter servers
+              #   when your model is large (having many hidden layers or layers with very
+              #   large numbers of nodes).
+              #   </dd>
+              #   <dt>complex_model_s</dt>
+              #   <dd>
+              #   A machine suitable for the master and workers of the cluster when your
+              #   model requires more computation than the standard machine can handle
+              #   satisfactorily.
+              #   </dd>
+              #   <dt>complex_model_m</dt>
+              #   <dd>
+              #   A machine with roughly twice the number of cores and roughly double the
+              #   memory of <i>complex_model_s</i>.
+              #   </dd>
+              #   <dt>complex_model_l</dt>
+              #   <dd>
+              #   A machine with roughly twice the number of cores and roughly double the
+              #   memory of <i>complex_model_m</i>.
+              #   </dd>
+              #   <dt>standard_gpu</dt>
+              #   <dd>
+              #   A machine equivalent to <i>standard</i> that
+              #   also includes a single NVIDIA Tesla K80 GPU. See more about
+              #   <a href="/ml-engine/docs/tensorflow/using-gpus">using GPUs to
+              #   train your model</a>.
+              #   </dd>
+              #   <dt>complex_model_m_gpu</dt>
+              #   <dd>
+              #   A machine equivalent to <i>complex_model_m</i> that also includes
+              #   four NVIDIA Tesla K80 GPUs.
+              #   </dd>
+              #   <dt>complex_model_l_gpu</dt>
+              #   <dd>
+              #   A machine equivalent to <i>complex_model_l</i> that also includes
+              #   eight NVIDIA Tesla K80 GPUs.
+              #   </dd>
+              #   <dt>standard_p100</dt>
+              #   <dd>
+              #   A machine equivalent to <i>standard</i> that
+              #   also includes a single NVIDIA Tesla P100 GPU.
+              #   </dd>
+              #   <dt>complex_model_m_p100</dt>
+              #   <dd>
+              #   A machine equivalent to <i>complex_model_m</i> that also includes
+              #   four NVIDIA Tesla P100 GPUs.
+              #   </dd>
+              #   <dt>standard_v100</dt>
+              #   <dd>
+              #   A machine equivalent to <i>standard</i> that
+              #   also includes a single NVIDIA Tesla V100 GPU.
+              #   </dd>
+              #   <dt>large_model_v100</dt>
+              #   <dd>
+              #   A machine equivalent to <i>large_model</i> that
+              #   also includes a single NVIDIA Tesla V100 GPU.
+              #   </dd>
+              #   <dt>complex_model_m_v100</dt>
+              #   <dd>
+              #   A machine equivalent to <i>complex_model_m</i> that
+              #   also includes four NVIDIA Tesla V100 GPUs.
+              #   </dd>
+              #   <dt>complex_model_l_v100</dt>
+              #   <dd>
+              #   A machine equivalent to <i>complex_model_l</i> that
+              #   also includes eight NVIDIA Tesla V100 GPUs.
+              #   </dd>
+              #   <dt>cloud_tpu</dt>
+              #   <dd>
+              #   A TPU VM including one Cloud TPU. See more about
+              #   <a href="/ml-engine/docs/tensorflow/using-tpus">using TPUs to train
+              #   your model</a>.
+              #   </dd>
+              # </dl>
+              #
+              # You may also use certain Compute Engine machine types directly in this
+              # field. The following types are supported:
+              #
+              # - `n1-standard-4`
+              # - `n1-standard-8`
+              # - `n1-standard-16`
+              # - `n1-standard-32`
+              # - `n1-standard-64`
+              # - `n1-standard-96`
+              # - `n1-highmem-2`
+              # - `n1-highmem-4`
+              # - `n1-highmem-8`
+              # - `n1-highmem-16`
+              # - `n1-highmem-32`
+              # - `n1-highmem-64`
+              # - `n1-highmem-96`
+              # - `n1-highcpu-16`
+              # - `n1-highcpu-32`
+              # - `n1-highcpu-64`
+              # - `n1-highcpu-96`
+              #
+              # See more about [using Compute Engine machine
+              # types](/ml-engine/docs/tensorflow/machine-types#compute-engine-machine-types).
+              #
+              # You must set this value when `scaleTier` is set to `CUSTOM`.
+          "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
+            "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
+                # the specified hyperparameters.
+                #
+                # Defaults to one.
+            "goal": "A String", # Required. The type of goal to use for tuning. Available types are
+                # `MAXIMIZE` and `MINIMIZE`.
+                #
+                # Defaults to `MAXIMIZE`.
+            "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter
+                # tuning job.
+                # Uses the default AI Platform hyperparameter tuning
+                # algorithm if unspecified.
+            "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing
+                # the hyperparameter tuning job. You can specify this field to override the
+                # default failing criteria for AI Platform hyperparameter tuning jobs.
+                #
+                # Defaults to zero, which means the service decides when a hyperparameter
+                # job should fail.
+            "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
+                # early stopping.
+            "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to
+                # continue with. The job id will be used to find the corresponding Vizier
+                # study GUID and resume the study.
+            "params": [ # Required. The set of parameters to tune.
+              { # Represents a single hyperparameter to optimize.
+                "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
+                    # should be unset if type is `CATEGORICAL`. This value should be integers if
+                    # type is `INTEGER`.
+                "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
+                  "A String",
+                ],
+                "discreteValues": [ # Required if type is `DISCRETE`.
+                    # A list of feasible points.
+                    # The list should be in strictly increasing order. For instance, this
+                    # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
+                    # should not contain more than 1,000 values.
+                  3.14,
+                ],
+                "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
+                    # a HyperparameterSpec message. E.g., "learning_rate".
+                "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
+                    # should be unset if type is `CATEGORICAL`. This value should be integers if
+                    # type is INTEGER.
+                "type": "A String", # Required. The type of the parameter.
+                "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
+                    # Leave unset for categorical parameters.
+                    # Some kind of scaling is strongly recommended for real or integral
+                    # parameters (e.g., `UNIT_LINEAR_SCALE`).
               },
             ],
-            "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
-            "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
-            "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
-                # Only set for hyperparameter tuning jobs.
+            "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For
+                # current versions of TensorFlow, this tag name should exactly match what is
+                # shown in TensorBoard, including all scopes.  For versions of TensorFlow
+                # prior to 0.12, this should be only the tag passed to tf.Summary.
+                # By default, "training/hptuning/metric" will be used.
+            "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
+                # You can reduce the time it takes to perform hyperparameter tuning by adding
+                # trials in parallel. However, each trial only benefits from the information
+                # gained in completed trials. That means that a trial does not get access to
+                # the results of trials running at the same time, which could reduce the
+                # quality of the overall optimization.
+                #
+                # Each trial will use the same scale tier and machine types.
+                #
+                # Defaults to one.
           },
-          "trainingInput": { # Represents input parameters for a training job. # Input parameters to create a training job.
-            "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
-                # job's worker nodes.
-                #
-                # The supported values are the same as those described in the entry for
-                # `masterType`.
-                #
-                # This value must be present when `scaleTier` is set to `CUSTOM` and
-                # `workerCount` is greater than zero.
-            "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for training.  If not
-                # set, Google Cloud ML will choose the latest stable version.
-            "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
-                # and parameter servers.
-            "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
-                # job's master worker.
-                #
-                # The following types are supported:
-                #
-                # <dl>
-                #   <dt>standard</dt>
-                #   <dd>
-                #   A basic machine configuration suitable for training simple models with
-                #   small to moderate datasets.
-                #   </dd>
-                #   <dt>large_model</dt>
-                #   <dd>
-                #   A machine with a lot of memory, specially suited for parameter servers
-                #   when your model is large (having many hidden layers or layers with very
-                #   large numbers of nodes).
-                #   </dd>
-                #   <dt>complex_model_s</dt>
-                #   <dd>
-                #   A machine suitable for the master and workers of the cluster when your
-                #   model requires more computation than the standard machine can handle
-                #   satisfactorily.
-                #   </dd>
-                #   <dt>complex_model_m</dt>
-                #   <dd>
-                #   A machine with roughly twice the number of cores and roughly double the
-                #   memory of <code suppresswarning="true">complex_model_s</code>.
-                #   </dd>
-                #   <dt>complex_model_l</dt>
-                #   <dd>
-                #   A machine with roughly twice the number of cores and roughly double the
-                #   memory of <code suppresswarning="true">complex_model_m</code>.
-                #   </dd>
-                #   <dt>standard_gpu</dt>
-                #   <dd>
-                #   A machine equivalent to <code suppresswarning="true">standard</code> that
-                #   also includes a
-                #   <a href="/ml-engine/docs/how-tos/using-gpus">
-                #   GPU that you can use in your trainer</a>.
-                #   </dd>
-                #   <dt>complex_model_m_gpu</dt>
-                #   <dd>
-                #   A machine equivalent to
-                #   <code suppresswarning="true">complex_model_m</code> that also includes
-                #   four GPUs.
-                #   </dd>
-                # </dl>
-                #
-                # You must set this value when `scaleTier` is set to `CUSTOM`.
-            "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
-              "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
-                  # the specified hyperparameters.
-                  #
-                  # Defaults to one.
-              "hyperparameterMetricTag": "A String", # Optional. The Tensorflow summary tag name to use for optimizing trials. For
-                  # current versions of Tensorflow, this tag name should exactly match what is
-                  # shown in Tensorboard, including all scopes.  For versions of Tensorflow
-                  # prior to 0.12, this should be only the tag passed to tf.Summary.
-                  # By default, "training/hptuning/metric" will be used.
-              "params": [ # Required. The set of parameters to tune.
-                { # Represents a single hyperparameter to optimize.
-                  "maxValue": 3.14, # Required if typeis `DOUBLE` or `INTEGER`. This field
-                      # should be unset if type is `CATEGORICAL`. This value should be integers if
-                      # type is `INTEGER`.
-                  "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
-                    "A String",
-                  ],
-                  "discreteValues": [ # Required if type is `DISCRETE`.
-                      # A list of feasible points.
-                      # The list should be in strictly increasing order. For instance, this
-                      # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
-                      # should not contain more than 1,000 values.
-                    3.14,
-                  ],
-                  "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
-                      # a HyperparameterSpec message. E.g., "learning_rate".
-                  "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
-                      # should be unset if type is `CATEGORICAL`. This value should be integers if
-                      # type is INTEGER.
-                  "type": "A String", # Required. The type of the parameter.
-                  "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
-                      # Leave unset for categorical parameters.
-                      # Some kind of scaling is strongly recommended for real or integral
-                      # parameters (e.g., `UNIT_LINEAR_SCALE`).
-                },
-              ],
-              "goal": "A String", # Required. The type of goal to use for tuning. Available types are
-                  # `MAXIMIZE` and `MINIMIZE`.
-                  #
-                  # Defaults to `MAXIMIZE`.
-              "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
-                  # You can reduce the time it takes to perform hyperparameter tuning by adding
-                  # trials in parallel. However, each trail only benefits from the information
-                  # gained in completed trials. That means that a trial does not get access to
-                  # the results of trials running at the same time, which could reduce the
-                  # quality of the overall optimization.
-                  #
-                  # Each trial will use the same scale tier and machine types.
-                  #
-                  # Defaults to one.
+          "region": "A String", # Required. The Google Compute Engine region to run the training job in.
+              # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+              # for AI Platform services.
+          "args": [ # Optional. Command line arguments to pass to the program.
+            "A String",
+          ],
+          "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
+          "pythonVersion": "A String", # Optional. The version of Python used in training. If not set, the default
+              # version is '2.7'. Python '3.5' is available when `runtime_version` is set
+              # to '1.4' and above. Python '2.7' works with all supported
+              # <a href="/ml-engine/docs/runtime-version-list">runtime versions</a>.
+          "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
+              # and other data needed for training. This path is passed to your TensorFlow
+              # program as the '--job-dir' command-line argument. The benefit of specifying
+              # this field is that Cloud ML validates the path for use in training.
+          "packageUris": [ # Required. The Google Cloud Storage location of the packages with
+              # the training program and any additional dependencies.
+              # The maximum number of package URIs is 100.
+            "A String",
+          ],
+          "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
+              # replica in the cluster will be of the type specified in `worker_type`.
+              #
+              # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
+              # set this value, you must also set `worker_type`.
+              #
+              # The default value is zero.
+          "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
+              # job's parameter server.
+              #
+              # The supported values are the same as those described in the entry for
+              # `master_type`.
+              #
+              # This value must be consistent with the category of machine type that
+              # `masterType` uses. In other words, both must be AI Platform machine
+              # types or both must be Compute Engine machine types.
+              #
+              # This value must be present when `scaleTier` is set to `CUSTOM` and
+              # `parameter_server_count` is greater than zero.
+          "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
+              #
+              # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
+              # to a Compute Engine machine type. [Learn about restrictions on accelerator
+              # configurations for
+              # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+              #
+              # Set `workerConfig.imageUri` only if you build a custom image for your
+              # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
+              # the value of `masterConfig.imageUri`. Learn more about
+              # [configuring custom
+              # containers](/ml-engine/docs/distributed-training-containers).
+            "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
+                # [Learn about restrictions on accelerator configurations for
+                # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+              "count": "A String", # The number of accelerators to attach to each machine running the job.
+              "type": "A String", # The type of accelerator to use.
             },
-            "region": "A String", # Required. The Google Compute Engine region to run the training job in.
-            "args": [ # Optional. Command line arguments to pass to the program.
-              "A String",
-            ],
-            "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
-            "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
-                # and other data needed for training. This path is passed to your TensorFlow
-                # program as the 'job_dir' command-line argument. The benefit of specifying
-                # this field is that Cloud ML validates the path for use in training.
-            "packageUris": [ # Required. The Google Cloud Storage location of the packages with
-                # the training program and any additional dependencies.
-                # The maximum number of package URIs is 100.
-              "A String",
-            ],
-            "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
-                # replica in the cluster will be of the type specified in `worker_type`.
-                #
-                # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
-                # set this value, you must also set `worker_type`.
-            "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
-                # job's parameter server.
-                #
-                # The supported values are the same as those described in the entry for
-                # `master_type`.
-                #
-                # This value must be present when `scaleTier` is set to `CUSTOM` and
-                # `parameter_server_count` is greater than zero.
-            "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
-                # job. Each replica in the cluster will be of the type specified in
-                # `parameter_server_type`.
-                #
-                # This value can only be used when `scale_tier` is set to `CUSTOM`.If you
-                # set this value, you must also set `parameter_server_type`.
+            "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
+                # Registry. Learn more about [configuring custom
+                # containers](/ml-engine/docs/distributed-training-containers).
           },
-          "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
-            "modelName": "A String", # Use this field if you want to use the default version for the specified
-                # model. The string must use the following format:
-                #
-                # `"projects/<var>[YOUR_PROJECT]</var>/models/<var>[YOUR_MODEL]</var>"`
-            "runtimeVersion": "A String", # Optional. The Google Cloud ML runtime version to use for this batch
-                # prediction. If not set, Google Cloud ML will pick the runtime version used
-                # during the CreateVersion request for this model version, or choose the
-                # latest stable version when model version information is not available
-                # such as when the model is specified by uri.
-            "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
-            "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
-                # Defaults to 10 if not specified.
-            "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
-                # the model to use.
-            "outputPath": "A String", # Required. The output Google Cloud Storage location.
-            "dataFormat": "A String", # Required. The format of the input data files.
-            "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
-                # string is formatted the same way as `model_version`, with the addition
-                # of the version information:
-                #
-                # `"projects/<var>[YOUR_PROJECT]</var>/models/<var>YOUR_MODEL/versions/<var>[YOUR_VERSION]</var>"`
-            "inputPaths": [ # Required. The Google Cloud Storage location of the input data files.
-                # May contain wildcards.
-              "A String",
-            ],
+          "maxRunningTime": "A String", # Optional. The maximum job running time. The default is 7 days.
+          "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
+              #
+              # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
+              # to a Compute Engine machine type. Learn about [restrictions on accelerator
+              # configurations for
+              # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+              #
+              # Set `masterConfig.imageUri` only if you build a custom image. Only one of
+              # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more about
+              # [configuring custom
+              # containers](/ml-engine/docs/distributed-training-containers).
+            "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
+                # [Learn about restrictions on accelerator configurations for
+                # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+              "count": "A String", # The number of accelerators to attach to each machine running the job.
+              "type": "A String", # The type of accelerator to use.
+            },
+            "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
+                # Registry. Learn more about [configuring custom
+                # containers](/ml-engine/docs/distributed-training-containers).
           },
-          "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
-          "jobId": "A String", # Required. The user-specified id of the job.
-          "state": "A String", # Output only. The detailed state of a job.
-          "startTime": "A String", # Output only. When the job processing was started.
-          "endTime": "A String", # Output only. When the job processing was completed.
-          "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
-            "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
-            "nodeHours": 3.14, # Node hours used by the batch prediction job.
-            "predictionCount": "A String", # The number of generated predictions.
-            "errorCount": "A String", # The number of data instances which resulted in errors.
-          },
-          "createTime": "A String", # Output only. When the job was created.
+          "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
+              # job. Each replica in the cluster will be of the type specified in
+              # `parameter_server_type`.
+              #
+              # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
+              # set this value, you must also set `parameter_server_type`.
+              #
+              # The default value is zero.
         },
+        "jobId": "A String", # Required. The user-specified id of the job.
+        "labels": { # Optional. One or more labels that you can add, to organize your jobs.
+            # Each label is a key-value pair, where both the key and the value are
+            # arbitrary strings that you supply.
+            # For more information, see the documentation on
+            # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+          "a_key": "A String",
+        },
+        "state": "A String", # Output only. The detailed state of a job.
+        "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+            # prevent simultaneous updates of a job from overwriting each other.
+            # It is strongly suggested that systems make use of the `etag` in the
+            # read-modify-write cycle to perform job updates in order to avoid race
+            # conditions: An `etag` is returned in the response to `GetJob`, and
+            # systems are expected to put that etag in the request to `UpdateJob` to
+            # ensure that their change will be applied to the same version of the job.
+        "startTime": "A String", # Output only. When the job processing was started.
+        "endTime": "A String", # Output only. When the job processing was completed.
+        "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
+          "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
+          "nodeHours": 3.14, # Node hours used by the batch prediction job.
+          "predictionCount": "A String", # The number of generated predictions.
+          "errorCount": "A String", # The number of data instances which resulted in errors.
+        },
+        "createTime": "A String", # Output only. When the job was created.
+      },
     ],
   }</pre>
 </div>
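Since the parameter order above is generator output rather than a calling convention, a sketch of paging through jobs with `list` and `list_next` may help; the project ID is a placeholder, and the filter string reuses the jobId/state syntax from the gcloud examples above:

    from googleapiclient import discovery

    ml = discovery.build('ml', 'v1')
    request = ml.projects().jobs().list(
        parent='projects/my-project',
        filter='jobId:rnn* AND state:FAILED',
        pageSize=50)
    while request is not None:
        response = request.execute()
        # An empty response body {} simply yields no 'jobs' key.
        for job in response.get('jobs', []):
            print(job.get('jobId'), job.get('state'))
        # list_next returns None once next_page_token is exhausted.
        request = ml.projects().jobs().list_next(request, response)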
@@ -1080,4 +2241,1408 @@
     </pre>
 </div>
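The new `patch` method below currently only supports updating `labels`; here is a sketch of the etag-based read-modify-write cycle the job schema recommends, with placeholder project and job names:

    from googleapiclient import discovery

    ml = discovery.build('ml', 'v1')
    name = 'projects/my-project/jobs/census_training_001'

    # Read the job, carry its etag forward, then write only the labels field.
    job = ml.projects().jobs().get(name=name).execute()
    body = {
        'labels': dict(job.get('labels', {}), team='research'),
        'etag': job.get('etag'),  # guards against concurrent updates
    }
    updated = ml.projects().jobs().patch(
        name=name, body=body, updateMask='labels').execute()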
 
+<div class="method">
+    <code class="details" id="patch">patch(name, body, updateMask=None, x__xgafv=None)</code>
+  <pre>Updates a specific job resource.
+
+Currently the only supported fields to update are `labels`.
+
+Args:
+  name: string, Required. The job name. (required)
+  body: object, The request body. (required)
+    The object takes the form of:
+
+{ # Represents a training or prediction job.
+  "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+  "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
+    "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
+        # Only set for hyperparameter tuning jobs.
+    "trials": [ # Results for individual Hyperparameter trials.
+        # Only set for hyperparameter tuning jobs.
+      { # Represents the result of a single hyperparameter tuning trial from a
+          # training job. The TrainingOutput object that is returned on successful
+          # completion of a training job with hyperparameter tuning includes a list
+          # of HyperparameterOutput objects, one for each successful trial.
+        "hyperparameters": { # The hyperparameters given to this trial.
+          "a_key": "A String",
+        },
+        "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
+          "trainingStep": "A String", # The global training step for this metric.
+          "objectiveValue": 3.14, # The objective value at this training step.
+        },
+        "allMetrics": [ # All recorded object metrics for this trial. This field is not currently
+            # populated.
+          { # An observed value of a metric.
+            "trainingStep": "A String", # The global training step for this metric.
+            "objectiveValue": 3.14, # The objective value at this training step.
+          },
+        ],
+        "isTrialStoppedEarly": True or False, # True if the trial is stopped early.
+        "trialId": "A String", # The trial id for these results.
+        "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
+            # Only set for trials of built-in algorithms jobs that have succeeded.
+          "framework": "A String", # Framework on which the built-in algorithm was trained.
+          "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
+              # saves the trained model. Only set for successful jobs that don't use
+              # hyperparameter tuning.
+          "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
+              # trained.
+          "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
+        },
+      },
+    ],
+    "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
+    "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job.
+    "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
+    "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning
+        # trials. See
+        # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
+        # for more information. Only set for hyperparameter tuning jobs.
+    "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
+        # Only set for built-in algorithms jobs.
+      "framework": "A String", # Framework on which the built-in algorithm was trained.
+      "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
+          # saves the trained model. Only set for successful jobs that don't use
+          # hyperparameter tuning.
+      "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
+          # trained.
+      "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
+    },
+  },
+  "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
+    "modelName": "A String", # Use this field if you want to use the default version for the specified
+        # model. The string must use the following format:
+        #
+        # `"projects/YOUR_PROJECT/models/YOUR_MODEL"`
+    "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch
+        # prediction. If not set, AI Platform will pick the runtime version used
+        # during the CreateVersion request for this model version, or choose the
+        # latest stable version when model version information is not available,
+        # such as when the model is specified by uri.
+    "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for
+        # this job. Please refer to
+        # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
+        # for information about how to use signatures.
+        #
+        # Defaults to
+        # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
+        # , which is "serving_default".
+    "batchSize": "A String", # Optional. Number of records per batch, defaults to 64.
+        # The service will buffer batch_size records in memory before invoking
+        # one TensorFlow prediction call internally, so take the record size and
+        # available memory into consideration when setting this parameter.
+    "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain
+        # <a href="/storage/docs/gsutil/addlhelp/WildcardNames">wildcards</a>.
+      "A String",
+    ],
+    "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
+        # Defaults to 10 if not specified.
+    "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
+        # the model to use.
+    "outputPath": "A String", # Required. The output Google Cloud Storage location.
+    "dataFormat": "A String", # Required. The format of the input data files.
+    "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
+        # string is formatted the same way as `model_version`, with the addition
+        # of the version information:
+        #
+        # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"`
+    "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
+        # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+        # for AI Platform services.
+    "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON.
+  },
+  "trainingInput": { # Represents input parameters for a training job. When using the # Input parameters to create a training job.
+      # gcloud command to submit your training job, you can specify
+      # the input parameters as command-line arguments and/or in a YAML configuration
+      # file referenced from the --config command-line argument. For
+      # details, see the guide to
+      # <a href="/ml-engine/docs/tensorflow/training-jobs">submitting a training
+      # job</a>.
+    "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
+        # job's worker nodes.
+        #
+        # The supported values are the same as those described in the entry for
+        # `masterType`.
+        #
+        # This value must be consistent with the category of machine type that
+        # `masterType` uses. In other words, both must be AI Platform machine
+        # types or both must be Compute Engine machine types.
+        #
+        # If you use `cloud_tpu` for this value, see special instructions for
+        # [configuring a custom TPU
+        # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
+        #
+        # This value must be present when `scaleTier` is set to `CUSTOM` and
+        # `workerCount` is greater than zero.
+    "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
+        #
+        # You should only set `parameterServerConfig.acceleratorConfig` if
+        # `parameterServerType` is set to a Compute Engine machine type. [Learn
+        # about restrictions on accelerator configurations for
+        # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+        #
+        # Set `parameterServerConfig.imageUri` only if you build a custom image for
+        # your parameter server. If `parameterServerConfig.imageUri` has not been
+        # set, AI Platform uses the value of `masterConfig.imageUri`.
+        # Learn more about [configuring custom
+        # containers](/ml-engine/docs/distributed-training-containers).
+      "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
+          # [Learn about restrictions on accelerator configurations for
+          # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+        "count": "A String", # The number of accelerators to attach to each machine running the job.
+        "type": "A String", # The type of accelerator to use.
+      },
+      "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
+          # Registry. Learn more about [configuring custom
+          # containers](/ml-engine/docs/distributed-training-containers).
+    },
+    "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. If not
+        # set, AI Platform uses the default stable version, 1.0. For more
+        # information, see the
+        # <a href="/ml-engine/docs/runtime-version-list">runtime version list</a>
+        # and
+        # <a href="/ml-engine/docs/versioning">how to manage runtime versions</a>.
+    "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
+        # and parameter servers.
+    "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
+        # job's master worker.
+        #
+        # The following types are supported:
+        #
+        # <dl>
+        #   <dt>standard</dt>
+        #   <dd>
+        #   A basic machine configuration suitable for training simple models with
+        #   small to moderate datasets.
+        #   </dd>
+        #   <dt>large_model</dt>
+        #   <dd>
+        #   A machine with a lot of memory, specially suited for parameter servers
+        #   when your model is large (having many hidden layers or layers with very
+        #   large numbers of nodes).
+        #   </dd>
+        #   <dt>complex_model_s</dt>
+        #   <dd>
+        #   A machine suitable for the master and workers of the cluster when your
+        #   model requires more computation than the standard machine can handle
+        #   satisfactorily.
+        #   </dd>
+        #   <dt>complex_model_m</dt>
+        #   <dd>
+        #   A machine with roughly twice the number of cores and roughly double the
+        #   memory of <i>complex_model_s</i>.
+        #   </dd>
+        #   <dt>complex_model_l</dt>
+        #   <dd>
+        #   A machine with roughly twice the number of cores and roughly double the
+        #   memory of <i>complex_model_m</i>.
+        #   </dd>
+        #   <dt>standard_gpu</dt>
+        #   <dd>
+        #   A machine equivalent to <i>standard</i> that
+        #   also includes a single NVIDIA Tesla K80 GPU. See more about
+        #   <a href="/ml-engine/docs/tensorflow/using-gpus">using GPUs to
+        #   train your model</a>.
+        #   </dd>
+        #   <dt>complex_model_m_gpu</dt>
+        #   <dd>
+        #   A machine equivalent to <i>complex_model_m</i> that also includes
+        #   four NVIDIA Tesla K80 GPUs.
+        #   </dd>
+        #   <dt>complex_model_l_gpu</dt>
+        #   <dd>
+        #   A machine equivalent to <i>complex_model_l</i> that also includes
+        #   eight NVIDIA Tesla K80 GPUs.
+        #   </dd>
+        #   <dt>standard_p100</dt>
+        #   <dd>
+        #   A machine equivalent to <i>standard</i> that
+        #   also includes a single NVIDIA Tesla P100 GPU.
+        #   </dd>
+        #   <dt>complex_model_m_p100</dt>
+        #   <dd>
+        #   A machine equivalent to <i>complex_model_m</i> that also includes
+        #   four NVIDIA Tesla P100 GPUs.
+        #   </dd>
+        #   <dt>standard_v100</dt>
+        #   <dd>
+        #   A machine equivalent to <i>standard</i> that
+        #   also includes a single NVIDIA Tesla V100 GPU.
+        #   </dd>
+        #   <dt>large_model_v100</dt>
+        #   <dd>
+        #   A machine equivalent to <i>large_model</i> that
+        #   also includes a single NVIDIA Tesla V100 GPU.
+        #   </dd>
+        #   <dt>complex_model_m_v100</dt>
+        #   <dd>
+        #   A machine equivalent to <i>complex_model_m</i> that
+        #   also includes four NVIDIA Tesla V100 GPUs.
+        #   </dd>
+        #   <dt>complex_model_l_v100</dt>
+        #   <dd>
+        #   A machine equivalent to <i>complex_model_l</i> that
+        #   also includes eight NVIDIA Tesla V100 GPUs.
+        #   </dd>
+        #   <dt>cloud_tpu</dt>
+        #   <dd>
+        #   A TPU VM including one Cloud TPU. See more about
+        #   <a href="/ml-engine/docs/tensorflow/using-tpus">using TPUs to train
+        #   your model</a>.
+        #   </dd>
+        # </dl>
+        #
+        # You may also use certain Compute Engine machine types directly in this
+        # field. The following types are supported:
+        #
+        # - `n1-standard-4`
+        # - `n1-standard-8`
+        # - `n1-standard-16`
+        # - `n1-standard-32`
+        # - `n1-standard-64`
+        # - `n1-standard-96`
+        # - `n1-highmem-2`
+        # - `n1-highmem-4`
+        # - `n1-highmem-8`
+        # - `n1-highmem-16`
+        # - `n1-highmem-32`
+        # - `n1-highmem-64`
+        # - `n1-highmem-96`
+        # - `n1-highcpu-16`
+        # - `n1-highcpu-32`
+        # - `n1-highcpu-64`
+        # - `n1-highcpu-96`
+        #
+        # See more about [using Compute Engine machine
+        # types](/ml-engine/docs/tensorflow/machine-types#compute-engine-machine-types).
+        #
+        # You must set this value when `scaleTier` is set to `CUSTOM`.
+    "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
+      "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
+          # the specified hyperparameters.
+          #
+          # Defaults to one.
+      "goal": "A String", # Required. The type of goal to use for tuning. Available types are
+          # `MAXIMIZE` and `MINIMIZE`.
+          #
+          # Defaults to `MAXIMIZE`.
+      "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter
+          # tuning job.
+          # Uses the default AI Platform hyperparameter tuning
+          # algorithm if unspecified.
+      "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing
+          # the hyperparameter tuning job. You can specify this field to override the
+          # default failing criteria for AI Platform hyperparameter tuning jobs.
+          #
+          # Defaults to zero, which means the service decides when a hyperparameter
+          # job should fail.
+      "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
+          # early stopping.
+      "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to
+          # continue with. The job id will be used to find the corresponding vizier
+          # study guid and resume the study.
+      "params": [ # Required. The set of parameters to tune.
+        { # Represents a single hyperparameter to optimize.
+          "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
+              # should be unset if type is `CATEGORICAL`. This value should be an
+              # integer if type is `INTEGER`.
+          "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
+            "A String",
+          ],
+          "discreteValues": [ # Required if type is `DISCRETE`.
+              # A list of feasible points.
+              # The list should be in strictly increasing order. For instance, this
+              # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
+              # should not contain more than 1,000 values.
+            3.14,
+          ],
+          "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
+              # a HyperparameterSpec message. E.g., "learning_rate".
+          "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
+              # should be unset if type is `CATEGORICAL`. This value should be an
+              # integer if type is `INTEGER`.
+          "type": "A String", # Required. The type of the parameter.
+          "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
+              # Leave unset for categorical parameters.
+              # Some kind of scaling is strongly recommended for real or integral
+              # parameters (e.g., `UNIT_LINEAR_SCALE`).
+        },
+      ],
+      "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For
+          # current versions of TensorFlow, this tag name should exactly match what is
+          # shown in TensorBoard, including all scopes.  For versions of TensorFlow
+          # prior to 0.12, this should be only the tag passed to tf.Summary.
+          # By default, "training/hptuning/metric" will be used.
+      "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
+          # You can reduce the time it takes to perform hyperparameter tuning by adding
+          # trials in parallel. However, each trial only benefits from the information
+          # gained in completed trials. That means that a trial does not get access to
+          # the results of trials running at the same time, which could reduce the
+          # quality of the overall optimization.
+          #
+          # Each trial will use the same scale tier and machine types.
+          #
+          # Defaults to one.
+    },
+    "region": "A String", # Required. The Google Compute Engine region to run the training job in.
+        # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+        # for AI Platform services.
+    "args": [ # Optional. Command line arguments to pass to the program.
+      "A String",
+    ],
+    "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
+    "pythonVersion": "A String", # Optional. The version of Python used in training. If not set, the default
+        # version is '2.7'. Python '3.5' is available when `runtime_version` is set
+        # to '1.4' and above. Python '2.7' works with all supported
+        # <a href="/ml-engine/docs/runtime-version-list">runtime versions</a>.
+    "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
+        # and other data needed for training. This path is passed to your TensorFlow
+        # program as the '--job-dir' command-line argument. The benefit of specifying
+        # this field is that Cloud ML validates the path for use in training.
+    "packageUris": [ # Required. The Google Cloud Storage location of the packages with
+        # the training program and any additional dependencies.
+        # The maximum number of package URIs is 100.
+      "A String",
+    ],
+    "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
+        # replica in the cluster will be of the type specified in `worker_type`.
+        #
+        # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
+        # set this value, you must also set `worker_type`.
+        #
+        # The default value is zero.
+    "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
+        # job's parameter server.
+        #
+        # The supported values are the same as those described in the entry for
+        # `master_type`.
+        #
+        # This value must be consistent with the category of machine type that
+        # `masterType` uses. In other words, both must be AI Platform machine
+        # types or both must be Compute Engine machine types.
+        #
+        # This value must be present when `scaleTier` is set to `CUSTOM` and
+        # `parameter_server_count` is greater than zero.
+    "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
+        #
+        # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
+        # to a Compute Engine machine type. [Learn about restrictions on accelerator
+        # configurations for
+        # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+        #
+        # Set `workerConfig.imageUri` only if you build a custom image for your
+        # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
+        # the value of `masterConfig.imageUri`. Learn more about
+        # [configuring custom
+        # containers](/ml-engine/docs/distributed-training-containers).
+      "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
+          # [Learn about restrictions on accelerator configurations for
+          # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+        "count": "A String", # The number of accelerators to attach to each machine running the job.
+        "type": "A String", # The type of accelerator to use.
+      },
+      "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
+          # Registry. Learn more about [configuring custom
+          # containers](/ml-engine/docs/distributed-training-containers).
+    },
+    "maxRunningTime": "A String", # Optional. The maximum job running time. The default is 7 days.
+    "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
+        #
+        # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
+        # to a Compute Engine machine type. Learn about [restrictions on accelerator
+        # configurations for
+        # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+        #
+        # Set `masterConfig.imageUri` only if you build a custom image. Only one of
+        # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more about
+        # [configuring custom
+        # containers](/ml-engine/docs/distributed-training-containers).
+      "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
+          # [Learn about restrictions on accelerator configurations for
+          # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+        "count": "A String", # The number of accelerators to attach to each machine running the job.
+        "type": "A String", # The type of accelerator to use.
+      },
+      "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
+          # Registry. Learn more about [configuring custom
+          # containers](/ml-engine/docs/distributed-training-containers).
+    },
+    "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
+        # job. Each replica in the cluster will be of the type specified in
+        # `parameter_server_type`.
+        #
+        # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
+        # set this value, you must also set `parameter_server_type`.
+        #
+        # The default value is zero.
+  },
+  "jobId": "A String", # Required. The user-specified id of the job.
+  "labels": { # Optional. One or more labels that you can add, to organize your jobs.
+      # Each label is a key-value pair, where both the key and the value are
+      # arbitrary strings that you supply.
+      # For more information, see the documentation on
+      # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+    "a_key": "A String",
+  },
+  "state": "A String", # Output only. The detailed state of a job.
+  "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+      # prevent simultaneous updates of a job from overwriting each other.
+      # It is strongly suggested that systems make use of the `etag` in the
+      # read-modify-write cycle to perform job updates in order to avoid race
+      # conditions: An `etag` is returned in the response to `GetJob`, and
+      # systems are expected to put that etag in the request to `UpdateJob` to
+      # ensure that their change will be applied to the same version of the job.
+  "startTime": "A String", # Output only. When the job processing was started.
+  "endTime": "A String", # Output only. When the job processing was completed.
+  "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
+    "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
+    "nodeHours": 3.14, # Node hours used by the batch prediction job.
+    "predictionCount": "A String", # The number of generated predictions.
+    "errorCount": "A String", # The number of data instances which resulted in errors.
+  },
+  "createTime": "A String", # Output only. When the job was created.
+}
+
+  updateMask: string, Required. Specifies the path, relative to `Job`, of the field to update.
+To adopt the etag mechanism, include the `etag` field in the mask, and include
+the `etag` value in your job resource.
+
+For example, to change the labels of a job, the `update_mask` parameter
+would be specified as `labels,etag`, and the
+`PATCH` request body would specify the new values, as follows:
+    {
+      "labels": {
+         "owner": "Google",
+         "color": "Blue"
+      },
+      "etag": "33a64df551425fcc55e4d42a148795d9f25f89d4"
+    }
+If `etag` matches the one on the server, the labels of the job will be
+replaced with the given ones, and the server-side `etag` will be
+recalculated.
+
+Currently the only supported update masks are `labels` and `etag`.
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Represents a training or prediction job.
+    "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
+    "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
+      "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
+          # Only set for hyperparameter tuning jobs.
+      "trials": [ # Results for individual Hyperparameter trials.
+          # Only set for hyperparameter tuning jobs.
+        { # Represents the result of a single hyperparameter tuning trial from a
+            # training job. The TrainingOutput object that is returned on successful
+            # completion of a training job with hyperparameter tuning includes a list
+            # of HyperparameterOutput objects, one for each successful trial.
+          "hyperparameters": { # The hyperparameters given to this trial.
+            "a_key": "A String",
+          },
+          "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
+            "trainingStep": "A String", # The global training step for this metric.
+            "objectiveValue": 3.14, # The objective value at this training step.
+          },
+          "allMetrics": [ # All recorded object metrics for this trial. This field is not currently
+              # populated.
+            { # An observed value of a metric.
+              "trainingStep": "A String", # The global training step for this metric.
+              "objectiveValue": 3.14, # The objective value at this training step.
+            },
+          ],
+          "isTrialStoppedEarly": True or False, # True if the trial is stopped early.
+          "trialId": "A String", # The trial id for these results.
+          "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
+              # Only set for trials of built-in algorithms jobs that have succeeded.
+            "framework": "A String", # Framework on which the built-in algorithm was trained.
+            "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
+                # saves the trained model. Only set for successful jobs that don't use
+                # hyperparameter tuning.
+            "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
+                # trained.
+            "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
+          },
+        },
+      ],
+      "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
+      "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job.
+      "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
+      "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning
+          # trials. See
+          # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
+          # for more information. Only set for hyperparameter tuning jobs.
+      "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
+          # Only set for built-in algorithms jobs.
+        "framework": "A String", # Framework on which the built-in algorithm was trained.
+        "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
+            # saves the trained model. Only set for successful jobs that don't use
+            # hyperparameter tuning.
+        "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
+            # trained.
+        "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
+      },
+    },
+    "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
+      "modelName": "A String", # Use this field if you want to use the default version for the specified
+          # model. The string must use the following format:
+          #
+          # `"projects/YOUR_PROJECT/models/YOUR_MODEL"`
+      "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch
+          # prediction. If not set, AI Platform will pick the runtime version used
+          # during the CreateVersion request for this model version, or choose the
+          # latest stable version when model version information is not available,
+          # such as when the model is specified by uri.
+      "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for
+          # this job. Please refer to
+          # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
+          # for information about how to use signatures.
+          #
+          # Defaults to
+          # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
+          # , which is "serving_default".
+      "batchSize": "A String", # Optional. Number of records per batch, defaults to 64.
+          # The service will buffer batch_size records in memory before invoking
+          # one TensorFlow prediction call internally, so take the record size and
+          # available memory into consideration when setting this parameter.
+      "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain
+          # <a href="/storage/docs/gsutil/addlhelp/WildcardNames">wildcards</a>.
+        "A String",
+      ],
+      "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
+          # Defaults to 10 if not specified.
+      "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
+          # the model to use.
+      "outputPath": "A String", # Required. The output Google Cloud Storage location.
+      "dataFormat": "A String", # Required. The format of the input data files.
+      "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
+          # string is formatted the same way as `model_version`, with the addition
+          # of the version information:
+          #
+          # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"`
+      "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
+          # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+          # for AI Platform services.
+      "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON.
+    },
+    "trainingInput": { # Represents input parameters for a training job. When using the # Input parameters to create a training job.
+        # gcloud command to submit your training job, you can specify
+        # the input parameters as command-line arguments and/or in a YAML configuration
+        # file referenced from the --config command-line argument. For
+        # details, see the guide to
+        # <a href="/ml-engine/docs/tensorflow/training-jobs">submitting a training
+        # job</a>.
+      "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
+          # job's worker nodes.
+          #
+          # The supported values are the same as those described in the entry for
+          # `masterType`.
+          #
+          # This value must be consistent with the category of machine type that
+          # `masterType` uses. In other words, both must be AI Platform machine
+          # types or both must be Compute Engine machine types.
+          #
+          # If you use `cloud_tpu` for this value, see special instructions for
+          # [configuring a custom TPU
+          # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
+          #
+          # This value must be present when `scaleTier` is set to `CUSTOM` and
+          # `workerCount` is greater than zero.
+      "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
+          #
+          # You should only set `parameterServerConfig.acceleratorConfig` if
+          # `parameterServerType` is set to a Compute Engine machine type. [Learn
+          # about restrictions on accelerator configurations for
+          # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+          #
+          # Set `parameterServerConfig.imageUri` only if you build a custom image for
+          # your parameter server. If `parameterServerConfig.imageUri` has not been
+          # set, AI Platform uses the value of `masterConfig.imageUri`.
+          # Learn more about [configuring custom
+          # containers](/ml-engine/docs/distributed-training-containers).
+        "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
+            # [Learn about restrictions on accelerator configurations for
+            # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+          "count": "A String", # The number of accelerators to attach to each machine running the job.
+          "type": "A String", # The type of accelerator to use.
+        },
+        "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
+            # Registry. Learn more about [configuring custom
+            # containers](/ml-engine/docs/distributed-training-containers).
+      },
+      "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. If not
+          # set, AI Platform uses the default stable version, 1.0. For more
+          # information, see the
+          # <a href="/ml-engine/docs/runtime-version-list">runtime version list</a>
+          # and
+          # <a href="/ml-engine/docs/versioning">how to manage runtime versions</a>.
+      "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
+          # and parameter servers.
+      "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
+          # job's master worker.
+          #
+          # The following types are supported:
+          #
+          # <dl>
+          #   <dt>standard</dt>
+          #   <dd>
+          #   A basic machine configuration suitable for training simple models with
+          #   small to moderate datasets.
+          #   </dd>
+          #   <dt>large_model</dt>
+          #   <dd>
+          #   A machine with a lot of memory, specially suited for parameter servers
+          #   when your model is large (having many hidden layers or layers with very
+          #   large numbers of nodes).
+          #   </dd>
+          #   <dt>complex_model_s</dt>
+          #   <dd>
+          #   A machine suitable for the master and workers of the cluster when your
+          #   model requires more computation than the standard machine can handle
+          #   satisfactorily.
+          #   </dd>
+          #   <dt>complex_model_m</dt>
+          #   <dd>
+          #   A machine with roughly twice the number of cores and roughly double the
+          #   memory of <i>complex_model_s</i>.
+          #   </dd>
+          #   <dt>complex_model_l</dt>
+          #   <dd>
+          #   A machine with roughly twice the number of cores and roughly double the
+          #   memory of <i>complex_model_m</i>.
+          #   </dd>
+          #   <dt>standard_gpu</dt>
+          #   <dd>
+          #   A machine equivalent to <i>standard</i> that
+          #   also includes a single NVIDIA Tesla K80 GPU. See more about
+          #   <a href="/ml-engine/docs/tensorflow/using-gpus">using GPUs to
+          #   train your model</a>.
+          #   </dd>
+          #   <dt>complex_model_m_gpu</dt>
+          #   <dd>
+          #   A machine equivalent to <i>complex_model_m</i> that also includes
+          #   four NVIDIA Tesla K80 GPUs.
+          #   </dd>
+          #   <dt>complex_model_l_gpu</dt>
+          #   <dd>
+          #   A machine equivalent to <i>complex_model_l</i> that also includes
+          #   eight NVIDIA Tesla K80 GPUs.
+          #   </dd>
+          #   <dt>standard_p100</dt>
+          #   <dd>
+          #   A machine equivalent to <i>standard</i> that
+          #   also includes a single NVIDIA Tesla P100 GPU.
+          #   </dd>
+          #   <dt>complex_model_m_p100</dt>
+          #   <dd>
+          #   A machine equivalent to <i>complex_model_m</i> that also includes
+          #   four NVIDIA Tesla P100 GPUs.
+          #   </dd>
+          #   <dt>standard_v100</dt>
+          #   <dd>
+          #   A machine equivalent to <i>standard</i> that
+          #   also includes a single NVIDIA Tesla V100 GPU.
+          #   </dd>
+          #   <dt>large_model_v100</dt>
+          #   <dd>
+          #   A machine equivalent to <i>large_model</i> that
+          #   also includes a single NVIDIA Tesla V100 GPU.
+          #   </dd>
+          #   <dt>complex_model_m_v100</dt>
+          #   <dd>
+          #   A machine equivalent to <i>complex_model_m</i> that
+          #   also includes four NVIDIA Tesla V100 GPUs.
+          #   </dd>
+          #   <dt>complex_model_l_v100</dt>
+          #   <dd>
+          #   A machine equivalent to <i>complex_model_l</i> that
+          #   also includes eight NVIDIA Tesla V100 GPUs.
+          #   </dd>
+          #   <dt>cloud_tpu</dt>
+          #   <dd>
+          #   A TPU VM including one Cloud TPU. See more about
+          #   <a href="/ml-engine/docs/tensorflow/using-tpus">using TPUs to train
+          #   your model</a>.
+          #   </dd>
+          # </dl>
+          #
+          # You may also use certain Compute Engine machine types directly in this
+          # field. The following types are supported:
+          #
+          # - `n1-standard-4`
+          # - `n1-standard-8`
+          # - `n1-standard-16`
+          # - `n1-standard-32`
+          # - `n1-standard-64`
+          # - `n1-standard-96`
+          # - `n1-highmem-2`
+          # - `n1-highmem-4`
+          # - `n1-highmem-8`
+          # - `n1-highmem-16`
+          # - `n1-highmem-32`
+          # - `n1-highmem-64`
+          # - `n1-highmem-96`
+          # - `n1-highcpu-16`
+          # - `n1-highcpu-32`
+          # - `n1-highcpu-64`
+          # - `n1-highcpu-96`
+          #
+          # See more about [using Compute Engine machine
+          # types](/ml-engine/docs/tensorflow/machine-types#compute-engine-machine-types).
+          #
+          # You must set this value when `scaleTier` is set to `CUSTOM`.
+      "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
+        "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
+            # the specified hyperparameters.
+            #
+            # Defaults to one.
+        "goal": "A String", # Required. The type of goal to use for tuning. Available types are
+            # `MAXIMIZE` and `MINIMIZE`.
+            #
+            # Defaults to `MAXIMIZE`.
+        "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter
+            # tuning job.
+            # Uses the default AI Platform hyperparameter tuning
+            # algorithm if unspecified.
+        "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing
+            # the hyperparameter tuning job. You can specify this field to override the
+            # default failing criteria for AI Platform hyperparameter tuning jobs.
+            #
+            # Defaults to zero, which means the service decides when a hyperparameter
+            # job should fail.
+        "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
+            # early stopping.
+        "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to
+            # continue with. The job id will be used to find the corresponding vizier
+            # study guid and resume the study.
+        "params": [ # Required. The set of parameters to tune.
+          { # Represents a single hyperparameter to optimize.
+            "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
+                # should be unset if type is `CATEGORICAL`. This value should be an
+                # integer if type is `INTEGER`.
+            "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
+              "A String",
+            ],
+            "discreteValues": [ # Required if type is `DISCRETE`.
+                # A list of feasible points.
+                # The list should be in strictly increasing order. For instance, this
+                # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
+                # should not contain more than 1,000 values.
+              3.14,
+            ],
+            "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
+                # a HyperparameterSpec message. E.g., "learning_rate".
+            "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
+                # should be unset if type is `CATEGORICAL`. This value should be an
+                # integer if type is `INTEGER`.
+            "type": "A String", # Required. The type of the parameter.
+            "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
+                # Leave unset for categorical parameters.
+                # Some kind of scaling is strongly recommended for real or integral
+                # parameters (e.g., `UNIT_LINEAR_SCALE`).
+          },
+        ],
+        "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For
+            # current versions of TensorFlow, this tag name should exactly match what is
+            # shown in TensorBoard, including all scopes.  For versions of TensorFlow
+            # prior to 0.12, this should be only the tag passed to tf.Summary.
+            # By default, "training/hptuning/metric" will be used.
+        "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
+            # You can reduce the time it takes to perform hyperparameter tuning by adding
+            # trials in parallel. However, each trial only benefits from the information
+            # gained in completed trials. That means that a trial does not get access to
+            # the results of trials running at the same time, which could reduce the
+            # quality of the overall optimization.
+            #
+            # Each trial will use the same scale tier and machine types.
+            #
+            # Defaults to one.
+      },
+      "region": "A String", # Required. The Google Compute Engine region to run the training job in.
+          # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
+          # for AI Platform services.
+      "args": [ # Optional. Command line arguments to pass to the program.
+        "A String",
+      ],
+      "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
+      "pythonVersion": "A String", # Optional. The version of Python used in training. If not set, the default
+          # version is '2.7'. Python '3.5' is available when `runtime_version` is set
+          # to '1.4' and above. Python '2.7' works with all supported
+          # <a href="/ml-engine/docs/runtime-version-list">runtime versions</a>.
+      "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
+          # and other data needed for training. This path is passed to your TensorFlow
+          # program as the '--job-dir' command-line argument. The benefit of specifying
+          # this field is that Cloud ML validates the path for use in training.
+      "packageUris": [ # Required. The Google Cloud Storage location of the packages with
+          # the training program and any additional dependencies.
+          # The maximum number of package URIs is 100.
+        "A String",
+      ],
+      "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
+          # replica in the cluster will be of the type specified in `worker_type`.
+          #
+          # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
+          # set this value, you must also set `worker_type`.
+          #
+          # The default value is zero.
+      "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
+          # job's parameter server.
+          #
+          # The supported values are the same as those described in the entry for
+          # `master_type`.
+          #
+          # This value must be consistent with the category of machine type that
+          # `masterType` uses. In other words, both must be AI Platform machine
+          # types or both must be Compute Engine machine types.
+          #
+          # This value must be present when `scaleTier` is set to `CUSTOM` and
+          # `parameter_server_count` is greater than zero.
+      "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
+          #
+          # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
+          # to a Compute Engine machine type. [Learn about restrictions on accelerator
+          # configurations for
+          # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+          #
+          # Set `workerConfig.imageUri` only if you build a custom image for your
+          # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
+          # the value of `masterConfig.imageUri`. Learn more about
+          # [configuring custom
+          # containers](/ml-engine/docs/distributed-training-containers).
+        "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
+            # [Learn about restrictions on accelerator configurations for
+            # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+          "count": "A String", # The number of accelerators to attach to each machine running the job.
+          "type": "A String", # The type of accelerator to use.
+        },
+        "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
+            # Registry. Learn more about [configuring custom
+            # containers](/ml-engine/docs/distributed-training-containers).
+      },
+      "maxRunningTime": "A String", # Optional. The maximum job running time. The default is 7 days.
+      "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
+          #
+          # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
+          # to a Compute Engine machine type. Learn about [restrictions on accelerator
+          # configurations for
+          # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+          #
+          # Set `masterConfig.imageUri` only if you build a custom image. Only one of
+          # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more about
+          # [configuring custom
+          # containers](/ml-engine/docs/distributed-training-containers).
+        "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
+            # [Learn about restrictions on accelerator configurations for
+            # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
+          "count": "A String", # The number of accelerators to attach to each machine running the job.
+          "type": "A String", # The type of accelerator to use.
+        },
+        "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
+            # Registry. Learn more about [configuring custom
+            # containers](/ml-engine/docs/distributed-training-containers).
+      },
+      "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
+          # job. Each replica in the cluster will be of the type specified in
+          # `parameter_server_type`.
+          #
+          # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
+          # set this value, you must also set `parameter_server_type`.
+          #
+          # The default value is zero.
+    },
+    "jobId": "A String", # Required. The user-specified id of the job.
+    "labels": { # Optional. One or more labels that you can add, to organize your jobs.
+        # Each label is a key-value pair, where both the key and the value are
+        # arbitrary strings that you supply.
+        # For more information, see the documentation on
+        # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
+      "a_key": "A String",
+    },
+    "state": "A String", # Output only. The detailed state of a job.
+    "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+        # prevent simultaneous updates of a job from overwriting each other.
+        # It is strongly suggested that systems make use of the `etag` in the
+        # read-modify-write cycle to perform job updates in order to avoid race
+        # conditions: An `etag` is returned in the response to `GetJob`, and
+        # systems are expected to put that etag in the request to `UpdateJob` to
+        # ensure that their change will be applied to the same version of the job.
+    "startTime": "A String", # Output only. When the job processing was started.
+    "endTime": "A String", # Output only. When the job processing was completed.
+    "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
+      "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
+      "nodeHours": 3.14, # Node hours used by the batch prediction job.
+      "predictionCount": "A String", # The number of generated predictions.
+      "errorCount": "A String", # The number of data instances which resulted in errors.
+    },
+    "createTime": "A String", # Output only. When the job was created.
+  }</pre>
+</div>
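+<p>A minimal read-modify-write sketch for the <code>patch</code> method
+documented above; this is an illustrative snippet, assuming application
+default credentials and placeholder project and job IDs:</p>
+<pre>
+from googleapiclient import discovery
+
+ml = discovery.build('ml', 'v1')
+name = 'projects/my-project/jobs/my_job'
+
+# Read the job first to obtain its current etag.
+job = ml.projects().jobs().get(name=name).execute()
+
+# Send the etag back with the new labels so the update is rejected if the
+# job was modified concurrently (optimistic concurrency control).
+body = {
+    'labels': {'owner': 'alice', 'color': 'blue'},
+    'etag': job['etag'],
+}
+updated = ml.projects().jobs().patch(
+    name=name, body=body, updateMask='labels,etag').execute()
+print(updated['labels'])
+</pre>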
+
+<div class="method">
+    <code class="details" id="setIamPolicy">setIamPolicy(resource, body, x__xgafv=None)</code>
+  <pre>Sets the access control policy on the specified resource. Replaces any
+existing policy.
+
+Args:
+  resource: string, REQUIRED: The resource for which the policy is being specified.
+See the operation documentation for the appropriate value for this field. (required)
+  body: object, The request body. (required)
+    The object takes the form of:
+
+{ # Request message for `SetIamPolicy` method.
+    "policy": { # Defines an Identity and Access Management (IAM) policy. It is used to # REQUIRED: The complete policy to be applied to the `resource`. The size of
+        # the policy is limited to a few tens of KB. An empty policy is a
+        # valid policy, but certain Cloud Platform services (such as Projects)
+        # might reject it.
+        # specify access control policies for Cloud Platform resources.
+        #
+        #
+        # A `Policy` consists of a list of `bindings`. A `binding` binds a list of
+        # `members` to a `role`, where the members can be user accounts, Google groups,
+        # Google domains, and service accounts. A `role` is a named list of permissions
+        # defined by IAM.
+        #
+        # **JSON Example**
+        #
+        #     {
+        #       "bindings": [
+        #         {
+        #           "role": "roles/owner",
+        #           "members": [
+        #             "user:mike@example.com",
+        #             "group:admins@example.com",
+        #             "domain:google.com",
+        #             "serviceAccount:my-other-app@appspot.gserviceaccount.com"
+        #           ]
+        #         },
+        #         {
+        #           "role": "roles/viewer",
+        #           "members": ["user:sean@example.com"]
+        #         }
+        #       ]
+        #     }
+        #
+        # **YAML Example**
+        #
+        #     bindings:
+        #     - members:
+        #       - user:mike@example.com
+        #       - group:admins@example.com
+        #       - domain:google.com
+        #       - serviceAccount:my-other-app@appspot.gserviceaccount.com
+        #       role: roles/owner
+        #     - members:
+        #       - user:sean@example.com
+        #       role: roles/viewer
+        #
+        #
+        # For a description of IAM and its features, see the
+        # [IAM developer's guide](https://cloud.google.com/iam/docs).
+      "bindings": [ # Associates a list of `members` to a `role`.
+          # `bindings` with no members will result in an error.
+        { # Associates `members` with a `role`.
+          "role": "A String", # Role that is assigned to `members`.
+              # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
+          "members": [ # Specifies the identities requesting access for a Cloud Platform resource.
+              # `members` can have the following values:
+              #
+              # * `allUsers`: A special identifier that represents anyone who is
+              #    on the internet, with or without a Google account.
+              #
+              # * `allAuthenticatedUsers`: A special identifier that represents anyone
+              #    who is authenticated with a Google account or a service account.
+              #
+              # * `user:{emailid}`: An email address that represents a specific Google
+              #    account. For example, `alice@gmail.com` .
+              #
+              #
+              # * `serviceAccount:{emailid}`: An email address that represents a service
+              #    account. For example, `my-other-app@appspot.gserviceaccount.com`.
+              #
+              # * `group:{emailid}`: An email address that represents a Google group.
+              #    For example, `admins@example.com`.
+              #
+              #
+              # * `domain:{domain}`: The G Suite domain (primary) that represents all the
+              #    users of that domain. For example, `google.com` or `example.com`.
+              #
+            "A String",
+          ],
+          "condition": { # Represents an expression text. Example: # The condition that is associated with this binding.
+              # NOTE: An unsatisfied condition will not allow user access via current
+              # binding. Different bindings, including their conditions, are examined
+              # independently.
+              #
+              #     title: "User account presence"
+              #     description: "Determines whether the request has a user account"
+              #     expression: "size(request.user) > 0"
+            "description": "A String", # An optional description of the expression. This is a longer text which
+                # describes the expression, e.g. when hovered over it in a UI.
+            "expression": "A String", # Textual representation of an expression in
+                # Common Expression Language syntax.
+                #
+                # The application context of the containing message determines which
+                # well-known feature set of CEL is supported.
+            "location": "A String", # An optional string indicating the location of the expression for error
+                # reporting, e.g. a file name and a position in the file.
+            "title": "A String", # An optional title for the expression, i.e. a short string describing
+                # its purpose. This can be used e.g. in UIs which allow to enter the
+                # expression.
+          },
+        },
+      ],
+      "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+          # prevent simultaneous updates of a policy from overwriting each other.
+          # It is strongly suggested that systems make use of the `etag` in the
+          # read-modify-write cycle to perform policy updates in order to avoid race
+          # conditions: An `etag` is returned in the response to `getIamPolicy`, and
+          # systems are expected to put that etag in the request to `setIamPolicy` to
+          # ensure that their change will be applied to the same version of the policy.
+          #
+          # If no `etag` is provided in the call to `setIamPolicy`, then the existing
+          # policy is overwritten blindly.
+      "version": 42, # Deprecated.
+      "auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
+        { # Specifies the audit configuration for a service.
+            # The configuration determines which permission types are logged, and what
+            # identities, if any, are exempted from logging.
+            # An AuditConfig must have one or more AuditLogConfigs.
+            #
+            # If there are AuditConfigs for both `allServices` and a specific service,
+            # the union of the two AuditConfigs is used for that service: the log_types
+            # specified in each AuditConfig are enabled, and the exempted_members in each
+            # AuditLogConfig are exempted.
+            #
+            # Example Policy with multiple AuditConfigs:
+            #
+            #     {
+            #       "audit_configs": [
+            #         {
+            #           "service": "allServices"
+            #           "audit_log_configs": [
+            #             {
+            #               "log_type": "DATA_READ",
+            #               "exempted_members": [
+            #                 "user:foo@gmail.com"
+            #               ]
+            #             },
+            #             {
+            #               "log_type": "DATA_WRITE",
+            #             },
+            #             {
+            #               "log_type": "ADMIN_READ",
+            #             }
+            #           ]
+            #         },
+            #         {
+            #           "service": "fooservice.googleapis.com"
+            #           "audit_log_configs": [
+            #             {
+            #               "log_type": "DATA_READ",
+            #             },
+            #             {
+            #               "log_type": "DATA_WRITE",
+            #               "exempted_members": [
+            #                 "user:bar@gmail.com"
+            #               ]
+            #             }
+            #           ]
+            #         }
+            #       ]
+            #     }
+            #
+            # For fooservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
+            # logging. It also exempts foo@gmail.com from DATA_READ logging, and
+            # bar@gmail.com from DATA_WRITE logging.
+          "auditLogConfigs": [ # The configuration for logging of each type of permission.
+            { # Provides the configuration for logging a type of permission.
+                # Example:
+                #
+                #     {
+                #       "audit_log_configs": [
+                #         {
+                #           "log_type": "DATA_READ",
+                #           "exempted_members": [
+                #             "user:foo@gmail.com"
+                #           ]
+                #         },
+                #         {
+                #           "log_type": "DATA_WRITE",
+                #         }
+                #       ]
+                #     }
+                #
+                # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
+                # foo@gmail.com from DATA_READ logging.
+              "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
+                  # permission.
+                  # Follows the same format as Binding.members.
+                "A String",
+              ],
+              "logType": "A String", # The log type that this config enables.
+            },
+          ],
+          "service": "A String", # Specifies a service that will be enabled for audit logging.
+              # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
+              # `allServices` is a special value that covers all services.
+        },
+      ],
+    },
+    "updateMask": "A String", # OPTIONAL: A FieldMask specifying which fields of the policy to modify. Only
+        # the fields in the mask will be modified. If no mask is provided, the
+        # following default mask is used:
+        # paths: "bindings, etag"
+        # This field is only used by Cloud IAM.
+  }
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Defines an Identity and Access Management (IAM) policy. It is used to
+      # specify access control policies for Cloud Platform resources.
+      #
+      # A `Policy` consists of a list of `bindings`. A `binding` binds a list of
+      # `members` to a `role`, where the members can be user accounts, Google groups,
+      # Google domains, and service accounts. A `role` is a named list of permissions
+      # defined by IAM.
+      #
+      # **JSON Example**
+      #
+      #     {
+      #       "bindings": [
+      #         {
+      #           "role": "roles/owner",
+      #           "members": [
+      #             "user:mike@example.com",
+      #             "group:admins@example.com",
+      #             "domain:google.com",
+      #             "serviceAccount:my-other-app@appspot.gserviceaccount.com"
+      #           ]
+      #         },
+      #         {
+      #           "role": "roles/viewer",
+      #           "members": ["user:sean@example.com"]
+      #         }
+      #       ]
+      #     }
+      #
+      # **YAML Example**
+      #
+      #     bindings:
+      #     - members:
+      #       - user:mike@example.com
+      #       - group:admins@example.com
+      #       - domain:google.com
+      #       - serviceAccount:my-other-app@appspot.gserviceaccount.com
+      #       role: roles/owner
+      #     - members:
+      #       - user:sean@example.com
+      #       role: roles/viewer
+      #
+      # For a description of IAM and its features, see the
+      # [IAM developer's guide](https://cloud.google.com/iam/docs).
+    "bindings": [ # Associates a list of `members` to a `role`.
+        # `bindings` with no members will result in an error.
+      { # Associates `members` with a `role`.
+        "role": "A String", # Role that is assigned to `members`.
+            # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
+        "members": [ # Specifies the identities requesting access for a Cloud Platform resource.
+            # `members` can have the following values:
+            #
+            # * `allUsers`: A special identifier that represents anyone who is
+            #    on the internet, with or without a Google account.
+            #
+            # * `allAuthenticatedUsers`: A special identifier that represents anyone
+            #    who is authenticated with a Google account or a service account.
+            #
+            # * `user:{emailid}`: An email address that represents a specific Google
+            #    account. For example, `alice@gmail.com`.
+            #
+            # * `serviceAccount:{emailid}`: An email address that represents a service
+            #    account. For example, `my-other-app@appspot.gserviceaccount.com`.
+            #
+            # * `group:{emailid}`: An email address that represents a Google group.
+            #    For example, `admins@example.com`.
+            #
+            # * `domain:{domain}`: The G Suite domain (primary) that represents all the
+            #    users of that domain. For example, `google.com` or `example.com`.
+            #
+          "A String",
+        ],
+        "condition": { # Represents an expression text. Example: # The condition that is associated with this binding.
+            # NOTE: An unsatisfied condition will not allow user access via current
+            # binding. Different bindings, including their conditions, are examined
+            # independently.
+            #
+            #     title: "User account presence"
+            #     description: "Determines whether the request has a user account"
+            #     expression: "size(request.user) > 0"
+          "description": "A String", # An optional description of the expression. This is a longer text which
+              # describes the expression, e.g. when hovered over it in a UI.
+          "expression": "A String", # Textual representation of an expression in
+              # Common Expression Language syntax.
+              #
+              # The application context of the containing message determines which
+              # well-known feature set of CEL is supported.
+          "location": "A String", # An optional string indicating the location of the expression for error
+              # reporting, e.g. a file name and a position in the file.
+          "title": "A String", # An optional title for the expression, i.e. a short string describing
+              # its purpose. This can be used e.g. in UIs which allow to enter the
+              # expression.
+        },
+      },
+    ],
+    "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
+        # prevent simultaneous updates of a policy from overwriting each other.
+        # It is strongly suggested that systems make use of the `etag` in the
+        # read-modify-write cycle to perform policy updates in order to avoid race
+        # conditions: An `etag` is returned in the response to `getIamPolicy`, and
+        # systems are expected to put that etag in the request to `setIamPolicy` to
+        # ensure that their change will be applied to the same version of the policy.
+        #
+        # If no `etag` is provided in the call to `setIamPolicy`, then the existing
+        # policy is overwritten blindly.
+    "version": 42, # Deprecated.
+    "auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
+      { # Specifies the audit configuration for a service.
+          # The configuration determines which permission types are logged, and what
+          # identities, if any, are exempted from logging.
+          # An AuditConfig must have one or more AuditLogConfigs.
+          #
+          # If there are AuditConfigs for both `allServices` and a specific service,
+          # the union of the two AuditConfigs is used for that service: the log_types
+          # specified in each AuditConfig are enabled, and the exempted_members in each
+          # AuditLogConfig are exempted.
+          #
+          # Example Policy with multiple AuditConfigs:
+          #
+          #     {
+          #       "audit_configs": [
+          #         {
+          #           "service": "allServices"
+          #           "audit_log_configs": [
+          #             {
+          #               "log_type": "DATA_READ",
+          #               "exempted_members": [
+          #                 "user:foo@gmail.com"
+          #               ]
+          #             },
+          #             {
+          #               "log_type": "DATA_WRITE",
+          #             },
+          #             {
+          #               "log_type": "ADMIN_READ",
+          #             }
+          #           ]
+          #         },
+          #         {
+          #           "service": "fooservice.googleapis.com"
+          #           "audit_log_configs": [
+          #             {
+          #               "log_type": "DATA_READ",
+          #             },
+          #             {
+          #               "log_type": "DATA_WRITE",
+          #               "exempted_members": [
+          #                 "user:bar@gmail.com"
+          #               ]
+          #             }
+          #           ]
+          #         }
+          #       ]
+          #     }
+          #
+          # For fooservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
+          # logging. It also exempts foo@gmail.com from DATA_READ logging, and
+          # bar@gmail.com from DATA_WRITE logging.
+        "auditLogConfigs": [ # The configuration for logging of each type of permission.
+          { # Provides the configuration for logging a type of permission.
+              # Example:
+              #
+              #     {
+              #       "audit_log_configs": [
+              #         {
+              #           "log_type": "DATA_READ",
+              #           "exempted_members": [
+              #             "user:foo@gmail.com"
+              #           ]
+              #         },
+              #         {
+              #           "log_type": "DATA_WRITE",
+              #         }
+              #       ]
+              #     }
+              #
+              # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
+              # foo@gmail.com from DATA_READ logging.
+            "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
+                # permission.
+                # Follows the same format as Binding.members.
+              "A String",
+            ],
+            "logType": "A String", # The log type that this config enables.
+          },
+        ],
+        "service": "A String", # Specifies a service that will be enabled for audit logging.
+            # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
+            # `allServices` is a special value that covers all services.
+      },
+    ],
+  }</pre>
+</div>
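+<p>The following is a minimal sketch of the read-modify-write cycle described
+above, using the generated Python client. The project and job names, role,
+and member are illustrative placeholders, and application default credentials
+are assumed:</p>
+<pre>
+from googleapiclient import discovery
+
+# Assumes application default credentials are available.
+ml = discovery.build('ml', 'v1')
+
+# Illustrative resource name; see the operation documentation for the
+# appropriate value for this field.
+resource = 'projects/my-project/jobs/my_job'
+
+# Read the current policy; the response carries an `etag`.
+policy = ml.projects().jobs().getIamPolicy(resource=resource).execute()
+
+# Modify the policy locally (illustrative role and member).
+policy.setdefault('bindings', []).append({
+    'role': 'roles/viewer',
+    'members': ['user:sean@example.com'],
+})
+
+# Write the policy back. Because the fetched `etag` is included, the update
+# fails rather than blindly overwriting a concurrent change.
+ml.projects().jobs().setIamPolicy(
+    resource=resource, body={'policy': policy}).execute()
+</pre>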
+
+<div class="method">
+    <code class="details" id="testIamPermissions">testIamPermissions(resource, body, x__xgafv=None)</code>
+  <pre>Returns permissions that a caller has on the specified resource.
+If the resource does not exist, this will return an empty set of
+permissions, not a NOT_FOUND error.
+
+Note: This operation is designed to be used for building permission-aware
+UIs and command-line tools, not for authorization checking. This operation
+may "fail open" without warning.
+
+Args:
+  resource: string, REQUIRED: The resource for which the policy detail is being requested.
+See the operation documentation for the appropriate value for this field. (required)
+  body: object, The request body. (required)
+    The object takes the form of:
+
+{ # Request message for `TestIamPermissions` method.
+    "permissions": [ # The set of permissions to check for the `resource`. Permissions with
+        # wildcards (such as '*' or 'storage.*') are not allowed. For more
+        # information see
+        # [IAM Overview](https://cloud.google.com/iam/docs/overview#permissions).
+      "A String",
+    ],
+  }
+
+  x__xgafv: string, V1 error format.
+    Allowed values
+      1 - v1 error format
+      2 - v2 error format
+
+Returns:
+  An object of the form:
+
+    { # Response message for `TestIamPermissions` method.
+    "permissions": [ # A subset of `TestPermissionsRequest.permissions` that the caller is
+        # allowed.
+      "A String",
+    ],
+  }</pre>
+</div>
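+<p>A minimal sketch of a permission check with the generated Python client
+follows; the resource name and permission strings are illustrative
+placeholders, and wildcards are not allowed:</p>
+<pre>
+from googleapiclient import discovery
+
+# Assumes application default credentials are available.
+ml = discovery.build('ml', 'v1')
+
+# Illustrative permissions to probe for an illustrative job resource.
+body = {'permissions': ['ml.jobs.get', 'ml.jobs.cancel']}
+response = ml.projects().jobs().testIamPermissions(
+    resource='projects/my-project/jobs/my_job', body=body).execute()
+
+# Only the granted subset is returned; a nonexistent resource yields an
+# empty set rather than a NOT_FOUND error.
+granted = set(response.get('permissions', []))
+</pre>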
+
 </body></html>
\ No newline at end of file