chore: regenerate API reference docs (#889)
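
The regenerated pages below make the body parameter keyword-optional
(body=None) on create, instantiate, instantiateInline, setIamPolicy,
testIamPermissions, and update. As a minimal sketch of the updated call
shape with google-api-python-client -- assuming Application Default
Credentials and placeholder project, region, bucket, and template values:

    from googleapiclient.discovery import build

    # Build the Dataproc v1 client; credentials come from the environment.
    dataproc = build('dataproc', 'v1')

    # Hypothetical parent resource, in the documented form:
    # projects/{project_id}/regions/{region}
    parent = 'projects/my-project/regions/us-central1'

    # create(): body is keyword-optional in the signature after this regen,
    # but a template payload is still needed for a meaningful request.
    template = dataproc.projects().regions().workflowTemplates().create(
        parent=parent,
        body={
            'id': 'example-template',
            'placement': {
                'clusterSelector': {'clusterLabels': {'env': 'dev'}},
            },
            'jobs': [{
                'stepId': 'step-1',
                'hadoopJob': {'mainJarFileUri': 'gs://my-bucket/job.jar'},
            }],
        },
    ).execute()

    # instantiate() returns a long-running Operation; track it by polling
    # operations.get, as the method docs describe.
    operation = dataproc.projects().regions().workflowTemplates().instantiate(
        name=template['name'],
        body={},
    ).execute()
    print(operation['name'])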

diff --git a/docs/dyn/dataproc_v1.projects.regions.workflowTemplates.html b/docs/dyn/dataproc_v1.projects.regions.workflowTemplates.html
index f15548b..6f76d7e 100644
--- a/docs/dyn/dataproc_v1.projects.regions.workflowTemplates.html
+++ b/docs/dyn/dataproc_v1.projects.regions.workflowTemplates.html
@@ -75,7 +75,7 @@
 <h1><a href="dataproc_v1.html">Cloud Dataproc API</a> . <a href="dataproc_v1.projects.html">projects</a> . <a href="dataproc_v1.projects.regions.html">regions</a> . <a href="dataproc_v1.projects.regions.workflowTemplates.html">workflowTemplates</a></h1>
 <h2>Instance Methods</h2>
 <p class="toc_element">
-  <code><a href="#create">create(parent, body, x__xgafv=None)</a></code></p>
+  <code><a href="#create">create(parent, body=None, x__xgafv=None)</a></code></p>
 <p class="firstline">Creates new workflow template.</p>
 <p class="toc_element">
   <code><a href="#delete">delete(name, version=None, x__xgafv=None)</a></code></p>
@@ -87,11 +87,11 @@
   <code><a href="#getIamPolicy">getIamPolicy(resource, body=None, x__xgafv=None)</a></code></p>
 <p class="firstline">Gets the access control policy for a resource. Returns an empty policy if the resource exists and does not have a policy set.</p>
 <p class="toc_element">
-  <code><a href="#instantiate">instantiate(name, body, x__xgafv=None)</a></code></p>
-<p class="firstline">Instantiates a template and begins execution.The returned Operation can be used to track execution of workflow by polling operations.get. The Operation will complete when entire workflow is finished.The running workflow can be aborted via operations.cancel. This will cause any inflight jobs to be cancelled and workflow-owned clusters to be deleted.The Operation.metadata will be WorkflowMetadata.On successful completion, Operation.response will be Empty.</p>
+  <code><a href="#instantiate">instantiate(name, body=None, x__xgafv=None)</a></code></p>
+<p class="firstline">Instantiates a template and begins execution.The returned Operation can be used to track execution of workflow by polling operations.get. The Operation will complete when entire workflow is finished.The running workflow can be aborted via operations.cancel. This will cause any inflight jobs to be cancelled and workflow-owned clusters to be deleted.The Operation.metadata will be WorkflowMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#workflowmetadata). Also see Using WorkflowMetadata (https://cloud.google.com/dataproc/docs/concepts/workflows/debugging#using_workflowmetadata).On successful completion, Operation.response will be Empty.</p>
 <p class="toc_element">
-  <code><a href="#instantiateInline">instantiateInline(parent, body, requestId=None, x__xgafv=None)</a></code></p>
-<p class="firstline">Instantiates a template and begins execution.This method is equivalent to executing the sequence CreateWorkflowTemplate, InstantiateWorkflowTemplate, DeleteWorkflowTemplate.The returned Operation can be used to track execution of workflow by polling operations.get. The Operation will complete when entire workflow is finished.The running workflow can be aborted via operations.cancel. This will cause any inflight jobs to be cancelled and workflow-owned clusters to be deleted.The Operation.metadata will be WorkflowMetadata.On successful completion, Operation.response will be Empty.</p>
+  <code><a href="#instantiateInline">instantiateInline(parent, body=None, requestId=None, x__xgafv=None)</a></code></p>
+<p class="firstline">Instantiates a template and begins execution.This method is equivalent to executing the sequence CreateWorkflowTemplate, InstantiateWorkflowTemplate, DeleteWorkflowTemplate.The returned Operation can be used to track execution of workflow by polling operations.get. The Operation will complete when entire workflow is finished.The running workflow can be aborted via operations.cancel. This will cause any inflight jobs to be cancelled and workflow-owned clusters to be deleted.The Operation.metadata will be WorkflowMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#workflowmetadata). Also see Using WorkflowMetadata (https://cloud.google.com/dataproc/docs/concepts/workflows/debugging#using_workflowmetadata).On successful completion, Operation.response will be Empty.</p>
 <p class="toc_element">
   <code><a href="#list">list(parent, pageToken=None, x__xgafv=None, pageSize=None)</a></code></p>
 <p class="firstline">Lists workflows that match the specified filter in the request.</p>
@@ -99,42 +99,50 @@
   <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
 <p class="firstline">Retrieves the next page of results.</p>
 <p class="toc_element">
-  <code><a href="#setIamPolicy">setIamPolicy(resource, body, x__xgafv=None)</a></code></p>
-<p class="firstline">Sets the access control policy on the specified resource. Replaces any existing policy.</p>
+  <code><a href="#setIamPolicy">setIamPolicy(resource, body=None, x__xgafv=None)</a></code></p>
+<p class="firstline">Sets the access control policy on the specified resource. Replaces any existing policy.Can return Public Errors: NOT_FOUND, INVALID_ARGUMENT and PERMISSION_DENIED</p>
 <p class="toc_element">
-  <code><a href="#testIamPermissions">testIamPermissions(resource, body, x__xgafv=None)</a></code></p>
+  <code><a href="#testIamPermissions">testIamPermissions(resource, body=None, x__xgafv=None)</a></code></p>
 <p class="firstline">Returns permissions that a caller has on the specified resource. If the resource does not exist, this will return an empty set of permissions, not a NOT_FOUND error.Note: This operation is designed to be used for building permission-aware UIs and command-line tools, not for authorization checking. This operation may "fail open" without warning.</p>
 <p class="toc_element">
-  <code><a href="#update">update(name, body, x__xgafv=None)</a></code></p>
+  <code><a href="#update">update(name, body=None, x__xgafv=None)</a></code></p>
 <p class="firstline">Updates (replaces) workflow template. The updated template must contain version that matches the current server version.</p>
 <h3>Method Details</h3>
 <div class="method">
-    <code class="details" id="create">create(parent, body, x__xgafv=None)</code>
+    <code class="details" id="create">create(parent, body=None, x__xgafv=None)</code>
   <pre>Creates new workflow template.
 
 Args:
-  parent: string, Required. The "resource name" of the region, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region} (required)
-  body: object, The request body. (required)
+  parent: string, Required. The resource name of the region or location, as described in https://cloud.google.com/apis/design/resource_names.
+For projects.regions.workflowTemplates.create, the resource name of the  region has the following format:  projects/{project_id}/regions/{region}
+For projects.locations.workflowTemplates.create, the resource name of  the location has the following format:  projects/{project_id}/locations/{location} (required)
+  body: object, The request body.
     The object takes the form of:
 
-{ # A Cloud Dataproc workflow template resource.
+{ # A Dataproc workflow template resource.
   "updateTime": "A String", # Output only. The time template was last updated.
   "placement": { # Specifies workflow execution target.Either managed_cluster or cluster_selector is required. # Required. WorkflowTemplate scheduling information.
     "clusterSelector": { # A selector that chooses target cluster for jobs based on metadata. # Optional. A selector that chooses target cluster for jobs based on metadata.The selector is evaluated at the time each job is submitted.
+      "zone": "A String", # Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster.If unspecified, the zone of the first cluster matching the selector is used.
       "clusterLabels": { # Required. The cluster labels. Cluster must have all labels to match.
         "a_key": "A String",
       },
-      "zone": "A String", # Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster.If unspecified, the zone of the first cluster matching the selector is used.
     },
-    "managedCluster": { # Cluster that is managed by the workflow. # Optional. A cluster that is managed by the workflow.
+    "managedCluster": { # Cluster that is managed by the workflow. # A cluster that is managed by the workflow.
       "clusterName": "A String", # Required. The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
       "labels": { # Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
         "a_key": "A String",
       },
       "config": { # The cluster config. # Required. The cluster configuration.
+        "lifecycleConfig": { # Specifies the cluster auto-delete schedule configuration. # Optional. Lifecycle setting for the cluster.
+          "idleStartTime": "A String", # Output only. The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+          "idleDeleteTtl": "A String", # Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json).
+          "autoDeleteTtl": "A String", # Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+          "autoDeleteTime": "A String", # Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+        },
         "softwareConfig": { # Specifies the selection and config of software inside the cluster. # Optional. The config settings for software inside the cluster.
-          "imageVersion": "A String", # Optional. The version of software inside the cluster. It must be one of the supported Cloud Dataproc Versions, such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version. If unspecified, it defaults to the latest Debian version.
-          "optionalComponents": [ # The set of optional components to activate on the cluster.
+          "imageVersion": "A String", # Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_cloud_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
+          "optionalComponents": [ # Optional. The set of components to activate on the cluster.
             "A String",
           ],
           "properties": { # Optional. The properties to set on daemon config files.Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings:
@@ -146,24 +154,29 @@
               # mapred: mapred-site.xml
               # pig: pig.properties
               # spark: spark-defaults.conf
-              # yarn: yarn-site.xmlFor more information, see Cluster properties.
+              # yarn: yarn-site.xmlFor more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
             "a_key": "A String",
           },
         },
-        "configBucket": "A String", # Optional. A Google Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Google Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Cloud Dataproc staging bucket).
+        "configBucket": "A String", # Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)).
         "gceClusterConfig": { # Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster. # Optional. The shared Compute Engine config settings for all instances in a cluster.
           "internalIpOnly": True or False, # Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
-          "networkUri": "A String", # Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks for more information).A full URL, partial URI, or short name are valid. Examples:
+          "reservationAffinity": { # Reservation Affinity for consuming Zonal reservation. # Optional. Reservation Affinity for consuming Zonal reservation.
+            "values": [ # Optional. Corresponds to the label values of reservation resource.
+              "A String",
+            ],
+            "key": "A String", # Optional. Corresponds to the label key of reservation resource.
+            "consumeReservationType": "A String", # Optional. Type of reservation to consume
+          },
+          "networkUri": "A String", # Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples:
               # https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default
               # projects/[project_id]/regions/global/default
               # default
-          "tags": [ # The Compute Engine tags to add to all instances (see Tagging instances).
+          "tags": [ # The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
             "A String",
           ],
-          "serviceAccount": "A String", # Optional. The service account of the instances. Defaults to the default Compute Engine service account. Custom service accounts need permissions equivalent to the following IAM roles:
-              # roles/logging.logWriter
-              # roles/storage.objectAdmin(see https://cloud.google.com/compute/docs/access/service-accounts#custom_service_accounts for more information). Example: [account_id]@[project_id].iam.gserviceaccount.com
-          "zoneUri": "A String", # Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Cloud Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples:
+          "serviceAccount": "A String", # Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_cloud_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
+          "zoneUri": "A String", # Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples:
               # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]
               # projects/[project_id]/zones/[zone]
               # us-central1-f
@@ -185,25 +198,37 @@
             "a_key": "A String",
           },
         },
-        "workerConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for worker instances in a cluster.
-          "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+        "autoscalingConfig": { # Autoscaling Policy config associated with the cluster. # Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
+          "policyUri": "A String", # Optional. The autoscaling policy used by the cluster.Only resource names including projectid and location (region) are valid. Examples:
+              # https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]
+              # projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]Note that the policy must be in the same project and Dataproc region.
+        },
+        "workerConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for worker instances in a cluster.
+          "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+          "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
           "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
               # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
               # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-              # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-          "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+              # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+          "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
             "A String",
           ],
-          "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-          "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-            { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+          "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+              # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+              # projects/[project_id]/global/images/[image-id]
+              # image-idImage family examples. Dataproc will use the most recent image from the family:
+              # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+              # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+          "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+            { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
               "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-              "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+              "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                   # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                   # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                  # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                  # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
             },
           ],
+          "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
           "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
             "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
             "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -224,32 +249,39 @@
             #   ... worker specific actions ...
             # fi
           { # Specifies an executable to run on a fully configured node and a timeout period for executable completion.
-            "executionTimeout": "A String", # Optional. Amount of time executable has to complete. Default is 10 minutes. Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
+            "executionTimeout": "A String", # Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
             "executableFile": "A String", # Required. Cloud Storage URI of executable file.
           },
         ],
         "encryptionConfig": { # Encryption settings for the cluster. # Optional. Encryption settings for the cluster.
           "gcePdKmsKeyName": "A String", # Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
         },
-        "secondaryWorkerConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for additional worker instances in a cluster.
-          "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+        "secondaryWorkerConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for additional worker instances in a cluster.
+          "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+          "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
           "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
               # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
               # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-              # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-          "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+              # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+          "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
             "A String",
           ],
-          "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-          "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-            { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+          "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+              # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+              # projects/[project_id]/global/images/[image-id]
+              # image-idImage family examples. Dataproc will use the most recent image from the family:
+              # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+              # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+          "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+            { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
               "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-              "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+              "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                   # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                   # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                  # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                  # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
             },
           ],
+          "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
           "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
             "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
             "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -261,25 +293,32 @@
             "bootDiskSizeGb": 42, # Optional. Size in GB of the boot disk (default is 500GB).
           },
         },
-        "masterConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for the master instance in a cluster.
-          "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+        "masterConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for the master instance in a cluster.
+          "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+          "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
           "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
               # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
               # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-              # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-          "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+              # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+          "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
             "A String",
           ],
-          "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-          "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-            { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+          "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+              # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+              # projects/[project_id]/global/images/[image-id]
+              # image-idImage family examples. Dataproc will use the most recent image from the family:
+              # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+              # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+          "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+            { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
               "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-              "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+              "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                   # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                   # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                  # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                  # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
             },
           ],
+          "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
           "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
             "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
             "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -295,8 +334,9 @@
           "kerberosConfig": { # Specifies Kerberos related configuration. # Kerberos related configuration.
             "truststorePasswordUri": "A String", # Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
             "crossRealmTrustRealm": "A String", # Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
+            "realm": "A String", # Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
             "keyPasswordUri": "A String", # Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
-            "enableKerberos": True or False, # Optional. Flag to indicate whether to Kerberize the cluster.
+            "enableKerberos": True or False, # Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
             "crossRealmTrustAdminServer": "A String", # Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
             "tgtLifetimeHours": 42, # Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
             "keystoreUri": "A String", # Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
@@ -312,8 +352,10 @@
       },
     },
   },
-  "name": "A String", # Output only. The "resource name" of the template, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
-  "parameters": [ # Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
+  "name": "A String", # Output only. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
+      # For projects.regions.workflowTemplates, the resource name of the  template has the following format:  projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
+      # For projects.locations.workflowTemplates, the resource name of the  template has the following format:  projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
+  "parameters": [ # Optional. emplate parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
     { # A configurable parameter that replaces one or more fields in the template. Parameterizable fields: - Labels - File uris - Job properties - Job arguments - Script variables - Main class (in HadoopJob and SparkJob) - Zone (in ClusterSelector)
       "fields": [ # Required. Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths.A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone.Also, field paths can reference fields using the following syntax:
           # Values in maps can be referenced by key:
@@ -365,7 +407,7 @@
   "version": 42, # Optional. Used to perform a consistent read-modify-write.This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
   "jobs": [ # Required. The Directed Acyclic Graph of Jobs to submit.
     { # A job executed by the workflow.
-      "hadoopJob": { # A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). # Job is a Hadoop job.
+      "hadoopJob": { # A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). # Optional. Job is a Hadoop job.
         "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
           "A String",
         ],
@@ -385,12 +427,32 @@
           "A String",
         ],
         "mainJarFileUri": "A String", # The HCFS URI of the jar file containing the main class. Examples:  'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar'  'hdfs:/tmp/test-samples/custom-wordcount.jar'  'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
-        "properties": { # Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
+        "properties": { # Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
           "a_key": "A String",
         },
       },
       "stepId": "A String", # Required. The step id. The id must be unique among all jobs within the template.The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps.The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
-      "sparkSqlJob": { # A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. # Job is a SparkSql job.
+      "sparkRJob": { # A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN. # Optional. Job is a SparkR job.
+        "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
+          "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples:  'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
+            "a_key": "A String",
+          },
+        },
+        "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
+          "A String",
+        ],
+        "mainRFileUri": "A String", # Required. The HCFS URI of the main R file to use as the driver. Must be a .R file.
+        "fileUris": [ # Optional. HCFS URIs of files to be copied to the working directory of R drivers and distributed tasks. Useful for naively parallel tasks.
+          "A String",
+        ],
+        "archiveUris": [ # Optional. HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
+          "A String",
+        ],
+        "properties": { # Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+          "a_key": "A String",
+        },
+      },
+      "sparkSqlJob": { # A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. # Optional. Job is a SparkSql job.
         "queryFileUri": "A String", # The HCFS URI of the script that contains SQL queries.
         "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
           "a_key": "A String",
@@ -417,14 +479,14 @@
             "A String",
           ],
         },
-        "properties": { # Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten.
+        "properties": { # Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
           "a_key": "A String",
         },
       },
       "prerequisiteStepIds": [ # Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
         "A String",
       ],
-      "pigJob": { # A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. # Job is a Pig job.
+      "pigJob": { # A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. # Optional. Job is a Pig job.
         "queryFileUri": "A String", # The HCFS URI of the script that contains the Pig queries.
         "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
           "a_key": "A String",
@@ -452,14 +514,14 @@
           ],
         },
         "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
-        "properties": { # Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
+        "properties": { # Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
           "a_key": "A String",
         },
       },
       "labels": { # Optional. The labels to associate with this job.Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given job.
         "a_key": "A String",
       },
-      "sparkJob": { # A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Job is a Spark job.
+      "sparkJob": { # A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Optional. Job is a Spark job.
         "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
           "A String",
         ],
@@ -479,16 +541,46 @@
           "A String",
         ],
         "mainJarFileUri": "A String", # The HCFS URI of the jar file that contains the main class.
-        "properties": { # Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+        "properties": { # Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+          "a_key": "A String",
+        },
+      },
+      "prestoJob": { # A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT: The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/presto) must be enabled when the cluster is created to submit a Presto job to the cluster. # Optional. Job is a Presto job.
+        "queryFileUri": "A String", # The HCFS URI of the script that contains SQL queries.
+        "outputFormat": "A String", # Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
+        "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
+          "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples:  'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
+            "a_key": "A String",
+          },
+        },
+        "clientTags": [ # Optional. Presto client tags to attach to this query
+          "A String",
+        ],
+        "queryList": { # A list of queries to run on a cluster. # A list of queries.
+          "queries": [ # Required. The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of an Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob:
+              # "hiveJob": {
+              #   "queryList": {
+              #     "queries": [
+              #       "query1",
+              #       "query2",
+              #       "query3;query4",
+              #     ]
+              #   }
+              # }
+            "A String",
+          ],
+        },
+        "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
+        "properties": { # Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
           "a_key": "A String",
         },
       },
       "scheduling": { # Job scheduling options. # Optional. Job scheduling configuration.
         "maxFailuresPerHour": 42, # Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
       },
-      "pysparkJob": { # A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. # Job is a Pyspark job.
+      "pysparkJob": { # A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. # Optional. Job is a PySpark job.
         "mainPythonFileUri": "A String", # Required. The HCFS URI of the main Python file to use as the driver. Must be a .py file.
-        "jarFileUris": [ # Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
+        "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
           "A String",
         ],
         "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
@@ -496,7 +588,7 @@
             "a_key": "A String",
           },
         },
-        "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
+        "jarFileUris": [ # Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
           "A String",
         ],
         "fileUris": [ # Optional. HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks.
@@ -508,11 +600,11 @@
         "pythonFileUris": [ # Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
           "A String",
         ],
-        "properties": { # Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+        "properties": { # Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
           "a_key": "A String",
         },
       },
-      "hiveJob": { # A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. # Job is a Hive job.
+      "hiveJob": { # A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. # Optional. Job is a Hive job.
         "queryFileUri": "A String", # The HCFS URI of the script that contains Hive queries.
         "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
           "a_key": "A String",
@@ -535,13 +627,13 @@
           ],
         },
         "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
-        "properties": { # Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
+        "properties": { # Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
           "a_key": "A String",
         },
       },
     },
   ],
-  "id": "A String", # Required. The template id.The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
+  "id": "A String",
 }
 
   x__xgafv: string, V1 error format.
@@ -552,24 +644,30 @@
 Returns:
   An object of the form:
 
-    { # A Cloud Dataproc workflow template resource.
+    { # A Dataproc workflow template resource.
     "updateTime": "A String", # Output only. The time template was last updated.
     "placement": { # Specifies workflow execution target.Either managed_cluster or cluster_selector is required. # Required. WorkflowTemplate scheduling information.
       "clusterSelector": { # A selector that chooses target cluster for jobs based on metadata. # Optional. A selector that chooses target cluster for jobs based on metadata.The selector is evaluated at the time each job is submitted.
+        "zone": "A String", # Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster.If unspecified, the zone of the first cluster matching the selector is used.
         "clusterLabels": { # Required. The cluster labels. Cluster must have all labels to match.
           "a_key": "A String",
         },
-        "zone": "A String", # Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster.If unspecified, the zone of the first cluster matching the selector is used.
       },
-      "managedCluster": { # Cluster that is managed by the workflow. # Optional. A cluster that is managed by the workflow.
+      "managedCluster": { # Cluster that is managed by the workflow. # A cluster that is managed by the workflow.
         "clusterName": "A String", # Required. The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
         "labels": { # Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
           "a_key": "A String",
         },
         "config": { # The cluster config. # Required. The cluster configuration.
+          "lifecycleConfig": { # Specifies the cluster auto-delete schedule configuration. # Optional. Lifecycle setting for the cluster.
+            "idleStartTime": "A String", # Output only. The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+            "idleDeleteTtl": "A String", # Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json).
+            "autoDeleteTtl": "A String", # Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+            "autoDeleteTime": "A String", # Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+          },
           "softwareConfig": { # Specifies the selection and config of software inside the cluster. # Optional. The config settings for software inside the cluster.
-            "imageVersion": "A String", # Optional. The version of software inside the cluster. It must be one of the supported Cloud Dataproc Versions, such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version. If unspecified, it defaults to the latest Debian version.
-            "optionalComponents": [ # The set of optional components to activate on the cluster.
+            "imageVersion": "A String", # Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_cloud_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
+            "optionalComponents": [ # Optional. The set of components to activate on the cluster.
               "A String",
             ],
             "properties": { # Optional. The properties to set on daemon config files.Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings:
@@ -581,24 +679,29 @@
                 # mapred: mapred-site.xml
                 # pig: pig.properties
                 # spark: spark-defaults.conf
-                # yarn: yarn-site.xmlFor more information, see Cluster properties.
+                # yarn: yarn-site.xmlFor more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
               "a_key": "A String",
             },
           },
-          "configBucket": "A String", # Optional. A Google Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Google Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Cloud Dataproc staging bucket).
+          "configBucket": "A String", # Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)).
           "gceClusterConfig": { # Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster. # Optional. The shared Compute Engine config settings for all instances in a cluster.
             "internalIpOnly": True or False, # Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
-            "networkUri": "A String", # Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks for more information).A full URL, partial URI, or short name are valid. Examples:
+            "reservationAffinity": { # Reservation Affinity for consuming Zonal reservation. # Optional. Reservation Affinity for consuming Zonal reservation.
+              "values": [ # Optional. Corresponds to the label values of reservation resource.
+                "A String",
+              ],
+              "key": "A String", # Optional. Corresponds to the label key of reservation resource.
+              "consumeReservationType": "A String", # Optional. Type of reservation to consume
+            },
+            "networkUri": "A String", # Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples:
                 # https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default
                 # projects/[project_id]/regions/global/default
                 # default
-            "tags": [ # The Compute Engine tags to add to all instances (see Tagging instances).
+            "tags": [ # The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
               "A String",
             ],
-            "serviceAccount": "A String", # Optional. The service account of the instances. Defaults to the default Compute Engine service account. Custom service accounts need permissions equivalent to the following IAM roles:
-                # roles/logging.logWriter
-                # roles/storage.objectAdmin(see https://cloud.google.com/compute/docs/access/service-accounts#custom_service_accounts for more information). Example: [account_id]@[project_id].iam.gserviceaccount.com
-            "zoneUri": "A String", # Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Cloud Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples:
+            "serviceAccount": "A String", # Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_cloud_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
+            "zoneUri": "A String", # Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples:
                 # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]
                 # projects/[project_id]/zones/[zone]
                 # us-central1-f
@@ -620,25 +723,37 @@
               "a_key": "A String",
             },
           },
-          "workerConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for worker instances in a cluster.
-            "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+          "autoscalingConfig": { # Autoscaling Policy config associated with the cluster. # Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
+            "policyUri": "A String", # Optional. The autoscaling policy used by the cluster.Only resource names including projectid and location (region) are valid. Examples:
+                # https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]
+                # projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]Note that the policy must be in the same project and Dataproc region.
+          },
+          "workerConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for worker instances in a cluster.
+            "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+            "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
             "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
                 # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
                 # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-                # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-            "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+                # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+            "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
               "A String",
             ],
-            "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-            "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-              { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+            "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+                # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+                # projects/[project_id]/global/images/[image-id]
+                # image-idImage family examples. Dataproc will use the most recent image from the family:
+                # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+                # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+            "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+              { # Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
                 "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-                "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+                "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                     # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                     # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                    # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                    # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
               },
             ],
+            "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
             "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
               "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
               "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -659,32 +774,39 @@
               #   ... worker specific actions ...
               # fi
             { # Specifies an executable to run on a fully configured node and a timeout period for executable completion.
-              "executionTimeout": "A String", # Optional. Amount of time executable has to complete. Default is 10 minutes. Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
+              "executionTimeout": "A String", # Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
               "executableFile": "A String", # Required. Cloud Storage URI of executable file.
             },
           ],
           "encryptionConfig": { # Encryption settings for the cluster. # Optional. Encryption settings for the cluster.
             "gcePdKmsKeyName": "A String", # Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
           },
-          "secondaryWorkerConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for additional worker instances in a cluster.
-            "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+          "secondaryWorkerConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for additional worker instances in a cluster.
+            "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+            "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
             "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
                 # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
                 # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-                # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-            "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+                # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+            "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
               "A String",
             ],
-            "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-            "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-              { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+            "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+                # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+                # projects/[project_id]/global/images/[image-id]
+                # image-idImage family examples. Dataproc will use the most recent image from the family:
+                # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+                # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+            "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+              { # Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
                 "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-                "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+                "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                     # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                     # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                    # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                    # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
               },
             ],
+            "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
             "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
               "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
               "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -696,25 +818,32 @@
               "bootDiskSizeGb": 42, # Optional. Size in GB of the boot disk (default is 500GB).
             },
           },
-          "masterConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for the master instance in a cluster.
-            "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+          "masterConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for the master instance in a cluster.
+            "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+            "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
             "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
                 # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
                 # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-                # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-            "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+                # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+            "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
               "A String",
             ],
-            "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-            "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-              { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+            "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+                # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+                # projects/[project_id]/global/images/[image-id]
+                # image-idImage family examples. Dataproc will use the most recent image from the family:
+                # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+                # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+            "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+              { # Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
                 "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-                "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+                "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                     # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                     # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                    # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                    # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
               },
             ],
+            "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
             "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
               "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
               "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -730,8 +859,9 @@
             "kerberosConfig": { # Specifies Kerberos related configuration. # Kerberos related configuration.
               "truststorePasswordUri": "A String", # Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
               "crossRealmTrustRealm": "A String", # Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
+              "realm": "A String", # Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
               "keyPasswordUri": "A String", # Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
-              "enableKerberos": True or False, # Optional. Flag to indicate whether to Kerberize the cluster.
+              "enableKerberos": True or False, # Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
               "crossRealmTrustAdminServer": "A String", # Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
               "tgtLifetimeHours": 42, # Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
               "keystoreUri": "A String", # Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
@@ -747,8 +877,10 @@
         },
       },
     },
-    "name": "A String", # Output only. The "resource name" of the template, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
-    "parameters": [ # Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
+    "name": "A String", # Output only. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
+        # For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
+        # For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
+    "parameters": [ # Optional. emplate parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
       { # A configurable parameter that replaces one or more fields in the template. Parameterizable fields: - Labels - File uris - Job properties - Job arguments - Script variables - Main class (in HadoopJob and SparkJob) - Zone (in ClusterSelector)
         "fields": [ # Required. Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths.A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone.Also, field paths can reference fields using the following syntax:
             # Values in maps can be referenced by key:
@@ -800,7 +932,7 @@
     "version": 42, # Optional. Used to perform a consistent read-modify-write.This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
     "jobs": [ # Required. The Directed Acyclic Graph of Jobs to submit.
       { # A job executed by the workflow.
-        "hadoopJob": { # A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). # Job is a Hadoop job.
+        "hadoopJob": { # A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). # Optional. Job is a Hadoop job.
           "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
             "A String",
           ],
@@ -820,12 +952,32 @@
             "A String",
           ],
           "mainJarFileUri": "A String", # The HCFS URI of the jar file containing the main class. Examples:  'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar'  'hdfs:/tmp/test-samples/custom-wordcount.jar'  'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
-          "properties": { # Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
+          "properties": { # Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
             "a_key": "A String",
           },
         },
         "stepId": "A String", # Required. The step id. The id must be unique among all jobs within the template.The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps.The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
-        "sparkSqlJob": { # A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. # Job is a SparkSql job.
+        "sparkRJob": { # A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN. # Optional. Job is a SparkR job.
+          "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
+            "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples:  'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
+              "a_key": "A String",
+            },
+          },
+          "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
+            "A String",
+          ],
+          "mainRFileUri": "A String", # Required. The HCFS URI of the main R file to use as the driver. Must be a .R file.
+          "fileUris": [ # Optional. HCFS URIs of files to be copied to the working directory of R drivers and distributed tasks. Useful for naively parallel tasks.
+            "A String",
+          ],
+          "archiveUris": [ # Optional. HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
+            "A String",
+          ],
+          "properties": { # Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+            "a_key": "A String",
+          },
+        },
+        "sparkSqlJob": { # A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. # Optional. Job is a SparkSql job.
           "queryFileUri": "A String", # The HCFS URI of the script that contains SQL queries.
           "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
             "a_key": "A String",
@@ -852,14 +1004,14 @@
               "A String",
             ],
           },
-          "properties": { # Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten.
+          "properties": { # Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
             "a_key": "A String",
           },
         },
         "prerequisiteStepIds": [ # Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
           "A String",
         ],
-        "pigJob": { # A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. # Job is a Pig job.
+        "pigJob": { # A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. # Optional. Job is a Pig job.
           "queryFileUri": "A String", # The HCFS URI of the script that contains the Pig queries.
           "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
             "a_key": "A String",
@@ -887,14 +1039,14 @@
             ],
           },
           "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
-          "properties": { # Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
+          "properties": { # Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
             "a_key": "A String",
           },
         },
         "labels": { # Optional. The labels to associate with this job.Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given job.
           "a_key": "A String",
         },
-        "sparkJob": { # A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Job is a Spark job.
+        "sparkJob": { # A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Optional. Job is a Spark job.
           "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
             "A String",
           ],
@@ -914,16 +1066,46 @@
             "A String",
           ],
           "mainJarFileUri": "A String", # The HCFS URI of the jar file that contains the main class.
-          "properties": { # Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+          "properties": { # Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+            "a_key": "A String",
+          },
+        },
+        "prestoJob": { # A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT: The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/presto) must be enabled when the cluster is created to submit a Presto job to the cluster. # Optional. Job is a Presto job.
+          "queryFileUri": "A String", # The HCFS URI of the script that contains SQL queries.
+          "outputFormat": "A String", # Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
+          "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
+            "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples:  'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
+              "a_key": "A String",
+            },
+          },
+          "clientTags": [ # Optional. Presto client tags to attach to this query
+            "A String",
+          ],
+          "queryList": { # A list of queries to run on a cluster. # A list of queries.
+            "queries": [ # Required. The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of an Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob:
+                # "hiveJob": {
+                #   "queryList": {
+                #     "queries": [
+                #       "query1",
+                #       "query2",
+                #       "query3;query4",
+                #     ]
+                #   }
+                # }
+              "A String",
+            ],
+          },
+          "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
+          "properties": { # Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
             "a_key": "A String",
           },
         },
         "scheduling": { # Job scheduling options. # Optional. Job scheduling configuration.
           "maxFailuresPerHour": 42, # Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
         },
-        "pysparkJob": { # A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. # Job is a Pyspark job.
+        "pysparkJob": { # A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. # Optional. Job is a PySpark job.
           "mainPythonFileUri": "A String", # Required. The HCFS URI of the main Python file to use as the driver. Must be a .py file.
-          "jarFileUris": [ # Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
+          "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
             "A String",
           ],
           "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
@@ -931,7 +1113,7 @@
               "a_key": "A String",
             },
           },
-          "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
+          "jarFileUris": [ # Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
             "A String",
           ],
           "fileUris": [ # Optional. HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks.
@@ -943,11 +1125,11 @@
           "pythonFileUris": [ # Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
             "A String",
           ],
-          "properties": { # Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+          "properties": { # Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
             "a_key": "A String",
           },
         },
-        "hiveJob": { # A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. # Job is a Hive job.
+        "hiveJob": { # A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. # Optional. Job is a Hive job.
           "queryFileUri": "A String", # The HCFS URI of the script that contains Hive queries.
           "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
             "a_key": "A String",
@@ -970,13 +1152,13 @@
             ],
           },
           "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
-          "properties": { # Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
+          "properties": { # Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
             "a_key": "A String",
           },
         },
       },
     ],
-    "id": "A String", # Required. The template id.The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
+    "id": "A String",
   }</pre>
 </div>
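
A minimal sketch of driving these methods with google-api-python-client, assuming application default credentials; the project, region, bucket, and template id below are placeholders, and the managed cluster config is left empty for brevity:

<pre>
from googleapiclient.discovery import build

dataproc = build('dataproc', 'v1')
parent = 'projects/my-project/regions/us-central1'  # placeholder

# A minimal WorkflowTemplate body using fields documented above.
template = {
    'id': 'sample-template',  # 3-50 chars: letters, digits, _ and -
    'placement': {
        'managedCluster': {
            'clusterName': 'sample-cluster',  # prefix; a random suffix is appended
            'config': {},
        },
    },
    'jobs': [
        {
            'stepId': 'run-pyspark',  # must be unique within the template
            'pysparkJob': {
                'mainPythonFileUri': 'gs://my-bucket/job.py',  # placeholder
            },
        },
    ],
}

created = dataproc.projects().regions().workflowTemplates().create(
    parent=parent, body=template).execute()

# instantiate() returns a long-running Operation that can be polled
# until the workflow finishes.
operation = dataproc.projects().regions().workflowTemplates().instantiate(
    name=created['name'], body={}).execute()
</pre>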
 
@@ -985,7 +1167,9 @@
   <pre>Deletes a workflow template. It does not cancel in-progress workflows.
 
 Args:
-  name: string, Required. The "resource name" of the workflow template, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id} (required)
+  name: string, Required. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
+For projects.regions.workflowTemplates.delete, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
+For projects.locations.workflowTemplates.delete, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id} (required)
   version: integer, Optional. The version of workflow template to delete. If specified, will only delete the template if the current server version matches specified version.
   x__xgafv: string, V1 error format.
     Allowed values
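
As a hedged illustration of the call shape (reusing the client from the sketch above; the template name is a placeholder):

<pre>
# Delete only if the stored template still matches the given version;
# omit `version` to delete unconditionally.
name = ('projects/my-project/regions/us-central1/'
        'workflowTemplates/sample-template')  # placeholder
dataproc.projects().regions().workflowTemplates().delete(
    name=name, version=3).execute()
</pre>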
@@ -1008,8 +1192,10 @@
   <pre>Retrieves the latest workflow template.Can retrieve previously instantiated template by specifying optional version parameter.
 
 Args:
-  name: string, Required. The "resource name" of the workflow template, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id} (required)
-  version: integer, Optional. The version of workflow template to retrieve. Only previously instatiated versions can be retrieved.If unspecified, retrieves the current version.
+  name: string, Required. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
+For projects.regions.workflowTemplates.get, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
+For projects.locations.workflowTemplates.get, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id} (required)
+  version: integer, Optional. The version of workflow template to retrieve. Only previously instantiated versions can be retrieved.If unspecified, retrieves the current version.
   x__xgafv: string, V1 error format.
     Allowed values
       1 - v1 error format
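
Likewise, a hedged sketch of retrieving a specific template version (names reused from the sketches above):

<pre>
# Omit `version` to retrieve the current version of the template.
tmpl = dataproc.projects().regions().workflowTemplates().get(
    name=name, version=2).execute()
print(tmpl['updateTime'], tmpl['version'])
</pre>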
@@ -1018,24 +1204,30 @@
 Returns:
   An object of the form:
 
-    { # A Cloud Dataproc workflow template resource.
+    { # A Dataproc workflow template resource.
     "updateTime": "A String", # Output only. The time template was last updated.
     "placement": { # Specifies workflow execution target.Either managed_cluster or cluster_selector is required. # Required. WorkflowTemplate scheduling information.
       "clusterSelector": { # A selector that chooses target cluster for jobs based on metadata. # Optional. A selector that chooses target cluster for jobs based on metadata.The selector is evaluated at the time each job is submitted.
+        "zone": "A String", # Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster.If unspecified, the zone of the first cluster matching the selector is used.
         "clusterLabels": { # Required. The cluster labels. Cluster must have all labels to match.
           "a_key": "A String",
         },
-        "zone": "A String", # Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster.If unspecified, the zone of the first cluster matching the selector is used.
       },
-      "managedCluster": { # Cluster that is managed by the workflow. # Optional. A cluster that is managed by the workflow.
+      "managedCluster": { # Cluster that is managed by the workflow. # A cluster that is managed by the workflow.
         "clusterName": "A String", # Required. The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
         "labels": { # Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
           "a_key": "A String",
         },
         "config": { # The cluster config. # Required. The cluster configuration.
+          "lifecycleConfig": { # Specifies the cluster auto-delete schedule configuration. # Optional. Lifecycle setting for the cluster.
+            "idleStartTime": "A String", # Output only. The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+            "idleDeleteTtl": "A String", # Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json).
+            "autoDeleteTtl": "A String", # Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+            "autoDeleteTime": "A String", # Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+          },
           "softwareConfig": { # Specifies the selection and config of software inside the cluster. # Optional. The config settings for software inside the cluster.
-            "imageVersion": "A String", # Optional. The version of software inside the cluster. It must be one of the supported Cloud Dataproc Versions, such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version. If unspecified, it defaults to the latest Debian version.
-            "optionalComponents": [ # The set of optional components to activate on the cluster.
+            "imageVersion": "A String", # Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_cloud_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
+            "optionalComponents": [ # Optional. The set of components to activate on the cluster.
               "A String",
             ],
             "properties": { # Optional. The properties to set on daemon config files.Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings:
@@ -1047,24 +1239,29 @@
                 # mapred: mapred-site.xml
                 # pig: pig.properties
                 # spark: spark-defaults.conf
-                # yarn: yarn-site.xmlFor more information, see Cluster properties.
+                # yarn: yarn-site.xmlFor more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
               "a_key": "A String",
             },
           },
-          "configBucket": "A String", # Optional. A Google Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Google Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Cloud Dataproc staging bucket).
+          "configBucket": "A String", # Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)).
           "gceClusterConfig": { # Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster. # Optional. The shared Compute Engine config settings for all instances in a cluster.
             "internalIpOnly": True or False, # Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
-            "networkUri": "A String", # Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks for more information).A full URL, partial URI, or short name are valid. Examples:
+            "reservationAffinity": { # Reservation Affinity for consuming Zonal reservation. # Optional. Reservation Affinity for consuming Zonal reservation.
+              "values": [ # Optional. Corresponds to the label values of reservation resource.
+                "A String",
+              ],
+              "key": "A String", # Optional. Corresponds to the label key of reservation resource.
+              "consumeReservationType": "A String", # Optional. Type of reservation to consume
+            },
+            "networkUri": "A String", # Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples:
                 # https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default
                 # projects/[project_id]/regions/global/default
                 # default
-            "tags": [ # The Compute Engine tags to add to all instances (see Tagging instances).
+            "tags": [ # The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
               "A String",
             ],
-            "serviceAccount": "A String", # Optional. The service account of the instances. Defaults to the default Compute Engine service account. Custom service accounts need permissions equivalent to the following IAM roles:
-                # roles/logging.logWriter
-                # roles/storage.objectAdmin(see https://cloud.google.com/compute/docs/access/service-accounts#custom_service_accounts for more information). Example: [account_id]@[project_id].iam.gserviceaccount.com
-            "zoneUri": "A String", # Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Cloud Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples:
+            "serviceAccount": "A String", # Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_cloud_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
+            "zoneUri": "A String", # Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples:
                 # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]
                 # projects/[project_id]/zones/[zone]
                 # us-central1-f
@@ -1086,25 +1283,37 @@
               "a_key": "A String",
             },
           },
-          "workerConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for worker instances in a cluster.
-            "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+          "autoscalingConfig": { # Autoscaling Policy config associated with the cluster. # Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
+            "policyUri": "A String", # Optional. The autoscaling policy used by the cluster.Only resource names including projectid and location (region) are valid. Examples:
+                # https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]
+                # projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]Note that the policy must be in the same project and Dataproc region.
+          },
+          "workerConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for worker instances in a cluster.
+            "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+            "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
             "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
                 # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
                 # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-                # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-            "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+                # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+            "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
               "A String",
             ],
-            "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-            "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-              { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+            "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+                # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+                # projects/[project_id]/global/images/[image-id]
+                # image-idImage family examples. Dataproc will use the most recent image from the family:
+                # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+                # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+            "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+              { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
                 "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-                "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+                "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                     # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                     # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                    # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                    # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
               },
             ],
+            "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
             "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
               "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
               "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -1125,32 +1334,39 @@
               #   ... worker specific actions ...
               # fi
             { # Specifies an executable to run on a fully configured node and a timeout period for executable completion.
-              "executionTimeout": "A String", # Optional. Amount of time executable has to complete. Default is 10 minutes. Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
+              "executionTimeout": "A String", # Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
               "executableFile": "A String", # Required. Cloud Storage URI of executable file.
             },
           ],
           "encryptionConfig": { # Encryption settings for the cluster. # Optional. Encryption settings for the cluster.
             "gcePdKmsKeyName": "A String", # Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
           },
-          "secondaryWorkerConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for additional worker instances in a cluster.
-            "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+          "secondaryWorkerConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for additional worker instances in a cluster.
+            "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+            "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
             "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
                 # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
                 # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-                # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-            "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+                # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+            "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
               "A String",
             ],
-            "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-            "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-              { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+            "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+                # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+                # projects/[project_id]/global/images/[image-id]
+                # image-idImage family examples. Dataproc will use the most recent image from the family:
+                # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+                # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+            "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+              { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
                 "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-                "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+                "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                     # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                     # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                    # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                    # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
               },
             ],
+            "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
             "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
               "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
               "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -1162,25 +1378,32 @@
               "bootDiskSizeGb": 42, # Optional. Size in GB of the boot disk (default is 500GB).
             },
           },
-          "masterConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for the master instance in a cluster.
-            "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+          "masterConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for the master instance in a cluster.
+            "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+            "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
             "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
                 # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
                 # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-                # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-            "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+                # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+            "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
               "A String",
             ],
-            "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-            "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-              { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+            "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+                # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+                # projects/[project_id]/global/images/[image-id]
+                # image-idImage family examples. Dataproc will use the most recent image from the family:
+                # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+                # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+            "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+              { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
                 "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-                "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+                "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                     # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                     # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                    # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                    # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
               },
             ],
+            "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
             "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
               "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
               "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -1196,8 +1419,9 @@
             "kerberosConfig": { # Specifies Kerberos related configuration. # Kerberos related configuration.
               "truststorePasswordUri": "A String", # Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
               "crossRealmTrustRealm": "A String", # Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
+              "realm": "A String", # Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
               "keyPasswordUri": "A String", # Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
-              "enableKerberos": True or False, # Optional. Flag to indicate whether to Kerberize the cluster.
+              "enableKerberos": True or False, # Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
               "crossRealmTrustAdminServer": "A String", # Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
               "tgtLifetimeHours": 42, # Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
               "keystoreUri": "A String", # Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
@@ -1213,8 +1437,10 @@
         },
       },
     },
-    "name": "A String", # Output only. The "resource name" of the template, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
-    "parameters": [ # Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
+    "name": "A String", # Output only. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
+        # For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
+        # For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
+    "parameters": [ # Optional. emplate parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
       { # A configurable parameter that replaces one or more fields in the template. Parameterizable fields: - Labels - File uris - Job properties - Job arguments - Script variables - Main class (in HadoopJob and SparkJob) - Zone (in ClusterSelector)
         "fields": [ # Required. Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths.A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone.Also, field paths can reference fields using the following syntax:
             # Values in maps can be referenced by key:
@@ -1266,7 +1492,7 @@
     "version": 42, # Optional. Used to perform a consistent read-modify-write.This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
     "jobs": [ # Required. The Directed Acyclic Graph of Jobs to submit.
       { # A job executed by the workflow.
-        "hadoopJob": { # A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). # Job is a Hadoop job.
+        "hadoopJob": { # A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). # Optional. Job is a Hadoop job.
           "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
             "A String",
           ],
@@ -1286,12 +1512,32 @@
             "A String",
           ],
           "mainJarFileUri": "A String", # The HCFS URI of the jar file containing the main class. Examples:  'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar'  'hdfs:/tmp/test-samples/custom-wordcount.jar'  'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
-          "properties": { # Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
+          "properties": { # Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
             "a_key": "A String",
           },
         },
         "stepId": "A String", # Required. The step id. The id must be unique among all jobs within the template.The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps.The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
-        "sparkSqlJob": { # A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. # Job is a SparkSql job.
+        "sparkRJob": { # A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN. # Optional. Job is a SparkR job.
+          "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
+            "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples:  'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
+              "a_key": "A String",
+            },
+          },
+          "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
+            "A String",
+          ],
+          "mainRFileUri": "A String", # Required. The HCFS URI of the main R file to use as the driver. Must be a .R file.
+          "fileUris": [ # Optional. HCFS URIs of files to be copied to the working directory of R drivers and distributed tasks. Useful for naively parallel tasks.
+            "A String",
+          ],
+          "archiveUris": [ # Optional. HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
+            "A String",
+          ],
+          "properties": { # Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+            "a_key": "A String",
+          },
+        },
+        "sparkSqlJob": { # A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. # Optional. Job is a SparkSql job.
           "queryFileUri": "A String", # The HCFS URI of the script that contains SQL queries.
           "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
             "a_key": "A String",
@@ -1318,14 +1564,14 @@
               "A String",
             ],
           },
-          "properties": { # Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten.
+          "properties": { # Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
             "a_key": "A String",
           },
         },
         "prerequisiteStepIds": [ # Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
           "A String",
         ],
-        "pigJob": { # A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. # Job is a Pig job.
+        "pigJob": { # A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. # Optional. Job is a Pig job.
           "queryFileUri": "A String", # The HCFS URI of the script that contains the Pig queries.
           "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
             "a_key": "A String",
@@ -1353,14 +1599,14 @@
             ],
           },
           "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
-          "properties": { # Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
+          "properties": { # Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
             "a_key": "A String",
           },
         },
         "labels": { # Optional. The labels to associate with this job.Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given job.
           "a_key": "A String",
         },
-        "sparkJob": { # A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Job is a Spark job.
+        "sparkJob": { # A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Optional. Job is a Spark job.
           "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
             "A String",
           ],
@@ -1380,16 +1626,46 @@
             "A String",
           ],
           "mainJarFileUri": "A String", # The HCFS URI of the jar file that contains the main class.
-          "properties": { # Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+          "properties": { # Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+            "a_key": "A String",
+          },
+        },
+        "prestoJob": { # A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT: The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/presto) must be enabled when the cluster is created to submit a Presto job to the cluster. # Optional. Job is a Presto job.
+          "queryFileUri": "A String", # The HCFS URI of the script that contains SQL queries.
+          "outputFormat": "A String", # Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
+          "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
+            "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples:  'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
+              "a_key": "A String",
+            },
+          },
+          "clientTags": [ # Optional. Presto client tags to attach to this query
+            "A String",
+          ],
+          "queryList": { # A list of queries to run on a cluster. # A list of queries.
+            "queries": [ # Required. The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of an Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob:
+                # "hiveJob": {
+                #   "queryList": {
+                #     "queries": [
+                #       "query1",
+                #       "query2",
+                #       "query3;query4",
+                #     ]
+                #   }
+                # }
+              "A String",
+            ],
+          },
+          "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
+          "properties": { # Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
             "a_key": "A String",
           },
         },
         "scheduling": { # Job scheduling options. # Optional. Job scheduling configuration.
           "maxFailuresPerHour": 42, # Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
         },
-        "pysparkJob": { # A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. # Job is a Pyspark job.
+        "pysparkJob": { # A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. # Optional. Job is a PySpark job.
           "mainPythonFileUri": "A String", # Required. The HCFS URI of the main Python file to use as the driver. Must be a .py file.
-          "jarFileUris": [ # Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
+          "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
             "A String",
           ],
           "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
@@ -1397,7 +1673,7 @@
               "a_key": "A String",
             },
           },
-          "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
+          "jarFileUris": [ # Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
             "A String",
           ],
           "fileUris": [ # Optional. HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks.
@@ -1409,11 +1685,11 @@
           "pythonFileUris": [ # Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
             "A String",
           ],
-          "properties": { # Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+          "properties": { # Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
             "a_key": "A String",
           },
         },
-        "hiveJob": { # A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. # Job is a Hive job.
+        "hiveJob": { # A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. # Optional. Job is a Hive job.
           "queryFileUri": "A String", # The HCFS URI of the script that contains Hive queries.
           "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
             "a_key": "A String",
@@ -1436,13 +1712,13 @@
             ],
           },
           "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
-          "properties": { # Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
+          "properties": { # Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
             "a_key": "A String",
           },
         },
       },
     ],
-    "id": "A String", # Required. The template id.The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
+    "id": "A String",
   }</pre>
 </div>
 
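For orientation, a minimal sketch of calling the get method documented above with the generated Python client might look like the following. This is a sketch only, not part of the generated reference: it assumes google-api-python-client is installed and application-default credentials are available, and PROJECT, REGION, and TEMPLATE are placeholder values to substitute.

    # Sketch: fetch a workflow template and inspect a couple of fields.
    from googleapiclient.discovery import build

    dataproc = build('dataproc', 'v1')  # picks up application-default credentials

    name = 'projects/PROJECT/regions/REGION/workflowTemplates/TEMPLATE'

    # version is optional; omitting it retrieves the current version.
    template = dataproc.projects().regions().workflowTemplates().get(
        name=name).execute()
    print(template.get('id'), template.get('version'))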
@@ -1456,6 +1732,9 @@
     The object takes the form of:
 
 { # Request message for GetIamPolicy method.
+    "options": { # Encapsulates settings provided to GetIamPolicy. # OPTIONAL: A GetPolicyOptions object for specifying options to GetIamPolicy. This field is only used by Cloud IAM.
+      "requestedPolicyVersion": 42, # Optional. The policy format version to be returned.Valid values are 0, 1, and 3. Requests specifying an invalid value will be rejected.Requests for policies with any conditional bindings must specify version 3. Policies without any conditional bindings may specify any valid value or leave the field unset.
+    },
   }
 
   x__xgafv: string, V1 error format.
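A similarly hedged sketch of the getIamPolicy request body shown above: passing options.requestedPolicyVersion of 3 so that any conditional role bindings come back intact. The resource path components are again placeholders.

    # Sketch: read the template's IAM policy, asking for policy version 3.
    from googleapiclient.discovery import build

    dataproc = build('dataproc', 'v1')
    resource = 'projects/PROJECT/regions/REGION/workflowTemplates/TEMPLATE'

    policy = dataproc.projects().regions().workflowTemplates().getIamPolicy(
        resource=resource,
        body={'options': {'requestedPolicyVersion': 3}},
    ).execute()

    # Conditions on bindings are only returned for version 3 policies.
    for binding in policy.get('bindings', []):
        print(binding['role'], binding.get('condition', {}).get('title'))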
@@ -1466,71 +1745,106 @@
 Returns:
   An object of the form:
 
-    { # Defines an Identity and Access Management (IAM) policy. It is used to specify access control policies for Cloud Platform resources.A Policy consists of a list of bindings. A binding binds a list of members to a role, where the members can be user accounts, Google groups, Google domains, and service accounts. A role is a named list of permissions defined by IAM.JSON Example
+    { # An Identity and Access Management (IAM) policy, which specifies access controls for Google Cloud resources.A Policy is a collection of bindings. A binding binds one or more members to a single role. Members can be user accounts, service accounts, Google groups, and domains (such as G Suite). A role is a named list of permissions; each role can be an IAM predefined role or a user-created custom role.Optionally, a binding can specify a condition, which is a logical expression that allows access to a resource only if the expression evaluates to true. A condition can add constraints based on attributes of the request, the resource, or both.JSON example:
       # {
       #   "bindings": [
       #     {
-      #       "role": "roles/owner",
+      #       "role": "roles/resourcemanager.organizationAdmin",
       #       "members": [
       #         "user:mike@example.com",
       #         "group:admins@example.com",
       #         "domain:google.com",
-      #         "serviceAccount:my-other-app@appspot.gserviceaccount.com"
+      #         "serviceAccount:my-project-id@appspot.gserviceaccount.com"
       #       ]
       #     },
       #     {
-      #       "role": "roles/viewer",
-      #       "members": ["user:sean@example.com"]
+      #       "role": "roles/resourcemanager.organizationViewer",
+      #       "members": ["user:eve@example.com"],
+      #       "condition": {
+      #         "title": "expirable access",
+      #         "description": "Does not grant access after Sep 2020",
+      #         "expression": "request.time &lt; timestamp('2020-10-01T00:00:00.000Z')",
+      #       }
       #     }
-      #   ]
+      #   ],
+      #   "etag": "BwWWja0YfJA=",
+      #   "version": 3
       # }
-      # YAML Example
+      # YAML example:
       # bindings:
       # - members:
       #   - user:mike@example.com
       #   - group:admins@example.com
       #   - domain:google.com
-      #   - serviceAccount:my-other-app@appspot.gserviceaccount.com
-      #   role: roles/owner
+      #   - serviceAccount:my-project-id@appspot.gserviceaccount.com
+      #   role: roles/resourcemanager.organizationAdmin
       # - members:
-      #   - user:sean@example.com
-      #   role: roles/viewer
-      # For a description of IAM and its features, see the IAM developer's guide (https://cloud.google.com/iam/docs).
-    "bindings": [ # Associates a list of members to a role. bindings with no members will result in an error.
+      #   - user:eve@example.com
+      #   role: roles/resourcemanager.organizationViewer
+      #   condition:
+      #     title: expirable access
+      #     description: Does not grant access after Sep 2020
+      #     expression: request.time &lt; timestamp('2020-10-01T00:00:00.000Z')
+      # - etag: BwWWja0YfJA=
+      # - version: 3
+      # For a description of IAM and its features, see the IAM documentation (https://cloud.google.com/iam/docs/).
+    "bindings": [ # Associates a list of members to a role. Optionally, may specify a condition that determines how and when the bindings are applied. Each of the bindings must contain at least one member.
       { # Associates members with a role.
         "role": "A String", # Role that is assigned to members. For example, roles/viewer, roles/editor, or roles/owner.
         "members": [ # Specifies the identities requesting access for a Cloud Platform resource. members can have the following values:
             # allUsers: A special identifier that represents anyone who is  on the internet; with or without a Google account.
             # allAuthenticatedUsers: A special identifier that represents anyone  who is authenticated with a Google account or a service account.
-            # user:{emailid}: An email address that represents a specific Google  account. For example, alice@gmail.com .
+            # user:{emailid}: An email address that represents a specific Google account. For example, alice@example.com.
             # serviceAccount:{emailid}: An email address that represents a service  account. For example, my-other-app@appspot.gserviceaccount.com.
             # group:{emailid}: An email address that represents a Google group.  For example, admins@example.com.
+            # deleted:user:{emailid}?uid={uniqueid}: An email address (plus unique identifier) representing a user that has been recently deleted. For example, alice@example.com?uid=123456789012345678901. If the user is recovered, this value reverts to user:{emailid} and the recovered user retains the role in the binding.
+            # deleted:serviceAccount:{emailid}?uid={uniqueid}: An email address (plus unique identifier) representing a service account that has been recently deleted. For example, my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901. If the service account is undeleted, this value reverts to serviceAccount:{emailid} and the undeleted service account retains the role in the binding.
+            # deleted:group:{emailid}?uid={uniqueid}: An email address (plus unique identifier) representing a Google group that has been recently deleted. For example, admins@example.com?uid=123456789012345678901. If the group is recovered, this value reverts to group:{emailid} and the recovered group retains the role in the binding.
             # domain:{domain}: The G Suite domain (primary) that represents all the  users of that domain. For example, google.com or example.com.
           "A String",
         ],
-        "condition": { # Represents an expression text. Example: # The condition that is associated with this binding. NOTE: An unsatisfied condition will not allow user access via current binding. Different bindings, including their conditions, are examined independently.
-            # title: "User account presence"
-            # description: "Determines whether the request has a user account"
-            # expression: "size(request.user) > 0"
-          "location": "A String", # An optional string indicating the location of the expression for error reporting, e.g. a file name and a position in the file.
-          "expression": "A String", # Textual representation of an expression in Common Expression Language syntax.The application context of the containing message determines which well-known feature set of CEL is supported.
-          "description": "A String", # An optional description of the expression. This is a longer text which describes the expression, e.g. when hovered over it in a UI.
-          "title": "A String", # An optional title for the expression, i.e. a short string describing its purpose. This can be used e.g. in UIs which allow to enter the expression.
+        "condition": { # Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec.Example (Comparison): # The condition that is associated with this binding. NOTE: An unsatisfied condition will not allow user access via current binding. Different bindings, including their conditions, are examined independently.
+            # title: "Summary size limit"
+            # description: "Determines if a summary is less than 100 chars"
+            # expression: "document.summary.size() &lt; 100"
+            # Example (Equality):
+            # title: "Requestor is owner"
+            # description: "Determines if requestor is the document owner"
+            # expression: "document.owner == request.auth.claims.email"
+            # Example (Logic):
+            # title: "Public documents"
+            # description: "Determine whether the document should be publicly visible"
+            # expression: "document.type != 'private' &amp;&amp; document.type != 'internal'"
+            # Example (Data Manipulation):
+            # title: "Notification string"
+            # description: "Create a notification string with a timestamp."
+            # expression: "'New message received at ' + string(document.create_time)"
+            # The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information.
+          "description": "A String", # Optional. Description of the expression. This is a longer text which describes the expression, e.g. when hovered over it in a UI.
+          "expression": "A String", # Textual representation of an expression in Common Expression Language syntax.
+          "location": "A String", # Optional. String indicating the location of the expression for error reporting, e.g. a file name and a position in the file.
+          "title": "A String", # Optional. Title for the expression, i.e. a short string describing its purpose. This can be used e.g. in UIs which allow to enter the expression.
         },
       },
     ],
-    "etag": "A String", # etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a policy from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform policy updates in order to avoid race conditions: An etag is returned in the response to getIamPolicy, and systems are expected to put that etag in the request to setIamPolicy to ensure that their change will be applied to the same version of the policy.If no etag is provided in the call to setIamPolicy, then the existing policy is overwritten blindly.
-    "version": 42, # Deprecated.
+    "etag": "A String", # etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a policy from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform policy updates in order to avoid race conditions: An etag is returned in the response to getIamPolicy, and systems are expected to put that etag in the request to setIamPolicy to ensure that their change will be applied to the same version of the policy.Important: If you use IAM Conditions, you must include the etag field whenever you call setIamPolicy. If you omit this field, then IAM allows you to overwrite a version 3 policy with a version 1 policy, and all of the conditions in the version 3 policy are lost.
+    "version": 42, # Specifies the format of the policy.Valid values are 0, 1, and 3. Requests that specify an invalid value are rejected.Any operation that affects conditional role bindings must specify version 3. This requirement applies to the following operations:
+        # Getting a policy that includes a conditional role binding
+        # Adding a conditional role binding to a policy
+        # Changing a conditional role binding in a policy
+        # Removing any role binding, with or without a condition, from a policy  that includes conditionsImportant: If you use IAM Conditions, you must include the etag field whenever you call setIamPolicy. If you omit this field, then IAM allows you to overwrite a version 3 policy with a version 1 policy, and all of the conditions in the version 3 policy are lost.If a policy does not include any conditions, operations on that policy may specify any valid version or leave the field unset.
   }</pre>
 </div>
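
The etag and version comments above describe a read-modify-write cycle; the following sketch shows one way to carry it out with the Python client. It is illustrative only: credentials, project, region, and template IDs are placeholders, and requestedPolicyVersion is assumed to be accepted via the GetPolicyOptions body.

    from googleapiclient.discovery import build

    dataproc = build('dataproc', 'v1')
    templates = dataproc.projects().regions().workflowTemplates()
    resource = ('projects/my-project/regions/us-central1/'
                'workflowTemplates/my-template')

    # Read: request version 3 so conditional role bindings come back intact.
    policy = templates.getIamPolicy(
        resource=resource,
        body={'options': {'requestedPolicyVersion': 3}}).execute()

    # Modify: add a binding; the etag from the read stays in the dict.
    policy.setdefault('bindings', []).append(
        {'role': 'roles/viewer', 'members': ['user:alice@example.com']})

    # Write: because the policy still carries the etag, a concurrent update
    # makes this call fail rather than being overwritten blindly.
    templates.setIamPolicy(
        resource=resource, body={'policy': policy}).execute()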
 
 <div class="method">
-    <code class="details" id="instantiate">instantiate(name, body, x__xgafv=None)</code>
-  <pre>Instantiates a template and begins execution.The returned Operation can be used to track execution of workflow by polling operations.get. The Operation will complete when entire workflow is finished.The running workflow can be aborted via operations.cancel. This will cause any inflight jobs to be cancelled and workflow-owned clusters to be deleted.The Operation.metadata will be WorkflowMetadata.On successful completion, Operation.response will be Empty.
+    <code class="details" id="instantiate">instantiate(name, body=None, x__xgafv=None)</code>
+  <pre>Instantiates a template and begins execution.The returned Operation can be used to track execution of workflow by polling operations.get. The Operation will complete when entire workflow is finished.The running workflow can be aborted via operations.cancel. This will cause any inflight jobs to be cancelled and workflow-owned clusters to be deleted.The Operation.metadata will be WorkflowMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#workflowmetadata). Also see Using WorkflowMetadata (https://cloud.google.com/dataproc/docs/concepts/workflows/debugging#using_workflowmetadata).On successful completion, Operation.response will be Empty.
 
 Args:
-  name: string, Required. The "resource name" of the workflow template, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id} (required)
-  body: object, The request body. (required)
+  name: string, Required. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
+For projects.regions.workflowTemplates.instantiate, the resource name of the template has the following format:  projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
+For projects.locations.workflowTemplates.instantiate, the resource name  of the template has the following format:  projects/{project_id}/locations/{location}/workflowTemplates/{template_id} (required)
+  body: object, The request body.
     The object takes the form of:
 
 { # A request to instantiate a workflow template.
@@ -1553,11 +1867,6 @@
     "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
       "a_key": "", # Properties of the object. Contains field @type with type URL.
     },
-    "done": True or False, # If the value is false, it means the operation is still in progress. If true, the operation is completed, and either error or response is available.
-    "response": { # The normal response of the operation in case of success. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. If the original method is standard Get/Create/Update, the response should be the resource. For other methods, the response should have the type XxxResponse, where Xxx is the original method name. For example, if the original method name is TakeSnapshot(), the inferred response type is TakeSnapshotResponse.
-      "a_key": "", # Properties of the object. Contains field @type with type URL.
-    },
-    "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the name should be a resource name ending with operations/{unique_id}.
     "error": { # The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC (https://github.com/grpc). Each Status message contains three pieces of data: error code, error message, and error details.You can find out more about this error model and how to work with it in the API Design Guide (https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
       "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
       "code": 42, # The status code, which should be an enum value of google.rpc.Code.
@@ -1567,36 +1876,49 @@
         },
       ],
     },
+    "done": True or False, # If the value is false, it means the operation is still in progress. If true, the operation is completed, and either error or response is available.
+    "response": { # The normal response of the operation in case of success. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. If the original method is standard Get/Create/Update, the response should be the resource. For other methods, the response should have the type XxxResponse, where Xxx is the original method name. For example, if the original method name is TakeSnapshot(), the inferred response type is TakeSnapshotResponse.
+      "a_key": "", # Properties of the object. Contains field @type with type URL.
+    },
+    "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the name should be a resource name ending with operations/{unique_id}.
   }</pre>
 </div>
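
As the description notes, instantiate returns a long-running Operation that is tracked by polling operations.get. A minimal polling loop might look like this sketch (application default credentials assumed; the resource name and request ID are placeholders, and requestId is assumed to be a field of the request body):

    import time
    from googleapiclient.discovery import build

    dataproc = build('dataproc', 'v1')
    name = ('projects/my-project/regions/us-central1/'
            'workflowTemplates/my-template')

    op = dataproc.projects().regions().workflowTemplates().instantiate(
        name=name, body={'requestId': 'unique-token-1'}).execute()

    # Operation.metadata is WorkflowMetadata; poll until done is True.
    while not op.get('done'):
        time.sleep(10)
        op = dataproc.projects().regions().operations().get(
            name=op['name']).execute()

    if 'error' in op:
        raise RuntimeError(op['error'].get('message'))
    # On success, op.get('response') is google.protobuf.Empty.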
 
 <div class="method">
-    <code class="details" id="instantiateInline">instantiateInline(parent, body, requestId=None, x__xgafv=None)</code>
-  <pre>Instantiates a template and begins execution.This method is equivalent to executing the sequence CreateWorkflowTemplate, InstantiateWorkflowTemplate, DeleteWorkflowTemplate.The returned Operation can be used to track execution of workflow by polling operations.get. The Operation will complete when entire workflow is finished.The running workflow can be aborted via operations.cancel. This will cause any inflight jobs to be cancelled and workflow-owned clusters to be deleted.The Operation.metadata will be WorkflowMetadata.On successful completion, Operation.response will be Empty.
+    <code class="details" id="instantiateInline">instantiateInline(parent, body=None, requestId=None, x__xgafv=None)</code>
+  <pre>Instantiates a template and begins execution.This method is equivalent to executing the sequence CreateWorkflowTemplate, InstantiateWorkflowTemplate, DeleteWorkflowTemplate.The returned Operation can be used to track execution of workflow by polling operations.get. The Operation will complete when entire workflow is finished.The running workflow can be aborted via operations.cancel. This will cause any inflight jobs to be cancelled and workflow-owned clusters to be deleted.The Operation.metadata will be WorkflowMetadata (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#workflowmetadata). Also see Using WorkflowMetadata (https://cloud.google.com/dataproc/docs/concepts/workflows/debugging#using_workflowmetadata).On successful completion, Operation.response will be Empty.
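
For illustration, a one-off workflow can be submitted without persisting a template. The sketch below is hypothetical (project, bucket, job, and the empty cluster config are placeholders) and assumes application default credentials:

    from googleapiclient.discovery import build

    dataproc = build('dataproc', 'v1')
    parent = 'projects/my-project/regions/us-central1'

    # A throwaway template: one Hadoop example job on a managed cluster.
    template = {
        'id': 'one-off-wordcount',
        'placement': {'managedCluster': {'clusterName': 'one-off',
                                         'config': {}}},
        'jobs': [{'stepId': 'wordcount',
                  'hadoopJob': {
                      'mainJarFileUri': 'file:///usr/lib/hadoop-mapreduce/'
                                        'hadoop-mapreduce-examples.jar',
                      'args': ['wordcount', 'gs://my-bucket/in',
                               'gs://my-bucket/out']}}],
    }

    op = dataproc.projects().regions().workflowTemplates().instantiateInline(
        parent=parent, body=template, requestId='unique-token-2').execute()
    # Poll op via operations.get as with instantiate; the managed cluster
    # is deleted once the workflow finishes.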
 
 Args:
-  parent: string, Required. The "resource name" of the workflow template region, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region} (required)
-  body: object, The request body. (required)
+  parent: string, Required. The resource name of the region or location, as described in https://cloud.google.com/apis/design/resource_names.
+For projects.regions.workflowTemplates.instantiateInline, the resource  name of the region has the following format:  projects/{project_id}/regions/{region}
+For projects.locations.workflowTemplates.instantiateInline, the  resource name of the location has the following format:  projects/{project_id}/locations/{location} (required)
+  body: object, The request body.
     The object takes the form of:
 
-{ # A Cloud Dataproc workflow template resource.
+{ # A Dataproc workflow template resource.
   "updateTime": "A String", # Output only. The time template was last updated.
   "placement": { # Specifies workflow execution target.Either managed_cluster or cluster_selector is required. # Required. WorkflowTemplate scheduling information.
     "clusterSelector": { # A selector that chooses target cluster for jobs based on metadata. # Optional. A selector that chooses target cluster for jobs based on metadata.The selector is evaluated at the time each job is submitted.
+      "zone": "A String", # Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster.If unspecified, the zone of the first cluster matching the selector is used.
       "clusterLabels": { # Required. The cluster labels. Cluster must have all labels to match.
         "a_key": "A String",
       },
-      "zone": "A String", # Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster.If unspecified, the zone of the first cluster matching the selector is used.
     },
-    "managedCluster": { # Cluster that is managed by the workflow. # Optional. A cluster that is managed by the workflow.
+    "managedCluster": { # Cluster that is managed by the workflow. # A cluster that is managed by the workflow.
       "clusterName": "A String", # Required. The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
       "labels": { # Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
         "a_key": "A String",
       },
       "config": { # The cluster config. # Required. The cluster configuration.
+        "lifecycleConfig": { # Specifies the cluster auto-delete schedule configuration. # Optional. Lifecycle setting for the cluster.
+          "idleStartTime": "A String", # Output only. The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+          "idleDeleteTtl": "A String", # Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json).
+          "autoDeleteTtl": "A String", # Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+          "autoDeleteTime": "A String", # Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+        },
         "softwareConfig": { # Specifies the selection and config of software inside the cluster. # Optional. The config settings for software inside the cluster.
-          "imageVersion": "A String", # Optional. The version of software inside the cluster. It must be one of the supported Cloud Dataproc Versions, such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version. If unspecified, it defaults to the latest Debian version.
-          "optionalComponents": [ # The set of optional components to activate on the cluster.
+          "imageVersion": "A String", # Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_cloud_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
+          "optionalComponents": [ # Optional. The set of components to activate on the cluster.
             "A String",
           ],
           "properties": { # Optional. The properties to set on daemon config files.Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings:
@@ -1608,24 +1930,29 @@
               # mapred: mapred-site.xml
               # pig: pig.properties
               # spark: spark-defaults.conf
-              # yarn: yarn-site.xmlFor more information, see Cluster properties.
+              # yarn: yarn-site.xmlFor more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
             "a_key": "A String",
           },
         },
-        "configBucket": "A String", # Optional. A Google Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Google Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Cloud Dataproc staging bucket).
+        "configBucket": "A String", # Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)).
         "gceClusterConfig": { # Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster. # Optional. The shared Compute Engine config settings for all instances in a cluster.
           "internalIpOnly": True or False, # Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
-          "networkUri": "A String", # Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks for more information).A full URL, partial URI, or short name are valid. Examples:
+          "reservationAffinity": { # Reservation Affinity for consuming Zonal reservation. # Optional. Reservation Affinity for consuming Zonal reservation.
+            "values": [ # Optional. Corresponds to the label values of reservation resource.
+              "A String",
+            ],
+            "key": "A String", # Optional. Corresponds to the label key of reservation resource.
+            "consumeReservationType": "A String", # Optional. Type of reservation to consume
+          },
+          "networkUri": "A String", # Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples:
               # https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default
               # projects/[project_id]/regions/global/default
               # default
-          "tags": [ # The Compute Engine tags to add to all instances (see Tagging instances).
+          "tags": [ # The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
             "A String",
           ],
-          "serviceAccount": "A String", # Optional. The service account of the instances. Defaults to the default Compute Engine service account. Custom service accounts need permissions equivalent to the following IAM roles:
-              # roles/logging.logWriter
-              # roles/storage.objectAdmin(see https://cloud.google.com/compute/docs/access/service-accounts#custom_service_accounts for more information). Example: [account_id]@[project_id].iam.gserviceaccount.com
-          "zoneUri": "A String", # Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Cloud Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples:
+          "serviceAccount": "A String", # Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_cloud_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
+          "zoneUri": "A String", # Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples:
               # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]
               # projects/[project_id]/zones/[zone]
               # us-central1-f
@@ -1647,25 +1974,37 @@
             "a_key": "A String",
           },
         },
-        "workerConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for worker instances in a cluster.
-          "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+        "autoscalingConfig": { # Autoscaling Policy config associated with the cluster. # Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
+          "policyUri": "A String", # Optional. The autoscaling policy used by the cluster.Only resource names including projectid and location (region) are valid. Examples:
+              # https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]
+              # projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]Note that the policy must be in the same project and Dataproc region.
+        },
+        "workerConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for worker instances in a cluster.
+          "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+          "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
           "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
               # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
               # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-              # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-          "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+              # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+          "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
             "A String",
           ],
-          "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-          "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-            { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+          "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+              # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+              # projects/[project_id]/global/images/[image-id]
+              # image-idImage family examples. Dataproc will use the most recent image from the family:
+              # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+              # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+          "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+            { # Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
               "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-              "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+              "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                   # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                   # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                  # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                  # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
             },
           ],
+          "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
           "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
             "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
             "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -1686,32 +2025,39 @@
             #   ... worker specific actions ...
             # fi
           { # Specifies an executable to run on a fully configured node and a timeout period for executable completion.
-            "executionTimeout": "A String", # Optional. Amount of time executable has to complete. Default is 10 minutes. Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
+            "executionTimeout": "A String", # Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
             "executableFile": "A String", # Required. Cloud Storage URI of executable file.
           },
         ],
         "encryptionConfig": { # Encryption settings for the cluster. # Optional. Encryption settings for the cluster.
           "gcePdKmsKeyName": "A String", # Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
         },
-        "secondaryWorkerConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for additional worker instances in a cluster.
-          "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+        "secondaryWorkerConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for additional worker instances in a cluster.
+          "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+          "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
           "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
               # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
               # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-              # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-          "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+              # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+          "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
             "A String",
           ],
-          "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-          "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-            { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+          "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+              # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+              # projects/[project_id]/global/images/[image-id]
+              # image-idImage family examples. Dataproc will use the most recent image from the family:
+              # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+              # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+          "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+            { # Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
               "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-              "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+              "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                   # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                   # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                  # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                  # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
             },
           ],
+          "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
           "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
             "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
             "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -1723,25 +2069,32 @@
             "bootDiskSizeGb": 42, # Optional. Size in GB of the boot disk (default is 500GB).
           },
         },
-        "masterConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for the master instance in a cluster.
-          "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+        "masterConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for the master instance in a cluster.
+          "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+          "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
           "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
               # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
               # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-              # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-          "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+              # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+          "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
             "A String",
           ],
-          "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-          "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-            { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+          "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+              # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+              # projects/[project_id]/global/images/[image-id]
+              # image-idImage family examples. Dataproc will use the most recent image from the family:
+              # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+              # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+          "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+            { # Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
               "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-              "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+              "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                   # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                   # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                  # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                  # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
             },
           ],
+          "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
           "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
             "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
             "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -1757,8 +2110,9 @@
           "kerberosConfig": { # Specifies Kerberos related configuration. # Kerberos related configuration.
             "truststorePasswordUri": "A String", # Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
             "crossRealmTrustRealm": "A String", # Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
+            "realm": "A String", # Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
             "keyPasswordUri": "A String", # Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
-            "enableKerberos": True or False, # Optional. Flag to indicate whether to Kerberize the cluster.
+            "enableKerberos": True or False, # Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
             "crossRealmTrustAdminServer": "A String", # Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
             "tgtLifetimeHours": 42, # Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
             "keystoreUri": "A String", # Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
@@ -1774,8 +2128,10 @@
       },
     },
   },
-  "name": "A String", # Output only. The "resource name" of the template, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
-  "parameters": [ # Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
+  "name": "A String", # Output only. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
+      # For projects.regions.workflowTemplates, the resource name of the  template has the following format:  projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
+      # For projects.locations.workflowTemplates, the resource name of the  template has the following format:  projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
+  "parameters": [ # Optional. emplate parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
     { # A configurable parameter that replaces one or more fields in the template. Parameterizable fields: - Labels - File uris - Job properties - Job arguments - Script variables - Main class (in HadoopJob and SparkJob) - Zone (in ClusterSelector)
       "fields": [ # Required. Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths.A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone.Also, field paths can reference fields using the following syntax:
           # Values in maps can be referenced by key:
@@ -1827,7 +2183,7 @@
   "version": 42, # Optional. Used to perform a consistent read-modify-write.This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
   "jobs": [ # Required. The Directed Acyclic Graph of Jobs to submit.
     { # A job executed by the workflow.
-      "hadoopJob": { # A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). # Job is a Hadoop job.
+      "hadoopJob": { # A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). # Optional. Job is a Hadoop job.
         "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
           "A String",
         ],
@@ -1847,12 +2203,32 @@
           "A String",
         ],
         "mainJarFileUri": "A String", # The HCFS URI of the jar file containing the main class. Examples:  'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar'  'hdfs:/tmp/test-samples/custom-wordcount.jar'  'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
-        "properties": { # Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
+        "properties": { # Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
           "a_key": "A String",
         },
       },
       "stepId": "A String", # Required. The step id. The id must be unique among all jobs within the template.The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps.The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
-      "sparkSqlJob": { # A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. # Job is a SparkSql job.
+      "sparkRJob": { # A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN. # Optional. Job is a SparkR job.
+        "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
+          "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples:  'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
+            "a_key": "A String",
+          },
+        },
+        "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
+          "A String",
+        ],
+        "mainRFileUri": "A String", # Required. The HCFS URI of the main R file to use as the driver. Must be a .R file.
+        "fileUris": [ # Optional. HCFS URIs of files to be copied to the working directory of R drivers and distributed tasks. Useful for naively parallel tasks.
+          "A String",
+        ],
+        "archiveUris": [ # Optional. HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
+          "A String",
+        ],
+        "properties": { # Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+          "a_key": "A String",
+        },
+      },
+      "sparkSqlJob": { # A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. # Optional. Job is a SparkSql job.
         "queryFileUri": "A String", # The HCFS URI of the script that contains SQL queries.
         "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
           "a_key": "A String",
@@ -1879,14 +2255,14 @@
             "A String",
           ],
         },
-        "properties": { # Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten.
+        "properties": { # Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
           "a_key": "A String",
         },
       },
       "prerequisiteStepIds": [ # Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
         "A String",
       ],
-      "pigJob": { # A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. # Job is a Pig job.
+      "pigJob": { # A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. # Optional. Job is a Pig job.
         "queryFileUri": "A String", # The HCFS URI of the script that contains the Pig queries.
         "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
           "a_key": "A String",
@@ -1914,14 +2290,14 @@
           ],
         },
         "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
-        "properties": { # Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
+        "properties": { # Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
           "a_key": "A String",
         },
       },
       "labels": { # Optional. The labels to associate with this job.Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given job.
         "a_key": "A String",
       },
-      "sparkJob": { # A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Job is a Spark job.
+      "sparkJob": { # A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Optional. Job is a Spark job.
         "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
           "A String",
         ],
@@ -1941,16 +2317,46 @@
           "A String",
         ],
         "mainJarFileUri": "A String", # The HCFS URI of the jar file that contains the main class.
-        "properties": { # Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+        "properties": { # Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+          "a_key": "A String",
+        },
+      },
+      "prestoJob": { # A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT: The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/presto) must be enabled when the cluster is created to submit a Presto job to the cluster. # Optional. Job is a Presto job.
+        "queryFileUri": "A String", # The HCFS URI of the script that contains SQL queries.
+        "outputFormat": "A String", # Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
+        "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
+          "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples:  'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
+            "a_key": "A String",
+          },
+        },
+        "clientTags": [ # Optional. Presto client tags to attach to this query
+          "A String",
+        ],
+        "queryList": { # A list of queries to run on a cluster. # A list of queries.
+          "queries": [ # Required. The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of an Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob:
+              # "hiveJob": {
+              #   "queryList": {
+              #     "queries": [
+              #       "query1",
+              #       "query2",
+              #       "query3;query4",
+              #     ]
+              #   }
+              # }
+            "A String",
+          ],
+        },
+        "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
+        "properties": { # Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
           "a_key": "A String",
         },
       },
       "scheduling": { # Job scheduling options. # Optional. Job scheduling configuration.
         "maxFailuresPerHour": 42, # Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
       },
-      "pysparkJob": { # A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. # Job is a Pyspark job.
+      "pysparkJob": { # A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. # Optional. Job is a PySpark job.
         "mainPythonFileUri": "A String", # Required. The HCFS URI of the main Python file to use as the driver. Must be a .py file.
-        "jarFileUris": [ # Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
+        "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
           "A String",
         ],
         "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
@@ -1958,7 +2364,7 @@
             "a_key": "A String",
           },
         },
-        "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
+        "jarFileUris": [ # Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
           "A String",
         ],
         "fileUris": [ # Optional. HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks.
@@ -1970,11 +2376,11 @@
         "pythonFileUris": [ # Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
           "A String",
         ],
-        "properties": { # Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+        "properties": { # Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
           "a_key": "A String",
         },
       },
-      "hiveJob": { # A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. # Job is a Hive job.
+      "hiveJob": { # A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. # Optional. Job is a Hive job.
         "queryFileUri": "A String", # The HCFS URI of the script that contains Hive queries.
         "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
           "a_key": "A String",
@@ -1997,13 +2403,13 @@
           ],
         },
         "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
-        "properties": { # Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
+        "properties": { # Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
           "a_key": "A String",
         },
       },
     },
   ],
-  "id": "A String", # Required. The template id.The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
+  "id": "A String",
 }
 
   requestId: string, Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries.It is recommended to always set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier).The tag must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
@@ -2019,11 +2425,6 @@
     "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
       "a_key": "", # Properties of the object. Contains field @type with type URL.
     },
-    "done": True or False, # If the value is false, it means the operation is still in progress. If true, the operation is completed, and either error or response is available.
-    "response": { # The normal response of the operation in case of success. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. If the original method is standard Get/Create/Update, the response should be the resource. For other methods, the response should have the type XxxResponse, where Xxx is the original method name. For example, if the original method name is TakeSnapshot(), the inferred response type is TakeSnapshotResponse.
-      "a_key": "", # Properties of the object. Contains field @type with type URL.
-    },
-    "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the name should be a resource name ending with operations/{unique_id}.
     "error": { # The Status type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by gRPC (https://github.com/grpc). Each Status message contains three pieces of data: error code, error message, and error details.You can find out more about this error model and how to work with it in the API Design Guide (https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
       "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
       "code": 42, # The status code, which should be an enum value of google.rpc.Code.
@@ -2033,6 +2434,11 @@
         },
       ],
     },
+    "done": True or False, # If the value is false, it means the operation is still in progress. If true, the operation is completed, and either error or response is available.
+    "response": { # The normal response of the operation in case of success. If the original method returns no data on success, such as Delete, the response is google.protobuf.Empty. If the original method is standard Get/Create/Update, the response should be the resource. For other methods, the response should have the type XxxResponse, where Xxx is the original method name. For example, if the original method name is TakeSnapshot(), the inferred response type is TakeSnapshotResponse.
+      "a_key": "", # Properties of the object. Contains field @type with type URL.
+    },
+    "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the name should be a resource name ending with operations/{unique_id}.
   }</pre>
 </div>
 
@@ -2041,7 +2447,9 @@
   <pre>Lists workflows that match the specified filter in the request.
 
 Args:
-  parent: string, Required. The "resource name" of the region, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region} (required)
+  parent: string, Required. The resource name of the region or location, as described in https://cloud.google.com/apis/design/resource_names.
+For projects.regions.workflowTemplates.list, the resource name of the region has the following format:  projects/{project_id}/regions/{region}
+For projects.locations.workflowTemplates.list, the resource name of the location has the following format:  projects/{project_id}/locations/{location} (required)
   pageToken: string, Optional. The page token, returned by a previous call, to request the next page of results (see the paging sketch below).
   x__xgafv: string, V1 error format.
     Allowed values
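
  A minimal paging sketch, assuming the google-api-python-client library and
  application-default credentials; the project and region values are
  placeholders. list_next() handles the pageToken/nextPageToken handoff:

    from googleapiclient.discovery import build

    dataproc = build('dataproc', 'v1')
    parent = 'projects/my-project/regions/us-central1'  # placeholder

    templates = []
    request = dataproc.projects().regions().workflowTemplates().list(parent=parent)
    while request is not None:
        response = request.execute()
        # Accumulate this page; list_next() returns None once nextPageToken is absent.
        templates.extend(response.get('templates', []))
        request = dataproc.projects().regions().workflowTemplates().list_next(
            previous_request=request, previous_response=response)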
@@ -2053,26 +2461,31 @@
   An object of the form:
 
     { # A response to a request to list workflow templates in a project.
-    "nextPageToken": "A String", # Output only. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent <code>ListWorkflowTemplatesRequest</code>.
     "templates": [ # Output only. WorkflowTemplates list.
-      { # A Cloud Dataproc workflow template resource.
+      { # A Dataproc workflow template resource.
         "updateTime": "A String", # Output only. The time template was last updated.
         "placement": { # Specifies workflow execution target.Either managed_cluster or cluster_selector is required. # Required. WorkflowTemplate scheduling information.
           "clusterSelector": { # A selector that chooses target cluster for jobs based on metadata. # Optional. A selector that chooses target cluster for jobs based on metadata.The selector is evaluated at the time each job is submitted.
+            "zone": "A String", # Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster.If unspecified, the zone of the first cluster matching the selector is used.
             "clusterLabels": { # Required. The cluster labels. Cluster must have all labels to match.
               "a_key": "A String",
             },
-            "zone": "A String", # Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster.If unspecified, the zone of the first cluster matching the selector is used.
           },
-          "managedCluster": { # Cluster that is managed by the workflow. # Optional. A cluster that is managed by the workflow.
+          "managedCluster": { # Cluster that is managed by the workflow. # A cluster that is managed by the workflow.
             "clusterName": "A String", # Required. The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
             "labels": { # Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
               "a_key": "A String",
             },
             "config": { # The cluster config. # Required. The cluster configuration.
+              "lifecycleConfig": { # Specifies the cluster auto-delete schedule configuration. # Optional. Lifecycle setting for the cluster.
+                "idleStartTime": "A String", # Output only. The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+                "idleDeleteTtl": "A String", # Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json).
+                "autoDeleteTtl": "A String", # Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+                "autoDeleteTime": "A String", # Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+              },
               "softwareConfig": { # Specifies the selection and config of software inside the cluster. # Optional. The config settings for software inside the cluster.
-                "imageVersion": "A String", # Optional. The version of software inside the cluster. It must be one of the supported Cloud Dataproc Versions, such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version. If unspecified, it defaults to the latest Debian version.
-                "optionalComponents": [ # The set of optional components to activate on the cluster.
+                "imageVersion": "A String", # Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_cloud_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
+                "optionalComponents": [ # Optional. The set of components to activate on the cluster.
                   "A String",
                 ],
                 "properties": { # Optional. The properties to set on daemon config files.Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings:
@@ -2084,24 +2497,29 @@
                     # mapred: mapred-site.xml
                     # pig: pig.properties
                     # spark: spark-defaults.conf
-                    # yarn: yarn-site.xmlFor more information, see Cluster properties.
+                    # yarn: yarn-site.xmlFor more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
                   "a_key": "A String",
                 },
               },
-              "configBucket": "A String", # Optional. A Google Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Google Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Cloud Dataproc staging bucket).
+              "configBucket": "A String", # Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)).
               "gceClusterConfig": { # Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster. # Optional. The shared Compute Engine config settings for all instances in a cluster.
                 "internalIpOnly": True or False, # Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
-                "networkUri": "A String", # Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks for more information).A full URL, partial URI, or short name are valid. Examples:
+                "reservationAffinity": { # Reservation Affinity for consuming Zonal reservation. # Optional. Reservation Affinity for consuming Zonal reservation.
+                  "values": [ # Optional. Corresponds to the label values of reservation resource.
+                    "A String",
+                  ],
+                  "key": "A String", # Optional. Corresponds to the label key of reservation resource.
+                  "consumeReservationType": "A String", # Optional. Type of reservation to consume
+                },
+                "networkUri": "A String", # Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples:
                     # https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default
                     # projects/[project_id]/regions/global/default
                     # default
-                "tags": [ # The Compute Engine tags to add to all instances (see Tagging instances).
+                "tags": [ # The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
                   "A String",
                 ],
-                "serviceAccount": "A String", # Optional. The service account of the instances. Defaults to the default Compute Engine service account. Custom service accounts need permissions equivalent to the following IAM roles:
-                    # roles/logging.logWriter
-                    # roles/storage.objectAdmin(see https://cloud.google.com/compute/docs/access/service-accounts#custom_service_accounts for more information). Example: [account_id]@[project_id].iam.gserviceaccount.com
-                "zoneUri": "A String", # Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Cloud Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples:
+                "serviceAccount": "A String", # Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_cloud_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
+                "zoneUri": "A String", # Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples:
                     # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]
                     # projects/[project_id]/zones/[zone]
                     # us-central1-f
@@ -2123,25 +2541,37 @@
                   "a_key": "A String",
                 },
               },
-              "workerConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for worker instances in a cluster.
-                "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+              "autoscalingConfig": { # Autoscaling Policy config associated with the cluster. # Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
+                "policyUri": "A String", # Optional. The autoscaling policy used by the cluster.Only resource names including projectid and location (region) are valid. Examples:
+                    # https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]
+                    # projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]Note that the policy must be in the same project and Dataproc region.
+              },
+              "workerConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for worker instances in a cluster.
+                "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+                "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
                 "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
                     # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
                     # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-                    # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-                "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+                    # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+                "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
                   "A String",
                 ],
-                "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-                "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-                  { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+                "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+                    # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+                    # projects/[project_id]/global/images/[image-id]
+                    # image-idImage family examples. Dataproc will use the most recent image from the family:
+                    # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+                    # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+                "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+                  { # Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
                     "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-                    "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+                    "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                         # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                         # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                        # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                        # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
                   },
                 ],
+                "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
                 "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
                   "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
                   "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -2162,32 +2592,39 @@
                   #   ... worker specific actions ...
                   # fi
                 { # Specifies an executable to run on a fully configured node and a timeout period for executable completion.
-                  "executionTimeout": "A String", # Optional. Amount of time executable has to complete. Default is 10 minutes. Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
+                  "executionTimeout": "A String", # Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
                   "executableFile": "A String", # Required. Cloud Storage URI of executable file.
                 },
               ],
               "encryptionConfig": { # Encryption settings for the cluster. # Optional. Encryption settings for the cluster.
                 "gcePdKmsKeyName": "A String", # Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
               },
-              "secondaryWorkerConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for additional worker instances in a cluster.
-                "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+              "secondaryWorkerConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for additional worker instances in a cluster.
+                "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+                "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
                 "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
                     # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
                     # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-                    # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-                "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+                    # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+                "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
                   "A String",
                 ],
-                "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-                "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-                  { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+                "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+                    # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+                    # projects/[project_id]/global/images/[image-id]
+                    # image-idImage family examples. Dataproc will use the most recent image from the family:
+                    # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+                    # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+                "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+                  { # Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
                     "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-                    "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+                    "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                         # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                         # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                        # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                        # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
                   },
                 ],
+                "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
                 "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
                   "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
                   "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -2199,25 +2636,32 @@
                   "bootDiskSizeGb": 42, # Optional. Size in GB of the boot disk (default is 500GB).
                 },
               },
-              "masterConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for the master instance in a cluster.
-                "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+              "masterConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for the master instance in a cluster.
+                "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+                "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
                 "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
                     # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
                     # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-                    # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-                "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+                    # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+                "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
                   "A String",
                 ],
-                "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-                "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-                  { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+                "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+                    # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+                    # projects/[project_id]/global/images/[image-id]
+                    # image-idImage family examples. Dataproc will use the most recent image from the family:
+                    # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+                    # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+                "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+                  { # Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
                     "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-                    "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+                    "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                         # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                         # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                        # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                        # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
                   },
                 ],
+                "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
                 "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
                   "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
                   "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -2233,8 +2677,9 @@
                 "kerberosConfig": { # Specifies Kerberos related configuration. # Kerberos related configuration.
                   "truststorePasswordUri": "A String", # Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
                   "crossRealmTrustRealm": "A String", # Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
+                  "realm": "A String", # Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
                   "keyPasswordUri": "A String", # Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
-                  "enableKerberos": True or False, # Optional. Flag to indicate whether to Kerberize the cluster.
+                  "enableKerberos": True or False, # Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
                   "crossRealmTrustAdminServer": "A String", # Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
                   "tgtLifetimeHours": 42, # Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
                   "keystoreUri": "A String", # Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
@@ -2250,8 +2695,10 @@
             },
           },
         },
-        "name": "A String", # Output only. The "resource name" of the template, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
-        "parameters": [ # Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
+        "name": "A String", # Output only. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
+            # For projects.regions.workflowTemplates, the resource name of the template has the following format:  projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
+            # For projects.locations.workflowTemplates, the resource name of the template has the following format:  projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
+        "parameters": [ # Optional. emplate parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
           { # A configurable parameter that replaces one or more fields in the template. Parameterizable fields: - Labels - File uris - Job properties - Job arguments - Script variables - Main class (in HadoopJob and SparkJob) - Zone (in ClusterSelector)
             "fields": [ # Required. Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths.A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone.Also, field paths can reference fields using the following syntax:
                 # Values in maps can be referenced by key:
@@ -2303,7 +2750,7 @@
         "version": 42, # Optional. Used to perform a consistent read-modify-write.This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
         "jobs": [ # Required. The Directed Acyclic Graph of Jobs to submit.
           { # A job executed by the workflow.
-            "hadoopJob": { # A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). # Job is a Hadoop job.
+            "hadoopJob": { # A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). # Optional. Job is a Hadoop job.
               "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
                 "A String",
               ],
@@ -2323,12 +2770,32 @@
                 "A String",
               ],
               "mainJarFileUri": "A String", # The HCFS URI of the jar file containing the main class. Examples:  'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar'  'hdfs:/tmp/test-samples/custom-wordcount.jar'  'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
-              "properties": { # Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
+              "properties": { # Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
                 "a_key": "A String",
               },
             },
             "stepId": "A String", # Required. The step id. The id must be unique among all jobs within the template.The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps.The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
-            "sparkSqlJob": { # A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. # Job is a SparkSql job.
+            "sparkRJob": { # A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN. # Optional. Job is a SparkR job.
+              "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
+                "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples:  'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
+                  "a_key": "A String",
+                },
+              },
+              "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
+                "A String",
+              ],
+              "mainRFileUri": "A String", # Required. The HCFS URI of the main R file to use as the driver. Must be a .R file.
+              "fileUris": [ # Optional. HCFS URIs of files to be copied to the working directory of R drivers and distributed tasks. Useful for naively parallel tasks.
+                "A String",
+              ],
+              "archiveUris": [ # Optional. HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
+                "A String",
+              ],
+              "properties": { # Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+                "a_key": "A String",
+              },
+            },
+            "sparkSqlJob": { # A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. # Optional. Job is a SparkSql job.
               "queryFileUri": "A String", # The HCFS URI of the script that contains SQL queries.
               "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
                 "a_key": "A String",
@@ -2355,14 +2822,14 @@
                   "A String",
                 ],
               },
-              "properties": { # Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten.
+              "properties": { # Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
                 "a_key": "A String",
               },
             },
             "prerequisiteStepIds": [ # Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
               "A String",
             ],
-            "pigJob": { # A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. # Job is a Pig job.
+            "pigJob": { # A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. # Optional. Job is a Pig job.
               "queryFileUri": "A String", # The HCFS URI of the script that contains the Pig queries.
               "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
                 "a_key": "A String",
@@ -2390,14 +2857,14 @@
                 ],
               },
               "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
-              "properties": { # Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
+              "properties": { # Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
                 "a_key": "A String",
               },
             },
             "labels": { # Optional. The labels to associate with this job.Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given job.
               "a_key": "A String",
             },
-            "sparkJob": { # A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Job is a Spark job.
+            "sparkJob": { # A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Optional. Job is a Spark job.
               "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
                 "A String",
               ],
@@ -2417,16 +2884,46 @@
                 "A String",
               ],
               "mainJarFileUri": "A String", # The HCFS URI of the jar file that contains the main class.
-              "properties": { # Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+              "properties": { # Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+                "a_key": "A String",
+              },
+            },
+            "prestoJob": { # A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT: The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/presto) must be enabled when the cluster is created to submit a Presto job to the cluster. # Optional. Job is a Presto job.
+              "queryFileUri": "A String", # The HCFS URI of the script that contains SQL queries.
+              "outputFormat": "A String", # Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
+              "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
+                "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples:  'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
+                  "a_key": "A String",
+                },
+              },
+              "clientTags": [ # Optional. Presto client tags to attach to this query
+                "A String",
+              ],
+              "queryList": { # A list of queries to run on a cluster. # A list of queries.
+                "queries": [ # Required. The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of an Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob:
+                    # "hiveJob": {
+                    #   "queryList": {
+                    #     "queries": [
+                    #       "query1",
+                    #       "query2",
+                    #       "query3;query4",
+                    #     ]
+                    #   }
+                    # }
+                  "A String",
+                ],
+              },
+              "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
+              "properties": { # Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
                 "a_key": "A String",
               },
             },
             "scheduling": { # Job scheduling options. # Optional. Job scheduling configuration.
               "maxFailuresPerHour": 42, # Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
             },
-            "pysparkJob": { # A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. # Job is a Pyspark job.
+            "pysparkJob": { # A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. # Optional. Job is a PySpark job.
               "mainPythonFileUri": "A String", # Required. The HCFS URI of the main Python file to use as the driver. Must be a .py file.
-              "jarFileUris": [ # Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
+              "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
                 "A String",
               ],
               "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
@@ -2434,7 +2931,7 @@
                   "a_key": "A String",
                 },
               },
-              "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
+              "jarFileUris": [ # Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
                 "A String",
               ],
               "fileUris": [ # Optional. HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks.
@@ -2446,11 +2943,11 @@
               "pythonFileUris": [ # Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
                 "A String",
               ],
-              "properties": { # Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+              "properties": { # Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
                 "a_key": "A String",
               },
             },
-            "hiveJob": { # A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. # Job is a Hive job.
+            "hiveJob": { # A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. # Optional. Job is a Hive job.
               "queryFileUri": "A String", # The HCFS URI of the script that contains Hive queries.
               "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
                 "a_key": "A String",
@@ -2473,15 +2970,16 @@
                 ],
               },
               "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
-              "properties": { # Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
+              "properties": { # Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
                 "a_key": "A String",
               },
             },
           },
         ],
-        "id": "A String", # Required. The template id.The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
+        "id": "A String",
       },
     ],
+    "nextPageToken": "A String", # Output only. This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the page_token in a subsequent &lt;code&gt;ListWorkflowTemplatesRequest&lt;/code&gt;.
   }</pre>
 </div>
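+<p>A minimal pagination sketch, assuming application default credentials and
+placeholder project/region values; the "templates" response field follows
+ListWorkflowTemplatesResponse, and list_next() consumes the nextPageToken
+shown above:</p>
+<pre>
+from googleapiclient.discovery import build
+
+service = build('dataproc', 'v1')
+templates = service.projects().regions().workflowTemplates()
+
+request = templates.list(parent='projects/my-project/regions/us-central1')
+while request is not None:
+    response = request.execute()
+    for template in response.get('templates', []):
+        print(template['id'])
+    # list_next() returns None once the response carries no nextPageToken.
+    request = templates.list_next(previous_request=request,
+                                  previous_response=response)
+</pre>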
 
@@ -2500,70 +2998,103 @@
 </div>
 
 <div class="method">
-    <code class="details" id="setIamPolicy">setIamPolicy(resource, body, x__xgafv=None)</code>
-  <pre>Sets the access control policy on the specified resource. Replaces any existing policy.
+    <code class="details" id="setIamPolicy">setIamPolicy(resource, body=None, x__xgafv=None)</code>
+  <pre>Sets the access control policy on the specified resource. Replaces any existing policy.Can return Public Errors: NOT_FOUND, INVALID_ARGUMENT, and PERMISSION_DENIED.
 
 Args:
   resource: string, REQUIRED: The resource for which the policy is being specified. See the operation documentation for the appropriate value for this field. (required)
-  body: object, The request body. (required)
+  body: object, The request body.
     The object takes the form of:
 
 { # Request message for SetIamPolicy method.
-    "policy": { # Defines an Identity and Access Management (IAM) policy. It is used to specify access control policies for Cloud Platform resources.A Policy consists of a list of bindings. A binding binds a list of members to a role, where the members can be user accounts, Google groups, Google domains, and service accounts. A role is a named list of permissions defined by IAM.JSON Example # REQUIRED: The complete policy to be applied to the resource. The size of the policy is limited to a few 10s of KB. An empty policy is a valid policy but certain Cloud Platform services (such as Projects) might reject them.
+    "policy": { # An Identity and Access Management (IAM) policy, which specifies access controls for Google Cloud resources.A Policy is a collection of bindings. A binding binds one or more members to a single role. Members can be user accounts, service accounts, Google groups, and domains (such as G Suite). A role is a named list of permissions; each role can be an IAM predefined role or a user-created custom role.Optionally, a binding can specify a condition, which is a logical expression that allows access to a resource only if the expression evaluates to true. A condition can add constraints based on attributes of the request, the resource, or both.JSON example: # REQUIRED: The complete policy to be applied to the resource. The size of the policy is limited to a few 10s of KB. An empty policy is a valid policy but certain Cloud Platform services (such as Projects) might reject them.
         # {
         #   "bindings": [
         #     {
-        #       "role": "roles/owner",
+        #       "role": "roles/resourcemanager.organizationAdmin",
         #       "members": [
         #         "user:mike@example.com",
         #         "group:admins@example.com",
         #         "domain:google.com",
-        #         "serviceAccount:my-other-app@appspot.gserviceaccount.com"
+        #         "serviceAccount:my-project-id@appspot.gserviceaccount.com"
         #       ]
         #     },
         #     {
-        #       "role": "roles/viewer",
-        #       "members": ["user:sean@example.com"]
+        #       "role": "roles/resourcemanager.organizationViewer",
+        #       "members": ["user:eve@example.com"],
+        #       "condition": {
+        #         "title": "expirable access",
+        #         "description": "Does not grant access after Sep 2020",
+        #         "expression": "request.time &lt; timestamp('2020-10-01T00:00:00.000Z')",
+        #       }
         #     }
-        #   ]
+        #   ],
+        #   "etag": "BwWWja0YfJA=",
+        #   "version": 3
         # }
-        # YAML Example
+        # YAML example:
         # bindings:
         # - members:
         #   - user:mike@example.com
         #   - group:admins@example.com
         #   - domain:google.com
-        #   - serviceAccount:my-other-app@appspot.gserviceaccount.com
-        #   role: roles/owner
+        #   - serviceAccount:my-project-id@appspot.gserviceaccount.com
+        #   role: roles/resourcemanager.organizationAdmin
         # - members:
-        #   - user:sean@example.com
-        #   role: roles/viewer
-        # For a description of IAM and its features, see the IAM developer's guide (https://cloud.google.com/iam/docs).
-      "bindings": [ # Associates a list of members to a role. bindings with no members will result in an error.
+        #   - user:eve@example.com
+        #   role: roles/resourcemanager.organizationViewer
+        #   condition:
+        #     title: expirable access
+        #     description: Does not grant access after Sep 2020
+        #     expression: request.time &lt; timestamp('2020-10-01T00:00:00.000Z')
+        # - etag: BwWWja0YfJA=
+        # - version: 3
+        # For a description of IAM and its features, see the IAM documentation (https://cloud.google.com/iam/docs/).
+      "bindings": [ # Associates a list of members to a role. Optionally, may specify a condition that determines how and when the bindings are applied. Each of the bindings must contain at least one member.
         { # Associates members with a role.
           "role": "A String", # Role that is assigned to members. For example, roles/viewer, roles/editor, or roles/owner.
           "members": [ # Specifies the identities requesting access for a Cloud Platform resource. members can have the following values:
               # allUsers: A special identifier that represents anyone who is  on the internet; with or without a Google account.
               # allAuthenticatedUsers: A special identifier that represents anyone  who is authenticated with a Google account or a service account.
-              # user:{emailid}: An email address that represents a specific Google  account. For example, alice@gmail.com .
+              # user:{emailid}: An email address that represents a specific Google  account. For example, alice@example.com .
               # serviceAccount:{emailid}: An email address that represents a service  account. For example, my-other-app@appspot.gserviceaccount.com.
               # group:{emailid}: An email address that represents a Google group.  For example, admins@example.com.
+              # deleted:user:{emailid}?uid={uniqueid}: An email address (plus unique  identifier) representing a user that has been recently deleted. For  example, alice@example.com?uid=123456789012345678901. If the user is  recovered, this value reverts to user:{emailid} and the recovered user  retains the role in the binding.
+              # deleted:serviceAccount:{emailid}?uid={uniqueid}: An email address (plus  unique identifier) representing a service account that has been recently  deleted. For example,  my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901.  If the service account is undeleted, this value reverts to  serviceAccount:{emailid} and the undeleted service account retains the  role in the binding.
+              # deleted:group:{emailid}?uid={uniqueid}: An email address (plus unique  identifier) representing a Google group that has been recently  deleted. For example, admins@example.com?uid=123456789012345678901. If  the group is recovered, this value reverts to group:{emailid} and the  recovered group retains the role in the binding.
               # domain:{domain}: The G Suite domain (primary) that represents all the  users of that domain. For example, google.com or example.com.
             "A String",
           ],
-          "condition": { # Represents an expression text. Example: # The condition that is associated with this binding. NOTE: An unsatisfied condition will not allow user access via current binding. Different bindings, including their conditions, are examined independently.
-              # title: "User account presence"
-              # description: "Determines whether the request has a user account"
-              # expression: "size(request.user) > 0"
-            "location": "A String", # An optional string indicating the location of the expression for error reporting, e.g. a file name and a position in the file.
-            "expression": "A String", # Textual representation of an expression in Common Expression Language syntax.The application context of the containing message determines which well-known feature set of CEL is supported.
-            "description": "A String", # An optional description of the expression. This is a longer text which describes the expression, e.g. when hovered over it in a UI.
-            "title": "A String", # An optional title for the expression, i.e. a short string describing its purpose. This can be used e.g. in UIs which allow to enter the expression.
+          "condition": { # Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec.Example (Comparison): # The condition that is associated with this binding. NOTE: An unsatisfied condition will not allow user access via current binding. Different bindings, including their conditions, are examined independently.
+              # title: "Summary size limit"
+              # description: "Determines if a summary is less than 100 chars"
+              # expression: "document.summary.size() &lt; 100"
+              # Example (Equality):
+              # title: "Requestor is owner"
+              # description: "Determines if requestor is the document owner"
+              # expression: "document.owner == request.auth.claims.email"
+              # Example (Logic):
+              # title: "Public documents"
+              # description: "Determine whether the document should be publicly visible"
+              # expression: "document.type != 'private' &amp;&amp; document.type != 'internal'"
+              # Example (Data Manipulation):
+              # title: "Notification string"
+              # description: "Create a notification string with a timestamp."
+              # expression: "'New message received at ' + string(document.create_time)"
+              # The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information.
+            "description": "A String", # Optional. Description of the expression. This is a longer text which describes the expression, e.g. when hovered over it in a UI.
+            "expression": "A String", # Textual representation of an expression in Common Expression Language syntax.
+            "location": "A String", # Optional. String indicating the location of the expression for error reporting, e.g. a file name and a position in the file.
+            "title": "A String", # Optional. Title for the expression, i.e. a short string describing its purpose. This can be used e.g. in UIs which allow to enter the expression.
           },
         },
       ],
-      "etag": "A String", # etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a policy from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform policy updates in order to avoid race conditions: An etag is returned in the response to getIamPolicy, and systems are expected to put that etag in the request to setIamPolicy to ensure that their change will be applied to the same version of the policy.If no etag is provided in the call to setIamPolicy, then the existing policy is overwritten blindly.
-      "version": 42, # Deprecated.
+      "etag": "A String", # etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a policy from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform policy updates in order to avoid race conditions: An etag is returned in the response to getIamPolicy, and systems are expected to put that etag in the request to setIamPolicy to ensure that their change will be applied to the same version of the policy.Important: If you use IAM Conditions, you must include the etag field whenever you call setIamPolicy. If you omit this field, then IAM allows you to overwrite a version 3 policy with a version 1 policy, and all of the conditions in the version 3 policy are lost.
+      "version": 42, # Specifies the format of the policy.Valid values are 0, 1, and 3. Requests that specify an invalid value are rejected.Any operation that affects conditional role bindings must specify version 3. This requirement applies to the following operations:
+          # Getting a policy that includes a conditional role binding
+          # Adding a conditional role binding to a policy
+          # Changing a conditional role binding in a policy
+          # Removing any role binding, with or without a condition, from a policy  that includes conditionsImportant: If you use IAM Conditions, you must include the etag field whenever you call setIamPolicy. If you omit this field, then IAM allows you to overwrite a version 3 policy with a version 1 policy, and all of the conditions in the version 3 policy are lost.If a policy does not include any conditions, operations on that policy may specify any valid version or leave the field unset.
     },
   }
 
@@ -2575,71 +3106,104 @@
 Returns:
   An object of the form:
 
-    { # Defines an Identity and Access Management (IAM) policy. It is used to specify access control policies for Cloud Platform resources.A Policy consists of a list of bindings. A binding binds a list of members to a role, where the members can be user accounts, Google groups, Google domains, and service accounts. A role is a named list of permissions defined by IAM.JSON Example
+    { # An Identity and Access Management (IAM) policy, which specifies access controls for Google Cloud resources.A Policy is a collection of bindings. A binding binds one or more members to a single role. Members can be user accounts, service accounts, Google groups, and domains (such as G Suite). A role is a named list of permissions; each role can be an IAM predefined role or a user-created custom role.Optionally, a binding can specify a condition, which is a logical expression that allows access to a resource only if the expression evaluates to true. A condition can add constraints based on attributes of the request, the resource, or both.JSON example:
       # {
       #   "bindings": [
       #     {
-      #       "role": "roles/owner",
+      #       "role": "roles/resourcemanager.organizationAdmin",
       #       "members": [
       #         "user:mike@example.com",
       #         "group:admins@example.com",
       #         "domain:google.com",
-      #         "serviceAccount:my-other-app@appspot.gserviceaccount.com"
+      #         "serviceAccount:my-project-id@appspot.gserviceaccount.com"
       #       ]
       #     },
       #     {
-      #       "role": "roles/viewer",
-      #       "members": ["user:sean@example.com"]
+      #       "role": "roles/resourcemanager.organizationViewer",
+      #       "members": ["user:eve@example.com"],
+      #       "condition": {
+      #         "title": "expirable access",
+      #         "description": "Does not grant access after Sep 2020",
+      #         "expression": "request.time &lt; timestamp('2020-10-01T00:00:00.000Z')",
+      #       }
       #     }
-      #   ]
+      #   ],
+      #   "etag": "BwWWja0YfJA=",
+      #   "version": 3
       # }
-      # YAML Example
+      # YAML example:
       # bindings:
       # - members:
       #   - user:mike@example.com
       #   - group:admins@example.com
       #   - domain:google.com
-      #   - serviceAccount:my-other-app@appspot.gserviceaccount.com
-      #   role: roles/owner
+      #   - serviceAccount:my-project-id@appspot.gserviceaccount.com
+      #   role: roles/resourcemanager.organizationAdmin
       # - members:
-      #   - user:sean@example.com
-      #   role: roles/viewer
-      # For a description of IAM and its features, see the IAM developer's guide (https://cloud.google.com/iam/docs).
-    "bindings": [ # Associates a list of members to a role. bindings with no members will result in an error.
+      #   - user:eve@example.com
+      #   role: roles/resourcemanager.organizationViewer
+      #   condition:
+      #     title: expirable access
+      #     description: Does not grant access after Sep 2020
+      #     expression: request.time &lt; timestamp('2020-10-01T00:00:00.000Z')
+      # - etag: BwWWja0YfJA=
+      # - version: 3
+      # For a description of IAM and its features, see the IAM documentation (https://cloud.google.com/iam/docs/).
+    "bindings": [ # Associates a list of members to a role. Optionally, may specify a condition that determines how and when the bindings are applied. Each of the bindings must contain at least one member.
       { # Associates members with a role.
         "role": "A String", # Role that is assigned to members. For example, roles/viewer, roles/editor, or roles/owner.
         "members": [ # Specifies the identities requesting access for a Cloud Platform resource. members can have the following values:
             # allUsers: A special identifier that represents anyone who is  on the internet; with or without a Google account.
             # allAuthenticatedUsers: A special identifier that represents anyone  who is authenticated with a Google account or a service account.
-            # user:{emailid}: An email address that represents a specific Google  account. For example, alice@gmail.com .
+            # user:{emailid}: An email address that represents a specific Google  account. For example, alice@example.com .
             # serviceAccount:{emailid}: An email address that represents a service  account. For example, my-other-app@appspot.gserviceaccount.com.
             # group:{emailid}: An email address that represents a Google group.  For example, admins@example.com.
+            # deleted:user:{emailid}?uid={uniqueid}: An email address (plus unique  identifier) representing a user that has been recently deleted. For  example, alice@example.com?uid=123456789012345678901. If the user is  recovered, this value reverts to user:{emailid} and the recovered user  retains the role in the binding.
+            # deleted:serviceAccount:{emailid}?uid={uniqueid}: An email address (plus  unique identifier) representing a service account that has been recently  deleted. For example,  my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901.  If the service account is undeleted, this value reverts to  serviceAccount:{emailid} and the undeleted service account retains the  role in the binding.
+            # deleted:group:{emailid}?uid={uniqueid}: An email address (plus unique  identifier) representing a Google group that has been recently  deleted. For example, admins@example.com?uid=123456789012345678901. If  the group is recovered, this value reverts to group:{emailid} and the  recovered group retains the role in the binding.
             # domain:{domain}: The G Suite domain (primary) that represents all the  users of that domain. For example, google.com or example.com.
           "A String",
         ],
-        "condition": { # Represents an expression text. Example: # The condition that is associated with this binding. NOTE: An unsatisfied condition will not allow user access via current binding. Different bindings, including their conditions, are examined independently.
-            # title: "User account presence"
-            # description: "Determines whether the request has a user account"
-            # expression: "size(request.user) > 0"
-          "location": "A String", # An optional string indicating the location of the expression for error reporting, e.g. a file name and a position in the file.
-          "expression": "A String", # Textual representation of an expression in Common Expression Language syntax.The application context of the containing message determines which well-known feature set of CEL is supported.
-          "description": "A String", # An optional description of the expression. This is a longer text which describes the expression, e.g. when hovered over it in a UI.
-          "title": "A String", # An optional title for the expression, i.e. a short string describing its purpose. This can be used e.g. in UIs which allow to enter the expression.
+        "condition": { # Represents a textual expression in the Common Expression Language (CEL) syntax. CEL is a C-like expression language. The syntax and semantics of CEL are documented at https://github.com/google/cel-spec.Example (Comparison): # The condition that is associated with this binding. NOTE: An unsatisfied condition will not allow user access via current binding. Different bindings, including their conditions, are examined independently.
+            # title: "Summary size limit"
+            # description: "Determines if a summary is less than 100 chars"
+            # expression: "document.summary.size() &lt; 100"
+            # Example (Equality):
+            # title: "Requestor is owner"
+            # description: "Determines if requestor is the document owner"
+            # expression: "document.owner == request.auth.claims.email"
+            # Example (Logic):
+            # title: "Public documents"
+            # description: "Determine whether the document should be publicly visible"
+            # expression: "document.type != 'private' &amp;&amp; document.type != 'internal'"
+            # Example (Data Manipulation):
+            # title: "Notification string"
+            # description: "Create a notification string with a timestamp."
+            # expression: "'New message received at ' + string(document.create_time)"
+            # The exact variables and functions that may be referenced within an expression are determined by the service that evaluates it. See the service documentation for additional information.
+          "description": "A String", # Optional. Description of the expression. This is a longer text which describes the expression, e.g. when hovered over it in a UI.
+          "expression": "A String", # Textual representation of an expression in Common Expression Language syntax.
+          "location": "A String", # Optional. String indicating the location of the expression for error reporting, e.g. a file name and a position in the file.
+          "title": "A String", # Optional. Title for the expression, i.e. a short string describing its purpose. This can be used e.g. in UIs which allow to enter the expression.
         },
       },
     ],
-    "etag": "A String", # etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a policy from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform policy updates in order to avoid race conditions: An etag is returned in the response to getIamPolicy, and systems are expected to put that etag in the request to setIamPolicy to ensure that their change will be applied to the same version of the policy.If no etag is provided in the call to setIamPolicy, then the existing policy is overwritten blindly.
-    "version": 42, # Deprecated.
+    "etag": "A String", # etag is used for optimistic concurrency control as a way to help prevent simultaneous updates of a policy from overwriting each other. It is strongly suggested that systems make use of the etag in the read-modify-write cycle to perform policy updates in order to avoid race conditions: An etag is returned in the response to getIamPolicy, and systems are expected to put that etag in the request to setIamPolicy to ensure that their change will be applied to the same version of the policy.Important: If you use IAM Conditions, you must include the etag field whenever you call setIamPolicy. If you omit this field, then IAM allows you to overwrite a version 3 policy with a version 1 policy, and all of the conditions in the version 3 policy are lost.
+    "version": 42, # Specifies the format of the policy.Valid values are 0, 1, and 3. Requests that specify an invalid value are rejected.Any operation that affects conditional role bindings must specify version 3. This requirement applies to the following operations:
+        # Getting a policy that includes a conditional role binding
+        # Adding a conditional role binding to a policy
+        # Changing a conditional role binding in a policy
+        # Removing any role binding, with or without a condition, from a policy  that includes conditionsImportant: If you use IAM Conditions, you must include the etag field whenever you call setIamPolicy. If you omit this field, then IAM allows you to overwrite a version 3 policy with a version 1 policy, and all of the conditions in the version 3 policy are lost.If a policy does not include any conditions, operations on that policy may specify any valid version or leave the field unset.
   }</pre>
 </div>
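+<p>A minimal read-modify-write sketch for the etag guidance above, assuming
+the same client setup as the pagination sketch; the resource name, role, and
+member below are placeholders:</p>
+<pre>
+from googleapiclient.discovery import build
+
+service = build('dataproc', 'v1')
+templates = service.projects().regions().workflowTemplates()
+resource = ('projects/my-project/regions/us-central1/'
+            'workflowTemplates/my-template')
+
+# Read the current policy; the returned etag guards the write below.
+policy = templates.getIamPolicy(resource=resource).execute()
+
+# Modify locally, leaving the etag in place so a concurrent update fails
+# the write instead of being silently overwritten.
+policy.setdefault('bindings', []).append({
+    'role': 'roles/viewer',
+    'members': ['user:eve@example.com'],
+})
+
+templates.setIamPolicy(resource=resource, body={'policy': policy}).execute()
+</pre>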
 
 <div class="method">
-    <code class="details" id="testIamPermissions">testIamPermissions(resource, body, x__xgafv=None)</code>
+    <code class="details" id="testIamPermissions">testIamPermissions(resource, body=None, x__xgafv=None)</code>
   <pre>Returns permissions that a caller has on the specified resource. If the resource does not exist, this will return an empty set of permissions, not a NOT_FOUND error.Note: This operation is designed to be used for building permission-aware UIs and command-line tools, not for authorization checking. This operation may "fail open" without warning.
 
 Args:
   resource: string, REQUIRED: The resource for which the policy detail is being requested. See the operation documentation for the appropriate value for this field. (required)
-  body: object, The request body. (required)
+  body: object, The request body.
     The object takes the form of:
 
 { # Request message for TestIamPermissions method.
@@ -2664,32 +3228,40 @@
 </div>
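+<p>A minimal usage sketch; the permission name is illustrative, and the
+request body's "permissions" list follows the TestIamPermissions request
+message:</p>
+<pre>
+from googleapiclient.discovery import build
+
+service = build('dataproc', 'v1')
+resource = ('projects/my-project/regions/us-central1/'
+            'workflowTemplates/my-template')
+
+response = service.projects().regions().workflowTemplates().testIamPermissions(
+    resource=resource,
+    body={'permissions': ['dataproc.workflowTemplates.instantiate']},
+).execute()
+
+# The response echoes back only the subset of permissions the caller holds.
+granted = response.get('permissions', [])
+</pre>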
 
 <div class="method">
-    <code class="details" id="update">update(name, body, x__xgafv=None)</code>
+    <code class="details" id="update">update(name, body=None, x__xgafv=None)</code>
   <pre>Updates (replaces) workflow template. The updated template must contain version that matches the current server version.
 
 Args:
-  name: string, Output only. The "resource name" of the template, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id} (required)
-  body: object, The request body. (required)
+  name: string, Output only. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
+For projects.regions.workflowTemplates, the resource name of the  template has the following format:  projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
+For projects.locations.workflowTemplates, the resource name of the  template has the following format:  projects/{project_id}/locations/{location}/workflowTemplates/{template_id} (required)
+  body: object, The request body.
     The object takes the form of:
 
-{ # A Cloud Dataproc workflow template resource.
+{ # A Dataproc workflow template resource.
   "updateTime": "A String", # Output only. The time template was last updated.
   "placement": { # Specifies workflow execution target.Either managed_cluster or cluster_selector is required. # Required. WorkflowTemplate scheduling information.
     "clusterSelector": { # A selector that chooses target cluster for jobs based on metadata. # Optional. A selector that chooses target cluster for jobs based on metadata.The selector is evaluated at the time each job is submitted.
+      "zone": "A String", # Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster.If unspecified, the zone of the first cluster matching the selector is used.
       "clusterLabels": { # Required. The cluster labels. Cluster must have all labels to match.
         "a_key": "A String",
       },
-      "zone": "A String", # Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster.If unspecified, the zone of the first cluster matching the selector is used.
     },
-    "managedCluster": { # Cluster that is managed by the workflow. # Optional. A cluster that is managed by the workflow.
+    "managedCluster": { # Cluster that is managed by the workflow. # A cluster that is managed by the workflow.
       "clusterName": "A String", # Required. The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
       "labels": { # Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
         "a_key": "A String",
       },
       "config": { # The cluster config. # Required. The cluster configuration.
+        "lifecycleConfig": { # Specifies the cluster auto-delete schedule configuration. # Optional. Lifecycle setting for the cluster.
+          "idleStartTime": "A String", # Output only. The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+          "idleDeleteTtl": "A String", # Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json).
+          "autoDeleteTtl": "A String", # Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+          "autoDeleteTime": "A String", # Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+        },
         "softwareConfig": { # Specifies the selection and config of software inside the cluster. # Optional. The config settings for software inside the cluster.
-          "imageVersion": "A String", # Optional. The version of software inside the cluster. It must be one of the supported Cloud Dataproc Versions, such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version. If unspecified, it defaults to the latest Debian version.
-          "optionalComponents": [ # The set of optional components to activate on the cluster.
+          "imageVersion": "A String", # Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_cloud_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
+          "optionalComponents": [ # Optional. The set of components to activate on the cluster.
             "A String",
           ],
           "properties": { # Optional. The properties to set on daemon config files.Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings:
@@ -2701,24 +3273,29 @@
               # mapred: mapred-site.xml
               # pig: pig.properties
               # spark: spark-defaults.conf
-              # yarn: yarn-site.xmlFor more information, see Cluster properties.
+              # yarn: yarn-site.xmlFor more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
             "a_key": "A String",
           },
         },
-        "configBucket": "A String", # Optional. A Google Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Google Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Cloud Dataproc staging bucket).
+        "configBucket": "A String", # Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)).
         "gceClusterConfig": { # Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster. # Optional. The shared Compute Engine config settings for all instances in a cluster.
           "internalIpOnly": True or False, # Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
-          "networkUri": "A String", # Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks for more information).A full URL, partial URI, or short name are valid. Examples:
+          "reservationAffinity": { # Reservation Affinity for consuming Zonal reservation. # Optional. Reservation Affinity for consuming Zonal reservation.
+            "values": [ # Optional. Corresponds to the label values of reservation resource.
+              "A String",
+            ],
+            "key": "A String", # Optional. Corresponds to the label key of reservation resource.
+            "consumeReservationType": "A String", # Optional. Type of reservation to consume
+          },
+          "networkUri": "A String", # Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples:
               # https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default
               # projects/[project_id]/regions/global/default
               # default
-          "tags": [ # The Compute Engine tags to add to all instances (see Tagging instances).
+          "tags": [ # The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
             "A String",
           ],
-          "serviceAccount": "A String", # Optional. The service account of the instances. Defaults to the default Compute Engine service account. Custom service accounts need permissions equivalent to the following IAM roles:
-              # roles/logging.logWriter
-              # roles/storage.objectAdmin(see https://cloud.google.com/compute/docs/access/service-accounts#custom_service_accounts for more information). Example: [account_id]@[project_id].iam.gserviceaccount.com
-          "zoneUri": "A String", # Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Cloud Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples:
+          "serviceAccount": "A String", # Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_cloud_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
+          "zoneUri": "A String", # Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples:
               # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]
               # projects/[project_id]/zones/[zone]
               # us-central1-f
@@ -2740,25 +3317,37 @@
             "a_key": "A String",
           },
         },
-        "workerConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for worker instances in a cluster.
-          "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+        "autoscalingConfig": { # Autoscaling Policy config associated with the cluster. # Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
+          "policyUri": "A String", # Optional. The autoscaling policy used by the cluster.Only resource names including projectid and location (region) are valid. Examples:
+              # https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]
+              # projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]Note that the policy must be in the same project and Dataproc region.
+        },
+        "workerConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for worker instances in a cluster.
+          "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+          "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
           "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
               # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
               # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-              # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-          "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+              # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+          "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
             "A String",
           ],
-          "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-          "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-            { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+          "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+              # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+              # projects/[project_id]/global/images/[image-id]
+              # image-idImage family examples. Dataproc will use the most recent image from the family:
+              # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+              # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+          "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+            { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
               "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-              "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+              "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                   # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                   # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                  # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                  # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
             },
           ],
+          "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
           "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
             "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
             "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -2779,32 +3368,39 @@
             #   ... worker specific actions ...
             # fi
           { # Specifies an executable to run on a fully configured node and a timeout period for executable completion.
-            "executionTimeout": "A String", # Optional. Amount of time executable has to complete. Default is 10 minutes. Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
+            "executionTimeout": "A String", # Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
             "executableFile": "A String", # Required. Cloud Storage URI of executable file.
           },
         ],
         "encryptionConfig": { # Encryption settings for the cluster. # Optional. Encryption settings for the cluster.
           "gcePdKmsKeyName": "A String", # Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
         },
-        "secondaryWorkerConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for additional worker instances in a cluster.
-          "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+        "secondaryWorkerConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for additional worker instances in a cluster.
+          "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+          "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
           "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
               # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
               # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-              # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-          "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+              # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+          "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
             "A String",
           ],
-          "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-          "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-            { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+          "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+              # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+              # projects/[project_id]/global/images/[image-id]
+              # image-idImage family examples. Dataproc will use the most recent image from the family:
+              # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+              # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+          "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+            { # Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
               "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-              "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+              "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                   # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                   # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                  # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                  # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
             },
           ],
+          "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
           "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
             "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
             "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -2816,25 +3412,32 @@
             "bootDiskSizeGb": 42, # Optional. Size in GB of the boot disk (default is 500GB).
           },
         },
-        "masterConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for the master instance in a cluster.
-          "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+        "masterConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for the master instance in a cluster.
+          "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+          "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
           "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
               # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
               # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-              # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-          "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+              # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+          "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
             "A String",
           ],
-          "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-          "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-            { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+          "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+              # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+              # projects/[project_id]/global/images/[image-id]
+              # image-idImage family examples. Dataproc will use the most recent image from the family:
+              # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+              # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+          "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+            { # Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
               "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-              "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+              "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                   # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                   # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                  # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                  # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
             },
           ],
+          "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
           "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
             "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
             "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -2850,8 +3453,9 @@
           "kerberosConfig": { # Specifies Kerberos related configuration. # Kerberos related configuration.
             "truststorePasswordUri": "A String", # Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
             "crossRealmTrustRealm": "A String", # Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
+            "realm": "A String", # Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
             "keyPasswordUri": "A String", # Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
-            "enableKerberos": True or False, # Optional. Flag to indicate whether to Kerberize the cluster.
+            "enableKerberos": True or False, # Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
             "crossRealmTrustAdminServer": "A String", # Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
             "tgtLifetimeHours": 42, # Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
             "keystoreUri": "A String", # Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
@@ -2867,8 +3471,10 @@
       },
     },
   },
-  "name": "A String", # Output only. The "resource name" of the template, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
-  "parameters": [ # Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
+  "name": "A String", # Output only. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
+      # For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
+      # For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
+  "parameters": [ # Optional. emplate parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
     { # A configurable parameter that replaces one or more fields in the template. Parameterizable fields: - Labels - File uris - Job properties - Job arguments - Script variables - Main class (in HadoopJob and SparkJob) - Zone (in ClusterSelector)
       "fields": [ # Required. Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths.A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone.Also, field paths can reference fields using the following syntax:
           # Values in maps can be referenced by key:
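(Editor's note: to make the field-path syntax concrete, here is a minimal sketch of a template parameter that substitutes the cluster-selector zone at instantiation time; the parameter name and description are placeholders.)

    parameters = [
        {
            "name": "ZONE",
            "fields": ["placement.clusterSelector.zone"],
            "description": "Zone in which the workflow runs.",
        },
    ]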
@@ -2920,7 +3526,7 @@
   "version": 42, # Optional. Used to perform a consistent read-modify-write.This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
   "jobs": [ # Required. The Directed Acyclic Graph of Jobs to submit.
     { # A job executed by the workflow.
-      "hadoopJob": { # A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). # Job is a Hadoop job.
+      "hadoopJob": { # A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). # Optional. Job is a Hadoop job.
         "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
           "A String",
         ],
@@ -2940,12 +3546,32 @@
           "A String",
         ],
         "mainJarFileUri": "A String", # The HCFS URI of the jar file containing the main class. Examples:  'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar'  'hdfs:/tmp/test-samples/custom-wordcount.jar'  'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
-        "properties": { # Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
+        "properties": { # Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
           "a_key": "A String",
         },
       },
       "stepId": "A String", # Required. The step id. The id must be unique among all jobs within the template.The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps.The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
-      "sparkSqlJob": { # A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. # Job is a SparkSql job.
+      "sparkRJob": { # A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN. # Optional. Job is a SparkR job.
+        "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
+          "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples:  'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
+            "a_key": "A String",
+          },
+        },
+        "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
+          "A String",
+        ],
+        "mainRFileUri": "A String", # Required. The HCFS URI of the main R file to use as the driver. Must be a .R file.
+        "fileUris": [ # Optional. HCFS URIs of files to be copied to the working directory of R drivers and distributed tasks. Useful for naively parallel tasks.
+          "A String",
+        ],
+        "archiveUris": [ # Optional. HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
+          "A String",
+        ],
+        "properties": { # Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+          "a_key": "A String",
+        },
+      },
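(Editor's note: "sparkRJob" is new in this regeneration. A minimal step sketch chaining it after the Hadoop step above; bucket and file names are placeholders.)

    sparkr_step = {
        "stepId": "analyze",
        "prerequisiteStepIds": ["teragen"],  # Runs after the Hadoop step.
        "sparkRJob": {
            "mainRFileUri": "gs://my-bucket/analyze.R",  # Must be a .R file.
            "args": ["--input", "hdfs:///gen/"],
        },
    }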
+      "sparkSqlJob": { # A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. # Optional. Job is a SparkSql job.
         "queryFileUri": "A String", # The HCFS URI of the script that contains SQL queries.
         "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
           "a_key": "A String",
@@ -2972,14 +3598,14 @@
             "A String",
           ],
         },
-        "properties": { # Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten.
+        "properties": { # Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
           "a_key": "A String",
         },
       },
       "prerequisiteStepIds": [ # Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
         "A String",
       ],
-      "pigJob": { # A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. # Job is a Pig job.
+      "pigJob": { # A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. # Optional. Job is a Pig job.
         "queryFileUri": "A String", # The HCFS URI of the script that contains the Pig queries.
         "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
           "a_key": "A String",
@@ -3007,14 +3633,14 @@
           ],
         },
         "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
-        "properties": { # Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
+        "properties": { # Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
           "a_key": "A String",
         },
       },
       "labels": { # Optional. The labels to associate with this job.Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given job.
         "a_key": "A String",
       },
-      "sparkJob": { # A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Job is a Spark job.
+      "sparkJob": { # A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Optional. Job is a Spark job.
         "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
           "A String",
         ],
@@ -3034,16 +3660,46 @@
           "A String",
         ],
         "mainJarFileUri": "A String", # The HCFS URI of the jar file that contains the main class.
-        "properties": { # Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+        "properties": { # Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+          "a_key": "A String",
+        },
+      },
+      "prestoJob": { # A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT: The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/presto) must be enabled when the cluster is created to submit a Presto job to the cluster. # Optional. Job is a Presto job.
+        "queryFileUri": "A String", # The HCFS URI of the script that contains SQL queries.
+        "outputFormat": "A String", # Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
+        "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
+          "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples:  'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
+            "a_key": "A String",
+          },
+        },
+        "clientTags": [ # Optional. Presto client tags to attach to this query
+          "A String",
+        ],
+        "queryList": { # A list of queries to run on a cluster. # A list of queries.
+          "queries": [ # Required. The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of an Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob:
+              # "hiveJob": {
+              #   "queryList": {
+              #     "queries": [
+              #       "query1",
+              #       "query2",
+              #       "query3;query4",
+              #     ]
+              #   }
+              # }
+            "A String",
+          ],
+        },
+        "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
+        "properties": { # Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
           "a_key": "A String",
         },
       },
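(Editor's note: "prestoJob" is also new; per the note above it only runs on clusters created with the Presto optional component. A minimal step sketch; the query and output format are placeholders.)

    presto_step = {
        "stepId": "report",
        "prestoJob": {
            "queryList": {"queries": ["SELECT count(*) FROM hive.default.events"]},
            "outputFormat": "CSV",  # Assumed to be one of Presto's supported formats.
            "continueOnFailure": False,
        },
    }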
       "scheduling": { # Job scheduling options. # Optional. Job scheduling configuration.
         "maxFailuresPerHour": 42, # Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
       },
-      "pysparkJob": { # A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. # Job is a Pyspark job.
+      "pysparkJob": { # A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. # Optional. Job is a PySpark job.
         "mainPythonFileUri": "A String", # Required. The HCFS URI of the main Python file to use as the driver. Must be a .py file.
-        "jarFileUris": [ # Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
+        "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
           "A String",
         ],
         "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
@@ -3051,7 +3707,7 @@
             "a_key": "A String",
           },
         },
-        "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
+        "jarFileUris": [ # Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
           "A String",
         ],
         "fileUris": [ # Optional. HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks.
@@ -3063,11 +3719,11 @@
         "pythonFileUris": [ # Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
           "A String",
         ],
-        "properties": { # Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+        "properties": { # Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
           "a_key": "A String",
         },
       },
-      "hiveJob": { # A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. # Job is a Hive job.
+      "hiveJob": { # A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. # Optional. Job is a Hive job.
         "queryFileUri": "A String", # The HCFS URI of the script that contains Hive queries.
         "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
           "a_key": "A String",
@@ -3090,13 +3746,13 @@
           ],
         },
         "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
-        "properties": { # Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
+        "properties": { # Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
           "a_key": "A String",
         },
       },
     },
   ],
-  "id": "A String", # Required. The template id.The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
+  "id": "A String",
 }
 
   x__xgafv: string, V1 error format.
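(Editor's note: tying the request body together, a hedged end-to-end sketch that builds the service with googleapiclient and creates a template from the dict documented above. Project, region, and template id are placeholders; the step dicts are the sketches from earlier notes.)

    from googleapiclient import discovery

    service = discovery.build("dataproc", "v1")  # Uses application-default credentials.
    parent = "projects/my-project/regions/us-central1"

    template = {
        "id": "my-workflow",
        "placement": {"clusterSelector": {"clusterLabels": {"env": "prod"}}},
        "jobs": [hadoop_job_step, sparkr_step, presto_step],
    }

    created = (
        service.projects()
        .regions()
        .workflowTemplates()
        .create(parent=parent, body=template)
        .execute()
    )
    print(created["name"], created["version"])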
@@ -3107,24 +3763,30 @@
 Returns:
   An object of the form:
 
-    { # A Cloud Dataproc workflow template resource.
+    { # A Dataproc workflow template resource.
     "updateTime": "A String", # Output only. The time template was last updated.
     "placement": { # Specifies workflow execution target.Either managed_cluster or cluster_selector is required. # Required. WorkflowTemplate scheduling information.
       "clusterSelector": { # A selector that chooses target cluster for jobs based on metadata. # Optional. A selector that chooses target cluster for jobs based on metadata.The selector is evaluated at the time each job is submitted.
+        "zone": "A String", # Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster.If unspecified, the zone of the first cluster matching the selector is used.
         "clusterLabels": { # Required. The cluster labels. Cluster must have all labels to match.
           "a_key": "A String",
         },
-        "zone": "A String", # Optional. The zone where workflow process executes. This parameter does not affect the selection of the cluster.If unspecified, the zone of the first cluster matching the selector is used.
       },
-      "managedCluster": { # Cluster that is managed by the workflow. # Optional. A cluster that is managed by the workflow.
+      "managedCluster": { # Cluster that is managed by the workflow. # A cluster that is managed by the workflow.
         "clusterName": "A String", # Required. The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
         "labels": { # Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
           "a_key": "A String",
         },
         "config": { # The cluster config. # Required. The cluster configuration.
+          "lifecycleConfig": { # Specifies the cluster auto-delete schedule configuration. # Optional. Lifecycle setting for the cluster.
+            "idleStartTime": "A String", # Output only. The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+            "idleDeleteTtl": "A String", # Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json).
+            "autoDeleteTtl": "A String", # Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+            "autoDeleteTime": "A String", # Optional. The time when cluster will be auto-deleted (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
+          },
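(Editor's note: the duration fields use the protobuf JSON encoding referenced above, i.e. seconds with an "s" suffix. A minimal sketch:)

    lifecycle_config = {
        "idleDeleteTtl": "1800s",   # Delete after 30 idle minutes (min 10m, max 14d).
        "autoDeleteTtl": "86400s",  # Unconditional deletion after 24 hours.
    }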
           "softwareConfig": { # Specifies the selection and config of software inside the cluster. # Optional. The config settings for software inside the cluster.
-            "imageVersion": "A String", # Optional. The version of software inside the cluster. It must be one of the supported Cloud Dataproc Versions, such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version. If unspecified, it defaults to the latest Debian version.
-            "optionalComponents": [ # The set of optional components to activate on the cluster.
+            "imageVersion": "A String", # Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_cloud_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
+            "optionalComponents": [ # Optional. The set of components to activate on the cluster.
               "A String",
             ],
             "properties": { # Optional. The properties to set on daemon config files.Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings:
@@ -3136,24 +3798,29 @@
                 # mapred: mapred-site.xml
                 # pig: pig.properties
                 # spark: spark-defaults.conf
-                # yarn: yarn-site.xmlFor more information, see Cluster properties.
+                # yarn: yarn-site.xmlFor more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
               "a_key": "A String",
             },
           },
-          "configBucket": "A String", # Optional. A Google Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Google Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Cloud Dataproc staging bucket).
+          "configBucket": "A String", # Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)).
           "gceClusterConfig": { # Common config settings for resources of Compute Engine cluster instances, applicable to all instances in the cluster. # Optional. The shared Compute Engine config settings for all instances in a cluster.
             "internalIpOnly": True or False, # Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
-            "networkUri": "A String", # Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks for more information).A full URL, partial URI, or short name are valid. Examples:
+            "reservationAffinity": { # Reservation Affinity for consuming Zonal reservation. # Optional. Reservation Affinity for consuming Zonal reservation.
+              "values": [ # Optional. Corresponds to the label values of reservation resource.
+                "A String",
+              ],
+              "key": "A String", # Optional. Corresponds to the label key of reservation resource.
+              "consumeReservationType": "A String", # Optional. Type of reservation to consume
+            },
+            "networkUri": "A String", # Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples:
                 # https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default
                 # projects/[project_id]/regions/global/default
                 # default
-            "tags": [ # The Compute Engine tags to add to all instances (see Tagging instances).
+            "tags": [ # The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
               "A String",
             ],
-            "serviceAccount": "A String", # Optional. The service account of the instances. Defaults to the default Compute Engine service account. Custom service accounts need permissions equivalent to the following IAM roles:
-                # roles/logging.logWriter
-                # roles/storage.objectAdmin(see https://cloud.google.com/compute/docs/access/service-accounts#custom_service_accounts for more information). Example: [account_id]@[project_id].iam.gserviceaccount.com
-            "zoneUri": "A String", # Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Cloud Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples:
+            "serviceAccount": "A String", # Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_cloud_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
+            "zoneUri": "A String", # Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples:
                 # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone]
                 # projects/[project_id]/zones/[zone]
                 # us-central1-f
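(Editor's note: a sketch of the new "reservationAffinity" block alongside the service-account and zone fields documented above. The reservation key shown is assumed to be the conventional Compute Engine label for named reservations, and all identifiers are placeholders.)

    gce_cluster_config = {
        "zoneUri": "us-central1-f",  # Short-name form from the examples above.
        "serviceAccount": "dataproc-vm@my-project.iam.gserviceaccount.com",
        "reservationAffinity": {
            "consumeReservationType": "SPECIFIC_RESERVATION",
            # Assumed: the Compute Engine label key for named reservations.
            "key": "compute.googleapis.com/reservation-name",
            "values": ["my-reservation"],
        },
    }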
@@ -3175,25 +3842,37 @@
               "a_key": "A String",
             },
           },
-          "workerConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for worker instances in a cluster.
-            "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+          "autoscalingConfig": { # Autoscaling Policy config associated with the cluster. # Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
+            "policyUri": "A String", # Optional. The autoscaling policy used by the cluster.Only resource names including projectid and location (region) are valid. Examples:
+                # https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]
+                # projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]Note that the policy must be in the same project and Dataproc region.
+          },
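(Editor's note: a minimal sketch of the new "autoscalingConfig"; per the note above, the policy must live in the same project and region as the cluster. Ids are placeholders.)

    autoscaling_config = {
        "policyUri": (
            "projects/my-project/locations/us-central1/autoscalingPolicies/my-policy"
        ),
    }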
+          "workerConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for worker instances in a cluster.
+            "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+            "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
             "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
                 # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
                 # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-                # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-            "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+                # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+            "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
               "A String",
             ],
-            "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-            "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-              { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+            "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+                # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+                # projects/[project_id]/global/images/[image-id]
+                # image-idImage family examples. Dataproc will use the most recent image from the family:
+                # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+                # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+            "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+              { # Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
                 "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-                "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+                "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                     # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                     # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                    # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                    # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
               },
             ],
+            "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
             "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
               "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
               "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -3214,32 +3893,39 @@
               #   ... worker specific actions ...
               # fi
             { # Specifies an executable to run on a fully configured node and a timeout period for executable completion.
-              "executionTimeout": "A String", # Optional. Amount of time executable has to complete. Default is 10 minutes. Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
+              "executionTimeout": "A String", # Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
               "executableFile": "A String", # Required. Cloud Storage URI of executable file.
             },
           ],
           "encryptionConfig": { # Encryption settings for the cluster. # Optional. Encryption settings for the cluster.
             "gcePdKmsKeyName": "A String", # Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
           },
-          "secondaryWorkerConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for additional worker instances in a cluster.
-            "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+          "secondaryWorkerConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for additional worker instances in a cluster.
+            "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+            "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
             "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
                 # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
                 # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-                # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-            "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+                # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+            "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
               "A String",
             ],
-            "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-            "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-              { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+            "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+                # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+                # projects/[project_id]/global/images/[image-id]
+                # image-idImage family examples. Dataproc will use the most recent image from the family:
+                # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+                # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+            "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+              { # Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
                 "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-                "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+                "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                     # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                     # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                    # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                    # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
               },
             ],
+            "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
             "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
               "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
               "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -3251,25 +3937,32 @@
               "bootDiskSizeGb": 42, # Optional. Size in GB of the boot disk (default is 500GB).
             },
           },
-          "masterConfig": { # Optional. The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for the master instance in a cluster.
-            "isPreemptible": True or False, # Optional. Specifies that this instance group contains preemptible instances.
+          "masterConfig": { # The config settings for Compute Engine resources in an instance group, such as a master or worker group. # Optional. The Compute Engine config settings for the master instance in a cluster.
+            "isPreemptible": True or False, # Output only. Specifies that this instance group contains preemptible instances.
+            "preemptibility": "A String", # Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
             "machineTypeUri": "A String", # Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples:
                 # https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
                 # projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2
-                # n1-standard-2Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the machine type resource, for example, n1-standard-2.
-            "instanceNames": [ # Output only. The list of instance names. Cloud Dataproc derives the names from cluster_name, num_instances, and the instance group.
+                # n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
+            "instanceNames": [ # Output only. The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
               "A String",
             ],
-            "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances. It can be specified or may be inferred from SoftwareConfig.image_version.
-            "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.Beta Feature: This feature is still under development. It may be changed before final release.
-              { # Specifies the type and number of accelerator cards attached to the instances of an instance. See GPUs on Compute Engine.
+            "imageUri": "A String", # Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples:
+                # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id]
+                # projects/[project_id]/global/images/[image-id]
+                # image-idImage family examples. Dataproc will use the most recent image from the family:
+                # https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name]
+                # projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
+            "accelerators": [ # Optional. The Compute Engine accelerator configuration for these instances.
+              { # Specifies the type and number of accelerator cards attached to the instances of an instance group. See GPUs on Compute Engine (https://cloud.google.com/compute/docs/gpus/).
                 "acceleratorCount": 42, # The number of the accelerator cards of this type exposed to this instance.
-                "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes.Examples:
+                "acceleratorTypeUri": "A String", # Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes).Examples:
                     # https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
                     # projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80
-                    # nvidia-tesla-k80Auto Zone Exception: If you are using the Cloud Dataproc Auto Zone Placement feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
+                    # nvidia-tesla-k80Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
               },
             ],
+            "minCpuPlatform": "A String", # Optional. Specifies the minimum cpu platform for the Instance Group. See Dataproc -&amp;gt; Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
             "managedGroupConfig": { # Specifies the resources used to actively manage an instance group. # Output only. The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
               "instanceTemplateName": "A String", # Output only. The name of the Instance Template used for the Managed Instance Group.
               "instanceGroupManagerName": "A String", # Output only. The name of the Instance Group Manager for this group.
@@ -3285,8 +3978,9 @@
             "kerberosConfig": { # Specifies Kerberos related configuration. # Kerberos related configuration.
               "truststorePasswordUri": "A String", # Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
               "crossRealmTrustRealm": "A String", # Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
+              "realm": "A String", # Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
               "keyPasswordUri": "A String", # Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
-              "enableKerberos": True or False, # Optional. Flag to indicate whether to Kerberize the cluster.
+              "enableKerberos": True or False, # Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
               "crossRealmTrustAdminServer": "A String", # Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
               "tgtLifetimeHours": 42, # Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
               "keystoreUri": "A String", # Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
@@ -3302,8 +3996,10 @@
         },
       },
     },
-    "name": "A String", # Output only. The "resource name" of the template, as described in https://cloud.google.com/apis/design/resource_names of the form projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
-    "parameters": [ # Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
+    "name": "A String", # Output only. The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names.
+        # For projects.regions.workflowTemplates, the resource name of the  template has the following format:  projects/{project_id}/regions/{region}/workflowTemplates/{template_id}
+        # For projects.locations.workflowTemplates, the resource name of the  template has the following format:  projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
+    "parameters": [ # Optional. emplate parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
       { # A configurable parameter that replaces one or more fields in the template. Parameterizable fields: - Labels - File uris - Job properties - Job arguments - Script variables - Main class (in HadoopJob and SparkJob) - Zone (in ClusterSelector)
         "fields": [ # Required. Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths.A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone.Also, field paths can reference fields using the following syntax:
             # Values in maps can be referenced by key:
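
A sketch of one parameter entry using the field-path syntax described above;
the parameter name is hypothetical, and the "name" key is assumed from the
full TemplateParameter schema rather than shown in this hunk:

    parameter = {
        "name": "ZONE",  # assumed field; required in the full schema
        "fields": ["placement.clusterSelector.zone"],  # a field may appear in at most one parameter
    }
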
@@ -3355,7 +4051,7 @@
     "version": 42, # Optional. Used to perform a consistent read-modify-write.This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
     "jobs": [ # Required. The Directed Acyclic Graph of Jobs to submit.
       { # A job executed by the workflow.
-        "hadoopJob": { # A Cloud Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). # Job is a Hadoop job.
+        "hadoopJob": { # A Dataproc job for running Apache Hadoop MapReduce (https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html) jobs on Apache Hadoop YARN (https://hadoop.apache.org/docs/r2.7.1/hadoop-yarn/hadoop-yarn-site/YARN.html). # Optional. Job is a Hadoop job.
           "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
             "A String",
           ],
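
The version field above implies a read-modify-write loop; a minimal sketch with
the googleapiclient library this reference documents, assuming hypothetical
project, region, and template ids and an illustrative label edit:

    from googleapiclient.discovery import build

    dataproc = build("dataproc", "v1")
    templates = dataproc.projects().regions().workflowTemplates()
    name = "projects/my-project/regions/us-east1/workflowTemplates/my-template"

    template = templates.get(name=name).execute()  # returns "version" filled in
    template["labels"] = {"env": "test"}           # assumed top-level labels field
    templates.update(name=name, body=template).execute()  # fails if version is stale
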
@@ -3375,12 +4071,32 @@
             "A String",
           ],
           "mainJarFileUri": "A String", # The HCFS URI of the jar file containing the main class. Examples:  'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar'  'hdfs:/tmp/test-samples/custom-wordcount.jar'  'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
-          "properties": { # Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
+          "properties": { # Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
             "a_key": "A String",
           },
         },
         "stepId": "A String", # Required. The step id. The id must be unique among all jobs within the template.The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps.The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
-        "sparkSqlJob": { # A Cloud Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. # Job is a SparkSql job.
+        "sparkRJob": { # A Dataproc job for running Apache SparkR (https://spark.apache.org/docs/latest/sparkr.html) applications on YARN. # Optional. Job is a SparkR job.
+          "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
+            "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples:  'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
+              "a_key": "A String",
+            },
+          },
+          "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
+            "A String",
+          ],
+          "mainRFileUri": "A String", # Required. The HCFS URI of the main R file to use as the driver. Must be a .R file.
+          "fileUris": [ # Optional. HCFS URIs of files to be copied to the working directory of R drivers and distributed tasks. Useful for naively parallel tasks.
+            "A String",
+          ],
+          "archiveUris": [ # Optional. HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
+            "A String",
+          ],
+          "properties": { # Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+            "a_key": "A String",
+          },
+        },
+        "sparkSqlJob": { # A Dataproc job for running Apache Spark SQL (http://spark.apache.org/sql/) queries. # Optional. Job is a SparkSql job.
           "queryFileUri": "A String", # The HCFS URI of the script that contains SQL queries.
           "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
             "a_key": "A String",
@@ -3407,14 +4123,14 @@
               "A String",
             ],
           },
-          "properties": { # Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten.
+          "properties": { # Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
             "a_key": "A String",
           },
         },
         "prerequisiteStepIds": [ # Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
           "A String",
         ],
-        "pigJob": { # A Cloud Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. # Job is a Pig job.
+        "pigJob": { # A Dataproc job for running Apache Pig (https://pig.apache.org/) queries on YARN. # Optional. Job is a Pig job.
           "queryFileUri": "A String", # The HCFS URI of the script that contains the Pig queries.
           "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
             "a_key": "A String",
@@ -3442,14 +4158,14 @@
             ],
           },
           "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
-          "properties": { # Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
+          "properties": { # Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
             "a_key": "A String",
           },
         },
         "labels": { # Optional. The labels to associate with this job.Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given job.
           "a_key": "A String",
         },
-        "sparkJob": { # A Cloud Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Job is a Spark job.
+        "sparkJob": { # A Dataproc job for running Apache Spark (http://spark.apache.org/) applications on YARN. # Optional. Job is a Spark job.
           "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
             "A String",
           ],
@@ -3469,16 +4185,46 @@
             "A String",
           ],
           "mainJarFileUri": "A String", # The HCFS URI of the jar file that contains the main class.
-          "properties": { # Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+          "properties": { # Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+            "a_key": "A String",
+          },
+        },
+        "prestoJob": { # A Dataproc job for running Presto (https://prestosql.io/) queries. IMPORTANT: The Dataproc Presto Optional Component (https://cloud.google.com/dataproc/docs/concepts/components/presto) must be enabled when the cluster is created to submit a Presto job to the cluster. # Optional. Job is a Presto job.
+          "queryFileUri": "A String", # The HCFS URI of the script that contains SQL queries.
+          "outputFormat": "A String", # Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
+          "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
+            "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples:  'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
+              "a_key": "A String",
+            },
+          },
+          "clientTags": [ # Optional. Presto client tags to attach to this query
+            "A String",
+          ],
+          "queryList": { # A list of queries to run on a cluster. # A list of queries.
+            "queries": [ # Required. The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of an Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob:
+                # "hiveJob": {
+                #   "queryList": {
+                #     "queries": [
+                #       "query1",
+                #       "query2",
+                #       "query3;query4",
+                #     ]
+                #   }
+                # }
+              "A String",
+            ],
+          },
+          "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
+          "properties": { # Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html) Equivalent to using the --session flag in the Presto CLI
             "a_key": "A String",
           },
         },
         "scheduling": { # Job scheduling options. # Optional. Job scheduling configuration.
           "maxFailuresPerHour": 42, # Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
         },
-        "pysparkJob": { # A Cloud Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. # Job is a Pyspark job.
+        "pysparkJob": { # A Dataproc job for running Apache PySpark (https://spark.apache.org/docs/0.9.0/python-programming-guide.html) applications on YARN. # Optional. Job is a PySpark job.
           "mainPythonFileUri": "A String", # Required. The HCFS URI of the main Python file to use as the driver. Must be a .py file.
-          "jarFileUris": [ # Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
+          "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
             "A String",
           ],
           "loggingConfig": { # The runtime logging config of the job. # Optional. The runtime log config for job execution.
@@ -3486,7 +4232,7 @@
               "a_key": "A String",
             },
           },
-          "args": [ # Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
+          "jarFileUris": [ # Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
             "A String",
           ],
           "fileUris": [ # Optional. HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks.
@@ -3498,11 +4244,11 @@
           "pythonFileUris": [ # Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
             "A String",
           ],
-          "properties": { # Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
+          "properties": { # Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
             "a_key": "A String",
           },
         },
-        "hiveJob": { # A Cloud Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. # Job is a Hive job.
+        "hiveJob": { # A Dataproc job for running Apache Hive (https://hive.apache.org/) queries on YARN. # Optional. Job is a Hive job.
           "queryFileUri": "A String", # The HCFS URI of the script that contains Hive queries.
           "scriptVariables": { # Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
             "a_key": "A String",
@@ -3525,13 +4271,13 @@
             ],
           },
           "continueOnFailure": True or False, # Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
-          "properties": { # Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
+          "properties": { # Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
             "a_key": "A String",
           },
         },
       },
     ],
-    "id": "A String", # Required. The template id.The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
+    "id": "A String",
   }</pre>
 </div>