docs: docs update (#911)

Thank you for opening a Pull Request! Before submitting your PR, there are a few things you can do to make sure it goes smoothly:
- [ ] Make sure to open an issue as a [bug/issue](https://github.com/googleapis/google-api-python-client/issues/new/choose) before writing your code! That way we can discuss the change, evaluate designs, and agree on the general idea.
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)

Fixes #<issue_number_goes_here> 🦕
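
For context, the regenerated reference below covers `create`, `get`, `list`, and `list_next` on `projects.locations.jobs`. The following is a minimal sketch (not part of this change) of how those methods are called through the discovery-based client; the project ID, region, job ID, and filter value are placeholders, and application default credentials are assumed.

```python
# Sketch only: illustrates the method signatures documented in the diff below.
# Project, region, job ID, and filter values are placeholders.
from googleapiclient.discovery import build

# Uses application default credentials resolved by google-auth.
dataflow = build("dataflow", "v1b3")
jobs = dataflow.projects().locations().jobs()

# List active jobs in a region, following pagination with list_next().
request = jobs.list(
    projectId="my-project", location="us-central1", filter="ACTIVE", pageSize=50
)
while request is not None:
    response = request.execute()
    for job in response.get("jobs", []):
        print(job["id"], job.get("currentState"))
    request = jobs.list_next(previous_request=request, previous_response=response)

# Fetch a single job with the SUMMARY view.
job = jobs.get(
    projectId="my-project",
    location="us-central1",
    jobId="2020-01-01_00_00_00-1234567890",
    view="JOB_VIEW_SUMMARY",
).execute()
```

Note that only the keyword-argument order in the generated signatures changed in this update; call sites like the sketch above are unaffected because the client is invoked with keyword arguments.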
diff --git a/docs/dyn/dataflow_v1b3.projects.locations.jobs.html b/docs/dyn/dataflow_v1b3.projects.locations.jobs.html
index 1ddadd8..44292d9 100644
--- a/docs/dyn/dataflow_v1b3.projects.locations.jobs.html
+++ b/docs/dyn/dataflow_v1b3.projects.locations.jobs.html
@@ -95,16 +95,16 @@
 <p class="firstline">Returns the workItems Resource.</p>
 
 <p class="toc_element">
-  <code><a href="#create">create(projectId, location, body=None, x__xgafv=None, replaceJobId=None, view=None)</a></code></p>
+  <code><a href="#create">create(projectId, location, body=None, view=None, replaceJobId=None, x__xgafv=None)</a></code></p>
 <p class="firstline">Creates a Cloud Dataflow job.</p>
 <p class="toc_element">
-  <code><a href="#get">get(projectId, location, jobId, x__xgafv=None, view=None)</a></code></p>
+  <code><a href="#get">get(projectId, location, jobId, view=None, x__xgafv=None)</a></code></p>
 <p class="firstline">Gets the state of the specified Cloud Dataflow job.</p>
 <p class="toc_element">
   <code><a href="#getMetrics">getMetrics(projectId, location, jobId, startTime=None, x__xgafv=None)</a></code></p>
 <p class="firstline">Request the job status.</p>
 <p class="toc_element">
-  <code><a href="#list">list(projectId, location, pageSize=None, pageToken=None, x__xgafv=None, filter=None, view=None)</a></code></p>
+  <code><a href="#list">list(projectId, location, filter=None, pageToken=None, pageSize=None, view=None, x__xgafv=None)</a></code></p>
 <p class="firstline">List the jobs of a project.</p>
 <p class="toc_element">
   <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
@@ -117,7 +117,7 @@
 <p class="firstline">Updates the state of an existing Cloud Dataflow job.</p>
 <h3>Method Details</h3>
 <div class="method">
-    <code class="details" id="create">create(projectId, location, body=None, x__xgafv=None, replaceJobId=None, view=None)</code>
+    <code class="details" id="create">create(projectId, location, body=None, view=None, replaceJobId=None, x__xgafv=None)</code>
   <pre>Creates a Cloud Dataflow job.
 
 To create a job, we recommend using `projects.locations.jobs.create` with a
@@ -135,382 +135,71 @@
     The object takes the form of:
 
 { # Defines a job to be run by the Cloud Dataflow service.
-  "labels": { # User-defined labels for this job.
-      # 
-      # The labels map can contain no more than 64 entries.  Entries of the labels
-      # map are UTF8 strings that comply with the following restrictions:
-      # 
-      # * Keys must conform to regexp:  \p{Ll}\p{Lo}{0,62}
-      # * Values must conform to regexp:  [\p{Ll}\p{Lo}\p{N}_-]{0,63}
-      # * Both keys and values are additionally constrained to be &lt;= 128 bytes in
-      # size.
-    "a_key": "A String",
-  },
-  "jobMetadata": { # Metadata available primarily for filtering jobs. Will be included in the # This field is populated by the Dataflow service to support filtering jobs
-      # by the metadata values provided here. Populated for ListJobs and all GetJob
-      # views SUMMARY and higher.
-      # ListJob response and Job SUMMARY view.
-    "sdkVersion": { # The version of the SDK used to run the job. # The SDK version used to run the job.
-      "versionDisplayName": "A String", # A readable string describing the version of the SDK.
-      "version": "A String", # The version of the SDK used to run the job.
-      "sdkSupportStatus": "A String", # The support status for this SDK version.
-    },
-    "pubsubDetails": [ # Identification of a PubSub source used in the Dataflow job.
-      { # Metadata for a PubSub connector used by the job.
-        "topic": "A String", # Topic accessed in the connection.
-        "subscription": "A String", # Subscription used in the connection.
-      },
-    ],
-    "datastoreDetails": [ # Identification of a Datastore source used in the Dataflow job.
-      { # Metadata for a Datastore connector used by the job.
-        "projectId": "A String", # ProjectId accessed in the connection.
-        "namespace": "A String", # Namespace used in the connection.
-      },
-    ],
-    "fileDetails": [ # Identification of a File source used in the Dataflow job.
-      { # Metadata for a File connector used by the job.
-        "filePattern": "A String", # File Pattern used to access files by the connector.
-      },
-    ],
-    "spannerDetails": [ # Identification of a Spanner source used in the Dataflow job.
-      { # Metadata for a Spanner connector used by the job.
-        "instanceId": "A String", # InstanceId accessed in the connection.
-        "projectId": "A String", # ProjectId accessed in the connection.
-        "databaseId": "A String", # DatabaseId accessed in the connection.
-      },
-    ],
-    "bigTableDetails": [ # Identification of a BigTable source used in the Dataflow job.
-      { # Metadata for a BigTable connector used by the job.
-        "instanceId": "A String", # InstanceId accessed in the connection.
-        "projectId": "A String", # ProjectId accessed in the connection.
-        "tableId": "A String", # TableId accessed in the connection.
-      },
-    ],
-    "bigqueryDetails": [ # Identification of a BigQuery source used in the Dataflow job.
-      { # Metadata for a BigQuery connector used by the job.
-        "projectId": "A String", # Project accessed in the connection.
-        "query": "A String", # Query used to access data in the connection.
-        "table": "A String", # Table accessed in the connection.
-        "dataset": "A String", # Dataset accessed in the connection.
-      },
-    ],
-  },
-  "pipelineDescription": { # A descriptive representation of submitted pipeline as well as the executed # Preliminary field: The format of this data may change at any time.
-      # A description of the user pipeline and stages through which it is executed.
-      # Created by Cloud Dataflow service.  Only retrieved with
-      # JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
-      # form.  This data is provided by the Dataflow service for ease of visualizing
-      # the pipeline and interpreting Dataflow provided metrics.
-    "originalPipelineTransform": [ # Description of each transform in the pipeline and collections between them.
-      { # Description of the type, names/ids, and input/outputs for a transform.
-        "kind": "A String", # Type of transform.
-        "name": "A String", # User provided name for this transform instance.
-        "inputCollectionName": [ # User names for all collection inputs to this transform.
-          "A String",
-        ],
-        "displayData": [ # Transform-specific display data.
-          { # Data provided with a pipeline or transform to provide descriptive info.
-            "key": "A String", # The key identifying the display data.
-                # This is intended to be used as a label for the display data
-                # when viewed in a dax monitoring system.
-            "shortStrValue": "A String", # A possible additional shorter value to display.
-                # For example a java_class_name_value of com.mypackage.MyDoFn
-                # will be stored with MyDoFn as the short_str_value and
-                # com.mypackage.MyDoFn as the java_class_name value.
-                # short_str_value can be displayed and java_class_name_value
-                # will be displayed as a tooltip.
-            "timestampValue": "A String", # Contains value if the data is of timestamp type.
-            "url": "A String", # An optional full URL.
-            "floatValue": 3.14, # Contains value if the data is of float type.
-            "namespace": "A String", # The namespace for the key. This is usually a class name or programming
-                # language namespace (i.e. python module) which defines the display data.
-                # This allows a dax monitoring system to specially handle the data
-                # and perform custom rendering.
-            "javaClassValue": "A String", # Contains value if the data is of java class type.
-            "label": "A String", # An optional label to display in a dax UI for the element.
-            "boolValue": True or False, # Contains value if the data is of a boolean type.
-            "strValue": "A String", # Contains value if the data is of string type.
-            "durationValue": "A String", # Contains value if the data is of duration type.
-            "int64Value": "A String", # Contains value if the data is of int64 type.
-          },
-        ],
-        "outputCollectionName": [ # User  names for all collection outputs to this transform.
-          "A String",
-        ],
-        "id": "A String", # SDK generated id of this transform instance.
-      },
-    ],
-    "executionPipelineStage": [ # Description of each stage of execution of the pipeline.
-      { # Description of the composing transforms, names/ids, and input/outputs of a
-          # stage of execution.  Some composing transforms and sources may have been
-          # generated by the Dataflow service during execution planning.
-        "componentSource": [ # Collections produced and consumed by component transforms of this stage.
-          { # Description of an interstitial value between transforms in an execution
-              # stage.
-            "userName": "A String", # Human-readable name for this transform; may be user or system generated.
-            "originalTransformOrCollection": "A String", # User name for the original user transform or collection with which this
-                # source is most closely associated.
-            "name": "A String", # Dataflow service generated name for this source.
-          },
-        ],
-        "kind": "A String", # Type of tranform this stage is executing.
-        "name": "A String", # Dataflow service generated name for this stage.
-        "outputSource": [ # Output sources for this stage.
-          { # Description of an input or output of an execution stage.
-            "userName": "A String", # Human-readable name for this source; may be user or system generated.
-            "sizeBytes": "A String", # Size of the source, if measurable.
-            "name": "A String", # Dataflow service generated name for this source.
-            "originalTransformOrCollection": "A String", # User name for the original user transform or collection with which this
-                # source is most closely associated.
-          },
-        ],
-        "inputSource": [ # Input sources for this stage.
-          { # Description of an input or output of an execution stage.
-            "userName": "A String", # Human-readable name for this source; may be user or system generated.
-            "sizeBytes": "A String", # Size of the source, if measurable.
-            "name": "A String", # Dataflow service generated name for this source.
-            "originalTransformOrCollection": "A String", # User name for the original user transform or collection with which this
-                # source is most closely associated.
-          },
-        ],
-        "componentTransform": [ # Transforms that comprise this execution stage.
-          { # Description of a transform executed as part of an execution stage.
-            "userName": "A String", # Human-readable name for this transform; may be user or system generated.
-            "originalTransform": "A String", # User name for the original user transform with which this transform is
-                # most closely associated.
-            "name": "A String", # Dataflow service generated name for this source.
-          },
-        ],
-        "id": "A String", # Dataflow service generated id for this stage.
-      },
-    ],
-    "displayData": [ # Pipeline level display data.
-      { # Data provided with a pipeline or transform to provide descriptive info.
-        "key": "A String", # The key identifying the display data.
-            # This is intended to be used as a label for the display data
-            # when viewed in a dax monitoring system.
-        "shortStrValue": "A String", # A possible additional shorter value to display.
-            # For example a java_class_name_value of com.mypackage.MyDoFn
-            # will be stored with MyDoFn as the short_str_value and
-            # com.mypackage.MyDoFn as the java_class_name value.
-            # short_str_value can be displayed and java_class_name_value
-            # will be displayed as a tooltip.
-        "timestampValue": "A String", # Contains value if the data is of timestamp type.
-        "url": "A String", # An optional full URL.
-        "floatValue": 3.14, # Contains value if the data is of float type.
-        "namespace": "A String", # The namespace for the key. This is usually a class name or programming
-            # language namespace (i.e. python module) which defines the display data.
-            # This allows a dax monitoring system to specially handle the data
-            # and perform custom rendering.
-        "javaClassValue": "A String", # Contains value if the data is of java class type.
-        "label": "A String", # An optional label to display in a dax UI for the element.
-        "boolValue": True or False, # Contains value if the data is of a boolean type.
-        "strValue": "A String", # Contains value if the data is of string type.
-        "durationValue": "A String", # Contains value if the data is of duration type.
-        "int64Value": "A String", # Contains value if the data is of int64 type.
-      },
-    ],
-  },
-  "stageStates": [ # This field may be mutated by the Cloud Dataflow service;
-      # callers cannot mutate it.
-    { # A message describing the state of a particular execution stage.
-      "executionStageName": "A String", # The name of the execution stage.
-      "executionStageState": "A String", # Executions stage states allow the same set of values as JobState.
-      "currentStateTime": "A String", # The time at which the stage transitioned to this state.
-    },
-  ],
-  "id": "A String", # The unique ID of this job.
+  &quot;clientRequestId&quot;: &quot;A String&quot;, # The client&#x27;s unique identifier of the job, re-used across retried attempts.
+      # If this field is set, the service will ensure its uniqueness.
+      # The request to create a job will fail if the service has knowledge of a
+      # previously submitted job with the same client&#x27;s ID and job name.
+      # The caller may use this field to ensure idempotence of job
+      # creation across retried attempts to create a job.
+      # By default, the field is empty and, in that case, the service ignores it.
+  &quot;id&quot;: &quot;A String&quot;, # The unique ID of this job.
       # 
       # This field is set by the Cloud Dataflow service when the Job is
       # created, and is immutable for the life of the job.
-  "replacedByJobId": "A String", # If another job is an update of this job (and thus, this job is in
-      # `JOB_STATE_UPDATED`), this field contains the ID of that job.
-  "projectId": "A String", # The ID of the Cloud Platform project that the job belongs to.
-  "transformNameMapping": { # The map of transform name prefixes of the job to be replaced to the
+  &quot;currentStateTime&quot;: &quot;A String&quot;, # The timestamp associated with the current state.
+  &quot;transformNameMapping&quot;: { # The map of transform name prefixes of the job to be replaced to the
       # corresponding name prefixes of the new job.
-    "a_key": "A String",
+    &quot;a_key&quot;: &quot;A String&quot;,
   },
-  "environment": { # Describes the environment in which a Dataflow Job runs. # The environment for the job.
-    "workerRegion": "A String", # The Compute Engine region
-        # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
-        # which worker processing should occur, e.g. "us-west1". Mutually exclusive
-        # with worker_zone. If neither worker_region nor worker_zone is specified,
-        # default to the control plane's region.
-    "version": { # A structure describing which components and their versions of the service
-        # are required in order to run the job.
-      "a_key": "", # Properties of the object.
-    },
-    "flexResourceSchedulingGoal": "A String", # Which Flexible Resource Scheduling mode to run in.
-    "serviceKmsKeyName": "A String", # If set, contains the Cloud KMS key identifier used to encrypt data
-        # at rest, AKA a Customer Managed Encryption Key (CMEK).
-        #
-        # Format:
-        #   projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
-    "internalExperiments": { # Experimental settings.
-      "a_key": "", # Properties of the object. Contains field @type with type URL.
-    },
-    "dataset": "A String", # The dataset for the current project where various workflow
-        # related tables are stored.
-        #
-        # The supported resource type is:
-        #
-        # Google BigQuery:
-        #   bigquery.googleapis.com/{dataset}
-    "experiments": [ # The list of experiments to enable.
-      "A String",
-    ],
-    "serviceAccountEmail": "A String", # Identity to run virtual machines as. Defaults to the default account.
-    "sdkPipelineOptions": { # The Cloud Dataflow SDK pipeline options specified by the user. These
+  &quot;environment&quot;: { # Describes the environment in which a Dataflow Job runs. # The environment for the job.
+    &quot;sdkPipelineOptions&quot;: { # The Cloud Dataflow SDK pipeline options specified by the user. These
         # options are passed through the service and are used to recreate the
         # SDK pipeline options on the worker in a language agnostic and platform
         # independent way.
-      "a_key": "", # Properties of the object.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
     },
-    "userAgent": { # A description of the process that generated the request.
-      "a_key": "", # Properties of the object.
-    },
-    "workerZone": "A String", # The Compute Engine zone
-        # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
-        # which worker processing should occur, e.g. "us-west1-a". Mutually exclusive
-        # with worker_region. If neither worker_region nor worker_zone is specified,
-        # a zone in the control plane's region is chosen based on available capacity.
-    "workerPools": [ # The worker pools. At least one "harness" worker pool must be
+    &quot;flexResourceSchedulingGoal&quot;: &quot;A String&quot;, # Which Flexible Resource Scheduling mode to run in.
+    &quot;workerPools&quot;: [ # The worker pools. At least one &quot;harness&quot; worker pool must be
         # specified in order for the job to have workers.
       { # Describes one particular pool of Cloud Dataflow workers to be
           # instantiated by the Cloud Dataflow service in order to perform the
           # computations required by a job.  Note that a workflow job may use
           # multiple pools, in order to match the various computational
           # requirements of the various stages of the job.
-        "workerHarnessContainerImage": "A String", # Required. Docker container image that executes the Cloud Dataflow worker
-            # harness, residing in Google Container Registry.
-            #
-            # Deprecated for the Fn API path. Use sdk_harness_container_images instead.
-        "ipConfiguration": "A String", # Configuration for VM IPs.
-        "autoscalingSettings": { # Settings for WorkerPool autoscaling. # Settings for autoscaling of this WorkerPool.
-          "maxNumWorkers": 42, # The maximum number of workers to cap scaling at.
-          "algorithm": "A String", # The algorithm to use for autoscaling.
-        },
-        "diskSourceImage": "A String", # Fully qualified source image for disks.
-        "network": "A String", # Network to which VMs will be assigned.  If empty or unspecified,
-            # the service will use the network "default".
-        "zone": "A String", # Zone to run the worker pools in.  If empty or unspecified, the service
+        &quot;defaultPackageSet&quot;: &quot;A String&quot;, # The default package set to install.  This allows the service to
+            # select a default set of packages which are useful to worker
+            # harnesses written in a particular language.
+        &quot;network&quot;: &quot;A String&quot;, # Network to which VMs will be assigned.  If empty or unspecified,
+            # the service will use the network &quot;default&quot;.
+        &quot;zone&quot;: &quot;A String&quot;, # Zone to run the worker pools in.  If empty or unspecified, the service
             # will attempt to choose a reasonable default.
-        "metadata": { # Metadata to set on the Google Compute Engine VMs.
-          "a_key": "A String",
-        },
-        "machineType": "A String", # Machine type (e.g. "n1-standard-1").  If empty or unspecified, the
-            # service will attempt to choose a reasonable default.
-        "onHostMaintenance": "A String", # The action to take on host maintenance, as defined by the Google
-            # Compute Engine API.
-        "taskrunnerSettings": { # Taskrunner configuration settings. # Settings passed through to Google Compute Engine workers when
-            # using the standard Dataflow task runner.  Users should ignore
-            # this field.
-          "workflowFileName": "A String", # The file to store the workflow in.
-          "logUploadLocation": "A String", # Indicates where to put logs.  If this is not specified, the logs
-              # will not be uploaded.
-              #
-              # The supported resource type is:
-              #
-              # Google Cloud Storage:
-              #   storage.googleapis.com/{bucket}/{object}
-              #   bucket.storage.googleapis.com/{object}
-          "commandlinesFileName": "A String", # The file to store preprocessing commands in.
-          "alsologtostderr": True or False, # Whether to also send taskrunner log info to stderr.
-          "continueOnException": True or False, # Whether to continue taskrunner if an exception is hit.
-          "baseTaskDir": "A String", # The location on the worker for task-specific subdirectories.
-          "vmId": "A String", # The ID string of the VM.
-          "taskGroup": "A String", # The UNIX group ID on the worker VM to use for tasks launched by
-              # taskrunner; e.g. "wheel".
-          "taskUser": "A String", # The UNIX user ID on the worker VM to use for tasks launched by
-              # taskrunner; e.g. "root".
-          "oauthScopes": [ # The OAuth2 scopes to be requested by the taskrunner in order to
-              # access the Cloud Dataflow API.
-            "A String",
-          ],
-          "languageHint": "A String", # The suggested backend language.
-          "logToSerialconsole": True or False, # Whether to send taskrunner log info to Google Compute Engine VM serial
-              # console.
-          "streamingWorkerMainClass": "A String", # The streaming worker main class name.
-          "logDir": "A String", # The directory on the VM to store logs.
-          "parallelWorkerSettings": { # Provides data to pass through to the worker harness. # The settings to pass to the parallel worker harness.
-            "reportingEnabled": True or False, # Whether to send work progress updates to the service.
-            "shuffleServicePath": "A String", # The Shuffle service path relative to the root URL, for example,
-                # "shuffle/v1beta1".
-            "workerId": "A String", # The ID of the worker running this pipeline.
-            "baseUrl": "A String", # The base URL for accessing Google Cloud APIs.
-                #
-                # When workers access Google Cloud APIs, they logically do so via
-                # relative URLs.  If this field is specified, it supplies the base
-                # URL to use for resolving these relative URLs.  The normative
-                # algorithm used is defined by RFC 1808, "Relative Uniform Resource
-                # Locators".
-                #
-                # If not specified, the default value is "http://www.googleapis.com/"
-            "servicePath": "A String", # The Cloud Dataflow service path relative to the root URL, for example,
-                # "dataflow/v1b3/projects".
-            "tempStoragePrefix": "A String", # The prefix of the resources the system should use for temporary
-                # storage.
-                #
-                # The supported resource type is:
-                #
-                # Google Cloud Storage:
-                #
-                #   storage.googleapis.com/{bucket}/{object}
-                #   bucket.storage.googleapis.com/{object}
-          },
-          "dataflowApiVersion": "A String", # The API version of endpoint, e.g. "v1b3"
-          "harnessCommand": "A String", # The command to launch the worker harness.
-          "tempStoragePrefix": "A String", # The prefix of the resources the taskrunner should use for
-              # temporary storage.
-              #
-              # The supported resource type is:
-              #
-              # Google Cloud Storage:
-              #   storage.googleapis.com/{bucket}/{object}
-              #   bucket.storage.googleapis.com/{object}
-          "baseUrl": "A String", # The base URL for the taskrunner to use when accessing Google Cloud APIs.
-              #
-              # When workers access Google Cloud APIs, they logically do so via
-              # relative URLs.  If this field is specified, it supplies the base
-              # URL to use for resolving these relative URLs.  The normative
-              # algorithm used is defined by RFC 1808, "Relative Uniform Resource
-              # Locators".
-              #
-              # If not specified, the default value is "http://www.googleapis.com/"
-        },
-        "numThreadsPerWorker": 42, # The number of threads per worker harness. If empty or unspecified, the
+        &quot;numWorkers&quot;: 42, # Number of Google Compute Engine workers in this pool needed to
+            # execute the job.  If zero or unspecified, the service will
+            # attempt to choose a reasonable default.
+        &quot;numThreadsPerWorker&quot;: 42, # The number of threads per worker harness. If empty or unspecified, the
             # service will choose a number of threads (according to the number of cores
             # on the selected machine type for batch, or 1 by convention for streaming).
-        "poolArgs": { # Extra arguments for this worker pool.
-          "a_key": "", # Properties of the object. Contains field @type with type URL.
-        },
-        "packages": [ # Packages to be installed on workers.
+        &quot;diskSourceImage&quot;: &quot;A String&quot;, # Fully qualified source image for disks.
+        &quot;packages&quot;: [ # Packages to be installed on workers.
           { # The packages that must be installed in order for a worker to run the
               # steps of the Cloud Dataflow job that will be assigned to its worker
               # pool.
               #
               # This is the mechanism by which the Cloud Dataflow SDK causes code to
               # be loaded onto the workers. For example, the Cloud Dataflow Java SDK
-              # might use this to install jars containing the user's code and all of the
+              # might use this to install jars containing the user&#x27;s code and all of the
               # various dependencies (libraries, data files, etc.) required in order
               # for that code to run.
-            "location": "A String", # The resource to read the package from. The supported resource type is:
+            &quot;location&quot;: &quot;A String&quot;, # The resource to read the package from. The supported resource type is:
                 #
                 # Google Cloud Storage:
                 #
                 #   storage.googleapis.com/{bucket}
                 #   bucket.storage.googleapis.com/
-            "name": "A String", # The name of the package.
+            &quot;name&quot;: &quot;A String&quot;, # The name of the package.
           },
         ],
-        "defaultPackageSet": "A String", # The default package set to install.  This allows the service to
-            # select a default set of packages which are useful to worker
-            # harnesses written in a particular language.
-        "kind": "A String", # The kind of the worker pool; currently only `harness` and `shuffle`
-            # are supported.
-        "diskType": "A String", # Type of root disk for VMs.  If empty or unspecified, the service will
-            # attempt to choose a reasonable default.
-        "teardownPolicy": "A String", # Sets the policy for determining when to turndown worker pool.
+        &quot;teardownPolicy&quot;: &quot;A String&quot;, # Sets the policy for determining when to turndown worker pool.
             # Allowed values are: `TEARDOWN_ALWAYS`, `TEARDOWN_ON_SUCCESS`, and
             # `TEARDOWN_NEVER`.
             # `TEARDOWN_ALWAYS` means workers are always torn down regardless of whether
@@ -520,32 +209,41 @@
             #
             # If the workers are not torn down by the service, they will
             # continue to run and use Google Compute Engine VM resources in the
-            # user's project until they are explicitly terminated by the user.
+            # user&#x27;s project until they are explicitly terminated by the user.
             # Because of this, Google recommends using the `TEARDOWN_ALWAYS`
             # policy except for small, manually supervised test jobs.
             #
             # If unknown or unspecified, the service will attempt to choose a reasonable
             # default.
-        "diskSizeGb": 42, # Size of root disk for VMs, in GB.  If zero or unspecified, the service will
+        &quot;onHostMaintenance&quot;: &quot;A String&quot;, # The action to take on host maintenance, as defined by the Google
+            # Compute Engine API.
+        &quot;poolArgs&quot;: { # Extra arguments for this worker pool.
+          &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+        },
+        &quot;diskSizeGb&quot;: 42, # Size of root disk for VMs, in GB.  If zero or unspecified, the service will
             # attempt to choose a reasonable default.
-        "numWorkers": 42, # Number of Google Compute Engine workers in this pool needed to
-            # execute the job.  If zero or unspecified, the service will
+        &quot;workerHarnessContainerImage&quot;: &quot;A String&quot;, # Required. Docker container image that executes the Cloud Dataflow worker
+            # harness, residing in Google Container Registry.
+            #
+            # Deprecated for the Fn API path. Use sdk_harness_container_images instead.
+        &quot;diskType&quot;: &quot;A String&quot;, # Type of root disk for VMs.  If empty or unspecified, the service will
             # attempt to choose a reasonable default.
-        "subnetwork": "A String", # Subnetwork to which VMs will be assigned, if desired.  Expected to be of
-            # the form "regions/REGION/subnetworks/SUBNETWORK".
-        "dataDisks": [ # Data disks that are used by a VM in this workflow.
+        &quot;machineType&quot;: &quot;A String&quot;, # Machine type (e.g. &quot;n1-standard-1&quot;).  If empty or unspecified, the
+            # service will attempt to choose a reasonable default.
+        &quot;kind&quot;: &quot;A String&quot;, # The kind of the worker pool; currently only `harness` and `shuffle`
+            # are supported.
+        &quot;dataDisks&quot;: [ # Data disks that are used by a VM in this workflow.
           { # Describes the data disk used by a workflow job.
-            "mountPoint": "A String", # Directory in a VM where disk is mounted.
-            "sizeGb": 42, # Size of disk in GB.  If zero or unspecified, the service will
+            &quot;sizeGb&quot;: 42, # Size of disk in GB.  If zero or unspecified, the service will
                 # attempt to choose a reasonable default.
-            "diskType": "A String", # Disk storage type, as defined by Google Compute Engine.  This
+            &quot;diskType&quot;: &quot;A String&quot;, # Disk storage type, as defined by Google Compute Engine.  This
                 # must be a disk type appropriate to the project and zone in which
                 # the workers will run.  If unknown or unspecified, the service
                 # will attempt to choose a reasonable default.
                 #
                 # For example, the standard persistent disk type is a resource name
-                # typically ending in "pd-standard".  If SSD persistent disks are
-                # available, the resource name typically ends with "pd-ssd".  The
+                # typically ending in &quot;pd-standard&quot;.  If SSD persistent disks are
+                # available, the resource name typically ends with &quot;pd-ssd&quot;.  The
                 # actual valid values are defined the Google Compute Engine API,
                 # not by the Cloud Dataflow API; consult the Google Compute Engine
                 # documentation for more information about determining the set of
@@ -556,29 +254,144 @@
                 # typically look something like this:
                 #
                 # compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
+            &quot;mountPoint&quot;: &quot;A String&quot;, # Directory in a VM where disk is mounted.
           },
         ],
-        "sdkHarnessContainerImages": [ # Set of SDK harness containers needed to execute this pipeline. This will
+        &quot;sdkHarnessContainerImages&quot;: [ # Set of SDK harness containers needed to execute this pipeline. This will
             # only be set in the Fn API path. For non-cross-language pipelines this
             # should have only one entry. Cross-language pipelines will have two or more
             # entries.
           { # Defines a SDK harness container for executing Dataflow pipelines.
-            "containerImage": "A String", # A docker container image that resides in Google Container Registry.
-            "useSingleCorePerContainer": True or False, # If true, recommends the Dataflow service to use only one core per SDK
+            &quot;containerImage&quot;: &quot;A String&quot;, # A docker container image that resides in Google Container Registry.
+            &quot;useSingleCorePerContainer&quot;: True or False, # If true, recommends the Dataflow service to use only one core per SDK
                 # container instance with this image. If false (or unset) recommends using
                 # more than one core per SDK container instance with this image for
                 # efficiency. Note that Dataflow service may choose to override this property
                 # if needed.
           },
         ],
+        &quot;subnetwork&quot;: &quot;A String&quot;, # Subnetwork to which VMs will be assigned, if desired.  Expected to be of
+            # the form &quot;regions/REGION/subnetworks/SUBNETWORK&quot;.
+        &quot;ipConfiguration&quot;: &quot;A String&quot;, # Configuration for VM IPs.
+        &quot;taskrunnerSettings&quot;: { # Taskrunner configuration settings. # Settings passed through to Google Compute Engine workers when
+            # using the standard Dataflow task runner.  Users should ignore
+            # this field.
+          &quot;alsologtostderr&quot;: True or False, # Whether to also send taskrunner log info to stderr.
+          &quot;taskGroup&quot;: &quot;A String&quot;, # The UNIX group ID on the worker VM to use for tasks launched by
+              # taskrunner; e.g. &quot;wheel&quot;.
+          &quot;harnessCommand&quot;: &quot;A String&quot;, # The command to launch the worker harness.
+          &quot;logDir&quot;: &quot;A String&quot;, # The directory on the VM to store logs.
+          &quot;oauthScopes&quot;: [ # The OAuth2 scopes to be requested by the taskrunner in order to
+              # access the Cloud Dataflow API.
+            &quot;A String&quot;,
+          ],
+          &quot;dataflowApiVersion&quot;: &quot;A String&quot;, # The API version of endpoint, e.g. &quot;v1b3&quot;
+          &quot;logUploadLocation&quot;: &quot;A String&quot;, # Indicates where to put logs.  If this is not specified, the logs
+              # will not be uploaded.
+              #
+              # The supported resource type is:
+              #
+              # Google Cloud Storage:
+              #   storage.googleapis.com/{bucket}/{object}
+              #   bucket.storage.googleapis.com/{object}
+          &quot;streamingWorkerMainClass&quot;: &quot;A String&quot;, # The streaming worker main class name.
+          &quot;workflowFileName&quot;: &quot;A String&quot;, # The file to store the workflow in.
+          &quot;baseTaskDir&quot;: &quot;A String&quot;, # The location on the worker for task-specific subdirectories.
+          &quot;tempStoragePrefix&quot;: &quot;A String&quot;, # The prefix of the resources the taskrunner should use for
+              # temporary storage.
+              #
+              # The supported resource type is:
+              #
+              # Google Cloud Storage:
+              #   storage.googleapis.com/{bucket}/{object}
+              #   bucket.storage.googleapis.com/{object}
+          &quot;commandlinesFileName&quot;: &quot;A String&quot;, # The file to store preprocessing commands in.
+          &quot;languageHint&quot;: &quot;A String&quot;, # The suggested backend language.
+          &quot;baseUrl&quot;: &quot;A String&quot;, # The base URL for the taskrunner to use when accessing Google Cloud APIs.
+              #
+              # When workers access Google Cloud APIs, they logically do so via
+              # relative URLs.  If this field is specified, it supplies the base
+              # URL to use for resolving these relative URLs.  The normative
+              # algorithm used is defined by RFC 1808, &quot;Relative Uniform Resource
+              # Locators&quot;.
+              #
+              # If not specified, the default value is &quot;http://www.googleapis.com/&quot;
+          &quot;logToSerialconsole&quot;: True or False, # Whether to send taskrunner log info to Google Compute Engine VM serial
+              # console.
+          &quot;continueOnException&quot;: True or False, # Whether to continue taskrunner if an exception is hit.
+          &quot;parallelWorkerSettings&quot;: { # Provides data to pass through to the worker harness. # The settings to pass to the parallel worker harness.
+            &quot;baseUrl&quot;: &quot;A String&quot;, # The base URL for accessing Google Cloud APIs.
+                #
+                # When workers access Google Cloud APIs, they logically do so via
+                # relative URLs.  If this field is specified, it supplies the base
+                # URL to use for resolving these relative URLs.  The normative
+                # algorithm used is defined by RFC 1808, &quot;Relative Uniform Resource
+                # Locators&quot;.
+                #
+                # If not specified, the default value is &quot;http://www.googleapis.com/&quot;
+            &quot;reportingEnabled&quot;: True or False, # Whether to send work progress updates to the service.
+            &quot;servicePath&quot;: &quot;A String&quot;, # The Cloud Dataflow service path relative to the root URL, for example,
+                # &quot;dataflow/v1b3/projects&quot;.
+            &quot;shuffleServicePath&quot;: &quot;A String&quot;, # The Shuffle service path relative to the root URL, for example,
+                # &quot;shuffle/v1beta1&quot;.
+            &quot;workerId&quot;: &quot;A String&quot;, # The ID of the worker running this pipeline.
+            &quot;tempStoragePrefix&quot;: &quot;A String&quot;, # The prefix of the resources the system should use for temporary
+                # storage.
+                #
+                # The supported resource type is:
+                #
+                # Google Cloud Storage:
+                #
+                #   storage.googleapis.com/{bucket}/{object}
+                #   bucket.storage.googleapis.com/{object}
+          },
+          &quot;vmId&quot;: &quot;A String&quot;, # The ID string of the VM.
+          &quot;taskUser&quot;: &quot;A String&quot;, # The UNIX user ID on the worker VM to use for tasks launched by
+              # taskrunner; e.g. &quot;root&quot;.
+        },
+        &quot;autoscalingSettings&quot;: { # Settings for WorkerPool autoscaling. # Settings for autoscaling of this WorkerPool.
+          &quot;maxNumWorkers&quot;: 42, # The maximum number of workers to cap scaling at.
+          &quot;algorithm&quot;: &quot;A String&quot;, # The algorithm to use for autoscaling.
+        },
+        &quot;metadata&quot;: { # Metadata to set on the Google Compute Engine VMs.
+          &quot;a_key&quot;: &quot;A String&quot;,
+        },
       },
     ],
-    "clusterManagerApiService": "A String", # The type of cluster manager API to use.  If unknown or
+    &quot;dataset&quot;: &quot;A String&quot;, # The dataset for the current project where various workflow
+        # related tables are stored.
+        #
+        # The supported resource type is:
+        #
+        # Google BigQuery:
+        #   bigquery.googleapis.com/{dataset}
+    &quot;internalExperiments&quot;: { # Experimental settings.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+    },
+    &quot;workerRegion&quot;: &quot;A String&quot;, # The Compute Engine region
+        # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
+        # which worker processing should occur, e.g. &quot;us-west1&quot;. Mutually exclusive
+        # with worker_zone. If neither worker_region nor worker_zone is specified,
+        # default to the control plane&#x27;s region.
+    &quot;serviceKmsKeyName&quot;: &quot;A String&quot;, # If set, contains the Cloud KMS key identifier used to encrypt data
+        # at rest, AKA a Customer Managed Encryption Key (CMEK).
+        #
+        # Format:
+        #   projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
+    &quot;userAgent&quot;: { # A description of the process that generated the request.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+    },
+    &quot;workerZone&quot;: &quot;A String&quot;, # The Compute Engine zone
+        # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
+        # which worker processing should occur, e.g. &quot;us-west1-a&quot;. Mutually exclusive
+        # with worker_region. If neither worker_region nor worker_zone is specified,
+        # a zone in the control plane&#x27;s region is chosen based on available capacity.
+    &quot;clusterManagerApiService&quot;: &quot;A String&quot;, # The type of cluster manager API to use.  If unknown or
         # unspecified, the service will attempt to choose a reasonable
         # default.  This should be in the form of the API service name,
-        # e.g. "compute.googleapis.com".
-    "tempStoragePrefix": "A String", # The prefix of the resources the system should use for temporary
-        # storage.  The system will append the suffix "/temp-{JOBNAME} to
+        # e.g. &quot;compute.googleapis.com&quot;.
+    &quot;tempStoragePrefix&quot;: &quot;A String&quot;, # The prefix of the resources the system should use for temporary
+        # storage.  The system will append the suffix &quot;/temp-{JOBNAME} to
         # this resource prefix, where {JOBNAME} is the value of the
         # job_name field.  The resulting bucket and object prefix is used
         # as the prefix of the resources used to store temporary data
@@ -590,11 +403,199 @@
         #
         #   storage.googleapis.com/{bucket}/{object}
         #   bucket.storage.googleapis.com/{object}
+    &quot;experiments&quot;: [ # The list of experiments to enable.
+      &quot;A String&quot;,
+    ],
+    &quot;version&quot;: { # A structure describing which components and their versions of the service
+        # are required in order to run the job.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+    },
+    &quot;serviceAccountEmail&quot;: &quot;A String&quot;, # Identity to run virtual machines as. Defaults to the default account.
   },
-  "location": "A String", # The [regional endpoint]
-      # (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that
-      # contains this job.
-  "tempFiles": [ # A set of files the system should be aware of that are used
+  &quot;stageStates&quot;: [ # This field may be mutated by the Cloud Dataflow service;
+      # callers cannot mutate it.
+    { # A message describing the state of a particular execution stage.
+      &quot;executionStageName&quot;: &quot;A String&quot;, # The name of the execution stage.
+      &quot;currentStateTime&quot;: &quot;A String&quot;, # The time at which the stage transitioned to this state.
+      &quot;executionStageState&quot;: &quot;A String&quot;, # Executions stage states allow the same set of values as JobState.
+    },
+  ],
+  &quot;jobMetadata&quot;: { # Metadata available primarily for filtering jobs. Will be included in the # This field is populated by the Dataflow service to support filtering jobs
+      # by the metadata values provided here. Populated for ListJobs and all GetJob
+      # views SUMMARY and higher.
+      # ListJob response and Job SUMMARY view.
+    &quot;bigTableDetails&quot;: [ # Identification of a BigTable source used in the Dataflow job.
+      { # Metadata for a BigTable connector used by the job.
+        &quot;tableId&quot;: &quot;A String&quot;, # TableId accessed in the connection.
+        &quot;projectId&quot;: &quot;A String&quot;, # ProjectId accessed in the connection.
+        &quot;instanceId&quot;: &quot;A String&quot;, # InstanceId accessed in the connection.
+      },
+    ],
+    &quot;spannerDetails&quot;: [ # Identification of a Spanner source used in the Dataflow job.
+      { # Metadata for a Spanner connector used by the job.
+        &quot;databaseId&quot;: &quot;A String&quot;, # DatabaseId accessed in the connection.
+        &quot;instanceId&quot;: &quot;A String&quot;, # InstanceId accessed in the connection.
+        &quot;projectId&quot;: &quot;A String&quot;, # ProjectId accessed in the connection.
+      },
+    ],
+    &quot;datastoreDetails&quot;: [ # Identification of a Datastore source used in the Dataflow job.
+      { # Metadata for a Datastore connector used by the job.
+        &quot;projectId&quot;: &quot;A String&quot;, # ProjectId accessed in the connection.
+        &quot;namespace&quot;: &quot;A String&quot;, # Namespace used in the connection.
+      },
+    ],
+    &quot;sdkVersion&quot;: { # The version of the SDK used to run the job. # The SDK version used to run the job.
+      &quot;versionDisplayName&quot;: &quot;A String&quot;, # A readable string describing the version of the SDK.
+      &quot;sdkSupportStatus&quot;: &quot;A String&quot;, # The support status for this SDK version.
+      &quot;version&quot;: &quot;A String&quot;, # The version of the SDK used to run the job.
+    },
+    &quot;bigqueryDetails&quot;: [ # Identification of a BigQuery source used in the Dataflow job.
+      { # Metadata for a BigQuery connector used by the job.
+        &quot;table&quot;: &quot;A String&quot;, # Table accessed in the connection.
+        &quot;dataset&quot;: &quot;A String&quot;, # Dataset accessed in the connection.
+        &quot;projectId&quot;: &quot;A String&quot;, # Project accessed in the connection.
+        &quot;query&quot;: &quot;A String&quot;, # Query used to access data in the connection.
+      },
+    ],
+    &quot;fileDetails&quot;: [ # Identification of a File source used in the Dataflow job.
+      { # Metadata for a File connector used by the job.
+        &quot;filePattern&quot;: &quot;A String&quot;, # File Pattern used to access files by the connector.
+      },
+    ],
+    &quot;pubsubDetails&quot;: [ # Identification of a PubSub source used in the Dataflow job.
+      { # Metadata for a PubSub connector used by the job.
+        &quot;subscription&quot;: &quot;A String&quot;, # Subscription used in the connection.
+        &quot;topic&quot;: &quot;A String&quot;, # Topic accessed in the connection.
+      },
+    ],
+  },
+  &quot;createdFromSnapshotId&quot;: &quot;A String&quot;, # If this is specified, the job&#x27;s initial state is populated from the given
+      # snapshot.
+  &quot;projectId&quot;: &quot;A String&quot;, # The ID of the Cloud Platform project that the job belongs to.
+  &quot;type&quot;: &quot;A String&quot;, # The type of Cloud Dataflow job.
+  &quot;pipelineDescription&quot;: { # A descriptive representation of submitted pipeline as well as the executed # Preliminary field: The format of this data may change at any time.
+      # A description of the user pipeline and stages through which it is executed.
+      # Created by Cloud Dataflow service.  Only retrieved with
+      # JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
+      # form.  This data is provided by the Dataflow service for ease of visualizing
+      # the pipeline and interpreting Dataflow provided metrics.
+    &quot;executionPipelineStage&quot;: [ # Description of each stage of execution of the pipeline.
+      { # Description of the composing transforms, names/ids, and input/outputs of a
+          # stage of execution.  Some composing transforms and sources may have been
+          # generated by the Dataflow service during execution planning.
+        &quot;id&quot;: &quot;A String&quot;, # Dataflow service generated id for this stage.
+        &quot;componentTransform&quot;: [ # Transforms that comprise this execution stage.
+          { # Description of a transform executed as part of an execution stage.
+            &quot;originalTransform&quot;: &quot;A String&quot;, # User name for the original user transform with which this transform is
+                # most closely associated.
+            &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+            &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this transform; may be user or system generated.
+          },
+        ],
+        &quot;componentSource&quot;: [ # Collections produced and consumed by component transforms of this stage.
+          { # Description of an interstitial value between transforms in an execution
+              # stage.
+            &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+            &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this transform; may be user or system generated.
+            &quot;originalTransformOrCollection&quot;: &quot;A String&quot;, # User name for the original user transform or collection with which this
+                # source is most closely associated.
+          },
+        ],
+        &quot;kind&quot;: &quot;A String&quot;, # Type of tranform this stage is executing.
+        &quot;outputSource&quot;: [ # Output sources for this stage.
+          { # Description of an input or output of an execution stage.
+            &quot;originalTransformOrCollection&quot;: &quot;A String&quot;, # User name for the original user transform or collection with which this
+                # source is most closely associated.
+            &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+            &quot;sizeBytes&quot;: &quot;A String&quot;, # Size of the source, if measurable.
+            &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this source; may be user or system generated.
+          },
+        ],
+        &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this stage.
+        &quot;inputSource&quot;: [ # Input sources for this stage.
+          { # Description of an input or output of an execution stage.
+            &quot;originalTransformOrCollection&quot;: &quot;A String&quot;, # User name for the original user transform or collection with which this
+                # source is most closely associated.
+            &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+            &quot;sizeBytes&quot;: &quot;A String&quot;, # Size of the source, if measurable.
+            &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this source; may be user or system generated.
+          },
+        ],
+      },
+    ],
+    &quot;originalPipelineTransform&quot;: [ # Description of each transform in the pipeline and collections between them.
+      { # Description of the type, names/ids, and input/outputs for a transform.
+        &quot;kind&quot;: &quot;A String&quot;, # Type of transform.
+        &quot;inputCollectionName&quot;: [ # User names for all collection inputs to this transform.
+          &quot;A String&quot;,
+        ],
+        &quot;name&quot;: &quot;A String&quot;, # User provided name for this transform instance.
+        &quot;id&quot;: &quot;A String&quot;, # SDK generated id of this transform instance.
+        &quot;displayData&quot;: [ # Transform-specific display data.
+          { # Data provided with a pipeline or transform to provide descriptive info.
+            &quot;timestampValue&quot;: &quot;A String&quot;, # Contains value if the data is of timestamp type.
+            &quot;boolValue&quot;: True or False, # Contains value if the data is of a boolean type.
+            &quot;javaClassValue&quot;: &quot;A String&quot;, # Contains value if the data is of java class type.
+            &quot;strValue&quot;: &quot;A String&quot;, # Contains value if the data is of string type.
+            &quot;int64Value&quot;: &quot;A String&quot;, # Contains value if the data is of int64 type.
+            &quot;durationValue&quot;: &quot;A String&quot;, # Contains value if the data is of duration type.
+            &quot;namespace&quot;: &quot;A String&quot;, # The namespace for the key. This is usually a class name or programming
+                # language namespace (i.e. python module) which defines the display data.
+                # This allows a dax monitoring system to specially handle the data
+                # and perform custom rendering.
+            &quot;floatValue&quot;: 3.14, # Contains value if the data is of float type.
+            &quot;key&quot;: &quot;A String&quot;, # The key identifying the display data.
+                # This is intended to be used as a label for the display data
+                # when viewed in a dax monitoring system.
+            &quot;shortStrValue&quot;: &quot;A String&quot;, # A possible additional shorter value to display.
+                # For example a java_class_name_value of com.mypackage.MyDoFn
+                # will be stored with MyDoFn as the short_str_value and
+                # com.mypackage.MyDoFn as the java_class_name value.
+                # short_str_value can be displayed and java_class_name_value
+                # will be displayed as a tooltip.
+            &quot;url&quot;: &quot;A String&quot;, # An optional full URL.
+            &quot;label&quot;: &quot;A String&quot;, # An optional label to display in a dax UI for the element.
+          },
+        ],
+        &quot;outputCollectionName&quot;: [ # User  names for all collection outputs to this transform.
+          &quot;A String&quot;,
+        ],
+      },
+    ],
+    &quot;displayData&quot;: [ # Pipeline level display data.
+      { # Data provided with a pipeline or transform to provide descriptive info.
+        &quot;timestampValue&quot;: &quot;A String&quot;, # Contains value if the data is of timestamp type.
+        &quot;boolValue&quot;: True or False, # Contains value if the data is of a boolean type.
+        &quot;javaClassValue&quot;: &quot;A String&quot;, # Contains value if the data is of java class type.
+        &quot;strValue&quot;: &quot;A String&quot;, # Contains value if the data is of string type.
+        &quot;int64Value&quot;: &quot;A String&quot;, # Contains value if the data is of int64 type.
+        &quot;durationValue&quot;: &quot;A String&quot;, # Contains value if the data is of duration type.
+        &quot;namespace&quot;: &quot;A String&quot;, # The namespace for the key. This is usually a class name or programming
+            # language namespace (i.e. python module) which defines the display data.
+            # This allows a dax monitoring system to specially handle the data
+            # and perform custom rendering.
+        &quot;floatValue&quot;: 3.14, # Contains value if the data is of float type.
+        &quot;key&quot;: &quot;A String&quot;, # The key identifying the display data.
+            # This is intended to be used as a label for the display data
+            # when viewed in a dax monitoring system.
+        &quot;shortStrValue&quot;: &quot;A String&quot;, # A possible additional shorter value to display.
+            # For example a java_class_name_value of com.mypackage.MyDoFn
+            # will be stored with MyDoFn as the short_str_value and
+            # com.mypackage.MyDoFn as the java_class_name value.
+            # short_str_value can be displayed and java_class_name_value
+            # will be displayed as a tooltip.
+        &quot;url&quot;: &quot;A String&quot;, # An optional full URL.
+        &quot;label&quot;: &quot;A String&quot;, # An optional label to display in a dax UI for the element.
+      },
+    ],
+  },
+  &quot;replaceJobId&quot;: &quot;A String&quot;, # If this job is an update of an existing job, this field is the job ID
+      # of the job it replaced.
+      # 
+      # When sending a `CreateJobRequest`, you can update a job by specifying it
+      # here. The job named here is stopped, and its intermediate state is
+      # transferred to this job.
+  &quot;tempFiles&quot;: [ # A set of files the system should be aware of that are used
       # for temporary storage. These temporary files will be
       # removed on job completion.
       # No duplicates are allowed.
@@ -606,36 +607,9 @@
       # 
       #    storage.googleapis.com/{bucket}/{object}
       #    bucket.storage.googleapis.com/{object}
-    "A String",
+    &quot;A String&quot;,
   ],
-  "type": "A String", # The type of Cloud Dataflow job.
-  "clientRequestId": "A String", # The client's unique identifier of the job, re-used across retried attempts.
-      # If this field is set, the service will ensure its uniqueness.
-      # The request to create a job will fail if the service has knowledge of a
-      # previously submitted job with the same client's ID and job name.
-      # The caller may use this field to ensure idempotence of job
-      # creation across retried attempts to create a job.
-      # By default, the field is empty and, in that case, the service ignores it.
-  "createdFromSnapshotId": "A String", # If this is specified, the job's initial state is populated from the given
-      # snapshot.
-  "stepsLocation": "A String", # The GCS location where the steps are stored.
-  "currentStateTime": "A String", # The timestamp associated with the current state.
-  "startTime": "A String", # The timestamp when the job was started (transitioned to JOB_STATE_PENDING).
-      # Flexible resource scheduling jobs are started with some delay after job
-      # creation, so start_time is unset before start and is updated when the
-      # job is started by the Cloud Dataflow service. For other jobs, start_time
-      # always equals to create_time and is immutable and set by the Cloud Dataflow
-      # service.
-  "createTime": "A String", # The timestamp when the job was initially created. Immutable and set by the
-      # Cloud Dataflow service.
-  "requestedState": "A String", # The job's requested state.
-      # 
-      # `UpdateJob` may be used to switch between the `JOB_STATE_STOPPED` and
-      # `JOB_STATE_RUNNING` states, by setting requested_state.  `UpdateJob` may
-      # also be used to directly set a job's requested state to
-      # `JOB_STATE_CANCELLED` or `JOB_STATE_DONE`, irrevocably terminating the
-      # job if it has not already reached a terminal state.
-  "name": "A String", # The user-specified Cloud Dataflow job name.
+  &quot;name&quot;: &quot;A String&quot;, # The user-specified Cloud Dataflow job name.
       # 
       # Only one Job with a given name may exist in a project at any
       # given time. If a caller attempts to create a Job with the same
@@ -644,7 +618,7 @@
       # 
       # The name must match the regular expression
       # `[a-z]([-a-z0-9]{0,38}[a-z0-9])?`
-  "steps": [ # Exactly one of step or steps_location should be specified.
+  &quot;steps&quot;: [ # Exactly one of step or steps_location should be specified.
       # 
       # The top-level steps that constitute the entire job.
     { # Defines a particular step within a Cloud Dataflow job.
@@ -653,11 +627,11 @@
         # specific operation as part of the overall job.  Data is typically
         # passed from one step to another as part of the job.
         #
-        # Here's an example of a sequence of steps which together implement a
+        # Here&#x27;s an example of a sequence of steps which together implement a
         # Map-Reduce job:
         #
         #   * Read a collection of data from some source, parsing the
-        #     collection's elements.
+        #     collection&#x27;s elements.
         #
         #   * Validate the elements.
         #
@@ -672,23 +646,32 @@
         #
         # Note that the Cloud Dataflow service may be used to run many different
         # types of jobs, not just Map-Reduce.
-      "kind": "A String", # The kind of step in the Cloud Dataflow job.
-      "name": "A String", # The name that identifies the step. This must be unique for each
+      &quot;name&quot;: &quot;A String&quot;, # The name that identifies the step. This must be unique for each
           # step with respect to all other steps in the Cloud Dataflow job.
-      "properties": { # Named properties associated with the step. Each kind of
+      &quot;kind&quot;: &quot;A String&quot;, # The kind of step in the Cloud Dataflow job.
+      &quot;properties&quot;: { # Named properties associated with the step. Each kind of
           # predefined step has its own required set of properties.
           # Must be provided on Create.  Only retrieved with JOB_VIEW_ALL.
-        "a_key": "", # Properties of the object.
+        &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
       },
     },
   ],
-  "replaceJobId": "A String", # If this job is an update of an existing job, this field is the job ID
-      # of the job it replaced.
-      # 
-      # When sending a `CreateJobRequest`, you can update a job by specifying it
-      # here. The job named here is stopped, and its intermediate state is
-      # transferred to this job.
-  "currentState": "A String", # The current state of the job.
+  &quot;replacedByJobId&quot;: &quot;A String&quot;, # If another job is an update of this job (and thus, this job is in
+      # `JOB_STATE_UPDATED`), this field contains the ID of that job.
+  &quot;executionInfo&quot;: { # Additional information about how a Cloud Dataflow job will be executed that # Deprecated.
+      # isn&#x27;t contained in the submitted job.
+    &quot;stages&quot;: { # A mapping from each stage to the information about that stage.
+      &quot;a_key&quot;: { # Contains information about how a particular
+          # google.dataflow.v1beta3.Step will be executed.
+        &quot;stepName&quot;: [ # The steps associated with the execution stage.
+            # Note that stages may have several steps, and that a given step
+            # might be run by more than one stage.
+          &quot;A String&quot;,
+        ],
+      },
+    },
+  },
+  &quot;currentState&quot;: &quot;A String&quot;, # The current state of the job.
       # 
       # Jobs are created in the `JOB_STATE_STOPPED` state unless otherwise
       # specified.
@@ -699,408 +682,114 @@
       # 
       # This field may be mutated by the Cloud Dataflow service;
       # callers cannot mutate it.
-  "executionInfo": { # Additional information about how a Cloud Dataflow job will be executed that # Deprecated.
-      # isn't contained in the submitted job.
-    "stages": { # A mapping from each stage to the information about that stage.
-      "a_key": { # Contains information about how a particular
-          # google.dataflow.v1beta3.Step will be executed.
-        "stepName": [ # The steps associated with the execution stage.
-            # Note that stages may have several steps, and that a given step
-            # might be run by more than one stage.
-          "A String",
-        ],
-      },
-    },
+  &quot;location&quot;: &quot;A String&quot;, # The [regional endpoint]
+      # (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that
+      # contains this job.
+  &quot;startTime&quot;: &quot;A String&quot;, # The timestamp when the job was started (transitioned to JOB_STATE_PENDING).
+      # Flexible resource scheduling jobs are started with some delay after job
+      # creation, so start_time is unset before start and is updated when the
+      # job is started by the Cloud Dataflow service. For other jobs, start_time
+      # always equals create_time and is immutable and set by the Cloud Dataflow
+      # service.
+  &quot;stepsLocation&quot;: &quot;A String&quot;, # The GCS location where the steps are stored.
+  &quot;labels&quot;: { # User-defined labels for this job.
+      # 
+      # The labels map can contain no more than 64 entries.  Entries of the labels
+      # map are UTF8 strings that comply with the following restrictions:
+      # 
+      # * Keys must conform to regexp:  \p{Ll}\p{Lo}{0,62}
+      # * Values must conform to regexp:  [\p{Ll}\p{Lo}\p{N}_-]{0,63}
+      # * Both keys and values are additionally constrained to be &lt;= 128 bytes in
+      # size.
+    &quot;a_key&quot;: &quot;A String&quot;,
   },
+  &quot;createTime&quot;: &quot;A String&quot;, # The timestamp when the job was initially created. Immutable and set by the
+      # Cloud Dataflow service.
+  &quot;requestedState&quot;: &quot;A String&quot;, # The job&#x27;s requested state.
+      # 
+      # `UpdateJob` may be used to switch between the `JOB_STATE_STOPPED` and
+      # `JOB_STATE_RUNNING` states, by setting requested_state.  `UpdateJob` may
+      # also be used to directly set a job&#x27;s requested state to
+      # `JOB_STATE_CANCELLED` or `JOB_STATE_DONE`, irrevocably terminating the
+      # job if it has not already reached a terminal state.
 }
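
For orientation, the sketch below shows a minimal request body assembled from a few of the fields documented above (name, labels, stepsLocation, and environment.tempStoragePrefix). It is illustrative only: the bucket, label, and job name values are placeholders, and most optional fields are omitted.

    # Minimal Job request body (hedged sketch; bucket and name values are placeholders).
    job_body = {
        "name": "example-wordcount",  # must match [a-z]([-a-z0-9]{0,38}[a-z0-9])?
        "labels": {"team": "data-eng"},  # optional user-defined labels
        # Exactly one of "steps" or "stepsLocation" should be provided.
        "stepsLocation": "gs://example-bucket/staging/steps.json",
        "environment": {
            "tempStoragePrefix": "storage.googleapis.com/example-bucket/temp",
        },
    }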
 
+  view: string, The level of information requested in response.
+  replaceJobId: string, Deprecated. This field is now in the Job message.
   x__xgafv: string, V1 error format.
     Allowed values
       1 - v1 error format
       2 - v2 error format
-  replaceJobId: string, Deprecated. This field is now in the Job message.
-  view: string, The level of information requested in response.
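
As a usage sketch (not part of the generated reference), the request can be issued with the discovery-based Python client; "my-project" and "us-central1" are placeholder values, and application default credentials are assumed:

    from googleapiclient.discovery import build

    # Build the Dataflow v1b3 client (uses application default credentials).
    dataflow = build("dataflow", "v1b3")

    response = (
        dataflow.projects()
        .locations()
        .jobs()
        .create(
            projectId="my-project",
            location="us-central1",
            body=job_body,            # the Job dict sketched earlier
            view="JOB_VIEW_SUMMARY",  # optional: level of detail to return
        )
        .execute()
    )
    print(response["id"], response.get("currentState"))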
 
 Returns:
   An object of the form:
 
     { # Defines a job to be run by the Cloud Dataflow service.
-    "labels": { # User-defined labels for this job.
-        #
-        # The labels map can contain no more than 64 entries.  Entries of the labels
-        # map are UTF8 strings that comply with the following restrictions:
-        #
-        # * Keys must conform to regexp:  \p{Ll}\p{Lo}{0,62}
-        # * Values must conform to regexp:  [\p{Ll}\p{Lo}\p{N}_-]{0,63}
-        # * Both keys and values are additionally constrained to be &lt;= 128 bytes in
-        # size.
-      "a_key": "A String",
-    },
-    "jobMetadata": { # Metadata available primarily for filtering jobs. Will be included in the # This field is populated by the Dataflow service to support filtering jobs
-        # by the metadata values provided here. Populated for ListJobs and all GetJob
-        # views SUMMARY and higher.
-        # ListJob response and Job SUMMARY view.
-      "sdkVersion": { # The version of the SDK used to run the job. # The SDK version used to run the job.
-        "versionDisplayName": "A String", # A readable string describing the version of the SDK.
-        "version": "A String", # The version of the SDK used to run the job.
-        "sdkSupportStatus": "A String", # The support status for this SDK version.
-      },
-      "pubsubDetails": [ # Identification of a PubSub source used in the Dataflow job.
-        { # Metadata for a PubSub connector used by the job.
-          "topic": "A String", # Topic accessed in the connection.
-          "subscription": "A String", # Subscription used in the connection.
-        },
-      ],
-      "datastoreDetails": [ # Identification of a Datastore source used in the Dataflow job.
-        { # Metadata for a Datastore connector used by the job.
-          "projectId": "A String", # ProjectId accessed in the connection.
-          "namespace": "A String", # Namespace used in the connection.
-        },
-      ],
-      "fileDetails": [ # Identification of a File source used in the Dataflow job.
-        { # Metadata for a File connector used by the job.
-          "filePattern": "A String", # File Pattern used to access files by the connector.
-        },
-      ],
-      "spannerDetails": [ # Identification of a Spanner source used in the Dataflow job.
-        { # Metadata for a Spanner connector used by the job.
-          "instanceId": "A String", # InstanceId accessed in the connection.
-          "projectId": "A String", # ProjectId accessed in the connection.
-          "databaseId": "A String", # DatabaseId accessed in the connection.
-        },
-      ],
-      "bigTableDetails": [ # Identification of a BigTable source used in the Dataflow job.
-        { # Metadata for a BigTable connector used by the job.
-          "instanceId": "A String", # InstanceId accessed in the connection.
-          "projectId": "A String", # ProjectId accessed in the connection.
-          "tableId": "A String", # TableId accessed in the connection.
-        },
-      ],
-      "bigqueryDetails": [ # Identification of a BigQuery source used in the Dataflow job.
-        { # Metadata for a BigQuery connector used by the job.
-          "projectId": "A String", # Project accessed in the connection.
-          "query": "A String", # Query used to access data in the connection.
-          "table": "A String", # Table accessed in the connection.
-          "dataset": "A String", # Dataset accessed in the connection.
-        },
-      ],
-    },
-    "pipelineDescription": { # A descriptive representation of submitted pipeline as well as the executed # Preliminary field: The format of this data may change at any time.
-        # A description of the user pipeline and stages through which it is executed.
-        # Created by Cloud Dataflow service.  Only retrieved with
-        # JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
-        # form.  This data is provided by the Dataflow service for ease of visualizing
-        # the pipeline and interpreting Dataflow provided metrics.
-      "originalPipelineTransform": [ # Description of each transform in the pipeline and collections between them.
-        { # Description of the type, names/ids, and input/outputs for a transform.
-          "kind": "A String", # Type of transform.
-          "name": "A String", # User provided name for this transform instance.
-          "inputCollectionName": [ # User names for all collection inputs to this transform.
-            "A String",
-          ],
-          "displayData": [ # Transform-specific display data.
-            { # Data provided with a pipeline or transform to provide descriptive info.
-              "key": "A String", # The key identifying the display data.
-                  # This is intended to be used as a label for the display data
-                  # when viewed in a dax monitoring system.
-              "shortStrValue": "A String", # A possible additional shorter value to display.
-                  # For example a java_class_name_value of com.mypackage.MyDoFn
-                  # will be stored with MyDoFn as the short_str_value and
-                  # com.mypackage.MyDoFn as the java_class_name value.
-                  # short_str_value can be displayed and java_class_name_value
-                  # will be displayed as a tooltip.
-              "timestampValue": "A String", # Contains value if the data is of timestamp type.
-              "url": "A String", # An optional full URL.
-              "floatValue": 3.14, # Contains value if the data is of float type.
-              "namespace": "A String", # The namespace for the key. This is usually a class name or programming
-                  # language namespace (i.e. python module) which defines the display data.
-                  # This allows a dax monitoring system to specially handle the data
-                  # and perform custom rendering.
-              "javaClassValue": "A String", # Contains value if the data is of java class type.
-              "label": "A String", # An optional label to display in a dax UI for the element.
-              "boolValue": True or False, # Contains value if the data is of a boolean type.
-              "strValue": "A String", # Contains value if the data is of string type.
-              "durationValue": "A String", # Contains value if the data is of duration type.
-              "int64Value": "A String", # Contains value if the data is of int64 type.
-            },
-          ],
-          "outputCollectionName": [ # User  names for all collection outputs to this transform.
-            "A String",
-          ],
-          "id": "A String", # SDK generated id of this transform instance.
-        },
-      ],
-      "executionPipelineStage": [ # Description of each stage of execution of the pipeline.
-        { # Description of the composing transforms, names/ids, and input/outputs of a
-            # stage of execution.  Some composing transforms and sources may have been
-            # generated by the Dataflow service during execution planning.
-          "componentSource": [ # Collections produced and consumed by component transforms of this stage.
-            { # Description of an interstitial value between transforms in an execution
-                # stage.
-              "userName": "A String", # Human-readable name for this transform; may be user or system generated.
-              "originalTransformOrCollection": "A String", # User name for the original user transform or collection with which this
-                  # source is most closely associated.
-              "name": "A String", # Dataflow service generated name for this source.
-            },
-          ],
-          "kind": "A String", # Type of tranform this stage is executing.
-          "name": "A String", # Dataflow service generated name for this stage.
-          "outputSource": [ # Output sources for this stage.
-            { # Description of an input or output of an execution stage.
-              "userName": "A String", # Human-readable name for this source; may be user or system generated.
-              "sizeBytes": "A String", # Size of the source, if measurable.
-              "name": "A String", # Dataflow service generated name for this source.
-              "originalTransformOrCollection": "A String", # User name for the original user transform or collection with which this
-                  # source is most closely associated.
-            },
-          ],
-          "inputSource": [ # Input sources for this stage.
-            { # Description of an input or output of an execution stage.
-              "userName": "A String", # Human-readable name for this source; may be user or system generated.
-              "sizeBytes": "A String", # Size of the source, if measurable.
-              "name": "A String", # Dataflow service generated name for this source.
-              "originalTransformOrCollection": "A String", # User name for the original user transform or collection with which this
-                  # source is most closely associated.
-            },
-          ],
-          "componentTransform": [ # Transforms that comprise this execution stage.
-            { # Description of a transform executed as part of an execution stage.
-              "userName": "A String", # Human-readable name for this transform; may be user or system generated.
-              "originalTransform": "A String", # User name for the original user transform with which this transform is
-                  # most closely associated.
-              "name": "A String", # Dataflow service generated name for this source.
-            },
-          ],
-          "id": "A String", # Dataflow service generated id for this stage.
-        },
-      ],
-      "displayData": [ # Pipeline level display data.
-        { # Data provided with a pipeline or transform to provide descriptive info.
-          "key": "A String", # The key identifying the display data.
-              # This is intended to be used as a label for the display data
-              # when viewed in a dax monitoring system.
-          "shortStrValue": "A String", # A possible additional shorter value to display.
-              # For example a java_class_name_value of com.mypackage.MyDoFn
-              # will be stored with MyDoFn as the short_str_value and
-              # com.mypackage.MyDoFn as the java_class_name value.
-              # short_str_value can be displayed and java_class_name_value
-              # will be displayed as a tooltip.
-          "timestampValue": "A String", # Contains value if the data is of timestamp type.
-          "url": "A String", # An optional full URL.
-          "floatValue": 3.14, # Contains value if the data is of float type.
-          "namespace": "A String", # The namespace for the key. This is usually a class name or programming
-              # language namespace (i.e. python module) which defines the display data.
-              # This allows a dax monitoring system to specially handle the data
-              # and perform custom rendering.
-          "javaClassValue": "A String", # Contains value if the data is of java class type.
-          "label": "A String", # An optional label to display in a dax UI for the element.
-          "boolValue": True or False, # Contains value if the data is of a boolean type.
-          "strValue": "A String", # Contains value if the data is of string type.
-          "durationValue": "A String", # Contains value if the data is of duration type.
-          "int64Value": "A String", # Contains value if the data is of int64 type.
-        },
-      ],
-    },
-    "stageStates": [ # This field may be mutated by the Cloud Dataflow service;
-        # callers cannot mutate it.
-      { # A message describing the state of a particular execution stage.
-        "executionStageName": "A String", # The name of the execution stage.
-        "executionStageState": "A String", # Executions stage states allow the same set of values as JobState.
-        "currentStateTime": "A String", # The time at which the stage transitioned to this state.
-      },
-    ],
-    "id": "A String", # The unique ID of this job.
+    &quot;clientRequestId&quot;: &quot;A String&quot;, # The client&#x27;s unique identifier of the job, re-used across retried attempts.
+        # If this field is set, the service will ensure its uniqueness.
+        # The request to create a job will fail if the service has knowledge of a
+        # previously submitted job with the same client&#x27;s ID and job name.
+        # The caller may use this field to ensure idempotence of job
+        # creation across retried attempts to create a job.
+        # By default, the field is empty and, in that case, the service ignores it.
+    &quot;id&quot;: &quot;A String&quot;, # The unique ID of this job.
         #
         # This field is set by the Cloud Dataflow service when the Job is
         # created, and is immutable for the life of the job.
-    "replacedByJobId": "A String", # If another job is an update of this job (and thus, this job is in
-        # `JOB_STATE_UPDATED`), this field contains the ID of that job.
-    "projectId": "A String", # The ID of the Cloud Platform project that the job belongs to.
-    "transformNameMapping": { # The map of transform name prefixes of the job to be replaced to the
+    &quot;currentStateTime&quot;: &quot;A String&quot;, # The timestamp associated with the current state.
+    &quot;transformNameMapping&quot;: { # The map of transform name prefixes of the job to be replaced to the
         # corresponding name prefixes of the new job.
-      "a_key": "A String",
+      &quot;a_key&quot;: &quot;A String&quot;,
     },
-    "environment": { # Describes the environment in which a Dataflow Job runs. # The environment for the job.
-      "workerRegion": "A String", # The Compute Engine region
-          # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
-          # which worker processing should occur, e.g. "us-west1". Mutually exclusive
-          # with worker_zone. If neither worker_region nor worker_zone is specified,
-          # default to the control plane's region.
-      "version": { # A structure describing which components and their versions of the service
-          # are required in order to run the job.
-        "a_key": "", # Properties of the object.
-      },
-      "flexResourceSchedulingGoal": "A String", # Which Flexible Resource Scheduling mode to run in.
-      "serviceKmsKeyName": "A String", # If set, contains the Cloud KMS key identifier used to encrypt data
-          # at rest, AKA a Customer Managed Encryption Key (CMEK).
-          #
-          # Format:
-          #   projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
-      "internalExperiments": { # Experimental settings.
-        "a_key": "", # Properties of the object. Contains field @type with type URL.
-      },
-      "dataset": "A String", # The dataset for the current project where various workflow
-          # related tables are stored.
-          #
-          # The supported resource type is:
-          #
-          # Google BigQuery:
-          #   bigquery.googleapis.com/{dataset}
-      "experiments": [ # The list of experiments to enable.
-        "A String",
-      ],
-      "serviceAccountEmail": "A String", # Identity to run virtual machines as. Defaults to the default account.
-      "sdkPipelineOptions": { # The Cloud Dataflow SDK pipeline options specified by the user. These
+    &quot;environment&quot;: { # Describes the environment in which a Dataflow Job runs. # The environment for the job.
+      &quot;sdkPipelineOptions&quot;: { # The Cloud Dataflow SDK pipeline options specified by the user. These
           # options are passed through the service and are used to recreate the
           # SDK pipeline options on the worker in a language agnostic and platform
           # independent way.
-        "a_key": "", # Properties of the object.
+        &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
       },
-      "userAgent": { # A description of the process that generated the request.
-        "a_key": "", # Properties of the object.
-      },
-      "workerZone": "A String", # The Compute Engine zone
-          # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
-          # which worker processing should occur, e.g. "us-west1-a". Mutually exclusive
-          # with worker_region. If neither worker_region nor worker_zone is specified,
-          # a zone in the control plane's region is chosen based on available capacity.
-      "workerPools": [ # The worker pools. At least one "harness" worker pool must be
+      &quot;flexResourceSchedulingGoal&quot;: &quot;A String&quot;, # Which Flexible Resource Scheduling mode to run in.
+      &quot;workerPools&quot;: [ # The worker pools. At least one &quot;harness&quot; worker pool must be
           # specified in order for the job to have workers.
         { # Describes one particular pool of Cloud Dataflow workers to be
             # instantiated by the Cloud Dataflow service in order to perform the
             # computations required by a job.  Note that a workflow job may use
             # multiple pools, in order to match the various computational
             # requirements of the various stages of the job.
-          "workerHarnessContainerImage": "A String", # Required. Docker container image that executes the Cloud Dataflow worker
-              # harness, residing in Google Container Registry.
-              #
-              # Deprecated for the Fn API path. Use sdk_harness_container_images instead.
-          "ipConfiguration": "A String", # Configuration for VM IPs.
-          "autoscalingSettings": { # Settings for WorkerPool autoscaling. # Settings for autoscaling of this WorkerPool.
-            "maxNumWorkers": 42, # The maximum number of workers to cap scaling at.
-            "algorithm": "A String", # The algorithm to use for autoscaling.
-          },
-          "diskSourceImage": "A String", # Fully qualified source image for disks.
-          "network": "A String", # Network to which VMs will be assigned.  If empty or unspecified,
-              # the service will use the network "default".
-          "zone": "A String", # Zone to run the worker pools in.  If empty or unspecified, the service
+          &quot;defaultPackageSet&quot;: &quot;A String&quot;, # The default package set to install.  This allows the service to
+              # select a default set of packages which are useful to worker
+              # harnesses written in a particular language.
+          &quot;network&quot;: &quot;A String&quot;, # Network to which VMs will be assigned.  If empty or unspecified,
+              # the service will use the network &quot;default&quot;.
+          &quot;zone&quot;: &quot;A String&quot;, # Zone to run the worker pools in.  If empty or unspecified, the service
               # will attempt to choose a reasonable default.
-          "metadata": { # Metadata to set on the Google Compute Engine VMs.
-            "a_key": "A String",
-          },
-          "machineType": "A String", # Machine type (e.g. "n1-standard-1").  If empty or unspecified, the
-              # service will attempt to choose a reasonable default.
-          "onHostMaintenance": "A String", # The action to take on host maintenance, as defined by the Google
-              # Compute Engine API.
-          "taskrunnerSettings": { # Taskrunner configuration settings. # Settings passed through to Google Compute Engine workers when
-              # using the standard Dataflow task runner.  Users should ignore
-              # this field.
-            "workflowFileName": "A String", # The file to store the workflow in.
-            "logUploadLocation": "A String", # Indicates where to put logs.  If this is not specified, the logs
-                # will not be uploaded.
-                #
-                # The supported resource type is:
-                #
-                # Google Cloud Storage:
-                #   storage.googleapis.com/{bucket}/{object}
-                #   bucket.storage.googleapis.com/{object}
-            "commandlinesFileName": "A String", # The file to store preprocessing commands in.
-            "alsologtostderr": True or False, # Whether to also send taskrunner log info to stderr.
-            "continueOnException": True or False, # Whether to continue taskrunner if an exception is hit.
-            "baseTaskDir": "A String", # The location on the worker for task-specific subdirectories.
-            "vmId": "A String", # The ID string of the VM.
-            "taskGroup": "A String", # The UNIX group ID on the worker VM to use for tasks launched by
-                # taskrunner; e.g. "wheel".
-            "taskUser": "A String", # The UNIX user ID on the worker VM to use for tasks launched by
-                # taskrunner; e.g. "root".
-            "oauthScopes": [ # The OAuth2 scopes to be requested by the taskrunner in order to
-                # access the Cloud Dataflow API.
-              "A String",
-            ],
-            "languageHint": "A String", # The suggested backend language.
-            "logToSerialconsole": True or False, # Whether to send taskrunner log info to Google Compute Engine VM serial
-                # console.
-            "streamingWorkerMainClass": "A String", # The streaming worker main class name.
-            "logDir": "A String", # The directory on the VM to store logs.
-            "parallelWorkerSettings": { # Provides data to pass through to the worker harness. # The settings to pass to the parallel worker harness.
-              "reportingEnabled": True or False, # Whether to send work progress updates to the service.
-              "shuffleServicePath": "A String", # The Shuffle service path relative to the root URL, for example,
-                  # "shuffle/v1beta1".
-              "workerId": "A String", # The ID of the worker running this pipeline.
-              "baseUrl": "A String", # The base URL for accessing Google Cloud APIs.
-                  #
-                  # When workers access Google Cloud APIs, they logically do so via
-                  # relative URLs.  If this field is specified, it supplies the base
-                  # URL to use for resolving these relative URLs.  The normative
-                  # algorithm used is defined by RFC 1808, "Relative Uniform Resource
-                  # Locators".
-                  #
-                  # If not specified, the default value is "http://www.googleapis.com/"
-              "servicePath": "A String", # The Cloud Dataflow service path relative to the root URL, for example,
-                  # "dataflow/v1b3/projects".
-              "tempStoragePrefix": "A String", # The prefix of the resources the system should use for temporary
-                  # storage.
-                  #
-                  # The supported resource type is:
-                  #
-                  # Google Cloud Storage:
-                  #
-                  #   storage.googleapis.com/{bucket}/{object}
-                  #   bucket.storage.googleapis.com/{object}
-            },
-            "dataflowApiVersion": "A String", # The API version of endpoint, e.g. "v1b3"
-            "harnessCommand": "A String", # The command to launch the worker harness.
-            "tempStoragePrefix": "A String", # The prefix of the resources the taskrunner should use for
-                # temporary storage.
-                #
-                # The supported resource type is:
-                #
-                # Google Cloud Storage:
-                #   storage.googleapis.com/{bucket}/{object}
-                #   bucket.storage.googleapis.com/{object}
-            "baseUrl": "A String", # The base URL for the taskrunner to use when accessing Google Cloud APIs.
-                #
-                # When workers access Google Cloud APIs, they logically do so via
-                # relative URLs.  If this field is specified, it supplies the base
-                # URL to use for resolving these relative URLs.  The normative
-                # algorithm used is defined by RFC 1808, "Relative Uniform Resource
-                # Locators".
-                #
-                # If not specified, the default value is "http://www.googleapis.com/"
-          },
-          "numThreadsPerWorker": 42, # The number of threads per worker harness. If empty or unspecified, the
+          &quot;numWorkers&quot;: 42, # Number of Google Compute Engine workers in this pool needed to
+              # execute the job.  If zero or unspecified, the service will
+              # attempt to choose a reasonable default.
+          &quot;numThreadsPerWorker&quot;: 42, # The number of threads per worker harness. If empty or unspecified, the
               # service will choose a number of threads (according to the number of cores
               # on the selected machine type for batch, or 1 by convention for streaming).
-          "poolArgs": { # Extra arguments for this worker pool.
-            "a_key": "", # Properties of the object. Contains field @type with type URL.
-          },
-          "packages": [ # Packages to be installed on workers.
+          &quot;diskSourceImage&quot;: &quot;A String&quot;, # Fully qualified source image for disks.
+          &quot;packages&quot;: [ # Packages to be installed on workers.
             { # The packages that must be installed in order for a worker to run the
                 # steps of the Cloud Dataflow job that will be assigned to its worker
                 # pool.
                 #
                 # This is the mechanism by which the Cloud Dataflow SDK causes code to
                 # be loaded onto the workers. For example, the Cloud Dataflow Java SDK
-                # might use this to install jars containing the user's code and all of the
+                # might use this to install jars containing the user&#x27;s code and all of the
                 # various dependencies (libraries, data files, etc.) required in order
                 # for that code to run.
-              "location": "A String", # The resource to read the package from. The supported resource type is:
+              &quot;location&quot;: &quot;A String&quot;, # The resource to read the package from. The supported resource type is:
                   #
                   # Google Cloud Storage:
                   #
                   #   storage.googleapis.com/{bucket}
                   #   bucket.storage.googleapis.com/
-              "name": "A String", # The name of the package.
+              &quot;name&quot;: &quot;A String&quot;, # The name of the package.
             },
           ],
-          "defaultPackageSet": "A String", # The default package set to install.  This allows the service to
-              # select a default set of packages which are useful to worker
-              # harnesses written in a particular language.
-          "kind": "A String", # The kind of the worker pool; currently only `harness` and `shuffle`
-              # are supported.
-          "diskType": "A String", # Type of root disk for VMs.  If empty or unspecified, the service will
-              # attempt to choose a reasonable default.
-          "teardownPolicy": "A String", # Sets the policy for determining when to turndown worker pool.
+          &quot;teardownPolicy&quot;: &quot;A String&quot;, # Sets the policy for determining when to turn down the worker pool.
               # Allowed values are: `TEARDOWN_ALWAYS`, `TEARDOWN_ON_SUCCESS`, and
               # `TEARDOWN_NEVER`.
               # `TEARDOWN_ALWAYS` means workers are always torn down regardless of whether
@@ -1110,32 +799,41 @@
               #
               # If the workers are not torn down by the service, they will
               # continue to run and use Google Compute Engine VM resources in the
-              # user's project until they are explicitly terminated by the user.
+              # user&#x27;s project until they are explicitly terminated by the user.
               # Because of this, Google recommends using the `TEARDOWN_ALWAYS`
               # policy except for small, manually supervised test jobs.
               #
               # If unknown or unspecified, the service will attempt to choose a reasonable
               # default.
-          "diskSizeGb": 42, # Size of root disk for VMs, in GB.  If zero or unspecified, the service will
+          &quot;onHostMaintenance&quot;: &quot;A String&quot;, # The action to take on host maintenance, as defined by the Google
+              # Compute Engine API.
+          &quot;poolArgs&quot;: { # Extra arguments for this worker pool.
+            &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+          },
+          &quot;diskSizeGb&quot;: 42, # Size of root disk for VMs, in GB.  If zero or unspecified, the service will
               # attempt to choose a reasonable default.
-          "numWorkers": 42, # Number of Google Compute Engine workers in this pool needed to
-              # execute the job.  If zero or unspecified, the service will
+          &quot;workerHarnessContainerImage&quot;: &quot;A String&quot;, # Required. Docker container image that executes the Cloud Dataflow worker
+              # harness, residing in Google Container Registry.
+              #
+              # Deprecated for the Fn API path. Use sdk_harness_container_images instead.
+          &quot;diskType&quot;: &quot;A String&quot;, # Type of root disk for VMs.  If empty or unspecified, the service will
               # attempt to choose a reasonable default.
-          "subnetwork": "A String", # Subnetwork to which VMs will be assigned, if desired.  Expected to be of
-              # the form "regions/REGION/subnetworks/SUBNETWORK".
-          "dataDisks": [ # Data disks that are used by a VM in this workflow.
+          &quot;machineType&quot;: &quot;A String&quot;, # Machine type (e.g. &quot;n1-standard-1&quot;).  If empty or unspecified, the
+              # service will attempt to choose a reasonable default.
+          &quot;kind&quot;: &quot;A String&quot;, # The kind of the worker pool; currently only `harness` and `shuffle`
+              # are supported.
+          &quot;dataDisks&quot;: [ # Data disks that are used by a VM in this workflow.
             { # Describes the data disk used by a workflow job.
-              "mountPoint": "A String", # Directory in a VM where disk is mounted.
-              "sizeGb": 42, # Size of disk in GB.  If zero or unspecified, the service will
+              &quot;sizeGb&quot;: 42, # Size of disk in GB.  If zero or unspecified, the service will
                   # attempt to choose a reasonable default.
-              "diskType": "A String", # Disk storage type, as defined by Google Compute Engine.  This
+              &quot;diskType&quot;: &quot;A String&quot;, # Disk storage type, as defined by Google Compute Engine.  This
                   # must be a disk type appropriate to the project and zone in which
                   # the workers will run.  If unknown or unspecified, the service
                   # will attempt to choose a reasonable default.
                   #
                   # For example, the standard persistent disk type is a resource name
-                  # typically ending in "pd-standard".  If SSD persistent disks are
-                  # available, the resource name typically ends with "pd-ssd".  The
+                  # typically ending in &quot;pd-standard&quot;.  If SSD persistent disks are
+                  # available, the resource name typically ends with &quot;pd-ssd&quot;.  The
                   # actual valid values are defined the Google Compute Engine API,
                   # not by the Cloud Dataflow API; consult the Google Compute Engine
                   # documentation for more information about determining the set of
@@ -1146,29 +844,144 @@
                   # typically look something like this:
                   #
                   # compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
+              &quot;mountPoint&quot;: &quot;A String&quot;, # Directory in a VM where disk is mounted.
             },
           ],
-          "sdkHarnessContainerImages": [ # Set of SDK harness containers needed to execute this pipeline. This will
+          &quot;sdkHarnessContainerImages&quot;: [ # Set of SDK harness containers needed to execute this pipeline. This will
               # only be set in the Fn API path. For non-cross-language pipelines this
               # should have only one entry. Cross-language pipelines will have two or more
               # entries.
             { # Defines a SDK harness container for executing Dataflow pipelines.
-              "containerImage": "A String", # A docker container image that resides in Google Container Registry.
-              "useSingleCorePerContainer": True or False, # If true, recommends the Dataflow service to use only one core per SDK
+              &quot;containerImage&quot;: &quot;A String&quot;, # A docker container image that resides in Google Container Registry.
+              &quot;useSingleCorePerContainer&quot;: True or False, # If true, recommends the Dataflow service to use only one core per SDK
                   # container instance with this image. If false (or unset) recommends using
                   # more than one core per SDK container instance with this image for
                   # efficiency. Note that Dataflow service may choose to override this property
                   # if needed.
             },
           ],
+          &quot;subnetwork&quot;: &quot;A String&quot;, # Subnetwork to which VMs will be assigned, if desired.  Expected to be of
+              # the form &quot;regions/REGION/subnetworks/SUBNETWORK&quot;.
+          &quot;ipConfiguration&quot;: &quot;A String&quot;, # Configuration for VM IPs.
+          &quot;taskrunnerSettings&quot;: { # Taskrunner configuration settings. # Settings passed through to Google Compute Engine workers when
+              # using the standard Dataflow task runner.  Users should ignore
+              # this field.
+            &quot;alsologtostderr&quot;: True or False, # Whether to also send taskrunner log info to stderr.
+            &quot;taskGroup&quot;: &quot;A String&quot;, # The UNIX group ID on the worker VM to use for tasks launched by
+                # taskrunner; e.g. &quot;wheel&quot;.
+            &quot;harnessCommand&quot;: &quot;A String&quot;, # The command to launch the worker harness.
+            &quot;logDir&quot;: &quot;A String&quot;, # The directory on the VM to store logs.
+            &quot;oauthScopes&quot;: [ # The OAuth2 scopes to be requested by the taskrunner in order to
+                # access the Cloud Dataflow API.
+              &quot;A String&quot;,
+            ],
+            &quot;dataflowApiVersion&quot;: &quot;A String&quot;, # The API version of the endpoint, e.g. &quot;v1b3&quot;
+            &quot;logUploadLocation&quot;: &quot;A String&quot;, # Indicates where to put logs.  If this is not specified, the logs
+                # will not be uploaded.
+                #
+                # The supported resource type is:
+                #
+                # Google Cloud Storage:
+                #   storage.googleapis.com/{bucket}/{object}
+                #   bucket.storage.googleapis.com/{object}
+            &quot;streamingWorkerMainClass&quot;: &quot;A String&quot;, # The streaming worker main class name.
+            &quot;workflowFileName&quot;: &quot;A String&quot;, # The file to store the workflow in.
+            &quot;baseTaskDir&quot;: &quot;A String&quot;, # The location on the worker for task-specific subdirectories.
+            &quot;tempStoragePrefix&quot;: &quot;A String&quot;, # The prefix of the resources the taskrunner should use for
+                # temporary storage.
+                #
+                # The supported resource type is:
+                #
+                # Google Cloud Storage:
+                #   storage.googleapis.com/{bucket}/{object}
+                #   bucket.storage.googleapis.com/{object}
+            &quot;commandlinesFileName&quot;: &quot;A String&quot;, # The file to store preprocessing commands in.
+            &quot;languageHint&quot;: &quot;A String&quot;, # The suggested backend language.
+            &quot;baseUrl&quot;: &quot;A String&quot;, # The base URL for the taskrunner to use when accessing Google Cloud APIs.
+                #
+                # When workers access Google Cloud APIs, they logically do so via
+                # relative URLs.  If this field is specified, it supplies the base
+                # URL to use for resolving these relative URLs.  The normative
+                # algorithm used is defined by RFC 1808, &quot;Relative Uniform Resource
+                # Locators&quot;.
+                #
+                # If not specified, the default value is &quot;http://www.googleapis.com/&quot;
+            &quot;logToSerialconsole&quot;: True or False, # Whether to send taskrunner log info to Google Compute Engine VM serial
+                # console.
+            &quot;continueOnException&quot;: True or False, # Whether to continue taskrunner if an exception is hit.
+            &quot;parallelWorkerSettings&quot;: { # Provides data to pass through to the worker harness. # The settings to pass to the parallel worker harness.
+              &quot;baseUrl&quot;: &quot;A String&quot;, # The base URL for accessing Google Cloud APIs.
+                  #
+                  # When workers access Google Cloud APIs, they logically do so via
+                  # relative URLs.  If this field is specified, it supplies the base
+                  # URL to use for resolving these relative URLs.  The normative
+                  # algorithm used is defined by RFC 1808, &quot;Relative Uniform Resource
+                  # Locators&quot;.
+                  #
+                  # If not specified, the default value is &quot;http://www.googleapis.com/&quot;
+              &quot;reportingEnabled&quot;: True or False, # Whether to send work progress updates to the service.
+              &quot;servicePath&quot;: &quot;A String&quot;, # The Cloud Dataflow service path relative to the root URL, for example,
+                  # &quot;dataflow/v1b3/projects&quot;.
+              &quot;shuffleServicePath&quot;: &quot;A String&quot;, # The Shuffle service path relative to the root URL, for example,
+                  # &quot;shuffle/v1beta1&quot;.
+              &quot;workerId&quot;: &quot;A String&quot;, # The ID of the worker running this pipeline.
+              &quot;tempStoragePrefix&quot;: &quot;A String&quot;, # The prefix of the resources the system should use for temporary
+                  # storage.
+                  #
+                  # The supported resource type is:
+                  #
+                  # Google Cloud Storage:
+                  #
+                  #   storage.googleapis.com/{bucket}/{object}
+                  #   bucket.storage.googleapis.com/{object}
+            },
+            &quot;vmId&quot;: &quot;A String&quot;, # The ID string of the VM.
+            &quot;taskUser&quot;: &quot;A String&quot;, # The UNIX user ID on the worker VM to use for tasks launched by
+                # taskrunner; e.g. &quot;root&quot;.
+          },
+          &quot;autoscalingSettings&quot;: { # Settings for WorkerPool autoscaling. # Settings for autoscaling of this WorkerPool.
+            &quot;maxNumWorkers&quot;: 42, # The maximum number of workers to cap scaling at.
+            &quot;algorithm&quot;: &quot;A String&quot;, # The algorithm to use for autoscaling.
+          },
+          &quot;metadata&quot;: { # Metadata to set on the Google Compute Engine VMs.
+            &quot;a_key&quot;: &quot;A String&quot;,
+          },
         },
       ],
-      "clusterManagerApiService": "A String", # The type of cluster manager API to use.  If unknown or
+      &quot;dataset&quot;: &quot;A String&quot;, # The dataset for the current project where various workflow
+          # related tables are stored.
+          #
+          # The supported resource type is:
+          #
+          # Google BigQuery:
+          #   bigquery.googleapis.com/{dataset}
+      &quot;internalExperiments&quot;: { # Experimental settings.
+        &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+      },
+      &quot;workerRegion&quot;: &quot;A String&quot;, # The Compute Engine region
+          # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
+          # which worker processing should occur, e.g. &quot;us-west1&quot;. Mutually exclusive
+          # with worker_zone. If neither worker_region nor worker_zone is specified,
+          # default to the control plane&#x27;s region.
+      &quot;serviceKmsKeyName&quot;: &quot;A String&quot;, # If set, contains the Cloud KMS key identifier used to encrypt data
+          # at rest, AKA a Customer Managed Encryption Key (CMEK).
+          #
+          # Format:
+          #   projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
+      &quot;userAgent&quot;: { # A description of the process that generated the request.
+        &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+      },
+      &quot;workerZone&quot;: &quot;A String&quot;, # The Compute Engine zone
+          # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
+          # which worker processing should occur, e.g. &quot;us-west1-a&quot;. Mutually exclusive
+          # with worker_region. If neither worker_region nor worker_zone is specified,
+          # a zone in the control plane&#x27;s region is chosen based on available capacity.
+      &quot;clusterManagerApiService&quot;: &quot;A String&quot;, # The type of cluster manager API to use.  If unknown or
           # unspecified, the service will attempt to choose a reasonable
           # default.  This should be in the form of the API service name,
-          # e.g. "compute.googleapis.com".
-      "tempStoragePrefix": "A String", # The prefix of the resources the system should use for temporary
-          # storage.  The system will append the suffix "/temp-{JOBNAME} to
+          # e.g. &quot;compute.googleapis.com&quot;.
+      &quot;tempStoragePrefix&quot;: &quot;A String&quot;, # The prefix of the resources the system should use for temporary
+          # storage.  The system will append the suffix &quot;/temp-{JOBNAME}&quot; to
           # this resource prefix, where {JOBNAME} is the value of the
           # job_name field.  The resulting bucket and object prefix is used
           # as the prefix of the resources used to store temporary data
@@ -1180,11 +993,199 @@
           #
           #   storage.googleapis.com/{bucket}/{object}
           #   bucket.storage.googleapis.com/{object}
+      &quot;experiments&quot;: [ # The list of experiments to enable.
+        &quot;A String&quot;,
+      ],
+      &quot;version&quot;: { # A structure describing which components and their versions of the service
+          # are required in order to run the job.
+        &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+      },
+      &quot;serviceAccountEmail&quot;: &quot;A String&quot;, # Identity to run virtual machines as. Defaults to the default account.
     },
-    "location": "A String", # The [regional endpoint]
-        # (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that
-        # contains this job.
-    "tempFiles": [ # A set of files the system should be aware of that are used
+    &quot;stageStates&quot;: [ # This field may be mutated by the Cloud Dataflow service;
+        # callers cannot mutate it.
+      { # A message describing the state of a particular execution stage.
+        &quot;executionStageName&quot;: &quot;A String&quot;, # The name of the execution stage.
+        &quot;currentStateTime&quot;: &quot;A String&quot;, # The time at which the stage transitioned to this state.
+        &quot;executionStageState&quot;: &quot;A String&quot;, # Execution stage states allow the same set of values as JobState.
+      },
+    ],
+    &quot;jobMetadata&quot;: { # Metadata available primarily for filtering jobs. Will be included in the # This field is populated by the Dataflow service to support filtering jobs
+        # by the metadata values provided here. Populated for ListJobs and all GetJob
+        # views SUMMARY and higher.
+        # ListJob response and Job SUMMARY view.
+      &quot;bigTableDetails&quot;: [ # Identification of a BigTable source used in the Dataflow job.
+        { # Metadata for a BigTable connector used by the job.
+          &quot;tableId&quot;: &quot;A String&quot;, # TableId accessed in the connection.
+          &quot;projectId&quot;: &quot;A String&quot;, # ProjectId accessed in the connection.
+          &quot;instanceId&quot;: &quot;A String&quot;, # InstanceId accessed in the connection.
+        },
+      ],
+      &quot;spannerDetails&quot;: [ # Identification of a Spanner source used in the Dataflow job.
+        { # Metadata for a Spanner connector used by the job.
+          &quot;databaseId&quot;: &quot;A String&quot;, # DatabaseId accessed in the connection.
+          &quot;instanceId&quot;: &quot;A String&quot;, # InstanceId accessed in the connection.
+          &quot;projectId&quot;: &quot;A String&quot;, # ProjectId accessed in the connection.
+        },
+      ],
+      &quot;datastoreDetails&quot;: [ # Identification of a Datastore source used in the Dataflow job.
+        { # Metadata for a Datastore connector used by the job.
+          &quot;projectId&quot;: &quot;A String&quot;, # ProjectId accessed in the connection.
+          &quot;namespace&quot;: &quot;A String&quot;, # Namespace used in the connection.
+        },
+      ],
+      &quot;sdkVersion&quot;: { # The version of the SDK used to run the job. # The SDK version used to run the job.
+        &quot;versionDisplayName&quot;: &quot;A String&quot;, # A readable string describing the version of the SDK.
+        &quot;sdkSupportStatus&quot;: &quot;A String&quot;, # The support status for this SDK version.
+        &quot;version&quot;: &quot;A String&quot;, # The version of the SDK used to run the job.
+      },
+      &quot;bigqueryDetails&quot;: [ # Identification of a BigQuery source used in the Dataflow job.
+        { # Metadata for a BigQuery connector used by the job.
+          &quot;table&quot;: &quot;A String&quot;, # Table accessed in the connection.
+          &quot;dataset&quot;: &quot;A String&quot;, # Dataset accessed in the connection.
+          &quot;projectId&quot;: &quot;A String&quot;, # Project accessed in the connection.
+          &quot;query&quot;: &quot;A String&quot;, # Query used to access data in the connection.
+        },
+      ],
+      &quot;fileDetails&quot;: [ # Identification of a File source used in the Dataflow job.
+        { # Metadata for a File connector used by the job.
+          &quot;filePattern&quot;: &quot;A String&quot;, # File Pattern used to access files by the connector.
+        },
+      ],
+      &quot;pubsubDetails&quot;: [ # Identification of a PubSub source used in the Dataflow job.
+        { # Metadata for a PubSub connector used by the job.
+          &quot;subscription&quot;: &quot;A String&quot;, # Subscription used in the connection.
+          &quot;topic&quot;: &quot;A String&quot;, # Topic accessed in the connection.
+        },
+      ],
+    },
+    &quot;createdFromSnapshotId&quot;: &quot;A String&quot;, # If this is specified, the job&#x27;s initial state is populated from the given
+        # snapshot.
+    &quot;projectId&quot;: &quot;A String&quot;, # The ID of the Cloud Platform project that the job belongs to.
+    &quot;type&quot;: &quot;A String&quot;, # The type of Cloud Dataflow job.
+    &quot;pipelineDescription&quot;: { # A descriptive representation of submitted pipeline as well as the executed # Preliminary field: The format of this data may change at any time.
+        # A description of the user pipeline and stages through which it is executed.
+        # Created by Cloud Dataflow service.  Only retrieved with
+        # JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
+        # form.  This data is provided by the Dataflow service for ease of visualizing
+        # the pipeline and interpreting Dataflow provided metrics.
+      &quot;executionPipelineStage&quot;: [ # Description of each stage of execution of the pipeline.
+        { # Description of the composing transforms, names/ids, and input/outputs of a
+            # stage of execution.  Some composing transforms and sources may have been
+            # generated by the Dataflow service during execution planning.
+          &quot;id&quot;: &quot;A String&quot;, # Dataflow service generated id for this stage.
+          &quot;componentTransform&quot;: [ # Transforms that comprise this execution stage.
+            { # Description of a transform executed as part of an execution stage.
+              &quot;originalTransform&quot;: &quot;A String&quot;, # User name for the original user transform with which this transform is
+                  # most closely associated.
+              &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+              &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this transform; may be user or system generated.
+            },
+          ],
+          &quot;componentSource&quot;: [ # Collections produced and consumed by component transforms of this stage.
+            { # Description of an interstitial value between transforms in an execution
+                # stage.
+              &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+              &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this transform; may be user or system generated.
+              &quot;originalTransformOrCollection&quot;: &quot;A String&quot;, # User name for the original user transform or collection with which this
+                  # source is most closely associated.
+            },
+          ],
+          &quot;kind&quot;: &quot;A String&quot;, # Type of transform this stage is executing.
+          &quot;outputSource&quot;: [ # Output sources for this stage.
+            { # Description of an input or output of an execution stage.
+              &quot;originalTransformOrCollection&quot;: &quot;A String&quot;, # User name for the original user transform or collection with which this
+                  # source is most closely associated.
+              &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+              &quot;sizeBytes&quot;: &quot;A String&quot;, # Size of the source, if measurable.
+              &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this source; may be user or system generated.
+            },
+          ],
+          &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this stage.
+          &quot;inputSource&quot;: [ # Input sources for this stage.
+            { # Description of an input or output of an execution stage.
+              &quot;originalTransformOrCollection&quot;: &quot;A String&quot;, # User name for the original user transform or collection with which this
+                  # source is most closely associated.
+              &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+              &quot;sizeBytes&quot;: &quot;A String&quot;, # Size of the source, if measurable.
+              &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this source; may be user or system generated.
+            },
+          ],
+        },
+      ],
+      &quot;originalPipelineTransform&quot;: [ # Description of each transform in the pipeline and collections between them.
+        { # Description of the type, names/ids, and input/outputs for a transform.
+          &quot;kind&quot;: &quot;A String&quot;, # Type of transform.
+          &quot;inputCollectionName&quot;: [ # User names for all collection inputs to this transform.
+            &quot;A String&quot;,
+          ],
+          &quot;name&quot;: &quot;A String&quot;, # User provided name for this transform instance.
+          &quot;id&quot;: &quot;A String&quot;, # SDK generated id of this transform instance.
+          &quot;displayData&quot;: [ # Transform-specific display data.
+            { # Data provided with a pipeline or transform to provide descriptive info.
+              &quot;timestampValue&quot;: &quot;A String&quot;, # Contains value if the data is of timestamp type.
+              &quot;boolValue&quot;: True or False, # Contains value if the data is of a boolean type.
+              &quot;javaClassValue&quot;: &quot;A String&quot;, # Contains value if the data is of java class type.
+              &quot;strValue&quot;: &quot;A String&quot;, # Contains value if the data is of string type.
+              &quot;int64Value&quot;: &quot;A String&quot;, # Contains value if the data is of int64 type.
+              &quot;durationValue&quot;: &quot;A String&quot;, # Contains value if the data is of duration type.
+              &quot;namespace&quot;: &quot;A String&quot;, # The namespace for the key. This is usually a class name or programming
+                  # language namespace (e.g. a Python module) which defines the display data.
+                  # This allows a dax monitoring system to specially handle the data
+                  # and perform custom rendering.
+              &quot;floatValue&quot;: 3.14, # Contains value if the data is of float type.
+              &quot;key&quot;: &quot;A String&quot;, # The key identifying the display data.
+                  # This is intended to be used as a label for the display data
+                  # when viewed in a dax monitoring system.
+              &quot;shortStrValue&quot;: &quot;A String&quot;, # A possible additional shorter value to display.
+                  # For example a java_class_name_value of com.mypackage.MyDoFn
+                  # will be stored with MyDoFn as the short_str_value and
+                  # com.mypackage.MyDoFn as the java_class_name value.
+                  # short_str_value can be displayed and java_class_name_value
+                  # will be displayed as a tooltip.
+              &quot;url&quot;: &quot;A String&quot;, # An optional full URL.
+              &quot;label&quot;: &quot;A String&quot;, # An optional label to display in a dax UI for the element.
+            },
+          ],
+          &quot;outputCollectionName&quot;: [ # User names for all collection outputs to this transform.
+            &quot;A String&quot;,
+          ],
+        },
+      ],
+      &quot;displayData&quot;: [ # Pipeline level display data.
+        { # Data provided with a pipeline or transform to provide descriptive info.
+          &quot;timestampValue&quot;: &quot;A String&quot;, # Contains value if the data is of timestamp type.
+          &quot;boolValue&quot;: True or False, # Contains value if the data is of a boolean type.
+          &quot;javaClassValue&quot;: &quot;A String&quot;, # Contains value if the data is of java class type.
+          &quot;strValue&quot;: &quot;A String&quot;, # Contains value if the data is of string type.
+          &quot;int64Value&quot;: &quot;A String&quot;, # Contains value if the data is of int64 type.
+          &quot;durationValue&quot;: &quot;A String&quot;, # Contains value if the data is of duration type.
+          &quot;namespace&quot;: &quot;A String&quot;, # The namespace for the key. This is usually a class name or programming
+              # language namespace (e.g. a Python module) which defines the display data.
+              # This allows a dax monitoring system to specially handle the data
+              # and perform custom rendering.
+          &quot;floatValue&quot;: 3.14, # Contains value if the data is of float type.
+          &quot;key&quot;: &quot;A String&quot;, # The key identifying the display data.
+              # This is intended to be used as a label for the display data
+              # when viewed in a dax monitoring system.
+          &quot;shortStrValue&quot;: &quot;A String&quot;, # A possible additional shorter value to display.
+              # For example a java_class_name_value of com.mypackage.MyDoFn
+              # will be stored with MyDoFn as the short_str_value and
+              # com.mypackage.MyDoFn as the java_class_name value.
+              # short_str_value can be displayed and java_class_name_value
+              # will be displayed as a tooltip.
+          &quot;url&quot;: &quot;A String&quot;, # An optional full URL.
+          &quot;label&quot;: &quot;A String&quot;, # An optional label to display in a dax UI for the element.
+        },
+      ],
+    },
+    &quot;replaceJobId&quot;: &quot;A String&quot;, # If this job is an update of an existing job, this field is the job ID
+        # of the job it replaced.
+        #
+        # When sending a `CreateJobRequest`, you can update a job by specifying it
+        # here. The job named here is stopped, and its intermediate state is
+        # transferred to this job.
+    &quot;tempFiles&quot;: [ # A set of files the system should be aware of that are used
         # for temporary storage. These temporary files will be
         # removed on job completion.
         # No duplicates are allowed.
@@ -1196,36 +1197,9 @@
         #
         #    storage.googleapis.com/{bucket}/{object}
         #    bucket.storage.googleapis.com/{object}
-      "A String",
+      &quot;A String&quot;,
     ],
-    "type": "A String", # The type of Cloud Dataflow job.
-    "clientRequestId": "A String", # The client's unique identifier of the job, re-used across retried attempts.
-        # If this field is set, the service will ensure its uniqueness.
-        # The request to create a job will fail if the service has knowledge of a
-        # previously submitted job with the same client's ID and job name.
-        # The caller may use this field to ensure idempotence of job
-        # creation across retried attempts to create a job.
-        # By default, the field is empty and, in that case, the service ignores it.
-    "createdFromSnapshotId": "A String", # If this is specified, the job's initial state is populated from the given
-        # snapshot.
-    "stepsLocation": "A String", # The GCS location where the steps are stored.
-    "currentStateTime": "A String", # The timestamp associated with the current state.
-    "startTime": "A String", # The timestamp when the job was started (transitioned to JOB_STATE_PENDING).
-        # Flexible resource scheduling jobs are started with some delay after job
-        # creation, so start_time is unset before start and is updated when the
-        # job is started by the Cloud Dataflow service. For other jobs, start_time
-        # always equals to create_time and is immutable and set by the Cloud Dataflow
-        # service.
-    "createTime": "A String", # The timestamp when the job was initially created. Immutable and set by the
-        # Cloud Dataflow service.
-    "requestedState": "A String", # The job's requested state.
-        #
-        # `UpdateJob` may be used to switch between the `JOB_STATE_STOPPED` and
-        # `JOB_STATE_RUNNING` states, by setting requested_state.  `UpdateJob` may
-        # also be used to directly set a job's requested state to
-        # `JOB_STATE_CANCELLED` or `JOB_STATE_DONE`, irrevocably terminating the
-        # job if it has not already reached a terminal state.
-    "name": "A String", # The user-specified Cloud Dataflow job name.
+    &quot;name&quot;: &quot;A String&quot;, # The user-specified Cloud Dataflow job name.
         #
         # Only one Job with a given name may exist in a project at any
         # given time. If a caller attempts to create a Job with the same
@@ -1234,7 +1208,7 @@
         #
         # The name must match the regular expression
         # `[a-z]([-a-z0-9]{0,38}[a-z0-9])?`
-    "steps": [ # Exactly one of step or steps_location should be specified.
+    &quot;steps&quot;: [ # Exactly one of step or steps_location should be specified.
         #
         # The top-level steps that constitute the entire job.
       { # Defines a particular step within a Cloud Dataflow job.
@@ -1243,11 +1217,11 @@
           # specific operation as part of the overall job.  Data is typically
           # passed from one step to another as part of the job.
           #
-          # Here's an example of a sequence of steps which together implement a
+          # Here&#x27;s an example of a sequence of steps which together implement a
           # Map-Reduce job:
           #
           #   * Read a collection of data from some source, parsing the
-          #     collection's elements.
+          #     collection&#x27;s elements.
           #
           #   * Validate the elements.
           #
@@ -1262,23 +1236,32 @@
           #
           # Note that the Cloud Dataflow service may be used to run many different
           # types of jobs, not just Map-Reduce.
-        "kind": "A String", # The kind of step in the Cloud Dataflow job.
-        "name": "A String", # The name that identifies the step. This must be unique for each
+        &quot;name&quot;: &quot;A String&quot;, # The name that identifies the step. This must be unique for each
             # step with respect to all other steps in the Cloud Dataflow job.
-        "properties": { # Named properties associated with the step. Each kind of
+        &quot;kind&quot;: &quot;A String&quot;, # The kind of step in the Cloud Dataflow job.
+        &quot;properties&quot;: { # Named properties associated with the step. Each kind of
             # predefined step has its own required set of properties.
             # Must be provided on Create.  Only retrieved with JOB_VIEW_ALL.
-          "a_key": "", # Properties of the object.
+          &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
         },
       },
     ],
-    "replaceJobId": "A String", # If this job is an update of an existing job, this field is the job ID
-        # of the job it replaced.
-        #
-        # When sending a `CreateJobRequest`, you can update a job by specifying it
-        # here. The job named here is stopped, and its intermediate state is
-        # transferred to this job.
-    "currentState": "A String", # The current state of the job.
+    &quot;replacedByJobId&quot;: &quot;A String&quot;, # If another job is an update of this job (and thus, this job is in
+        # `JOB_STATE_UPDATED`), this field contains the ID of that job.
+    &quot;executionInfo&quot;: { # Additional information about how a Cloud Dataflow job will be executed that # Deprecated.
+        # isn&#x27;t contained in the submitted job.
+      &quot;stages&quot;: { # A mapping from each stage to the information about that stage.
+        &quot;a_key&quot;: { # Contains information about how a particular
+            # google.dataflow.v1beta3.Step will be executed.
+          &quot;stepName&quot;: [ # The steps associated with the execution stage.
+              # Note that stages may have several steps, and that a given step
+              # might be run by more than one stage.
+            &quot;A String&quot;,
+          ],
+        },
+      },
+    },
+    &quot;currentState&quot;: &quot;A String&quot;, # The current state of the job.
         #
         # Jobs are created in the `JOB_STATE_STOPPED` state unless otherwise
         # specified.
@@ -1289,24 +1272,41 @@
         #
         # This field may be mutated by the Cloud Dataflow service;
         # callers cannot mutate it.
-    "executionInfo": { # Additional information about how a Cloud Dataflow job will be executed that # Deprecated.
-        # isn't contained in the submitted job.
-      "stages": { # A mapping from each stage to the information about that stage.
-        "a_key": { # Contains information about how a particular
-            # google.dataflow.v1beta3.Step will be executed.
-          "stepName": [ # The steps associated with the execution stage.
-              # Note that stages may have several steps, and that a given step
-              # might be run by more than one stage.
-            "A String",
-          ],
-        },
-      },
+    &quot;location&quot;: &quot;A String&quot;, # The [regional endpoint]
+        # (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that
+        # contains this job.
+    &quot;startTime&quot;: &quot;A String&quot;, # The timestamp when the job was started (transitioned to JOB_STATE_PENDING).
+        # Flexible resource scheduling jobs are started with some delay after job
+        # creation, so start_time is unset before start and is updated when the
+        # job is started by the Cloud Dataflow service. For other jobs, start_time
+        # always equals create_time and is immutable and set by the Cloud Dataflow
+        # service.
+    &quot;stepsLocation&quot;: &quot;A String&quot;, # The GCS location where the steps are stored.
+    &quot;labels&quot;: { # User-defined labels for this job.
+        #
+        # The labels map can contain no more than 64 entries.  Entries of the labels
+        # map are UTF8 strings that comply with the following restrictions:
+        #
+        # * Keys must conform to regexp:  \p{Ll}\p{Lo}{0,62}
+        # * Values must conform to regexp:  [\p{Ll}\p{Lo}\p{N}_-]{0,63}
+        # * Both keys and values are additionally constrained to be &lt;= 128 bytes in
+        # size.
+      &quot;a_key&quot;: &quot;A String&quot;,
     },
+    &quot;createTime&quot;: &quot;A String&quot;, # The timestamp when the job was initially created. Immutable and set by the
+        # Cloud Dataflow service.
+    &quot;requestedState&quot;: &quot;A String&quot;, # The job&#x27;s requested state.
+        #
+        # `UpdateJob` may be used to switch between the `JOB_STATE_STOPPED` and
+        # `JOB_STATE_RUNNING` states, by setting requested_state.  `UpdateJob` may
+        # also be used to directly set a job&#x27;s requested state to
+        # `JOB_STATE_CANCELLED` or `JOB_STATE_DONE`, irrevocably terminating the
+        # job if it has not already reached a terminal state.
   }</pre>
 </div>
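+<div class="method">
+    <p>A minimal usage sketch for the create() call documented above, using the google-api-python-client discovery interface. It assumes application-default credentials are available; the project, region, bucket, and job-body values below are illustrative placeholders, and a real request must also set exactly one of steps or stepsLocation as described in the schema.</p>
+  <pre>
+from googleapiclient.discovery import build
+
+# Build a Dataflow v1b3 client (credentials are picked up from the environment).
+dataflow = build(&quot;dataflow&quot;, &quot;v1b3&quot;)
+
+# Skeleton job body; field names follow the schema above, values are placeholders.
+job_body = {
+    &quot;name&quot;: &quot;example-wordcount&quot;,   # must match [a-z]([-a-z0-9]{0,38}[a-z0-9])?
+    &quot;type&quot;: &quot;JOB_TYPE_BATCH&quot;,      # assumed enum value for a batch job
+    &quot;environment&quot;: {
+        &quot;tempStoragePrefix&quot;: &quot;storage.googleapis.com/example-bucket/temp&quot;,
+    },
+    # Exactly one of &quot;steps&quot; or &quot;stepsLocation&quot; must also be supplied here.
+}
+
+response = dataflow.projects().locations().jobs().create(
+    projectId=&quot;example-project&quot;,
+    location=&quot;us-central1&quot;,
+    body=job_body,
+).execute()
+
+print(response[&quot;id&quot;])  # the service-assigned job ID
+</pre>
+</div>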
 
 <div class="method">
-    <code class="details" id="get">get(projectId, location, jobId, x__xgafv=None, view=None)</code>
+    <code class="details" id="get">get(projectId, location, jobId, view=None, x__xgafv=None)</code>
   <pre>Gets the state of the specified Cloud Dataflow job.
 
 To get the state of a job, we recommend using `projects.locations.jobs.get`
@@ -1321,392 +1321,81 @@
 (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that
 contains this job. (required)
   jobId: string, The job ID. (required)
+  view: string, The level of information requested in response.
   x__xgafv: string, V1 error format.
     Allowed values
       1 - v1 error format
       2 - v2 error format
-  view: string, The level of information requested in response.
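+
+  # A hypothetical invocation sketch (placeholder values only), assuming a
+  # &quot;dataflow&quot; v1b3 client built as in the create() example above:
+  #
+  #   job = dataflow.projects().locations().jobs().get(
+  #       projectId=&quot;example-project&quot;,
+  #       location=&quot;us-central1&quot;,
+  #       jobId=&quot;2020-01-01_00_00_00-1234567890123456789&quot;,
+  #       view=&quot;JOB_VIEW_SUMMARY&quot;,   # assumed view enum value
+  #   ).execute()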
 
 Returns:
   An object of the form:
 
     { # Defines a job to be run by the Cloud Dataflow service.
-    "labels": { # User-defined labels for this job.
-        #
-        # The labels map can contain no more than 64 entries.  Entries of the labels
-        # map are UTF8 strings that comply with the following restrictions:
-        #
-        # * Keys must conform to regexp:  \p{Ll}\p{Lo}{0,62}
-        # * Values must conform to regexp:  [\p{Ll}\p{Lo}\p{N}_-]{0,63}
-        # * Both keys and values are additionally constrained to be &lt;= 128 bytes in
-        # size.
-      "a_key": "A String",
-    },
-    "jobMetadata": { # Metadata available primarily for filtering jobs. Will be included in the # This field is populated by the Dataflow service to support filtering jobs
-        # by the metadata values provided here. Populated for ListJobs and all GetJob
-        # views SUMMARY and higher.
-        # ListJob response and Job SUMMARY view.
-      "sdkVersion": { # The version of the SDK used to run the job. # The SDK version used to run the job.
-        "versionDisplayName": "A String", # A readable string describing the version of the SDK.
-        "version": "A String", # The version of the SDK used to run the job.
-        "sdkSupportStatus": "A String", # The support status for this SDK version.
-      },
-      "pubsubDetails": [ # Identification of a PubSub source used in the Dataflow job.
-        { # Metadata for a PubSub connector used by the job.
-          "topic": "A String", # Topic accessed in the connection.
-          "subscription": "A String", # Subscription used in the connection.
-        },
-      ],
-      "datastoreDetails": [ # Identification of a Datastore source used in the Dataflow job.
-        { # Metadata for a Datastore connector used by the job.
-          "projectId": "A String", # ProjectId accessed in the connection.
-          "namespace": "A String", # Namespace used in the connection.
-        },
-      ],
-      "fileDetails": [ # Identification of a File source used in the Dataflow job.
-        { # Metadata for a File connector used by the job.
-          "filePattern": "A String", # File Pattern used to access files by the connector.
-        },
-      ],
-      "spannerDetails": [ # Identification of a Spanner source used in the Dataflow job.
-        { # Metadata for a Spanner connector used by the job.
-          "instanceId": "A String", # InstanceId accessed in the connection.
-          "projectId": "A String", # ProjectId accessed in the connection.
-          "databaseId": "A String", # DatabaseId accessed in the connection.
-        },
-      ],
-      "bigTableDetails": [ # Identification of a BigTable source used in the Dataflow job.
-        { # Metadata for a BigTable connector used by the job.
-          "instanceId": "A String", # InstanceId accessed in the connection.
-          "projectId": "A String", # ProjectId accessed in the connection.
-          "tableId": "A String", # TableId accessed in the connection.
-        },
-      ],
-      "bigqueryDetails": [ # Identification of a BigQuery source used in the Dataflow job.
-        { # Metadata for a BigQuery connector used by the job.
-          "projectId": "A String", # Project accessed in the connection.
-          "query": "A String", # Query used to access data in the connection.
-          "table": "A String", # Table accessed in the connection.
-          "dataset": "A String", # Dataset accessed in the connection.
-        },
-      ],
-    },
-    "pipelineDescription": { # A descriptive representation of submitted pipeline as well as the executed # Preliminary field: The format of this data may change at any time.
-        # A description of the user pipeline and stages through which it is executed.
-        # Created by Cloud Dataflow service.  Only retrieved with
-        # JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
-        # form.  This data is provided by the Dataflow service for ease of visualizing
-        # the pipeline and interpreting Dataflow provided metrics.
-      "originalPipelineTransform": [ # Description of each transform in the pipeline and collections between them.
-        { # Description of the type, names/ids, and input/outputs for a transform.
-          "kind": "A String", # Type of transform.
-          "name": "A String", # User provided name for this transform instance.
-          "inputCollectionName": [ # User names for all collection inputs to this transform.
-            "A String",
-          ],
-          "displayData": [ # Transform-specific display data.
-            { # Data provided with a pipeline or transform to provide descriptive info.
-              "key": "A String", # The key identifying the display data.
-                  # This is intended to be used as a label for the display data
-                  # when viewed in a dax monitoring system.
-              "shortStrValue": "A String", # A possible additional shorter value to display.
-                  # For example a java_class_name_value of com.mypackage.MyDoFn
-                  # will be stored with MyDoFn as the short_str_value and
-                  # com.mypackage.MyDoFn as the java_class_name value.
-                  # short_str_value can be displayed and java_class_name_value
-                  # will be displayed as a tooltip.
-              "timestampValue": "A String", # Contains value if the data is of timestamp type.
-              "url": "A String", # An optional full URL.
-              "floatValue": 3.14, # Contains value if the data is of float type.
-              "namespace": "A String", # The namespace for the key. This is usually a class name or programming
-                  # language namespace (i.e. python module) which defines the display data.
-                  # This allows a dax monitoring system to specially handle the data
-                  # and perform custom rendering.
-              "javaClassValue": "A String", # Contains value if the data is of java class type.
-              "label": "A String", # An optional label to display in a dax UI for the element.
-              "boolValue": True or False, # Contains value if the data is of a boolean type.
-              "strValue": "A String", # Contains value if the data is of string type.
-              "durationValue": "A String", # Contains value if the data is of duration type.
-              "int64Value": "A String", # Contains value if the data is of int64 type.
-            },
-          ],
-          "outputCollectionName": [ # User  names for all collection outputs to this transform.
-            "A String",
-          ],
-          "id": "A String", # SDK generated id of this transform instance.
-        },
-      ],
-      "executionPipelineStage": [ # Description of each stage of execution of the pipeline.
-        { # Description of the composing transforms, names/ids, and input/outputs of a
-            # stage of execution.  Some composing transforms and sources may have been
-            # generated by the Dataflow service during execution planning.
-          "componentSource": [ # Collections produced and consumed by component transforms of this stage.
-            { # Description of an interstitial value between transforms in an execution
-                # stage.
-              "userName": "A String", # Human-readable name for this transform; may be user or system generated.
-              "originalTransformOrCollection": "A String", # User name for the original user transform or collection with which this
-                  # source is most closely associated.
-              "name": "A String", # Dataflow service generated name for this source.
-            },
-          ],
-          "kind": "A String", # Type of tranform this stage is executing.
-          "name": "A String", # Dataflow service generated name for this stage.
-          "outputSource": [ # Output sources for this stage.
-            { # Description of an input or output of an execution stage.
-              "userName": "A String", # Human-readable name for this source; may be user or system generated.
-              "sizeBytes": "A String", # Size of the source, if measurable.
-              "name": "A String", # Dataflow service generated name for this source.
-              "originalTransformOrCollection": "A String", # User name for the original user transform or collection with which this
-                  # source is most closely associated.
-            },
-          ],
-          "inputSource": [ # Input sources for this stage.
-            { # Description of an input or output of an execution stage.
-              "userName": "A String", # Human-readable name for this source; may be user or system generated.
-              "sizeBytes": "A String", # Size of the source, if measurable.
-              "name": "A String", # Dataflow service generated name for this source.
-              "originalTransformOrCollection": "A String", # User name for the original user transform or collection with which this
-                  # source is most closely associated.
-            },
-          ],
-          "componentTransform": [ # Transforms that comprise this execution stage.
-            { # Description of a transform executed as part of an execution stage.
-              "userName": "A String", # Human-readable name for this transform; may be user or system generated.
-              "originalTransform": "A String", # User name for the original user transform with which this transform is
-                  # most closely associated.
-              "name": "A String", # Dataflow service generated name for this source.
-            },
-          ],
-          "id": "A String", # Dataflow service generated id for this stage.
-        },
-      ],
-      "displayData": [ # Pipeline level display data.
-        { # Data provided with a pipeline or transform to provide descriptive info.
-          "key": "A String", # The key identifying the display data.
-              # This is intended to be used as a label for the display data
-              # when viewed in a dax monitoring system.
-          "shortStrValue": "A String", # A possible additional shorter value to display.
-              # For example a java_class_name_value of com.mypackage.MyDoFn
-              # will be stored with MyDoFn as the short_str_value and
-              # com.mypackage.MyDoFn as the java_class_name value.
-              # short_str_value can be displayed and java_class_name_value
-              # will be displayed as a tooltip.
-          "timestampValue": "A String", # Contains value if the data is of timestamp type.
-          "url": "A String", # An optional full URL.
-          "floatValue": 3.14, # Contains value if the data is of float type.
-          "namespace": "A String", # The namespace for the key. This is usually a class name or programming
-              # language namespace (i.e. python module) which defines the display data.
-              # This allows a dax monitoring system to specially handle the data
-              # and perform custom rendering.
-          "javaClassValue": "A String", # Contains value if the data is of java class type.
-          "label": "A String", # An optional label to display in a dax UI for the element.
-          "boolValue": True or False, # Contains value if the data is of a boolean type.
-          "strValue": "A String", # Contains value if the data is of string type.
-          "durationValue": "A String", # Contains value if the data is of duration type.
-          "int64Value": "A String", # Contains value if the data is of int64 type.
-        },
-      ],
-    },
-    "stageStates": [ # This field may be mutated by the Cloud Dataflow service;
-        # callers cannot mutate it.
-      { # A message describing the state of a particular execution stage.
-        "executionStageName": "A String", # The name of the execution stage.
-        "executionStageState": "A String", # Executions stage states allow the same set of values as JobState.
-        "currentStateTime": "A String", # The time at which the stage transitioned to this state.
-      },
-    ],
-    "id": "A String", # The unique ID of this job.
+    &quot;clientRequestId&quot;: &quot;A String&quot;, # The client&#x27;s unique identifier of the job, re-used across retried attempts.
+        # If this field is set, the service will ensure its uniqueness.
+        # The request to create a job will fail if the service has knowledge of a
+        # previously submitted job with the same client&#x27;s ID and job name.
+        # The caller may use this field to ensure idempotence of job
+        # creation across retried attempts to create a job.
+        # By default, the field is empty and, in that case, the service ignores it.
+    &quot;id&quot;: &quot;A String&quot;, # The unique ID of this job.
         #
         # This field is set by the Cloud Dataflow service when the Job is
         # created, and is immutable for the life of the job.
-    "replacedByJobId": "A String", # If another job is an update of this job (and thus, this job is in
-        # `JOB_STATE_UPDATED`), this field contains the ID of that job.
-    "projectId": "A String", # The ID of the Cloud Platform project that the job belongs to.
-    "transformNameMapping": { # The map of transform name prefixes of the job to be replaced to the
+    &quot;currentStateTime&quot;: &quot;A String&quot;, # The timestamp associated with the current state.
+    &quot;transformNameMapping&quot;: { # The map of transform name prefixes of the job to be replaced to the
         # corresponding name prefixes of the new job.
-      "a_key": "A String",
+      &quot;a_key&quot;: &quot;A String&quot;,
     },
-    "environment": { # Describes the environment in which a Dataflow Job runs. # The environment for the job.
-      "workerRegion": "A String", # The Compute Engine region
-          # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
-          # which worker processing should occur, e.g. "us-west1". Mutually exclusive
-          # with worker_zone. If neither worker_region nor worker_zone is specified,
-          # default to the control plane's region.
-      "version": { # A structure describing which components and their versions of the service
-          # are required in order to run the job.
-        "a_key": "", # Properties of the object.
-      },
-      "flexResourceSchedulingGoal": "A String", # Which Flexible Resource Scheduling mode to run in.
-      "serviceKmsKeyName": "A String", # If set, contains the Cloud KMS key identifier used to encrypt data
-          # at rest, AKA a Customer Managed Encryption Key (CMEK).
-          #
-          # Format:
-          #   projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
-      "internalExperiments": { # Experimental settings.
-        "a_key": "", # Properties of the object. Contains field @type with type URL.
-      },
-      "dataset": "A String", # The dataset for the current project where various workflow
-          # related tables are stored.
-          #
-          # The supported resource type is:
-          #
-          # Google BigQuery:
-          #   bigquery.googleapis.com/{dataset}
-      "experiments": [ # The list of experiments to enable.
-        "A String",
-      ],
-      "serviceAccountEmail": "A String", # Identity to run virtual machines as. Defaults to the default account.
-      "sdkPipelineOptions": { # The Cloud Dataflow SDK pipeline options specified by the user. These
+    &quot;environment&quot;: { # Describes the environment in which a Dataflow Job runs. # The environment for the job.
+      &quot;sdkPipelineOptions&quot;: { # The Cloud Dataflow SDK pipeline options specified by the user. These
           # options are passed through the service and are used to recreate the
           # SDK pipeline options on the worker in a language agnostic and platform
           # independent way.
-        "a_key": "", # Properties of the object.
+        &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
       },
-      "userAgent": { # A description of the process that generated the request.
-        "a_key": "", # Properties of the object.
-      },
-      "workerZone": "A String", # The Compute Engine zone
-          # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
-          # which worker processing should occur, e.g. "us-west1-a". Mutually exclusive
-          # with worker_region. If neither worker_region nor worker_zone is specified,
-          # a zone in the control plane's region is chosen based on available capacity.
-      "workerPools": [ # The worker pools. At least one "harness" worker pool must be
+      &quot;flexResourceSchedulingGoal&quot;: &quot;A String&quot;, # Which Flexible Resource Scheduling mode to run in.
+      &quot;workerPools&quot;: [ # The worker pools. At least one &quot;harness&quot; worker pool must be
           # specified in order for the job to have workers.
         { # Describes one particular pool of Cloud Dataflow workers to be
             # instantiated by the Cloud Dataflow service in order to perform the
             # computations required by a job.  Note that a workflow job may use
             # multiple pools, in order to match the various computational
             # requirements of the various stages of the job.
-          "workerHarnessContainerImage": "A String", # Required. Docker container image that executes the Cloud Dataflow worker
-              # harness, residing in Google Container Registry.
-              #
-              # Deprecated for the Fn API path. Use sdk_harness_container_images instead.
-          "ipConfiguration": "A String", # Configuration for VM IPs.
-          "autoscalingSettings": { # Settings for WorkerPool autoscaling. # Settings for autoscaling of this WorkerPool.
-            "maxNumWorkers": 42, # The maximum number of workers to cap scaling at.
-            "algorithm": "A String", # The algorithm to use for autoscaling.
-          },
-          "diskSourceImage": "A String", # Fully qualified source image for disks.
-          "network": "A String", # Network to which VMs will be assigned.  If empty or unspecified,
-              # the service will use the network "default".
-          "zone": "A String", # Zone to run the worker pools in.  If empty or unspecified, the service
+          &quot;defaultPackageSet&quot;: &quot;A String&quot;, # The default package set to install.  This allows the service to
+              # select a default set of packages which are useful to worker
+              # harnesses written in a particular language.
+          &quot;network&quot;: &quot;A String&quot;, # Network to which VMs will be assigned.  If empty or unspecified,
+              # the service will use the network &quot;default&quot;.
+          &quot;zone&quot;: &quot;A String&quot;, # Zone to run the worker pools in.  If empty or unspecified, the service
               # will attempt to choose a reasonable default.
-          "metadata": { # Metadata to set on the Google Compute Engine VMs.
-            "a_key": "A String",
-          },
-          "machineType": "A String", # Machine type (e.g. "n1-standard-1").  If empty or unspecified, the
-              # service will attempt to choose a reasonable default.
-          "onHostMaintenance": "A String", # The action to take on host maintenance, as defined by the Google
-              # Compute Engine API.
-          "taskrunnerSettings": { # Taskrunner configuration settings. # Settings passed through to Google Compute Engine workers when
-              # using the standard Dataflow task runner.  Users should ignore
-              # this field.
-            "workflowFileName": "A String", # The file to store the workflow in.
-            "logUploadLocation": "A String", # Indicates where to put logs.  If this is not specified, the logs
-                # will not be uploaded.
-                #
-                # The supported resource type is:
-                #
-                # Google Cloud Storage:
-                #   storage.googleapis.com/{bucket}/{object}
-                #   bucket.storage.googleapis.com/{object}
-            "commandlinesFileName": "A String", # The file to store preprocessing commands in.
-            "alsologtostderr": True or False, # Whether to also send taskrunner log info to stderr.
-            "continueOnException": True or False, # Whether to continue taskrunner if an exception is hit.
-            "baseTaskDir": "A String", # The location on the worker for task-specific subdirectories.
-            "vmId": "A String", # The ID string of the VM.
-            "taskGroup": "A String", # The UNIX group ID on the worker VM to use for tasks launched by
-                # taskrunner; e.g. "wheel".
-            "taskUser": "A String", # The UNIX user ID on the worker VM to use for tasks launched by
-                # taskrunner; e.g. "root".
-            "oauthScopes": [ # The OAuth2 scopes to be requested by the taskrunner in order to
-                # access the Cloud Dataflow API.
-              "A String",
-            ],
-            "languageHint": "A String", # The suggested backend language.
-            "logToSerialconsole": True or False, # Whether to send taskrunner log info to Google Compute Engine VM serial
-                # console.
-            "streamingWorkerMainClass": "A String", # The streaming worker main class name.
-            "logDir": "A String", # The directory on the VM to store logs.
-            "parallelWorkerSettings": { # Provides data to pass through to the worker harness. # The settings to pass to the parallel worker harness.
-              "reportingEnabled": True or False, # Whether to send work progress updates to the service.
-              "shuffleServicePath": "A String", # The Shuffle service path relative to the root URL, for example,
-                  # "shuffle/v1beta1".
-              "workerId": "A String", # The ID of the worker running this pipeline.
-              "baseUrl": "A String", # The base URL for accessing Google Cloud APIs.
-                  #
-                  # When workers access Google Cloud APIs, they logically do so via
-                  # relative URLs.  If this field is specified, it supplies the base
-                  # URL to use for resolving these relative URLs.  The normative
-                  # algorithm used is defined by RFC 1808, "Relative Uniform Resource
-                  # Locators".
-                  #
-                  # If not specified, the default value is "http://www.googleapis.com/"
-              "servicePath": "A String", # The Cloud Dataflow service path relative to the root URL, for example,
-                  # "dataflow/v1b3/projects".
-              "tempStoragePrefix": "A String", # The prefix of the resources the system should use for temporary
-                  # storage.
-                  #
-                  # The supported resource type is:
-                  #
-                  # Google Cloud Storage:
-                  #
-                  #   storage.googleapis.com/{bucket}/{object}
-                  #   bucket.storage.googleapis.com/{object}
-            },
-            "dataflowApiVersion": "A String", # The API version of endpoint, e.g. "v1b3"
-            "harnessCommand": "A String", # The command to launch the worker harness.
-            "tempStoragePrefix": "A String", # The prefix of the resources the taskrunner should use for
-                # temporary storage.
-                #
-                # The supported resource type is:
-                #
-                # Google Cloud Storage:
-                #   storage.googleapis.com/{bucket}/{object}
-                #   bucket.storage.googleapis.com/{object}
-            "baseUrl": "A String", # The base URL for the taskrunner to use when accessing Google Cloud APIs.
-                #
-                # When workers access Google Cloud APIs, they logically do so via
-                # relative URLs.  If this field is specified, it supplies the base
-                # URL to use for resolving these relative URLs.  The normative
-                # algorithm used is defined by RFC 1808, "Relative Uniform Resource
-                # Locators".
-                #
-                # If not specified, the default value is "http://www.googleapis.com/"
-          },
-          "numThreadsPerWorker": 42, # The number of threads per worker harness. If empty or unspecified, the
+          &quot;numWorkers&quot;: 42, # Number of Google Compute Engine workers in this pool needed to
+              # execute the job.  If zero or unspecified, the service will
+              # attempt to choose a reasonable default.
+          &quot;numThreadsPerWorker&quot;: 42, # The number of threads per worker harness. If empty or unspecified, the
               # service will choose a number of threads (according to the number of cores
               # on the selected machine type for batch, or 1 by convention for streaming).
-          "poolArgs": { # Extra arguments for this worker pool.
-            "a_key": "", # Properties of the object. Contains field @type with type URL.
-          },
-          "packages": [ # Packages to be installed on workers.
+          &quot;diskSourceImage&quot;: &quot;A String&quot;, # Fully qualified source image for disks.
+          &quot;packages&quot;: [ # Packages to be installed on workers.
             { # The packages that must be installed in order for a worker to run the
                 # steps of the Cloud Dataflow job that will be assigned to its worker
                 # pool.
                 #
                 # This is the mechanism by which the Cloud Dataflow SDK causes code to
                 # be loaded onto the workers. For example, the Cloud Dataflow Java SDK
-                # might use this to install jars containing the user's code and all of the
+                # might use this to install jars containing the user&#x27;s code and all of the
                 # various dependencies (libraries, data files, etc.) required in order
                 # for that code to run.
-              "location": "A String", # The resource to read the package from. The supported resource type is:
+              &quot;location&quot;: &quot;A String&quot;, # The resource to read the package from. The supported resource type is:
                   #
                   # Google Cloud Storage:
                   #
                   #   storage.googleapis.com/{bucket}
                   #   bucket.storage.googleapis.com/
-              "name": "A String", # The name of the package.
+              &quot;name&quot;: &quot;A String&quot;, # The name of the package.
             },
           ],
-          "defaultPackageSet": "A String", # The default package set to install.  This allows the service to
-              # select a default set of packages which are useful to worker
-              # harnesses written in a particular language.
-          "kind": "A String", # The kind of the worker pool; currently only `harness` and `shuffle`
-              # are supported.
-          "diskType": "A String", # Type of root disk for VMs.  If empty or unspecified, the service will
-              # attempt to choose a reasonable default.
-          "teardownPolicy": "A String", # Sets the policy for determining when to turndown worker pool.
+          &quot;teardownPolicy&quot;: &quot;A String&quot;, # Sets the policy for determining when to turn down the worker pool.
               # Allowed values are: `TEARDOWN_ALWAYS`, `TEARDOWN_ON_SUCCESS`, and
               # `TEARDOWN_NEVER`.
               # `TEARDOWN_ALWAYS` means workers are always torn down regardless of whether
@@ -1716,32 +1405,41 @@
               #
               # If the workers are not torn down by the service, they will
               # continue to run and use Google Compute Engine VM resources in the
-              # user's project until they are explicitly terminated by the user.
+              # user&#x27;s project until they are explicitly terminated by the user.
               # Because of this, Google recommends using the `TEARDOWN_ALWAYS`
               # policy except for small, manually supervised test jobs.
               #
               # If unknown or unspecified, the service will attempt to choose a reasonable
               # default.
-          "diskSizeGb": 42, # Size of root disk for VMs, in GB.  If zero or unspecified, the service will
+          &quot;onHostMaintenance&quot;: &quot;A String&quot;, # The action to take on host maintenance, as defined by the Google
+              # Compute Engine API.
+          &quot;poolArgs&quot;: { # Extra arguments for this worker pool.
+            &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+          },
+          &quot;diskSizeGb&quot;: 42, # Size of root disk for VMs, in GB.  If zero or unspecified, the service will
               # attempt to choose a reasonable default.
-          "numWorkers": 42, # Number of Google Compute Engine workers in this pool needed to
-              # execute the job.  If zero or unspecified, the service will
+          &quot;workerHarnessContainerImage&quot;: &quot;A String&quot;, # Required. Docker container image that executes the Cloud Dataflow worker
+              # harness, residing in Google Container Registry.
+              #
+              # Deprecated for the Fn API path. Use sdk_harness_container_images instead.
+          &quot;diskType&quot;: &quot;A String&quot;, # Type of root disk for VMs.  If empty or unspecified, the service will
               # attempt to choose a reasonable default.
-          "subnetwork": "A String", # Subnetwork to which VMs will be assigned, if desired.  Expected to be of
-              # the form "regions/REGION/subnetworks/SUBNETWORK".
-          "dataDisks": [ # Data disks that are used by a VM in this workflow.
+          &quot;machineType&quot;: &quot;A String&quot;, # Machine type (e.g. &quot;n1-standard-1&quot;).  If empty or unspecified, the
+              # service will attempt to choose a reasonable default.
+          &quot;kind&quot;: &quot;A String&quot;, # The kind of the worker pool; currently only `harness` and `shuffle`
+              # are supported.
+          &quot;dataDisks&quot;: [ # Data disks that are used by a VM in this workflow.
             { # Describes the data disk used by a workflow job.
-              "mountPoint": "A String", # Directory in a VM where disk is mounted.
-              "sizeGb": 42, # Size of disk in GB.  If zero or unspecified, the service will
+              &quot;sizeGb&quot;: 42, # Size of disk in GB.  If zero or unspecified, the service will
                   # attempt to choose a reasonable default.
-              "diskType": "A String", # Disk storage type, as defined by Google Compute Engine.  This
+              &quot;diskType&quot;: &quot;A String&quot;, # Disk storage type, as defined by Google Compute Engine.  This
                   # must be a disk type appropriate to the project and zone in which
                   # the workers will run.  If unknown or unspecified, the service
                   # will attempt to choose a reasonable default.
                   #
                   # For example, the standard persistent disk type is a resource name
-                  # typically ending in "pd-standard".  If SSD persistent disks are
-                  # available, the resource name typically ends with "pd-ssd".  The
+                  # typically ending in &quot;pd-standard&quot;.  If SSD persistent disks are
+                  # available, the resource name typically ends with &quot;pd-ssd&quot;.  The
                   # actual valid values are defined the Google Compute Engine API,
                   # not by the Cloud Dataflow API; consult the Google Compute Engine
                   # documentation for more information about determining the set of
@@ -1752,29 +1450,144 @@
                   # typically look something like this:
                   #
                   # compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
+              &quot;mountPoint&quot;: &quot;A String&quot;, # Directory in a VM where disk is mounted.
             },
           ],
-          "sdkHarnessContainerImages": [ # Set of SDK harness containers needed to execute this pipeline. This will
+          &quot;sdkHarnessContainerImages&quot;: [ # Set of SDK harness containers needed to execute this pipeline. This will
               # only be set in the Fn API path. For non-cross-language pipelines this
               # should have only one entry. Cross-language pipelines will have two or more
               # entries.
             { # Defines a SDK harness container for executing Dataflow pipelines.
-              "containerImage": "A String", # A docker container image that resides in Google Container Registry.
-              "useSingleCorePerContainer": True or False, # If true, recommends the Dataflow service to use only one core per SDK
+              &quot;containerImage&quot;: &quot;A String&quot;, # A docker container image that resides in Google Container Registry.
+              &quot;useSingleCorePerContainer&quot;: True or False, # If true, recommends the Dataflow service to use only one core per SDK
                   # container instance with this image. If false (or unset) recommends using
                   # more than one core per SDK container instance with this image for
                   # efficiency. Note that Dataflow service may choose to override this property
                   # if needed.
             },
           ],
+          &quot;subnetwork&quot;: &quot;A String&quot;, # Subnetwork to which VMs will be assigned, if desired.  Expected to be of
+              # the form &quot;regions/REGION/subnetworks/SUBNETWORK&quot;.
+          &quot;ipConfiguration&quot;: &quot;A String&quot;, # Configuration for VM IPs.
+          &quot;taskrunnerSettings&quot;: { # Taskrunner configuration settings. # Settings passed through to Google Compute Engine workers when
+              # using the standard Dataflow task runner.  Users should ignore
+              # this field.
+            &quot;alsologtostderr&quot;: True or False, # Whether to also send taskrunner log info to stderr.
+            &quot;taskGroup&quot;: &quot;A String&quot;, # The UNIX group ID on the worker VM to use for tasks launched by
+                # taskrunner; e.g. &quot;wheel&quot;.
+            &quot;harnessCommand&quot;: &quot;A String&quot;, # The command to launch the worker harness.
+            &quot;logDir&quot;: &quot;A String&quot;, # The directory on the VM to store logs.
+            &quot;oauthScopes&quot;: [ # The OAuth2 scopes to be requested by the taskrunner in order to
+                # access the Cloud Dataflow API.
+              &quot;A String&quot;,
+            ],
+            &quot;dataflowApiVersion&quot;: &quot;A String&quot;, # The API version of the endpoint, e.g. &quot;v1b3&quot;.
+            &quot;logUploadLocation&quot;: &quot;A String&quot;, # Indicates where to put logs.  If this is not specified, the logs
+                # will not be uploaded.
+                #
+                # The supported resource type is:
+                #
+                # Google Cloud Storage:
+                #   storage.googleapis.com/{bucket}/{object}
+                #   bucket.storage.googleapis.com/{object}
+            &quot;streamingWorkerMainClass&quot;: &quot;A String&quot;, # The streaming worker main class name.
+            &quot;workflowFileName&quot;: &quot;A String&quot;, # The file to store the workflow in.
+            &quot;baseTaskDir&quot;: &quot;A String&quot;, # The location on the worker for task-specific subdirectories.
+            &quot;tempStoragePrefix&quot;: &quot;A String&quot;, # The prefix of the resources the taskrunner should use for
+                # temporary storage.
+                #
+                # The supported resource type is:
+                #
+                # Google Cloud Storage:
+                #   storage.googleapis.com/{bucket}/{object}
+                #   bucket.storage.googleapis.com/{object}
+            &quot;commandlinesFileName&quot;: &quot;A String&quot;, # The file to store preprocessing commands in.
+            &quot;languageHint&quot;: &quot;A String&quot;, # The suggested backend language.
+            &quot;baseUrl&quot;: &quot;A String&quot;, # The base URL for the taskrunner to use when accessing Google Cloud APIs.
+                #
+                # When workers access Google Cloud APIs, they logically do so via
+                # relative URLs.  If this field is specified, it supplies the base
+                # URL to use for resolving these relative URLs.  The normative
+                # algorithm used is defined by RFC 1808, &quot;Relative Uniform Resource
+                # Locators&quot;.
+                #
+                # If not specified, the default value is &quot;http://www.googleapis.com/&quot;
+            &quot;logToSerialconsole&quot;: True or False, # Whether to send taskrunner log info to Google Compute Engine VM serial
+                # console.
+            &quot;continueOnException&quot;: True or False, # Whether to continue taskrunner if an exception is hit.
+            &quot;parallelWorkerSettings&quot;: { # Provides data to pass through to the worker harness. # The settings to pass to the parallel worker harness.
+              &quot;baseUrl&quot;: &quot;A String&quot;, # The base URL for accessing Google Cloud APIs.
+                  #
+                  # When workers access Google Cloud APIs, they logically do so via
+                  # relative URLs.  If this field is specified, it supplies the base
+                  # URL to use for resolving these relative URLs.  The normative
+                  # algorithm used is defined by RFC 1808, &quot;Relative Uniform Resource
+                  # Locators&quot;.
+                  #
+                  # If not specified, the default value is &quot;http://www.googleapis.com/&quot;
+              &quot;reportingEnabled&quot;: True or False, # Whether to send work progress updates to the service.
+              &quot;servicePath&quot;: &quot;A String&quot;, # The Cloud Dataflow service path relative to the root URL, for example,
+                  # &quot;dataflow/v1b3/projects&quot;.
+              &quot;shuffleServicePath&quot;: &quot;A String&quot;, # The Shuffle service path relative to the root URL, for example,
+                  # &quot;shuffle/v1beta1&quot;.
+              &quot;workerId&quot;: &quot;A String&quot;, # The ID of the worker running this pipeline.
+              &quot;tempStoragePrefix&quot;: &quot;A String&quot;, # The prefix of the resources the system should use for temporary
+                  # storage.
+                  #
+                  # The supported resource type is:
+                  #
+                  # Google Cloud Storage:
+                  #
+                  #   storage.googleapis.com/{bucket}/{object}
+                  #   bucket.storage.googleapis.com/{object}
+            },
+            &quot;vmId&quot;: &quot;A String&quot;, # The ID string of the VM.
+            &quot;taskUser&quot;: &quot;A String&quot;, # The UNIX user ID on the worker VM to use for tasks launched by
+                # taskrunner; e.g. &quot;root&quot;.
+          },
+          &quot;autoscalingSettings&quot;: { # Settings for WorkerPool autoscaling. # Settings for autoscaling of this WorkerPool.
+            &quot;maxNumWorkers&quot;: 42, # The maximum number of workers to cap scaling at.
+            &quot;algorithm&quot;: &quot;A String&quot;, # The algorithm to use for autoscaling.
+          },
+          &quot;metadata&quot;: { # Metadata to set on the Google Compute Engine VMs.
+            &quot;a_key&quot;: &quot;A String&quot;,
+          },
         },
       ],
-      "clusterManagerApiService": "A String", # The type of cluster manager API to use.  If unknown or
+      &quot;dataset&quot;: &quot;A String&quot;, # The dataset for the current project where various workflow
+          # related tables are stored.
+          #
+          # The supported resource type is:
+          #
+          # Google BigQuery:
+          #   bigquery.googleapis.com/{dataset}
+      &quot;internalExperiments&quot;: { # Experimental settings.
+        &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+      },
+      &quot;workerRegion&quot;: &quot;A String&quot;, # The Compute Engine region
+          # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
+          # which worker processing should occur, e.g. &quot;us-west1&quot;. Mutually exclusive
+          # with worker_zone. If neither worker_region nor worker_zone is specified,
+          # default to the control plane&#x27;s region.
+      &quot;serviceKmsKeyName&quot;: &quot;A String&quot;, # If set, contains the Cloud KMS key identifier used to encrypt data
+          # at rest, AKA a Customer Managed Encryption Key (CMEK).
+          #
+          # Format:
+          #   projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
+      &quot;userAgent&quot;: { # A description of the process that generated the request.
+        &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+      },
+      &quot;workerZone&quot;: &quot;A String&quot;, # The Compute Engine zone
+          # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
+          # which worker processing should occur, e.g. &quot;us-west1-a&quot;. Mutually exclusive
+          # with worker_region. If neither worker_region nor worker_zone is specified,
+          # a zone in the control plane&#x27;s region is chosen based on available capacity.
+      &quot;clusterManagerApiService&quot;: &quot;A String&quot;, # The type of cluster manager API to use.  If unknown or
           # unspecified, the service will attempt to choose a reasonable
           # default.  This should be in the form of the API service name,
-          # e.g. "compute.googleapis.com".
-      "tempStoragePrefix": "A String", # The prefix of the resources the system should use for temporary
-          # storage.  The system will append the suffix "/temp-{JOBNAME} to
+          # e.g. &quot;compute.googleapis.com&quot;.
+      &quot;tempStoragePrefix&quot;: &quot;A String&quot;, # The prefix of the resources the system should use for temporary
+          # storage.  The system will append the suffix &quot;/temp-{JOBNAME}&quot; to
           # this resource prefix, where {JOBNAME} is the value of the
           # job_name field.  The resulting bucket and object prefix is used
           # as the prefix of the resources used to store temporary data
@@ -1786,11 +1599,199 @@
           #
           #   storage.googleapis.com/{bucket}/{object}
           #   bucket.storage.googleapis.com/{object}
+      &quot;experiments&quot;: [ # The list of experiments to enable.
+        &quot;A String&quot;,
+      ],
+      &quot;version&quot;: { # A structure describing which components and their versions of the service
+          # are required in order to run the job.
+        &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+      },
+      &quot;serviceAccountEmail&quot;: &quot;A String&quot;, # Identity to run virtual machines as. Defaults to the default account.
     },
-    "location": "A String", # The [regional endpoint]
-        # (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that
-        # contains this job.
-    "tempFiles": [ # A set of files the system should be aware of that are used
+    &quot;stageStates&quot;: [ # This field may be mutated by the Cloud Dataflow service;
+        # callers cannot mutate it.
+      { # A message describing the state of a particular execution stage.
+        &quot;executionStageName&quot;: &quot;A String&quot;, # The name of the execution stage.
+        &quot;currentStateTime&quot;: &quot;A String&quot;, # The time at which the stage transitioned to this state.
+        &quot;executionStageState&quot;: &quot;A String&quot;, # Executions stage states allow the same set of values as JobState.
+      },
+    ],
+    &quot;jobMetadata&quot;: { # Metadata available primarily for filtering jobs. Will be included in the # This field is populated by the Dataflow service to support filtering jobs
+        # by the metadata values provided here. Populated for ListJobs and all GetJob
+        # views SUMMARY and higher.
+        # ListJob response and Job SUMMARY view.
+      &quot;bigTableDetails&quot;: [ # Identification of a BigTable source used in the Dataflow job.
+        { # Metadata for a BigTable connector used by the job.
+          &quot;tableId&quot;: &quot;A String&quot;, # TableId accessed in the connection.
+          &quot;projectId&quot;: &quot;A String&quot;, # ProjectId accessed in the connection.
+          &quot;instanceId&quot;: &quot;A String&quot;, # InstanceId accessed in the connection.
+        },
+      ],
+      &quot;spannerDetails&quot;: [ # Identification of a Spanner source used in the Dataflow job.
+        { # Metadata for a Spanner connector used by the job.
+          &quot;databaseId&quot;: &quot;A String&quot;, # DatabaseId accessed in the connection.
+          &quot;instanceId&quot;: &quot;A String&quot;, # InstanceId accessed in the connection.
+          &quot;projectId&quot;: &quot;A String&quot;, # ProjectId accessed in the connection.
+        },
+      ],
+      &quot;datastoreDetails&quot;: [ # Identification of a Datastore source used in the Dataflow job.
+        { # Metadata for a Datastore connector used by the job.
+          &quot;projectId&quot;: &quot;A String&quot;, # ProjectId accessed in the connection.
+          &quot;namespace&quot;: &quot;A String&quot;, # Namespace used in the connection.
+        },
+      ],
+      &quot;sdkVersion&quot;: { # The version of the SDK used to run the job. # The SDK version used to run the job.
+        &quot;versionDisplayName&quot;: &quot;A String&quot;, # A readable string describing the version of the SDK.
+        &quot;sdkSupportStatus&quot;: &quot;A String&quot;, # The support status for this SDK version.
+        &quot;version&quot;: &quot;A String&quot;, # The version of the SDK used to run the job.
+      },
+      &quot;bigqueryDetails&quot;: [ # Identification of a BigQuery source used in the Dataflow job.
+        { # Metadata for a BigQuery connector used by the job.
+          &quot;table&quot;: &quot;A String&quot;, # Table accessed in the connection.
+          &quot;dataset&quot;: &quot;A String&quot;, # Dataset accessed in the connection.
+          &quot;projectId&quot;: &quot;A String&quot;, # Project accessed in the connection.
+          &quot;query&quot;: &quot;A String&quot;, # Query used to access data in the connection.
+        },
+      ],
+      &quot;fileDetails&quot;: [ # Identification of a File source used in the Dataflow job.
+        { # Metadata for a File connector used by the job.
+          &quot;filePattern&quot;: &quot;A String&quot;, # File Pattern used to access files by the connector.
+        },
+      ],
+      &quot;pubsubDetails&quot;: [ # Identification of a PubSub source used in the Dataflow job.
+        { # Metadata for a PubSub connector used by the job.
+          &quot;subscription&quot;: &quot;A String&quot;, # Subscription used in the connection.
+          &quot;topic&quot;: &quot;A String&quot;, # Topic accessed in the connection.
+        },
+      ],
+    },
+    &quot;createdFromSnapshotId&quot;: &quot;A String&quot;, # If this is specified, the job&#x27;s initial state is populated from the given
+        # snapshot.
+    &quot;projectId&quot;: &quot;A String&quot;, # The ID of the Cloud Platform project that the job belongs to.
+    &quot;type&quot;: &quot;A String&quot;, # The type of Cloud Dataflow job.
+    &quot;pipelineDescription&quot;: { # A descriptive representation of submitted pipeline as well as the executed # Preliminary field: The format of this data may change at any time.
+        # A description of the user pipeline and stages through which it is executed.
+        # Created by Cloud Dataflow service.  Only retrieved with
+        # JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
+        # form.  This data is provided by the Dataflow service for ease of visualizing
+        # the pipeline and interpreting Dataflow provided metrics.
+      &quot;executionPipelineStage&quot;: [ # Description of each stage of execution of the pipeline.
+        { # Description of the composing transforms, names/ids, and input/outputs of a
+            # stage of execution.  Some composing transforms and sources may have been
+            # generated by the Dataflow service during execution planning.
+          &quot;id&quot;: &quot;A String&quot;, # Dataflow service generated id for this stage.
+          &quot;componentTransform&quot;: [ # Transforms that comprise this execution stage.
+            { # Description of a transform executed as part of an execution stage.
+              &quot;originalTransform&quot;: &quot;A String&quot;, # User name for the original user transform with which this transform is
+                  # most closely associated.
+              &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+              &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this transform; may be user or system generated.
+            },
+          ],
+          &quot;componentSource&quot;: [ # Collections produced and consumed by component transforms of this stage.
+            { # Description of an interstitial value between transforms in an execution
+                # stage.
+              &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+              &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this transform; may be user or system generated.
+              &quot;originalTransformOrCollection&quot;: &quot;A String&quot;, # User name for the original user transform or collection with which this
+                  # source is most closely associated.
+            },
+          ],
+          &quot;kind&quot;: &quot;A String&quot;, # Type of transform this stage is executing.
+          &quot;outputSource&quot;: [ # Output sources for this stage.
+            { # Description of an input or output of an execution stage.
+              &quot;originalTransformOrCollection&quot;: &quot;A String&quot;, # User name for the original user transform or collection with which this
+                  # source is most closely associated.
+              &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+              &quot;sizeBytes&quot;: &quot;A String&quot;, # Size of the source, if measurable.
+              &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this source; may be user or system generated.
+            },
+          ],
+          &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this stage.
+          &quot;inputSource&quot;: [ # Input sources for this stage.
+            { # Description of an input or output of an execution stage.
+              &quot;originalTransformOrCollection&quot;: &quot;A String&quot;, # User name for the original user transform or collection with which this
+                  # source is most closely associated.
+              &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+              &quot;sizeBytes&quot;: &quot;A String&quot;, # Size of the source, if measurable.
+              &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this source; may be user or system generated.
+            },
+          ],
+        },
+      ],
+      &quot;originalPipelineTransform&quot;: [ # Description of each transform in the pipeline and collections between them.
+        { # Description of the type, names/ids, and input/outputs for a transform.
+          &quot;kind&quot;: &quot;A String&quot;, # Type of transform.
+          &quot;inputCollectionName&quot;: [ # User names for all collection inputs to this transform.
+            &quot;A String&quot;,
+          ],
+          &quot;name&quot;: &quot;A String&quot;, # User provided name for this transform instance.
+          &quot;id&quot;: &quot;A String&quot;, # SDK generated id of this transform instance.
+          &quot;displayData&quot;: [ # Transform-specific display data.
+            { # Data provided with a pipeline or transform to provide descriptive info.
+              &quot;timestampValue&quot;: &quot;A String&quot;, # Contains value if the data is of timestamp type.
+              &quot;boolValue&quot;: True or False, # Contains value if the data is of a boolean type.
+              &quot;javaClassValue&quot;: &quot;A String&quot;, # Contains value if the data is of java class type.
+              &quot;strValue&quot;: &quot;A String&quot;, # Contains value if the data is of string type.
+              &quot;int64Value&quot;: &quot;A String&quot;, # Contains value if the data is of int64 type.
+              &quot;durationValue&quot;: &quot;A String&quot;, # Contains value if the data is of duration type.
+              &quot;namespace&quot;: &quot;A String&quot;, # The namespace for the key. This is usually a class name or programming
+                  # language namespace (i.e. python module) which defines the display data.
+                  # This allows a dax monitoring system to specially handle the data
+                  # and perform custom rendering.
+              &quot;floatValue&quot;: 3.14, # Contains value if the data is of float type.
+              &quot;key&quot;: &quot;A String&quot;, # The key identifying the display data.
+                  # This is intended to be used as a label for the display data
+                  # when viewed in a dax monitoring system.
+              &quot;shortStrValue&quot;: &quot;A String&quot;, # A possible additional shorter value to display.
+                  # For example a java_class_name_value of com.mypackage.MyDoFn
+                  # will be stored with MyDoFn as the short_str_value and
+                  # com.mypackage.MyDoFn as the java_class_name value.
+                  # short_str_value can be displayed and java_class_name_value
+                  # will be displayed as a tooltip.
+              &quot;url&quot;: &quot;A String&quot;, # An optional full URL.
+              &quot;label&quot;: &quot;A String&quot;, # An optional label to display in a dax UI for the element.
+            },
+          ],
+          &quot;outputCollectionName&quot;: [ # User names for all collection outputs to this transform.
+            &quot;A String&quot;,
+          ],
+        },
+      ],
+      &quot;displayData&quot;: [ # Pipeline level display data.
+        { # Data provided with a pipeline or transform to provide descriptive info.
+          &quot;timestampValue&quot;: &quot;A String&quot;, # Contains value if the data is of timestamp type.
+          &quot;boolValue&quot;: True or False, # Contains value if the data is of a boolean type.
+          &quot;javaClassValue&quot;: &quot;A String&quot;, # Contains value if the data is of java class type.
+          &quot;strValue&quot;: &quot;A String&quot;, # Contains value if the data is of string type.
+          &quot;int64Value&quot;: &quot;A String&quot;, # Contains value if the data is of int64 type.
+          &quot;durationValue&quot;: &quot;A String&quot;, # Contains value if the data is of duration type.
+          &quot;namespace&quot;: &quot;A String&quot;, # The namespace for the key. This is usually a class name or programming
+              # language namespace (i.e. python module) which defines the display data.
+              # This allows a dax monitoring system to specially handle the data
+              # and perform custom rendering.
+          &quot;floatValue&quot;: 3.14, # Contains value if the data is of float type.
+          &quot;key&quot;: &quot;A String&quot;, # The key identifying the display data.
+              # This is intended to be used as a label for the display data
+              # when viewed in a dax monitoring system.
+          &quot;shortStrValue&quot;: &quot;A String&quot;, # A possible additional shorter value to display.
+              # For example a java_class_name_value of com.mypackage.MyDoFn
+              # will be stored with MyDoFn as the short_str_value and
+              # com.mypackage.MyDoFn as the java_class_name value.
+              # short_str_value can be displayed and java_class_name_value
+              # will be displayed as a tooltip.
+          &quot;url&quot;: &quot;A String&quot;, # An optional full URL.
+          &quot;label&quot;: &quot;A String&quot;, # An optional label to display in a dax UI for the element.
+        },
+      ],
+    },
+    &quot;replaceJobId&quot;: &quot;A String&quot;, # If this job is an update of an existing job, this field is the job ID
+        # of the job it replaced.
+        #
+        # When sending a `CreateJobRequest`, you can update a job by specifying it
+        # here. The job named here is stopped, and its intermediate state is
+        # transferred to this job.
+    &quot;tempFiles&quot;: [ # A set of files the system should be aware of that are used
         # for temporary storage. These temporary files will be
         # removed on job completion.
         # No duplicates are allowed.
@@ -1802,36 +1803,9 @@
         #
         #    storage.googleapis.com/{bucket}/{object}
         #    bucket.storage.googleapis.com/{object}
-      "A String",
+      &quot;A String&quot;,
     ],
-    "type": "A String", # The type of Cloud Dataflow job.
-    "clientRequestId": "A String", # The client's unique identifier of the job, re-used across retried attempts.
-        # If this field is set, the service will ensure its uniqueness.
-        # The request to create a job will fail if the service has knowledge of a
-        # previously submitted job with the same client's ID and job name.
-        # The caller may use this field to ensure idempotence of job
-        # creation across retried attempts to create a job.
-        # By default, the field is empty and, in that case, the service ignores it.
-    "createdFromSnapshotId": "A String", # If this is specified, the job's initial state is populated from the given
-        # snapshot.
-    "stepsLocation": "A String", # The GCS location where the steps are stored.
-    "currentStateTime": "A String", # The timestamp associated with the current state.
-    "startTime": "A String", # The timestamp when the job was started (transitioned to JOB_STATE_PENDING).
-        # Flexible resource scheduling jobs are started with some delay after job
-        # creation, so start_time is unset before start and is updated when the
-        # job is started by the Cloud Dataflow service. For other jobs, start_time
-        # always equals to create_time and is immutable and set by the Cloud Dataflow
-        # service.
-    "createTime": "A String", # The timestamp when the job was initially created. Immutable and set by the
-        # Cloud Dataflow service.
-    "requestedState": "A String", # The job's requested state.
-        #
-        # `UpdateJob` may be used to switch between the `JOB_STATE_STOPPED` and
-        # `JOB_STATE_RUNNING` states, by setting requested_state.  `UpdateJob` may
-        # also be used to directly set a job's requested state to
-        # `JOB_STATE_CANCELLED` or `JOB_STATE_DONE`, irrevocably terminating the
-        # job if it has not already reached a terminal state.
-    "name": "A String", # The user-specified Cloud Dataflow job name.
+    &quot;name&quot;: &quot;A String&quot;, # The user-specified Cloud Dataflow job name.
         #
         # Only one Job with a given name may exist in a project at any
         # given time. If a caller attempts to create a Job with the same
@@ -1840,7 +1814,7 @@
         #
         # The name must match the regular expression
         # `[a-z]([-a-z0-9]{0,38}[a-z0-9])?`
-    "steps": [ # Exactly one of step or steps_location should be specified.
+    &quot;steps&quot;: [ # Exactly one of step or steps_location should be specified.
         #
         # The top-level steps that constitute the entire job.
       { # Defines a particular step within a Cloud Dataflow job.
@@ -1849,11 +1823,11 @@
           # specific operation as part of the overall job.  Data is typically
           # passed from one step to another as part of the job.
           #
-          # Here's an example of a sequence of steps which together implement a
+          # Here&#x27;s an example of a sequence of steps which together implement a
           # Map-Reduce job:
           #
           #   * Read a collection of data from some source, parsing the
-          #     collection's elements.
+          #     collection&#x27;s elements.
           #
           #   * Validate the elements.
           #
@@ -1868,23 +1842,32 @@
           #
           # Note that the Cloud Dataflow service may be used to run many different
           # types of jobs, not just Map-Reduce.
-        "kind": "A String", # The kind of step in the Cloud Dataflow job.
-        "name": "A String", # The name that identifies the step. This must be unique for each
+        &quot;name&quot;: &quot;A String&quot;, # The name that identifies the step. This must be unique for each
             # step with respect to all other steps in the Cloud Dataflow job.
-        "properties": { # Named properties associated with the step. Each kind of
+        &quot;kind&quot;: &quot;A String&quot;, # The kind of step in the Cloud Dataflow job.
+        &quot;properties&quot;: { # Named properties associated with the step. Each kind of
             # predefined step has its own required set of properties.
             # Must be provided on Create.  Only retrieved with JOB_VIEW_ALL.
-          "a_key": "", # Properties of the object.
+          &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
         },
       },
     ],
-    "replaceJobId": "A String", # If this job is an update of an existing job, this field is the job ID
-        # of the job it replaced.
-        #
-        # When sending a `CreateJobRequest`, you can update a job by specifying it
-        # here. The job named here is stopped, and its intermediate state is
-        # transferred to this job.
-    "currentState": "A String", # The current state of the job.
+    &quot;replacedByJobId&quot;: &quot;A String&quot;, # If another job is an update of this job (and thus, this job is in
+        # `JOB_STATE_UPDATED`), this field contains the ID of that job.
+    &quot;executionInfo&quot;: { # Additional information about how a Cloud Dataflow job will be executed that # Deprecated.
+        # isn&#x27;t contained in the submitted job.
+      &quot;stages&quot;: { # A mapping from each stage to the information about that stage.
+        &quot;a_key&quot;: { # Contains information about how a particular
+            # google.dataflow.v1beta3.Step will be executed.
+          &quot;stepName&quot;: [ # The steps associated with the execution stage.
+              # Note that stages may have several steps, and that a given step
+              # might be run by more than one stage.
+            &quot;A String&quot;,
+          ],
+        },
+      },
+    },
+    &quot;currentState&quot;: &quot;A String&quot;, # The current state of the job.
         #
         # Jobs are created in the `JOB_STATE_STOPPED` state unless otherwise
         # specified.
@@ -1895,19 +1878,36 @@
         #
         # This field may be mutated by the Cloud Dataflow service;
         # callers cannot mutate it.
-    "executionInfo": { # Additional information about how a Cloud Dataflow job will be executed that # Deprecated.
-        # isn't contained in the submitted job.
-      "stages": { # A mapping from each stage to the information about that stage.
-        "a_key": { # Contains information about how a particular
-            # google.dataflow.v1beta3.Step will be executed.
-          "stepName": [ # The steps associated with the execution stage.
-              # Note that stages may have several steps, and that a given step
-              # might be run by more than one stage.
-            "A String",
-          ],
-        },
-      },
+    &quot;location&quot;: &quot;A String&quot;, # The [regional endpoint]
+        # (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that
+        # contains this job.
+    &quot;startTime&quot;: &quot;A String&quot;, # The timestamp when the job was started (transitioned to JOB_STATE_PENDING).
+        # Flexible resource scheduling jobs are started with some delay after job
+        # creation, so start_time is unset before start and is updated when the
+        # job is started by the Cloud Dataflow service. For other jobs, start_time
+        # always equals create_time and is immutable and set by the Cloud Dataflow
+        # service.
+    &quot;stepsLocation&quot;: &quot;A String&quot;, # The GCS location where the steps are stored.
+    &quot;labels&quot;: { # User-defined labels for this job.
+        #
+        # The labels map can contain no more than 64 entries.  Entries of the labels
+        # map are UTF8 strings that comply with the following restrictions:
+        #
+        # * Keys must conform to regexp:  \p{Ll}\p{Lo}{0,62}
+        # * Values must conform to regexp:  [\p{Ll}\p{Lo}\p{N}_-]{0,63}
+        # * Both keys and values are additionally constrained to be &lt;= 128 bytes in
+        # size.
+      &quot;a_key&quot;: &quot;A String&quot;,
     },
+    &quot;createTime&quot;: &quot;A String&quot;, # The timestamp when the job was initially created. Immutable and set by the
+        # Cloud Dataflow service.
+    &quot;requestedState&quot;: &quot;A String&quot;, # The job&#x27;s requested state.
+        #
+        # `UpdateJob` may be used to switch between the `JOB_STATE_STOPPED` and
+        # `JOB_STATE_RUNNING` states, by setting requested_state.  `UpdateJob` may
+        # also be used to directly set a job&#x27;s requested state to
+        # `JOB_STATE_CANCELLED` or `JOB_STATE_DONE`, irrevocably terminating the
+        # job if it has not already reached a terminal state.
   }</pre>
 </div>
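As a minimal usage sketch for reviewers trying the regenerated surface: creating a job on this resource with the Python client looks roughly like the snippet below. The project, region, bucket, and job body values are placeholders, Application Default Credentials are assumed, and because the client takes keyword arguments the reordered signatures above do not change call sites.

```python
from googleapiclient.discovery import build

# Build the Dataflow v1b3 client (assumes Application Default Credentials).
dataflow = build("dataflow", "v1b3")

# Placeholder request body; only a few of the documented Job fields are shown.
job_body = {
    "name": "example-wordcount",  # must match [a-z]([-a-z0-9]{0,38}[a-z0-9])?
    "type": "JOB_TYPE_BATCH",
    "environment": {
        "tempStoragePrefix": "storage.googleapis.com/example-bucket/temp",
    },
}

request = dataflow.projects().locations().jobs().create(
    projectId="example-project",
    location="us-central1",
    body=job_body,
    view="JOB_VIEW_SUMMARY",
)
job = request.execute()
print(job.get("id"), job.get("currentState"))
```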
 
@@ -1944,63 +1944,63 @@
       # This resource captures only the most recent values of each metric;
       # time-series data can be queried for them (under the same metric names)
       # from Cloud Monitoring.
-    "metrics": [ # All metrics for this job.
+    &quot;metricTime&quot;: &quot;A String&quot;, # Timestamp as of which metric values are current.
+    &quot;metrics&quot;: [ # All metrics for this job.
       { # Describes the state of a metric.
-        "meanCount": "", # Worker-computed aggregate value for the "Mean" aggregation kind.
-            # This holds the count of the aggregated values and is used in combination
-            # with mean_sum above to obtain the actual mean aggregate value.
-            # The only possible value type is Long.
-        "kind": "A String", # Metric aggregation kind.  The possible metric aggregation kinds are
-            # "Sum", "Max", "Min", "Mean", "Set", "And", "Or", and "Distribution".
+        &quot;set&quot;: &quot;&quot;, # Worker-computed aggregate value for the &quot;Set&quot; aggregation kind.  The only
+            # possible value type is a list of Values whose type can be Long, Double,
+            # or String, according to the metric&#x27;s type.  All Values in the list must
+            # be of the same type.
+        &quot;gauge&quot;: &quot;&quot;, # A struct value describing properties of a Gauge.
+            # Metrics of gauge type show the value of a metric across time, and are
+            # aggregated based on the newest value.
+        &quot;cumulative&quot;: True or False, # True if this metric is reported as the total cumulative aggregate
+            # value accumulated since the worker started working on this WorkItem.
+            # By default this is false, indicating that this metric is reported
+            # as a delta that is not associated with any WorkItem.
+        &quot;internal&quot;: &quot;&quot;, # Worker-computed aggregate value for internal use by the Dataflow
+            # service.
+        &quot;kind&quot;: &quot;A String&quot;, # Metric aggregation kind.  The possible metric aggregation kinds are
+            # &quot;Sum&quot;, &quot;Max&quot;, &quot;Min&quot;, &quot;Mean&quot;, &quot;Set&quot;, &quot;And&quot;, &quot;Or&quot;, and &quot;Distribution&quot;.
             # The specified aggregation kind is case-insensitive.
             #
             # If omitted, this is not an aggregated value but instead
             # a single metric sample value.
-        "set": "", # Worker-computed aggregate value for the "Set" aggregation kind.  The only
-            # possible value type is a list of Values whose type can be Long, Double,
-            # or String, according to the metric's type.  All Values in the list must
-            # be of the same type.
-        "name": { # Identifies a metric, by describing the source which generated the # Name of the metric.
-            # metric.
-          "origin": "A String", # Origin (namespace) of metric name. May be blank for user-define metrics;
-              # will be "dataflow" for metrics defined by the Dataflow service or SDK.
-          "name": "A String", # Worker-defined metric name.
-          "context": { # Zero or more labeled fields which identify the part of the job this
-              # metric is associated with, such as the name of a step or collection.
-              #
-              # For example, built-in counters associated with steps will have
-              # context['step'] = &lt;step-name&gt;. Counters associated with PCollections
-              # in the SDK will have context['pcollection'] = &lt;pcollection-name&gt;.
-            "a_key": "A String",
-          },
-        },
-        "meanSum": "", # Worker-computed aggregate value for the "Mean" aggregation kind.
+        &quot;scalar&quot;: &quot;&quot;, # Worker-computed aggregate value for aggregation kinds &quot;Sum&quot;, &quot;Max&quot;, &quot;Min&quot;,
+            # &quot;And&quot;, and &quot;Or&quot;.  The possible value types are Long, Double, and Boolean.
+        &quot;meanCount&quot;: &quot;&quot;, # Worker-computed aggregate value for the &quot;Mean&quot; aggregation kind.
+            # This holds the count of the aggregated values and is used in combination
+            # with mean_sum above to obtain the actual mean aggregate value.
+            # The only possible value type is Long.
+        &quot;meanSum&quot;: &quot;&quot;, # Worker-computed aggregate value for the &quot;Mean&quot; aggregation kind.
             # This holds the sum of the aggregated values and is used in combination
             # with mean_count below to obtain the actual mean aggregate value.
             # The only possible value types are Long and Double.
-        "cumulative": True or False, # True if this metric is reported as the total cumulative aggregate
-            # value accumulated since the worker started working on this WorkItem.
-            # By default this is false, indicating that this metric is reported
-            # as a delta that is not associated with any WorkItem.
-        "updateTime": "A String", # Timestamp associated with the metric value. Optional when workers are
+        &quot;updateTime&quot;: &quot;A String&quot;, # Timestamp associated with the metric value. Optional when workers are
             # reporting work progress; it will be filled in responses from the
             # metrics API.
-        "scalar": "", # Worker-computed aggregate value for aggregation kinds "Sum", "Max", "Min",
-            # "And", and "Or".  The possible value types are Long, Double, and Boolean.
-        "internal": "", # Worker-computed aggregate value for internal use by the Dataflow
-            # service.
-        "gauge": "", # A struct value describing properties of a Gauge.
-            # Metrics of gauge type show the value of a metric across time, and is
-            # aggregated based on the newest value.
-        "distribution": "", # A struct value describing properties of a distribution of numeric values.
+        &quot;name&quot;: { # Identifies a metric, by describing the source which generated the # Name of the metric.
+            # metric.
+          &quot;context&quot;: { # Zero or more labeled fields which identify the part of the job this
+              # metric is associated with, such as the name of a step or collection.
+              #
+              # For example, built-in counters associated with steps will have
+              # context[&#x27;step&#x27;] = &lt;step-name&gt;. Counters associated with PCollections
+              # in the SDK will have context[&#x27;pcollection&#x27;] = &lt;pcollection-name&gt;.
+            &quot;a_key&quot;: &quot;A String&quot;,
+          },
+          &quot;origin&quot;: &quot;A String&quot;, # Origin (namespace) of metric name. May be blank for user-defined metrics;
+              # will be &quot;dataflow&quot; for metrics defined by the Dataflow service or SDK.
+          &quot;name&quot;: &quot;A String&quot;, # Worker-defined metric name.
+        },
+        &quot;distribution&quot;: &quot;&quot;, # A struct value describing properties of a distribution of numeric values.
       },
     ],
-    "metricTime": "A String", # Timestamp as of which metric values are current.
   }</pre>
 </div>
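Likewise, a hedged sketch of reading the JobMetrics structure documented above; the project and job identifiers are placeholders, and `dataflow` is the client built in the previous snippet.

```python
# Request the metrics of an existing job; identifiers below are placeholders.
metrics = dataflow.projects().locations().jobs().getMetrics(
    projectId="example-project",
    location="us-central1",
    jobId="2020-01-01_00_00_00-123456789012345678",
).execute()

print("metricTime:", metrics.get("metricTime"))
for update in metrics.get("metrics", []):
    name = update.get("name", {})
    # "scalar" carries Sum/Max/Min/And/Or aggregations; other kinds are reported
    # via "set", "meanSum"/"meanCount", "gauge", or "distribution".
    print(name.get("origin"), name.get("name"), update.get("kind"), update.get("scalar"))
```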
 
 <div class="method">
-    <code class="details" id="list">list(projectId, location, pageSize=None, pageToken=None, x__xgafv=None, filter=None, view=None)</code>
+    <code class="details" id="list">list(projectId, location, filter=None, pageToken=None, pageSize=None, view=None, x__xgafv=None)</code>
   <pre>List the jobs of a project.
 
 To list the jobs of a project in a region, we recommend using
@@ -2015,17 +2015,17 @@
   location: string, The [regional endpoint]
 (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that
 contains this job. (required)
+  filter: string, The kind of filter to use.
+  pageToken: string, Set this to the &#x27;next_page_token&#x27; field of a previous response
+to request additional results in a long list.
   pageSize: integer, If there are many jobs, limit response to at most this many.
 The actual number of jobs returned will be the lesser of max_responses
 and an unspecified server-defined limit.
-  pageToken: string, Set this to the 'next_page_token' field of a previous response
-to request additional results in a long list.
+  view: string, Level of information requested in response. Default is `JOB_VIEW_SUMMARY`.
   x__xgafv: string, V1 error format.
     Allowed values
       1 - v1 error format
       2 - v2 error format
-  filter: string, The kind of filter to use.
-  view: string, Level of information requested in response. Default is `JOB_VIEW_SUMMARY`.
 
 Returns:
   An object of the form:
@@ -2033,398 +2033,87 @@
     { # Response to a request to list Cloud Dataflow jobs in a project. This might
       # be a partial response, depending on the page size in the ListJobsRequest.
       # However, if the project does not have any jobs, an instance of
-      # ListJobsResponse is not returned and the requests's response
+      # ListJobsResponse is not returned and the request&#x27;s response
       # body is empty {}.
-    "nextPageToken": "A String", # Set if there may be more results than fit in this response.
-    "failedLocation": [ # Zero or more messages describing the [regional endpoints]
+    &quot;nextPageToken&quot;: &quot;A String&quot;, # Set if there may be more results than fit in this response.
+    &quot;failedLocation&quot;: [ # Zero or more messages describing the [regional endpoints]
         # (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that
         # failed to respond.
       { # Indicates which [regional endpoint]
           # (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) failed
           # to respond to a request for data.
-        "name": "A String", # The name of the [regional endpoint]
+        &quot;name&quot;: &quot;A String&quot;, # The name of the [regional endpoint]
             # (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that
             # failed to respond.
       },
     ],
-    "jobs": [ # A subset of the requested job information.
+    &quot;jobs&quot;: [ # A subset of the requested job information.
       { # Defines a job to be run by the Cloud Dataflow service.
-        "labels": { # User-defined labels for this job.
-            #
-            # The labels map can contain no more than 64 entries.  Entries of the labels
-            # map are UTF8 strings that comply with the following restrictions:
-            #
-            # * Keys must conform to regexp:  \p{Ll}\p{Lo}{0,62}
-            # * Values must conform to regexp:  [\p{Ll}\p{Lo}\p{N}_-]{0,63}
-            # * Both keys and values are additionally constrained to be &lt;= 128 bytes in
-            # size.
-          "a_key": "A String",
-        },
-        "jobMetadata": { # Metadata available primarily for filtering jobs. Will be included in the # This field is populated by the Dataflow service to support filtering jobs
-            # by the metadata values provided here. Populated for ListJobs and all GetJob
-            # views SUMMARY and higher.
-            # ListJob response and Job SUMMARY view.
-          "sdkVersion": { # The version of the SDK used to run the job. # The SDK version used to run the job.
-            "versionDisplayName": "A String", # A readable string describing the version of the SDK.
-            "version": "A String", # The version of the SDK used to run the job.
-            "sdkSupportStatus": "A String", # The support status for this SDK version.
-          },
-          "pubsubDetails": [ # Identification of a PubSub source used in the Dataflow job.
-            { # Metadata for a PubSub connector used by the job.
-              "topic": "A String", # Topic accessed in the connection.
-              "subscription": "A String", # Subscription used in the connection.
-            },
-          ],
-          "datastoreDetails": [ # Identification of a Datastore source used in the Dataflow job.
-            { # Metadata for a Datastore connector used by the job.
-              "projectId": "A String", # ProjectId accessed in the connection.
-              "namespace": "A String", # Namespace used in the connection.
-            },
-          ],
-          "fileDetails": [ # Identification of a File source used in the Dataflow job.
-            { # Metadata for a File connector used by the job.
-              "filePattern": "A String", # File Pattern used to access files by the connector.
-            },
-          ],
-          "spannerDetails": [ # Identification of a Spanner source used in the Dataflow job.
-            { # Metadata for a Spanner connector used by the job.
-              "instanceId": "A String", # InstanceId accessed in the connection.
-              "projectId": "A String", # ProjectId accessed in the connection.
-              "databaseId": "A String", # DatabaseId accessed in the connection.
-            },
-          ],
-          "bigTableDetails": [ # Identification of a BigTable source used in the Dataflow job.
-            { # Metadata for a BigTable connector used by the job.
-              "instanceId": "A String", # InstanceId accessed in the connection.
-              "projectId": "A String", # ProjectId accessed in the connection.
-              "tableId": "A String", # TableId accessed in the connection.
-            },
-          ],
-          "bigqueryDetails": [ # Identification of a BigQuery source used in the Dataflow job.
-            { # Metadata for a BigQuery connector used by the job.
-              "projectId": "A String", # Project accessed in the connection.
-              "query": "A String", # Query used to access data in the connection.
-              "table": "A String", # Table accessed in the connection.
-              "dataset": "A String", # Dataset accessed in the connection.
-            },
-          ],
-        },
-        "pipelineDescription": { # A descriptive representation of submitted pipeline as well as the executed # Preliminary field: The format of this data may change at any time.
-            # A description of the user pipeline and stages through which it is executed.
-            # Created by Cloud Dataflow service.  Only retrieved with
-            # JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
-            # form.  This data is provided by the Dataflow service for ease of visualizing
-            # the pipeline and interpreting Dataflow provided metrics.
-          "originalPipelineTransform": [ # Description of each transform in the pipeline and collections between them.
-            { # Description of the type, names/ids, and input/outputs for a transform.
-              "kind": "A String", # Type of transform.
-              "name": "A String", # User provided name for this transform instance.
-              "inputCollectionName": [ # User names for all collection inputs to this transform.
-                "A String",
-              ],
-              "displayData": [ # Transform-specific display data.
-                { # Data provided with a pipeline or transform to provide descriptive info.
-                  "key": "A String", # The key identifying the display data.
-                      # This is intended to be used as a label for the display data
-                      # when viewed in a dax monitoring system.
-                  "shortStrValue": "A String", # A possible additional shorter value to display.
-                      # For example a java_class_name_value of com.mypackage.MyDoFn
-                      # will be stored with MyDoFn as the short_str_value and
-                      # com.mypackage.MyDoFn as the java_class_name value.
-                      # short_str_value can be displayed and java_class_name_value
-                      # will be displayed as a tooltip.
-                  "timestampValue": "A String", # Contains value if the data is of timestamp type.
-                  "url": "A String", # An optional full URL.
-                  "floatValue": 3.14, # Contains value if the data is of float type.
-                  "namespace": "A String", # The namespace for the key. This is usually a class name or programming
-                      # language namespace (i.e. python module) which defines the display data.
-                      # This allows a dax monitoring system to specially handle the data
-                      # and perform custom rendering.
-                  "javaClassValue": "A String", # Contains value if the data is of java class type.
-                  "label": "A String", # An optional label to display in a dax UI for the element.
-                  "boolValue": True or False, # Contains value if the data is of a boolean type.
-                  "strValue": "A String", # Contains value if the data is of string type.
-                  "durationValue": "A String", # Contains value if the data is of duration type.
-                  "int64Value": "A String", # Contains value if the data is of int64 type.
-                },
-              ],
-              "outputCollectionName": [ # User  names for all collection outputs to this transform.
-                "A String",
-              ],
-              "id": "A String", # SDK generated id of this transform instance.
-            },
-          ],
-          "executionPipelineStage": [ # Description of each stage of execution of the pipeline.
-            { # Description of the composing transforms, names/ids, and input/outputs of a
-                # stage of execution.  Some composing transforms and sources may have been
-                # generated by the Dataflow service during execution planning.
-              "componentSource": [ # Collections produced and consumed by component transforms of this stage.
-                { # Description of an interstitial value between transforms in an execution
-                    # stage.
-                  "userName": "A String", # Human-readable name for this transform; may be user or system generated.
-                  "originalTransformOrCollection": "A String", # User name for the original user transform or collection with which this
-                      # source is most closely associated.
-                  "name": "A String", # Dataflow service generated name for this source.
-                },
-              ],
-              "kind": "A String", # Type of tranform this stage is executing.
-              "name": "A String", # Dataflow service generated name for this stage.
-              "outputSource": [ # Output sources for this stage.
-                { # Description of an input or output of an execution stage.
-                  "userName": "A String", # Human-readable name for this source; may be user or system generated.
-                  "sizeBytes": "A String", # Size of the source, if measurable.
-                  "name": "A String", # Dataflow service generated name for this source.
-                  "originalTransformOrCollection": "A String", # User name for the original user transform or collection with which this
-                      # source is most closely associated.
-                },
-              ],
-              "inputSource": [ # Input sources for this stage.
-                { # Description of an input or output of an execution stage.
-                  "userName": "A String", # Human-readable name for this source; may be user or system generated.
-                  "sizeBytes": "A String", # Size of the source, if measurable.
-                  "name": "A String", # Dataflow service generated name for this source.
-                  "originalTransformOrCollection": "A String", # User name for the original user transform or collection with which this
-                      # source is most closely associated.
-                },
-              ],
-              "componentTransform": [ # Transforms that comprise this execution stage.
-                { # Description of a transform executed as part of an execution stage.
-                  "userName": "A String", # Human-readable name for this transform; may be user or system generated.
-                  "originalTransform": "A String", # User name for the original user transform with which this transform is
-                      # most closely associated.
-                  "name": "A String", # Dataflow service generated name for this source.
-                },
-              ],
-              "id": "A String", # Dataflow service generated id for this stage.
-            },
-          ],
-          "displayData": [ # Pipeline level display data.
-            { # Data provided with a pipeline or transform to provide descriptive info.
-              "key": "A String", # The key identifying the display data.
-                  # This is intended to be used as a label for the display data
-                  # when viewed in a dax monitoring system.
-              "shortStrValue": "A String", # A possible additional shorter value to display.
-                  # For example a java_class_name_value of com.mypackage.MyDoFn
-                  # will be stored with MyDoFn as the short_str_value and
-                  # com.mypackage.MyDoFn as the java_class_name value.
-                  # short_str_value can be displayed and java_class_name_value
-                  # will be displayed as a tooltip.
-              "timestampValue": "A String", # Contains value if the data is of timestamp type.
-              "url": "A String", # An optional full URL.
-              "floatValue": 3.14, # Contains value if the data is of float type.
-              "namespace": "A String", # The namespace for the key. This is usually a class name or programming
-                  # language namespace (i.e. python module) which defines the display data.
-                  # This allows a dax monitoring system to specially handle the data
-                  # and perform custom rendering.
-              "javaClassValue": "A String", # Contains value if the data is of java class type.
-              "label": "A String", # An optional label to display in a dax UI for the element.
-              "boolValue": True or False, # Contains value if the data is of a boolean type.
-              "strValue": "A String", # Contains value if the data is of string type.
-              "durationValue": "A String", # Contains value if the data is of duration type.
-              "int64Value": "A String", # Contains value if the data is of int64 type.
-            },
-          ],
-        },
-        "stageStates": [ # This field may be mutated by the Cloud Dataflow service;
-            # callers cannot mutate it.
-          { # A message describing the state of a particular execution stage.
-            "executionStageName": "A String", # The name of the execution stage.
-            "executionStageState": "A String", # Executions stage states allow the same set of values as JobState.
-            "currentStateTime": "A String", # The time at which the stage transitioned to this state.
-          },
-        ],
-        "id": "A String", # The unique ID of this job.
+        &quot;clientRequestId&quot;: &quot;A String&quot;, # The client&#x27;s unique identifier of the job, re-used across retried attempts.
+            # If this field is set, the service will ensure its uniqueness.
+            # The request to create a job will fail if the service has knowledge of a
+            # previously submitted job with the same client&#x27;s ID and job name.
+            # The caller may use this field to ensure idempotence of job
+            # creation across retried attempts to create a job.
+            # By default, the field is empty and, in that case, the service ignores it.
+        &quot;id&quot;: &quot;A String&quot;, # The unique ID of this job.
             #
             # This field is set by the Cloud Dataflow service when the Job is
             # created, and is immutable for the life of the job.
-        "replacedByJobId": "A String", # If another job is an update of this job (and thus, this job is in
-            # `JOB_STATE_UPDATED`), this field contains the ID of that job.
-        "projectId": "A String", # The ID of the Cloud Platform project that the job belongs to.
-        "transformNameMapping": { # The map of transform name prefixes of the job to be replaced to the
+        &quot;currentStateTime&quot;: &quot;A String&quot;, # The timestamp associated with the current state.
+        &quot;transformNameMapping&quot;: { # The map of transform name prefixes of the job to be replaced to the
             # corresponding name prefixes of the new job.
-          "a_key": "A String",
+          &quot;a_key&quot;: &quot;A String&quot;,
         },
-        "environment": { # Describes the environment in which a Dataflow Job runs. # The environment for the job.
-          "workerRegion": "A String", # The Compute Engine region
-              # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
-              # which worker processing should occur, e.g. "us-west1". Mutually exclusive
-              # with worker_zone. If neither worker_region nor worker_zone is specified,
-              # default to the control plane's region.
-          "version": { # A structure describing which components and their versions of the service
-              # are required in order to run the job.
-            "a_key": "", # Properties of the object.
-          },
-          "flexResourceSchedulingGoal": "A String", # Which Flexible Resource Scheduling mode to run in.
-          "serviceKmsKeyName": "A String", # If set, contains the Cloud KMS key identifier used to encrypt data
-              # at rest, AKA a Customer Managed Encryption Key (CMEK).
-              #
-              # Format:
-              #   projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
-          "internalExperiments": { # Experimental settings.
-            "a_key": "", # Properties of the object. Contains field @type with type URL.
-          },
-          "dataset": "A String", # The dataset for the current project where various workflow
-              # related tables are stored.
-              #
-              # The supported resource type is:
-              #
-              # Google BigQuery:
-              #   bigquery.googleapis.com/{dataset}
-          "experiments": [ # The list of experiments to enable.
-            "A String",
-          ],
-          "serviceAccountEmail": "A String", # Identity to run virtual machines as. Defaults to the default account.
-          "sdkPipelineOptions": { # The Cloud Dataflow SDK pipeline options specified by the user. These
+        &quot;environment&quot;: { # Describes the environment in which a Dataflow Job runs. # The environment for the job.
+          &quot;sdkPipelineOptions&quot;: { # The Cloud Dataflow SDK pipeline options specified by the user. These
               # options are passed through the service and are used to recreate the
               # SDK pipeline options on the worker in a language agnostic and platform
               # independent way.
-            "a_key": "", # Properties of the object.
+            &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
           },
-          "userAgent": { # A description of the process that generated the request.
-            "a_key": "", # Properties of the object.
-          },
-          "workerZone": "A String", # The Compute Engine zone
-              # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
-              # which worker processing should occur, e.g. "us-west1-a". Mutually exclusive
-              # with worker_region. If neither worker_region nor worker_zone is specified,
-              # a zone in the control plane's region is chosen based on available capacity.
-          "workerPools": [ # The worker pools. At least one "harness" worker pool must be
+          &quot;flexResourceSchedulingGoal&quot;: &quot;A String&quot;, # Which Flexible Resource Scheduling mode to run in.
+          &quot;workerPools&quot;: [ # The worker pools. At least one &quot;harness&quot; worker pool must be
               # specified in order for the job to have workers.
             { # Describes one particular pool of Cloud Dataflow workers to be
                 # instantiated by the Cloud Dataflow service in order to perform the
                 # computations required by a job.  Note that a workflow job may use
                 # multiple pools, in order to match the various computational
                 # requirements of the various stages of the job.
-              "workerHarnessContainerImage": "A String", # Required. Docker container image that executes the Cloud Dataflow worker
-                  # harness, residing in Google Container Registry.
-                  #
-                  # Deprecated for the Fn API path. Use sdk_harness_container_images instead.
-              "ipConfiguration": "A String", # Configuration for VM IPs.
-              "autoscalingSettings": { # Settings for WorkerPool autoscaling. # Settings for autoscaling of this WorkerPool.
-                "maxNumWorkers": 42, # The maximum number of workers to cap scaling at.
-                "algorithm": "A String", # The algorithm to use for autoscaling.
-              },
-              "diskSourceImage": "A String", # Fully qualified source image for disks.
-              "network": "A String", # Network to which VMs will be assigned.  If empty or unspecified,
-                  # the service will use the network "default".
-              "zone": "A String", # Zone to run the worker pools in.  If empty or unspecified, the service
+              &quot;defaultPackageSet&quot;: &quot;A String&quot;, # The default package set to install.  This allows the service to
+                  # select a default set of packages which are useful to worker
+                  # harnesses written in a particular language.
+              &quot;network&quot;: &quot;A String&quot;, # Network to which VMs will be assigned.  If empty or unspecified,
+                  # the service will use the network &quot;default&quot;.
+              &quot;zone&quot;: &quot;A String&quot;, # Zone to run the worker pools in.  If empty or unspecified, the service
                   # will attempt to choose a reasonable default.
-              "metadata": { # Metadata to set on the Google Compute Engine VMs.
-                "a_key": "A String",
-              },
-              "machineType": "A String", # Machine type (e.g. "n1-standard-1").  If empty or unspecified, the
-                  # service will attempt to choose a reasonable default.
-              "onHostMaintenance": "A String", # The action to take on host maintenance, as defined by the Google
-                  # Compute Engine API.
-              "taskrunnerSettings": { # Taskrunner configuration settings. # Settings passed through to Google Compute Engine workers when
-                  # using the standard Dataflow task runner.  Users should ignore
-                  # this field.
-                "workflowFileName": "A String", # The file to store the workflow in.
-                "logUploadLocation": "A String", # Indicates where to put logs.  If this is not specified, the logs
-                    # will not be uploaded.
-                    #
-                    # The supported resource type is:
-                    #
-                    # Google Cloud Storage:
-                    #   storage.googleapis.com/{bucket}/{object}
-                    #   bucket.storage.googleapis.com/{object}
-                "commandlinesFileName": "A String", # The file to store preprocessing commands in.
-                "alsologtostderr": True or False, # Whether to also send taskrunner log info to stderr.
-                "continueOnException": True or False, # Whether to continue taskrunner if an exception is hit.
-                "baseTaskDir": "A String", # The location on the worker for task-specific subdirectories.
-                "vmId": "A String", # The ID string of the VM.
-                "taskGroup": "A String", # The UNIX group ID on the worker VM to use for tasks launched by
-                    # taskrunner; e.g. "wheel".
-                "taskUser": "A String", # The UNIX user ID on the worker VM to use for tasks launched by
-                    # taskrunner; e.g. "root".
-                "oauthScopes": [ # The OAuth2 scopes to be requested by the taskrunner in order to
-                    # access the Cloud Dataflow API.
-                  "A String",
-                ],
-                "languageHint": "A String", # The suggested backend language.
-                "logToSerialconsole": True or False, # Whether to send taskrunner log info to Google Compute Engine VM serial
-                    # console.
-                "streamingWorkerMainClass": "A String", # The streaming worker main class name.
-                "logDir": "A String", # The directory on the VM to store logs.
-                "parallelWorkerSettings": { # Provides data to pass through to the worker harness. # The settings to pass to the parallel worker harness.
-                  "reportingEnabled": True or False, # Whether to send work progress updates to the service.
-                  "shuffleServicePath": "A String", # The Shuffle service path relative to the root URL, for example,
-                      # "shuffle/v1beta1".
-                  "workerId": "A String", # The ID of the worker running this pipeline.
-                  "baseUrl": "A String", # The base URL for accessing Google Cloud APIs.
-                      #
-                      # When workers access Google Cloud APIs, they logically do so via
-                      # relative URLs.  If this field is specified, it supplies the base
-                      # URL to use for resolving these relative URLs.  The normative
-                      # algorithm used is defined by RFC 1808, "Relative Uniform Resource
-                      # Locators".
-                      #
-                      # If not specified, the default value is "http://www.googleapis.com/"
-                  "servicePath": "A String", # The Cloud Dataflow service path relative to the root URL, for example,
-                      # "dataflow/v1b3/projects".
-                  "tempStoragePrefix": "A String", # The prefix of the resources the system should use for temporary
-                      # storage.
-                      #
-                      # The supported resource type is:
-                      #
-                      # Google Cloud Storage:
-                      #
-                      #   storage.googleapis.com/{bucket}/{object}
-                      #   bucket.storage.googleapis.com/{object}
-                },
-                "dataflowApiVersion": "A String", # The API version of endpoint, e.g. "v1b3"
-                "harnessCommand": "A String", # The command to launch the worker harness.
-                "tempStoragePrefix": "A String", # The prefix of the resources the taskrunner should use for
-                    # temporary storage.
-                    #
-                    # The supported resource type is:
-                    #
-                    # Google Cloud Storage:
-                    #   storage.googleapis.com/{bucket}/{object}
-                    #   bucket.storage.googleapis.com/{object}
-                "baseUrl": "A String", # The base URL for the taskrunner to use when accessing Google Cloud APIs.
-                    #
-                    # When workers access Google Cloud APIs, they logically do so via
-                    # relative URLs.  If this field is specified, it supplies the base
-                    # URL to use for resolving these relative URLs.  The normative
-                    # algorithm used is defined by RFC 1808, "Relative Uniform Resource
-                    # Locators".
-                    #
-                    # If not specified, the default value is "http://www.googleapis.com/"
-              },
-              "numThreadsPerWorker": 42, # The number of threads per worker harness. If empty or unspecified, the
+              &quot;numWorkers&quot;: 42, # Number of Google Compute Engine workers in this pool needed to
+                  # execute the job.  If zero or unspecified, the service will
+                  # attempt to choose a reasonable default.
+              &quot;numThreadsPerWorker&quot;: 42, # The number of threads per worker harness. If empty or unspecified, the
                   # service will choose a number of threads (according to the number of cores
                   # on the selected machine type for batch, or 1 by convention for streaming).
-              "poolArgs": { # Extra arguments for this worker pool.
-                "a_key": "", # Properties of the object. Contains field @type with type URL.
-              },
-              "packages": [ # Packages to be installed on workers.
+              &quot;diskSourceImage&quot;: &quot;A String&quot;, # Fully qualified source image for disks.
+              &quot;packages&quot;: [ # Packages to be installed on workers.
                 { # The packages that must be installed in order for a worker to run the
                     # steps of the Cloud Dataflow job that will be assigned to its worker
                     # pool.
                     #
                     # This is the mechanism by which the Cloud Dataflow SDK causes code to
                     # be loaded onto the workers. For example, the Cloud Dataflow Java SDK
-                    # might use this to install jars containing the user's code and all of the
+                    # might use this to install jars containing the user&#x27;s code and all of the
                     # various dependencies (libraries, data files, etc.) required in order
                     # for that code to run.
-                  "location": "A String", # The resource to read the package from. The supported resource type is:
+                  &quot;location&quot;: &quot;A String&quot;, # The resource to read the package from. The supported resource type is:
                       #
                       # Google Cloud Storage:
                       #
                       #   storage.googleapis.com/{bucket}
                       #   bucket.storage.googleapis.com/
-                  "name": "A String", # The name of the package.
+                  &quot;name&quot;: &quot;A String&quot;, # The name of the package.
                 },
               ],
-              "defaultPackageSet": "A String", # The default package set to install.  This allows the service to
-                  # select a default set of packages which are useful to worker
-                  # harnesses written in a particular language.
-              "kind": "A String", # The kind of the worker pool; currently only `harness` and `shuffle`
-                  # are supported.
-              "diskType": "A String", # Type of root disk for VMs.  If empty or unspecified, the service will
-                  # attempt to choose a reasonable default.
-              "teardownPolicy": "A String", # Sets the policy for determining when to turndown worker pool.
+              &quot;teardownPolicy&quot;: &quot;A String&quot;, # Sets the policy for determining when to tear down the worker pool.
                   # Allowed values are: `TEARDOWN_ALWAYS`, `TEARDOWN_ON_SUCCESS`, and
                   # `TEARDOWN_NEVER`.
                   # `TEARDOWN_ALWAYS` means workers are always torn down regardless of whether
@@ -2434,32 +2123,41 @@
                   #
                   # If the workers are not torn down by the service, they will
                   # continue to run and use Google Compute Engine VM resources in the
-                  # user's project until they are explicitly terminated by the user.
+                  # user&#x27;s project until they are explicitly terminated by the user.
                   # Because of this, Google recommends using the `TEARDOWN_ALWAYS`
                   # policy except for small, manually supervised test jobs.
                   #
                   # If unknown or unspecified, the service will attempt to choose a reasonable
                   # default.
-              "diskSizeGb": 42, # Size of root disk for VMs, in GB.  If zero or unspecified, the service will
+              &quot;onHostMaintenance&quot;: &quot;A String&quot;, # The action to take on host maintenance, as defined by the Google
+                  # Compute Engine API.
+              &quot;poolArgs&quot;: { # Extra arguments for this worker pool.
+                &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+              },
+              &quot;diskSizeGb&quot;: 42, # Size of root disk for VMs, in GB.  If zero or unspecified, the service will
                   # attempt to choose a reasonable default.
-              "numWorkers": 42, # Number of Google Compute Engine workers in this pool needed to
-                  # execute the job.  If zero or unspecified, the service will
+              &quot;workerHarnessContainerImage&quot;: &quot;A String&quot;, # Required. Docker container image that executes the Cloud Dataflow worker
+                  # harness, residing in Google Container Registry.
+                  #
+                  # Deprecated for the Fn API path. Use sdk_harness_container_images instead.
+              &quot;diskType&quot;: &quot;A String&quot;, # Type of root disk for VMs.  If empty or unspecified, the service will
                   # attempt to choose a reasonable default.
-              "subnetwork": "A String", # Subnetwork to which VMs will be assigned, if desired.  Expected to be of
-                  # the form "regions/REGION/subnetworks/SUBNETWORK".
-              "dataDisks": [ # Data disks that are used by a VM in this workflow.
+              &quot;machineType&quot;: &quot;A String&quot;, # Machine type (e.g. &quot;n1-standard-1&quot;).  If empty or unspecified, the
+                  # service will attempt to choose a reasonable default.
+              &quot;kind&quot;: &quot;A String&quot;, # The kind of the worker pool; currently only `harness` and `shuffle`
+                  # are supported.
+              &quot;dataDisks&quot;: [ # Data disks that are used by a VM in this workflow.
                 { # Describes the data disk used by a workflow job.
-                  "mountPoint": "A String", # Directory in a VM where disk is mounted.
-                  "sizeGb": 42, # Size of disk in GB.  If zero or unspecified, the service will
+                  &quot;sizeGb&quot;: 42, # Size of disk in GB.  If zero or unspecified, the service will
                       # attempt to choose a reasonable default.
-                  "diskType": "A String", # Disk storage type, as defined by Google Compute Engine.  This
+                  &quot;diskType&quot;: &quot;A String&quot;, # Disk storage type, as defined by Google Compute Engine.  This
                       # must be a disk type appropriate to the project and zone in which
                       # the workers will run.  If unknown or unspecified, the service
                       # will attempt to choose a reasonable default.
                       #
                       # For example, the standard persistent disk type is a resource name
-                      # typically ending in "pd-standard".  If SSD persistent disks are
-                      # available, the resource name typically ends with "pd-ssd".  The
+                      # typically ending in &quot;pd-standard&quot;.  If SSD persistent disks are
+                      # available, the resource name typically ends with &quot;pd-ssd&quot;.  The
                       # actual valid values are defined by the Google Compute Engine API,
                       # not by the Cloud Dataflow API; consult the Google Compute Engine
                       # documentation for more information about determining the set of
@@ -2470,29 +2168,144 @@
                       # typically look something like this:
                       #
                       # compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
+                  &quot;mountPoint&quot;: &quot;A String&quot;, # Directory in a VM where disk is mounted.
                 },
               ],
-              "sdkHarnessContainerImages": [ # Set of SDK harness containers needed to execute this pipeline. This will
+              &quot;sdkHarnessContainerImages&quot;: [ # Set of SDK harness containers needed to execute this pipeline. This will
                   # only be set in the Fn API path. For non-cross-language pipelines this
                   # should have only one entry. Cross-language pipelines will have two or more
                   # entries.
                 { # Defines an SDK harness container for executing Dataflow pipelines.
-                  "containerImage": "A String", # A docker container image that resides in Google Container Registry.
-                  "useSingleCorePerContainer": True or False, # If true, recommends the Dataflow service to use only one core per SDK
+                  &quot;containerImage&quot;: &quot;A String&quot;, # A docker container image that resides in Google Container Registry.
+                  &quot;useSingleCorePerContainer&quot;: True or False, # If true, recommends the Dataflow service to use only one core per SDK
                       # container instance with this image. If false (or unset) recommends using
                       # more than one core per SDK container instance with this image for
                       # efficiency. Note that Dataflow service may choose to override this property
                       # if needed.
                 },
               ],
+              &quot;subnetwork&quot;: &quot;A String&quot;, # Subnetwork to which VMs will be assigned, if desired.  Expected to be of
+                  # the form &quot;regions/REGION/subnetworks/SUBNETWORK&quot;.
+              &quot;ipConfiguration&quot;: &quot;A String&quot;, # Configuration for VM IPs.
+              &quot;taskrunnerSettings&quot;: { # Taskrunner configuration settings. # Settings passed through to Google Compute Engine workers when
+                  # using the standard Dataflow task runner.  Users should ignore
+                  # this field.
+                &quot;alsologtostderr&quot;: True or False, # Whether to also send taskrunner log info to stderr.
+                &quot;taskGroup&quot;: &quot;A String&quot;, # The UNIX group ID on the worker VM to use for tasks launched by
+                    # taskrunner; e.g. &quot;wheel&quot;.
+                &quot;harnessCommand&quot;: &quot;A String&quot;, # The command to launch the worker harness.
+                &quot;logDir&quot;: &quot;A String&quot;, # The directory on the VM to store logs.
+                &quot;oauthScopes&quot;: [ # The OAuth2 scopes to be requested by the taskrunner in order to
+                    # access the Cloud Dataflow API.
+                  &quot;A String&quot;,
+                ],
+                &quot;dataflowApiVersion&quot;: &quot;A String&quot;, # The API version of endpoint, e.g. &quot;v1b3&quot;
+                &quot;logUploadLocation&quot;: &quot;A String&quot;, # Indicates where to put logs.  If this is not specified, the logs
+                    # will not be uploaded.
+                    #
+                    # The supported resource type is:
+                    #
+                    # Google Cloud Storage:
+                    #   storage.googleapis.com/{bucket}/{object}
+                    #   bucket.storage.googleapis.com/{object}
+                &quot;streamingWorkerMainClass&quot;: &quot;A String&quot;, # The streaming worker main class name.
+                &quot;workflowFileName&quot;: &quot;A String&quot;, # The file to store the workflow in.
+                &quot;baseTaskDir&quot;: &quot;A String&quot;, # The location on the worker for task-specific subdirectories.
+                &quot;tempStoragePrefix&quot;: &quot;A String&quot;, # The prefix of the resources the taskrunner should use for
+                    # temporary storage.
+                    #
+                    # The supported resource type is:
+                    #
+                    # Google Cloud Storage:
+                    #   storage.googleapis.com/{bucket}/{object}
+                    #   bucket.storage.googleapis.com/{object}
+                &quot;commandlinesFileName&quot;: &quot;A String&quot;, # The file to store preprocessing commands in.
+                &quot;languageHint&quot;: &quot;A String&quot;, # The suggested backend language.
+                &quot;baseUrl&quot;: &quot;A String&quot;, # The base URL for the taskrunner to use when accessing Google Cloud APIs.
+                    #
+                    # When workers access Google Cloud APIs, they logically do so via
+                    # relative URLs.  If this field is specified, it supplies the base
+                    # URL to use for resolving these relative URLs.  The normative
+                    # algorithm used is defined by RFC 1808, &quot;Relative Uniform Resource
+                    # Locators&quot;.
+                    #
+                    # If not specified, the default value is &quot;http://www.googleapis.com/&quot;
+                &quot;logToSerialconsole&quot;: True or False, # Whether to send taskrunner log info to Google Compute Engine VM serial
+                    # console.
+                &quot;continueOnException&quot;: True or False, # Whether to continue taskrunner if an exception is hit.
+                &quot;parallelWorkerSettings&quot;: { # Provides data to pass through to the worker harness. # The settings to pass to the parallel worker harness.
+                  &quot;baseUrl&quot;: &quot;A String&quot;, # The base URL for accessing Google Cloud APIs.
+                      #
+                      # When workers access Google Cloud APIs, they logically do so via
+                      # relative URLs.  If this field is specified, it supplies the base
+                      # URL to use for resolving these relative URLs.  The normative
+                      # algorithm used is defined by RFC 1808, &quot;Relative Uniform Resource
+                      # Locators&quot;.
+                      #
+                      # If not specified, the default value is &quot;http://www.googleapis.com/&quot;
+                  &quot;reportingEnabled&quot;: True or False, # Whether to send work progress updates to the service.
+                  &quot;servicePath&quot;: &quot;A String&quot;, # The Cloud Dataflow service path relative to the root URL, for example,
+                      # &quot;dataflow/v1b3/projects&quot;.
+                  &quot;shuffleServicePath&quot;: &quot;A String&quot;, # The Shuffle service path relative to the root URL, for example,
+                      # &quot;shuffle/v1beta1&quot;.
+                  &quot;workerId&quot;: &quot;A String&quot;, # The ID of the worker running this pipeline.
+                  &quot;tempStoragePrefix&quot;: &quot;A String&quot;, # The prefix of the resources the system should use for temporary
+                      # storage.
+                      #
+                      # The supported resource type is:
+                      #
+                      # Google Cloud Storage:
+                      #
+                      #   storage.googleapis.com/{bucket}/{object}
+                      #   bucket.storage.googleapis.com/{object}
+                },
+                &quot;vmId&quot;: &quot;A String&quot;, # The ID string of the VM.
+                &quot;taskUser&quot;: &quot;A String&quot;, # The UNIX user ID on the worker VM to use for tasks launched by
+                    # taskrunner; e.g. &quot;root&quot;.
+              },
+              &quot;autoscalingSettings&quot;: { # Settings for WorkerPool autoscaling. # Settings for autoscaling of this WorkerPool.
+                &quot;maxNumWorkers&quot;: 42, # The maximum number of workers to cap scaling at.
+                &quot;algorithm&quot;: &quot;A String&quot;, # The algorithm to use for autoscaling.
+              },
+              &quot;metadata&quot;: { # Metadata to set on the Google Compute Engine VMs.
+                &quot;a_key&quot;: &quot;A String&quot;,
+              },
             },
           ],
-          "clusterManagerApiService": "A String", # The type of cluster manager API to use.  If unknown or
+          &quot;dataset&quot;: &quot;A String&quot;, # The dataset for the current project where various workflow
+              # related tables are stored.
+              #
+              # The supported resource type is:
+              #
+              # Google BigQuery:
+              #   bigquery.googleapis.com/{dataset}
+          &quot;internalExperiments&quot;: { # Experimental settings.
+            &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+          },
+          &quot;workerRegion&quot;: &quot;A String&quot;, # The Compute Engine region
+              # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
+              # which worker processing should occur, e.g. &quot;us-west1&quot;. Mutually exclusive
+              # with worker_zone. If neither worker_region nor worker_zone is specified,
+              # default to the control plane&#x27;s region.
+          &quot;serviceKmsKeyName&quot;: &quot;A String&quot;, # If set, contains the Cloud KMS key identifier used to encrypt data
+              # at rest, AKA a Customer Managed Encryption Key (CMEK).
+              #
+              # Format:
+              #   projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
+          &quot;userAgent&quot;: { # A description of the process that generated the request.
+            &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+          },
+          &quot;workerZone&quot;: &quot;A String&quot;, # The Compute Engine zone
+              # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
+              # which worker processing should occur, e.g. &quot;us-west1-a&quot;. Mutually exclusive
+              # with worker_region. If neither worker_region nor worker_zone is specified,
+              # a zone in the control plane&#x27;s region is chosen based on available capacity.
+          &quot;clusterManagerApiService&quot;: &quot;A String&quot;, # The type of cluster manager API to use.  If unknown or
               # unspecified, the service will attempt to choose a reasonable
               # default.  This should be in the form of the API service name,
-              # e.g. "compute.googleapis.com".
-          "tempStoragePrefix": "A String", # The prefix of the resources the system should use for temporary
-              # storage.  The system will append the suffix "/temp-{JOBNAME} to
+              # e.g. &quot;compute.googleapis.com&quot;.
+          &quot;tempStoragePrefix&quot;: &quot;A String&quot;, # The prefix of the resources the system should use for temporary
+              # storage.  The system will append the suffix &quot;/temp-{JOBNAME}&quot; to
               # this resource prefix, where {JOBNAME} is the value of the
               # job_name field.  The resulting bucket and object prefix is used
               # as the prefix of the resources used to store temporary data
@@ -2504,11 +2317,199 @@
               #
               #   storage.googleapis.com/{bucket}/{object}
               #   bucket.storage.googleapis.com/{object}
+          &quot;experiments&quot;: [ # The list of experiments to enable.
+            &quot;A String&quot;,
+          ],
+          &quot;version&quot;: { # A structure describing which components and their versions of the service
+              # are required in order to run the job.
+            &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+          },
+          &quot;serviceAccountEmail&quot;: &quot;A String&quot;, # Identity to run virtual machines as. Defaults to the default account.
         },
-        "location": "A String", # The [regional endpoint]
-            # (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that
-            # contains this job.
-        "tempFiles": [ # A set of files the system should be aware of that are used
+        &quot;stageStates&quot;: [ # This field may be mutated by the Cloud Dataflow service;
+            # callers cannot mutate it.
+          { # A message describing the state of a particular execution stage.
+            &quot;executionStageName&quot;: &quot;A String&quot;, # The name of the execution stage.
+            &quot;currentStateTime&quot;: &quot;A String&quot;, # The time at which the stage transitioned to this state.
+            &quot;executionStageState&quot;: &quot;A String&quot;, # Execution stage states allow the same set of values as JobState.
+          },
+        ],
+        &quot;jobMetadata&quot;: { # Metadata available primarily for filtering jobs. Will be included in the # This field is populated by the Dataflow service to support filtering jobs
+            # by the metadata values provided here. Populated for ListJobs and all GetJob
+            # views SUMMARY and higher.
+            # ListJob response and Job SUMMARY view.
+          &quot;bigTableDetails&quot;: [ # Identification of a BigTable source used in the Dataflow job.
+            { # Metadata for a BigTable connector used by the job.
+              &quot;tableId&quot;: &quot;A String&quot;, # TableId accessed in the connection.
+              &quot;projectId&quot;: &quot;A String&quot;, # ProjectId accessed in the connection.
+              &quot;instanceId&quot;: &quot;A String&quot;, # InstanceId accessed in the connection.
+            },
+          ],
+          &quot;spannerDetails&quot;: [ # Identification of a Spanner source used in the Dataflow job.
+            { # Metadata for a Spanner connector used by the job.
+              &quot;databaseId&quot;: &quot;A String&quot;, # DatabaseId accessed in the connection.
+              &quot;instanceId&quot;: &quot;A String&quot;, # InstanceId accessed in the connection.
+              &quot;projectId&quot;: &quot;A String&quot;, # ProjectId accessed in the connection.
+            },
+          ],
+          &quot;datastoreDetails&quot;: [ # Identification of a Datastore source used in the Dataflow job.
+            { # Metadata for a Datastore connector used by the job.
+              &quot;projectId&quot;: &quot;A String&quot;, # ProjectId accessed in the connection.
+              &quot;namespace&quot;: &quot;A String&quot;, # Namespace used in the connection.
+            },
+          ],
+          &quot;sdkVersion&quot;: { # The version of the SDK used to run the job. # The SDK version used to run the job.
+            &quot;versionDisplayName&quot;: &quot;A String&quot;, # A readable string describing the version of the SDK.
+            &quot;sdkSupportStatus&quot;: &quot;A String&quot;, # The support status for this SDK version.
+            &quot;version&quot;: &quot;A String&quot;, # The version of the SDK used to run the job.
+          },
+          &quot;bigqueryDetails&quot;: [ # Identification of a BigQuery source used in the Dataflow job.
+            { # Metadata for a BigQuery connector used by the job.
+              &quot;table&quot;: &quot;A String&quot;, # Table accessed in the connection.
+              &quot;dataset&quot;: &quot;A String&quot;, # Dataset accessed in the connection.
+              &quot;projectId&quot;: &quot;A String&quot;, # Project accessed in the connection.
+              &quot;query&quot;: &quot;A String&quot;, # Query used to access data in the connection.
+            },
+          ],
+          &quot;fileDetails&quot;: [ # Identification of a File source used in the Dataflow job.
+            { # Metadata for a File connector used by the job.
+              &quot;filePattern&quot;: &quot;A String&quot;, # File Pattern used to access files by the connector.
+            },
+          ],
+          &quot;pubsubDetails&quot;: [ # Identification of a PubSub source used in the Dataflow job.
+            { # Metadata for a PubSub connector used by the job.
+              &quot;subscription&quot;: &quot;A String&quot;, # Subscription used in the connection.
+              &quot;topic&quot;: &quot;A String&quot;, # Topic accessed in the connection.
+            },
+          ],
+        },
+        &quot;createdFromSnapshotId&quot;: &quot;A String&quot;, # If this is specified, the job&#x27;s initial state is populated from the given
+            # snapshot.
+        &quot;projectId&quot;: &quot;A String&quot;, # The ID of the Cloud Platform project that the job belongs to.
+        &quot;type&quot;: &quot;A String&quot;, # The type of Cloud Dataflow job.
+        &quot;pipelineDescription&quot;: { # A descriptive representation of submitted pipeline as well as the executed # Preliminary field: The format of this data may change at any time.
+            # A description of the user pipeline and stages through which it is executed.
+            # Created by Cloud Dataflow service.  Only retrieved with
+            # JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
+            # form.  This data is provided by the Dataflow service for ease of visualizing
+            # the pipeline and interpreting Dataflow provided metrics.
+          &quot;executionPipelineStage&quot;: [ # Description of each stage of execution of the pipeline.
+            { # Description of the composing transforms, names/ids, and input/outputs of a
+                # stage of execution.  Some composing transforms and sources may have been
+                # generated by the Dataflow service during execution planning.
+              &quot;id&quot;: &quot;A String&quot;, # Dataflow service generated id for this stage.
+              &quot;componentTransform&quot;: [ # Transforms that comprise this execution stage.
+                { # Description of a transform executed as part of an execution stage.
+                  &quot;originalTransform&quot;: &quot;A String&quot;, # User name for the original user transform with which this transform is
+                      # most closely associated.
+                  &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+                  &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this transform; may be user or system generated.
+                },
+              ],
+              &quot;componentSource&quot;: [ # Collections produced and consumed by component transforms of this stage.
+                { # Description of an interstitial value between transforms in an execution
+                    # stage.
+                  &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+                  &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this transform; may be user or system generated.
+                  &quot;originalTransformOrCollection&quot;: &quot;A String&quot;, # User name for the original user transform or collection with which this
+                      # source is most closely associated.
+                },
+              ],
+              &quot;kind&quot;: &quot;A String&quot;, # Type of transform this stage is executing.
+              &quot;outputSource&quot;: [ # Output sources for this stage.
+                { # Description of an input or output of an execution stage.
+                  &quot;originalTransformOrCollection&quot;: &quot;A String&quot;, # User name for the original user transform or collection with which this
+                      # source is most closely associated.
+                  &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+                  &quot;sizeBytes&quot;: &quot;A String&quot;, # Size of the source, if measurable.
+                  &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this source; may be user or system generated.
+                },
+              ],
+              &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this stage.
+              &quot;inputSource&quot;: [ # Input sources for this stage.
+                { # Description of an input or output of an execution stage.
+                  &quot;originalTransformOrCollection&quot;: &quot;A String&quot;, # User name for the original user transform or collection with which this
+                      # source is most closely associated.
+                  &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+                  &quot;sizeBytes&quot;: &quot;A String&quot;, # Size of the source, if measurable.
+                  &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this source; may be user or system generated.
+                },
+              ],
+            },
+          ],
+          &quot;originalPipelineTransform&quot;: [ # Description of each transform in the pipeline and collections between them.
+            { # Description of the type, names/ids, and input/outputs for a transform.
+              &quot;kind&quot;: &quot;A String&quot;, # Type of transform.
+              &quot;inputCollectionName&quot;: [ # User names for all collection inputs to this transform.
+                &quot;A String&quot;,
+              ],
+              &quot;name&quot;: &quot;A String&quot;, # User provided name for this transform instance.
+              &quot;id&quot;: &quot;A String&quot;, # SDK generated id of this transform instance.
+              &quot;displayData&quot;: [ # Transform-specific display data.
+                { # Data provided with a pipeline or transform to provide descriptive info.
+                  &quot;timestampValue&quot;: &quot;A String&quot;, # Contains value if the data is of timestamp type.
+                  &quot;boolValue&quot;: True or False, # Contains value if the data is of a boolean type.
+                  &quot;javaClassValue&quot;: &quot;A String&quot;, # Contains value if the data is of java class type.
+                  &quot;strValue&quot;: &quot;A String&quot;, # Contains value if the data is of string type.
+                  &quot;int64Value&quot;: &quot;A String&quot;, # Contains value if the data is of int64 type.
+                  &quot;durationValue&quot;: &quot;A String&quot;, # Contains value if the data is of duration type.
+                  &quot;namespace&quot;: &quot;A String&quot;, # The namespace for the key. This is usually a class name or programming
+                      # language namespace (i.e. python module) which defines the display data.
+                      # This allows a dax monitoring system to specially handle the data
+                      # and perform custom rendering.
+                  &quot;floatValue&quot;: 3.14, # Contains value if the data is of float type.
+                  &quot;key&quot;: &quot;A String&quot;, # The key identifying the display data.
+                      # This is intended to be used as a label for the display data
+                      # when viewed in a dax monitoring system.
+                  &quot;shortStrValue&quot;: &quot;A String&quot;, # A possible additional shorter value to display.
+                      # For example a java_class_name_value of com.mypackage.MyDoFn
+                      # will be stored with MyDoFn as the short_str_value and
+                      # com.mypackage.MyDoFn as the java_class_name value.
+                      # short_str_value can be displayed and java_class_name_value
+                      # will be displayed as a tooltip.
+                  &quot;url&quot;: &quot;A String&quot;, # An optional full URL.
+                  &quot;label&quot;: &quot;A String&quot;, # An optional label to display in a dax UI for the element.
+                },
+              ],
+              &quot;outputCollectionName&quot;: [ # User names for all collection outputs to this transform.
+                &quot;A String&quot;,
+              ],
+            },
+          ],
+          &quot;displayData&quot;: [ # Pipeline level display data.
+            { # Data provided with a pipeline or transform to provide descriptive info.
+              &quot;timestampValue&quot;: &quot;A String&quot;, # Contains value if the data is of timestamp type.
+              &quot;boolValue&quot;: True or False, # Contains value if the data is of a boolean type.
+              &quot;javaClassValue&quot;: &quot;A String&quot;, # Contains value if the data is of java class type.
+              &quot;strValue&quot;: &quot;A String&quot;, # Contains value if the data is of string type.
+              &quot;int64Value&quot;: &quot;A String&quot;, # Contains value if the data is of int64 type.
+              &quot;durationValue&quot;: &quot;A String&quot;, # Contains value if the data is of duration type.
+              &quot;namespace&quot;: &quot;A String&quot;, # The namespace for the key. This is usually a class name or programming
+                  # language namespace (i.e. python module) which defines the display data.
+                  # This allows a dax monitoring system to specially handle the data
+                  # and perform custom rendering.
+              &quot;floatValue&quot;: 3.14, # Contains value if the data is of float type.
+              &quot;key&quot;: &quot;A String&quot;, # The key identifying the display data.
+                  # This is intended to be used as a label for the display data
+                  # when viewed in a dax monitoring system.
+              &quot;shortStrValue&quot;: &quot;A String&quot;, # A possible additional shorter value to display.
+                  # For example a java_class_name_value of com.mypackage.MyDoFn
+                  # will be stored with MyDoFn as the short_str_value and
+                  # com.mypackage.MyDoFn as the java_class_name value.
+                  # short_str_value can be displayed and java_class_name_value
+                  # will be displayed as a tooltip.
+              &quot;url&quot;: &quot;A String&quot;, # An optional full URL.
+              &quot;label&quot;: &quot;A String&quot;, # An optional label to display in a dax UI for the element.
+            },
+          ],
+        },
+        &quot;replaceJobId&quot;: &quot;A String&quot;, # If this job is an update of an existing job, this field is the job ID
+            # of the job it replaced.
+            #
+            # When sending a `CreateJobRequest`, you can update a job by specifying it
+            # here. The job named here is stopped, and its intermediate state is
+            # transferred to this job.
+        &quot;tempFiles&quot;: [ # A set of files the system should be aware of that are used
             # for temporary storage. These temporary files will be
             # removed on job completion.
             # No duplicates are allowed.
@@ -2520,36 +2521,9 @@
             #
             #    storage.googleapis.com/{bucket}/{object}
             #    bucket.storage.googleapis.com/{object}
-          "A String",
+          &quot;A String&quot;,
         ],
-        "type": "A String", # The type of Cloud Dataflow job.
-        "clientRequestId": "A String", # The client's unique identifier of the job, re-used across retried attempts.
-            # If this field is set, the service will ensure its uniqueness.
-            # The request to create a job will fail if the service has knowledge of a
-            # previously submitted job with the same client's ID and job name.
-            # The caller may use this field to ensure idempotence of job
-            # creation across retried attempts to create a job.
-            # By default, the field is empty and, in that case, the service ignores it.
-        "createdFromSnapshotId": "A String", # If this is specified, the job's initial state is populated from the given
-            # snapshot.
-        "stepsLocation": "A String", # The GCS location where the steps are stored.
-        "currentStateTime": "A String", # The timestamp associated with the current state.
-        "startTime": "A String", # The timestamp when the job was started (transitioned to JOB_STATE_PENDING).
-            # Flexible resource scheduling jobs are started with some delay after job
-            # creation, so start_time is unset before start and is updated when the
-            # job is started by the Cloud Dataflow service. For other jobs, start_time
-            # always equals to create_time and is immutable and set by the Cloud Dataflow
-            # service.
-        "createTime": "A String", # The timestamp when the job was initially created. Immutable and set by the
-            # Cloud Dataflow service.
-        "requestedState": "A String", # The job's requested state.
-            #
-            # `UpdateJob` may be used to switch between the `JOB_STATE_STOPPED` and
-            # `JOB_STATE_RUNNING` states, by setting requested_state.  `UpdateJob` may
-            # also be used to directly set a job's requested state to
-            # `JOB_STATE_CANCELLED` or `JOB_STATE_DONE`, irrevocably terminating the
-            # job if it has not already reached a terminal state.
-        "name": "A String", # The user-specified Cloud Dataflow job name.
+        &quot;name&quot;: &quot;A String&quot;, # The user-specified Cloud Dataflow job name.
             #
             # Only one Job with a given name may exist in a project at any
             # given time. If a caller attempts to create a Job with the same
@@ -2558,7 +2532,7 @@
             #
             # The name must match the regular expression
             # `[a-z]([-a-z0-9]{0,38}[a-z0-9])?`
-        "steps": [ # Exactly one of step or steps_location should be specified.
+        &quot;steps&quot;: [ # Exactly one of step or steps_location should be specified.
             #
             # The top-level steps that constitute the entire job.
           { # Defines a particular step within a Cloud Dataflow job.
@@ -2567,11 +2541,11 @@
               # specific operation as part of the overall job.  Data is typically
               # passed from one step to another as part of the job.
               #
-              # Here's an example of a sequence of steps which together implement a
+              # Here&#x27;s an example of a sequence of steps which together implement a
               # Map-Reduce job:
               #
               #   * Read a collection of data from some source, parsing the
-              #     collection's elements.
+              #     collection&#x27;s elements.
               #
               #   * Validate the elements.
               #
@@ -2586,23 +2560,32 @@
               #
               # Note that the Cloud Dataflow service may be used to run many different
               # types of jobs, not just Map-Reduce.
-            "kind": "A String", # The kind of step in the Cloud Dataflow job.
-            "name": "A String", # The name that identifies the step. This must be unique for each
+            &quot;name&quot;: &quot;A String&quot;, # The name that identifies the step. This must be unique for each
                 # step with respect to all other steps in the Cloud Dataflow job.
-            "properties": { # Named properties associated with the step. Each kind of
+            &quot;kind&quot;: &quot;A String&quot;, # The kind of step in the Cloud Dataflow job.
+            &quot;properties&quot;: { # Named properties associated with the step. Each kind of
                 # predefined step has its own required set of properties.
                 # Must be provided on Create.  Only retrieved with JOB_VIEW_ALL.
-              "a_key": "", # Properties of the object.
+              &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
             },
           },
         ],
-        "replaceJobId": "A String", # If this job is an update of an existing job, this field is the job ID
-            # of the job it replaced.
-            #
-            # When sending a `CreateJobRequest`, you can update a job by specifying it
-            # here. The job named here is stopped, and its intermediate state is
-            # transferred to this job.
-        "currentState": "A String", # The current state of the job.
+        &quot;replacedByJobId&quot;: &quot;A String&quot;, # If another job is an update of this job (and thus, this job is in
+            # `JOB_STATE_UPDATED`), this field contains the ID of that job.
+        &quot;executionInfo&quot;: { # Additional information about how a Cloud Dataflow job will be executed that # Deprecated.
+            # isn&#x27;t contained in the submitted job.
+          &quot;stages&quot;: { # A mapping from each stage to the information about that stage.
+            &quot;a_key&quot;: { # Contains information about how a particular
+                # google.dataflow.v1beta3.Step will be executed.
+              &quot;stepName&quot;: [ # The steps associated with the execution stage.
+                  # Note that stages may have several steps, and that a given step
+                  # might be run by more than one stage.
+                &quot;A String&quot;,
+              ],
+            },
+          },
+        },
+        &quot;currentState&quot;: &quot;A String&quot;, # The current state of the job.
             #
             # Jobs are created in the `JOB_STATE_STOPPED` state unless otherwise
             # specified.
@@ -2613,19 +2596,36 @@
             #
             # This field may be mutated by the Cloud Dataflow service;
             # callers cannot mutate it.
-        "executionInfo": { # Additional information about how a Cloud Dataflow job will be executed that # Deprecated.
-            # isn't contained in the submitted job.
-          "stages": { # A mapping from each stage to the information about that stage.
-            "a_key": { # Contains information about how a particular
-                # google.dataflow.v1beta3.Step will be executed.
-              "stepName": [ # The steps associated with the execution stage.
-                  # Note that stages may have several steps, and that a given step
-                  # might be run by more than one stage.
-                "A String",
-              ],
-            },
-          },
+        &quot;location&quot;: &quot;A String&quot;, # The [regional endpoint]
+            # (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that
+            # contains this job.
+        &quot;startTime&quot;: &quot;A String&quot;, # The timestamp when the job was started (transitioned to JOB_STATE_PENDING).
+            # Flexible resource scheduling jobs are started with some delay after job
+            # creation, so start_time is unset before start and is updated when the
+            # job is started by the Cloud Dataflow service. For other jobs, start_time
+            # always equals create_time and is immutable and set by the Cloud Dataflow
+            # service.
+        &quot;stepsLocation&quot;: &quot;A String&quot;, # The GCS location where the steps are stored.
+        &quot;labels&quot;: { # User-defined labels for this job.
+            #
+            # The labels map can contain no more than 64 entries.  Entries of the labels
+            # map are UTF8 strings that comply with the following restrictions:
+            #
+            # * Keys must conform to regexp:  \p{Ll}\p{Lo}{0,62}
+            # * Values must conform to regexp:  [\p{Ll}\p{Lo}\p{N}_-]{0,63}
+            # * Both keys and values are additionally constrained to be &lt;= 128 bytes in
+            # size.
+          &quot;a_key&quot;: &quot;A String&quot;,
         },
+        &quot;createTime&quot;: &quot;A String&quot;, # The timestamp when the job was initially created. Immutable and set by the
+            # Cloud Dataflow service.
+        &quot;requestedState&quot;: &quot;A String&quot;, # The job&#x27;s requested state.
+            #
+            # `UpdateJob` may be used to switch between the `JOB_STATE_STOPPED` and
+            # `JOB_STATE_RUNNING` states, by setting requested_state.  `UpdateJob` may
+            # also be used to directly set a job&#x27;s requested state to
+            # `JOB_STATE_CANCELLED` or `JOB_STATE_DONE`, irrevocably terminating the
+            # job if it has not already reached a terminal state.
       },
     ],
   }</pre>
@@ -2640,7 +2640,7 @@
   previous_response: The response from the request for the previous page. (required)
 
 Returns:
-  A request object that you can call 'execute()' on to request the next
+  A request object that you can call &#x27;execute()&#x27; on to request the next
   page. Returns None if there are no more items in the collection.
     </pre>
 </div>
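
A hedged usage sketch of the pagination helper described above (placeholder project and region; Application Default Credentials assumed):

```python
from googleapiclient.discovery import build

dataflow = build("dataflow", "v1b3")
jobs = dataflow.projects().locations().jobs()

# Placeholder identifiers; pageSize is optional and shown only for illustration.
request = jobs.list(projectId="my-project", location="us-central1", pageSize=50)
while request is not None:
    response = request.execute()
    for job in response.get("jobs", []):
        print(job["id"], job.get("currentState"))
    # list_next returns None once there are no more pages in the collection.
    request = jobs.list_next(previous_request=request, previous_response=response)
```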
@@ -2657,10 +2657,10 @@
     The object takes the form of:
 
 { # Request to create a snapshot of a job.
-    "location": "A String", # The location that contains this job.
-    "ttl": "A String", # TTL for the snapshot.
-    "description": "A String", # User specified description of the snapshot. Maybe empty.
-    "snapshotSources": True or False, # If true, perform snapshots for sources which support this.
+    &quot;description&quot;: &quot;A String&quot;, # User-specified description of the snapshot. May be empty.
+    &quot;snapshotSources&quot;: True or False, # If true, perform snapshots for sources which support this.
+    &quot;ttl&quot;: &quot;A String&quot;, # TTL for the snapshot.
+    &quot;location&quot;: &quot;A String&quot;, # The location that contains this job.
   }
 
   x__xgafv: string, V1 error format.
@@ -2672,22 +2672,22 @@
   An object of the form:
 
     { # Represents a snapshot of a job.
-    "sourceJobId": "A String", # The job this snapshot was created from.
-    "diskSizeBytes": "A String", # The disk byte size of the snapshot. Only available for snapshots in READY
+    &quot;state&quot;: &quot;A String&quot;, # State of the snapshot.
+    &quot;sourceJobId&quot;: &quot;A String&quot;, # The job this snapshot was created from.
+    &quot;projectId&quot;: &quot;A String&quot;, # The project this snapshot belongs to.
+    &quot;id&quot;: &quot;A String&quot;, # The unique ID of this snapshot.
+    &quot;ttl&quot;: &quot;A String&quot;, # The time after which this snapshot will be automatically deleted.
+    &quot;description&quot;: &quot;A String&quot;, # User-specified description of the snapshot. May be empty.
+    &quot;diskSizeBytes&quot;: &quot;A String&quot;, # The disk byte size of the snapshot. Only available for snapshots in READY
         # state.
-    "description": "A String", # User specified description of the snapshot. Maybe empty.
-    "projectId": "A String", # The project this snapshot belongs to.
-    "creationTime": "A String", # The time this snapshot was created.
-    "state": "A String", # State of the snapshot.
-    "ttl": "A String", # The time after which this snapshot will be automatically deleted.
-    "pubsubMetadata": [ # PubSub snapshot metadata.
+    &quot;pubsubMetadata&quot;: [ # PubSub snapshot metadata.
       { # Represents a Pubsub snapshot.
-        "expireTime": "A String", # The expire time of the Pubsub snapshot.
-        "snapshotName": "A String", # The name of the Pubsub snapshot.
-        "topicName": "A String", # The name of the Pubsub topic.
+        &quot;expireTime&quot;: &quot;A String&quot;, # The expire time of the Pubsub snapshot.
+        &quot;snapshotName&quot;: &quot;A String&quot;, # The name of the Pubsub snapshot.
+        &quot;topicName&quot;: &quot;A String&quot;, # The name of the Pubsub topic.
       },
     ],
-    "id": "A String", # The unique ID of this snapshot.
+    &quot;creationTime&quot;: &quot;A String&quot;, # The time this snapshot was created.
   }</pre>
 </div>
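
A minimal sketch of calling the snapshot method with a request body shaped like the one documented above; the project, region, job ID, and TTL below are hypothetical placeholders, and Application Default Credentials are assumed:

```python
from googleapiclient.discovery import build

dataflow = build("dataflow", "v1b3")

# Body mirrors the snapshot request fields documented above; all values are placeholders.
body = {
    "description": "pre-upgrade snapshot",
    "snapshotSources": True,
    "ttl": "604800s",          # hypothetical one-week TTL as a Duration string
    "location": "us-central1",
}
snapshot = dataflow.projects().locations().jobs().snapshot(
    projectId="my-project",
    location="us-central1",
    jobId="2020-01-01_00_00_00-123456789",  # placeholder job ID
    body=body,
).execute()
print(snapshot["id"], snapshot["state"])
```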
 
@@ -2711,382 +2711,71 @@
     The object takes the form of:
 
 { # Defines a job to be run by the Cloud Dataflow service.
-  "labels": { # User-defined labels for this job.
-      # 
-      # The labels map can contain no more than 64 entries.  Entries of the labels
-      # map are UTF8 strings that comply with the following restrictions:
-      # 
-      # * Keys must conform to regexp:  \p{Ll}\p{Lo}{0,62}
-      # * Values must conform to regexp:  [\p{Ll}\p{Lo}\p{N}_-]{0,63}
-      # * Both keys and values are additionally constrained to be &lt;= 128 bytes in
-      # size.
-    "a_key": "A String",
-  },
-  "jobMetadata": { # Metadata available primarily for filtering jobs. Will be included in the # This field is populated by the Dataflow service to support filtering jobs
-      # by the metadata values provided here. Populated for ListJobs and all GetJob
-      # views SUMMARY and higher.
-      # ListJob response and Job SUMMARY view.
-    "sdkVersion": { # The version of the SDK used to run the job. # The SDK version used to run the job.
-      "versionDisplayName": "A String", # A readable string describing the version of the SDK.
-      "version": "A String", # The version of the SDK used to run the job.
-      "sdkSupportStatus": "A String", # The support status for this SDK version.
-    },
-    "pubsubDetails": [ # Identification of a PubSub source used in the Dataflow job.
-      { # Metadata for a PubSub connector used by the job.
-        "topic": "A String", # Topic accessed in the connection.
-        "subscription": "A String", # Subscription used in the connection.
-      },
-    ],
-    "datastoreDetails": [ # Identification of a Datastore source used in the Dataflow job.
-      { # Metadata for a Datastore connector used by the job.
-        "projectId": "A String", # ProjectId accessed in the connection.
-        "namespace": "A String", # Namespace used in the connection.
-      },
-    ],
-    "fileDetails": [ # Identification of a File source used in the Dataflow job.
-      { # Metadata for a File connector used by the job.
-        "filePattern": "A String", # File Pattern used to access files by the connector.
-      },
-    ],
-    "spannerDetails": [ # Identification of a Spanner source used in the Dataflow job.
-      { # Metadata for a Spanner connector used by the job.
-        "instanceId": "A String", # InstanceId accessed in the connection.
-        "projectId": "A String", # ProjectId accessed in the connection.
-        "databaseId": "A String", # DatabaseId accessed in the connection.
-      },
-    ],
-    "bigTableDetails": [ # Identification of a BigTable source used in the Dataflow job.
-      { # Metadata for a BigTable connector used by the job.
-        "instanceId": "A String", # InstanceId accessed in the connection.
-        "projectId": "A String", # ProjectId accessed in the connection.
-        "tableId": "A String", # TableId accessed in the connection.
-      },
-    ],
-    "bigqueryDetails": [ # Identification of a BigQuery source used in the Dataflow job.
-      { # Metadata for a BigQuery connector used by the job.
-        "projectId": "A String", # Project accessed in the connection.
-        "query": "A String", # Query used to access data in the connection.
-        "table": "A String", # Table accessed in the connection.
-        "dataset": "A String", # Dataset accessed in the connection.
-      },
-    ],
-  },
-  "pipelineDescription": { # A descriptive representation of submitted pipeline as well as the executed # Preliminary field: The format of this data may change at any time.
-      # A description of the user pipeline and stages through which it is executed.
-      # Created by Cloud Dataflow service.  Only retrieved with
-      # JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
-      # form.  This data is provided by the Dataflow service for ease of visualizing
-      # the pipeline and interpreting Dataflow provided metrics.
-    "originalPipelineTransform": [ # Description of each transform in the pipeline and collections between them.
-      { # Description of the type, names/ids, and input/outputs for a transform.
-        "kind": "A String", # Type of transform.
-        "name": "A String", # User provided name for this transform instance.
-        "inputCollectionName": [ # User names for all collection inputs to this transform.
-          "A String",
-        ],
-        "displayData": [ # Transform-specific display data.
-          { # Data provided with a pipeline or transform to provide descriptive info.
-            "key": "A String", # The key identifying the display data.
-                # This is intended to be used as a label for the display data
-                # when viewed in a dax monitoring system.
-            "shortStrValue": "A String", # A possible additional shorter value to display.
-                # For example a java_class_name_value of com.mypackage.MyDoFn
-                # will be stored with MyDoFn as the short_str_value and
-                # com.mypackage.MyDoFn as the java_class_name value.
-                # short_str_value can be displayed and java_class_name_value
-                # will be displayed as a tooltip.
-            "timestampValue": "A String", # Contains value if the data is of timestamp type.
-            "url": "A String", # An optional full URL.
-            "floatValue": 3.14, # Contains value if the data is of float type.
-            "namespace": "A String", # The namespace for the key. This is usually a class name or programming
-                # language namespace (i.e. python module) which defines the display data.
-                # This allows a dax monitoring system to specially handle the data
-                # and perform custom rendering.
-            "javaClassValue": "A String", # Contains value if the data is of java class type.
-            "label": "A String", # An optional label to display in a dax UI for the element.
-            "boolValue": True or False, # Contains value if the data is of a boolean type.
-            "strValue": "A String", # Contains value if the data is of string type.
-            "durationValue": "A String", # Contains value if the data is of duration type.
-            "int64Value": "A String", # Contains value if the data is of int64 type.
-          },
-        ],
-        "outputCollectionName": [ # User  names for all collection outputs to this transform.
-          "A String",
-        ],
-        "id": "A String", # SDK generated id of this transform instance.
-      },
-    ],
-    "executionPipelineStage": [ # Description of each stage of execution of the pipeline.
-      { # Description of the composing transforms, names/ids, and input/outputs of a
-          # stage of execution.  Some composing transforms and sources may have been
-          # generated by the Dataflow service during execution planning.
-        "componentSource": [ # Collections produced and consumed by component transforms of this stage.
-          { # Description of an interstitial value between transforms in an execution
-              # stage.
-            "userName": "A String", # Human-readable name for this transform; may be user or system generated.
-            "originalTransformOrCollection": "A String", # User name for the original user transform or collection with which this
-                # source is most closely associated.
-            "name": "A String", # Dataflow service generated name for this source.
-          },
-        ],
-        "kind": "A String", # Type of tranform this stage is executing.
-        "name": "A String", # Dataflow service generated name for this stage.
-        "outputSource": [ # Output sources for this stage.
-          { # Description of an input or output of an execution stage.
-            "userName": "A String", # Human-readable name for this source; may be user or system generated.
-            "sizeBytes": "A String", # Size of the source, if measurable.
-            "name": "A String", # Dataflow service generated name for this source.
-            "originalTransformOrCollection": "A String", # User name for the original user transform or collection with which this
-                # source is most closely associated.
-          },
-        ],
-        "inputSource": [ # Input sources for this stage.
-          { # Description of an input or output of an execution stage.
-            "userName": "A String", # Human-readable name for this source; may be user or system generated.
-            "sizeBytes": "A String", # Size of the source, if measurable.
-            "name": "A String", # Dataflow service generated name for this source.
-            "originalTransformOrCollection": "A String", # User name for the original user transform or collection with which this
-                # source is most closely associated.
-          },
-        ],
-        "componentTransform": [ # Transforms that comprise this execution stage.
-          { # Description of a transform executed as part of an execution stage.
-            "userName": "A String", # Human-readable name for this transform; may be user or system generated.
-            "originalTransform": "A String", # User name for the original user transform with which this transform is
-                # most closely associated.
-            "name": "A String", # Dataflow service generated name for this source.
-          },
-        ],
-        "id": "A String", # Dataflow service generated id for this stage.
-      },
-    ],
-    "displayData": [ # Pipeline level display data.
-      { # Data provided with a pipeline or transform to provide descriptive info.
-        "key": "A String", # The key identifying the display data.
-            # This is intended to be used as a label for the display data
-            # when viewed in a dax monitoring system.
-        "shortStrValue": "A String", # A possible additional shorter value to display.
-            # For example a java_class_name_value of com.mypackage.MyDoFn
-            # will be stored with MyDoFn as the short_str_value and
-            # com.mypackage.MyDoFn as the java_class_name value.
-            # short_str_value can be displayed and java_class_name_value
-            # will be displayed as a tooltip.
-        "timestampValue": "A String", # Contains value if the data is of timestamp type.
-        "url": "A String", # An optional full URL.
-        "floatValue": 3.14, # Contains value if the data is of float type.
-        "namespace": "A String", # The namespace for the key. This is usually a class name or programming
-            # language namespace (i.e. python module) which defines the display data.
-            # This allows a dax monitoring system to specially handle the data
-            # and perform custom rendering.
-        "javaClassValue": "A String", # Contains value if the data is of java class type.
-        "label": "A String", # An optional label to display in a dax UI for the element.
-        "boolValue": True or False, # Contains value if the data is of a boolean type.
-        "strValue": "A String", # Contains value if the data is of string type.
-        "durationValue": "A String", # Contains value if the data is of duration type.
-        "int64Value": "A String", # Contains value if the data is of int64 type.
-      },
-    ],
-  },
-  "stageStates": [ # This field may be mutated by the Cloud Dataflow service;
-      # callers cannot mutate it.
-    { # A message describing the state of a particular execution stage.
-      "executionStageName": "A String", # The name of the execution stage.
-      "executionStageState": "A String", # Executions stage states allow the same set of values as JobState.
-      "currentStateTime": "A String", # The time at which the stage transitioned to this state.
-    },
-  ],
-  "id": "A String", # The unique ID of this job.
+  &quot;clientRequestId&quot;: &quot;A String&quot;, # The client&#x27;s unique identifier of the job, re-used across retried attempts.
+      # If this field is set, the service will ensure its uniqueness.
+      # The request to create a job will fail if the service has knowledge of a
+      # previously submitted job with the same client&#x27;s ID and job name.
+      # The caller may use this field to ensure idempotence of job
+      # creation across retried attempts to create a job.
+      # By default, the field is empty and, in that case, the service ignores it.
+  &quot;id&quot;: &quot;A String&quot;, # The unique ID of this job.
       # 
       # This field is set by the Cloud Dataflow service when the Job is
       # created, and is immutable for the life of the job.
-  "replacedByJobId": "A String", # If another job is an update of this job (and thus, this job is in
-      # `JOB_STATE_UPDATED`), this field contains the ID of that job.
-  "projectId": "A String", # The ID of the Cloud Platform project that the job belongs to.
-  "transformNameMapping": { # The map of transform name prefixes of the job to be replaced to the
+  &quot;currentStateTime&quot;: &quot;A String&quot;, # The timestamp associated with the current state.
+  &quot;transformNameMapping&quot;: { # The map of transform name prefixes of the job to be replaced to the
       # corresponding name prefixes of the new job.
-    "a_key": "A String",
+    &quot;a_key&quot;: &quot;A String&quot;,
   },
-  "environment": { # Describes the environment in which a Dataflow Job runs. # The environment for the job.
-    "workerRegion": "A String", # The Compute Engine region
-        # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
-        # which worker processing should occur, e.g. "us-west1". Mutually exclusive
-        # with worker_zone. If neither worker_region nor worker_zone is specified,
-        # default to the control plane's region.
-    "version": { # A structure describing which components and their versions of the service
-        # are required in order to run the job.
-      "a_key": "", # Properties of the object.
-    },
-    "flexResourceSchedulingGoal": "A String", # Which Flexible Resource Scheduling mode to run in.
-    "serviceKmsKeyName": "A String", # If set, contains the Cloud KMS key identifier used to encrypt data
-        # at rest, AKA a Customer Managed Encryption Key (CMEK).
-        #
-        # Format:
-        #   projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
-    "internalExperiments": { # Experimental settings.
-      "a_key": "", # Properties of the object. Contains field @type with type URL.
-    },
-    "dataset": "A String", # The dataset for the current project where various workflow
-        # related tables are stored.
-        #
-        # The supported resource type is:
-        #
-        # Google BigQuery:
-        #   bigquery.googleapis.com/{dataset}
-    "experiments": [ # The list of experiments to enable.
-      "A String",
-    ],
-    "serviceAccountEmail": "A String", # Identity to run virtual machines as. Defaults to the default account.
-    "sdkPipelineOptions": { # The Cloud Dataflow SDK pipeline options specified by the user. These
+  &quot;environment&quot;: { # Describes the environment in which a Dataflow Job runs. # The environment for the job.
+    &quot;sdkPipelineOptions&quot;: { # The Cloud Dataflow SDK pipeline options specified by the user. These
         # options are passed through the service and are used to recreate the
         # SDK pipeline options on the worker in a language agnostic and platform
         # independent way.
-      "a_key": "", # Properties of the object.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
     },
-    "userAgent": { # A description of the process that generated the request.
-      "a_key": "", # Properties of the object.
-    },
-    "workerZone": "A String", # The Compute Engine zone
-        # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
-        # which worker processing should occur, e.g. "us-west1-a". Mutually exclusive
-        # with worker_region. If neither worker_region nor worker_zone is specified,
-        # a zone in the control plane's region is chosen based on available capacity.
-    "workerPools": [ # The worker pools. At least one "harness" worker pool must be
+    &quot;flexResourceSchedulingGoal&quot;: &quot;A String&quot;, # Which Flexible Resource Scheduling mode to run in.
+    &quot;workerPools&quot;: [ # The worker pools. At least one &quot;harness&quot; worker pool must be
         # specified in order for the job to have workers.
       { # Describes one particular pool of Cloud Dataflow workers to be
           # instantiated by the Cloud Dataflow service in order to perform the
           # computations required by a job.  Note that a workflow job may use
           # multiple pools, in order to match the various computational
           # requirements of the various stages of the job.
-        "workerHarnessContainerImage": "A String", # Required. Docker container image that executes the Cloud Dataflow worker
-            # harness, residing in Google Container Registry.
-            #
-            # Deprecated for the Fn API path. Use sdk_harness_container_images instead.
-        "ipConfiguration": "A String", # Configuration for VM IPs.
-        "autoscalingSettings": { # Settings for WorkerPool autoscaling. # Settings for autoscaling of this WorkerPool.
-          "maxNumWorkers": 42, # The maximum number of workers to cap scaling at.
-          "algorithm": "A String", # The algorithm to use for autoscaling.
-        },
-        "diskSourceImage": "A String", # Fully qualified source image for disks.
-        "network": "A String", # Network to which VMs will be assigned.  If empty or unspecified,
-            # the service will use the network "default".
-        "zone": "A String", # Zone to run the worker pools in.  If empty or unspecified, the service
+        &quot;defaultPackageSet&quot;: &quot;A String&quot;, # The default package set to install.  This allows the service to
+            # select a default set of packages which are useful to worker
+            # harnesses written in a particular language.
+        &quot;network&quot;: &quot;A String&quot;, # Network to which VMs will be assigned.  If empty or unspecified,
+            # the service will use the network &quot;default&quot;.
+        &quot;zone&quot;: &quot;A String&quot;, # Zone to run the worker pools in.  If empty or unspecified, the service
             # will attempt to choose a reasonable default.
-        "metadata": { # Metadata to set on the Google Compute Engine VMs.
-          "a_key": "A String",
-        },
-        "machineType": "A String", # Machine type (e.g. "n1-standard-1").  If empty or unspecified, the
-            # service will attempt to choose a reasonable default.
-        "onHostMaintenance": "A String", # The action to take on host maintenance, as defined by the Google
-            # Compute Engine API.
-        "taskrunnerSettings": { # Taskrunner configuration settings. # Settings passed through to Google Compute Engine workers when
-            # using the standard Dataflow task runner.  Users should ignore
-            # this field.
-          "workflowFileName": "A String", # The file to store the workflow in.
-          "logUploadLocation": "A String", # Indicates where to put logs.  If this is not specified, the logs
-              # will not be uploaded.
-              #
-              # The supported resource type is:
-              #
-              # Google Cloud Storage:
-              #   storage.googleapis.com/{bucket}/{object}
-              #   bucket.storage.googleapis.com/{object}
-          "commandlinesFileName": "A String", # The file to store preprocessing commands in.
-          "alsologtostderr": True or False, # Whether to also send taskrunner log info to stderr.
-          "continueOnException": True or False, # Whether to continue taskrunner if an exception is hit.
-          "baseTaskDir": "A String", # The location on the worker for task-specific subdirectories.
-          "vmId": "A String", # The ID string of the VM.
-          "taskGroup": "A String", # The UNIX group ID on the worker VM to use for tasks launched by
-              # taskrunner; e.g. "wheel".
-          "taskUser": "A String", # The UNIX user ID on the worker VM to use for tasks launched by
-              # taskrunner; e.g. "root".
-          "oauthScopes": [ # The OAuth2 scopes to be requested by the taskrunner in order to
-              # access the Cloud Dataflow API.
-            "A String",
-          ],
-          "languageHint": "A String", # The suggested backend language.
-          "logToSerialconsole": True or False, # Whether to send taskrunner log info to Google Compute Engine VM serial
-              # console.
-          "streamingWorkerMainClass": "A String", # The streaming worker main class name.
-          "logDir": "A String", # The directory on the VM to store logs.
-          "parallelWorkerSettings": { # Provides data to pass through to the worker harness. # The settings to pass to the parallel worker harness.
-            "reportingEnabled": True or False, # Whether to send work progress updates to the service.
-            "shuffleServicePath": "A String", # The Shuffle service path relative to the root URL, for example,
-                # "shuffle/v1beta1".
-            "workerId": "A String", # The ID of the worker running this pipeline.
-            "baseUrl": "A String", # The base URL for accessing Google Cloud APIs.
-                #
-                # When workers access Google Cloud APIs, they logically do so via
-                # relative URLs.  If this field is specified, it supplies the base
-                # URL to use for resolving these relative URLs.  The normative
-                # algorithm used is defined by RFC 1808, "Relative Uniform Resource
-                # Locators".
-                #
-                # If not specified, the default value is "http://www.googleapis.com/"
-            "servicePath": "A String", # The Cloud Dataflow service path relative to the root URL, for example,
-                # "dataflow/v1b3/projects".
-            "tempStoragePrefix": "A String", # The prefix of the resources the system should use for temporary
-                # storage.
-                #
-                # The supported resource type is:
-                #
-                # Google Cloud Storage:
-                #
-                #   storage.googleapis.com/{bucket}/{object}
-                #   bucket.storage.googleapis.com/{object}
-          },
-          "dataflowApiVersion": "A String", # The API version of endpoint, e.g. "v1b3"
-          "harnessCommand": "A String", # The command to launch the worker harness.
-          "tempStoragePrefix": "A String", # The prefix of the resources the taskrunner should use for
-              # temporary storage.
-              #
-              # The supported resource type is:
-              #
-              # Google Cloud Storage:
-              #   storage.googleapis.com/{bucket}/{object}
-              #   bucket.storage.googleapis.com/{object}
-          "baseUrl": "A String", # The base URL for the taskrunner to use when accessing Google Cloud APIs.
-              #
-              # When workers access Google Cloud APIs, they logically do so via
-              # relative URLs.  If this field is specified, it supplies the base
-              # URL to use for resolving these relative URLs.  The normative
-              # algorithm used is defined by RFC 1808, "Relative Uniform Resource
-              # Locators".
-              #
-              # If not specified, the default value is "http://www.googleapis.com/"
-        },
-        "numThreadsPerWorker": 42, # The number of threads per worker harness. If empty or unspecified, the
+        &quot;numWorkers&quot;: 42, # Number of Google Compute Engine workers in this pool needed to
+            # execute the job.  If zero or unspecified, the service will
+            # attempt to choose a reasonable default.
+        &quot;numThreadsPerWorker&quot;: 42, # The number of threads per worker harness. If empty or unspecified, the
             # service will choose a number of threads (according to the number of cores
             # on the selected machine type for batch, or 1 by convention for streaming).
-        "poolArgs": { # Extra arguments for this worker pool.
-          "a_key": "", # Properties of the object. Contains field @type with type URL.
-        },
-        "packages": [ # Packages to be installed on workers.
+        &quot;diskSourceImage&quot;: &quot;A String&quot;, # Fully qualified source image for disks.
+        &quot;packages&quot;: [ # Packages to be installed on workers.
           { # The packages that must be installed in order for a worker to run the
               # steps of the Cloud Dataflow job that will be assigned to its worker
               # pool.
               #
               # This is the mechanism by which the Cloud Dataflow SDK causes code to
               # be loaded onto the workers. For example, the Cloud Dataflow Java SDK
-              # might use this to install jars containing the user's code and all of the
+              # might use this to install jars containing the user&#x27;s code and all of the
               # various dependencies (libraries, data files, etc.) required in order
               # for that code to run.
-            "location": "A String", # The resource to read the package from. The supported resource type is:
+            &quot;location&quot;: &quot;A String&quot;, # The resource to read the package from. The supported resource type is:
                 #
                 # Google Cloud Storage:
                 #
                 #   storage.googleapis.com/{bucket}
                 #   bucket.storage.googleapis.com/
-            "name": "A String", # The name of the package.
+            &quot;name&quot;: &quot;A String&quot;, # The name of the package.
           },
         ],
-        "defaultPackageSet": "A String", # The default package set to install.  This allows the service to
-            # select a default set of packages which are useful to worker
-            # harnesses written in a particular language.
-        "kind": "A String", # The kind of the worker pool; currently only `harness` and `shuffle`
-            # are supported.
-        "diskType": "A String", # Type of root disk for VMs.  If empty or unspecified, the service will
-            # attempt to choose a reasonable default.
-        "teardownPolicy": "A String", # Sets the policy for determining when to turndown worker pool.
+        &quot;teardownPolicy&quot;: &quot;A String&quot;, # Sets the policy for determining when to turn down the worker pool.
             # Allowed values are: `TEARDOWN_ALWAYS`, `TEARDOWN_ON_SUCCESS`, and
             # `TEARDOWN_NEVER`.
             # `TEARDOWN_ALWAYS` means workers are always torn down regardless of whether
@@ -3096,32 +2785,41 @@
             #
             # If the workers are not torn down by the service, they will
             # continue to run and use Google Compute Engine VM resources in the
-            # user's project until they are explicitly terminated by the user.
+            # user&#x27;s project until they are explicitly terminated by the user.
             # Because of this, Google recommends using the `TEARDOWN_ALWAYS`
             # policy except for small, manually supervised test jobs.
             #
             # If unknown or unspecified, the service will attempt to choose a reasonable
             # default.
-        "diskSizeGb": 42, # Size of root disk for VMs, in GB.  If zero or unspecified, the service will
+        &quot;onHostMaintenance&quot;: &quot;A String&quot;, # The action to take on host maintenance, as defined by the Google
+            # Compute Engine API.
+        &quot;poolArgs&quot;: { # Extra arguments for this worker pool.
+          &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+        },
+        &quot;diskSizeGb&quot;: 42, # Size of root disk for VMs, in GB.  If zero or unspecified, the service will
             # attempt to choose a reasonable default.
-        "numWorkers": 42, # Number of Google Compute Engine workers in this pool needed to
-            # execute the job.  If zero or unspecified, the service will
+        &quot;workerHarnessContainerImage&quot;: &quot;A String&quot;, # Required. Docker container image that executes the Cloud Dataflow worker
+            # harness, residing in Google Container Registry.
+            #
+            # Deprecated for the Fn API path. Use sdk_harness_container_images instead.
+        &quot;diskType&quot;: &quot;A String&quot;, # Type of root disk for VMs.  If empty or unspecified, the service will
             # attempt to choose a reasonable default.
-        "subnetwork": "A String", # Subnetwork to which VMs will be assigned, if desired.  Expected to be of
-            # the form "regions/REGION/subnetworks/SUBNETWORK".
-        "dataDisks": [ # Data disks that are used by a VM in this workflow.
+        &quot;machineType&quot;: &quot;A String&quot;, # Machine type (e.g. &quot;n1-standard-1&quot;).  If empty or unspecified, the
+            # service will attempt to choose a reasonable default.
+        &quot;kind&quot;: &quot;A String&quot;, # The kind of the worker pool; currently only `harness` and `shuffle`
+            # are supported.
+        &quot;dataDisks&quot;: [ # Data disks that are used by a VM in this workflow.
           { # Describes the data disk used by a workflow job.
-            "mountPoint": "A String", # Directory in a VM where disk is mounted.
-            "sizeGb": 42, # Size of disk in GB.  If zero or unspecified, the service will
+            &quot;sizeGb&quot;: 42, # Size of disk in GB.  If zero or unspecified, the service will
                 # attempt to choose a reasonable default.
-            "diskType": "A String", # Disk storage type, as defined by Google Compute Engine.  This
+            &quot;diskType&quot;: &quot;A String&quot;, # Disk storage type, as defined by Google Compute Engine.  This
                 # must be a disk type appropriate to the project and zone in which
                 # the workers will run.  If unknown or unspecified, the service
                 # will attempt to choose a reasonable default.
                 #
                 # For example, the standard persistent disk type is a resource name
-                # typically ending in "pd-standard".  If SSD persistent disks are
-                # available, the resource name typically ends with "pd-ssd".  The
+                # typically ending in &quot;pd-standard&quot;.  If SSD persistent disks are
+                # available, the resource name typically ends with &quot;pd-ssd&quot;.  The
                 # actual valid values are defined by the Google Compute Engine API,
                 # not by the Cloud Dataflow API; consult the Google Compute Engine
                 # documentation for more information about determining the set of
@@ -3132,29 +2830,144 @@
                 # typically look something like this:
                 #
                 # compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
+            &quot;mountPoint&quot;: &quot;A String&quot;, # Directory in a VM where disk is mounted.
           },
         ],
-        "sdkHarnessContainerImages": [ # Set of SDK harness containers needed to execute this pipeline. This will
+        &quot;sdkHarnessContainerImages&quot;: [ # Set of SDK harness containers needed to execute this pipeline. This will
             # only be set in the Fn API path. For non-cross-language pipelines this
             # should have only one entry. Cross-language pipelines will have two or more
             # entries.
           { # Defines a SDK harness container for executing Dataflow pipelines.
-            "containerImage": "A String", # A docker container image that resides in Google Container Registry.
-            "useSingleCorePerContainer": True or False, # If true, recommends the Dataflow service to use only one core per SDK
+            &quot;containerImage&quot;: &quot;A String&quot;, # A docker container image that resides in Google Container Registry.
+            &quot;useSingleCorePerContainer&quot;: True or False, # If true, recommends the Dataflow service to use only one core per SDK
                 # container instance with this image. If false (or unset) recommends using
                 # more than one core per SDK container instance with this image for
                 # efficiency. Note that Dataflow service may choose to override this property
                 # if needed.
           },
         ],
+        &quot;subnetwork&quot;: &quot;A String&quot;, # Subnetwork to which VMs will be assigned, if desired.  Expected to be of
+            # the form &quot;regions/REGION/subnetworks/SUBNETWORK&quot;.
+        &quot;ipConfiguration&quot;: &quot;A String&quot;, # Configuration for VM IPs.
+        &quot;taskrunnerSettings&quot;: { # Taskrunner configuration settings. # Settings passed through to Google Compute Engine workers when
+            # using the standard Dataflow task runner.  Users should ignore
+            # this field.
+          &quot;alsologtostderr&quot;: True or False, # Whether to also send taskrunner log info to stderr.
+          &quot;taskGroup&quot;: &quot;A String&quot;, # The UNIX group ID on the worker VM to use for tasks launched by
+              # taskrunner; e.g. &quot;wheel&quot;.
+          &quot;harnessCommand&quot;: &quot;A String&quot;, # The command to launch the worker harness.
+          &quot;logDir&quot;: &quot;A String&quot;, # The directory on the VM to store logs.
+          &quot;oauthScopes&quot;: [ # The OAuth2 scopes to be requested by the taskrunner in order to
+              # access the Cloud Dataflow API.
+            &quot;A String&quot;,
+          ],
+          &quot;dataflowApiVersion&quot;: &quot;A String&quot;, # The API version of the endpoint, e.g. &quot;v1b3&quot;
+          &quot;logUploadLocation&quot;: &quot;A String&quot;, # Indicates where to put logs.  If this is not specified, the logs
+              # will not be uploaded.
+              #
+              # The supported resource type is:
+              #
+              # Google Cloud Storage:
+              #   storage.googleapis.com/{bucket}/{object}
+              #   bucket.storage.googleapis.com/{object}
+          &quot;streamingWorkerMainClass&quot;: &quot;A String&quot;, # The streaming worker main class name.
+          &quot;workflowFileName&quot;: &quot;A String&quot;, # The file to store the workflow in.
+          &quot;baseTaskDir&quot;: &quot;A String&quot;, # The location on the worker for task-specific subdirectories.
+          &quot;tempStoragePrefix&quot;: &quot;A String&quot;, # The prefix of the resources the taskrunner should use for
+              # temporary storage.
+              #
+              # The supported resource type is:
+              #
+              # Google Cloud Storage:
+              #   storage.googleapis.com/{bucket}/{object}
+              #   bucket.storage.googleapis.com/{object}
+          &quot;commandlinesFileName&quot;: &quot;A String&quot;, # The file to store preprocessing commands in.
+          &quot;languageHint&quot;: &quot;A String&quot;, # The suggested backend language.
+          &quot;baseUrl&quot;: &quot;A String&quot;, # The base URL for the taskrunner to use when accessing Google Cloud APIs.
+              #
+              # When workers access Google Cloud APIs, they logically do so via
+              # relative URLs.  If this field is specified, it supplies the base
+              # URL to use for resolving these relative URLs.  The normative
+              # algorithm used is defined by RFC 1808, &quot;Relative Uniform Resource
+              # Locators&quot;.
+              #
+              # If not specified, the default value is &quot;http://www.googleapis.com/&quot;
+          &quot;logToSerialconsole&quot;: True or False, # Whether to send taskrunner log info to Google Compute Engine VM serial
+              # console.
+          &quot;continueOnException&quot;: True or False, # Whether to continue taskrunner if an exception is hit.
+          &quot;parallelWorkerSettings&quot;: { # Provides data to pass through to the worker harness. # The settings to pass to the parallel worker harness.
+            &quot;baseUrl&quot;: &quot;A String&quot;, # The base URL for accessing Google Cloud APIs.
+                #
+                # When workers access Google Cloud APIs, they logically do so via
+                # relative URLs.  If this field is specified, it supplies the base
+                # URL to use for resolving these relative URLs.  The normative
+                # algorithm used is defined by RFC 1808, &quot;Relative Uniform Resource
+                # Locators&quot;.
+                #
+                # If not specified, the default value is &quot;http://www.googleapis.com/&quot;
+            &quot;reportingEnabled&quot;: True or False, # Whether to send work progress updates to the service.
+            &quot;servicePath&quot;: &quot;A String&quot;, # The Cloud Dataflow service path relative to the root URL, for example,
+                # &quot;dataflow/v1b3/projects&quot;.
+            &quot;shuffleServicePath&quot;: &quot;A String&quot;, # The Shuffle service path relative to the root URL, for example,
+                # &quot;shuffle/v1beta1&quot;.
+            &quot;workerId&quot;: &quot;A String&quot;, # The ID of the worker running this pipeline.
+            &quot;tempStoragePrefix&quot;: &quot;A String&quot;, # The prefix of the resources the system should use for temporary
+                # storage.
+                #
+                # The supported resource type is:
+                #
+                # Google Cloud Storage:
+                #
+                #   storage.googleapis.com/{bucket}/{object}
+                #   bucket.storage.googleapis.com/{object}
+          },
+          &quot;vmId&quot;: &quot;A String&quot;, # The ID string of the VM.
+          &quot;taskUser&quot;: &quot;A String&quot;, # The UNIX user ID on the worker VM to use for tasks launched by
+              # taskrunner; e.g. &quot;root&quot;.
+        },
+        &quot;autoscalingSettings&quot;: { # Settings for WorkerPool autoscaling. # Settings for autoscaling of this WorkerPool.
+          &quot;maxNumWorkers&quot;: 42, # The maximum number of workers to cap scaling at.
+          &quot;algorithm&quot;: &quot;A String&quot;, # The algorithm to use for autoscaling.
+        },
+        &quot;metadata&quot;: { # Metadata to set on the Google Compute Engine VMs.
+          &quot;a_key&quot;: &quot;A String&quot;,
+        },
       },
     ],
-    "clusterManagerApiService": "A String", # The type of cluster manager API to use.  If unknown or
+    &quot;dataset&quot;: &quot;A String&quot;, # The dataset for the current project where various workflow
+        # related tables are stored.
+        #
+        # The supported resource type is:
+        #
+        # Google BigQuery:
+        #   bigquery.googleapis.com/{dataset}
+    &quot;internalExperiments&quot;: { # Experimental settings.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+    },
+    &quot;workerRegion&quot;: &quot;A String&quot;, # The Compute Engine region
+        # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
+        # which worker processing should occur, e.g. &quot;us-west1&quot;. Mutually exclusive
+        # with worker_zone. If neither worker_region nor worker_zone is specified,
+        # default to the control plane&#x27;s region.
+    &quot;serviceKmsKeyName&quot;: &quot;A String&quot;, # If set, contains the Cloud KMS key identifier used to encrypt data
+        # at rest, AKA a Customer Managed Encryption Key (CMEK).
+        #
+        # Format:
+        #   projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
+    &quot;userAgent&quot;: { # A description of the process that generated the request.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+    },
+    &quot;workerZone&quot;: &quot;A String&quot;, # The Compute Engine zone
+        # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
+        # which worker processing should occur, e.g. &quot;us-west1-a&quot;. Mutually exclusive
+        # with worker_region. If neither worker_region nor worker_zone is specified,
+        # a zone in the control plane&#x27;s region is chosen based on available capacity.
+    &quot;clusterManagerApiService&quot;: &quot;A String&quot;, # The type of cluster manager API to use.  If unknown or
         # unspecified, the service will attempt to choose a reasonable
         # default.  This should be in the form of the API service name,
-        # e.g. "compute.googleapis.com".
-    "tempStoragePrefix": "A String", # The prefix of the resources the system should use for temporary
-        # storage.  The system will append the suffix "/temp-{JOBNAME} to
+        # e.g. &quot;compute.googleapis.com&quot;.
+    &quot;tempStoragePrefix&quot;: &quot;A String&quot;, # The prefix of the resources the system should use for temporary
+        # storage.  The system will append the suffix &quot;/temp-{JOBNAME}&quot; to
         # this resource prefix, where {JOBNAME} is the value of the
         # job_name field.  The resulting bucket and object prefix is used
         # as the prefix of the resources used to store temporary data
@@ -3166,11 +2979,199 @@
         #
         #   storage.googleapis.com/{bucket}/{object}
         #   bucket.storage.googleapis.com/{object}
+    &quot;experiments&quot;: [ # The list of experiments to enable.
+      &quot;A String&quot;,
+    ],
+    &quot;version&quot;: { # A structure describing which components and their versions of the service
+        # are required in order to run the job.
+      &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+    },
+    &quot;serviceAccountEmail&quot;: &quot;A String&quot;, # Identity to run virtual machines as. Defaults to the default account.
   },
-  "location": "A String", # The [regional endpoint]
-      # (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that
-      # contains this job.
-  "tempFiles": [ # A set of files the system should be aware of that are used
+  &quot;stageStates&quot;: [ # This field may be mutated by the Cloud Dataflow service;
+      # callers cannot mutate it.
+    { # A message describing the state of a particular execution stage.
+      &quot;executionStageName&quot;: &quot;A String&quot;, # The name of the execution stage.
+      &quot;currentStateTime&quot;: &quot;A String&quot;, # The time at which the stage transitioned to this state.
+      &quot;executionStageState&quot;: &quot;A String&quot;, # Execution stage states allow the same set of values as JobState.
+    },
+  ],
+  &quot;jobMetadata&quot;: { # Metadata available primarily for filtering jobs. Will be included in the # This field is populated by the Dataflow service to support filtering jobs
+      # by the metadata values provided here. Populated for ListJobs and all GetJob
+      # views SUMMARY and higher.
+      # ListJob response and Job SUMMARY view.
+    &quot;bigTableDetails&quot;: [ # Identification of a BigTable source used in the Dataflow job.
+      { # Metadata for a BigTable connector used by the job.
+        &quot;tableId&quot;: &quot;A String&quot;, # TableId accessed in the connection.
+        &quot;projectId&quot;: &quot;A String&quot;, # ProjectId accessed in the connection.
+        &quot;instanceId&quot;: &quot;A String&quot;, # InstanceId accessed in the connection.
+      },
+    ],
+    &quot;spannerDetails&quot;: [ # Identification of a Spanner source used in the Dataflow job.
+      { # Metadata for a Spanner connector used by the job.
+        &quot;databaseId&quot;: &quot;A String&quot;, # DatabaseId accessed in the connection.
+        &quot;instanceId&quot;: &quot;A String&quot;, # InstanceId accessed in the connection.
+        &quot;projectId&quot;: &quot;A String&quot;, # ProjectId accessed in the connection.
+      },
+    ],
+    &quot;datastoreDetails&quot;: [ # Identification of a Datastore source used in the Dataflow job.
+      { # Metadata for a Datastore connector used by the job.
+        &quot;projectId&quot;: &quot;A String&quot;, # ProjectId accessed in the connection.
+        &quot;namespace&quot;: &quot;A String&quot;, # Namespace used in the connection.
+      },
+    ],
+    &quot;sdkVersion&quot;: { # The version of the SDK used to run the job. # The SDK version used to run the job.
+      &quot;versionDisplayName&quot;: &quot;A String&quot;, # A readable string describing the version of the SDK.
+      &quot;sdkSupportStatus&quot;: &quot;A String&quot;, # The support status for this SDK version.
+      &quot;version&quot;: &quot;A String&quot;, # The version of the SDK used to run the job.
+    },
+    &quot;bigqueryDetails&quot;: [ # Identification of a BigQuery source used in the Dataflow job.
+      { # Metadata for a BigQuery connector used by the job.
+        &quot;table&quot;: &quot;A String&quot;, # Table accessed in the connection.
+        &quot;dataset&quot;: &quot;A String&quot;, # Dataset accessed in the connection.
+        &quot;projectId&quot;: &quot;A String&quot;, # Project accessed in the connection.
+        &quot;query&quot;: &quot;A String&quot;, # Query used to access data in the connection.
+      },
+    ],
+    &quot;fileDetails&quot;: [ # Identification of a File source used in the Dataflow job.
+      { # Metadata for a File connector used by the job.
+        &quot;filePattern&quot;: &quot;A String&quot;, # File Pattern used to access files by the connector.
+      },
+    ],
+    &quot;pubsubDetails&quot;: [ # Identification of a PubSub source used in the Dataflow job.
+      { # Metadata for a PubSub connector used by the job.
+        &quot;subscription&quot;: &quot;A String&quot;, # Subscription used in the connection.
+        &quot;topic&quot;: &quot;A String&quot;, # Topic accessed in the connection.
+      },
+    ],
+  },
+  &quot;createdFromSnapshotId&quot;: &quot;A String&quot;, # If this is specified, the job&#x27;s initial state is populated from the given
+      # snapshot.
+  &quot;projectId&quot;: &quot;A String&quot;, # The ID of the Cloud Platform project that the job belongs to.
+  &quot;type&quot;: &quot;A String&quot;, # The type of Cloud Dataflow job.
+  &quot;pipelineDescription&quot;: { # A descriptive representation of submitted pipeline as well as the executed # Preliminary field: The format of this data may change at any time.
+      # A description of the user pipeline and stages through which it is executed.
+      # Created by Cloud Dataflow service.  Only retrieved with
+      # JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
+      # form.  This data is provided by the Dataflow service for ease of visualizing
+      # the pipeline and interpreting Dataflow provided metrics.
+    &quot;executionPipelineStage&quot;: [ # Description of each stage of execution of the pipeline.
+      { # Description of the composing transforms, names/ids, and input/outputs of a
+          # stage of execution.  Some composing transforms and sources may have been
+          # generated by the Dataflow service during execution planning.
+        &quot;id&quot;: &quot;A String&quot;, # Dataflow service generated id for this stage.
+        &quot;componentTransform&quot;: [ # Transforms that comprise this execution stage.
+          { # Description of a transform executed as part of an execution stage.
+            &quot;originalTransform&quot;: &quot;A String&quot;, # User name for the original user transform with which this transform is
+                # most closely associated.
+            &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+            &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this transform; may be user or system generated.
+          },
+        ],
+        &quot;componentSource&quot;: [ # Collections produced and consumed by component transforms of this stage.
+          { # Description of an interstitial value between transforms in an execution
+              # stage.
+            &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+            &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this transform; may be user or system generated.
+            &quot;originalTransformOrCollection&quot;: &quot;A String&quot;, # User name for the original user transform or collection with which this
+                # source is most closely associated.
+          },
+        ],
+        &quot;kind&quot;: &quot;A String&quot;, # Type of transform this stage is executing.
+        &quot;outputSource&quot;: [ # Output sources for this stage.
+          { # Description of an input or output of an execution stage.
+            &quot;originalTransformOrCollection&quot;: &quot;A String&quot;, # User name for the original user transform or collection with which this
+                # source is most closely associated.
+            &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+            &quot;sizeBytes&quot;: &quot;A String&quot;, # Size of the source, if measurable.
+            &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this source; may be user or system generated.
+          },
+        ],
+        &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this stage.
+        &quot;inputSource&quot;: [ # Input sources for this stage.
+          { # Description of an input or output of an execution stage.
+            &quot;originalTransformOrCollection&quot;: &quot;A String&quot;, # User name for the original user transform or collection with which this
+                # source is most closely associated.
+            &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+            &quot;sizeBytes&quot;: &quot;A String&quot;, # Size of the source, if measurable.
+            &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this source; may be user or system generated.
+          },
+        ],
+      },
+    ],
+    &quot;originalPipelineTransform&quot;: [ # Description of each transform in the pipeline and collections between them.
+      { # Description of the type, names/ids, and input/outputs for a transform.
+        &quot;kind&quot;: &quot;A String&quot;, # Type of transform.
+        &quot;inputCollectionName&quot;: [ # User names for all collection inputs to this transform.
+          &quot;A String&quot;,
+        ],
+        &quot;name&quot;: &quot;A String&quot;, # User provided name for this transform instance.
+        &quot;id&quot;: &quot;A String&quot;, # SDK generated id of this transform instance.
+        &quot;displayData&quot;: [ # Transform-specific display data.
+          { # Data provided with a pipeline or transform to provide descriptive info.
+            &quot;timestampValue&quot;: &quot;A String&quot;, # Contains value if the data is of timestamp type.
+            &quot;boolValue&quot;: True or False, # Contains value if the data is of a boolean type.
+            &quot;javaClassValue&quot;: &quot;A String&quot;, # Contains value if the data is of java class type.
+            &quot;strValue&quot;: &quot;A String&quot;, # Contains value if the data is of string type.
+            &quot;int64Value&quot;: &quot;A String&quot;, # Contains value if the data is of int64 type.
+            &quot;durationValue&quot;: &quot;A String&quot;, # Contains value if the data is of duration type.
+            &quot;namespace&quot;: &quot;A String&quot;, # The namespace for the key. This is usually a class name or programming
+                # language namespace (i.e. python module) which defines the display data.
+                # This allows a dax monitoring system to specially handle the data
+                # and perform custom rendering.
+            &quot;floatValue&quot;: 3.14, # Contains value if the data is of float type.
+            &quot;key&quot;: &quot;A String&quot;, # The key identifying the display data.
+                # This is intended to be used as a label for the display data
+                # when viewed in a dax monitoring system.
+            &quot;shortStrValue&quot;: &quot;A String&quot;, # A possible additional shorter value to display.
+                # For example a java_class_name_value of com.mypackage.MyDoFn
+                # will be stored with MyDoFn as the short_str_value and
+                # com.mypackage.MyDoFn as the java_class_name value.
+                # short_str_value can be displayed and java_class_name_value
+                # will be displayed as a tooltip.
+            &quot;url&quot;: &quot;A String&quot;, # An optional full URL.
+            &quot;label&quot;: &quot;A String&quot;, # An optional label to display in a dax UI for the element.
+          },
+        ],
+        &quot;outputCollectionName&quot;: [ # User names for all collection outputs to this transform.
+          &quot;A String&quot;,
+        ],
+      },
+    ],
+    &quot;displayData&quot;: [ # Pipeline level display data.
+      { # Data provided with a pipeline or transform to provide descriptive info.
+        &quot;timestampValue&quot;: &quot;A String&quot;, # Contains value if the data is of timestamp type.
+        &quot;boolValue&quot;: True or False, # Contains value if the data is of a boolean type.
+        &quot;javaClassValue&quot;: &quot;A String&quot;, # Contains value if the data is of java class type.
+        &quot;strValue&quot;: &quot;A String&quot;, # Contains value if the data is of string type.
+        &quot;int64Value&quot;: &quot;A String&quot;, # Contains value if the data is of int64 type.
+        &quot;durationValue&quot;: &quot;A String&quot;, # Contains value if the data is of duration type.
+        &quot;namespace&quot;: &quot;A String&quot;, # The namespace for the key. This is usually a class name or programming
+            # language namespace (i.e. python module) which defines the display data.
+            # This allows a dax monitoring system to specially handle the data
+            # and perform custom rendering.
+        &quot;floatValue&quot;: 3.14, # Contains value if the data is of float type.
+        &quot;key&quot;: &quot;A String&quot;, # The key identifying the display data.
+            # This is intended to be used as a label for the display data
+            # when viewed in a dax monitoring system.
+        &quot;shortStrValue&quot;: &quot;A String&quot;, # A possible additional shorter value to display.
+            # For example a java_class_name_value of com.mypackage.MyDoFn
+            # will be stored with MyDoFn as the short_str_value and
+            # com.mypackage.MyDoFn as the java_class_name value.
+            # short_str_value can be displayed and java_class_name_value
+            # will be displayed as a tooltip.
+        &quot;url&quot;: &quot;A String&quot;, # An optional full URL.
+        &quot;label&quot;: &quot;A String&quot;, # An optional label to display in a dax UI for the element.
+      },
+    ],
+  },
+  &quot;replaceJobId&quot;: &quot;A String&quot;, # If this job is an update of an existing job, this field is the job ID
+      # of the job it replaced.
+      # 
+      # When sending a `CreateJobRequest`, you can update a job by specifying it
+      # here. The job named here is stopped, and its intermediate state is
+      # transferred to this job.
+  &quot;tempFiles&quot;: [ # A set of files the system should be aware of that are used
       # for temporary storage. These temporary files will be
       # removed on job completion.
       # No duplicates are allowed.
@@ -3182,36 +3183,9 @@
       # 
       #    storage.googleapis.com/{bucket}/{object}
       #    bucket.storage.googleapis.com/{object}
-    "A String",
+    &quot;A String&quot;,
   ],
-  "type": "A String", # The type of Cloud Dataflow job.
-  "clientRequestId": "A String", # The client's unique identifier of the job, re-used across retried attempts.
-      # If this field is set, the service will ensure its uniqueness.
-      # The request to create a job will fail if the service has knowledge of a
-      # previously submitted job with the same client's ID and job name.
-      # The caller may use this field to ensure idempotence of job
-      # creation across retried attempts to create a job.
-      # By default, the field is empty and, in that case, the service ignores it.
-  "createdFromSnapshotId": "A String", # If this is specified, the job's initial state is populated from the given
-      # snapshot.
-  "stepsLocation": "A String", # The GCS location where the steps are stored.
-  "currentStateTime": "A String", # The timestamp associated with the current state.
-  "startTime": "A String", # The timestamp when the job was started (transitioned to JOB_STATE_PENDING).
-      # Flexible resource scheduling jobs are started with some delay after job
-      # creation, so start_time is unset before start and is updated when the
-      # job is started by the Cloud Dataflow service. For other jobs, start_time
-      # always equals to create_time and is immutable and set by the Cloud Dataflow
-      # service.
-  "createTime": "A String", # The timestamp when the job was initially created. Immutable and set by the
-      # Cloud Dataflow service.
-  "requestedState": "A String", # The job's requested state.
-      # 
-      # `UpdateJob` may be used to switch between the `JOB_STATE_STOPPED` and
-      # `JOB_STATE_RUNNING` states, by setting requested_state.  `UpdateJob` may
-      # also be used to directly set a job's requested state to
-      # `JOB_STATE_CANCELLED` or `JOB_STATE_DONE`, irrevocably terminating the
-      # job if it has not already reached a terminal state.
-  "name": "A String", # The user-specified Cloud Dataflow job name.
+  &quot;name&quot;: &quot;A String&quot;, # The user-specified Cloud Dataflow job name.
       # 
       # Only one Job with a given name may exist in a project at any
       # given time. If a caller attempts to create a Job with the same
@@ -3220,7 +3194,7 @@
       # 
       # The name must match the regular expression
       # `[a-z]([-a-z0-9]{0,38}[a-z0-9])?`
-  "steps": [ # Exactly one of step or steps_location should be specified.
+  &quot;steps&quot;: [ # Exactly one of step or steps_location should be specified.
       # 
       # The top-level steps that constitute the entire job.
     { # Defines a particular step within a Cloud Dataflow job.
@@ -3229,11 +3203,11 @@
         # specific operation as part of the overall job.  Data is typically
         # passed from one step to another as part of the job.
         #
-        # Here's an example of a sequence of steps which together implement a
+        # Here&#x27;s an example of a sequence of steps which together implement a
         # Map-Reduce job:
         #
         #   * Read a collection of data from some source, parsing the
-        #     collection's elements.
+        #     collection&#x27;s elements.
         #
         #   * Validate the elements.
         #
@@ -3248,23 +3222,32 @@
         #
         # Note that the Cloud Dataflow service may be used to run many different
         # types of jobs, not just Map-Reduce.
-      "kind": "A String", # The kind of step in the Cloud Dataflow job.
-      "name": "A String", # The name that identifies the step. This must be unique for each
+      &quot;name&quot;: &quot;A String&quot;, # The name that identifies the step. This must be unique for each
           # step with respect to all other steps in the Cloud Dataflow job.
-      "properties": { # Named properties associated with the step. Each kind of
+      &quot;kind&quot;: &quot;A String&quot;, # The kind of step in the Cloud Dataflow job.
+      &quot;properties&quot;: { # Named properties associated with the step. Each kind of
           # predefined step has its own required set of properties.
           # Must be provided on Create.  Only retrieved with JOB_VIEW_ALL.
-        "a_key": "", # Properties of the object.
+        &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
       },
     },
   ],
-  "replaceJobId": "A String", # If this job is an update of an existing job, this field is the job ID
-      # of the job it replaced.
-      # 
-      # When sending a `CreateJobRequest`, you can update a job by specifying it
-      # here. The job named here is stopped, and its intermediate state is
-      # transferred to this job.
-  "currentState": "A String", # The current state of the job.
+  &quot;replacedByJobId&quot;: &quot;A String&quot;, # If another job is an update of this job (and thus, this job is in
+      # `JOB_STATE_UPDATED`), this field contains the ID of that job.
+  &quot;executionInfo&quot;: { # Additional information about how a Cloud Dataflow job will be executed that # Deprecated.
+      # isn&#x27;t contained in the submitted job.
+    &quot;stages&quot;: { # A mapping from each stage to the information about that stage.
+      &quot;a_key&quot;: { # Contains information about how a particular
+          # google.dataflow.v1beta3.Step will be executed.
+        &quot;stepName&quot;: [ # The steps associated with the execution stage.
+            # Note that stages may have several steps, and that a given step
+            # might be run by more than one stage.
+          &quot;A String&quot;,
+        ],
+      },
+    },
+  },
+  &quot;currentState&quot;: &quot;A String&quot;, # The current state of the job.
       # 
       # Jobs are created in the `JOB_STATE_STOPPED` state unless otherwise
       # specified.
@@ -3275,19 +3258,36 @@
       # 
       # This field may be mutated by the Cloud Dataflow service;
       # callers cannot mutate it.
-  "executionInfo": { # Additional information about how a Cloud Dataflow job will be executed that # Deprecated.
-      # isn't contained in the submitted job.
-    "stages": { # A mapping from each stage to the information about that stage.
-      "a_key": { # Contains information about how a particular
-          # google.dataflow.v1beta3.Step will be executed.
-        "stepName": [ # The steps associated with the execution stage.
-            # Note that stages may have several steps, and that a given step
-            # might be run by more than one stage.
-          "A String",
-        ],
-      },
-    },
+  &quot;location&quot;: &quot;A String&quot;, # The [regional endpoint]
+      # (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that
+      # contains this job.
+  &quot;startTime&quot;: &quot;A String&quot;, # The timestamp when the job was started (transitioned to JOB_STATE_PENDING).
+      # Flexible resource scheduling jobs are started with some delay after job
+      # creation, so start_time is unset before start and is updated when the
+      # job is started by the Cloud Dataflow service. For other jobs, start_time
+      # always equals create_time and is immutable and set by the Cloud Dataflow
+      # service.
+  &quot;stepsLocation&quot;: &quot;A String&quot;, # The GCS location where the steps are stored.
+  &quot;labels&quot;: { # User-defined labels for this job.
+      # 
+      # The labels map can contain no more than 64 entries.  Entries of the labels
+      # map are UTF8 strings that comply with the following restrictions:
+      # 
+      # * Keys must conform to regexp:  \p{Ll}\p{Lo}{0,62}
+      # * Values must conform to regexp:  [\p{Ll}\p{Lo}\p{N}_-]{0,63}
+      # * Both keys and values are additionally constrained to be &lt;= 128 bytes in
+      # size.
+    &quot;a_key&quot;: &quot;A String&quot;,
   },
+  &quot;createTime&quot;: &quot;A String&quot;, # The timestamp when the job was initially created. Immutable and set by the
+      # Cloud Dataflow service.
+  &quot;requestedState&quot;: &quot;A String&quot;, # The job&#x27;s requested state.
+      # 
+      # `UpdateJob` may be used to switch between the `JOB_STATE_STOPPED` and
+      # `JOB_STATE_RUNNING` states, by setting requested_state.  `UpdateJob` may
+      # also be used to directly set a job&#x27;s requested state to
+      # `JOB_STATE_CANCELLED` or `JOB_STATE_DONE`, irrevocably terminating the
+      # job if it has not already reached a terminal state.
 }
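
For orientation, the request body documented above is the plain Python dictionary passed as `body=` to the generated client. The snippet below is a minimal, illustrative sketch of that usage, not part of the generated reference: the project ID, region, bucket path, and label values are placeholders, credentials are assumed to come from application-default credentials, and the `steps`/`environment` fields a real job needs are normally produced by an SDK such as Apache Beam rather than written by hand.

```python
# Illustrative sketch only -- not part of the generated reference.
# Assumes application-default credentials are available to the client library;
# the project ID, region, bucket path, and label values are placeholders.
from googleapiclient.discovery import build

dataflow = build("dataflow", "v1b3")

job_body = {
    # Must match [a-z]([-a-z0-9]{0,38}[a-z0-9])? and be unique within the project.
    "name": "example-wordcount",
    # At most 64 entries; keys and values are constrained as described above.
    "labels": {"team": "data-eng"},
    # Temporary files the service should be aware of (no duplicates allowed).
    "tempFiles": ["storage.googleapis.com/example-bucket/temp/setup.tar"],
    # Exactly one of "steps" or "stepsLocation" should also be set; in practice
    # an SDK such as Apache Beam generates these, so they are omitted here.
}

request = dataflow.projects().locations().jobs().create(
    projectId="example-project",   # placeholder Cloud project
    location="us-central1",        # regional endpoint that contains the job
    body=job_body,
)
response = request.execute()
print(response.get("id"), response.get("currentState"))
```

As noted for `currentState` above, a job submitted this way is created in `JOB_STATE_STOPPED` unless otherwise specified.
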
 
   x__xgafv: string, V1 error format.
@@ -3299,382 +3299,71 @@
   An object of the form:
 
     { # Defines a job to be run by the Cloud Dataflow service.
-    "labels": { # User-defined labels for this job.
-        #
-        # The labels map can contain no more than 64 entries.  Entries of the labels
-        # map are UTF8 strings that comply with the following restrictions:
-        #
-        # * Keys must conform to regexp:  \p{Ll}\p{Lo}{0,62}
-        # * Values must conform to regexp:  [\p{Ll}\p{Lo}\p{N}_-]{0,63}
-        # * Both keys and values are additionally constrained to be &lt;= 128 bytes in
-        # size.
-      "a_key": "A String",
-    },
-    "jobMetadata": { # Metadata available primarily for filtering jobs. Will be included in the # This field is populated by the Dataflow service to support filtering jobs
-        # by the metadata values provided here. Populated for ListJobs and all GetJob
-        # views SUMMARY and higher.
-        # ListJob response and Job SUMMARY view.
-      "sdkVersion": { # The version of the SDK used to run the job. # The SDK version used to run the job.
-        "versionDisplayName": "A String", # A readable string describing the version of the SDK.
-        "version": "A String", # The version of the SDK used to run the job.
-        "sdkSupportStatus": "A String", # The support status for this SDK version.
-      },
-      "pubsubDetails": [ # Identification of a PubSub source used in the Dataflow job.
-        { # Metadata for a PubSub connector used by the job.
-          "topic": "A String", # Topic accessed in the connection.
-          "subscription": "A String", # Subscription used in the connection.
-        },
-      ],
-      "datastoreDetails": [ # Identification of a Datastore source used in the Dataflow job.
-        { # Metadata for a Datastore connector used by the job.
-          "projectId": "A String", # ProjectId accessed in the connection.
-          "namespace": "A String", # Namespace used in the connection.
-        },
-      ],
-      "fileDetails": [ # Identification of a File source used in the Dataflow job.
-        { # Metadata for a File connector used by the job.
-          "filePattern": "A String", # File Pattern used to access files by the connector.
-        },
-      ],
-      "spannerDetails": [ # Identification of a Spanner source used in the Dataflow job.
-        { # Metadata for a Spanner connector used by the job.
-          "instanceId": "A String", # InstanceId accessed in the connection.
-          "projectId": "A String", # ProjectId accessed in the connection.
-          "databaseId": "A String", # DatabaseId accessed in the connection.
-        },
-      ],
-      "bigTableDetails": [ # Identification of a BigTable source used in the Dataflow job.
-        { # Metadata for a BigTable connector used by the job.
-          "instanceId": "A String", # InstanceId accessed in the connection.
-          "projectId": "A String", # ProjectId accessed in the connection.
-          "tableId": "A String", # TableId accessed in the connection.
-        },
-      ],
-      "bigqueryDetails": [ # Identification of a BigQuery source used in the Dataflow job.
-        { # Metadata for a BigQuery connector used by the job.
-          "projectId": "A String", # Project accessed in the connection.
-          "query": "A String", # Query used to access data in the connection.
-          "table": "A String", # Table accessed in the connection.
-          "dataset": "A String", # Dataset accessed in the connection.
-        },
-      ],
-    },
-    "pipelineDescription": { # A descriptive representation of submitted pipeline as well as the executed # Preliminary field: The format of this data may change at any time.
-        # A description of the user pipeline and stages through which it is executed.
-        # Created by Cloud Dataflow service.  Only retrieved with
-        # JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
-        # form.  This data is provided by the Dataflow service for ease of visualizing
-        # the pipeline and interpreting Dataflow provided metrics.
-      "originalPipelineTransform": [ # Description of each transform in the pipeline and collections between them.
-        { # Description of the type, names/ids, and input/outputs for a transform.
-          "kind": "A String", # Type of transform.
-          "name": "A String", # User provided name for this transform instance.
-          "inputCollectionName": [ # User names for all collection inputs to this transform.
-            "A String",
-          ],
-          "displayData": [ # Transform-specific display data.
-            { # Data provided with a pipeline or transform to provide descriptive info.
-              "key": "A String", # The key identifying the display data.
-                  # This is intended to be used as a label for the display data
-                  # when viewed in a dax monitoring system.
-              "shortStrValue": "A String", # A possible additional shorter value to display.
-                  # For example a java_class_name_value of com.mypackage.MyDoFn
-                  # will be stored with MyDoFn as the short_str_value and
-                  # com.mypackage.MyDoFn as the java_class_name value.
-                  # short_str_value can be displayed and java_class_name_value
-                  # will be displayed as a tooltip.
-              "timestampValue": "A String", # Contains value if the data is of timestamp type.
-              "url": "A String", # An optional full URL.
-              "floatValue": 3.14, # Contains value if the data is of float type.
-              "namespace": "A String", # The namespace for the key. This is usually a class name or programming
-                  # language namespace (i.e. python module) which defines the display data.
-                  # This allows a dax monitoring system to specially handle the data
-                  # and perform custom rendering.
-              "javaClassValue": "A String", # Contains value if the data is of java class type.
-              "label": "A String", # An optional label to display in a dax UI for the element.
-              "boolValue": True or False, # Contains value if the data is of a boolean type.
-              "strValue": "A String", # Contains value if the data is of string type.
-              "durationValue": "A String", # Contains value if the data is of duration type.
-              "int64Value": "A String", # Contains value if the data is of int64 type.
-            },
-          ],
-          "outputCollectionName": [ # User  names for all collection outputs to this transform.
-            "A String",
-          ],
-          "id": "A String", # SDK generated id of this transform instance.
-        },
-      ],
-      "executionPipelineStage": [ # Description of each stage of execution of the pipeline.
-        { # Description of the composing transforms, names/ids, and input/outputs of a
-            # stage of execution.  Some composing transforms and sources may have been
-            # generated by the Dataflow service during execution planning.
-          "componentSource": [ # Collections produced and consumed by component transforms of this stage.
-            { # Description of an interstitial value between transforms in an execution
-                # stage.
-              "userName": "A String", # Human-readable name for this transform; may be user or system generated.
-              "originalTransformOrCollection": "A String", # User name for the original user transform or collection with which this
-                  # source is most closely associated.
-              "name": "A String", # Dataflow service generated name for this source.
-            },
-          ],
-          "kind": "A String", # Type of tranform this stage is executing.
-          "name": "A String", # Dataflow service generated name for this stage.
-          "outputSource": [ # Output sources for this stage.
-            { # Description of an input or output of an execution stage.
-              "userName": "A String", # Human-readable name for this source; may be user or system generated.
-              "sizeBytes": "A String", # Size of the source, if measurable.
-              "name": "A String", # Dataflow service generated name for this source.
-              "originalTransformOrCollection": "A String", # User name for the original user transform or collection with which this
-                  # source is most closely associated.
-            },
-          ],
-          "inputSource": [ # Input sources for this stage.
-            { # Description of an input or output of an execution stage.
-              "userName": "A String", # Human-readable name for this source; may be user or system generated.
-              "sizeBytes": "A String", # Size of the source, if measurable.
-              "name": "A String", # Dataflow service generated name for this source.
-              "originalTransformOrCollection": "A String", # User name for the original user transform or collection with which this
-                  # source is most closely associated.
-            },
-          ],
-          "componentTransform": [ # Transforms that comprise this execution stage.
-            { # Description of a transform executed as part of an execution stage.
-              "userName": "A String", # Human-readable name for this transform; may be user or system generated.
-              "originalTransform": "A String", # User name for the original user transform with which this transform is
-                  # most closely associated.
-              "name": "A String", # Dataflow service generated name for this source.
-            },
-          ],
-          "id": "A String", # Dataflow service generated id for this stage.
-        },
-      ],
-      "displayData": [ # Pipeline level display data.
-        { # Data provided with a pipeline or transform to provide descriptive info.
-          "key": "A String", # The key identifying the display data.
-              # This is intended to be used as a label for the display data
-              # when viewed in a dax monitoring system.
-          "shortStrValue": "A String", # A possible additional shorter value to display.
-              # For example a java_class_name_value of com.mypackage.MyDoFn
-              # will be stored with MyDoFn as the short_str_value and
-              # com.mypackage.MyDoFn as the java_class_name value.
-              # short_str_value can be displayed and java_class_name_value
-              # will be displayed as a tooltip.
-          "timestampValue": "A String", # Contains value if the data is of timestamp type.
-          "url": "A String", # An optional full URL.
-          "floatValue": 3.14, # Contains value if the data is of float type.
-          "namespace": "A String", # The namespace for the key. This is usually a class name or programming
-              # language namespace (i.e. python module) which defines the display data.
-              # This allows a dax monitoring system to specially handle the data
-              # and perform custom rendering.
-          "javaClassValue": "A String", # Contains value if the data is of java class type.
-          "label": "A String", # An optional label to display in a dax UI for the element.
-          "boolValue": True or False, # Contains value if the data is of a boolean type.
-          "strValue": "A String", # Contains value if the data is of string type.
-          "durationValue": "A String", # Contains value if the data is of duration type.
-          "int64Value": "A String", # Contains value if the data is of int64 type.
-        },
-      ],
-    },
-    "stageStates": [ # This field may be mutated by the Cloud Dataflow service;
-        # callers cannot mutate it.
-      { # A message describing the state of a particular execution stage.
-        "executionStageName": "A String", # The name of the execution stage.
-        "executionStageState": "A String", # Executions stage states allow the same set of values as JobState.
-        "currentStateTime": "A String", # The time at which the stage transitioned to this state.
-      },
-    ],
-    "id": "A String", # The unique ID of this job.
+    &quot;clientRequestId&quot;: &quot;A String&quot;, # The client&#x27;s unique identifier of the job, re-used across retried attempts.
+        # If this field is set, the service will ensure its uniqueness.
+        # The request to create a job will fail if the service has knowledge of a
+        # previously submitted job with the same client&#x27;s ID and job name.
+        # The caller may use this field to ensure idempotence of job
+        # creation across retried attempts to create a job.
+        # By default, the field is empty and, in that case, the service ignores it.
+    &quot;id&quot;: &quot;A String&quot;, # The unique ID of this job.
         #
         # This field is set by the Cloud Dataflow service when the Job is
         # created, and is immutable for the life of the job.
-    "replacedByJobId": "A String", # If another job is an update of this job (and thus, this job is in
-        # `JOB_STATE_UPDATED`), this field contains the ID of that job.
-    "projectId": "A String", # The ID of the Cloud Platform project that the job belongs to.
-    "transformNameMapping": { # The map of transform name prefixes of the job to be replaced to the
+    &quot;currentStateTime&quot;: &quot;A String&quot;, # The timestamp associated with the current state.
+    &quot;transformNameMapping&quot;: { # The map of transform name prefixes of the job to be replaced to the
         # corresponding name prefixes of the new job.
-      "a_key": "A String",
+      &quot;a_key&quot;: &quot;A String&quot;,
     },
-    "environment": { # Describes the environment in which a Dataflow Job runs. # The environment for the job.
-      "workerRegion": "A String", # The Compute Engine region
-          # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
-          # which worker processing should occur, e.g. "us-west1". Mutually exclusive
-          # with worker_zone. If neither worker_region nor worker_zone is specified,
-          # default to the control plane's region.
-      "version": { # A structure describing which components and their versions of the service
-          # are required in order to run the job.
-        "a_key": "", # Properties of the object.
-      },
-      "flexResourceSchedulingGoal": "A String", # Which Flexible Resource Scheduling mode to run in.
-      "serviceKmsKeyName": "A String", # If set, contains the Cloud KMS key identifier used to encrypt data
-          # at rest, AKA a Customer Managed Encryption Key (CMEK).
-          #
-          # Format:
-          #   projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
-      "internalExperiments": { # Experimental settings.
-        "a_key": "", # Properties of the object. Contains field @type with type URL.
-      },
-      "dataset": "A String", # The dataset for the current project where various workflow
-          # related tables are stored.
-          #
-          # The supported resource type is:
-          #
-          # Google BigQuery:
-          #   bigquery.googleapis.com/{dataset}
-      "experiments": [ # The list of experiments to enable.
-        "A String",
-      ],
-      "serviceAccountEmail": "A String", # Identity to run virtual machines as. Defaults to the default account.
-      "sdkPipelineOptions": { # The Cloud Dataflow SDK pipeline options specified by the user. These
+    &quot;environment&quot;: { # Describes the environment in which a Dataflow Job runs. # The environment for the job.
+      &quot;sdkPipelineOptions&quot;: { # The Cloud Dataflow SDK pipeline options specified by the user. These
           # options are passed through the service and are used to recreate the
           # SDK pipeline options on the worker in a language agnostic and platform
           # independent way.
-        "a_key": "", # Properties of the object.
+        &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
       },
-      "userAgent": { # A description of the process that generated the request.
-        "a_key": "", # Properties of the object.
-      },
-      "workerZone": "A String", # The Compute Engine zone
-          # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
-          # which worker processing should occur, e.g. "us-west1-a". Mutually exclusive
-          # with worker_region. If neither worker_region nor worker_zone is specified,
-          # a zone in the control plane's region is chosen based on available capacity.
-      "workerPools": [ # The worker pools. At least one "harness" worker pool must be
+      &quot;flexResourceSchedulingGoal&quot;: &quot;A String&quot;, # Which Flexible Resource Scheduling mode to run in.
+      &quot;workerPools&quot;: [ # The worker pools. At least one &quot;harness&quot; worker pool must be
           # specified in order for the job to have workers.
         { # Describes one particular pool of Cloud Dataflow workers to be
             # instantiated by the Cloud Dataflow service in order to perform the
             # computations required by a job.  Note that a workflow job may use
             # multiple pools, in order to match the various computational
             # requirements of the various stages of the job.
-          "workerHarnessContainerImage": "A String", # Required. Docker container image that executes the Cloud Dataflow worker
-              # harness, residing in Google Container Registry.
-              #
-              # Deprecated for the Fn API path. Use sdk_harness_container_images instead.
-          "ipConfiguration": "A String", # Configuration for VM IPs.
-          "autoscalingSettings": { # Settings for WorkerPool autoscaling. # Settings for autoscaling of this WorkerPool.
-            "maxNumWorkers": 42, # The maximum number of workers to cap scaling at.
-            "algorithm": "A String", # The algorithm to use for autoscaling.
-          },
-          "diskSourceImage": "A String", # Fully qualified source image for disks.
-          "network": "A String", # Network to which VMs will be assigned.  If empty or unspecified,
-              # the service will use the network "default".
-          "zone": "A String", # Zone to run the worker pools in.  If empty or unspecified, the service
+          &quot;defaultPackageSet&quot;: &quot;A String&quot;, # The default package set to install.  This allows the service to
+              # select a default set of packages which are useful to worker
+              # harnesses written in a particular language.
+          &quot;network&quot;: &quot;A String&quot;, # Network to which VMs will be assigned.  If empty or unspecified,
+              # the service will use the network &quot;default&quot;.
+          &quot;zone&quot;: &quot;A String&quot;, # Zone to run the worker pools in.  If empty or unspecified, the service
               # will attempt to choose a reasonable default.
-          "metadata": { # Metadata to set on the Google Compute Engine VMs.
-            "a_key": "A String",
-          },
-          "machineType": "A String", # Machine type (e.g. "n1-standard-1").  If empty or unspecified, the
-              # service will attempt to choose a reasonable default.
-          "onHostMaintenance": "A String", # The action to take on host maintenance, as defined by the Google
-              # Compute Engine API.
-          "taskrunnerSettings": { # Taskrunner configuration settings. # Settings passed through to Google Compute Engine workers when
-              # using the standard Dataflow task runner.  Users should ignore
-              # this field.
-            "workflowFileName": "A String", # The file to store the workflow in.
-            "logUploadLocation": "A String", # Indicates where to put logs.  If this is not specified, the logs
-                # will not be uploaded.
-                #
-                # The supported resource type is:
-                #
-                # Google Cloud Storage:
-                #   storage.googleapis.com/{bucket}/{object}
-                #   bucket.storage.googleapis.com/{object}
-            "commandlinesFileName": "A String", # The file to store preprocessing commands in.
-            "alsologtostderr": True or False, # Whether to also send taskrunner log info to stderr.
-            "continueOnException": True or False, # Whether to continue taskrunner if an exception is hit.
-            "baseTaskDir": "A String", # The location on the worker for task-specific subdirectories.
-            "vmId": "A String", # The ID string of the VM.
-            "taskGroup": "A String", # The UNIX group ID on the worker VM to use for tasks launched by
-                # taskrunner; e.g. "wheel".
-            "taskUser": "A String", # The UNIX user ID on the worker VM to use for tasks launched by
-                # taskrunner; e.g. "root".
-            "oauthScopes": [ # The OAuth2 scopes to be requested by the taskrunner in order to
-                # access the Cloud Dataflow API.
-              "A String",
-            ],
-            "languageHint": "A String", # The suggested backend language.
-            "logToSerialconsole": True or False, # Whether to send taskrunner log info to Google Compute Engine VM serial
-                # console.
-            "streamingWorkerMainClass": "A String", # The streaming worker main class name.
-            "logDir": "A String", # The directory on the VM to store logs.
-            "parallelWorkerSettings": { # Provides data to pass through to the worker harness. # The settings to pass to the parallel worker harness.
-              "reportingEnabled": True or False, # Whether to send work progress updates to the service.
-              "shuffleServicePath": "A String", # The Shuffle service path relative to the root URL, for example,
-                  # "shuffle/v1beta1".
-              "workerId": "A String", # The ID of the worker running this pipeline.
-              "baseUrl": "A String", # The base URL for accessing Google Cloud APIs.
-                  #
-                  # When workers access Google Cloud APIs, they logically do so via
-                  # relative URLs.  If this field is specified, it supplies the base
-                  # URL to use for resolving these relative URLs.  The normative
-                  # algorithm used is defined by RFC 1808, "Relative Uniform Resource
-                  # Locators".
-                  #
-                  # If not specified, the default value is "http://www.googleapis.com/"
-              "servicePath": "A String", # The Cloud Dataflow service path relative to the root URL, for example,
-                  # "dataflow/v1b3/projects".
-              "tempStoragePrefix": "A String", # The prefix of the resources the system should use for temporary
-                  # storage.
-                  #
-                  # The supported resource type is:
-                  #
-                  # Google Cloud Storage:
-                  #
-                  #   storage.googleapis.com/{bucket}/{object}
-                  #   bucket.storage.googleapis.com/{object}
-            },
-            "dataflowApiVersion": "A String", # The API version of endpoint, e.g. "v1b3"
-            "harnessCommand": "A String", # The command to launch the worker harness.
-            "tempStoragePrefix": "A String", # The prefix of the resources the taskrunner should use for
-                # temporary storage.
-                #
-                # The supported resource type is:
-                #
-                # Google Cloud Storage:
-                #   storage.googleapis.com/{bucket}/{object}
-                #   bucket.storage.googleapis.com/{object}
-            "baseUrl": "A String", # The base URL for the taskrunner to use when accessing Google Cloud APIs.
-                #
-                # When workers access Google Cloud APIs, they logically do so via
-                # relative URLs.  If this field is specified, it supplies the base
-                # URL to use for resolving these relative URLs.  The normative
-                # algorithm used is defined by RFC 1808, "Relative Uniform Resource
-                # Locators".
-                #
-                # If not specified, the default value is "http://www.googleapis.com/"
-          },
-          "numThreadsPerWorker": 42, # The number of threads per worker harness. If empty or unspecified, the
+          &quot;numWorkers&quot;: 42, # Number of Google Compute Engine workers in this pool needed to
+              # execute the job.  If zero or unspecified, the service will
+              # attempt to choose a reasonable default.
+          &quot;numThreadsPerWorker&quot;: 42, # The number of threads per worker harness. If empty or unspecified, the
               # service will choose a number of threads (according to the number of cores
               # on the selected machine type for batch, or 1 by convention for streaming).
-          "poolArgs": { # Extra arguments for this worker pool.
-            "a_key": "", # Properties of the object. Contains field @type with type URL.
-          },
-          "packages": [ # Packages to be installed on workers.
+          &quot;diskSourceImage&quot;: &quot;A String&quot;, # Fully qualified source image for disks.
+          &quot;packages&quot;: [ # Packages to be installed on workers.
             { # The packages that must be installed in order for a worker to run the
                 # steps of the Cloud Dataflow job that will be assigned to its worker
                 # pool.
                 #
                 # This is the mechanism by which the Cloud Dataflow SDK causes code to
                 # be loaded onto the workers. For example, the Cloud Dataflow Java SDK
-                # might use this to install jars containing the user's code and all of the
+                # might use this to install jars containing the user&#x27;s code and all of the
                 # various dependencies (libraries, data files, etc.) required in order
                 # for that code to run.
-              "location": "A String", # The resource to read the package from. The supported resource type is:
+              &quot;location&quot;: &quot;A String&quot;, # The resource to read the package from. The supported resource type is:
                   #
                   # Google Cloud Storage:
                   #
                   #   storage.googleapis.com/{bucket}
                   #   bucket.storage.googleapis.com/
-              "name": "A String", # The name of the package.
+              &quot;name&quot;: &quot;A String&quot;, # The name of the package.
             },
           ],
-          "defaultPackageSet": "A String", # The default package set to install.  This allows the service to
-              # select a default set of packages which are useful to worker
-              # harnesses written in a particular language.
-          "kind": "A String", # The kind of the worker pool; currently only `harness` and `shuffle`
-              # are supported.
-          "diskType": "A String", # Type of root disk for VMs.  If empty or unspecified, the service will
-              # attempt to choose a reasonable default.
-          "teardownPolicy": "A String", # Sets the policy for determining when to turndown worker pool.
+          &quot;teardownPolicy&quot;: &quot;A String&quot;, # Sets the policy for determining when to turn down the worker pool.
               # Allowed values are: `TEARDOWN_ALWAYS`, `TEARDOWN_ON_SUCCESS`, and
               # `TEARDOWN_NEVER`.
               # `TEARDOWN_ALWAYS` means workers are always torn down regardless of whether
@@ -3684,32 +3373,41 @@
               #
               # If the workers are not torn down by the service, they will
               # continue to run and use Google Compute Engine VM resources in the
-              # user's project until they are explicitly terminated by the user.
+              # user&#x27;s project until they are explicitly terminated by the user.
               # Because of this, Google recommends using the `TEARDOWN_ALWAYS`
               # policy except for small, manually supervised test jobs.
               #
               # If unknown or unspecified, the service will attempt to choose a reasonable
               # default.
-          "diskSizeGb": 42, # Size of root disk for VMs, in GB.  If zero or unspecified, the service will
+          &quot;onHostMaintenance&quot;: &quot;A String&quot;, # The action to take on host maintenance, as defined by the Google
+              # Compute Engine API.
+          &quot;poolArgs&quot;: { # Extra arguments for this worker pool.
+            &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+          },
+          &quot;diskSizeGb&quot;: 42, # Size of root disk for VMs, in GB.  If zero or unspecified, the service will
               # attempt to choose a reasonable default.
-          "numWorkers": 42, # Number of Google Compute Engine workers in this pool needed to
-              # execute the job.  If zero or unspecified, the service will
+          &quot;workerHarnessContainerImage&quot;: &quot;A String&quot;, # Required. Docker container image that executes the Cloud Dataflow worker
+              # harness, residing in Google Container Registry.
+              #
+              # Deprecated for the Fn API path. Use sdk_harness_container_images instead.
+          &quot;diskType&quot;: &quot;A String&quot;, # Type of root disk for VMs.  If empty or unspecified, the service will
               # attempt to choose a reasonable default.
-          "subnetwork": "A String", # Subnetwork to which VMs will be assigned, if desired.  Expected to be of
-              # the form "regions/REGION/subnetworks/SUBNETWORK".
-          "dataDisks": [ # Data disks that are used by a VM in this workflow.
+          &quot;machineType&quot;: &quot;A String&quot;, # Machine type (e.g. &quot;n1-standard-1&quot;).  If empty or unspecified, the
+              # service will attempt to choose a reasonable default.
+          &quot;kind&quot;: &quot;A String&quot;, # The kind of the worker pool; currently only `harness` and `shuffle`
+              # are supported.
+          &quot;dataDisks&quot;: [ # Data disks that are used by a VM in this workflow.
             { # Describes the data disk used by a workflow job.
-              "mountPoint": "A String", # Directory in a VM where disk is mounted.
-              "sizeGb": 42, # Size of disk in GB.  If zero or unspecified, the service will
+              &quot;sizeGb&quot;: 42, # Size of disk in GB.  If zero or unspecified, the service will
                   # attempt to choose a reasonable default.
-              "diskType": "A String", # Disk storage type, as defined by Google Compute Engine.  This
+              &quot;diskType&quot;: &quot;A String&quot;, # Disk storage type, as defined by Google Compute Engine.  This
                   # must be a disk type appropriate to the project and zone in which
                   # the workers will run.  If unknown or unspecified, the service
                   # will attempt to choose a reasonable default.
                   #
                   # For example, the standard persistent disk type is a resource name
-                  # typically ending in "pd-standard".  If SSD persistent disks are
-                  # available, the resource name typically ends with "pd-ssd".  The
+                  # typically ending in &quot;pd-standard&quot;.  If SSD persistent disks are
+                  # available, the resource name typically ends with &quot;pd-ssd&quot;.  The
                   # actual valid values are defined by the Google Compute Engine API,
                   # not by the Cloud Dataflow API; consult the Google Compute Engine
                   # documentation for more information about determining the set of
@@ -3720,29 +3418,144 @@
                   # typically look something like this:
                   #
                   # compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
+              &quot;mountPoint&quot;: &quot;A String&quot;, # Directory in a VM where disk is mounted.
             },
           ],
-          "sdkHarnessContainerImages": [ # Set of SDK harness containers needed to execute this pipeline. This will
+          &quot;sdkHarnessContainerImages&quot;: [ # Set of SDK harness containers needed to execute this pipeline. This will
               # only be set in the Fn API path. For non-cross-language pipelines this
               # should have only one entry. Cross-language pipelines will have two or more
               # entries.
             { # Defines a SDK harness container for executing Dataflow pipelines.
-              "containerImage": "A String", # A docker container image that resides in Google Container Registry.
-              "useSingleCorePerContainer": True or False, # If true, recommends the Dataflow service to use only one core per SDK
+              &quot;containerImage&quot;: &quot;A String&quot;, # A docker container image that resides in Google Container Registry.
+              &quot;useSingleCorePerContainer&quot;: True or False, # If true, recommends that the Dataflow service use only one core per SDK
                   # container instance with this image. If false (or unset) recommends using
                   # more than one core per SDK container instance with this image for
                   # efficiency. Note that Dataflow service may choose to override this property
                   # if needed.
             },
           ],
+          &quot;subnetwork&quot;: &quot;A String&quot;, # Subnetwork to which VMs will be assigned, if desired.  Expected to be of
+              # the form &quot;regions/REGION/subnetworks/SUBNETWORK&quot;.
+          &quot;ipConfiguration&quot;: &quot;A String&quot;, # Configuration for VM IPs.
+          &quot;taskrunnerSettings&quot;: { # Taskrunner configuration settings. # Settings passed through to Google Compute Engine workers when
+              # using the standard Dataflow task runner.  Users should ignore
+              # this field.
+            &quot;alsologtostderr&quot;: True or False, # Whether to also send taskrunner log info to stderr.
+            &quot;taskGroup&quot;: &quot;A String&quot;, # The UNIX group ID on the worker VM to use for tasks launched by
+                # taskrunner; e.g. &quot;wheel&quot;.
+            &quot;harnessCommand&quot;: &quot;A String&quot;, # The command to launch the worker harness.
+            &quot;logDir&quot;: &quot;A String&quot;, # The directory on the VM to store logs.
+            &quot;oauthScopes&quot;: [ # The OAuth2 scopes to be requested by the taskrunner in order to
+                # access the Cloud Dataflow API.
+              &quot;A String&quot;,
+            ],
+            &quot;dataflowApiVersion&quot;: &quot;A String&quot;, # The API version of the endpoint, e.g. &quot;v1b3&quot;.
+            &quot;logUploadLocation&quot;: &quot;A String&quot;, # Indicates where to put logs.  If this is not specified, the logs
+                # will not be uploaded.
+                #
+                # The supported resource type is:
+                #
+                # Google Cloud Storage:
+                #   storage.googleapis.com/{bucket}/{object}
+                #   bucket.storage.googleapis.com/{object}
+            &quot;streamingWorkerMainClass&quot;: &quot;A String&quot;, # The streaming worker main class name.
+            &quot;workflowFileName&quot;: &quot;A String&quot;, # The file to store the workflow in.
+            &quot;baseTaskDir&quot;: &quot;A String&quot;, # The location on the worker for task-specific subdirectories.
+            &quot;tempStoragePrefix&quot;: &quot;A String&quot;, # The prefix of the resources the taskrunner should use for
+                # temporary storage.
+                #
+                # The supported resource type is:
+                #
+                # Google Cloud Storage:
+                #   storage.googleapis.com/{bucket}/{object}
+                #   bucket.storage.googleapis.com/{object}
+            &quot;commandlinesFileName&quot;: &quot;A String&quot;, # The file to store preprocessing commands in.
+            &quot;languageHint&quot;: &quot;A String&quot;, # The suggested backend language.
+            &quot;baseUrl&quot;: &quot;A String&quot;, # The base URL for the taskrunner to use when accessing Google Cloud APIs.
+                #
+                # When workers access Google Cloud APIs, they logically do so via
+                # relative URLs.  If this field is specified, it supplies the base
+                # URL to use for resolving these relative URLs.  The normative
+                # algorithm used is defined by RFC 1808, &quot;Relative Uniform Resource
+                # Locators&quot;.
+                #
+                # If not specified, the default value is &quot;http://www.googleapis.com/&quot;
+            &quot;logToSerialconsole&quot;: True or False, # Whether to send taskrunner log info to Google Compute Engine VM serial
+                # console.
+            &quot;continueOnException&quot;: True or False, # Whether to continue taskrunner if an exception is hit.
+            &quot;parallelWorkerSettings&quot;: { # Provides data to pass through to the worker harness. # The settings to pass to the parallel worker harness.
+              &quot;baseUrl&quot;: &quot;A String&quot;, # The base URL for accessing Google Cloud APIs.
+                  #
+                  # When workers access Google Cloud APIs, they logically do so via
+                  # relative URLs.  If this field is specified, it supplies the base
+                  # URL to use for resolving these relative URLs.  The normative
+                  # algorithm used is defined by RFC 1808, &quot;Relative Uniform Resource
+                  # Locators&quot;.
+                  #
+                  # If not specified, the default value is &quot;http://www.googleapis.com/&quot;
+              &quot;reportingEnabled&quot;: True or False, # Whether to send work progress updates to the service.
+              &quot;servicePath&quot;: &quot;A String&quot;, # The Cloud Dataflow service path relative to the root URL, for example,
+                  # &quot;dataflow/v1b3/projects&quot;.
+              &quot;shuffleServicePath&quot;: &quot;A String&quot;, # The Shuffle service path relative to the root URL, for example,
+                  # &quot;shuffle/v1beta1&quot;.
+              &quot;workerId&quot;: &quot;A String&quot;, # The ID of the worker running this pipeline.
+              &quot;tempStoragePrefix&quot;: &quot;A String&quot;, # The prefix of the resources the system should use for temporary
+                  # storage.
+                  #
+                  # The supported resource type is:
+                  #
+                  # Google Cloud Storage:
+                  #
+                  #   storage.googleapis.com/{bucket}/{object}
+                  #   bucket.storage.googleapis.com/{object}
+            },
+            &quot;vmId&quot;: &quot;A String&quot;, # The ID string of the VM.
+            &quot;taskUser&quot;: &quot;A String&quot;, # The UNIX user ID on the worker VM to use for tasks launched by
+                # taskrunner; e.g. &quot;root&quot;.
+          },
+          &quot;autoscalingSettings&quot;: { # Settings for WorkerPool autoscaling. # Settings for autoscaling of this WorkerPool.
+            &quot;maxNumWorkers&quot;: 42, # The maximum number of workers to cap scaling at.
+            &quot;algorithm&quot;: &quot;A String&quot;, # The algorithm to use for autoscaling.
+          },
+          &quot;metadata&quot;: { # Metadata to set on the Google Compute Engine VMs.
+            &quot;a_key&quot;: &quot;A String&quot;,
+          },
         },
       ],
-      "clusterManagerApiService": "A String", # The type of cluster manager API to use.  If unknown or
+      &quot;dataset&quot;: &quot;A String&quot;, # The dataset for the current project where various workflow
+          # related tables are stored.
+          #
+          # The supported resource type is:
+          #
+          # Google BigQuery:
+          #   bigquery.googleapis.com/{dataset}
+      &quot;internalExperiments&quot;: { # Experimental settings.
+        &quot;a_key&quot;: &quot;&quot;, # Properties of the object. Contains field @type with type URL.
+      },
+      &quot;workerRegion&quot;: &quot;A String&quot;, # The Compute Engine region
+          # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
+          # which worker processing should occur, e.g. &quot;us-west1&quot;. Mutually exclusive
+          # with worker_zone. If neither worker_region nor worker_zone is specified,
+          # default to the control plane&#x27;s region.
+      &quot;serviceKmsKeyName&quot;: &quot;A String&quot;, # If set, contains the Cloud KMS key identifier used to encrypt data
+          # at rest, AKA a Customer Managed Encryption Key (CMEK).
+          #
+          # Format:
+          #   projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
+      &quot;userAgent&quot;: { # A description of the process that generated the request.
+        &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+      },
+      &quot;workerZone&quot;: &quot;A String&quot;, # The Compute Engine zone
+          # (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in
+          # which worker processing should occur, e.g. &quot;us-west1-a&quot;. Mutually exclusive
+          # with worker_region. If neither worker_region nor worker_zone is specified,
+          # a zone in the control plane&#x27;s region is chosen based on available capacity.
+      &quot;clusterManagerApiService&quot;: &quot;A String&quot;, # The type of cluster manager API to use.  If unknown or
           # unspecified, the service will attempt to choose a reasonable
           # default.  This should be in the form of the API service name,
-          # e.g. "compute.googleapis.com".
-      "tempStoragePrefix": "A String", # The prefix of the resources the system should use for temporary
-          # storage.  The system will append the suffix "/temp-{JOBNAME} to
+          # e.g. &quot;compute.googleapis.com&quot;.
+      &quot;tempStoragePrefix&quot;: &quot;A String&quot;, # The prefix of the resources the system should use for temporary
+          # storage.  The system will append the suffix &quot;/temp-{JOBNAME}&quot; to
           # this resource prefix, where {JOBNAME} is the value of the
           # job_name field.  The resulting bucket and object prefix is used
           # as the prefix of the resources used to store temporary data
@@ -3754,11 +3567,199 @@
           #
           #   storage.googleapis.com/{bucket}/{object}
           #   bucket.storage.googleapis.com/{object}
+      &quot;experiments&quot;: [ # The list of experiments to enable.
+        &quot;A String&quot;,
+      ],
+      &quot;version&quot;: { # A structure describing which components and their versions of the service
+          # are required in order to run the job.
+        &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
+      },
+      &quot;serviceAccountEmail&quot;: &quot;A String&quot;, # Identity to run virtual machines as. Defaults to the default account.
     },
-    "location": "A String", # The [regional endpoint]
-        # (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that
-        # contains this job.
-    "tempFiles": [ # A set of files the system should be aware of that are used
+    &quot;stageStates&quot;: [ # This field may be mutated by the Cloud Dataflow service;
+        # callers cannot mutate it.
+      { # A message describing the state of a particular execution stage.
+        &quot;executionStageName&quot;: &quot;A String&quot;, # The name of the execution stage.
+        &quot;currentStateTime&quot;: &quot;A String&quot;, # The time at which the stage transitioned to this state.
+        &quot;executionStageState&quot;: &quot;A String&quot;, # Execution stage states allow the same set of values as JobState.
+      },
+    ],
+    &quot;jobMetadata&quot;: { # Metadata available primarily for filtering jobs. Will be included in the # This field is populated by the Dataflow service to support filtering jobs
+        # by the metadata values provided here. Populated for ListJobs and all GetJob
+        # views SUMMARY and higher.
+        # ListJob response and Job SUMMARY view.
+      &quot;bigTableDetails&quot;: [ # Identification of a BigTable source used in the Dataflow job.
+        { # Metadata for a BigTable connector used by the job.
+          &quot;tableId&quot;: &quot;A String&quot;, # TableId accessed in the connection.
+          &quot;projectId&quot;: &quot;A String&quot;, # ProjectId accessed in the connection.
+          &quot;instanceId&quot;: &quot;A String&quot;, # InstanceId accessed in the connection.
+        },
+      ],
+      &quot;spannerDetails&quot;: [ # Identification of a Spanner source used in the Dataflow job.
+        { # Metadata for a Spanner connector used by the job.
+          &quot;databaseId&quot;: &quot;A String&quot;, # DatabaseId accessed in the connection.
+          &quot;instanceId&quot;: &quot;A String&quot;, # InstanceId accessed in the connection.
+          &quot;projectId&quot;: &quot;A String&quot;, # ProjectId accessed in the connection.
+        },
+      ],
+      &quot;datastoreDetails&quot;: [ # Identification of a Datastore source used in the Dataflow job.
+        { # Metadata for a Datastore connector used by the job.
+          &quot;projectId&quot;: &quot;A String&quot;, # ProjectId accessed in the connection.
+          &quot;namespace&quot;: &quot;A String&quot;, # Namespace used in the connection.
+        },
+      ],
+      &quot;sdkVersion&quot;: { # The version of the SDK used to run the job. # The SDK version used to run the job.
+        &quot;versionDisplayName&quot;: &quot;A String&quot;, # A readable string describing the version of the SDK.
+        &quot;sdkSupportStatus&quot;: &quot;A String&quot;, # The support status for this SDK version.
+        &quot;version&quot;: &quot;A String&quot;, # The version of the SDK used to run the job.
+      },
+      &quot;bigqueryDetails&quot;: [ # Identification of a BigQuery source used in the Dataflow job.
+        { # Metadata for a BigQuery connector used by the job.
+          &quot;table&quot;: &quot;A String&quot;, # Table accessed in the connection.
+          &quot;dataset&quot;: &quot;A String&quot;, # Dataset accessed in the connection.
+          &quot;projectId&quot;: &quot;A String&quot;, # Project accessed in the connection.
+          &quot;query&quot;: &quot;A String&quot;, # Query used to access data in the connection.
+        },
+      ],
+      &quot;fileDetails&quot;: [ # Identification of a File source used in the Dataflow job.
+        { # Metadata for a File connector used by the job.
+          &quot;filePattern&quot;: &quot;A String&quot;, # File Pattern used to access files by the connector.
+        },
+      ],
+      &quot;pubsubDetails&quot;: [ # Identification of a PubSub source used in the Dataflow job.
+        { # Metadata for a PubSub connector used by the job.
+          &quot;subscription&quot;: &quot;A String&quot;, # Subscription used in the connection.
+          &quot;topic&quot;: &quot;A String&quot;, # Topic accessed in the connection.
+        },
+      ],
+    },
+    &quot;createdFromSnapshotId&quot;: &quot;A String&quot;, # If this is specified, the job&#x27;s initial state is populated from the given
+        # snapshot.
+    &quot;projectId&quot;: &quot;A String&quot;, # The ID of the Cloud Platform project that the job belongs to.
+    &quot;type&quot;: &quot;A String&quot;, # The type of Cloud Dataflow job.
+    &quot;pipelineDescription&quot;: { # A descriptive representation of submitted pipeline as well as the executed # Preliminary field: The format of this data may change at any time.
+        # A description of the user pipeline and stages through which it is executed.
+        # Created by Cloud Dataflow service.  Only retrieved with
+        # JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
+        # form.  This data is provided by the Dataflow service for ease of visualizing
+        # the pipeline and interpreting Dataflow provided metrics.
+      &quot;executionPipelineStage&quot;: [ # Description of each stage of execution of the pipeline.
+        { # Description of the composing transforms, names/ids, and input/outputs of a
+            # stage of execution.  Some composing transforms and sources may have been
+            # generated by the Dataflow service during execution planning.
+          &quot;id&quot;: &quot;A String&quot;, # Dataflow service generated id for this stage.
+          &quot;componentTransform&quot;: [ # Transforms that comprise this execution stage.
+            { # Description of a transform executed as part of an execution stage.
+              &quot;originalTransform&quot;: &quot;A String&quot;, # User name for the original user transform with which this transform is
+                  # most closely associated.
+              &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+              &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this transform; may be user or system generated.
+            },
+          ],
+          &quot;componentSource&quot;: [ # Collections produced and consumed by component transforms of this stage.
+            { # Description of an interstitial value between transforms in an execution
+                # stage.
+              &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+              &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this transform; may be user or system generated.
+              &quot;originalTransformOrCollection&quot;: &quot;A String&quot;, # User name for the original user transform or collection with which this
+                  # source is most closely associated.
+            },
+          ],
+          &quot;kind&quot;: &quot;A String&quot;, # Type of transform this stage is executing.
+          &quot;outputSource&quot;: [ # Output sources for this stage.
+            { # Description of an input or output of an execution stage.
+              &quot;originalTransformOrCollection&quot;: &quot;A String&quot;, # User name for the original user transform or collection with which this
+                  # source is most closely associated.
+              &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+              &quot;sizeBytes&quot;: &quot;A String&quot;, # Size of the source, if measurable.
+              &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this source; may be user or system generated.
+            },
+          ],
+          &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this stage.
+          &quot;inputSource&quot;: [ # Input sources for this stage.
+            { # Description of an input or output of an execution stage.
+              &quot;originalTransformOrCollection&quot;: &quot;A String&quot;, # User name for the original user transform or collection with which this
+                  # source is most closely associated.
+              &quot;name&quot;: &quot;A String&quot;, # Dataflow service generated name for this source.
+              &quot;sizeBytes&quot;: &quot;A String&quot;, # Size of the source, if measurable.
+              &quot;userName&quot;: &quot;A String&quot;, # Human-readable name for this source; may be user or system generated.
+            },
+          ],
+        },
+      ],
+      &quot;originalPipelineTransform&quot;: [ # Description of each transform in the pipeline and collections between them.
+        { # Description of the type, names/ids, and input/outputs for a transform.
+          &quot;kind&quot;: &quot;A String&quot;, # Type of transform.
+          &quot;inputCollectionName&quot;: [ # User names for all collection inputs to this transform.
+            &quot;A String&quot;,
+          ],
+          &quot;name&quot;: &quot;A String&quot;, # User provided name for this transform instance.
+          &quot;id&quot;: &quot;A String&quot;, # SDK generated id of this transform instance.
+          &quot;displayData&quot;: [ # Transform-specific display data.
+            { # Data provided with a pipeline or transform to provide descriptive info.
+              &quot;timestampValue&quot;: &quot;A String&quot;, # Contains value if the data is of timestamp type.
+              &quot;boolValue&quot;: True or False, # Contains value if the data is of a boolean type.
+              &quot;javaClassValue&quot;: &quot;A String&quot;, # Contains value if the data is of java class type.
+              &quot;strValue&quot;: &quot;A String&quot;, # Contains value if the data is of string type.
+              &quot;int64Value&quot;: &quot;A String&quot;, # Contains value if the data is of int64 type.
+              &quot;durationValue&quot;: &quot;A String&quot;, # Contains value if the data is of duration type.
+              &quot;namespace&quot;: &quot;A String&quot;, # The namespace for the key. This is usually a class name or programming
+                  # language namespace (i.e. python module) which defines the display data.
+                  # This allows a dax monitoring system to specially handle the data
+                  # and perform custom rendering.
+              &quot;floatValue&quot;: 3.14, # Contains value if the data is of float type.
+              &quot;key&quot;: &quot;A String&quot;, # The key identifying the display data.
+                  # This is intended to be used as a label for the display data
+                  # when viewed in a dax monitoring system.
+              &quot;shortStrValue&quot;: &quot;A String&quot;, # A possible additional shorter value to display.
+                  # For example a java_class_name_value of com.mypackage.MyDoFn
+                  # will be stored with MyDoFn as the short_str_value and
+                  # com.mypackage.MyDoFn as the java_class_name value.
+                  # short_str_value can be displayed and java_class_name_value
+                  # will be displayed as a tooltip.
+              &quot;url&quot;: &quot;A String&quot;, # An optional full URL.
+              &quot;label&quot;: &quot;A String&quot;, # An optional label to display in a dax UI for the element.
+            },
+          ],
+          &quot;outputCollectionName&quot;: [ # User names for all collection outputs to this transform.
+            &quot;A String&quot;,
+          ],
+        },
+      ],
+      &quot;displayData&quot;: [ # Pipeline level display data.
+        { # Data provided with a pipeline or transform to provide descriptive info.
+          &quot;timestampValue&quot;: &quot;A String&quot;, # Contains value if the data is of timestamp type.
+          &quot;boolValue&quot;: True or False, # Contains value if the data is of a boolean type.
+          &quot;javaClassValue&quot;: &quot;A String&quot;, # Contains value if the data is of java class type.
+          &quot;strValue&quot;: &quot;A String&quot;, # Contains value if the data is of string type.
+          &quot;int64Value&quot;: &quot;A String&quot;, # Contains value if the data is of int64 type.
+          &quot;durationValue&quot;: &quot;A String&quot;, # Contains value if the data is of duration type.
+          &quot;namespace&quot;: &quot;A String&quot;, # The namespace for the key. This is usually a class name or programming
+              # language namespace (i.e. python module) which defines the display data.
+              # This allows a dax monitoring system to specially handle the data
+              # and perform custom rendering.
+          &quot;floatValue&quot;: 3.14, # Contains value if the data is of float type.
+          &quot;key&quot;: &quot;A String&quot;, # The key identifying the display data.
+              # This is intended to be used as a label for the display data
+              # when viewed in a dax monitoring system.
+          &quot;shortStrValue&quot;: &quot;A String&quot;, # A possible additional shorter value to display.
+              # For example a java_class_name_value of com.mypackage.MyDoFn
+              # will be stored with MyDoFn as the short_str_value and
+              # com.mypackage.MyDoFn as the java_class_name value.
+              # short_str_value can be displayed and java_class_name_value
+              # will be displayed as a tooltip.
+          &quot;url&quot;: &quot;A String&quot;, # An optional full URL.
+          &quot;label&quot;: &quot;A String&quot;, # An optional label to display in a dax UI for the element.
+        },
+      ],
+    },
+    &quot;replaceJobId&quot;: &quot;A String&quot;, # If this job is an update of an existing job, this field is the job ID
+        # of the job it replaced.
+        #
+        # When sending a `CreateJobRequest`, you can update a job by specifying it
+        # here. The job named here is stopped, and its intermediate state is
+        # transferred to this job.
+    &quot;tempFiles&quot;: [ # A set of files the system should be aware of that are used
         # for temporary storage. These temporary files will be
         # removed on job completion.
         # No duplicates are allowed.
@@ -3770,36 +3771,9 @@
         #
         #    storage.googleapis.com/{bucket}/{object}
         #    bucket.storage.googleapis.com/{object}
-      "A String",
+      &quot;A String&quot;,
     ],
-    "type": "A String", # The type of Cloud Dataflow job.
-    "clientRequestId": "A String", # The client's unique identifier of the job, re-used across retried attempts.
-        # If this field is set, the service will ensure its uniqueness.
-        # The request to create a job will fail if the service has knowledge of a
-        # previously submitted job with the same client's ID and job name.
-        # The caller may use this field to ensure idempotence of job
-        # creation across retried attempts to create a job.
-        # By default, the field is empty and, in that case, the service ignores it.
-    "createdFromSnapshotId": "A String", # If this is specified, the job's initial state is populated from the given
-        # snapshot.
-    "stepsLocation": "A String", # The GCS location where the steps are stored.
-    "currentStateTime": "A String", # The timestamp associated with the current state.
-    "startTime": "A String", # The timestamp when the job was started (transitioned to JOB_STATE_PENDING).
-        # Flexible resource scheduling jobs are started with some delay after job
-        # creation, so start_time is unset before start and is updated when the
-        # job is started by the Cloud Dataflow service. For other jobs, start_time
-        # always equals to create_time and is immutable and set by the Cloud Dataflow
-        # service.
-    "createTime": "A String", # The timestamp when the job was initially created. Immutable and set by the
-        # Cloud Dataflow service.
-    "requestedState": "A String", # The job's requested state.
-        #
-        # `UpdateJob` may be used to switch between the `JOB_STATE_STOPPED` and
-        # `JOB_STATE_RUNNING` states, by setting requested_state.  `UpdateJob` may
-        # also be used to directly set a job's requested state to
-        # `JOB_STATE_CANCELLED` or `JOB_STATE_DONE`, irrevocably terminating the
-        # job if it has not already reached a terminal state.
-    "name": "A String", # The user-specified Cloud Dataflow job name.
+    &quot;name&quot;: &quot;A String&quot;, # The user-specified Cloud Dataflow job name.
         #
         # Only one Job with a given name may exist in a project at any
         # given time. If a caller attempts to create a Job with the same
@@ -3808,7 +3782,7 @@
         #
         # The name must match the regular expression
         # `[a-z]([-a-z0-9]{0,38}[a-z0-9])?`
-    "steps": [ # Exactly one of step or steps_location should be specified.
+    &quot;steps&quot;: [ # Exactly one of step or steps_location should be specified.
         #
         # The top-level steps that constitute the entire job.
       { # Defines a particular step within a Cloud Dataflow job.
@@ -3817,11 +3791,11 @@
           # specific operation as part of the overall job.  Data is typically
           # passed from one step to another as part of the job.
           #
-          # Here's an example of a sequence of steps which together implement a
+          # Here&#x27;s an example of a sequence of steps which together implement a
           # Map-Reduce job:
           #
           #   * Read a collection of data from some source, parsing the
-          #     collection's elements.
+          #     collection&#x27;s elements.
           #
           #   * Validate the elements.
           #
@@ -3836,23 +3810,32 @@
           #
           # Note that the Cloud Dataflow service may be used to run many different
           # types of jobs, not just Map-Reduce.
-        "kind": "A String", # The kind of step in the Cloud Dataflow job.
-        "name": "A String", # The name that identifies the step. This must be unique for each
+        &quot;name&quot;: &quot;A String&quot;, # The name that identifies the step. This must be unique for each
             # step with respect to all other steps in the Cloud Dataflow job.
-        "properties": { # Named properties associated with the step. Each kind of
+        &quot;kind&quot;: &quot;A String&quot;, # The kind of step in the Cloud Dataflow job.
+        &quot;properties&quot;: { # Named properties associated with the step. Each kind of
             # predefined step has its own required set of properties.
             # Must be provided on Create.  Only retrieved with JOB_VIEW_ALL.
-          "a_key": "", # Properties of the object.
+          &quot;a_key&quot;: &quot;&quot;, # Properties of the object.
         },
       },
     ],
-    "replaceJobId": "A String", # If this job is an update of an existing job, this field is the job ID
-        # of the job it replaced.
-        #
-        # When sending a `CreateJobRequest`, you can update a job by specifying it
-        # here. The job named here is stopped, and its intermediate state is
-        # transferred to this job.
-    "currentState": "A String", # The current state of the job.
+    &quot;replacedByJobId&quot;: &quot;A String&quot;, # If another job is an update of this job (and thus, this job is in
+        # `JOB_STATE_UPDATED`), this field contains the ID of that job.
+    &quot;executionInfo&quot;: { # Additional information about how a Cloud Dataflow job will be executed that # Deprecated.
+        # isn&#x27;t contained in the submitted job.
+      &quot;stages&quot;: { # A mapping from each stage to the information about that stage.
+        &quot;a_key&quot;: { # Contains information about how a particular
+            # google.dataflow.v1beta3.Step will be executed.
+          &quot;stepName&quot;: [ # The steps associated with the execution stage.
+              # Note that stages may have several steps, and that a given step
+              # might be run by more than one stage.
+            &quot;A String&quot;,
+          ],
+        },
+      },
+    },
+    &quot;currentState&quot;: &quot;A String&quot;, # The current state of the job.
         #
         # Jobs are created in the `JOB_STATE_STOPPED` state unless otherwise
         # specified.
@@ -3863,19 +3846,36 @@
         #
         # This field may be mutated by the Cloud Dataflow service;
         # callers cannot mutate it.
-    "executionInfo": { # Additional information about how a Cloud Dataflow job will be executed that # Deprecated.
-        # isn't contained in the submitted job.
-      "stages": { # A mapping from each stage to the information about that stage.
-        "a_key": { # Contains information about how a particular
-            # google.dataflow.v1beta3.Step will be executed.
-          "stepName": [ # The steps associated with the execution stage.
-              # Note that stages may have several steps, and that a given step
-              # might be run by more than one stage.
-            "A String",
-          ],
-        },
-      },
+    &quot;location&quot;: &quot;A String&quot;, # The [regional endpoint]
+        # (https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that
+        # contains this job.
+    &quot;startTime&quot;: &quot;A String&quot;, # The timestamp when the job was started (transitioned to JOB_STATE_PENDING).
+        # Flexible resource scheduling jobs are started with some delay after job
+        # creation, so start_time is unset before start and is updated when the
+        # job is started by the Cloud Dataflow service. For other jobs, start_time
+        # always equals create_time and is immutable and set by the Cloud Dataflow
+        # service.
+    &quot;stepsLocation&quot;: &quot;A String&quot;, # The GCS location where the steps are stored.
+    &quot;labels&quot;: { # User-defined labels for this job.
+        #
+        # The labels map can contain no more than 64 entries.  Entries of the labels
+        # map are UTF8 strings that comply with the following restrictions:
+        #
+        # * Keys must conform to regexp:  \p{Ll}\p{Lo}{0,62}
+        # * Values must conform to regexp:  [\p{Ll}\p{Lo}\p{N}_-]{0,63}
+        # * Both keys and values are additionally constrained to be &lt;= 128 bytes in
+        # size.
+      &quot;a_key&quot;: &quot;A String&quot;,
     },
+    &quot;createTime&quot;: &quot;A String&quot;, # The timestamp when the job was initially created. Immutable and set by the
+        # Cloud Dataflow service.
+    &quot;requestedState&quot;: &quot;A String&quot;, # The job&#x27;s requested state.
+        #
+        # `UpdateJob` may be used to switch between the `JOB_STATE_STOPPED` and
+        # `JOB_STATE_RUNNING` states, by setting requested_state.  `UpdateJob` may
+        # also be used to directly set a job&#x27;s requested state to
+        # `JOB_STATE_CANCELLED` or `JOB_STATE_DONE`, irrevocably terminating the
+        # job if it has not already reached a terminal state.
   }</pre>
 </div>
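
For orientation, here is a minimal sketch of how this `create` method might be invoked through google-api-python-client with a hand-written Job body. The project ID, region, bucket, service account, and steps location below are hypothetical placeholders, and in practice Dataflow jobs are usually submitted via the Apache Beam SDK or Dataflow templates rather than a raw `Job` resource:

```python
# Sketch only: assumes application-default credentials and placeholder
# resource names; a real job also needs valid steps (or stepsLocation)
# produced by an SDK.
from googleapiclient.discovery import build

dataflow = build("dataflow", "v1b3")

job_body = {
    "name": "example-job",            # must match [a-z]([-a-z0-9]{0,38}[a-z0-9])?
    "type": "JOB_TYPE_BATCH",
    "labels": {"team": "data"},       # at most 64 entries, keys/values <= 128 bytes
    "environment": {
        # Documented format: storage.googleapis.com/{bucket}/{object}
        "tempStoragePrefix": "storage.googleapis.com/example-bucket/temp",
        "serviceAccountEmail": "worker@example-project.iam.gserviceaccount.com",
    },
    # Exactly one of steps or stepsLocation should be specified.
    "stepsLocation": "gs://example-bucket/steps.json",
}

request = dataflow.projects().locations().jobs().create(
    projectId="example-project",
    location="us-central1",
    body=job_body,
)
response = request.execute()
print(response.get("id"), response.get("currentState"))
```

The optional `view`, `replaceJobId`, and `x__xgafv` parameters shown in the signature above may be omitted, as in this sketch.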