<html><body>
| 2 | <style> |
| 3 | |
| 4 | body, h1, h2, h3, div, span, p, pre, a { |
| 5 | margin: 0; |
| 6 | padding: 0; |
| 7 | border: 0; |
| 8 | font-weight: inherit; |
| 9 | font-style: inherit; |
| 10 | font-size: 100%; |
| 11 | font-family: inherit; |
| 12 | vertical-align: baseline; |
| 13 | } |
| 14 | |
| 15 | body { |
| 16 | font-size: 13px; |
| 17 | padding: 1em; |
| 18 | } |
| 19 | |
| 20 | h1 { |
| 21 | font-size: 26px; |
| 22 | margin-bottom: 1em; |
| 23 | } |
| 24 | |
| 25 | h2 { |
| 26 | font-size: 24px; |
| 27 | margin-bottom: 1em; |
| 28 | } |
| 29 | |
| 30 | h3 { |
| 31 | font-size: 20px; |
| 32 | margin-bottom: 1em; |
| 33 | margin-top: 1em; |
| 34 | } |
| 35 | |
| 36 | pre, code { |
| 37 | line-height: 1.5; |
| 38 | font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace; |
| 39 | } |
| 40 | |
| 41 | pre { |
| 42 | margin-top: 0.5em; |
| 43 | } |
| 44 | |
| 45 | h1, h2, h3, p { |
  font-family: Arial, sans-serif;
| 47 | } |
| 48 | |
| 49 | h1, h2, h3 { |
| 50 | border-bottom: solid #CCC 1px; |
| 51 | } |
| 52 | |
| 53 | .toc_element { |
| 54 | margin-top: 0.5em; |
| 55 | } |
| 56 | |
| 57 | .firstline { |
  margin-left: 2em;
| 59 | } |
| 60 | |
| 61 | .method { |
| 62 | margin-top: 1em; |
| 63 | border: solid 1px #CCC; |
| 64 | padding: 1em; |
| 65 | background: #EEE; |
| 66 | } |
| 67 | |
| 68 | .details { |
| 69 | font-weight: bold; |
| 70 | font-size: 14px; |
| 71 | } |
| 72 | |
| 73 | </style> |
| 74 | |
| 75 | <h1><a href="dataproc_v1.html">Google Cloud Dataproc API</a> . <a href="dataproc_v1.projects.html">projects</a> . <a href="dataproc_v1.projects.regions.html">regions</a> . <a href="dataproc_v1.projects.regions.jobs.html">jobs</a></h1> |
| 76 | <h2>Instance Methods</h2> |
| 77 | <p class="toc_element"> |
| 78 | <code><a href="#cancel">cancel(projectId, region, jobId, body, x__xgafv=None)</a></code></p> |
| 79 | <p class="firstline">Starts a job cancellation request. To access the job resource after cancellation, call [regions/{region}/jobs.list](/dataproc/reference/rest/v1/projects.regions/{region}/jobs/list) or [regions/{region}/jobs.get](/dataproc/reference/rest/v1/projects.regions/{region}/jobs/get).</p> |
| 80 | <p class="toc_element"> |
| 81 | <code><a href="#delete">delete(projectId, region, jobId, x__xgafv=None)</a></code></p> |
| 82 | <p class="firstline">Deletes the job from the project. If the job is active, the delete fails, and the response returns `FAILED_PRECONDITION`.</p> |
| 83 | <p class="toc_element"> |
| 84 | <code><a href="#get">get(projectId, region, jobId, x__xgafv=None)</a></code></p> |
| 85 | <p class="firstline">Gets the resource representation for a job in a project.</p> |
| 86 | <p class="toc_element"> |
| 87 | <code><a href="#list">list(projectId, region, pageSize=None, x__xgafv=None, jobStateMatcher=None, pageToken=None, clusterName=None)</a></code></p> |
| 88 | <p class="firstline">Lists regions/{region}/jobs in a project.</p> |
| 89 | <p class="toc_element"> |
| 90 | <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p> |
| 91 | <p class="firstline">Retrieves the next page of results.</p> |
| 92 | <p class="toc_element"> |
| 93 | <code><a href="#submit">submit(projectId, region, body, x__xgafv=None)</a></code></p> |
| 94 | <p class="firstline">Submits a job to a cluster.</p> |
| 95 | <h3>Method Details</h3> |
| 96 | <div class="method"> |
| 97 | <code class="details" id="cancel">cancel(projectId, region, jobId, body, x__xgafv=None)</code> |
| 98 | <pre>Starts a job cancellation request. To access the job resource after cancellation, call [regions/{region}/jobs.list](/dataproc/reference/rest/v1/projects.regions/{region}/jobs/list) or [regions/{region}/jobs.get](/dataproc/reference/rest/v1/projects.regions/{region}/jobs/get). |
| 99 | |
| 100 | Args: |
| 101 | projectId: string, [Required] The ID of the Google Cloud Platform project that the job belongs to. (required) |
| 102 | region: string, [Required] The Dataproc region in which to handle the request. (required) |
| 103 | jobId: string, [Required] The job ID. (required) |
| 104 | body: object, The request body. (required) |
| 105 | The object takes the form of: |
| 106 | |
| 107 | { # A request to cancel a job. |
| 108 | } |
| 109 | |
| 110 | x__xgafv: string, V1 error format. |
| 111 | |
| 112 | Returns: |
| 113 | An object of the form: |
| 114 | |
| 115 | { # A Cloud Dataproc job resource. |
| 116 | "status": { # Cloud Dataproc job status. # [Output-only] The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields. |
| 117 | "state": "A String", # [Required] A state message specifying the overall job state. |
| 118 | "stateStartTime": "A String", # [Output-only] The time when this state was entered. |
| 119 | "details": "A String", # [Optional] Job state details, such as an error description if the state is ERROR. |
| 120 | }, |
| 121 | "hadoopJob": { # A Cloud Dataproc job for running Hadoop MapReduce jobs on YARN. # Job is a Hadoop job. |
| 122 | "jarFileUris": [ # [Optional] Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks. |
| 123 | "A String", |
| 124 | ], |
| 125 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 126 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 127 | "a_key": "A String", |
| 128 | }, |
| 129 | }, |
| 130 | "args": [ # [Optional] The arguments to pass to the driver. Do not include arguments, such as `-libjars` or `-Dfoo=bar`, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
| 131 | "A String", |
| 132 | ], |
| 133 | "fileUris": [ # [Optional] HCFS URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks. |
| 134 | "A String", |
| 135 | ], |
| 136 | "mainClass": "A String", # The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in `jar_file_uris`. |
| 137 | "archiveUris": [ # [Optional] HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip. |
| 138 | "A String", |
| 139 | ], |
| 140 | "mainJarFileUri": "A String", # The Hadoop Compatible Filesystem (HCFS) URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar' |
| 141 | "properties": { # [Optional] A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code. |
| 142 | "a_key": "A String", |
| 143 | }, |
| 144 | }, |
| 145 | "statusHistory": [ # [Output-only] The previous job status. |
| 146 | { # Cloud Dataproc job status. |
| 147 | "state": "A String", # [Required] A state message specifying the overall job state. |
| 148 | "stateStartTime": "A String", # [Output-only] The time when this state was entered. |
| 149 | "details": "A String", # [Optional] Job state details, such as an error description if the state is ERROR. |
| 150 | }, |
| 151 | ], |
| 152 | "placement": { # Cloud Dataproc job config. # [Required] Job information, including how, when, and where to run the job. |
| 153 | "clusterName": "A String", # [Required] The name of the cluster where the job will be submitted. |
| 154 | "clusterUuid": "A String", # [Output-only] A cluster UUID generated by the Dataproc service when the job is submitted. |
| 155 | }, |
| 156 | "reference": { # Encapsulates the full scoping used to reference a job. # [Optional] The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id. |
| 157 | "projectId": "A String", # [Required] The ID of the Google Cloud Platform project that the job belongs to. |
| 158 | "jobId": "A String", # [Required] The job ID, which must be unique within the project. The job ID is generated by the server upon job submission or provided by the user as a means to perform retries without creating duplicate jobs. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 512 characters. |
| 159 | }, |
| 160 | "sparkSqlJob": { # A Cloud Dataproc job for running Spark SQL queries. # Job is a SparkSql job. |
| 161 | "queryFileUri": "A String", # The HCFS URI of the script that contains SQL queries. |
| 162 | "scriptVariables": { # [Optional] Mapping of query variable names to values (equivalent to the Spark SQL command: SET `name="value";`). |
| 163 | "a_key": "A String", |
| 164 | }, |
| 165 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 166 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 167 | "a_key": "A String", |
| 168 | }, |
| 169 | }, |
| 170 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to be added to the Spark CLASSPATH. |
| 171 | "A String", |
| 172 | ], |
| 173 | "queryList": { # A list of queries to run on a cluster. # A list of queries. |
| 174 | "queries": [ # [Required] The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of an Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } } |
| 175 | "A String", |
| 176 | ], |
| 177 | }, |
| 178 | "properties": { # [Optional] A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. |
| 179 | "a_key": "A String", |
| 180 | }, |
| 181 | }, |
| 182 | "pigJob": { # A Cloud Dataproc job for running Pig queries on YARN. # Job is a Pig job. |
| 183 | "queryFileUri": "A String", # The HCFS URI of the script that contains the Pig queries. |
| 184 | "scriptVariables": { # [Optional] Mapping of query variable names to values (equivalent to the Pig command: `name=[value]`). |
| 185 | "a_key": "A String", |
| 186 | }, |
| 187 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 188 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 189 | "a_key": "A String", |
| 190 | }, |
| 191 | }, |
| 192 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs. |
| 193 | "A String", |
| 194 | ], |
| 195 | "queryList": { # A list of queries to run on a cluster. # A list of queries. |
| 196 | "queries": [ # [Required] The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of an Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } } |
| 197 | "A String", |
| 198 | ], |
| 199 | }, |
| 200 | "continueOnFailure": True or False, # [Optional] Whether to continue executing queries if a query fails. The default value is `false`. Setting to `true` can be useful when executing independent parallel queries. |
| 201 | "properties": { # [Optional] A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code. |
| 202 | "a_key": "A String", |
| 203 | }, |
| 204 | }, |
| 205 | "driverOutputResourceUri": "A String", # [Output-only] A URI pointing to the location of the stdout of the job's driver program. |
| 206 | "driverControlFilesUri": "A String", # [Output-only] If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as `driver_output_uri`. |
| 207 | "sparkJob": { # A Cloud Dataproc job for running Spark applications on YARN. # Job is a Spark job. |
| 208 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks. |
| 209 | "A String", |
| 210 | ], |
| 211 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 212 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 213 | "a_key": "A String", |
| 214 | }, |
| 215 | }, |
| 216 | "args": [ # [Optional] The arguments to pass to the driver. Do not include arguments, such as `--conf`, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
| 217 | "A String", |
| 218 | ], |
| 219 | "fileUris": [ # [Optional] HCFS URIs of files to be copied to the working directory of Spark drivers and distributed tasks. Useful for naively parallel tasks. |
| 220 | "A String", |
| 221 | ], |
| 222 | "mainClass": "A String", # The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in `jar_file_uris`. |
| 223 | "archiveUris": [ # [Optional] HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip. |
| 224 | "A String", |
| 225 | ], |
| 226 | "mainJarFileUri": "A String", # The Hadoop Compatible Filesystem (HCFS) URI of the jar file that contains the main class. |
| 227 | "properties": { # [Optional] A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code. |
| 228 | "a_key": "A String", |
| 229 | }, |
| 230 | }, |
| 231 | "pysparkJob": { # A Cloud Dataproc job for running PySpark applications on YARN. # Job is a Pyspark job. |
| 232 | "mainPythonFileUri": "A String", # [Required] The Hadoop Compatible Filesystem (HCFS) URI of the main Python file to use as the driver. Must be a .py file. |
| 233 | "args": [ # [Optional] The arguments to pass to the driver. Do not include arguments, such as `--conf`, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
| 234 | "A String", |
| 235 | ], |
| 236 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 237 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 238 | "a_key": "A String", |
| 239 | }, |
| 240 | }, |
| 241 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks. |
| 242 | "A String", |
| 243 | ], |
| 244 | "fileUris": [ # [Optional] HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks. |
| 245 | "A String", |
| 246 | ], |
| 247 | "archiveUris": [ # [Optional] HCFS URIs of archives to be extracted in the working directory of .jar, .tar, .tar.gz, .tgz, and .zip. |
| 248 | "A String", |
| 249 | ], |
| 250 | "pythonFileUris": [ # [Optional] HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip. |
| 251 | "A String", |
| 252 | ], |
| 253 | "properties": { # [Optional] A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code. |
| 254 | "a_key": "A String", |
| 255 | }, |
| 256 | }, |
| 257 | "hiveJob": { # A Cloud Dataproc job for running Hive queries on YARN. # Job is a Hive job. |
| 258 | "queryFileUri": "A String", # The HCFS URI of the script that contains Hive queries. |
| 259 | "scriptVariables": { # [Optional] Mapping of query variable names to values (equivalent to the Hive command: `SET name="value";`). |
| 260 | "a_key": "A String", |
| 261 | }, |
| 262 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs. |
| 263 | "A String", |
| 264 | ], |
| 265 | "queryList": { # A list of queries to run on a cluster. # A list of queries. |
| 266 | "queries": [ # [Required] The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of an Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } } |
| 267 | "A String", |
| 268 | ], |
| 269 | }, |
| 270 | "continueOnFailure": True or False, # [Optional] Whether to continue executing queries if a query fails. The default value is `false`. Setting to `true` can be useful when executing independent parallel queries. |
| 271 | "properties": { # [Optional] A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code. |
| 272 | "a_key": "A String", |
| 273 | }, |
| 274 | }, |
| 275 | }</pre> |
| 276 | </div> |
| 277 | |
| 278 | <div class="method"> |
| 279 | <code class="details" id="delete">delete(projectId, region, jobId, x__xgafv=None)</code> |
| 280 | <pre>Deletes the job from the project. If the job is active, the delete fails, and the response returns `FAILED_PRECONDITION`. |
| 281 | |
| 282 | Args: |
| 283 | projectId: string, [Required] The ID of the Google Cloud Platform project that the job belongs to. (required) |
| 284 | region: string, [Required] The Dataproc region in which to handle the request. (required) |
| 285 | jobId: string, [Required] The job ID. (required) |
| 286 | x__xgafv: string, V1 error format. |
| 287 | |
| 288 | Returns: |
| 289 | An object of the form: |
| 290 | |
    { # A generic empty message that you can re-use to avoid defining duplicated empty messages in your APIs. A typical example is to use it as the request or the response type of an API method. For instance: service Foo { rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty); } The JSON representation for `Empty` is an empty JSON object `{}`.
| 292 | }</pre> |
| 293 | </div> |
| 294 | |
| 295 | <div class="method"> |
| 296 | <code class="details" id="get">get(projectId, region, jobId, x__xgafv=None)</code> |
| 297 | <pre>Gets the resource representation for a job in a project. |
| 298 | |
| 299 | Args: |
| 300 | projectId: string, [Required] The ID of the Google Cloud Platform project that the job belongs to. (required) |
| 301 | region: string, [Required] The Dataproc region in which to handle the request. (required) |
| 302 | jobId: string, [Required] The job ID. (required) |
| 303 | x__xgafv: string, V1 error format. |
| 304 | |
| 305 | Returns: |
| 306 | An object of the form: |
| 307 | |
| 308 | { # A Cloud Dataproc job resource. |
| 309 | "status": { # Cloud Dataproc job status. # [Output-only] The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields. |
| 310 | "state": "A String", # [Required] A state message specifying the overall job state. |
| 311 | "stateStartTime": "A String", # [Output-only] The time when this state was entered. |
| 312 | "details": "A String", # [Optional] Job state details, such as an error description if the state is ERROR. |
| 313 | }, |
| 314 | "hadoopJob": { # A Cloud Dataproc job for running Hadoop MapReduce jobs on YARN. # Job is a Hadoop job. |
| 315 | "jarFileUris": [ # [Optional] Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks. |
| 316 | "A String", |
| 317 | ], |
| 318 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 319 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 320 | "a_key": "A String", |
| 321 | }, |
| 322 | }, |
| 323 | "args": [ # [Optional] The arguments to pass to the driver. Do not include arguments, such as `-libjars` or `-Dfoo=bar`, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
| 324 | "A String", |
| 325 | ], |
| 326 | "fileUris": [ # [Optional] HCFS URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks. |
| 327 | "A String", |
| 328 | ], |
| 329 | "mainClass": "A String", # The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in `jar_file_uris`. |
| 330 | "archiveUris": [ # [Optional] HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip. |
| 331 | "A String", |
| 332 | ], |
| 333 | "mainJarFileUri": "A String", # The Hadoop Compatible Filesystem (HCFS) URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar' |
| 334 | "properties": { # [Optional] A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code. |
| 335 | "a_key": "A String", |
| 336 | }, |
| 337 | }, |
| 338 | "statusHistory": [ # [Output-only] The previous job status. |
| 339 | { # Cloud Dataproc job status. |
| 340 | "state": "A String", # [Required] A state message specifying the overall job state. |
| 341 | "stateStartTime": "A String", # [Output-only] The time when this state was entered. |
| 342 | "details": "A String", # [Optional] Job state details, such as an error description if the state is ERROR. |
| 343 | }, |
| 344 | ], |
| 345 | "placement": { # Cloud Dataproc job config. # [Required] Job information, including how, when, and where to run the job. |
| 346 | "clusterName": "A String", # [Required] The name of the cluster where the job will be submitted. |
| 347 | "clusterUuid": "A String", # [Output-only] A cluster UUID generated by the Dataproc service when the job is submitted. |
| 348 | }, |
| 349 | "reference": { # Encapsulates the full scoping used to reference a job. # [Optional] The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id. |
| 350 | "projectId": "A String", # [Required] The ID of the Google Cloud Platform project that the job belongs to. |
| 351 | "jobId": "A String", # [Required] The job ID, which must be unique within the project. The job ID is generated by the server upon job submission or provided by the user as a means to perform retries without creating duplicate jobs. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 512 characters. |
| 352 | }, |
| 353 | "sparkSqlJob": { # A Cloud Dataproc job for running Spark SQL queries. # Job is a SparkSql job. |
| 354 | "queryFileUri": "A String", # The HCFS URI of the script that contains SQL queries. |
| 355 | "scriptVariables": { # [Optional] Mapping of query variable names to values (equivalent to the Spark SQL command: SET `name="value";`). |
| 356 | "a_key": "A String", |
| 357 | }, |
| 358 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 359 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 360 | "a_key": "A String", |
| 361 | }, |
| 362 | }, |
| 363 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to be added to the Spark CLASSPATH. |
| 364 | "A String", |
| 365 | ], |
| 366 | "queryList": { # A list of queries to run on a cluster. # A list of queries. |
| 367 | "queries": [ # [Required] The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of an Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } } |
| 368 | "A String", |
| 369 | ], |
| 370 | }, |
| 371 | "properties": { # [Optional] A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. |
| 372 | "a_key": "A String", |
| 373 | }, |
| 374 | }, |
| 375 | "pigJob": { # A Cloud Dataproc job for running Pig queries on YARN. # Job is a Pig job. |
| 376 | "queryFileUri": "A String", # The HCFS URI of the script that contains the Pig queries. |
| 377 | "scriptVariables": { # [Optional] Mapping of query variable names to values (equivalent to the Pig command: `name=[value]`). |
| 378 | "a_key": "A String", |
| 379 | }, |
| 380 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 381 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 382 | "a_key": "A String", |
| 383 | }, |
| 384 | }, |
| 385 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs. |
| 386 | "A String", |
| 387 | ], |
| 388 | "queryList": { # A list of queries to run on a cluster. # A list of queries. |
| 389 | "queries": [ # [Required] The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of an Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } } |
| 390 | "A String", |
| 391 | ], |
| 392 | }, |
| 393 | "continueOnFailure": True or False, # [Optional] Whether to continue executing queries if a query fails. The default value is `false`. Setting to `true` can be useful when executing independent parallel queries. |
| 394 | "properties": { # [Optional] A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code. |
| 395 | "a_key": "A String", |
| 396 | }, |
| 397 | }, |
| 398 | "driverOutputResourceUri": "A String", # [Output-only] A URI pointing to the location of the stdout of the job's driver program. |
| 399 | "driverControlFilesUri": "A String", # [Output-only] If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as `driver_output_uri`. |
| 400 | "sparkJob": { # A Cloud Dataproc job for running Spark applications on YARN. # Job is a Spark job. |
| 401 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks. |
| 402 | "A String", |
| 403 | ], |
| 404 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 405 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 406 | "a_key": "A String", |
| 407 | }, |
| 408 | }, |
| 409 | "args": [ # [Optional] The arguments to pass to the driver. Do not include arguments, such as `--conf`, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
| 410 | "A String", |
| 411 | ], |
| 412 | "fileUris": [ # [Optional] HCFS URIs of files to be copied to the working directory of Spark drivers and distributed tasks. Useful for naively parallel tasks. |
| 413 | "A String", |
| 414 | ], |
| 415 | "mainClass": "A String", # The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in `jar_file_uris`. |
| 416 | "archiveUris": [ # [Optional] HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip. |
| 417 | "A String", |
| 418 | ], |
| 419 | "mainJarFileUri": "A String", # The Hadoop Compatible Filesystem (HCFS) URI of the jar file that contains the main class. |
| 420 | "properties": { # [Optional] A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code. |
| 421 | "a_key": "A String", |
| 422 | }, |
| 423 | }, |
| 424 | "pysparkJob": { # A Cloud Dataproc job for running PySpark applications on YARN. # Job is a Pyspark job. |
| 425 | "mainPythonFileUri": "A String", # [Required] The Hadoop Compatible Filesystem (HCFS) URI of the main Python file to use as the driver. Must be a .py file. |
| 426 | "args": [ # [Optional] The arguments to pass to the driver. Do not include arguments, such as `--conf`, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
| 427 | "A String", |
| 428 | ], |
| 429 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 430 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 431 | "a_key": "A String", |
| 432 | }, |
| 433 | }, |
| 434 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks. |
| 435 | "A String", |
| 436 | ], |
| 437 | "fileUris": [ # [Optional] HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks. |
| 438 | "A String", |
| 439 | ], |
| 440 | "archiveUris": [ # [Optional] HCFS URIs of archives to be extracted in the working directory of .jar, .tar, .tar.gz, .tgz, and .zip. |
| 441 | "A String", |
| 442 | ], |
| 443 | "pythonFileUris": [ # [Optional] HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip. |
| 444 | "A String", |
| 445 | ], |
| 446 | "properties": { # [Optional] A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code. |
| 447 | "a_key": "A String", |
| 448 | }, |
| 449 | }, |
| 450 | "hiveJob": { # A Cloud Dataproc job for running Hive queries on YARN. # Job is a Hive job. |
| 451 | "queryFileUri": "A String", # The HCFS URI of the script that contains Hive queries. |
| 452 | "scriptVariables": { # [Optional] Mapping of query variable names to values (equivalent to the Hive command: `SET name="value";`). |
| 453 | "a_key": "A String", |
| 454 | }, |
| 455 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs. |
| 456 | "A String", |
| 457 | ], |
| 458 | "queryList": { # A list of queries to run on a cluster. # A list of queries. |
| 459 | "queries": [ # [Required] The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of an Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } } |
| 460 | "A String", |
| 461 | ], |
| 462 | }, |
| 463 | "continueOnFailure": True or False, # [Optional] Whether to continue executing queries if a query fails. The default value is `false`. Setting to `true` can be useful when executing independent parallel queries. |
| 464 | "properties": { # [Optional] A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code. |
| 465 | "a_key": "A String", |
| 466 | }, |
| 467 | }, |
| 468 | }</pre> |
| 469 | </div> |
| 470 | |
| 471 | <div class="method"> |
| 472 | <code class="details" id="list">list(projectId, region, pageSize=None, x__xgafv=None, jobStateMatcher=None, pageToken=None, clusterName=None)</code> |
| 473 | <pre>Lists regions/{region}/jobs in a project. |
| 474 | |
| 475 | Args: |
| 476 | projectId: string, [Required] The ID of the Google Cloud Platform project that the job belongs to. (required) |
| 477 | region: string, [Required] The Dataproc region in which to handle the request. (required) |
| 478 | pageSize: integer, [Optional] The number of results to return in each response. |
| 479 | x__xgafv: string, V1 error format. |
| 480 | jobStateMatcher: string, [Optional] Specifies enumerated categories of jobs to list. |
| 481 | pageToken: string, [Optional] The page token, returned by a previous call, to request the next page of results. |
| 482 | clusterName: string, [Optional] If set, the returned jobs list includes only jobs that were submitted to the named cluster. |
| 483 | |
| 484 | Returns: |
| 485 | An object of the form: |
| 486 | |
| 487 | { # A list of jobs in a project. |
| 488 | "nextPageToken": "A String", # [Optional] This token is included in the response if there are more results to fetch. To fetch additional results, provide this value as the `page_token` in a subsequent ListJobsRequest. |
| 489 | "jobs": [ # [Output-only] Jobs list. |
| 490 | { # A Cloud Dataproc job resource. |
| 491 | "status": { # Cloud Dataproc job status. # [Output-only] The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields. |
| 492 | "state": "A String", # [Required] A state message specifying the overall job state. |
| 493 | "stateStartTime": "A String", # [Output-only] The time when this state was entered. |
| 494 | "details": "A String", # [Optional] Job state details, such as an error description if the state is ERROR. |
| 495 | }, |
| 496 | "hadoopJob": { # A Cloud Dataproc job for running Hadoop MapReduce jobs on YARN. # Job is a Hadoop job. |
| 497 | "jarFileUris": [ # [Optional] Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks. |
| 498 | "A String", |
| 499 | ], |
| 500 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 501 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 502 | "a_key": "A String", |
| 503 | }, |
| 504 | }, |
| 505 | "args": [ # [Optional] The arguments to pass to the driver. Do not include arguments, such as `-libjars` or `-Dfoo=bar`, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
| 506 | "A String", |
| 507 | ], |
| 508 | "fileUris": [ # [Optional] HCFS URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks. |
| 509 | "A String", |
| 510 | ], |
| 511 | "mainClass": "A String", # The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in `jar_file_uris`. |
| 512 | "archiveUris": [ # [Optional] HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip. |
| 513 | "A String", |
| 514 | ], |
| 515 | "mainJarFileUri": "A String", # The Hadoop Compatible Filesystem (HCFS) URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar' |
| 516 | "properties": { # [Optional] A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code. |
| 517 | "a_key": "A String", |
| 518 | }, |
| 519 | }, |
| 520 | "statusHistory": [ # [Output-only] The previous job status. |
| 521 | { # Cloud Dataproc job status. |
| 522 | "state": "A String", # [Required] A state message specifying the overall job state. |
| 523 | "stateStartTime": "A String", # [Output-only] The time when this state was entered. |
| 524 | "details": "A String", # [Optional] Job state details, such as an error description if the state is ERROR. |
| 525 | }, |
| 526 | ], |
| 527 | "placement": { # Cloud Dataproc job config. # [Required] Job information, including how, when, and where to run the job. |
| 528 | "clusterName": "A String", # [Required] The name of the cluster where the job will be submitted. |
| 529 | "clusterUuid": "A String", # [Output-only] A cluster UUID generated by the Dataproc service when the job is submitted. |
| 530 | }, |
| 531 | "reference": { # Encapsulates the full scoping used to reference a job. # [Optional] The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id. |
| 532 | "projectId": "A String", # [Required] The ID of the Google Cloud Platform project that the job belongs to. |
| 533 | "jobId": "A String", # [Required] The job ID, which must be unique within the project. The job ID is generated by the server upon job submission or provided by the user as a means to perform retries without creating duplicate jobs. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 512 characters. |
| 534 | }, |
| 535 | "sparkSqlJob": { # A Cloud Dataproc job for running Spark SQL queries. # Job is a SparkSql job. |
| 536 | "queryFileUri": "A String", # The HCFS URI of the script that contains SQL queries. |
| 537 | "scriptVariables": { # [Optional] Mapping of query variable names to values (equivalent to the Spark SQL command: SET `name="value";`). |
| 538 | "a_key": "A String", |
| 539 | }, |
| 540 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 541 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 542 | "a_key": "A String", |
| 543 | }, |
| 544 | }, |
| 545 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to be added to the Spark CLASSPATH. |
| 546 | "A String", |
| 547 | ], |
| 548 | "queryList": { # A list of queries to run on a cluster. # A list of queries. |
| 549 | "queries": [ # [Required] The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of an Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } } |
| 550 | "A String", |
| 551 | ], |
| 552 | }, |
| 553 | "properties": { # [Optional] A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. |
| 554 | "a_key": "A String", |
| 555 | }, |
| 556 | }, |
| 557 | "pigJob": { # A Cloud Dataproc job for running Pig queries on YARN. # Job is a Pig job. |
| 558 | "queryFileUri": "A String", # The HCFS URI of the script that contains the Pig queries. |
| 559 | "scriptVariables": { # [Optional] Mapping of query variable names to values (equivalent to the Pig command: `name=[value]`). |
| 560 | "a_key": "A String", |
| 561 | }, |
| 562 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 563 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 564 | "a_key": "A String", |
| 565 | }, |
| 566 | }, |
| 567 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs. |
| 568 | "A String", |
| 569 | ], |
| 570 | "queryList": { # A list of queries to run on a cluster. # A list of queries. |
| 571 | "queries": [ # [Required] The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of an Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } } |
| 572 | "A String", |
| 573 | ], |
| 574 | }, |
| 575 | "continueOnFailure": True or False, # [Optional] Whether to continue executing queries if a query fails. The default value is `false`. Setting to `true` can be useful when executing independent parallel queries. |
| 576 | "properties": { # [Optional] A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code. |
| 577 | "a_key": "A String", |
| 578 | }, |
| 579 | }, |
| 580 | "driverOutputResourceUri": "A String", # [Output-only] A URI pointing to the location of the stdout of the job's driver program. |
| 581 | "driverControlFilesUri": "A String", # [Output-only] If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as `driver_output_uri`. |
| 582 | "sparkJob": { # A Cloud Dataproc job for running Spark applications on YARN. # Job is a Spark job. |
| 583 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks. |
| 584 | "A String", |
| 585 | ], |
| 586 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 587 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 588 | "a_key": "A String", |
| 589 | }, |
| 590 | }, |
| 591 | "args": [ # [Optional] The arguments to pass to the driver. Do not include arguments, such as `--conf`, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
| 592 | "A String", |
| 593 | ], |
| 594 | "fileUris": [ # [Optional] HCFS URIs of files to be copied to the working directory of Spark drivers and distributed tasks. Useful for naively parallel tasks. |
| 595 | "A String", |
| 596 | ], |
| 597 | "mainClass": "A String", # The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in `jar_file_uris`. |
| 598 | "archiveUris": [ # [Optional] HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip. |
| 599 | "A String", |
| 600 | ], |
| 601 | "mainJarFileUri": "A String", # The Hadoop Compatible Filesystem (HCFS) URI of the jar file that contains the main class. |
| 602 | "properties": { # [Optional] A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code. |
| 603 | "a_key": "A String", |
| 604 | }, |
| 605 | }, |
| 606 | "pysparkJob": { # A Cloud Dataproc job for running PySpark applications on YARN. # Job is a Pyspark job. |
| 607 | "mainPythonFileUri": "A String", # [Required] The Hadoop Compatible Filesystem (HCFS) URI of the main Python file to use as the driver. Must be a .py file. |
| 608 | "args": [ # [Optional] The arguments to pass to the driver. Do not include arguments, such as `--conf`, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
| 609 | "A String", |
| 610 | ], |
| 611 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 612 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 613 | "a_key": "A String", |
| 614 | }, |
| 615 | }, |
| 616 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks. |
| 617 | "A String", |
| 618 | ], |
| 619 | "fileUris": [ # [Optional] HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks. |
| 620 | "A String", |
| 621 | ], |
| 622 | "archiveUris": [ # [Optional] HCFS URIs of archives to be extracted in the working directory of .jar, .tar, .tar.gz, .tgz, and .zip. |
| 623 | "A String", |
| 624 | ], |
| 625 | "pythonFileUris": [ # [Optional] HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip. |
| 626 | "A String", |
| 627 | ], |
| 628 | "properties": { # [Optional] A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code. |
| 629 | "a_key": "A String", |
| 630 | }, |
| 631 | }, |
| 632 | "hiveJob": { # A Cloud Dataproc job for running Hive queries on YARN. # Job is a Hive job. |
| 633 | "queryFileUri": "A String", # The HCFS URI of the script that contains Hive queries. |
| 634 | "scriptVariables": { # [Optional] Mapping of query variable names to values (equivalent to the Hive command: `SET name="value";`). |
| 635 | "a_key": "A String", |
| 636 | }, |
| 637 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs. |
| 638 | "A String", |
| 639 | ], |
| 640 | "queryList": { # A list of queries to run on a cluster. # A list of queries. |
| 641 | "queries": [ # [Required] The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of an Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } } |
| 642 | "A String", |
| 643 | ], |
| 644 | }, |
| 645 | "continueOnFailure": True or False, # [Optional] Whether to continue executing queries if a query fails. The default value is `false`. Setting to `true` can be useful when executing independent parallel queries. |
| 646 | "properties": { # [Optional] A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code. |
| 647 | "a_key": "A String", |
| 648 | }, |
| 649 | }, |
| 650 | }, |
| 651 | ], |
| 652 | }</pre> |
| 653 | </div> |
| 654 | |
| 655 | <div class="method"> |
| 656 | <code class="details" id="list_next">list_next(previous_request, previous_response)</code> |
| 657 | <pre>Retrieves the next page of results. |
| 658 | |
| 659 | Args: |
| 660 | previous_request: The request for the previous page. (required) |
| 661 | previous_response: The response from the request for the previous page. (required) |
| 662 | |
| 663 | Returns: |
| 664 | A request object that you can call 'execute()' on to request the next |
| 665 | page. Returns None if there are no more items in the collection. |
| 666 | </pre> |
| 667 | </div> |
| 668 | |
| 669 | <div class="method"> |
| 670 | <code class="details" id="submit">submit(projectId, region, body, x__xgafv=None)</code> |
| 671 | <pre>Submits a job to a cluster. |
| 672 | |
| 673 | Args: |
| 674 | projectId: string, [Required] The ID of the Google Cloud Platform project that the job belongs to. (required) |
| 675 | region: string, [Required] The Dataproc region in which to handle the request. (required) |
| 676 | body: object, The request body. (required) |
| 677 | The object takes the form of: |
| 678 | |
| 679 | { # A request to submit a job. |
| 680 | "job": { # A Cloud Dataproc job resource. # [Required] The job resource. |
| 681 | "status": { # Cloud Dataproc job status. # [Output-only] The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields. |
| 682 | "state": "A String", # [Required] A state message specifying the overall job state. |
| 683 | "stateStartTime": "A String", # [Output-only] The time when this state was entered. |
| 684 | "details": "A String", # [Optional] Job state details, such as an error description if the state is ERROR. |
| 685 | }, |
| 686 | "hadoopJob": { # A Cloud Dataproc job for running Hadoop MapReduce jobs on YARN. # Job is a Hadoop job. |
| 687 | "jarFileUris": [ # [Optional] Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks. |
| 688 | "A String", |
| 689 | ], |
| 690 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 691 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 692 | "a_key": "A String", |
| 693 | }, |
| 694 | }, |
| 695 | "args": [ # [Optional] The arguments to pass to the driver. Do not include arguments, such as `-libjars` or `-Dfoo=bar`, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
| 696 | "A String", |
| 697 | ], |
| 698 | "fileUris": [ # [Optional] HCFS URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks. |
| 699 | "A String", |
| 700 | ], |
| 701 | "mainClass": "A String", # The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in `jar_file_uris`. |
| 702 | "archiveUris": [ # [Optional] HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip. |
| 703 | "A String", |
| 704 | ], |
| 705 | "mainJarFileUri": "A String", # The Hadoop Compatible Filesystem (HCFS) URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar' |
| 706 | "properties": { # [Optional] A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code. |
| 707 | "a_key": "A String", |
| 708 | }, |
| 709 | }, |
| 710 | "statusHistory": [ # [Output-only] The previous job status. |
| 711 | { # Cloud Dataproc job status. |
| 712 | "state": "A String", # [Required] A state message specifying the overall job state. |
| 713 | "stateStartTime": "A String", # [Output-only] The time when this state was entered. |
| 714 | "details": "A String", # [Optional] Job state details, such as an error description if the state is ERROR. |
| 715 | }, |
| 716 | ], |
| 717 | "placement": { # Cloud Dataproc job config. # [Required] Job information, including how, when, and where to run the job. |
| 718 | "clusterName": "A String", # [Required] The name of the cluster where the job will be submitted. |
| 719 | "clusterUuid": "A String", # [Output-only] A cluster UUID generated by the Dataproc service when the job is submitted. |
| 720 | }, |
| 721 | "reference": { # Encapsulates the full scoping used to reference a job. # [Optional] The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id. |
| 722 | "projectId": "A String", # [Required] The ID of the Google Cloud Platform project that the job belongs to. |
| 723 | "jobId": "A String", # [Required] The job ID, which must be unique within the project. The job ID is generated by the server upon job submission or provided by the user as a means to perform retries without creating duplicate jobs. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 512 characters. |
| 724 | }, |
| 725 | "sparkSqlJob": { # A Cloud Dataproc job for running Spark SQL queries. # Job is a SparkSql job. |
| 726 | "queryFileUri": "A String", # The HCFS URI of the script that contains SQL queries. |
| 727 | "scriptVariables": { # [Optional] Mapping of query variable names to values (equivalent to the Spark SQL command: SET `name="value";`). |
| 728 | "a_key": "A String", |
| 729 | }, |
| 730 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 731 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 732 | "a_key": "A String", |
| 733 | }, |
| 734 | }, |
| 735 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to be added to the Spark CLASSPATH. |
| 736 | "A String", |
| 737 | ], |
| 738 | "queryList": { # A list of queries to run on a cluster. # A list of queries. |
| 739 | "queries": [ # [Required] The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of an Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } } |
| 740 | "A String", |
| 741 | ], |
| 742 | }, |
| 743 | "properties": { # [Optional] A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. |
| 744 | "a_key": "A String", |
| 745 | }, |
| 746 | }, |
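#
# Illustrative sketch only (hypothetical values): a "sparkSqlJob" that runs a
# script with one substituted variable.
#
#   "sparkSqlJob": {
#     "queryFileUri": "gs://example-bucket/queries/report.sql",
#     "scriptVariables": {"report_date": "2016-02-19"},
#   },
#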
| 747 | "pigJob": { # A Cloud Dataproc job for running Pig queries on YARN. # Job is a Pig job. |
| 748 | "queryFileUri": "A String", # The HCFS URI of the script that contains the Pig queries. |
| 749 | "scriptVariables": { # [Optional] Mapping of query variable names to values (equivalent to the Pig command: `name=[value]`). |
| 750 | "a_key": "A String", |
| 751 | }, |
| 752 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 753 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 754 | "a_key": "A String", |
| 755 | }, |
| 756 | }, |
| 757 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs. |
| 758 | "A String", |
| 759 | ], |
| 760 | "queryList": { # A list of queries to run on a cluster. # A list of queries. |
| 761 | "queries": [ # [Required] The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
| 762 | "A String", |
| 763 | ], |
| 764 | }, |
| 765 | "continueOnFailure": True or False, # [Optional] Whether to continue executing queries if a query fails. The default value is `false`. Setting to `true` can be useful when executing independent parallel queries. |
| 766 | "properties": { # [Optional] A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code. |
| 767 | "a_key": "A String", |
| 768 | }, |
| 769 | }, |
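#
# Illustrative sketch only (hypothetical values): a "pigJob" that runs inline
# queries and keeps going if one of them fails.
#
#   "pigJob": {
#     "queryList": {"queries": ["fs -ls gs://example-bucket/input/"]},
#     "continueOnFailure": True,
#   },
#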
| 770 | "driverOutputResourceUri": "A String", # [Output-only] A URI pointing to the location of the stdout of the job's driver program. |
| 771 | "driverControlFilesUri": "A String", # [Output-only] If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as `driver_output_resource_uri`.
| 772 | "sparkJob": { # A Cloud Dataproc job for running Spark applications on YARN. # Job is a Spark job. |
| 773 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks. |
| 774 | "A String", |
| 775 | ], |
| 776 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 777 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 778 | "a_key": "A String", |
| 779 | }, |
| 780 | }, |
| 781 | "args": [ # [Optional] The arguments to pass to the driver. Do not include arguments, such as `--conf`, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
| 782 | "A String", |
| 783 | ], |
| 784 | "fileUris": [ # [Optional] HCFS URIs of files to be copied to the working directory of Spark drivers and distributed tasks. Useful for naively parallel tasks. |
| 785 | "A String", |
| 786 | ], |
| 787 | "mainClass": "A String", # The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in `jar_file_uris`. |
| 788 | "archiveUris": [ # [Optional] HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip. |
| 789 | "A String", |
| 790 | ], |
| 791 | "mainJarFileUri": "A String", # The Hadoop Compatible Filesystem (HCFS) URI of the jar file that contains the main class. |
| 792 | "properties": { # [Optional] A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code. |
| 793 | "a_key": "A String", |
| 794 | }, |
| 795 | }, |
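#
# Illustrative sketch only (hypothetical values): a "sparkJob" that names a
# main class and the jar that contains it.
#
#   "sparkJob": {
#     "mainClass": "com.example.SparkWordCount",
#     "jarFileUris": ["gs://example-bucket/jars/spark-wordcount.jar"],
#     "args": ["gs://example-bucket/input/"],
#   },
#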
| 796 | "pysparkJob": { # A Cloud Dataproc job for running PySpark applications on YARN. # Job is a Pyspark job. |
| 797 | "mainPythonFileUri": "A String", # [Required] The Hadoop Compatible Filesystem (HCFS) URI of the main Python file to use as the driver. Must be a .py file. |
| 798 | "args": [ # [Optional] The arguments to pass to the driver. Do not include arguments, such as `--conf`, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
| 799 | "A String", |
| 800 | ], |
| 801 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 802 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 803 | "a_key": "A String", |
| 804 | }, |
| 805 | }, |
| 806 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks. |
| 807 | "A String", |
| 808 | ], |
| 809 | "fileUris": [ # [Optional] HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks. |
| 810 | "A String", |
| 811 | ], |
| 812 | "archiveUris": [ # [Optional] HCFS URIs of archives to be extracted in the working directory of Python drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
| 813 | "A String", |
| 814 | ], |
| 815 | "pythonFileUris": [ # [Optional] HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip. |
| 816 | "A String", |
| 817 | ], |
| 818 | "properties": { # [Optional] A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code. |
| 819 | "a_key": "A String", |
| 820 | }, |
| 821 | }, |
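#
# Illustrative sketch only (hypothetical values): a "pysparkJob" with a main
# driver file and one supporting module.
#
#   "pysparkJob": {
#     "mainPythonFileUri": "gs://example-bucket/pyspark/wordcount.py",
#     "pythonFileUris": ["gs://example-bucket/pyspark/helpers.py"],
#   },
#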
| 822 | "hiveJob": { # A Cloud Dataproc job for running Hive queries on YARN. # Job is a Hive job. |
| 823 | "queryFileUri": "A String", # The HCFS URI of the script that contains Hive queries. |
| 824 | "scriptVariables": { # [Optional] Mapping of query variable names to values (equivalent to the Hive command: `SET name="value";`). |
| 825 | "a_key": "A String", |
| 826 | }, |
| 827 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs. |
| 828 | "A String", |
| 829 | ], |
| 830 | "queryList": { # A list of queries to run on a cluster. # A list of queries. |
| 831 | "queries": [ # [Required] The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
| 832 | "A String", |
| 833 | ], |
| 834 | }, |
| 835 | "continueOnFailure": True or False, # [Optional] Whether to continue executing queries if a query fails. The default value is `false`. Setting to `true` can be useful when executing independent parallel queries. |
| 836 | "properties": { # [Optional] A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code. |
| 837 | "a_key": "A String", |
| 838 | }, |
| 839 | }, |
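#
# Illustrative sketch only (hypothetical values): a "hiveJob" that runs one
# inline query with a substituted variable.
#
#   "hiveJob": {
#     "queryList": {"queries": ["SELECT COUNT(*) FROM ${table}"]},
#     "scriptVariables": {"table": "example_table"},
#   },
#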
| 840 | }, |
| 841 | } |
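#
# Illustrative sketch only (hypothetical values): taken together, a job object
# typically carries a placement, an optional reference, and exactly one of the
# job-type fields shown above, e.g.
#
#   {
#     "placement": {"clusterName": "example-cluster"},
#     "reference": {"projectId": "example-project", "jobId": "example-job-0001"},
#     "pysparkJob": {"mainPythonFileUri": "gs://example-bucket/pyspark/wordcount.py"},
#   }
#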
| 842 | |
| 843 | x__xgafv: string, V1 error format. |
| 844 | |
| 845 | Returns: |
| 846 | An object of the form: |
| 847 | |
| 848 | { # A Cloud Dataproc job resource. |
| 849 | "status": { # Cloud Dataproc job status. # [Output-only] The job status. Additional application-specific status information may be contained in the type_job and yarn_applications fields. |
| 850 | "state": "A String", # [Required] A state message specifying the overall job state. |
| 851 | "stateStartTime": "A String", # [Output-only] The time when this state was entered. |
| 852 | "details": "A String", # [Optional] Job state details, such as an error description if the state is ERROR. |
| 853 | }, |
| 854 | "hadoopJob": { # A Cloud Dataproc job for running Hadoop MapReduce jobs on YARN. # Job is a Hadoop job. |
| 855 | "jarFileUris": [ # [Optional] Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks. |
| 856 | "A String", |
| 857 | ], |
| 858 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 859 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 860 | "a_key": "A String", |
| 861 | }, |
| 862 | }, |
| 863 | "args": [ # [Optional] The arguments to pass to the driver. Do not include arguments, such as `-libjars` or `-Dfoo=bar`, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
| 864 | "A String", |
| 865 | ], |
| 866 | "fileUris": [ # [Optional] HCFS URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks. |
| 867 | "A String", |
| 868 | ], |
| 869 | "mainClass": "A String", # The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in `jar_file_uris`. |
| 870 | "archiveUris": [ # [Optional] HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip. |
| 871 | "A String", |
| 872 | ], |
| 873 | "mainJarFileUri": "A String", # The Hadoop Compatible Filesystem (HCFS) URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar' |
| 874 | "properties": { # [Optional] A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code. |
| 875 | "a_key": "A String", |
| 876 | }, |
| 877 | }, |
| 878 | "statusHistory": [ # [Output-only] The previous job statuses.
| 879 | { # Cloud Dataproc job status. |
| 880 | "state": "A String", # [Required] A state message specifying the overall job state. |
| 881 | "stateStartTime": "A String", # [Output-only] The time when this state was entered. |
| 882 | "details": "A String", # [Optional] Job state details, such as an error description if the state is ERROR. |
| 883 | }, |
| 884 | ], |
| 885 | "placement": { # Cloud Dataproc job config. # [Required] Job information, including how, when, and where to run the job. |
| 886 | "clusterName": "A String", # [Required] The name of the cluster where the job will be submitted. |
| 887 | "clusterUuid": "A String", # [Output-only] A cluster UUID generated by the Dataproc service when the job is submitted. |
| 888 | }, |
| 889 | "reference": { # Encapsulates the full scoping used to reference a job. # [Optional] The fully qualified reference to the job, which can be used to obtain the equivalent REST path of the job resource. If this property is not specified when a job is created, the server generates a job_id. |
| 890 | "projectId": "A String", # [Required] The ID of the Google Cloud Platform project that the job belongs to. |
| 891 | "jobId": "A String", # [Required] The job ID, which must be unique within the project. The job ID is generated by the server upon job submission or provided by the user as a means to perform retries without creating duplicate jobs. The ID must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), or hyphens (-). The maximum length is 512 characters. |
| 892 | }, |
| 893 | "sparkSqlJob": { # A Cloud Dataproc job for running Spark SQL queries. # Job is a SparkSql job. |
| 894 | "queryFileUri": "A String", # The HCFS URI of the script that contains SQL queries. |
| 895 | "scriptVariables": { # [Optional] Mapping of query variable names to values (equivalent to the Spark SQL command: SET `name="value";`). |
| 896 | "a_key": "A String", |
| 897 | }, |
| 898 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 899 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 900 | "a_key": "A String", |
| 901 | }, |
| 902 | }, |
| 903 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to be added to the Spark CLASSPATH. |
| 904 | "A String", |
| 905 | ], |
| 906 | "queryList": { # A list of queries to run on a cluster. # A list of queries. |
| 907 | "queries": [ # [Required] The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
| 908 | "A String", |
| 909 | ], |
| 910 | }, |
| 911 | "properties": { # [Optional] A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. |
| 912 | "a_key": "A String", |
| 913 | }, |
| 914 | }, |
| 915 | "pigJob": { # A Cloud Dataproc job for running Pig queries on YARN. # Job is a Pig job. |
| 916 | "queryFileUri": "A String", # The HCFS URI of the script that contains the Pig queries. |
| 917 | "scriptVariables": { # [Optional] Mapping of query variable names to values (equivalent to the Pig command: `name=[value]`). |
| 918 | "a_key": "A String", |
| 919 | }, |
| 920 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 921 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 922 | "a_key": "A String", |
| 923 | }, |
| 924 | }, |
| 925 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs. |
| 926 | "A String", |
| 927 | ], |
| 928 | "queryList": { # A list of queries to run on a cluster. # A list of queries. |
| 929 | "queries": [ # [Required] The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
| 930 | "A String", |
| 931 | ], |
| 932 | }, |
| 933 | "continueOnFailure": True or False, # [Optional] Whether to continue executing queries if a query fails. The default value is `false`. Setting to `true` can be useful when executing independent parallel queries. |
| 934 | "properties": { # [Optional] A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code. |
| 935 | "a_key": "A String", |
| 936 | }, |
| 937 | }, |
| 938 | "driverOutputResourceUri": "A String", # [Output-only] A URI pointing to the location of the stdout of the job's driver program. |
| 939 | "driverControlFilesUri": "A String", # [Output-only] If present, the location of miscellaneous control files which may be used as part of job setup and handling. If not present, control files may be placed in the same location as `driver_output_resource_uri`.
| 940 | "sparkJob": { # A Cloud Dataproc job for running Spark applications on YARN. # Job is a Spark job. |
| 941 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks. |
| 942 | "A String", |
| 943 | ], |
| 944 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 945 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 946 | "a_key": "A String", |
| 947 | }, |
| 948 | }, |
| 949 | "args": [ # [Optional] The arguments to pass to the driver. Do not include arguments, such as `--conf`, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
| 950 | "A String", |
| 951 | ], |
| 952 | "fileUris": [ # [Optional] HCFS URIs of files to be copied to the working directory of Spark drivers and distributed tasks. Useful for naively parallel tasks. |
| 953 | "A String", |
| 954 | ], |
| 955 | "mainClass": "A String", # The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in `jar_file_uris`. |
| 956 | "archiveUris": [ # [Optional] HCFS URIs of archives to be extracted in the working directory of Spark drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip. |
| 957 | "A String", |
| 958 | ], |
| 959 | "mainJarFileUri": "A String", # The Hadoop Compatible Filesystem (HCFS) URI of the jar file that contains the main class. |
| 960 | "properties": { # [Optional] A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code. |
| 961 | "a_key": "A String", |
| 962 | }, |
| 963 | }, |
| 964 | "pysparkJob": { # A Cloud Dataproc job for running PySpark applications on YARN. # Job is a Pyspark job. |
| 965 | "mainPythonFileUri": "A String", # [Required] The Hadoop Compatible Filesystem (HCFS) URI of the main Python file to use as the driver. Must be a .py file. |
| 966 | "args": [ # [Optional] The arguments to pass to the driver. Do not include arguments, such as `--conf`, that can be set as job properties, since a collision may occur that causes an incorrect job submission. |
| 967 | "A String", |
| 968 | ], |
| 969 | "loggingConfig": { # The runtime logging config of the job. # [Optional] The runtime log config for job execution. |
| 970 | "driverLogLevels": { # The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG' |
| 971 | "a_key": "A String", |
| 972 | }, |
| 973 | }, |
| 974 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks. |
| 975 | "A String", |
| 976 | ], |
| 977 | "fileUris": [ # [Optional] HCFS URIs of files to be copied to the working directory of Python drivers and distributed tasks. Useful for naively parallel tasks. |
| 978 | "A String", |
| 979 | ], |
| 980 | "archiveUris": [ # [Optional] HCFS URIs of archives to be extracted in the working directory of Python drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
| 981 | "A String", |
| 982 | ], |
| 983 | "pythonFileUris": [ # [Optional] HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip. |
| 984 | "A String", |
| 985 | ], |
| 986 | "properties": { # [Optional] A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code. |
| 987 | "a_key": "A String", |
| 988 | }, |
| 989 | }, |
| 990 | "hiveJob": { # A Cloud Dataproc job for running Hive queries on YARN. # Job is a Hive job. |
| 991 | "queryFileUri": "A String", # The HCFS URI of the script that contains Hive queries. |
| 992 | "scriptVariables": { # [Optional] Mapping of query variable names to values (equivalent to the Hive command: `SET name="value";`). |
| 993 | "a_key": "A String", |
| 994 | }, |
| 995 | "jarFileUris": [ # [Optional] HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs. |
| 996 | "A String", |
| 997 | ], |
| 998 | "queryList": { # A list of queries to run on a cluster. # A list of queries. |
| 999 | "queries": [ # [Required] The queries to execute. You do not need to terminate a query with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Cloud Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
| 1000 | "A String", |
| 1001 | ], |
| 1002 | }, |
| 1003 | "continueOnFailure": True or False, # [Optional] Whether to continue executing queries if a query fails. The default value is `false`. Setting to `true` can be useful when executing independent parallel queries. |
| 1004 | "properties": { # [Optional] A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Cloud Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code. |
| 1005 | "a_key": "A String", |
| 1006 | }, |
| 1007 | }, |
| 1008 | }</pre> |
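<p>The following is an illustrative sketch (not generated documentation) of calling this method with the google-api-python-client. It assumes these are the details of the <code>submit</code> method listed above and that application-default credentials are available; the project, region, cluster, and file names are hypothetical placeholders.</p>
<pre>
from googleapiclient import discovery

# Build a client for the Cloud Dataproc v1 API (uses application-default credentials).
dataproc = discovery.build('dataproc', 'v1')

# Request body shaped like the schema above; all values here are hypothetical.
body = {
    'job': {
        'placement': {'clusterName': 'example-cluster'},
        'pysparkJob': {'mainPythonFileUri': 'gs://example-bucket/pyspark/wordcount.py'},
    },
}

job = dataproc.projects().regions().jobs().submit(
    projectId='example-project', region='global', body=body).execute()

# The response is a Job resource; its reference holds the server-assigned or
# user-supplied job ID.
print(job['reference']['jobId'])
</pre>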
| 1009 | </div> |
| 1010 | |
| 1011 | </body></html> |