<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="ml_v1.html">AI Platform Training &amp; Prediction API</a> . <a href="ml_v1.projects.html">projects</a> . <a href="ml_v1.projects.jobs.html">jobs</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#cancel">cancel(name, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Cancels a running job.</p>
<p class="toc_element">
  <code><a href="#create">create(parent, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a training or a batch prediction job.</p>
<p class="toc_element">
  <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Describes a job.</p>
<p class="toc_element">
  <code><a href="#getIamPolicy">getIamPolicy(resource, options_requestedPolicyVersion=None, x__xgafv=None)</a></code></p>
<p class="firstline">Gets the access control policy for a resource.</p>
<p class="toc_element">
  <code><a href="#list">list(parent, pageToken=None, pageSize=None, filter=None, x__xgafv=None)</a></code></p>
<p class="firstline">Lists the jobs in the project.</p>
<p class="toc_element">
  <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<p class="toc_element">
  <code><a href="#patch">patch(name, body=None, updateMask=None, x__xgafv=None)</a></code></p>
<p class="firstline">Updates a specific job resource.</p>
<p class="toc_element">
  <code><a href="#setIamPolicy">setIamPolicy(resource, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Sets the access control policy on the specified resource. Replaces any</p>
<p class="toc_element">
  <code><a href="#testIamPermissions">testIamPermissions(resource, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Returns permissions that a caller has on the specified resource.</p>
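The `list()`/`list_next()` pair above follows the standard discovery-client paging pattern: each response may carry a `nextPageToken` that must be fed back into the next request. A minimal sketch of that loop (hypothetical, offline illustration: `fetch_page`, the fake page dicts, and the job IDs are stand-ins for executing real `jobs().list`/`list_next` requests, not values from this API):

```python
# Accumulate results across pages, following nextPageToken until it is absent.
# In the real client, `fetch_page` would execute jobs().list(...) and the
# loop would be replaced by list()/list_next() on request/response objects.

def collect_jobs(fetch_page):
    """Collect job entries from every page of a paginated list response."""
    jobs, token = [], None
    while True:
        page = fetch_page(token)          # token=None requests the first page
        jobs.extend(page.get("jobs", []))
        token = page.get("nextPageToken") # absent/empty token means last page
        if not token:
            return jobs

# Fake two-page response sequence for illustration only:
pages = {
    None: {"jobs": [{"jobId": "job_a"}], "nextPageToken": "t1"},
    "t1": {"jobs": [{"jobId": "job_b"}]},
}
all_jobs = collect_jobs(lambda token: pages[token])
```

With the real client, `list_next(previous_request, previous_response)` builds the follow-up request for you and returns `None` when no pages remain, which plays the role of the empty-token check here.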
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="cancel">cancel(name, body=None, x__xgafv=None)</code>
  <pre>Cancels a running job.

Args:
  name: string, Required. The name of the job to cancel. (required)
  body: object, The request body.
    The object takes the form of:

{ # Request message for the CancelJob method.
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated
      # empty messages in your APIs. A typical example is to use it as the request
      # or the response type of an API method. For instance:
      #
      #     service Foo {
      #       rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
      #     }
      #
      # The JSON representation for `Empty` is empty JSON object `{}`.
  }</pre>
</div>
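As the signature above shows, `cancel` takes a fully qualified job resource name and an empty request body. A hedged sketch of assembling such a call (the project and job IDs are hypothetical placeholders; the commented-out lines assume the `google-api-python-client` library and valid credentials, which this offline snippet does not exercise):

```python
# Build the `name` argument for cancel(): the job resource name has the form
# projects/{project_id}/jobs/{job_id}.

def job_name(project_id, job_id):
    """Return the fully qualified resource name expected by cancel()."""
    return "projects/{}/jobs/{}".format(project_id, job_id)

# The CancelJob request body is an empty JSON object.
cancel_body = {}

name = job_name("my-project", "my_training_job")  # placeholder IDs

# With an authorized discovery client, the call itself would look like:
#   from googleapiclient import discovery
#   ml = discovery.build("ml", "v1")
#   ml.projects().jobs().cancel(name=name, body=cancel_body).execute()
```

On success the method returns the empty `{}` message documented above; cancelling a job that has already finished results in an error from the service.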
136
137<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -0700138 <code class="details" id="create">create(parent, body=None, x__xgafv=None)</code>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400139 <pre>Creates a training or a batch prediction job.
140
141Args:
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700142 parent: string, Required. The project name. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -0700143 body: object, The request body.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400144 The object takes the form of:
145
146{ # Represents a training or prediction job.
Bu Sun Kim65020912020-05-20 12:08:20 -0700147 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
Dan O'Mearadd494642020-05-01 07:42:23 -0700148 # prevent simultaneous updates of a job from overwriting each other.
149 # It is strongly suggested that systems make use of the `etag` in the
150 # read-modify-write cycle to perform job updates in order to avoid race
151 # conditions: An `etag` is returned in the response to `GetJob`, and
152 # systems are expected to put that etag in the request to `UpdateJob` to
153 # ensure that their change will be applied to the same version of the job.
Bu Sun Kim65020912020-05-20 12:08:20 -0700154 &quot;trainingInput&quot;: { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
Dan O'Mearadd494642020-05-01 07:42:23 -0700155 # to submit your training job, you can specify the input parameters as
156 # command-line arguments and/or in a YAML configuration file referenced from
157 # the --config command-line argument. For details, see the guide to [submitting
158 # a training job](/ai-platform/training/docs/training-jobs).
Bu Sun Kim65020912020-05-20 12:08:20 -0700159 &quot;parameterServerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
160 #
161 # You should only set `parameterServerConfig.acceleratorConfig` if
162 # `parameterServerType` is set to a Compute Engine machine type. [Learn
163 # about restrictions on accelerator configurations for
164 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
165 #
166 # Set `parameterServerConfig.imageUri` only if you build a custom image for
167 # your parameter server. If `parameterServerConfig.imageUri` has not been
168 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
169 # containers](/ai-platform/training/docs/distributed-training-containers).
170 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
171 # the one used in the custom container. This field is required if the replica
172 # is a TPU worker that uses a custom container. Otherwise, do not specify
173 # this field. This must be a [runtime version that currently supports
174 # training with
175 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
176 #
177 # Note that the version of TensorFlow included in a runtime version may
178 # differ from the numbering of the runtime version itself, because it may
179 # have a different [patch
180 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
181 # In this field, you must specify the runtime version (TensorFlow minor
182 # version). For example, if your custom container runs TensorFlow `1.x.y`,
183 # specify `1.x`.
184 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
185 # If provided, it will override default ENTRYPOINT of the docker image.
186 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
187 # It cannot be set if custom container image is
188 # not provided.
189 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
190 # both cannot be set at the same time.
191 &quot;A String&quot;,
192 ],
193 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
194 # Registry. Learn more about [configuring custom
195 # containers](/ai-platform/training/docs/distributed-training-containers).
196 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
197 # The following rules apply for container_command and container_args:
198 # - If you do not supply command or args:
199 # The defaults defined in the Docker image are used.
200 # - If you supply a command but no args:
201 # The default EntryPoint and the default Cmd defined in the Docker image
202 # are ignored. Your command is run without any arguments.
203 # - If you supply only args:
204 # The default Entrypoint defined in the Docker image is run with the args
205 # that you supplied.
206 # - If you supply a command and args:
207 # The default Entrypoint and the default Cmd defined in the Docker image
208 # are ignored. Your command is run with your args.
209 # It cannot be set if custom container image is
210 # not provided.
211 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
212 # both cannot be set at the same time.
213 &quot;A String&quot;,
214 ],
215 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
216 # [Learn about restrictions on accelerator configurations for
217 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
218 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
219 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
220 # [accelerators for online
221 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
222 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
223 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
224 },
225 },
226 &quot;encryptionConfig&quot;: { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
227 # protect resources created by a training job, instead of using Google&#x27;s
228 # default encryption. If this is set, then all resources created by the
229 # training job will be encrypted with the customer-managed encryption key
230 # that you specify.
231 #
232 # [Learn how and when to use CMEK with AI Platform
233 # Training](/ai-platform/training/docs/cmek).
234 # a resource.
235 &quot;kmsKeyName&quot;: &quot;A String&quot;, # The Cloud KMS resource identifier of the customer-managed encryption key
236 # used to protect a resource, such as a training job. It has the following
237 # format:
238 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
239 },
240 &quot;hyperparameters&quot;: { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
241 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # Optional. The TensorFlow summary tag name to use for optimizing trials. For
242 # current versions of TensorFlow, this tag name should exactly match what is
243 # shown in TensorBoard, including all scopes. For versions of TensorFlow
244 # prior to 0.12, this should be only the tag passed to tf.Summary.
245 # By default, &quot;training/hptuning/metric&quot; will be used.
246 &quot;params&quot;: [ # Required. The set of parameters to tune.
247 { # Represents a single hyperparameter to optimize.
248 &quot;type&quot;: &quot;A String&quot;, # Required. The type of the parameter.
249 &quot;categoricalValues&quot;: [ # Required if type is `CATEGORICAL`. The list of possible categories.
250 &quot;A String&quot;,
251 ],
252 &quot;parameterName&quot;: &quot;A String&quot;, # Required. The parameter name must be unique amongst all ParameterConfigs in
253 # a HyperparameterSpec message. E.g., &quot;learning_rate&quot;.
254 &quot;minValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
255 # should be unset if type is `CATEGORICAL`. This value should be integers if
256 # type is INTEGER.
257 &quot;discreteValues&quot;: [ # Required if type is `DISCRETE`.
258 # A list of feasible points.
259 # The list should be in strictly increasing order. For instance, this
260 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
261 # should not contain more than 1,000 values.
262 3.14,
263 ],
264 &quot;scaleType&quot;: &quot;A String&quot;, # Optional. How the parameter should be scaled to the hypercube.
265 # Leave unset for categorical parameters.
266 # Some kind of scaling is strongly recommended for real or integral
267 # parameters (e.g., `UNIT_LINEAR_SCALE`).
268 &quot;maxValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
269 # should be unset if type is `CATEGORICAL`. This value should be integers if
270 # type is `INTEGER`.
271 },
272 ],
273 &quot;enableTrialEarlyStopping&quot;: True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
274 # early stopping.
275 &quot;resumePreviousJobId&quot;: &quot;A String&quot;, # Optional. The prior hyperparameter tuning job id that users hope to
276 # continue with. The job id will be used to find the corresponding vizier
277 # study guid and resume the study.
278 &quot;maxParallelTrials&quot;: 42, # Optional. The number of training trials to run concurrently.
279 # You can reduce the time it takes to perform hyperparameter tuning by adding
280 # trials in parallel. However, each trail only benefits from the information
281 # gained in completed trials. That means that a trial does not get access to
282 # the results of trials running at the same time, which could reduce the
283 # quality of the overall optimization.
284 #
285 # Each trial will use the same scale tier and machine types.
286 #
287 # Defaults to one.
288 &quot;maxFailedTrials&quot;: 42, # Optional. The number of failed trials that need to be seen before failing
289 # the hyperparameter tuning job. You can specify this field to override the
290 # default failing criteria for AI Platform hyperparameter tuning jobs.
291 #
292 # Defaults to zero, which means the service decides when a hyperparameter
293 # job should fail.
294 &quot;goal&quot;: &quot;A String&quot;, # Required. The type of goal to use for tuning. Available types are
295 # `MAXIMIZE` and `MINIMIZE`.
296 #
297 # Defaults to `MAXIMIZE`.
298 &quot;maxTrials&quot;: 42, # Optional. How many training trials should be attempted to optimize
299 # the specified hyperparameters.
300 #
301 # Defaults to one.
302 &quot;algorithm&quot;: &quot;A String&quot;, # Optional. The search algorithm specified for the hyperparameter
303 # tuning job.
304 # Uses the default AI Platform hyperparameter tuning
305 # algorithm if unspecified.
306 },
307 &quot;workerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
308 #
309 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
310 # to a Compute Engine machine type. [Learn about restrictions on accelerator
311 # configurations for
312 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
313 #
314 # Set `workerConfig.imageUri` only if you build a custom image for your
315 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
316 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
317 # containers](/ai-platform/training/docs/distributed-training-containers).
318 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
319 # the one used in the custom container. This field is required if the replica
320 # is a TPU worker that uses a custom container. Otherwise, do not specify
321 # this field. This must be a [runtime version that currently supports
322 # training with
323 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
324 #
325 # Note that the version of TensorFlow included in a runtime version may
326 # differ from the numbering of the runtime version itself, because it may
327 # have a different [patch
328 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
329 # In this field, you must specify the runtime version (TensorFlow minor
330 # version). For example, if your custom container runs TensorFlow `1.x.y`,
331 # specify `1.x`.
332 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
333 # If provided, it will override default ENTRYPOINT of the docker image.
334 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
335 # It cannot be set if custom container image is
336 # not provided.
337 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
338 # both cannot be set at the same time.
339 &quot;A String&quot;,
340 ],
341 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
342 # Registry. Learn more about [configuring custom
343 # containers](/ai-platform/training/docs/distributed-training-containers).
344 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
345 # The following rules apply for container_command and container_args:
346 # - If you do not supply command or args:
347 # The defaults defined in the Docker image are used.
348 # - If you supply a command but no args:
349 # The default EntryPoint and the default Cmd defined in the Docker image
350 # are ignored. Your command is run without any arguments.
351 # - If you supply only args:
352 # The default Entrypoint defined in the Docker image is run with the args
353 # that you supplied.
354 # - If you supply a command and args:
355 # The default Entrypoint and the default Cmd defined in the Docker image
356 # are ignored. Your command is run with your args.
357 # It cannot be set if custom container image is
358 # not provided.
359 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
360 # both cannot be set at the same time.
361 &quot;A String&quot;,
362 ],
363 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
364 # [Learn about restrictions on accelerator configurations for
365 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
366 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
367 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
368 # [accelerators for online
369 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
370 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
371 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
372 },
373 },
374 &quot;parameterServerCount&quot;: &quot;A String&quot;, # Optional. The number of parameter server replicas to use for the training
375 # job. Each replica in the cluster will be of the type specified in
376 # `parameter_server_type`.
377 #
378 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
379 # set this value, you must also set `parameter_server_type`.
380 #
381 # The default value is zero.
382 &quot;packageUris&quot;: [ # Required. The Google Cloud Storage location of the packages with
383 # the training program and any additional dependencies.
384 # The maximum number of package URIs is 100.
385 &quot;A String&quot;,
386 ],
387 &quot;evaluatorCount&quot;: &quot;A String&quot;, # Optional. The number of evaluator replicas to use for the training job.
388 # Each replica in the cluster will be of the type specified in
389 # `evaluator_type`.
390 #
391 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
392 # set this value, you must also set `evaluator_type`.
393 #
394 # The default value is zero.
395 &quot;masterType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
396 # job&#x27;s master worker. You must specify this field when `scaleTier` is set to
Dan O'Mearadd494642020-05-01 07:42:23 -0700397 # `CUSTOM`.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400398 #
Dan O'Mearadd494642020-05-01 07:42:23 -0700399 # You can use certain Compute Engine machine types directly in this field.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400400 # The following types are supported:
401 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700402 # - `n1-standard-4`
403 # - `n1-standard-8`
404 # - `n1-standard-16`
405 # - `n1-standard-32`
406 # - `n1-standard-64`
407 # - `n1-standard-96`
408 # - `n1-highmem-2`
409 # - `n1-highmem-4`
410 # - `n1-highmem-8`
411 # - `n1-highmem-16`
412 # - `n1-highmem-32`
413 # - `n1-highmem-64`
414 # - `n1-highmem-96`
415 # - `n1-highcpu-16`
416 # - `n1-highcpu-32`
417 # - `n1-highcpu-64`
418 # - `n1-highcpu-96`
419 #
Dan O'Mearadd494642020-05-01 07:42:23 -0700420 # Learn more about [using Compute Engine machine
421 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700422 #
Dan O'Mearadd494642020-05-01 07:42:23 -0700423 # Alternatively, you can use the following legacy machine types:
424 #
425 # - `standard`
426 # - `large_model`
427 # - `complex_model_s`
428 # - `complex_model_m`
429 # - `complex_model_l`
430 # - `standard_gpu`
431 # - `complex_model_m_gpu`
432 # - `complex_model_l_gpu`
433 # - `standard_p100`
434 # - `complex_model_m_p100`
435 # - `standard_v100`
436 # - `large_model_v100`
437 # - `complex_model_m_v100`
438 # - `complex_model_l_v100`
439 #
440 # Learn more about [using legacy machine
441 # types](/ml-engine/docs/machine-types#legacy-machine-types).
442 #
443 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
444 # field. Learn more about the [special configuration options for training
445 # with
446 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
Bu Sun Kim65020912020-05-20 12:08:20 -0700447 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for training. You must
448 # either specify this field or specify `masterConfig.imageUri`.
449 #
450 # For more information, see the [runtime version
451 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
452 # manage runtime versions](/ai-platform/training/docs/versioning).
453 &quot;evaluatorType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
454 # job&#x27;s evaluator nodes.
455 #
456 # The supported values are the same as those described in the entry for
457 # `masterType`.
458 #
459 # This value must be consistent with the category of machine type that
460 # `masterType` uses. In other words, both must be Compute Engine machine
461 # types or both must be legacy machine types.
462 #
463 # This value must be present when `scaleTier` is set to `CUSTOM` and
464 # `evaluatorCount` is greater than zero.
465 &quot;region&quot;: &quot;A String&quot;, # Required. The region to run the training job in. See the [available
466 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
467 &quot;workerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
468 # job&#x27;s worker nodes.
469 #
470 # The supported values are the same as those described in the entry for
471 # `masterType`.
472 #
473 # This value must be consistent with the category of machine type that
474 # `masterType` uses. In other words, both must be Compute Engine machine
475 # types or both must be legacy machine types.
476 #
477 # If you use `cloud_tpu` for this value, see special instructions for
478 # [configuring a custom TPU
479 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
480 #
481 # This value must be present when `scaleTier` is set to `CUSTOM` and
482 # `workerCount` is greater than zero.
483 &quot;parameterServerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
484 # job&#x27;s parameter server.
485 #
486 # The supported values are the same as those described in the entry for
487 # `master_type`.
488 #
489 # This value must be consistent with the category of machine type that
490 # `masterType` uses. In other words, both must be Compute Engine machine
491 # types or both must be legacy machine types.
492 #
493 # This value must be present when `scaleTier` is set to `CUSTOM` and
494 # `parameter_server_count` is greater than zero.
495 &quot;masterConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
496 #
497 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
498 # to a Compute Engine machine type. Learn about [restrictions on accelerator
499 # configurations for
500 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
501 #
502 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
503 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
504 # about [configuring custom
505 # containers](/ai-platform/training/docs/distributed-training-containers).
506 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
507 # the one used in the custom container. This field is required if the replica
508 # is a TPU worker that uses a custom container. Otherwise, do not specify
509 # this field. This must be a [runtime version that currently supports
510 # training with
511 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
512 #
513 # Note that the version of TensorFlow included in a runtime version may
514 # differ from the numbering of the runtime version itself, because it may
515 # have a different [patch
516 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
517 # In this field, you must specify the runtime version (TensorFlow minor
518 # version). For example, if your custom container runs TensorFlow `1.x.y`,
519 # specify `1.x`.
520 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
521 # If provided, it will override default ENTRYPOINT of the docker image.
522 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
523 # It cannot be set if custom container image is
524 # not provided.
525 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
526 # both cannot be set at the same time.
527 &quot;A String&quot;,
528 ],
529 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
530 # Registry. Learn more about [configuring custom
531 # containers](/ai-platform/training/docs/distributed-training-containers).
532 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
533 # The following rules apply for container_command and container_args:
534 # - If you do not supply command or args:
535 # The defaults defined in the Docker image are used.
536 # - If you supply a command but no args:
537 # The default EntryPoint and the default Cmd defined in the Docker image
538 # are ignored. Your command is run without any arguments.
539 # - If you supply only args:
540 # The default Entrypoint defined in the Docker image is run with the args
541 # that you supplied.
542 # - If you supply a command and args:
543 # The default Entrypoint and the default Cmd defined in the Docker image
544 # are ignored. Your command is run with your args.
545 # It cannot be set if custom container image is
546 # not provided.
547 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
548 # both cannot be set at the same time.
549 &quot;A String&quot;,
550 ],
551 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
552 # [Learn about restrictions on accelerator configurations for
553 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
554 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
555 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
556 # [accelerators for online
557 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
558 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
559 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
560 },
561 },
562 &quot;scaleTier&quot;: &quot;A String&quot;, # Required. Specifies the machine types, the number of replicas for workers
563 # and parameter servers.
564 &quot;jobDir&quot;: &quot;A String&quot;, # Optional. A Google Cloud Storage path in which to store training outputs
Dan O'Mearadd494642020-05-01 07:42:23 -0700565 # and other data needed for training. This path is passed to your TensorFlow
Bu Sun Kim65020912020-05-20 12:08:20 -0700566 # program as the &#x27;--job-dir&#x27; command-line argument. The benefit of specifying
Dan O'Mearadd494642020-05-01 07:42:23 -0700567 # this field is that Cloud ML validates the path for use in training.
Bu Sun Kim65020912020-05-20 12:08:20 -0700568 &quot;pythonVersion&quot;: &quot;A String&quot;, # Optional. The version of Python used in training. You must either specify
569 # this field or specify `masterConfig.imageUri`.
570 #
571 # The following Python versions are available:
572 #
573 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
574 # later.
575 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
576 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
577 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
578 # earlier.
579 #
580 # Read more about the Python versions available for [each runtime
581 # version](/ml-engine/docs/runtime-version-list).
582 &quot;network&quot;: &quot;A String&quot;, # Optional. The full name of the Google Compute Engine
583 # [network](/compute/docs/networks-and-firewalls#networks) to which the Job
584 # is peered. For example, projects/12345/global/networks/myVPC. Format is of
585 # the form projects/{project}/global/networks/{network}. Where {project} is a
586 # project number, as in &#x27;12345&#x27;, and {network} is network name.&quot;.
587 #
588 # Private services access must already be configured for the network. If left
589 # unspecified, the Job is not peered with any network. Learn more about
590 # connecting a Job to a user network over
591 # private IP.
592 &quot;scheduling&quot;: { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
593 &quot;maxWaitTime&quot;: &quot;A String&quot;,
594 &quot;maxRunningTime&quot;: &quot;A String&quot;, # Optional. The maximum job running time, expressed in seconds. The field can
595 # contain up to nine fractional digits, terminated by `s`. If not specified,
596 # this field defaults to `604800s` (seven days).
597 #
598 # If the training job is still running after this duration, AI Platform
599 # Training cancels it.
600 #
601 # For example, if you want to ensure your job runs for no more than 2 hours,
602 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
603 # minute).
604 #
605 # If you submit your training job using the `gcloud` tool, you can [provide
606 # this field in a `config.yaml`
607 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
608 # For example:
609 #
610 # ```yaml
611 # trainingInput:
612 # ...
613 # scheduling:
614 # maxRunningTime: 7200s
615 # ...
616 # ```
617 },
618 &quot;evaluatorConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
619 #
620 # You should only set `evaluatorConfig.acceleratorConfig` if
621 # `evaluatorType` is set to a Compute Engine machine type. [Learn
622 # about restrictions on accelerator configurations for
623 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
624 #
625 # Set `evaluatorConfig.imageUri` only if you build a custom image for
626 # your evaluator. If `evaluatorConfig.imageUri` has not been
627 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
628 # containers](/ai-platform/training/docs/distributed-training-containers).
629 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
630 # the one used in the custom container. This field is required if the replica
631 # is a TPU worker that uses a custom container. Otherwise, do not specify
632 # this field. This must be a [runtime version that currently supports
633 # training with
634 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
635 #
636 # Note that the version of TensorFlow included in a runtime version may
637 # differ from the numbering of the runtime version itself, because it may
638 # have a different [patch
639 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
640 # In this field, you must specify the runtime version (TensorFlow minor
641 # version). For example, if your custom container runs TensorFlow `1.x.y`,
642 # specify `1.x`.
643 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
644 # If provided, it overrides the default ENTRYPOINT of the Docker image.
645 # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
646 # It cannot be set if a custom container image is
647 # not provided.
648 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
649 # both cannot be set at the same time.
650 &quot;A String&quot;,
651 ],
652 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
653 # Registry. Learn more about [configuring custom
654 # containers](/ai-platform/training/docs/distributed-training-containers).
655 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
656 # The following rules apply for container_command and container_args:
657 # - If you do not supply command or args:
658 # The defaults defined in the Docker image are used.
659 # - If you supply a command but no args:
660 # The default Entrypoint and the default Cmd defined in the Docker image
661 # are ignored. Your command is run without any arguments.
662 # - If you supply only args:
663 # The default Entrypoint defined in the Docker image is run with the args
664 # that you supplied.
665 # - If you supply a command and args:
666 # The default Entrypoint and the default Cmd defined in the Docker image
667 # are ignored. Your command is run with your args.
668 # It cannot be set if a custom container image is
669 # not provided.
670 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
671 # both cannot be set at the same time.
672 &quot;A String&quot;,
673 ],
674 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
675 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
676 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
677 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
678 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
679 # [accelerators for online
680 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
681 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
682 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
683 },
684 },
685 &quot;useChiefInTfConfig&quot;: True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
686 # variable when training with a custom container. Defaults to `false`. [Learn
687 # more about this
688 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
689 #
690 # This field has no effect for training jobs that don&#x27;t use a custom
691 # container.
692 &quot;workerCount&quot;: &quot;A String&quot;, # Optional. The number of worker replicas to use for the training job. Each
693 # replica in the cluster will be of the type specified in `worker_type`.
694 #
695 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
696 # set this value, you must also set `worker_type`.
697 #
698 # The default value is zero.
699 &quot;pythonModule&quot;: &quot;A String&quot;, # Required. The Python module name to run after installing the packages.
700 &quot;args&quot;: [ # Optional. Command-line arguments passed to the training application when it
701 # starts. If your job uses a custom container, then the arguments are passed
702 # to the container&#x27;s &lt;a class=&quot;external&quot; target=&quot;_blank&quot;
703 # href=&quot;https://docs.docker.com/engine/reference/builder/#entrypoint&quot;&gt;
704 # `ENTRYPOINT`&lt;/a&gt; command.
705 &quot;A String&quot;,
706 ],
707 },
708 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of a job.
709 &quot;jobId&quot;: &quot;A String&quot;, # Required. The user-specified id of the job.
710 &quot;endTime&quot;: &quot;A String&quot;, # Output only. When the job processing was completed.
711 &quot;startTime&quot;: &quot;A String&quot;, # Output only. When the job processing was started.
712 &quot;predictionOutput&quot;: { # Represents results of a prediction job. # The current prediction job result.
713 &quot;errorCount&quot;: &quot;A String&quot;, # The number of data instances which resulted in errors.
714 &quot;outputPath&quot;: &quot;A String&quot;, # The output Google Cloud Storage location provided at the job creation time.
715 &quot;nodeHours&quot;: 3.14, # Node hours used by the batch prediction job.
716 &quot;predictionCount&quot;: &quot;A String&quot;, # The number of generated predictions.
717 },
718 &quot;trainingOutput&quot;: { # Represents results of a training job. Output only. # The current training job result.
719 &quot;trials&quot;: [ # Results for individual Hyperparameter trials.
720 # Only set for hyperparameter tuning jobs.
721 { # Represents the result of a single hyperparameter tuning trial from a
722 # training job. The TrainingOutput object that is returned on successful
723 # completion of a training job with hyperparameter tuning includes a list
724 # of HyperparameterOutput objects, one for each successful trial.
725 &quot;allMetrics&quot;: [ # All recorded objective metrics for this trial. This field is not currently
726 # populated.
727 { # An observed value of a metric.
728 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
729 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
730 },
731 ],
732 &quot;hyperparameters&quot;: { # The hyperparameters given to this trial.
733 &quot;a_key&quot;: &quot;A String&quot;,
734 },
735 &quot;trialId&quot;: &quot;A String&quot;, # The trial id for these results.
736 &quot;endTime&quot;: &quot;A String&quot;, # Output only. End time for the trial.
737 &quot;isTrialStoppedEarly&quot;: True or False, # True if the trial is stopped early.
738 &quot;startTime&quot;: &quot;A String&quot;, # Output only. Start time for the trial.
739 &quot;finalMetric&quot;: { # An observed value of a metric. # The final objective metric seen for this trial.
740 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
741 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
742 },
743 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
744 # Only set for trials of built-in algorithms jobs that have succeeded.
745 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
746 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
747 # saves the trained model. Only set for successful jobs that don&#x27;t use
748 # hyperparameter tuning.
749 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
750 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
751 # trained.
752 },
753 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of the trial.
754 },
755 ],
756 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # The TensorFlow summary tag name used for optimizing hyperparameter tuning
757 # trials. See
758 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
759 # for more information. Only set for hyperparameter tuning jobs.
760 &quot;completedTrialCount&quot;: &quot;A String&quot;, # The number of hyperparameter tuning trials that completed successfully.
761 # Only set for hyperparameter tuning jobs.
762 &quot;isHyperparameterTuningJob&quot;: True or False, # Whether this job is a hyperparameter tuning job.
763 &quot;consumedMLUnits&quot;: 3.14, # The amount of ML units consumed by the job.
764 &quot;isBuiltInAlgorithmJob&quot;: True or False, # Whether this job is a built-in Algorithm job.
765 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
766 # Only set for built-in algorithms jobs.
767 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
768 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
769 # saves the trained model. Only set for successful jobs that don&#x27;t use
770 # hyperparameter tuning.
771 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
772 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
773 # trained.
774 },
775 },
776 &quot;createTime&quot;: &quot;A String&quot;, # Output only. When the job was created.
777 &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your jobs.
778 # Each label is a key-value pair, where both the key and the value are
779 # arbitrary strings that you supply.
780 # For more information, see the documentation on
781 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
782 &quot;a_key&quot;: &quot;A String&quot;,
783 },
784 &quot;predictionInput&quot;: { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
785 &quot;outputPath&quot;: &quot;A String&quot;, # Required. The output Google Cloud Storage location.
786 &quot;outputDataFormat&quot;: &quot;A String&quot;, # Optional. Format of the output data files, defaults to JSON.
787 &quot;dataFormat&quot;: &quot;A String&quot;, # Required. The format of the input data files.
788 &quot;batchSize&quot;: &quot;A String&quot;, # Optional. Number of records per batch, defaults to 64.
789 # The service will buffer batch_size records in memory before
790 # invoking one TensorFlow prediction call internally. So take the record
791 # size and memory available into consideration when setting this parameter.
792 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for this batch
793 # prediction. If not set, AI Platform will pick the runtime version used
794 # during the CreateVersion request for this model version, or choose the
795 # latest stable version when model version information is not available
796 # such as when the model is specified by uri.
797 &quot;inputPaths&quot;: [ # Required. The Cloud Storage location of the input data files. May contain
798 # &lt;a href=&quot;/storage/docs/gsutil/addlhelp/WildcardNames&quot;&gt;wildcards&lt;/a&gt;.
799 &quot;A String&quot;,
800 ],
801 &quot;region&quot;: &quot;A String&quot;, # Required. The Google Compute Engine region to run the prediction job in.
802 # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
803 # for AI Platform services.
804 &quot;versionName&quot;: &quot;A String&quot;, # Use this field if you want to specify a version of the model to use. The
805 # string is formatted the same way as `model_version`, with the addition
806 # of the version information:
807 #
808 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION&quot;`
809 &quot;modelName&quot;: &quot;A String&quot;, # Use this field if you want to use the default version for the specified
810 # model. The string must use the following format:
811 #
812 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL&quot;`
813 &quot;uri&quot;: &quot;A String&quot;, # Use this field if you want to specify a Google Cloud Storage path for
814 # the model to use.
815 &quot;maxWorkerCount&quot;: &quot;A String&quot;, # Optional. The maximum number of workers to be used for parallel processing.
816 # Defaults to 10 if not specified.
817 &quot;signatureName&quot;: &quot;A String&quot;, # Optional. The name of the signature defined in the SavedModel to use for
818 # this job. Please refer to
819 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
820 # for information about how to use signatures.
821 #
822 # Defaults to
823 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
824 # , which is &quot;serving_default&quot;.
825 },
826 &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
827 }
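For orientation, here is a minimal sketch of a request body that assembles several of the fields documented above for a custom-tier training job. The project, bucket, and trainer module names are hypothetical placeholders, and the commented-out call assumes an authenticated `googleapiclient` discovery client:

```python
# Sketch of a training-job request body using fields documented above.
# Project, bucket, and package names are hypothetical placeholders.
training_job = {
    "jobId": "my_training_job_001",
    "trainingInput": {
        "scaleTier": "CUSTOM",
        "masterType": "n1-standard-8",
        "workerType": "n1-standard-8",
        "workerCount": "2",
        "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],
        "pythonModule": "trainer.task",
        "pythonVersion": "3.7",
        "runtimeVersion": "1.15",
        "region": "us-central1",
        "jobDir": "gs://my-bucket/output",
        # Cancel the job automatically after two hours.
        "scheduling": {"maxRunningTime": "7200s"},
    },
}

# With an authenticated discovery client, the job would be submitted as:
#   ml = googleapiclient.discovery.build("ml", "v1")
#   ml.projects().jobs().create(
#       parent="projects/my-project", body=training_job).execute()
```

Because `scaleTier` is `CUSTOM`, the sketch also sets `masterType`, and it pairs `workerCount` with `workerType` as the field descriptions above require.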
828
829 x__xgafv: string, V1 error format.
830 Allowed values
831 1 - v1 error format
832 2 - v2 error format
833
834Returns:
835 An object of the form:
836
837 { # Represents a training or prediction job.
838 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
839 # prevent simultaneous updates of a job from overwriting each other.
840 # It is strongly suggested that systems make use of the `etag` in the
841 # read-modify-write cycle to perform job updates in order to avoid race
842 # conditions: An `etag` is returned in the response to `GetJob`, and
843 # systems are expected to put that etag in the request to `UpdateJob` to
844 # ensure that their change will be applied to the same version of the job.
845 &quot;trainingInput&quot;: { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
846 # to submit your training job, you can specify the input parameters as
847 # command-line arguments and/or in a YAML configuration file referenced from
848 # the --config command-line argument. For details, see the guide to [submitting
849 # a training job](/ai-platform/training/docs/training-jobs).
850 &quot;parameterServerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
851 #
852 # You should only set `parameterServerConfig.acceleratorConfig` if
853 # `parameterServerType` is set to a Compute Engine machine type. [Learn
854 # about restrictions on accelerator configurations for
855 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
856 #
857 # Set `parameterServerConfig.imageUri` only if you build a custom image for
858 # your parameter server. If `parameterServerConfig.imageUri` has not been
859 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
860 # containers](/ai-platform/training/docs/distributed-training-containers).
861 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
862 # the one used in the custom container. This field is required if the replica
863 # is a TPU worker that uses a custom container. Otherwise, do not specify
864 # this field. This must be a [runtime version that currently supports
865 # training with
866 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
867 #
868 # Note that the version of TensorFlow included in a runtime version may
869 # differ from the numbering of the runtime version itself, because it may
870 # have a different [patch
871 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
872 # In this field, you must specify the runtime version (TensorFlow minor
873 # version). For example, if your custom container runs TensorFlow `1.x.y`,
874 # specify `1.x`.
875 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
876 # If provided, it overrides the default ENTRYPOINT of the Docker image.
877 # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
878 # It cannot be set if a custom container image is
879 # not provided.
880 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
881 # both cannot be set at the same time.
882 &quot;A String&quot;,
883 ],
884 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
885 # Registry. Learn more about [configuring custom
886 # containers](/ai-platform/training/docs/distributed-training-containers).
887 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
888 # The following rules apply for container_command and container_args:
889 # - If you do not supply command or args:
890 # The defaults defined in the Docker image are used.
891 # - If you supply a command but no args:
892 # The default Entrypoint and the default Cmd defined in the Docker image
893 # are ignored. Your command is run without any arguments.
894 # - If you supply only args:
895 # The default Entrypoint defined in the Docker image is run with the args
896 # that you supplied.
897 # - If you supply a command and args:
898 # The default Entrypoint and the default Cmd defined in the Docker image
899 # are ignored. Your command is run with your args.
900 # It cannot be set if a custom container image is
901 # not provided.
902 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
903 # both cannot be set at the same time.
904 &quot;A String&quot;,
905 ],
906 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
907 # [Learn about restrictions on accelerator configurations for
908 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
909 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
910 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
911 # [accelerators for online
912 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
913 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
914 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
915 },
916 },
917 &quot;encryptionConfig&quot;: { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
918 # protect resources created by a training job, instead of using Google&#x27;s
919 # default encryption. If this is set, then all resources created by the
920 # training job will be encrypted with the customer-managed encryption key
921 # that you specify.
922 #
923 # [Learn how and when to use CMEK with AI Platform
924 # Training](/ai-platform/training/docs/cmek).
925 # a resource.
926 &quot;kmsKeyName&quot;: &quot;A String&quot;, # The Cloud KMS resource identifier of the customer-managed encryption key
927 # used to protect a resource, such as a training job. It has the following
928 # format:
929 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
930 },
931 &quot;hyperparameters&quot;: { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
932 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # Optional. The TensorFlow summary tag name to use for optimizing trials. For
933 # current versions of TensorFlow, this tag name should exactly match what is
934 # shown in TensorBoard, including all scopes. For versions of TensorFlow
935 # prior to 0.12, this should be only the tag passed to tf.Summary.
936 # By default, &quot;training/hptuning/metric&quot; will be used.
937 &quot;params&quot;: [ # Required. The set of parameters to tune.
938 { # Represents a single hyperparameter to optimize.
939 &quot;type&quot;: &quot;A String&quot;, # Required. The type of the parameter.
940 &quot;categoricalValues&quot;: [ # Required if type is `CATEGORICAL`. The list of possible categories.
941 &quot;A String&quot;,
942 ],
943 &quot;parameterName&quot;: &quot;A String&quot;, # Required. The parameter name must be unique amongst all ParameterConfigs in
944 # a HyperparameterSpec message. E.g., &quot;learning_rate&quot;.
945 &quot;minValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
946 # should be unset if type is `CATEGORICAL`. This value should be an integer if
947 # type is `INTEGER`.
948 &quot;discreteValues&quot;: [ # Required if type is `DISCRETE`.
949 # A list of feasible points.
950 # The list should be in strictly increasing order. For instance, this
951 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
952 # should not contain more than 1,000 values.
953 3.14,
954 ],
955 &quot;scaleType&quot;: &quot;A String&quot;, # Optional. How the parameter should be scaled to the hypercube.
956 # Leave unset for categorical parameters.
957 # Some kind of scaling is strongly recommended for real or integral
958 # parameters (e.g., `UNIT_LINEAR_SCALE`).
959 &quot;maxValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
960 # should be unset if type is `CATEGORICAL`. This value should be an integer if
961 # type is `INTEGER`.
962 },
963 ],
964 &quot;enableTrialEarlyStopping&quot;: True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
965 # early stopping.
966 &quot;resumePreviousJobId&quot;: &quot;A String&quot;, # Optional. The prior hyperparameter tuning job id that users hope to
967 # continue with. The job id will be used to find the corresponding vizier
968 # study guid and resume the study.
969 &quot;maxParallelTrials&quot;: 42, # Optional. The number of training trials to run concurrently.
970 # You can reduce the time it takes to perform hyperparameter tuning by adding
971 # trials in parallel. However, each trial only benefits from the information
972 # gained in completed trials. That means that a trial does not get access to
973 # the results of trials running at the same time, which could reduce the
974 # quality of the overall optimization.
975 #
976 # Each trial will use the same scale tier and machine types.
977 #
978 # Defaults to one.
979 &quot;maxFailedTrials&quot;: 42, # Optional. The number of failed trials that need to be seen before failing
980 # the hyperparameter tuning job. You can specify this field to override the
981 # default failing criteria for AI Platform hyperparameter tuning jobs.
982 #
983 # Defaults to zero, which means the service decides when a hyperparameter
984 # job should fail.
985 &quot;goal&quot;: &quot;A String&quot;, # Required. The type of goal to use for tuning. Available types are
986 # `MAXIMIZE` and `MINIMIZE`.
987 #
988 # Defaults to `MAXIMIZE`.
989 &quot;maxTrials&quot;: 42, # Optional. How many training trials should be attempted to optimize
990 # the specified hyperparameters.
991 #
992 # Defaults to one.
993 &quot;algorithm&quot;: &quot;A String&quot;, # Optional. The search algorithm specified for the hyperparameter
994 # tuning job.
995 # Uses the default AI Platform hyperparameter tuning
996 # algorithm if unspecified.
997 },
998 &quot;workerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
999 #
1000 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
1001 # to a Compute Engine machine type. [Learn about restrictions on accelerator
1002 # configurations for
1003 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1004 #
1005 # Set `workerConfig.imageUri` only if you build a custom image for your
1006 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
1007 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
1008 # containers](/ai-platform/training/docs/distributed-training-containers).
1009 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
1010 # the one used in the custom container. This field is required if the replica
1011 # is a TPU worker that uses a custom container. Otherwise, do not specify
1012 # this field. This must be a [runtime version that currently supports
1013 # training with
1014 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1015 #
1016 # Note that the version of TensorFlow included in a runtime version may
1017 # differ from the numbering of the runtime version itself, because it may
1018 # have a different [patch
1019 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1020 # In this field, you must specify the runtime version (TensorFlow minor
1021 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1022 # specify `1.x`.
1023 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
1024 # If provided, it overrides the default ENTRYPOINT of the Docker image.
1025 # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
1026 # It cannot be set if a custom container image is
1027 # not provided.
1028 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1029 # both cannot be set at the same time.
1030 &quot;A String&quot;,
1031 ],
1032 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
1033 # Registry. Learn more about [configuring custom
1034 # containers](/ai-platform/training/docs/distributed-training-containers).
1035 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
1036 # The following rules apply for container_command and container_args:
1037 # - If you do not supply command or args:
1038 # The defaults defined in the Docker image are used.
1039 # - If you supply a command but no args:
1040 # The default Entrypoint and the default Cmd defined in the Docker image
1041 # are ignored. Your command is run without any arguments.
1042 # - If you supply only args:
1043 # The default Entrypoint defined in the Docker image is run with the args
1044 # that you supplied.
1045 # - If you supply a command and args:
1046 # The default Entrypoint and the default Cmd defined in the Docker image
1047 # are ignored. Your command is run with your args.
1048 # It cannot be set if a custom container image is
1049 # not provided.
1050 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1051 # both cannot be set at the same time.
1052 &quot;A String&quot;,
1053 ],
1054 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1055 # [Learn about restrictions on accelerator configurations for
1056 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1057 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1058 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1059 # [accelerators for online
1060 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1061 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
1062 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
1063 },
1064 },
1065 &quot;parameterServerCount&quot;: &quot;A String&quot;, # Optional. The number of parameter server replicas to use for the training
1066 # job. Each replica in the cluster will be of the type specified in
1067 # `parameter_server_type`.
1068 #
1069 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
1070 # set this value, you must also set `parameter_server_type`.
1071 #
1072 # The default value is zero.
1073 &quot;packageUris&quot;: [ # Required. The Google Cloud Storage location of the packages with
1074 # the training program and any additional dependencies.
1075 # The maximum number of package URIs is 100.
1076 &quot;A String&quot;,
1077 ],
1078 &quot;evaluatorCount&quot;: &quot;A String&quot;, # Optional. The number of evaluator replicas to use for the training job.
1079 # Each replica in the cluster will be of the type specified in
1080 # `evaluator_type`.
1081 #
1082 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
1083 # set this value, you must also set `evaluator_type`.
1084 #
1085 # The default value is zero.
1086 &quot;masterType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1087 # job&#x27;s master worker. You must specify this field when `scaleTier` is set to
1088 # `CUSTOM`.
1089 #
1090 # You can use certain Compute Engine machine types directly in this field.
1091 # The following types are supported:
1092 #
1093 # - `n1-standard-4`
1094 # - `n1-standard-8`
1095 # - `n1-standard-16`
1096 # - `n1-standard-32`
1097 # - `n1-standard-64`
1098 # - `n1-standard-96`
1099 # - `n1-highmem-2`
1100 # - `n1-highmem-4`
1101 # - `n1-highmem-8`
1102 # - `n1-highmem-16`
1103 # - `n1-highmem-32`
1104 # - `n1-highmem-64`
1105 # - `n1-highmem-96`
1106 # - `n1-highcpu-16`
1107 # - `n1-highcpu-32`
1108 # - `n1-highcpu-64`
1109 # - `n1-highcpu-96`
1110 #
1111 # Learn more about [using Compute Engine machine
1112 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
1113 #
1114 # Alternatively, you can use the following legacy machine types:
1115 #
1116 # - `standard`
1117 # - `large_model`
1118 # - `complex_model_s`
1119 # - `complex_model_m`
1120 # - `complex_model_l`
1121 # - `standard_gpu`
1122 # - `complex_model_m_gpu`
1123 # - `complex_model_l_gpu`
1124 # - `standard_p100`
1125 # - `complex_model_m_p100`
1126 # - `standard_v100`
1127 # - `large_model_v100`
1128 # - `complex_model_m_v100`
1129 # - `complex_model_l_v100`
1130 #
1131 # Learn more about [using legacy machine
1132 # types](/ml-engine/docs/machine-types#legacy-machine-types).
1133 #
1134 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
1135 # field. Learn more about the [special configuration options for training
1136 # with
1137 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
1138 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for training. You must
1139 # either specify this field or specify `masterConfig.imageUri`.
1140 #
1141 # For more information, see the [runtime version
1142 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
1143 # manage runtime versions](/ai-platform/training/docs/versioning).
1144 &quot;evaluatorType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1145 # job&#x27;s evaluator nodes.
1146 #
1147 # The supported values are the same as those described in the entry for
1148 # `masterType`.
1149 #
1150 # This value must be consistent with the category of machine type that
1151 # `masterType` uses. In other words, both must be Compute Engine machine
1152 # types or both must be legacy machine types.
1153 #
1154 # This value must be present when `scaleTier` is set to `CUSTOM` and
1155 # `evaluatorCount` is greater than zero.
1156 &quot;region&quot;: &quot;A String&quot;, # Required. The region to run the training job in. See the [available
1157 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
1158 &quot;workerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1159 # job&#x27;s worker nodes.
1160 #
1161 # The supported values are the same as those described in the entry for
1162 # `masterType`.
1163 #
1164 # This value must be consistent with the category of machine type that
1165 # `masterType` uses. In other words, both must be Compute Engine machine
1166 # types or both must be legacy machine types.
1167 #
1168 # If you use `cloud_tpu` for this value, see special instructions for
1169 # [configuring a custom TPU
1170 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
1171 #
1172 # This value must be present when `scaleTier` is set to `CUSTOM` and
1173 # `workerCount` is greater than zero.
1174 &quot;parameterServerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1175 # job&#x27;s parameter server.
1176 #
1177 # The supported values are the same as those described in the entry for
1178 # `master_type`.
1179 #
1180 # This value must be consistent with the category of machine type that
1181 # `masterType` uses. In other words, both must be Compute Engine machine
1182 # types or both must be legacy machine types.
1183 #
1184 # This value must be present when `scaleTier` is set to `CUSTOM` and
1185 # `parameter_server_count` is greater than zero.
1186 &quot;masterConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
1187 #
1188 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
1189 # to a Compute Engine machine type. Learn about [restrictions on accelerator
1190 # configurations for
1191 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1192 #
1193 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
1194 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
1195 # about [configuring custom
1196 # containers](/ai-platform/training/docs/distributed-training-containers).
1197 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
1198 # the one used in the custom container. This field is required if the replica
1199 # is a TPU worker that uses a custom container. Otherwise, do not specify
1200 # this field. This must be a [runtime version that currently supports
1201 # training with
1202 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1203 #
1204 # Note that the version of TensorFlow included in a runtime version may
1205 # differ from the numbering of the runtime version itself, because it may
1206 # have a different [patch
1207 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1208 # In this field, you must specify the runtime version (TensorFlow minor
1209 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1210 # specify `1.x`.
1211 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
1212 # If provided, it overrides the default ENTRYPOINT of the Docker image.
1213 # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
1214 # This field cannot be set if a custom container image is
1215 # not provided.
1216 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1217 # both cannot be set at the same time.
1218 &quot;A String&quot;,
1219 ],
1220 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
1221 # Registry. Learn more about [configuring custom
1222 # containers](/ai-platform/training/docs/distributed-training-containers).
1223 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
1224 # The following rules apply for container_command and container_args:
1225 # - If you do not supply command or args:
1226 # The defaults defined in the Docker image are used.
1227 # - If you supply a command but no args:
1228 # The default Entrypoint and the default Cmd defined in the Docker image
1229 # are ignored. Your command is run without any arguments.
1230 # - If you supply only args:
1231 # The default Entrypoint defined in the Docker image is run with the args
1232 # that you supplied.
1233 # - If you supply a command and args:
1234 # The default Entrypoint and the default Cmd defined in the Docker image
1235 # are ignored. Your command is run with your args.
1236 # This field cannot be set if a custom container image is
1237 # not provided.
1238 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1239 # both cannot be set at the same time.
1240 &quot;A String&quot;,
1241 ],
1242 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1243 # [Learn about restrictions on accelerator configurations for
1244 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1245 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1246 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1247 # [accelerators for online
1248 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1249 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
1250 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
1251 },
1252 },
1253 &quot;scaleTier&quot;: &quot;A String&quot;, # Required. Specifies the machine types, the number of replicas for workers
1254 # and parameter servers.
1255 &quot;jobDir&quot;: &quot;A String&quot;, # Optional. A Google Cloud Storage path in which to store training outputs
1256 # and other data needed for training. This path is passed to your TensorFlow
1257 # program as the &#x27;--job-dir&#x27; command-line argument. The benefit of specifying
1258 # this field is that Cloud ML validates the path for use in training.
1259 &quot;pythonVersion&quot;: &quot;A String&quot;, # Optional. The version of Python used in training. You must either specify
1260 # this field or specify `masterConfig.imageUri`.
1261 #
1262 # The following Python versions are available:
1263 #
1264 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
1265 # later.
1266 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
1267 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
1268 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
1269 # earlier.
1270 #
1271 # Read more about the Python versions available for [each runtime
1272 # version](/ml-engine/docs/runtime-version-list).
1273 &quot;network&quot;: &quot;A String&quot;, # Optional. The full name of the Google Compute Engine
1274 # [network](/compute/docs/networks-and-firewalls#networks) to which the Job
1275 # is peered. For example, projects/12345/global/networks/myVPC. The format is
1276 # projects/{project}/global/networks/{network}, where {project} is a
1277 # project number, as in &#x27;12345&#x27;, and {network} is the network name.
1278 #
1279 # Private services access must already be configured for the network. If left
1280 # unspecified, the Job is not peered with any network. Learn more about
1281 # connecting a Job to a user network over private
1282 # IP.
1283 &quot;scheduling&quot;: { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
1284 &quot;maxWaitTime&quot;: &quot;A String&quot;,
1285 &quot;maxRunningTime&quot;: &quot;A String&quot;, # Optional. The maximum job running time, expressed in seconds. The field can
1286 # contain up to nine fractional digits, terminated by `s`. If not specified,
1287 # this field defaults to `604800s` (seven days).
1288 #
1289 # If the training job is still running after this duration, AI Platform
1290 # Training cancels it.
1291 #
1292 # For example, if you want to ensure your job runs for no more than 2 hours,
1293 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
1294 # minute).
1295 #
1296 # If you submit your training job using the `gcloud` tool, you can [provide
1297 # this field in a `config.yaml`
1298 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
1299 # For example:
1300 #
1301 # ```yaml
1302 # trainingInput:
1303 # ...
1304 # scheduling:
1305 # maxRunningTime: 7200s
1306 # ...
1307 # ```
1308 },
1309 &quot;evaluatorConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
1310 #
1311 # You should only set `evaluatorConfig.acceleratorConfig` if
1312 # `evaluatorType` is set to a Compute Engine machine type. [Learn
1313 # about restrictions on accelerator configurations for
1314 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1315 #
1316 # Set `evaluatorConfig.imageUri` only if you build a custom image for
1317 # your evaluator. If `evaluatorConfig.imageUri` has not been
1318 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
1319 # containers](/ai-platform/training/docs/distributed-training-containers).
1320 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
1321 # the one used in the custom container. This field is required if the replica
1322 # is a TPU worker that uses a custom container. Otherwise, do not specify
1323 # this field. This must be a [runtime version that currently supports
1324 # training with
1325 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1326 #
1327 # Note that the version of TensorFlow included in a runtime version may
1328 # differ from the numbering of the runtime version itself, because it may
1329 # have a different [patch
1330 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1331 # In this field, you must specify the runtime version (TensorFlow minor
1332 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1333 # specify `1.x`.
1334 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
1335 # If provided, it overrides the default ENTRYPOINT of the Docker image.
1336 # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
1337 # This field cannot be set if a custom container image is
1338 # not provided.
1339 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1340 # both cannot be set at the same time.
1341 &quot;A String&quot;,
1342 ],
1343 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
1344 # Registry. Learn more about [configuring custom
1345 # containers](/ai-platform/training/docs/distributed-training-containers).
1346 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
1347 # The following rules apply for container_command and container_args:
1348 # - If you do not supply command or args:
1349 # The defaults defined in the Docker image are used.
1350 # - If you supply a command but no args:
1351 # The default Entrypoint and the default Cmd defined in the Docker image
1352 # are ignored. Your command is run without any arguments.
1353 # - If you supply only args:
1354 # The default Entrypoint defined in the Docker image is run with the args
1355 # that you supplied.
1356 # - If you supply a command and args:
1357 # The default Entrypoint and the default Cmd defined in the Docker image
1358 # are ignored. Your command is run with your args.
1359 # This field cannot be set if a custom container image is
1360 # not provided.
1361 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1362 # both cannot be set at the same time.
1363 &quot;A String&quot;,
1364 ],
1365 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1366 # [Learn about restrictions on accelerator configurations for
1367 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1368 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1369 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1370 # [accelerators for online
1371 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1372 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
1373 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
1374 },
1375 },
1376 &quot;useChiefInTfConfig&quot;: True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
1377 # variable when training with a custom container. Defaults to `false`. [Learn
1378 # more about this
1379 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
1380 #
1381 # This field has no effect for training jobs that don&#x27;t use a custom
1382 # container.
1383 &quot;workerCount&quot;: &quot;A String&quot;, # Optional. The number of worker replicas to use for the training job. Each
1384 # replica in the cluster will be of the type specified in `worker_type`.
1385 #
1386 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
1387 # set this value, you must also set `worker_type`.
1388 #
1389 # The default value is zero.
1390 &quot;pythonModule&quot;: &quot;A String&quot;, # Required. The Python module name to run after installing the packages.
1391 &quot;args&quot;: [ # Optional. Command-line arguments passed to the training application when it
1392 # starts. If your job uses a custom container, then the arguments are passed
1393 # to the container&#x27;s &lt;a class=&quot;external&quot; target=&quot;_blank&quot;
1394 # href=&quot;https://docs.docker.com/engine/reference/builder/#entrypoint&quot;&gt;
1395 # `ENTRYPOINT`&lt;/a&gt; command.
1396 &quot;A String&quot;,
1397 ],
1398 },
1399 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of a job.
1400 &quot;jobId&quot;: &quot;A String&quot;, # Required. The user-specified id of the job.
1401 &quot;endTime&quot;: &quot;A String&quot;, # Output only. When the job processing was completed.
1402 &quot;startTime&quot;: &quot;A String&quot;, # Output only. When the job processing was started.
1403 &quot;predictionOutput&quot;: { # Represents results of a prediction job. # The current prediction job result.
1404 &quot;errorCount&quot;: &quot;A String&quot;, # The number of data instances which resulted in errors.
1405 &quot;outputPath&quot;: &quot;A String&quot;, # The output Google Cloud Storage location provided at the job creation time.
1406 &quot;nodeHours&quot;: 3.14, # Node hours used by the batch prediction job.
1407 &quot;predictionCount&quot;: &quot;A String&quot;, # The number of generated predictions.
1408 },
1409 &quot;trainingOutput&quot;: { # Represents results of a training job. Output only. # The current training job result.
1410 &quot;trials&quot;: [ # Results for individual Hyperparameter trials.
1411 # Only set for hyperparameter tuning jobs.
1412 { # Represents the result of a single hyperparameter tuning trial from a
1413 # training job. The TrainingOutput object that is returned on successful
1414 # completion of a training job with hyperparameter tuning includes a list
1415 # of HyperparameterOutput objects, one for each successful trial.
1416 &quot;allMetrics&quot;: [ # All recorded objective metrics for this trial. This field is not currently
1417 # populated.
1418 { # An observed value of a metric.
1419 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
1420 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
1421 },
1422 ],
1423 &quot;hyperparameters&quot;: { # The hyperparameters given to this trial.
1424 &quot;a_key&quot;: &quot;A String&quot;,
1425 },
1426 &quot;trialId&quot;: &quot;A String&quot;, # The trial id for these results.
1427 &quot;endTime&quot;: &quot;A String&quot;, # Output only. End time for the trial.
1428 &quot;isTrialStoppedEarly&quot;: True or False, # True if the trial is stopped early.
1429 &quot;startTime&quot;: &quot;A String&quot;, # Output only. Start time for the trial.
1430 &quot;finalMetric&quot;: { # An observed value of a metric. # The final objective metric seen for this trial.
1431 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
1432 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
1433 },
1434 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
1435 # Only set for trials of built-in algorithms jobs that have succeeded.
1436 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
1437 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
1438 # saves the trained model. Only set for successful jobs that don&#x27;t use
1439 # hyperparameter tuning.
1440 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
1441 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
1442 # trained.
1443 },
1444 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of the trial.
1445 },
1446 ],
1447 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # The TensorFlow summary tag name used for optimizing hyperparameter tuning
1448 # trials. See
1449 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
1450 # for more information. Only set for hyperparameter tuning jobs.
1451 &quot;completedTrialCount&quot;: &quot;A String&quot;, # The number of hyperparameter tuning trials that completed successfully.
1452 # Only set for hyperparameter tuning jobs.
1453 &quot;isHyperparameterTuningJob&quot;: True or False, # Whether this job is a hyperparameter tuning job.
1454 &quot;consumedMLUnits&quot;: 3.14, # The amount of ML units consumed by the job.
1455 &quot;isBuiltInAlgorithmJob&quot;: True or False, # Whether this job is a built-in Algorithm job.
1456 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
1457 # Only set for built-in algorithms jobs.
1458 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
1459 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
1460 # saves the trained model. Only set for successful jobs that don&#x27;t use
1461 # hyperparameter tuning.
1462 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
1463 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
1464 # trained.
1465 },
1466 },
1467 &quot;createTime&quot;: &quot;A String&quot;, # Output only. When the job was created.
1468 &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your jobs.
1469 # Each label is a key-value pair, where both the key and the value are
1470 # arbitrary strings that you supply.
1471 # For more information, see the documentation on
1472 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
1473 &quot;a_key&quot;: &quot;A String&quot;,
1474 },
1475 &quot;predictionInput&quot;: { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
1476 &quot;outputPath&quot;: &quot;A String&quot;, # Required. The output Google Cloud Storage location.
1477 &quot;outputDataFormat&quot;: &quot;A String&quot;, # Optional. Format of the output data files, defaults to JSON.
1478 &quot;dataFormat&quot;: &quot;A String&quot;, # Required. The format of the input data files.
1479 &quot;batchSize&quot;: &quot;A String&quot;, # Optional. Number of records per batch, defaults to 64.
1480 # The service will buffer batch_size number of records in memory before
1481 # invoking one TensorFlow prediction call internally. So take the record
1482 # size and memory available into consideration when setting this parameter.
1483 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for this batch
1484 # prediction. If not set, AI Platform will pick the runtime version used
1485 # during the CreateVersion request for this model version, or choose the
1486 # latest stable version when model version information is not available
1487 # such as when the model is specified by uri.
1488 &quot;inputPaths&quot;: [ # Required. The Cloud Storage location of the input data files. May contain
1489 # &lt;a href=&quot;/storage/docs/gsutil/addlhelp/WildcardNames&quot;&gt;wildcards&lt;/a&gt;.
1490 &quot;A String&quot;,
1491 ],
1492 &quot;region&quot;: &quot;A String&quot;, # Required. The Google Compute Engine region to run the prediction job in.
1493 # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
1494 # for AI Platform services.
1495 &quot;versionName&quot;: &quot;A String&quot;, # Use this field if you want to specify a version of the model to use. The
1496 # string is formatted the same way as `model_version`, with the addition
1497 # of the version information:
1498 #
1499 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION&quot;`
1500 &quot;modelName&quot;: &quot;A String&quot;, # Use this field if you want to use the default version for the specified
1501 # model. The string must use the following format:
1502 #
1503 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL&quot;`
1504 &quot;uri&quot;: &quot;A String&quot;, # Use this field if you want to specify a Google Cloud Storage path for
1505 # the model to use.
1506 &quot;maxWorkerCount&quot;: &quot;A String&quot;, # Optional. The maximum number of workers to be used for parallel processing.
1507 # Defaults to 10 if not specified.
1508 &quot;signatureName&quot;: &quot;A String&quot;, # Optional. The name of the signature defined in the SavedModel to use for
1509 # this job. Please refer to
1510 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
1511 # for information about how to use signatures.
1512 #
1513 # Defaults to
1514 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants),
1515 # which is &quot;serving_default&quot;.
1516 },
1517 &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
1518 }</pre>
1519</div>
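Several of the `trainingInput` field descriptions above state cross-field constraints: `CUSTOM` scale tier requires `masterType`; a nonzero `workerCount`, `parameterServerCount`, or `evaluatorCount` requires the matching `*Type` field and `scaleTier: CUSTOM`; and only one of `runtimeVersion` or `masterConfig.imageUri` should be set. A minimal sketch of checking these rules client-side before submitting the job body — the helper name, the enforced subset of rules, and the bucket/module names are illustrative assumptions, not part of the generated client:

```python
def validate_training_input(ti):
    """Check a few documented TrainingInput constraints locally.

    Raises ValueError on the first violated rule. Only a subset of the
    rules described above is enforced; this is a sketch, not the API's
    own server-side validation.
    """
    if ti.get("scaleTier") == "CUSTOM" and not ti.get("masterType"):
        raise ValueError("masterType is required when scaleTier is CUSTOM")
    for count_key, type_key in [("workerCount", "workerType"),
                                ("parameterServerCount", "parameterServerType"),
                                ("evaluatorCount", "evaluatorType")]:
        # Replica counts are JSON strings in this API, e.g. "2".
        if int(ti.get(count_key, 0)) > 0:
            if ti.get("scaleTier") != "CUSTOM":
                raise ValueError(f"{count_key} requires scaleTier CUSTOM")
            if not ti.get(type_key):
                raise ValueError(f"{count_key} requires {type_key}")
    # runtimeVersion and masterConfig.imageUri are mutually exclusive.
    if ti.get("runtimeVersion") and ti.get("masterConfig", {}).get("imageUri"):
        raise ValueError("set only one of runtimeVersion or masterConfig.imageUri")
    return ti

# Hypothetical job body; names and paths are placeholders.
job = {
    "jobId": "my_job_001",
    "trainingInput": validate_training_input({
        "scaleTier": "CUSTOM",
        "masterType": "n1-standard-8",
        "workerType": "n1-standard-8",
        "workerCount": "2",
        "region": "us-central1",
        "pythonModule": "trainer.task",
        "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],
        "runtimeVersion": "2.1",
    }),
}
```

The validated dict would then be passed as the `body` of a `projects().jobs().create(...)` call.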
1520
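The scheduling fields described above (`maxRunningTime`, and by the same convention `maxWaitTime`) use a Duration string: a decimal number of seconds with up to nine fractional digits, terminated by `s`, defaulting to `604800s` (seven days). A small helper for converting such strings for local checks — the function name and error handling are assumptions, not part of the client:

```python
def parse_duration_seconds(value):
    """Parse a Duration string like '7200s' or '3.5s' into float seconds.

    Matches the format documented for scheduling.maxRunningTime: a decimal
    number with up to nine fractional digits, terminated by 's'.
    """
    if not value.endswith("s"):
        raise ValueError(f"expected trailing 's' in duration: {value!r}")
    number = value[:-1]
    whole, _, frac = number.partition(".")
    if frac and len(frac) > 9:
        raise ValueError(f"more than nine fractional digits: {value!r}")
    return float(number)

# The documented default of 604800s is seven days:
assert parse_duration_seconds("604800s") == 7 * 24 * 60 * 60
```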
1521<div class="method">
1522 <code class="details" id="get">get(name, x__xgafv=None)</code>
1523 <pre>Describes a job.
1524
1525Args:
1526 name: string, Required. The name of the job to get the description of. (required)
1527 x__xgafv: string, V1 error format.
1528 Allowed values
1529 1 - v1 error format
1530 2 - v2 error format
1531
1532Returns:
1533 An object of the form:
1534
1535 { # Represents a training or prediction job.
1536 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
1537 # prevent simultaneous updates of a job from overwriting each other.
1538 # It is strongly suggested that systems make use of the `etag` in the
1539 # read-modify-write cycle to perform job updates in order to avoid race
1540 # conditions: An `etag` is returned in the response to `GetJob`, and
1541 # systems are expected to put that etag in the request to `UpdateJob` to
1542 # ensure that their change will be applied to the same version of the job.
1543 &quot;trainingInput&quot;: { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
1544 # to submit your training job, you can specify the input parameters as
1545 # command-line arguments and/or in a YAML configuration file referenced from
1546 # the --config command-line argument. For details, see the guide to [submitting
1547 # a training job](/ai-platform/training/docs/training-jobs).
1548 &quot;parameterServerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
1549 #
1550 # You should only set `parameterServerConfig.acceleratorConfig` if
1551 # `parameterServerType` is set to a Compute Engine machine type. [Learn
1552 # about restrictions on accelerator configurations for
1553 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1554 #
1555 # Set `parameterServerConfig.imageUri` only if you build a custom image for
1556 # your parameter server. If `parameterServerConfig.imageUri` has not been
1557 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
1558 # containers](/ai-platform/training/docs/distributed-training-containers).
1559 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
1560 # the one used in the custom container. This field is required if the replica
1561 # is a TPU worker that uses a custom container. Otherwise, do not specify
1562 # this field. This must be a [runtime version that currently supports
1563 # training with
1564 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1565 #
1566 # Note that the version of TensorFlow included in a runtime version may
1567 # differ from the numbering of the runtime version itself, because it may
1568 # have a different [patch
1569 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1570 # In this field, you must specify the runtime version (TensorFlow minor
1571 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1572 # specify `1.x`.
1573 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
1574 # If provided, it overrides the default ENTRYPOINT of the Docker image.
1575 # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
1576 # This field cannot be set if a custom container image is
1577 # not provided.
1578 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1579 # both cannot be set at the same time.
1580 &quot;A String&quot;,
1581 ],
1582 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
1583 # Registry. Learn more about [configuring custom
1584 # containers](/ai-platform/training/docs/distributed-training-containers).
1585 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
1586 # The following rules apply for container_command and container_args:
1587 # - If you do not supply command or args:
1588 # The defaults defined in the Docker image are used.
1589 # - If you supply a command but no args:
1590 # The default Entrypoint and the default Cmd defined in the Docker image
1591 # are ignored. Your command is run without any arguments.
1592 # - If you supply only args:
1593 # The default Entrypoint defined in the Docker image is run with the args
1594 # that you supplied.
1595 # - If you supply a command and args:
1596 # The default Entrypoint and the default Cmd defined in the Docker image
1597 # are ignored. Your command is run with your args.
1598 # This field cannot be set if a custom container image is
1599 # not provided.
1600 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1601 # both cannot be set at the same time.
1602 &quot;A String&quot;,
1603 ],
1604 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1605 # [Learn about restrictions on accelerator configurations for
1606 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1607 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1608 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1609 # [accelerators for online
1610 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1611 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
1612 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
1613 },
1614 },
1615 &quot;encryptionConfig&quot;: { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
1616 # protect resources created by a training job, instead of using Google&#x27;s
1617 # default encryption. If this is set, then all resources created by the
1618 # training job will be encrypted with the customer-managed encryption key
1619 # that you specify.
1620 #
1621 # [Learn how and when to use CMEK with AI Platform
1622 # Training](/ai-platform/training/docs/cmek).
1623 # a resource.
1624 &quot;kmsKeyName&quot;: &quot;A String&quot;, # The Cloud KMS resource identifier of the customer-managed encryption key
1625 # used to protect a resource, such as a training job. It has the following
1626 # format:
1627 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
1628 },
1629 &quot;hyperparameters&quot;: { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
1630 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # Optional. The TensorFlow summary tag name to use for optimizing trials. For
1631 # current versions of TensorFlow, this tag name should exactly match what is
1632 # shown in TensorBoard, including all scopes. For versions of TensorFlow
1633 # prior to 0.12, this should be only the tag passed to tf.Summary.
1634 # By default, &quot;training/hptuning/metric&quot; will be used.
1635 &quot;params&quot;: [ # Required. The set of parameters to tune.
1636 { # Represents a single hyperparameter to optimize.
1637 &quot;type&quot;: &quot;A String&quot;, # Required. The type of the parameter.
1638 &quot;categoricalValues&quot;: [ # Required if type is `CATEGORICAL`. The list of possible categories.
1639 &quot;A String&quot;,
1640 ],
1641 &quot;parameterName&quot;: &quot;A String&quot;, # Required. The parameter name must be unique amongst all ParameterConfigs in
1642 # a HyperparameterSpec message. E.g., &quot;learning_rate&quot;.
1643 &quot;minValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
1644 # should be unset if type is `CATEGORICAL`. This value should be an integer if
1645 # type is `INTEGER`.
1646 &quot;discreteValues&quot;: [ # Required if type is `DISCRETE`.
1647 # A list of feasible points.
1648 # The list should be in strictly increasing order. For instance, this
1649 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
1650 # should not contain more than 1,000 values.
1651 3.14,
1652 ],
1653 &quot;scaleType&quot;: &quot;A String&quot;, # Optional. How the parameter should be scaled to the hypercube.
1654 # Leave unset for categorical parameters.
1655 # Some kind of scaling is strongly recommended for real or integral
1656 # parameters (e.g., `UNIT_LINEAR_SCALE`).
1657 &quot;maxValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
1658 # should be unset if type is `CATEGORICAL`. This value should be an integer if
1659 # type is `INTEGER`.
1660 },
1661 ],
1662 &quot;enableTrialEarlyStopping&quot;: True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
1663 # early stopping.
1664 &quot;resumePreviousJobId&quot;: &quot;A String&quot;, # Optional. The prior hyperparameter tuning job id that users hope to
1665 # continue with. The job id will be used to find the corresponding vizier
1666 # study guid and resume the study.
1667 &quot;maxParallelTrials&quot;: 42, # Optional. The number of training trials to run concurrently.
1668 # You can reduce the time it takes to perform hyperparameter tuning by adding
1669 # trials in parallel. However, each trial only benefits from the information
1670 # gained in completed trials. That means that a trial does not get access to
1671 # the results of trials running at the same time, which could reduce the
1672 # quality of the overall optimization.
1673 #
1674 # Each trial will use the same scale tier and machine types.
1675 #
1676 # Defaults to one.
1677 &quot;maxFailedTrials&quot;: 42, # Optional. The number of failed trials that need to be seen before failing
1678 # the hyperparameter tuning job. You can specify this field to override the
1679 # default failing criteria for AI Platform hyperparameter tuning jobs.
1680 #
1681 # Defaults to zero, which means the service decides when a hyperparameter
1682 # job should fail.
1683 &quot;goal&quot;: &quot;A String&quot;, # Required. The type of goal to use for tuning. Available types are
1684 # `MAXIMIZE` and `MINIMIZE`.
1685 #
1686 # Defaults to `MAXIMIZE`.
1687 &quot;maxTrials&quot;: 42, # Optional. How many training trials should be attempted to optimize
1688 # the specified hyperparameters.
1689 #
1690 # Defaults to one.
1691 &quot;algorithm&quot;: &quot;A String&quot;, # Optional. The search algorithm specified for the hyperparameter
1692 # tuning job.
1693 # Uses the default AI Platform hyperparameter tuning
1694 # algorithm if unspecified.
1695 },
1696 &quot;workerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
1697 #
1698 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
1699 # to a Compute Engine machine type. [Learn about restrictions on accelerator
1700 # configurations for
1701 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1702 #
1703 # Set `workerConfig.imageUri` only if you build a custom image for your
1704 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
1705 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
1706 # containers](/ai-platform/training/docs/distributed-training-containers).
1707 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
1708 # the one used in the custom container. This field is required if the replica
1709 # is a TPU worker that uses a custom container. Otherwise, do not specify
1710 # this field. This must be a [runtime version that currently supports
1711 # training with
1712 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1713 #
1714 # Note that the version of TensorFlow included in a runtime version may
1715 # differ from the numbering of the runtime version itself, because it may
1716 # have a different [patch
1717 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1718 # In this field, you must specify the runtime version (TensorFlow minor
1719 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1720 # specify `1.x`.
1721 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
1722 # If provided, it will override default ENTRYPOINT of the docker image.
1723 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
1724 # It can only be set if a custom
1725 # container image is provided.
1726 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1727 # both cannot be set at the same time.
1728 &quot;A String&quot;,
1729 ],
1730 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
1731 # Registry. Learn more about [configuring custom
1732 # containers](/ai-platform/training/docs/distributed-training-containers).
1733 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
1734 # The following rules apply for container_command and container_args:
1735 # - If you do not supply command or args:
1736 # The defaults defined in the Docker image are used.
1737 # - If you supply a command but no args:
1738 # The default EntryPoint and the default Cmd defined in the Docker image
1739 # are ignored. Your command is run without any arguments.
1740 # - If you supply only args:
1741 # The default Entrypoint defined in the Docker image is run with the args
1742 # that you supplied.
1743 # - If you supply a command and args:
1744 # The default Entrypoint and the default Cmd defined in the Docker image
1745 # are ignored. Your command is run with your args.
1746 # It can only be set if a custom
1747 # container image is provided.
1748 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1749 # both cannot be set at the same time.
1750 &quot;A String&quot;,
1751 ],
1752 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1753 # [Learn about restrictions on accelerator configurations for
1754 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1755 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1756 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1757 # [accelerators for online
1758 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1759 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
1760 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
1761 },
1762 },
1763 &quot;parameterServerCount&quot;: &quot;A String&quot;, # Optional. The number of parameter server replicas to use for the training
1764 # job. Each replica in the cluster will be of the type specified in
1765 # `parameter_server_type`.
1766 #
1767 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
1768 # set this value, you must also set `parameter_server_type`.
1769 #
1770 # The default value is zero.
1771 &quot;packageUris&quot;: [ # Required. The Google Cloud Storage location of the packages with
1772 # the training program and any additional dependencies.
1773 # The maximum number of package URIs is 100.
1774 &quot;A String&quot;,
1775 ],
1776 &quot;evaluatorCount&quot;: &quot;A String&quot;, # Optional. The number of evaluator replicas to use for the training job.
1777 # Each replica in the cluster will be of the type specified in
1778 # `evaluator_type`.
1779 #
1780 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
1781 # set this value, you must also set `evaluator_type`.
1782 #
1783 # The default value is zero.
1784 &quot;masterType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1785 # job&#x27;s master worker. You must specify this field when `scaleTier` is set to
1786 # `CUSTOM`.
1787 #
1788 # You can use certain Compute Engine machine types directly in this field.
1789 # The following types are supported:
1790 #
1791 # - `n1-standard-4`
1792 # - `n1-standard-8`
1793 # - `n1-standard-16`
1794 # - `n1-standard-32`
1795 # - `n1-standard-64`
1796 # - `n1-standard-96`
1797 # - `n1-highmem-2`
1798 # - `n1-highmem-4`
1799 # - `n1-highmem-8`
1800 # - `n1-highmem-16`
1801 # - `n1-highmem-32`
1802 # - `n1-highmem-64`
1803 # - `n1-highmem-96`
1804 # - `n1-highcpu-16`
1805 # - `n1-highcpu-32`
1806 # - `n1-highcpu-64`
1807 # - `n1-highcpu-96`
1808 #
1809 # Learn more about [using Compute Engine machine
1810 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
1811 #
1812 # Alternatively, you can use the following legacy machine types:
1813 #
1814 # - `standard`
1815 # - `large_model`
1816 # - `complex_model_s`
1817 # - `complex_model_m`
1818 # - `complex_model_l`
1819 # - `standard_gpu`
1820 # - `complex_model_m_gpu`
1821 # - `complex_model_l_gpu`
1822 # - `standard_p100`
1823 # - `complex_model_m_p100`
1824 # - `standard_v100`
1825 # - `large_model_v100`
1826 # - `complex_model_m_v100`
1827 # - `complex_model_l_v100`
1828 #
1829 # Learn more about [using legacy machine
1830 # types](/ml-engine/docs/machine-types#legacy-machine-types).
1831 #
1832 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
1833 # field. Learn more about the [special configuration options for training
1834 # with
1835 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
1836 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for training. You must
1837 # either specify this field or specify `masterConfig.imageUri`.
1838 #
1839 # For more information, see the [runtime version
1840 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
1841 # manage runtime versions](/ai-platform/training/docs/versioning).
1842 &quot;evaluatorType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1843 # job&#x27;s evaluator nodes.
1844 #
1845 # The supported values are the same as those described in the entry for
1846 # `masterType`.
1847 #
1848 # This value must be consistent with the category of machine type that
1849 # `masterType` uses. In other words, both must be Compute Engine machine
1850 # types or both must be legacy machine types.
1851 #
1852 # This value must be present when `scaleTier` is set to `CUSTOM` and
1853 # `evaluatorCount` is greater than zero.
1854 &quot;region&quot;: &quot;A String&quot;, # Required. The region to run the training job in. See the [available
1855 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
1856 &quot;workerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1857 # job&#x27;s worker nodes.
1858 #
1859 # The supported values are the same as those described in the entry for
1860 # `masterType`.
1861 #
1862 # This value must be consistent with the category of machine type that
1863 # `masterType` uses. In other words, both must be Compute Engine machine
1864 # types or both must be legacy machine types.
1865 #
1866 # If you use `cloud_tpu` for this value, see special instructions for
1867 # [configuring a custom TPU
1868 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
1869 #
1870 # This value must be present when `scaleTier` is set to `CUSTOM` and
1871 # `workerCount` is greater than zero.
1872 &quot;parameterServerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1873 # job&#x27;s parameter server.
1874 #
1875 # The supported values are the same as those described in the entry for
1876 # `master_type`.
1877 #
1878 # This value must be consistent with the category of machine type that
1879 # `masterType` uses. In other words, both must be Compute Engine machine
1880 # types or both must be legacy machine types.
1881 #
1882 # This value must be present when `scaleTier` is set to `CUSTOM` and
1883 # `parameter_server_count` is greater than zero.
1884 &quot;masterConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
1885 #
1886 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
1887 # to a Compute Engine machine type. Learn about [restrictions on accelerator
1888 # configurations for
1889 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1890 #
1891 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
1892 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
1893 # about [configuring custom
1894 # containers](/ai-platform/training/docs/distributed-training-containers).
1895 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
1896 # the one used in the custom container. This field is required if the replica
1897 # is a TPU worker that uses a custom container. Otherwise, do not specify
1898 # this field. This must be a [runtime version that currently supports
1899 # training with
1900 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1901 #
1902 # Note that the version of TensorFlow included in a runtime version may
1903 # differ from the numbering of the runtime version itself, because it may
1904 # have a different [patch
1905 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1906 # In this field, you must specify the runtime version (TensorFlow minor
1907 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1908 # specify `1.x`.
1909 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
1910 # If provided, it will override default ENTRYPOINT of the docker image.
1911 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
1912 # It can only be set if a custom
1913 # container image is provided.
1914 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1915 # both cannot be set at the same time.
1916 &quot;A String&quot;,
1917 ],
1918 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
1919 # Registry. Learn more about [configuring custom
1920 # containers](/ai-platform/training/docs/distributed-training-containers).
1921 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
1922 # The following rules apply for container_command and container_args:
1923 # - If you do not supply command or args:
1924 # The defaults defined in the Docker image are used.
1925 # - If you supply a command but no args:
1926 # The default EntryPoint and the default Cmd defined in the Docker image
1927 # are ignored. Your command is run without any arguments.
1928 # - If you supply only args:
1929 # The default Entrypoint defined in the Docker image is run with the args
1930 # that you supplied.
1931 # - If you supply a command and args:
1932 # The default Entrypoint and the default Cmd defined in the Docker image
1933 # are ignored. Your command is run with your args.
1934 # It can only be set if a custom
1935 # container image is provided.
1936 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1937 # both cannot be set at the same time.
1938 &quot;A String&quot;,
1939 ],
1940 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1941 # [Learn about restrictions on accelerator configurations for
1942 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1943 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1944 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1945 # [accelerators for online
1946 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1947 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
1948 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
1949 },
1950 },
1951 &quot;scaleTier&quot;: &quot;A String&quot;, # Required. Specifies the machine types, the number of replicas for workers
1952 # and parameter servers.
1953 &quot;jobDir&quot;: &quot;A String&quot;, # Optional. A Google Cloud Storage path in which to store training outputs
1954 # and other data needed for training. This path is passed to your TensorFlow
1955 # program as the &#x27;--job-dir&#x27; command-line argument. The benefit of specifying
1956 # this field is that Cloud ML validates the path for use in training.
1957 &quot;pythonVersion&quot;: &quot;A String&quot;, # Optional. The version of Python used in training. You must either specify
1958 # this field or specify `masterConfig.imageUri`.
1959 #
1960 # The following Python versions are available:
1961 #
1962 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
1963 # later.
1964 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
1965 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
1966 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
1967 # earlier.
1968 #
1969 # Read more about the Python versions available for [each runtime
1970 # version](/ml-engine/docs/runtime-version-list).
1971 &quot;network&quot;: &quot;A String&quot;, # Optional. The full name of the Google Compute Engine
1972 # [network](/compute/docs/networks-and-firewalls#networks) to which the Job
1973 # is peered. For example, projects/12345/global/networks/myVPC. Format is of
1974 # the form projects/{project}/global/networks/{network}, where {project} is a
1975 # project number, as in &#x27;12345&#x27;, and {network} is the network name.
1976 #
1977 # Private services access must already be configured for the network. If left
1978 # unspecified, the Job is not peered with any network. Learn more -
1979 # Connecting Job to user network over private
1980 # IP.
1981 &quot;scheduling&quot;: { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
1982 &quot;maxWaitTime&quot;: &quot;A String&quot;,
1983 &quot;maxRunningTime&quot;: &quot;A String&quot;, # Optional. The maximum job running time, expressed in seconds. The field can
1984 # contain up to nine fractional digits, terminated by `s`. If not specified,
1985 # this field defaults to `604800s` (seven days).
1986 #
1987 # If the training job is still running after this duration, AI Platform
1988 # Training cancels it.
1989 #
1990 # For example, if you want to ensure your job runs for no more than 2 hours,
1991 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
1992 # minute).
1993 #
1994 # If you submit your training job using the `gcloud` tool, you can [provide
1995 # this field in a `config.yaml`
1996 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
1997 # For example:
1998 #
1999 # ```yaml
2000 # trainingInput:
2001 # ...
2002 # scheduling:
2003 # maxRunningTime: 7200s
2004 # ...
2005 # ```
2006 },
2007 &quot;evaluatorConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
2008 #
2009 # You should only set `evaluatorConfig.acceleratorConfig` if
2010 # `evaluatorType` is set to a Compute Engine machine type. [Learn
2011 # about restrictions on accelerator configurations for
2012 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2013 #
2014 # Set `evaluatorConfig.imageUri` only if you build a custom image for
2015 # your evaluator. If `evaluatorConfig.imageUri` has not been
2016 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
2017 # containers](/ai-platform/training/docs/distributed-training-containers).
2018 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
2019 # the one used in the custom container. This field is required if the replica
2020 # is a TPU worker that uses a custom container. Otherwise, do not specify
2021 # this field. This must be a [runtime version that currently supports
2022 # training with
2023 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
2024 #
2025 # Note that the version of TensorFlow included in a runtime version may
2026 # differ from the numbering of the runtime version itself, because it may
2027 # have a different [patch
2028 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
2029 # In this field, you must specify the runtime version (TensorFlow minor
2030 # version). For example, if your custom container runs TensorFlow `1.x.y`,
2031 # specify `1.x`.
2032 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
2033 # If provided, it will override default ENTRYPOINT of the docker image.
2034 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
2035 # It can only be set if a custom
2036 # container image is provided.
2037 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
2038 # both cannot be set at the same time.
2039 &quot;A String&quot;,
2040 ],
2041 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
2042 # Registry. Learn more about [configuring custom
2043 # containers](/ai-platform/training/docs/distributed-training-containers).
2044 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
2045 # The following rules apply for container_command and container_args:
2046 # - If you do not supply command or args:
2047 # The defaults defined in the Docker image are used.
2048 # - If you supply a command but no args:
2049 # The default EntryPoint and the default Cmd defined in the Docker image
2050 # are ignored. Your command is run without any arguments.
2051 # - If you supply only args:
2052 # The default Entrypoint defined in the Docker image is run with the args
2053 # that you supplied.
2054 # - If you supply a command and args:
2055 # The default Entrypoint and the default Cmd defined in the Docker image
2056 # are ignored. Your command is run with your args.
2057 # It can only be set if a custom
2058 # container image is provided.
2059 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
2060 # both cannot be set at the same time.
2061 &quot;A String&quot;,
2062 ],
2063 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
2064 # [Learn about restrictions on accelerator configurations for
2065 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2066 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
2067 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
2068 # [accelerators for online
2069 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
2070 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
2071 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
2072 },
2073 },
2074 &quot;useChiefInTfConfig&quot;: True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
2075 # variable when training with a custom container. Defaults to `false`. [Learn
2076 # more about this
2077 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
2078 #
2079 # This field has no effect for training jobs that don&#x27;t use a custom
2080 # container.
2081 &quot;workerCount&quot;: &quot;A String&quot;, # Optional. The number of worker replicas to use for the training job. Each
2082 # replica in the cluster will be of the type specified in `worker_type`.
2083 #
2084 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
2085 # set this value, you must also set `worker_type`.
2086 #
2087 # The default value is zero.
2088 &quot;pythonModule&quot;: &quot;A String&quot;, # Required. The Python module name to run after installing the packages.
2089 &quot;args&quot;: [ # Optional. Command-line arguments passed to the training application when it
2090 # starts. If your job uses a custom container, then the arguments are passed
2091 # to the container&#x27;s &lt;a class=&quot;external&quot; target=&quot;_blank&quot;
2092 # href=&quot;https://docs.docker.com/engine/reference/builder/#entrypoint&quot;&gt;
2093 # `ENTRYPOINT`&lt;/a&gt; command.
2094 &quot;A String&quot;,
2095 ],
2096 },
2097 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of a job.
2098 &quot;jobId&quot;: &quot;A String&quot;, # Required. The user-specified id of the job.
2099 &quot;endTime&quot;: &quot;A String&quot;, # Output only. When the job processing was completed.
2100 &quot;startTime&quot;: &quot;A String&quot;, # Output only. When the job processing was started.
2101 &quot;predictionOutput&quot;: { # Represents results of a prediction job. # The current prediction job result.
2102 &quot;errorCount&quot;: &quot;A String&quot;, # The number of data instances which resulted in errors.
2103 &quot;outputPath&quot;: &quot;A String&quot;, # The output Google Cloud Storage location provided at the job creation time.
2104 &quot;nodeHours&quot;: 3.14, # Node hours used by the batch prediction job.
2105 &quot;predictionCount&quot;: &quot;A String&quot;, # The number of generated predictions.
2106 },
2107 &quot;trainingOutput&quot;: { # Represents results of a training job. Output only. # The current training job result.
2108 &quot;trials&quot;: [ # Results for individual Hyperparameter trials.
2109 # Only set for hyperparameter tuning jobs.
2110 { # Represents the result of a single hyperparameter tuning trial from a
2111 # training job. The TrainingOutput object that is returned on successful
2112 # completion of a training job with hyperparameter tuning includes a list
2113 # of HyperparameterOutput objects, one for each successful trial.
2114 &quot;allMetrics&quot;: [ # All recorded objective metrics for this trial. This field is not currently
2115 # populated.
2116 { # An observed value of a metric.
2117 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
2118 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
2119 },
2120 ],
2121 &quot;hyperparameters&quot;: { # The hyperparameters given to this trial.
2122 &quot;a_key&quot;: &quot;A String&quot;,
2123 },
2124 &quot;trialId&quot;: &quot;A String&quot;, # The trial id for these results.
2125 &quot;endTime&quot;: &quot;A String&quot;, # Output only. End time for the trial.
2126 &quot;isTrialStoppedEarly&quot;: True or False, # True if the trial is stopped early.
2127 &quot;startTime&quot;: &quot;A String&quot;, # Output only. Start time for the trial.
2128 &quot;finalMetric&quot;: { # An observed value of a metric. # The final objective metric seen for this trial.
2129 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
2130 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
2131 },
2132 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
2133 # Only set for trials of built-in algorithms jobs that have succeeded.
2134 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
2135 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
2136 # saves the trained model. Only set for successful jobs that don&#x27;t use
2137 # hyperparameter tuning.
2138 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
2139 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
2140 # trained.
2141 },
2142 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of the trial.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002143 },
2144 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07002145 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # The TensorFlow summary tag name used for optimizing hyperparameter tuning
2146 # trials. See
2147 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
2148 # for more information. Only set for hyperparameter tuning jobs.
2149 &quot;completedTrialCount&quot;: &quot;A String&quot;, # The number of hyperparameter tuning trials that completed successfully.
2150 # Only set for hyperparameter tuning jobs.
2151 &quot;isHyperparameterTuningJob&quot;: True or False, # Whether this job is a hyperparameter tuning job.
2152 &quot;consumedMLUnits&quot;: 3.14, # The amount of ML units consumed by the job.
2153 &quot;isBuiltInAlgorithmJob&quot;: True or False, # Whether this job is a built-in Algorithm job.
2154 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
2155 # Only set for built-in algorithms jobs.
2156 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
2157 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
2158 # saves the trained model. Only set for successful jobs that don&#x27;t use
2159 # hyperparameter tuning.
2160 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
2161 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
2162 # trained.
Dan O'Mearadd494642020-05-01 07:42:23 -07002163 },
Dan O'Mearadd494642020-05-01 07:42:23 -07002164 },
Bu Sun Kim65020912020-05-20 12:08:20 -07002165 &quot;createTime&quot;: &quot;A String&quot;, # Output only. When the job was created.
2166 &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your jobs.
2167 # Each label is a key-value pair, where both the key and the value are
2168 # arbitrary strings that you supply.
2169 # For more information, see the documentation on
2170 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
2171 &quot;a_key&quot;: &quot;A String&quot;,
2172 },
2173 &quot;predictionInput&quot;: { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
2174 &quot;outputPath&quot;: &quot;A String&quot;, # Required. The output Google Cloud Storage location.
2175 &quot;outputDataFormat&quot;: &quot;A String&quot;, # Optional. Format of the output data files, defaults to JSON.
2176 &quot;dataFormat&quot;: &quot;A String&quot;, # Required. The format of the input data files.
2177 &quot;batchSize&quot;: &quot;A String&quot;, # Optional. Number of records per batch, defaults to 64.
2178 # The service will buffer batch_size number of records in memory before
2179 # invoking one Tensorflow prediction call internally. So take the record
2180 # size and memory available into consideration when setting this parameter.
2181 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for this batch
2182 # prediction. If not set, AI Platform will pick the runtime version used
2183 # during the CreateVersion request for this model version, or choose the
2184 # latest stable version when model version information is not available
2185 # such as when the model is specified by uri.
2186 &quot;inputPaths&quot;: [ # Required. The Cloud Storage location of the input data files. May contain
2187 # &lt;a href=&quot;/storage/docs/gsutil/addlhelp/WildcardNames&quot;&gt;wildcards&lt;/a&gt;.
2188 &quot;A String&quot;,
2189 ],
2190 &quot;region&quot;: &quot;A String&quot;, # Required. The Google Compute Engine region to run the prediction job in.
2191 # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
2192 # for AI Platform services.
2193 &quot;versionName&quot;: &quot;A String&quot;, # Use this field if you want to specify a version of the model to use. The
2194 # string is formatted the same way as `model_version`, with the addition
2195 # of the version information:
2196 #
2197 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION&quot;`
2198 &quot;modelName&quot;: &quot;A String&quot;, # Use this field if you want to use the default version for the specified
2199 # model. The string must use the following format:
2200 #
2201 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL&quot;`
2202 &quot;uri&quot;: &quot;A String&quot;, # Use this field if you want to specify a Google Cloud Storage path for
2203 # the model to use.
2204 &quot;maxWorkerCount&quot;: &quot;A String&quot;, # Optional. The maximum number of workers to be used for parallel processing.
2205 # Defaults to 10 if not specified.
2206 &quot;signatureName&quot;: &quot;A String&quot;, # Optional. The name of the signature defined in the SavedModel to use for
2207 # this job. Please refer to
2208 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
2209 # for information about how to use signatures.
2210 #
2211 # Defaults to
2212 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
2213 # , which is &quot;serving_default&quot;.
2214 },
2215 &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
2216 }</pre>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002217</div>
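The `trainingOutput` object documented above arrives as a plain dictionary once a job has been fetched (for example via `ml.projects().jobs().get(name=...).execute()` with this client). The sketch below uses an illustrative, hand-written response rather than a live API call; it picks the tuning trial with the largest final objective value. Note that whether a larger objective is better depends on the goal set in the job's HyperparameterSpec, so treat the comparison direction as an assumption.

```python
# Sketch: summarize hyperparameter tuning trials from a job resource.
# The sample dict below is illustrative; real values come from the API.

def best_trial(job):
    """Return the trial dict with the largest final objective value,
    or None if the job has no recorded trials."""
    output = job.get("trainingOutput", {})
    trials = output.get("trials", [])
    if not trials:
        return None
    # Assumes the tuning goal is MAXIMIZE; flip to min() for MINIMIZE.
    return max(trials, key=lambda t: t["finalMetric"]["objectiveValue"])

sample_job = {
    "jobId": "census_hp_tuning",  # hypothetical job id
    "trainingOutput": {
        "isHyperparameterTuningJob": True,
        "completedTrialCount": "2",
        "trials": [
            {"trialId": "1",
             "hyperparameters": {"learning_rate": "0.1"},
             "finalMetric": {"trainingStep": "1000", "objectiveValue": 0.82}},
            {"trialId": "2",
             "hyperparameters": {"learning_rate": "0.01"},
             "finalMetric": {"trainingStep": "1000", "objectiveValue": 0.91}},
        ],
    },
}

print(best_trial(sample_job)["trialId"])  # prints: 2
```

The helper only reads documented fields (`trials`, `finalMetric`, `objectiveValue`), so it works unchanged on a real `get` response for a hyperparameter tuning job.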

<div class="method">
    <code class="details" id="getIamPolicy">getIamPolicy(resource, options_requestedPolicyVersion=None, x__xgafv=None)</code>
  <pre>Gets the access control policy for a resource.
Returns an empty policy if the resource exists and does not have a policy
set.

Args:
  resource: string, REQUIRED: The resource for which the policy is being requested.
See the operation documentation for the appropriate value for this field. (required)
  options_requestedPolicyVersion: integer, Optional. The policy format version to be returned.

Valid values are 0, 1, and 3. Requests specifying an invalid value will be
rejected.

Requests for policies with any conditional bindings must specify version 3.
Policies without any conditional bindings may specify any valid value or
leave the field unset.

To learn which resources support conditions in their IAM policies, see the
[IAM
documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Identity and Access Management (IAM) policy, which specifies access
        # controls for Google Cloud resources.
        #
        #
        # A `Policy` is a collection of `bindings`. A `binding` binds one or more
        # `members` to a single `role`. Members can be user accounts, service accounts,
        # Google groups, and domains (such as G Suite). A `role` is a named list of
        # permissions; each `role` can be an IAM predefined role or a user-created
        # custom role.
        #
        # For some types of Google Cloud resources, a `binding` can also specify a
        # `condition`, which is a logical expression that allows access to a resource
        # only if the expression evaluates to `true`. A condition can add constraints
        # based on attributes of the request, the resource, or both. To learn which
        # resources support conditions in their IAM policies, see the
        # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
        #
        # **JSON example:**
        #
        #     {
        #       &quot;bindings&quot;: [
        #         {
        #           &quot;role&quot;: &quot;roles/resourcemanager.organizationAdmin&quot;,
        #           &quot;members&quot;: [
        #             &quot;user:mike@example.com&quot;,
        #             &quot;group:admins@example.com&quot;,
        #             &quot;domain:google.com&quot;,
        #             &quot;serviceAccount:my-project-id@appspot.gserviceaccount.com&quot;
        #           ]
        #         },
        #         {
        #           &quot;role&quot;: &quot;roles/resourcemanager.organizationViewer&quot;,
        #           &quot;members&quot;: [
        #             &quot;user:eve@example.com&quot;
        #           ],
        #           &quot;condition&quot;: {
        #             &quot;title&quot;: &quot;expirable access&quot;,
        #             &quot;description&quot;: &quot;Does not grant access after Sep 2020&quot;,
        #             &quot;expression&quot;: &quot;request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)&quot;,
        #           }
        #         }
        #       ],
        #       &quot;etag&quot;: &quot;BwWWja0YfJA=&quot;,
        #       &quot;version&quot;: 3
        #     }
        #
        # **YAML example:**
        #
        #     bindings:
        #     - members:
        #       - user:mike@example.com
        #       - group:admins@example.com
        #       - domain:google.com
        #       - serviceAccount:my-project-id@appspot.gserviceaccount.com
        #       role: roles/resourcemanager.organizationAdmin
        #     - members:
        #       - user:eve@example.com
        #       role: roles/resourcemanager.organizationViewer
        #       condition:
        #         title: expirable access
        #         description: Does not grant access after Sep 2020
        #         expression: request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)
        #     - etag: BwWWja0YfJA=
        #     - version: 3
        #
        # For a description of IAM and its features, see the
        # [IAM documentation](https://cloud.google.com/iam/docs/).
      &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
          # prevent simultaneous updates of a policy from overwriting each other.
          # It is strongly suggested that systems make use of the `etag` in the
          # read-modify-write cycle to perform policy updates in order to avoid race
          # conditions: An `etag` is returned in the response to `getIamPolicy`, and
          # systems are expected to put that etag in the request to `setIamPolicy` to
          # ensure that their change will be applied to the same version of the policy.
          #
          # **Important:** If you use IAM Conditions, you must include the `etag` field
          # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
          # you to overwrite a version `3` policy with a version `1` policy, and all of
          # the conditions in the version `3` policy are lost.
      &quot;version&quot;: 42, # Specifies the format of the policy.
          #
          # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
          # are rejected.
          #
          # Any operation that affects conditional role bindings must specify version
          # `3`. This requirement applies to the following operations:
          #
          # * Getting a policy that includes a conditional role binding
          # * Adding a conditional role binding to a policy
          # * Changing a conditional role binding in a policy
          # * Removing any role binding, with or without a condition, from a policy
          #   that includes conditions
          #
          # **Important:** If you use IAM Conditions, you must include the `etag` field
          # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
          # you to overwrite a version `3` policy with a version `1` policy, and all of
          # the conditions in the version `3` policy are lost.
          #
          # If a policy does not include any conditions, operations on that policy may
          # specify any valid version or leave the field unset.
          #
          # To learn which resources support conditions in their IAM policies, see the
          # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
      &quot;auditConfigs&quot;: [ # Specifies cloud audit logging configuration for this policy.
        { # Specifies the audit configuration for a service.
            # The configuration determines which permission types are logged, and what
            # identities, if any, are exempted from logging.
            # An AuditConfig must have one or more AuditLogConfigs.
            #
            # If there are AuditConfigs for both `allServices` and a specific service,
            # the union of the two AuditConfigs is used for that service: the log_types
            # specified in each AuditConfig are enabled, and the exempted_members in each
            # AuditLogConfig are exempted.
            #
            # Example Policy with multiple AuditConfigs:
            #
            #     {
            #       &quot;audit_configs&quot;: [
            #         {
            #           &quot;service&quot;: &quot;allServices&quot;
            #           &quot;audit_log_configs&quot;: [
            #             {
            #               &quot;log_type&quot;: &quot;DATA_READ&quot;,
            #               &quot;exempted_members&quot;: [
            #                 &quot;user:jose@example.com&quot;
            #               ]
            #             },
            #             {
            #               &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
            #             },
            #             {
            #               &quot;log_type&quot;: &quot;ADMIN_READ&quot;,
            #             }
            #           ]
            #         },
            #         {
            #           &quot;service&quot;: &quot;sampleservice.googleapis.com&quot;
            #           &quot;audit_log_configs&quot;: [
            #             {
            #               &quot;log_type&quot;: &quot;DATA_READ&quot;,
            #             },
            #             {
            #               &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
            #               &quot;exempted_members&quot;: [
            #                 &quot;user:aliya@example.com&quot;
            #               ]
            #             }
            #           ]
            #         }
            #       ]
            #     }
            #
            # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
            # logging. It also exempts jose@example.com from DATA_READ logging, and
            # aliya@example.com from DATA_WRITE logging.
          &quot;service&quot;: &quot;A String&quot;, # Specifies a service that will be enabled for audit logging.
              # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
              # `allServices` is a special value that covers all services.
          &quot;auditLogConfigs&quot;: [ # The configuration for logging of each type of permission.
            { # Provides the configuration for logging a type of permissions.
                # Example:
                #
                #     {
                #       &quot;audit_log_configs&quot;: [
                #         {
                #           &quot;log_type&quot;: &quot;DATA_READ&quot;,
                #           &quot;exempted_members&quot;: [
                #             &quot;user:jose@example.com&quot;
                #           ]
                #         },
                #         {
                #           &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
                #         }
                #       ]
                #     }
                #
                # This enables &#x27;DATA_READ&#x27; and &#x27;DATA_WRITE&#x27; logging, while exempting
                # jose@example.com from DATA_READ logging.
              &quot;exemptedMembers&quot;: [ # Specifies the identities that do not cause logging for this type of
                  # permission.
                  # Follows the same format of Binding.members.
                &quot;A String&quot;,
              ],
              &quot;logType&quot;: &quot;A String&quot;, # The log type that this config enables.
            },
          ],
        },
      ],
      &quot;bindings&quot;: [ # Associates a list of `members` to a `role`. Optionally, may specify a
          # `condition` that determines how and when the `bindings` are applied. Each
          # of the `bindings` must contain at least one member.
        { # Associates `members` with a `role`.
          &quot;condition&quot;: { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
              #
              # If the condition evaluates to `true`, then this binding applies to the
              # current request.
              #
              # If the condition evaluates to `false`, then this binding does not apply to
              # the current request. However, a different role binding might grant the same
              # role to one or more of the members in this binding.
              #
              # To learn which resources support conditions in their IAM policies, see the
              # [IAM
              # documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
              # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
              # are documented at https://github.com/google/cel-spec.
              #
              # Example (Comparison):
              #
              #     title: &quot;Summary size limit&quot;
              #     description: &quot;Determines if a summary is less than 100 chars&quot;
              #     expression: &quot;document.summary.size() &lt; 100&quot;
              #
              # Example (Equality):
              #
              #     title: &quot;Requestor is owner&quot;
              #     description: &quot;Determines if requestor is the document owner&quot;
              #     expression: &quot;document.owner == request.auth.claims.email&quot;
              #
              # Example (Logic):
              #
              #     title: &quot;Public documents&quot;
              #     description: &quot;Determine whether the document should be publicly visible&quot;
              #     expression: &quot;document.type != &#x27;private&#x27; &amp;&amp; document.type != &#x27;internal&#x27;&quot;
              #
              # Example (Data Manipulation):
              #
              #     title: &quot;Notification string&quot;
              #     description: &quot;Create a notification string with a timestamp.&quot;
              #     expression: &quot;&#x27;New message received at &#x27; + string(document.create_time)&quot;
              #
              # The exact variables and functions that may be referenced within an expression
              # are determined by the service that evaluates it. See the service
              # documentation for additional information.
            &quot;title&quot;: &quot;A String&quot;, # Optional. Title for the expression, i.e. a short string describing
                # its purpose. This can be used e.g. in UIs which allow to enter the
                # expression.
            &quot;location&quot;: &quot;A String&quot;, # Optional. String indicating the location of the expression for error
                # reporting, e.g. a file name and a position in the file.
            &quot;description&quot;: &quot;A String&quot;, # Optional. Description of the expression. This is a longer text which
                # describes the expression, e.g. when hovered over it in a UI.
            &quot;expression&quot;: &quot;A String&quot;, # Textual representation of an expression in Common Expression Language
                # syntax.
          },
          &quot;members&quot;: [ # Specifies the identities requesting access for a Cloud Platform resource.
              # `members` can have the following values:
              #
              # * `allUsers`: A special identifier that represents anyone who is
              #    on the internet; with or without a Google account.
              #
              # * `allAuthenticatedUsers`: A special identifier that represents anyone
              #    who is authenticated with a Google account or a service account.
              #
              # * `user:{emailid}`: An email address that represents a specific Google
              #    account. For example, `alice@example.com` .
              #
              #
              # * `serviceAccount:{emailid}`: An email address that represents a service
              #    account. For example, `my-other-app@appspot.gserviceaccount.com`.
              #
              # * `group:{emailid}`: An email address that represents a Google group.
              #    For example, `admins@example.com`.
              #
              # * `deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique
              #    identifier) representing a user that has been recently deleted. For
              #    example, `alice@example.com?uid=123456789012345678901`. If the user is
              #    recovered, this value reverts to `user:{emailid}` and the recovered user
              #    retains the role in the binding.
              #
              # * `deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address (plus
              #    unique identifier) representing a service account that has been recently
              #    deleted. For example,
              #    `my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901`.
              #    If the service account is undeleted, this value reverts to
              #    `serviceAccount:{emailid}` and the undeleted service account retains the
              #    role in the binding.
              #
              # * `deleted:group:{emailid}?uid={uniqueid}`: An email address (plus unique
              #    identifier) representing a Google group that has been recently
              #    deleted. For example, `admins@example.com?uid=123456789012345678901`. If
              #    the group is recovered, this value reverts to `group:{emailid}` and the
              #    recovered group retains the role in the binding.
              #
              #
              # * `domain:{domain}`: The G Suite domain (primary) that represents all the
              #    users of that domain. For example, `google.com` or `example.com`.
              #
            &quot;A String&quot;,
          ],
          &quot;role&quot;: &quot;A String&quot;, # Role that is assigned to `members`.
              # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
        },
      ],
    }</pre>
</div>
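A `Policy` returned by `getIamPolicy` is an ordinary dictionary shaped like the schema above; real code would obtain it from `ml.projects().jobs().getIamPolicy(resource=...).execute()` with this client. The sketch below works on an illustrative, hand-written policy rather than a live response, and gathers the members bound to a role across both conditional and unconditional bindings (the member list itself does not reveal whether a binding is conditional, so check each binding's `condition` separately if that matters).

```python
# Sketch: collect the members granted a role in a Policy dict.
# The sample policy mirrors the JSON example in the schema above.

def members_for_role(policy, role):
    """Return all members bound to `role`, whether or not the
    binding carries a condition."""
    members = []
    for binding in policy.get("bindings", []):
        if binding.get("role") == role:
            members.extend(binding.get("members", []))
    return members

policy = {
    "version": 3,
    "etag": "BwWWja0YfJA=",
    "bindings": [
        {"role": "roles/resourcemanager.organizationAdmin",
         "members": ["user:mike@example.com", "group:admins@example.com"]},
        {"role": "roles/resourcemanager.organizationViewer",
         "members": ["user:eve@example.com"],
         # Conditional binding; only version 3 policies carry conditions.
         "condition": {"title": "expirable access",
                       "description": "Does not grant access after Sep 2020"}},
    ],
}

print(members_for_role(policy, "roles/resourcemanager.organizationViewer"))
# prints: ['user:eve@example.com']
```

Because the viewer binding is conditional, a round-trip through `setIamPolicy` must include the returned `etag` and request `options_requestedPolicyVersion=3`, as the field documentation above warns.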
2543
2544<div class="method">
Bu Sun Kim65020912020-05-20 12:08:20 -07002545 <code class="details" id="list">list(parent, pageToken=None, pageSize=None, filter=None, x__xgafv=None)</code>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002546 <pre>Lists the jobs in the project.
2547
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002548If there are no jobs that match the request parameters, the list
2549request returns an empty response body: {}.
2550
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002551Args:
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002552 parent: string, Required. The name of the project for which to list jobs. (required)
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002553 pageToken: string, Optional. A page token to request the next page of results.
2554
2555You get the token from the `next_page_token` field of the response from
2556the previous call.
Bu Sun Kim65020912020-05-20 12:08:20 -07002557 pageSize: integer, Optional. The number of jobs to retrieve per &quot;page&quot; of results. If there
2558are more remaining results than this number, the response message will
2559contain a valid value in the `next_page_token` field.
2560
2561The default value is 20, and the maximum page size is 100.
2562 filter: string, Optional. Specifies the subset of jobs to retrieve.
2563You can filter on the value of one or more attributes of the job object.
2564For example, retrieve jobs with a job identifier that starts with &#x27;census&#x27;:
2565&lt;p&gt;&lt;code&gt;gcloud ai-platform jobs list --filter=&#x27;jobId:census*&#x27;&lt;/code&gt;
2566&lt;p&gt;List all failed jobs with names that start with &#x27;rnn&#x27;:
2567&lt;p&gt;&lt;code&gt;gcloud ai-platform jobs list --filter=&#x27;jobId:rnn*
2568AND state:FAILED&#x27;&lt;/code&gt;
2569&lt;p&gt;For more examples, see the guide to
2570&lt;a href=&quot;/ml-engine/docs/tensorflow/monitor-training&quot;&gt;monitoring jobs&lt;/a&gt;.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002571 x__xgafv: string, V1 error format.
2572 Allowed values
2573 1 - v1 error format
2574 2 - v2 error format
2575
2576Returns:
2577 An object of the form:
2578
2579 { # Response message for the ListJobs method.
Bu Sun Kim65020912020-05-20 12:08:20 -07002580 &quot;jobs&quot;: [ # The list of jobs.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002581 { # Represents a training or prediction job.
Bu Sun Kim65020912020-05-20 12:08:20 -07002582 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
2583 # prevent simultaneous updates of a job from overwriting each other.
2584 # It is strongly suggested that systems make use of the `etag` in the
2585 # read-modify-write cycle to perform job updates in order to avoid race
2586 # conditions: An `etag` is returned in the response to `GetJob`, and
2587 # systems are expected to put that etag in the request to `UpdateJob` to
2588 # ensure that their change will be applied to the same version of the job.
2589 &quot;trainingInput&quot;: { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
2590 # to submit your training job, you can specify the input parameters as
2591 # command-line arguments and/or in a YAML configuration file referenced from
2592 # the --config command-line argument. For details, see the guide to [submitting
2593 # a training job](/ai-platform/training/docs/training-jobs).
2594 &quot;parameterServerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
2595 #
2596 # You should only set `parameterServerConfig.acceleratorConfig` if
2597 # `parameterServerType` is set to a Compute Engine machine type. [Learn
2598 # about restrictions on accelerator configurations for
2599 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2600 #
2601 # Set `parameterServerConfig.imageUri` only if you build a custom image for
2602 # your parameter server. If `parameterServerConfig.imageUri` has not been
2603 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
2604 # containers](/ai-platform/training/docs/distributed-training-containers).
2605 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
2606 # the one used in the custom container. This field is required if the replica
2607 # is a TPU worker that uses a custom container. Otherwise, do not specify
2608 # this field. This must be a [runtime version that currently supports
2609 # training with
2610 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
2611 #
2612 # Note that the version of TensorFlow included in a runtime version may
2613 # differ from the numbering of the runtime version itself, because it may
2614 # have a different [patch
2615 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
2616 # In this field, you must specify the runtime version (TensorFlow minor
2617 # version). For example, if your custom container runs TensorFlow `1.x.y`,
2618 # specify `1.x`.
2619 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
2620 # If provided, it will override default ENTRYPOINT of the docker image.
2621 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
2622 # It cannot be set if custom container image is
2623 # not provided.
2624 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
2625 # both cannot be set at the same time.
2626 &quot;A String&quot;,
2627 ],
2628 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
2629 # Registry. Learn more about [configuring custom
2630 # containers](/ai-platform/training/docs/distributed-training-containers).
2631 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
2632 # The following rules apply for container_command and container_args:
2633 # - If you do not supply command or args:
2634 # The defaults defined in the Docker image are used.
2635 # - If you supply a command but no args:
2636 # The default EntryPoint and the default Cmd defined in the Docker image
2637 # are ignored. Your command is run without any arguments.
2638 # - If you supply only args:
2639 # The default Entrypoint defined in the Docker image is run with the args
2640 # that you supplied.
2641 # - If you supply a command and args:
2642 # The default Entrypoint and the default Cmd defined in the Docker image
2643 # are ignored. Your command is run with your args.
2644 # It cannot be set if custom container image is
2645 # not provided.
2646 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
2647 # both cannot be set at the same time.
2648 &quot;A String&quot;,
2649 ],
2650 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
2651 # [Learn about restrictions on accelerator configurations for
2652 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2653 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
2654 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
2655 # [accelerators for online
2656 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
2657 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
2658 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002659 },
Bu Sun Kim65020912020-05-20 12:08:20 -07002660 },
2661 &quot;encryptionConfig&quot;: { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
2662 # protect resources created by a training job, instead of using Google&#x27;s
2663 # default encryption. If this is set, then all resources created by the
2664 # training job will be encrypted with the customer-managed encryption key
2665 # that you specify.
2666 #
2667 # [Learn how and when to use CMEK with AI Platform
2668 # Training](/ai-platform/training/docs/cmek).
2669 # a resource.
2670 &quot;kmsKeyName&quot;: &quot;A String&quot;, # The Cloud KMS resource identifier of the customer-managed encryption key
2671 # used to protect a resource, such as a training job. It has the following
2672 # format:
2673 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
2674 },
      &quot;hyperparameters&quot;: { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
        &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # Optional. The TensorFlow summary tag name to use for optimizing trials. For
            # current versions of TensorFlow, this tag name should exactly match what is
            # shown in TensorBoard, including all scopes. For versions of TensorFlow
            # prior to 0.12, this should be only the tag passed to tf.Summary.
            # By default, &quot;training/hptuning/metric&quot; will be used.
        &quot;params&quot;: [ # Required. The set of parameters to tune.
          { # Represents a single hyperparameter to optimize.
            &quot;type&quot;: &quot;A String&quot;, # Required. The type of the parameter.
            &quot;categoricalValues&quot;: [ # Required if type is `CATEGORICAL`. The list of possible categories.
              &quot;A String&quot;,
            ],
            &quot;parameterName&quot;: &quot;A String&quot;, # Required. The parameter name must be unique amongst all ParameterConfigs in
                # a HyperparameterSpec message. E.g., &quot;learning_rate&quot;.
            &quot;minValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
                # should be unset if type is `CATEGORICAL`. This value should be an integer if
                # type is `INTEGER`.
            &quot;discreteValues&quot;: [ # Required if type is `DISCRETE`.
                # A list of feasible points.
                # The list should be in strictly increasing order. For instance, this
                # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
                # should not contain more than 1,000 values.
              3.14,
            ],
            &quot;scaleType&quot;: &quot;A String&quot;, # Optional. How the parameter should be scaled to the hypercube.
                # Leave unset for categorical parameters.
                # Some kind of scaling is strongly recommended for real or integral
                # parameters (e.g., `UNIT_LINEAR_SCALE`).
            &quot;maxValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
                # should be unset if type is `CATEGORICAL`. This value should be an integer if
                # type is `INTEGER`.
          },
        ],
        &quot;enableTrialEarlyStopping&quot;: True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
            # early stopping.
        &quot;resumePreviousJobId&quot;: &quot;A String&quot;, # Optional. The prior hyperparameter tuning job id that users hope to
            # continue with. The job id will be used to find the corresponding vizier
            # study guid and resume the study.
        &quot;maxParallelTrials&quot;: 42, # Optional. The number of training trials to run concurrently.
            # You can reduce the time it takes to perform hyperparameter tuning by adding
            # trials in parallel. However, each trial only benefits from the information
            # gained in completed trials. That means that a trial does not get access to
            # the results of trials running at the same time, which could reduce the
            # quality of the overall optimization.
            #
            # Each trial will use the same scale tier and machine types.
            #
            # Defaults to one.
        &quot;maxFailedTrials&quot;: 42, # Optional. The number of failed trials that need to be seen before failing
            # the hyperparameter tuning job. You can specify this field to override the
            # default failing criteria for AI Platform hyperparameter tuning jobs.
            #
            # Defaults to zero, which means the service decides when a hyperparameter
            # job should fail.
        &quot;goal&quot;: &quot;A String&quot;, # Required. The type of goal to use for tuning. Available types are
            # `MAXIMIZE` and `MINIMIZE`.
            #
            # Defaults to `MAXIMIZE`.
        &quot;maxTrials&quot;: 42, # Optional. How many training trials should be attempted to optimize
            # the specified hyperparameters.
            #
            # Defaults to one.
        &quot;algorithm&quot;: &quot;A String&quot;, # Optional. The search algorithm specified for the hyperparameter
            # tuning job.
            # Uses the default AI Platform hyperparameter tuning
            # algorithm if unspecified.
      },
      &quot;workerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
          #
          # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
          # to a Compute Engine machine type. [Learn about restrictions on accelerator
          # configurations for
          # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
          #
          # Set `workerConfig.imageUri` only if you build a custom image for your
          # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
          # the value of `masterConfig.imageUri`. Learn more about [configuring custom
          # containers](/ai-platform/training/docs/distributed-training-containers).
        &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
            # the one used in the custom container. This field is required if the replica
            # is a TPU worker that uses a custom container. Otherwise, do not specify
            # this field. This must be a [runtime version that currently supports
            # training with
            # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
            #
            # Note that the version of TensorFlow included in a runtime version may
            # differ from the numbering of the runtime version itself, because it may
            # have a different [patch
            # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
            # In this field, you must specify the runtime version (TensorFlow minor
            # version). For example, if your custom container runs TensorFlow `1.x.y`,
            # specify `1.x`.
        &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
            # If provided, it will override the default ENTRYPOINT of the Docker image.
            # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
            # It cannot be set if a custom container image is
            # not provided.
            # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
            # both cannot be set at the same time.
          &quot;A String&quot;,
        ],
        &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
            # Registry. Learn more about [configuring custom
            # containers](/ai-platform/training/docs/distributed-training-containers).
        &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
            # The following rules apply for container_command and container_args:
            # - If you do not supply command or args:
            #   The defaults defined in the Docker image are used.
            # - If you supply a command but no args:
            #   The default Entrypoint and the default Cmd defined in the Docker image
            #   are ignored. Your command is run without any arguments.
            # - If you supply only args:
            #   The default Entrypoint defined in the Docker image is run with the args
            #   that you supplied.
            # - If you supply a command and args:
            #   The default Entrypoint and the default Cmd defined in the Docker image
            #   are ignored. Your command is run with your args.
            # It cannot be set if a custom container image is
            # not provided.
            # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
            # both cannot be set at the same time.
          &quot;A String&quot;,
        ],
        &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
            # [Learn about restrictions on accelerator configurations for
            # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
            # Note that the AcceleratorConfig can be used in both Jobs and Versions.
            # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
            # [accelerators for online
            # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
          &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
          &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
        },
      },
      &quot;parameterServerCount&quot;: &quot;A String&quot;, # Optional. The number of parameter server replicas to use for the training
          # job. Each replica in the cluster will be of the type specified in
          # `parameter_server_type`.
          #
          # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
          # set this value, you must also set `parameter_server_type`.
          #
          # The default value is zero.
      &quot;packageUris&quot;: [ # Required. The Google Cloud Storage location of the packages with
          # the training program and any additional dependencies.
          # The maximum number of package URIs is 100.
        &quot;A String&quot;,
      ],
      &quot;evaluatorCount&quot;: &quot;A String&quot;, # Optional. The number of evaluator replicas to use for the training job.
          # Each replica in the cluster will be of the type specified in
          # `evaluator_type`.
          #
          # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
          # set this value, you must also set `evaluator_type`.
          #
          # The default value is zero.
      &quot;masterType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
          # job&#x27;s master worker. You must specify this field when `scaleTier` is set to
          # `CUSTOM`.
          #
          # You can use certain Compute Engine machine types directly in this field.
          # The following types are supported:
          #
          # - `n1-standard-4`
          # - `n1-standard-8`
          # - `n1-standard-16`
          # - `n1-standard-32`
          # - `n1-standard-64`
          # - `n1-standard-96`
          # - `n1-highmem-2`
          # - `n1-highmem-4`
          # - `n1-highmem-8`
          # - `n1-highmem-16`
          # - `n1-highmem-32`
          # - `n1-highmem-64`
          # - `n1-highmem-96`
          # - `n1-highcpu-16`
          # - `n1-highcpu-32`
          # - `n1-highcpu-64`
          # - `n1-highcpu-96`
          #
          # Learn more about [using Compute Engine machine
          # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
          #
          # Alternatively, you can use the following legacy machine types:
          #
          # - `standard`
          # - `large_model`
          # - `complex_model_s`
          # - `complex_model_m`
          # - `complex_model_l`
          # - `standard_gpu`
          # - `complex_model_m_gpu`
          # - `complex_model_l_gpu`
          # - `standard_p100`
          # - `complex_model_m_p100`
          # - `standard_v100`
          # - `large_model_v100`
          # - `complex_model_m_v100`
          # - `complex_model_l_v100`
          #
          # Learn more about [using legacy machine
          # types](/ml-engine/docs/machine-types#legacy-machine-types).
          #
          # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
          # field. Learn more about the [special configuration options for training
          # with
          # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
      &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for training. You must
          # either specify this field or specify `masterConfig.imageUri`.
          #
          # For more information, see the [runtime version
          # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
          # manage runtime versions](/ai-platform/training/docs/versioning).
      &quot;evaluatorType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
          # job&#x27;s evaluator nodes.
          #
          # The supported values are the same as those described in the entry for
          # `masterType`.
          #
          # This value must be consistent with the category of machine type that
          # `masterType` uses. In other words, both must be Compute Engine machine
          # types or both must be legacy machine types.
          #
          # This value must be present when `scaleTier` is set to `CUSTOM` and
          # `evaluatorCount` is greater than zero.
      &quot;region&quot;: &quot;A String&quot;, # Required. The region to run the training job in. See the [available
          # regions](/ai-platform/training/docs/regions) for AI Platform Training.
      &quot;workerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
          # job&#x27;s worker nodes.
          #
          # The supported values are the same as those described in the entry for
          # `masterType`.
          #
          # This value must be consistent with the category of machine type that
          # `masterType` uses. In other words, both must be Compute Engine machine
          # types or both must be legacy machine types.
          #
          # If you use `cloud_tpu` for this value, see special instructions for
          # [configuring a custom TPU
          # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
          #
          # This value must be present when `scaleTier` is set to `CUSTOM` and
          # `workerCount` is greater than zero.
      &quot;parameterServerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
          # job&#x27;s parameter server.
          #
          # The supported values are the same as those described in the entry for
          # `master_type`.
          #
          # This value must be consistent with the category of machine type that
          # `masterType` uses. In other words, both must be Compute Engine machine
          # types or both must be legacy machine types.
          #
          # This value must be present when `scaleTier` is set to `CUSTOM` and
          # `parameter_server_count` is greater than zero.
      &quot;masterConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
          #
          # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
          # to a Compute Engine machine type. Learn about [restrictions on accelerator
          # configurations for
          # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
          #
          # Set `masterConfig.imageUri` only if you build a custom image. Only one of
          # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
          # about [configuring custom
          # containers](/ai-platform/training/docs/distributed-training-containers).
        &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
            # the one used in the custom container. This field is required if the replica
            # is a TPU worker that uses a custom container. Otherwise, do not specify
            # this field. This must be a [runtime version that currently supports
            # training with
            # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
            #
            # Note that the version of TensorFlow included in a runtime version may
            # differ from the numbering of the runtime version itself, because it may
            # have a different [patch
            # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
            # In this field, you must specify the runtime version (TensorFlow minor
            # version). For example, if your custom container runs TensorFlow `1.x.y`,
            # specify `1.x`.
        &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
            # If provided, it will override the default ENTRYPOINT of the Docker image.
            # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
            # It cannot be set if a custom container image is
            # not provided.
            # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
            # both cannot be set at the same time.
          &quot;A String&quot;,
        ],
        &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
            # Registry. Learn more about [configuring custom
            # containers](/ai-platform/training/docs/distributed-training-containers).
        &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
            # The following rules apply for container_command and container_args:
            # - If you do not supply command or args:
            #   The defaults defined in the Docker image are used.
            # - If you supply a command but no args:
            #   The default Entrypoint and the default Cmd defined in the Docker image
            #   are ignored. Your command is run without any arguments.
            # - If you supply only args:
            #   The default Entrypoint defined in the Docker image is run with the args
            #   that you supplied.
            # - If you supply a command and args:
            #   The default Entrypoint and the default Cmd defined in the Docker image
            #   are ignored. Your command is run with your args.
            # It cannot be set if a custom container image is
            # not provided.
            # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
            # both cannot be set at the same time.
          &quot;A String&quot;,
        ],
        &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
            # [Learn about restrictions on accelerator configurations for
            # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
            # Note that the AcceleratorConfig can be used in both Jobs and Versions.
            # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
            # [accelerators for online
            # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
          &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
          &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
        },
      },
      &quot;scaleTier&quot;: &quot;A String&quot;, # Required. Specifies the machine types, the number of replicas for workers
          # and parameter servers.
      &quot;jobDir&quot;: &quot;A String&quot;, # Optional. A Google Cloud Storage path in which to store training outputs
          # and other data needed for training. This path is passed to your TensorFlow
          # program as the &#x27;--job-dir&#x27; command-line argument. The benefit of specifying
          # this field is that Cloud ML validates the path for use in training.
      &quot;pythonVersion&quot;: &quot;A String&quot;, # Optional. The version of Python used in training. You must either specify
          # this field or specify `masterConfig.imageUri`.
          #
          # The following Python versions are available:
          #
          # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
          #   later.
          # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
          #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
          # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
          #   earlier.
          #
          # Read more about the Python versions available for [each runtime
          # version](/ml-engine/docs/runtime-version-list).
      &quot;network&quot;: &quot;A String&quot;, # Optional. The full name of the Google Compute Engine
          # [network](/compute/docs/networks-and-firewalls#networks) to which the Job
          # is peered. For example, projects/12345/global/networks/myVPC. The format is
          # projects/{project}/global/networks/{network}, where {project} is a
          # project number, as in &#x27;12345&#x27;, and {network} is a network name.
          #
          # Private services access must already be configured for the network. If left
          # unspecified, the Job is not peered with any network. Learn more about
          # connecting a Job to a user network over private
          # IP.
      &quot;scheduling&quot;: { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
        &quot;maxWaitTime&quot;: &quot;A String&quot;,
        &quot;maxRunningTime&quot;: &quot;A String&quot;, # Optional. The maximum job running time, expressed in seconds. The field can
            # contain up to nine fractional digits, terminated by `s`. If not specified,
            # this field defaults to `604800s` (seven days).
            #
            # If the training job is still running after this duration, AI Platform
            # Training cancels it.
            #
            # For example, if you want to ensure your job runs for no more than 2 hours,
            # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
            # minute).
            #
            # If you submit your training job using the `gcloud` tool, you can [provide
            # this field in a `config.yaml`
            # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
            # For example:
            #
            # ```yaml
            # trainingInput:
            #   ...
            #   scheduling:
            #     maxRunningTime: 7200s
            #   ...
            # ```
      },
      &quot;evaluatorConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
          #
          # You should only set `evaluatorConfig.acceleratorConfig` if
          # `evaluatorType` is set to a Compute Engine machine type. [Learn
          # about restrictions on accelerator configurations for
          # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
          #
          # Set `evaluatorConfig.imageUri` only if you build a custom image for
          # your evaluator. If `evaluatorConfig.imageUri` has not been
          # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more
          # about [configuring custom
          # containers](/ai-platform/training/docs/distributed-training-containers).
        &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
            # the one used in the custom container. This field is required if the replica
            # is a TPU worker that uses a custom container. Otherwise, do not specify
            # this field. This must be a [runtime version that currently supports
            # training with
            # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
            #
            # Note that the version of TensorFlow included in a runtime version may
            # differ from the numbering of the runtime version itself, because it may
            # have a different [patch
            # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
            # In this field, you must specify the runtime version (TensorFlow minor
            # version). For example, if your custom container runs TensorFlow `1.x.y`,
            # specify `1.x`.
        &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
            # If provided, it will override the default ENTRYPOINT of the Docker image.
            # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
            # It cannot be set if a custom container image is
            # not provided.
            # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
            # both cannot be set at the same time.
          &quot;A String&quot;,
        ],
        &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
            # Registry. Learn more about [configuring custom
            # containers](/ai-platform/training/docs/distributed-training-containers).
        &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
            # The following rules apply for container_command and container_args:
            # - If you do not supply command or args:
            #   The defaults defined in the Docker image are used.
            # - If you supply a command but no args:
            #   The default Entrypoint and the default Cmd defined in the Docker image
            #   are ignored. Your command is run without any arguments.
            # - If you supply only args:
            #   The default Entrypoint defined in the Docker image is run with the args
            #   that you supplied.
            # - If you supply a command and args:
            #   The default Entrypoint and the default Cmd defined in the Docker image
            #   are ignored. Your command is run with your args.
            # It cannot be set if a custom container image is
            # not provided.
            # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
            # both cannot be set at the same time.
          &quot;A String&quot;,
        ],
        &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
            # [Learn about restrictions on accelerator configurations for
            # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
            # Note that the AcceleratorConfig can be used in both Jobs and Versions.
            # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
            # [accelerators for online
            # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
          &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
          &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
        },
      },
      &quot;useChiefInTfConfig&quot;: True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
          # variable when training with a custom container. Defaults to `false`. [Learn
          # more about this
          # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
          #
          # This field has no effect for training jobs that don&#x27;t use a custom
          # container.
      &quot;workerCount&quot;: &quot;A String&quot;, # Optional. The number of worker replicas to use for the training job. Each
          # replica in the cluster will be of the type specified in `worker_type`.
          #
          # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
          # set this value, you must also set `worker_type`.
          #
          # The default value is zero.
      &quot;pythonModule&quot;: &quot;A String&quot;, # Required. The Python module name to run after installing the packages.
      &quot;args&quot;: [ # Optional. Command-line arguments passed to the training application when it
          # starts. If your job uses a custom container, then the arguments are passed
          # to the container&#x27;s &lt;a class=&quot;external&quot; target=&quot;_blank&quot;
          # href=&quot;https://docs.docker.com/engine/reference/builder/#entrypoint&quot;&gt;
          # `ENTRYPOINT`&lt;/a&gt; command.
        &quot;A String&quot;,
      ],
    },
    &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of a job.
    &quot;jobId&quot;: &quot;A String&quot;, # Required. The user-specified id of the job.
    &quot;endTime&quot;: &quot;A String&quot;, # Output only. When the job processing was completed.
    &quot;startTime&quot;: &quot;A String&quot;, # Output only. When the job processing was started.
    &quot;predictionOutput&quot;: { # Represents results of a prediction job. # The current prediction job result.
      &quot;errorCount&quot;: &quot;A String&quot;, # The number of data instances which resulted in errors.
      &quot;outputPath&quot;: &quot;A String&quot;, # The output Google Cloud Storage location provided at the job creation time.
      &quot;nodeHours&quot;: 3.14, # Node hours used by the batch prediction job.
      &quot;predictionCount&quot;: &quot;A String&quot;, # The number of generated predictions.
    },
    &quot;trainingOutput&quot;: { # Represents results of a training job. Output only. # The current training job result.
      &quot;trials&quot;: [ # Results for individual Hyperparameter trials.
          # Only set for hyperparameter tuning jobs.
        { # Represents the result of a single hyperparameter tuning trial from a
            # training job. The TrainingOutput object that is returned on successful
            # completion of a training job with hyperparameter tuning includes a list
            # of HyperparameterOutput objects, one for each successful trial.
          &quot;allMetrics&quot;: [ # All recorded object metrics for this trial. This field is not currently
              # populated.
            { # An observed value of a metric.
              &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
              &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
            },
          ],
          &quot;hyperparameters&quot;: { # The hyperparameters given to this trial.
            &quot;a_key&quot;: &quot;A String&quot;,
          },
          &quot;trialId&quot;: &quot;A String&quot;, # The trial id for these results.
          &quot;endTime&quot;: &quot;A String&quot;, # Output only. End time for the trial.
          &quot;isTrialStoppedEarly&quot;: True or False, # True if the trial is stopped early.
          &quot;startTime&quot;: &quot;A String&quot;, # Output only. Start time for the trial.
          &quot;finalMetric&quot;: { # An observed value of a metric. # The final objective metric seen for this trial.
            &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
            &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
          },
          &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
              # Only set for trials of built-in algorithms jobs that have succeeded.
            &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
            &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
                # saves the trained model. Only set for successful jobs that don&#x27;t use
                # hyperparameter tuning.
            &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
            &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
                # trained.
          },
          &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of the trial.
        },
      ],
      &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # The TensorFlow summary tag name used for optimizing hyperparameter tuning
          # trials. See
          # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
          # for more information. Only set for hyperparameter tuning jobs.
      &quot;completedTrialCount&quot;: &quot;A String&quot;, # The number of hyperparameter tuning trials that completed successfully.
          # Only set for hyperparameter tuning jobs.
      &quot;isHyperparameterTuningJob&quot;: True or False, # Whether this job is a hyperparameter tuning job.
      &quot;consumedMLUnits&quot;: 3.14, # The amount of ML units consumed by the job.
      &quot;isBuiltInAlgorithmJob&quot;: True or False, # Whether this job is a built-in Algorithm job.
      &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
          # Only set for built-in algorithms jobs.
        &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
        &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
            # saves the trained model. Only set for successful jobs that don&#x27;t use
            # hyperparameter tuning.
        &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
        &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
            # trained.
      },
    },
Bu Sun Kim65020912020-05-20 12:08:20 -07003211 &quot;createTime&quot;: &quot;A String&quot;, # Output only. When the job was created.
3212 &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your jobs.
3213 # Each label is a key-value pair, where both the key and the value are
3214 # arbitrary strings that you supply.
3215 # For more information, see the documentation on
3216 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
3217 &quot;a_key&quot;: &quot;A String&quot;,
3218 },
        &quot;predictionInput&quot;: { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
          &quot;outputPath&quot;: &quot;A String&quot;, # Required. The output Google Cloud Storage location.
          &quot;outputDataFormat&quot;: &quot;A String&quot;, # Optional. Format of the output data files, defaults to JSON.
          &quot;dataFormat&quot;: &quot;A String&quot;, # Required. The format of the input data files.
          &quot;batchSize&quot;: &quot;A String&quot;, # Optional. Number of records per batch, defaults to 64.
              # The service will buffer batch_size number of records in memory before
              # invoking one TensorFlow prediction call internally, so take the record
              # size and available memory into consideration when setting this parameter.
          &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for this batch
              # prediction. If not set, AI Platform will pick the runtime version used
              # during the CreateVersion request for this model version, or choose the
              # latest stable version when model version information is not available,
              # such as when the model is specified by uri.
          &quot;inputPaths&quot;: [ # Required. The Cloud Storage location of the input data files. May contain
              # &lt;a href=&quot;/storage/docs/gsutil/addlhelp/WildcardNames&quot;&gt;wildcards&lt;/a&gt;.
            &quot;A String&quot;,
          ],
          &quot;region&quot;: &quot;A String&quot;, # Required. The Google Compute Engine region to run the prediction job in.
              # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
              # for AI Platform services.
          &quot;versionName&quot;: &quot;A String&quot;, # Use this field if you want to specify a version of the model to use. The
              # string is formatted the same way as `model_version`, with the addition
              # of the version information:
              #
              # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION&quot;`
          &quot;modelName&quot;: &quot;A String&quot;, # Use this field if you want to use the default version for the specified
              # model. The string must use the following format:
              #
              # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL&quot;`
          &quot;uri&quot;: &quot;A String&quot;, # Use this field if you want to specify a Google Cloud Storage path for
              # the model to use.
          &quot;maxWorkerCount&quot;: &quot;A String&quot;, # Optional. The maximum number of workers to be used for parallel processing.
              # Defaults to 10 if not specified.
          &quot;signatureName&quot;: &quot;A String&quot;, # Optional. The name of the signature defined in the SavedModel to use for
              # this job. Please refer to
              # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
              # for information about how to use signatures.
              #
              # Defaults to
              # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
              # , which is &quot;serving_default&quot;.
        },
        &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
      },
    ],
    &quot;nextPageToken&quot;: &quot;A String&quot;, # Optional. Pass this token as the `page_token` field of the request for a
        # subsequent call.
    }</pre>
</div>
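For orientation, the ListJobs response documented above is surfaced by the Python client as a plain dictionary. The sketch below is a hypothetical example using hand-written sample data that mirrors the documented structure; it is not output from a real project, and the job IDs are placeholders.

```python
# Hypothetical sample mirroring the ListJobs response structure documented
# above; this is hand-written data, not real API output.
response = {
    "jobs": [
        {"jobId": "job_1", "state": "SUCCEEDED"},
        {"jobId": "job_2", "state": "FAILED", "errorMessage": "Internal error"},
    ],
    "nextPageToken": "token-abc",  # present only when more pages exist
}

def summarize_jobs(resp):
    """Return (jobId, state) pairs from a ListJobs response dict."""
    return [(job["jobId"], job.get("state", "STATE_UNSPECIFIED"))
            for job in resp.get("jobs", [])]

print(summarize_jobs(response))  # → [('job_1', 'SUCCEEDED'), ('job_2', 'FAILED')]
```

With the real client, `response` would come from executing `ml.projects().jobs().list(parent=...)`; the field access is identical.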

<div class="method">
    <code class="details" id="list_next">list_next(previous_request, previous_response)</code>
  <pre>Retrieves the next page of results.

Args:
  previous_request: The request for the previous page. (required)
  previous_response: The response from the request for the previous page. (required)

Returns:
  A request object that you can call &#x27;execute()&#x27; on to request the next
  page. Returns None if there are no more items in the collection.
    </pre>
</div>
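The `list`/`list_next` pair implements token-based pagination: keep executing requests and chaining them until `list_next` returns `None`. The sketch below models only that loop; `FakeRequest` and `fake_list_next` are illustrative stand-ins, not part of the client library. With the real client you would pass `ml.projects().jobs().list(...)` and `ml.projects().jobs().list_next` instead.

```python
# Sketch of the list/list_next pagination loop. FakeRequest and
# fake_list_next are stand-ins for the real client's request objects;
# only the looping pattern carries over.
class FakeRequest:
    def __init__(self, pages, index=0):
        self.pages = pages    # pre-baked response dicts, one per page
        self.index = index    # which page this request fetches

    def execute(self):
        return self.pages[self.index]

def fake_list_next(previous_request, previous_response):
    # Like list_next: returns None when the response has no nextPageToken.
    if "nextPageToken" not in previous_response:
        return None
    return FakeRequest(previous_request.pages, previous_request.index + 1)

def all_jobs(request, list_next):
    """Drain every page, collecting the jobs from each response."""
    jobs = []
    while request is not None:
        response = request.execute()
        jobs.extend(response.get("jobs", []))
        request = list_next(request, response)
    return jobs

pages = [
    {"jobs": [{"jobId": "a"}], "nextPageToken": "t1"},
    {"jobs": [{"jobId": "b"}]},  # last page: no nextPageToken
]
print([j["jobId"] for j in all_jobs(FakeRequest(pages), fake_list_next)])  # → ['a', 'b']
```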

<div class="method">
    <code class="details" id="patch">patch(name, body=None, updateMask=None, x__xgafv=None)</code>
  <pre>Updates a specific job resource.

Currently the only supported field to update is `labels`.

Args:
  name: string, Required. The job name. (required)
  body: object, The request body.
    The object takes the form of:

{ # Represents a training or prediction job.
  &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
      # prevent simultaneous updates of a job from overwriting each other.
      # It is strongly suggested that systems make use of the `etag` in the
      # read-modify-write cycle to perform job updates in order to avoid race
      # conditions: An `etag` is returned in the response to `GetJob`, and
      # systems are expected to put that etag in the request to `UpdateJob` to
      # ensure that their change will be applied to the same version of the job.
  &quot;trainingInput&quot;: { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
      # to submit your training job, you can specify the input parameters as
      # command-line arguments and/or in a YAML configuration file referenced from
      # the --config command-line argument. For details, see the guide to [submitting
      # a training job](/ai-platform/training/docs/training-jobs).
    &quot;parameterServerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
        #
        # You should only set `parameterServerConfig.acceleratorConfig` if
        # `parameterServerType` is set to a Compute Engine machine type. [Learn
        # about restrictions on accelerator configurations for
        # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
        #
        # Set `parameterServerConfig.imageUri` only if you build a custom image for
        # your parameter server. If `parameterServerConfig.imageUri` has not been
        # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
        # containers](/ai-platform/training/docs/distributed-training-containers).
      &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
          # the one used in the custom container. This field is required if the replica
          # is a TPU worker that uses a custom container. Otherwise, do not specify
          # this field. This must be a [runtime version that currently supports
          # training with
          # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
          #
          # Note that the version of TensorFlow included in a runtime version may
          # differ from the numbering of the runtime version itself, because it may
          # have a different [patch
          # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
          # In this field, you must specify the runtime version (TensorFlow minor
          # version). For example, if your custom container runs TensorFlow `1.x.y`,
          # specify `1.x`.
      &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
          # If provided, it will override the default ENTRYPOINT of the docker image.
          # If not provided, the docker image&#x27;s ENTRYPOINT is used.
          # It cannot be set if a custom container image is
          # not provided.
          # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
          # both cannot be set at the same time.
        &quot;A String&quot;,
      ],
      &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
          # Registry. Learn more about [configuring custom
          # containers](/ai-platform/training/docs/distributed-training-containers).
      &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
          # The following rules apply for container_command and container_args:
          # - If you do not supply command or args:
          #   The defaults defined in the Docker image are used.
          # - If you supply a command but no args:
          #   The default EntryPoint and the default Cmd defined in the Docker image
          #   are ignored. Your command is run without any arguments.
          # - If you supply only args:
          #   The default Entrypoint defined in the Docker image is run with the args
          #   that you supplied.
          # - If you supply a command and args:
          #   The default Entrypoint and the default Cmd defined in the Docker image
          #   are ignored. Your command is run with your args.
          # It cannot be set if a custom container image is
          # not provided.
          # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
          # both cannot be set at the same time.
        &quot;A String&quot;,
      ],
      &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
          # [Learn about restrictions on accelerator configurations for
          # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
          # Note that the AcceleratorConfig can be used in both Jobs and Versions.
          # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
          # [accelerators for online
          # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
        &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
        &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
      },
    },
    &quot;encryptionConfig&quot;: { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
        # protect resources created by a training job, instead of using Google&#x27;s
        # default encryption. If this is set, then all resources created by the
        # training job will be encrypted with the customer-managed encryption key
        # that you specify.
        #
        # [Learn how and when to use CMEK with AI Platform
        # Training](/ai-platform/training/docs/cmek).
        # a resource.
      &quot;kmsKeyName&quot;: &quot;A String&quot;, # The Cloud KMS resource identifier of the customer-managed encryption key
          # used to protect a resource, such as a training job. It has the following
          # format:
          # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
    },
    &quot;hyperparameters&quot;: { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
      &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # Optional. The TensorFlow summary tag name to use for optimizing trials. For
          # current versions of TensorFlow, this tag name should exactly match what is
          # shown in TensorBoard, including all scopes. For versions of TensorFlow
          # prior to 0.12, this should be only the tag passed to tf.Summary.
          # By default, &quot;training/hptuning/metric&quot; will be used.
      &quot;params&quot;: [ # Required. The set of parameters to tune.
        { # Represents a single hyperparameter to optimize.
          &quot;type&quot;: &quot;A String&quot;, # Required. The type of the parameter.
          &quot;categoricalValues&quot;: [ # Required if type is `CATEGORICAL`. The list of possible categories.
            &quot;A String&quot;,
          ],
          &quot;parameterName&quot;: &quot;A String&quot;, # Required. The parameter name must be unique amongst all ParameterConfigs in
              # a HyperparameterSpec message. E.g., &quot;learning_rate&quot;.
          &quot;minValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
              # should be unset if type is `CATEGORICAL`. This value should be an integer
              # if type is `INTEGER`.
          &quot;discreteValues&quot;: [ # Required if type is `DISCRETE`.
              # A list of feasible points.
              # The list should be in strictly increasing order. For instance, this
              # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
              # should not contain more than 1,000 values.
            3.14,
          ],
          &quot;scaleType&quot;: &quot;A String&quot;, # Optional. How the parameter should be scaled to the hypercube.
              # Leave unset for categorical parameters.
              # Some kind of scaling is strongly recommended for real or integral
              # parameters (e.g., `UNIT_LINEAR_SCALE`).
          &quot;maxValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
              # should be unset if type is `CATEGORICAL`. This value should be an integer
              # if type is `INTEGER`.
        },
      ],
      &quot;enableTrialEarlyStopping&quot;: True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
          # early stopping.
      &quot;resumePreviousJobId&quot;: &quot;A String&quot;, # Optional. The prior hyperparameter tuning job id that users hope to
          # continue with. The job id will be used to find the corresponding Vizier
          # study guid and resume the study.
      &quot;maxParallelTrials&quot;: 42, # Optional. The number of training trials to run concurrently.
          # You can reduce the time it takes to perform hyperparameter tuning by adding
          # trials in parallel. However, each trial only benefits from the information
          # gained in completed trials. That means that a trial does not get access to
          # the results of trials running at the same time, which could reduce the
          # quality of the overall optimization.
          #
          # Each trial will use the same scale tier and machine types.
          #
          # Defaults to one.
      &quot;maxFailedTrials&quot;: 42, # Optional. The number of failed trials that need to be seen before failing
          # the hyperparameter tuning job. You can specify this field to override the
          # default failing criteria for AI Platform hyperparameter tuning jobs.
          #
          # Defaults to zero, which means the service decides when a hyperparameter
          # job should fail.
      &quot;goal&quot;: &quot;A String&quot;, # Required. The type of goal to use for tuning. Available types are
          # `MAXIMIZE` and `MINIMIZE`.
          #
          # Defaults to `MAXIMIZE`.
      &quot;maxTrials&quot;: 42, # Optional. How many training trials should be attempted to optimize
          # the specified hyperparameters.
          #
          # Defaults to one.
      &quot;algorithm&quot;: &quot;A String&quot;, # Optional. The search algorithm specified for the hyperparameter
          # tuning job.
          # Uses the default AI Platform hyperparameter tuning
          # algorithm if unspecified.
    },
    &quot;workerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
        #
        # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
        # to a Compute Engine machine type. [Learn about restrictions on accelerator
        # configurations for
        # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
        #
        # Set `workerConfig.imageUri` only if you build a custom image for your
        # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
        # the value of `masterConfig.imageUri`. Learn more about [configuring custom
        # containers](/ai-platform/training/docs/distributed-training-containers).
      &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
          # the one used in the custom container. This field is required if the replica
          # is a TPU worker that uses a custom container. Otherwise, do not specify
          # this field. This must be a [runtime version that currently supports
          # training with
          # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
          #
          # Note that the version of TensorFlow included in a runtime version may
          # differ from the numbering of the runtime version itself, because it may
          # have a different [patch
          # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
          # In this field, you must specify the runtime version (TensorFlow minor
          # version). For example, if your custom container runs TensorFlow `1.x.y`,
          # specify `1.x`.
      &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
          # If provided, it will override the default ENTRYPOINT of the docker image.
          # If not provided, the docker image&#x27;s ENTRYPOINT is used.
          # It cannot be set if a custom container image is
          # not provided.
          # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
          # both cannot be set at the same time.
        &quot;A String&quot;,
      ],
      &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
          # Registry. Learn more about [configuring custom
          # containers](/ai-platform/training/docs/distributed-training-containers).
      &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
          # The following rules apply for container_command and container_args:
          # - If you do not supply command or args:
          #   The defaults defined in the Docker image are used.
          # - If you supply a command but no args:
          #   The default EntryPoint and the default Cmd defined in the Docker image
          #   are ignored. Your command is run without any arguments.
          # - If you supply only args:
          #   The default Entrypoint defined in the Docker image is run with the args
          #   that you supplied.
          # - If you supply a command and args:
          #   The default Entrypoint and the default Cmd defined in the Docker image
          #   are ignored. Your command is run with your args.
          # It cannot be set if a custom container image is
          # not provided.
          # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
          # both cannot be set at the same time.
        &quot;A String&quot;,
      ],
      &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
          # [Learn about restrictions on accelerator configurations for
          # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
          # Note that the AcceleratorConfig can be used in both Jobs and Versions.
          # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
          # [accelerators for online
          # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
        &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
        &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
      },
    },
    &quot;parameterServerCount&quot;: &quot;A String&quot;, # Optional. The number of parameter server replicas to use for the training
        # job. Each replica in the cluster will be of the type specified in
        # `parameter_server_type`.
        #
        # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
        # set this value, you must also set `parameter_server_type`.
        #
        # The default value is zero.
    &quot;packageUris&quot;: [ # Required. The Google Cloud Storage location of the packages with
        # the training program and any additional dependencies.
        # The maximum number of package URIs is 100.
      &quot;A String&quot;,
    ],
    &quot;evaluatorCount&quot;: &quot;A String&quot;, # Optional. The number of evaluator replicas to use for the training job.
        # Each replica in the cluster will be of the type specified in
        # `evaluator_type`.
        #
        # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
        # set this value, you must also set `evaluator_type`.
        #
        # The default value is zero.
    &quot;masterType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
        # job&#x27;s master worker. You must specify this field when `scaleTier` is set to
        # `CUSTOM`.
        #
        # You can use certain Compute Engine machine types directly in this field.
        # The following types are supported:
        #
        # - `n1-standard-4`
        # - `n1-standard-8`
        # - `n1-standard-16`
        # - `n1-standard-32`
        # - `n1-standard-64`
        # - `n1-standard-96`
        # - `n1-highmem-2`
        # - `n1-highmem-4`
        # - `n1-highmem-8`
        # - `n1-highmem-16`
        # - `n1-highmem-32`
        # - `n1-highmem-64`
        # - `n1-highmem-96`
        # - `n1-highcpu-16`
        # - `n1-highcpu-32`
        # - `n1-highcpu-64`
        # - `n1-highcpu-96`
        #
        # Learn more about [using Compute Engine machine
        # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
        #
        # Alternatively, you can use the following legacy machine types:
        #
        # - `standard`
        # - `large_model`
        # - `complex_model_s`
        # - `complex_model_m`
        # - `complex_model_l`
        # - `standard_gpu`
        # - `complex_model_m_gpu`
        # - `complex_model_l_gpu`
        # - `standard_p100`
        # - `complex_model_m_p100`
        # - `standard_v100`
        # - `large_model_v100`
        # - `complex_model_m_v100`
        # - `complex_model_l_v100`
        #
        # Learn more about [using legacy machine
        # types](/ml-engine/docs/machine-types#legacy-machine-types).
        #
        # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
        # field. Learn more about the [special configuration options for training
        # with
        # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
    &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for training. You must
        # either specify this field or specify `masterConfig.imageUri`.
        #
        # For more information, see the [runtime version
        # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
        # manage runtime versions](/ai-platform/training/docs/versioning).
    &quot;evaluatorType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
        # job&#x27;s evaluator nodes.
        #
        # The supported values are the same as those described in the entry for
        # `masterType`.
        #
        # This value must be consistent with the category of machine type that
        # `masterType` uses. In other words, both must be Compute Engine machine
        # types or both must be legacy machine types.
        #
        # This value must be present when `scaleTier` is set to `CUSTOM` and
        # `evaluatorCount` is greater than zero.
    &quot;region&quot;: &quot;A String&quot;, # Required. The region to run the training job in. See the [available
        # regions](/ai-platform/training/docs/regions) for AI Platform Training.
    &quot;workerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
        # job&#x27;s worker nodes.
        #
        # The supported values are the same as those described in the entry for
        # `masterType`.
        #
        # This value must be consistent with the category of machine type that
        # `masterType` uses. In other words, both must be Compute Engine machine
        # types or both must be legacy machine types.
        #
        # If you use `cloud_tpu` for this value, see special instructions for
        # [configuring a custom TPU
        # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
        #
        # This value must be present when `scaleTier` is set to `CUSTOM` and
        # `workerCount` is greater than zero.
    &quot;parameterServerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
        # job&#x27;s parameter server.
        #
        # The supported values are the same as those described in the entry for
        # `master_type`.
        #
        # This value must be consistent with the category of machine type that
        # `masterType` uses. In other words, both must be Compute Engine machine
        # types or both must be legacy machine types.
        #
        # This value must be present when `scaleTier` is set to `CUSTOM` and
        # `parameter_server_count` is greater than zero.
    &quot;masterConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
        #
        # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
        # to a Compute Engine machine type. Learn about [restrictions on accelerator
        # configurations for
        # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
        #
        # Set `masterConfig.imageUri` only if you build a custom image. Only one of
        # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
        # about [configuring custom
        # containers](/ai-platform/training/docs/distributed-training-containers).
      &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
          # the one used in the custom container. This field is required if the replica
          # is a TPU worker that uses a custom container. Otherwise, do not specify
          # this field. This must be a [runtime version that currently supports
          # training with
          # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
          #
          # Note that the version of TensorFlow included in a runtime version may
          # differ from the numbering of the runtime version itself, because it may
          # have a different [patch
          # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
          # In this field, you must specify the runtime version (TensorFlow minor
          # version). For example, if your custom container runs TensorFlow `1.x.y`,
          # specify `1.x`.
      &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
          # If provided, it will override the default ENTRYPOINT of the docker image.
          # If not provided, the docker image&#x27;s ENTRYPOINT is used.
          # It cannot be set if a custom container image is
          # not provided.
          # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
          # both cannot be set at the same time.
        &quot;A String&quot;,
      ],
      &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
          # Registry. Learn more about [configuring custom
          # containers](/ai-platform/training/docs/distributed-training-containers).
      &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
          # The following rules apply for container_command and container_args:
          # - If you do not supply command or args:
          #   The defaults defined in the Docker image are used.
          # - If you supply a command but no args:
          #   The default EntryPoint and the default Cmd defined in the Docker image
          #   are ignored. Your command is run without any arguments.
          # - If you supply only args:
          #   The default Entrypoint defined in the Docker image is run with the args
          #   that you supplied.
          # - If you supply a command and args:
          #   The default Entrypoint and the default Cmd defined in the Docker image
          #   are ignored. Your command is run with your args.
          # It cannot be set if a custom container image is
          # not provided.
          # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
          # both cannot be set at the same time.
        &quot;A String&quot;,
      ],
      &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
          # [Learn about restrictions on accelerator configurations for
          # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
          # Note that the AcceleratorConfig can be used in both Jobs and Versions.
          # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
          # [accelerators for online
          # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
        &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
        &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
      },
    },
3710 &quot;scaleTier&quot;: &quot;A String&quot;, # Required. Specifies the machine types, the number of replicas for workers
3711 # and parameter servers.
3712 &quot;jobDir&quot;: &quot;A String&quot;, # Optional. A Google Cloud Storage path in which to store training outputs
Dan O'Mearadd494642020-05-01 07:42:23 -07003713 # and other data needed for training. This path is passed to your TensorFlow
Bu Sun Kim65020912020-05-20 12:08:20 -07003714 # program as the &#x27;--job-dir&#x27; command-line argument. The benefit of specifying
Dan O'Mearadd494642020-05-01 07:42:23 -07003715 # this field is that Cloud ML validates the path for use in training.
Bu Sun Kim65020912020-05-20 12:08:20 -07003716 &quot;pythonVersion&quot;: &quot;A String&quot;, # Optional. The version of Python used in training. You must either specify
3717 # this field or specify `masterConfig.imageUri`.
3718 #
3719 # The following Python versions are available:
3720 #
3721 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
3722 # later.
3723 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
3724 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
3725 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
3726 # earlier.
3727 #
3728 # Read more about the Python versions available for [each runtime
3729 # version](/ml-engine/docs/runtime-version-list).
3730 &quot;network&quot;: &quot;A String&quot;, # Optional. The full name of the Google Compute Engine
3731 # [network](/compute/docs/networks-and-firewalls#networks) to which the Job
3732 # is peered. For example, projects/12345/global/networks/myVPC. Format is of
3733 # the form projects/{project}/global/networks/{network}, where {project} is a
3734 # project number, as in &#x27;12345&#x27;, and {network} is the network name.
3735 #
3736 # Private services access must already be configured for the network. If left
3737 # unspecified, the Job is not peered with any network. Learn more about
3738 # connecting a Job to a user network over private
3739 # IP.
3740 &quot;scheduling&quot;: { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
3741 &quot;maxWaitTime&quot;: &quot;A String&quot;,
3742 &quot;maxRunningTime&quot;: &quot;A String&quot;, # Optional. The maximum job running time, expressed in seconds. The field can
3743 # contain up to nine fractional digits, terminated by `s`. If not specified,
3744 # this field defaults to `604800s` (seven days).
3745 #
3746 # If the training job is still running after this duration, AI Platform
3747 # Training cancels it.
3748 #
3749 # For example, if you want to ensure your job runs for no more than 2 hours,
3750 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
3751 # minute).
3752 #
3753 # If you submit your training job using the `gcloud` tool, you can [provide
3754 # this field in a `config.yaml`
3755 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
3756 # For example:
3757 #
3758 # ```yaml
3759 # trainingInput:
3760 # ...
3761 # scheduling:
3762 # maxRunningTime: 7200s
3763 # ...
3764 # ```
3765 },
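The `maxRunningTime` value above is a duration: a count of seconds with up to nine fractional digits, terminated by `s`. A minimal sketch of producing such a string in plain Python — the helper name is illustrative, not part of the API:

```python
# Hedged sketch: building a Scheduling.maxRunningTime duration string.
# The field takes seconds with up to nine fractional digits, terminated
# by "s" (e.g. 2 hours -> "7200s", the 7-day default -> "604800s").
def max_running_time(hours):
    seconds = hours * 60 * 60
    if seconds == int(seconds):
        return "%ds" % int(seconds)        # whole seconds: no fraction
    # Otherwise keep at most nine fractional digits, trimming zeros.
    return ("%.9f" % seconds).rstrip("0") + "s"

print(max_running_time(2))  # -> 7200s
```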
3766 &quot;evaluatorConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
3767 #
3768 # You should only set `evaluatorConfig.acceleratorConfig` if
3769 # `evaluatorType` is set to a Compute Engine machine type. [Learn
3770 # about restrictions on accelerator configurations for
3771 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3772 #
3773 # Set `evaluatorConfig.imageUri` only if you build a custom image for
3774 # your evaluator. If `evaluatorConfig.imageUri` has not been
3775 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
3776 # containers](/ai-platform/training/docs/distributed-training-containers).
3777 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
3778 # the one used in the custom container. This field is required if the replica
3779 # is a TPU worker that uses a custom container. Otherwise, do not specify
3780 # this field. This must be a [runtime version that currently supports
3781 # training with
3782 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3783 #
3784 # Note that the version of TensorFlow included in a runtime version may
3785 # differ from the numbering of the runtime version itself, because it may
3786 # have a different [patch
3787 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3788 # In this field, you must specify the runtime version (TensorFlow minor
3789 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3790 # specify `1.x`.
3791 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
3792 # If provided, it will override default ENTRYPOINT of the docker image.
3793 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
3794 # It cannot be set if custom container image is
3795 # not provided.
3796 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3797 # both cannot be set at the same time.
3798 &quot;A String&quot;,
3799 ],
3800 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
3801 # Registry. Learn more about [configuring custom
3802 # containers](/ai-platform/training/docs/distributed-training-containers).
3803 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
3804 # The following rules apply for container_command and container_args:
3805 # - If you do not supply command or args:
3806 # The defaults defined in the Docker image are used.
3807 # - If you supply a command but no args:
3808 # The default EntryPoint and the default Cmd defined in the Docker image
3809 # are ignored. Your command is run without any arguments.
3810 # - If you supply only args:
3811 # The default Entrypoint defined in the Docker image is run with the args
3812 # that you supplied.
3813 # - If you supply a command and args:
3814 # The default Entrypoint and the default Cmd defined in the Docker image
3815 # are ignored. Your command is run with your args.
3816 # It cannot be set if custom container image is
3817 # not provided.
3818 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3819 # both cannot be set at the same time.
3820 &quot;A String&quot;,
3821 ],
3822 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3823 # [Learn about restrictions on accelerator configurations for
3824 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3825 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3826 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3827 # [accelerators for online
3828 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3829 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
3830 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
3831 },
3832 },
3833 &quot;useChiefInTfConfig&quot;: True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
3834 # variable when training with a custom container. Defaults to `false`. [Learn
3835 # more about this
3836 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
3837 #
3838 # This field has no effect for training jobs that don&#x27;t use a custom
3839 # container.
3840 &quot;workerCount&quot;: &quot;A String&quot;, # Optional. The number of worker replicas to use for the training job. Each
3841 # replica in the cluster will be of the type specified in `worker_type`.
3842 #
3843 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
3844 # set this value, you must also set `worker_type`.
3845 #
3846 # The default value is zero.
3847 &quot;pythonModule&quot;: &quot;A String&quot;, # Required. The Python module name to run after installing the packages.
3848 &quot;args&quot;: [ # Optional. Command-line arguments passed to the training application when it
3849 # starts. If your job uses a custom container, then the arguments are passed
3850 # to the container&#x27;s &lt;a class=&quot;external&quot; target=&quot;_blank&quot;
3851 # href=&quot;https://docs.docker.com/engine/reference/builder/#entrypoint&quot;&gt;
3852 # `ENTRYPOINT`&lt;/a&gt; command.
3853 &quot;A String&quot;,
3854 ],
3855 },
3856 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of a job.
3857 &quot;jobId&quot;: &quot;A String&quot;, # Required. The user-specified id of the job.
3858 &quot;endTime&quot;: &quot;A String&quot;, # Output only. When the job processing was completed.
3859 &quot;startTime&quot;: &quot;A String&quot;, # Output only. When the job processing was started.
3860 &quot;predictionOutput&quot;: { # Represents results of a prediction job. # The current prediction job result.
3861 &quot;errorCount&quot;: &quot;A String&quot;, # The number of data instances which resulted in errors.
3862 &quot;outputPath&quot;: &quot;A String&quot;, # The output Google Cloud Storage location provided at the job creation time.
3863 &quot;nodeHours&quot;: 3.14, # Node hours used by the batch prediction job.
3864 &quot;predictionCount&quot;: &quot;A String&quot;, # The number of generated predictions.
3865 },
3866 &quot;trainingOutput&quot;: { # Represents results of a training job. Output only. # The current training job result.
3867 &quot;trials&quot;: [ # Results for individual Hyperparameter trials.
3868 # Only set for hyperparameter tuning jobs.
3869 { # Represents the result of a single hyperparameter tuning trial from a
3870 # training job. The TrainingOutput object that is returned on successful
3871 # completion of a training job with hyperparameter tuning includes a list
3872 # of HyperparameterOutput objects, one for each successful trial.
3873 &quot;allMetrics&quot;: [ # All recorded objective metrics for this trial. This field is not currently
3874 # populated.
3875 { # An observed value of a metric.
3876 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
3877 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
3878 },
3879 ],
3880 &quot;hyperparameters&quot;: { # The hyperparameters given to this trial.
3881 &quot;a_key&quot;: &quot;A String&quot;,
3882 },
3883 &quot;trialId&quot;: &quot;A String&quot;, # The trial id for these results.
3884 &quot;endTime&quot;: &quot;A String&quot;, # Output only. End time for the trial.
3885 &quot;isTrialStoppedEarly&quot;: True or False, # True if the trial is stopped early.
3886 &quot;startTime&quot;: &quot;A String&quot;, # Output only. Start time for the trial.
3887 &quot;finalMetric&quot;: { # An observed value of a metric. # The final objective metric seen for this trial.
3888 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
3889 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
3890 },
3891 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
3892 # Only set for trials of built-in algorithms jobs that have succeeded.
3893 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
3894 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
3895 # saves the trained model. Only set for successful jobs that don&#x27;t use
3896 # hyperparameter tuning.
3897 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
3898 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
3899 # trained.
3900 },
3901 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of the trial.
3902 },
3903 ],
3904 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # The TensorFlow summary tag name used for optimizing hyperparameter tuning
3905 # trials. See
3906 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
3907 # for more information. Only set for hyperparameter tuning jobs.
3908 &quot;completedTrialCount&quot;: &quot;A String&quot;, # The number of hyperparameter tuning trials that completed successfully.
3909 # Only set for hyperparameter tuning jobs.
3910 &quot;isHyperparameterTuningJob&quot;: True or False, # Whether this job is a hyperparameter tuning job.
3911 &quot;consumedMLUnits&quot;: 3.14, # The amount of ML units consumed by the job.
3912 &quot;isBuiltInAlgorithmJob&quot;: True or False, # Whether this job is a built-in Algorithm job.
3913 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
3914 # Only set for built-in algorithms jobs.
3915 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
3916 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
3917 # saves the trained model. Only set for successful jobs that don&#x27;t use
3918 # hyperparameter tuning.
3919 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
3920 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
3921 # trained.
3922 },
3923 },
3924 &quot;createTime&quot;: &quot;A String&quot;, # Output only. When the job was created.
3925 &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your jobs.
3926 # Each label is a key-value pair, where both the key and the value are
3927 # arbitrary strings that you supply.
3928 # For more information, see the documentation on
3929 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
3930 &quot;a_key&quot;: &quot;A String&quot;,
3931 },
3932 &quot;predictionInput&quot;: { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
3933 &quot;outputPath&quot;: &quot;A String&quot;, # Required. The output Google Cloud Storage location.
3934 &quot;outputDataFormat&quot;: &quot;A String&quot;, # Optional. Format of the output data files, defaults to JSON.
3935 &quot;dataFormat&quot;: &quot;A String&quot;, # Required. The format of the input data files.
3936 &quot;batchSize&quot;: &quot;A String&quot;, # Optional. Number of records per batch, defaults to 64.
3937 # The service will buffer batch_size number of records in memory before
3938 # invoking one TensorFlow prediction call internally, so take the record
3939 # size and memory available into consideration when setting this parameter.
3940 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for this batch
3941 # prediction. If not set, AI Platform will pick the runtime version used
3942 # during the CreateVersion request for this model version, or choose the
3943 # latest stable version when model version information is not available
3944 # such as when the model is specified by uri.
3945 &quot;inputPaths&quot;: [ # Required. The Cloud Storage location of the input data files. May contain
3946 # &lt;a href=&quot;/storage/docs/gsutil/addlhelp/WildcardNames&quot;&gt;wildcards&lt;/a&gt;.
3947 &quot;A String&quot;,
3948 ],
3949 &quot;region&quot;: &quot;A String&quot;, # Required. The Google Compute Engine region to run the prediction job in.
3950 # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
3951 # for AI Platform services.
3952 &quot;versionName&quot;: &quot;A String&quot;, # Use this field if you want to specify a version of the model to use. The
3953 # string is formatted the same way as `model_version`, with the addition
3954 # of the version information:
3955 #
3956 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION&quot;`
3957 &quot;modelName&quot;: &quot;A String&quot;, # Use this field if you want to use the default version for the specified
3958 # model. The string must use the following format:
3959 #
3960 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL&quot;`
3961 &quot;uri&quot;: &quot;A String&quot;, # Use this field if you want to specify a Google Cloud Storage path for
3962 # the model to use.
3963 &quot;maxWorkerCount&quot;: &quot;A String&quot;, # Optional. The maximum number of workers to be used for parallel processing.
3964 # Defaults to 10 if not specified.
3965 &quot;signatureName&quot;: &quot;A String&quot;, # Optional. The name of the signature defined in the SavedModel to use for
3966 # this job. Please refer to
3967 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
3968 # for information about how to use signatures.
3969 #
3970 # Defaults to
3971 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
3972 # , which is &quot;serving_default&quot;.
3973 },
3974 &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
3975 }
3976
3977 updateMask: string, Required. Specifies the path, relative to `Job`, of the field to update.
3978To adopt etag mechanism, include `etag` field in the mask, and include the
3979`etag` value in your job resource.
3980
3981For example, to change the labels of a job, the `update_mask` parameter
3982would be specified as `labels`, `etag`, and the
3983`PATCH` request body would specify the new value, as follows:
3984 {
3985 &quot;labels&quot;: {
3986 &quot;owner&quot;: &quot;Google&quot;,
3987 &quot;color&quot;: &quot;Blue&quot;
3988 }
3989 &quot;etag&quot;: &quot;33a64df551425fcc55e4d42a148795d9f25f89d4&quot;
3990 }
3991If `etag` matches the one on the server, the labels of the job will be
3992replaced with the given ones, and the server end `etag` will be
3993recalculated.
3994
3995Currently the only supported update masks are `labels` and `etag`.
3996 x__xgafv: string, V1 error format.
3997 Allowed values
3998 1 - v1 error format
3999 2 - v2 error format
4000
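Putting the `updateMask` and `etag` rules above together: a minimal sketch of assembling such a PATCH request with the google-api-python-client. The helper and the placeholder etag are illustrative; in practice the etag comes from a prior `projects.jobs.get` response, and the commented-out lines show roughly where the client call would go:

```python
# Hedged sketch: read-modify-write of a job's labels using etag for
# optimistic concurrency. Only `labels` and `etag` are currently
# supported in the update mask.
def build_labels_patch(labels, etag):
    """Return (body, updateMask) for a projects.jobs.patch request."""
    body = {
        "labels": dict(labels),  # arbitrary key/value strings you supply
        "etag": etag,            # must match the etag returned by GetJob
    }
    return body, "labels,etag"

body, mask = build_labels_patch(
    {"owner": "Google", "color": "Blue"},
    "33a64df551425fcc55e4d42a148795d9f25f89d4")

# With an authorized client, the call would look roughly like:
#   ml = googleapiclient.discovery.build("ml", "v1")
#   ml.projects().jobs().patch(
#       name="projects/YOUR_PROJECT/jobs/YOUR_JOB",
#       updateMask=mask, body=body).execute()
```

If the server-side etag no longer matches (someone else updated the job first), the patch is rejected and you repeat the get-modify-patch cycle.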
4001Returns:
4002 An object of the form:
4003
4004 { # Represents a training or prediction job.
4005 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
4006 # prevent simultaneous updates of a job from overwriting each other.
4007 # It is strongly suggested that systems make use of the `etag` in the
4008 # read-modify-write cycle to perform job updates in order to avoid race
4009 # conditions: An `etag` is returned in the response to `GetJob`, and
4010 # systems are expected to put that etag in the request to `UpdateJob` to
4011 # ensure that their change will be applied to the same version of the job.
4012 &quot;trainingInput&quot;: { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
4013 # to submit your training job, you can specify the input parameters as
4014 # command-line arguments and/or in a YAML configuration file referenced from
4015 # the --config command-line argument. For details, see the guide to [submitting
4016 # a training job](/ai-platform/training/docs/training-jobs).
4017 &quot;parameterServerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
4018 #
4019 # You should only set `parameterServerConfig.acceleratorConfig` if
4020 # `parameterServerType` is set to a Compute Engine machine type. [Learn
4021 # about restrictions on accelerator configurations for
4022 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
4023 #
4024 # Set `parameterServerConfig.imageUri` only if you build a custom image for
4025 # your parameter server. If `parameterServerConfig.imageUri` has not been
4026 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
4027 # containers](/ai-platform/training/docs/distributed-training-containers).
4028 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
4029 # the one used in the custom container. This field is required if the replica
4030 # is a TPU worker that uses a custom container. Otherwise, do not specify
4031 # this field. This must be a [runtime version that currently supports
4032 # training with
4033 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
4034 #
4035 # Note that the version of TensorFlow included in a runtime version may
4036 # differ from the numbering of the runtime version itself, because it may
4037 # have a different [patch
4038 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
4039 # In this field, you must specify the runtime version (TensorFlow minor
4040 # version). For example, if your custom container runs TensorFlow `1.x.y`,
4041 # specify `1.x`.
4042 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
4043 # If provided, it will override default ENTRYPOINT of the docker image.
4044 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
4045 # It cannot be set if custom container image is
4046 # not provided.
4047 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
4048 # both cannot be set at the same time.
4049 &quot;A String&quot;,
4050 ],
4051 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
4052 # Registry. Learn more about [configuring custom
4053 # containers](/ai-platform/training/docs/distributed-training-containers).
4054 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
4055 # The following rules apply for container_command and container_args:
4056 # - If you do not supply command or args:
4057 # The defaults defined in the Docker image are used.
4058 # - If you supply a command but no args:
4059 # The default EntryPoint and the default Cmd defined in the Docker image
4060 # are ignored. Your command is run without any arguments.
4061 # - If you supply only args:
4062 # The default Entrypoint defined in the Docker image is run with the args
4063 # that you supplied.
4064 # - If you supply a command and args:
4065 # The default Entrypoint and the default Cmd defined in the Docker image
4066 # are ignored. Your command is run with your args.
4067 # It cannot be set if custom container image is
4068 # not provided.
4069 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
4070 # both cannot be set at the same time.
4071 &quot;A String&quot;,
4072 ],
4073 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
4074 # [Learn about restrictions on accelerator configurations for
4075 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
4076 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
4077 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
4078 # [accelerators for online
4079 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
4080 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
4081 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
4082 },
4083 },
4084 &quot;encryptionConfig&quot;: { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
4085 # protect resources created by a training job, instead of using Google&#x27;s
4086 # default encryption. If this is set, then all resources created by the
4087 # training job will be encrypted with the customer-managed encryption key
4088 # that you specify.
4089 #
4090 # [Learn how and when to use CMEK with AI Platform
4091 # Training](/ai-platform/training/docs/cmek).
4092 # a resource.
4093 &quot;kmsKeyName&quot;: &quot;A String&quot;, # The Cloud KMS resource identifier of the customer-managed encryption key
4094 # used to protect a resource, such as a training job. It has the following
4095 # format:
4096 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
4097 },
4098 &quot;hyperparameters&quot;: { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
4099 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # Optional. The TensorFlow summary tag name to use for optimizing trials. For
4100 # current versions of TensorFlow, this tag name should exactly match what is
4101 # shown in TensorBoard, including all scopes. For versions of TensorFlow
4102 # prior to 0.12, this should be only the tag passed to tf.Summary.
4103 # By default, &quot;training/hptuning/metric&quot; will be used.
4104 &quot;params&quot;: [ # Required. The set of parameters to tune.
4105 { # Represents a single hyperparameter to optimize.
4106 &quot;type&quot;: &quot;A String&quot;, # Required. The type of the parameter.
4107 &quot;categoricalValues&quot;: [ # Required if type is `CATEGORICAL`. The list of possible categories.
4108 &quot;A String&quot;,
4109 ],
4110 &quot;parameterName&quot;: &quot;A String&quot;, # Required. The parameter name must be unique amongst all ParameterConfigs in
4111 # a HyperparameterSpec message. E.g., &quot;learning_rate&quot;.
4112 &quot;minValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
4113 # should be unset if type is `CATEGORICAL`. This value should be an integer if
4114 # type is `INTEGER`.
4115 &quot;discreteValues&quot;: [ # Required if type is `DISCRETE`.
4116 # A list of feasible points.
4117 # The list should be in strictly increasing order. For instance, this
4118 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
4119 # should not contain more than 1,000 values.
4120 3.14,
4121 ],
4122 &quot;scaleType&quot;: &quot;A String&quot;, # Optional. How the parameter should be scaled to the hypercube.
4123 # Leave unset for categorical parameters.
4124 # Some kind of scaling is strongly recommended for real or integral
4125 # parameters (e.g., `UNIT_LINEAR_SCALE`).
4126 &quot;maxValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
4127 # should be unset if type is `CATEGORICAL`. This value should be an integer if
4128 # type is `INTEGER`.
4129 },
4130 ],
4131 &quot;enableTrialEarlyStopping&quot;: True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
4132 # early stopping.
4133 &quot;resumePreviousJobId&quot;: &quot;A String&quot;, # Optional. The prior hyperparameter tuning job id that users hope to
4134 # continue with. The job id will be used to find the corresponding vizier
4135 # study guid and resume the study.
4136 &quot;maxParallelTrials&quot;: 42, # Optional. The number of training trials to run concurrently.
4137 # You can reduce the time it takes to perform hyperparameter tuning by adding
4138 # trials in parallel. However, each trial only benefits from the information
4139 # gained in completed trials. That means that a trial does not get access to
4140 # the results of trials running at the same time, which could reduce the
4141 # quality of the overall optimization.
4142 #
4143 # Each trial will use the same scale tier and machine types.
4144 #
4145 # Defaults to one.
4146 &quot;maxFailedTrials&quot;: 42, # Optional. The number of failed trials that need to be seen before failing
4147 # the hyperparameter tuning job. You can specify this field to override the
4148 # default failing criteria for AI Platform hyperparameter tuning jobs.
4149 #
4150 # Defaults to zero, which means the service decides when a hyperparameter
4151 # job should fail.
4152 &quot;goal&quot;: &quot;A String&quot;, # Required. The type of goal to use for tuning. Available types are
4153 # `MAXIMIZE` and `MINIMIZE`.
4154 #
4155 # Defaults to `MAXIMIZE`.
4156 &quot;maxTrials&quot;: 42, # Optional. How many training trials should be attempted to optimize
4157 # the specified hyperparameters.
4158 #
4159 # Defaults to one.
4160 &quot;algorithm&quot;: &quot;A String&quot;, # Optional. The search algorithm specified for the hyperparameter
4161 # tuning job.
4162 # Uses the default AI Platform hyperparameter tuning
4163 # algorithm if unspecified.
4164 },
4165 &quot;workerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
4166 #
4167 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
4168 # to a Compute Engine machine type. [Learn about restrictions on accelerator
4169 # configurations for
4170 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
4171 #
4172 # Set `workerConfig.imageUri` only if you build a custom image for your
4173 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
4174 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
4175 # containers](/ai-platform/training/docs/distributed-training-containers).
          &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
              # the one used in the custom container. This field is required if the replica
              # is a TPU worker that uses a custom container. Otherwise, do not specify
              # this field. This must be a [runtime version that currently supports
              # training with
              # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
              #
              # Note that the version of TensorFlow included in a runtime version may
              # differ from the numbering of the runtime version itself, because it may
              # have a different [patch
              # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
              # In this field, you must specify the runtime version (TensorFlow minor
              # version). For example, if your custom container runs TensorFlow `1.x.y`,
              # specify `1.x`.
          &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
              # If provided, it overrides the default ENTRYPOINT of the Docker image.
              # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
              # It cannot be set if a custom container image is not provided.
              # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
              # both cannot be set at the same time.
            &quot;A String&quot;,
          ],
          &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
              # Registry. Learn more about [configuring custom
              # containers](/ai-platform/training/docs/distributed-training-containers).
          &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
              # The following rules apply for container_command and container_args:
              # - If you do not supply command or args:
              #   The defaults defined in the Docker image are used.
              # - If you supply a command but no args:
              #   The default Entrypoint and the default Cmd defined in the Docker image
              #   are ignored. Your command is run without any arguments.
              # - If you supply only args:
              #   The default Entrypoint defined in the Docker image is run with the args
              #   that you supplied.
              # - If you supply a command and args:
              #   The default Entrypoint and the default Cmd defined in the Docker image
              #   are ignored. Your command is run with your args.
              # It cannot be set if a custom container image is not provided.
              # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
              # both cannot be set at the same time.
            &quot;A String&quot;,
          ],
          &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
              # [Learn about restrictions on accelerator configurations for
              # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
              # Note that the AcceleratorConfig can be used in both Jobs and Versions.
              # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
              # [accelerators for online
              # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
            &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
            &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
          },
        },
        &quot;parameterServerCount&quot;: &quot;A String&quot;, # Optional. The number of parameter server replicas to use for the training
            # job. Each replica in the cluster will be of the type specified in
            # `parameter_server_type`.
            #
            # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
            # set this value, you must also set `parameter_server_type`.
            #
            # The default value is zero.
        &quot;packageUris&quot;: [ # Required. The Google Cloud Storage location of the packages with
            # the training program and any additional dependencies.
            # The maximum number of package URIs is 100.
          &quot;A String&quot;,
        ],
        &quot;evaluatorCount&quot;: &quot;A String&quot;, # Optional. The number of evaluator replicas to use for the training job.
            # Each replica in the cluster will be of the type specified in
            # `evaluator_type`.
            #
            # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
            # set this value, you must also set `evaluator_type`.
            #
            # The default value is zero.
        &quot;masterType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
            # job&#x27;s master worker. You must specify this field when `scaleTier` is set to
            # `CUSTOM`.
            #
            # You can use certain Compute Engine machine types directly in this field.
            # The following types are supported:
            #
            # - `n1-standard-4`
            # - `n1-standard-8`
            # - `n1-standard-16`
            # - `n1-standard-32`
            # - `n1-standard-64`
            # - `n1-standard-96`
            # - `n1-highmem-2`
            # - `n1-highmem-4`
            # - `n1-highmem-8`
            # - `n1-highmem-16`
            # - `n1-highmem-32`
            # - `n1-highmem-64`
            # - `n1-highmem-96`
            # - `n1-highcpu-16`
            # - `n1-highcpu-32`
            # - `n1-highcpu-64`
            # - `n1-highcpu-96`
            #
            # Learn more about [using Compute Engine machine
            # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
            #
            # Alternatively, you can use the following legacy machine types:
            #
            # - `standard`
            # - `large_model`
            # - `complex_model_s`
            # - `complex_model_m`
            # - `complex_model_l`
            # - `standard_gpu`
            # - `complex_model_m_gpu`
            # - `complex_model_l_gpu`
            # - `standard_p100`
            # - `complex_model_m_p100`
            # - `standard_v100`
            # - `large_model_v100`
            # - `complex_model_m_v100`
            # - `complex_model_l_v100`
            #
            # Learn more about [using legacy machine
            # types](/ml-engine/docs/machine-types#legacy-machine-types).
            #
            # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
            # field. Learn more about the [special configuration options for training
            # with
            # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
        &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for training. You must
            # either specify this field or specify `masterConfig.imageUri`.
            #
            # For more information, see the [runtime version
            # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
            # manage runtime versions](/ai-platform/training/docs/versioning).
        &quot;evaluatorType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
            # job&#x27;s evaluator nodes.
            #
            # The supported values are the same as those described in the entry for
            # `masterType`.
            #
            # This value must be consistent with the category of machine type that
            # `masterType` uses. In other words, both must be Compute Engine machine
            # types or both must be legacy machine types.
            #
            # This value must be present when `scaleTier` is set to `CUSTOM` and
            # `evaluatorCount` is greater than zero.
        &quot;region&quot;: &quot;A String&quot;, # Required. The region to run the training job in. See the [available
            # regions](/ai-platform/training/docs/regions) for AI Platform Training.
        &quot;workerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
            # job&#x27;s worker nodes.
            #
            # The supported values are the same as those described in the entry for
            # `masterType`.
            #
            # This value must be consistent with the category of machine type that
            # `masterType` uses. In other words, both must be Compute Engine machine
            # types or both must be legacy machine types.
            #
            # If you use `cloud_tpu` for this value, see special instructions for
            # [configuring a custom TPU
            # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
            #
            # This value must be present when `scaleTier` is set to `CUSTOM` and
            # `workerCount` is greater than zero.
        &quot;parameterServerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
            # job&#x27;s parameter server.
            #
            # The supported values are the same as those described in the entry for
            # `master_type`.
            #
            # This value must be consistent with the category of machine type that
            # `masterType` uses. In other words, both must be Compute Engine machine
            # types or both must be legacy machine types.
            #
            # This value must be present when `scaleTier` is set to `CUSTOM` and
            # `parameter_server_count` is greater than zero.
        &quot;masterConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
            #
            # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
            # to a Compute Engine machine type. Learn about [restrictions on accelerator
            # configurations for
            # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
            #
            # Set `masterConfig.imageUri` only if you build a custom image. Only one of
            # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
            # about [configuring custom
            # containers](/ai-platform/training/docs/distributed-training-containers).
          &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
              # the one used in the custom container. This field is required if the replica
              # is a TPU worker that uses a custom container. Otherwise, do not specify
              # this field. This must be a [runtime version that currently supports
              # training with
              # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
              #
              # Note that the version of TensorFlow included in a runtime version may
              # differ from the numbering of the runtime version itself, because it may
              # have a different [patch
              # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
              # In this field, you must specify the runtime version (TensorFlow minor
              # version). For example, if your custom container runs TensorFlow `1.x.y`,
              # specify `1.x`.
          &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
              # If provided, it overrides the default ENTRYPOINT of the Docker image.
              # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
              # It cannot be set if a custom container image is not provided.
              # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
              # both cannot be set at the same time.
            &quot;A String&quot;,
          ],
          &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
              # Registry. Learn more about [configuring custom
              # containers](/ai-platform/training/docs/distributed-training-containers).
          &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
              # The following rules apply for container_command and container_args:
              # - If you do not supply command or args:
              #   The defaults defined in the Docker image are used.
              # - If you supply a command but no args:
              #   The default Entrypoint and the default Cmd defined in the Docker image
              #   are ignored. Your command is run without any arguments.
              # - If you supply only args:
              #   The default Entrypoint defined in the Docker image is run with the args
              #   that you supplied.
              # - If you supply a command and args:
              #   The default Entrypoint and the default Cmd defined in the Docker image
              #   are ignored. Your command is run with your args.
              # It cannot be set if a custom container image is not provided.
              # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
              # both cannot be set at the same time.
            &quot;A String&quot;,
          ],
          &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
              # [Learn about restrictions on accelerator configurations for
              # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
              # Note that the AcceleratorConfig can be used in both Jobs and Versions.
              # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
              # [accelerators for online
              # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
            &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
            &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
          },
        },
        &quot;scaleTier&quot;: &quot;A String&quot;, # Required. Specifies the machine types and the number of replicas for
            # workers and parameter servers.
        &quot;jobDir&quot;: &quot;A String&quot;, # Optional. A Google Cloud Storage path in which to store training outputs
            # and other data needed for training. This path is passed to your TensorFlow
            # program as the &#x27;--job-dir&#x27; command-line argument. The benefit of specifying
            # this field is that Cloud ML validates the path for use in training.
        &quot;pythonVersion&quot;: &quot;A String&quot;, # Optional. The version of Python used in training. You must either specify
            # this field or specify `masterConfig.imageUri`.
            #
            # The following Python versions are available:
            #
            # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
            #   later.
            # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
            #   from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
            # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
            #   earlier.
            #
            # Read more about the Python versions available for [each runtime
            # version](/ml-engine/docs/runtime-version-list).
        &quot;network&quot;: &quot;A String&quot;, # Optional. The full name of the Google Compute Engine
            # [network](/compute/docs/networks-and-firewalls#networks) to which the Job
            # is peered. For example, projects/12345/global/networks/myVPC. The format is
            # projects/{project}/global/networks/{network}, where {project} is a
            # project number, as in &#x27;12345&#x27;, and {network} is a network name.
            #
            # Private services access must already be configured for the network. If left
            # unspecified, the Job is not peered with any network. Learn more about
            # connecting a Job to a user network over private IP.
        &quot;scheduling&quot;: { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
          &quot;maxWaitTime&quot;: &quot;A String&quot;,
          &quot;maxRunningTime&quot;: &quot;A String&quot;, # Optional. The maximum job running time, expressed in seconds. The field can
              # contain up to nine fractional digits, terminated by `s`. If not specified,
              # this field defaults to `604800s` (seven days).
              #
              # If the training job is still running after this duration, AI Platform
              # Training cancels it.
              #
              # For example, if you want to ensure your job runs for no more than 2 hours,
              # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
              # minute).
              #
              # If you submit your training job using the `gcloud` tool, you can [provide
              # this field in a `config.yaml`
              # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
              # For example:
              #
              # ```yaml
              # trainingInput:
              #   ...
              #   scheduling:
              #     maxRunningTime: 7200s
              #   ...
              # ```
        },
        &quot;evaluatorConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
            #
            # You should only set `evaluatorConfig.acceleratorConfig` if
            # `evaluatorType` is set to a Compute Engine machine type. [Learn
            # about restrictions on accelerator configurations for
            # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
            #
            # Set `evaluatorConfig.imageUri` only if you build a custom image for
            # your evaluator. If `evaluatorConfig.imageUri` has not been
            # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
            # containers](/ai-platform/training/docs/distributed-training-containers).
          &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
              # the one used in the custom container. This field is required if the replica
              # is a TPU worker that uses a custom container. Otherwise, do not specify
              # this field. This must be a [runtime version that currently supports
              # training with
              # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
              #
              # Note that the version of TensorFlow included in a runtime version may
              # differ from the numbering of the runtime version itself, because it may
              # have a different [patch
              # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
              # In this field, you must specify the runtime version (TensorFlow minor
              # version). For example, if your custom container runs TensorFlow `1.x.y`,
              # specify `1.x`.
          &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
              # If provided, it overrides the default ENTRYPOINT of the Docker image.
              # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
              # It cannot be set if a custom container image is not provided.
              # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
              # both cannot be set at the same time.
            &quot;A String&quot;,
          ],
          &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
              # Registry. Learn more about [configuring custom
              # containers](/ai-platform/training/docs/distributed-training-containers).
          &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
              # The following rules apply for container_command and container_args:
              # - If you do not supply command or args:
              #   The defaults defined in the Docker image are used.
              # - If you supply a command but no args:
              #   The default Entrypoint and the default Cmd defined in the Docker image
              #   are ignored. Your command is run without any arguments.
              # - If you supply only args:
              #   The default Entrypoint defined in the Docker image is run with the args
              #   that you supplied.
              # - If you supply a command and args:
              #   The default Entrypoint and the default Cmd defined in the Docker image
              #   are ignored. Your command is run with your args.
              # It cannot be set if a custom container image is not provided.
              # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
              # both cannot be set at the same time.
            &quot;A String&quot;,
          ],
          &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
              # [Learn about restrictions on accelerator configurations for
              # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
              # Note that the AcceleratorConfig can be used in both Jobs and Versions.
              # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
              # [accelerators for online
              # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
            &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
            &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
          },
        },
        &quot;useChiefInTfConfig&quot;: True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
            # variable when training with a custom container. Defaults to `false`. [Learn
            # more about this
            # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
            #
            # This field has no effect for training jobs that don&#x27;t use a custom
            # container.
        &quot;workerCount&quot;: &quot;A String&quot;, # Optional. The number of worker replicas to use for the training job. Each
            # replica in the cluster will be of the type specified in `worker_type`.
            #
            # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
            # set this value, you must also set `worker_type`.
            #
            # The default value is zero.
        &quot;pythonModule&quot;: &quot;A String&quot;, # Required. The Python module name to run after installing the packages.
        &quot;args&quot;: [ # Optional. Command-line arguments passed to the training application when it
            # starts. If your job uses a custom container, then the arguments are passed
            # to the container&#x27;s &lt;a class=&quot;external&quot; target=&quot;_blank&quot;
            # href=&quot;https://docs.docker.com/engine/reference/builder/#entrypoint&quot;&gt;
            # `ENTRYPOINT`&lt;/a&gt; command.
          &quot;A String&quot;,
        ],
      },
      &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of a job.
      &quot;jobId&quot;: &quot;A String&quot;, # Required. The user-specified id of the job.
      &quot;endTime&quot;: &quot;A String&quot;, # Output only. When the job processing was completed.
      &quot;startTime&quot;: &quot;A String&quot;, # Output only. When the job processing was started.
      &quot;predictionOutput&quot;: { # Represents results of a prediction job. # The current prediction job result.
        &quot;errorCount&quot;: &quot;A String&quot;, # The number of data instances which resulted in errors.
        &quot;outputPath&quot;: &quot;A String&quot;, # The output Google Cloud Storage location provided at the job creation time.
        &quot;nodeHours&quot;: 3.14, # Node hours used by the batch prediction job.
        &quot;predictionCount&quot;: &quot;A String&quot;, # The number of generated predictions.
      },
      &quot;trainingOutput&quot;: { # Represents results of a training job. Output only. # The current training job result.
        &quot;trials&quot;: [ # Results for individual Hyperparameter trials.
            # Only set for hyperparameter tuning jobs.
          { # Represents the result of a single hyperparameter tuning trial from a
              # training job. The TrainingOutput object that is returned on successful
              # completion of a training job with hyperparameter tuning includes a list
              # of HyperparameterOutput objects, one for each successful trial.
            &quot;allMetrics&quot;: [ # All recorded object metrics for this trial. This field is not currently
                # populated.
              { # An observed value of a metric.
                &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
                &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
              },
            ],
            &quot;hyperparameters&quot;: { # The hyperparameters given to this trial.
              &quot;a_key&quot;: &quot;A String&quot;,
            },
            &quot;trialId&quot;: &quot;A String&quot;, # The trial id for these results.
            &quot;endTime&quot;: &quot;A String&quot;, # Output only. End time for the trial.
            &quot;isTrialStoppedEarly&quot;: True or False, # True if the trial is stopped early.
            &quot;startTime&quot;: &quot;A String&quot;, # Output only. Start time for the trial.
            &quot;finalMetric&quot;: { # An observed value of a metric. # The final objective metric seen for this trial.
              &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
              &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
            },
            &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
                # Only set for trials of built-in algorithms jobs that have succeeded.
              &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
              &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
                  # saves the trained model. Only set for successful jobs that don&#x27;t use
                  # hyperparameter tuning.
              &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
              &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
                  # trained.
            },
            &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of the trial.
          },
        ],
        &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # The TensorFlow summary tag name used for optimizing hyperparameter tuning
            # trials. See
            # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
            # for more information. Only set for hyperparameter tuning jobs.
        &quot;completedTrialCount&quot;: &quot;A String&quot;, # The number of hyperparameter tuning trials that completed successfully.
            # Only set for hyperparameter tuning jobs.
        &quot;isHyperparameterTuningJob&quot;: True or False, # Whether this job is a hyperparameter tuning job.
        &quot;consumedMLUnits&quot;: 3.14, # The amount of ML units consumed by the job.
        &quot;isBuiltInAlgorithmJob&quot;: True or False, # Whether this job is a built-in Algorithm job.
        &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
            # Only set for built-in algorithms jobs.
          &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
          &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
              # saves the trained model. Only set for successful jobs that don&#x27;t use
              # hyperparameter tuning.
          &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
          &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
              # trained.
        },
      },
      &quot;createTime&quot;: &quot;A String&quot;, # Output only. When the job was created.
      &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your jobs.
          # Each label is a key-value pair, where both the key and the value are
          # arbitrary strings that you supply.
          # For more information, see the documentation on
          # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
        &quot;a_key&quot;: &quot;A String&quot;,
      },
      &quot;predictionInput&quot;: { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
        &quot;outputPath&quot;: &quot;A String&quot;, # Required. The output Google Cloud Storage location.
        &quot;outputDataFormat&quot;: &quot;A String&quot;, # Optional. Format of the output data files; defaults to JSON.
        &quot;dataFormat&quot;: &quot;A String&quot;, # Required. The format of the input data files.
        &quot;batchSize&quot;: &quot;A String&quot;, # Optional. Number of records per batch; defaults to 64.
            # The service will buffer batch_size records in memory before
            # invoking one TensorFlow prediction call internally, so take the record
            # size and available memory into consideration when setting this parameter.
        &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for this batch
            # prediction. If not set, AI Platform will pick the runtime version used
            # during the CreateVersion request for this model version, or choose the
            # latest stable version when model version information is not available,
            # such as when the model is specified by URI.
        &quot;inputPaths&quot;: [ # Required. The Cloud Storage location of the input data files. May contain
            # &lt;a href=&quot;/storage/docs/gsutil/addlhelp/WildcardNames&quot;&gt;wildcards&lt;/a&gt;.
          &quot;A String&quot;,
        ],
        &quot;region&quot;: &quot;A String&quot;, # Required. The Google Compute Engine region to run the prediction job in.
            # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
            # for AI Platform services.
        &quot;versionName&quot;: &quot;A String&quot;, # Use this field if you want to specify a version of the model to use. The
            # string is formatted the same way as `model_version`, with the addition
            # of the version information:
            #
            # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION&quot;`
        &quot;modelName&quot;: &quot;A String&quot;, # Use this field if you want to use the default version for the specified
            # model. The string must use the following format:
            #
            # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL&quot;`
        &quot;uri&quot;: &quot;A String&quot;, # Use this field if you want to specify a Google Cloud Storage path for
            # the model to use.
        &quot;maxWorkerCount&quot;: &quot;A String&quot;, # Optional. The maximum number of workers to be used for parallel processing.
            # Defaults to 10 if not specified.
        &quot;signatureName&quot;: &quot;A String&quot;, # Optional. The name of the signature defined in the SavedModel to use for
            # this job. Please refer to
            # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
            # for information about how to use signatures.
            #
            # Defaults to
            # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants),
            # which is &quot;serving_default&quot;.
      },
      &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
    }</pre>
</div>
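As an illustration of how the Job schema documented above fits together, the sketch below assembles a minimal `trainingInput` body and wraps the `projects.jobs.create` call. The project ID, Cloud Storage paths, and module name are hypothetical placeholders, not values taken from this reference.

```python
# Sketch: submitting a training job with the Job schema documented above.
# PROJECT_ID, the gs:// paths, and the module name are hypothetical
# placeholders -- substitute your own values before use.
PROJECT_ID = "my-project"

job_body = {
    "jobId": "my_training_job_001",
    "trainingInput": {
        "scaleTier": "CUSTOM",
        "masterType": "n1-standard-8",
        "region": "us-central1",
        "runtimeVersion": "2.1",
        "pythonVersion": "3.7",
        "packageUris": ["gs://my-bucket/packages/trainer-0.1.tar.gz"],
        "pythonModule": "trainer.task",
        "jobDir": "gs://my-bucket/job-dir",
    },
}

def submit_job(ml_service, project_id, body):
    """Build a projects.jobs.create request for the given job body."""
    parent = "projects/{}".format(project_id)
    return ml_service.projects().jobs().create(parent=parent, body=body)

# Typical usage (requires google-api-python-client and credentials):
#   from googleapiclient import discovery
#   ml = discovery.build("ml", "v1")
#   submit_job(ml, PROJECT_ID, job_body).execute()
```

Note that `create` only builds the HTTP request; `.execute()` actually sends it, which is the usual two-step pattern in this client library.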

<div class="method">
    <code class="details" id="setIamPolicy">setIamPolicy(resource, body=None, x__xgafv=None)</code>
  <pre>Sets the access control policy on the specified resource. Replaces any
existing policy.

Can return `NOT_FOUND`, `INVALID_ARGUMENT`, and `PERMISSION_DENIED` errors.

Args:
  resource: string, REQUIRED: The resource for which the policy is being specified.
See the operation documentation for the appropriate value for this field. (required)
  body: object, The request body.
    The object takes the form of:

{ # Request message for `SetIamPolicy` method.
    &quot;policy&quot;: { # An Identity and Access Management (IAM) policy, which specifies access # REQUIRED: The complete policy to be applied to the `resource`. The size of
        # the policy is limited to a few tens of KB. An empty policy is a
        # valid policy, but certain Cloud Platform services (such as Projects)
        # might reject them.
        # controls for Google Cloud resources.
        #
        #
        # A `Policy` is a collection of `bindings`. A `binding` binds one or more
        # `members` to a single `role`. Members can be user accounts, service accounts,
        # Google groups, and domains (such as G Suite). A `role` is a named list of
        # permissions; each `role` can be an IAM predefined role or a user-created
        # custom role.
        #
        # For some types of Google Cloud resources, a `binding` can also specify a
        # `condition`, which is a logical expression that allows access to a resource
        # only if the expression evaluates to `true`. A condition can add constraints
        # based on attributes of the request, the resource, or both. To learn which
        # resources support conditions in their IAM policies, see the
        # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
        #
        # **JSON example:**
        #
        #     {
        #       &quot;bindings&quot;: [
        #         {
        #           &quot;role&quot;: &quot;roles/resourcemanager.organizationAdmin&quot;,
        #           &quot;members&quot;: [
        #             &quot;user:mike@example.com&quot;,
        #             &quot;group:admins@example.com&quot;,
        #             &quot;domain:google.com&quot;,
        #             &quot;serviceAccount:my-project-id@appspot.gserviceaccount.com&quot;
        #           ]
        #         },
        #         {
        #           &quot;role&quot;: &quot;roles/resourcemanager.organizationViewer&quot;,
        #           &quot;members&quot;: [
        #             &quot;user:eve@example.com&quot;
        #           ],
        #           &quot;condition&quot;: {
        #             &quot;title&quot;: &quot;expirable access&quot;,
        #             &quot;description&quot;: &quot;Does not grant access after Sep 2020&quot;,
        #             &quot;expression&quot;: &quot;request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)&quot;,
        #           }
        #         }
        #       ],
        #       &quot;etag&quot;: &quot;BwWWja0YfJA=&quot;,
        #       &quot;version&quot;: 3
        #     }
        #
        # **YAML example:**
        #
        #     bindings:
        #     - members:
        #       - user:mike@example.com
        #       - group:admins@example.com
        #       - domain:google.com
        #       - serviceAccount:my-project-id@appspot.gserviceaccount.com
        #       role: roles/resourcemanager.organizationAdmin
        #     - members:
        #       - user:eve@example.com
        #       role: roles/resourcemanager.organizationViewer
        #       condition:
        #         title: expirable access
        #         description: Does not grant access after Sep 2020
        #         expression: request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)
        #     - etag: BwWWja0YfJA=
        #     - version: 3
        #
        # For a description of IAM and its features, see the
        # [IAM documentation](https://cloud.google.com/iam/docs/).
      &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
          # prevent simultaneous updates of a policy from overwriting each other.
4774 # It is strongly suggested that systems make use of the `etag` in the
4775 # read-modify-write cycle to perform policy updates in order to avoid race
4776 # conditions: An `etag` is returned in the response to `getIamPolicy`, and
4777 # systems are expected to put that etag in the request to `setIamPolicy` to
4778 # ensure that their change will be applied to the same version of the policy.
4779 #
4780 # **Important:** If you use IAM Conditions, you must include the `etag` field
4781 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
4782 # you to overwrite a version `3` policy with a version `1` policy, and all of
4783 # the conditions in the version `3` policy are lost.
4784 &quot;version&quot;: 42, # Specifies the format of the policy.
4785 #
4786 # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
4787 # are rejected.
4788 #
4789 # Any operation that affects conditional role bindings must specify version
4790 # `3`. This requirement applies to the following operations:
4791 #
4792 # * Getting a policy that includes a conditional role binding
4793 # * Adding a conditional role binding to a policy
4794 # * Changing a conditional role binding in a policy
4795 # * Removing any role binding, with or without a condition, from a policy
4796 # that includes conditions
4797 #
4798 # **Important:** If you use IAM Conditions, you must include the `etag` field
4799 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
4800 # you to overwrite a version `3` policy with a version `1` policy, and all of
4801 # the conditions in the version `3` policy are lost.
4802 #
4803 # If a policy does not include any conditions, operations on that policy may
4804 # specify any valid version or leave the field unset.
4805 #
4806 # To learn which resources support conditions in their IAM policies, see the
4807 # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
4808 &quot;auditConfigs&quot;: [ # Specifies cloud audit logging configuration for this policy.
4809 { # Specifies the audit configuration for a service.
4810 # The configuration determines which permission types are logged, and what
4811 # identities, if any, are exempted from logging.
4812 # An AuditConfig must have one or more AuditLogConfigs.
4813 #
4814 # If there are AuditConfigs for both `allServices` and a specific service,
4815 # the union of the two AuditConfigs is used for that service: the log_types
4816 # specified in each AuditConfig are enabled, and the exempted_members in each
4817 # AuditLogConfig are exempted.
4818 #
4819 # Example Policy with multiple AuditConfigs:
4820 #
4821 # {
4822 # &quot;audit_configs&quot;: [
4823 # {
4824 # &quot;service&quot;: &quot;allServices&quot;
4825 # &quot;audit_log_configs&quot;: [
4826 # {
4827 # &quot;log_type&quot;: &quot;DATA_READ&quot;,
4828 # &quot;exempted_members&quot;: [
4829 # &quot;user:jose@example.com&quot;
4830 # ]
4831 # },
4832 # {
4833 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
4834 # },
4835 # {
4836 # &quot;log_type&quot;: &quot;ADMIN_READ&quot;,
4837 # }
4838 # ]
4839 # },
4840 # {
4841 # &quot;service&quot;: &quot;sampleservice.googleapis.com&quot;
4842 # &quot;audit_log_configs&quot;: [
4843 # {
4844 # &quot;log_type&quot;: &quot;DATA_READ&quot;,
4845 # },
4846 # {
4847 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
4848 # &quot;exempted_members&quot;: [
4849 # &quot;user:aliya@example.com&quot;
4850 # ]
4851 # }
4852 # ]
4853 # }
4854 # ]
4855 # }
4856 #
4857 # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
4858 # logging. It also exempts jose@example.com from DATA_READ logging, and
4859 # aliya@example.com from DATA_WRITE logging.
4860 &quot;service&quot;: &quot;A String&quot;, # Specifies a service that will be enabled for audit logging.
4861 # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
4862 # `allServices` is a special value that covers all services.
4863 &quot;auditLogConfigs&quot;: [ # The configuration for logging of each type of permission.
4864 { # Provides the configuration for logging a type of permissions.
4865 # Example:
4866 #
4867 # {
4868 # &quot;audit_log_configs&quot;: [
4869 # {
4870 # &quot;log_type&quot;: &quot;DATA_READ&quot;,
4871 # &quot;exempted_members&quot;: [
4872 # &quot;user:jose@example.com&quot;
4873 # ]
4874 # },
4875 # {
4876 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
4877 # }
4878 # ]
4879 # }
4880 #
4881 # This enables &#x27;DATA_READ&#x27; and &#x27;DATA_WRITE&#x27; logging, while exempting
4882 # jose@example.com from DATA_READ logging.
4883 &quot;exemptedMembers&quot;: [ # Specifies the identities that do not cause logging for this type of
4884 # permission.
4885 # Follows the same format of Binding.members.
4886 &quot;A String&quot;,
4887 ],
4888 &quot;logType&quot;: &quot;A String&quot;, # The log type that this config enables.
4889 },
4890 ],
4891 },
4892 ],
4893 &quot;bindings&quot;: [ # Associates a list of `members` to a `role`. Optionally, may specify a
Dan O'Mearadd494642020-05-01 07:42:23 -07004894 # `condition` that determines how and when the `bindings` are applied. Each
4895 # of the `bindings` must contain at least one member.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004896 { # Associates `members` with a `role`.
Bu Sun Kim65020912020-05-20 12:08:20 -07004897 &quot;condition&quot;: { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
4898 #
4899 # If the condition evaluates to `true`, then this binding applies to the
4900 # current request.
4901 #
4902 # If the condition evaluates to `false`, then this binding does not apply to
4903 # the current request. However, a different role binding might grant the same
4904 # role to one or more of the members in this binding.
4905 #
4906 # To learn which resources support conditions in their IAM policies, see the
4907 # [IAM
4908 # documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
4909 # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
4910 # are documented at https://github.com/google/cel-spec.
4911 #
4912 # Example (Comparison):
4913 #
4914 # title: &quot;Summary size limit&quot;
4915 # description: &quot;Determines if a summary is less than 100 chars&quot;
4916 # expression: &quot;document.summary.size() &lt; 100&quot;
4917 #
4918 # Example (Equality):
4919 #
4920 # title: &quot;Requestor is owner&quot;
4921 # description: &quot;Determines if requestor is the document owner&quot;
4922 # expression: &quot;document.owner == request.auth.claims.email&quot;
4923 #
4924 # Example (Logic):
4925 #
4926 # title: &quot;Public documents&quot;
4927 # description: &quot;Determine whether the document should be publicly visible&quot;
4928 # expression: &quot;document.type != &#x27;private&#x27; &amp;&amp; document.type != &#x27;internal&#x27;&quot;
4929 #
4930 # Example (Data Manipulation):
4931 #
4932 # title: &quot;Notification string&quot;
4933 # description: &quot;Create a notification string with a timestamp.&quot;
4934 # expression: &quot;&#x27;New message received at &#x27; + string(document.create_time)&quot;
4935 #
4936 # The exact variables and functions that may be referenced within an expression
4937 # are determined by the service that evaluates it. See the service
4938 # documentation for additional information.
4939 &quot;title&quot;: &quot;A String&quot;, # Optional. Title for the expression, i.e. a short string describing
4940 # its purpose. This can be used e.g. in UIs which allow to enter the
4941 # expression.
4942 &quot;location&quot;: &quot;A String&quot;, # Optional. String indicating the location of the expression for error
4943 # reporting, e.g. a file name and a position in the file.
4944 &quot;description&quot;: &quot;A String&quot;, # Optional. Description of the expression. This is a longer text which
4945 # describes the expression, e.g. when hovered over it in a UI.
4946 &quot;expression&quot;: &quot;A String&quot;, # Textual representation of an expression in Common Expression Language
4947 # syntax.
4948 },
4949 &quot;members&quot;: [ # Specifies the identities requesting access for a Cloud Platform resource.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004950 # `members` can have the following values:
4951 #
4952 # * `allUsers`: A special identifier that represents anyone who is
4953 # on the internet; with or without a Google account.
4954 #
4955 # * `allAuthenticatedUsers`: A special identifier that represents anyone
4956 # who is authenticated with a Google account or a service account.
4957 #
4958 # * `user:{emailid}`: An email address that represents a specific Google
Dan O'Mearadd494642020-05-01 07:42:23 -07004959 # account. For example, `alice@example.com` .
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004960 #
4961 #
4962 # * `serviceAccount:{emailid}`: An email address that represents a service
4963 # account. For example, `my-other-app@appspot.gserviceaccount.com`.
4964 #
4965 # * `group:{emailid}`: An email address that represents a Google group.
4966 # For example, `admins@example.com`.
4967 #
Dan O'Mearadd494642020-05-01 07:42:23 -07004968 # * `deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique
4969 # identifier) representing a user that has been recently deleted. For
4970 # example, `alice@example.com?uid=123456789012345678901`. If the user is
4971 # recovered, this value reverts to `user:{emailid}` and the recovered user
4972 # retains the role in the binding.
4973 #
4974 # * `deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address (plus
4975 # unique identifier) representing a service account that has been recently
4976 # deleted. For example,
4977 # `my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901`.
4978 # If the service account is undeleted, this value reverts to
4979 # `serviceAccount:{emailid}` and the undeleted service account retains the
4980 # role in the binding.
4981 #
4982 # * `deleted:group:{emailid}?uid={uniqueid}`: An email address (plus unique
4983 # identifier) representing a Google group that has been recently
4984 # deleted. For example, `admins@example.com?uid=123456789012345678901`. If
4985 # the group is recovered, this value reverts to `group:{emailid}` and the
4986 # recovered group retains the role in the binding.
4987 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004988 #
4989 # * `domain:{domain}`: The G Suite domain (primary) that represents all the
4990 # users of that domain. For example, `google.com` or `example.com`.
4991 #
Bu Sun Kim65020912020-05-20 12:08:20 -07004992 &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004993 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07004994 &quot;role&quot;: &quot;A String&quot;, # Role that is assigned to `members`.
4995 # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004996 },
4997 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004998 },
Bu Sun Kim65020912020-05-20 12:08:20 -07004999 &quot;updateMask&quot;: &quot;A String&quot;, # OPTIONAL: A FieldMask specifying which fields of the policy to modify. Only
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005000 # the fields in the mask will be modified. If no mask is provided, the
5001 # following default mask is used:
Bu Sun Kim65020912020-05-20 12:08:20 -07005002 #
5003 # `paths: &quot;bindings, etag&quot;`
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005004 }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # An Identity and Access Management (IAM) policy, which specifies access
      # controls for Google Cloud resources.
      #
      #
      # A `Policy` is a collection of `bindings`. A `binding` binds one or more
      # `members` to a single `role`. Members can be user accounts, service accounts,
      # Google groups, and domains (such as G Suite). A `role` is a named list of
      # permissions; each `role` can be an IAM predefined role or a user-created
      # custom role.
      #
      # For some types of Google Cloud resources, a `binding` can also specify a
      # `condition`, which is a logical expression that allows access to a resource
      # only if the expression evaluates to `true`. A condition can add constraints
      # based on attributes of the request, the resource, or both. To learn which
      # resources support conditions in their IAM policies, see the
      # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
      #
      # **JSON example:**
      #
      #     {
      #       &quot;bindings&quot;: [
      #         {
      #           &quot;role&quot;: &quot;roles/resourcemanager.organizationAdmin&quot;,
      #           &quot;members&quot;: [
      #             &quot;user:mike@example.com&quot;,
      #             &quot;group:admins@example.com&quot;,
      #             &quot;domain:google.com&quot;,
      #             &quot;serviceAccount:my-project-id@appspot.gserviceaccount.com&quot;
      #           ]
      #         },
      #         {
      #           &quot;role&quot;: &quot;roles/resourcemanager.organizationViewer&quot;,
      #           &quot;members&quot;: [
      #             &quot;user:eve@example.com&quot;
      #           ],
      #           &quot;condition&quot;: {
      #             &quot;title&quot;: &quot;expirable access&quot;,
      #             &quot;description&quot;: &quot;Does not grant access after Sep 2020&quot;,
      #             &quot;expression&quot;: &quot;request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)&quot;,
      #           }
      #         }
      #       ],
      #       &quot;etag&quot;: &quot;BwWWja0YfJA=&quot;,
      #       &quot;version&quot;: 3
      #     }
      #
      # **YAML example:**
      #
      #     bindings:
      #     - members:
      #       - user:mike@example.com
      #       - group:admins@example.com
      #       - domain:google.com
      #       - serviceAccount:my-project-id@appspot.gserviceaccount.com
      #       role: roles/resourcemanager.organizationAdmin
      #     - members:
      #       - user:eve@example.com
      #       role: roles/resourcemanager.organizationViewer
      #       condition:
      #         title: expirable access
      #         description: Does not grant access after Sep 2020
      #         expression: request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)
      #     - etag: BwWWja0YfJA=
      #     - version: 3
      #
      # For a description of IAM and its features, see the
      # [IAM documentation](https://cloud.google.com/iam/docs/).
      &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
          # prevent simultaneous updates of a policy from overwriting each other.
          # It is strongly suggested that systems make use of the `etag` in the
          # read-modify-write cycle to perform policy updates in order to avoid race
          # conditions: An `etag` is returned in the response to `getIamPolicy`, and
          # systems are expected to put that etag in the request to `setIamPolicy` to
          # ensure that their change will be applied to the same version of the policy.
          #
          # **Important:** If you use IAM Conditions, you must include the `etag` field
          # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
          # you to overwrite a version `3` policy with a version `1` policy, and all of
          # the conditions in the version `3` policy are lost.
      &quot;version&quot;: 42, # Specifies the format of the policy.
          #
          # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
          # are rejected.
          #
          # Any operation that affects conditional role bindings must specify version
          # `3`. This requirement applies to the following operations:
          #
          # * Getting a policy that includes a conditional role binding
          # * Adding a conditional role binding to a policy
          # * Changing a conditional role binding in a policy
          # * Removing any role binding, with or without a condition, from a policy
          #   that includes conditions
          #
          # **Important:** If you use IAM Conditions, you must include the `etag` field
          # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
          # you to overwrite a version `3` policy with a version `1` policy, and all of
          # the conditions in the version `3` policy are lost.
          #
          # If a policy does not include any conditions, operations on that policy may
          # specify any valid version or leave the field unset.
          #
          # To learn which resources support conditions in their IAM policies, see the
          # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
      &quot;auditConfigs&quot;: [ # Specifies cloud audit logging configuration for this policy.
        { # Specifies the audit configuration for a service.
            # The configuration determines which permission types are logged, and what
            # identities, if any, are exempted from logging.
            # An AuditConfig must have one or more AuditLogConfigs.
            #
            # If there are AuditConfigs for both `allServices` and a specific service,
            # the union of the two AuditConfigs is used for that service: the log_types
            # specified in each AuditConfig are enabled, and the exempted_members in each
            # AuditLogConfig are exempted.
            #
            # Example Policy with multiple AuditConfigs:
            #
            #     {
            #       &quot;audit_configs&quot;: [
            #         {
            #           &quot;service&quot;: &quot;allServices&quot;
            #           &quot;audit_log_configs&quot;: [
            #             {
            #               &quot;log_type&quot;: &quot;DATA_READ&quot;,
            #               &quot;exempted_members&quot;: [
            #                 &quot;user:jose@example.com&quot;
            #               ]
            #             },
            #             {
            #               &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
            #             },
            #             {
            #               &quot;log_type&quot;: &quot;ADMIN_READ&quot;,
            #             }
            #           ]
            #         },
            #         {
            #           &quot;service&quot;: &quot;sampleservice.googleapis.com&quot;
            #           &quot;audit_log_configs&quot;: [
            #             {
            #               &quot;log_type&quot;: &quot;DATA_READ&quot;,
            #             },
            #             {
            #               &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
            #               &quot;exempted_members&quot;: [
            #                 &quot;user:aliya@example.com&quot;
            #               ]
            #             }
            #           ]
            #         }
            #       ]
            #     }
            #
            # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
            # logging. It also exempts jose@example.com from DATA_READ logging, and
            # aliya@example.com from DATA_WRITE logging.
          &quot;service&quot;: &quot;A String&quot;, # Specifies a service that will be enabled for audit logging.
              # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
              # `allServices` is a special value that covers all services.
          &quot;auditLogConfigs&quot;: [ # The configuration for logging of each type of permission.
            { # Provides the configuration for logging a type of permissions.
                # Example:
                #
                #     {
                #       &quot;audit_log_configs&quot;: [
                #         {
                #           &quot;log_type&quot;: &quot;DATA_READ&quot;,
                #           &quot;exempted_members&quot;: [
                #             &quot;user:jose@example.com&quot;
                #           ]
                #         },
                #         {
                #           &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
                #         }
                #       ]
                #     }
                #
                # This enables &#x27;DATA_READ&#x27; and &#x27;DATA_WRITE&#x27; logging, while exempting
                # jose@example.com from DATA_READ logging.
              &quot;exemptedMembers&quot;: [ # Specifies the identities that do not cause logging for this type of
                  # permission.
                  # Follows the same format of Binding.members.
                &quot;A String&quot;,
              ],
              &quot;logType&quot;: &quot;A String&quot;, # The log type that this config enables.
            },
          ],
        },
      ],
      &quot;bindings&quot;: [ # Associates a list of `members` to a `role`. Optionally, may specify a
          # `condition` that determines how and when the `bindings` are applied. Each
          # of the `bindings` must contain at least one member.
        { # Associates `members` with a `role`.
          &quot;condition&quot;: { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
              #
              # If the condition evaluates to `true`, then this binding applies to the
              # current request.
              #
              # If the condition evaluates to `false`, then this binding does not apply to
              # the current request. However, a different role binding might grant the same
              # role to one or more of the members in this binding.
              #
              # To learn which resources support conditions in their IAM policies, see the
              # [IAM
              # documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
              # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
              # are documented at https://github.com/google/cel-spec.
              #
              # Example (Comparison):
              #
              #     title: &quot;Summary size limit&quot;
              #     description: &quot;Determines if a summary is less than 100 chars&quot;
              #     expression: &quot;document.summary.size() &lt; 100&quot;
              #
              # Example (Equality):
              #
              #     title: &quot;Requestor is owner&quot;
              #     description: &quot;Determines if requestor is the document owner&quot;
              #     expression: &quot;document.owner == request.auth.claims.email&quot;
              #
              # Example (Logic):
              #
              #     title: &quot;Public documents&quot;
              #     description: &quot;Determine whether the document should be publicly visible&quot;
              #     expression: &quot;document.type != &#x27;private&#x27; &amp;&amp; document.type != &#x27;internal&#x27;&quot;
              #
              # Example (Data Manipulation):
              #
              #     title: &quot;Notification string&quot;
              #     description: &quot;Create a notification string with a timestamp.&quot;
              #     expression: &quot;&#x27;New message received at &#x27; + string(document.create_time)&quot;
              #
              # The exact variables and functions that may be referenced within an expression
              # are determined by the service that evaluates it. See the service
              # documentation for additional information.
            &quot;title&quot;: &quot;A String&quot;, # Optional. Title for the expression, i.e. a short string describing
                # its purpose. This can be used e.g. in UIs which allow to enter the
                # expression.
            &quot;location&quot;: &quot;A String&quot;, # Optional. String indicating the location of the expression for error
                # reporting, e.g. a file name and a position in the file.
            &quot;description&quot;: &quot;A String&quot;, # Optional. Description of the expression. This is a longer text which
                # describes the expression, e.g. when hovered over it in a UI.
            &quot;expression&quot;: &quot;A String&quot;, # Textual representation of an expression in Common Expression Language
                # syntax.
          },
          &quot;members&quot;: [ # Specifies the identities requesting access for a Cloud Platform resource.
              # `members` can have the following values:
              #
              # * `allUsers`: A special identifier that represents anyone who is
              #    on the internet; with or without a Google account.
              #
              # * `allAuthenticatedUsers`: A special identifier that represents anyone
              #    who is authenticated with a Google account or a service account.
              #
              # * `user:{emailid}`: An email address that represents a specific Google
              #    account. For example, `alice@example.com` .
              #
              #
              # * `serviceAccount:{emailid}`: An email address that represents a service
              #    account. For example, `my-other-app@appspot.gserviceaccount.com`.
              #
              # * `group:{emailid}`: An email address that represents a Google group.
              #    For example, `admins@example.com`.
              #
              # * `deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique
              #    identifier) representing a user that has been recently deleted. For
              #    example, `alice@example.com?uid=123456789012345678901`. If the user is
              #    recovered, this value reverts to `user:{emailid}` and the recovered user
              #    retains the role in the binding.
              #
              # * `deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address (plus
              #    unique identifier) representing a service account that has been recently
              #    deleted. For example,
              #    `my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901`.
              #    If the service account is undeleted, this value reverts to
              #    `serviceAccount:{emailid}` and the undeleted service account retains the
              #    role in the binding.
              #
              # * `deleted:group:{emailid}?uid={uniqueid}`: An email address (plus unique
              #    identifier) representing a Google group that has been recently
              #    deleted. For example, `admins@example.com?uid=123456789012345678901`. If
              #    the group is recovered, this value reverts to `group:{emailid}` and the
              #    recovered group retains the role in the binding.
              #
              #
              # * `domain:{domain}`: The G Suite domain (primary) that represents all the
              #    users of that domain. For example, `google.com` or `example.com`.
              #
            &quot;A String&quot;,
          ],
          &quot;role&quot;: &quot;A String&quot;, # Role that is assigned to `members`.
              # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
        },
      ],
    }</pre>
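<p>The <code>etag</code> field described above supports an optimistic read-modify-write cycle: read the policy, edit the bindings locally, then write the policy back with the etag from the read so a concurrent update is detected rather than overwritten. The sketch below simulates that contract with a toy in-memory store; the <code>PolicyStore</code> class and <code>grant_role</code> helper are illustrative stand-ins, not part of the client library or the service.</p>

```python
import copy

class PolicyStore:
    """Toy stand-in for a service that enforces etag-based concurrency."""

    def __init__(self):
        self._policy = {"version": 1, "etag": "etag-1", "bindings": []}
        self._writes = 1

    def get_iam_policy(self):
        # Return a copy so callers can edit it freely before writing back.
        return copy.deepcopy(self._policy)

    def set_iam_policy(self, policy):
        # A stale etag means the policy changed between the caller's read
        # and this write; reject instead of silently clobbering.
        if policy.get("etag") != self._policy["etag"]:
            raise ValueError("etag mismatch: policy changed since it was read")
        self._writes += 1
        policy = copy.deepcopy(policy)
        policy["etag"] = "etag-%d" % self._writes  # fresh etag per write
        self._policy = policy
        return copy.deepcopy(policy)

def grant_role(store, role, member):
    # Read-modify-write: the etag from the read travels back in the write.
    policy = store.get_iam_policy()
    policy["bindings"].append({"role": role, "members": [member]})
    return store.set_iam_policy(policy)

store = PolicyStore()
updated = grant_role(store, "roles/ml.viewer", "user:eve@example.com")
```

<p>A write that carries an etag from before another update raises, which is the race the documentation's read-modify-write guidance is meant to prevent.</p>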
</div>

<div class="method">
    <code class="details" id="testIamPermissions">testIamPermissions(resource, body=None, x__xgafv=None)</code>
  <pre>Returns permissions that a caller has on the specified resource.
If the resource does not exist, this will return an empty set of
permissions, not a `NOT_FOUND` error.

Note: This operation is designed to be used for building permission-aware
UIs and command-line tools, not for authorization checking. This operation
may &quot;fail open&quot; without warning.

Args:
  resource: string, REQUIRED: The resource for which the policy detail is being requested.
See the operation documentation for the appropriate value for this field. (required)
  body: object, The request body.
    The object takes the form of:

{ # Request message for `TestIamPermissions` method.
    &quot;permissions&quot;: [ # The set of permissions to check for the `resource`. Permissions with
        # wildcards (such as &#x27;*&#x27; or &#x27;storage.*&#x27;) are not allowed. For more
        # information see
        # [IAM Overview](https://cloud.google.com/iam/docs/overview#permissions).
      &quot;A String&quot;,
    ],
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Response message for `TestIamPermissions` method.
      &quot;permissions&quot;: [ # A subset of `TestPermissionsRequest.permissions` that the caller is
          # allowed.
        &quot;A String&quot;,
      ],
    }</pre>
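<p>Because <code>testIamPermissions</code> returns the allowed subset of the requested permissions rather than an error, a permission-aware UI only needs a membership check against the response. A minimal sketch of that pattern; the <code>enabled_actions</code> helper and the specific permission strings are hypothetical examples, not prescribed by the API.</p>

```python
def enabled_actions(requested, response):
    """Map each requested permission to whether the caller holds it,
    given a TestIamPermissions-style response body."""
    allowed = set(response.get("permissions", []))
    return {perm: perm in allowed for perm in requested}

# Hypothetical request/response pair for a jobs resource: the caller
# asked about two permissions and was granted only one of them.
request_body = {"permissions": ["ml.jobs.get", "ml.jobs.cancel"]}
response = {"permissions": ["ml.jobs.cancel"]}
flags = enabled_actions(request_body["permissions"], response)
```

<p>An empty response (for example, when the resource does not exist and the call &quot;fails open&quot; to no permissions) simply yields all-false flags, so the UI disables every action without special-casing errors.</p>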
</div>

</body></html>