blob: 3952cf6e5805f03640175ebe48de89e218e0bf8a [file] [log] [blame]
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001<html><body>
2<style>
3
4body, h1, h2, h3, div, span, p, pre, a {
5 margin: 0;
6 padding: 0;
7 border: 0;
8 font-weight: inherit;
9 font-style: inherit;
10 font-size: 100%;
11 font-family: inherit;
12 vertical-align: baseline;
13}
14
15body {
16 font-size: 13px;
17 padding: 1em;
18}
19
20h1 {
21 font-size: 26px;
22 margin-bottom: 1em;
23}
24
25h2 {
26 font-size: 24px;
27 margin-bottom: 1em;
28}
29
30h3 {
31 font-size: 20px;
32 margin-bottom: 1em;
33 margin-top: 1em;
34}
35
36pre, code {
37 line-height: 1.5;
38 font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
39}
40
41pre {
42 margin-top: 0.5em;
43}
44
45h1, h2, h3, p {
46 font-family: Arial, sans serif;
47}
48
49h1, h2, h3 {
50 border-bottom: solid #CCC 1px;
51}
52
53.toc_element {
54 margin-top: 0.5em;
55}
56
57.firstline {
58 margin-left: 2 em;
59}
60
61.method {
62 margin-top: 1em;
63 border: solid 1px #CCC;
64 padding: 1em;
65 background: #EEE;
66}
67
68.details {
69 font-weight: bold;
70 font-size: 14px;
71}
72
73</style>
74
Dan O'Mearadd494642020-05-01 07:42:23 -070075<h1><a href="ml_v1.html">AI Platform Training & Prediction API</a> . <a href="ml_v1.projects.html">projects</a> . <a href="ml_v1.projects.jobs.html">jobs</a></h1>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -040076<h2>Instance Methods</h2>
77<p class="toc_element">
Bu Sun Kim715bd7f2019-06-14 16:50:42 -070078 <code><a href="#cancel">cancel(name, body=None, x__xgafv=None)</a></code></p>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -040079<p class="firstline">Cancels a running job.</p>
80<p class="toc_element">
Dan O'Mearadd494642020-05-01 07:42:23 -070081 <code><a href="#create">create(parent, body=None, x__xgafv=None)</a></code></p>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -040082<p class="firstline">Creates a training or a batch prediction job.</p>
83<p class="toc_element">
Thomas Coffee2f245372017-03-27 10:39:26 -070084 <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -040085<p class="firstline">Describes a job.</p>
86<p class="toc_element">
Dan O'Mearadd494642020-05-01 07:42:23 -070087 <code><a href="#getIamPolicy">getIamPolicy(resource, options_requestedPolicyVersion=None, x__xgafv=None)</a></code></p>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -070088<p class="firstline">Gets the access control policy for a resource.</p>
89<p class="toc_element">
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -070090 <code><a href="#list">list(parent, filter=None, pageToken=None, pageSize=None, x__xgafv=None)</a></code></p>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -040091<p class="firstline">Lists the jobs in the project.</p>
92<p class="toc_element">
93 <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
94<p class="firstline">Retrieves the next page of results.</p>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -070095<p class="toc_element">
Dan O'Mearadd494642020-05-01 07:42:23 -070096 <code><a href="#patch">patch(name, body=None, updateMask=None, x__xgafv=None)</a></code></p>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -070097<p class="firstline">Updates a specific job resource.</p>
98<p class="toc_element">
Dan O'Mearadd494642020-05-01 07:42:23 -070099 <code><a href="#setIamPolicy">setIamPolicy(resource, body=None, x__xgafv=None)</a></code></p>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700100<p class="firstline">Sets the access control policy on the specified resource. Replaces any</p>
101<p class="toc_element">
Dan O'Mearadd494642020-05-01 07:42:23 -0700102 <code><a href="#testIamPermissions">testIamPermissions(resource, body=None, x__xgafv=None)</a></code></p>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700103<p class="firstline">Returns permissions that a caller has on the specified resource.</p>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400104<h3>Method Details</h3>
105<div class="method">
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700106 <code class="details" id="cancel">cancel(name, body=None, x__xgafv=None)</code>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400107 <pre>Cancels a running job.
108
109Args:
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700110 name: string, Required. The name of the job to cancel. (required)
111 body: object, The request body.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400112 The object takes the form of:
113
114{ # Request message for the CancelJob method.
115 }
116
117 x__xgafv: string, V1 error format.
118 Allowed values
119 1 - v1 error format
120 2 - v2 error format
121
122Returns:
123 An object of the form:
124
125 { # A generic empty message that you can re-use to avoid defining duplicated
126 # empty messages in your APIs. A typical example is to use it as the request
127 # or the response type of an API method. For instance:
128 #
129 # service Foo {
130 # rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
131 # }
132 #
133 # The JSON representation for `Empty` is empty JSON object `{}`.
134 }</pre>
135</div>
136
137<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -0700138 <code class="details" id="create">create(parent, body=None, x__xgafv=None)</code>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400139 <pre>Creates a training or a batch prediction job.
140
141Args:
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700142 parent: string, Required. The project name. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -0700143 body: object, The request body.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400144 The object takes the form of:
145
146{ # Represents a training or prediction job.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700147 &quot;predictionInput&quot;: { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
148 &quot;versionName&quot;: &quot;A String&quot;, # Use this field if you want to specify a version of the model to use. The
149 # string is formatted the same way as `model_version`, with the addition
150 # of the version information:
151 #
152 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION&quot;`
153 &quot;modelName&quot;: &quot;A String&quot;, # Use this field if you want to use the default version for the specified
154 # model. The string must use the following format:
155 #
156 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL&quot;`
157 &quot;uri&quot;: &quot;A String&quot;, # Use this field if you want to specify a Google Cloud Storage path for
158 # the model to use.
159 &quot;maxWorkerCount&quot;: &quot;A String&quot;, # Optional. The maximum number of workers to be used for parallel processing.
160 # Defaults to 10 if not specified.
161 &quot;signatureName&quot;: &quot;A String&quot;, # Optional. The name of the signature defined in the SavedModel to use for
162 # this job. Please refer to
163 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
164 # for information about how to use signatures.
165 #
166 # Defaults to
167 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
168 # , which is &quot;serving_default&quot;.
169 &quot;outputPath&quot;: &quot;A String&quot;, # Required. The output Google Cloud Storage location.
170 &quot;outputDataFormat&quot;: &quot;A String&quot;, # Optional. Format of the output data files, defaults to JSON.
171 &quot;dataFormat&quot;: &quot;A String&quot;, # Required. The format of the input data files.
172 &quot;batchSize&quot;: &quot;A String&quot;, # Optional. Number of records per batch, defaults to 64.
173 # The service will buffer batch_size number of records in memory before
174 # invoking one Tensorflow prediction call internally. So take the record
175 # size and memory available into consideration when setting this parameter.
176 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for this batch
177 # prediction. If not set, AI Platform will pick the runtime version used
178 # during the CreateVersion request for this model version, or choose the
179 # latest stable version when model version information is not available
180 # such as when the model is specified by uri.
181 &quot;inputPaths&quot;: [ # Required. The Cloud Storage location of the input data files. May contain
182 # &lt;a href=&quot;/storage/docs/gsutil/addlhelp/WildcardNames&quot;&gt;wildcards&lt;/a&gt;.
183 &quot;A String&quot;,
184 ],
185 &quot;region&quot;: &quot;A String&quot;, # Required. The Google Compute Engine region to run the prediction job in.
186 # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
187 # for AI Platform services.
188 },
189 &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
Bu Sun Kim65020912020-05-20 12:08:20 -0700190 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
Dan O'Mearadd494642020-05-01 07:42:23 -0700191 # prevent simultaneous updates of a job from overwriting each other.
192 # It is strongly suggested that systems make use of the `etag` in the
193 # read-modify-write cycle to perform job updates in order to avoid race
194 # conditions: An `etag` is returned in the response to `GetJob`, and
195 # systems are expected to put that etag in the request to `UpdateJob` to
196 # ensure that their change will be applied to the same version of the job.
Bu Sun Kim65020912020-05-20 12:08:20 -0700197 &quot;trainingInput&quot;: { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
Dan O'Mearadd494642020-05-01 07:42:23 -0700198 # to submit your training job, you can specify the input parameters as
199 # command-line arguments and/or in a YAML configuration file referenced from
200 # the --config command-line argument. For details, see the guide to [submitting
201 # a training job](/ai-platform/training/docs/training-jobs).
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700202 &quot;workerCount&quot;: &quot;A String&quot;, # Optional. The number of worker replicas to use for the training job. Each
203 # replica in the cluster will be of the type specified in `worker_type`.
204 #
205 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
206 # set this value, you must also set `worker_type`.
207 #
208 # The default value is zero.
209 &quot;pythonModule&quot;: &quot;A String&quot;, # Required. The Python module name to run after installing the packages.
210 &quot;args&quot;: [ # Optional. Command-line arguments passed to the training application when it
211 # starts. If your job uses a custom container, then the arguments are passed
212 # to the container&#x27;s &lt;a class=&quot;external&quot; target=&quot;_blank&quot;
213 # href=&quot;https://docs.docker.com/engine/reference/builder/#entrypoint&quot;&gt;
214 # `ENTRYPOINT`&lt;/a&gt; command.
215 &quot;A String&quot;,
216 ],
Bu Sun Kim65020912020-05-20 12:08:20 -0700217 &quot;parameterServerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
218 #
219 # You should only set `parameterServerConfig.acceleratorConfig` if
220 # `parameterServerType` is set to a Compute Engine machine type. [Learn
221 # about restrictions on accelerator configurations for
222 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
223 #
224 # Set `parameterServerConfig.imageUri` only if you build a custom image for
225 # your parameter server. If `parameterServerConfig.imageUri` has not been
226 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
227 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kim65020912020-05-20 12:08:20 -0700228 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
229 # Registry. Learn more about [configuring custom
230 # containers](/ai-platform/training/docs/distributed-training-containers).
231 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
232 # The following rules apply for container_command and container_args:
233 # - If you do not supply command or args:
234 # The defaults defined in the Docker image are used.
235 # - If you supply a command but no args:
236 # The default EntryPoint and the default Cmd defined in the Docker image
237 # are ignored. Your command is run without any arguments.
238 # - If you supply only args:
239 # The default Entrypoint defined in the Docker image is run with the args
240 # that you supplied.
241 # - If you supply a command and args:
242 # The default Entrypoint and the default Cmd defined in the Docker image
243 # are ignored. Your command is run with your args.
244 # It cannot be set if custom container image is
245 # not provided.
246 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
247 # both cannot be set at the same time.
248 &quot;A String&quot;,
249 ],
250 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
251 # [Learn about restrictions on accelerator configurations for
252 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
253 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
254 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
255 # [accelerators for online
256 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
257 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
258 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
259 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700260 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
261 # the one used in the custom container. This field is required if the replica
262 # is a TPU worker that uses a custom container. Otherwise, do not specify
263 # this field. This must be a [runtime version that currently supports
264 # training with
265 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
266 #
267 # Note that the version of TensorFlow included in a runtime version may
268 # differ from the numbering of the runtime version itself, because it may
269 # have a different [patch
270 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
271 # In this field, you must specify the runtime version (TensorFlow minor
272 # version). For example, if your custom container runs TensorFlow `1.x.y`,
273 # specify `1.x`.
274 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
275 # If provided, it will override default ENTRYPOINT of the docker image.
276 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
277 # It cannot be set if custom container image is
278 # not provided.
279 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
280 # both cannot be set at the same time.
281 &quot;A String&quot;,
282 ],
Bu Sun Kim65020912020-05-20 12:08:20 -0700283 },
284 &quot;encryptionConfig&quot;: { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
285 # protect resources created by a training job, instead of using Google&#x27;s
286 # default encryption. If this is set, then all resources created by the
287 # training job will be encrypted with the customer-managed encryption key
288 # that you specify.
289 #
290 # [Learn how and when to use CMEK with AI Platform
291 # Training](/ai-platform/training/docs/cmek).
292 # a resource.
293 &quot;kmsKeyName&quot;: &quot;A String&quot;, # The Cloud KMS resource identifier of the customer-managed encryption key
294 # used to protect a resource, such as a training job. It has the following
295 # format:
296 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
297 },
298 &quot;hyperparameters&quot;: { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
Bu Sun Kim65020912020-05-20 12:08:20 -0700299 &quot;params&quot;: [ # Required. The set of parameters to tune.
300 { # Represents a single hyperparameter to optimize.
Bu Sun Kim65020912020-05-20 12:08:20 -0700301 &quot;categoricalValues&quot;: [ # Required if type is `CATEGORICAL`. The list of possible categories.
302 &quot;A String&quot;,
303 ],
304 &quot;parameterName&quot;: &quot;A String&quot;, # Required. The parameter name must be unique amongst all ParameterConfigs in
305 # a HyperparameterSpec message. E.g., &quot;learning_rate&quot;.
306 &quot;minValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
307 # should be unset if type is `CATEGORICAL`. This value should be integers if
308 # type is INTEGER.
309 &quot;discreteValues&quot;: [ # Required if type is `DISCRETE`.
310 # A list of feasible points.
311 # The list should be in strictly increasing order. For instance, this
312 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
313 # should not contain more than 1,000 values.
314 3.14,
315 ],
316 &quot;scaleType&quot;: &quot;A String&quot;, # Optional. How the parameter should be scaled to the hypercube.
317 # Leave unset for categorical parameters.
318 # Some kind of scaling is strongly recommended for real or integral
319 # parameters (e.g., `UNIT_LINEAR_SCALE`).
320 &quot;maxValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
321 # should be unset if type is `CATEGORICAL`. This value should be integers if
322 # type is `INTEGER`.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700323 &quot;type&quot;: &quot;A String&quot;, # Required. The type of the parameter.
Bu Sun Kim65020912020-05-20 12:08:20 -0700324 },
325 ],
326 &quot;enableTrialEarlyStopping&quot;: True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
327 # early stopping.
328 &quot;resumePreviousJobId&quot;: &quot;A String&quot;, # Optional. The prior hyperparameter tuning job id that users hope to
329 # continue with. The job id will be used to find the corresponding vizier
330 # study guid and resume the study.
331 &quot;maxParallelTrials&quot;: 42, # Optional. The number of training trials to run concurrently.
332 # You can reduce the time it takes to perform hyperparameter tuning by adding
333 # trials in parallel. However, each trail only benefits from the information
334 # gained in completed trials. That means that a trial does not get access to
335 # the results of trials running at the same time, which could reduce the
336 # quality of the overall optimization.
337 #
338 # Each trial will use the same scale tier and machine types.
339 #
340 # Defaults to one.
341 &quot;maxFailedTrials&quot;: 42, # Optional. The number of failed trials that need to be seen before failing
342 # the hyperparameter tuning job. You can specify this field to override the
343 # default failing criteria for AI Platform hyperparameter tuning jobs.
344 #
345 # Defaults to zero, which means the service decides when a hyperparameter
346 # job should fail.
347 &quot;goal&quot;: &quot;A String&quot;, # Required. The type of goal to use for tuning. Available types are
348 # `MAXIMIZE` and `MINIMIZE`.
349 #
350 # Defaults to `MAXIMIZE`.
351 &quot;maxTrials&quot;: 42, # Optional. How many training trials should be attempted to optimize
352 # the specified hyperparameters.
353 #
354 # Defaults to one.
355 &quot;algorithm&quot;: &quot;A String&quot;, # Optional. The search algorithm specified for the hyperparameter
356 # tuning job.
357 # Uses the default AI Platform hyperparameter tuning
358 # algorithm if unspecified.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700359 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # Optional. The TensorFlow summary tag name to use for optimizing trials. For
360 # current versions of TensorFlow, this tag name should exactly match what is
361 # shown in TensorBoard, including all scopes. For versions of TensorFlow
362 # prior to 0.12, this should be only the tag passed to tf.Summary.
363 # By default, &quot;training/hptuning/metric&quot; will be used.
Bu Sun Kim65020912020-05-20 12:08:20 -0700364 },
365 &quot;workerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
366 #
367 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
368 # to a Compute Engine machine type. [Learn about restrictions on accelerator
369 # configurations for
370 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
371 #
372 # Set `workerConfig.imageUri` only if you build a custom image for your
373 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
374 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
375 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kim65020912020-05-20 12:08:20 -0700376 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
377 # Registry. Learn more about [configuring custom
378 # containers](/ai-platform/training/docs/distributed-training-containers).
379 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
380 # The following rules apply for container_command and container_args:
381 # - If you do not supply command or args:
382 # The defaults defined in the Docker image are used.
383 # - If you supply a command but no args:
384 # The default EntryPoint and the default Cmd defined in the Docker image
385 # are ignored. Your command is run without any arguments.
386 # - If you supply only args:
387 # The default Entrypoint defined in the Docker image is run with the args
388 # that you supplied.
389 # - If you supply a command and args:
390 # The default Entrypoint and the default Cmd defined in the Docker image
391 # are ignored. Your command is run with your args.
392 # It cannot be set if custom container image is
393 # not provided.
394 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
395 # both cannot be set at the same time.
396 &quot;A String&quot;,
397 ],
398 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
399 # [Learn about restrictions on accelerator configurations for
400 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
401 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
402 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
403 # [accelerators for online
404 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
405 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
406 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
407 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700408 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
409 # the one used in the custom container. This field is required if the replica
410 # is a TPU worker that uses a custom container. Otherwise, do not specify
411 # this field. This must be a [runtime version that currently supports
412 # training with
413 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
414 #
415 # Note that the version of TensorFlow included in a runtime version may
416 # differ from the numbering of the runtime version itself, because it may
417 # have a different [patch
418 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
419 # In this field, you must specify the runtime version (TensorFlow minor
420 # version). For example, if your custom container runs TensorFlow `1.x.y`,
421 # specify `1.x`.
422 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
423 # If provided, it will override default ENTRYPOINT of the docker image.
424 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
425 # It cannot be set if custom container image is
426 # not provided.
427 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
428 # both cannot be set at the same time.
429 &quot;A String&quot;,
430 ],
Bu Sun Kim65020912020-05-20 12:08:20 -0700431 },
432 &quot;parameterServerCount&quot;: &quot;A String&quot;, # Optional. The number of parameter server replicas to use for the training
433 # job. Each replica in the cluster will be of the type specified in
434 # `parameter_server_type`.
435 #
436 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
437 # set this value, you must also set `parameter_server_type`.
438 #
439 # The default value is zero.
440 &quot;packageUris&quot;: [ # Required. The Google Cloud Storage location of the packages with
441 # the training program and any additional dependencies.
442 # The maximum number of package URIs is 100.
443 &quot;A String&quot;,
444 ],
445 &quot;evaluatorCount&quot;: &quot;A String&quot;, # Optional. The number of evaluator replicas to use for the training job.
446 # Each replica in the cluster will be of the type specified in
447 # `evaluator_type`.
448 #
449 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
450 # set this value, you must also set `evaluator_type`.
451 #
452 # The default value is zero.
453 &quot;masterType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
454 # job&#x27;s master worker. You must specify this field when `scaleTier` is set to
Dan O'Mearadd494642020-05-01 07:42:23 -0700455 # `CUSTOM`.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400456 #
Dan O'Mearadd494642020-05-01 07:42:23 -0700457 # You can use certain Compute Engine machine types directly in this field.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400458 # The following types are supported:
459 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700460 # - `n1-standard-4`
461 # - `n1-standard-8`
462 # - `n1-standard-16`
463 # - `n1-standard-32`
464 # - `n1-standard-64`
465 # - `n1-standard-96`
466 # - `n1-highmem-2`
467 # - `n1-highmem-4`
468 # - `n1-highmem-8`
469 # - `n1-highmem-16`
470 # - `n1-highmem-32`
471 # - `n1-highmem-64`
472 # - `n1-highmem-96`
473 # - `n1-highcpu-16`
474 # - `n1-highcpu-32`
475 # - `n1-highcpu-64`
476 # - `n1-highcpu-96`
477 #
Dan O'Mearadd494642020-05-01 07:42:23 -0700478 # Learn more about [using Compute Engine machine
479 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700480 #
Dan O'Mearadd494642020-05-01 07:42:23 -0700481 # Alternatively, you can use the following legacy machine types:
482 #
483 # - `standard`
484 # - `large_model`
485 # - `complex_model_s`
486 # - `complex_model_m`
487 # - `complex_model_l`
488 # - `standard_gpu`
489 # - `complex_model_m_gpu`
490 # - `complex_model_l_gpu`
491 # - `standard_p100`
492 # - `complex_model_m_p100`
493 # - `standard_v100`
494 # - `large_model_v100`
495 # - `complex_model_m_v100`
496 # - `complex_model_l_v100`
497 #
498 # Learn more about [using legacy machine
499 # types](/ml-engine/docs/machine-types#legacy-machine-types).
500 #
501 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
502 # field. Learn more about the [special configuration options for training
503 # with
504 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
Bu Sun Kim65020912020-05-20 12:08:20 -0700505 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for training. You must
506 # either specify this field or specify `masterConfig.imageUri`.
507 #
508 # For more information, see the [runtime version
509 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
510 # manage runtime versions](/ai-platform/training/docs/versioning).
511 &quot;evaluatorType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
512 # job&#x27;s evaluator nodes.
513 #
514 # The supported values are the same as those described in the entry for
515 # `masterType`.
516 #
517 # This value must be consistent with the category of machine type that
518 # `masterType` uses. In other words, both must be Compute Engine machine
519 # types or both must be legacy machine types.
520 #
521 # This value must be present when `scaleTier` is set to `CUSTOM` and
522 # `evaluatorCount` is greater than zero.
Bu Sun Kim65020912020-05-20 12:08:20 -0700523 &quot;workerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
524 # job&#x27;s worker nodes.
525 #
526 # The supported values are the same as those described in the entry for
527 # `masterType`.
528 #
529 # This value must be consistent with the category of machine type that
530 # `masterType` uses. In other words, both must be Compute Engine machine
531 # types or both must be legacy machine types.
532 #
533 # If you use `cloud_tpu` for this value, see special instructions for
534 # [configuring a custom TPU
535 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
536 #
537 # This value must be present when `scaleTier` is set to `CUSTOM` and
538 # `workerCount` is greater than zero.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700539 &quot;region&quot;: &quot;A String&quot;, # Required. The region to run the training job in. See the [available
540 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
Bu Sun Kim65020912020-05-20 12:08:20 -0700541 &quot;parameterServerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
542 # job&#x27;s parameter server.
543 #
544 # The supported values are the same as those described in the entry for
545 # `master_type`.
546 #
547 # This value must be consistent with the category of machine type that
548 # `masterType` uses. In other words, both must be Compute Engine machine
549 # types or both must be legacy machine types.
550 #
551 # This value must be present when `scaleTier` is set to `CUSTOM` and
552 # `parameter_server_count` is greater than zero.
553 &quot;masterConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
554 #
555 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
556 # to a Compute Engine machine type. Learn about [restrictions on accelerator
557 # configurations for
558 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
559 #
560 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
561 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
562 # about [configuring custom
563 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kim65020912020-05-20 12:08:20 -0700564 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
565 # Registry. Learn more about [configuring custom
566 # containers](/ai-platform/training/docs/distributed-training-containers).
567 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
568 # The following rules apply for container_command and container_args:
569 # - If you do not supply command or args:
570 # The defaults defined in the Docker image are used.
571 # - If you supply a command but no args:
572 # The default EntryPoint and the default Cmd defined in the Docker image
573 # are ignored. Your command is run without any arguments.
574 # - If you supply only args:
575 # The default Entrypoint defined in the Docker image is run with the args
576 # that you supplied.
577 # - If you supply a command and args:
578 # The default Entrypoint and the default Cmd defined in the Docker image
579 # are ignored. Your command is run with your args.
580 # It cannot be set if custom container image is
581 # not provided.
582 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
583 # both cannot be set at the same time.
584 &quot;A String&quot;,
585 ],
586 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
587 # [Learn about restrictions on accelerator configurations for
588 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
589 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
590 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
591 # [accelerators for online
592 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
593 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
594 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
595 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700596 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
597 # the one used in the custom container. This field is required if the replica
598 # is a TPU worker that uses a custom container. Otherwise, do not specify
599 # this field. This must be a [runtime version that currently supports
600 # training with
601 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
602 #
603 # Note that the version of TensorFlow included in a runtime version may
604 # differ from the numbering of the runtime version itself, because it may
605 # have a different [patch
606 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
607 # In this field, you must specify the runtime version (TensorFlow minor
608 # version). For example, if your custom container runs TensorFlow `1.x.y`,
609 # specify `1.x`.
610 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
611 # If provided, it will override default ENTRYPOINT of the docker image.
612 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
613 # It cannot be set if custom container image is
614 # not provided.
615 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
616 # both cannot be set at the same time.
617 &quot;A String&quot;,
618 ],
Bu Sun Kim65020912020-05-20 12:08:20 -0700619 },
620 &quot;scaleTier&quot;: &quot;A String&quot;, # Required. Specifies the machine types, the number of replicas for workers
621 # and parameter servers.
622 &quot;jobDir&quot;: &quot;A String&quot;, # Optional. A Google Cloud Storage path in which to store training outputs
Dan O'Mearadd494642020-05-01 07:42:23 -0700623 # and other data needed for training. This path is passed to your TensorFlow
Bu Sun Kim65020912020-05-20 12:08:20 -0700624 # program as the &#x27;--job-dir&#x27; command-line argument. The benefit of specifying
Dan O'Mearadd494642020-05-01 07:42:23 -0700625 # this field is that Cloud ML validates the path for use in training.
Bu Sun Kim65020912020-05-20 12:08:20 -0700626 &quot;pythonVersion&quot;: &quot;A String&quot;, # Optional. The version of Python used in training. You must either specify
627 # this field or specify `masterConfig.imageUri`.
628 #
629 # The following Python versions are available:
630 #
631 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
632 # later.
633 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
634 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
635 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
636 # earlier.
637 #
638 # Read more about the Python versions available for [each runtime
639 # version](/ml-engine/docs/runtime-version-list).
Bu Sun Kim65020912020-05-20 12:08:20 -0700640 &quot;scheduling&quot;: { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
641 &quot;maxWaitTime&quot;: &quot;A String&quot;,
642 &quot;maxRunningTime&quot;: &quot;A String&quot;, # Optional. The maximum job running time, expressed in seconds. The field can
643 # contain up to nine fractional digits, terminated by `s`. If not specified,
644 # this field defaults to `604800s` (seven days).
Dan O'Mearadd494642020-05-01 07:42:23 -0700645 #
646 # If the training job is still running after this duration, AI Platform
647 # Training cancels it.
648 #
649 # For example, if you want to ensure your job runs for no more than 2 hours,
650 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
651 # minute).
652 #
653 # If you submit your training job using the `gcloud` tool, you can [provide
654 # this field in a `config.yaml`
655 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
656 # For example:
657 #
658 # ```yaml
659 # trainingInput:
660 # ...
661 # scheduling:
662 # maxRunningTime: 7200s
663 # ...
664 # ```
665 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700666 &quot;network&quot;: &quot;A String&quot;, # Optional. The full name of the Google Compute Engine
667 # [network](/compute/docs/networks-and-firewalls#networks) to which the Job
668 # is peered. For example, projects/12345/global/networks/myVPC. Format is of
669 # the form projects/{project}/global/networks/{network}. Where {project} is a
670 # project number, as in &#x27;12345&#x27;, and {network} is network name.&quot;.
671 #
672 # Private services access must already be configured for the network. If left
673 # unspecified, the Job is not peered with any network. Learn more -
674 # Connecting Job to user network over private
675 # IP.
Bu Sun Kim65020912020-05-20 12:08:20 -0700676 &quot;evaluatorConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
Dan O'Mearadd494642020-05-01 07:42:23 -0700677 #
678 # You should only set `evaluatorConfig.acceleratorConfig` if
679 # `evaluatorType` is set to a Compute Engine machine type. [Learn
680 # about restrictions on accelerator configurations for
681 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
682 #
683 # Set `evaluatorConfig.imageUri` only if you build a custom image for
684 # your evaluator. If `evaluatorConfig.imageUri` has not been
685 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
686 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kim65020912020-05-20 12:08:20 -0700687 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
688 # Registry. Learn more about [configuring custom
689 # containers](/ai-platform/training/docs/distributed-training-containers).
690 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
691 # The following rules apply for container_command and container_args:
692 # - If you do not supply command or args:
693 # The defaults defined in the Docker image are used.
694 # - If you supply a command but no args:
695 # The default EntryPoint and the default Cmd defined in the Docker image
696 # are ignored. Your command is run without any arguments.
697 # - If you supply only args:
698 # The default Entrypoint defined in the Docker image is run with the args
699 # that you supplied.
700 # - If you supply a command and args:
701 # The default Entrypoint and the default Cmd defined in the Docker image
702 # are ignored. Your command is run with your args.
703 # It cannot be set if custom container image is
704 # not provided.
705 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
706 # both cannot be set at the same time.
707 &quot;A String&quot;,
708 ],
709 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
Dan O'Mearadd494642020-05-01 07:42:23 -0700710 # [Learn about restrictions on accelerator configurations for
711 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
712 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
713 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
714 # [accelerators for online
715 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
Bu Sun Kim65020912020-05-20 12:08:20 -0700716 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
717 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
Dan O'Mearadd494642020-05-01 07:42:23 -0700718 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700719 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
720 # the one used in the custom container. This field is required if the replica
721 # is a TPU worker that uses a custom container. Otherwise, do not specify
722 # this field. This must be a [runtime version that currently supports
723 # training with
724 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
725 #
726 # Note that the version of TensorFlow included in a runtime version may
727 # differ from the numbering of the runtime version itself, because it may
728 # have a different [patch
729 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
730 # In this field, you must specify the runtime version (TensorFlow minor
731 # version). For example, if your custom container runs TensorFlow `1.x.y`,
732 # specify `1.x`.
733 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
734 # If provided, it will override default ENTRYPOINT of the docker image.
735 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
736 # It cannot be set if custom container image is
737 # not provided.
738 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
739 # both cannot be set at the same time.
740 &quot;A String&quot;,
741 ],
Dan O'Mearadd494642020-05-01 07:42:23 -0700742 },
Bu Sun Kim65020912020-05-20 12:08:20 -0700743 &quot;useChiefInTfConfig&quot;: True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
Dan O'Mearadd494642020-05-01 07:42:23 -0700744 # variable when training with a custom container. Defaults to `false`. [Learn
745 # more about this
746 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
747 #
Bu Sun Kim65020912020-05-20 12:08:20 -0700748 # This field has no effect for training jobs that don&#x27;t use a custom
Dan O'Mearadd494642020-05-01 07:42:23 -0700749 # container.
Bu Sun Kim65020912020-05-20 12:08:20 -0700750 },
751 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of a job.
752 &quot;jobId&quot;: &quot;A String&quot;, # Required. The user-specified id of the job.
753 &quot;endTime&quot;: &quot;A String&quot;, # Output only. When the job processing was completed.
754 &quot;startTime&quot;: &quot;A String&quot;, # Output only. When the job processing was started.
755 &quot;predictionOutput&quot;: { # Represents results of a prediction job. # The current prediction job result.
756 &quot;errorCount&quot;: &quot;A String&quot;, # The number of data instances which resulted in errors.
757 &quot;outputPath&quot;: &quot;A String&quot;, # The output Google Cloud Storage location provided at the job creation time.
758 &quot;nodeHours&quot;: 3.14, # Node hours used by the batch prediction job.
759 &quot;predictionCount&quot;: &quot;A String&quot;, # The number of generated predictions.
760 },
761 &quot;trainingOutput&quot;: { # Represents results of a training job. Output only. # The current training job result.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700762 &quot;isBuiltInAlgorithmJob&quot;: True or False, # Whether this job is a built-in Algorithm job.
763 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
764 # Only set for built-in algorithms jobs.
765 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
766 # saves the trained model. Only set for successful jobs that don&#x27;t use
767 # hyperparameter tuning.
768 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
769 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
770 # trained.
771 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
772 },
Bu Sun Kim65020912020-05-20 12:08:20 -0700773 &quot;trials&quot;: [ # Results for individual Hyperparameter trials.
774 # Only set for hyperparameter tuning jobs.
775 { # Represents the result of a single hyperparameter tuning trial from a
776 # training job. The TrainingOutput object that is returned on successful
777 # completion of a training job with hyperparameter tuning includes a list
778 # of HyperparameterOutput objects, one for each successful trial.
Bu Sun Kim65020912020-05-20 12:08:20 -0700779 &quot;trialId&quot;: &quot;A String&quot;, # The trial id for these results.
780 &quot;endTime&quot;: &quot;A String&quot;, # Output only. End time for the trial.
Bu Sun Kim65020912020-05-20 12:08:20 -0700781 &quot;startTime&quot;: &quot;A String&quot;, # Output only. Start time for the trial.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700782 &quot;isTrialStoppedEarly&quot;: True or False, # True if the trial is stopped early.
Bu Sun Kim65020912020-05-20 12:08:20 -0700783 &quot;finalMetric&quot;: { # An observed value of a metric. # The final objective metric seen for this trial.
Bu Sun Kim65020912020-05-20 12:08:20 -0700784 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700785 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
Bu Sun Kim65020912020-05-20 12:08:20 -0700786 },
787 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
788 # Only set for trials of built-in algorithms jobs that have succeeded.
Bu Sun Kim65020912020-05-20 12:08:20 -0700789 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
790 # saves the trained model. Only set for successful jobs that don&#x27;t use
791 # hyperparameter tuning.
792 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
793 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
794 # trained.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700795 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
Bu Sun Kim65020912020-05-20 12:08:20 -0700796 },
797 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of the trial.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700798 &quot;allMetrics&quot;: [ # All recorded object metrics for this trial. This field is not currently
799 # populated.
800 { # An observed value of a metric.
801 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
802 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
803 },
804 ],
805 &quot;hyperparameters&quot;: { # The hyperparameters given to this trial.
806 &quot;a_key&quot;: &quot;A String&quot;,
807 },
Dan O'Mearadd494642020-05-01 07:42:23 -0700808 },
Bu Sun Kim65020912020-05-20 12:08:20 -0700809 ],
810 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # The TensorFlow summary tag name used for optimizing hyperparameter tuning
811 # trials. See
812 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
813 # for more information. Only set for hyperparameter tuning jobs.
814 &quot;completedTrialCount&quot;: &quot;A String&quot;, # The number of hyperparameter tuning trials that completed successfully.
815 # Only set for hyperparameter tuning jobs.
816 &quot;isHyperparameterTuningJob&quot;: True or False, # Whether this job is a hyperparameter tuning job.
817 &quot;consumedMLUnits&quot;: 3.14, # The amount of ML units consumed by the job.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700818 },
Bu Sun Kim65020912020-05-20 12:08:20 -0700819 &quot;createTime&quot;: &quot;A String&quot;, # Output only. When the job was created.
820 &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your jobs.
821 # Each label is a key-value pair, where both the key and the value are
822 # arbitrary strings that you supply.
823 # For more information, see the documentation on
824 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
825 &quot;a_key&quot;: &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700826 },
Bu Sun Kim65020912020-05-20 12:08:20 -0700827 }
828
829 x__xgafv: string, V1 error format.
830 Allowed values
831 1 - v1 error format
832 2 - v2 error format
833
834Returns:
835 An object of the form:
836
837 { # Represents a training or prediction job.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700838 &quot;predictionInput&quot;: { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
839 &quot;versionName&quot;: &quot;A String&quot;, # Use this field if you want to specify a version of the model to use. The
840 # string is formatted the same way as `model_version`, with the addition
841 # of the version information:
842 #
843 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION&quot;`
844 &quot;modelName&quot;: &quot;A String&quot;, # Use this field if you want to use the default version for the specified
845 # model. The string must use the following format:
846 #
847 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL&quot;`
848 &quot;uri&quot;: &quot;A String&quot;, # Use this field if you want to specify a Google Cloud Storage path for
849 # the model to use.
850 &quot;maxWorkerCount&quot;: &quot;A String&quot;, # Optional. The maximum number of workers to be used for parallel processing.
851 # Defaults to 10 if not specified.
852 &quot;signatureName&quot;: &quot;A String&quot;, # Optional. The name of the signature defined in the SavedModel to use for
853 # this job. Please refer to
854 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
855 # for information about how to use signatures.
856 #
857 # Defaults to
858 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
859 # , which is &quot;serving_default&quot;.
860 &quot;outputPath&quot;: &quot;A String&quot;, # Required. The output Google Cloud Storage location.
861 &quot;outputDataFormat&quot;: &quot;A String&quot;, # Optional. Format of the output data files, defaults to JSON.
862 &quot;dataFormat&quot;: &quot;A String&quot;, # Required. The format of the input data files.
863 &quot;batchSize&quot;: &quot;A String&quot;, # Optional. Number of records per batch, defaults to 64.
864 # The service will buffer batch_size number of records in memory before
865 # invoking one Tensorflow prediction call internally. So take the record
866 # size and memory available into consideration when setting this parameter.
867 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for this batch
868 # prediction. If not set, AI Platform will pick the runtime version used
869 # during the CreateVersion request for this model version, or choose the
870 # latest stable version when model version information is not available
871 # such as when the model is specified by uri.
872 &quot;inputPaths&quot;: [ # Required. The Cloud Storage location of the input data files. May contain
873 # &lt;a href=&quot;/storage/docs/gsutil/addlhelp/WildcardNames&quot;&gt;wildcards&lt;/a&gt;.
874 &quot;A String&quot;,
875 ],
876 &quot;region&quot;: &quot;A String&quot;, # Required. The Google Compute Engine region to run the prediction job in.
877 # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
878 # for AI Platform services.
879 },
880 &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
Bu Sun Kim65020912020-05-20 12:08:20 -0700881 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
882 # prevent simultaneous updates of a job from overwriting each other.
883 # It is strongly suggested that systems make use of the `etag` in the
884 # read-modify-write cycle to perform job updates in order to avoid race
885 # conditions: An `etag` is returned in the response to `GetJob`, and
886 # systems are expected to put that etag in the request to `UpdateJob` to
887 # ensure that their change will be applied to the same version of the job.
888 &quot;trainingInput&quot;: { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
889 # to submit your training job, you can specify the input parameters as
890 # command-line arguments and/or in a YAML configuration file referenced from
891 # the --config command-line argument. For details, see the guide to [submitting
892 # a training job](/ai-platform/training/docs/training-jobs).
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700893 &quot;workerCount&quot;: &quot;A String&quot;, # Optional. The number of worker replicas to use for the training job. Each
894 # replica in the cluster will be of the type specified in `worker_type`.
895 #
896 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
897 # set this value, you must also set `worker_type`.
898 #
899 # The default value is zero.
900 &quot;pythonModule&quot;: &quot;A String&quot;, # Required. The Python module name to run after installing the packages.
901 &quot;args&quot;: [ # Optional. Command-line arguments passed to the training application when it
902 # starts. If your job uses a custom container, then the arguments are passed
903 # to the container&#x27;s &lt;a class=&quot;external&quot; target=&quot;_blank&quot;
904 # href=&quot;https://docs.docker.com/engine/reference/builder/#entrypoint&quot;&gt;
905 # `ENTRYPOINT`&lt;/a&gt; command.
906 &quot;A String&quot;,
907 ],
Bu Sun Kim65020912020-05-20 12:08:20 -0700908 &quot;parameterServerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
909 #
910 # You should only set `parameterServerConfig.acceleratorConfig` if
911 # `parameterServerType` is set to a Compute Engine machine type. [Learn
912 # about restrictions on accelerator configurations for
913 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
914 #
915 # Set `parameterServerConfig.imageUri` only if you build a custom image for
916 # your parameter server. If `parameterServerConfig.imageUri` has not been
917 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
918 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kim65020912020-05-20 12:08:20 -0700919 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
920 # Registry. Learn more about [configuring custom
921 # containers](/ai-platform/training/docs/distributed-training-containers).
922 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
923 # The following rules apply for container_command and container_args:
924 # - If you do not supply command or args:
925 # The defaults defined in the Docker image are used.
926 # - If you supply a command but no args:
927 # The default EntryPoint and the default Cmd defined in the Docker image
928 # are ignored. Your command is run without any arguments.
929 # - If you supply only args:
930 # The default Entrypoint defined in the Docker image is run with the args
931 # that you supplied.
932 # - If you supply a command and args:
933 # The default Entrypoint and the default Cmd defined in the Docker image
934 # are ignored. Your command is run with your args.
935        # This field cannot be set unless a custom container image is
936        # provided.
937 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
938 # both cannot be set at the same time.
939 &quot;A String&quot;,
940 ],
941 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
942 # [Learn about restrictions on accelerator configurations for
943 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
944 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
945 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
946 # [accelerators for online
947 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
948 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
949 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
950 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700951 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
952 # the one used in the custom container. This field is required if the replica
953 # is a TPU worker that uses a custom container. Otherwise, do not specify
954 # this field. This must be a [runtime version that currently supports
955 # training with
956 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
957 #
958 # Note that the version of TensorFlow included in a runtime version may
959 # differ from the numbering of the runtime version itself, because it may
960 # have a different [patch
961 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
962 # In this field, you must specify the runtime version (TensorFlow minor
963 # version). For example, if your custom container runs TensorFlow `1.x.y`,
964 # specify `1.x`.
965 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
966        # If provided, it overrides the default ENTRYPOINT of the Docker image.
967        # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
968        # This field cannot be set unless a custom container image is
969        # provided.
970 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
971 # both cannot be set at the same time.
972 &quot;A String&quot;,
973 ],
Bu Sun Kim65020912020-05-20 12:08:20 -0700974 },
975 &quot;encryptionConfig&quot;: { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
976 # protect resources created by a training job, instead of using Google&#x27;s
977 # default encryption. If this is set, then all resources created by the
978 # training job will be encrypted with the customer-managed encryption key
979 # that you specify.
980 #
981 # [Learn how and when to use CMEK with AI Platform
982 # Training](/ai-platform/training/docs/cmek).
983 # a resource.
984 &quot;kmsKeyName&quot;: &quot;A String&quot;, # The Cloud KMS resource identifier of the customer-managed encryption key
985 # used to protect a resource, such as a training job. It has the following
986 # format:
987 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
988 },
989 &quot;hyperparameters&quot;: { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
Bu Sun Kim65020912020-05-20 12:08:20 -0700990 &quot;params&quot;: [ # Required. The set of parameters to tune.
991 { # Represents a single hyperparameter to optimize.
Bu Sun Kim65020912020-05-20 12:08:20 -0700992 &quot;categoricalValues&quot;: [ # Required if type is `CATEGORICAL`. The list of possible categories.
993 &quot;A String&quot;,
994 ],
995 &quot;parameterName&quot;: &quot;A String&quot;, # Required. The parameter name must be unique amongst all ParameterConfigs in
996 # a HyperparameterSpec message. E.g., &quot;learning_rate&quot;.
997 &quot;minValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
998        # should be unset if type is `CATEGORICAL`. This value should be an integer if
999        # type is `INTEGER`.
1000 &quot;discreteValues&quot;: [ # Required if type is `DISCRETE`.
1001 # A list of feasible points.
1002 # The list should be in strictly increasing order. For instance, this
1003 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
1004 # should not contain more than 1,000 values.
1005 3.14,
1006 ],
1007 &quot;scaleType&quot;: &quot;A String&quot;, # Optional. How the parameter should be scaled to the hypercube.
1008 # Leave unset for categorical parameters.
1009 # Some kind of scaling is strongly recommended for real or integral
1010 # parameters (e.g., `UNIT_LINEAR_SCALE`).
1011 &quot;maxValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
1012        # should be unset if type is `CATEGORICAL`. This value should be an integer if
1013 # type is `INTEGER`.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001014 &quot;type&quot;: &quot;A String&quot;, # Required. The type of the parameter.
Bu Sun Kim65020912020-05-20 12:08:20 -07001015 },
1016 ],
1017 &quot;enableTrialEarlyStopping&quot;: True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
1018 # early stopping.
1019 &quot;resumePreviousJobId&quot;: &quot;A String&quot;, # Optional. The prior hyperparameter tuning job id that users hope to
1020 # continue with. The job id will be used to find the corresponding vizier
1021 # study guid and resume the study.
1022 &quot;maxParallelTrials&quot;: 42, # Optional. The number of training trials to run concurrently.
1023 # You can reduce the time it takes to perform hyperparameter tuning by adding
1024        # trials in parallel. However, each trial only benefits from the information
1025 # gained in completed trials. That means that a trial does not get access to
1026 # the results of trials running at the same time, which could reduce the
1027 # quality of the overall optimization.
1028 #
1029 # Each trial will use the same scale tier and machine types.
1030 #
1031 # Defaults to one.
1032 &quot;maxFailedTrials&quot;: 42, # Optional. The number of failed trials that need to be seen before failing
1033 # the hyperparameter tuning job. You can specify this field to override the
1034 # default failing criteria for AI Platform hyperparameter tuning jobs.
1035 #
1036 # Defaults to zero, which means the service decides when a hyperparameter
1037 # job should fail.
1038 &quot;goal&quot;: &quot;A String&quot;, # Required. The type of goal to use for tuning. Available types are
1039 # `MAXIMIZE` and `MINIMIZE`.
1040 #
1041 # Defaults to `MAXIMIZE`.
1042 &quot;maxTrials&quot;: 42, # Optional. How many training trials should be attempted to optimize
1043 # the specified hyperparameters.
1044 #
1045 # Defaults to one.
1046 &quot;algorithm&quot;: &quot;A String&quot;, # Optional. The search algorithm specified for the hyperparameter
1047 # tuning job.
1048 # Uses the default AI Platform hyperparameter tuning
1049 # algorithm if unspecified.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001050 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # Optional. The TensorFlow summary tag name to use for optimizing trials. For
1051 # current versions of TensorFlow, this tag name should exactly match what is
1052 # shown in TensorBoard, including all scopes. For versions of TensorFlow
1053 # prior to 0.12, this should be only the tag passed to tf.Summary.
1054 # By default, &quot;training/hptuning/metric&quot; will be used.
Bu Sun Kim65020912020-05-20 12:08:20 -07001055 },
1056 &quot;workerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
1057 #
1058 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
1059 # to a Compute Engine machine type. [Learn about restrictions on accelerator
1060 # configurations for
1061 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1062 #
1063 # Set `workerConfig.imageUri` only if you build a custom image for your
1064 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
1065 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
1066 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kim65020912020-05-20 12:08:20 -07001067 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
1068 # Registry. Learn more about [configuring custom
1069 # containers](/ai-platform/training/docs/distributed-training-containers).
1070 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
1071 # The following rules apply for container_command and container_args:
1072 # - If you do not supply command or args:
1073 # The defaults defined in the Docker image are used.
1074 # - If you supply a command but no args:
1075 # The default EntryPoint and the default Cmd defined in the Docker image
1076 # are ignored. Your command is run without any arguments.
1077 # - If you supply only args:
1078 # The default Entrypoint defined in the Docker image is run with the args
1079 # that you supplied.
1080 # - If you supply a command and args:
1081 # The default Entrypoint and the default Cmd defined in the Docker image
1082 # are ignored. Your command is run with your args.
1083        # This field cannot be set unless a custom container image is
1084        # provided.
1085 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1086 # both cannot be set at the same time.
1087 &quot;A String&quot;,
1088 ],
1089 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1090 # [Learn about restrictions on accelerator configurations for
1091 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1092 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1093 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1094 # [accelerators for online
1095 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1096 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
1097 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
1098 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001099 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
1100 # the one used in the custom container. This field is required if the replica
1101 # is a TPU worker that uses a custom container. Otherwise, do not specify
1102 # this field. This must be a [runtime version that currently supports
1103 # training with
1104 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1105 #
1106 # Note that the version of TensorFlow included in a runtime version may
1107 # differ from the numbering of the runtime version itself, because it may
1108 # have a different [patch
1109 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1110 # In this field, you must specify the runtime version (TensorFlow minor
1111 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1112 # specify `1.x`.
1113 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
1114        # If provided, it overrides the default ENTRYPOINT of the Docker image.
1115        # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
1116        # This field cannot be set unless a custom container image is
1117        # provided.
1118 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1119 # both cannot be set at the same time.
1120 &quot;A String&quot;,
1121 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07001122 },
1123 &quot;parameterServerCount&quot;: &quot;A String&quot;, # Optional. The number of parameter server replicas to use for the training
1124 # job. Each replica in the cluster will be of the type specified in
1125 # `parameter_server_type`.
1126 #
1127 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
1128 # set this value, you must also set `parameter_server_type`.
1129 #
1130 # The default value is zero.
1131 &quot;packageUris&quot;: [ # Required. The Google Cloud Storage location of the packages with
1132 # the training program and any additional dependencies.
1133 # The maximum number of package URIs is 100.
1134 &quot;A String&quot;,
1135 ],
1136 &quot;evaluatorCount&quot;: &quot;A String&quot;, # Optional. The number of evaluator replicas to use for the training job.
1137 # Each replica in the cluster will be of the type specified in
1138 # `evaluator_type`.
1139 #
1140 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
1141 # set this value, you must also set `evaluator_type`.
1142 #
1143 # The default value is zero.
1144 &quot;masterType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1145 # job&#x27;s master worker. You must specify this field when `scaleTier` is set to
1146 # `CUSTOM`.
1147 #
1148 # You can use certain Compute Engine machine types directly in this field.
1149 # The following types are supported:
1150 #
1151 # - `n1-standard-4`
1152 # - `n1-standard-8`
1153 # - `n1-standard-16`
1154 # - `n1-standard-32`
1155 # - `n1-standard-64`
1156 # - `n1-standard-96`
1157 # - `n1-highmem-2`
1158 # - `n1-highmem-4`
1159 # - `n1-highmem-8`
1160 # - `n1-highmem-16`
1161 # - `n1-highmem-32`
1162 # - `n1-highmem-64`
1163 # - `n1-highmem-96`
1164 # - `n1-highcpu-16`
1165 # - `n1-highcpu-32`
1166 # - `n1-highcpu-64`
1167 # - `n1-highcpu-96`
1168 #
1169 # Learn more about [using Compute Engine machine
1170 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
1171 #
1172 # Alternatively, you can use the following legacy machine types:
1173 #
1174 # - `standard`
1175 # - `large_model`
1176 # - `complex_model_s`
1177 # - `complex_model_m`
1178 # - `complex_model_l`
1179 # - `standard_gpu`
1180 # - `complex_model_m_gpu`
1181 # - `complex_model_l_gpu`
1182 # - `standard_p100`
1183 # - `complex_model_m_p100`
1184 # - `standard_v100`
1185 # - `large_model_v100`
1186 # - `complex_model_m_v100`
1187 # - `complex_model_l_v100`
1188 #
1189 # Learn more about [using legacy machine
1190 # types](/ml-engine/docs/machine-types#legacy-machine-types).
1191 #
1192 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
1193 # field. Learn more about the [special configuration options for training
1194 # with
1195 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
1196 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for training. You must
1197 # either specify this field or specify `masterConfig.imageUri`.
1198 #
1199 # For more information, see the [runtime version
1200 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
1201 # manage runtime versions](/ai-platform/training/docs/versioning).
1202 &quot;evaluatorType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1203 # job&#x27;s evaluator nodes.
1204 #
1205 # The supported values are the same as those described in the entry for
1206 # `masterType`.
1207 #
1208 # This value must be consistent with the category of machine type that
1209 # `masterType` uses. In other words, both must be Compute Engine machine
1210 # types or both must be legacy machine types.
1211 #
1212 # This value must be present when `scaleTier` is set to `CUSTOM` and
1213 # `evaluatorCount` is greater than zero.
Bu Sun Kim65020912020-05-20 12:08:20 -07001214 &quot;workerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1215 # job&#x27;s worker nodes.
1216 #
1217 # The supported values are the same as those described in the entry for
1218 # `masterType`.
1219 #
1220 # This value must be consistent with the category of machine type that
1221 # `masterType` uses. In other words, both must be Compute Engine machine
1222 # types or both must be legacy machine types.
1223 #
1224 # If you use `cloud_tpu` for this value, see special instructions for
1225 # [configuring a custom TPU
1226 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
1227 #
1228 # This value must be present when `scaleTier` is set to `CUSTOM` and
1229 # `workerCount` is greater than zero.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001230 &quot;region&quot;: &quot;A String&quot;, # Required. The region to run the training job in. See the [available
1231 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
Bu Sun Kim65020912020-05-20 12:08:20 -07001232 &quot;parameterServerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1233 # job&#x27;s parameter server.
1234 #
1235 # The supported values are the same as those described in the entry for
1236 # `master_type`.
1237 #
1238 # This value must be consistent with the category of machine type that
1239 # `masterType` uses. In other words, both must be Compute Engine machine
1240 # types or both must be legacy machine types.
1241 #
1242 # This value must be present when `scaleTier` is set to `CUSTOM` and
1243 # `parameter_server_count` is greater than zero.
1244 &quot;masterConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
1245 #
1246 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
1247 # to a Compute Engine machine type. Learn about [restrictions on accelerator
1248 # configurations for
1249 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1250 #
1251 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
1252 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
1253 # about [configuring custom
1254 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kim65020912020-05-20 12:08:20 -07001255 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
1256 # Registry. Learn more about [configuring custom
1257 # containers](/ai-platform/training/docs/distributed-training-containers).
1258 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
1259 # The following rules apply for container_command and container_args:
1260 # - If you do not supply command or args:
1261 # The defaults defined in the Docker image are used.
1262 # - If you supply a command but no args:
1263 # The default EntryPoint and the default Cmd defined in the Docker image
1264 # are ignored. Your command is run without any arguments.
1265 # - If you supply only args:
1266 # The default Entrypoint defined in the Docker image is run with the args
1267 # that you supplied.
1268 # - If you supply a command and args:
1269 # The default Entrypoint and the default Cmd defined in the Docker image
1270 # are ignored. Your command is run with your args.
1271        # This field cannot be set unless a custom container image is
1272        # provided.
1273 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1274 # both cannot be set at the same time.
1275 &quot;A String&quot;,
1276 ],
1277 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1278 # [Learn about restrictions on accelerator configurations for
1279 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1280 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1281 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1282 # [accelerators for online
1283 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1284 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
1285 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
1286 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001287 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
1288 # the one used in the custom container. This field is required if the replica
1289 # is a TPU worker that uses a custom container. Otherwise, do not specify
1290 # this field. This must be a [runtime version that currently supports
1291 # training with
1292 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1293 #
1294 # Note that the version of TensorFlow included in a runtime version may
1295 # differ from the numbering of the runtime version itself, because it may
1296 # have a different [patch
1297 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1298 # In this field, you must specify the runtime version (TensorFlow minor
1299 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1300 # specify `1.x`.
1301 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
1302        # If provided, it overrides the default ENTRYPOINT of the Docker image.
1303        # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
1304        # This field cannot be set unless a custom container image is
1305        # provided.
1306 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1307 # both cannot be set at the same time.
1308 &quot;A String&quot;,
1309 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07001310 },
1311 &quot;scaleTier&quot;: &quot;A String&quot;, # Required. Specifies the machine types, the number of replicas for workers
1312 # and parameter servers.
1313 &quot;jobDir&quot;: &quot;A String&quot;, # Optional. A Google Cloud Storage path in which to store training outputs
1314 # and other data needed for training. This path is passed to your TensorFlow
1315 # program as the &#x27;--job-dir&#x27; command-line argument. The benefit of specifying
1316 # this field is that Cloud ML validates the path for use in training.
1317 &quot;pythonVersion&quot;: &quot;A String&quot;, # Optional. The version of Python used in training. You must either specify
1318 # this field or specify `masterConfig.imageUri`.
1319 #
1320 # The following Python versions are available:
1321 #
1322 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
1323 # later.
1324 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
1325 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
1326 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
1327 # earlier.
1328 #
1329 # Read more about the Python versions available for [each runtime
1330 # version](/ml-engine/docs/runtime-version-list).
Bu Sun Kim65020912020-05-20 12:08:20 -07001331 &quot;scheduling&quot;: { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
1332 &quot;maxWaitTime&quot;: &quot;A String&quot;,
1333 &quot;maxRunningTime&quot;: &quot;A String&quot;, # Optional. The maximum job running time, expressed in seconds. The field can
1334 # contain up to nine fractional digits, terminated by `s`. If not specified,
1335 # this field defaults to `604800s` (seven days).
1336 #
1337 # If the training job is still running after this duration, AI Platform
1338 # Training cancels it.
1339 #
1340 # For example, if you want to ensure your job runs for no more than 2 hours,
1341 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
1342 # minute).
1343 #
1344 # If you submit your training job using the `gcloud` tool, you can [provide
1345 # this field in a `config.yaml`
1346 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
1347 # For example:
1348 #
1349 # ```yaml
1350 # trainingInput:
1351 # ...
1352 # scheduling:
1353 # maxRunningTime: 7200s
1354 # ...
1355 # ```
1356 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001357 &quot;network&quot;: &quot;A String&quot;, # Optional. The full name of the Google Compute Engine
1358 # [network](/compute/docs/networks-and-firewalls#networks) to which the Job
1359        # is peered. For example, projects/12345/global/networks/myVPC. The format is
1360        # projects/{project}/global/networks/{network}, where {project} is a
1361        # project number, such as &#x27;12345&#x27;, and {network} is a network name.
1362 #
1363 # Private services access must already be configured for the network. If left
1364        # unspecified, the Job is not peered with any network. Learn more about
1365        # connecting a Job to a user network over private
1366        # IP.
Bu Sun Kim65020912020-05-20 12:08:20 -07001367 &quot;evaluatorConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
1368 #
1369 # You should only set `evaluatorConfig.acceleratorConfig` if
1370 # `evaluatorType` is set to a Compute Engine machine type. [Learn
1371 # about restrictions on accelerator configurations for
1372 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1373 #
1374 # Set `evaluatorConfig.imageUri` only if you build a custom image for
1375 # your evaluator. If `evaluatorConfig.imageUri` has not been
1376 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
1377 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kim65020912020-05-20 12:08:20 -07001378 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
1379 # Registry. Learn more about [configuring custom
1380 # containers](/ai-platform/training/docs/distributed-training-containers).
1381 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
1382 # The following rules apply for container_command and container_args:
1383 # - If you do not supply command or args:
1384 # The defaults defined in the Docker image are used.
1385 # - If you supply a command but no args:
1386 # The default EntryPoint and the default Cmd defined in the Docker image
1387 # are ignored. Your command is run without any arguments.
1388 # - If you supply only args:
1389 # The default Entrypoint defined in the Docker image is run with the args
1390 # that you supplied.
1391 # - If you supply a command and args:
1392 # The default Entrypoint and the default Cmd defined in the Docker image
1393 # are ignored. Your command is run with your args.
1394        # This field cannot be set unless a custom container image is
1395        # provided.
1396 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1397 # both cannot be set at the same time.
1398 &quot;A String&quot;,
1399 ],
1400 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1401 # [Learn about restrictions on accelerator configurations for
1402 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1403 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1404 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1405 # [accelerators for online
1406 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1407 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
1408 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
1409 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001410 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
1411 # the one used in the custom container. This field is required if the replica
1412 # is a TPU worker that uses a custom container. Otherwise, do not specify
1413 # this field. This must be a [runtime version that currently supports
1414 # training with
1415 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1416 #
1417 # Note that the version of TensorFlow included in a runtime version may
1418 # differ from the numbering of the runtime version itself, because it may
1419 # have a different [patch
1420 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1421 # In this field, you must specify the runtime version (TensorFlow minor
1422 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1423 # specify `1.x`.
1424 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
1425        # If provided, it overrides the default ENTRYPOINT of the Docker image.
1426        # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
1427        # This field cannot be set unless a custom container image is
1428        # provided.
1429 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1430 # both cannot be set at the same time.
1431 &quot;A String&quot;,
1432 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07001433 },
1434 &quot;useChiefInTfConfig&quot;: True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
1435 # variable when training with a custom container. Defaults to `false`. [Learn
1436 # more about this
1437 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
1438 #
1439 # This field has no effect for training jobs that don&#x27;t use a custom
1440 # container.
Bu Sun Kim65020912020-05-20 12:08:20 -07001441 },
1442 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of a job.
1443 &quot;jobId&quot;: &quot;A String&quot;, # Required. The user-specified id of the job.
1444 &quot;endTime&quot;: &quot;A String&quot;, # Output only. When the job processing was completed.
1445 &quot;startTime&quot;: &quot;A String&quot;, # Output only. When the job processing was started.
1446 &quot;predictionOutput&quot;: { # Represents results of a prediction job. # The current prediction job result.
1447 &quot;errorCount&quot;: &quot;A String&quot;, # The number of data instances which resulted in errors.
1448 &quot;outputPath&quot;: &quot;A String&quot;, # The output Google Cloud Storage location provided at the job creation time.
1449 &quot;nodeHours&quot;: 3.14, # Node hours used by the batch prediction job.
1450 &quot;predictionCount&quot;: &quot;A String&quot;, # The number of generated predictions.
1451 },
1452 &quot;trainingOutput&quot;: { # Represents results of a training job. Output only. # The current training job result.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001453 &quot;isBuiltInAlgorithmJob&quot;: True or False, # Whether this job is a built-in Algorithm job.
1454 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
1455 # Only set for built-in algorithms jobs.
1456 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
1457 # saves the trained model. Only set for successful jobs that don&#x27;t use
1458 # hyperparameter tuning.
1459 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
1460 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
1461 # trained.
1462 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
1463 },
Bu Sun Kim65020912020-05-20 12:08:20 -07001464 &quot;trials&quot;: [ # Results for individual Hyperparameter trials.
1465 # Only set for hyperparameter tuning jobs.
1466 { # Represents the result of a single hyperparameter tuning trial from a
1467 # training job. The TrainingOutput object that is returned on successful
1468 # completion of a training job with hyperparameter tuning includes a list
1469 # of HyperparameterOutput objects, one for each successful trial.
Bu Sun Kim65020912020-05-20 12:08:20 -07001470 &quot;trialId&quot;: &quot;A String&quot;, # The trial id for these results.
1471 &quot;endTime&quot;: &quot;A String&quot;, # Output only. End time for the trial.
Bu Sun Kim65020912020-05-20 12:08:20 -07001472 &quot;startTime&quot;: &quot;A String&quot;, # Output only. Start time for the trial.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001473 &quot;isTrialStoppedEarly&quot;: True or False, # True if the trial is stopped early.
Bu Sun Kim65020912020-05-20 12:08:20 -07001474 &quot;finalMetric&quot;: { # An observed value of a metric. # The final objective metric seen for this trial.
Bu Sun Kim65020912020-05-20 12:08:20 -07001475 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001476 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
Bu Sun Kim65020912020-05-20 12:08:20 -07001477 },
1478 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
1479 # Only set for trials of built-in algorithms jobs that have succeeded.
Bu Sun Kim65020912020-05-20 12:08:20 -07001480 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
1481 # saves the trained model. Only set for successful jobs that don&#x27;t use
1482 # hyperparameter tuning.
1483 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
1484 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
1485 # trained.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001486 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
Bu Sun Kim65020912020-05-20 12:08:20 -07001487 },
1488 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of the trial.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001489        &quot;allMetrics&quot;: [ # All recorded objective metrics for this trial. This field is not currently
1490 # populated.
1491 { # An observed value of a metric.
1492 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
1493 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
1494 },
1495 ],
1496 &quot;hyperparameters&quot;: { # The hyperparameters given to this trial.
1497 &quot;a_key&quot;: &quot;A String&quot;,
1498 },
Bu Sun Kim65020912020-05-20 12:08:20 -07001499 },
1500 ],
1501 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # The TensorFlow summary tag name used for optimizing hyperparameter tuning
1502 # trials. See
1503 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
1504 # for more information. Only set for hyperparameter tuning jobs.
1505 &quot;completedTrialCount&quot;: &quot;A String&quot;, # The number of hyperparameter tuning trials that completed successfully.
1506 # Only set for hyperparameter tuning jobs.
1507 &quot;isHyperparameterTuningJob&quot;: True or False, # Whether this job is a hyperparameter tuning job.
1508 &quot;consumedMLUnits&quot;: 3.14, # The amount of ML units consumed by the job.
Bu Sun Kim65020912020-05-20 12:08:20 -07001509 },
1510 &quot;createTime&quot;: &quot;A String&quot;, # Output only. When the job was created.
1511 &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your jobs.
1512 # Each label is a key-value pair, where both the key and the value are
1513 # arbitrary strings that you supply.
1514 # For more information, see the documentation on
1515 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
1516 &quot;a_key&quot;: &quot;A String&quot;,
1517 },
Bu Sun Kim65020912020-05-20 12:08:20 -07001518 }</pre>
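<p>The following is a minimal, illustrative sketch (not part of the generated reference) of the etag read-modify-write cycle described for the job resource above, using the Python client library; the project and job names are placeholders.</p>
<pre>
from googleapiclient import discovery

ml = discovery.build(&#x27;ml&#x27;, &#x27;v1&#x27;)
job_name = &#x27;projects/my-project/jobs/my_training_job&#x27;  # placeholder resource name

# Read: fetch the job, including its current etag.
job = ml.projects().jobs().get(name=job_name).execute()

# Modify: change only the fields you intend to update.
labels = dict(job.get(&#x27;labels&#x27;, {}), team=&#x27;research&#x27;)

# Write: send the etag back so the update only applies to the version you read.
updated = ml.projects().jobs().patch(
    name=job_name,
    body={&#x27;labels&#x27;: labels, &#x27;etag&#x27;: job[&#x27;etag&#x27;]},
    updateMask=&#x27;labels&#x27;).execute()
</pre>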
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001519</div>
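<p>For orientation, a hedged sketch of submitting a simple training job through the same client. The bucket, package, module, region, and version values below are illustrative assumptions, not values taken from this reference.</p>
<pre>
from googleapiclient import discovery

ml = discovery.build(&#x27;ml&#x27;, &#x27;v1&#x27;)

job_spec = {
    &#x27;jobId&#x27;: &#x27;my_training_job&#x27;,  # placeholder job id
    &#x27;trainingInput&#x27;: {
        &#x27;scaleTier&#x27;: &#x27;BASIC&#x27;,
        &#x27;packageUris&#x27;: [&#x27;gs://my-bucket/packages/trainer-0.1.tar.gz&#x27;],
        &#x27;pythonModule&#x27;: &#x27;trainer.task&#x27;,
        &#x27;region&#x27;: &#x27;us-central1&#x27;,
        &#x27;runtimeVersion&#x27;: &#x27;2.1&#x27;,
        &#x27;pythonVersion&#x27;: &#x27;3.7&#x27;,
        &#x27;jobDir&#x27;: &#x27;gs://my-bucket/output&#x27;,
    },
}

response = ml.projects().jobs().create(
    parent=&#x27;projects/my-project&#x27;, body=job_spec).execute()
</pre>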
1520
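<p>The get method documented next can be used to poll a submitted job until it reaches a terminal state. A rough sketch, assuming SUCCEEDED, FAILED, and CANCELLED are the terminal values of the job&#x27;s state field:</p>
<pre>
import time

from googleapiclient import discovery

ml = discovery.build(&#x27;ml&#x27;, &#x27;v1&#x27;)
job_name = &#x27;projects/my-project/jobs/my_training_job&#x27;  # placeholder

job = ml.projects().jobs().get(name=job_name).execute()
while job.get(&#x27;state&#x27;) not in (&#x27;SUCCEEDED&#x27;, &#x27;FAILED&#x27;, &#x27;CANCELLED&#x27;):
    time.sleep(60)  # poll roughly once a minute
    job = ml.projects().jobs().get(name=job_name).execute()

print(job[&#x27;state&#x27;], job.get(&#x27;errorMessage&#x27;, &#x27;&#x27;))
</pre>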
1521<div class="method">
1522 <code class="details" id="get">get(name, x__xgafv=None)</code>
1523 <pre>Describes a job.
1524
1525Args:
1526 name: string, Required. The name of the job to get the description of. (required)
1527 x__xgafv: string, V1 error format.
1528 Allowed values
1529 1 - v1 error format
1530 2 - v2 error format
1531
1532Returns:
1533 An object of the form:
1534
1535 { # Represents a training or prediction job.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001536 &quot;predictionInput&quot;: { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
1537 &quot;versionName&quot;: &quot;A String&quot;, # Use this field if you want to specify a version of the model to use. The
1538 # string is formatted the same way as `model_version`, with the addition
1539 # of the version information:
1540 #
1541 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION&quot;`
1542 &quot;modelName&quot;: &quot;A String&quot;, # Use this field if you want to use the default version for the specified
1543 # model. The string must use the following format:
1544 #
1545 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL&quot;`
1546 &quot;uri&quot;: &quot;A String&quot;, # Use this field if you want to specify a Google Cloud Storage path for
1547 # the model to use.
1548 &quot;maxWorkerCount&quot;: &quot;A String&quot;, # Optional. The maximum number of workers to be used for parallel processing.
1549 # Defaults to 10 if not specified.
1550 &quot;signatureName&quot;: &quot;A String&quot;, # Optional. The name of the signature defined in the SavedModel to use for
1551 # this job. Please refer to
1552 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
1553 # for information about how to use signatures.
1554 #
1555 # Defaults to
1556 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
1557 # , which is &quot;serving_default&quot;.
1558 &quot;outputPath&quot;: &quot;A String&quot;, # Required. The output Google Cloud Storage location.
1559 &quot;outputDataFormat&quot;: &quot;A String&quot;, # Optional. Format of the output data files, defaults to JSON.
1560 &quot;dataFormat&quot;: &quot;A String&quot;, # Required. The format of the input data files.
1561 &quot;batchSize&quot;: &quot;A String&quot;, # Optional. Number of records per batch, defaults to 64.
1562 # The service will buffer batch_size number of records in memory before
1563        # invoking one TensorFlow prediction call internally, so take the record
1564        # size and available memory into consideration when setting this parameter.
1565 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for this batch
1566 # prediction. If not set, AI Platform will pick the runtime version used
1567 # during the CreateVersion request for this model version, or choose the
1568 # latest stable version when model version information is not available
1569 # such as when the model is specified by uri.
1570 &quot;inputPaths&quot;: [ # Required. The Cloud Storage location of the input data files. May contain
1571 # &lt;a href=&quot;/storage/docs/gsutil/addlhelp/WildcardNames&quot;&gt;wildcards&lt;/a&gt;.
1572 &quot;A String&quot;,
1573 ],
1574 &quot;region&quot;: &quot;A String&quot;, # Required. The Google Compute Engine region to run the prediction job in.
1575 # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
1576 # for AI Platform services.
1577 },
1578 &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
Bu Sun Kim65020912020-05-20 12:08:20 -07001579 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
1580 # prevent simultaneous updates of a job from overwriting each other.
1581 # It is strongly suggested that systems make use of the `etag` in the
1582 # read-modify-write cycle to perform job updates in order to avoid race
1583 # conditions: An `etag` is returned in the response to `GetJob`, and
1584 # systems are expected to put that etag in the request to `UpdateJob` to
1585 # ensure that their change will be applied to the same version of the job.
1586 &quot;trainingInput&quot;: { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
1587 # to submit your training job, you can specify the input parameters as
1588 # command-line arguments and/or in a YAML configuration file referenced from
1589 # the --config command-line argument. For details, see the guide to [submitting
1590 # a training job](/ai-platform/training/docs/training-jobs).
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001591 &quot;workerCount&quot;: &quot;A String&quot;, # Optional. The number of worker replicas to use for the training job. Each
1592 # replica in the cluster will be of the type specified in `worker_type`.
1593 #
1594 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
1595 # set this value, you must also set `worker_type`.
1596 #
1597 # The default value is zero.
1598 &quot;pythonModule&quot;: &quot;A String&quot;, # Required. The Python module name to run after installing the packages.
1599 &quot;args&quot;: [ # Optional. Command-line arguments passed to the training application when it
1600 # starts. If your job uses a custom container, then the arguments are passed
1601 # to the container&#x27;s &lt;a class=&quot;external&quot; target=&quot;_blank&quot;
1602 # href=&quot;https://docs.docker.com/engine/reference/builder/#entrypoint&quot;&gt;
1603 # `ENTRYPOINT`&lt;/a&gt; command.
1604 &quot;A String&quot;,
1605 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07001606 &quot;parameterServerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
1607 #
1608 # You should only set `parameterServerConfig.acceleratorConfig` if
1609 # `parameterServerType` is set to a Compute Engine machine type. [Learn
1610 # about restrictions on accelerator configurations for
1611 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1612 #
1613 # Set `parameterServerConfig.imageUri` only if you build a custom image for
1614 # your parameter server. If `parameterServerConfig.imageUri` has not been
1615 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
1616 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kim65020912020-05-20 12:08:20 -07001617 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
1618 # Registry. Learn more about [configuring custom
1619 # containers](/ai-platform/training/docs/distributed-training-containers).
1620 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
1621 # The following rules apply for container_command and container_args:
1622 # - If you do not supply command or args:
1623 # The defaults defined in the Docker image are used.
1624 # - If you supply a command but no args:
1625 # The default EntryPoint and the default Cmd defined in the Docker image
1626 # are ignored. Your command is run without any arguments.
1627 # - If you supply only args:
1628 # The default Entrypoint defined in the Docker image is run with the args
1629 # that you supplied.
1630 # - If you supply a command and args:
1631 # The default Entrypoint and the default Cmd defined in the Docker image
1632 # are ignored. Your command is run with your args.
1633        # This field cannot be set unless a custom container image is
1634        # provided.
1635 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1636 # both cannot be set at the same time.
1637 &quot;A String&quot;,
1638 ],
1639 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1640 # [Learn about restrictions on accelerator configurations for
1641 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1642 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1643 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1644 # [accelerators for online
1645 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1646 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
1647 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001648 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001649 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
1650 # the one used in the custom container. This field is required if the replica
1651 # is a TPU worker that uses a custom container. Otherwise, do not specify
1652 # this field. This must be a [runtime version that currently supports
1653 # training with
1654 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1655 #
1656 # Note that the version of TensorFlow included in a runtime version may
1657 # differ from the numbering of the runtime version itself, because it may
1658 # have a different [patch
1659 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1660 # In this field, you must specify the runtime version (TensorFlow minor
1661 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1662 # specify `1.x`.
1663 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
1664        # If provided, it overrides the default ENTRYPOINT of the Docker image.
1665        # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
1666        # This field cannot be set unless a custom container image is
1667        # provided.
1668 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1669 # both cannot be set at the same time.
1670 &quot;A String&quot;,
1671 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07001672 },
1673 &quot;encryptionConfig&quot;: { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
1674 # protect resources created by a training job, instead of using Google&#x27;s
1675 # default encryption. If this is set, then all resources created by the
1676 # training job will be encrypted with the customer-managed encryption key
1677 # that you specify.
1678 #
1679 # [Learn how and when to use CMEK with AI Platform
1680 # Training](/ai-platform/training/docs/cmek).
1681 # a resource.
1682 &quot;kmsKeyName&quot;: &quot;A String&quot;, # The Cloud KMS resource identifier of the customer-managed encryption key
1683 # used to protect a resource, such as a training job. It has the following
1684 # format:
1685 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
1686 },
1687 &quot;hyperparameters&quot;: { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
Bu Sun Kim65020912020-05-20 12:08:20 -07001688 &quot;params&quot;: [ # Required. The set of parameters to tune.
1689 { # Represents a single hyperparameter to optimize.
Bu Sun Kim65020912020-05-20 12:08:20 -07001690 &quot;categoricalValues&quot;: [ # Required if type is `CATEGORICAL`. The list of possible categories.
1691 &quot;A String&quot;,
1692 ],
1693 &quot;parameterName&quot;: &quot;A String&quot;, # Required. The parameter name must be unique amongst all ParameterConfigs in
1694 # a HyperparameterSpec message. E.g., &quot;learning_rate&quot;.
1695 &quot;minValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
1696        # should be unset if type is `CATEGORICAL`. This value should be an integer if
1697        # type is `INTEGER`.
1698 &quot;discreteValues&quot;: [ # Required if type is `DISCRETE`.
1699 # A list of feasible points.
1700 # The list should be in strictly increasing order. For instance, this
1701 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
1702 # should not contain more than 1,000 values.
1703 3.14,
1704 ],
1705 &quot;scaleType&quot;: &quot;A String&quot;, # Optional. How the parameter should be scaled to the hypercube.
1706 # Leave unset for categorical parameters.
1707 # Some kind of scaling is strongly recommended for real or integral
1708 # parameters (e.g., `UNIT_LINEAR_SCALE`).
1709 &quot;maxValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
1710        # should be unset if type is `CATEGORICAL`. This value should be an integer if
1711 # type is `INTEGER`.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001712 &quot;type&quot;: &quot;A String&quot;, # Required. The type of the parameter.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001713 },
1714 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07001715 &quot;enableTrialEarlyStopping&quot;: True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
1716 # early stopping.
1717 &quot;resumePreviousJobId&quot;: &quot;A String&quot;, # Optional. The prior hyperparameter tuning job id that users hope to
1718 # continue with. The job id will be used to find the corresponding vizier
1719 # study guid and resume the study.
1720 &quot;maxParallelTrials&quot;: 42, # Optional. The number of training trials to run concurrently.
1721 # You can reduce the time it takes to perform hyperparameter tuning by adding
1722        # trials in parallel. However, each trial only benefits from the information
1723 # gained in completed trials. That means that a trial does not get access to
1724 # the results of trials running at the same time, which could reduce the
1725 # quality of the overall optimization.
1726 #
1727 # Each trial will use the same scale tier and machine types.
1728 #
1729 # Defaults to one.
1730 &quot;maxFailedTrials&quot;: 42, # Optional. The number of failed trials that need to be seen before failing
1731 # the hyperparameter tuning job. You can specify this field to override the
1732 # default failing criteria for AI Platform hyperparameter tuning jobs.
1733 #
1734 # Defaults to zero, which means the service decides when a hyperparameter
1735 # job should fail.
1736 &quot;goal&quot;: &quot;A String&quot;, # Required. The type of goal to use for tuning. Available types are
1737 # `MAXIMIZE` and `MINIMIZE`.
1738 #
1739 # Defaults to `MAXIMIZE`.
1740 &quot;maxTrials&quot;: 42, # Optional. How many training trials should be attempted to optimize
1741 # the specified hyperparameters.
1742 #
1743 # Defaults to one.
1744 &quot;algorithm&quot;: &quot;A String&quot;, # Optional. The search algorithm specified for the hyperparameter
1745 # tuning job.
1746 # Uses the default AI Platform hyperparameter tuning
1747 # algorithm if unspecified.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001748 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # Optional. The TensorFlow summary tag name to use for optimizing trials. For
1749 # current versions of TensorFlow, this tag name should exactly match what is
1750 # shown in TensorBoard, including all scopes. For versions of TensorFlow
1751 # prior to 0.12, this should be only the tag passed to tf.Summary.
1752 # By default, &quot;training/hptuning/metric&quot; will be used.
Bu Sun Kim65020912020-05-20 12:08:20 -07001753 },
1754 &quot;workerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
1755 #
1756 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
1757 # to a Compute Engine machine type. [Learn about restrictions on accelerator
1758 # configurations for
1759 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1760 #
1761 # Set `workerConfig.imageUri` only if you build a custom image for your
1762 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
1763 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
1764 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kim65020912020-05-20 12:08:20 -07001765 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
1766 # Registry. Learn more about [configuring custom
1767 # containers](/ai-platform/training/docs/distributed-training-containers).
1768 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
1769 # The following rules apply for container_command and container_args:
1770 # - If you do not supply command or args:
1771 # The defaults defined in the Docker image are used.
1772 # - If you supply a command but no args:
1773 # The default EntryPoint and the default Cmd defined in the Docker image
1774 # are ignored. Your command is run without any arguments.
1775 # - If you supply only args:
1776 # The default Entrypoint defined in the Docker image is run with the args
1777 # that you supplied.
1778 # - If you supply a command and args:
1779 # The default Entrypoint and the default Cmd defined in the Docker image
1780 # are ignored. Your command is run with your args.
1781        # This field cannot be set unless a custom container image is
1782        # provided.
1783 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1784 # both cannot be set at the same time.
1785 &quot;A String&quot;,
1786 ],
1787 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1788 # [Learn about restrictions on accelerator configurations for
1789 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1790 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1791 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1792 # [accelerators for online
1793 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1794 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
1795 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001796 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001797 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
1798 # the one used in the custom container. This field is required if the replica
1799 # is a TPU worker that uses a custom container. Otherwise, do not specify
1800 # this field. This must be a [runtime version that currently supports
1801 # training with
1802 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1803 #
1804 # Note that the version of TensorFlow included in a runtime version may
1805 # differ from the numbering of the runtime version itself, because it may
1806 # have a different [patch
1807 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1808 # In this field, you must specify the runtime version (TensorFlow minor
1809 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1810 # specify `1.x`.
1811 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
1812        # If provided, it overrides the default ENTRYPOINT of the Docker image.
1813        # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
1814        # This field cannot be set unless a custom container image is
1815        # provided.
1816 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1817 # both cannot be set at the same time.
1818 &quot;A String&quot;,
1819 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001820 },
Bu Sun Kim65020912020-05-20 12:08:20 -07001821 &quot;parameterServerCount&quot;: &quot;A String&quot;, # Optional. The number of parameter server replicas to use for the training
1822 # job. Each replica in the cluster will be of the type specified in
1823 # `parameter_server_type`.
Dan O'Mearadd494642020-05-01 07:42:23 -07001824 #
Bu Sun Kim65020912020-05-20 12:08:20 -07001825 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
1826 # set this value, you must also set `parameter_server_type`.
Dan O'Mearadd494642020-05-01 07:42:23 -07001827 #
Bu Sun Kim65020912020-05-20 12:08:20 -07001828 # The default value is zero.
1829 &quot;packageUris&quot;: [ # Required. The Google Cloud Storage location of the packages with
1830 # the training program and any additional dependencies.
1831 # The maximum number of package URIs is 100.
1832 &quot;A String&quot;,
1833 ],
1834 &quot;evaluatorCount&quot;: &quot;A String&quot;, # Optional. The number of evaluator replicas to use for the training job.
1835 # Each replica in the cluster will be of the type specified in
1836 # `evaluator_type`.
Dan O'Mearadd494642020-05-01 07:42:23 -07001837 #
Bu Sun Kim65020912020-05-20 12:08:20 -07001838 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
1839 # set this value, you must also set `evaluator_type`.
Dan O'Mearadd494642020-05-01 07:42:23 -07001840 #
Bu Sun Kim65020912020-05-20 12:08:20 -07001841 # The default value is zero.
1842 &quot;masterType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1843 # job&#x27;s master worker. You must specify this field when `scaleTier` is set to
1844 # `CUSTOM`.
Dan O'Mearadd494642020-05-01 07:42:23 -07001845 #
Bu Sun Kim65020912020-05-20 12:08:20 -07001846 # You can use certain Compute Engine machine types directly in this field.
1847 # The following types are supported:
1848 #
1849 # - `n1-standard-4`
1850 # - `n1-standard-8`
1851 # - `n1-standard-16`
1852 # - `n1-standard-32`
1853 # - `n1-standard-64`
1854 # - `n1-standard-96`
1855 # - `n1-highmem-2`
1856 # - `n1-highmem-4`
1857 # - `n1-highmem-8`
1858 # - `n1-highmem-16`
1859 # - `n1-highmem-32`
1860 # - `n1-highmem-64`
1861 # - `n1-highmem-96`
1862 # - `n1-highcpu-16`
1863 # - `n1-highcpu-32`
1864 # - `n1-highcpu-64`
1865 # - `n1-highcpu-96`
1866 #
1867 # Learn more about [using Compute Engine machine
1868 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
1869 #
1870 # Alternatively, you can use the following legacy machine types:
1871 #
1872 # - `standard`
1873 # - `large_model`
1874 # - `complex_model_s`
1875 # - `complex_model_m`
1876 # - `complex_model_l`
1877 # - `standard_gpu`
1878 # - `complex_model_m_gpu`
1879 # - `complex_model_l_gpu`
1880 # - `standard_p100`
1881 # - `complex_model_m_p100`
1882 # - `standard_v100`
1883 # - `large_model_v100`
1884 # - `complex_model_m_v100`
1885 # - `complex_model_l_v100`
1886 #
1887 # Learn more about [using legacy machine
1888 # types](/ml-engine/docs/machine-types#legacy-machine-types).
1889 #
1890 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
1891 # field. Learn more about the [special configuration options for training
1892 # with
1893 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
1894 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for training. You must
1895 # either specify this field or specify `masterConfig.imageUri`.
1896 #
1897 # For more information, see the [runtime version
1898 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
1899 # manage runtime versions](/ai-platform/training/docs/versioning).
1900 &quot;evaluatorType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1901 # job&#x27;s evaluator nodes.
1902 #
1903 # The supported values are the same as those described in the entry for
1904 # `masterType`.
1905 #
1906 # This value must be consistent with the category of machine type that
1907 # `masterType` uses. In other words, both must be Compute Engine machine
1908 # types or both must be legacy machine types.
1909 #
1910 # This value must be present when `scaleTier` is set to `CUSTOM` and
1911 # `evaluatorCount` is greater than zero.
Bu Sun Kim65020912020-05-20 12:08:20 -07001912 &quot;workerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1913 # job&#x27;s worker nodes.
1914 #
1915 # The supported values are the same as those described in the entry for
1916 # `masterType`.
1917 #
1918 # This value must be consistent with the category of machine type that
1919 # `masterType` uses. In other words, both must be Compute Engine machine
1920 # types or both must be legacy machine types.
1921 #
1922 # If you use `cloud_tpu` for this value, see special instructions for
1923 # [configuring a custom TPU
1924 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
1925 #
1926 # This value must be present when `scaleTier` is set to `CUSTOM` and
1927 # `workerCount` is greater than zero.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001928 &quot;region&quot;: &quot;A String&quot;, # Required. The region to run the training job in. See the [available
1929 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
Bu Sun Kim65020912020-05-20 12:08:20 -07001930 &quot;parameterServerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1931 # job&#x27;s parameter server.
1932 #
1933 # The supported values are the same as those described in the entry for
1934 # `master_type`.
1935 #
1936 # This value must be consistent with the category of machine type that
1937 # `masterType` uses. In other words, both must be Compute Engine machine
1938 # types or both must be legacy machine types.
1939 #
1940 # This value must be present when `scaleTier` is set to `CUSTOM` and
1941 # `parameter_server_count` is greater than zero.
1942 &quot;masterConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
1943 #
1944 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
1945 # to a Compute Engine machine type. Learn about [restrictions on accelerator
1946 # configurations for
Dan O'Mearadd494642020-05-01 07:42:23 -07001947 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
Dan O'Mearadd494642020-05-01 07:42:23 -07001948 #
Bu Sun Kim65020912020-05-20 12:08:20 -07001949 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
1950 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
1951 # about [configuring custom
1952 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kim65020912020-05-20 12:08:20 -07001953 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
1954 # Registry. Learn more about [configuring custom
1955 # containers](/ai-platform/training/docs/distributed-training-containers).
1956 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
1957 # The following rules apply for container_command and container_args:
1958 # - If you do not supply command or args:
1959 # The defaults defined in the Docker image are used.
1960 # - If you supply a command but no args:
1961 # The default EntryPoint and the default Cmd defined in the Docker image
1962 # are ignored. Your command is run without any arguments.
1963 # - If you supply only args:
1964 # The default Entrypoint defined in the Docker image is run with the args
1965 # that you supplied.
1966 # - If you supply a command and args:
1967 # The default Entrypoint and the default Cmd defined in the Docker image
1968 # are ignored. Your command is run with your args.
1969 # It cannot be set unless a custom container image is
1970 # provided.
1971 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1972 # both cannot be set at the same time.
1973 &quot;A String&quot;,
1974 ],
1975 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1976 # [Learn about restrictions on accelerator configurations for
1977 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1978 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1979 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1980 # [accelerators for online
1981 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1982 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
1983 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
1984 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001985 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
1986 # the one used in the custom container. This field is required if the replica
1987 # is a TPU worker that uses a custom container. Otherwise, do not specify
1988 # this field. This must be a [runtime version that currently supports
1989 # training with
1990 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1991 #
1992 # Note that the version of TensorFlow included in a runtime version may
1993 # differ from the numbering of the runtime version itself, because it may
1994 # have a different [patch
1995 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1996 # In this field, you must specify the runtime version (TensorFlow minor
1997 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1998 # specify `1.x`.
1999 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
2000 # If provided, it overrides the default ENTRYPOINT of the Docker image.
2001 # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
2002 # It cannot be set unless a custom container image is
2003 # provided.
2004 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
2005 # both cannot be set at the same time.
2006 &quot;A String&quot;,
2007 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07002008 },
2009 &quot;scaleTier&quot;: &quot;A String&quot;, # Required. Specifies the machine types, the number of replicas for workers
2010 # and parameter servers.
2011 &quot;jobDir&quot;: &quot;A String&quot;, # Optional. A Google Cloud Storage path in which to store training outputs
2012 # and other data needed for training. This path is passed to your TensorFlow
2013 # program as the &#x27;--job-dir&#x27; command-line argument. The benefit of specifying
2014 # this field is that Cloud ML validates the path for use in training.
2015 &quot;pythonVersion&quot;: &quot;A String&quot;, # Optional. The version of Python used in training. You must either specify
2016 # this field or specify `masterConfig.imageUri`.
2017 #
2018 # The following Python versions are available:
2019 #
2020 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
2021 # later.
2022 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
2023 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
2024 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
2025 # earlier.
2026 #
2027 # Read more about the Python versions available for [each runtime
2028 # version](/ml-engine/docs/runtime-version-list).
Bu Sun Kim65020912020-05-20 12:08:20 -07002029 &quot;scheduling&quot;: { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
2030 &quot;maxWaitTime&quot;: &quot;A String&quot;,
2031 &quot;maxRunningTime&quot;: &quot;A String&quot;, # Optional. The maximum job running time, expressed in seconds. The field can
2032 # contain up to nine fractional digits, terminated by `s`. If not specified,
2033 # this field defaults to `604800s` (seven days).
2034 #
2035 # If the training job is still running after this duration, AI Platform
2036 # Training cancels it.
2037 #
2038 # For example, if you want to ensure your job runs for no more than 2 hours,
2039 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
2040 # minute).
2041 #
2042 # If you submit your training job using the `gcloud` tool, you can [provide
2043 # this field in a `config.yaml`
2044 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
2045 # For example:
2046 #
2047 # ```yaml
2048 # trainingInput:
2049 # ...
2050 # scheduling:
2051 # maxRunningTime: 7200s
2052 # ...
2053 # ```
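      #
      # If you use such a file, you might then submit the job with a command
      # along these lines (a sketch only: the job name, region, and config path
      # are placeholders, and flags for your training application are omitted):
      #
      # ```
      # gcloud ai-platform jobs submit training my_job \
      #   --region us-central1 \
      #   --config config.yaml
      # ```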
2054 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002055 &quot;network&quot;: &quot;A String&quot;, # Optional. The full name of the Google Compute Engine
2056 # [network](/compute/docs/networks-and-firewalls#networks) to which the Job
2057 # is peered. For example, projects/12345/global/networks/myVPC. Format is of
2058 # the form projects/{project}/global/networks/{network}, where {project} is a
2059 # project number, such as &#x27;12345&#x27;, and {network} is the network name.
2060 #
2061 # Private services access must already be configured for the network. If left
2062 # unspecified, the Job is not peered with any network. Learn more about
2063 # connecting a Job to a user network over private
2064 # IP.
Bu Sun Kim65020912020-05-20 12:08:20 -07002065 &quot;evaluatorConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
2066 #
2067 # You should only set `evaluatorConfig.acceleratorConfig` if
2068 # `evaluatorType` is set to a Compute Engine machine type. [Learn
2069 # about restrictions on accelerator configurations for
Dan O'Mearadd494642020-05-01 07:42:23 -07002070 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
Bu Sun Kim65020912020-05-20 12:08:20 -07002071 #
2072 # Set `evaluatorConfig.imageUri` only if you build a custom image for
2073 # your evaluator. If `evaluatorConfig.imageUri` has not been
2074 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
Dan O'Mearadd494642020-05-01 07:42:23 -07002075 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kim65020912020-05-20 12:08:20 -07002076 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
2077 # Registry. Learn more about [configuring custom
2078 # containers](/ai-platform/training/docs/distributed-training-containers).
2079 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
2080 # The following rules apply for container_command and container_args:
2081 # - If you do not supply command or args:
2082 # The defaults defined in the Docker image are used.
2083 # - If you supply a command but no args:
2084 # The default EntryPoint and the default Cmd defined in the Docker image
2085 # are ignored. Your command is run without any arguments.
2086 # - If you supply only args:
2087 # The default Entrypoint defined in the Docker image is run with the args
2088 # that you supplied.
2089 # - If you supply a command and args:
2090 # The default Entrypoint and the default Cmd defined in the Docker image
2091 # are ignored. Your command is run with your args.
2092 # It cannot be set unless a custom container image is
2093 # provided.
2094 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
2095 # both cannot be set at the same time.
2096 &quot;A String&quot;,
2097 ],
2098 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
2099 # [Learn about restrictions on accelerator configurations for
2100 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2101 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
2102 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
2103 # [accelerators for online
2104 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
2105 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
2106 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
2107 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002108 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
2109 # the one used in the custom container. This field is required if the replica
2110 # is a TPU worker that uses a custom container. Otherwise, do not specify
2111 # this field. This must be a [runtime version that currently supports
2112 # training with
2113 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
2114 #
2115 # Note that the version of TensorFlow included in a runtime version may
2116 # differ from the numbering of the runtime version itself, because it may
2117 # have a different [patch
2118 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
2119 # In this field, you must specify the runtime version (TensorFlow minor
2120 # version). For example, if your custom container runs TensorFlow `1.x.y`,
2121 # specify `1.x`.
2122 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
2123 # If provided, it overrides the default ENTRYPOINT of the Docker image.
2124 # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
2125 # It cannot be set unless a custom container image is
2126 # provided.
2127 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
2128 # both cannot be set at the same time.
2129 &quot;A String&quot;,
2130 ],
Dan O'Mearadd494642020-05-01 07:42:23 -07002131 },
Bu Sun Kim65020912020-05-20 12:08:20 -07002132 &quot;useChiefInTfConfig&quot;: True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
2133 # variable when training with a custom container. Defaults to `false`. [Learn
2134 # more about this
2135 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
2136 #
2137 # This field has no effect for training jobs that don&#x27;t use a custom
2138 # container.
Dan O'Mearadd494642020-05-01 07:42:23 -07002139 },
Bu Sun Kim65020912020-05-20 12:08:20 -07002140 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of a job.
2141 &quot;jobId&quot;: &quot;A String&quot;, # Required. The user-specified id of the job.
2142 &quot;endTime&quot;: &quot;A String&quot;, # Output only. When the job processing was completed.
2143 &quot;startTime&quot;: &quot;A String&quot;, # Output only. When the job processing was started.
2144 &quot;predictionOutput&quot;: { # Represents results of a prediction job. # The current prediction job result.
2145 &quot;errorCount&quot;: &quot;A String&quot;, # The number of data instances which resulted in errors.
2146 &quot;outputPath&quot;: &quot;A String&quot;, # The output Google Cloud Storage location provided at the job creation time.
2147 &quot;nodeHours&quot;: 3.14, # Node hours used by the batch prediction job.
2148 &quot;predictionCount&quot;: &quot;A String&quot;, # The number of generated predictions.
2149 },
2150 &quot;trainingOutput&quot;: { # Represents results of a training job. Output only. # The current training job result.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002151 &quot;isBuiltInAlgorithmJob&quot;: True or False, # Whether this job is a built-in Algorithm job.
2152 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
2153 # Only set for built-in algorithms jobs.
2154 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
2155 # saves the trained model. Only set for successful jobs that don&#x27;t use
2156 # hyperparameter tuning.
2157 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
2158 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
2159 # trained.
2160 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
2161 },
Bu Sun Kim65020912020-05-20 12:08:20 -07002162 &quot;trials&quot;: [ # Results for individual Hyperparameter trials.
2163 # Only set for hyperparameter tuning jobs.
2164 { # Represents the result of a single hyperparameter tuning trial from a
2165 # training job. The TrainingOutput object that is returned on successful
2166 # completion of a training job with hyperparameter tuning includes a list
2167 # of HyperparameterOutput objects, one for each successful trial.
Bu Sun Kim65020912020-05-20 12:08:20 -07002168 &quot;trialId&quot;: &quot;A String&quot;, # The trial id for these results.
2169 &quot;endTime&quot;: &quot;A String&quot;, # Output only. End time for the trial.
Bu Sun Kim65020912020-05-20 12:08:20 -07002170 &quot;startTime&quot;: &quot;A String&quot;, # Output only. Start time for the trial.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002171 &quot;isTrialStoppedEarly&quot;: True or False, # True if the trial is stopped early.
Bu Sun Kim65020912020-05-20 12:08:20 -07002172 &quot;finalMetric&quot;: { # An observed value of a metric. # The final objective metric seen for this trial.
Bu Sun Kim65020912020-05-20 12:08:20 -07002173 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002174 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
Bu Sun Kim65020912020-05-20 12:08:20 -07002175 },
2176 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
2177 # Only set for trials of built-in algorithms jobs that have succeeded.
Bu Sun Kim65020912020-05-20 12:08:20 -07002178 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
2179 # saves the trained model. Only set for successful jobs that don&#x27;t use
2180 # hyperparameter tuning.
2181 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
2182 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
2183 # trained.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002184 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
Bu Sun Kim65020912020-05-20 12:08:20 -07002185 },
2186 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of the trial.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002187 &quot;allMetrics&quot;: [ # All recorded objective metrics for this trial. This field is not currently
2188 # populated.
2189 { # An observed value of a metric.
2190 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
2191 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
2192 },
2193 ],
2194 &quot;hyperparameters&quot;: { # The hyperparameters given to this trial.
2195 &quot;a_key&quot;: &quot;A String&quot;,
2196 },
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002197 },
2198 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07002199 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # The TensorFlow summary tag name used for optimizing hyperparameter tuning
2200 # trials. See
2201 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
2202 # for more information. Only set for hyperparameter tuning jobs.
2203 &quot;completedTrialCount&quot;: &quot;A String&quot;, # The number of hyperparameter tuning trials that completed successfully.
2204 # Only set for hyperparameter tuning jobs.
2205 &quot;isHyperparameterTuningJob&quot;: True or False, # Whether this job is a hyperparameter tuning job.
2206 &quot;consumedMLUnits&quot;: 3.14, # The amount of ML units consumed by the job.
Dan O'Mearadd494642020-05-01 07:42:23 -07002207 },
Bu Sun Kim65020912020-05-20 12:08:20 -07002208 &quot;createTime&quot;: &quot;A String&quot;, # Output only. When the job was created.
2209 &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your jobs.
2210 # Each label is a key-value pair, where both the key and the value are
2211 # arbitrary strings that you supply.
2212 # For more information, see the documentation on
2213 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
2214 &quot;a_key&quot;: &quot;A String&quot;,
2215 },
Bu Sun Kim65020912020-05-20 12:08:20 -07002216 }</pre>
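<p>A minimal usage sketch (not part of the generated reference, assuming
application default credentials) that retrieves a Job resource such as the one
above with this Python client library; the project and job IDs are hypothetical
placeholders:</p>
<pre>
from googleapiclient import discovery

# Build a client for the AI Platform Training and Prediction API.
ml = discovery.build(&#x27;ml&#x27;, &#x27;v1&#x27;)

# Hypothetical job name; replace with your own project and job IDs.
job_name = &#x27;projects/my-project/jobs/my_training_job&#x27;

job = ml.projects().jobs().get(name=job_name).execute()

# Inspect the detailed state and, for completed training jobs, the output.
print(job[&#x27;state&#x27;])
print(job.get(&#x27;trainingOutput&#x27;, {}))
</pre>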
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002217</div>
2218
2219<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -07002220 <code class="details" id="getIamPolicy">getIamPolicy(resource, options_requestedPolicyVersion=None, x__xgafv=None)</code>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002221 <pre>Gets the access control policy for a resource.
2222Returns an empty policy if the resource exists and does not have a policy
2223set.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002224
2225Args:
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002226 resource: string, REQUIRED: The resource for which the policy is being requested.
2227See the operation documentation for the appropriate value for this field. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -07002228 options_requestedPolicyVersion: integer, Optional. The policy format version to be returned.
2229
2230Valid values are 0, 1, and 3. Requests specifying an invalid value will be
2231rejected.
2232
2233Requests for policies with any conditional bindings must specify version 3.
2234Policies without any conditional bindings may specify any valid value or
2235leave the field unset.
Bu Sun Kim65020912020-05-20 12:08:20 -07002236
2237To learn which resources support conditions in their IAM policies, see the
2238[IAM
2239documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002240 x__xgafv: string, V1 error format.
2241 Allowed values
2242 1 - v1 error format
2243 2 - v2 error format
2244
2245Returns:
2246 An object of the form:
2247
Dan O'Mearadd494642020-05-01 07:42:23 -07002248 { # An Identity and Access Management (IAM) policy, which specifies access
2249 # controls for Google Cloud resources.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002250 #
2251 #
Dan O'Mearadd494642020-05-01 07:42:23 -07002252 # A `Policy` is a collection of `bindings`. A `binding` binds one or more
2253 # `members` to a single `role`. Members can be user accounts, service accounts,
2254 # Google groups, and domains (such as G Suite). A `role` is a named list of
2255 # permissions; each `role` can be an IAM predefined role or a user-created
2256 # custom role.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002257 #
Bu Sun Kim65020912020-05-20 12:08:20 -07002258 # For some types of Google Cloud resources, a `binding` can also specify a
2259 # `condition`, which is a logical expression that allows access to a resource
2260 # only if the expression evaluates to `true`. A condition can add constraints
2261 # based on attributes of the request, the resource, or both. To learn which
2262 # resources support conditions in their IAM policies, see the
2263 # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
Dan O'Mearadd494642020-05-01 07:42:23 -07002264 #
2265 # **JSON example:**
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002266 #
2267 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07002268 # &quot;bindings&quot;: [
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002269 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07002270 # &quot;role&quot;: &quot;roles/resourcemanager.organizationAdmin&quot;,
2271 # &quot;members&quot;: [
2272 # &quot;user:mike@example.com&quot;,
2273 # &quot;group:admins@example.com&quot;,
2274 # &quot;domain:google.com&quot;,
2275 # &quot;serviceAccount:my-project-id@appspot.gserviceaccount.com&quot;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002276 # ]
2277 # },
2278 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07002279 # &quot;role&quot;: &quot;roles/resourcemanager.organizationViewer&quot;,
2280 # &quot;members&quot;: [
2281 # &quot;user:eve@example.com&quot;
2282 # ],
2283 # &quot;condition&quot;: {
2284 # &quot;title&quot;: &quot;expirable access&quot;,
2285 # &quot;description&quot;: &quot;Does not grant access after Sep 2020&quot;,
2286 # &quot;expression&quot;: &quot;request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)&quot;,
Dan O'Mearadd494642020-05-01 07:42:23 -07002287 # }
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002288 # }
Dan O'Mearadd494642020-05-01 07:42:23 -07002289 # ],
Bu Sun Kim65020912020-05-20 12:08:20 -07002290 # &quot;etag&quot;: &quot;BwWWja0YfJA=&quot;,
2291 # &quot;version&quot;: 3
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002292 # }
2293 #
Dan O'Mearadd494642020-05-01 07:42:23 -07002294 # **YAML example:**
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002295 #
2296 # bindings:
2297 # - members:
2298 # - user:mike@example.com
2299 # - group:admins@example.com
2300 # - domain:google.com
Dan O'Mearadd494642020-05-01 07:42:23 -07002301 # - serviceAccount:my-project-id@appspot.gserviceaccount.com
2302 # role: roles/resourcemanager.organizationAdmin
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002303 # - members:
Dan O'Mearadd494642020-05-01 07:42:23 -07002304 # - user:eve@example.com
2305 # role: roles/resourcemanager.organizationViewer
2306 # condition:
2307 # title: expirable access
2308 # description: Does not grant access after Sep 2020
Bu Sun Kim65020912020-05-20 12:08:20 -07002309 # expression: request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)
Dan O'Mearadd494642020-05-01 07:42:23 -07002310 # - etag: BwWWja0YfJA=
2311 # - version: 3
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002312 #
2313 # For a description of IAM and its features, see the
Dan O'Mearadd494642020-05-01 07:42:23 -07002314 # [IAM documentation](https://cloud.google.com/iam/docs/).
Bu Sun Kim65020912020-05-20 12:08:20 -07002315 &quot;version&quot;: 42, # Specifies the format of the policy.
2316 #
2317 # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
2318 # are rejected.
2319 #
2320 # Any operation that affects conditional role bindings must specify version
2321 # `3`. This requirement applies to the following operations:
2322 #
2323 # * Getting a policy that includes a conditional role binding
2324 # * Adding a conditional role binding to a policy
2325 # * Changing a conditional role binding in a policy
2326 # * Removing any role binding, with or without a condition, from a policy
2327 # that includes conditions
2328 #
2329 # **Important:** If you use IAM Conditions, you must include the `etag` field
2330 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
2331 # you to overwrite a version `3` policy with a version `1` policy, and all of
2332 # the conditions in the version `3` policy are lost.
2333 #
2334 # If a policy does not include any conditions, operations on that policy may
2335 # specify any valid version or leave the field unset.
2336 #
2337 # To learn which resources support conditions in their IAM policies, see the
2338 # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
2339 &quot;auditConfigs&quot;: [ # Specifies cloud audit logging configuration for this policy.
2340 { # Specifies the audit configuration for a service.
2341 # The configuration determines which permission types are logged, and what
2342 # identities, if any, are exempted from logging.
2343 # An AuditConfig must have one or more AuditLogConfigs.
2344 #
2345 # If there are AuditConfigs for both `allServices` and a specific service,
2346 # the union of the two AuditConfigs is used for that service: the log_types
2347 # specified in each AuditConfig are enabled, and the exempted_members in each
2348 # AuditLogConfig are exempted.
2349 #
2350 # Example Policy with multiple AuditConfigs:
2351 #
2352 # {
2353 # &quot;audit_configs&quot;: [
2354 # {
2355 # &quot;service&quot;: &quot;allServices&quot;
2356 # &quot;audit_log_configs&quot;: [
2357 # {
2358 # &quot;log_type&quot;: &quot;DATA_READ&quot;,
2359 # &quot;exempted_members&quot;: [
2360 # &quot;user:jose@example.com&quot;
2361 # ]
2362 # },
2363 # {
2364 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
2365 # },
2366 # {
2367 # &quot;log_type&quot;: &quot;ADMIN_READ&quot;,
2368 # }
2369 # ]
2370 # },
2371 # {
2372 # &quot;service&quot;: &quot;sampleservice.googleapis.com&quot;
2373 # &quot;audit_log_configs&quot;: [
2374 # {
2375 # &quot;log_type&quot;: &quot;DATA_READ&quot;,
2376 # },
2377 # {
2378 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
2379 # &quot;exempted_members&quot;: [
2380 # &quot;user:aliya@example.com&quot;
2381 # ]
2382 # }
2383 # ]
2384 # }
2385 # ]
2386 # }
2387 #
2388 # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
2389 # logging. It also exempts jose@example.com from DATA_READ logging, and
2390 # aliya@example.com from DATA_WRITE logging.
2391 &quot;service&quot;: &quot;A String&quot;, # Specifies a service that will be enabled for audit logging.
2392 # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
2393 # `allServices` is a special value that covers all services.
2394 &quot;auditLogConfigs&quot;: [ # The configuration for logging of each type of permission.
2395 { # Provides the configuration for logging a type of permissions.
2396 # Example:
2397 #
2398 # {
2399 # &quot;audit_log_configs&quot;: [
2400 # {
2401 # &quot;log_type&quot;: &quot;DATA_READ&quot;,
2402 # &quot;exempted_members&quot;: [
2403 # &quot;user:jose@example.com&quot;
2404 # ]
2405 # },
2406 # {
2407 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
2408 # }
2409 # ]
2410 # }
2411 #
2412 # This enables &#x27;DATA_READ&#x27; and &#x27;DATA_WRITE&#x27; logging, while exempting
2413 # jose@example.com from DATA_READ logging.
2414 &quot;exemptedMembers&quot;: [ # Specifies the identities that do not cause logging for this type of
2415 # permission.
2416 # Follows the same format of Binding.members.
2417 &quot;A String&quot;,
2418 ],
2419 &quot;logType&quot;: &quot;A String&quot;, # The log type that this config enables.
2420 },
2421 ],
2422 },
2423 ],
2424 &quot;bindings&quot;: [ # Associates a list of `members` to a `role`. Optionally, may specify a
Dan O'Mearadd494642020-05-01 07:42:23 -07002425 # `condition` that determines how and when the `bindings` are applied. Each
2426 # of the `bindings` must contain at least one member.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002427 { # Associates `members` with a `role`.
Bu Sun Kim65020912020-05-20 12:08:20 -07002428 &quot;condition&quot;: { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
2429 #
2430 # If the condition evaluates to `true`, then this binding applies to the
2431 # current request.
2432 #
2433 # If the condition evaluates to `false`, then this binding does not apply to
2434 # the current request. However, a different role binding might grant the same
2435 # role to one or more of the members in this binding.
2436 #
2437 # To learn which resources support conditions in their IAM policies, see the
2438 # [IAM
2439 # documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
2440 # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
2441 # are documented at https://github.com/google/cel-spec.
2442 #
2443 # Example (Comparison):
2444 #
2445 # title: &quot;Summary size limit&quot;
2446 # description: &quot;Determines if a summary is less than 100 chars&quot;
2447 # expression: &quot;document.summary.size() &lt; 100&quot;
2448 #
2449 # Example (Equality):
2450 #
2451 # title: &quot;Requestor is owner&quot;
2452 # description: &quot;Determines if requestor is the document owner&quot;
2453 # expression: &quot;document.owner == request.auth.claims.email&quot;
2454 #
2455 # Example (Logic):
2456 #
2457 # title: &quot;Public documents&quot;
2458 # description: &quot;Determine whether the document should be publicly visible&quot;
2459 # expression: &quot;document.type != &#x27;private&#x27; &amp;&amp; document.type != &#x27;internal&#x27;&quot;
2460 #
2461 # Example (Data Manipulation):
2462 #
2463 # title: &quot;Notification string&quot;
2464 # description: &quot;Create a notification string with a timestamp.&quot;
2465 # expression: &quot;&#x27;New message received at &#x27; + string(document.create_time)&quot;
2466 #
2467 # The exact variables and functions that may be referenced within an expression
2468 # are determined by the service that evaluates it. See the service
2469 # documentation for additional information.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002470 &quot;description&quot;: &quot;A String&quot;, # Optional. Description of the expression. This is a longer text which
2471 # describes the expression, e.g. when hovered over it in a UI.
2472 &quot;expression&quot;: &quot;A String&quot;, # Textual representation of an expression in Common Expression Language
2473 # syntax.
Bu Sun Kim65020912020-05-20 12:08:20 -07002474 &quot;title&quot;: &quot;A String&quot;, # Optional. Title for the expression, i.e. a short string describing
2475 # its purpose. This can be used e.g. in UIs which allow to enter the
2476 # expression.
2477 &quot;location&quot;: &quot;A String&quot;, # Optional. String indicating the location of the expression for error
2478 # reporting, e.g. a file name and a position in the file.
Bu Sun Kim65020912020-05-20 12:08:20 -07002479 },
2480 &quot;members&quot;: [ # Specifies the identities requesting access for a Cloud Platform resource.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002481 # `members` can have the following values:
2482 #
2483 # * `allUsers`: A special identifier that represents anyone who is
2484 # on the internet; with or without a Google account.
2485 #
2486 # * `allAuthenticatedUsers`: A special identifier that represents anyone
2487 # who is authenticated with a Google account or a service account.
2488 #
2489 # * `user:{emailid}`: An email address that represents a specific Google
Dan O'Mearadd494642020-05-01 07:42:23 -07002490 # account. For example, `alice@example.com` .
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002491 #
2492 #
2493 # * `serviceAccount:{emailid}`: An email address that represents a service
2494 # account. For example, `my-other-app@appspot.gserviceaccount.com`.
2495 #
2496 # * `group:{emailid}`: An email address that represents a Google group.
2497 # For example, `admins@example.com`.
2498 #
Dan O'Mearadd494642020-05-01 07:42:23 -07002499 # * `deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique
2500 # identifier) representing a user that has been recently deleted. For
2501 # example, `alice@example.com?uid=123456789012345678901`. If the user is
2502 # recovered, this value reverts to `user:{emailid}` and the recovered user
2503 # retains the role in the binding.
2504 #
2505 # * `deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address (plus
2506 # unique identifier) representing a service account that has been recently
2507 # deleted. For example,
2508 # `my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901`.
2509 # If the service account is undeleted, this value reverts to
2510 # `serviceAccount:{emailid}` and the undeleted service account retains the
2511 # role in the binding.
2512 #
2513 # * `deleted:group:{emailid}?uid={uniqueid}`: An email address (plus unique
2514 # identifier) representing a Google group that has been recently
2515 # deleted. For example, `admins@example.com?uid=123456789012345678901`. If
2516 # the group is recovered, this value reverts to `group:{emailid}` and the
2517 # recovered group retains the role in the binding.
2518 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002519 #
2520 # * `domain:{domain}`: The G Suite domain (primary) that represents all the
2521 # users of that domain. For example, `google.com` or `example.com`.
2522 #
Bu Sun Kim65020912020-05-20 12:08:20 -07002523 &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002524 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07002525 &quot;role&quot;: &quot;A String&quot;, # Role that is assigned to `members`.
2526 # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002527 },
2528 ],
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002529 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
2530 # prevent simultaneous updates of a policy from overwriting each other.
2531 # It is strongly suggested that systems make use of the `etag` in the
2532 # read-modify-write cycle to perform policy updates in order to avoid race
2533 # conditions: An `etag` is returned in the response to `getIamPolicy`, and
2534 # systems are expected to put that etag in the request to `setIamPolicy` to
2535 # ensure that their change will be applied to the same version of the policy.
2536 #
2537 # **Important:** If you use IAM Conditions, you must include the `etag` field
2538 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
2539 # you to overwrite a version `3` policy with a version `1` policy, and all of
2540 # the conditions in the version `3` policy are lost.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002541 }</pre>
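<p>A minimal usage sketch (not part of the generated reference, assuming
application default credentials) that requests the policy described above for a
hypothetical job resource:</p>
<pre>
from googleapiclient import discovery

ml = discovery.build(&#x27;ml&#x27;, &#x27;v1&#x27;)

# Hypothetical resource name; replace with your own project and job IDs.
resource = &#x27;projects/my-project/jobs/my_training_job&#x27;

policy = ml.projects().jobs().getIamPolicy(
    resource=resource,
    options_requestedPolicyVersion=3,
).execute()

# Print each role binding and its members.
for binding in policy.get(&#x27;bindings&#x27;, []):
    print(binding[&#x27;role&#x27;], binding.get(&#x27;members&#x27;, []))
</pre>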
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002542</div>
2543
2544<div class="method">
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002545 <code class="details" id="list">list(parent, filter=None, pageToken=None, pageSize=None, x__xgafv=None)</code>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002546 <pre>Lists the jobs in the project.
2547
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002548If there are no jobs that match the request parameters, the list
2549request returns an empty response body: {}.
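
A minimal pagination sketch (assuming application default credentials; the
project ID is a placeholder):

  from googleapiclient import discovery

  ml = discovery.build(&#x27;ml&#x27;, &#x27;v1&#x27;)
  request = ml.projects().jobs().list(parent=&#x27;projects/my-project&#x27;)
  while request is not None:
      response = request.execute()
      for job in response.get(&#x27;jobs&#x27;, []):
          print(job[&#x27;jobId&#x27;], job[&#x27;state&#x27;])
      request = ml.projects().jobs().list_next(request, response)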
2550
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002551Args:
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002552 parent: string, Required. The name of the project for which to list jobs. (required)
Bu Sun Kim65020912020-05-20 12:08:20 -07002553 filter: string, Optional. Specifies the subset of jobs to retrieve.
2554You can filter on the value of one or more attributes of the job object.
2555For example, retrieve jobs with a job identifier that starts with &#x27;census&#x27;:
2556&lt;p&gt;&lt;code&gt;gcloud ai-platform jobs list --filter=&#x27;jobId:census*&#x27;&lt;/code&gt;
2557&lt;p&gt;List all failed jobs with names that start with &#x27;rnn&#x27;:
2558&lt;p&gt;&lt;code&gt;gcloud ai-platform jobs list --filter=&#x27;jobId:rnn*
2559AND state:FAILED&#x27;&lt;/code&gt;
2560&lt;p&gt;For more examples, see the guide to
2561&lt;a href=&quot;/ml-engine/docs/tensorflow/monitor-training&quot;&gt;monitoring jobs&lt;/a&gt;.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002562 pageToken: string, Optional. A page token to request the next page of results.
2563
2564You get the token from the `next_page_token` field of the response from
2565the previous call.
2566 pageSize: integer, Optional. The number of jobs to retrieve per &quot;page&quot; of results. If there
2567are more remaining results than this number, the response message will
2568contain a valid value in the `next_page_token` field.
2569
2570The default value is 20, and the maximum page size is 100.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002571 x__xgafv: string, V1 error format.
2572 Allowed values
2573 1 - v1 error format
2574 2 - v2 error format
2575
2576Returns:
2577 An object of the form:
2578
2579 { # Response message for the ListJobs method.
Bu Sun Kim65020912020-05-20 12:08:20 -07002580 &quot;jobs&quot;: [ # The list of jobs.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002581 { # Represents a training or prediction job.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002582 &quot;predictionInput&quot;: { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
2583 &quot;versionName&quot;: &quot;A String&quot;, # Use this field if you want to specify a version of the model to use. The
2584 # string is formatted the same way as `model_version`, with the addition
2585 # of the version information:
2586 #
2587 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION&quot;`
2588 &quot;modelName&quot;: &quot;A String&quot;, # Use this field if you want to use the default version for the specified
2589 # model. The string must use the following format:
2590 #
2591 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL&quot;`
2592 &quot;uri&quot;: &quot;A String&quot;, # Use this field if you want to specify a Google Cloud Storage path for
2593 # the model to use.
2594 &quot;maxWorkerCount&quot;: &quot;A String&quot;, # Optional. The maximum number of workers to be used for parallel processing.
2595 # Defaults to 10 if not specified.
2596 &quot;signatureName&quot;: &quot;A String&quot;, # Optional. The name of the signature defined in the SavedModel to use for
2597 # this job. Please refer to
2598 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
2599 # for information about how to use signatures.
2600 #
2601 # Defaults to
2602 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
2603 # , which is &quot;serving_default&quot;.
2604 &quot;outputPath&quot;: &quot;A String&quot;, # Required. The output Google Cloud Storage location.
2605 &quot;outputDataFormat&quot;: &quot;A String&quot;, # Optional. Format of the output data files, defaults to JSON.
2606 &quot;dataFormat&quot;: &quot;A String&quot;, # Required. The format of the input data files.
2607 &quot;batchSize&quot;: &quot;A String&quot;, # Optional. Number of records per batch, defaults to 64.
2608 # The service buffers batch_size records in memory before invoking one
2609 # TensorFlow prediction call internally, so take the record size and
2610 # available memory into consideration when setting this parameter.
2611 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for this batch
2612 # prediction. If not set, AI Platform will pick the runtime version used
2613 # during the CreateVersion request for this model version, or choose the
2614 # latest stable version when model version information is not available
2615 # such as when the model is specified by uri.
2616 &quot;inputPaths&quot;: [ # Required. The Cloud Storage location of the input data files. May contain
2617 # &lt;a href=&quot;/storage/docs/gsutil/addlhelp/WildcardNames&quot;&gt;wildcards&lt;/a&gt;.
2618 &quot;A String&quot;,
2619 ],
2620 &quot;region&quot;: &quot;A String&quot;, # Required. The Google Compute Engine region to run the prediction job in.
2621 # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
2622 # for AI Platform services.
2623 },
2624 &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
Bu Sun Kim65020912020-05-20 12:08:20 -07002625 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
2626 # prevent simultaneous updates of a job from overwriting each other.
2627 # It is strongly suggested that systems make use of the `etag` in the
2628 # read-modify-write cycle to perform job updates in order to avoid race
2629 # conditions: An `etag` is returned in the response to `GetJob`, and
2630 # systems are expected to put that etag in the request to `UpdateJob` to
2631 # ensure that their change will be applied to the same version of the job.
2632 &quot;trainingInput&quot;: { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
2633 # to submit your training job, you can specify the input parameters as
2634 # command-line arguments and/or in a YAML configuration file referenced from
2635 # the --config command-line argument. For details, see the guide to [submitting
2636 # a training job](/ai-platform/training/docs/training-jobs).
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002637 &quot;workerCount&quot;: &quot;A String&quot;, # Optional. The number of worker replicas to use for the training job. Each
2638 # replica in the cluster will be of the type specified in `worker_type`.
2639 #
2640 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
2641 # set this value, you must also set `worker_type`.
2642 #
2643 # The default value is zero.
2644 &quot;pythonModule&quot;: &quot;A String&quot;, # Required. The Python module name to run after installing the packages.
2645 &quot;args&quot;: [ # Optional. Command-line arguments passed to the training application when it
2646 # starts. If your job uses a custom container, then the arguments are passed
2647 # to the container&#x27;s &lt;a class=&quot;external&quot; target=&quot;_blank&quot;
2648 # href=&quot;https://docs.docker.com/engine/reference/builder/#entrypoint&quot;&gt;
2649 # `ENTRYPOINT`&lt;/a&gt; command.
2650 &quot;A String&quot;,
2651 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07002652 &quot;parameterServerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
2653 #
2654 # You should only set `parameterServerConfig.acceleratorConfig` if
2655 # `parameterServerType` is set to a Compute Engine machine type. [Learn
2656 # about restrictions on accelerator configurations for
2657 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2658 #
2659 # Set `parameterServerConfig.imageUri` only if you build a custom image for
2660 # your parameter server. If `parameterServerConfig.imageUri` has not been
2661 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
2662 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kim65020912020-05-20 12:08:20 -07002663 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
2664 # Registry. Learn more about [configuring custom
2665 # containers](/ai-platform/training/docs/distributed-training-containers).
2666 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
2667 # The following rules apply for container_command and container_args:
2668 # - If you do not supply command or args:
2669 # The defaults defined in the Docker image are used.
2670 # - If you supply a command but no args:
2671 # The default EntryPoint and the default Cmd defined in the Docker image
2672 # are ignored. Your command is run without any arguments.
2673 # - If you supply only args:
2674 # The default Entrypoint defined in the Docker image is run with the args
2675 # that you supplied.
2676 # - If you supply a command and args:
2677 # The default Entrypoint and the default Cmd defined in the Docker image
2678 # are ignored. Your command is run with your args.
2679 # It cannot be set unless a custom container image is
2680 # provided.
2681 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
2682 # both cannot be set at the same time.
2683 &quot;A String&quot;,
2684 ],
2685 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
2686 # [Learn about restrictions on accelerator configurations for
2687 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2688 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
2689 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
2690 # [accelerators for online
2691 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
2692 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
2693 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002694 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002695 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
2696 # the one used in the custom container. This field is required if the replica
2697 # is a TPU worker that uses a custom container. Otherwise, do not specify
2698 # this field. This must be a [runtime version that currently supports
2699 # training with
2700 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
2701 #
2702 # Note that the version of TensorFlow included in a runtime version may
2703 # differ from the numbering of the runtime version itself, because it may
2704 # have a different [patch
2705 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
2706 # In this field, you must specify the runtime version (TensorFlow minor
2707 # version). For example, if your custom container runs TensorFlow `1.x.y`,
2708 # specify `1.x`.
2709 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
2710 # If provided, it overrides the default ENTRYPOINT of the Docker image.
2711 # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
2712 # It cannot be set unless a custom container image is
2713 # provided.
2714 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
2715 # both cannot be set at the same time.
2716 &quot;A String&quot;,
2717 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07002718 },
2719 &quot;encryptionConfig&quot;: { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
2720 # protect resources created by a training job, instead of using Google&#x27;s
2721 # default encryption. If this is set, then all resources created by the
2722 # training job will be encrypted with the customer-managed encryption key
2723 # that you specify.
2724 #
2725 # [Learn how and when to use CMEK with AI Platform
2726 # Training](/ai-platform/training/docs/cmek).
2727 # a resource.
2728 &quot;kmsKeyName&quot;: &quot;A String&quot;, # The Cloud KMS resource identifier of the customer-managed encryption key
2729 # used to protect a resource, such as a training job. It has the following
2730 # format:
2731 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
2732 },
2733 &quot;hyperparameters&quot;: { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
Bu Sun Kim65020912020-05-20 12:08:20 -07002734 &quot;params&quot;: [ # Required. The set of parameters to tune.
2735 { # Represents a single hyperparameter to optimize.
Bu Sun Kim65020912020-05-20 12:08:20 -07002736 &quot;categoricalValues&quot;: [ # Required if type is `CATEGORICAL`. The list of possible categories.
2737 &quot;A String&quot;,
2738 ],
2739 &quot;parameterName&quot;: &quot;A String&quot;, # Required. The parameter name must be unique amongst all ParameterConfigs in
2740 # a HyperparameterSpec message. E.g., &quot;learning_rate&quot;.
2741 &quot;minValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
2742 # should be unset if type is `CATEGORICAL`. This value should be an integer if
2743 # type is `INTEGER`.
2744 &quot;discreteValues&quot;: [ # Required if type is `DISCRETE`.
2745 # A list of feasible points.
2746 # The list should be in strictly increasing order. For instance, this
2747 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
2748 # should not contain more than 1,000 values.
2749 3.14,
2750 ],
2751 &quot;scaleType&quot;: &quot;A String&quot;, # Optional. How the parameter should be scaled to the hypercube.
2752 # Leave unset for categorical parameters.
2753 # Some kind of scaling is strongly recommended for real or integral
2754 # parameters (e.g., `UNIT_LINEAR_SCALE`).
2755 &quot;maxValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
2756 # should be unset if type is `CATEGORICAL`. This value should be an integer if
2757 # type is `INTEGER`.
2758 &quot;type&quot;: &quot;A String&quot;, # Required. The type of the parameter.
2759 },
2760 ],
2761 &quot;enableTrialEarlyStopping&quot;: True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
2762 # early stopping.
2763 &quot;resumePreviousJobId&quot;: &quot;A String&quot;, # Optional. The prior hyperparameter tuning job id that users hope to
2764 # continue with. The job id will be used to find the corresponding vizier
2765 # study guid and resume the study.
2766 &quot;maxParallelTrials&quot;: 42, # Optional. The number of training trials to run concurrently.
2767 # You can reduce the time it takes to perform hyperparameter tuning by adding
2768 # trials in parallel. However, each trial only benefits from the information
2769 # gained in completed trials. That means that a trial does not get access to
2770 # the results of trials running at the same time, which could reduce the
2771 # quality of the overall optimization.
2772 #
2773 # Each trial will use the same scale tier and machine types.
2774 #
2775 # Defaults to one.
2776 &quot;maxFailedTrials&quot;: 42, # Optional. The number of failed trials that need to be seen before failing
2777 # the hyperparameter tuning job. You can specify this field to override the
2778 # default failing criteria for AI Platform hyperparameter tuning jobs.
2779 #
2780 # Defaults to zero, which means the service decides when a hyperparameter
2781 # job should fail.
2782 &quot;goal&quot;: &quot;A String&quot;, # Required. The type of goal to use for tuning. Available types are
2783 # `MAXIMIZE` and `MINIMIZE`.
2784 #
2785 # Defaults to `MAXIMIZE`.
2786 &quot;maxTrials&quot;: 42, # Optional. How many training trials should be attempted to optimize
2787 # the specified hyperparameters.
2788 #
2789 # Defaults to one.
2790 &quot;algorithm&quot;: &quot;A String&quot;, # Optional. The search algorithm specified for the hyperparameter
2791 # tuning job.
2792 # Uses the default AI Platform hyperparameter tuning
2793 # algorithm if unspecified.
2794 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # Optional. The TensorFlow summary tag name to use for optimizing trials. For
2795 # current versions of TensorFlow, this tag name should exactly match what is
2796 # shown in TensorBoard, including all scopes. For versions of TensorFlow
2797 # prior to 0.12, this should be only the tag passed to tf.Summary.
2798 # By default, &quot;training/hptuning/metric&quot; will be used.
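      # A minimal illustrative spec combining the fields above (all values are
      # examples only, not defaults):
      #
      #   &quot;hyperparameters&quot;: {
      #     &quot;goal&quot;: &quot;MAXIMIZE&quot;,
      #     &quot;hyperparameterMetricTag&quot;: &quot;accuracy&quot;,
      #     &quot;maxTrials&quot;: 10,
      #     &quot;maxParallelTrials&quot;: 2,
      #     &quot;params&quot;: [
      #       {&quot;parameterName&quot;: &quot;learning_rate&quot;, &quot;type&quot;: &quot;DOUBLE&quot;,
      #        &quot;minValue&quot;: 0.0001, &quot;maxValue&quot;: 0.1,
      #        &quot;scaleType&quot;: &quot;UNIT_LINEAR_SCALE&quot;}
      #     ]
      #   }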
2799 },
2800 &quot;workerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
2801 #
2802 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
2803 # to a Compute Engine machine type. [Learn about restrictions on accelerator
2804 # configurations for
2805 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2806 #
2807 # Set `workerConfig.imageUri` only if you build a custom image for your
2808 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
2809 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
2810 # containers](/ai-platform/training/docs/distributed-training-containers).
2811 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
2812 # Registry. Learn more about [configuring custom
2813 # containers](/ai-platform/training/docs/distributed-training-containers).
2814 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
2815 # The following rules apply for container_command and container_args:
2816 # - If you do not supply command or args:
2817 # The defaults defined in the Docker image are used.
2818 # - If you supply a command but no args:
2819 # The default EntryPoint and the default Cmd defined in the Docker image
2820 # are ignored. Your command is run without any arguments.
2821 # - If you supply only args:
2822 # The default Entrypoint defined in the Docker image is run with the args
2823 # that you supplied.
2824 # - If you supply a command and args:
2825 # The default Entrypoint and the default Cmd defined in the Docker image
2826 # are ignored. Your command is run with your args.
2827 # It cannot be set if custom container image is
2828 # not provided.
2829 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
2830 # both cannot be set at the same time.
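      # For example (illustrative values): with &quot;containerCommand&quot;: [&quot;python3&quot;]
      # and &quot;containerArgs&quot;: [&quot;-m&quot;, &quot;trainer.task&quot;], the replica&#x27;s container
      # runs `python3 -m trainer.task`.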
2831 &quot;A String&quot;,
2832 ],
2833 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
2834 # [Learn about restrictions on accelerator configurations for
2835 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2836 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
2837 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
2838 # [accelerators for online
2839 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
2840 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
2841 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
2842 },
2843 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
2844 # the one used in the custom container. This field is required if the replica
2845 # is a TPU worker that uses a custom container. Otherwise, do not specify
2846 # this field. This must be a [runtime version that currently supports
2847 # training with
2848 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
2849 #
2850 # Note that the version of TensorFlow included in a runtime version may
2851 # differ from the numbering of the runtime version itself, because it may
2852 # have a different [patch
2853 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
2854 # In this field, you must specify the runtime version (TensorFlow minor
2855 # version). For example, if your custom container runs TensorFlow `1.x.y`,
2856 # specify `1.x`.
2857 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
2858 # If provided, it will override the default ENTRYPOINT of the docker image.
2859 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
2860 # It cannot be set if custom container image is
2861 # not provided.
2862 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
2863 # both cannot be set at the same time.
2864 &quot;A String&quot;,
2865 ],
2866 },
2867 &quot;parameterServerCount&quot;: &quot;A String&quot;, # Optional. The number of parameter server replicas to use for the training
2868 # job. Each replica in the cluster will be of the type specified in
2869 # `parameter_server_type`.
2870 #
2871 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
2872 # set this value, you must also set `parameter_server_type`.
2873 #
2874 # The default value is zero.
2875 &quot;packageUris&quot;: [ # Required. The Google Cloud Storage location of the packages with
2876 # the training program and any additional dependencies.
2877 # The maximum number of package URIs is 100.
2878 &quot;A String&quot;,
2879 ],
2880 &quot;evaluatorCount&quot;: &quot;A String&quot;, # Optional. The number of evaluator replicas to use for the training job.
2881 # Each replica in the cluster will be of the type specified in
2882 # `evaluator_type`.
2883 #
2884 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
2885 # set this value, you must also set `evaluator_type`.
2886 #
2887 # The default value is zero.
2888 &quot;masterType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
2889 # job&#x27;s master worker. You must specify this field when `scaleTier` is set to
2890 # `CUSTOM`.
2891 #
2892 # You can use certain Compute Engine machine types directly in this field.
2893 # The following types are supported:
2894 #
2895 # - `n1-standard-4`
2896 # - `n1-standard-8`
2897 # - `n1-standard-16`
2898 # - `n1-standard-32`
2899 # - `n1-standard-64`
2900 # - `n1-standard-96`
2901 # - `n1-highmem-2`
2902 # - `n1-highmem-4`
2903 # - `n1-highmem-8`
2904 # - `n1-highmem-16`
2905 # - `n1-highmem-32`
2906 # - `n1-highmem-64`
2907 # - `n1-highmem-96`
2908 # - `n1-highcpu-16`
2909 # - `n1-highcpu-32`
2910 # - `n1-highcpu-64`
2911 # - `n1-highcpu-96`
2912 #
2913 # Learn more about [using Compute Engine machine
2914 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
2915 #
2916 # Alternatively, you can use the following legacy machine types:
2917 #
2918 # - `standard`
2919 # - `large_model`
2920 # - `complex_model_s`
2921 # - `complex_model_m`
2922 # - `complex_model_l`
2923 # - `standard_gpu`
2924 # - `complex_model_m_gpu`
2925 # - `complex_model_l_gpu`
2926 # - `standard_p100`
2927 # - `complex_model_m_p100`
2928 # - `standard_v100`
2929 # - `large_model_v100`
2930 # - `complex_model_m_v100`
2931 # - `complex_model_l_v100`
2932 #
2933 # Learn more about [using legacy machine
2934 # types](/ml-engine/docs/machine-types#legacy-machine-types).
2935 #
2936 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
2937 # field. Learn more about the [special configuration options for training
2938 # with
2939 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
2940 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for training. You must
2941 # either specify this field or specify `masterConfig.imageUri`.
2942 #
2943 # For more information, see the [runtime version
2944 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
2945 # manage runtime versions](/ai-platform/training/docs/versioning).
2946 &quot;evaluatorType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
2947 # job&#x27;s evaluator nodes.
2948 #
2949 # The supported values are the same as those described in the entry for
2950 # `masterType`.
2951 #
2952 # This value must be consistent with the category of machine type that
2953 # `masterType` uses. In other words, both must be Compute Engine machine
2954 # types or both must be legacy machine types.
2955 #
2956 # This value must be present when `scaleTier` is set to `CUSTOM` and
2957 # `evaluatorCount` is greater than zero.
2958 &quot;workerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
2959 # job&#x27;s worker nodes.
2960 #
2961 # The supported values are the same as those described in the entry for
2962 # `masterType`.
2963 #
2964 # This value must be consistent with the category of machine type that
2965 # `masterType` uses. In other words, both must be Compute Engine machine
2966 # types or both must be legacy machine types.
2967 #
2968 # If you use `cloud_tpu` for this value, see special instructions for
2969 # [configuring a custom TPU
2970 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
2971 #
2972 # This value must be present when `scaleTier` is set to `CUSTOM` and
2973 # `workerCount` is greater than zero.
2974 &quot;region&quot;: &quot;A String&quot;, # Required. The region to run the training job in. See the [available
2975 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
2976 &quot;parameterServerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
2977 # job&#x27;s parameter server.
2978 #
2979 # The supported values are the same as those described in the entry for
2980 # `master_type`.
2981 #
2982 # This value must be consistent with the category of machine type that
2983 # `masterType` uses. In other words, both must be Compute Engine machine
2984 # types or both must be legacy machine types.
2985 #
2986 # This value must be present when `scaleTier` is set to `CUSTOM` and
2987 # `parameter_server_count` is greater than zero.
2988 &quot;masterConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
2989 #
2990 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
2991 # to a Compute Engine machine type. Learn about [restrictions on accelerator
2992 # configurations for
2993 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2994 #
2995 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
2996 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
2997 # about [configuring custom
2998 # containers](/ai-platform/training/docs/distributed-training-containers).
2999 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
3000 # Registry. Learn more about [configuring custom
3001 # containers](/ai-platform/training/docs/distributed-training-containers).
3002 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
3003 # The following rules apply for container_command and container_args:
3004 # - If you do not supply command or args:
3005 # The defaults defined in the Docker image are used.
3006 # - If you supply a command but no args:
3007 # The default EntryPoint and the default Cmd defined in the Docker image
3008 # are ignored. Your command is run without any arguments.
3009 # - If you supply only args:
3010 # The default Entrypoint defined in the Docker image is run with the args
3011 # that you supplied.
3012 # - If you supply a command and args:
3013 # The default Entrypoint and the default Cmd defined in the Docker image
3014 # are ignored. Your command is run with your args.
3015 # It cannot be set if custom container image is
3016 # not provided.
3017 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3018 # both cannot be set at the same time.
3019 &quot;A String&quot;,
3020 ],
3021 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3022 # [Learn about restrictions on accelerator configurations for
3023 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3024 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3025 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3026 # [accelerators for online
3027 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3028 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
3029 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
3030 },
3031 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
3032 # the one used in the custom container. This field is required if the replica
3033 # is a TPU worker that uses a custom container. Otherwise, do not specify
3034 # this field. This must be a [runtime version that currently supports
3035 # training with
3036 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3037 #
3038 # Note that the version of TensorFlow included in a runtime version may
3039 # differ from the numbering of the runtime version itself, because it may
3040 # have a different [patch
3041 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3042 # In this field, you must specify the runtime version (TensorFlow minor
3043 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3044 # specify `1.x`.
3045 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
3046 # If provided, it will override the default ENTRYPOINT of the docker image.
3047 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
3048 # It cannot be set if custom container image is
3049 # not provided.
3050 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3051 # both cannot be set at the same time.
3052 &quot;A String&quot;,
3053 ],
3054 },
3055 &quot;scaleTier&quot;: &quot;A String&quot;, # Required. Specifies the machine types, the number of replicas for workers
3056 # and parameter servers.
3057 &quot;jobDir&quot;: &quot;A String&quot;, # Optional. A Google Cloud Storage path in which to store training outputs
3058 # and other data needed for training. This path is passed to your TensorFlow
3059 # program as the &#x27;--job-dir&#x27; command-line argument. The benefit of specifying
3060 # this field is that Cloud ML validates the path for use in training.
3061 &quot;pythonVersion&quot;: &quot;A String&quot;, # Optional. The version of Python used in training. You must either specify
3062 # this field or specify `masterConfig.imageUri`.
3063 #
3064 # The following Python versions are available:
3065 #
3066 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
3067 # later.
3068 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
3069 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
3070 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
3071 # earlier.
3072 #
3073 # Read more about the Python versions available for [each runtime
3074 # version](/ml-engine/docs/runtime-version-list).
3075 &quot;scheduling&quot;: { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
3076 &quot;maxWaitTime&quot;: &quot;A String&quot;,
3077 &quot;maxRunningTime&quot;: &quot;A String&quot;, # Optional. The maximum job running time, expressed in seconds. The field can
3078 # contain up to nine fractional digits, terminated by `s`. If not specified,
3079 # this field defaults to `604800s` (seven days).
3080 #
3081 # If the training job is still running after this duration, AI Platform
3082 # Training cancels it.
3083 #
3084 # For example, if you want to ensure your job runs for no more than 2 hours,
3085 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
3086 # minute).
3087 #
3088 # If you submit your training job using the `gcloud` tool, you can [provide
3089 # this field in a `config.yaml`
3090 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
3091 # For example:
3092 #
3093 # ```yaml
3094 # trainingInput:
3095 # ...
3096 # scheduling:
3097 # maxRunningTime: 7200s
3098 # ...
3099 # ```
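      # The equivalent field in a JSON request body is (value is illustrative):
      #
      #   &quot;scheduling&quot;: {&quot;maxRunningTime&quot;: &quot;7200s&quot;}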
3100 },
3101 &quot;network&quot;: &quot;A String&quot;, # Optional. The full name of the Google Compute Engine
3102 # [network](/compute/docs/networks-and-firewalls#networks) to which the Job
3103 # is peered. For example, projects/12345/global/networks/myVPC. Format is of
3104 # the form projects/{project}/global/networks/{network}, where {project} is a
3105 # project number, as in &#x27;12345&#x27;, and {network} is the network name.
3106 #
3107 # Private services access must already be configured for the network. If left
3108 # unspecified, the Job is not peered with any network. Learn more about
3109 # connecting a Job to a user network over private IP.
3111 &quot;evaluatorConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
3112 #
3113 # You should only set `evaluatorConfig.acceleratorConfig` if
3114 # `evaluatorType` is set to a Compute Engine machine type. [Learn
3115 # about restrictions on accelerator configurations for
3116 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3117 #
3118 # Set `evaluatorConfig.imageUri` only if you build a custom image for
3119 # your evaluator. If `evaluatorConfig.imageUri` has not been
3120 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
3121 # containers](/ai-platform/training/docs/distributed-training-containers).
3122 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
3123 # Registry. Learn more about [configuring custom
3124 # containers](/ai-platform/training/docs/distributed-training-containers).
3125 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
3126 # The following rules apply for container_command and container_args:
3127 # - If you do not supply command or args:
3128 # The defaults defined in the Docker image are used.
3129 # - If you supply a command but no args:
3130 # The default EntryPoint and the default Cmd defined in the Docker image
3131 # are ignored. Your command is run without any arguments.
3132 # - If you supply only args:
3133 # The default Entrypoint defined in the Docker image is run with the args
3134 # that you supplied.
3135 # - If you supply a command and args:
3136 # The default Entrypoint and the default Cmd defined in the Docker image
3137 # are ignored. Your command is run with your args.
3138 # It cannot be set if custom container image is
3139 # not provided.
3140 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3141 # both cannot be set at the same time.
3142 &quot;A String&quot;,
3143 ],
3144 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3145 # [Learn about restrictions on accelerator configurations for
3146 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3147 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3148 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3149 # [accelerators for online
3150 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3151 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
3152 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
3153 },
3154 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
3155 # the one used in the custom container. This field is required if the replica
3156 # is a TPU worker that uses a custom container. Otherwise, do not specify
3157 # this field. This must be a [runtime version that currently supports
3158 # training with
3159 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3160 #
3161 # Note that the version of TensorFlow included in a runtime version may
3162 # differ from the numbering of the runtime version itself, because it may
3163 # have a different [patch
3164 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3165 # In this field, you must specify the runtime version (TensorFlow minor
3166 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3167 # specify `1.x`.
3168 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
3169 # If provided, it will override the default ENTRYPOINT of the docker image.
3170 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
3171 # It cannot be set if custom container image is
3172 # not provided.
3173 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3174 # both cannot be set at the same time.
3175 &quot;A String&quot;,
3176 ],
3177 },
3178 &quot;useChiefInTfConfig&quot;: True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
3179 # variable when training with a custom container. Defaults to `false`. [Learn
3180 # more about this
3181 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
3182 #
3183 # This field has no effect for training jobs that don&#x27;t use a custom
3184 # container.
3185 },
3186 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of a job.
3187 &quot;jobId&quot;: &quot;A String&quot;, # Required. The user-specified id of the job.
3188 &quot;endTime&quot;: &quot;A String&quot;, # Output only. When the job processing was completed.
3189 &quot;startTime&quot;: &quot;A String&quot;, # Output only. When the job processing was started.
3190 &quot;predictionOutput&quot;: { # Represents results of a prediction job. # The current prediction job result.
3191 &quot;errorCount&quot;: &quot;A String&quot;, # The number of data instances which resulted in errors.
3192 &quot;outputPath&quot;: &quot;A String&quot;, # The output Google Cloud Storage location provided at the job creation time.
3193 &quot;nodeHours&quot;: 3.14, # Node hours used by the batch prediction job.
3194 &quot;predictionCount&quot;: &quot;A String&quot;, # The number of generated predictions.
3195 },
3196 &quot;trainingOutput&quot;: { # Represents results of a training job. Output only. # The current training job result.
3197 &quot;isBuiltInAlgorithmJob&quot;: True or False, # Whether this job is a built-in Algorithm job.
3198 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
3199 # Only set for built-in algorithms jobs.
3200 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
3201 # saves the trained model. Only set for successful jobs that don&#x27;t use
3202 # hyperparameter tuning.
3203 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
3204 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
3205 # trained.
3206 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
3207 },
3208 &quot;trials&quot;: [ # Results for individual Hyperparameter trials.
3209 # Only set for hyperparameter tuning jobs.
3210 { # Represents the result of a single hyperparameter tuning trial from a
3211 # training job. The TrainingOutput object that is returned on successful
3212 # completion of a training job with hyperparameter tuning includes a list
3213 # of HyperparameterOutput objects, one for each successful trial.
3214 &quot;trialId&quot;: &quot;A String&quot;, # The trial id for these results.
3215 &quot;endTime&quot;: &quot;A String&quot;, # Output only. End time for the trial.
3216 &quot;startTime&quot;: &quot;A String&quot;, # Output only. Start time for the trial.
3217 &quot;isTrialStoppedEarly&quot;: True or False, # True if the trial is stopped early.
3218 &quot;finalMetric&quot;: { # An observed value of a metric. # The final objective metric seen for this trial.
3219 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
3220 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
3221 },
3222 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
3223 # Only set for trials of built-in algorithms jobs that have succeeded.
3224 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
3225 # saves the trained model. Only set for successful jobs that don&#x27;t use
3226 # hyperparameter tuning.
3227 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
3228 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
3229 # trained.
3230 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
3231 },
3232 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of the trial.
3233 &quot;allMetrics&quot;: [ # All recorded object metrics for this trial. This field is not currently
3234 # populated.
3235 { # An observed value of a metric.
3236 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
3237 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
3238 },
3239 ],
3240 &quot;hyperparameters&quot;: { # The hyperparameters given to this trial.
3241 &quot;a_key&quot;: &quot;A String&quot;,
3242 },
3243 },
3244 ],
3245 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # The TensorFlow summary tag name used for optimizing hyperparameter tuning
3246 # trials. See
3247 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
3248 # for more information. Only set for hyperparameter tuning jobs.
3249 &quot;completedTrialCount&quot;: &quot;A String&quot;, # The number of hyperparameter tuning trials that completed successfully.
3250 # Only set for hyperparameter tuning jobs.
3251 &quot;isHyperparameterTuningJob&quot;: True or False, # Whether this job is a hyperparameter tuning job.
3252 &quot;consumedMLUnits&quot;: 3.14, # The amount of ML units consumed by the job.
3253 },
3254 &quot;createTime&quot;: &quot;A String&quot;, # Output only. When the job was created.
3255 &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your jobs.
3256 # Each label is a key-value pair, where both the key and the value are
3257 # arbitrary strings that you supply.
3258 # For more information, see the documentation on
3259 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
3260 &quot;a_key&quot;: &quot;A String&quot;,
3261 },
3262 },
3263 ],
3264 &quot;nextPageToken&quot;: &quot;A String&quot;, # Optional. Pass this token as the `page_token` field of the request for a
3265 # subsequent call.
3266 }</pre>
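<p>A minimal sketch of calling this method with the Python client library.
The project ID, the page size, and the use of default credentials are
assumptions for illustration:</p>
<pre>
  # pip install google-api-python-client
  from googleapiclient import discovery

  ml = discovery.build(&#x27;ml&#x27;, &#x27;v1&#x27;)
  response = ml.projects().jobs().list(
      parent=&#x27;projects/my-project&#x27;, pageSize=10).execute()
  for job in response.get(&#x27;jobs&#x27;, []):
      print(job[&#x27;jobId&#x27;], job.get(&#x27;state&#x27;))
</pre>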
3267</div>
3268
3269<div class="method">
3270 <code class="details" id="list_next">list_next(previous_request, previous_response)</code>
3271 <pre>Retrieves the next page of results.
3272
3273Args:
3274 previous_request: The request for the previous page. (required)
3275 previous_response: The response from the request for the previous page. (required)
3276
3277Returns:
3278 A request object that you can call &#x27;execute()&#x27; on to request the next
3279 page. Returns None if there are no more items in the collection.
3280 </pre>
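<p>A common pagination loop with this helper, sketched with the Python client.
The service object <code>ml</code> and the parent value are illustrative:</p>
<pre>
  request = ml.projects().jobs().list(parent=&#x27;projects/my-project&#x27;)
  while request is not None:
      response = request.execute()
      for job in response.get(&#x27;jobs&#x27;, []):
          print(job[&#x27;jobId&#x27;])
      request = ml.projects().jobs().list_next(
          previous_request=request, previous_response=response)
</pre>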
3281</div>
3282
3283<div class="method">
3284 <code class="details" id="patch">patch(name, body=None, updateMask=None, x__xgafv=None)</code>
3285 <pre>Updates a specific job resource.
3286
3287Currently the only supported fields to update are `labels`.
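
For example, a minimal sketch of updating a job&#x27;s labels with the Python
client (the job name and label values are illustrative):

    ml.projects().jobs().patch(
        name=&#x27;projects/my-project/jobs/my_job&#x27;,
        body={&#x27;labels&#x27;: {&#x27;team&#x27;: &#x27;research&#x27;}},
        updateMask=&#x27;labels&#x27;).execute()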
3288
3289Args:
3290 name: string, Required. The job name. (required)
3291 body: object, The request body.
3292 The object takes the form of:
3293
3294{ # Represents a training or prediction job.
3295 &quot;predictionInput&quot;: { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
3296 &quot;versionName&quot;: &quot;A String&quot;, # Use this field if you want to specify a version of the model to use. The
3297 # string is formatted the same way as `model_version`, with the addition
3298 # of the version information:
3299 #
3300 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION&quot;`
3301 &quot;modelName&quot;: &quot;A String&quot;, # Use this field if you want to use the default version for the specified
3302 # model. The string must use the following format:
3303 #
3304 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL&quot;`
3305 &quot;uri&quot;: &quot;A String&quot;, # Use this field if you want to specify a Google Cloud Storage path for
3306 # the model to use.
3307 &quot;maxWorkerCount&quot;: &quot;A String&quot;, # Optional. The maximum number of workers to be used for parallel processing.
3308 # Defaults to 10 if not specified.
3309 &quot;signatureName&quot;: &quot;A String&quot;, # Optional. The name of the signature defined in the SavedModel to use for
3310 # this job. Please refer to
3311 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
3312 # for information about how to use signatures.
3313 #
3314 # Defaults to
3315 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
3316 # , which is &quot;serving_default&quot;.
3317 &quot;outputPath&quot;: &quot;A String&quot;, # Required. The output Google Cloud Storage location.
3318 &quot;outputDataFormat&quot;: &quot;A String&quot;, # Optional. Format of the output data files, defaults to JSON.
3319 &quot;dataFormat&quot;: &quot;A String&quot;, # Required. The format of the input data files.
3320 &quot;batchSize&quot;: &quot;A String&quot;, # Optional. Number of records per batch, defaults to 64.
3321 # The service will buffer batch_size number of records in memory before
3322 # invoking one TensorFlow prediction call internally, so take the record
3323 # size and available memory into consideration when setting this parameter.
3324 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for this batch
3325 # prediction. If not set, AI Platform will pick the runtime version used
3326 # during the CreateVersion request for this model version, or choose the
3327 # latest stable version when model version information is not available
3328 # such as when the model is specified by uri.
3329 &quot;inputPaths&quot;: [ # Required. The Cloud Storage location of the input data files. May contain
3330 # &lt;a href=&quot;/storage/docs/gsutil/addlhelp/WildcardNames&quot;&gt;wildcards&lt;/a&gt;.
3331 &quot;A String&quot;,
3332 ],
3333 &quot;region&quot;: &quot;A String&quot;, # Required. The Google Compute Engine region to run the prediction job in.
3334 # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
3335 # for AI Platform services.
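    # A minimal illustrative predictionInput (model, bucket, and region values
    # are placeholders):
    #
    #   &quot;predictionInput&quot;: {
    #     &quot;modelName&quot;: &quot;projects/my-project/models/my_model&quot;,
    #     &quot;dataFormat&quot;: &quot;JSON&quot;,
    #     &quot;inputPaths&quot;: [&quot;gs://my-bucket/instances-*.json&quot;],
    #     &quot;outputPath&quot;: &quot;gs://my-bucket/predictions/&quot;,
    #     &quot;region&quot;: &quot;us-central1&quot;
    #   }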
3336 },
3337 &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
3338 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
3339 # prevent simultaneous updates of a job from overwriting each other.
3340 # It is strongly suggested that systems make use of the `etag` in the
3341 # read-modify-write cycle to perform job updates in order to avoid race
3342 # conditions: An `etag` is returned in the response to `GetJob`, and
3343 # systems are expected to put that etag in the request to `UpdateJob` to
3344 # ensure that their change will be applied to the same version of the job.
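      # For example, a sketch of this read-modify-write cycle with the Python
      # client (names are illustrative, and the exact update mask you need may
      # differ):
      #   job = ml.projects().jobs().get(name=job_name).execute()
      #   job[&#x27;labels&#x27;][&#x27;team&#x27;] = &#x27;research&#x27;
      #   body = {&#x27;labels&#x27;: job[&#x27;labels&#x27;], &#x27;etag&#x27;: job[&#x27;etag&#x27;]}
      #   ml.projects().jobs().patch(name=job_name, body=body,
      #                              updateMask=&#x27;labels&#x27;).execute()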
3345 &quot;trainingInput&quot;: { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
3346 # to submit your training job, you can specify the input parameters as
3347 # command-line arguments and/or in a YAML configuration file referenced from
3348 # the --config command-line argument. For details, see the guide to [submitting
3349 # a training job](/ai-platform/training/docs/training-jobs).
3350 &quot;workerCount&quot;: &quot;A String&quot;, # Optional. The number of worker replicas to use for the training job. Each
3351 # replica in the cluster will be of the type specified in `worker_type`.
3352 #
3353 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
3354 # set this value, you must also set `worker_type`.
3355 #
3356 # The default value is zero.
3357 &quot;pythonModule&quot;: &quot;A String&quot;, # Required. The Python module name to run after installing the packages.
3358 &quot;args&quot;: [ # Optional. Command-line arguments passed to the training application when it
3359 # starts. If your job uses a custom container, then the arguments are passed
3360 # to the container&#x27;s &lt;a class=&quot;external&quot; target=&quot;_blank&quot;
3361 # href=&quot;https://docs.docker.com/engine/reference/builder/#entrypoint&quot;&gt;
3362 # `ENTRYPOINT`&lt;/a&gt; command.
3363 &quot;A String&quot;,
3364 ],
3365 &quot;parameterServerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
3366 #
3367 # You should only set `parameterServerConfig.acceleratorConfig` if
3368 # `parameterServerType` is set to a Compute Engine machine type. [Learn
3369 # about restrictions on accelerator configurations for
3370 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3371 #
3372 # Set `parameterServerConfig.imageUri` only if you build a custom image for
3373 # your parameter server. If `parameterServerConfig.imageUri` has not been
3374 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
3375 # containers](/ai-platform/training/docs/distributed-training-containers).
3376 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
3377 # Registry. Learn more about [configuring custom
3378 # containers](/ai-platform/training/docs/distributed-training-containers).
3379 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
3380 # The following rules apply for container_command and container_args:
3381 # - If you do not supply command or args:
3382 # The defaults defined in the Docker image are used.
3383 # - If you supply a command but no args:
3384 # The default EntryPoint and the default Cmd defined in the Docker image
3385 # are ignored. Your command is run without any arguments.
3386 # - If you supply only args:
3387 # The default Entrypoint defined in the Docker image is run with the args
3388 # that you supplied.
3389 # - If you supply a command and args:
3390 # The default Entrypoint and the default Cmd defined in the Docker image
3391 # are ignored. Your command is run with your args.
3392 # It cannot be set if custom container image is
3393 # not provided.
3394 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3395 # both cannot be set at the same time.
3396 &quot;A String&quot;,
3397 ],
3398 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3399 # [Learn about restrictions on accelerator configurations for
3400 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3401 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3402 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3403 # [accelerators for online
3404 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3405 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
3406 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
3407 },
3408 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
3409 # the one used in the custom container. This field is required if the replica
3410 # is a TPU worker that uses a custom container. Otherwise, do not specify
3411 # this field. This must be a [runtime version that currently supports
3412 # training with
3413 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3414 #
3415 # Note that the version of TensorFlow included in a runtime version may
3416 # differ from the numbering of the runtime version itself, because it may
3417 # have a different [patch
3418 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3419 # In this field, you must specify the runtime version (TensorFlow minor
3420 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3421 # specify `1.x`.
3422 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
3423 # If provided, it will override the default ENTRYPOINT of the docker image.
3424 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
3425 # It cannot be set if custom container image is
3426 # not provided.
3427 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3428 # both cannot be set at the same time.
3429 &quot;A String&quot;,
3430 ],
3431 },
3432 &quot;encryptionConfig&quot;: { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
3433 # protect resources created by a training job, instead of using Google&#x27;s
3434 # default encryption. If this is set, then all resources created by the
3435 # training job will be encrypted with the customer-managed encryption key
3436 # that you specify.
3437 #
3438 # [Learn how and when to use CMEK with AI Platform
3439 # Training](/ai-platform/training/docs/cmek).
3440 # a resource.
3441 &quot;kmsKeyName&quot;: &quot;A String&quot;, # The Cloud KMS resource identifier of the customer-managed encryption key
3442 # used to protect a resource, such as a training job. It has the following
3443 # format:
3444 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
3445 },
3446 &quot;hyperparameters&quot;: { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
3447 &quot;params&quot;: [ # Required. The set of parameters to tune.
3448 { # Represents a single hyperparameter to optimize.
3449 &quot;categoricalValues&quot;: [ # Required if type is `CATEGORICAL`. The list of possible categories.
3450 &quot;A String&quot;,
3451 ],
3452 &quot;parameterName&quot;: &quot;A String&quot;, # Required. The parameter name must be unique amongst all ParameterConfigs in
3453 # a HyperparameterSpec message. E.g., &quot;learning_rate&quot;.
3454 &quot;minValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
3455 # should be unset if type is `CATEGORICAL`. This value should be integers if
3456 # type is INTEGER.
3457 &quot;discreteValues&quot;: [ # Required if type is `DISCRETE`.
3458 # A list of feasible points.
3459 # The list should be in strictly increasing order. For instance, this
3460 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
3461 # should not contain more than 1,000 values.
3462 3.14,
3463 ],
3464 &quot;scaleType&quot;: &quot;A String&quot;, # Optional. How the parameter should be scaled to the hypercube.
3465 # Leave unset for categorical parameters.
3466 # Some kind of scaling is strongly recommended for real or integral
3467 # parameters (e.g., `UNIT_LINEAR_SCALE`).
3468 &quot;maxValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
3469 # should be unset if type is `CATEGORICAL`. This value should be integers if
3470 # type is `INTEGER`.
3471 &quot;type&quot;: &quot;A String&quot;, # Required. The type of the parameter.
3472 },
3473 ],
3474 &quot;enableTrialEarlyStopping&quot;: True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
3475 # early stopping.
3476 &quot;resumePreviousJobId&quot;: &quot;A String&quot;, # Optional. The prior hyperparameter tuning job id that users hope to
3477 # continue with. The job id will be used to find the corresponding vizier
3478 # study guid and resume the study.
3479 &quot;maxParallelTrials&quot;: 42, # Optional. The number of training trials to run concurrently.
3480 # You can reduce the time it takes to perform hyperparameter tuning by adding
3481 # trials in parallel. However, each trial only benefits from the information
3482 # gained in completed trials. That means that a trial does not get access to
3483 # the results of trials running at the same time, which could reduce the
3484 # quality of the overall optimization.
3485 #
3486 # Each trial will use the same scale tier and machine types.
3487 #
3488 # Defaults to one.
3489 &quot;maxFailedTrials&quot;: 42, # Optional. The number of failed trials that need to be seen before failing
3490 # the hyperparameter tuning job. You can specify this field to override the
3491 # default failing criteria for AI Platform hyperparameter tuning jobs.
3492 #
3493 # Defaults to zero, which means the service decides when a hyperparameter
3494 # job should fail.
3495 &quot;goal&quot;: &quot;A String&quot;, # Required. The type of goal to use for tuning. Available types are
3496 # `MAXIMIZE` and `MINIMIZE`.
3497 #
3498 # Defaults to `MAXIMIZE`.
3499 &quot;maxTrials&quot;: 42, # Optional. How many training trials should be attempted to optimize
3500 # the specified hyperparameters.
3501 #
3502 # Defaults to one.
3503 &quot;algorithm&quot;: &quot;A String&quot;, # Optional. The search algorithm specified for the hyperparameter
3504 # tuning job.
3505 # Uses the default AI Platform hyperparameter tuning
3506 # algorithm if unspecified.
3507 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # Optional. The TensorFlow summary tag name to use for optimizing trials. For
3508 # current versions of TensorFlow, this tag name should exactly match what is
3509 # shown in TensorBoard, including all scopes. For versions of TensorFlow
3510 # prior to 0.12, this should be only the tag passed to tf.Summary.
3511 # By default, &quot;training/hptuning/metric&quot; will be used.
3512 },
3513 &quot;workerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
3514 #
3515 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
3516 # to a Compute Engine machine type. [Learn about restrictions on accelerator
3517 # configurations for
3518 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3519 #
3520 # Set `workerConfig.imageUri` only if you build a custom image for your
3521 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
3522 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
3523 # containers](/ai-platform/training/docs/distributed-training-containers).
3524 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
3525 # Registry. Learn more about [configuring custom
3526 # containers](/ai-platform/training/docs/distributed-training-containers).
3527 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
3528 # The following rules apply for container_command and container_args:
3529 # - If you do not supply command or args:
3530 # The defaults defined in the Docker image are used.
3531 # - If you supply a command but no args:
3532 # The default EntryPoint and the default Cmd defined in the Docker image
3533 # are ignored. Your command is run without any arguments.
3534 # - If you supply only args:
3535 # The default Entrypoint defined in the Docker image is run with the args
3536 # that you supplied.
3537 # - If you supply a command and args:
3538 # The default Entrypoint and the default Cmd defined in the Docker image
3539 # are ignored. Your command is run with your args.
3540 # It cannot be set if custom container image is
3541 # not provided.
3542 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3543 # both cannot be set at the same time.
3544 &quot;A String&quot;,
3545 ],
3546 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3547 # [Learn about restrictions on accelerator configurations for
3548 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3549 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3550 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3551 # [accelerators for online
3552 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3553 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
3554 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
3555 },
3556 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
3557 # the one used in the custom container. This field is required if the replica
3558 # is a TPU worker that uses a custom container. Otherwise, do not specify
3559 # this field. This must be a [runtime version that currently supports
3560 # training with
3561 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3562 #
3563 # Note that the version of TensorFlow included in a runtime version may
3564 # differ from the numbering of the runtime version itself, because it may
3565 # have a different [patch
3566 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3567 # In this field, you must specify the runtime version (TensorFlow minor
3568 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3569 # specify `1.x`.
3570 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
3571 # If provided, it will override the default ENTRYPOINT of the docker image.
3572 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
3573 # It cannot be set if custom container image is
3574 # not provided.
3575 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3576 # both cannot be set at the same time.
3577 &quot;A String&quot;,
3578 ],
3579 },
3580 &quot;parameterServerCount&quot;: &quot;A String&quot;, # Optional. The number of parameter server replicas to use for the training
3581 # job. Each replica in the cluster will be of the type specified in
3582 # `parameter_server_type`.
3583 #
3584 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
3585 # set this value, you must also set `parameter_server_type`.
3586 #
3587 # The default value is zero.
3588 &quot;packageUris&quot;: [ # Required. The Google Cloud Storage location of the packages with
3589 # the training program and any additional dependencies.
3590 # The maximum number of package URIs is 100.
3591 &quot;A String&quot;,
3592 ],
3593 &quot;evaluatorCount&quot;: &quot;A String&quot;, # Optional. The number of evaluator replicas to use for the training job.
3594 # Each replica in the cluster will be of the type specified in
3595 # `evaluator_type`.
3596 #
3597 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
3598 # set this value, you must also set `evaluator_type`.
3599 #
3600 # The default value is zero.
3601 &quot;masterType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
3602 # job&#x27;s master worker. You must specify this field when `scaleTier` is set to
3603 # `CUSTOM`.
3604 #
3605 # You can use certain Compute Engine machine types directly in this field.
3606 # The following types are supported:
3607 #
3608 # - `n1-standard-4`
3609 # - `n1-standard-8`
3610 # - `n1-standard-16`
3611 # - `n1-standard-32`
3612 # - `n1-standard-64`
3613 # - `n1-standard-96`
3614 # - `n1-highmem-2`
3615 # - `n1-highmem-4`
3616 # - `n1-highmem-8`
3617 # - `n1-highmem-16`
3618 # - `n1-highmem-32`
3619 # - `n1-highmem-64`
3620 # - `n1-highmem-96`
3621 # - `n1-highcpu-16`
3622 # - `n1-highcpu-32`
3623 # - `n1-highcpu-64`
3624 # - `n1-highcpu-96`
3625 #
3626 # Learn more about [using Compute Engine machine
3627 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
3628 #
3629 # Alternatively, you can use the following legacy machine types:
3630 #
3631 # - `standard`
3632 # - `large_model`
3633 # - `complex_model_s`
3634 # - `complex_model_m`
3635 # - `complex_model_l`
3636 # - `standard_gpu`
3637 # - `complex_model_m_gpu`
3638 # - `complex_model_l_gpu`
3639 # - `standard_p100`
3640 # - `complex_model_m_p100`
3641 # - `standard_v100`
3642 # - `large_model_v100`
3643 # - `complex_model_m_v100`
3644 # - `complex_model_l_v100`
3645 #
3646 # Learn more about [using legacy machine
3647 # types](/ml-engine/docs/machine-types#legacy-machine-types).
3648 #
3649 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
3650 # field. Learn more about the [special configuration options for training
3651 # with
3652 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
3653 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for training. You must
3654 # either specify this field or specify `masterConfig.imageUri`.
3655 #
3656 # For more information, see the [runtime version
3657 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
3658 # manage runtime versions](/ai-platform/training/docs/versioning).
3659 &quot;evaluatorType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
3660 # job&#x27;s evaluator nodes.
3661 #
3662 # The supported values are the same as those described in the entry for
3663 # `masterType`.
3664 #
3665 # This value must be consistent with the category of machine type that
3666 # `masterType` uses. In other words, both must be Compute Engine machine
3667 # types or both must be legacy machine types.
3668 #
3669 # This value must be present when `scaleTier` is set to `CUSTOM` and
3670 # `evaluatorCount` is greater than zero.
Bu Sun Kim65020912020-05-20 12:08:20 -07003671 &quot;workerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
3672 # job&#x27;s worker nodes.
3673 #
3674 # The supported values are the same as those described in the entry for
3675 # `masterType`.
3676 #
3677 # This value must be consistent with the category of machine type that
3678 # `masterType` uses. In other words, both must be Compute Engine machine
3679 # types or both must be legacy machine types.
3680 #
3681 # If you use `cloud_tpu` for this value, see special instructions for
3682 # [configuring a custom TPU
3683 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
3684 #
3685 # This value must be present when `scaleTier` is set to `CUSTOM` and
3686 # `workerCount` is greater than zero.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07003687 &quot;region&quot;: &quot;A String&quot;, # Required. The region to run the training job in. See the [available
3688 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
Bu Sun Kim65020912020-05-20 12:08:20 -07003689 &quot;parameterServerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
3690 # job&#x27;s parameter server.
3691 #
3692 # The supported values are the same as those described in the entry for
3693 # `master_type`.
3694 #
3695 # This value must be consistent with the category of machine type that
3696 # `masterType` uses. In other words, both must be Compute Engine machine
3697 # types or both must be legacy machine types.
3698 #
3699 # This value must be present when `scaleTier` is set to `CUSTOM` and
3700 # `parameter_server_count` is greater than zero.
3701 &quot;masterConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
3702 #
3703 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
3704 # to a Compute Engine machine type. Learn about [restrictions on accelerator
3705 # configurations for
3706 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3707 #
3708 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
3709 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
3710 # about [configuring custom
3711 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kim65020912020-05-20 12:08:20 -07003712 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
3713 # Registry. Learn more about [configuring custom
3714 # containers](/ai-platform/training/docs/distributed-training-containers).
3715 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
3716 # The following rules apply for container_command and container_args:
3717 # - If you do not supply command or args:
3718 # The defaults defined in the Docker image are used.
3719 # - If you supply a command but no args:
3720 # The default EntryPoint and the default Cmd defined in the Docker image
3721 # are ignored. Your command is run without any arguments.
3722 # - If you supply only args:
3723 # The default Entrypoint defined in the Docker image is run with the args
3724 # that you supplied.
3725 # - If you supply a command and args:
3726 # The default Entrypoint and the default Cmd defined in the Docker image
3727 # are ignored. Your command is run with your args.
3728      # This field cannot be set if a custom container image is
3729      # not provided.
3730 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3731 # both cannot be set at the same time.
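      # For example (an illustrative sketch; the module name and flag are
      # placeholders), supplying
      #     &quot;containerCommand&quot;: [&quot;python3&quot;, &quot;-m&quot;, &quot;trainer.task&quot;],
      #     &quot;containerArgs&quot;: [&quot;--epochs=10&quot;],
      # runs `python3 -m trainer.task --epochs=10` in the replica&#x27;s container.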
3732 &quot;A String&quot;,
3733 ],
3734 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3735 # [Learn about restrictions on accelerator configurations for
3736 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3737 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3738 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3739 # [accelerators for online
3740 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3741 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
3742 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
3743 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07003744 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
3745 # the one used in the custom container. This field is required if the replica
3746 # is a TPU worker that uses a custom container. Otherwise, do not specify
3747 # this field. This must be a [runtime version that currently supports
3748 # training with
3749 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3750 #
3751 # Note that the version of TensorFlow included in a runtime version may
3752 # differ from the numbering of the runtime version itself, because it may
3753 # have a different [patch
3754 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3755 # In this field, you must specify the runtime version (TensorFlow minor
3756 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3757 # specify `1.x`.
3758 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
3759      # If provided, it overrides the default ENTRYPOINT of the Docker image.
3760      # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
3761      # This field cannot be set if a custom container image is
3762      # not provided.
3763 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3764 # both cannot be set at the same time.
3765 &quot;A String&quot;,
3766 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07003767 },
3768 &quot;scaleTier&quot;: &quot;A String&quot;, # Required. Specifies the machine types, the number of replicas for workers
3769 # and parameter servers.
3770 &quot;jobDir&quot;: &quot;A String&quot;, # Optional. A Google Cloud Storage path in which to store training outputs
Dan O'Mearadd494642020-05-01 07:42:23 -07003771 # and other data needed for training. This path is passed to your TensorFlow
Bu Sun Kim65020912020-05-20 12:08:20 -07003772 # program as the &#x27;--job-dir&#x27; command-line argument. The benefit of specifying
Dan O'Mearadd494642020-05-01 07:42:23 -07003773 # this field is that Cloud ML validates the path for use in training.
Bu Sun Kim65020912020-05-20 12:08:20 -07003774 &quot;pythonVersion&quot;: &quot;A String&quot;, # Optional. The version of Python used in training. You must either specify
3775 # this field or specify `masterConfig.imageUri`.
3776 #
3777 # The following Python versions are available:
3778 #
3779 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
3780 # later.
3781 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
3782 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
3783 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
3784 # earlier.
3785 #
3786 # Read more about the Python versions available for [each runtime
3787 # version](/ml-engine/docs/runtime-version-list).
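      # For example, an illustrative pairing consistent with the rules above:
      #
      #     &quot;runtimeVersion&quot;: &quot;1.15&quot;,
      #     &quot;pythonVersion&quot;: &quot;3.7&quot;,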
Bu Sun Kim65020912020-05-20 12:08:20 -07003788 &quot;scheduling&quot;: { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
3789 &quot;maxWaitTime&quot;: &quot;A String&quot;,
3790 &quot;maxRunningTime&quot;: &quot;A String&quot;, # Optional. The maximum job running time, expressed in seconds. The field can
3791 # contain up to nine fractional digits, terminated by `s`. If not specified,
3792 # this field defaults to `604800s` (seven days).
Dan O'Mearadd494642020-05-01 07:42:23 -07003793 #
3794 # If the training job is still running after this duration, AI Platform
3795 # Training cancels it.
3796 #
3797 # For example, if you want to ensure your job runs for no more than 2 hours,
3798 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
3799 # minute).
3800 #
3801 # If you submit your training job using the `gcloud` tool, you can [provide
3802 # this field in a `config.yaml`
3803 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
3804 # For example:
3805 #
3806 # ```yaml
3807 # trainingInput:
3808 # ...
3809 # scheduling:
3810 # maxRunningTime: 7200s
3811 # ...
3812 # ```
3813 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07003814 &quot;network&quot;: &quot;A String&quot;, # Optional. The full name of the Google Compute Engine
3815 # [network](/compute/docs/networks-and-firewalls#networks) to which the Job
3816      # is peered. For example, projects/12345/global/networks/myVPC. The format is
3817      # projects/{project}/global/networks/{network}, where {project} is a
3818      # project number, as in &#x27;12345&#x27;, and {network} is a network name.
3819 #
3820 # Private services access must already be configured for the network. If left
3821      # unspecified, the Job is not peered with any network. Learn more about
3822      # connecting a Job to a user network over private
3823      # IP.
Bu Sun Kim65020912020-05-20 12:08:20 -07003824 &quot;evaluatorConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
Dan O'Mearadd494642020-05-01 07:42:23 -07003825 #
3826 # You should only set `evaluatorConfig.acceleratorConfig` if
3827 # `evaluatorType` is set to a Compute Engine machine type. [Learn
3828 # about restrictions on accelerator configurations for
3829 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3830 #
3831 # Set `evaluatorConfig.imageUri` only if you build a custom image for
3832 # your evaluator. If `evaluatorConfig.imageUri` has not been
3833 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
3834 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kim65020912020-05-20 12:08:20 -07003835 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
3836 # Registry. Learn more about [configuring custom
3837 # containers](/ai-platform/training/docs/distributed-training-containers).
3838 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
3839 # The following rules apply for container_command and container_args:
3840 # - If you do not supply command or args:
3841 # The defaults defined in the Docker image are used.
3842 # - If you supply a command but no args:
3843 # The default EntryPoint and the default Cmd defined in the Docker image
3844 # are ignored. Your command is run without any arguments.
3845 # - If you supply only args:
3846 # The default Entrypoint defined in the Docker image is run with the args
3847 # that you supplied.
3848 # - If you supply a command and args:
3849 # The default Entrypoint and the default Cmd defined in the Docker image
3850 # are ignored. Your command is run with your args.
3851      # This field cannot be set if a custom container image is
3852      # not provided.
3853 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3854 # both cannot be set at the same time.
3855 &quot;A String&quot;,
3856 ],
3857 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
Dan O'Mearadd494642020-05-01 07:42:23 -07003858 # [Learn about restrictions on accelerator configurations for
3859 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3860 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3861 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3862 # [accelerators for online
3863 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
Bu Sun Kim65020912020-05-20 12:08:20 -07003864 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
3865 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
Dan O'Mearadd494642020-05-01 07:42:23 -07003866 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07003867 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
3868 # the one used in the custom container. This field is required if the replica
3869 # is a TPU worker that uses a custom container. Otherwise, do not specify
3870 # this field. This must be a [runtime version that currently supports
3871 # training with
3872 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3873 #
3874 # Note that the version of TensorFlow included in a runtime version may
3875 # differ from the numbering of the runtime version itself, because it may
3876 # have a different [patch
3877 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3878 # In this field, you must specify the runtime version (TensorFlow minor
3879 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3880 # specify `1.x`.
3881 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
3882      # If provided, it overrides the default ENTRYPOINT of the Docker image.
3883      # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
3884      # This field cannot be set if a custom container image is
3885      # not provided.
3886 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3887 # both cannot be set at the same time.
3888 &quot;A String&quot;,
3889 ],
Dan O'Mearadd494642020-05-01 07:42:23 -07003890 },
Bu Sun Kim65020912020-05-20 12:08:20 -07003891 &quot;useChiefInTfConfig&quot;: True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
Dan O'Mearadd494642020-05-01 07:42:23 -07003892 # variable when training with a custom container. Defaults to `false`. [Learn
3893 # more about this
3894 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
3895 #
Bu Sun Kim65020912020-05-20 12:08:20 -07003896 # This field has no effect for training jobs that don&#x27;t use a custom
Dan O'Mearadd494642020-05-01 07:42:23 -07003897 # container.
Bu Sun Kim65020912020-05-20 12:08:20 -07003898 },
3899 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of a job.
3900 &quot;jobId&quot;: &quot;A String&quot;, # Required. The user-specified id of the job.
3901 &quot;endTime&quot;: &quot;A String&quot;, # Output only. When the job processing was completed.
3902 &quot;startTime&quot;: &quot;A String&quot;, # Output only. When the job processing was started.
3903 &quot;predictionOutput&quot;: { # Represents results of a prediction job. # The current prediction job result.
3904 &quot;errorCount&quot;: &quot;A String&quot;, # The number of data instances which resulted in errors.
3905 &quot;outputPath&quot;: &quot;A String&quot;, # The output Google Cloud Storage location provided at the job creation time.
3906 &quot;nodeHours&quot;: 3.14, # Node hours used by the batch prediction job.
3907 &quot;predictionCount&quot;: &quot;A String&quot;, # The number of generated predictions.
3908 },
3909 &quot;trainingOutput&quot;: { # Represents results of a training job. Output only. # The current training job result.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07003910 &quot;isBuiltInAlgorithmJob&quot;: True or False, # Whether this job is a built-in Algorithm job.
3911 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
3912 # Only set for built-in algorithms jobs.
3913 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
3914 # saves the trained model. Only set for successful jobs that don&#x27;t use
3915 # hyperparameter tuning.
3916 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
3917 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
3918 # trained.
3919 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
3920 },
Bu Sun Kim65020912020-05-20 12:08:20 -07003921 &quot;trials&quot;: [ # Results for individual Hyperparameter trials.
3922 # Only set for hyperparameter tuning jobs.
3923 { # Represents the result of a single hyperparameter tuning trial from a
3924 # training job. The TrainingOutput object that is returned on successful
3925 # completion of a training job with hyperparameter tuning includes a list
3926 # of HyperparameterOutput objects, one for each successful trial.
Bu Sun Kim65020912020-05-20 12:08:20 -07003927 &quot;trialId&quot;: &quot;A String&quot;, # The trial id for these results.
3928 &quot;endTime&quot;: &quot;A String&quot;, # Output only. End time for the trial.
Bu Sun Kim65020912020-05-20 12:08:20 -07003929 &quot;startTime&quot;: &quot;A String&quot;, # Output only. Start time for the trial.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07003930 &quot;isTrialStoppedEarly&quot;: True or False, # True if the trial is stopped early.
Bu Sun Kim65020912020-05-20 12:08:20 -07003931 &quot;finalMetric&quot;: { # An observed value of a metric. # The final objective metric seen for this trial.
Bu Sun Kim65020912020-05-20 12:08:20 -07003932 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07003933 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
Bu Sun Kim65020912020-05-20 12:08:20 -07003934 },
3935 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
3936 # Only set for trials of built-in algorithms jobs that have succeeded.
Bu Sun Kim65020912020-05-20 12:08:20 -07003937 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
3938 # saves the trained model. Only set for successful jobs that don&#x27;t use
3939 # hyperparameter tuning.
3940 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
3941 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
3942 # trained.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07003943 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
Bu Sun Kim65020912020-05-20 12:08:20 -07003944 },
3945 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of the trial.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07003946      &quot;allMetrics&quot;: [ # All recorded objective metrics for this trial. This field is not currently
3947 # populated.
3948 { # An observed value of a metric.
3949 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
3950 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
3951 },
3952 ],
3953 &quot;hyperparameters&quot;: { # The hyperparameters given to this trial.
3954 &quot;a_key&quot;: &quot;A String&quot;,
3955 },
Dan O'Mearadd494642020-05-01 07:42:23 -07003956 },
Bu Sun Kim65020912020-05-20 12:08:20 -07003957 ],
3958 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # The TensorFlow summary tag name used for optimizing hyperparameter tuning
3959 # trials. See
3960 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
3961 # for more information. Only set for hyperparameter tuning jobs.
3962 &quot;completedTrialCount&quot;: &quot;A String&quot;, # The number of hyperparameter tuning trials that completed successfully.
3963 # Only set for hyperparameter tuning jobs.
3964 &quot;isHyperparameterTuningJob&quot;: True or False, # Whether this job is a hyperparameter tuning job.
3965 &quot;consumedMLUnits&quot;: 3.14, # The amount of ML units consumed by the job.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003966 },
Bu Sun Kim65020912020-05-20 12:08:20 -07003967 &quot;createTime&quot;: &quot;A String&quot;, # Output only. When the job was created.
3968    &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your jobs.
3969 # Each label is a key-value pair, where both the key and the value are
3970 # arbitrary strings that you supply.
3971 # For more information, see the documentation on
3972 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
3973 &quot;a_key&quot;: &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003974 },
Bu Sun Kim65020912020-05-20 12:08:20 -07003975 }
3976
3977 updateMask: string, Required. Specifies the path, relative to `Job`, of the field to update.
3978To adopt the etag mechanism, include the `etag` field in the mask, and include
3979the `etag` value in your job resource.
3980
3981For example, to change the labels of a job, the `update_mask` parameter
3982would be specified as `labels`, `etag`, and the
3983`PATCH` request body would specify the new value, as follows:
3984 {
3985 &quot;labels&quot;: {
3986 &quot;owner&quot;: &quot;Google&quot;,
3987 &quot;color&quot;: &quot;Blue&quot;
3988          },
3989 &quot;etag&quot;: &quot;33a64df551425fcc55e4d42a148795d9f25f89d4&quot;
3990 }
3991If `etag` matches the one on the server, the labels of the job will be
3992replaced with the given ones, and the server-side `etag` will be
3993recalculated.
3994
3995Currently the only supported update masks are `labels` and `etag`.
3996 x__xgafv: string, V1 error format.
3997 Allowed values
3998 1 - v1 error format
3999 2 - v2 error format
4000
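For illustration, a minimal read-modify-write update of a job&#x27;s labels might
look like the following sketch. It assumes the Python client library is
installed and that application default credentials are available; the project
and job IDs are placeholders.

    from googleapiclient import discovery

    ml = discovery.build(&#x27;ml&#x27;, &#x27;v1&#x27;)
    name = &#x27;projects/my-project/jobs/my_job&#x27;

    # Read the job to obtain its current etag.
    job = ml.projects().jobs().get(name=name).execute()

    # Send the new labels back together with the etag.
    body = {
        &#x27;labels&#x27;: {&#x27;owner&#x27;: &#x27;Google&#x27;, &#x27;color&#x27;: &#x27;Blue&#x27;},
        &#x27;etag&#x27;: job[&#x27;etag&#x27;],
    }
    updated = ml.projects().jobs().patch(
        name=name, body=body, updateMask=&#x27;labels,etag&#x27;).execute()
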
4001Returns:
4002 An object of the form:
4003
4004 { # Represents a training or prediction job.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004005 &quot;predictionInput&quot;: { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
4006 &quot;versionName&quot;: &quot;A String&quot;, # Use this field if you want to specify a version of the model to use. The
4007 # string is formatted the same way as `model_version`, with the addition
4008 # of the version information:
4009 #
4010 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION&quot;`
4011 &quot;modelName&quot;: &quot;A String&quot;, # Use this field if you want to use the default version for the specified
4012 # model. The string must use the following format:
4013 #
4014 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL&quot;`
4015 &quot;uri&quot;: &quot;A String&quot;, # Use this field if you want to specify a Google Cloud Storage path for
4016 # the model to use.
4017 &quot;maxWorkerCount&quot;: &quot;A String&quot;, # Optional. The maximum number of workers to be used for parallel processing.
4018 # Defaults to 10 if not specified.
4019 &quot;signatureName&quot;: &quot;A String&quot;, # Optional. The name of the signature defined in the SavedModel to use for
4020 # this job. Please refer to
4021 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
4022 # for information about how to use signatures.
4023 #
4024 # Defaults to
4025 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
4026 # , which is &quot;serving_default&quot;.
4027 &quot;outputPath&quot;: &quot;A String&quot;, # Required. The output Google Cloud Storage location.
4028 &quot;outputDataFormat&quot;: &quot;A String&quot;, # Optional. Format of the output data files, defaults to JSON.
4029 &quot;dataFormat&quot;: &quot;A String&quot;, # Required. The format of the input data files.
4030 &quot;batchSize&quot;: &quot;A String&quot;, # Optional. Number of records per batch, defaults to 64.
4031        # The service buffers batch_size records in memory before
4032        # invoking one TensorFlow prediction call internally, so take the record
4033        # size and available memory into consideration when setting this parameter.
4034 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for this batch
4035 # prediction. If not set, AI Platform will pick the runtime version used
4036 # during the CreateVersion request for this model version, or choose the
4037 # latest stable version when model version information is not available
4038 # such as when the model is specified by uri.
4039 &quot;inputPaths&quot;: [ # Required. The Cloud Storage location of the input data files. May contain
4040 # &lt;a href=&quot;/storage/docs/gsutil/addlhelp/WildcardNames&quot;&gt;wildcards&lt;/a&gt;.
4041 &quot;A String&quot;,
4042 ],
4043 &quot;region&quot;: &quot;A String&quot;, # Required. The Google Compute Engine region to run the prediction job in.
4044 # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
4045 # for AI Platform services.
4046 },
4047 &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
Bu Sun Kim65020912020-05-20 12:08:20 -07004048 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
4049 # prevent simultaneous updates of a job from overwriting each other.
4050 # It is strongly suggested that systems make use of the `etag` in the
4051 # read-modify-write cycle to perform job updates in order to avoid race
4052 # conditions: An `etag` is returned in the response to `GetJob`, and
4053 # systems are expected to put that etag in the request to `UpdateJob` to
4054 # ensure that their change will be applied to the same version of the job.
4055 &quot;trainingInput&quot;: { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
4056 # to submit your training job, you can specify the input parameters as
4057 # command-line arguments and/or in a YAML configuration file referenced from
4058 # the --config command-line argument. For details, see the guide to [submitting
4059 # a training job](/ai-platform/training/docs/training-jobs).
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004060 &quot;workerCount&quot;: &quot;A String&quot;, # Optional. The number of worker replicas to use for the training job. Each
4061 # replica in the cluster will be of the type specified in `worker_type`.
4062 #
4063 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
4064 # set this value, you must also set `worker_type`.
4065 #
4066 # The default value is zero.
4067 &quot;pythonModule&quot;: &quot;A String&quot;, # Required. The Python module name to run after installing the packages.
4068 &quot;args&quot;: [ # Optional. Command-line arguments passed to the training application when it
4069 # starts. If your job uses a custom container, then the arguments are passed
4070 # to the container&#x27;s &lt;a class=&quot;external&quot; target=&quot;_blank&quot;
4071 # href=&quot;https://docs.docker.com/engine/reference/builder/#entrypoint&quot;&gt;
4072 # `ENTRYPOINT`&lt;/a&gt; command.
4073 &quot;A String&quot;,
4074 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07004075 &quot;parameterServerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
4076 #
4077 # You should only set `parameterServerConfig.acceleratorConfig` if
4078 # `parameterServerType` is set to a Compute Engine machine type. [Learn
4079 # about restrictions on accelerator configurations for
4080 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
4081 #
4082 # Set `parameterServerConfig.imageUri` only if you build a custom image for
4083 # your parameter server. If `parameterServerConfig.imageUri` has not been
4084 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
4085 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kim65020912020-05-20 12:08:20 -07004086 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
4087 # Registry. Learn more about [configuring custom
4088 # containers](/ai-platform/training/docs/distributed-training-containers).
4089 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
4090 # The following rules apply for container_command and container_args:
4091 # - If you do not supply command or args:
4092 # The defaults defined in the Docker image are used.
4093 # - If you supply a command but no args:
4094 # The default EntryPoint and the default Cmd defined in the Docker image
4095 # are ignored. Your command is run without any arguments.
4096 # - If you supply only args:
4097 # The default Entrypoint defined in the Docker image is run with the args
4098 # that you supplied.
4099 # - If you supply a command and args:
4100 # The default Entrypoint and the default Cmd defined in the Docker image
4101 # are ignored. Your command is run with your args.
4102        # This field cannot be set if a custom container image is
4103        # not provided.
4104 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
4105 # both cannot be set at the same time.
4106 &quot;A String&quot;,
4107 ],
4108 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
4109 # [Learn about restrictions on accelerator configurations for
4110 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
4111 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
4112 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
4113 # [accelerators for online
4114 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
4115 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
4116 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
4117 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004118 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
4119 # the one used in the custom container. This field is required if the replica
4120 # is a TPU worker that uses a custom container. Otherwise, do not specify
4121 # this field. This must be a [runtime version that currently supports
4122 # training with
4123 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
4124 #
4125 # Note that the version of TensorFlow included in a runtime version may
4126 # differ from the numbering of the runtime version itself, because it may
4127 # have a different [patch
4128 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
4129 # In this field, you must specify the runtime version (TensorFlow minor
4130 # version). For example, if your custom container runs TensorFlow `1.x.y`,
4131 # specify `1.x`.
4132 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
4133        # If provided, it overrides the default ENTRYPOINT of the Docker image.
4134        # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
4135        # This field cannot be set if a custom container image is
4136        # not provided.
4137 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
4138 # both cannot be set at the same time.
4139 &quot;A String&quot;,
4140 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07004141 },
4142 &quot;encryptionConfig&quot;: { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
4143 # protect resources created by a training job, instead of using Google&#x27;s
4144 # default encryption. If this is set, then all resources created by the
4145 # training job will be encrypted with the customer-managed encryption key
4146 # that you specify.
4147 #
4148 # [Learn how and when to use CMEK with AI Platform
4149 # Training](/ai-platform/training/docs/cmek).
4150 # a resource.
4151 &quot;kmsKeyName&quot;: &quot;A String&quot;, # The Cloud KMS resource identifier of the customer-managed encryption key
4152 # used to protect a resource, such as a training job. It has the following
4153 # format:
4154 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
4155 },
4156 &quot;hyperparameters&quot;: { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
Bu Sun Kim65020912020-05-20 12:08:20 -07004157 &quot;params&quot;: [ # Required. The set of parameters to tune.
4158 { # Represents a single hyperparameter to optimize.
Bu Sun Kim65020912020-05-20 12:08:20 -07004159 &quot;categoricalValues&quot;: [ # Required if type is `CATEGORICAL`. The list of possible categories.
4160 &quot;A String&quot;,
4161 ],
4162 &quot;parameterName&quot;: &quot;A String&quot;, # Required. The parameter name must be unique amongst all ParameterConfigs in
4163 # a HyperparameterSpec message. E.g., &quot;learning_rate&quot;.
4164 &quot;minValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
4165            # should be unset if type is `CATEGORICAL`. This value should be an integer if
4166            # type is `INTEGER`.
4167 &quot;discreteValues&quot;: [ # Required if type is `DISCRETE`.
4168 # A list of feasible points.
4169 # The list should be in strictly increasing order. For instance, this
4170 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
4171 # should not contain more than 1,000 values.
4172 3.14,
4173 ],
4174 &quot;scaleType&quot;: &quot;A String&quot;, # Optional. How the parameter should be scaled to the hypercube.
4175 # Leave unset for categorical parameters.
4176 # Some kind of scaling is strongly recommended for real or integral
4177 # parameters (e.g., `UNIT_LINEAR_SCALE`).
4178 &quot;maxValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
4179            # should be unset if type is `CATEGORICAL`. This value should be an integer if
4180            # type is `INTEGER`.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004181 &quot;type&quot;: &quot;A String&quot;, # Required. The type of the parameter.
Bu Sun Kim65020912020-05-20 12:08:20 -07004182 },
4183 ],
4184 &quot;enableTrialEarlyStopping&quot;: True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
4185 # early stopping.
4186 &quot;resumePreviousJobId&quot;: &quot;A String&quot;, # Optional. The prior hyperparameter tuning job id that users hope to
4187 # continue with. The job id will be used to find the corresponding vizier
4188 # study guid and resume the study.
4189 &quot;maxParallelTrials&quot;: 42, # Optional. The number of training trials to run concurrently.
4190 # You can reduce the time it takes to perform hyperparameter tuning by adding
4191        # trials in parallel. However, each trial only benefits from the information
4192 # gained in completed trials. That means that a trial does not get access to
4193 # the results of trials running at the same time, which could reduce the
4194 # quality of the overall optimization.
4195 #
4196 # Each trial will use the same scale tier and machine types.
4197 #
4198 # Defaults to one.
4199 &quot;maxFailedTrials&quot;: 42, # Optional. The number of failed trials that need to be seen before failing
4200 # the hyperparameter tuning job. You can specify this field to override the
4201 # default failing criteria for AI Platform hyperparameter tuning jobs.
4202 #
4203 # Defaults to zero, which means the service decides when a hyperparameter
4204 # job should fail.
4205 &quot;goal&quot;: &quot;A String&quot;, # Required. The type of goal to use for tuning. Available types are
4206 # `MAXIMIZE` and `MINIMIZE`.
4207 #
4208 # Defaults to `MAXIMIZE`.
4209 &quot;maxTrials&quot;: 42, # Optional. How many training trials should be attempted to optimize
4210 # the specified hyperparameters.
4211 #
4212 # Defaults to one.
4213 &quot;algorithm&quot;: &quot;A String&quot;, # Optional. The search algorithm specified for the hyperparameter
4214 # tuning job.
4215 # Uses the default AI Platform hyperparameter tuning
4216 # algorithm if unspecified.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004217 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # Optional. The TensorFlow summary tag name to use for optimizing trials. For
4218 # current versions of TensorFlow, this tag name should exactly match what is
4219 # shown in TensorBoard, including all scopes. For versions of TensorFlow
4220 # prior to 0.12, this should be only the tag passed to tf.Summary.
4221 # By default, &quot;training/hptuning/metric&quot; will be used.
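        # Taken together, an illustrative tuning spec might look like this
        # (values are examples only):
        #
        #     &quot;hyperparameters&quot;: {
        #       &quot;goal&quot;: &quot;MAXIMIZE&quot;,
        #       &quot;hyperparameterMetricTag&quot;: &quot;accuracy&quot;,
        #       &quot;maxTrials&quot;: 20,
        #       &quot;maxParallelTrials&quot;: 2,
        #       &quot;params&quot;: [
        #         {
        #           &quot;parameterName&quot;: &quot;learning_rate&quot;,
        #           &quot;type&quot;: &quot;DOUBLE&quot;,
        #           &quot;minValue&quot;: 0.0001,
        #           &quot;maxValue&quot;: 0.1,
        #           &quot;scaleType&quot;: &quot;UNIT_LINEAR_SCALE&quot;
        #         }
        #       ]
        #     },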
Bu Sun Kim65020912020-05-20 12:08:20 -07004222 },
4223 &quot;workerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
4224 #
4225 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
4226 # to a Compute Engine machine type. [Learn about restrictions on accelerator
4227 # configurations for
4228 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
4229 #
4230 # Set `workerConfig.imageUri` only if you build a custom image for your
4231 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
4232 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
4233 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kim65020912020-05-20 12:08:20 -07004234 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
4235 # Registry. Learn more about [configuring custom
4236 # containers](/ai-platform/training/docs/distributed-training-containers).
4237 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
4238 # The following rules apply for container_command and container_args:
4239 # - If you do not supply command or args:
4240 # The defaults defined in the Docker image are used.
4241 # - If you supply a command but no args:
4242 # The default EntryPoint and the default Cmd defined in the Docker image
4243 # are ignored. Your command is run without any arguments.
4244 # - If you supply only args:
4245 # The default Entrypoint defined in the Docker image is run with the args
4246 # that you supplied.
4247 # - If you supply a command and args:
4248 # The default Entrypoint and the default Cmd defined in the Docker image
4249 # are ignored. Your command is run with your args.
4250        # This field cannot be set if a custom container image is
4251        # not provided.
4252 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
4253 # both cannot be set at the same time.
4254 &quot;A String&quot;,
4255 ],
4256 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
4257 # [Learn about restrictions on accelerator configurations for
4258 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
4259 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
4260 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
4261 # [accelerators for online
4262 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
4263 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
4264 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
4265 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004266 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
4267 # the one used in the custom container. This field is required if the replica
4268 # is a TPU worker that uses a custom container. Otherwise, do not specify
4269 # this field. This must be a [runtime version that currently supports
4270 # training with
4271 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
4272 #
4273 # Note that the version of TensorFlow included in a runtime version may
4274 # differ from the numbering of the runtime version itself, because it may
4275 # have a different [patch
4276 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
4277 # In this field, you must specify the runtime version (TensorFlow minor
4278 # version). For example, if your custom container runs TensorFlow `1.x.y`,
4279 # specify `1.x`.
4280 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
4281        # If provided, it overrides the default ENTRYPOINT of the Docker image.
4282        # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
4283        # This field cannot be set if a custom container image is
4284        # not provided.
4285 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
4286 # both cannot be set at the same time.
4287 &quot;A String&quot;,
4288 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07004289 },
4290 &quot;parameterServerCount&quot;: &quot;A String&quot;, # Optional. The number of parameter server replicas to use for the training
4291 # job. Each replica in the cluster will be of the type specified in
4292 # `parameter_server_type`.
4293 #
4294 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
4295 # set this value, you must also set `parameter_server_type`.
4296 #
4297 # The default value is zero.
4298 &quot;packageUris&quot;: [ # Required. The Google Cloud Storage location of the packages with
4299 # the training program and any additional dependencies.
4300 # The maximum number of package URIs is 100.
4301 &quot;A String&quot;,
4302 ],
4303 &quot;evaluatorCount&quot;: &quot;A String&quot;, # Optional. The number of evaluator replicas to use for the training job.
4304 # Each replica in the cluster will be of the type specified in
4305 # `evaluator_type`.
4306 #
4307 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
4308 # set this value, you must also set `evaluator_type`.
4309 #
4310 # The default value is zero.
4311 &quot;masterType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
4312 # job&#x27;s master worker. You must specify this field when `scaleTier` is set to
4313 # `CUSTOM`.
4314 #
4315 # You can use certain Compute Engine machine types directly in this field.
4316 # The following types are supported:
4317 #
4318 # - `n1-standard-4`
4319 # - `n1-standard-8`
4320 # - `n1-standard-16`
4321 # - `n1-standard-32`
4322 # - `n1-standard-64`
4323 # - `n1-standard-96`
4324 # - `n1-highmem-2`
4325 # - `n1-highmem-4`
4326 # - `n1-highmem-8`
4327 # - `n1-highmem-16`
4328 # - `n1-highmem-32`
4329 # - `n1-highmem-64`
4330 # - `n1-highmem-96`
4331 # - `n1-highcpu-16`
4332 # - `n1-highcpu-32`
4333 # - `n1-highcpu-64`
4334 # - `n1-highcpu-96`
4335 #
4336 # Learn more about [using Compute Engine machine
4337 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
4338 #
4339 # Alternatively, you can use the following legacy machine types:
4340 #
4341 # - `standard`
4342 # - `large_model`
4343 # - `complex_model_s`
4344 # - `complex_model_m`
4345 # - `complex_model_l`
4346 # - `standard_gpu`
4347 # - `complex_model_m_gpu`
4348 # - `complex_model_l_gpu`
4349 # - `standard_p100`
4350 # - `complex_model_m_p100`
4351 # - `standard_v100`
4352 # - `large_model_v100`
4353 # - `complex_model_m_v100`
4354 # - `complex_model_l_v100`
4355 #
4356 # Learn more about [using legacy machine
4357 # types](/ml-engine/docs/machine-types#legacy-machine-types).
4358 #
4359 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
4360 # field. Learn more about the [special configuration options for training
4361 # with
4362 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
4363 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for training. You must
4364 # either specify this field or specify `masterConfig.imageUri`.
4365 #
4366 # For more information, see the [runtime version
4367 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
4368 # manage runtime versions](/ai-platform/training/docs/versioning).
4369 &quot;evaluatorType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
4370 # job&#x27;s evaluator nodes.
4371 #
4372 # The supported values are the same as those described in the entry for
4373 # `masterType`.
4374 #
4375 # This value must be consistent with the category of machine type that
4376 # `masterType` uses. In other words, both must be Compute Engine machine
4377 # types or both must be legacy machine types.
4378 #
4379 # This value must be present when `scaleTier` is set to `CUSTOM` and
4380 # `evaluatorCount` is greater than zero.
Bu Sun Kim65020912020-05-20 12:08:20 -07004381 &quot;workerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
4382 # job&#x27;s worker nodes.
4383 #
4384 # The supported values are the same as those described in the entry for
4385 # `masterType`.
4386 #
4387 # This value must be consistent with the category of machine type that
4388 # `masterType` uses. In other words, both must be Compute Engine machine
4389 # types or both must be legacy machine types.
4390 #
4391 # If you use `cloud_tpu` for this value, see special instructions for
4392 # [configuring a custom TPU
4393 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
4394 #
4395 # This value must be present when `scaleTier` is set to `CUSTOM` and
4396 # `workerCount` is greater than zero.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004397 &quot;region&quot;: &quot;A String&quot;, # Required. The region to run the training job in. See the [available
4398 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
Bu Sun Kim65020912020-05-20 12:08:20 -07004399 &quot;parameterServerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
4400 # job&#x27;s parameter server.
4401 #
4402 # The supported values are the same as those described in the entry for
4403 # `master_type`.
4404 #
4405 # This value must be consistent with the category of machine type that
4406 # `masterType` uses. In other words, both must be Compute Engine machine
4407 # types or both must be legacy machine types.
4408 #
4409 # This value must be present when `scaleTier` is set to `CUSTOM` and
4410 # `parameter_server_count` is greater than zero.
4411 &quot;masterConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
4412 #
4413 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
4414 # to a Compute Engine machine type. Learn about [restrictions on accelerator
4415 # configurations for
4416 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
4417 #
4418 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
4419 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
4420 # about [configuring custom
4421 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kim65020912020-05-20 12:08:20 -07004422 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
4423 # Registry. Learn more about [configuring custom
4424 # containers](/ai-platform/training/docs/distributed-training-containers).
4425 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
4426 # The following rules apply for container_command and container_args:
4427 # - If you do not supply command or args:
4428 # The defaults defined in the Docker image are used.
4429 # - If you supply a command but no args:
4430 # The default EntryPoint and the default Cmd defined in the Docker image
4431 # are ignored. Your command is run without any arguments.
4432 # - If you supply only args:
4433 # The default Entrypoint defined in the Docker image is run with the args
4434 # that you supplied.
4435 # - If you supply a command and args:
4436 # The default Entrypoint and the default Cmd defined in the Docker image
4437 # are ignored. Your command is run with your args.
4438        # This field cannot be set if a custom container image is
4439        # not provided.
4440 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
4441 # both cannot be set at the same time.
4442 &quot;A String&quot;,
4443 ],
4444 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
4445 # [Learn about restrictions on accelerator configurations for
4446 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
4447 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
4448 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
4449 # [accelerators for online
4450 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
4451 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
4452 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
4453 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004454 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
4455 # the one used in the custom container. This field is required if the replica
4456 # is a TPU worker that uses a custom container. Otherwise, do not specify
4457 # this field. This must be a [runtime version that currently supports
4458 # training with
4459 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
4460 #
4461 # Note that the version of TensorFlow included in a runtime version may
4462 # differ from the numbering of the runtime version itself, because it may
4463 # have a different [patch
4464 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
4465 # In this field, you must specify the runtime version (TensorFlow minor
4466 # version). For example, if your custom container runs TensorFlow `1.x.y`,
4467 # specify `1.x`.
4468 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
4469        # If provided, it overrides the default ENTRYPOINT of the Docker image.
4470        # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
4471        # This field cannot be set if a custom container image is
4472        # not provided.
4473 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
4474 # both cannot be set at the same time.
4475 &quot;A String&quot;,
4476 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07004477 },
4478 &quot;scaleTier&quot;: &quot;A String&quot;, # Required. Specifies the machine types, the number of replicas for workers
4479 # and parameter servers.
4480 &quot;jobDir&quot;: &quot;A String&quot;, # Optional. A Google Cloud Storage path in which to store training outputs
4481 # and other data needed for training. This path is passed to your TensorFlow
4482 # program as the &#x27;--job-dir&#x27; command-line argument. The benefit of specifying
4483 # this field is that Cloud ML validates the path for use in training.
4484 &quot;pythonVersion&quot;: &quot;A String&quot;, # Optional. The version of Python used in training. You must either specify
4485 # this field or specify `masterConfig.imageUri`.
4486 #
4487 # The following Python versions are available:
4488 #
4489 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
4490 # later.
4491 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
4492 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
4493 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
4494 # earlier.
4495 #
4496 # Read more about the Python versions available for [each runtime
4497 # version](/ml-engine/docs/runtime-version-list).
Bu Sun Kim65020912020-05-20 12:08:20 -07004498 &quot;scheduling&quot;: { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
4499 &quot;maxWaitTime&quot;: &quot;A String&quot;,
4500 &quot;maxRunningTime&quot;: &quot;A String&quot;, # Optional. The maximum job running time, expressed in seconds. The field can
4501 # contain up to nine fractional digits, terminated by `s`. If not specified,
4502 # this field defaults to `604800s` (seven days).
4503 #
4504 # If the training job is still running after this duration, AI Platform
4505 # Training cancels it.
4506 #
4507 # For example, if you want to ensure your job runs for no more than 2 hours,
4508 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
4509 # minute).
4510 #
4511 # If you submit your training job using the `gcloud` tool, you can [provide
4512 # this field in a `config.yaml`
4513 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
4514 # For example:
4515 #
4516 # ```yaml
4517 # trainingInput:
4518 # ...
4519 # scheduling:
4520 # maxRunningTime: 7200s
4521 # ...
4522 # ```
4523 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004524 &quot;network&quot;: &quot;A String&quot;, # Optional. The full name of the Google Compute Engine
4525 # [network](/compute/docs/networks-and-firewalls#networks) to which the Job
4526        # is peered. For example, projects/12345/global/networks/myVPC. The format is
4527        # projects/{project}/global/networks/{network}, where {project} is a
4528        # project number, as in &#x27;12345&#x27;, and {network} is a network name.
4529 #
4530 # Private services access must already be configured for the network. If left
4531        # unspecified, the Job is not peered with any network. Learn more about
4532        # connecting a Job to a user network over private
4533        # IP.
Bu Sun Kim65020912020-05-20 12:08:20 -07004534 &quot;evaluatorConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
4535 #
4536 # You should only set `evaluatorConfig.acceleratorConfig` if
4537 # `evaluatorType` is set to a Compute Engine machine type. [Learn
4538 # about restrictions on accelerator configurations for
4539 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
4540 #
4541 # Set `evaluatorConfig.imageUri` only if you build a custom image for
4542 # your evaluator. If `evaluatorConfig.imageUri` has not been
4543 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
4544 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kim65020912020-05-20 12:08:20 -07004545 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
4546 # Registry. Learn more about [configuring custom
4547 # containers](/ai-platform/training/docs/distributed-training-containers).
4548 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
4549 # The following rules apply for container_command and container_args:
4550 # - If you do not supply command or args:
4551 # The defaults defined in the Docker image are used.
4552 # - If you supply a command but no args:
4553 # The default EntryPoint and the default Cmd defined in the Docker image
4554 # are ignored. Your command is run without any arguments.
4555 # - If you supply only args:
4556 # The default Entrypoint defined in the Docker image is run with the args
4557 # that you supplied.
4558 # - If you supply a command and args:
4559 # The default Entrypoint and the default Cmd defined in the Docker image
4560 # are ignored. Your command is run with your args.
4561 # This field cannot be set if a custom container image is
4562 # not provided.
4563 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
4564 # both cannot be set at the same time.
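          # For example, suppose a (hypothetical) image defines the entrypoint
          # `python3 train.py`: supplying only
          # `containerArgs: [&quot;--epochs=10&quot;]` runs
          # `python3 train.py --epochs=10`, while supplying only
          # `containerCommand: [&quot;python3&quot;, &quot;eval.py&quot;]` runs
          # `python3 eval.py` with no arguments.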
4565 &quot;A String&quot;,
4566 ],
4567 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
4568 # [Learn about restrictions on accelerator configurations for
4569 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
4570 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
4571 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
4572 # [accelerators for online
4573 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
4574 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
4575 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
4576 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004577 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
4578 # the one used in the custom container. This field is required if the replica
4579 # is a TPU worker that uses a custom container. Otherwise, do not specify
4580 # this field. This must be a [runtime version that currently supports
4581 # training with
4582 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
4583 #
4584 # Note that the version of TensorFlow included in a runtime version may
4585 # differ from the numbering of the runtime version itself, because it may
4586 # have a different [patch
4587 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
4588 # In this field, you must specify the runtime version (TensorFlow minor
4589 # version). For example, if your custom container runs TensorFlow `1.x.y`,
4590 # specify `1.x`.
4591 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
4592 # If provided, it overrides the default ENTRYPOINT of the Docker image.
4593 # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
4594 # This field cannot be set if a custom container image is
4595 # not provided.
4596 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
4597 # both cannot be set at the same time.
4598 &quot;A String&quot;,
4599 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07004600 },
4601 &quot;useChiefInTfConfig&quot;: True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
4602 # variable when training with a custom container. Defaults to `false`. [Learn
4603 # more about this
4604 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
4605 #
4606 # This field has no effect for training jobs that don&#x27;t use a custom
4607 # container.
Bu Sun Kim65020912020-05-20 12:08:20 -07004608 },
4609 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of a job.
4610 &quot;jobId&quot;: &quot;A String&quot;, # Required. The user-specified id of the job.
4611 &quot;endTime&quot;: &quot;A String&quot;, # Output only. When the job processing was completed.
4612 &quot;startTime&quot;: &quot;A String&quot;, # Output only. When the job processing was started.
4613 &quot;predictionOutput&quot;: { # Represents results of a prediction job. # The current prediction job result.
4614 &quot;errorCount&quot;: &quot;A String&quot;, # The number of data instances which resulted in errors.
4615 &quot;outputPath&quot;: &quot;A String&quot;, # The output Google Cloud Storage location provided at the job creation time.
4616 &quot;nodeHours&quot;: 3.14, # Node hours used by the batch prediction job.
4617 &quot;predictionCount&quot;: &quot;A String&quot;, # The number of generated predictions.
4618 },
4619 &quot;trainingOutput&quot;: { # Represents results of a training job. Output only. # The current training job result.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004620 &quot;isBuiltInAlgorithmJob&quot;: True or False, # Whether this job is a built-in Algorithm job.
4621 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
4622 # Only set for built-in algorithms jobs.
4623 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
4624 # saves the trained model. Only set for successful jobs that don&#x27;t use
4625 # hyperparameter tuning.
4626 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
4627 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
4628 # trained.
4629 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
4630 },
Bu Sun Kim65020912020-05-20 12:08:20 -07004631 &quot;trials&quot;: [ # Results for individual Hyperparameter trials.
4632 # Only set for hyperparameter tuning jobs.
4633 { # Represents the result of a single hyperparameter tuning trial from a
4634 # training job. The TrainingOutput object that is returned on successful
4635 # completion of a training job with hyperparameter tuning includes a list
4636 # of HyperparameterOutput objects, one for each successful trial.
Bu Sun Kim65020912020-05-20 12:08:20 -07004637 &quot;trialId&quot;: &quot;A String&quot;, # The trial id for these results.
4638 &quot;endTime&quot;: &quot;A String&quot;, # Output only. End time for the trial.
Bu Sun Kim65020912020-05-20 12:08:20 -07004639 &quot;startTime&quot;: &quot;A String&quot;, # Output only. Start time for the trial.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004640 &quot;isTrialStoppedEarly&quot;: True or False, # True if the trial is stopped early.
Bu Sun Kim65020912020-05-20 12:08:20 -07004641 &quot;finalMetric&quot;: { # An observed value of a metric. # The final objective metric seen for this trial.
Bu Sun Kim65020912020-05-20 12:08:20 -07004642 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004643 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
Bu Sun Kim65020912020-05-20 12:08:20 -07004644 },
4645 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
4646 # Only set for trials of built-in algorithms jobs that have succeeded.
Bu Sun Kim65020912020-05-20 12:08:20 -07004647 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
4648 # saves the trained model. Only set for successful jobs that don&#x27;t use
4649 # hyperparameter tuning.
4650 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
4651 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
4652 # trained.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004653 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
Bu Sun Kim65020912020-05-20 12:08:20 -07004654 },
4655 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of the trial.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004656 &quot;allMetrics&quot;: [ # All recorded objective metrics for this trial. This field is not currently
4657 # populated.
4658 { # An observed value of a metric.
4659 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
4660 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
4661 },
4662 ],
4663 &quot;hyperparameters&quot;: { # The hyperparameters given to this trial.
4664 &quot;a_key&quot;: &quot;A String&quot;,
4665 },
Bu Sun Kim65020912020-05-20 12:08:20 -07004666 },
4667 ],
4668 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # The TensorFlow summary tag name used for optimizing hyperparameter tuning
4669 # trials. See
4670 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
4671 # for more information. Only set for hyperparameter tuning jobs.
4672 &quot;completedTrialCount&quot;: &quot;A String&quot;, # The number of hyperparameter tuning trials that completed successfully.
4673 # Only set for hyperparameter tuning jobs.
4674 &quot;isHyperparameterTuningJob&quot;: True or False, # Whether this job is a hyperparameter tuning job.
4675 &quot;consumedMLUnits&quot;: 3.14, # The amount of ML units consumed by the job.
Bu Sun Kim65020912020-05-20 12:08:20 -07004676 },
4677 &quot;createTime&quot;: &quot;A String&quot;, # Output only. When the job was created.
4678 &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your jobs.
4679 # Each label is a key-value pair, where both the key and the value are
4680 # arbitrary strings that you supply.
4681 # For more information, see the documentation on
4682 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
4683 &quot;a_key&quot;: &quot;A String&quot;,
4684 },
Bu Sun Kim65020912020-05-20 12:08:20 -07004685 }</pre>
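<p>As a minimal sketch, a Job resource of the form documented above can be
inspected like this after it has been returned by the client library (the
helper name is illustrative, and the example assumes the hyperparameter
tuning goal maximizes the metric):</p>
<pre>
def summarize_job(job):
    # `job` is a dict of the form documented above.
    print(&quot;Job&quot;, job.get(&quot;jobId&quot;), &quot;is in state&quot;, job.get(&quot;state&quot;))

    output = job.get(&quot;trainingOutput&quot;, {})
    if output.get(&quot;isHyperparameterTuningJob&quot;):
        trials = output.get(&quot;trials&quot;, [])
        # Pick the trial with the largest final objective value
        # (assumes the tuning goal was to maximize the metric).
        best = max(
            (t for t in trials if &quot;finalMetric&quot; in t),
            key=lambda t: t[&quot;finalMetric&quot;][&quot;objectiveValue&quot;],
            default=None,
        )
        if best is not None:
            print(&quot;Best trial:&quot;, best[&quot;trialId&quot;],
                  &quot;objective value:&quot;, best[&quot;finalMetric&quot;][&quot;objectiveValue&quot;])
    print(&quot;ML units consumed:&quot;, output.get(&quot;consumedMLUnits&quot;))
</pre>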
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004686</div>
4687
4688<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -07004689 <code class="details" id="setIamPolicy">setIamPolicy(resource, body=None, x__xgafv=None)</code>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004690 <pre>Sets the access control policy on the specified resource. Replaces any
4691existing policy.
4692
Bu Sun Kim65020912020-05-20 12:08:20 -07004693Can return `NOT_FOUND`, `INVALID_ARGUMENT`, and `PERMISSION_DENIED` errors.
Dan O'Mearadd494642020-05-01 07:42:23 -07004694
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004695Args:
4696 resource: string, REQUIRED: The resource for which the policy is being specified.
4697See the operation documentation for the appropriate value for this field. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -07004698 body: object, The request body.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004699 The object takes the form of:
4700
4701{ # Request message for `SetIamPolicy` method.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004702 &quot;updateMask&quot;: &quot;A String&quot;, # OPTIONAL: A FieldMask specifying which fields of the policy to modify. Only
4703 # the fields in the mask will be modified. If no mask is provided, the
4704 # following default mask is used:
4705 #
4706 # `paths: &quot;bindings, etag&quot;`
Bu Sun Kim65020912020-05-20 12:08:20 -07004707 &quot;policy&quot;: { # An Identity and Access Management (IAM) policy, which specifies access # REQUIRED: The complete policy to be applied to the `resource`. The size of
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004708 # the policy is limited to a few 10s of KB. An empty policy is a
4709 # valid policy but certain Cloud Platform services (such as Projects)
4710 # might reject them.
Dan O'Mearadd494642020-05-01 07:42:23 -07004711 # controls for Google Cloud resources.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004712 #
4713 #
Dan O'Mearadd494642020-05-01 07:42:23 -07004714 # A `Policy` is a collection of `bindings`. A `binding` binds one or more
4715 # `members` to a single `role`. Members can be user accounts, service accounts,
4716 # Google groups, and domains (such as G Suite). A `role` is a named list of
4717 # permissions; each `role` can be an IAM predefined role or a user-created
4718 # custom role.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004719 #
Bu Sun Kim65020912020-05-20 12:08:20 -07004720 # For some types of Google Cloud resources, a `binding` can also specify a
4721 # `condition`, which is a logical expression that allows access to a resource
4722 # only if the expression evaluates to `true`. A condition can add constraints
4723 # based on attributes of the request, the resource, or both. To learn which
4724 # resources support conditions in their IAM policies, see the
4725 # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
Dan O'Mearadd494642020-05-01 07:42:23 -07004726 #
4727 # **JSON example:**
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004728 #
4729 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07004730 # &quot;bindings&quot;: [
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004731 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07004732 # &quot;role&quot;: &quot;roles/resourcemanager.organizationAdmin&quot;,
4733 # &quot;members&quot;: [
4734 # &quot;user:mike@example.com&quot;,
4735 # &quot;group:admins@example.com&quot;,
4736 # &quot;domain:google.com&quot;,
4737 # &quot;serviceAccount:my-project-id@appspot.gserviceaccount.com&quot;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004738 # ]
4739 # },
4740 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07004741 # &quot;role&quot;: &quot;roles/resourcemanager.organizationViewer&quot;,
4742 # &quot;members&quot;: [
4743 # &quot;user:eve@example.com&quot;
4744 # ],
4745 # &quot;condition&quot;: {
4746 # &quot;title&quot;: &quot;expirable access&quot;,
4747 # &quot;description&quot;: &quot;Does not grant access after Sep 2020&quot;,
4748 # &quot;expression&quot;: &quot;request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)&quot;,
Dan O'Mearadd494642020-05-01 07:42:23 -07004749 # }
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004750 # }
Dan O'Mearadd494642020-05-01 07:42:23 -07004751 # ],
Bu Sun Kim65020912020-05-20 12:08:20 -07004752 # &quot;etag&quot;: &quot;BwWWja0YfJA=&quot;,
4753 # &quot;version&quot;: 3
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004754 # }
4755 #
Dan O'Mearadd494642020-05-01 07:42:23 -07004756 # **YAML example:**
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004757 #
4758 # bindings:
4759 # - members:
4760 # - user:mike@example.com
4761 # - group:admins@example.com
4762 # - domain:google.com
Dan O'Mearadd494642020-05-01 07:42:23 -07004763 # - serviceAccount:my-project-id@appspot.gserviceaccount.com
4764 # role: roles/resourcemanager.organizationAdmin
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004765 # - members:
Dan O'Mearadd494642020-05-01 07:42:23 -07004766 # - user:eve@example.com
4767 # role: roles/resourcemanager.organizationViewer
4768 # condition:
4769 # title: expirable access
4770 # description: Does not grant access after Sep 2020
Bu Sun Kim65020912020-05-20 12:08:20 -07004771 # expression: request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)
Dan O'Mearadd494642020-05-01 07:42:23 -07004772 # etag: BwWWja0YfJA=
4773 # version: 3
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004774 #
4775 # For a description of IAM and its features, see the
Dan O'Mearadd494642020-05-01 07:42:23 -07004776 # [IAM documentation](https://cloud.google.com/iam/docs/).
Bu Sun Kim65020912020-05-20 12:08:20 -07004777 &quot;version&quot;: 42, # Specifies the format of the policy.
4778 #
4779 # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
4780 # are rejected.
4781 #
4782 # Any operation that affects conditional role bindings must specify version
4783 # `3`. This requirement applies to the following operations:
4784 #
4785 # * Getting a policy that includes a conditional role binding
4786 # * Adding a conditional role binding to a policy
4787 # * Changing a conditional role binding in a policy
4788 # * Removing any role binding, with or without a condition, from a policy
4789 # that includes conditions
4790 #
4791 # **Important:** If you use IAM Conditions, you must include the `etag` field
4792 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
4793 # you to overwrite a version `3` policy with a version `1` policy, and all of
4794 # the conditions in the version `3` policy are lost.
4795 #
4796 # If a policy does not include any conditions, operations on that policy may
4797 # specify any valid version or leave the field unset.
4798 #
4799 # To learn which resources support conditions in their IAM policies, see the
4800 # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
4801 &quot;auditConfigs&quot;: [ # Specifies cloud audit logging configuration for this policy.
4802 { # Specifies the audit configuration for a service.
4803 # The configuration determines which permission types are logged, and what
4804 # identities, if any, are exempted from logging.
4805 # An AuditConfig must have one or more AuditLogConfigs.
4806 #
4807 # If there are AuditConfigs for both `allServices` and a specific service,
4808 # the union of the two AuditConfigs is used for that service: the log_types
4809 # specified in each AuditConfig are enabled, and the exempted_members in each
4810 # AuditLogConfig are exempted.
4811 #
4812 # Example Policy with multiple AuditConfigs:
4813 #
4814 # {
4815 # &quot;audit_configs&quot;: [
4816 # {
4817 # &quot;service&quot;: &quot;allServices&quot;
4818 # &quot;audit_log_configs&quot;: [
4819 # {
4820 # &quot;log_type&quot;: &quot;DATA_READ&quot;,
4821 # &quot;exempted_members&quot;: [
4822 # &quot;user:jose@example.com&quot;
4823 # ]
4824 # },
4825 # {
4826 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
4827 # },
4828 # {
4829 # &quot;log_type&quot;: &quot;ADMIN_READ&quot;,
4830 # }
4831 # ]
4832 # },
4833 # {
4834 # &quot;service&quot;: &quot;sampleservice.googleapis.com&quot;
4835 # &quot;audit_log_configs&quot;: [
4836 # {
4837 # &quot;log_type&quot;: &quot;DATA_READ&quot;,
4838 # },
4839 # {
4840 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
4841 # &quot;exempted_members&quot;: [
4842 # &quot;user:aliya@example.com&quot;
4843 # ]
4844 # }
4845 # ]
4846 # }
4847 # ]
4848 # }
4849 #
4850 # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
4851 # logging. It also exempts jose@example.com from DATA_READ logging, and
4852 # aliya@example.com from DATA_WRITE logging.
4853 &quot;service&quot;: &quot;A String&quot;, # Specifies a service that will be enabled for audit logging.
4854 # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
4855 # `allServices` is a special value that covers all services.
4856 &quot;auditLogConfigs&quot;: [ # The configuration for logging of each type of permission.
4857 { # Provides the configuration for logging a type of permissions.
4858 # Example:
4859 #
4860 # {
4861 # &quot;audit_log_configs&quot;: [
4862 # {
4863 # &quot;log_type&quot;: &quot;DATA_READ&quot;,
4864 # &quot;exempted_members&quot;: [
4865 # &quot;user:jose@example.com&quot;
4866 # ]
4867 # },
4868 # {
4869 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
4870 # }
4871 # ]
4872 # }
4873 #
4874 # This enables &#x27;DATA_READ&#x27; and &#x27;DATA_WRITE&#x27; logging, while exempting
4875 # jose@example.com from DATA_READ logging.
4876 &quot;exemptedMembers&quot;: [ # Specifies the identities that do not cause logging for this type of
4877 # permission.
4878 # Follows the same format of Binding.members.
4879 &quot;A String&quot;,
4880 ],
4881 &quot;logType&quot;: &quot;A String&quot;, # The log type that this config enables.
4882 },
4883 ],
4884 },
4885 ],
4886 &quot;bindings&quot;: [ # Associates a list of `members` to a `role`. Optionally, may specify a
Dan O'Mearadd494642020-05-01 07:42:23 -07004887 # `condition` that determines how and when the `bindings` are applied. Each
4888 # of the `bindings` must contain at least one member.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004889 { # Associates `members` with a `role`.
Bu Sun Kim65020912020-05-20 12:08:20 -07004890 &quot;condition&quot;: { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
4891 #
4892 # If the condition evaluates to `true`, then this binding applies to the
4893 # current request.
4894 #
4895 # If the condition evaluates to `false`, then this binding does not apply to
4896 # the current request. However, a different role binding might grant the same
4897 # role to one or more of the members in this binding.
4898 #
4899 # To learn which resources support conditions in their IAM policies, see the
4900 # [IAM
4901 # documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
4902 # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
4903 # are documented at https://github.com/google/cel-spec.
4904 #
4905 # Example (Comparison):
4906 #
4907 # title: &quot;Summary size limit&quot;
4908 # description: &quot;Determines if a summary is less than 100 chars&quot;
4909 # expression: &quot;document.summary.size() &lt; 100&quot;
4910 #
4911 # Example (Equality):
4912 #
4913 # title: &quot;Requestor is owner&quot;
4914 # description: &quot;Determines if requestor is the document owner&quot;
4915 # expression: &quot;document.owner == request.auth.claims.email&quot;
4916 #
4917 # Example (Logic):
4918 #
4919 # title: &quot;Public documents&quot;
4920 # description: &quot;Determine whether the document should be publicly visible&quot;
4921 # expression: &quot;document.type != &#x27;private&#x27; &amp;&amp; document.type != &#x27;internal&#x27;&quot;
4922 #
4923 # Example (Data Manipulation):
4924 #
4925 # title: &quot;Notification string&quot;
4926 # description: &quot;Create a notification string with a timestamp.&quot;
4927 # expression: &quot;&#x27;New message received at &#x27; + string(document.create_time)&quot;
4928 #
4929 # The exact variables and functions that may be referenced within an expression
4930 # are determined by the service that evaluates it. See the service
4931 # documentation for additional information.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004932 &quot;description&quot;: &quot;A String&quot;, # Optional. Description of the expression. This is a longer text which
4933 # describes the expression, e.g. when hovered over it in a UI.
4934 &quot;expression&quot;: &quot;A String&quot;, # Textual representation of an expression in Common Expression Language
4935 # syntax.
Bu Sun Kim65020912020-05-20 12:08:20 -07004936 &quot;title&quot;: &quot;A String&quot;, # Optional. Title for the expression, i.e. a short string describing
4937 # its purpose. This can be used, e.g., in UIs which allow you to enter the
4938 # expression.
4939 &quot;location&quot;: &quot;A String&quot;, # Optional. String indicating the location of the expression for error
4940 # reporting, e.g. a file name and a position in the file.
Bu Sun Kim65020912020-05-20 12:08:20 -07004941 },
4942 &quot;members&quot;: [ # Specifies the identities requesting access for a Cloud Platform resource.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004943 # `members` can have the following values:
4944 #
4945 # * `allUsers`: A special identifier that represents anyone who is
4946 # on the internet; with or without a Google account.
4947 #
4948 # * `allAuthenticatedUsers`: A special identifier that represents anyone
4949 # who is authenticated with a Google account or a service account.
4950 #
4951 # * `user:{emailid}`: An email address that represents a specific Google
Dan O'Mearadd494642020-05-01 07:42:23 -07004952 # account. For example, `alice@example.com`.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004953 #
4954 #
4955 # * `serviceAccount:{emailid}`: An email address that represents a service
4956 # account. For example, `my-other-app@appspot.gserviceaccount.com`.
4957 #
4958 # * `group:{emailid}`: An email address that represents a Google group.
4959 # For example, `admins@example.com`.
4960 #
Dan O'Mearadd494642020-05-01 07:42:23 -07004961 # * `deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique
4962 # identifier) representing a user that has been recently deleted. For
4963 # example, `alice@example.com?uid=123456789012345678901`. If the user is
4964 # recovered, this value reverts to `user:{emailid}` and the recovered user
4965 # retains the role in the binding.
4966 #
4967 # * `deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address (plus
4968 # unique identifier) representing a service account that has been recently
4969 # deleted. For example,
4970 # `my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901`.
4971 # If the service account is undeleted, this value reverts to
4972 # `serviceAccount:{emailid}` and the undeleted service account retains the
4973 # role in the binding.
4974 #
4975 # * `deleted:group:{emailid}?uid={uniqueid}`: An email address (plus unique
4976 # identifier) representing a Google group that has been recently
4977 # deleted. For example, `admins@example.com?uid=123456789012345678901`. If
4978 # the group is recovered, this value reverts to `group:{emailid}` and the
4979 # recovered group retains the role in the binding.
4980 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004981 #
4982 # * `domain:{domain}`: The G Suite domain (primary) that represents all the
4983 # users of that domain. For example, `google.com` or `example.com`.
4984 #
Bu Sun Kim65020912020-05-20 12:08:20 -07004985 &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004986 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07004987 &quot;role&quot;: &quot;A String&quot;, # Role that is assigned to `members`.
4988 # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004989 },
4990 ],
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004991 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
4992 # prevent simultaneous updates of a policy from overwriting each other.
4993 # It is strongly suggested that systems make use of the `etag` in the
4994 # read-modify-write cycle to perform policy updates in order to avoid race
4995 # conditions: An `etag` is returned in the response to `getIamPolicy`, and
4996 # systems are expected to put that etag in the request to `setIamPolicy` to
4997 # ensure that their change will be applied to the same version of the policy.
4998 #
4999 # **Important:** If you use IAM Conditions, you must include the `etag` field
5000 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
5001 # you to overwrite a version `3` policy with a version `1` policy, and all of
5002 # the conditions in the version `3` policy are lost.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005003 },
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005004 }
5005
5006 x__xgafv: string, V1 error format.
5007 Allowed values
5008 1 - v1 error format
5009 2 - v2 error format
5010
5011Returns:
5012 An object of the form:
5013
Dan O'Mearadd494642020-05-01 07:42:23 -07005014 { # An Identity and Access Management (IAM) policy, which specifies access
5015 # controls for Google Cloud resources.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005016 #
5017 #
Dan O'Mearadd494642020-05-01 07:42:23 -07005018 # A `Policy` is a collection of `bindings`. A `binding` binds one or more
5019 # `members` to a single `role`. Members can be user accounts, service accounts,
5020 # Google groups, and domains (such as G Suite). A `role` is a named list of
5021 # permissions; each `role` can be an IAM predefined role or a user-created
5022 # custom role.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005023 #
Bu Sun Kim65020912020-05-20 12:08:20 -07005024 # For some types of Google Cloud resources, a `binding` can also specify a
5025 # `condition`, which is a logical expression that allows access to a resource
5026 # only if the expression evaluates to `true`. A condition can add constraints
5027 # based on attributes of the request, the resource, or both. To learn which
5028 # resources support conditions in their IAM policies, see the
5029 # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
Dan O'Mearadd494642020-05-01 07:42:23 -07005030 #
5031 # **JSON example:**
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005032 #
5033 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07005034 # &quot;bindings&quot;: [
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005035 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07005036 # &quot;role&quot;: &quot;roles/resourcemanager.organizationAdmin&quot;,
5037 # &quot;members&quot;: [
5038 # &quot;user:mike@example.com&quot;,
5039 # &quot;group:admins@example.com&quot;,
5040 # &quot;domain:google.com&quot;,
5041 # &quot;serviceAccount:my-project-id@appspot.gserviceaccount.com&quot;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005042 # ]
5043 # },
5044 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07005045 # &quot;role&quot;: &quot;roles/resourcemanager.organizationViewer&quot;,
5046 # &quot;members&quot;: [
5047 # &quot;user:eve@example.com&quot;
5048 # ],
5049 # &quot;condition&quot;: {
5050 # &quot;title&quot;: &quot;expirable access&quot;,
5051 # &quot;description&quot;: &quot;Does not grant access after Sep 2020&quot;,
5052 # &quot;expression&quot;: &quot;request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)&quot;,
Dan O'Mearadd494642020-05-01 07:42:23 -07005053 # }
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005054 # }
Dan O'Mearadd494642020-05-01 07:42:23 -07005055 # ],
Bu Sun Kim65020912020-05-20 12:08:20 -07005056 # &quot;etag&quot;: &quot;BwWWja0YfJA=&quot;,
5057 # &quot;version&quot;: 3
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005058 # }
5059 #
Dan O'Mearadd494642020-05-01 07:42:23 -07005060 # **YAML example:**
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005061 #
5062 # bindings:
5063 # - members:
5064 # - user:mike@example.com
5065 # - group:admins@example.com
5066 # - domain:google.com
Dan O'Mearadd494642020-05-01 07:42:23 -07005067 # - serviceAccount:my-project-id@appspot.gserviceaccount.com
5068 # role: roles/resourcemanager.organizationAdmin
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005069 # - members:
Dan O'Mearadd494642020-05-01 07:42:23 -07005070 # - user:eve@example.com
5071 # role: roles/resourcemanager.organizationViewer
5072 # condition:
5073 # title: expirable access
5074 # description: Does not grant access after Sep 2020
Bu Sun Kim65020912020-05-20 12:08:20 -07005075 # expression: request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)
Dan O'Mearadd494642020-05-01 07:42:23 -07005076 # etag: BwWWja0YfJA=
5077 # version: 3
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005078 #
5079 # For a description of IAM and its features, see the
Dan O'Mearadd494642020-05-01 07:42:23 -07005080 # [IAM documentation](https://cloud.google.com/iam/docs/).
Bu Sun Kim65020912020-05-20 12:08:20 -07005081 &quot;version&quot;: 42, # Specifies the format of the policy.
5082 #
5083 # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
5084 # are rejected.
5085 #
5086 # Any operation that affects conditional role bindings must specify version
5087 # `3`. This requirement applies to the following operations:
5088 #
5089 # * Getting a policy that includes a conditional role binding
5090 # * Adding a conditional role binding to a policy
5091 # * Changing a conditional role binding in a policy
5092 # * Removing any role binding, with or without a condition, from a policy
5093 # that includes conditions
5094 #
5095 # **Important:** If you use IAM Conditions, you must include the `etag` field
5096 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
5097 # you to overwrite a version `3` policy with a version `1` policy, and all of
5098 # the conditions in the version `3` policy are lost.
5099 #
5100 # If a policy does not include any conditions, operations on that policy may
5101 # specify any valid version or leave the field unset.
5102 #
5103 # To learn which resources support conditions in their IAM policies, see the
5104 # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
5105 &quot;auditConfigs&quot;: [ # Specifies cloud audit logging configuration for this policy.
5106 { # Specifies the audit configuration for a service.
5107 # The configuration determines which permission types are logged, and what
5108 # identities, if any, are exempted from logging.
5109 # An AuditConfig must have one or more AuditLogConfigs.
5110 #
5111 # If there are AuditConfigs for both `allServices` and a specific service,
5112 # the union of the two AuditConfigs is used for that service: the log_types
5113 # specified in each AuditConfig are enabled, and the exempted_members in each
5114 # AuditLogConfig are exempted.
5115 #
5116 # Example Policy with multiple AuditConfigs:
5117 #
5118 # {
5119 # &quot;audit_configs&quot;: [
5120 # {
5121 # &quot;service&quot;: &quot;allServices&quot;
5122 # &quot;audit_log_configs&quot;: [
5123 # {
5124 # &quot;log_type&quot;: &quot;DATA_READ&quot;,
5125 # &quot;exempted_members&quot;: [
5126 # &quot;user:jose@example.com&quot;
5127 # ]
5128 # },
5129 # {
5130 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
5131 # },
5132 # {
5133 # &quot;log_type&quot;: &quot;ADMIN_READ&quot;,
5134 # }
5135 # ]
5136 # },
5137 # {
5138 # &quot;service&quot;: &quot;sampleservice.googleapis.com&quot;
5139 # &quot;audit_log_configs&quot;: [
5140 # {
5141 # &quot;log_type&quot;: &quot;DATA_READ&quot;,
5142 # },
5143 # {
5144 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
5145 # &quot;exempted_members&quot;: [
5146 # &quot;user:aliya@example.com&quot;
5147 # ]
5148 # }
5149 # ]
5150 # }
5151 # ]
5152 # }
5153 #
5154 # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
5155 # logging. It also exempts jose@example.com from DATA_READ logging, and
5156 # aliya@example.com from DATA_WRITE logging.
5157 &quot;service&quot;: &quot;A String&quot;, # Specifies a service that will be enabled for audit logging.
5158 # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
5159 # `allServices` is a special value that covers all services.
5160 &quot;auditLogConfigs&quot;: [ # The configuration for logging of each type of permission.
5161 { # Provides the configuration for logging a type of permissions.
5162 # Example:
5163 #
5164 # {
5165 # &quot;audit_log_configs&quot;: [
5166 # {
5167 # &quot;log_type&quot;: &quot;DATA_READ&quot;,
5168 # &quot;exempted_members&quot;: [
5169 # &quot;user:jose@example.com&quot;
5170 # ]
5171 # },
5172 # {
5173 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
5174 # }
5175 # ]
5176 # }
5177 #
5178 # This enables &#x27;DATA_READ&#x27; and &#x27;DATA_WRITE&#x27; logging, while exempting
5179 # jose@example.com from DATA_READ logging.
5180 &quot;exemptedMembers&quot;: [ # Specifies the identities that do not cause logging for this type of
5181 # permission.
5182 # Follows the same format of Binding.members.
5183 &quot;A String&quot;,
5184 ],
5185 &quot;logType&quot;: &quot;A String&quot;, # The log type that this config enables.
5186 },
5187 ],
5188 },
5189 ],
5190 &quot;bindings&quot;: [ # Associates a list of `members` to a `role`. Optionally, may specify a
Dan O'Mearadd494642020-05-01 07:42:23 -07005191 # `condition` that determines how and when the `bindings` are applied. Each
5192 # of the `bindings` must contain at least one member.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005193 { # Associates `members` with a `role`.
Bu Sun Kim65020912020-05-20 12:08:20 -07005194 &quot;condition&quot;: { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
5195 #
5196 # If the condition evaluates to `true`, then this binding applies to the
5197 # current request.
5198 #
5199 # If the condition evaluates to `false`, then this binding does not apply to
5200 # the current request. However, a different role binding might grant the same
5201 # role to one or more of the members in this binding.
5202 #
5203 # To learn which resources support conditions in their IAM policies, see the
5204 # [IAM
5205 # documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
5206 # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
5207 # are documented at https://github.com/google/cel-spec.
5208 #
5209 # Example (Comparison):
5210 #
5211 # title: &quot;Summary size limit&quot;
5212 # description: &quot;Determines if a summary is less than 100 chars&quot;
5213 # expression: &quot;document.summary.size() &lt; 100&quot;
5214 #
5215 # Example (Equality):
5216 #
5217 # title: &quot;Requestor is owner&quot;
5218 # description: &quot;Determines if requestor is the document owner&quot;
5219 # expression: &quot;document.owner == request.auth.claims.email&quot;
5220 #
5221 # Example (Logic):
5222 #
5223 # title: &quot;Public documents&quot;
5224 # description: &quot;Determine whether the document should be publicly visible&quot;
5225 # expression: &quot;document.type != &#x27;private&#x27; &amp;&amp; document.type != &#x27;internal&#x27;&quot;
5226 #
5227 # Example (Data Manipulation):
5228 #
5229 # title: &quot;Notification string&quot;
5230 # description: &quot;Create a notification string with a timestamp.&quot;
5231 # expression: &quot;&#x27;New message received at &#x27; + string(document.create_time)&quot;
5232 #
5233 # The exact variables and functions that may be referenced within an expression
5234 # are determined by the service that evaluates it. See the service
5235 # documentation for additional information.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07005236 &quot;description&quot;: &quot;A String&quot;, # Optional. Description of the expression. This is a longer text which
5237 # describes the expression, e.g. when hovered over it in a UI.
5238 &quot;expression&quot;: &quot;A String&quot;, # Textual representation of an expression in Common Expression Language
5239 # syntax.
Bu Sun Kim65020912020-05-20 12:08:20 -07005240 &quot;title&quot;: &quot;A String&quot;, # Optional. Title for the expression, i.e. a short string describing
5241 # its purpose. This can be used, e.g., in UIs which allow you to enter the
5242 # expression.
5243 &quot;location&quot;: &quot;A String&quot;, # Optional. String indicating the location of the expression for error
5244 # reporting, e.g. a file name and a position in the file.
Bu Sun Kim65020912020-05-20 12:08:20 -07005245 },
5246 &quot;members&quot;: [ # Specifies the identities requesting access for a Cloud Platform resource.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005247 # `members` can have the following values:
5248 #
5249 # * `allUsers`: A special identifier that represents anyone who is
5250 # on the internet; with or without a Google account.
5251 #
5252 # * `allAuthenticatedUsers`: A special identifier that represents anyone
5253 # who is authenticated with a Google account or a service account.
5254 #
5255 # * `user:{emailid}`: An email address that represents a specific Google
Dan O'Mearadd494642020-05-01 07:42:23 -07005256 # account. For example, `alice@example.com`.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005257 #
5258 #
5259 # * `serviceAccount:{emailid}`: An email address that represents a service
5260 # account. For example, `my-other-app@appspot.gserviceaccount.com`.
5261 #
5262 # * `group:{emailid}`: An email address that represents a Google group.
5263 # For example, `admins@example.com`.
5264 #
Dan O'Mearadd494642020-05-01 07:42:23 -07005265 # * `deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique
5266 # identifier) representing a user that has been recently deleted. For
5267 # example, `alice@example.com?uid=123456789012345678901`. If the user is
5268 # recovered, this value reverts to `user:{emailid}` and the recovered user
5269 # retains the role in the binding.
5270 #
5271 # * `deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address (plus
5272 # unique identifier) representing a service account that has been recently
5273 # deleted. For example,
5274 # `my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901`.
5275 # If the service account is undeleted, this value reverts to
5276 # `serviceAccount:{emailid}` and the undeleted service account retains the
5277 # role in the binding.
5278 #
5279 # * `deleted:group:{emailid}?uid={uniqueid}`: An email address (plus unique
5280 # identifier) representing a Google group that has been recently
5281 # deleted. For example, `admins@example.com?uid=123456789012345678901`. If
5282 # the group is recovered, this value reverts to `group:{emailid}` and the
5283 # recovered group retains the role in the binding.
5284 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005285 #
5286 # * `domain:{domain}`: The G Suite domain (primary) that represents all the
5287 # users of that domain. For example, `google.com` or `example.com`.
5288 #
Bu Sun Kim65020912020-05-20 12:08:20 -07005289 &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005290 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07005291 &quot;role&quot;: &quot;A String&quot;, # Role that is assigned to `members`.
5292 # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005293 },
5294 ],
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07005295 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
5296 # prevent simultaneous updates of a policy from overwriting each other.
5297 # It is strongly suggested that systems make use of the `etag` in the
5298 # read-modify-write cycle to perform policy updates in order to avoid race
5299 # conditions: An `etag` is returned in the response to `getIamPolicy`, and
5300 # systems are expected to put that etag in the request to `setIamPolicy` to
5301 # ensure that their change will be applied to the same version of the policy.
5302 #
5303 # **Important:** If you use IAM Conditions, you must include the `etag` field
5304 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
5305 # you to overwrite a version `3` policy with a version `1` policy, and all of
5306 # the conditions in the version `3` policy are lost.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005307 }</pre>
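<p>As a minimal sketch of the read-modify-write cycle described above, using
the Python client library (the project, job name, role, and member below are
placeholders):</p>
<pre>
from googleapiclient import discovery

ml = discovery.build(&#x27;ml&#x27;, &#x27;v1&#x27;)
resource = &#x27;projects/my-project/jobs/my_job&#x27;  # placeholder resource name

# Read the current policy; the returned `etag` must be sent back so that
# concurrent updates do not overwrite each other.
policy = ml.projects().jobs().getIamPolicy(resource=resource).execute()

# Modify the policy locally, e.g. grant an additional member a role.
policy.setdefault(&#x27;bindings&#x27;, []).append({
    &#x27;role&#x27;: &#x27;roles/ml.viewer&#x27;,
    &#x27;members&#x27;: [&#x27;user:jane@example.com&#x27;],
})

# Write the modified policy back, including the unchanged `etag`.
ml.projects().jobs().setIamPolicy(
    resource=resource,
    body={&#x27;policy&#x27;: policy},
).execute()
</pre>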
5308</div>
5309
5310<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -07005311 <code class="details" id="testIamPermissions">testIamPermissions(resource, body=None, x__xgafv=None)</code>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005312 <pre>Returns permissions that a caller has on the specified resource.
5313If the resource does not exist, this will return an empty set of
Bu Sun Kim65020912020-05-20 12:08:20 -07005314permissions, not a `NOT_FOUND` error.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005315
5316Note: This operation is designed to be used for building permission-aware
5317UIs and command-line tools, not for authorization checking. This operation
Bu Sun Kim65020912020-05-20 12:08:20 -07005318may &quot;fail open&quot; without warning.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005319
5320Args:
5321 resource: string, REQUIRED: The resource for which the policy detail is being requested.
5322See the operation documentation for the appropriate value for this field. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -07005323 body: object, The request body.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005324 The object takes the form of:
5325
5326{ # Request message for `TestIamPermissions` method.
Bu Sun Kim65020912020-05-20 12:08:20 -07005327 &quot;permissions&quot;: [ # The set of permissions to check for the `resource`. Permissions with
5328 # wildcards (such as &#x27;*&#x27; or &#x27;storage.*&#x27;) are not allowed. For more
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005329 # information see
5330 # [IAM Overview](https://cloud.google.com/iam/docs/overview#permissions).
Bu Sun Kim65020912020-05-20 12:08:20 -07005331 &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005332 ],
5333 }
5334
5335 x__xgafv: string, V1 error format.
5336 Allowed values
5337 1 - v1 error format
5338 2 - v2 error format
5339
5340Returns:
5341 An object of the form:
5342
5343 { # Response message for `TestIamPermissions` method.
Bu Sun Kim65020912020-05-20 12:08:20 -07005344 &quot;permissions&quot;: [ # A subset of `TestPermissionsRequest.permissions` that the caller is
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005345 # allowed.
Bu Sun Kim65020912020-05-20 12:08:20 -07005346 &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005347 ],
5348 }</pre>
5349</div>
5350
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04005351</body></html>