blob: a45bbe58e12c6e12822bb590f080407d8cb5fa9d [file] [log] [blame]
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001<html><body>
2<style>
3
4body, h1, h2, h3, div, span, p, pre, a {
5 margin: 0;
6 padding: 0;
7 border: 0;
8 font-weight: inherit;
9 font-style: inherit;
10 font-size: 100%;
11 font-family: inherit;
12 vertical-align: baseline;
13}
14
15body {
16 font-size: 13px;
17 padding: 1em;
18}
19
20h1 {
21 font-size: 26px;
22 margin-bottom: 1em;
23}
24
25h2 {
26 font-size: 24px;
27 margin-bottom: 1em;
28}
29
30h3 {
31 font-size: 20px;
32 margin-bottom: 1em;
33 margin-top: 1em;
34}
35
36pre, code {
37 line-height: 1.5;
38 font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
39}
40
41pre {
42 margin-top: 0.5em;
43}
44
45h1, h2, h3, p {
46 font-family: Arial, sans serif;
47}
48
49h1, h2, h3 {
50 border-bottom: solid #CCC 1px;
51}
52
53.toc_element {
54 margin-top: 0.5em;
55}
56
57.firstline {
58 margin-left: 2 em;
59}
60
61.method {
62 margin-top: 1em;
63 border: solid 1px #CCC;
64 padding: 1em;
65 background: #EEE;
66}
67
68.details {
69 font-weight: bold;
70 font-size: 14px;
71}
72
73</style>
74
Dan O'Mearadd494642020-05-01 07:42:23 -070075<h1><a href="ml_v1.html">AI Platform Training & Prediction API</a> . <a href="ml_v1.projects.html">projects</a> . <a href="ml_v1.projects.jobs.html">jobs</a></h1>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -040076<h2>Instance Methods</h2>
77<p class="toc_element">
Bu Sun Kim715bd7f2019-06-14 16:50:42 -070078 <code><a href="#cancel">cancel(name, body=None, x__xgafv=None)</a></code></p>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -040079<p class="firstline">Cancels a running job.</p>
80<p class="toc_element">
Dan O'Mearadd494642020-05-01 07:42:23 -070081 <code><a href="#create">create(parent, body=None, x__xgafv=None)</a></code></p>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -040082<p class="firstline">Creates a training or a batch prediction job.</p>
83<p class="toc_element">
Thomas Coffee2f245372017-03-27 10:39:26 -070084 <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -040085<p class="firstline">Describes a job.</p>
86<p class="toc_element">
Dan O'Mearadd494642020-05-01 07:42:23 -070087 <code><a href="#getIamPolicy">getIamPolicy(resource, options_requestedPolicyVersion=None, x__xgafv=None)</a></code></p>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -070088<p class="firstline">Gets the access control policy for a resource.</p>
89<p class="toc_element">
Dan O'Mearadd494642020-05-01 07:42:23 -070090 <code><a href="#list">list(parent, pageSize=None, pageToken=None, x__xgafv=None, filter=None)</a></code></p>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -040091<p class="firstline">Lists the jobs in the project.</p>
92<p class="toc_element">
93 <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
94<p class="firstline">Retrieves the next page of results.</p>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -070095<p class="toc_element">
Dan O'Mearadd494642020-05-01 07:42:23 -070096 <code><a href="#patch">patch(name, body=None, updateMask=None, x__xgafv=None)</a></code></p>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -070097<p class="firstline">Updates a specific job resource.</p>
98<p class="toc_element">
Dan O'Mearadd494642020-05-01 07:42:23 -070099 <code><a href="#setIamPolicy">setIamPolicy(resource, body=None, x__xgafv=None)</a></code></p>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700100<p class="firstline">Sets the access control policy on the specified resource. Replaces any</p>
101<p class="toc_element">
Dan O'Mearadd494642020-05-01 07:42:23 -0700102 <code><a href="#testIamPermissions">testIamPermissions(resource, body=None, x__xgafv=None)</a></code></p>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700103<p class="firstline">Returns permissions that a caller has on the specified resource.</p>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400104<h3>Method Details</h3>
105<div class="method">
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700106 <code class="details" id="cancel">cancel(name, body=None, x__xgafv=None)</code>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400107 <pre>Cancels a running job.
108
109Args:
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700110 name: string, Required. The name of the job to cancel. (required)
111 body: object, The request body.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400112 The object takes the form of:
113
114{ # Request message for the CancelJob method.
115 }
116
117 x__xgafv: string, V1 error format.
118 Allowed values
119 1 - v1 error format
120 2 - v2 error format
121
122Returns:
123 An object of the form:
124
125 { # A generic empty message that you can re-use to avoid defining duplicated
126 # empty messages in your APIs. A typical example is to use it as the request
127 # or the response type of an API method. For instance:
128 #
129 # service Foo {
130 # rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
131 # }
132 #
133 # The JSON representation for `Empty` is empty JSON object `{}`.
134 }</pre>
135</div>
136
137<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -0700138 <code class="details" id="create">create(parent, body=None, x__xgafv=None)</code>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400139 <pre>Creates a training or a batch prediction job.
140
141Args:
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700142 parent: string, Required. The project name. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -0700143 body: object, The request body.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400144 The object takes the form of:
145
146{ # Represents a training or prediction job.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700147 "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
148 "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
149 "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
150 # Only set for hyperparameter tuning jobs.
151 "trials": [ # Results for individual Hyperparameter trials.
152 # Only set for hyperparameter tuning jobs.
153 { # Represents the result of a single hyperparameter tuning trial from a
154 # training job. The TrainingOutput object that is returned on successful
155 # completion of a training job with hyperparameter tuning includes a list
156 # of HyperparameterOutput objects, one for each successful trial.
Dan O'Mearadd494642020-05-01 07:42:23 -0700157 "startTime": "A String", # Output only. Start time for the trial.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700158 "hyperparameters": { # The hyperparameters given to this trial.
159 "a_key": "A String",
160 },
161 "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
162 "trainingStep": "A String", # The global training step for this metric.
163 "objectiveValue": 3.14, # The objective value at this training step.
164 },
Dan O'Mearadd494642020-05-01 07:42:23 -0700165 "state": "A String", # Output only. The detailed state of the trial.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700166 "allMetrics": [ # All recorded object metrics for this trial. This field is not currently
167 # populated.
168 { # An observed value of a metric.
169 "trainingStep": "A String", # The global training step for this metric.
170 "objectiveValue": 3.14, # The objective value at this training step.
171 },
172 ],
173 "isTrialStoppedEarly": True or False, # True if the trial is stopped early.
Dan O'Mearadd494642020-05-01 07:42:23 -0700174 "endTime": "A String", # Output only. End time for the trial.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700175 "trialId": "A String", # The trial id for these results.
176 "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
177 # Only set for trials of built-in algorithms jobs that have succeeded.
178 "framework": "A String", # Framework on which the built-in algorithm was trained.
179 "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
180 # saves the trained model. Only set for successful jobs that don't use
181 # hyperparameter tuning.
182 "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
183 # trained.
184 "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
185 },
186 },
187 ],
188 "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
189 "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job.
190 "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
191 "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning
192 # trials. See
193 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
194 # for more information. Only set for hyperparameter tuning jobs.
195 "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
196 # Only set for built-in algorithms jobs.
197 "framework": "A String", # Framework on which the built-in algorithm was trained.
198 "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
199 # saves the trained model. Only set for successful jobs that don't use
200 # hyperparameter tuning.
201 "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
202 # trained.
203 "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
204 },
205 },
206 "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
207 "modelName": "A String", # Use this field if you want to use the default version for the specified
208 # model. The string must use the following format:
209 #
210 # `"projects/YOUR_PROJECT/models/YOUR_MODEL"`
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700211 "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for
212 # this job. Please refer to
213 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
214 # for information about how to use signatures.
215 #
216 # Defaults to
217 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
218 # , which is "serving_default".
Dan O'Mearadd494642020-05-01 07:42:23 -0700219 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch
220 # prediction. If not set, AI Platform will pick the runtime version used
221 # during the CreateVersion request for this model version, or choose the
222 # latest stable version when model version information is not available
223 # such as when the model is specified by uri.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700224 "batchSize": "A String", # Optional. Number of records per batch, defaults to 64.
225 # The service will buffer batch_size number of records in memory before
226 # invoking one Tensorflow prediction call internally. So take the record
227 # size and memory available into consideration when setting this parameter.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700228 "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
229 # Defaults to 10 if not specified.
230 "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
231 # the model to use.
232 "outputPath": "A String", # Required. The output Google Cloud Storage location.
233 "dataFormat": "A String", # Required. The format of the input data files.
234 "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
235 # string is formatted the same way as `model_version`, with the addition
236 # of the version information:
237 #
238 # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"`
239 "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
Dan O'Mearadd494642020-05-01 07:42:23 -0700240 # See the &lt;a href="/ml-engine/docs/tensorflow/regions"&gt;available regions&lt;/a&gt;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700241 # for AI Platform services.
Dan O'Mearadd494642020-05-01 07:42:23 -0700242 "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain
243 # &lt;a href="/storage/docs/gsutil/addlhelp/WildcardNames"&gt;wildcards&lt;/a&gt;.
244 "A String",
245 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700246 "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON.
247 },
Dan O'Mearadd494642020-05-01 07:42:23 -0700248 "labels": { # Optional. One or more labels that you can add, to organize your jobs.
249 # Each label is a key-value pair, where both the key and the value are
250 # arbitrary strings that you supply.
251 # For more information, see the documentation on
252 # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
253 "a_key": "A String",
254 },
255 "jobId": "A String", # Required. The user-specified id of the job.
256 "state": "A String", # Output only. The detailed state of a job.
257 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
258 # prevent simultaneous updates of a job from overwriting each other.
259 # It is strongly suggested that systems make use of the `etag` in the
260 # read-modify-write cycle to perform job updates in order to avoid race
261 # conditions: An `etag` is returned in the response to `GetJob`, and
262 # systems are expected to put that etag in the request to `UpdateJob` to
263 # ensure that their change will be applied to the same version of the job.
264 "startTime": "A String", # Output only. When the job processing was started.
265 "trainingInput": { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
266 # to submit your training job, you can specify the input parameters as
267 # command-line arguments and/or in a YAML configuration file referenced from
268 # the --config command-line argument. For details, see the guide to [submitting
269 # a training job](/ai-platform/training/docs/training-jobs).
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700270 "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
Dan O'Mearadd494642020-05-01 07:42:23 -0700271 # job's master worker. You must specify this field when `scaleTier` is set to
272 # `CUSTOM`.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700273 #
Dan O'Mearadd494642020-05-01 07:42:23 -0700274 # You can use certain Compute Engine machine types directly in this field.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700275 # The following types are supported:
276 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700277 # - `n1-standard-4`
278 # - `n1-standard-8`
279 # - `n1-standard-16`
280 # - `n1-standard-32`
281 # - `n1-standard-64`
282 # - `n1-standard-96`
283 # - `n1-highmem-2`
284 # - `n1-highmem-4`
285 # - `n1-highmem-8`
286 # - `n1-highmem-16`
287 # - `n1-highmem-32`
288 # - `n1-highmem-64`
289 # - `n1-highmem-96`
290 # - `n1-highcpu-16`
291 # - `n1-highcpu-32`
292 # - `n1-highcpu-64`
293 # - `n1-highcpu-96`
294 #
Dan O'Mearadd494642020-05-01 07:42:23 -0700295 # Learn more about [using Compute Engine machine
296 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700297 #
Dan O'Mearadd494642020-05-01 07:42:23 -0700298 # Alternatively, you can use the following legacy machine types:
299 #
300 # - `standard`
301 # - `large_model`
302 # - `complex_model_s`
303 # - `complex_model_m`
304 # - `complex_model_l`
305 # - `standard_gpu`
306 # - `complex_model_m_gpu`
307 # - `complex_model_l_gpu`
308 # - `standard_p100`
309 # - `complex_model_m_p100`
310 # - `standard_v100`
311 # - `large_model_v100`
312 # - `complex_model_m_v100`
313 # - `complex_model_l_v100`
314 #
315 # Learn more about [using legacy machine
316 # types](/ml-engine/docs/machine-types#legacy-machine-types).
317 #
318 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
319 # field. Learn more about the [special configuration options for training
320 # with
321 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
322 "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
323 # and other data needed for training. This path is passed to your TensorFlow
324 # program as the '--job-dir' command-line argument. The benefit of specifying
325 # this field is that Cloud ML validates the path for use in training.
326 "scheduling": { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
327 "maxRunningTime": "A String", # Optional. The maximum job running time, expressed in seconds. The field can
328 # contain up to nine fractional digits, terminated by `s`. By default there
329 # is no limit to the running time.
330 #
331 # If the training job is still running after this duration, AI Platform
332 # Training cancels it.
333 #
334 # For example, if you want to ensure your job runs for no more than 2 hours,
335 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
336 # minute).
337 #
338 # If you submit your training job using the `gcloud` tool, you can [provide
339 # this field in a `config.yaml`
340 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
341 # For example:
342 #
343 # ```yaml
344 # trainingInput:
345 # ...
346 # scheduling:
347 # maxRunningTime: 7200s
348 # ...
349 # ```
350 },
351 "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
352 # job. Each replica in the cluster will be of the type specified in
353 # `parameter_server_type`.
354 #
355 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
356 # set this value, you must also set `parameter_server_type`.
357 #
358 # The default value is zero.
359 "evaluatorCount": "A String", # Optional. The number of evaluator replicas to use for the training job.
360 # Each replica in the cluster will be of the type specified in
361 # `evaluator_type`.
362 #
363 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
364 # set this value, you must also set `evaluator_type`.
365 #
366 # The default value is zero.
367 "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
368 # job's worker nodes.
369 #
370 # The supported values are the same as those described in the entry for
371 # `masterType`.
372 #
373 # This value must be consistent with the category of machine type that
374 # `masterType` uses. In other words, both must be Compute Engine machine
375 # types or both must be legacy machine types.
376 #
377 # If you use `cloud_tpu` for this value, see special instructions for
378 # [configuring a custom TPU
379 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
380 #
381 # This value must be present when `scaleTier` is set to `CUSTOM` and
382 # `workerCount` is greater than zero.
383 "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
384 # and parameter servers.
385 "packageUris": [ # Required. The Google Cloud Storage location of the packages with
386 # the training program and any additional dependencies.
387 # The maximum number of package URIs is 100.
388 "A String",
389 ],
390 "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
391 #
392 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
393 # to a Compute Engine machine type. [Learn about restrictions on accelerator
394 # configurations for
395 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
396 #
397 # Set `workerConfig.imageUri` only if you build a custom image for your
398 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
399 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
400 # containers](/ai-platform/training/docs/distributed-training-containers).
401 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
402 # the one used in the custom container. This field is required if the replica
403 # is a TPU worker that uses a custom container. Otherwise, do not specify
404 # this field. This must be a [runtime version that currently supports
405 # training with
406 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
407 #
408 # Note that the version of TensorFlow included in a runtime version may
409 # differ from the numbering of the runtime version itself, because it may
410 # have a different [patch
411 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
412 # In this field, you must specify the runtime version (TensorFlow minor
413 # version). For example, if your custom container runs TensorFlow `1.x.y`,
414 # specify `1.x`.
415 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
416 # [Learn about restrictions on accelerator configurations for
417 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
418 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
419 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
420 # [accelerators for online
421 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
422 "count": "A String", # The number of accelerators to attach to each machine running the job.
423 "type": "A String", # The type of accelerator to use.
424 },
425 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
426 # Registry. Learn more about [configuring custom
427 # containers](/ai-platform/training/docs/distributed-training-containers).
428 },
429 "evaluatorConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
430 #
431 # You should only set `evaluatorConfig.acceleratorConfig` if
432 # `evaluatorType` is set to a Compute Engine machine type. [Learn
433 # about restrictions on accelerator configurations for
434 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
435 #
436 # Set `evaluatorConfig.imageUri` only if you build a custom image for
437 # your evaluator. If `evaluatorConfig.imageUri` has not been
438 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
439 # containers](/ai-platform/training/docs/distributed-training-containers).
440 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
441 # the one used in the custom container. This field is required if the replica
442 # is a TPU worker that uses a custom container. Otherwise, do not specify
443 # this field. This must be a [runtime version that currently supports
444 # training with
445 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
446 #
447 # Note that the version of TensorFlow included in a runtime version may
448 # differ from the numbering of the runtime version itself, because it may
449 # have a different [patch
450 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
451 # In this field, you must specify the runtime version (TensorFlow minor
452 # version). For example, if your custom container runs TensorFlow `1.x.y`,
453 # specify `1.x`.
454 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
455 # [Learn about restrictions on accelerator configurations for
456 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
457 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
458 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
459 # [accelerators for online
460 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
461 "count": "A String", # The number of accelerators to attach to each machine running the job.
462 "type": "A String", # The type of accelerator to use.
463 },
464 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
465 # Registry. Learn more about [configuring custom
466 # containers](/ai-platform/training/docs/distributed-training-containers).
467 },
468 "useChiefInTfConfig": True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
469 # variable when training with a custom container. Defaults to `false`. [Learn
470 # more about this
471 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
472 #
473 # This field has no effect for training jobs that don't use a custom
474 # container.
475 "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
476 #
477 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
478 # to a Compute Engine machine type. Learn about [restrictions on accelerator
479 # configurations for
480 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
481 #
482 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
483 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
484 # about [configuring custom
485 # containers](/ai-platform/training/docs/distributed-training-containers).
486 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
487 # the one used in the custom container. This field is required if the replica
488 # is a TPU worker that uses a custom container. Otherwise, do not specify
489 # this field. This must be a [runtime version that currently supports
490 # training with
491 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
492 #
493 # Note that the version of TensorFlow included in a runtime version may
494 # differ from the numbering of the runtime version itself, because it may
495 # have a different [patch
496 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
497 # In this field, you must specify the runtime version (TensorFlow minor
498 # version). For example, if your custom container runs TensorFlow `1.x.y`,
499 # specify `1.x`.
500 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
501 # [Learn about restrictions on accelerator configurations for
502 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
503 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
504 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
505 # [accelerators for online
506 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
507 "count": "A String", # The number of accelerators to attach to each machine running the job.
508 "type": "A String", # The type of accelerator to use.
509 },
510 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
511 # Registry. Learn more about [configuring custom
512 # containers](/ai-platform/training/docs/distributed-training-containers).
513 },
514 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. You must
515 # either specify this field or specify `masterConfig.imageUri`.
516 #
517 # For more information, see the [runtime version
518 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
519 # manage runtime versions](/ai-platform/training/docs/versioning).
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700520 "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
521 "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
522 # the specified hyperparameters.
523 #
524 # Defaults to one.
525 "goal": "A String", # Required. The type of goal to use for tuning. Available types are
526 # `MAXIMIZE` and `MINIMIZE`.
527 #
528 # Defaults to `MAXIMIZE`.
529 "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter
530 # tuning job.
531 # Uses the default AI Platform hyperparameter tuning
532 # algorithm if unspecified.
533 "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing
534 # the hyperparameter tuning job. You can specify this field to override the
535 # default failing criteria for AI Platform hyperparameter tuning jobs.
536 #
537 # Defaults to zero, which means the service decides when a hyperparameter
538 # job should fail.
539 "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
540 # early stopping.
541 "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to
542 # continue with. The job id will be used to find the corresponding vizier
543 # study guid and resume the study.
544 "params": [ # Required. The set of parameters to tune.
545 { # Represents a single hyperparameter to optimize.
546 "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
547 # should be unset if type is `CATEGORICAL`. This value should be integers if
548 # type is `INTEGER`.
Dan O'Mearadd494642020-05-01 07:42:23 -0700549 "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
550 # should be unset if type is `CATEGORICAL`. This value should be integers if
551 # type is INTEGER.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700552 "discreteValues": [ # Required if type is `DISCRETE`.
553 # A list of feasible points.
554 # The list should be in strictly increasing order. For instance, this
555 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
556 # should not contain more than 1,000 values.
557 3.14,
558 ],
559 "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
560 # a HyperparameterSpec message. E.g., "learning_rate".
Dan O'Mearadd494642020-05-01 07:42:23 -0700561 "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
562 "A String",
563 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700564 "type": "A String", # Required. The type of the parameter.
565 "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
566 # Leave unset for categorical parameters.
567 # Some kind of scaling is strongly recommended for real or integral
568 # parameters (e.g., `UNIT_LINEAR_SCALE`).
569 },
570 ],
571 "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For
572 # current versions of TensorFlow, this tag name should exactly match what is
573 # shown in TensorBoard, including all scopes. For versions of TensorFlow
574 # prior to 0.12, this should be only the tag passed to tf.Summary.
575 # By default, "training/hptuning/metric" will be used.
576 "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
577 # You can reduce the time it takes to perform hyperparameter tuning by adding
578 # trials in parallel. However, each trail only benefits from the information
579 # gained in completed trials. That means that a trial does not get access to
580 # the results of trials running at the same time, which could reduce the
581 # quality of the overall optimization.
582 #
583 # Each trial will use the same scale tier and machine types.
584 #
585 # Defaults to one.
586 },
Dan O'Mearadd494642020-05-01 07:42:23 -0700587 "args": [ # Optional. Command-line arguments passed to the training application when it
588 # starts. If your job uses a custom container, then the arguments are passed
589 # to the container's &lt;a class="external" target="_blank"
590 # href="https://docs.docker.com/engine/reference/builder/#entrypoint"&gt;
591 # `ENTRYPOINT`&lt;/a&gt; command.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700592 "A String",
593 ],
594 "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700595 "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
596 # replica in the cluster will be of the type specified in `worker_type`.
597 #
598 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
599 # set this value, you must also set `worker_type`.
600 #
601 # The default value is zero.
Dan O'Mearadd494642020-05-01 07:42:23 -0700602 "encryptionConfig": { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
603 # protect resources created by a training job, instead of using Google's
604 # default encryption. If this is set, then all resources created by the
605 # training job will be encrypted with the customer-managed encryption key
606 # that you specify.
607 #
608 # [Learn how and when to use CMEK with AI Platform
609 # Training](/ai-platform/training/docs/cmek).
610 # a resource.
611 "kmsKeyName": "A String", # The Cloud KMS resource identifier of the customer-managed encryption key
612 # used to protect a resource, such as a training job. It has the following
613 # format:
614 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
615 },
616 "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
617 #
618 # You should only set `parameterServerConfig.acceleratorConfig` if
619 # `parameterServerType` is set to a Compute Engine machine type. [Learn
620 # about restrictions on accelerator configurations for
621 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
622 #
623 # Set `parameterServerConfig.imageUri` only if you build a custom image for
624 # your parameter server. If `parameterServerConfig.imageUri` has not been
625 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
626 # containers](/ai-platform/training/docs/distributed-training-containers).
627 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
628 # the one used in the custom container. This field is required if the replica
629 # is a TPU worker that uses a custom container. Otherwise, do not specify
630 # this field. This must be a [runtime version that currently supports
631 # training with
632 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
633 #
634 # Note that the version of TensorFlow included in a runtime version may
635 # differ from the numbering of the runtime version itself, because it may
636 # have a different [patch
637 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
638 # In this field, you must specify the runtime version (TensorFlow minor
639 # version). For example, if your custom container runs TensorFlow `1.x.y`,
640 # specify `1.x`.
641 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
642 # [Learn about restrictions on accelerator configurations for
643 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
644 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
645 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
646 # [accelerators for online
647 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
648 "count": "A String", # The number of accelerators to attach to each machine running the job.
649 "type": "A String", # The type of accelerator to use.
650 },
651 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
652 # Registry. Learn more about [configuring custom
653 # containers](/ai-platform/training/docs/distributed-training-containers).
654 },
655 "region": "A String", # Required. The region to run the training job in. See the [available
656 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
657 "pythonVersion": "A String", # Optional. The version of Python used in training. You must either specify
658 # this field or specify `masterConfig.imageUri`.
659 #
660 # The following Python versions are available:
661 #
662 # * Python '3.7' is available when `runtime_version` is set to '1.15' or
663 # later.
664 # * Python '3.5' is available when `runtime_version` is set to a version
665 # from '1.4' to '1.14'.
666 # * Python '2.7' is available when `runtime_version` is set to '1.15' or
667 # earlier.
668 #
669 # Read more about the Python versions available for [each runtime
670 # version](/ml-engine/docs/runtime-version-list).
671 "evaluatorType": "A String", # Optional. Specifies the type of virtual machine to use for your training
672 # job's evaluator nodes.
673 #
674 # The supported values are the same as those described in the entry for
675 # `masterType`.
676 #
677 # This value must be consistent with the category of machine type that
678 # `masterType` uses. In other words, both must be Compute Engine machine
679 # types or both must be legacy machine types.
680 #
681 # This value must be present when `scaleTier` is set to `CUSTOM` and
682 # `evaluatorCount` is greater than zero.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700683 "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
684 # job's parameter server.
685 #
686 # The supported values are the same as those described in the entry for
687 # `master_type`.
688 #
689 # This value must be consistent with the category of machine type that
Dan O'Mearadd494642020-05-01 07:42:23 -0700690 # `masterType` uses. In other words, both must be Compute Engine machine
691 # types or both must be legacy machine types.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700692 #
693 # This value must be present when `scaleTier` is set to `CUSTOM` and
694 # `parameter_server_count` is greater than zero.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700695 },
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700696 "endTime": "A String", # Output only. When the job processing was completed.
697 "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
698 "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
699 "nodeHours": 3.14, # Node hours used by the batch prediction job.
700 "predictionCount": "A String", # The number of generated predictions.
701 "errorCount": "A String", # The number of data instances which resulted in errors.
702 },
703 "createTime": "A String", # Output only. When the job was created.
704}
705
706 x__xgafv: string, V1 error format.
707 Allowed values
708 1 - v1 error format
709 2 - v2 error format
710
711Returns:
712 An object of the form:
713
714 { # Represents a training or prediction job.
715 "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400716 "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700717 "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
718 # Only set for hyperparameter tuning jobs.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400719 "trials": [ # Results for individual Hyperparameter trials.
720 # Only set for hyperparameter tuning jobs.
721 { # Represents the result of a single hyperparameter tuning trial from a
722 # training job. The TrainingOutput object that is returned on successful
723 # completion of a training job with hyperparameter tuning includes a list
724 # of HyperparameterOutput objects, one for each successful trial.
Dan O'Mearadd494642020-05-01 07:42:23 -0700725 "startTime": "A String", # Output only. Start time for the trial.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400726 "hyperparameters": { # The hyperparameters given to this trial.
727 "a_key": "A String",
728 },
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700729 "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
730 "trainingStep": "A String", # The global training step for this metric.
731 "objectiveValue": 3.14, # The objective value at this training step.
732 },
Dan O'Mearadd494642020-05-01 07:42:23 -0700733 "state": "A String", # Output only. The detailed state of the trial.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700734 "allMetrics": [ # All recorded object metrics for this trial. This field is not currently
735 # populated.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400736 { # An observed value of a metric.
737 "trainingStep": "A String", # The global training step for this metric.
738 "objectiveValue": 3.14, # The objective value at this training step.
739 },
740 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700741 "isTrialStoppedEarly": True or False, # True if the trial is stopped early.
Dan O'Mearadd494642020-05-01 07:42:23 -0700742 "endTime": "A String", # Output only. End time for the trial.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700743 "trialId": "A String", # The trial id for these results.
744 "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
745 # Only set for trials of built-in algorithms jobs that have succeeded.
746 "framework": "A String", # Framework on which the built-in algorithm was trained.
747 "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
748 # saves the trained model. Only set for successful jobs that don't use
749 # hyperparameter tuning.
750 "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
751 # trained.
752 "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400753 },
754 },
755 ],
Sai Cheemalapati4ba8c232017-06-06 18:46:08 -0400756 "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700757 "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job.
Sai Cheemalapati4ba8c232017-06-06 18:46:08 -0400758 "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700759 "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning
760 # trials. See
761 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
762 # for more information. Only set for hyperparameter tuning jobs.
763 "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
764 # Only set for built-in algorithms jobs.
765 "framework": "A String", # Framework on which the built-in algorithm was trained.
766 "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
767 # saves the trained model. Only set for successful jobs that don't use
768 # hyperparameter tuning.
769 "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
770 # trained.
771 "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
772 },
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400773 },
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700774 "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
775 "modelName": "A String", # Use this field if you want to use the default version for the specified
776 # model. The string must use the following format:
777 #
778 # `"projects/YOUR_PROJECT/models/YOUR_MODEL"`
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700779 "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for
780 # this job. Please refer to
781 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
782 # for information about how to use signatures.
783 #
784 # Defaults to
785 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
786 # , which is "serving_default".
Dan O'Mearadd494642020-05-01 07:42:23 -0700787 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch
788 # prediction. If not set, AI Platform will pick the runtime version used
789 # during the CreateVersion request for this model version, or choose the
790 # latest stable version when model version information is not available
791 # such as when the model is specified by uri.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700792 "batchSize": "A String", # Optional. Number of records per batch, defaults to 64.
793 # The service will buffer batch_size number of records in memory before
794 # invoking one Tensorflow prediction call internally. So take the record
795 # size and memory available into consideration when setting this parameter.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700796 "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
797 # Defaults to 10 if not specified.
798 "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
799 # the model to use.
800 "outputPath": "A String", # Required. The output Google Cloud Storage location.
801 "dataFormat": "A String", # Required. The format of the input data files.
802 "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
803 # string is formatted the same way as `model_version`, with the addition
804 # of the version information:
805 #
806 # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"`
807 "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
Dan O'Mearadd494642020-05-01 07:42:23 -0700808 # See the &lt;a href="/ml-engine/docs/tensorflow/regions"&gt;available regions&lt;/a&gt;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700809 # for AI Platform services.
Dan O'Mearadd494642020-05-01 07:42:23 -0700810 "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain
811 # &lt;a href="/storage/docs/gsutil/addlhelp/WildcardNames"&gt;wildcards&lt;/a&gt;.
812 "A String",
813 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700814 "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON.
815 },
Dan O'Mearadd494642020-05-01 07:42:23 -0700816 "labels": { # Optional. One or more labels that you can add, to organize your jobs.
817 # Each label is a key-value pair, where both the key and the value are
818 # arbitrary strings that you supply.
819 # For more information, see the documentation on
820 # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
821 "a_key": "A String",
822 },
823 "jobId": "A String", # Required. The user-specified id of the job.
824 "state": "A String", # Output only. The detailed state of a job.
825 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
826 # prevent simultaneous updates of a job from overwriting each other.
827 # It is strongly suggested that systems make use of the `etag` in the
828 # read-modify-write cycle to perform job updates in order to avoid race
829 # conditions: An `etag` is returned in the response to `GetJob`, and
830 # systems are expected to put that etag in the request to `UpdateJob` to
831 # ensure that their change will be applied to the same version of the job.
832 "startTime": "A String", # Output only. When the job processing was started.
833 "trainingInput": { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
834 # to submit your training job, you can specify the input parameters as
835 # command-line arguments and/or in a YAML configuration file referenced from
836 # the --config command-line argument. For details, see the guide to [submitting
837 # a training job](/ai-platform/training/docs/training-jobs).
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400838 "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
Dan O'Mearadd494642020-05-01 07:42:23 -0700839 # job's master worker. You must specify this field when `scaleTier` is set to
840 # `CUSTOM`.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400841 #
Dan O'Mearadd494642020-05-01 07:42:23 -0700842 # You can use certain Compute Engine machine types directly in this field.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400843 # The following types are supported:
844 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700845 # - `n1-standard-4`
846 # - `n1-standard-8`
847 # - `n1-standard-16`
848 # - `n1-standard-32`
849 # - `n1-standard-64`
850 # - `n1-standard-96`
851 # - `n1-highmem-2`
852 # - `n1-highmem-4`
853 # - `n1-highmem-8`
854 # - `n1-highmem-16`
855 # - `n1-highmem-32`
856 # - `n1-highmem-64`
857 # - `n1-highmem-96`
858 # - `n1-highcpu-16`
859 # - `n1-highcpu-32`
860 # - `n1-highcpu-64`
861 # - `n1-highcpu-96`
862 #
Dan O'Mearadd494642020-05-01 07:42:23 -0700863 # Learn more about [using Compute Engine machine
864 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
Bu Sun Kim715bd7f2019-06-14 16:50:42 -0700865 #
Dan O'Mearadd494642020-05-01 07:42:23 -0700866 # Alternatively, you can use the following legacy machine types:
867 #
868 # - `standard`
869 # - `large_model`
870 # - `complex_model_s`
871 # - `complex_model_m`
872 # - `complex_model_l`
873 # - `standard_gpu`
874 # - `complex_model_m_gpu`
875 # - `complex_model_l_gpu`
876 # - `standard_p100`
877 # - `complex_model_m_p100`
878 # - `standard_v100`
879 # - `large_model_v100`
880 # - `complex_model_m_v100`
881 # - `complex_model_l_v100`
882 #
883 # Learn more about [using legacy machine
884 # types](/ml-engine/docs/machine-types#legacy-machine-types).
885 #
886 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
887 # field. Learn more about the [special configuration options for training
888 # with
889 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
890 "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
891 # and other data needed for training. This path is passed to your TensorFlow
892 # program as the '--job-dir' command-line argument. The benefit of specifying
893 # this field is that Cloud ML validates the path for use in training.
894 "scheduling": { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
895 "maxRunningTime": "A String", # Optional. The maximum job running time, expressed in seconds. The field can
896 # contain up to nine fractional digits, terminated by `s`. By default there
897 # is no limit to the running time.
898 #
899 # If the training job is still running after this duration, AI Platform
900 # Training cancels it.
901 #
902 # For example, if you want to ensure your job runs for no more than 2 hours,
903 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
904 # minute).
905 #
906 # If you submit your training job using the `gcloud` tool, you can [provide
907 # this field in a `config.yaml`
908 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
909 # For example:
910 #
911 # ```yaml
912 # trainingInput:
913 # ...
914 # scheduling:
915 # maxRunningTime: 7200s
916 # ...
917 # ```
918 },
919 "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
920 # job. Each replica in the cluster will be of the type specified in
921 # `parameter_server_type`.
922 #
923 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
924 # set this value, you must also set `parameter_server_type`.
925 #
926 # The default value is zero.
927 "evaluatorCount": "A String", # Optional. The number of evaluator replicas to use for the training job.
928 # Each replica in the cluster will be of the type specified in
929 # `evaluator_type`.
930 #
931 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
932 # set this value, you must also set `evaluator_type`.
933 #
934 # The default value is zero.
935 "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
936 # job's worker nodes.
937 #
938 # The supported values are the same as those described in the entry for
939 # `masterType`.
940 #
941 # This value must be consistent with the category of machine type that
942 # `masterType` uses. In other words, both must be Compute Engine machine
943 # types or both must be legacy machine types.
944 #
945 # If you use `cloud_tpu` for this value, see special instructions for
946 # [configuring a custom TPU
947 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
948 #
949 # This value must be present when `scaleTier` is set to `CUSTOM` and
950 # `workerCount` is greater than zero.
951 "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
952 # and parameter servers.
953 "packageUris": [ # Required. The Google Cloud Storage location of the packages with
954 # the training program and any additional dependencies.
955 # The maximum number of package URIs is 100.
956 "A String",
957 ],
958 "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
959 #
960 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
961 # to a Compute Engine machine type. [Learn about restrictions on accelerator
962 # configurations for
963 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
964 #
965 # Set `workerConfig.imageUri` only if you build a custom image for your
966 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
967 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
968 # containers](/ai-platform/training/docs/distributed-training-containers).
969 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
970 # the one used in the custom container. This field is required if the replica
971 # is a TPU worker that uses a custom container. Otherwise, do not specify
972 # this field. This must be a [runtime version that currently supports
973 # training with
974 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
975 #
976 # Note that the version of TensorFlow included in a runtime version may
977 # differ from the numbering of the runtime version itself, because it may
978 # have a different [patch
979 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
980 # In this field, you must specify the runtime version (TensorFlow minor
981 # version). For example, if your custom container runs TensorFlow `1.x.y`,
982 # specify `1.x`.
983 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
984 # [Learn about restrictions on accelerator configurations for
985 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
986 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
987 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
988 # [accelerators for online
989 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
990 "count": "A String", # The number of accelerators to attach to each machine running the job.
991 "type": "A String", # The type of accelerator to use.
992 },
993 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
994 # Registry. Learn more about [configuring custom
995 # containers](/ai-platform/training/docs/distributed-training-containers).
996 },
997 "evaluatorConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
998 #
999 # You should only set `evaluatorConfig.acceleratorConfig` if
1000 # `evaluatorType` is set to a Compute Engine machine type. [Learn
1001 # about restrictions on accelerator configurations for
1002 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1003 #
1004 # Set `evaluatorConfig.imageUri` only if you build a custom image for
1005 # your evaluator. If `evaluatorConfig.imageUri` has not been
1006 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
1007 # containers](/ai-platform/training/docs/distributed-training-containers).
1008 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
1009 # the one used in the custom container. This field is required if the replica
1010 # is a TPU worker that uses a custom container. Otherwise, do not specify
1011 # this field. This must be a [runtime version that currently supports
1012 # training with
1013 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1014 #
1015 # Note that the version of TensorFlow included in a runtime version may
1016 # differ from the numbering of the runtime version itself, because it may
1017 # have a different [patch
1018 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1019 # In this field, you must specify the runtime version (TensorFlow minor
1020 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1021 # specify `1.x`.
1022 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1023 # [Learn about restrictions on accelerator configurations for
1024 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1025 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1026 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1027 # [accelerators for online
1028 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1029 "count": "A String", # The number of accelerators to attach to each machine running the job.
1030 "type": "A String", # The type of accelerator to use.
1031 },
1032 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
1033 # Registry. Learn more about [configuring custom
1034 # containers](/ai-platform/training/docs/distributed-training-containers).
1035 },
1036 "useChiefInTfConfig": True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
1037 # variable when training with a custom container. Defaults to `false`. [Learn
1038 # more about this
1039 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
1040 #
1041 # This field has no effect for training jobs that don't use a custom
1042 # container.
1043 "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
1044 #
1045 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
1046 # to a Compute Engine machine type. Learn about [restrictions on accelerator
1047 # configurations for
1048 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1049 #
1050 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
1051 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
1052 # about [configuring custom
1053 # containers](/ai-platform/training/docs/distributed-training-containers).
1054 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
1055 # the one used in the custom container. This field is required if the replica
1056 # is a TPU worker that uses a custom container. Otherwise, do not specify
1057 # this field. This must be a [runtime version that currently supports
1058 # training with
1059 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1060 #
1061 # Note that the version of TensorFlow included in a runtime version may
1062 # differ from the numbering of the runtime version itself, because it may
1063 # have a different [patch
1064 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1065 # In this field, you must specify the runtime version (TensorFlow minor
1066 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1067 # specify `1.x`.
1068 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1069 # [Learn about restrictions on accelerator configurations for
1070 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1071 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1072 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1073 # [accelerators for online
1074 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1075 "count": "A String", # The number of accelerators to attach to each machine running the job.
1076 "type": "A String", # The type of accelerator to use.
1077 },
1078 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
1079 # Registry. Learn more about [configuring custom
1080 # containers](/ai-platform/training/docs/distributed-training-containers).
1081 },
1082 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. You must
1083 # either specify this field or specify `masterConfig.imageUri`.
1084 #
1085 # For more information, see the [runtime version
1086 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
1087 # manage runtime versions](/ai-platform/training/docs/versioning).
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001088 "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
1089 "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
1090 # the specified hyperparameters.
1091 #
1092 # Defaults to one.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001093 "goal": "A String", # Required. The type of goal to use for tuning. Available types are
1094 # `MAXIMIZE` and `MINIMIZE`.
1095 #
1096 # Defaults to `MAXIMIZE`.
1097 "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter
1098 # tuning job.
1099 # Uses the default AI Platform hyperparameter tuning
1100 # algorithm if unspecified.
1101 "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing
1102 # the hyperparameter tuning job. You can specify this field to override the
1103 # default failing criteria for AI Platform hyperparameter tuning jobs.
1104 #
1105 # Defaults to zero, which means the service decides when a hyperparameter
1106 # job should fail.
1107 "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
1108 # early stopping.
1109 "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to
1110 # continue with. The job id will be used to find the corresponding vizier
1111 # study guid and resume the study.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001112 "params": [ # Required. The set of parameters to tune.
1113 { # Represents a single hyperparameter to optimize.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001114 "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001115 # should be unset if type is `CATEGORICAL`. This value should be integers if
1116 # type is `INTEGER`.
Dan O'Mearadd494642020-05-01 07:42:23 -07001117 "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
1118 # should be unset if type is `CATEGORICAL`. This value should be an integer
1119 # if type is `INTEGER`.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001120 "discreteValues": [ # Required if type is `DISCRETE`.
1121 # A list of feasible points.
1122 # The list should be in strictly increasing order. For instance, this
1123 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
1124 # should not contain more than 1,000 values.
1125 3.14,
1126 ],
1127 "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
1128 # a HyperparameterSpec message. E.g., "learning_rate".
Dan O'Mearadd494642020-05-01 07:42:23 -07001129 "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
1130 "A String",
1131 ],
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001132 "type": "A String", # Required. The type of the parameter.
1133 "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
1134 # Leave unset for categorical parameters.
1135 # Some kind of scaling is strongly recommended for real or integral
1136 # parameters (e.g., `UNIT_LINEAR_SCALE`).
1137 },
1138 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001139 "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For
1140 # current versions of TensorFlow, this tag name should exactly match what is
1141 # shown in TensorBoard, including all scopes. For versions of TensorFlow
1142 # prior to 0.12, this should be only the tag passed to tf.Summary.
1143 # By default, "training/hptuning/metric" will be used.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001144 "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
1145 # You can reduce the time it takes to perform hyperparameter tuning by adding
1146 # trials in parallel. However, each trial only benefits from the information
1147 # gained in completed trials. That means that a trial does not get access to
1148 # the results of trials running at the same time, which could reduce the
1149 # quality of the overall optimization.
1150 #
1151 # Each trial will use the same scale tier and machine types.
1152 #
1153 # Defaults to one.
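#
# For example, a minimal HyperparameterSpec that tunes a single
# `learning_rate` parameter might look like the following sketch (all
# values are illustrative, not recommendations):
#
# ```python
# hyperparameters = {
#     'goal': 'MAXIMIZE',
#     'hyperparameterMetricTag': 'training/hptuning/metric',
#     'maxTrials': 10,
#     'maxParallelTrials': 2,
#     'params': [{
#         'parameterName': 'learning_rate',
#         'type': 'DOUBLE',
#         'minValue': 0.0001,
#         'maxValue': 0.1,
#         'scaleType': 'UNIT_LINEAR_SCALE',
#     }],
# }
# ```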
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001154 },
Dan O'Mearadd494642020-05-01 07:42:23 -07001155 "args": [ # Optional. Command-line arguments passed to the training application when it
1156 # starts. If your job uses a custom container, then the arguments are passed
1157 # to the container's &lt;a class="external" target="_blank"
1158 # href="https://docs.docker.com/engine/reference/builder/#entrypoint"&gt;
1159 # `ENTRYPOINT`&lt;/a&gt; command.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001160 "A String",
1161 ],
1162 "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001163 "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
1164 # replica in the cluster will be of the type specified in `worker_type`.
1165 #
1166 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
1167 # set this value, you must also set `worker_type`.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001168 #
1169 # The default value is zero.
Dan O'Mearadd494642020-05-01 07:42:23 -07001170 "encryptionConfig": { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
1171 # protect resources created by a training job, instead of using Google's
1172 # default encryption. If this is set, then all resources created by the
1173 # training job will be encrypted with the customer-managed encryption key
1174 # that you specify.
1175 #
1176 # [Learn how and when to use CMEK with AI Platform
1177 # Training](/ai-platform/training/docs/cmek).
1178 # a resource.
1179 "kmsKeyName": "A String", # The Cloud KMS resource identifier of the customer-managed encryption key
1180 # used to protect a resource, such as a training job. It has the following
1181 # format:
1182 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
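#
# For example, the job request's `encryptionConfig` might look like the
# following sketch (the key name is illustrative, not a real resource):
#
# ```python
# encryption_config = {
#     'kmsKeyName': ('projects/my-project/locations/us-central1/'
#                    'keyRings/my-key-ring/cryptoKeys/my-key'),
# }
# ```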
1183 },
1184 "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
1185 #
1186 # You should only set `parameterServerConfig.acceleratorConfig` if
1187 # `parameterServerType` is set to a Compute Engine machine type. [Learn
1188 # about restrictions on accelerator configurations for
1189 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1190 #
1191 # Set `parameterServerConfig.imageUri` only if you build a custom image for
1192 # your parameter server. If `parameterServerConfig.imageUri` has not been
1193 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
1194 # containers](/ai-platform/training/docs/distributed-training-containers).
1195 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
1196 # the one used in the custom container. This field is required if the replica
1197 # is a TPU worker that uses a custom container. Otherwise, do not specify
1198 # this field. This must be a [runtime version that currently supports
1199 # training with
1200 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1201 #
1202 # Note that the version of TensorFlow included in a runtime version may
1203 # differ from the numbering of the runtime version itself, because it may
1204 # have a different [patch
1205 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1206 # In this field, you must specify the runtime version (TensorFlow minor
1207 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1208 # specify `1.x`.
1209 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1210 # [Learn about restrictions on accelerator configurations for
1211 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1212 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1213 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1214 # [accelerators for online
1215 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1216 "count": "A String", # The number of accelerators to attach to each machine running the job.
1217 "type": "A String", # The type of accelerator to use.
1218 },
1219 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
1220 # Registry. Learn more about [configuring custom
1221 # containers](/ai-platform/training/docs/distributed-training-containers).
1222 },
1223 "region": "A String", # Required. The region to run the training job in. See the [available
1224 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
1225 "pythonVersion": "A String", # Optional. The version of Python used in training. You must either specify
1226 # this field or specify `masterConfig.imageUri`.
1227 #
1228 # The following Python versions are available:
1229 #
1230 # * Python '3.7' is available when `runtime_version` is set to '1.15' or
1231 # later.
1232 # * Python '3.5' is available when `runtime_version` is set to a version
1233 # from '1.4' to '1.14'.
1234 # * Python '2.7' is available when `runtime_version` is set to '1.15' or
1235 # earlier.
1236 #
1237 # Read more about the Python versions available for [each runtime
1238 # version](/ml-engine/docs/runtime-version-list).
1239 "evaluatorType": "A String", # Optional. Specifies the type of virtual machine to use for your training
1240 # job's evaluator nodes.
1241 #
1242 # The supported values are the same as those described in the entry for
1243 # `masterType`.
1244 #
1245 # This value must be consistent with the category of machine type that
1246 # `masterType` uses. In other words, both must be Compute Engine machine
1247 # types or both must be legacy machine types.
1248 #
1249 # This value must be present when `scaleTier` is set to `CUSTOM` and
1250 # `evaluatorCount` is greater than zero.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001251 "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
1252 # job's parameter server.
1253 #
1254 # The supported values are the same as those described in the entry for
1255 # `master_type`.
1256 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001257 # This value must be consistent with the category of machine type that
Dan O'Mearadd494642020-05-01 07:42:23 -07001258 # `masterType` uses. In other words, both must be Compute Engine machine
1259 # types or both must be legacy machine types.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001260 #
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001261 # This value must be present when `scaleTier` is set to `CUSTOM` and
1262 # `parameter_server_count` is greater than zero.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001263 },
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001264 "endTime": "A String", # Output only. When the job processing was completed.
1265 "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
1266 "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
1267 "nodeHours": 3.14, # Node hours used by the batch prediction job.
1268 "predictionCount": "A String", # The number of generated predictions.
1269 "errorCount": "A String", # The number of data instances which resulted in errors.
1270 },
1271 "createTime": "A String", # Output only. When the job was created.
1272 }</pre>
1273</div>
1274
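<p>A minimal sketch for submitting a training job with
<code>projects.jobs.create</code> using this client (the project ID, bucket
paths, and trainer module below are placeholders; application default
credentials are assumed):</p>
<pre>
from googleapiclient import discovery

# Build a client for the AI Platform Training and Prediction API.
ml = discovery.build('ml', 'v1')

job_body = {
    'jobId': 'my_training_job_001',
    'trainingInput': {
        'scaleTier': 'BASIC',
        'packageUris': ['gs://my-bucket/packages/trainer-0.1.tar.gz'],
        'pythonModule': 'trainer.task',
        'region': 'us-central1',
        'runtimeVersion': '1.15',
        'pythonVersion': '3.7',
        'jobDir': 'gs://my-bucket/my_training_job_001',
    },
}

# The created job is returned; its `state` typically starts out as QUEUED.
response = ml.projects().jobs().create(
    parent='projects/my-project', body=job_body).execute()
print(response.get('state'))
</pre>
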
1275<div class="method">
1276 <code class="details" id="get">get(name, x__xgafv=None)</code>
1277 <pre>Describes a job.
1278
1279Args:
1280 name: string, Required. The name of the job to get the description of. (required)
1281 x__xgafv: string, V1 error format.
1282 Allowed values
1283 1 - v1 error format
1284 2 - v2 error format
1285
1286Returns:
1287 An object of the form:
1288
1289 { # Represents a training or prediction job.
1290 "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
1291 "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
1292 "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
1293 # Only set for hyperparameter tuning jobs.
1294 "trials": [ # Results for individual Hyperparameter trials.
1295 # Only set for hyperparameter tuning jobs.
1296 { # Represents the result of a single hyperparameter tuning trial from a
1297 # training job. The TrainingOutput object that is returned on successful
1298 # completion of a training job with hyperparameter tuning includes a list
1299 # of HyperparameterOutput objects, one for each successful trial.
Dan O'Mearadd494642020-05-01 07:42:23 -07001300 "startTime": "A String", # Output only. Start time for the trial.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001301 "hyperparameters": { # The hyperparameters given to this trial.
1302 "a_key": "A String",
1303 },
1304 "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
1305 "trainingStep": "A String", # The global training step for this metric.
1306 "objectiveValue": 3.14, # The objective value at this training step.
1307 },
Dan O'Mearadd494642020-05-01 07:42:23 -07001308 "state": "A String", # Output only. The detailed state of the trial.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001309 "allMetrics": [ # All recorded object metrics for this trial. This field is not currently
1310 # populated.
1311 { # An observed value of a metric.
1312 "trainingStep": "A String", # The global training step for this metric.
1313 "objectiveValue": 3.14, # The objective value at this training step.
1314 },
1315 ],
1316 "isTrialStoppedEarly": True or False, # True if the trial is stopped early.
Dan O'Mearadd494642020-05-01 07:42:23 -07001317 "endTime": "A String", # Output only. End time for the trial.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001318 "trialId": "A String", # The trial id for these results.
1319 "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
1320 # Only set for trials of built-in algorithms jobs that have succeeded.
1321 "framework": "A String", # Framework on which the built-in algorithm was trained.
1322 "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
1323 # saves the trained model. Only set for successful jobs that don't use
1324 # hyperparameter tuning.
1325 "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
1326 # trained.
1327 "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
1328 },
1329 },
1330 ],
1331 "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
1332 "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job.
1333 "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
1334 "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning
1335 # trials. See
1336 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
1337 # for more information. Only set for hyperparameter tuning jobs.
1338 "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
1339 # Only set for built-in algorithms jobs.
1340 "framework": "A String", # Framework on which the built-in algorithm was trained.
1341 "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
1342 # saves the trained model. Only set for successful jobs that don't use
1343 # hyperparameter tuning.
1344 "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
1345 # trained.
1346 "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
1347 },
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001348 },
Sai Cheemalapati4ba8c232017-06-06 18:46:08 -04001349 "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
1350 "modelName": "A String", # Use this field if you want to use the default version for the specified
1351 # model. The string must use the following format:
1352 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001353 # `"projects/YOUR_PROJECT/models/YOUR_MODEL"`
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001354 "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for
1355 # this job. Please refer to
1356 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
1357 # for information about how to use signatures.
1358 #
1359 # Defaults to
1360 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
1361 # , which is "serving_default".
Dan O'Mearadd494642020-05-01 07:42:23 -07001362 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch
1363 # prediction. If not set, AI Platform will pick the runtime version used
1364 # during the CreateVersion request for this model version, or choose the
1365 # latest stable version when model version information is not available,
1366 # such as when the model is specified by `uri`.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001367 "batchSize": "A String", # Optional. Number of records per batch, defaults to 64.
1368 # The service buffers batch_size records in memory before making each
1369 # internal TensorFlow prediction call, so take the record size and the
1370 # available memory into consideration when setting this parameter.
Sai Cheemalapati4ba8c232017-06-06 18:46:08 -04001371 "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
1372 # Defaults to 10 if not specified.
1373 "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
1374 # the model to use.
1375 "outputPath": "A String", # Required. The output Google Cloud Storage location.
1376 "dataFormat": "A String", # Required. The format of the input data files.
1377 "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
1378 # string is formatted the same way as `model_version`, with the addition
1379 # of the version information:
1380 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001381 # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"`
1382 "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
Dan O'Mearadd494642020-05-01 07:42:23 -07001383 # See the &lt;a href="/ml-engine/docs/tensorflow/regions"&gt;available regions&lt;/a&gt;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001384 # for AI Platform services.
Dan O'Mearadd494642020-05-01 07:42:23 -07001385 "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain
1386 # &lt;a href="/storage/docs/gsutil/addlhelp/WildcardNames"&gt;wildcards&lt;/a&gt;.
1387 "A String",
1388 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001389 "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON.
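#
# For example, the `predictionInput` of a batch prediction request might
# look like the following sketch (bucket paths and the model name are
# placeholders):
#
# ```python
# prediction_input = {
#     'dataFormat': 'JSON',
#     'inputPaths': ['gs://my-bucket/instances/*.json'],
#     'outputPath': 'gs://my-bucket/predictions/',
#     'region': 'us-central1',
#     'modelName': 'projects/my-project/models/my_model',
# }
# ```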
1390 },
Dan O'Mearadd494642020-05-01 07:42:23 -07001391 "labels": { # Optional. One or more labels that you can add, to organize your jobs.
1392 # Each label is a key-value pair, where both the key and the value are
1393 # arbitrary strings that you supply.
1394 # For more information, see the documentation on
1395 # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
1396 "a_key": "A String",
1397 },
1398 "jobId": "A String", # Required. The user-specified id of the job.
1399 "state": "A String", # Output only. The detailed state of a job.
1400 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
1401 # prevent simultaneous updates of a job from overwriting each other.
1402 # It is strongly suggested that systems make use of the `etag` in the
1403 # read-modify-write cycle to perform job updates in order to avoid race
1404 # conditions: An `etag` is returned in the response to `GetJob`, and
1405 # systems are expected to put that etag in the request to `UpdateJob` to
1406 # ensure that their change will be applied to the same version of the job.
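#
# For example, a read-modify-write sketch with this client, assuming `ml`
# was built with `googleapiclient.discovery.build('ml', 'v1')` and only the
# job's labels are being changed (job name and label are placeholders; see
# the `patch` method for the exact `updateMask` semantics):
#
# ```python
# name = 'projects/my-project/jobs/my_training_job_001'
# job = ml.projects().jobs().get(name=name).execute()
# job['labels'] = dict(job.get('labels', {}), team='research')
# # The body still carries the `etag` returned by GetJob, so a conflicting
# # concurrent update is rejected instead of being silently overwritten.
# ml.projects().jobs().patch(name=name, updateMask='labels', body=job).execute()
# ```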
1407 "startTime": "A String", # Output only. When the job processing was started.
1408 "trainingInput": { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
1409 # to submit your training job, you can specify the input parameters as
1410 # command-line arguments and/or in a YAML configuration file referenced from
1411 # the --config command-line argument. For details, see the guide to [submitting
1412 # a training job](/ai-platform/training/docs/training-jobs).
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001413 "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
Dan O'Mearadd494642020-05-01 07:42:23 -07001414 # job's master worker. You must specify this field when `scaleTier` is set to
1415 # `CUSTOM`.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001416 #
Dan O'Mearadd494642020-05-01 07:42:23 -07001417 # You can use certain Compute Engine machine types directly in this field.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001418 # The following types are supported:
1419 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001420 # - `n1-standard-4`
1421 # - `n1-standard-8`
1422 # - `n1-standard-16`
1423 # - `n1-standard-32`
1424 # - `n1-standard-64`
1425 # - `n1-standard-96`
1426 # - `n1-highmem-2`
1427 # - `n1-highmem-4`
1428 # - `n1-highmem-8`
1429 # - `n1-highmem-16`
1430 # - `n1-highmem-32`
1431 # - `n1-highmem-64`
1432 # - `n1-highmem-96`
1433 # - `n1-highcpu-16`
1434 # - `n1-highcpu-32`
1435 # - `n1-highcpu-64`
1436 # - `n1-highcpu-96`
1437 #
Dan O'Mearadd494642020-05-01 07:42:23 -07001438 # Learn more about [using Compute Engine machine
1439 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001440 #
Dan O'Mearadd494642020-05-01 07:42:23 -07001441 # Alternatively, you can use the following legacy machine types:
1442 #
1443 # - `standard`
1444 # - `large_model`
1445 # - `complex_model_s`
1446 # - `complex_model_m`
1447 # - `complex_model_l`
1448 # - `standard_gpu`
1449 # - `complex_model_m_gpu`
1450 # - `complex_model_l_gpu`
1451 # - `standard_p100`
1452 # - `complex_model_m_p100`
1453 # - `standard_v100`
1454 # - `large_model_v100`
1455 # - `complex_model_m_v100`
1456 # - `complex_model_l_v100`
1457 #
1458 # Learn more about [using legacy machine
1459 # types](/ml-engine/docs/machine-types#legacy-machine-types).
1460 #
1461 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
1462 # field. Learn more about the [special configuration options for training
1463 # with
1464 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
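#
# For example, a `CUSTOM` tier cluster with a master, workers, and
# parameter servers might be declared like the following sketch (machine
# types and replica counts are illustrative):
#
# ```python
# training_input = {
#     'scaleTier': 'CUSTOM',
#     'masterType': 'n1-highmem-8',
#     'workerType': 'n1-highmem-8',
#     'workerCount': '4',
#     'parameterServerType': 'n1-standard-16',
#     'parameterServerCount': '2',
#     # ... plus packageUris, pythonModule, region, and the other
#     # required fields described on this page.
# }
# ```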
1465 "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
1466 # and other data needed for training. This path is passed to your TensorFlow
1467 # program as the '--job-dir' command-line argument. The benefit of specifying
1468 # this field is that Cloud ML validates the path for use in training.
1469 "scheduling": { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
1470 "maxRunningTime": "A String", # Optional. The maximum job running time, expressed in seconds. The field can
1471 # contain up to nine fractional digits, terminated by `s`. By default there
1472 # is no limit to the running time.
1473 #
1474 # If the training job is still running after this duration, AI Platform
1475 # Training cancels it.
1476 #
1477 # For example, if you want to ensure your job runs for no more than 2 hours,
1478 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
1479 # minute).
1480 #
1481 # If you submit your training job using the `gcloud` tool, you can [provide
1482 # this field in a `config.yaml`
1483 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
1484 # For example:
1485 #
1486 # ```yaml
1487 # trainingInput:
1488 # ...
1489 # scheduling:
1490 # maxRunningTime: 7200s
1491 # ...
1492 # ```
1493 },
1494 "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
1495 # job. Each replica in the cluster will be of the type specified in
1496 # `parameter_server_type`.
1497 #
1498 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
1499 # set this value, you must also set `parameter_server_type`.
1500 #
1501 # The default value is zero.
1502 "evaluatorCount": "A String", # Optional. The number of evaluator replicas to use for the training job.
1503 # Each replica in the cluster will be of the type specified in
1504 # `evaluator_type`.
1505 #
1506 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
1507 # set this value, you must also set `evaluator_type`.
1508 #
1509 # The default value is zero.
1510 "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
1511 # job's worker nodes.
1512 #
1513 # The supported values are the same as those described in the entry for
1514 # `masterType`.
1515 #
1516 # This value must be consistent with the category of machine type that
1517 # `masterType` uses. In other words, both must be Compute Engine machine
1518 # types or both must be legacy machine types.
1519 #
1520 # If you use `cloud_tpu` for this value, see special instructions for
1521 # [configuring a custom TPU
1522 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
1523 #
1524 # This value must be present when `scaleTier` is set to `CUSTOM` and
1525 # `workerCount` is greater than zero.
1526 "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
1527 # and parameter servers.
1528 "packageUris": [ # Required. The Google Cloud Storage location of the packages with
1529 # the training program and any additional dependencies.
1530 # The maximum number of package URIs is 100.
1531 "A String",
1532 ],
1533 "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
1534 #
1535 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
1536 # to a Compute Engine machine type. [Learn about restrictions on accelerator
1537 # configurations for
1538 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1539 #
1540 # Set `workerConfig.imageUri` only if you build a custom image for your
1541 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
1542 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
1543 # containers](/ai-platform/training/docs/distributed-training-containers).
1544 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
1545 # the one used in the custom container. This field is required if the replica
1546 # is a TPU worker that uses a custom container. Otherwise, do not specify
1547 # this field. This must be a [runtime version that currently supports
1548 # training with
1549 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1550 #
1551 # Note that the version of TensorFlow included in a runtime version may
1552 # differ from the numbering of the runtime version itself, because it may
1553 # have a different [patch
1554 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1555 # In this field, you must specify the runtime version (TensorFlow minor
1556 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1557 # specify `1.x`.
1558 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1559 # [Learn about restrictions on accelerator configurations for
1560 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1561 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1562 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1563 # [accelerators for online
1564 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1565 "count": "A String", # The number of accelerators to attach to each machine running the job.
1566 "type": "A String", # The type of accelerator to use.
1567 },
1568 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
1569 # Registry. Learn more about [configuring custom
1570 # containers](/ai-platform/training/docs/distributed-training-containers).
1571 },
1572 "evaluatorConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
1573 #
1574 # You should only set `evaluatorConfig.acceleratorConfig` if
1575 # `evaluatorType` is set to a Compute Engine machine type. [Learn
1576 # about restrictions on accelerator configurations for
1577 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1578 #
1579 # Set `evaluatorConfig.imageUri` only if you build a custom image for
1580 # your evaluator. If `evaluatorConfig.imageUri` has not been
1581 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
1582 # containers](/ai-platform/training/docs/distributed-training-containers).
1583 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
1584 # the one used in the custom container. This field is required if the replica
1585 # is a TPU worker that uses a custom container. Otherwise, do not specify
1586 # this field. This must be a [runtime version that currently supports
1587 # training with
1588 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1589 #
1590 # Note that the version of TensorFlow included in a runtime version may
1591 # differ from the numbering of the runtime version itself, because it may
1592 # have a different [patch
1593 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1594 # In this field, you must specify the runtime version (TensorFlow minor
1595 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1596 # specify `1.x`.
1597 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1598 # [Learn about restrictions on accelerator configurations for
1599 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1600 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1601 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1602 # [accelerators for online
1603 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1604 "count": "A String", # The number of accelerators to attach to each machine running the job.
1605 "type": "A String", # The type of accelerator to use.
1606 },
1607 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
1608 # Registry. Learn more about [configuring custom
1609 # containers](/ai-platform/training/docs/distributed-training-containers).
1610 },
1611 "useChiefInTfConfig": True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
1612 # variable when training with a custom container. Defaults to `false`. [Learn
1613 # more about this
1614 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
1615 #
1616 # This field has no effect for training jobs that don't use a custom
1617 # container.
1618 "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
1619 #
1620 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
1621 # to a Compute Engine machine type. Learn about [restrictions on accelerator
1622 # configurations for
1623 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1624 #
1625 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
1626 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
1627 # about [configuring custom
1628 # containers](/ai-platform/training/docs/distributed-training-containers).
1629 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
1630 # the one used in the custom container. This field is required if the replica
1631 # is a TPU worker that uses a custom container. Otherwise, do not specify
1632 # this field. This must be a [runtime version that currently supports
1633 # training with
1634 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1635 #
1636 # Note that the version of TensorFlow included in a runtime version may
1637 # differ from the numbering of the runtime version itself, because it may
1638 # have a different [patch
1639 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1640 # In this field, you must specify the runtime version (TensorFlow minor
1641 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1642 # specify `1.x`.
1643 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1644 # [Learn about restrictions on accelerator configurations for
1645 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1646 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1647 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1648 # [accelerators for online
1649 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1650 "count": "A String", # The number of accelerators to attach to each machine running the job.
1651 "type": "A String", # The type of accelerator to use.
1652 },
1653 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
1654 # Registry. Learn more about [configuring custom
1655 # containers](/ai-platform/training/docs/distributed-training-containers).
1656 },
1657 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. You must
1658 # either specify this field or specify `masterConfig.imageUri`.
1659 #
1660 # For more information, see the [runtime version
1661 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
1662 # manage runtime versions](/ai-platform/training/docs/versioning).
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001663 "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
1664 "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
1665 # the specified hyperparameters.
1666 #
1667 # Defaults to one.
1668 "goal": "A String", # Required. The type of goal to use for tuning. Available types are
1669 # `MAXIMIZE` and `MINIMIZE`.
1670 #
1671 # Defaults to `MAXIMIZE`.
1672 "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter
1673 # tuning job.
1674 # Uses the default AI Platform hyperparameter tuning
1675 # algorithm if unspecified.
1676 "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing
1677 # the hyperparameter tuning job. You can specify this field to override the
1678 # default failing criteria for AI Platform hyperparameter tuning jobs.
1679 #
1680 # Defaults to zero, which means the service decides when a hyperparameter
1681 # job should fail.
1682 "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
1683 # early stopping.
1684 "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to
1685 # continue with. The job id will be used to find the corresponding vizier
1686 # study guid and resume the study.
1687 "params": [ # Required. The set of parameters to tune.
1688 { # Represents a single hyperparameter to optimize.
1689 "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
1690 # should be unset if type is `CATEGORICAL`. This value should be an integer
1691 # if type is `INTEGER`.
Dan O'Mearadd494642020-05-01 07:42:23 -07001692 "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
1693 # should be unset if type is `CATEGORICAL`. This value should be an integer
1694 # if type is `INTEGER`.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001695 "discreteValues": [ # Required if type is `DISCRETE`.
1696 # A list of feasible points.
1697 # The list should be in strictly increasing order. For instance, this
1698 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
1699 # should not contain more than 1,000 values.
1700 3.14,
1701 ],
1702 "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
1703 # a HyperparameterSpec message. E.g., "learning_rate".
Dan O'Mearadd494642020-05-01 07:42:23 -07001704 "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
1705 "A String",
1706 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001707 "type": "A String", # Required. The type of the parameter.
1708 "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
1709 # Leave unset for categorical parameters.
1710 # Some kind of scaling is strongly recommended for real or integral
1711 # parameters (e.g., `UNIT_LINEAR_SCALE`).
1712 },
1713 ],
1714 "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For
1715 # current versions of TensorFlow, this tag name should exactly match what is
1716 # shown in TensorBoard, including all scopes. For versions of TensorFlow
1717 # prior to 0.12, this should be only the tag passed to tf.Summary.
1718 # By default, "training/hptuning/metric" will be used.
1719 "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
1720 # You can reduce the time it takes to perform hyperparameter tuning by adding
1721 # trials in parallel. However, each trial only benefits from the information
1722 # gained in completed trials. That means that a trial does not get access to
1723 # the results of trials running at the same time, which could reduce the
1724 # quality of the overall optimization.
1725 #
1726 # Each trial will use the same scale tier and machine types.
1727 #
1728 # Defaults to one.
1729 },
Dan O'Mearadd494642020-05-01 07:42:23 -07001730 "args": [ # Optional. Command-line arguments passed to the training application when it
1731 # starts. If your job uses a custom container, then the arguments are passed
1732 # to the container's &lt;a class="external" target="_blank"
1733 # href="https://docs.docker.com/engine/reference/builder/#entrypoint"&gt;
1734 # `ENTRYPOINT`&lt;/a&gt; command.
Sai Cheemalapati4ba8c232017-06-06 18:46:08 -04001735 "A String",
1736 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001737 "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001738 "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
1739 # replica in the cluster will be of the type specified in `worker_type`.
1740 #
1741 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
1742 # set this value, you must also set `worker_type`.
1743 #
1744 # The default value is zero.
Dan O'Mearadd494642020-05-01 07:42:23 -07001745 "encryptionConfig": { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
1746 # protect resources created by a training job, instead of using Google's
1747 # default encryption. If this is set, then all resources created by the
1748 # training job will be encrypted with the customer-managed encryption key
1749 # that you specify.
1750 #
1751 # [Learn how and when to use CMEK with AI Platform
1752 # Training](/ai-platform/training/docs/cmek).
1753 # a resource.
1754 "kmsKeyName": "A String", # The Cloud KMS resource identifier of the customer-managed encryption key
1755 # used to protect a resource, such as a training job. It has the following
1756 # format:
1757 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
1758 },
1759 "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
1760 #
1761 # You should only set `parameterServerConfig.acceleratorConfig` if
1762 # `parameterServerType` is set to a Compute Engine machine type. [Learn
1763 # about restrictions on accelerator configurations for
1764 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1765 #
1766 # Set `parameterServerConfig.imageUri` only if you build a custom image for
1767 # your parameter server. If `parameterServerConfig.imageUri` has not been
1768 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
1769 # containers](/ai-platform/training/docs/distributed-training-containers).
1770 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
1771 # the one used in the custom container. This field is required if the replica
1772 # is a TPU worker that uses a custom container. Otherwise, do not specify
1773 # this field. This must be a [runtime version that currently supports
1774 # training with
1775 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1776 #
1777 # Note that the version of TensorFlow included in a runtime version may
1778 # differ from the numbering of the runtime version itself, because it may
1779 # have a different [patch
1780 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1781 # In this field, you must specify the runtime version (TensorFlow minor
1782 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1783 # specify `1.x`.
1784 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1785 # [Learn about restrictions on accelerator configurations for
1786 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1787 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1788 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1789 # [accelerators for online
1790 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1791 "count": "A String", # The number of accelerators to attach to each machine running the job.
1792 "type": "A String", # The type of accelerator to use.
1793 },
1794 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
1795 # Registry. Learn more about [configuring custom
1796 # containers](/ai-platform/training/docs/distributed-training-containers).
1797 },
1798 "region": "A String", # Required. The region to run the training job in. See the [available
1799 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
1800 "pythonVersion": "A String", # Optional. The version of Python used in training. You must either specify
1801 # this field or specify `masterConfig.imageUri`.
1802 #
1803 # The following Python versions are available:
1804 #
1805 # * Python '3.7' is available when `runtime_version` is set to '1.15' or
1806 # later.
1807 # * Python '3.5' is available when `runtime_version` is set to a version
1808 # from '1.4' to '1.14'.
1809 # * Python '2.7' is available when `runtime_version` is set to '1.15' or
1810 # earlier.
1811 #
1812 # Read more about the Python versions available for [each runtime
1813 # version](/ml-engine/docs/runtime-version-list).
1814 "evaluatorType": "A String", # Optional. Specifies the type of virtual machine to use for your training
1815 # job's evaluator nodes.
1816 #
1817 # The supported values are the same as those described in the entry for
1818 # `masterType`.
1819 #
1820 # This value must be consistent with the category of machine type that
1821 # `masterType` uses. In other words, both must be Compute Engine machine
1822 # types or both must be legacy machine types.
1823 #
1824 # This value must be present when `scaleTier` is set to `CUSTOM` and
1825 # `evaluatorCount` is greater than zero.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001826 "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
1827 # job's parameter server.
1828 #
1829 # The supported values are the same as those described in the entry for
1830 # `master_type`.
1831 #
1832 # This value must be consistent with the category of machine type that
Dan O'Mearadd494642020-05-01 07:42:23 -07001833 # `masterType` uses. In other words, both must be Compute Engine machine
1834 # types or both must be legacy machine types.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001835 #
1836 # This value must be present when `scaleTier` is set to `CUSTOM` and
1837 # `parameter_server_count` is greater than zero.
Sai Cheemalapati4ba8c232017-06-06 18:46:08 -04001838 },
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001839 "endTime": "A String", # Output only. When the job processing was completed.
1840 "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
1841 "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
1842 "nodeHours": 3.14, # Node hours used by the batch prediction job.
1843 "predictionCount": "A String", # The number of generated predictions.
1844 "errorCount": "A String", # The number of data instances which resulted in errors.
1845 },
1846 "createTime": "A String", # Output only. When the job was created.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001847 }</pre>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001848</div>
1849
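<p>A minimal polling sketch built on this method (the job name and polling
interval are illustrative; application default credentials are assumed):</p>
<pre>
import time

from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
name = 'projects/my-project/jobs/my_training_job_001'

# Poll until the job reaches a terminal state.
while True:
    job = ml.projects().jobs().get(name=name).execute()
    if job['state'] in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
        break
    time.sleep(60)

print(job['state'], job.get('errorMessage', ''))
</pre>
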
1850<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -07001851 <code class="details" id="getIamPolicy">getIamPolicy(resource, options_requestedPolicyVersion=None, x__xgafv=None)</code>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001852 <pre>Gets the access control policy for a resource.
1853Returns an empty policy if the resource exists and does not have a policy
1854set.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001855
1856Args:
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001857 resource: string, REQUIRED: The resource for which the policy is being requested.
1858See the operation documentation for the appropriate value for this field. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -07001859 options_requestedPolicyVersion: integer, Optional. The policy format version to be returned.
1860
1861Valid values are 0, 1, and 3. Requests specifying an invalid value will be
1862rejected.
1863
1864Requests for policies with any conditional bindings must specify version 3.
1865Policies without any conditional bindings may specify any valid value or
1866leave the field unset.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04001867 x__xgafv: string, V1 error format.
1868 Allowed values
1869 1 - v1 error format
1870 2 - v2 error format
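
For example, a hedged sketch that requests the version 3 policy format for a
job (the resource name is a placeholder; assumes a client built with
googleapiclient.discovery.build('ml', 'v1')):

  policy = ml.projects().jobs().getIamPolicy(
      resource='projects/my-project/jobs/my_training_job_001',
      options_requestedPolicyVersion=3).execute()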
1871
1872Returns:
1873 An object of the form:
1874
Dan O'Mearadd494642020-05-01 07:42:23 -07001875 { # An Identity and Access Management (IAM) policy, which specifies access
1876 # controls for Google Cloud resources.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001877 #
1878 #
Dan O'Mearadd494642020-05-01 07:42:23 -07001879 # A `Policy` is a collection of `bindings`. A `binding` binds one or more
1880 # `members` to a single `role`. Members can be user accounts, service accounts,
1881 # Google groups, and domains (such as G Suite). A `role` is a named list of
1882 # permissions; each `role` can be an IAM predefined role or a user-created
1883 # custom role.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001884 #
Dan O'Mearadd494642020-05-01 07:42:23 -07001885 # Optionally, a `binding` can specify a `condition`, which is a logical
1886 # expression that allows access to a resource only if the expression evaluates
1887 # to `true`. A condition can add constraints based on attributes of the
1888 # request, the resource, or both.
1889 #
1890 # **JSON example:**
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001891 #
1892 # {
1893 # "bindings": [
1894 # {
Dan O'Mearadd494642020-05-01 07:42:23 -07001895 # "role": "roles/resourcemanager.organizationAdmin",
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001896 # "members": [
1897 # "user:mike@example.com",
1898 # "group:admins@example.com",
1899 # "domain:google.com",
Dan O'Mearadd494642020-05-01 07:42:23 -07001900 # "serviceAccount:my-project-id@appspot.gserviceaccount.com"
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001901 # ]
1902 # },
1903 # {
Dan O'Mearadd494642020-05-01 07:42:23 -07001904 # "role": "roles/resourcemanager.organizationViewer",
1905 # "members": ["user:eve@example.com"],
1906 # "condition": {
1907 # "title": "expirable access",
1908 # "description": "Does not grant access after Sep 2020",
1909 # "expression": "request.time &lt; timestamp('2020-10-01T00:00:00.000Z')",
1910 # }
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001911 # }
Dan O'Mearadd494642020-05-01 07:42:23 -07001912 # ],
1913 # "etag": "BwWWja0YfJA=",
1914 # "version": 3
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001915 # }
1916 #
Dan O'Mearadd494642020-05-01 07:42:23 -07001917 # **YAML example:**
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001918 #
1919 # bindings:
1920 # - members:
1921 # - user:mike@example.com
1922 # - group:admins@example.com
1923 # - domain:google.com
Dan O'Mearadd494642020-05-01 07:42:23 -07001924 # - serviceAccount:my-project-id@appspot.gserviceaccount.com
1925 # role: roles/resourcemanager.organizationAdmin
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001926 # - members:
Dan O'Mearadd494642020-05-01 07:42:23 -07001927 # - user:eve@example.com
1928 # role: roles/resourcemanager.organizationViewer
1929 # condition:
1930 # title: expirable access
1931 # description: Does not grant access after Sep 2020
1932 # expression: request.time &lt; timestamp('2020-10-01T00:00:00.000Z')
1933 # - etag: BwWWja0YfJA=
1934 # - version: 3
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001935 #
1936 # For a description of IAM and its features, see the
Dan O'Mearadd494642020-05-01 07:42:23 -07001937 # [IAM documentation](https://cloud.google.com/iam/docs/).
1938 "bindings": [ # Associates a list of `members` to a `role`. Optionally, may specify a
1939 # `condition` that determines how and when the `bindings` are applied. Each
1940 # of the `bindings` must contain at least one member.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001941 { # Associates `members` with a `role`.
1942 "role": "A String", # Role that is assigned to `members`.
1943 # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
1944 "members": [ # Specifies the identities requesting access for a Cloud Platform resource.
1945 # `members` can have the following values:
1946 #
1947 # * `allUsers`: A special identifier that represents anyone who is
1948 # on the internet; with or without a Google account.
1949 #
1950 # * `allAuthenticatedUsers`: A special identifier that represents anyone
1951 # who is authenticated with a Google account or a service account.
1952 #
1953 # * `user:{emailid}`: An email address that represents a specific Google
Dan O'Mearadd494642020-05-01 07:42:23 -07001954 # account. For example, `alice@example.com` .
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001955 #
1956 #
1957 # * `serviceAccount:{emailid}`: An email address that represents a service
1958 # account. For example, `my-other-app@appspot.gserviceaccount.com`.
1959 #
1960 # * `group:{emailid}`: An email address that represents a Google group.
1961 # For example, `admins@example.com`.
1962 #
Dan O'Mearadd494642020-05-01 07:42:23 -07001963 # * `deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique
1964 # identifier) representing a user that has been recently deleted. For
1965 # example, `alice@example.com?uid=123456789012345678901`. If the user is
1966 # recovered, this value reverts to `user:{emailid}` and the recovered user
1967 # retains the role in the binding.
1968 #
1969 # * `deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address (plus
1970 # unique identifier) representing a service account that has been recently
1971 # deleted. For example,
1972 # `my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901`.
1973 # If the service account is undeleted, this value reverts to
1974 # `serviceAccount:{emailid}` and the undeleted service account retains the
1975 # role in the binding.
1976 #
1977 # * `deleted:group:{emailid}?uid={uniqueid}`: An email address (plus unique
1978 # identifier) representing a Google group that has been recently
1979 # deleted. For example, `admins@example.com?uid=123456789012345678901`. If
1980 # the group is recovered, this value reverts to `group:{emailid}` and the
1981 # recovered group retains the role in the binding.
1982 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001983 #
1984 # * `domain:{domain}`: The G Suite domain (primary) that represents all the
1985 # users of that domain. For example, `google.com` or `example.com`.
1986 #
1987 "A String",
1988 ],
Dan O'Mearadd494642020-05-01 07:42:23 -07001989 "condition": { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001990 # NOTE: An unsatisfied condition will not allow user access via current
1991 # binding. Different bindings, including their conditions, are examined
1992 # independently.
Dan O'Mearadd494642020-05-01 07:42:23 -07001993 # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
1994 # are documented at https://github.com/google/cel-spec.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07001995 #
Dan O'Mearadd494642020-05-01 07:42:23 -07001996 # Example (Comparison):
1997 #
1998 # title: "Summary size limit"
1999 # description: "Determines if a summary is less than 100 chars"
2000 # expression: "document.summary.size() &lt; 100"
2001 #
2002 # Example (Equality):
2003 #
2004 # title: "Requestor is owner"
2005 # description: "Determines if requestor is the document owner"
2006 # expression: "document.owner == request.auth.claims.email"
2007 #
2008 # Example (Logic):
2009 #
2010 # title: "Public documents"
2011 # description: "Determine whether the document should be publicly visible"
2012 # expression: "document.type != 'private' &amp;&amp; document.type != 'internal'"
2013 #
2014 # Example (Data Manipulation):
2015 #
2016 # title: "Notification string"
2017 # description: "Create a notification string with a timestamp."
2018 # expression: "'New message received at ' + string(document.create_time)"
2019 #
2020 # The exact variables and functions that may be referenced within an expression
2021 # are determined by the service that evaluates it. See the service
2022 # documentation for additional information.
2023 "description": "A String", # Optional. Description of the expression. This is a longer text which
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002024 # describes the expression, e.g. when it is hovered over in a UI.
Dan O'Mearadd494642020-05-01 07:42:23 -07002025 "expression": "A String", # Textual representation of an expression in Common Expression Language
2026 # syntax.
2027 "location": "A String", # Optional. String indicating the location of the expression for error
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002028 # reporting, e.g. a file name and a position in the file.
Dan O'Mearadd494642020-05-01 07:42:23 -07002029 "title": "A String", # Optional. Title for the expression, i.e. a short string describing
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002030 # its purpose. This can be used e.g. in UIs that allow users to enter the
2031 # expression.
2032 },
2033 },
2034 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002035 "auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
2036 { # Specifies the audit configuration for a service.
2037 # The configuration determines which permission types are logged, and what
2038 # identities, if any, are exempted from logging.
2039 # An AuditConfig must have one or more AuditLogConfigs.
2040 #
2041 # If there are AuditConfigs for both `allServices` and a specific service,
2042 # the union of the two AuditConfigs is used for that service: the log_types
2043 # specified in each AuditConfig are enabled, and the exempted_members in each
2044 # AuditLogConfig are exempted.
2045 #
2046 # Example Policy with multiple AuditConfigs:
2047 #
2048 # {
2049 # "audit_configs": [
2050 # {
2051 # "service": "allServices"
2052 # "audit_log_configs": [
2053 # {
2054 # "log_type": "DATA_READ",
2055 # "exempted_members": [
Dan O'Mearadd494642020-05-01 07:42:23 -07002056 # "user:jose@example.com"
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002057 # ]
2058 # },
2059 # {
2060 # "log_type": "DATA_WRITE",
2061 # },
2062 # {
2063 # "log_type": "ADMIN_READ",
2064 # }
2065 # ]
2066 # },
2067 # {
Dan O'Mearadd494642020-05-01 07:42:23 -07002068 # "service": "sampleservice.googleapis.com"
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002069 # "audit_log_configs": [
2070 # {
2071 # "log_type": "DATA_READ",
2072 # },
2073 # {
2074 # "log_type": "DATA_WRITE",
2075 # "exempted_members": [
Dan O'Mearadd494642020-05-01 07:42:23 -07002076 # "user:aliya@example.com"
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002077 # ]
2078 # }
2079 # ]
2080 # }
2081 # ]
2082 # }
2083 #
Dan O'Mearadd494642020-05-01 07:42:23 -07002084 # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
2085 # logging. It also exempts jose@example.com from DATA_READ logging, and
2086 # aliya@example.com from DATA_WRITE logging.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002087 "auditLogConfigs": [ # The configuration for logging of each type of permission.
2088 { # Provides the configuration for logging a type of permissions.
2089 # Example:
2090 #
2091 # {
2092 # "audit_log_configs": [
2093 # {
2094 # "log_type": "DATA_READ",
2095 # "exempted_members": [
Dan O'Mearadd494642020-05-01 07:42:23 -07002096 # "user:jose@example.com"
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002097 # ]
2098 # },
2099 # {
2100 # "log_type": "DATA_WRITE",
2101 # }
2102 # ]
2103 # }
2104 #
2105 # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
Dan O'Mearadd494642020-05-01 07:42:23 -07002106 # jose@example.com from DATA_READ logging.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002107 "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
2108 # permission.
2109 # Follows the same format of Binding.members.
2110 "A String",
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002111 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002112 "logType": "A String", # The log type that this config enables.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002113 },
2114 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002115 "service": "A String", # Specifies a service that will be enabled for audit logging.
2116 # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
2117 # `allServices` is a special value that covers all services.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002118 },
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002119 ],
Dan O'Mearadd494642020-05-01 07:42:23 -07002120 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
2121 # prevent simultaneous updates of a policy from overwriting each other.
2122 # It is strongly suggested that systems make use of the `etag` in the
2123 # read-modify-write cycle to perform policy updates in order to avoid race
2124 # conditions: An `etag` is returned in the response to `getIamPolicy`, and
2125 # systems are expected to put that etag in the request to `setIamPolicy` to
2126 # ensure that their change will be applied to the same version of the policy.
2127 #
2128 # **Important:** If you use IAM Conditions, you must include the `etag` field
2129 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
2130 # you to overwrite a version `3` policy with a version `1` policy, and all of
2131 # the conditions in the version `3` policy are lost.
2132 "version": 42, # Specifies the format of the policy.
2133 #
2134 # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
2135 # are rejected.
2136 #
2137 # Any operation that affects conditional role bindings must specify version
2138 # `3`. This requirement applies to the following operations:
2139 #
2140 # * Getting a policy that includes a conditional role binding
2141 # * Adding a conditional role binding to a policy
2142 # * Changing a conditional role binding in a policy
2143 # * Removing any role binding, with or without a condition, from a policy
2144 # that includes conditions
2145 #
2146 # **Important:** If you use IAM Conditions, you must include the `etag` field
2147 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
2148 # you to overwrite a version `3` policy with a version `1` policy, and all of
2149 # the conditions in the version `3` policy are lost.
2150 #
2151 # If a policy does not include any conditions, operations on that policy may
2152 # specify any valid version or leave the field unset.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002153 }</pre>
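  <p>A minimal usage sketch, under stated assumptions: the client object <code>ml</code> is built with <code>googleapiclient.discovery.build('ml', 'v1')</code> using Application Default Credentials, the resource name and member are placeholders, and <code>roles/ml.viewer</code> is only an example role. It also illustrates the etag read-modify-write cycle described above by passing the returned policy straight back to <code>setIamPolicy</code>:</p>
  <pre>from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
resource = 'projects/my-project/jobs/my_job'  # placeholder resource name

# Request version 3 so conditional role bindings, if any, are returned intact.
policy = ml.projects().jobs().getIamPolicy(
    resource=resource,
    options_requestedPolicyVersion=3).execute()

# Read-modify-write: the returned policy carries the etag, so sending the same
# object back lets the service detect conflicting concurrent updates.
policy.setdefault('bindings', []).append({
    'role': 'roles/ml.viewer',              # example role
    'members': ['user:alice@example.com'],  # placeholder member
})
ml.projects().jobs().setIamPolicy(
    resource=resource,
    body={'policy': policy}).execute()</pre>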
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002154</div>
2155
2156<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -07002157 <code class="details" id="list">list(parent, pageSize=None, pageToken=None, x__xgafv=None, filter=None)</code>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002158 <pre>Lists the jobs in the project.
2159
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002160If there are no jobs that match the request parameters, the list
2161request returns an empty response body: {}.
2162
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002163Args:
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002164 parent: string, Required. The name of the project for which to list jobs. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -07002165 pageSize: integer, Optional. The number of jobs to retrieve per "page" of results. If there
2166are more remaining results than this number, the response message will
2167contain a valid value in the `next_page_token` field.
2168
2169The default value is 20, and the maximum page size is 100.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002170 pageToken: string, Optional. A page token to request the next page of results.
2171
2172You get the token from the `next_page_token` field of the response from
2173the previous call.
2174 x__xgafv: string, V1 error format.
2175 Allowed values
2176 1 - v1 error format
2177 2 - v2 error format
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002178 filter: string, Optional. Specifies the subset of jobs to retrieve.
2179You can filter on the value of one or more attributes of the job object.
2180For example, retrieve jobs with a job identifier that starts with 'census':
Dan O'Mearadd494642020-05-01 07:42:23 -07002181&lt;p&gt;&lt;code&gt;gcloud ai-platform jobs list --filter='jobId:census*'&lt;/code&gt;
2182&lt;p&gt;List all failed jobs with names that start with 'rnn':
2183&lt;p&gt;&lt;code&gt;gcloud ai-platform jobs list --filter='jobId:rnn*
2184AND state:FAILED'&lt;/code&gt;
2185&lt;p&gt;For more examples, see the guide to
2186&lt;a href="/ml-engine/docs/tensorflow/monitor-training"&gt;monitoring jobs&lt;/a&gt;.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002187
2188Returns:
2189 An object of the form:
2190
2191 { # Response message for the ListJobs method.
2192 "nextPageToken": "A String", # Optional. Pass this token as the `page_token` field of the request for a
2193 # subsequent call.
2194 "jobs": [ # The list of jobs.
2195 { # Represents a training or prediction job.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002196 "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
2197 "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
2198 "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
2199 # Only set for hyperparameter tuning jobs.
2200 "trials": [ # Results for individual Hyperparameter trials.
2201 # Only set for hyperparameter tuning jobs.
2202 { # Represents the result of a single hyperparameter tuning trial from a
2203 # training job. The TrainingOutput object that is returned on successful
2204 # completion of a training job with hyperparameter tuning includes a list
2205 # of HyperparameterOutput objects, one for each successful trial.
Dan O'Mearadd494642020-05-01 07:42:23 -07002206 "startTime": "A String", # Output only. Start time for the trial.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002207 "hyperparameters": { # The hyperparameters given to this trial.
2208 "a_key": "A String",
2209 },
2210 "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
2211 "trainingStep": "A String", # The global training step for this metric.
2212 "objectiveValue": 3.14, # The objective value at this training step.
2213 },
Dan O'Mearadd494642020-05-01 07:42:23 -07002214 "state": "A String", # Output only. The detailed state of the trial.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002215 "allMetrics": [ # All recorded object metrics for this trial. This field is not currently
2216 # populated.
2217 { # An observed value of a metric.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002218 "trainingStep": "A String", # The global training step for this metric.
2219 "objectiveValue": 3.14, # The objective value at this training step.
2220 },
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002221 ],
2222 "isTrialStoppedEarly": True or False, # True if the trial is stopped early.
Dan O'Mearadd494642020-05-01 07:42:23 -07002223 "endTime": "A String", # Output only. End time for the trial.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002224 "trialId": "A String", # The trial id for these results.
2225 "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
2226 # Only set for trials of built-in algorithms jobs that have succeeded.
2227 "framework": "A String", # Framework on which the built-in algorithm was trained.
2228 "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
2229 # saves the trained model. Only set for successful jobs that don't use
2230 # hyperparameter tuning.
2231 "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
2232 # trained.
2233 "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
2234 },
2235 },
2236 ],
2237 "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
2238 "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job.
2239 "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
2240 "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning
2241 # trials. See
2242 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
2243 # for more information. Only set for hyperparameter tuning jobs.
2244 "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
2245 # Only set for built-in algorithms jobs.
2246 "framework": "A String", # Framework on which the built-in algorithm was trained.
2247 "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
2248 # saves the trained model. Only set for successful jobs that don't use
2249 # hyperparameter tuning.
2250 "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
2251 # trained.
2252 "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
2253 },
2254 },
2255 "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
2256 "modelName": "A String", # Use this field if you want to use the default version for the specified
2257 # model. The string must use the following format:
2258 #
2259 # `"projects/YOUR_PROJECT/models/YOUR_MODEL"`
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002260 "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for
2261 # this job. Please refer to
2262 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
2263 # for information about how to use signatures.
2264 #
2265 # Defaults to
2266 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
2267 # , which is "serving_default".
Dan O'Mearadd494642020-05-01 07:42:23 -07002268 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch
2269 # prediction. If not set, AI Platform will pick the runtime version used
2270 # during the CreateVersion request for this model version, or choose the
2271 # latest stable version when model version information is not available
2272 # such as when the model is specified by uri.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002273 "batchSize": "A String", # Optional. Number of records per batch, defaults to 64.
2274 # The service will buffer batch_size number of records in memory before
2275 # invoking one TensorFlow prediction call internally, so take the record
2276 # size and available memory into consideration when setting this parameter.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002277 "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
2278 # Defaults to 10 if not specified.
2279 "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
2280 # the model to use.
2281 "outputPath": "A String", # Required. The output Google Cloud Storage location.
2282 "dataFormat": "A String", # Required. The format of the input data files.
2283 "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
2284 # string is formatted the same way as `model_version`, with the addition
2285 # of the version information:
2286 #
2287 # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"`
2288 "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
Dan O'Mearadd494642020-05-01 07:42:23 -07002289 # See the &lt;a href="/ml-engine/docs/tensorflow/regions"&gt;available regions&lt;/a&gt;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002290 # for AI Platform services.
Dan O'Mearadd494642020-05-01 07:42:23 -07002291 "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain
2292 # &lt;a href="/storage/docs/gsutil/addlhelp/WildcardNames"&gt;wildcards&lt;/a&gt;.
2293 "A String",
2294 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002295 "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON.
2296 },
Dan O'Mearadd494642020-05-01 07:42:23 -07002297 "labels": { # Optional. One or more labels that you can add to organize your jobs.
2298 # Each label is a key-value pair, where both the key and the value are
2299 # arbitrary strings that you supply.
2300 # For more information, see the documentation on
2301 # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
2302 "a_key": "A String",
2303 },
2304 "jobId": "A String", # Required. The user-specified id of the job.
2305 "state": "A String", # Output only. The detailed state of a job.
2306 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
2307 # prevent simultaneous updates of a job from overwriting each other.
2308 # It is strongly suggested that systems make use of the `etag` in the
2309 # read-modify-write cycle to perform job updates in order to avoid race
2310 # conditions: An `etag` is returned in the response to `GetJob`, and
2311 # systems are expected to put that etag in the request to `UpdateJob` to
2312 # ensure that their change will be applied to the same version of the job.
2313 "startTime": "A String", # Output only. When the job processing was started.
2314 "trainingInput": { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
2315 # to submit your training job, you can specify the input parameters as
2316 # command-line arguments and/or in a YAML configuration file referenced from
2317 # the --config command-line argument. For details, see the guide to [submitting
2318 # a training job](/ai-platform/training/docs/training-jobs).
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002319 "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
Dan O'Mearadd494642020-05-01 07:42:23 -07002320 # job's master worker. You must specify this field when `scaleTier` is set to
2321 # `CUSTOM`.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002322 #
Dan O'Mearadd494642020-05-01 07:42:23 -07002323 # You can use certain Compute Engine machine types directly in this field.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002324 # The following types are supported:
2325 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002326 # - `n1-standard-4`
2327 # - `n1-standard-8`
2328 # - `n1-standard-16`
2329 # - `n1-standard-32`
2330 # - `n1-standard-64`
2331 # - `n1-standard-96`
2332 # - `n1-highmem-2`
2333 # - `n1-highmem-4`
2334 # - `n1-highmem-8`
2335 # - `n1-highmem-16`
2336 # - `n1-highmem-32`
2337 # - `n1-highmem-64`
2338 # - `n1-highmem-96`
2339 # - `n1-highcpu-16`
2340 # - `n1-highcpu-32`
2341 # - `n1-highcpu-64`
2342 # - `n1-highcpu-96`
2343 #
Dan O'Mearadd494642020-05-01 07:42:23 -07002344 # Learn more about [using Compute Engine machine
2345 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002346 #
Dan O'Mearadd494642020-05-01 07:42:23 -07002347 # Alternatively, you can use the following legacy machine types:
2348 #
2349 # - `standard`
2350 # - `large_model`
2351 # - `complex_model_s`
2352 # - `complex_model_m`
2353 # - `complex_model_l`
2354 # - `standard_gpu`
2355 # - `complex_model_m_gpu`
2356 # - `complex_model_l_gpu`
2357 # - `standard_p100`
2358 # - `complex_model_m_p100`
2359 # - `standard_v100`
2360 # - `large_model_v100`
2361 # - `complex_model_m_v100`
2362 # - `complex_model_l_v100`
2363 #
2364 # Learn more about [using legacy machine
2365 # types](/ml-engine/docs/machine-types#legacy-machine-types).
2366 #
2367 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
2368 # field. Learn more about the [special configuration options for training
2369 # with
2370 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
2371 "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
2372 # and other data needed for training. This path is passed to your TensorFlow
2373 # program as the '--job-dir' command-line argument. The benefit of specifying
2374 # this field is that AI Platform validates the path for use in training.
2375 "scheduling": { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
2376 "maxRunningTime": "A String", # Optional. The maximum job running time, expressed in seconds. The field can
2377 # contain up to nine fractional digits, terminated by `s`. By default there
2378 # is no limit to the running time.
2379 #
2380 # If the training job is still running after this duration, AI Platform
2381 # Training cancels it.
2382 #
2383 # For example, if you want to ensure your job runs for no more than 2 hours,
2384 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
2385 # minute).
2386 #
2387 # If you submit your training job using the `gcloud` tool, you can [provide
2388 # this field in a `config.yaml`
2389 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
2390 # For example:
2391 #
2392 # ```yaml
2393 # trainingInput:
2394 # ...
2395 # scheduling:
2396 # maxRunningTime: 7200s
2397 # ...
2398 # ```
2399 },
2400 "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
2401 # job. Each replica in the cluster will be of the type specified in
2402 # `parameter_server_type`.
2403 #
2404 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
2405 # set this value, you must also set `parameter_server_type`.
2406 #
2407 # The default value is zero.
2408 "evaluatorCount": "A String", # Optional. The number of evaluator replicas to use for the training job.
2409 # Each replica in the cluster will be of the type specified in
2410 # `evaluator_type`.
2411 #
2412 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
2413 # set this value, you must also set `evaluator_type`.
2414 #
2415 # The default value is zero.
2416 "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
2417 # job's worker nodes.
2418 #
2419 # The supported values are the same as those described in the entry for
2420 # `masterType`.
2421 #
2422 # This value must be consistent with the category of machine type that
2423 # `masterType` uses. In other words, both must be Compute Engine machine
2424 # types or both must be legacy machine types.
2425 #
2426 # If you use `cloud_tpu` for this value, see special instructions for
2427 # [configuring a custom TPU
2428 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
2429 #
2430 # This value must be present when `scaleTier` is set to `CUSTOM` and
2431 # `workerCount` is greater than zero.
2432 "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
2433 # and parameter servers.
2434 "packageUris": [ # Required. The Google Cloud Storage location of the packages with
2435 # the training program and any additional dependencies.
2436 # The maximum number of package URIs is 100.
2437 "A String",
2438 ],
2439 "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
2440 #
2441 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
2442 # to a Compute Engine machine type. [Learn about restrictions on accelerator
2443 # configurations for
2444 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2445 #
2446 # Set `workerConfig.imageUri` only if you build a custom image for your
2447 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
2448 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
2449 # containers](/ai-platform/training/docs/distributed-training-containers).
2450 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
2451 # the one used in the custom container. This field is required if the replica
2452 # is a TPU worker that uses a custom container. Otherwise, do not specify
2453 # this field. This must be a [runtime version that currently supports
2454 # training with
2455 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
2456 #
2457 # Note that the version of TensorFlow included in a runtime version may
2458 # differ from the numbering of the runtime version itself, because it may
2459 # have a different [patch
2460 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
2461 # In this field, you must specify the runtime version (TensorFlow minor
2462 # version). For example, if your custom container runs TensorFlow `1.x.y`,
2463 # specify `1.x`.
2464 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
2465 # [Learn about restrictions on accelerator configurations for
2466 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2467 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
2468 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
2469 # [accelerators for online
2470 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
2471 "count": "A String", # The number of accelerators to attach to each machine running the job.
2472 "type": "A String", # The type of accelerator to use.
2473 },
2474 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
2475 # Registry. Learn more about [configuring custom
2476 # containers](/ai-platform/training/docs/distributed-training-containers).
2477 },
2478 "evaluatorConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
2479 #
2480 # You should only set `evaluatorConfig.acceleratorConfig` if
2481 # `evaluatorType` is set to a Compute Engine machine type. [Learn
2482 # about restrictions on accelerator configurations for
2483 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2484 #
2485 # Set `evaluatorConfig.imageUri` only if you build a custom image for
2486 # your evaluator. If `evaluatorConfig.imageUri` has not been
2487 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
2488 # containers](/ai-platform/training/docs/distributed-training-containers).
2489 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
2490 # the one used in the custom container. This field is required if the replica
2491 # is a TPU worker that uses a custom container. Otherwise, do not specify
2492 # this field. This must be a [runtime version that currently supports
2493 # training with
2494 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
2495 #
2496 # Note that the version of TensorFlow included in a runtime version may
2497 # differ from the numbering of the runtime version itself, because it may
2498 # have a different [patch
2499 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
2500 # In this field, you must specify the runtime version (TensorFlow minor
2501 # version). For example, if your custom container runs TensorFlow `1.x.y`,
2502 # specify `1.x`.
2503 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
2504 # [Learn about restrictions on accelerator configurations for
2505 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2506 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
2507 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
2508 # [accelerators for online
2509 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
2510 "count": "A String", # The number of accelerators to attach to each machine running the job.
2511 "type": "A String", # The type of accelerator to use.
2512 },
2513 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
2514 # Registry. Learn more about [configuring custom
2515 # containers](/ai-platform/training/docs/distributed-training-containers).
2516 },
2517 "useChiefInTfConfig": True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
2518 # variable when training with a custom container. Defaults to `false`. [Learn
2519 # more about this
2520 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
2521 #
2522 # This field has no effect for training jobs that don't use a custom
2523 # container.
2524 "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
2525 #
2526 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
2527 # to a Compute Engine machine type. Learn about [restrictions on accelerator
2528 # configurations for
2529 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2530 #
2531 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
2532 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
2533 # about [configuring custom
2534 # containers](/ai-platform/training/docs/distributed-training-containers).
2535 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
2536 # the one used in the custom container. This field is required if the replica
2537 # is a TPU worker that uses a custom container. Otherwise, do not specify
2538 # this field. This must be a [runtime version that currently supports
2539 # training with
2540 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
2541 #
2542 # Note that the version of TensorFlow included in a runtime version may
2543 # differ from the numbering of the runtime version itself, because it may
2544 # have a different [patch
2545 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
2546 # In this field, you must specify the runtime version (TensorFlow minor
2547 # version). For example, if your custom container runs TensorFlow `1.x.y`,
2548 # specify `1.x`.
2549 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
2550 # [Learn about restrictions on accelerator configurations for
2551 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2552 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
2553 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
2554 # [accelerators for online
2555 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
2556 "count": "A String", # The number of accelerators to attach to each machine running the job.
2557 "type": "A String", # The type of accelerator to use.
2558 },
2559 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
2560 # Registry. Learn more about [configuring custom
2561 # containers](/ai-platform/training/docs/distributed-training-containers).
2562 },
2563 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. You must
2564 # either specify this field or specify `masterConfig.imageUri`.
2565 #
2566 # For more information, see the [runtime version
2567 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
2568 # manage runtime versions](/ai-platform/training/docs/versioning).
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002569 "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
2570 "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
2571 # the specified hyperparameters.
2572 #
2573 # Defaults to one.
2574 "goal": "A String", # Required. The type of goal to use for tuning. Available types are
2575 # `MAXIMIZE` and `MINIMIZE`.
2576 #
2577 # Defaults to `MAXIMIZE`.
2578 "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter
2579 # tuning job.
2580 # Uses the default AI Platform hyperparameter tuning
2581 # algorithm if unspecified.
2582 "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing
2583 # the hyperparameter tuning job. You can specify this field to override the
2584 # default failing criteria for AI Platform hyperparameter tuning jobs.
2585 #
2586 # Defaults to zero, which means the service decides when a hyperparameter
2587 # job should fail.
2588 "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
2589 # early stopping.
2590 "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to
2591 "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that you want to
2592 # continue. The job id is used to find the corresponding Vizier
2593 # study guid and resume the study.
2594 { # Represents a single hyperparameter to optimize.
2595 "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
2596 # should be unset if type is `CATEGORICAL`. This value should be integers if
2597 # type is `INTEGER`.
Dan O'Mearadd494642020-05-01 07:42:23 -07002598 "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
2599 # should be unset if type is `CATEGORICAL`. This value should be integers if
2600 # type is INTEGER.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002601 "discreteValues": [ # Required if type is `DISCRETE`.
2602 # A list of feasible points.
2603 # The list should be in strictly increasing order. For instance, this
2604 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
2605 # should not contain more than 1,000 values.
2606 3.14,
2607 ],
2608 "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
2609 # a HyperparameterSpec message. E.g., "learning_rate".
Dan O'Mearadd494642020-05-01 07:42:23 -07002610 "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
2611 "A String",
2612 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002613 "type": "A String", # Required. The type of the parameter.
2614 "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
2615 # Leave unset for categorical parameters.
2616 # Some kind of scaling is strongly recommended for real or integral
2617 # parameters (e.g., `UNIT_LINEAR_SCALE`).
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002618 },
2619 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002620 "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For
2621 # current versions of TensorFlow, this tag name should exactly match what is
2622 # shown in TensorBoard, including all scopes. For versions of TensorFlow
2623 # prior to 0.12, this should be only the tag passed to tf.Summary.
2624 # By default, "training/hptuning/metric" will be used.
2625 "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
2626 # You can reduce the time it takes to perform hyperparameter tuning by adding
2627 # trials in parallel. However, each trial only benefits from the information
2628 # gained in completed trials. That means that a trial does not get access to
2629 # the results of trials running at the same time, which could reduce the
2630 # quality of the overall optimization.
2631 #
2632 # Each trial will use the same scale tier and machine types.
2633 #
2634 # Defaults to one.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002635 },
Dan O'Mearadd494642020-05-01 07:42:23 -07002636 "args": [ # Optional. Command-line arguments passed to the training application when it
2637 # starts. If your job uses a custom container, then the arguments are passed
2638 # to the container's &lt;a class="external" target="_blank"
2639 # href="https://docs.docker.com/engine/reference/builder/#entrypoint"&gt;
2640 # `ENTRYPOINT`&lt;/a&gt; command.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002641 "A String",
2642 ],
2643 "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002644 "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
2645 # replica in the cluster will be of the type specified in `worker_type`.
2646 #
2647 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
2648 # set this value, you must also set `worker_type`.
2649 #
2650 # The default value is zero.
Dan O'Mearadd494642020-05-01 07:42:23 -07002651 "encryptionConfig": { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
2652 # protect resources created by a training job, instead of using Google's
2653 # default encryption. If this is set, then all resources created by the
2654 # training job will be encrypted with the customer-managed encryption key
2655 # that you specify.
2656 #
2657 # [Learn how and when to use CMEK with AI Platform
2658 # Training](/ai-platform/training/docs/cmek).
2659 # a resource.
2660 "kmsKeyName": "A String", # The Cloud KMS resource identifier of the customer-managed encryption key
2661 # used to protect a resource, such as a training job. It has the following
2662 # format:
2663 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
2664 },
2665 "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
2666 #
2667 # You should only set `parameterServerConfig.acceleratorConfig` if
2668 # `parameterServerType` is set to a Compute Engine machine type. [Learn
2669 # about restrictions on accelerator configurations for
2670 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2671 #
2672 # Set `parameterServerConfig.imageUri` only if you build a custom image for
2673 # your parameter server. If `parameterServerConfig.imageUri` has not been
2674 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
2675 # containers](/ai-platform/training/docs/distributed-training-containers).
2676 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
2677 # the one used in the custom container. This field is required if the replica
2678 # is a TPU worker that uses a custom container. Otherwise, do not specify
2679 # this field. This must be a [runtime version that currently supports
2680 # training with
2681 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
2682 #
2683 # Note that the version of TensorFlow included in a runtime version may
2684 # differ from the numbering of the runtime version itself, because it may
2685 # have a different [patch
2686 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
2687 # In this field, you must specify the runtime version (TensorFlow minor
2688 # version). For example, if your custom container runs TensorFlow `1.x.y`,
2689 # specify `1.x`.
2690 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
2691 # [Learn about restrictions on accelerator configurations for
2692 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2693 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
2694 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
2695 # [accelerators for online
2696 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
2697 "count": "A String", # The number of accelerators to attach to each machine running the job.
2698 "type": "A String", # The type of accelerator to use.
2699 },
2700 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
2701 # Registry. Learn more about [configuring custom
2702 # containers](/ai-platform/training/docs/distributed-training-containers).
2703 },
2704 "region": "A String", # Required. The region to run the training job in. See the [available
2705 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
2706 "pythonVersion": "A String", # Optional. The version of Python used in training. You must either specify
2707 # this field or specify `masterConfig.imageUri`.
2708 #
2709 # The following Python versions are available:
2710 #
2711 # * Python '3.7' is available when `runtime_version` is set to '1.15' or
2712 # later.
2713 # * Python '3.5' is available when `runtime_version` is set to a version
2714 # from '1.4' to '1.14'.
2715 # * Python '2.7' is available when `runtime_version` is set to '1.15' or
2716 # earlier.
2717 #
2718 # Read more about the Python versions available for [each runtime
2719 # version](/ml-engine/docs/runtime-version-list).
2720 "evaluatorType": "A String", # Optional. Specifies the type of virtual machine to use for your training
2721 # job's evaluator nodes.
2722 #
2723 # The supported values are the same as those described in the entry for
2724 # `masterType`.
2725 #
2726 # This value must be consistent with the category of machine type that
2727 # `masterType` uses. In other words, both must be Compute Engine machine
2728 # types or both must be legacy machine types.
2729 #
2730 # This value must be present when `scaleTier` is set to `CUSTOM` and
2731 # `evaluatorCount` is greater than zero.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002732 "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
2733 # job's parameter server.
2734 #
2735 # The supported values are the same as those described in the entry for
2736 # `master_type`.
2737 #
2738 # This value must be consistent with the category of machine type that
Dan O'Mearadd494642020-05-01 07:42:23 -07002739 # `masterType` uses. In other words, both must be Compute Engine machine
2740 # types or both must be legacy machine types.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002741 #
2742 # This value must be present when `scaleTier` is set to `CUSTOM` and
2743 # `parameter_server_count` is greater than zero.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002744 },
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002745 "endTime": "A String", # Output only. When the job processing was completed.
2746 "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
2747 "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
2748 "nodeHours": 3.14, # Node hours used by the batch prediction job.
2749 "predictionCount": "A String", # The number of generated predictions.
2750 "errorCount": "A String", # The number of data instances which resulted in errors.
2751 },
2752 "createTime": "A String", # Output only. When the job was created.
2753 },
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002754 ],
2755 }</pre>
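  <p>A minimal usage sketch, under stated assumptions: the client is built with <code>googleapiclient.discovery.build('ml', 'v1')</code> and the project ID is a placeholder; the filter string mirrors the gcloud examples above:</p>
  <pre>from googleapiclient import discovery

ml = discovery.build('ml', 'v1')

response = ml.projects().jobs().list(
    parent='projects/my-project',              # placeholder project
    filter='jobId:census* AND state:FAILED',   # same syntax as the gcloud examples
    pageSize=50).execute()

# 'jobs' is absent from the response when nothing matches.
for job in response.get('jobs', []):
    print(job['jobId'], job.get('state'))</pre>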
2756</div>
2757
2758<div class="method">
2759 <code class="details" id="list_next">list_next(previous_request, previous_response)</code>
2760 <pre>Retrieves the next page of results.
2761
2762Args:
2763 previous_request: The request for the previous page. (required)
2764 previous_response: The response from the request for the previous page. (required)
2765
2766Returns:
2767 A request object that you can call 'execute()' on to request the next
2768 page. Returns None if there are no more items in the collection.
2769 </pre>
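  <p>A minimal pagination sketch, assuming the same kind of client as in the sketches above and a placeholder project ID; <code>list_next</code> returns <code>None</code> once the collection is exhausted:</p>
  <pre>from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
jobs = ml.projects().jobs()

request = jobs.list(parent='projects/my-project')
while request is not None:
    response = request.execute()
    for job in response.get('jobs', []):
        print(job['jobId'])
    # Builds the follow-up request from next_page_token, or returns None.
    request = jobs.list_next(previous_request=request,
                             previous_response=response)</pre>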
2770</div>
2771
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002772<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -07002773 <code class="details" id="patch">patch(name, body=None, updateMask=None, x__xgafv=None)</code>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002774 <pre>Updates a specific job resource.
2775
2776Currently the only supported fields to update are `labels`.
2777
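For example, a minimal sketch of updating a job's labels with this client
(assumes a client built as `ml` via googleapiclient.discovery.build('ml', 'v1');
the project ID, job name, and label are placeholders):

  ml.projects().jobs().patch(
      name='projects/my-project/jobs/my_job',
      body={'labels': {'team': 'research'}},
      updateMask='labels').execute()
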
2778Args:
2779 name: string, Required. The job name. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -07002780 body: object, The request body.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002781 The object takes the form of:
2782
2783{ # Represents a training or prediction job.
2784 "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
2785 "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
2786 "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
2787 # Only set for hyperparameter tuning jobs.
2788 "trials": [ # Results for individual Hyperparameter trials.
2789 # Only set for hyperparameter tuning jobs.
2790 { # Represents the result of a single hyperparameter tuning trial from a
2791 # training job. The TrainingOutput object that is returned on successful
2792 # completion of a training job with hyperparameter tuning includes a list
2793 # of HyperparameterOutput objects, one for each successful trial.
Dan O'Mearadd494642020-05-01 07:42:23 -07002794 "startTime": "A String", # Output only. Start time for the trial.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002795 "hyperparameters": { # The hyperparameters given to this trial.
2796 "a_key": "A String",
2797 },
2798 "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
2799 "trainingStep": "A String", # The global training step for this metric.
2800 "objectiveValue": 3.14, # The objective value at this training step.
2801 },
Dan O'Mearadd494642020-05-01 07:42:23 -07002802 "state": "A String", # Output only. The detailed state of the trial.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002803 "allMetrics": [ # All recorded object metrics for this trial. This field is not currently
2804 # populated.
2805 { # An observed value of a metric.
2806 "trainingStep": "A String", # The global training step for this metric.
2807 "objectiveValue": 3.14, # The objective value at this training step.
2808 },
2809 ],
2810 "isTrialStoppedEarly": True or False, # True if the trial is stopped early.
Dan O'Mearadd494642020-05-01 07:42:23 -07002811 "endTime": "A String", # Output only. End time for the trial.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002812 "trialId": "A String", # The trial id for these results.
2813 "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
2814 # Only set for trials of built-in algorithms jobs that have succeeded.
2815 "framework": "A String", # Framework on which the built-in algorithm was trained.
2816 "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
2817 # saves the trained model. Only set for successful jobs that don't use
2818 # hyperparameter tuning.
2819 "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
2820 # trained.
2821 "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
2822 },
2823 },
2824 ],
2825 "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
2826 "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job.
2827 "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
2828 "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning
2829 # trials. See
2830 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
2831 # for more information. Only set for hyperparameter tuning jobs.
2832 "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
2833 # Only set for built-in algorithms jobs.
2834 "framework": "A String", # Framework on which the built-in algorithm was trained.
2835 "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
2836 # saves the trained model. Only set for successful jobs that don't use
2837 # hyperparameter tuning.
2838 "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
2839 # trained.
2840 "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
2841 },
2842 },
2843 "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
2844 "modelName": "A String", # Use this field if you want to use the default version for the specified
2845 # model. The string must use the following format:
2846 #
2847 # `"projects/YOUR_PROJECT/models/YOUR_MODEL"`
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002848 "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for
2849 # this job. Please refer to
2850 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
2851 # for information about how to use signatures.
2852 #
2853 # Defaults to
2854 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
2855 # , which is "serving_default".
Dan O'Mearadd494642020-05-01 07:42:23 -07002856 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch
2857 # prediction. If not set, AI Platform will pick the runtime version used
2858 # during the CreateVersion request for this model version, or choose the
2859 # latest stable version when model version information is not available
2860 # such as when the model is specified by uri.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002861 "batchSize": "A String", # Optional. Number of records per batch, defaults to 64.
2862 # The service will buffer batch_size number of records in memory before
2863 # invoking one TensorFlow prediction call internally, so take the record
2864 # size and available memory into consideration when setting this parameter.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002865 "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
2866 # Defaults to 10 if not specified.
2867 "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
2868 # the model to use.
2869 "outputPath": "A String", # Required. The output Google Cloud Storage location.
2870 "dataFormat": "A String", # Required. The format of the input data files.
2871 "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
2872 # string is formatted the same way as `model_version`, with the addition
2873 # of the version information:
2874 #
2875 # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"`
2876 "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
2877         # See the &lt;a href="/ml-engine/docs/tensorflow/regions"&gt;available regions&lt;/a&gt;
2878         # for AI Platform services.
2879     "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain
2880 # &lt;a href="/storage/docs/gsutil/addlhelp/WildcardNames"&gt;wildcards&lt;/a&gt;.
2881 "A String",
2882 ],
2883     "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON.
2884 },
2885   "labels": { # Optional. One or more labels that you can add to organize your jobs.
2886 # Each label is a key-value pair, where both the key and the value are
2887 # arbitrary strings that you supply.
2888 # For more information, see the documentation on
2889 # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
2890 "a_key": "A String",
2891 },
2892 "jobId": "A String", # Required. The user-specified id of the job.
2893 "state": "A String", # Output only. The detailed state of a job.
2894 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
2895 # prevent simultaneous updates of a job from overwriting each other.
2896 # It is strongly suggested that systems make use of the `etag` in the
2897 # read-modify-write cycle to perform job updates in order to avoid race
2898 # conditions: An `etag` is returned in the response to `GetJob`, and
2899 # systems are expected to put that etag in the request to `UpdateJob` to
2900 # ensure that their change will be applied to the same version of the job.
2901 "startTime": "A String", # Output only. When the job processing was started.
2902 "trainingInput": { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
2903 # to submit your training job, you can specify the input parameters as
2904 # command-line arguments and/or in a YAML configuration file referenced from
2905 # the --config command-line argument. For details, see the guide to [submitting
2906 # a training job](/ai-platform/training/docs/training-jobs).
2907     "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
2908         # job's master worker. You must specify this field when `scaleTier` is set to
2909 # `CUSTOM`.
2910         #
2911         # You can use certain Compute Engine machine types directly in this field.
2912         # The following types are supported:
2913 #
2914         # - `n1-standard-4`
2915 # - `n1-standard-8`
2916 # - `n1-standard-16`
2917 # - `n1-standard-32`
2918 # - `n1-standard-64`
2919 # - `n1-standard-96`
2920 # - `n1-highmem-2`
2921 # - `n1-highmem-4`
2922 # - `n1-highmem-8`
2923 # - `n1-highmem-16`
2924 # - `n1-highmem-32`
2925 # - `n1-highmem-64`
2926 # - `n1-highmem-96`
2927 # - `n1-highcpu-16`
2928 # - `n1-highcpu-32`
2929 # - `n1-highcpu-64`
2930 # - `n1-highcpu-96`
2931 #
2932         # Learn more about [using Compute Engine machine
2933 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
2934         #
2935         # Alternatively, you can use the following legacy machine types:
2936 #
2937 # - `standard`
2938 # - `large_model`
2939 # - `complex_model_s`
2940 # - `complex_model_m`
2941 # - `complex_model_l`
2942 # - `standard_gpu`
2943 # - `complex_model_m_gpu`
2944 # - `complex_model_l_gpu`
2945 # - `standard_p100`
2946 # - `complex_model_m_p100`
2947 # - `standard_v100`
2948 # - `large_model_v100`
2949 # - `complex_model_m_v100`
2950 # - `complex_model_l_v100`
2951 #
2952 # Learn more about [using legacy machine
2953 # types](/ml-engine/docs/machine-types#legacy-machine-types).
2954 #
2955 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
2956 # field. Learn more about the [special configuration options for training
2957 # with
2958 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
2959 "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
2960 # and other data needed for training. This path is passed to your TensorFlow
2961 # program as the '--job-dir' command-line argument. The benefit of specifying
2962 # this field is that Cloud ML validates the path for use in training.
2963 "scheduling": { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
2964 "maxRunningTime": "A String", # Optional. The maximum job running time, expressed in seconds. The field can
2965 # contain up to nine fractional digits, terminated by `s`. By default there
2966 # is no limit to the running time.
2967 #
2968 # If the training job is still running after this duration, AI Platform
2969 # Training cancels it.
2970 #
2971 # For example, if you want to ensure your job runs for no more than 2 hours,
2972 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
2973 # minute).
2974 #
2975 # If you submit your training job using the `gcloud` tool, you can [provide
2976 # this field in a `config.yaml`
2977 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
2978 # For example:
2979 #
2980 # ```yaml
2981 # trainingInput:
2982 # ...
2983 # scheduling:
2984 # maxRunningTime: 7200s
2985 # ...
2986 # ```
2987 },
2988 "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
2989 # job. Each replica in the cluster will be of the type specified in
2990 # `parameter_server_type`.
2991 #
2992 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
2993 # set this value, you must also set `parameter_server_type`.
2994 #
2995 # The default value is zero.
2996 "evaluatorCount": "A String", # Optional. The number of evaluator replicas to use for the training job.
2997 # Each replica in the cluster will be of the type specified in
2998 # `evaluator_type`.
2999 #
3000 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
3001 # set this value, you must also set `evaluator_type`.
3002 #
3003 # The default value is zero.
3004 "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
3005 # job's worker nodes.
3006 #
3007 # The supported values are the same as those described in the entry for
3008 # `masterType`.
3009 #
3010 # This value must be consistent with the category of machine type that
3011 # `masterType` uses. In other words, both must be Compute Engine machine
3012 # types or both must be legacy machine types.
3013 #
3014 # If you use `cloud_tpu` for this value, see special instructions for
3015 # [configuring a custom TPU
3016 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
3017 #
3018 # This value must be present when `scaleTier` is set to `CUSTOM` and
3019 # `workerCount` is greater than zero.
3020 "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
3021 # and parameter servers.
3022 "packageUris": [ # Required. The Google Cloud Storage location of the packages with
3023 # the training program and any additional dependencies.
3024 # The maximum number of package URIs is 100.
3025 "A String",
3026 ],
3027 "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
3028 #
3029 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
3030 # to a Compute Engine machine type. [Learn about restrictions on accelerator
3031 # configurations for
3032 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3033 #
3034 # Set `workerConfig.imageUri` only if you build a custom image for your
3035 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
3036 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
3037 # containers](/ai-platform/training/docs/distributed-training-containers).
3038 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
3039 # the one used in the custom container. This field is required if the replica
3040 # is a TPU worker that uses a custom container. Otherwise, do not specify
3041 # this field. This must be a [runtime version that currently supports
3042 # training with
3043 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3044 #
3045 # Note that the version of TensorFlow included in a runtime version may
3046 # differ from the numbering of the runtime version itself, because it may
3047 # have a different [patch
3048 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3049 # In this field, you must specify the runtime version (TensorFlow minor
3050 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3051 # specify `1.x`.
3052 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3053 # [Learn about restrictions on accelerator configurations for
3054 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3055 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3056 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3057 # [accelerators for online
3058 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3059 "count": "A String", # The number of accelerators to attach to each machine running the job.
3060 "type": "A String", # The type of accelerator to use.
3061 },
3062 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
3063 # Registry. Learn more about [configuring custom
3064 # containers](/ai-platform/training/docs/distributed-training-containers).
3065 },
3066 "evaluatorConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
3067 #
3068 # You should only set `evaluatorConfig.acceleratorConfig` if
3069 # `evaluatorType` is set to a Compute Engine machine type. [Learn
3070 # about restrictions on accelerator configurations for
3071 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3072 #
3073 # Set `evaluatorConfig.imageUri` only if you build a custom image for
3074 # your evaluator. If `evaluatorConfig.imageUri` has not been
3075 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
3076 # containers](/ai-platform/training/docs/distributed-training-containers).
3077 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
3078 # the one used in the custom container. This field is required if the replica
3079 # is a TPU worker that uses a custom container. Otherwise, do not specify
3080 # this field. This must be a [runtime version that currently supports
3081 # training with
3082 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3083 #
3084 # Note that the version of TensorFlow included in a runtime version may
3085 # differ from the numbering of the runtime version itself, because it may
3086 # have a different [patch
3087 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3088 # In this field, you must specify the runtime version (TensorFlow minor
3089 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3090 # specify `1.x`.
3091 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3092 # [Learn about restrictions on accelerator configurations for
3093 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3094 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3095 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3096 # [accelerators for online
3097 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3098 "count": "A String", # The number of accelerators to attach to each machine running the job.
3099 "type": "A String", # The type of accelerator to use.
3100 },
3101 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
3102 # Registry. Learn more about [configuring custom
3103 # containers](/ai-platform/training/docs/distributed-training-containers).
3104 },
3105 "useChiefInTfConfig": True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
3106 # variable when training with a custom container. Defaults to `false`. [Learn
3107 # more about this
3108 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
3109 #
3110 # This field has no effect for training jobs that don't use a custom
3111 # container.
3112 "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
3113 #
3114 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
3115 # to a Compute Engine machine type. Learn about [restrictions on accelerator
3116 # configurations for
3117 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3118 #
3119 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
3120 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
3121 # about [configuring custom
3122 # containers](/ai-platform/training/docs/distributed-training-containers).
3123 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
3124 # the one used in the custom container. This field is required if the replica
3125 # is a TPU worker that uses a custom container. Otherwise, do not specify
3126 # this field. This must be a [runtime version that currently supports
3127 # training with
3128 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3129 #
3130 # Note that the version of TensorFlow included in a runtime version may
3131 # differ from the numbering of the runtime version itself, because it may
3132 # have a different [patch
3133 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3134 # In this field, you must specify the runtime version (TensorFlow minor
3135 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3136 # specify `1.x`.
3137 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3138 # [Learn about restrictions on accelerator configurations for
3139 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3140 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3141 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3142 # [accelerators for online
3143 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3144 "count": "A String", # The number of accelerators to attach to each machine running the job.
3145 "type": "A String", # The type of accelerator to use.
3146 },
3147 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
3148 # Registry. Learn more about [configuring custom
3149 # containers](/ai-platform/training/docs/distributed-training-containers).
3150 },
3151 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. You must
3152 # either specify this field or specify `masterConfig.imageUri`.
3153 #
3154 # For more information, see the [runtime version
3155 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
3156 # manage runtime versions](/ai-platform/training/docs/versioning).
3157     "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune. A worked example appears after this schema.
3158 "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
3159 # the specified hyperparameters.
3160 #
3161 # Defaults to one.
3162 "goal": "A String", # Required. The type of goal to use for tuning. Available types are
3163 # `MAXIMIZE` and `MINIMIZE`.
3164 #
3165 # Defaults to `MAXIMIZE`.
3166 "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter
3167 # tuning job.
3168 # Uses the default AI Platform hyperparameter tuning
3169 # algorithm if unspecified.
3170 "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing
3171 # the hyperparameter tuning job. You can specify this field to override the
3172 # default failing criteria for AI Platform hyperparameter tuning jobs.
3173 #
3174 # Defaults to zero, which means the service decides when a hyperparameter
3175 # job should fail.
3176 "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
3177 # early stopping.
3178 "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to
3179 # continue with. The job id will be used to find the corresponding vizier
3180 # study guid and resume the study.
3181 "params": [ # Required. The set of parameters to tune.
3182 { # Represents a single hyperparameter to optimize.
3183 "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
3184 # should be unset if type is `CATEGORICAL`. This value should be integers if
3185 # type is `INTEGER`.
3186           "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
3187 # should be unset if type is `CATEGORICAL`. This value should be integers if
3188 # type is INTEGER.
3189           "discreteValues": [ # Required if type is `DISCRETE`.
3190 # A list of feasible points.
3191 # The list should be in strictly increasing order. For instance, this
3192 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
3193 # should not contain more than 1,000 values.
3194 3.14,
3195 ],
3196 "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
3197 # a HyperparameterSpec message. E.g., "learning_rate".
3198           "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
3199 "A String",
3200 ],
3201           "type": "A String", # Required. The type of the parameter.
3202 "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
3203 # Leave unset for categorical parameters.
3204 # Some kind of scaling is strongly recommended for real or integral
3205 # parameters (e.g., `UNIT_LINEAR_SCALE`).
3206 },
3207 ],
3208 "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For
3209 # current versions of TensorFlow, this tag name should exactly match what is
3210 # shown in TensorBoard, including all scopes. For versions of TensorFlow
3211 # prior to 0.12, this should be only the tag passed to tf.Summary.
3212 # By default, "training/hptuning/metric" will be used.
3213 "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
3214 # You can reduce the time it takes to perform hyperparameter tuning by adding
3215                 # trials in parallel. However, each trial only benefits from the information
3216 # gained in completed trials. That means that a trial does not get access to
3217 # the results of trials running at the same time, which could reduce the
3218 # quality of the overall optimization.
3219 #
3220 # Each trial will use the same scale tier and machine types.
3221 #
3222 # Defaults to one.
3223 },
3224     "args": [ # Optional. Command-line arguments passed to the training application when it
3225 # starts. If your job uses a custom container, then the arguments are passed
3226 # to the container's &lt;a class="external" target="_blank"
3227 # href="https://docs.docker.com/engine/reference/builder/#entrypoint"&gt;
3228 # `ENTRYPOINT`&lt;/a&gt; command.
3229       "A String",
3230 ],
3231 "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
3232     "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
3233 # replica in the cluster will be of the type specified in `worker_type`.
3234 #
3235 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
3236 # set this value, you must also set `worker_type`.
3237 #
3238 # The default value is zero.
3239     "encryptionConfig": { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
3240 # protect resources created by a training job, instead of using Google's
3241 # default encryption. If this is set, then all resources created by the
3242 # training job will be encrypted with the customer-managed encryption key
3243 # that you specify.
3244 #
3245 # [Learn how and when to use CMEK with AI Platform
3246 # Training](/ai-platform/training/docs/cmek).
3247 # a resource.
3248 "kmsKeyName": "A String", # The Cloud KMS resource identifier of the customer-managed encryption key
3249 # used to protect a resource, such as a training job. It has the following
3250 # format:
3251 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
3252 },
3253 "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
3254 #
3255 # You should only set `parameterServerConfig.acceleratorConfig` if
3256 # `parameterServerType` is set to a Compute Engine machine type. [Learn
3257 # about restrictions on accelerator configurations for
3258 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3259 #
3260 # Set `parameterServerConfig.imageUri` only if you build a custom image for
3261 # your parameter server. If `parameterServerConfig.imageUri` has not been
3262 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
3263 # containers](/ai-platform/training/docs/distributed-training-containers).
3264 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
3265 # the one used in the custom container. This field is required if the replica
3266 # is a TPU worker that uses a custom container. Otherwise, do not specify
3267 # this field. This must be a [runtime version that currently supports
3268 # training with
3269 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3270 #
3271 # Note that the version of TensorFlow included in a runtime version may
3272 # differ from the numbering of the runtime version itself, because it may
3273 # have a different [patch
3274 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3275 # In this field, you must specify the runtime version (TensorFlow minor
3276 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3277 # specify `1.x`.
3278 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3279 # [Learn about restrictions on accelerator configurations for
3280 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3281 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3282 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3283 # [accelerators for online
3284 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3285 "count": "A String", # The number of accelerators to attach to each machine running the job.
3286 "type": "A String", # The type of accelerator to use.
3287 },
3288 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
3289 # Registry. Learn more about [configuring custom
3290 # containers](/ai-platform/training/docs/distributed-training-containers).
3291 },
3292 "region": "A String", # Required. The region to run the training job in. See the [available
3293 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
3294 "pythonVersion": "A String", # Optional. The version of Python used in training. You must either specify
3295 # this field or specify `masterConfig.imageUri`.
3296 #
3297 # The following Python versions are available:
3298 #
3299 # * Python '3.7' is available when `runtime_version` is set to '1.15' or
3300 # later.
3301 # * Python '3.5' is available when `runtime_version` is set to a version
3302 # from '1.4' to '1.14'.
3303 # * Python '2.7' is available when `runtime_version` is set to '1.15' or
3304 # earlier.
3305 #
3306 # Read more about the Python versions available for [each runtime
3307 # version](/ml-engine/docs/runtime-version-list).
3308 "evaluatorType": "A String", # Optional. Specifies the type of virtual machine to use for your training
3309 # job's evaluator nodes.
3310 #
3311 # The supported values are the same as those described in the entry for
3312 # `masterType`.
3313 #
3314 # This value must be consistent with the category of machine type that
3315 # `masterType` uses. In other words, both must be Compute Engine machine
3316 # types or both must be legacy machine types.
3317 #
3318 # This value must be present when `scaleTier` is set to `CUSTOM` and
3319 # `evaluatorCount` is greater than zero.
3320     "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
3321 # job's parameter server.
3322 #
3323 # The supported values are the same as those described in the entry for
3324 # `master_type`.
3325 #
3326 # This value must be consistent with the category of machine type that
3327         # `masterType` uses. In other words, both must be Compute Engine machine
3328 # types or both must be legacy machine types.
3329         #
3330 # This value must be present when `scaleTier` is set to `CUSTOM` and
3331 # `parameter_server_count` is greater than zero.
3332   },
3333   "endTime": "A String", # Output only. When the job processing was completed.
3334 "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
3335 "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
3336 "nodeHours": 3.14, # Node hours used by the batch prediction job.
3337 "predictionCount": "A String", # The number of generated predictions.
3338 "errorCount": "A String", # The number of data instances which resulted in errors.
3339 },
3340 "createTime": "A String", # Output only. When the job was created.
3341}
3342
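For orientation, here is a hypothetical `trainingInput.hyperparameters` block assembled only from the fields documented in the schema above. The metric tag, parameter name, and value ranges are illustrative assumptions, not recommendations, and hyperparameter tuning is configured when a job is created rather than through this `patch` method.

```python
# A sketch of a HyperparameterSpec dict, using only fields from the schema
# above. Every concrete value below is an illustrative assumption.
hyperparameters = {
    "goal": "MAXIMIZE",
    "hyperparameterMetricTag": "accuracy",  # assumed TensorFlow summary tag
    "maxTrials": 20,
    "maxParallelTrials": 2,
    "enableTrialEarlyStopping": True,
    "params": [
        {
            "parameterName": "learning_rate",  # hypothetical parameter
            "type": "DOUBLE",
            "minValue": 0.0001,
            "maxValue": 0.1,
            "scaleType": "UNIT_LINEAR_SCALE",
        },
    ],
}
```

A block like this would sit under `trainingInput` in the Job body supplied at job creation time; as noted below, `patch` itself currently accepts updates only to `labels` and `etag`.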
3343 updateMask: string, Required. Specifies the path, relative to `Job`, of the field to update.
3344To adopt the etag mechanism, include the `etag` field in the mask and include the
3345`etag` value in your job resource.
3346
3347For example, to change the labels of a job, the `update_mask` parameter
3348would be specified as `labels`, `etag`, and the
3349`PATCH` request body would specify the new value, as follows:
3350 {
3351 "labels": {
3352 "owner": "Google",
3353 "color": "Blue"
3354       },
3355 "etag": "33a64df551425fcc55e4d42a148795d9f25f89d4"
3356 }
3357If `etag` matches the one on the server, the labels of the job will be
3358replaced with the given ones, and the server-side `etag` will be
3359recalculated.
3360
3361Currently the only supported update masks are `labels` and `etag`.
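To make the read-modify-write flow above concrete, here is a minimal sketch using the `google-api-python-client` discovery client. It assumes application-default credentials and placeholder project and job ids, and it omits error handling.

```python
from googleapiclient import discovery

# Hypothetical identifiers; substitute your own project and job id.
job_name = 'projects/my-project/jobs/my_training_job'

ml = discovery.build('ml', 'v1')

# Read: fetch the current job, including its `etag`.
job = ml.projects().jobs().get(name=job_name).execute()

# Modify: change only the labels, echoing the `etag` back so the update is
# rejected if the job changed on the server in the meantime.
patch_body = {
    'labels': {'owner': 'alice', 'phase': 'test'},
    'etag': job['etag'],
}

# Write: `labels` and `etag` are currently the only supported update masks.
updated = ml.projects().jobs().patch(
    name=job_name,
    body=patch_body,
    updateMask='labels,etag',
).execute()

print(updated['labels'])
```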
3362 x__xgafv: string, V1 error format.
3363 Allowed values
3364 1 - v1 error format
3365 2 - v2 error format
3366
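The response documented below is a Job resource. A common follow-up, sketched here under the same assumptions as the example above, is to poll the job until it reaches a terminal state and then inspect `trainingOutput` or `errorMessage`; the terminal state names used here (`SUCCEEDED`, `FAILED`, `CANCELLED`) should be checked against the Job state enum in the current API reference.

```python
import time

from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
job_name = 'projects/my-project/jobs/my_training_job'  # hypothetical id

# Poll until the job leaves its active states.
TERMINAL_STATES = {'SUCCEEDED', 'FAILED', 'CANCELLED'}
while True:
    job = ml.projects().jobs().get(name=job_name).execute()
    if job.get('state') in TERMINAL_STATES:
        break
    time.sleep(60)  # polling interval is an arbitrary choice

if job['state'] == 'SUCCEEDED':
    # Hyperparameter tuning jobs report per-trial results under trainingOutput.
    trials = job.get('trainingOutput', {}).get('trials', [])
    print('completed trials:', len(trials))
else:
    print('job ended in state', job['state'], job.get('errorMessage', ''))
```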
3367Returns:
3368 An object of the form:
3369
3370 { # Represents a training or prediction job.
3371 "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
3372 "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
3373 "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
3374 # Only set for hyperparameter tuning jobs.
3375 "trials": [ # Results for individual Hyperparameter trials.
3376 # Only set for hyperparameter tuning jobs.
3377 { # Represents the result of a single hyperparameter tuning trial from a
3378 # training job. The TrainingOutput object that is returned on successful
3379 # completion of a training job with hyperparameter tuning includes a list
3380 # of HyperparameterOutput objects, one for each successful trial.
3381         "startTime": "A String", # Output only. Start time for the trial.
3382         "hyperparameters": { # The hyperparameters given to this trial.
3383 "a_key": "A String",
3384 },
3385 "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
3386 "trainingStep": "A String", # The global training step for this metric.
3387 "objectiveValue": 3.14, # The objective value at this training step.
3388 },
3389         "state": "A String", # Output only. The detailed state of the trial.
3390         "allMetrics": [ # All recorded objective metrics for this trial. This field is not currently
3391 # populated.
3392 { # An observed value of a metric.
3393 "trainingStep": "A String", # The global training step for this metric.
3394 "objectiveValue": 3.14, # The objective value at this training step.
3395 },
3396 ],
3397 "isTrialStoppedEarly": True or False, # True if the trial is stopped early.
3398         "endTime": "A String", # Output only. End time for the trial.
3399         "trialId": "A String", # The trial id for these results.
3400 "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
3401 # Only set for trials of built-in algorithms jobs that have succeeded.
3402 "framework": "A String", # Framework on which the built-in algorithm was trained.
3403 "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
3404 # saves the trained model. Only set for successful jobs that don't use
3405 # hyperparameter tuning.
3406 "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
3407 # trained.
3408 "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
3409 },
3410 },
3411 ],
3412 "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
3413 "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job.
3414 "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
3415 "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning
3416 # trials. See
3417 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
3418 # for more information. Only set for hyperparameter tuning jobs.
3419 "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
3420 # Only set for built-in algorithms jobs.
3421 "framework": "A String", # Framework on which the built-in algorithm was trained.
3422 "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
3423 # saves the trained model. Only set for successful jobs that don't use
3424 # hyperparameter tuning.
3425 "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
3426 # trained.
3427 "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
3428 },
3429 },
3430 "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
3431 "modelName": "A String", # Use this field if you want to use the default version for the specified
3432 # model. The string must use the following format:
3433 #
3434 # `"projects/YOUR_PROJECT/models/YOUR_MODEL"`
3435     "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for
3436 # this job. Please refer to
3437 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
3438 # for information about how to use signatures.
3439 #
3440 # Defaults to
3441 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
3442 # , which is "serving_default".
3443     "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch
3444 # prediction. If not set, AI Platform will pick the runtime version used
3445 # during the CreateVersion request for this model version, or choose the
3446 # latest stable version when model version information is not available
3447 # such as when the model is specified by uri.
3448     "batchSize": "A String", # Optional. Number of records per batch, defaults to 64.
3449 # The service will buffer batch_size number of records in memory before
3450 # invoking one Tensorflow prediction call internally. So take the record
3451 # size and memory available into consideration when setting this parameter.
3452     "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
3453 # Defaults to 10 if not specified.
3454 "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
3455 # the model to use.
3456 "outputPath": "A String", # Required. The output Google Cloud Storage location.
3457 "dataFormat": "A String", # Required. The format of the input data files.
3458 "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
3459 # string is formatted the same way as `model_version`, with the addition
3460 # of the version information:
3461 #
3462 # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"`
3463 "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
3464         # See the &lt;a href="/ml-engine/docs/tensorflow/regions"&gt;available regions&lt;/a&gt;
3465         # for AI Platform services.
3466     "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain
3467 # &lt;a href="/storage/docs/gsutil/addlhelp/WildcardNames"&gt;wildcards&lt;/a&gt;.
3468 "A String",
3469 ],
3470     "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON.
3471 },
3472   "labels": { # Optional. One or more labels that you can add to organize your jobs.
3473 # Each label is a key-value pair, where both the key and the value are
3474 # arbitrary strings that you supply.
3475 # For more information, see the documentation on
3476 # &lt;a href="/ml-engine/docs/tensorflow/resource-labels"&gt;using labels&lt;/a&gt;.
3477 "a_key": "A String",
3478 },
3479 "jobId": "A String", # Required. The user-specified id of the job.
3480 "state": "A String", # Output only. The detailed state of a job.
3481 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
3482 # prevent simultaneous updates of a job from overwriting each other.
3483 # It is strongly suggested that systems make use of the `etag` in the
3484 # read-modify-write cycle to perform job updates in order to avoid race
3485 # conditions: An `etag` is returned in the response to `GetJob`, and
3486 # systems are expected to put that etag in the request to `UpdateJob` to
3487 # ensure that their change will be applied to the same version of the job.
3488 "startTime": "A String", # Output only. When the job processing was started.
3489 "trainingInput": { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
3490 # to submit your training job, you can specify the input parameters as
3491 # command-line arguments and/or in a YAML configuration file referenced from
3492 # the --config command-line argument. For details, see the guide to [submitting
3493 # a training job](/ai-platform/training/docs/training-jobs).
3494     "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
3495         # job's master worker. You must specify this field when `scaleTier` is set to
3496 # `CUSTOM`.
3497         #
3498         # You can use certain Compute Engine machine types directly in this field.
3499         # The following types are supported:
3500 #
3501         # - `n1-standard-4`
3502 # - `n1-standard-8`
3503 # - `n1-standard-16`
3504 # - `n1-standard-32`
3505 # - `n1-standard-64`
3506 # - `n1-standard-96`
3507 # - `n1-highmem-2`
3508 # - `n1-highmem-4`
3509 # - `n1-highmem-8`
3510 # - `n1-highmem-16`
3511 # - `n1-highmem-32`
3512 # - `n1-highmem-64`
3513 # - `n1-highmem-96`
3514 # - `n1-highcpu-16`
3515 # - `n1-highcpu-32`
3516 # - `n1-highcpu-64`
3517 # - `n1-highcpu-96`
3518 #
3519         # Learn more about [using Compute Engine machine
3520 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
3521         #
3522         # Alternatively, you can use the following legacy machine types:
3523 #
3524 # - `standard`
3525 # - `large_model`
3526 # - `complex_model_s`
3527 # - `complex_model_m`
3528 # - `complex_model_l`
3529 # - `standard_gpu`
3530 # - `complex_model_m_gpu`
3531 # - `complex_model_l_gpu`
3532 # - `standard_p100`
3533 # - `complex_model_m_p100`
3534 # - `standard_v100`
3535 # - `large_model_v100`
3536 # - `complex_model_m_v100`
3537 # - `complex_model_l_v100`
3538 #
3539 # Learn more about [using legacy machine
3540 # types](/ml-engine/docs/machine-types#legacy-machine-types).
3541 #
3542 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
3543 # field. Learn more about the [special configuration options for training
3544 # with
3545 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
3546 "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
3547 # and other data needed for training. This path is passed to your TensorFlow
3548 # program as the '--job-dir' command-line argument. The benefit of specifying
3549 # this field is that Cloud ML validates the path for use in training.
3550 "scheduling": { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
3551 "maxRunningTime": "A String", # Optional. The maximum job running time, expressed in seconds. The field can
3552 # contain up to nine fractional digits, terminated by `s`. By default there
3553 # is no limit to the running time.
3554 #
3555 # If the training job is still running after this duration, AI Platform
3556 # Training cancels it.
3557 #
3558 # For example, if you want to ensure your job runs for no more than 2 hours,
3559 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
3560 # minute).
3561 #
3562 # If you submit your training job using the `gcloud` tool, you can [provide
3563 # this field in a `config.yaml`
3564 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
3565 # For example:
3566 #
3567 # ```yaml
3568 # trainingInput:
3569 # ...
3570 # scheduling:
3571 # maxRunningTime: 7200s
3572 # ...
3573 # ```
3574 },
3575 "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
3576 # job. Each replica in the cluster will be of the type specified in
3577 # `parameter_server_type`.
3578 #
3579 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
3580 # set this value, you must also set `parameter_server_type`.
3581 #
3582 # The default value is zero.
3583 "evaluatorCount": "A String", # Optional. The number of evaluator replicas to use for the training job.
3584 # Each replica in the cluster will be of the type specified in
3585 # `evaluator_type`.
3586 #
3587 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
3588 # set this value, you must also set `evaluator_type`.
3589 #
3590 # The default value is zero.
3591 "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
3592 # job's worker nodes.
3593 #
3594 # The supported values are the same as those described in the entry for
3595 # `masterType`.
3596 #
3597 # This value must be consistent with the category of machine type that
3598 # `masterType` uses. In other words, both must be Compute Engine machine
3599 # types or both must be legacy machine types.
3600 #
3601 # If you use `cloud_tpu` for this value, see special instructions for
3602 # [configuring a custom TPU
3603 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
3604 #
3605 # This value must be present when `scaleTier` is set to `CUSTOM` and
3606 # `workerCount` is greater than zero.
3607 "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
3608 # and parameter servers.
3609 "packageUris": [ # Required. The Google Cloud Storage location of the packages with
3610 # the training program and any additional dependencies.
3611 # The maximum number of package URIs is 100.
3612 "A String",
3613 ],
3614 "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
3615 #
3616 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
3617 # to a Compute Engine machine type. [Learn about restrictions on accelerator
3618 # configurations for
3619 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3620 #
3621 # Set `workerConfig.imageUri` only if you build a custom image for your
3622 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
3623 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
3624 # containers](/ai-platform/training/docs/distributed-training-containers).
3625 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
3626 # the one used in the custom container. This field is required if the replica
3627 # is a TPU worker that uses a custom container. Otherwise, do not specify
3628 # this field. This must be a [runtime version that currently supports
3629 # training with
3630 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3631 #
3632 # Note that the version of TensorFlow included in a runtime version may
3633 # differ from the numbering of the runtime version itself, because it may
3634 # have a different [patch
3635 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3636 # In this field, you must specify the runtime version (TensorFlow minor
3637 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3638 # specify `1.x`.
3639 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3640 # [Learn about restrictions on accelerator configurations for
3641 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3642 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3643 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3644 # [accelerators for online
3645 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3646 "count": "A String", # The number of accelerators to attach to each machine running the job.
3647 "type": "A String", # The type of accelerator to use.
3648 },
3649 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
3650 # Registry. Learn more about [configuring custom
3651 # containers](/ai-platform/training/docs/distributed-training-containers).
3652 },
3653 "evaluatorConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
3654 #
3655 # You should only set `evaluatorConfig.acceleratorConfig` if
3656 # `evaluatorType` is set to a Compute Engine machine type. [Learn
3657 # about restrictions on accelerator configurations for
3658 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3659 #
3660 # Set `evaluatorConfig.imageUri` only if you build a custom image for
3661 # your evaluator. If `evaluatorConfig.imageUri` has not been
3662 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
3663 # containers](/ai-platform/training/docs/distributed-training-containers).
3664 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
3665 # the one used in the custom container. This field is required if the replica
3666 # is a TPU worker that uses a custom container. Otherwise, do not specify
3667 # this field. This must be a [runtime version that currently supports
3668 # training with
3669 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3670 #
3671 # Note that the version of TensorFlow included in a runtime version may
3672 # differ from the numbering of the runtime version itself, because it may
3673 # have a different [patch
3674 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3675 # In this field, you must specify the runtime version (TensorFlow minor
3676 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3677 # specify `1.x`.
3678 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3679 # [Learn about restrictions on accelerator configurations for
3680 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3681 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3682 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3683 # [accelerators for online
3684 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3685 "count": "A String", # The number of accelerators to attach to each machine running the job.
3686 "type": "A String", # The type of accelerator to use.
3687 },
3688 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
3689 # Registry. Learn more about [configuring custom
3690 # containers](/ai-platform/training/docs/distributed-training-containers).
3691 },
3692 "useChiefInTfConfig": True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
3693 # variable when training with a custom container. Defaults to `false`. [Learn
3694 # more about this
3695 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
3696 #
3697 # This field has no effect for training jobs that don't use a custom
3698 # container.
3699 "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
3700 #
3701 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
3702 # to a Compute Engine machine type. Learn about [restrictions on accelerator
3703 # configurations for
3704 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3705 #
3706 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
3707 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
3708 # about [configuring custom
3709 # containers](/ai-platform/training/docs/distributed-training-containers).
3710 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
3711 # the one used in the custom container. This field is required if the replica
3712 # is a TPU worker that uses a custom container. Otherwise, do not specify
3713 # this field. This must be a [runtime version that currently supports
3714 # training with
3715 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3716 #
3717 # Note that the version of TensorFlow included in a runtime version may
3718 # differ from the numbering of the runtime version itself, because it may
3719 # have a different [patch
3720 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3721 # In this field, you must specify the runtime version (TensorFlow minor
3722 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3723 # specify `1.x`.
3724 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3725 # [Learn about restrictions on accelerator configurations for
3726 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3727 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3728 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3729 # [accelerators for online
3730 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3731 "count": "A String", # The number of accelerators to attach to each machine running the job.
3732 "type": "A String", # The type of accelerator to use.
3733 },
3734 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
3735 # Registry. Learn more about [configuring custom
3736 # containers](/ai-platform/training/docs/distributed-training-containers).
3737 },
3738 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. You must
3739 # either specify this field or specify `masterConfig.imageUri`.
3740 #
3741 # For more information, see the [runtime version
3742 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
3743 # manage runtime versions](/ai-platform/training/docs/versioning).
3744     "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
3745 "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
3746 # the specified hyperparameters.
3747 #
3748 # Defaults to one.
3749 "goal": "A String", # Required. The type of goal to use for tuning. Available types are
3750 # `MAXIMIZE` and `MINIMIZE`.
3751 #
3752 # Defaults to `MAXIMIZE`.
3753 "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter
3754 # tuning job.
3755 # Uses the default AI Platform hyperparameter tuning
3756 # algorithm if unspecified.
3757 "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing
3758 # the hyperparameter tuning job. You can specify this field to override the
3759 # default failing criteria for AI Platform hyperparameter tuning jobs.
3760 #
3761 # Defaults to zero, which means the service decides when a hyperparameter
3762 # job should fail.
3763 "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
3764 # early stopping.
3765 "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to
3766 # continue with. The job id will be used to find the corresponding vizier
3767 # study guid and resume the study.
3768 "params": [ # Required. The set of parameters to tune.
3769 { # Represents a single hyperparameter to optimize.
3770 "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
3771 # should be unset if type is `CATEGORICAL`. This value should be integers if
3772 # type is `INTEGER`.
3773           "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
3774 # should be unset if type is `CATEGORICAL`. This value should be integers if
3775 # type is INTEGER.
3776           "discreteValues": [ # Required if type is `DISCRETE`.
3777 # A list of feasible points.
3778 # The list should be in strictly increasing order. For instance, this
3779 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
3780 # should not contain more than 1,000 values.
3781 3.14,
3782 ],
3783 "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
3784 # a HyperparameterSpec message. E.g., "learning_rate".
3785           "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
3786 "A String",
3787 ],
3788           "type": "A String", # Required. The type of the parameter.
3789 "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
3790 # Leave unset for categorical parameters.
3791 # Some kind of scaling is strongly recommended for real or integral
3792 # parameters (e.g., `UNIT_LINEAR_SCALE`).
3793 },
3794 ],
3795 "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For
3796 # current versions of TensorFlow, this tag name should exactly match what is
3797 # shown in TensorBoard, including all scopes. For versions of TensorFlow
3798 # prior to 0.12, this should be only the tag passed to tf.Summary.
3799 # By default, "training/hptuning/metric" will be used.
3800 "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
3801 # You can reduce the time it takes to perform hyperparameter tuning by adding
3802                 # trials in parallel. However, each trial only benefits from the information
3803 # gained in completed trials. That means that a trial does not get access to
3804 # the results of trials running at the same time, which could reduce the
3805 # quality of the overall optimization.
3806 #
3807 # Each trial will use the same scale tier and machine types.
3808 #
3809 # Defaults to one.
3810 },
3811     "args": [ # Optional. Command-line arguments passed to the training application when it
3812 # starts. If your job uses a custom container, then the arguments are passed
3813 # to the container's &lt;a class="external" target="_blank"
3814 # href="https://docs.docker.com/engine/reference/builder/#entrypoint"&gt;
3815 # `ENTRYPOINT`&lt;/a&gt; command.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003816 "A String",
3817 ],
3818 "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003819 "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
3820 # replica in the cluster will be of the type specified in `worker_type`.
3821 #
3822 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
3823 # set this value, you must also set `worker_type`.
3824 #
3825 # The default value is zero.
Dan O'Mearadd494642020-05-01 07:42:23 -07003826 "encryptionConfig": { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
3827 # protect resources created by a training job, instead of using Google's
3828 # default encryption. If this is set, then all resources created by the
3829 # training job will be encrypted with the customer-managed encryption key
3830 # that you specify.
3831 #
3832 # [Learn how and when to use CMEK with AI Platform
3833 # Training](/ai-platform/training/docs/cmek).
3834 # a resource.
3835 "kmsKeyName": "A String", # The Cloud KMS resource identifier of the customer-managed encryption key
3836 # used to protect a resource, such as a training job. It has the following
3837 # format:
3838 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
3839 },
3840 "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
3841 #
3842 # You should only set `parameterServerConfig.acceleratorConfig` if
3843 # `parameterServerType` is set to a Compute Engine machine type. [Learn
3844 # about restrictions on accelerator configurations for
3845 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3846 #
3847 # Set `parameterServerConfig.imageUri` only if you build a custom image for
3848 # your parameter server. If `parameterServerConfig.imageUri` has not been
3849 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
3850 # containers](/ai-platform/training/docs/distributed-training-containers).
3851 "tpuTfVersion": "A String", # The AI Platform runtime version that includes a TensorFlow version matching
3852 # the one used in the custom container. This field is required if the replica
3853 # is a TPU worker that uses a custom container. Otherwise, do not specify
3854 # this field. This must be a [runtime version that currently supports
3855 # training with
3856 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3857 #
3858 # Note that the version of TensorFlow included in a runtime version may
3859 # differ from the numbering of the runtime version itself, because it may
3860 # have a different [patch
3861 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3862 # In this field, you must specify the runtime version (TensorFlow minor
3863 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3864 # specify `1.x`.
3865 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3866 # [Learn about restrictions on accelerator configurations for
3867 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3868 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3869 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3870 # [accelerators for online
3871 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3872 "count": "A String", # The number of accelerators to attach to each machine running the job.
3873 "type": "A String", # The type of accelerator to use.
3874 },
3875 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
3876 # Registry. Learn more about [configuring custom
3877 # containers](/ai-platform/training/docs/distributed-training-containers).
3878 },
3879 "region": "A String", # Required. The region to run the training job in. See the [available
3880 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
3881 "pythonVersion": "A String", # Optional. The version of Python used in training. You must either specify
3882 # this field or specify `masterConfig.imageUri`.
3883 #
3884 # The following Python versions are available:
3885 #
3886 # * Python '3.7' is available when `runtime_version` is set to '1.15' or
3887 # later.
3888 # * Python '3.5' is available when `runtime_version` is set to a version
3889 # from '1.4' to '1.14'.
3890 # * Python '2.7' is available when `runtime_version` is set to '1.15' or
3891 # earlier.
3892 #
3893 # Read more about the Python versions available for [each runtime
3894 # version](/ml-engine/docs/runtime-version-list).
3895 "evaluatorType": "A String", # Optional. Specifies the type of virtual machine to use for your training
3896 # job's evaluator nodes.
3897 #
3898 # The supported values are the same as those described in the entry for
3899 # `masterType`.
3900 #
3901 # This value must be consistent with the category of machine type that
3902 # `masterType` uses. In other words, both must be Compute Engine machine
3903 # types or both must be legacy machine types.
3904 #
3905 # This value must be present when `scaleTier` is set to `CUSTOM` and
3906 # `evaluatorCount` is greater than zero.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003907 "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
3908 # job's parameter server.
3909 #
3910 # The supported values are the same as those described in the entry for
3911 # `master_type`.
3912 #
3913 # This value must be consistent with the category of machine type that
Dan O'Mearadd494642020-05-01 07:42:23 -07003914 # `masterType` uses. In other words, both must be Compute Engine machine
3915 # types or both must be legacy machine types.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003916 #
3917 # This value must be present when `scaleTier` is set to `CUSTOM` and
3918 # `parameter_server_count` is greater than zero.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003919 },
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003920 "endTime": "A String", # Output only. When the job processing was completed.
3921 "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
3922 "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
3923 "nodeHours": 3.14, # Node hours used by the batch prediction job.
3924 "predictionCount": "A String", # The number of generated predictions.
3925 "errorCount": "A String", # The number of data instances which resulted in errors.
3926 },
3927 "createTime": "A String", # Output only. When the job was created.
3928 }</pre>
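<p>The hyperparameter tuning fields described above (<code>params</code>, <code>maxParallelTrials</code>, <code>maxFailedTrials</code>, <code>enableTrialEarlyStopping</code>, <code>hyperparameterMetricTag</code>) are set inside <code>trainingInput.hyperparameters</code> when a job is submitted. The sketch below is illustrative only and is not part of the generated reference: the project ID, bucket, package, and module names are placeholders, and a few surrounding fields (such as <code>goal</code> and <code>maxTrials</code>) come from other parts of the Job schema.</p>
<pre>
# Minimal, hypothetical sketch of submitting a training job with a
# hyperparameter tuning spec via the Python client library.
from googleapiclient import discovery

job_spec = {
    'jobId': 'hp_tuning_example',                    # placeholder job ID
    'trainingInput': {
        'scaleTier': 'STANDARD_1',
        'region': 'us-central1',
        'pythonModule': 'trainer.task',              # placeholder module name
        'packageUris': ['gs://your-bucket/trainer-0.1.tar.gz'],  # placeholder
        'hyperparameters': {
            'goal': 'MAXIMIZE',
            'hyperparameterMetricTag': 'training/hptuning/metric',
            'maxTrials': 10,
            'maxParallelTrials': 2,
            'maxFailedTrials': 3,
            'enableTrialEarlyStopping': True,
            'params': [
                {
                    'parameterName': 'learning_rate',
                    'type': 'DOUBLE',
                    'minValue': 0.0001,
                    'maxValue': 0.1,
                    'scaleType': 'UNIT_LOG_SCALE',
                },
            ],
        },
    },
}

ml = discovery.build('ml', 'v1')
response = ml.projects().jobs().create(
    parent='projects/YOUR_PROJECT_ID',               # placeholder project
    body=job_spec,
).execute()
</pre>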
3929</div>
3930
3931<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -07003932 <code class="details" id="setIamPolicy">setIamPolicy(resource, body=None, x__xgafv=None)</code>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003933 <pre>Sets the access control policy on the specified resource. Replaces any
3934existing policy.
3935
Dan O'Mearadd494642020-05-01 07:42:23 -07003936Can return Public Errors: NOT_FOUND, INVALID_ARGUMENT, and PERMISSION_DENIED.
3937
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003938Args:
3939 resource: string, REQUIRED: The resource for which the policy is being specified.
3940See the operation documentation for the appropriate value for this field. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -07003941 body: object, The request body.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003942 The object takes the form of:
3943
3944{ # Request message for `SetIamPolicy` method.
Dan O'Mearadd494642020-05-01 07:42:23 -07003945 "policy": { # An Identity and Access Management (IAM) policy, which specifies access # REQUIRED: The complete policy to be applied to the `resource`. The size of
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003946      # the policy is limited to a few tens of KB. An empty policy is a
3947 # valid policy but certain Cloud Platform services (such as Projects)
3948      # might reject it.
Dan O'Mearadd494642020-05-01 07:42:23 -07003949 # controls for Google Cloud resources.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003950 #
3951 #
Dan O'Mearadd494642020-05-01 07:42:23 -07003952 # A `Policy` is a collection of `bindings`. A `binding` binds one or more
3953 # `members` to a single `role`. Members can be user accounts, service accounts,
3954 # Google groups, and domains (such as G Suite). A `role` is a named list of
3955 # permissions; each `role` can be an IAM predefined role or a user-created
3956 # custom role.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003957 #
Dan O'Mearadd494642020-05-01 07:42:23 -07003958 # Optionally, a `binding` can specify a `condition`, which is a logical
3959 # expression that allows access to a resource only if the expression evaluates
3960 # to `true`. A condition can add constraints based on attributes of the
3961 # request, the resource, or both.
3962 #
3963 # **JSON example:**
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003964 #
3965 # {
3966 # "bindings": [
3967 # {
Dan O'Mearadd494642020-05-01 07:42:23 -07003968 # "role": "roles/resourcemanager.organizationAdmin",
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003969 # "members": [
3970 # "user:mike@example.com",
3971 # "group:admins@example.com",
3972 # "domain:google.com",
Dan O'Mearadd494642020-05-01 07:42:23 -07003973 # "serviceAccount:my-project-id@appspot.gserviceaccount.com"
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003974 # ]
3975 # },
3976 # {
Dan O'Mearadd494642020-05-01 07:42:23 -07003977 # "role": "roles/resourcemanager.organizationViewer",
3978 # "members": ["user:eve@example.com"],
3979 # "condition": {
3980 # "title": "expirable access",
3981 # "description": "Does not grant access after Sep 2020",
3982 # "expression": "request.time &lt; timestamp('2020-10-01T00:00:00.000Z')",
3983 # }
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003984 # }
Dan O'Mearadd494642020-05-01 07:42:23 -07003985 # ],
3986 # "etag": "BwWWja0YfJA=",
3987 # "version": 3
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003988 # }
3989 #
Dan O'Mearadd494642020-05-01 07:42:23 -07003990 # **YAML example:**
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003991 #
3992 # bindings:
3993 # - members:
3994 # - user:mike@example.com
3995 # - group:admins@example.com
3996 # - domain:google.com
Dan O'Mearadd494642020-05-01 07:42:23 -07003997 # - serviceAccount:my-project-id@appspot.gserviceaccount.com
3998 # role: roles/resourcemanager.organizationAdmin
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003999 # - members:
Dan O'Mearadd494642020-05-01 07:42:23 -07004000 # - user:eve@example.com
4001 # role: roles/resourcemanager.organizationViewer
4002 # condition:
4003 # title: expirable access
4004 # description: Does not grant access after Sep 2020
4005 # expression: request.time &lt; timestamp('2020-10-01T00:00:00.000Z')
4006 # - etag: BwWWja0YfJA=
4007 # - version: 3
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004008 #
4009 # For a description of IAM and its features, see the
Dan O'Mearadd494642020-05-01 07:42:23 -07004010 # [IAM documentation](https://cloud.google.com/iam/docs/).
4011 "bindings": [ # Associates a list of `members` to a `role`. Optionally, may specify a
4012 # `condition` that determines how and when the `bindings` are applied. Each
4013 # of the `bindings` must contain at least one member.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004014 { # Associates `members` with a `role`.
4015 "role": "A String", # Role that is assigned to `members`.
4016 # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
4017 "members": [ # Specifies the identities requesting access for a Cloud Platform resource.
4018 # `members` can have the following values:
4019 #
4020 # * `allUsers`: A special identifier that represents anyone who is
4021 # on the internet; with or without a Google account.
4022 #
4023 # * `allAuthenticatedUsers`: A special identifier that represents anyone
4024 # who is authenticated with a Google account or a service account.
4025 #
4026 # * `user:{emailid}`: An email address that represents a specific Google
Dan O'Mearadd494642020-05-01 07:42:23 -07004027 # account. For example, `alice@example.com` .
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004028 #
4029 #
4030 # * `serviceAccount:{emailid}`: An email address that represents a service
4031 # account. For example, `my-other-app@appspot.gserviceaccount.com`.
4032 #
4033 # * `group:{emailid}`: An email address that represents a Google group.
4034 # For example, `admins@example.com`.
4035 #
Dan O'Mearadd494642020-05-01 07:42:23 -07004036 # * `deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique
4037 # identifier) representing a user that has been recently deleted. For
4038 # example, `alice@example.com?uid=123456789012345678901`. If the user is
4039 # recovered, this value reverts to `user:{emailid}` and the recovered user
4040 # retains the role in the binding.
4041 #
4042 # * `deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address (plus
4043 # unique identifier) representing a service account that has been recently
4044 # deleted. For example,
4045 # `my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901`.
4046 # If the service account is undeleted, this value reverts to
4047 # `serviceAccount:{emailid}` and the undeleted service account retains the
4048 # role in the binding.
4049 #
4050 # * `deleted:group:{emailid}?uid={uniqueid}`: An email address (plus unique
4051 # identifier) representing a Google group that has been recently
4052 # deleted. For example, `admins@example.com?uid=123456789012345678901`. If
4053 # the group is recovered, this value reverts to `group:{emailid}` and the
4054 # recovered group retains the role in the binding.
4055 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004056 #
4057 # * `domain:{domain}`: The G Suite domain (primary) that represents all the
4058 # users of that domain. For example, `google.com` or `example.com`.
4059 #
4060 "A String",
4061 ],
Dan O'Mearadd494642020-05-01 07:42:23 -07004062 "condition": { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004063 # NOTE: An unsatisfied condition will not allow user access via current
4064 # binding. Different bindings, including their conditions, are examined
4065 # independently.
Dan O'Mearadd494642020-05-01 07:42:23 -07004066 # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
4067 # are documented at https://github.com/google/cel-spec.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004068 #
Dan O'Mearadd494642020-05-01 07:42:23 -07004069 # Example (Comparison):
4070 #
4071 # title: "Summary size limit"
4072 # description: "Determines if a summary is less than 100 chars"
4073 # expression: "document.summary.size() &lt; 100"
4074 #
4075 # Example (Equality):
4076 #
4077 # title: "Requestor is owner"
4078 # description: "Determines if requestor is the document owner"
4079 # expression: "document.owner == request.auth.claims.email"
4080 #
4081 # Example (Logic):
4082 #
4083 # title: "Public documents"
4084 # description: "Determine whether the document should be publicly visible"
4085 # expression: "document.type != 'private' &amp;&amp; document.type != 'internal'"
4086 #
4087 # Example (Data Manipulation):
4088 #
4089 # title: "Notification string"
4090 # description: "Create a notification string with a timestamp."
4091 # expression: "'New message received at ' + string(document.create_time)"
4092 #
4093 # The exact variables and functions that may be referenced within an expression
4094 # are determined by the service that evaluates it. See the service
4095 # documentation for additional information.
4096 "description": "A String", # Optional. Description of the expression. This is a longer text which
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004097 # describes the expression, e.g. when hovered over it in a UI.
Dan O'Mearadd494642020-05-01 07:42:23 -07004098 "expression": "A String", # Textual representation of an expression in Common Expression Language
4099 # syntax.
4100 "location": "A String", # Optional. String indicating the location of the expression for error
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004101 # reporting, e.g. a file name and a position in the file.
Dan O'Mearadd494642020-05-01 07:42:23 -07004102 "title": "A String", # Optional. Title for the expression, i.e. a short string describing
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004103          # its purpose. This can be used, e.g., in UIs that allow users to enter the
4104 # expression.
4105 },
4106 },
4107 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004108 "auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
4109 { # Specifies the audit configuration for a service.
4110 # The configuration determines which permission types are logged, and what
4111 # identities, if any, are exempted from logging.
4112 # An AuditConfig must have one or more AuditLogConfigs.
4113 #
4114 # If there are AuditConfigs for both `allServices` and a specific service,
4115 # the union of the two AuditConfigs is used for that service: the log_types
4116 # specified in each AuditConfig are enabled, and the exempted_members in each
4117 # AuditLogConfig are exempted.
4118 #
4119 # Example Policy with multiple AuditConfigs:
4120 #
4121 # {
4122 # "audit_configs": [
4123 # {
4124 # "service": "allServices"
4125 # "audit_log_configs": [
4126 # {
4127 # "log_type": "DATA_READ",
4128 # "exempted_members": [
Dan O'Mearadd494642020-05-01 07:42:23 -07004129 # "user:jose@example.com"
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004130 # ]
4131 # },
4132 # {
4133 # "log_type": "DATA_WRITE",
4134 # },
4135 # {
4136 # "log_type": "ADMIN_READ",
4137 # }
4138 # ]
4139 # },
4140 # {
Dan O'Mearadd494642020-05-01 07:42:23 -07004141 # "service": "sampleservice.googleapis.com"
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004142 # "audit_log_configs": [
4143 # {
4144 # "log_type": "DATA_READ",
4145 # },
4146 # {
4147 # "log_type": "DATA_WRITE",
4148 # "exempted_members": [
Dan O'Mearadd494642020-05-01 07:42:23 -07004149 # "user:aliya@example.com"
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004150 # ]
4151 # }
4152 # ]
4153 # }
4154 # ]
4155 # }
4156 #
Dan O'Mearadd494642020-05-01 07:42:23 -07004157 # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
4158 # logging. It also exempts jose@example.com from DATA_READ logging, and
4159 # aliya@example.com from DATA_WRITE logging.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004160 "auditLogConfigs": [ # The configuration for logging of each type of permission.
4161 { # Provides the configuration for logging a type of permissions.
4162 # Example:
4163 #
4164 # {
4165 # "audit_log_configs": [
4166 # {
4167 # "log_type": "DATA_READ",
4168 # "exempted_members": [
Dan O'Mearadd494642020-05-01 07:42:23 -07004169 # "user:jose@example.com"
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004170 # ]
4171 # },
4172 # {
4173 # "log_type": "DATA_WRITE",
4174 # }
4175 # ]
4176 # }
4177 #
4178 # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
Dan O'Mearadd494642020-05-01 07:42:23 -07004179 # jose@example.com from DATA_READ logging.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004180 "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
4181 # permission.
4182 # Follows the same format of Binding.members.
4183 "A String",
4184 ],
4185 "logType": "A String", # The log type that this config enables.
4186 },
4187 ],
4188 "service": "A String", # Specifies a service that will be enabled for audit logging.
4189 # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
4190 # `allServices` is a special value that covers all services.
4191 },
4192 ],
Dan O'Mearadd494642020-05-01 07:42:23 -07004193 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
4194 # prevent simultaneous updates of a policy from overwriting each other.
4195 # It is strongly suggested that systems make use of the `etag` in the
4196 # read-modify-write cycle to perform policy updates in order to avoid race
4197 # conditions: An `etag` is returned in the response to `getIamPolicy`, and
4198 # systems are expected to put that etag in the request to `setIamPolicy` to
4199 # ensure that their change will be applied to the same version of the policy.
4200 #
4201 # **Important:** If you use IAM Conditions, you must include the `etag` field
4202 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
4203 # you to overwrite a version `3` policy with a version `1` policy, and all of
4204 # the conditions in the version `3` policy are lost.
4205 "version": 42, # Specifies the format of the policy.
4206 #
4207 # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
4208 # are rejected.
4209 #
4210 # Any operation that affects conditional role bindings must specify version
4211 # `3`. This requirement applies to the following operations:
4212 #
4213 # * Getting a policy that includes a conditional role binding
4214 # * Adding a conditional role binding to a policy
4215 # * Changing a conditional role binding in a policy
4216 # * Removing any role binding, with or without a condition, from a policy
4217 # that includes conditions
4218 #
4219 # **Important:** If you use IAM Conditions, you must include the `etag` field
4220 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
4221 # you to overwrite a version `3` policy with a version `1` policy, and all of
4222 # the conditions in the version `3` policy are lost.
4223 #
4224 # If a policy does not include any conditions, operations on that policy may
4225 # specify any valid version or leave the field unset.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004226 },
4227 "updateMask": "A String", # OPTIONAL: A FieldMask specifying which fields of the policy to modify. Only
4228 # the fields in the mask will be modified. If no mask is provided, the
4229 # following default mask is used:
4230 # paths: "bindings, etag"
4231 # This field is only used by Cloud IAM.
4232 }
4233
4234 x__xgafv: string, V1 error format.
4235 Allowed values
4236 1 - v1 error format
4237 2 - v2 error format
4238
4239Returns:
4240 An object of the form:
4241
Dan O'Mearadd494642020-05-01 07:42:23 -07004242 { # An Identity and Access Management (IAM) policy, which specifies access
4243 # controls for Google Cloud resources.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004244 #
4245 #
Dan O'Mearadd494642020-05-01 07:42:23 -07004246 # A `Policy` is a collection of `bindings`. A `binding` binds one or more
4247 # `members` to a single `role`. Members can be user accounts, service accounts,
4248 # Google groups, and domains (such as G Suite). A `role` is a named list of
4249 # permissions; each `role` can be an IAM predefined role or a user-created
4250 # custom role.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004251 #
Dan O'Mearadd494642020-05-01 07:42:23 -07004252 # Optionally, a `binding` can specify a `condition`, which is a logical
4253 # expression that allows access to a resource only if the expression evaluates
4254 # to `true`. A condition can add constraints based on attributes of the
4255 # request, the resource, or both.
4256 #
4257 # **JSON example:**
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004258 #
4259 # {
4260 # "bindings": [
4261 # {
Dan O'Mearadd494642020-05-01 07:42:23 -07004262 # "role": "roles/resourcemanager.organizationAdmin",
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004263 # "members": [
4264 # "user:mike@example.com",
4265 # "group:admins@example.com",
4266 # "domain:google.com",
Dan O'Mearadd494642020-05-01 07:42:23 -07004267 # "serviceAccount:my-project-id@appspot.gserviceaccount.com"
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004268 # ]
4269 # },
4270 # {
Dan O'Mearadd494642020-05-01 07:42:23 -07004271 # "role": "roles/resourcemanager.organizationViewer",
4272 # "members": ["user:eve@example.com"],
4273 # "condition": {
4274 # "title": "expirable access",
4275 # "description": "Does not grant access after Sep 2020",
4276 # "expression": "request.time &lt; timestamp('2020-10-01T00:00:00.000Z')",
4277 # }
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004278 # }
Dan O'Mearadd494642020-05-01 07:42:23 -07004279 # ],
4280 # "etag": "BwWWja0YfJA=",
4281 # "version": 3
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004282 # }
4283 #
Dan O'Mearadd494642020-05-01 07:42:23 -07004284 # **YAML example:**
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004285 #
4286 # bindings:
4287 # - members:
4288 # - user:mike@example.com
4289 # - group:admins@example.com
4290 # - domain:google.com
Dan O'Mearadd494642020-05-01 07:42:23 -07004291 # - serviceAccount:my-project-id@appspot.gserviceaccount.com
4292 # role: roles/resourcemanager.organizationAdmin
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004293 # - members:
Dan O'Mearadd494642020-05-01 07:42:23 -07004294 # - user:eve@example.com
4295 # role: roles/resourcemanager.organizationViewer
4296 # condition:
4297 # title: expirable access
4298 # description: Does not grant access after Sep 2020
4299 # expression: request.time &lt; timestamp('2020-10-01T00:00:00.000Z')
4300 # - etag: BwWWja0YfJA=
4301 # - version: 3
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004302 #
4303 # For a description of IAM and its features, see the
Dan O'Mearadd494642020-05-01 07:42:23 -07004304 # [IAM documentation](https://cloud.google.com/iam/docs/).
4305 "bindings": [ # Associates a list of `members` to a `role`. Optionally, may specify a
4306 # `condition` that determines how and when the `bindings` are applied. Each
4307 # of the `bindings` must contain at least one member.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004308 { # Associates `members` with a `role`.
4309 "role": "A String", # Role that is assigned to `members`.
4310 # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
4311 "members": [ # Specifies the identities requesting access for a Cloud Platform resource.
4312 # `members` can have the following values:
4313 #
4314 # * `allUsers`: A special identifier that represents anyone who is
4315 # on the internet; with or without a Google account.
4316 #
4317 # * `allAuthenticatedUsers`: A special identifier that represents anyone
4318 # who is authenticated with a Google account or a service account.
4319 #
4320 # * `user:{emailid}`: An email address that represents a specific Google
Dan O'Mearadd494642020-05-01 07:42:23 -07004321 # account. For example, `alice@example.com` .
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004322 #
4323 #
4324 # * `serviceAccount:{emailid}`: An email address that represents a service
4325 # account. For example, `my-other-app@appspot.gserviceaccount.com`.
4326 #
4327 # * `group:{emailid}`: An email address that represents a Google group.
4328 # For example, `admins@example.com`.
4329 #
Dan O'Mearadd494642020-05-01 07:42:23 -07004330 # * `deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique
4331 # identifier) representing a user that has been recently deleted. For
4332 # example, `alice@example.com?uid=123456789012345678901`. If the user is
4333 # recovered, this value reverts to `user:{emailid}` and the recovered user
4334 # retains the role in the binding.
4335 #
4336 # * `deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address (plus
4337 # unique identifier) representing a service account that has been recently
4338 # deleted. For example,
4339 # `my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901`.
4340 # If the service account is undeleted, this value reverts to
4341 # `serviceAccount:{emailid}` and the undeleted service account retains the
4342 # role in the binding.
4343 #
4344 # * `deleted:group:{emailid}?uid={uniqueid}`: An email address (plus unique
4345 # identifier) representing a Google group that has been recently
4346 # deleted. For example, `admins@example.com?uid=123456789012345678901`. If
4347 # the group is recovered, this value reverts to `group:{emailid}` and the
4348 # recovered group retains the role in the binding.
4349 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004350 #
4351 # * `domain:{domain}`: The G Suite domain (primary) that represents all the
4352 # users of that domain. For example, `google.com` or `example.com`.
4353 #
4354 "A String",
4355 ],
Dan O'Mearadd494642020-05-01 07:42:23 -07004356 "condition": { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004357 # NOTE: An unsatisfied condition will not allow user access via current
4358 # binding. Different bindings, including their conditions, are examined
4359 # independently.
Dan O'Mearadd494642020-05-01 07:42:23 -07004360 # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
4361 # are documented at https://github.com/google/cel-spec.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004362 #
Dan O'Mearadd494642020-05-01 07:42:23 -07004363 # Example (Comparison):
4364 #
4365 # title: "Summary size limit"
4366 # description: "Determines if a summary is less than 100 chars"
4367 # expression: "document.summary.size() &lt; 100"
4368 #
4369 # Example (Equality):
4370 #
4371 # title: "Requestor is owner"
4372 # description: "Determines if requestor is the document owner"
4373 # expression: "document.owner == request.auth.claims.email"
4374 #
4375 # Example (Logic):
4376 #
4377 # title: "Public documents"
4378 # description: "Determine whether the document should be publicly visible"
4379 # expression: "document.type != 'private' &amp;&amp; document.type != 'internal'"
4380 #
4381 # Example (Data Manipulation):
4382 #
4383 # title: "Notification string"
4384 # description: "Create a notification string with a timestamp."
4385 # expression: "'New message received at ' + string(document.create_time)"
4386 #
4387 # The exact variables and functions that may be referenced within an expression
4388 # are determined by the service that evaluates it. See the service
4389 # documentation for additional information.
4390 "description": "A String", # Optional. Description of the expression. This is a longer text which
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004391 # describes the expression, e.g. when hovered over it in a UI.
Dan O'Mearadd494642020-05-01 07:42:23 -07004392 "expression": "A String", # Textual representation of an expression in Common Expression Language
4393 # syntax.
4394 "location": "A String", # Optional. String indicating the location of the expression for error
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004395 # reporting, e.g. a file name and a position in the file.
Dan O'Mearadd494642020-05-01 07:42:23 -07004396 "title": "A String", # Optional. Title for the expression, i.e. a short string describing
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004397          # its purpose. This can be used, e.g., in UIs that allow users to enter the
4398 # expression.
4399 },
4400 },
4401 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004402 "auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
4403 { # Specifies the audit configuration for a service.
4404 # The configuration determines which permission types are logged, and what
4405 # identities, if any, are exempted from logging.
4406 # An AuditConfig must have one or more AuditLogConfigs.
4407 #
4408 # If there are AuditConfigs for both `allServices` and a specific service,
4409 # the union of the two AuditConfigs is used for that service: the log_types
4410 # specified in each AuditConfig are enabled, and the exempted_members in each
4411 # AuditLogConfig are exempted.
4412 #
4413 # Example Policy with multiple AuditConfigs:
4414 #
4415 # {
4416 # "audit_configs": [
4417 # {
4418 # "service": "allServices"
4419 # "audit_log_configs": [
4420 # {
4421 # "log_type": "DATA_READ",
4422 # "exempted_members": [
Dan O'Mearadd494642020-05-01 07:42:23 -07004423 # "user:jose@example.com"
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004424 # ]
4425 # },
4426 # {
4427 # "log_type": "DATA_WRITE",
4428 # },
4429 # {
4430 # "log_type": "ADMIN_READ",
4431 # }
4432 # ]
4433 # },
4434 # {
Dan O'Mearadd494642020-05-01 07:42:23 -07004435 # "service": "sampleservice.googleapis.com"
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004436 # "audit_log_configs": [
4437 # {
4438 # "log_type": "DATA_READ",
4439 # },
4440 # {
4441 # "log_type": "DATA_WRITE",
4442 # "exempted_members": [
Dan O'Mearadd494642020-05-01 07:42:23 -07004443 # "user:aliya@example.com"
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004444 # ]
4445 # }
4446 # ]
4447 # }
4448 # ]
4449 # }
4450 #
Dan O'Mearadd494642020-05-01 07:42:23 -07004451 # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
4452 # logging. It also exempts jose@example.com from DATA_READ logging, and
4453 # aliya@example.com from DATA_WRITE logging.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004454 "auditLogConfigs": [ # The configuration for logging of each type of permission.
4455 { # Provides the configuration for logging a type of permissions.
4456 # Example:
4457 #
4458 # {
4459 # "audit_log_configs": [
4460 # {
4461 # "log_type": "DATA_READ",
4462 # "exempted_members": [
Dan O'Mearadd494642020-05-01 07:42:23 -07004463 # "user:jose@example.com"
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004464 # ]
4465 # },
4466 # {
4467 # "log_type": "DATA_WRITE",
4468 # }
4469 # ]
4470 # }
4471 #
4472 # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
Dan O'Mearadd494642020-05-01 07:42:23 -07004473 # jose@example.com from DATA_READ logging.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004474 "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
4475 # permission.
4476 # Follows the same format of Binding.members.
4477 "A String",
4478 ],
4479 "logType": "A String", # The log type that this config enables.
4480 },
4481 ],
4482 "service": "A String", # Specifies a service that will be enabled for audit logging.
4483 # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
4484 # `allServices` is a special value that covers all services.
4485 },
4486 ],
Dan O'Mearadd494642020-05-01 07:42:23 -07004487 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
4488 # prevent simultaneous updates of a policy from overwriting each other.
4489 # It is strongly suggested that systems make use of the `etag` in the
4490 # read-modify-write cycle to perform policy updates in order to avoid race
4491 # conditions: An `etag` is returned in the response to `getIamPolicy`, and
4492 # systems are expected to put that etag in the request to `setIamPolicy` to
4493 # ensure that their change will be applied to the same version of the policy.
4494 #
4495 # **Important:** If you use IAM Conditions, you must include the `etag` field
4496 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
4497 # you to overwrite a version `3` policy with a version `1` policy, and all of
4498 # the conditions in the version `3` policy are lost.
4499 "version": 42, # Specifies the format of the policy.
4500 #
4501 # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
4502 # are rejected.
4503 #
4504 # Any operation that affects conditional role bindings must specify version
4505 # `3`. This requirement applies to the following operations:
4506 #
4507 # * Getting a policy that includes a conditional role binding
4508 # * Adding a conditional role binding to a policy
4509 # * Changing a conditional role binding in a policy
4510 # * Removing any role binding, with or without a condition, from a policy
4511 # that includes conditions
4512 #
4513 # **Important:** If you use IAM Conditions, you must include the `etag` field
4514 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
4515 # you to overwrite a version `3` policy with a version `1` policy, and all of
4516 # the conditions in the version `3` policy are lost.
4517 #
4518 # If a policy does not include any conditions, operations on that policy may
4519 # specify any valid version or leave the field unset.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004520 }</pre>
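<p>As the <code>etag</code> description above recommends, policies are best updated with a read-modify-write cycle: read the current policy with <code>getIamPolicy</code>, modify it, and send it back (including the returned <code>etag</code>) with <code>setIamPolicy</code>. The sketch below is illustrative only; the resource name, role, and member are placeholders.</p>
<pre>
# Hypothetical read-modify-write sketch for updating a job's IAM policy.
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
resource = 'projects/YOUR_PROJECT_ID/jobs/YOUR_JOB_ID'   # placeholder resource

# Read the current policy; the response carries the etag that guards
# against concurrent updates.
policy = ml.projects().jobs().getIamPolicy(resource=resource).execute()

# Add a binding. The role and member below are examples only.
policy.setdefault('bindings', []).append({
    'role': 'roles/ml.viewer',
    'members': ['user:jane@example.com'],
})

# Write the modified policy back; the etag from getIamPolicy is included
# in `policy`, so a conflicting concurrent update will be rejected.
updated_policy = ml.projects().jobs().setIamPolicy(
    resource=resource,
    body={'policy': policy},
).execute()
</pre>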
4521</div>
4522
4523<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -07004524 <code class="details" id="testIamPermissions">testIamPermissions(resource, body=None, x__xgafv=None)</code>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004525 <pre>Returns permissions that a caller has on the specified resource.
4526If the resource does not exist, this will return an empty set of
4527permissions, not a NOT_FOUND error.
4528
4529Note: This operation is designed to be used for building permission-aware
4530UIs and command-line tools, not for authorization checking. This operation
4531may "fail open" without warning.
4532
4533Args:
4534 resource: string, REQUIRED: The resource for which the policy detail is being requested.
4535See the operation documentation for the appropriate value for this field. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -07004536 body: object, The request body.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004537 The object takes the form of:
4538
4539{ # Request message for `TestIamPermissions` method.
4540 "permissions": [ # The set of permissions to check for the `resource`. Permissions with
4541 # wildcards (such as '*' or 'storage.*') are not allowed. For more
4542 # information see
4543 # [IAM Overview](https://cloud.google.com/iam/docs/overview#permissions).
4544 "A String",
4545 ],
4546 }
4547
4548 x__xgafv: string, V1 error format.
4549 Allowed values
4550 1 - v1 error format
4551 2 - v2 error format
4552
4553Returns:
4554 An object of the form:
4555
4556 { # Response message for `TestIamPermissions` method.
4557 "permissions": [ # A subset of `TestPermissionsRequest.permissions` that the caller is
4558 # allowed.
4559 "A String",
4560 ],
4561 }</pre>
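<p>A typical use of <code>testIamPermissions</code> is to check, before rendering a UI control, whether the caller may perform an action on a job. The sketch below is illustrative only; the resource name and the permission strings are assumed examples.</p>
<pre>
# Hypothetical permission check for a job resource.
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
response = ml.projects().jobs().testIamPermissions(
    resource='projects/YOUR_PROJECT_ID/jobs/YOUR_JOB_ID',    # placeholder
    body={'permissions': ['ml.jobs.get', 'ml.jobs.cancel']}, # assumed names
).execute()

# Only the permissions the caller actually holds are returned.
granted = set(response.get('permissions', []))
</pre>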
4562</div>
4563
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04004564</body></html>