<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="ml_v1.html">AI Platform Training &amp; Prediction API</a> . <a href="ml_v1.projects.html">projects</a> . <a href="ml_v1.projects.jobs.html">jobs</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#cancel">cancel(name, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Cancels a running job.</p>
<p class="toc_element">
  <code><a href="#create">create(parent, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a training or a batch prediction job.</p>
<p class="toc_element">
  <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Describes a job.</p>
<p class="toc_element">
  <code><a href="#getIamPolicy">getIamPolicy(resource, options_requestedPolicyVersion=None, x__xgafv=None)</a></code></p>
<p class="firstline">Gets the access control policy for a resource.</p>
<p class="toc_element">
  <code><a href="#list">list(parent, filter=None, pageToken=None, pageSize=None, x__xgafv=None)</a></code></p>
<p class="firstline">Lists the jobs in the project.</p>
<p class="toc_element">
  <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<p class="toc_element">
  <code><a href="#patch">patch(name, body=None, updateMask=None, x__xgafv=None)</a></code></p>
<p class="firstline">Updates a specific job resource.</p>
<p class="toc_element">
  <code><a href="#setIamPolicy">setIamPolicy(resource, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Sets the access control policy on the specified resource. Replaces any existing policy.</p>
<p class="toc_element">
  <code><a href="#testIamPermissions">testIamPermissions(resource, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Returns permissions that a caller has on the specified resource.</p>
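<p>For orientation, a minimal sketch (not part of the generated reference) of reaching this collection with the google-api-python-client; the project name is a placeholder and Application Default Credentials are assumed:</p>
<pre>
from googleapiclient import discovery

# Build a client for the AI Platform Training &amp; Prediction API (service &#x27;ml&#x27;, version &#x27;v1&#x27;).
ml = discovery.build(&#x27;ml&#x27;, &#x27;v1&#x27;)

# The methods documented on this page are reached through the projects().jobs() collection.
jobs = ml.projects().jobs()
response = jobs.list(parent=&#x27;projects/my-project&#x27;).execute()  # placeholder project name
</pre>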
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="cancel">cancel(name, body=None, x__xgafv=None)</code>
  <pre>Cancels a running job.

Args:
  name: string, Required. The name of the job to cancel. (required)
  body: object, The request body.
    The object takes the form of:

{ # Request message for the CancelJob method.
}

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated
      # empty messages in your APIs. A typical example is to use it as the request
      # or the response type of an API method. For instance:
      #
      #     service Foo {
      #       rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
      #     }
      #
      # The JSON representation for `Empty` is an empty JSON object `{}`.
    }</pre>
</div>
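<p>A minimal usage sketch for this method (the job name is a placeholder); it assumes the <code>ml</code> client built as shown earlier on this page:</p>
<pre>
request = ml.projects().jobs().cancel(
    name=&#x27;projects/my-project/jobs/my_job&#x27;,  # placeholder job name
    body={})
request.execute()  # returns an empty object ({}) once the cancellation request is accepted
</pre>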

<div class="method">
    <code class="details" id="create">create(parent, body=None, x__xgafv=None)</code>
  <pre>Creates a training or a batch prediction job.

Args:
  parent: string, Required. The project name. (required)
  body: object, The request body.
    The object takes the form of:

{ # Represents a training or prediction job.
  &quot;createTime&quot;: &quot;A String&quot;, # Output only. When the job was created.
  &quot;predictionInput&quot;: { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
    &quot;dataFormat&quot;: &quot;A String&quot;, # Required. The format of the input data files.
    &quot;outputPath&quot;: &quot;A String&quot;, # Required. The output Google Cloud Storage location.
    &quot;inputPaths&quot;: [ # Required. The Cloud Storage location of the input data files. May contain
        # &lt;a href=&quot;/storage/docs/gsutil/addlhelp/WildcardNames&quot;&gt;wildcards&lt;/a&gt;.
      &quot;A String&quot;,
    ],
    &quot;region&quot;: &quot;A String&quot;, # Required. The Google Compute Engine region to run the prediction job in.
        # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
        # for AI Platform services.
    &quot;versionName&quot;: &quot;A String&quot;, # Use this field if you want to specify a version of the model to use. The
        # string is formatted the same way as `model_version`, with the addition
        # of the version information:
        #
        # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION&quot;`
    &quot;uri&quot;: &quot;A String&quot;, # Use this field if you want to specify a Google Cloud Storage path for
        # the model to use.
    &quot;outputDataFormat&quot;: &quot;A String&quot;, # Optional. Format of the output data files, defaults to JSON.
    &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for this batch
        # prediction. If not set, AI Platform will pick the runtime version used
        # during the CreateVersion request for this model version, or choose the
        # latest stable version when model version information is not available
        # such as when the model is specified by uri.
    &quot;modelName&quot;: &quot;A String&quot;, # Use this field if you want to use the default version for the specified
        # model. The string must use the following format:
        #
        # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL&quot;`
    &quot;signatureName&quot;: &quot;A String&quot;, # Optional. The name of the signature defined in the SavedModel to use for
        # this job. Please refer to
        # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
        # for information about how to use signatures.
        #
        # Defaults to
        # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
        # , which is &quot;serving_default&quot;.
    &quot;batchSize&quot;: &quot;A String&quot;, # Optional. Number of records per batch, defaults to 64.
        # The service will buffer batch_size number of records in memory before
        # invoking one Tensorflow prediction call internally. So take the record
        # size and memory available into consideration when setting this parameter.
    &quot;maxWorkerCount&quot;: &quot;A String&quot;, # Optional. The maximum number of workers to be used for parallel processing.
        # Defaults to 10 if not specified.
  },
  &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your jobs.
      # Each label is a key-value pair, where both the key and the value are
      # arbitrary strings that you supply.
      # For more information, see the documentation on
      # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
    &quot;a_key&quot;: &quot;A String&quot;,
  },
  &quot;endTime&quot;: &quot;A String&quot;, # Output only. When the job processing was completed.
  &quot;trainingOutput&quot;: { # Represents results of a training job. Output only. # The current training job result.
    &quot;trials&quot;: [ # Results for individual Hyperparameter trials.
        # Only set for hyperparameter tuning jobs.
      { # Represents the result of a single hyperparameter tuning trial from a
          # training job. The TrainingOutput object that is returned on successful
          # completion of a training job with hyperparameter tuning includes a list
          # of HyperparameterOutput objects, one for each successful trial.
        &quot;endTime&quot;: &quot;A String&quot;, # Output only. End time for the trial.
        &quot;finalMetric&quot;: { # An observed value of a metric. # The final objective metric seen for this trial.
          &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
          &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
        },
        &quot;hyperparameters&quot;: { # The hyperparameters given to this trial.
          &quot;a_key&quot;: &quot;A String&quot;,
        },
        &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
            # Only set for trials of built-in algorithms jobs that have succeeded.
          &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
              # saves the trained model. Only set for successful jobs that don&#x27;t use
              # hyperparameter tuning.
          &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
          &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
              # trained.
          &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
        },
        &quot;startTime&quot;: &quot;A String&quot;, # Output only. Start time for the trial.
        &quot;allMetrics&quot;: [ # All recorded object metrics for this trial. This field is not currently
            # populated.
          { # An observed value of a metric.
            &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
            &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
          },
        ],
        &quot;trialId&quot;: &quot;A String&quot;, # The trial id for these results.
        &quot;isTrialStoppedEarly&quot;: True or False, # True if the trial is stopped early.
        &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of the trial.
      },
    ],
    &quot;completedTrialCount&quot;: &quot;A String&quot;, # The number of hyperparameter tuning trials that completed successfully.
        # Only set for hyperparameter tuning jobs.
    &quot;isHyperparameterTuningJob&quot;: True or False, # Whether this job is a hyperparameter tuning job.
    &quot;isBuiltInAlgorithmJob&quot;: True or False, # Whether this job is a built-in Algorithm job.
    &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
        # Only set for built-in algorithms jobs.
      &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
          # saves the trained model. Only set for successful jobs that don&#x27;t use
          # hyperparameter tuning.
      &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
      &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
          # trained.
      &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
    },
    &quot;consumedMLUnits&quot;: 3.14, # The amount of ML units consumed by the job.
    &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # The TensorFlow summary tag name used for optimizing hyperparameter tuning
        # trials. See
        # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
        # for more information. Only set for hyperparameter tuning jobs.
  },
  &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
  &quot;predictionOutput&quot;: { # Represents results of a prediction job. # The current prediction job result.
    &quot;errorCount&quot;: &quot;A String&quot;, # The number of data instances which resulted in errors.
    &quot;nodeHours&quot;: 3.14, # Node hours used by the batch prediction job.
    &quot;outputPath&quot;: &quot;A String&quot;, # The output Google Cloud Storage location provided at the job creation time.
    &quot;predictionCount&quot;: &quot;A String&quot;, # The number of generated predictions.
  },
  &quot;startTime&quot;: &quot;A String&quot;, # Output only. When the job processing was started.
  &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of a job.
  &quot;trainingInput&quot;: { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
      # to submit your training job, you can specify the input parameters as
      # command-line arguments and/or in a YAML configuration file referenced from
      # the --config command-line argument. For details, see the guide to [submitting
      # a training job](/ai-platform/training/docs/training-jobs).
    &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for workload run-as account.
        # Users submitting jobs must have act-as permission on this run-as account.
        # If not specified, then CMLE P4SA will be used by default.
    &quot;workerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
        #
        # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
        # to a Compute Engine machine type. [Learn about restrictions on accelerator
        # configurations for
        # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
        #
        # Set `workerConfig.imageUri` only if you build a custom image for your
        # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
        # the value of `masterConfig.imageUri`. Learn more about [configuring custom
        # containers](/ai-platform/training/docs/distributed-training-containers).
      &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
          # [Learn about restrictions on accelerator configurations for
          # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
          # Note that the AcceleratorConfig can be used in both Jobs and Versions.
          # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
          # [accelerators for online
          # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
        &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
        &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
      },
      &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
          # Registry. Learn more about [configuring custom
          # containers](/ai-platform/training/docs/distributed-training-containers).
      &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
          # The following rules apply for container_command and container_args:
          # - If you do not supply command or args:
          # The defaults defined in the Docker image are used.
          # - If you supply a command but no args:
          # The default EntryPoint and the default Cmd defined in the Docker image
          # are ignored. Your command is run without any arguments.
          # - If you supply only args:
          # The default Entrypoint defined in the Docker image is run with the args
          # that you supplied.
          # - If you supply a command and args:
          # The default Entrypoint and the default Cmd defined in the Docker image
          # are ignored. Your command is run with your args.
          # It cannot be set if custom container image is
          # not provided.
          # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
          # both cannot be set at the same time.
        &quot;A String&quot;,
      ],
      &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
          # the one used in the custom container. This field is required if the replica
          # is a TPU worker that uses a custom container. Otherwise, do not specify
          # this field. This must be a [runtime version that currently supports
          # training with
          # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
          #
          # Note that the version of TensorFlow included in a runtime version may
          # differ from the numbering of the runtime version itself, because it may
          # have a different [patch
          # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
          # In this field, you must specify the runtime version (TensorFlow minor
          # version). For example, if your custom container runs TensorFlow `1.x.y`,
          # specify `1.x`.
      &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
          # If provided, it will override default ENTRYPOINT of the docker image.
          # If not provided, the docker image&#x27;s ENTRYPOINT is used.
          # It cannot be set if custom container image is
          # not provided.
          # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
          # both cannot be set at the same time.
        &quot;A String&quot;,
      ],
    },
    &quot;useChiefInTfConfig&quot;: True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
        # variable when training with a custom container. Defaults to `false`. [Learn
        # more about this
        # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
        #
        # This field has no effect for training jobs that don&#x27;t use a custom
        # container.
    &quot;masterType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
        # job&#x27;s master worker. You must specify this field when `scaleTier` is set to
        # `CUSTOM`.
        #
        # You can use certain Compute Engine machine types directly in this field.
        # The following types are supported:
        #
        # - `n1-standard-4`
        # - `n1-standard-8`
        # - `n1-standard-16`
        # - `n1-standard-32`
        # - `n1-standard-64`
        # - `n1-standard-96`
        # - `n1-highmem-2`
        # - `n1-highmem-4`
        # - `n1-highmem-8`
        # - `n1-highmem-16`
        # - `n1-highmem-32`
        # - `n1-highmem-64`
        # - `n1-highmem-96`
        # - `n1-highcpu-16`
        # - `n1-highcpu-32`
        # - `n1-highcpu-64`
        # - `n1-highcpu-96`
        #
        # Learn more about [using Compute Engine machine
        # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
        #
        # Alternatively, you can use the following legacy machine types:
        #
        # - `standard`
        # - `large_model`
        # - `complex_model_s`
        # - `complex_model_m`
        # - `complex_model_l`
        # - `standard_gpu`
        # - `complex_model_m_gpu`
        # - `complex_model_l_gpu`
        # - `standard_p100`
        # - `complex_model_m_p100`
        # - `standard_v100`
        # - `large_model_v100`
        # - `complex_model_m_v100`
        # - `complex_model_l_v100`
        #
        # Learn more about [using legacy machine
        # types](/ml-engine/docs/machine-types#legacy-machine-types).
        #
        # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
        # field. Learn more about the [special configuration options for training
        # with
        # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
    &quot;parameterServerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
        #
        # You should only set `parameterServerConfig.acceleratorConfig` if
        # `parameterServerType` is set to a Compute Engine machine type. [Learn
        # about restrictions on accelerator configurations for
        # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
        #
        # Set `parameterServerConfig.imageUri` only if you build a custom image for
        # your parameter server. If `parameterServerConfig.imageUri` has not been
        # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
        # containers](/ai-platform/training/docs/distributed-training-containers).
      &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
          # [Learn about restrictions on accelerator configurations for
          # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
          # Note that the AcceleratorConfig can be used in both Jobs and Versions.
          # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
          # [accelerators for online
          # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
        &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
        &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
      },
      &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
          # Registry. Learn more about [configuring custom
          # containers](/ai-platform/training/docs/distributed-training-containers).
      &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
          # The following rules apply for container_command and container_args:
          # - If you do not supply command or args:
          # The defaults defined in the Docker image are used.
          # - If you supply a command but no args:
          # The default EntryPoint and the default Cmd defined in the Docker image
          # are ignored. Your command is run without any arguments.
          # - If you supply only args:
          # The default Entrypoint defined in the Docker image is run with the args
          # that you supplied.
          # - If you supply a command and args:
          # The default Entrypoint and the default Cmd defined in the Docker image
          # are ignored. Your command is run with your args.
          # It cannot be set if custom container image is
          # not provided.
          # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
          # both cannot be set at the same time.
        &quot;A String&quot;,
      ],
      &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
          # the one used in the custom container. This field is required if the replica
          # is a TPU worker that uses a custom container. Otherwise, do not specify
          # this field. This must be a [runtime version that currently supports
          # training with
          # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
          #
          # Note that the version of TensorFlow included in a runtime version may
          # differ from the numbering of the runtime version itself, because it may
          # have a different [patch
          # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
          # In this field, you must specify the runtime version (TensorFlow minor
          # version). For example, if your custom container runs TensorFlow `1.x.y`,
          # specify `1.x`.
      &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
          # If provided, it will override default ENTRYPOINT of the docker image.
          # If not provided, the docker image&#x27;s ENTRYPOINT is used.
          # It cannot be set if custom container image is
          # not provided.
          # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
          # both cannot be set at the same time.
        &quot;A String&quot;,
      ],
    },
    &quot;region&quot;: &quot;A String&quot;, # Required. The region to run the training job in. See the [available
        # regions](/ai-platform/training/docs/regions) for AI Platform Training.
    &quot;jobDir&quot;: &quot;A String&quot;, # Optional. A Google Cloud Storage path in which to store training outputs
        # and other data needed for training. This path is passed to your TensorFlow
        # program as the &#x27;--job-dir&#x27; command-line argument. The benefit of specifying
        # this field is that Cloud ML validates the path for use in training.
    &quot;pythonVersion&quot;: &quot;A String&quot;, # Optional. The version of Python used in training. You must either specify
        # this field or specify `masterConfig.imageUri`.
        #
        # The following Python versions are available:
        #
        # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
        # later.
        # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
        # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
        # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
        # earlier.
        #
        # Read more about the Python versions available for [each runtime
        # version](/ml-engine/docs/runtime-version-list).
    &quot;hyperparameters&quot;: { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
      &quot;maxTrials&quot;: 42, # Optional. How many training trials should be attempted to optimize
          # the specified hyperparameters.
          #
          # Defaults to one.
      &quot;enableTrialEarlyStopping&quot;: True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
          # early stopping.
      &quot;params&quot;: [ # Required. The set of parameters to tune.
        { # Represents a single hyperparameter to optimize.
          &quot;minValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
              # should be unset if type is `CATEGORICAL`. This value should be integers if
              # type is INTEGER.
          &quot;categoricalValues&quot;: [ # Required if type is `CATEGORICAL`. The list of possible categories.
            &quot;A String&quot;,
          ],
          &quot;scaleType&quot;: &quot;A String&quot;, # Optional. How the parameter should be scaled to the hypercube.
              # Leave unset for categorical parameters.
              # Some kind of scaling is strongly recommended for real or integral
              # parameters (e.g., `UNIT_LINEAR_SCALE`).
          &quot;discreteValues&quot;: [ # Required if type is `DISCRETE`.
              # A list of feasible points.
              # The list should be in strictly increasing order. For instance, this
              # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
              # should not contain more than 1,000 values.
            3.14,
          ],
          &quot;type&quot;: &quot;A String&quot;, # Required. The type of the parameter.
          &quot;maxValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
              # should be unset if type is `CATEGORICAL`. This value should be integers if
              # type is `INTEGER`.
          &quot;parameterName&quot;: &quot;A String&quot;, # Required. The parameter name must be unique amongst all ParameterConfigs in
              # a HyperparameterSpec message. E.g., &quot;learning_rate&quot;.
        },
      ],
      &quot;maxFailedTrials&quot;: 42, # Optional. The number of failed trials that need to be seen before failing
          # the hyperparameter tuning job. You can specify this field to override the
          # default failing criteria for AI Platform hyperparameter tuning jobs.
          #
          # Defaults to zero, which means the service decides when a hyperparameter
          # job should fail.
      &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # Optional. The TensorFlow summary tag name to use for optimizing trials. For
          # current versions of TensorFlow, this tag name should exactly match what is
          # shown in TensorBoard, including all scopes. For versions of TensorFlow
          # prior to 0.12, this should be only the tag passed to tf.Summary.
          # By default, &quot;training/hptuning/metric&quot; will be used.
      &quot;resumePreviousJobId&quot;: &quot;A String&quot;, # Optional. The prior hyperparameter tuning job id that users hope to
          # continue with. The job id will be used to find the corresponding vizier
          # study guid and resume the study.
      &quot;goal&quot;: &quot;A String&quot;, # Required. The type of goal to use for tuning. Available types are
          # `MAXIMIZE` and `MINIMIZE`.
          #
          # Defaults to `MAXIMIZE`.
      &quot;algorithm&quot;: &quot;A String&quot;, # Optional. The search algorithm specified for the hyperparameter
          # tuning job.
          # Uses the default AI Platform hyperparameter tuning
          # algorithm if unspecified.
      &quot;maxParallelTrials&quot;: 42, # Optional. The number of training trials to run concurrently.
          # You can reduce the time it takes to perform hyperparameter tuning by adding
          # trials in parallel. However, each trial only benefits from the information
          # gained in completed trials. That means that a trial does not get access to
          # the results of trials running at the same time, which could reduce the
          # quality of the overall optimization.
          #
          # Each trial will use the same scale tier and machine types.
          #
          # Defaults to one.
    },
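    # For illustration only (not part of the schema above): a minimal hyperparameter
    # tuning spec might look like the following, with placeholder values:
    #
    #   &quot;hyperparameters&quot;: {
    #     &quot;goal&quot;: &quot;MAXIMIZE&quot;,
    #     &quot;hyperparameterMetricTag&quot;: &quot;accuracy&quot;,
    #     &quot;maxTrials&quot;: 10,
    #     &quot;maxParallelTrials&quot;: 2,
    #     &quot;params&quot;: [
    #       {
    #         &quot;parameterName&quot;: &quot;learning_rate&quot;,
    #         &quot;type&quot;: &quot;DOUBLE&quot;,
    #         &quot;minValue&quot;: 0.0001,
    #         &quot;maxValue&quot;: 0.1,
    #         &quot;scaleType&quot;: &quot;UNIT_LOG_SCALE&quot;
    #       }
    #     ]
    #   },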
    &quot;evaluatorType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
        # job&#x27;s evaluator nodes.
        #
        # The supported values are the same as those described in the entry for
        # `masterType`.
        #
        # This value must be consistent with the category of machine type that
        # `masterType` uses. In other words, both must be Compute Engine machine
        # types or both must be legacy machine types.
        #
        # This value must be present when `scaleTier` is set to `CUSTOM` and
        # `evaluatorCount` is greater than zero.
    &quot;network&quot;: &quot;A String&quot;, # Optional. The full name of the Google Compute Engine
        # [network](/compute/docs/networks-and-firewalls#networks) to which the Job
        # is peered. For example, projects/12345/global/networks/myVPC. Format is of
        # the form projects/{project}/global/networks/{network}. Where {project} is a
        # project number, as in &#x27;12345&#x27;, and {network} is the network name.
        #
        # Private services access must already be configured for the network. If left
        # unspecified, the Job is not peered with any network. Learn more -
        # Connecting Job to user network over private
        # IP.
    &quot;parameterServerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
        # job&#x27;s parameter server.
        #
        # The supported values are the same as those described in the entry for
        # `master_type`.
        #
        # This value must be consistent with the category of machine type that
        # `masterType` uses. In other words, both must be Compute Engine machine
        # types or both must be legacy machine types.
        #
        # This value must be present when `scaleTier` is set to `CUSTOM` and
        # `parameter_server_count` is greater than zero.
    &quot;workerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
        # job&#x27;s worker nodes.
        #
        # The supported values are the same as those described in the entry for
        # `masterType`.
        #
        # This value must be consistent with the category of machine type that
        # `masterType` uses. In other words, both must be Compute Engine machine
        # types or both must be legacy machine types.
        #
        # If you use `cloud_tpu` for this value, see special instructions for
        # [configuring a custom TPU
        # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
        #
        # This value must be present when `scaleTier` is set to `CUSTOM` and
        # `workerCount` is greater than zero.
    &quot;masterConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
        #
        # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
        # to a Compute Engine machine type. Learn about [restrictions on accelerator
        # configurations for
        # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
        #
        # Set `masterConfig.imageUri` only if you build a custom image. Only one of
        # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
        # about [configuring custom
        # containers](/ai-platform/training/docs/distributed-training-containers).
      &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
          # [Learn about restrictions on accelerator configurations for
          # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
          # Note that the AcceleratorConfig can be used in both Jobs and Versions.
          # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
          # [accelerators for online
          # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
        &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
        &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
      },
      &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
          # Registry. Learn more about [configuring custom
          # containers](/ai-platform/training/docs/distributed-training-containers).
      &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
          # The following rules apply for container_command and container_args:
          # - If you do not supply command or args:
          # The defaults defined in the Docker image are used.
          # - If you supply a command but no args:
          # The default EntryPoint and the default Cmd defined in the Docker image
          # are ignored. Your command is run without any arguments.
          # - If you supply only args:
          # The default Entrypoint defined in the Docker image is run with the args
          # that you supplied.
          # - If you supply a command and args:
          # The default Entrypoint and the default Cmd defined in the Docker image
          # are ignored. Your command is run with your args.
          # It cannot be set if custom container image is
          # not provided.
          # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
          # both cannot be set at the same time.
        &quot;A String&quot;,
      ],
      &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
          # the one used in the custom container. This field is required if the replica
          # is a TPU worker that uses a custom container. Otherwise, do not specify
          # this field. This must be a [runtime version that currently supports
          # training with
          # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
          #
          # Note that the version of TensorFlow included in a runtime version may
          # differ from the numbering of the runtime version itself, because it may
          # have a different [patch
          # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
          # In this field, you must specify the runtime version (TensorFlow minor
          # version). For example, if your custom container runs TensorFlow `1.x.y`,
          # specify `1.x`.
      &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
          # If provided, it will override default ENTRYPOINT of the docker image.
          # If not provided, the docker image&#x27;s ENTRYPOINT is used.
          # It cannot be set if custom container image is
          # not provided.
          # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
          # both cannot be set at the same time.
        &quot;A String&quot;,
      ],
    },
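    # For illustration only (not part of the schema above): a master worker on a
    # Compute Engine machine type with a GPU attached might be configured as, e.g.:
    #
    #   &quot;masterType&quot;: &quot;n1-highmem-8&quot;,
    #   &quot;masterConfig&quot;: {
    #     &quot;acceleratorConfig&quot;: {
    #       &quot;count&quot;: &quot;1&quot;,
    #       &quot;type&quot;: &quot;NVIDIA_TESLA_K80&quot;
    #     }
    #   },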
    &quot;evaluatorCount&quot;: &quot;A String&quot;, # Optional. The number of evaluator replicas to use for the training job.
        # Each replica in the cluster will be of the type specified in
        # `evaluator_type`.
        #
        # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
        # set this value, you must also set `evaluator_type`.
        #
        # The default value is zero.
    &quot;args&quot;: [ # Optional. Command-line arguments passed to the training application when it
        # starts. If your job uses a custom container, then the arguments are passed
        # to the container&#x27;s &lt;a class=&quot;external&quot; target=&quot;_blank&quot;
        # href=&quot;https://docs.docker.com/engine/reference/builder/#entrypoint&quot;&gt;
        # `ENTRYPOINT`&lt;/a&gt; command.
      &quot;A String&quot;,
    ],
    &quot;pythonModule&quot;: &quot;A String&quot;, # Required. The Python module name to run after installing the packages.
    &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for training. You must
        # either specify this field or specify `masterConfig.imageUri`.
        #
        # For more information, see the [runtime version
        # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
        # manage runtime versions](/ai-platform/training/docs/versioning).
    &quot;parameterServerCount&quot;: &quot;A String&quot;, # Optional. The number of parameter server replicas to use for the training
        # job. Each replica in the cluster will be of the type specified in
        # `parameter_server_type`.
        #
        # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
        # set this value, you must also set `parameter_server_type`.
        #
        # The default value is zero.
    &quot;evaluatorConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
        #
        # You should only set `evaluatorConfig.acceleratorConfig` if
        # `evaluatorType` is set to a Compute Engine machine type. [Learn
        # about restrictions on accelerator configurations for
        # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
        #
        # Set `evaluatorConfig.imageUri` only if you build a custom image for
        # your evaluator. If `evaluatorConfig.imageUri` has not been
        # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
        # containers](/ai-platform/training/docs/distributed-training-containers).
      &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
          # [Learn about restrictions on accelerator configurations for
          # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
          # Note that the AcceleratorConfig can be used in both Jobs and Versions.
          # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
          # [accelerators for online
          # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
        &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
        &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
      },
      &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
          # Registry. Learn more about [configuring custom
          # containers](/ai-platform/training/docs/distributed-training-containers).
      &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
          # The following rules apply for container_command and container_args:
          # - If you do not supply command or args:
          # The defaults defined in the Docker image are used.
          # - If you supply a command but no args:
          # The default EntryPoint and the default Cmd defined in the Docker image
          # are ignored. Your command is run without any arguments.
          # - If you supply only args:
          # The default Entrypoint defined in the Docker image is run with the args
          # that you supplied.
          # - If you supply a command and args:
          # The default Entrypoint and the default Cmd defined in the Docker image
          # are ignored. Your command is run with your args.
          # It cannot be set if custom container image is
          # not provided.
          # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
          # both cannot be set at the same time.
        &quot;A String&quot;,
      ],
      &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
          # the one used in the custom container. This field is required if the replica
          # is a TPU worker that uses a custom container. Otherwise, do not specify
          # this field. This must be a [runtime version that currently supports
          # training with
          # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
          #
          # Note that the version of TensorFlow included in a runtime version may
          # differ from the numbering of the runtime version itself, because it may
          # have a different [patch
          # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
          # In this field, you must specify the runtime version (TensorFlow minor
          # version). For example, if your custom container runs TensorFlow `1.x.y`,
          # specify `1.x`.
      &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
          # If provided, it will override default ENTRYPOINT of the docker image.
          # If not provided, the docker image&#x27;s ENTRYPOINT is used.
          # It cannot be set if custom container image is
          # not provided.
          # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
          # both cannot be set at the same time.
        &quot;A String&quot;,
      ],
    },
    &quot;encryptionConfig&quot;: { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
        # protect resources created by a training job, instead of using Google&#x27;s
        # default encryption. If this is set, then all resources created by the
        # training job will be encrypted with the customer-managed encryption key
        # that you specify.
        #
        # [Learn how and when to use CMEK with AI Platform
        # Training](/ai-platform/training/docs/cmek).
        # a resource.
      &quot;kmsKeyName&quot;: &quot;A String&quot;, # The Cloud KMS resource identifier of the customer-managed encryption key
          # used to protect a resource, such as a training job. It has the following
          # format:
          # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
    },
    &quot;workerCount&quot;: &quot;A String&quot;, # Optional. The number of worker replicas to use for the training job. Each
        # replica in the cluster will be of the type specified in `worker_type`.
        #
        # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
        # set this value, you must also set `worker_type`.
        #
        # The default value is zero.
    &quot;scheduling&quot;: { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
      &quot;maxWaitTime&quot;: &quot;A String&quot;,
      &quot;maxRunningTime&quot;: &quot;A String&quot;, # Optional. The maximum job running time, expressed in seconds. The field can
          # contain up to nine fractional digits, terminated by `s`. If not specified,
          # this field defaults to `604800s` (seven days).
          #
          # If the training job is still running after this duration, AI Platform
          # Training cancels it.
          #
          # For example, if you want to ensure your job runs for no more than 2 hours,
          # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
          # minute).
          #
          # If you submit your training job using the `gcloud` tool, you can [provide
          # this field in a `config.yaml`
          # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
          # For example:
          #
          # ```yaml
          # trainingInput:
          #   ...
          #   scheduling:
          #     maxRunningTime: 7200s
          #   ...
          # ```
    },
    &quot;scaleTier&quot;: &quot;A String&quot;, # Required. Specifies the machine types, the number of replicas for workers
        # and parameter servers.
    &quot;packageUris&quot;: [ # Required. The Google Cloud Storage location of the packages with
        # the training program and any additional dependencies.
        # The maximum number of package URIs is 100.
      &quot;A String&quot;,
    ],
  },
  &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
      # prevent simultaneous updates of a job from overwriting each other.
      # It is strongly suggested that systems make use of the `etag` in the
      # read-modify-write cycle to perform job updates in order to avoid race
      # conditions: An `etag` is returned in the response to `GetJob`, and
      # systems are expected to put that etag in the request to `UpdateJob` to
      # ensure that their change will be applied to the same version of the job.
  &quot;jobId&quot;: &quot;A String&quot;, # Required. The user-specified id of the job.
}
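# For illustration only: a minimal request body for a simple training job might look
# like the following (bucket, package, and module names are placeholders):
#
#   {
#     &quot;jobId&quot;: &quot;my_training_job&quot;,
#     &quot;trainingInput&quot;: {
#       &quot;scaleTier&quot;: &quot;BASIC&quot;,
#       &quot;region&quot;: &quot;us-central1&quot;,
#       &quot;runtimeVersion&quot;: &quot;2.1&quot;,
#       &quot;pythonVersion&quot;: &quot;3.7&quot;,
#       &quot;pythonModule&quot;: &quot;trainer.task&quot;,
#       &quot;packageUris&quot;: [&quot;gs://my-bucket/trainer-0.1.tar.gz&quot;],
#       &quot;jobDir&quot;: &quot;gs://my-bucket/output&quot;
#     }
#   }
#
# A body like this is passed as the `body` argument of
# ml.projects().jobs().create(parent=&#x27;projects/my-project&#x27;, body=body).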
831
832 x__xgafv: string, V1 error format.
833 Allowed values
834 1 - v1 error format
835 2 - v2 error format
836
837Returns:
838 An object of the form:
839
840 { # Represents a training or prediction job.
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700841 &quot;createTime&quot;: &quot;A String&quot;, # Output only. When the job was created.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700842 &quot;predictionInput&quot;: { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700843 &quot;dataFormat&quot;: &quot;A String&quot;, # Required. The format of the input data files.
844 &quot;outputPath&quot;: &quot;A String&quot;, # Required. The output Google Cloud Storage location.
845 &quot;inputPaths&quot;: [ # Required. The Cloud Storage location of the input data files. May contain
846 # &lt;a href=&quot;/storage/docs/gsutil/addlhelp/WildcardNames&quot;&gt;wildcards&lt;/a&gt;.
847 &quot;A String&quot;,
848 ],
849 &quot;region&quot;: &quot;A String&quot;, # Required. The Google Compute Engine region to run the prediction job in.
850 # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
851 # for AI Platform services.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700852 &quot;versionName&quot;: &quot;A String&quot;, # Use this field if you want to specify a version of the model to use. The
853 # string is formatted the same way as `model_version`, with the addition
854 # of the version information:
855 #
856 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION&quot;`
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700857 &quot;uri&quot;: &quot;A String&quot;, # Use this field if you want to specify a Google Cloud Storage path for
858 # the model to use.
859 &quot;outputDataFormat&quot;: &quot;A String&quot;, # Optional. Format of the output data files, defaults to JSON.
860 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for this batch
861 # prediction. If not set, AI Platform will pick the runtime version used
862 # during the CreateVersion request for this model version, or choose the
863 # latest stable version when model version information is not available
864 # such as when the model is specified by uri.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700865 &quot;modelName&quot;: &quot;A String&quot;, # Use this field if you want to use the default version for the specified
866 # model. The string must use the following format:
867 #
868 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL&quot;`
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700869 &quot;signatureName&quot;: &quot;A String&quot;, # Optional. The name of the signature defined in the SavedModel to use for
870 # this job. Please refer to
871 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
872 # for information about how to use signatures.
873 #
874 # Defaults to
875 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
876 # , which is &quot;serving_default&quot;.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700877 &quot;batchSize&quot;: &quot;A String&quot;, # Optional. Number of records per batch, defaults to 64.
878 # The service will buffer batch_size number of records in memory before
879 # invoking one Tensorflow prediction call internally. So take the record
880 # size and memory available into consideration when setting this parameter.
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700881 &quot;maxWorkerCount&quot;: &quot;A String&quot;, # Optional. The maximum number of workers to be used for parallel processing.
882 # Defaults to 10 if not specified.
883 },
884 &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your jobs.
885 # Each label is a key-value pair, where both the key and the value are
886 # arbitrary strings that you supply.
887 # For more information, see the documentation on
888 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
889 &quot;a_key&quot;: &quot;A String&quot;,
890 },
891 &quot;endTime&quot;: &quot;A String&quot;, # Output only. When the job processing was completed.
892 &quot;trainingOutput&quot;: { # Represents results of a training job. Output only. # The current training job result.
893 &quot;trials&quot;: [ # Results for individual Hyperparameter trials.
894 # Only set for hyperparameter tuning jobs.
895 { # Represents the result of a single hyperparameter tuning trial from a
896 # training job. The TrainingOutput object that is returned on successful
897 # completion of a training job with hyperparameter tuning includes a list
898 # of HyperparameterOutput objects, one for each successful trial.
899 &quot;endTime&quot;: &quot;A String&quot;, # Output only. End time for the trial.
900 &quot;finalMetric&quot;: { # An observed value of a metric. # The final objective metric seen for this trial.
901 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
902 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
903 },
904 &quot;hyperparameters&quot;: { # The hyperparameters given to this trial.
905 &quot;a_key&quot;: &quot;A String&quot;,
906 },
907 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
908 # Only set for trials of built-in algorithms jobs that have succeeded.
909 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
910 # saves the trained model. Only set for successful jobs that don&#x27;t use
911 # hyperparameter tuning.
912 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
913 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
914 # trained.
915 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
916 },
917 &quot;startTime&quot;: &quot;A String&quot;, # Output only. Start time for the trial.
918 &quot;allMetrics&quot;: [ # All recorded object metrics for this trial. This field is not currently
919 # populated.
920 { # An observed value of a metric.
921 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
922 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
923 },
924 ],
925 &quot;trialId&quot;: &quot;A String&quot;, # The trial id for these results.
926 &quot;isTrialStoppedEarly&quot;: True or False, # True if the trial is stopped early.
927 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of the trial.
928 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -0700929 ],
Bu Sun Kimd059ad82020-07-22 17:02:09 -0700930 &quot;completedTrialCount&quot;: &quot;A String&quot;, # The number of hyperparameter tuning trials that completed successfully.
931 # Only set for hyperparameter tuning jobs.
932 &quot;isHyperparameterTuningJob&quot;: True or False, # Whether this job is a hyperparameter tuning job.
933 &quot;isBuiltInAlgorithmJob&quot;: True or False, # Whether this job is a built-in Algorithm job.
934 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
935 # Only set for built-in algorithms jobs.
936 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
937 # saves the trained model. Only set for successful jobs that don&#x27;t use
938 # hyperparameter tuning.
939 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
940 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
941 # trained.
942 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
943 },
944 &quot;consumedMLUnits&quot;: 3.14, # The amount of ML units consumed by the job.
945 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # The TensorFlow summary tag name used for optimizing hyperparameter tuning
946 # trials. See
947 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
948 # for more information. Only set for hyperparameter tuning jobs.
949   },
950 &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
951   &quot;predictionOutput&quot;: { # Represents results of a prediction job. # The current prediction job result.
952 &quot;errorCount&quot;: &quot;A String&quot;, # The number of data instances which resulted in errors.
953 &quot;nodeHours&quot;: 3.14, # Node hours used by the batch prediction job.
954 &quot;outputPath&quot;: &quot;A String&quot;, # The output Google Cloud Storage location provided at the job creation time.
955 &quot;predictionCount&quot;: &quot;A String&quot;, # The number of generated predictions.
956 },
957 &quot;startTime&quot;: &quot;A String&quot;, # Output only. When the job processing was started.
958 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of a job.
959   &quot;trainingInput&quot;: { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
960 # to submit your training job, you can specify the input parameters as
961 # command-line arguments and/or in a YAML configuration file referenced from
962 # the --config command-line argument. For details, see the guide to [submitting
963 # a training job](/ai-platform/training/docs/training-jobs).
964     &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. The service account to use as the workload run-as account.
965         # Users submitting jobs must have act-as permission on this run-as account.
966         # If not specified, the CMLE P4SA is used by default.
967     &quot;workerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
968 #
969 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
970 # to a Compute Engine machine type. [Learn about restrictions on accelerator
971 # configurations for
972 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
973 #
974 # Set `workerConfig.imageUri` only if you build a custom image for your
975 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
976 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
977 # containers](/ai-platform/training/docs/distributed-training-containers).
978       &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
979 # [Learn about restrictions on accelerator configurations for
980 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
981 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
982 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
983 # [accelerators for online
984 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
985 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
986 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
987 },
988       &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
989 # Registry. Learn more about [configuring custom
990 # containers](/ai-platform/training/docs/distributed-training-containers).
991 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
992 # The following rules apply for container_command and container_args:
993 # - If you do not supply command or args:
994 # The defaults defined in the Docker image are used.
995 # - If you supply a command but no args:
996 # The default EntryPoint and the default Cmd defined in the Docker image
997 # are ignored. Your command is run without any arguments.
998 # - If you supply only args:
999 # The default Entrypoint defined in the Docker image is run with the args
1000 # that you supplied.
1001 # - If you supply a command and args:
1002 # The default Entrypoint and the default Cmd defined in the Docker image
1003 # are ignored. Your command is run with your args.
1004 # It cannot be set if custom container image is
1005 # not provided.
1006 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1007 # both cannot be set at the same time.
1008 &quot;A String&quot;,
1009 ],
1010       &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
1011 # the one used in the custom container. This field is required if the replica
1012 # is a TPU worker that uses a custom container. Otherwise, do not specify
1013 # this field. This must be a [runtime version that currently supports
1014 # training with
1015 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1016 #
1017 # Note that the version of TensorFlow included in a runtime version may
1018 # differ from the numbering of the runtime version itself, because it may
1019 # have a different [patch
1020 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1021 # In this field, you must specify the runtime version (TensorFlow minor
1022 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1023 # specify `1.x`.
1024 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
1025 # If provided, it will override default ENTRYPOINT of the docker image.
1026 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
1027 # It cannot be set if custom container image is
1028 # not provided.
1029 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1030 # both cannot be set at the same time.
1031 &quot;A String&quot;,
1032 ],
1033     },
1034     &quot;useChiefInTfConfig&quot;: True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
1035 # variable when training with a custom container. Defaults to `false`. [Learn
1036 # more about this
1037 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
1038         #
1039         # This field has no effect for training jobs that don&#x27;t use a custom
1040 # container.
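         # For illustration only (not part of the generated reference; the host
         # addresses are hypothetical): when this field is `true`, a worker&#x27;s
         # `TF_CONFIG` takes roughly the shape
         #   {&quot;cluster&quot;: {&quot;chief&quot;: [&quot;host0:2222&quot;], &quot;worker&quot;: [...], &quot;ps&quot;: [...]},
         #    &quot;task&quot;: {&quot;type&quot;: &quot;worker&quot;, &quot;index&quot;: 0}}
         # whereas the default behavior uses &quot;master&quot; in place of &quot;chief&quot;.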
1041     &quot;masterType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1042 # job&#x27;s master worker. You must specify this field when `scaleTier` is set to
1043 # `CUSTOM`.
1044 #
1045 # You can use certain Compute Engine machine types directly in this field.
1046 # The following types are supported:
1047 #
1048 # - `n1-standard-4`
1049 # - `n1-standard-8`
1050 # - `n1-standard-16`
1051 # - `n1-standard-32`
1052 # - `n1-standard-64`
1053 # - `n1-standard-96`
1054 # - `n1-highmem-2`
1055 # - `n1-highmem-4`
1056 # - `n1-highmem-8`
1057 # - `n1-highmem-16`
1058 # - `n1-highmem-32`
1059 # - `n1-highmem-64`
1060 # - `n1-highmem-96`
1061 # - `n1-highcpu-16`
1062 # - `n1-highcpu-32`
1063 # - `n1-highcpu-64`
1064 # - `n1-highcpu-96`
1065 #
1066 # Learn more about [using Compute Engine machine
1067 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
1068 #
1069 # Alternatively, you can use the following legacy machine types:
1070 #
1071 # - `standard`
1072 # - `large_model`
1073 # - `complex_model_s`
1074 # - `complex_model_m`
1075 # - `complex_model_l`
1076 # - `standard_gpu`
1077 # - `complex_model_m_gpu`
1078 # - `complex_model_l_gpu`
1079 # - `standard_p100`
1080 # - `complex_model_m_p100`
1081 # - `standard_v100`
1082 # - `large_model_v100`
1083 # - `complex_model_m_v100`
1084 # - `complex_model_l_v100`
1085 #
1086 # Learn more about [using legacy machine
1087 # types](/ml-engine/docs/machine-types#legacy-machine-types).
1088 #
1089 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
1090 # field. Learn more about the [special configuration options for training
1091 # with
1092 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
1093     &quot;parameterServerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
1094         #
1095         # You should only set `parameterServerConfig.acceleratorConfig` if
1096 # `parameterServerType` is set to a Compute Engine machine type. [Learn
1097 # about restrictions on accelerator configurations for
1098         # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1099 #
1100         # Set `parameterServerConfig.imageUri` only if you build a custom image for
1101 # your parameter server. If `parameterServerConfig.imageUri` has not been
1102 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
1103         # containers](/ai-platform/training/docs/distributed-training-containers).
1104       &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1105 # [Learn about restrictions on accelerator configurations for
1106 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1107 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1108 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1109 # [accelerators for online
1110 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1111 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
1112 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
1113 },
1114       &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
1115 # Registry. Learn more about [configuring custom
1116 # containers](/ai-platform/training/docs/distributed-training-containers).
1117 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
1118 # The following rules apply for container_command and container_args:
1119 # - If you do not supply command or args:
1120 # The defaults defined in the Docker image are used.
1121 # - If you supply a command but no args:
1122 # The default EntryPoint and the default Cmd defined in the Docker image
1123 # are ignored. Your command is run without any arguments.
1124 # - If you supply only args:
1125 # The default Entrypoint defined in the Docker image is run with the args
1126 # that you supplied.
1127 # - If you supply a command and args:
1128 # The default Entrypoint and the default Cmd defined in the Docker image
1129 # are ignored. Your command is run with your args.
1130 # It cannot be set if custom container image is
1131 # not provided.
1132 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1133 # both cannot be set at the same time.
1134 &quot;A String&quot;,
1135 ],
1136       &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
1137 # the one used in the custom container. This field is required if the replica
1138 # is a TPU worker that uses a custom container. Otherwise, do not specify
1139 # this field. This must be a [runtime version that currently supports
1140 # training with
1141 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1142 #
1143 # Note that the version of TensorFlow included in a runtime version may
1144 # differ from the numbering of the runtime version itself, because it may
1145 # have a different [patch
1146 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1147 # In this field, you must specify the runtime version (TensorFlow minor
1148 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1149 # specify `1.x`.
1150 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
1151 # If provided, it will override default ENTRYPOINT of the docker image.
1152 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
1153 # It cannot be set if custom container image is
1154 # not provided.
1155 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1156 # both cannot be set at the same time.
1157 &quot;A String&quot;,
1158 ],
1159     },
1160     &quot;region&quot;: &quot;A String&quot;, # Required. The region to run the training job in. See the [available
1161 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
1162     &quot;jobDir&quot;: &quot;A String&quot;, # Optional. A Google Cloud Storage path in which to store training outputs
1163 # and other data needed for training. This path is passed to your TensorFlow
1164 # program as the &#x27;--job-dir&#x27; command-line argument. The benefit of specifying
1165 # this field is that Cloud ML validates the path for use in training.
1166 &quot;pythonVersion&quot;: &quot;A String&quot;, # Optional. The version of Python used in training. You must either specify
1167 # this field or specify `masterConfig.imageUri`.
1168 #
1169 # The following Python versions are available:
1170 #
1171 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
1172 # later.
1173 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
1174 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
1175 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
1176 # earlier.
1177 #
1178 # Read more about the Python versions available for [each runtime
1179 # version](/ml-engine/docs/runtime-version-list).
1180     &quot;hyperparameters&quot;: { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
1181 &quot;maxTrials&quot;: 42, # Optional. How many training trials should be attempted to optimize
1182 # the specified hyperparameters.
1183 #
1184 # Defaults to one.
1185 &quot;enableTrialEarlyStopping&quot;: True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
1186 # early stopping.
1187 &quot;params&quot;: [ # Required. The set of parameters to tune.
1188 { # Represents a single hyperparameter to optimize.
1189 &quot;minValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
1190           # should be unset if type is `CATEGORICAL`. This value should be an integer if
1191           # type is `INTEGER`.
1192 &quot;categoricalValues&quot;: [ # Required if type is `CATEGORICAL`. The list of possible categories.
1193 &quot;A String&quot;,
1194 ],
1195 &quot;scaleType&quot;: &quot;A String&quot;, # Optional. How the parameter should be scaled to the hypercube.
1196 # Leave unset for categorical parameters.
1197 # Some kind of scaling is strongly recommended for real or integral
1198 # parameters (e.g., `UNIT_LINEAR_SCALE`).
1199 &quot;discreteValues&quot;: [ # Required if type is `DISCRETE`.
1200 # A list of feasible points.
1201 # The list should be in strictly increasing order. For instance, this
1202 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
1203 # should not contain more than 1,000 values.
1204 3.14,
1205 ],
1206 &quot;type&quot;: &quot;A String&quot;, # Required. The type of the parameter.
1207 &quot;maxValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
1208           # should be unset if type is `CATEGORICAL`. This value should be an integer if
1209 # type is `INTEGER`.
1210 &quot;parameterName&quot;: &quot;A String&quot;, # Required. The parameter name must be unique amongst all ParameterConfigs in
1211 # a HyperparameterSpec message. E.g., &quot;learning_rate&quot;.
1212 },
1213 ],
1214 &quot;maxFailedTrials&quot;: 42, # Optional. The number of failed trials that need to be seen before failing
1215 # the hyperparameter tuning job. You can specify this field to override the
1216 # default failing criteria for AI Platform hyperparameter tuning jobs.
1217 #
1218 # Defaults to zero, which means the service decides when a hyperparameter
1219 # job should fail.
1220 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # Optional. The TensorFlow summary tag name to use for optimizing trials. For
1221 # current versions of TensorFlow, this tag name should exactly match what is
1222 # shown in TensorBoard, including all scopes. For versions of TensorFlow
1223 # prior to 0.12, this should be only the tag passed to tf.Summary.
1224 # By default, &quot;training/hptuning/metric&quot; will be used.
1225 &quot;resumePreviousJobId&quot;: &quot;A String&quot;, # Optional. The prior hyperparameter tuning job id that users hope to
1226 # continue with. The job id will be used to find the corresponding vizier
1227 # study guid and resume the study.
1228 &quot;goal&quot;: &quot;A String&quot;, # Required. The type of goal to use for tuning. Available types are
1229 # `MAXIMIZE` and `MINIMIZE`.
1230 #
1231 # Defaults to `MAXIMIZE`.
1232 &quot;algorithm&quot;: &quot;A String&quot;, # Optional. The search algorithm specified for the hyperparameter
1233 # tuning job.
1234 # Uses the default AI Platform hyperparameter tuning
1235 # algorithm if unspecified.
1236 &quot;maxParallelTrials&quot;: 42, # Optional. The number of training trials to run concurrently.
1237 # You can reduce the time it takes to perform hyperparameter tuning by adding
1238           # trials in parallel. However, each trial only benefits from the information
1239 # gained in completed trials. That means that a trial does not get access to
1240 # the results of trials running at the same time, which could reduce the
1241 # quality of the overall optimization.
1242 #
1243 # Each trial will use the same scale tier and machine types.
1244 #
1245 # Defaults to one.
1246 },
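         # For illustration only (hypothetical values, not part of the generated
         # reference): a minimal spec that tunes a single learning-rate parameter
         # might look like
         #   &quot;hyperparameters&quot;: {
         #     &quot;goal&quot;: &quot;MAXIMIZE&quot;,
         #     &quot;hyperparameterMetricTag&quot;: &quot;accuracy&quot;,
         #     &quot;maxTrials&quot;: 10,
         #     &quot;maxParallelTrials&quot;: 2,
         #     &quot;params&quot;: [
         #       {
         #         &quot;parameterName&quot;: &quot;learning_rate&quot;,
         #         &quot;type&quot;: &quot;DOUBLE&quot;,
         #         &quot;minValue&quot;: 0.0001,
         #         &quot;maxValue&quot;: 0.1,
         #         &quot;scaleType&quot;: &quot;UNIT_LOG_SCALE&quot;,
         #       },
         #     ],
         #   },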
1247 &quot;evaluatorType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1248 # job&#x27;s evaluator nodes.
1249 #
1250 # The supported values are the same as those described in the entry for
1251 # `masterType`.
1252 #
1253 # This value must be consistent with the category of machine type that
1254 # `masterType` uses. In other words, both must be Compute Engine machine
1255 # types or both must be legacy machine types.
1256 #
1257 # This value must be present when `scaleTier` is set to `CUSTOM` and
1258 # `evaluatorCount` is greater than zero.
1259 &quot;network&quot;: &quot;A String&quot;, # Optional. The full name of the Google Compute Engine
1260 # [network](/compute/docs/networks-and-firewalls#networks) to which the Job
1261 # is peered. For example, projects/12345/global/networks/myVPC. Format is of
1262 # the form projects/{project}/global/networks/{network}. Where {project} is a
1263 # project number, as in &#x27;12345&#x27;, and {network} is network name.&quot;.
1264 #
1265 # Private services access must already be configured for the network. If left
1266 # unspecified, the Job is not peered with any network. Learn more -
1267 # Connecting Job to user network over private
1268 # IP.
1269 &quot;parameterServerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1270 # job&#x27;s parameter server.
1271 #
1272 # The supported values are the same as those described in the entry for
1273 # `master_type`.
1274 #
1275 # This value must be consistent with the category of machine type that
1276 # `masterType` uses. In other words, both must be Compute Engine machine
1277 # types or both must be legacy machine types.
1278 #
1279 # This value must be present when `scaleTier` is set to `CUSTOM` and
1280 # `parameter_server_count` is greater than zero.
1281 &quot;workerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1282 # job&#x27;s worker nodes.
1283 #
1284 # The supported values are the same as those described in the entry for
1285 # `masterType`.
1286 #
1287 # This value must be consistent with the category of machine type that
1288 # `masterType` uses. In other words, both must be Compute Engine machine
1289 # types or both must be legacy machine types.
1290 #
1291 # If you use `cloud_tpu` for this value, see special instructions for
1292 # [configuring a custom TPU
1293 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
1294 #
1295 # This value must be present when `scaleTier` is set to `CUSTOM` and
1296 # `workerCount` is greater than zero.
1297 &quot;masterConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
1298 #
1299 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
1300 # to a Compute Engine machine type. Learn about [restrictions on accelerator
1301 # configurations for
1302 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1303 #
1304 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
1305 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
1306 # about [configuring custom
1307 # containers](/ai-platform/training/docs/distributed-training-containers).
1308 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1309 # [Learn about restrictions on accelerator configurations for
1310 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1311 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1312 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1313 # [accelerators for online
1314 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1315 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
1316 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
1317 },
1318 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
1319 # Registry. Learn more about [configuring custom
1320 # containers](/ai-platform/training/docs/distributed-training-containers).
1321 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
1322 # The following rules apply for container_command and container_args:
1323 # - If you do not supply command or args:
1324 # The defaults defined in the Docker image are used.
1325 # - If you supply a command but no args:
1326 # The default EntryPoint and the default Cmd defined in the Docker image
1327 # are ignored. Your command is run without any arguments.
1328 # - If you supply only args:
1329 # The default Entrypoint defined in the Docker image is run with the args
1330 # that you supplied.
1331 # - If you supply a command and args:
1332 # The default Entrypoint and the default Cmd defined in the Docker image
1333 # are ignored. Your command is run with your args.
1334 # It cannot be set if custom container image is
1335 # not provided.
1336 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1337 # both cannot be set at the same time.
1338 &quot;A String&quot;,
1339 ],
1340 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
1341 # the one used in the custom container. This field is required if the replica
1342 # is a TPU worker that uses a custom container. Otherwise, do not specify
1343 # this field. This must be a [runtime version that currently supports
1344 # training with
1345 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1346 #
1347 # Note that the version of TensorFlow included in a runtime version may
1348 # differ from the numbering of the runtime version itself, because it may
1349 # have a different [patch
1350 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1351 # In this field, you must specify the runtime version (TensorFlow minor
1352 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1353 # specify `1.x`.
1354 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
1355 # If provided, it will override default ENTRYPOINT of the docker image.
1356 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
1357 # It cannot be set if custom container image is
1358 # not provided.
1359 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1360 # both cannot be set at the same time.
1361 &quot;A String&quot;,
1362 ],
1363 },
1364 &quot;evaluatorCount&quot;: &quot;A String&quot;, # Optional. The number of evaluator replicas to use for the training job.
1365 # Each replica in the cluster will be of the type specified in
1366 # `evaluator_type`.
1367 #
1368 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
1369 # set this value, you must also set `evaluator_type`.
1370 #
1371 # The default value is zero.
1372 &quot;args&quot;: [ # Optional. Command-line arguments passed to the training application when it
1373 # starts. If your job uses a custom container, then the arguments are passed
1374 # to the container&#x27;s &lt;a class=&quot;external&quot; target=&quot;_blank&quot;
1375 # href=&quot;https://docs.docker.com/engine/reference/builder/#entrypoint&quot;&gt;
1376 # `ENTRYPOINT`&lt;/a&gt; command.
1377 &quot;A String&quot;,
1378 ],
1379 &quot;pythonModule&quot;: &quot;A String&quot;, # Required. The Python module name to run after installing the packages.
1380 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for training. You must
1381 # either specify this field or specify `masterConfig.imageUri`.
1382 #
1383 # For more information, see the [runtime version
1384 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
1385 # manage runtime versions](/ai-platform/training/docs/versioning).
1386 &quot;parameterServerCount&quot;: &quot;A String&quot;, # Optional. The number of parameter server replicas to use for the training
1387 # job. Each replica in the cluster will be of the type specified in
1388 # `parameter_server_type`.
1389 #
1390 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
1391 # set this value, you must also set `parameter_server_type`.
1392 #
1393 # The default value is zero.
1394 &quot;evaluatorConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
1395 #
1396 # You should only set `evaluatorConfig.acceleratorConfig` if
1397 # `evaluatorType` is set to a Compute Engine machine type. [Learn
1398 # about restrictions on accelerator configurations for
1399 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1400 #
1401 # Set `evaluatorConfig.imageUri` only if you build a custom image for
1402 # your evaluator. If `evaluatorConfig.imageUri` has not been
1403 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
1404 # containers](/ai-platform/training/docs/distributed-training-containers).
1405 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1406 # [Learn about restrictions on accelerator configurations for
1407 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1408 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1409 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1410 # [accelerators for online
1411 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1412 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
1413 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
1414 },
1415 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
1416 # Registry. Learn more about [configuring custom
1417 # containers](/ai-platform/training/docs/distributed-training-containers).
1418 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
1419 # The following rules apply for container_command and container_args:
1420 # - If you do not supply command or args:
1421 # The defaults defined in the Docker image are used.
1422 # - If you supply a command but no args:
1423 # The default EntryPoint and the default Cmd defined in the Docker image
1424 # are ignored. Your command is run without any arguments.
1425 # - If you supply only args:
1426 # The default Entrypoint defined in the Docker image is run with the args
1427 # that you supplied.
1428 # - If you supply a command and args:
1429 # The default Entrypoint and the default Cmd defined in the Docker image
1430 # are ignored. Your command is run with your args.
1431 # It cannot be set if custom container image is
1432 # not provided.
1433 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1434 # both cannot be set at the same time.
1435 &quot;A String&quot;,
1436 ],
1437 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
1438 # the one used in the custom container. This field is required if the replica
1439 # is a TPU worker that uses a custom container. Otherwise, do not specify
1440 # this field. This must be a [runtime version that currently supports
1441 # training with
1442 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1443 #
1444 # Note that the version of TensorFlow included in a runtime version may
1445 # differ from the numbering of the runtime version itself, because it may
1446 # have a different [patch
1447 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1448 # In this field, you must specify the runtime version (TensorFlow minor
1449 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1450 # specify `1.x`.
1451 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
1452 # If provided, it will override default ENTRYPOINT of the docker image.
1453 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
1454 # It cannot be set if custom container image is
1455 # not provided.
1456 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1457 # both cannot be set at the same time.
1458 &quot;A String&quot;,
1459 ],
1460 },
1461 &quot;encryptionConfig&quot;: { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
1462 # protect resources created by a training job, instead of using Google&#x27;s
1463 # default encryption. If this is set, then all resources created by the
1464 # training job will be encrypted with the customer-managed encryption key
1465 # that you specify.
1466 #
1467 # [Learn how and when to use CMEK with AI Platform
1468 # Training](/ai-platform/training/docs/cmek).
1469 # a resource.
1470 &quot;kmsKeyName&quot;: &quot;A String&quot;, # The Cloud KMS resource identifier of the customer-managed encryption key
1471 # used to protect a resource, such as a training job. It has the following
1472 # format:
1473 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
1474 },
1475 &quot;workerCount&quot;: &quot;A String&quot;, # Optional. The number of worker replicas to use for the training job. Each
1476 # replica in the cluster will be of the type specified in `worker_type`.
1477 #
1478 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
1479 # set this value, you must also set `worker_type`.
1480 #
1481 # The default value is zero.
1482     &quot;scheduling&quot;: { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
1483 &quot;maxWaitTime&quot;: &quot;A String&quot;,
1484 &quot;maxRunningTime&quot;: &quot;A String&quot;, # Optional. The maximum job running time, expressed in seconds. The field can
1485 # contain up to nine fractional digits, terminated by `s`. If not specified,
1486 # this field defaults to `604800s` (seven days).
1487 #
1488 # If the training job is still running after this duration, AI Platform
1489 # Training cancels it.
1490 #
1491 # For example, if you want to ensure your job runs for no more than 2 hours,
1492 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
1493 # minute).
1494 #
1495 # If you submit your training job using the `gcloud` tool, you can [provide
1496 # this field in a `config.yaml`
1497 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
1498 # For example:
1499 #
1500 # ```yaml
1501 # trainingInput:
1502 # ...
1503 # scheduling:
1504 # maxRunningTime: 7200s
1505 # ...
1506 # ```
1507 },
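         # For illustration only: the same two-hour limit, set directly in the
         # request body instead of a config.yaml file, would be the hypothetical
         # snippet
         #   &quot;scheduling&quot;: {&quot;maxRunningTime&quot;: &quot;7200s&quot;},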
1508     &quot;scaleTier&quot;: &quot;A String&quot;, # Required. Specifies the machine types, the number of replicas for workers
1509 # and parameter servers.
1510 &quot;packageUris&quot;: [ # Required. The Google Cloud Storage location of the packages with
1511 # the training program and any additional dependencies.
1512 # The maximum number of package URIs is 100.
1513 &quot;A String&quot;,
1514     ],
1515   },
1516   &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
1517 # prevent simultaneous updates of a job from overwriting each other.
1518 # It is strongly suggested that systems make use of the `etag` in the
1519 # read-modify-write cycle to perform job updates in order to avoid race
1520 # conditions: An `etag` is returned in the response to `GetJob`, and
1521 # systems are expected to put that etag in the request to `UpdateJob` to
1522 # ensure that their change will be applied to the same version of the job.
1523 &quot;jobId&quot;: &quot;A String&quot;, # Required. The user-specified id of the job.
1524   }</pre>
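<p>For illustration only: a minimal Python sketch of the read-modify-write cycle that the <code>etag</code> field above is meant to support, using this client library. The project and job ids are hypothetical, the client is assumed to be authorized with Application Default Credentials, and the fields that may actually be updated are listed in the <code>patch</code> reference.</p>
<pre>
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
name = 'projects/my-project/jobs/my_training_job'  # hypothetical ids

# Read: fetch the job, including its current etag.
job = ml.projects().jobs().get(name=name).execute()

# Modify: change only the fields you intend to update.
labels = dict(job.get('labels', {}), team='research')

# Write: send the etag back so a concurrent update is detected and rejected.
ml.projects().jobs().patch(
    name=name,
    updateMask='labels',
    body={'labels': labels, 'etag': job['etag']},
).execute()
</pre>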
1525</div>
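<p>For illustration only: a hedged sketch of creating a training job with the fields shown above and polling it until it reaches a terminal state. The project id, bucket paths, package name, Python module, and runtime settings are hypothetical placeholders.</p>
<pre>
import time

from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
project = 'projects/my-project'  # hypothetical project id

body = {
    'jobId': 'my_training_job_001',
    'trainingInput': {
        'scaleTier': 'BASIC',
        'region': 'us-central1',
        'runtimeVersion': '2.1',
        'pythonVersion': '3.7',
        'packageUris': ['gs://my-bucket/trainer-0.1.tar.gz'],
        'pythonModule': 'trainer.task',
        'jobDir': 'gs://my-bucket/output',
        'scheduling': {'maxRunningTime': '7200s'},
    },
}
ml.projects().jobs().create(parent=project, body=body).execute()

# Poll until the job reaches a terminal state.
name = project + '/jobs/' + body['jobId']
while True:
    job = ml.projects().jobs().get(name=name).execute()
    if job['state'] in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
        break
    time.sleep(60)

# For hyperparameter tuning jobs with a MAXIMIZE goal, pick the best trial.
trials = job.get('trainingOutput', {}).get('trials', [])
if trials:
    best = max(trials, key=lambda t: float(t['finalMetric']['objectiveValue']))
    print(best['trialId'], best['hyperparameters'])
</pre>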
1526
1527<div class="method">
1528 <code class="details" id="get">get(name, x__xgafv=None)</code>
1529 <pre>Describes a job.
1530
1531Args:
1532 name: string, Required. The name of the job to get the description of. (required)
1533 x__xgafv: string, V1 error format.
1534 Allowed values
1535 1 - v1 error format
1536 2 - v2 error format
1537
1538Returns:
1539 An object of the form:
1540
1541 { # Represents a training or prediction job.
1542   &quot;createTime&quot;: &quot;A String&quot;, # Output only. When the job was created.
1543   &quot;predictionInput&quot;: { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
1544     &quot;dataFormat&quot;: &quot;A String&quot;, # Required. The format of the input data files.
1545 &quot;outputPath&quot;: &quot;A String&quot;, # Required. The output Google Cloud Storage location.
1546 &quot;inputPaths&quot;: [ # Required. The Cloud Storage location of the input data files. May contain
1547 # &lt;a href=&quot;/storage/docs/gsutil/addlhelp/WildcardNames&quot;&gt;wildcards&lt;/a&gt;.
1548 &quot;A String&quot;,
1549 ],
1550 &quot;region&quot;: &quot;A String&quot;, # Required. The Google Compute Engine region to run the prediction job in.
1551 # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
1552 # for AI Platform services.
1553     &quot;versionName&quot;: &quot;A String&quot;, # Use this field if you want to specify a version of the model to use. The
1554 # string is formatted the same way as `model_version`, with the addition
1555 # of the version information:
1556 #
1557 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION&quot;`
1558     &quot;uri&quot;: &quot;A String&quot;, # Use this field if you want to specify a Google Cloud Storage path for
1559 # the model to use.
1560 &quot;outputDataFormat&quot;: &quot;A String&quot;, # Optional. Format of the output data files, defaults to JSON.
1561 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for this batch
1562 # prediction. If not set, AI Platform will pick the runtime version used
1563 # during the CreateVersion request for this model version, or choose the
1564 # latest stable version when model version information is not available
1565 # such as when the model is specified by uri.
1566     &quot;modelName&quot;: &quot;A String&quot;, # Use this field if you want to use the default version for the specified
1567 # model. The string must use the following format:
1568 #
1569 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL&quot;`
1570     &quot;signatureName&quot;: &quot;A String&quot;, # Optional. The name of the signature defined in the SavedModel to use for
1571 # this job. Please refer to
1572 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
1573 # for information about how to use signatures.
1574 #
1575 # Defaults to
1576 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
1577 # , which is &quot;serving_default&quot;.
1578     &quot;batchSize&quot;: &quot;A String&quot;, # Optional. Number of records per batch, defaults to 64.
1579 # The service will buffer batch_size number of records in memory before
1580 # invoking one Tensorflow prediction call internally. So take the record
1581 # size and memory available into consideration when setting this parameter.
1582     &quot;maxWorkerCount&quot;: &quot;A String&quot;, # Optional. The maximum number of workers to be used for parallel processing.
1583 # Defaults to 10 if not specified.
1584 },
1585 &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your jobs.
1586 # Each label is a key-value pair, where both the key and the value are
1587 # arbitrary strings that you supply.
1588 # For more information, see the documentation on
1589 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
1590 &quot;a_key&quot;: &quot;A String&quot;,
1591 },
1592 &quot;endTime&quot;: &quot;A String&quot;, # Output only. When the job processing was completed.
1593 &quot;trainingOutput&quot;: { # Represents results of a training job. Output only. # The current training job result.
1594 &quot;trials&quot;: [ # Results for individual Hyperparameter trials.
1595 # Only set for hyperparameter tuning jobs.
1596 { # Represents the result of a single hyperparameter tuning trial from a
1597 # training job. The TrainingOutput object that is returned on successful
1598 # completion of a training job with hyperparameter tuning includes a list
1599 # of HyperparameterOutput objects, one for each successful trial.
1600 &quot;endTime&quot;: &quot;A String&quot;, # Output only. End time for the trial.
1601 &quot;finalMetric&quot;: { # An observed value of a metric. # The final objective metric seen for this trial.
1602 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
1603 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
1604 },
1605 &quot;hyperparameters&quot;: { # The hyperparameters given to this trial.
1606 &quot;a_key&quot;: &quot;A String&quot;,
1607 },
1608 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
1609 # Only set for trials of built-in algorithms jobs that have succeeded.
1610 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
1611 # saves the trained model. Only set for successful jobs that don&#x27;t use
1612 # hyperparameter tuning.
1613 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
1614 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
1615 # trained.
1616 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
1617 },
1618 &quot;startTime&quot;: &quot;A String&quot;, # Output only. Start time for the trial.
1619       &quot;allMetrics&quot;: [ # All recorded objective metrics for this trial. This field is not currently
1620 # populated.
1621 { # An observed value of a metric.
1622 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
1623 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
1624 },
1625 ],
1626 &quot;trialId&quot;: &quot;A String&quot;, # The trial id for these results.
1627 &quot;isTrialStoppedEarly&quot;: True or False, # True if the trial is stopped early.
1628 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of the trial.
1629 },
1630     ],
1631     &quot;completedTrialCount&quot;: &quot;A String&quot;, # The number of hyperparameter tuning trials that completed successfully.
1632 # Only set for hyperparameter tuning jobs.
1633 &quot;isHyperparameterTuningJob&quot;: True or False, # Whether this job is a hyperparameter tuning job.
1634 &quot;isBuiltInAlgorithmJob&quot;: True or False, # Whether this job is a built-in Algorithm job.
1635 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
1636 # Only set for built-in algorithms jobs.
1637 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
1638 # saves the trained model. Only set for successful jobs that don&#x27;t use
1639 # hyperparameter tuning.
1640 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
1641 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
1642 # trained.
1643 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
1644 },
1645 &quot;consumedMLUnits&quot;: 3.14, # The amount of ML units consumed by the job.
1646 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # The TensorFlow summary tag name used for optimizing hyperparameter tuning
1647 # trials. See
1648 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
1649 # for more information. Only set for hyperparameter tuning jobs.
1650   },
1651 &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
1652   &quot;predictionOutput&quot;: { # Represents results of a prediction job. # The current prediction job result.
1653 &quot;errorCount&quot;: &quot;A String&quot;, # The number of data instances which resulted in errors.
1654 &quot;nodeHours&quot;: 3.14, # Node hours used by the batch prediction job.
1655 &quot;outputPath&quot;: &quot;A String&quot;, # The output Google Cloud Storage location provided at the job creation time.
1656 &quot;predictionCount&quot;: &quot;A String&quot;, # The number of generated predictions.
1657 },
1658 &quot;startTime&quot;: &quot;A String&quot;, # Output only. When the job processing was started.
1659 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of a job.
1660   &quot;trainingInput&quot;: { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
1661 # to submit your training job, you can specify the input parameters as
1662 # command-line arguments and/or in a YAML configuration file referenced from
1663 # the --config command-line argument. For details, see the guide to [submitting
1664 # a training job](/ai-platform/training/docs/training-jobs).
1665     &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. The service account to use as the workload run-as account.
1666         # Users submitting jobs must have act-as permission on this run-as account.
1667         # If not specified, the CMLE P4SA is used by default.
1668     &quot;workerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
1669 #
1670 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
1671 # to a Compute Engine machine type. [Learn about restrictions on accelerator
1672 # configurations for
1673 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1674 #
1675 # Set `workerConfig.imageUri` only if you build a custom image for your
1676 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
1677 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
1678 # containers](/ai-platform/training/docs/distributed-training-containers).
1679       &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1680 # [Learn about restrictions on accelerator configurations for
1681 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1682 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1683 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1684 # [accelerators for online
1685 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1686 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
1687 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
1688 },
1689       &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
1690 # Registry. Learn more about [configuring custom
1691 # containers](/ai-platform/training/docs/distributed-training-containers).
1692 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
1693 # The following rules apply for container_command and container_args:
1694 # - If you do not supply command or args:
1695 # The defaults defined in the Docker image are used.
1696 # - If you supply a command but no args:
1697 # The default EntryPoint and the default Cmd defined in the Docker image
1698 # are ignored. Your command is run without any arguments.
1699 # - If you supply only args:
1700 # The default Entrypoint defined in the Docker image is run with the args
1701 # that you supplied.
1702 # - If you supply a command and args:
1703 # The default Entrypoint and the default Cmd defined in the Docker image
1704 # are ignored. Your command is run with your args.
1705 # It cannot be set if custom container image is
1706 # not provided.
1707 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1708 # both cannot be set at the same time.
1709 &quot;A String&quot;,
1710 ],
1711       &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
1712 # the one used in the custom container. This field is required if the replica
1713 # is a TPU worker that uses a custom container. Otherwise, do not specify
1714 # this field. This must be a [runtime version that currently supports
1715 # training with
1716 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1717 #
1718 # Note that the version of TensorFlow included in a runtime version may
1719 # differ from the numbering of the runtime version itself, because it may
1720 # have a different [patch
1721 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1722 # In this field, you must specify the runtime version (TensorFlow minor
1723 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1724 # specify `1.x`.
1725 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
1726 # If provided, it will override default ENTRYPOINT of the docker image.
1727 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
1728 # It cannot be set if custom container image is
1729 # not provided.
1730 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1731 # both cannot be set at the same time.
1732 &quot;A String&quot;,
1733 ],
1734     },
1735     &quot;useChiefInTfConfig&quot;: True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
1736 # variable when training with a custom container. Defaults to `false`. [Learn
1737 # more about this
1738 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
1739         #
1740         # This field has no effect for training jobs that don&#x27;t use a custom
1741 # container.
1742     &quot;masterType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1743 # job&#x27;s master worker. You must specify this field when `scaleTier` is set to
1744 # `CUSTOM`.
1745         #
1746         # You can use certain Compute Engine machine types directly in this field.
1747 # The following types are supported:
1748 #
1749 # - `n1-standard-4`
1750 # - `n1-standard-8`
1751 # - `n1-standard-16`
1752 # - `n1-standard-32`
1753 # - `n1-standard-64`
1754 # - `n1-standard-96`
1755 # - `n1-highmem-2`
1756 # - `n1-highmem-4`
1757 # - `n1-highmem-8`
1758 # - `n1-highmem-16`
1759 # - `n1-highmem-32`
1760 # - `n1-highmem-64`
1761 # - `n1-highmem-96`
1762 # - `n1-highcpu-16`
1763 # - `n1-highcpu-32`
1764 # - `n1-highcpu-64`
1765 # - `n1-highcpu-96`
1766 #
1767 # Learn more about [using Compute Engine machine
1768 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
1769 #
1770 # Alternatively, you can use the following legacy machine types:
1771 #
1772 # - `standard`
1773 # - `large_model`
1774 # - `complex_model_s`
1775 # - `complex_model_m`
1776 # - `complex_model_l`
1777 # - `standard_gpu`
1778 # - `complex_model_m_gpu`
1779 # - `complex_model_l_gpu`
1780 # - `standard_p100`
1781 # - `complex_model_m_p100`
1782 # - `standard_v100`
1783 # - `large_model_v100`
1784 # - `complex_model_m_v100`
1785 # - `complex_model_l_v100`
1786 #
1787 # Learn more about [using legacy machine
1788 # types](/ml-engine/docs/machine-types#legacy-machine-types).
1789 #
1790 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
1791 # field. Learn more about the [special configuration options for training
1792 # with
1793 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
1794     &quot;parameterServerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
1795         #
1796         # You should only set `parameterServerConfig.acceleratorConfig` if
1797 # `parameterServerType` is set to a Compute Engine machine type. [Learn
1798 # about restrictions on accelerator configurations for
1799         # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1800         #
1801         # Set `parameterServerConfig.imageUri` only if you build a custom image for
1802 # your parameter server. If `parameterServerConfig.imageUri` has not been
1803 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
1804         # containers](/ai-platform/training/docs/distributed-training-containers).
1805       &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1806 # [Learn about restrictions on accelerator configurations for
1807 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
1808 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
1809 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
1810 # [accelerators for online
1811 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
1812 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
1813 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
1814 },
1815       &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
1816 # Registry. Learn more about [configuring custom
1817 # containers](/ai-platform/training/docs/distributed-training-containers).
1818 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
1819 # The following rules apply for container_command and container_args:
1820 # - If you do not supply command or args:
1821 # The defaults defined in the Docker image are used.
1822 # - If you supply a command but no args:
1823 # The default EntryPoint and the default Cmd defined in the Docker image
1824 # are ignored. Your command is run without any arguments.
1825 # - If you supply only args:
1826 # The default Entrypoint defined in the Docker image is run with the args
1827 # that you supplied.
1828 # - If you supply a command and args:
1829 # The default Entrypoint and the default Cmd defined in the Docker image
1830 # are ignored. Your command is run with your args.
1831 # This field cannot be set if a custom container image is
1832 # not provided.
1833 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1834 # both cannot be set at the same time.
1835 &quot;A String&quot;,
1836 ],
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07001837 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
1838 # the one used in the custom container. This field is required if the replica
1839 # is a TPU worker that uses a custom container. Otherwise, do not specify
1840 # this field. This must be a [runtime version that currently supports
1841 # training with
1842 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
1843 #
1844 # Note that the version of TensorFlow included in a runtime version may
1845 # differ from the numbering of the runtime version itself, because it may
1846 # have a different [patch
1847 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
1848 # In this field, you must specify the runtime version (TensorFlow minor
1849 # version). For example, if your custom container runs TensorFlow `1.x.y`,
1850 # specify `1.x`.
1851 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
1852 # If provided, it overrides the default ENTRYPOINT of the Docker image.
1853 # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
1854 # This field cannot be set if a custom container image is
1855 # not provided.
1856 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
1857 # both cannot be set at the same time.
1858 &quot;A String&quot;,
1859 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07001860 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001861 &quot;region&quot;: &quot;A String&quot;, # Required. The region to run the training job in. See the [available
1862 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
Bu Sun Kim65020912020-05-20 12:08:20 -07001863 &quot;jobDir&quot;: &quot;A String&quot;, # Optional. A Google Cloud Storage path in which to store training outputs
1864 # and other data needed for training. This path is passed to your TensorFlow
1865 # program as the &#x27;--job-dir&#x27; command-line argument. The benefit of specifying
1866 # this field is that Cloud ML validates the path for use in training.
1867 &quot;pythonVersion&quot;: &quot;A String&quot;, # Optional. The version of Python used in training. You must either specify
1868 # this field or specify `masterConfig.imageUri`.
1869 #
1870 # The following Python versions are available:
1871 #
1872 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
1873 # later.
1874 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
1875 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
1876 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
1877 # earlier.
1878 #
1879 # Read more about the Python versions available for [each runtime
1880 # version](/ml-engine/docs/runtime-version-list).
Bu Sun Kimd059ad82020-07-22 17:02:09 -07001881 &quot;hyperparameters&quot;: { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
1882 &quot;maxTrials&quot;: 42, # Optional. How many training trials should be attempted to optimize
1883 # the specified hyperparameters.
1884 #
1885 # Defaults to one.
1886 &quot;enableTrialEarlyStopping&quot;: True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
1887 # early stopping.
1888 &quot;params&quot;: [ # Required. The set of parameters to tune.
1889 { # Represents a single hyperparameter to optimize.
1890 &quot;minValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
1891 # should be unset if type is `CATEGORICAL`. This value should be an integer if
1892 # type is `INTEGER`.
1893 &quot;categoricalValues&quot;: [ # Required if type is `CATEGORICAL`. The list of possible categories.
1894 &quot;A String&quot;,
1895 ],
1896 &quot;scaleType&quot;: &quot;A String&quot;, # Optional. How the parameter should be scaled to the hypercube.
1897 # Leave unset for categorical parameters.
1898 # Some kind of scaling is strongly recommended for real or integral
1899 # parameters (e.g., `UNIT_LINEAR_SCALE`).
1900 &quot;discreteValues&quot;: [ # Required if type is `DISCRETE`.
1901 # A list of feasible points.
1902 # The list should be in strictly increasing order. For instance, this
1903 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
1904 # should not contain more than 1,000 values.
1905 3.14,
1906 ],
1907 &quot;type&quot;: &quot;A String&quot;, # Required. The type of the parameter.
1908 &quot;maxValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
1909 # should be unset if type is `CATEGORICAL`. This value should be an integer if
1910 # type is `INTEGER`.
1911 &quot;parameterName&quot;: &quot;A String&quot;, # Required. The parameter name must be unique amongst all ParameterConfigs in
1912 # a HyperparameterSpec message. E.g., &quot;learning_rate&quot;.
1913 },
1914 ],
1915 &quot;maxFailedTrials&quot;: 42, # Optional. The number of failed trials that need to be seen before failing
1916 # the hyperparameter tuning job. You can specify this field to override the
1917 # default failing criteria for AI Platform hyperparameter tuning jobs.
1918 #
1919 # Defaults to zero, which means the service decides when a hyperparameter
1920 # job should fail.
1921 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # Optional. The TensorFlow summary tag name to use for optimizing trials. For
1922 # current versions of TensorFlow, this tag name should exactly match what is
1923 # shown in TensorBoard, including all scopes. For versions of TensorFlow
1924 # prior to 0.12, this should be only the tag passed to tf.Summary.
1925 # By default, &quot;training/hptuning/metric&quot; will be used.
1926 &quot;resumePreviousJobId&quot;: &quot;A String&quot;, # Optional. The ID of a previous hyperparameter tuning job to
1927 # continue. The job ID is used to find the corresponding Vizier
1928 # study GUID and resume that study.
1929 &quot;goal&quot;: &quot;A String&quot;, # Required. The type of goal to use for tuning. Available types are
1930 # `MAXIMIZE` and `MINIMIZE`.
1931 #
1932 # Defaults to `MAXIMIZE`.
1933 &quot;algorithm&quot;: &quot;A String&quot;, # Optional. The search algorithm specified for the hyperparameter
1934 # tuning job.
1935 # Uses the default AI Platform hyperparameter tuning
1936 # algorithm if unspecified.
1937 &quot;maxParallelTrials&quot;: 42, # Optional. The number of training trials to run concurrently.
1938 # You can reduce the time it takes to perform hyperparameter tuning by adding
1939 # trials in parallel. However, each trial only benefits from the information
1940 # gained in completed trials. That means that a trial does not get access to
1941 # the results of trials running at the same time, which could reduce the
1942 # quality of the overall optimization.
1943 #
1944 # Each trial will use the same scale tier and machine types.
1945 #
1946 # Defaults to one.
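          # Putting the fields above together, a minimal sketch of a hyperparameter
          # spec (parameter names and values are illustrative only) might look like:
          #
          # ```python
          # hyperparameters = {
          #     &quot;goal&quot;: &quot;MAXIMIZE&quot;,
          #     &quot;hyperparameterMetricTag&quot;: &quot;accuracy&quot;,
          #     &quot;maxTrials&quot;: 20,
          #     &quot;maxParallelTrials&quot;: 2,
          #     &quot;params&quot;: [
          #         {
          #             &quot;parameterName&quot;: &quot;learning_rate&quot;,
          #             &quot;type&quot;: &quot;DOUBLE&quot;,
          #             &quot;minValue&quot;: 0.0001,
          #             &quot;maxValue&quot;: 0.1,
          #             &quot;scaleType&quot;: &quot;UNIT_LINEAR_SCALE&quot;,
          #         },
          #         {
          #             &quot;parameterName&quot;: &quot;optimizer&quot;,
          #             &quot;type&quot;: &quot;CATEGORICAL&quot;,
          #             &quot;categoricalValues&quot;: [&quot;adam&quot;, &quot;sgd&quot;],
          #         },
          #     ],
          # }
          # ```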
1947 },
1948 &quot;evaluatorType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1949 # job&#x27;s evaluator nodes.
1950 #
1951 # The supported values are the same as those described in the entry for
1952 # `masterType`.
1953 #
1954 # This value must be consistent with the category of machine type that
1955 # `masterType` uses. In other words, both must be Compute Engine machine
1956 # types or both must be legacy machine types.
1957 #
1958 # This value must be present when `scaleTier` is set to `CUSTOM` and
1959 # `evaluatorCount` is greater than zero.
1960 &quot;network&quot;: &quot;A String&quot;, # Optional. The full name of the Google Compute Engine
1961 # [network](/compute/docs/networks-and-firewalls#networks) to which the Job
1962 # is peered. For example, projects/12345/global/networks/myVPC. The format is
1963 # projects/{project}/global/networks/{network}, where {project} is a
1964 # project number, as in &#x27;12345&#x27;, and {network} is a network name.
1965 #
1966 # Private services access must already be configured for the network. If left
1967 # unspecified, the Job is not peered with any network. Learn more about
1968 # connecting a Job to a user network over private
1969 # IP.
1970 &quot;parameterServerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1971 # job&#x27;s parameter server.
1972 #
1973 # The supported values are the same as those described in the entry for
1974 # `master_type`.
1975 #
1976 # This value must be consistent with the category of machine type that
1977 # `masterType` uses. In other words, both must be Compute Engine machine
1978 # types or both must be legacy machine types.
1979 #
1980 # This value must be present when `scaleTier` is set to `CUSTOM` and
1981 # `parameter_server_count` is greater than zero.
1982 &quot;workerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
1983 # job&#x27;s worker nodes.
1984 #
1985 # The supported values are the same as those described in the entry for
1986 # `masterType`.
1987 #
1988 # This value must be consistent with the category of machine type that
1989 # `masterType` uses. In other words, both must be Compute Engine machine
1990 # types or both must be legacy machine types.
1991 #
1992 # If you use `cloud_tpu` for this value, see special instructions for
1993 # [configuring a custom TPU
1994 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
1995 #
1996 # This value must be present when `scaleTier` is set to `CUSTOM` and
1997 # `workerCount` is greater than zero.
1998 &quot;masterConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
1999 #
2000 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
2001 # to a Compute Engine machine type. Learn about [restrictions on accelerator
2002 # configurations for
2003 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2004 #
2005 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
2006 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
2007 # about [configuring custom
2008 # containers](/ai-platform/training/docs/distributed-training-containers).
2009 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
2010 # [Learn about restrictions on accelerator configurations for
2011 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2012 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
2013 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
2014 # [accelerators for online
2015 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
2016 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
2017 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
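          # For example (illustrative values; `count` is passed as a string, and the
          # accelerator type must be one supported in your region), a request for two
          # GPUs per replica might look like:
          #
          # ```python
          # accelerator_config = {
          #     &quot;type&quot;: &quot;NVIDIA_TESLA_K80&quot;,
          #     &quot;count&quot;: &quot;2&quot;,
          # }
          # ```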
2018 },
2019 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
2020 # Registry. Learn more about [configuring custom
2021 # containers](/ai-platform/training/docs/distributed-training-containers).
2022 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
2023 # The following rules apply for container_command and container_args:
2024 # - If you do not supply command or args:
2025 # The defaults defined in the Docker image are used.
2026 # - If you supply a command but no args:
2027 # The default EntryPoint and the default Cmd defined in the Docker image
2028 # are ignored. Your command is run without any arguments.
2029 # - If you supply only args:
2030 # The default Entrypoint defined in the Docker image is run with the args
2031 # that you supplied.
2032 # - If you supply a command and args:
2033 # The default Entrypoint and the default Cmd defined in the Docker image
2034 # are ignored. Your command is run with your args.
2035 # This field cannot be set if a custom container image is
2036 # not provided.
2037 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
2038 # both cannot be set at the same time.
2039 &quot;A String&quot;,
2040 ],
2041 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
2042 # the one used in the custom container. This field is required if the replica
2043 # is a TPU worker that uses a custom container. Otherwise, do not specify
2044 # this field. This must be a [runtime version that currently supports
2045 # training with
2046 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
2047 #
2048 # Note that the version of TensorFlow included in a runtime version may
2049 # differ from the numbering of the runtime version itself, because it may
2050 # have a different [patch
2051 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
2052 # In this field, you must specify the runtime version (TensorFlow minor
2053 # version). For example, if your custom container runs TensorFlow `1.x.y`,
2054 # specify `1.x`.
2055 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
2056 # If provided, it overrides the default ENTRYPOINT of the Docker image.
2057 # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
2058 # This field cannot be set if a custom container image is
2059 # not provided.
2060 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
2061 # both cannot be set at the same time.
2062 &quot;A String&quot;,
2063 ],
2064 },
2065 &quot;evaluatorCount&quot;: &quot;A String&quot;, # Optional. The number of evaluator replicas to use for the training job.
2066 # Each replica in the cluster will be of the type specified in
2067 # `evaluator_type`.
2068 #
2069 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
2070 # set this value, you must also set `evaluator_type`.
2071 #
2072 # The default value is zero.
2073 &quot;args&quot;: [ # Optional. Command-line arguments passed to the training application when it
2074 # starts. If your job uses a custom container, then the arguments are passed
2075 # to the container&#x27;s &lt;a class=&quot;external&quot; target=&quot;_blank&quot;
2076 # href=&quot;https://docs.docker.com/engine/reference/builder/#entrypoint&quot;&gt;
2077 # `ENTRYPOINT`&lt;/a&gt; command.
2078 &quot;A String&quot;,
2079 ],
2080 &quot;pythonModule&quot;: &quot;A String&quot;, # Required. The Python module name to run after installing the packages.
2081 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for training. You must
2082 # either specify this field or specify `masterConfig.imageUri`.
2083 #
2084 # For more information, see the [runtime version
2085 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
2086 # manage runtime versions](/ai-platform/training/docs/versioning).
2087 &quot;parameterServerCount&quot;: &quot;A String&quot;, # Optional. The number of parameter server replicas to use for the training
2088 # job. Each replica in the cluster will be of the type specified in
2089 # `parameter_server_type`.
2090 #
2091 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
2092 # set this value, you must also set `parameter_server_type`.
2093 #
2094 # The default value is zero.
2095 &quot;evaluatorConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
2096 #
2097 # You should only set `evaluatorConfig.acceleratorConfig` if
2098 # `evaluatorType` is set to a Compute Engine machine type. [Learn
2099 # about restrictions on accelerator configurations for
2100 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2101 #
2102 # Set `evaluatorConfig.imageUri` only if you build a custom image for
2103 # your evaluator. If `evaluatorConfig.imageUri` has not been
2104 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
2105 # containers](/ai-platform/training/docs/distributed-training-containers).
2106 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
2107 # [Learn about restrictions on accelerator configurations for
2108 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2109 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
2110 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
2111 # [accelerators for online
2112 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
2113 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
2114 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
2115 },
2116 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
2117 # Registry. Learn more about [configuring custom
2118 # containers](/ai-platform/training/docs/distributed-training-containers).
2119 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
2120 # The following rules apply for container_command and container_args:
2121 # - If you do not supply command or args:
2122 # The defaults defined in the Docker image are used.
2123 # - If you supply a command but no args:
2124 # The default EntryPoint and the default Cmd defined in the Docker image
2125 # are ignored. Your command is run without any arguments.
2126 # - If you supply only args:
2127 # The default Entrypoint defined in the Docker image is run with the args
2128 # that you supplied.
2129 # - If you supply a command and args:
2130 # The default Entrypoint and the default Cmd defined in the Docker image
2131 # are ignored. Your command is run with your args.
2132 # This field cannot be set if a custom container image is
2133 # not provided.
2134 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
2135 # both cannot be set at the same time.
2136 &quot;A String&quot;,
2137 ],
2138 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
2139 # the one used in the custom container. This field is required if the replica
2140 # is a TPU worker that uses a custom container. Otherwise, do not specify
2141 # this field. This must be a [runtime version that currently supports
2142 # training with
2143 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
2144 #
2145 # Note that the version of TensorFlow included in a runtime version may
2146 # differ from the numbering of the runtime version itself, because it may
2147 # have a different [patch
2148 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
2149 # In this field, you must specify the runtime version (TensorFlow minor
2150 # version). For example, if your custom container runs TensorFlow `1.x.y`,
2151 # specify `1.x`.
2152 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
2153 # If provided, it overrides the default ENTRYPOINT of the Docker image.
2154 # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
2155 # This field cannot be set if a custom container image is
2156 # not provided.
2157 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
2158 # both cannot be set at the same time.
2159 &quot;A String&quot;,
2160 ],
2161 },
2162 &quot;encryptionConfig&quot;: { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
2163 # protect resources created by a training job, instead of using Google&#x27;s
2164 # default encryption. If this is set, then all resources created by the
2165 # training job will be encrypted with the customer-managed encryption key
2166 # that you specify.
2167 #
2168 # [Learn how and when to use CMEK with AI Platform
2169 # Training](/ai-platform/training/docs/cmek).
2170 # a resource.
2171 &quot;kmsKeyName&quot;: &quot;A String&quot;, # The Cloud KMS resource identifier of the customer-managed encryption key
2172 # used to protect a resource, such as a training job. It has the following
2173 # format:
2174 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
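          # For example (hypothetical project, key ring, and key names):
          #
          # ```python
          # encryption_config = {
          #     &quot;kmsKeyName&quot;: (
          #         &quot;projects/my-project/locations/us-central1/&quot;
          #         &quot;keyRings/my-key-ring/cryptoKeys/my-key&quot;
          #     ),
          # }
          # ```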
2175 },
2176 &quot;workerCount&quot;: &quot;A String&quot;, # Optional. The number of worker replicas to use for the training job. Each
2177 # replica in the cluster will be of the type specified in `worker_type`.
2178 #
2179 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
2180 # set this value, you must also set `worker_type`.
2181 #
2182 # The default value is zero.
Bu Sun Kim65020912020-05-20 12:08:20 -07002183 &quot;scheduling&quot;: { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
2184 &quot;maxWaitTime&quot;: &quot;A String&quot;,
2185 &quot;maxRunningTime&quot;: &quot;A String&quot;, # Optional. The maximum job running time, expressed in seconds. The field can
2186 # contain up to nine fractional digits, terminated by `s`. If not specified,
2187 # this field defaults to `604800s` (seven days).
2188 #
2189 # If the training job is still running after this duration, AI Platform
2190 # Training cancels it.
2191 #
2192 # For example, if you want to ensure your job runs for no more than 2 hours,
2193 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
2194 # minute).
2195 #
2196 # If you submit your training job using the `gcloud` tool, you can [provide
2197 # this field in a `config.yaml`
2198 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
2199 # For example:
2200 #
2201 # ```yaml
2202 # trainingInput:
2203 # ...
2204 # scheduling:
2205 # maxRunningTime: 7200s
2206 # ...
2207 # ```
2208 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002209 &quot;scaleTier&quot;: &quot;A String&quot;, # Required. Specifies the machine types, the number of replicas for workers
2210 # and parameter servers.
2211 &quot;packageUris&quot;: [ # Required. The Google Cloud Storage location of the packages with
2212 # the training program and any additional dependencies.
2213 # The maximum number of package URIs is 100.
2214 &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002215 ],
Dan O'Mearadd494642020-05-01 07:42:23 -07002216 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002217 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
2218 # prevent simultaneous updates of a job from overwriting each other.
2219 # It is strongly suggested that systems make use of the `etag` in the
2220 # read-modify-write cycle to perform job updates in order to avoid race
2221 # conditions: An `etag` is returned in the response to `GetJob`, and
2222 # systems are expected to put that etag in the request to `UpdateJob` to
2223 # ensure that their change will be applied to the same version of the job.
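      # A minimal read-modify-write sketch, assuming `ml` is a built ml v1 client
      # and `job_name` is a full job resource name
      # (`projects/PROJECT_ID/jobs/JOB_ID`); the label change is hypothetical:
      #
      # ```python
      # job = ml.projects().jobs().get(name=job_name).execute()
      # ml.projects().jobs().patch(
      #     name=job_name,
      #     updateMask=&quot;labels&quot;,
      #     body={&quot;labels&quot;: {&quot;team&quot;: &quot;research&quot;}, &quot;etag&quot;: job[&quot;etag&quot;]},
      # ).execute()
      # ```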
2224 &quot;jobId&quot;: &quot;A String&quot;, # Required. The user-specified id of the job.
Bu Sun Kim65020912020-05-20 12:08:20 -07002225 }</pre>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002226</div>
2227
2228<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -07002229 <code class="details" id="getIamPolicy">getIamPolicy(resource, options_requestedPolicyVersion=None, x__xgafv=None)</code>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002230 <pre>Gets the access control policy for a resource.
2231Returns an empty policy if the resource exists and does not have a policy
2232set.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002233
2234Args:
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002235 resource: string, REQUIRED: The resource for which the policy is being requested.
2236See the operation documentation for the appropriate value for this field. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -07002237 options_requestedPolicyVersion: integer, Optional. The policy format version to be returned.
2238
2239Valid values are 0, 1, and 3. Requests specifying an invalid value will be
2240rejected.
2241
2242Requests for policies with any conditional bindings must specify version 3.
2243Policies without any conditional bindings may specify any valid value or
2244leave the field unset.
Bu Sun Kim65020912020-05-20 12:08:20 -07002245
2246To learn which resources support conditions in their IAM policies, see the
2247[IAM
2248documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002249 x__xgafv: string, V1 error format.
2250 Allowed values
2251 1 - v1 error format
2252 2 - v2 error format
2253
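For example (illustrative only), assuming `ml` is a built AI Platform v1 client
and `job_name` is a full job resource name such as
`projects/PROJECT_ID/jobs/JOB_ID`:

```python
policy = ml.projects().jobs().getIamPolicy(
    resource=job_name,
    options_requestedPolicyVersion=3,
).execute()
```
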
2254Returns:
2255 An object of the form:
2256
Dan O'Mearadd494642020-05-01 07:42:23 -07002257 { # An Identity and Access Management (IAM) policy, which specifies access
2258 # controls for Google Cloud resources.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002259 #
2260 #
Dan O'Mearadd494642020-05-01 07:42:23 -07002261 # A `Policy` is a collection of `bindings`. A `binding` binds one or more
2262 # `members` to a single `role`. Members can be user accounts, service accounts,
2263 # Google groups, and domains (such as G Suite). A `role` is a named list of
2264 # permissions; each `role` can be an IAM predefined role or a user-created
2265 # custom role.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002266 #
Bu Sun Kim65020912020-05-20 12:08:20 -07002267 # For some types of Google Cloud resources, a `binding` can also specify a
2268 # `condition`, which is a logical expression that allows access to a resource
2269 # only if the expression evaluates to `true`. A condition can add constraints
2270 # based on attributes of the request, the resource, or both. To learn which
2271 # resources support conditions in their IAM policies, see the
2272 # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
Dan O'Mearadd494642020-05-01 07:42:23 -07002273 #
2274 # **JSON example:**
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002275 #
2276 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07002277 # &quot;bindings&quot;: [
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002278 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07002279 # &quot;role&quot;: &quot;roles/resourcemanager.organizationAdmin&quot;,
2280 # &quot;members&quot;: [
2281 # &quot;user:mike@example.com&quot;,
2282 # &quot;group:admins@example.com&quot;,
2283 # &quot;domain:google.com&quot;,
2284 # &quot;serviceAccount:my-project-id@appspot.gserviceaccount.com&quot;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002285 # ]
2286 # },
2287 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07002288 # &quot;role&quot;: &quot;roles/resourcemanager.organizationViewer&quot;,
2289 # &quot;members&quot;: [
2290 # &quot;user:eve@example.com&quot;
2291 # ],
2292 # &quot;condition&quot;: {
2293 # &quot;title&quot;: &quot;expirable access&quot;,
2294 # &quot;description&quot;: &quot;Does not grant access after Sep 2020&quot;,
2295 # &quot;expression&quot;: &quot;request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)&quot;,
Dan O'Mearadd494642020-05-01 07:42:23 -07002296 # }
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002297 # }
Dan O'Mearadd494642020-05-01 07:42:23 -07002298 # ],
Bu Sun Kim65020912020-05-20 12:08:20 -07002299 # &quot;etag&quot;: &quot;BwWWja0YfJA=&quot;,
2300 # &quot;version&quot;: 3
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002301 # }
2302 #
Dan O'Mearadd494642020-05-01 07:42:23 -07002303 # **YAML example:**
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002304 #
2305 # bindings:
2306 # - members:
2307 # - user:mike@example.com
2308 # - group:admins@example.com
2309 # - domain:google.com
Dan O'Mearadd494642020-05-01 07:42:23 -07002310 # - serviceAccount:my-project-id@appspot.gserviceaccount.com
2311 # role: roles/resourcemanager.organizationAdmin
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002312 # - members:
Dan O'Mearadd494642020-05-01 07:42:23 -07002313 # - user:eve@example.com
2314 # role: roles/resourcemanager.organizationViewer
2315 # condition:
2316 # title: expirable access
2317 # description: Does not grant access after Sep 2020
Bu Sun Kim65020912020-05-20 12:08:20 -07002318 # expression: request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)
Dan O'Mearadd494642020-05-01 07:42:23 -07002319 # etag: BwWWja0YfJA=
2320 # version: 3
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002321 #
2322 # For a description of IAM and its features, see the
Dan O'Mearadd494642020-05-01 07:42:23 -07002323 # [IAM documentation](https://cloud.google.com/iam/docs/).
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002324 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
2325 # prevent simultaneous updates of a policy from overwriting each other.
2326 # It is strongly suggested that systems make use of the `etag` in the
2327 # read-modify-write cycle to perform policy updates in order to avoid race
2328 # conditions: An `etag` is returned in the response to `getIamPolicy`, and
2329 # systems are expected to put that etag in the request to `setIamPolicy` to
2330 # ensure that their change will be applied to the same version of the policy.
Bu Sun Kim65020912020-05-20 12:08:20 -07002331 #
2332 # **Important:** If you use IAM Conditions, you must include the `etag` field
2333 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
2334 # you to overwrite a version `3` policy with a version `1` policy, and all of
2335 # the conditions in the version `3` policy are lost.
Bu Sun Kim65020912020-05-20 12:08:20 -07002336 &quot;auditConfigs&quot;: [ # Specifies cloud audit logging configuration for this policy.
2337 { # Specifies the audit configuration for a service.
2338 # The configuration determines which permission types are logged, and what
2339 # identities, if any, are exempted from logging.
2340 # An AuditConfig must have one or more AuditLogConfigs.
2341 #
2342 # If there are AuditConfigs for both `allServices` and a specific service,
2343 # the union of the two AuditConfigs is used for that service: the log_types
2344 # specified in each AuditConfig are enabled, and the exempted_members in each
2345 # AuditLogConfig are exempted.
2346 #
2347 # Example Policy with multiple AuditConfigs:
2348 #
2349 # {
2350 # &quot;audit_configs&quot;: [
2351 # {
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002352 # &quot;service&quot;: &quot;allServices&quot;,
Bu Sun Kim65020912020-05-20 12:08:20 -07002353 # &quot;audit_log_configs&quot;: [
2354 # {
2355 # &quot;log_type&quot;: &quot;DATA_READ&quot;,
2356 # &quot;exempted_members&quot;: [
2357 # &quot;user:jose@example.com&quot;
2358 # ]
2359 # },
2360 # {
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002361 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;
Bu Sun Kim65020912020-05-20 12:08:20 -07002362 # },
2363 # {
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002364 # &quot;log_type&quot;: &quot;ADMIN_READ&quot;
Bu Sun Kim65020912020-05-20 12:08:20 -07002365 # }
2366 # ]
2367 # },
2368 # {
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002369 # &quot;service&quot;: &quot;sampleservice.googleapis.com&quot;,
Bu Sun Kim65020912020-05-20 12:08:20 -07002370 # &quot;audit_log_configs&quot;: [
2371 # {
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002372 # &quot;log_type&quot;: &quot;DATA_READ&quot;
Bu Sun Kim65020912020-05-20 12:08:20 -07002373 # },
2374 # {
2375 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
2376 # &quot;exempted_members&quot;: [
2377 # &quot;user:aliya@example.com&quot;
2378 # ]
2379 # }
2380 # ]
2381 # }
2382 # ]
2383 # }
2384 #
2385 # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
2386 # logging. It also exempts jose@example.com from DATA_READ logging, and
2387 # aliya@example.com from DATA_WRITE logging.
2388 &quot;service&quot;: &quot;A String&quot;, # Specifies a service that will be enabled for audit logging.
2389 # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
2390 # `allServices` is a special value that covers all services.
2391 &quot;auditLogConfigs&quot;: [ # The configuration for logging of each type of permission.
2392 { # Provides the configuration for logging a type of permissions.
2393 # Example:
2394 #
2395 # {
2396 # &quot;audit_log_configs&quot;: [
2397 # {
2398 # &quot;log_type&quot;: &quot;DATA_READ&quot;,
2399 # &quot;exempted_members&quot;: [
2400 # &quot;user:jose@example.com&quot;
2401 # ]
2402 # },
2403 # {
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002404 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;
Bu Sun Kim65020912020-05-20 12:08:20 -07002405 # }
2406 # ]
2407 # }
2408 #
2409 # This enables &#x27;DATA_READ&#x27; and &#x27;DATA_WRITE&#x27; logging, while exempting
2410 # jose@example.com from DATA_READ logging.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002411 &quot;logType&quot;: &quot;A String&quot;, # The log type that this config enables.
Bu Sun Kim65020912020-05-20 12:08:20 -07002412 &quot;exemptedMembers&quot;: [ # Specifies the identities that do not cause logging for this type of
2413 # permission.
2414 # Follows the same format of Binding.members.
2415 &quot;A String&quot;,
2416 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07002417 },
2418 ],
2419 },
2420 ],
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002421 &quot;version&quot;: 42, # Specifies the format of the policy.
2422 #
2423 # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
2424 # are rejected.
2425 #
2426 # Any operation that affects conditional role bindings must specify version
2427 # `3`. This requirement applies to the following operations:
2428 #
2429 # * Getting a policy that includes a conditional role binding
2430 # * Adding a conditional role binding to a policy
2431 # * Changing a conditional role binding in a policy
2432 # * Removing any role binding, with or without a condition, from a policy
2433 # that includes conditions
2434 #
2435 # **Important:** If you use IAM Conditions, you must include the `etag` field
2436 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
2437 # you to overwrite a version `3` policy with a version `1` policy, and all of
2438 # the conditions in the version `3` policy are lost.
2439 #
2440 # If a policy does not include any conditions, operations on that policy may
2441 # specify any valid version or leave the field unset.
2442 #
2443 # To learn which resources support conditions in their IAM policies, see the
2444 # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
Bu Sun Kim65020912020-05-20 12:08:20 -07002445 &quot;bindings&quot;: [ # Associates a list of `members` to a `role`. Optionally, may specify a
Dan O'Mearadd494642020-05-01 07:42:23 -07002446 # `condition` that determines how and when the `bindings` are applied. Each
2447 # of the `bindings` must contain at least one member.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002448 { # Associates `members` with a `role`.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002449 &quot;role&quot;: &quot;A String&quot;, # Role that is assigned to `members`.
2450 # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
Bu Sun Kim65020912020-05-20 12:08:20 -07002451 &quot;condition&quot;: { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
2452 #
2453 # If the condition evaluates to `true`, then this binding applies to the
2454 # current request.
2455 #
2456 # If the condition evaluates to `false`, then this binding does not apply to
2457 # the current request. However, a different role binding might grant the same
2458 # role to one or more of the members in this binding.
2459 #
2460 # To learn which resources support conditions in their IAM policies, see the
2461 # [IAM
2462 # documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
2463 # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
2464 # are documented at https://github.com/google/cel-spec.
2465 #
2466 # Example (Comparison):
2467 #
2468 # title: &quot;Summary size limit&quot;
2469 # description: &quot;Determines if a summary is less than 100 chars&quot;
2470 # expression: &quot;document.summary.size() &lt; 100&quot;
2471 #
2472 # Example (Equality):
2473 #
2474 # title: &quot;Requestor is owner&quot;
2475 # description: &quot;Determines if requestor is the document owner&quot;
2476 # expression: &quot;document.owner == request.auth.claims.email&quot;
2477 #
2478 # Example (Logic):
2479 #
2480 # title: &quot;Public documents&quot;
2481 # description: &quot;Determine whether the document should be publicly visible&quot;
2482 # expression: &quot;document.type != &#x27;private&#x27; &amp;&amp; document.type != &#x27;internal&#x27;&quot;
2483 #
2484 # Example (Data Manipulation):
2485 #
2486 # title: &quot;Notification string&quot;
2487 # description: &quot;Create a notification string with a timestamp.&quot;
2488 # expression: &quot;&#x27;New message received at &#x27; + string(document.create_time)&quot;
2489 #
2490 # The exact variables and functions that may be referenced within an expression
2491 # are determined by the service that evaluates it. See the service
2492 # documentation for additional information.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002493 &quot;expression&quot;: &quot;A String&quot;, # Textual representation of an expression in Common Expression Language
2494 # syntax.
Bu Sun Kim65020912020-05-20 12:08:20 -07002495 &quot;title&quot;: &quot;A String&quot;, # Optional. Title for the expression, i.e. a short string describing
2496 # its purpose. This can be used e.g. in UIs which allow users to enter the
2497 # expression.
2498 &quot;location&quot;: &quot;A String&quot;, # Optional. String indicating the location of the expression for error
2499 # reporting, e.g. a file name and a position in the file.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002500 &quot;description&quot;: &quot;A String&quot;, # Optional. Description of the expression. This is a longer text which
2501 # describes the expression, e.g. when hovered over it in a UI.
Bu Sun Kim65020912020-05-20 12:08:20 -07002502 },
2503 &quot;members&quot;: [ # Specifies the identities requesting access for a Cloud Platform resource.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002504 # `members` can have the following values:
2505 #
2506 # * `allUsers`: A special identifier that represents anyone who is
2507 # on the internet; with or without a Google account.
2508 #
2509 # * `allAuthenticatedUsers`: A special identifier that represents anyone
2510 # who is authenticated with a Google account or a service account.
2511 #
2512 # * `user:{emailid}`: An email address that represents a specific Google
Dan O'Mearadd494642020-05-01 07:42:23 -07002513 # account. For example, `alice@example.com` .
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002514 #
2515 #
2516 # * `serviceAccount:{emailid}`: An email address that represents a service
2517 # account. For example, `my-other-app@appspot.gserviceaccount.com`.
2518 #
2519 # * `group:{emailid}`: An email address that represents a Google group.
2520 # For example, `admins@example.com`.
2521 #
Dan O'Mearadd494642020-05-01 07:42:23 -07002522 # * `deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique
2523 # identifier) representing a user that has been recently deleted. For
2524 # example, `alice@example.com?uid=123456789012345678901`. If the user is
2525 # recovered, this value reverts to `user:{emailid}` and the recovered user
2526 # retains the role in the binding.
2527 #
2528 # * `deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address (plus
2529 # unique identifier) representing a service account that has been recently
2530 # deleted. For example,
2531 # `my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901`.
2532 # If the service account is undeleted, this value reverts to
2533 # `serviceAccount:{emailid}` and the undeleted service account retains the
2534 # role in the binding.
2535 #
2536 # * `deleted:group:{emailid}?uid={uniqueid}`: An email address (plus unique
2537 # identifier) representing a Google group that has been recently
2538 # deleted. For example, `admins@example.com?uid=123456789012345678901`. If
2539 # the group is recovered, this value reverts to `group:{emailid}` and the
2540 # recovered group retains the role in the binding.
2541 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002542 #
2543 # * `domain:{domain}`: The G Suite domain (primary) that represents all the
2544 # users of that domain. For example, `google.com` or `example.com`.
2545 #
Bu Sun Kim65020912020-05-20 12:08:20 -07002546 &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002547 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002548 },
2549 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002550 }</pre>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002551</div>
2552
2553<div class="method">
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002554 <code class="details" id="list">list(parent, filter=None, pageToken=None, pageSize=None, x__xgafv=None)</code>
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002555 <pre>Lists the jobs in the project.
2556
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002557If there are no jobs that match the request parameters, the list
2558request returns an empty response body: {}.
2559
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002560Args:
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002561 parent: string, Required. The name of the project for which to list jobs. (required)
Bu Sun Kim65020912020-05-20 12:08:20 -07002562 filter: string, Optional. Specifies the subset of jobs to retrieve.
2563You can filter on the value of one or more attributes of the job object.
2564For example, retrieve jobs with a job identifier that starts with &#x27;census&#x27;:
2565&lt;p&gt;&lt;code&gt;gcloud ai-platform jobs list --filter=&#x27;jobId:census*&#x27;&lt;/code&gt;
2566&lt;p&gt;List all failed jobs with names that start with &#x27;rnn&#x27;:
2567&lt;p&gt;&lt;code&gt;gcloud ai-platform jobs list --filter=&#x27;jobId:rnn*
2568AND state:FAILED&#x27;&lt;/code&gt;
2569&lt;p&gt;For more examples, see the guide to
2570&lt;a href=&quot;/ml-engine/docs/tensorflow/monitor-training&quot;&gt;monitoring jobs&lt;/a&gt;.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002571 pageToken: string, Optional. A page token to request the next page of results.
2572
2573You get the token from the `next_page_token` field of the response from
2574the previous call.
2575 pageSize: integer, Optional. The number of jobs to retrieve per &quot;page&quot; of results. If there
2576are more remaining results than this number, the response message will
2577contain a valid value in the `next_page_token` field.
2578
2579The default value is 20, and the maximum page size is 100.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002580 x__xgafv: string, V1 error format.
2581 Allowed values
2582 1 - v1 error format
2583 2 - v2 error format
2584
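For example (illustrative only), assuming `ml` is a built AI Platform v1 client
and `my-project` is your project ID, you can page through matching jobs with
`list` and `list_next`:

```python
request = ml.projects().jobs().list(
    parent=&quot;projects/my-project&quot;,
    filter=&quot;jobId:census* AND state:FAILED&quot;,
    pageSize=50,
)
while request is not None:
    response = request.execute()
    for job in response.get(&quot;jobs&quot;, []):
        print(job[&quot;jobId&quot;], job[&quot;state&quot;])
    request = ml.projects().jobs().list_next(request, response)
```
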
2585Returns:
2586 An object of the form:
2587
2588 { # Response message for the ListJobs method.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002589 &quot;nextPageToken&quot;: &quot;A String&quot;, # Optional. Pass this token as the `page_token` field of the request for a
2590 # subsequent call.
Bu Sun Kim65020912020-05-20 12:08:20 -07002591 &quot;jobs&quot;: [ # The list of jobs.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04002592 { # Represents a training or prediction job.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002593 &quot;createTime&quot;: &quot;A String&quot;, # Output only. When the job was created.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002594 &quot;predictionInput&quot;: { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002595 &quot;dataFormat&quot;: &quot;A String&quot;, # Required. The format of the input data files.
2596 &quot;outputPath&quot;: &quot;A String&quot;, # Required. The output Google Cloud Storage location.
2597 &quot;inputPaths&quot;: [ # Required. The Cloud Storage location of the input data files. May contain
2598 # &lt;a href=&quot;/storage/docs/gsutil/addlhelp/WildcardNames&quot;&gt;wildcards&lt;/a&gt;.
2599 &quot;A String&quot;,
2600 ],
2601 &quot;region&quot;: &quot;A String&quot;, # Required. The Google Compute Engine region to run the prediction job in.
2602 # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
2603 # for AI Platform services.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002604 &quot;versionName&quot;: &quot;A String&quot;, # Use this field if you want to specify a version of the model to use. The
2605 # string is formatted the same way as `model_version`, with the addition
2606 # of the version information:
2607 #
2608 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION&quot;`
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002609 &quot;uri&quot;: &quot;A String&quot;, # Use this field if you want to specify a Google Cloud Storage path for
2610 # the model to use.
2611 &quot;outputDataFormat&quot;: &quot;A String&quot;, # Optional. Format of the output data files, defaults to JSON.
2612 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for this batch
2613 # prediction. If not set, AI Platform will pick the runtime version used
2614 # during the CreateVersion request for this model version, or choose the
2615 # latest stable version when model version information is not available
2616 # such as when the model is specified by uri.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002617 &quot;modelName&quot;: &quot;A String&quot;, # Use this field if you want to use the default version for the specified
2618 # model. The string must use the following format:
2619 #
2620 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL&quot;`
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002621 &quot;signatureName&quot;: &quot;A String&quot;, # Optional. The name of the signature defined in the SavedModel to use for
2622 # this job. Please refer to
2623 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
2624 # for information about how to use signatures.
2625 #
2626 # Defaults to
2627 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
2628 # , which is &quot;serving_default&quot;.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002629 &quot;batchSize&quot;: &quot;A String&quot;, # Optional. Number of records per batch, defaults to 64.
2630 # The service will buffer batch_size number of records in memory before
2631 # invoking one TensorFlow prediction call internally, so take the record
2632 # size and available memory into consideration when setting this parameter.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002633 &quot;maxWorkerCount&quot;: &quot;A String&quot;, # Optional. The maximum number of workers to be used for parallel processing.
2634 # Defaults to 10 if not specified.
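          # As an illustration (hypothetical bucket, region, and model values, assuming
          # newline-delimited JSON input), a batch prediction input section might look
          # like:
          #
          # ```python
          # prediction_input = {
          #     &quot;dataFormat&quot;: &quot;JSON&quot;,
          #     &quot;inputPaths&quot;: [&quot;gs://your-bucket/instances-*.json&quot;],
          #     &quot;outputPath&quot;: &quot;gs://your-bucket/predictions/&quot;,
          #     &quot;region&quot;: &quot;us-central1&quot;,
          #     &quot;modelName&quot;: &quot;projects/YOUR_PROJECT/models/YOUR_MODEL&quot;,
          # }
          # ```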
2635 },
2636 &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your jobs.
2637 # Each label is a key-value pair, where both the key and the value are
2638 # arbitrary strings that you supply.
2639 # For more information, see the documentation on
2640 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
2641 &quot;a_key&quot;: &quot;A String&quot;,
2642 },
2643 &quot;endTime&quot;: &quot;A String&quot;, # Output only. When the job processing was completed.
2644 &quot;trainingOutput&quot;: { # Represents results of a training job. Output only. # The current training job result.
2645 &quot;trials&quot;: [ # Results for individual Hyperparameter trials.
2646 # Only set for hyperparameter tuning jobs.
2647 { # Represents the result of a single hyperparameter tuning trial from a
2648 # training job. The TrainingOutput object that is returned on successful
2649 # completion of a training job with hyperparameter tuning includes a list
2650 # of HyperparameterOutput objects, one for each successful trial.
2651 &quot;endTime&quot;: &quot;A String&quot;, # Output only. End time for the trial.
2652 &quot;finalMetric&quot;: { # An observed value of a metric. # The final objective metric seen for this trial.
2653 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
2654 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
2655 },
2656 &quot;hyperparameters&quot;: { # The hyperparameters given to this trial.
2657 &quot;a_key&quot;: &quot;A String&quot;,
2658 },
2659 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
2660 # Only set for trials of built-in algorithms jobs that have succeeded.
2661 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
2662 # saves the trained model. Only set for successful jobs that don&#x27;t use
2663 # hyperparameter tuning.
2664 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
2665 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
2666 # trained.
2667 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
2668 },
2669 &quot;startTime&quot;: &quot;A String&quot;, # Output only. Start time for the trial.
2670 &quot;allMetrics&quot;: [ # All recorded objective metrics for this trial. This field is not currently
2671 # populated.
2672 { # An observed value of a metric.
2673 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
2674 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
2675 },
2676 ],
2677 &quot;trialId&quot;: &quot;A String&quot;, # The trial id for these results.
2678 &quot;isTrialStoppedEarly&quot;: True or False, # True if the trial is stopped early.
2679 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of the trial.
2680 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002681 ],
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002682 &quot;completedTrialCount&quot;: &quot;A String&quot;, # The number of hyperparameter tuning trials that completed successfully.
2683 # Only set for hyperparameter tuning jobs.
2684 &quot;isHyperparameterTuningJob&quot;: True or False, # Whether this job is a hyperparameter tuning job.
2685 &quot;isBuiltInAlgorithmJob&quot;: True or False, # Whether this job is a built-in Algorithm job.
2686 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
2687 # Only set for built-in algorithms jobs.
2688 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
2689 # saves the trained model. Only set for successful jobs that don&#x27;t use
2690 # hyperparameter tuning.
2691 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
2692 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
2693 # trained.
2694 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
2695 },
2696 &quot;consumedMLUnits&quot;: 3.14, # The amount of ML units consumed by the job.
2697 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # The TensorFlow summary tag name used for optimizing hyperparameter tuning
2698 # trials. See
2699 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
2700 # for more information. Only set for hyperparameter tuning jobs.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002701 },
2702 &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002703 &quot;predictionOutput&quot;: { # Represents results of a prediction job. # The current prediction job result.
2704 &quot;errorCount&quot;: &quot;A String&quot;, # The number of data instances which resulted in errors.
2705 &quot;nodeHours&quot;: 3.14, # Node hours used by the batch prediction job.
2706 &quot;outputPath&quot;: &quot;A String&quot;, # The output Google Cloud Storage location provided at the job creation time.
2707 &quot;predictionCount&quot;: &quot;A String&quot;, # The number of generated predictions.
2708 },
2709 &quot;startTime&quot;: &quot;A String&quot;, # Output only. When the job processing was started.
2710 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of a job.
Bu Sun Kim65020912020-05-20 12:08:20 -07002711 &quot;trainingInput&quot;: { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
2712 # to submit your training job, you can specify the input parameters as
2713 # command-line arguments and/or in a YAML configuration file referenced from
2714 # the --config command-line argument. For details, see the guide to [submitting
2715 # a training job](/ai-platform/training/docs/training-jobs).
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002716 &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for workload run-as account.
2717 # Users submitting jobs must have act-as permission on this run-as account.
2718 # If not specified, the AI Platform service agent (CMLE P4SA) is used by default.
Bu Sun Kim65020912020-05-20 12:08:20 -07002719 &quot;workerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
2720 #
2721 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
2722 # to a Compute Engine machine type. [Learn about restrictions on accelerator
2723 # configurations for
2724 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2725 #
2726 # Set `workerConfig.imageUri` only if you build a custom image for your
2727 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
2728 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
2729 # containers](/ai-platform/training/docs/distributed-training-containers).
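          # A minimal sketch (the image URI is hypothetical and the accelerator values
          # are illustrative) of a worker configuration that uses a custom container:
          #
          # ```python
          # worker_config = {
          #     &quot;imageUri&quot;: &quot;gcr.io/your-project/your-training-image:latest&quot;,
          #     &quot;acceleratorConfig&quot;: {&quot;type&quot;: &quot;NVIDIA_TESLA_K80&quot;, &quot;count&quot;: &quot;1&quot;},
          # }
          # ```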
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002730 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
2731 # [Learn about restrictions on accelerator configurations for
2732 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2733 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
2734 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
2735 # [accelerators for online
2736 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
2737 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
2738 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
2739 },
Bu Sun Kim65020912020-05-20 12:08:20 -07002740 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
2741 # Registry. Learn more about [configuring custom
2742 # containers](/ai-platform/training/docs/distributed-training-containers).
2743 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
2744 # The following rules apply for container_command and container_args:
2745 # - If you do not supply command or args:
2746 # The defaults defined in the Docker image are used.
2747 # - If you supply a command but no args:
2748 # The default EntryPoint and the default Cmd defined in the Docker image
2749 # are ignored. Your command is run without any arguments.
2750 # - If you supply only args:
2751 # The default Entrypoint defined in the Docker image is run with the args
2752 # that you supplied.
2753 # - If you supply a command and args:
2754 # The default Entrypoint and the default Cmd defined in the Docker image
2755 # are ignored. Your command is run with your args.
2756 # It cannot be set if custom container image is
2757 # not provided.
2758 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
2759 # both cannot be set at the same time.
2760 &quot;A String&quot;,
2761 ],
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002762 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
2763 # the one used in the custom container. This field is required if the replica
2764 # is a TPU worker that uses a custom container. Otherwise, do not specify
2765 # this field. This must be a [runtime version that currently supports
2766 # training with
2767 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
2768 #
2769 # Note that the version of TensorFlow included in a runtime version may
2770 # differ from the numbering of the runtime version itself, because it may
2771 # have a different [patch
2772 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
2773 # In this field, you must specify the runtime version (TensorFlow minor
2774 # version). For example, if your custom container runs TensorFlow `1.x.y`,
2775 # specify `1.x`.
2776 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
2777 # If provided, it will override default ENTRYPOINT of the docker image.
2778 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
2779 # It cannot be set if custom container image is
2780 # not provided.
2781 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
2782 # both cannot be set at the same time.
2783 &quot;A String&quot;,
2784 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002785 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002786 &quot;useChiefInTfConfig&quot;: True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
2787 # variable when training with a custom container. Defaults to `false`. [Learn
2788 # more about this
2789 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
Dan O'Mearadd494642020-05-01 07:42:23 -07002790 #
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002791 # This field has no effect for training jobs that don&#x27;t use a custom
2792 # container.
Bu Sun Kim65020912020-05-20 12:08:20 -07002793 &quot;masterType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
2794 # job&#x27;s master worker. You must specify this field when `scaleTier` is set to
2795 # `CUSTOM`.
Dan O'Mearadd494642020-05-01 07:42:23 -07002796 #
Bu Sun Kim65020912020-05-20 12:08:20 -07002797 # You can use certain Compute Engine machine types directly in this field.
2798 # The following types are supported:
2799 #
2800 # - `n1-standard-4`
2801 # - `n1-standard-8`
2802 # - `n1-standard-16`
2803 # - `n1-standard-32`
2804 # - `n1-standard-64`
2805 # - `n1-standard-96`
2806 # - `n1-highmem-2`
2807 # - `n1-highmem-4`
2808 # - `n1-highmem-8`
2809 # - `n1-highmem-16`
2810 # - `n1-highmem-32`
2811 # - `n1-highmem-64`
2812 # - `n1-highmem-96`
2813 # - `n1-highcpu-16`
2814 # - `n1-highcpu-32`
2815 # - `n1-highcpu-64`
2816 # - `n1-highcpu-96`
2817 #
2818 # Learn more about [using Compute Engine machine
2819 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
2820 #
2821 # Alternatively, you can use the following legacy machine types:
2822 #
2823 # - `standard`
2824 # - `large_model`
2825 # - `complex_model_s`
2826 # - `complex_model_m`
2827 # - `complex_model_l`
2828 # - `standard_gpu`
2829 # - `complex_model_m_gpu`
2830 # - `complex_model_l_gpu`
2831 # - `standard_p100`
2832 # - `complex_model_m_p100`
2833 # - `standard_v100`
2834 # - `large_model_v100`
2835 # - `complex_model_m_v100`
2836 # - `complex_model_l_v100`
2837 #
2838 # Learn more about [using legacy machine
2839 # types](/ml-engine/docs/machine-types#legacy-machine-types).
2840 #
2841 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
2842 # field. Learn more about the [special configuration options for training
2843 # with
2844 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
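      # As a hedged illustration (the machine type and accelerator values below are
      # assumptions, not defaults), a CUSTOM-tier job might set:
      #
      #   &quot;scaleTier&quot;: &quot;CUSTOM&quot;,
      #   &quot;masterType&quot;: &quot;n1-standard-8&quot;,
      #   &quot;masterConfig&quot;: {
      #     &quot;acceleratorConfig&quot;: {&quot;type&quot;: &quot;NVIDIA_TESLA_K80&quot;, &quot;count&quot;: &quot;1&quot;},
      #   },
      #
      # See `masterConfig` below for the full replica configuration options.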
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002845 &quot;parameterServerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
Bu Sun Kim65020912020-05-20 12:08:20 -07002846 #
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002847 # You should only set `parameterServerConfig.acceleratorConfig` if
2848 # `parameterServerType` is set to a Compute Engine machine type. [Learn
2849 # about restrictions on accelerator configurations for
Dan O'Mearadd494642020-05-01 07:42:23 -07002850 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
Dan O'Mearadd494642020-05-01 07:42:23 -07002851 #
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002852 # Set `parameterServerConfig.imageUri` only if you build a custom image for
2853 # your parameter server. If `parameterServerConfig.imageUri` has not been
2854 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
Bu Sun Kim65020912020-05-20 12:08:20 -07002855 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002856 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
2857 # [Learn about restrictions on accelerator configurations for
2858 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
2859 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
2860 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
2861 # [accelerators for online
2862 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
2863 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
2864 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
2865 },
Bu Sun Kim65020912020-05-20 12:08:20 -07002866 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
2867 # Registry. Learn more about [configuring custom
2868 # containers](/ai-platform/training/docs/distributed-training-containers).
2869 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
2870 # The following rules apply for container_command and container_args:
2871 # - If you do not supply command or args:
2872 # The defaults defined in the Docker image are used.
2873 # - If you supply a command but no args:
2874 # The default EntryPoint and the default Cmd defined in the Docker image
2875 # are ignored. Your command is run without any arguments.
2876 # - If you supply only args:
2877 # The default Entrypoint defined in the Docker image is run with the args
2878 # that you supplied.
2879 # - If you supply a command and args:
2880 # The default Entrypoint and the default Cmd defined in the Docker image
2881 # are ignored. Your command is run with your args.
2882 # It cannot be set if custom container image is
2883 # not provided.
2884 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
2885 # both cannot be set at the same time.
2886 &quot;A String&quot;,
2887 ],
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07002888 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
2889 # the one used in the custom container. This field is required if the replica
2890 # is a TPU worker that uses a custom container. Otherwise, do not specify
2891 # this field. This must be a [runtime version that currently supports
2892 # training with
2893 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
2894 #
2895 # Note that the version of TensorFlow included in a runtime version may
2896 # differ from the numbering of the runtime version itself, because it may
2897 # have a different [patch
2898 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
2899 # In this field, you must specify the runtime version (TensorFlow minor
2900 # version). For example, if your custom container runs TensorFlow `1.x.y`,
2901 # specify `1.x`.
2902 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
2903 # If provided, it will override default ENTRYPOINT of the docker image.
2904 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
2905 # It cannot be set if custom container image is
2906 # not provided.
2907 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
2908 # both cannot be set at the same time.
2909 &quot;A String&quot;,
2910 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07002911 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002912 &quot;region&quot;: &quot;A String&quot;, # Required. The region to run the training job in. See the [available
2913 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
Bu Sun Kim65020912020-05-20 12:08:20 -07002914 &quot;jobDir&quot;: &quot;A String&quot;, # Optional. A Google Cloud Storage path in which to store training outputs
2915 # and other data needed for training. This path is passed to your TensorFlow
2916 # program as the &#x27;--job-dir&#x27; command-line argument. The benefit of specifying
2917 # this field is that Cloud ML validates the path for use in training.
2918 &quot;pythonVersion&quot;: &quot;A String&quot;, # Optional. The version of Python used in training. You must either specify
2919 # this field or specify `masterConfig.imageUri`.
2920 #
2921 # The following Python versions are available:
2922 #
2923 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
2924 # later.
2925 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
2926 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
2927 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
2928 # earlier.
2929 #
2930 # Read more about the Python versions available for [each runtime
2931 # version](/ml-engine/docs/runtime-version-list).
Bu Sun Kimd059ad82020-07-22 17:02:09 -07002932 &quot;hyperparameters&quot;: { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
2933 &quot;maxTrials&quot;: 42, # Optional. How many training trials should be attempted to optimize
2934 # the specified hyperparameters.
2935 #
2936 # Defaults to one.
2937 &quot;enableTrialEarlyStopping&quot;: True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
2938 # early stopping.
2939 &quot;params&quot;: [ # Required. The set of parameters to tune.
2940 { # Represents a single hyperparameter to optimize.
2941 &quot;minValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
2942 # should be unset if type is `CATEGORICAL`. This value should be an integer
2943 # if type is `INTEGER`.
2944 &quot;categoricalValues&quot;: [ # Required if type is `CATEGORICAL`. The list of possible categories.
2945 &quot;A String&quot;,
2946 ],
2947 &quot;scaleType&quot;: &quot;A String&quot;, # Optional. How the parameter should be scaled to the hypercube.
2948 # Leave unset for categorical parameters.
2949 # Some kind of scaling is strongly recommended for real or integral
2950 # parameters (e.g., `UNIT_LINEAR_SCALE`).
2951 &quot;discreteValues&quot;: [ # Required if type is `DISCRETE`.
2952 # A list of feasible points.
2953 # The list should be in strictly increasing order. For instance, this
2954 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
2955 # should not contain more than 1,000 values.
2956 3.14,
2957 ],
2958 &quot;type&quot;: &quot;A String&quot;, # Required. The type of the parameter.
2959 &quot;maxValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
2960 # should be unset if type is `CATEGORICAL`. This value should be an integer
2961 # if type is `INTEGER`.
2962 &quot;parameterName&quot;: &quot;A String&quot;, # Required. The parameter name must be unique amongst all ParameterConfigs in
2963 # a HyperparameterSpec message. E.g., &quot;learning_rate&quot;.
2964 },
2965 ],
2966 &quot;maxFailedTrials&quot;: 42, # Optional. The number of failed trials that need to be seen before failing
2967 # the hyperparameter tuning job. You can specify this field to override the
2968 # default failing criteria for AI Platform hyperparameter tuning jobs.
2969 #
2970 # Defaults to zero, which means the service decides when a hyperparameter
2971 # job should fail.
2972 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # Optional. The TensorFlow summary tag name to use for optimizing trials. For
2973 # current versions of TensorFlow, this tag name should exactly match what is
2974 # shown in TensorBoard, including all scopes. For versions of TensorFlow
2975 # prior to 0.12, this should be only the tag passed to tf.Summary.
2976 # By default, &quot;training/hptuning/metric&quot; will be used.
2977      &quot;resumePreviousJobId&quot;: &quot;A String&quot;, # Optional. The id of a prior hyperparameter tuning job to
2978 # continue with. The job id is used to find the corresponding Vizier
2979 # study guid and resume the study.
2980 &quot;goal&quot;: &quot;A String&quot;, # Required. The type of goal to use for tuning. Available types are
2981 # `MAXIMIZE` and `MINIMIZE`.
2982 #
2983 # Defaults to `MAXIMIZE`.
2984 &quot;algorithm&quot;: &quot;A String&quot;, # Optional. The search algorithm specified for the hyperparameter
2985 # tuning job.
2986 # Uses the default AI Platform hyperparameter tuning
2987 # algorithm if unspecified.
2988 &quot;maxParallelTrials&quot;: 42, # Optional. The number of training trials to run concurrently.
2989 # You can reduce the time it takes to perform hyperparameter tuning by adding
2990 # trials in parallel. However, each trial only benefits from the information
2991 # gained in completed trials. That means that a trial does not get access to
2992 # the results of trials running at the same time, which could reduce the
2993 # quality of the overall optimization.
2994 #
2995 # Each trial will use the same scale tier and machine types.
2996 #
2997 # Defaults to one.
2998 },
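      # A hedged sketch of a tuning spec using the fields above (the parameter
      # name, bounds, and metric tag are illustrative assumptions):
      #
      #   &quot;hyperparameters&quot;: {
      #     &quot;goal&quot;: &quot;MAXIMIZE&quot;,
      #     &quot;hyperparameterMetricTag&quot;: &quot;accuracy&quot;,
      #     &quot;maxTrials&quot;: 10,
      #     &quot;maxParallelTrials&quot;: 2,
      #     &quot;params&quot;: [
      #       {&quot;parameterName&quot;: &quot;learning_rate&quot;, &quot;type&quot;: &quot;DOUBLE&quot;,
      #        &quot;minValue&quot;: 0.0001, &quot;maxValue&quot;: 0.1,
      #        &quot;scaleType&quot;: &quot;UNIT_LOG_SCALE&quot;},
      #     ],
      #   },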
2999 &quot;evaluatorType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
3000 # job&#x27;s evaluator nodes.
3001 #
3002 # The supported values are the same as those described in the entry for
3003 # `masterType`.
3004 #
3005 # This value must be consistent with the category of machine type that
3006 # `masterType` uses. In other words, both must be Compute Engine machine
3007 # types or both must be legacy machine types.
3008 #
3009 # This value must be present when `scaleTier` is set to `CUSTOM` and
3010 # `evaluatorCount` is greater than zero.
3011 &quot;network&quot;: &quot;A String&quot;, # Optional. The full name of the Google Compute Engine
3012 # [network](/compute/docs/networks-and-firewalls#networks) to which the Job
3013 # is peered. For example, projects/12345/global/networks/myVPC. Format is of
3014 # the form projects/{project}/global/networks/{network}, where {project} is a
3015 # project number, as in &#x27;12345&#x27;, and {network} is the network name.
3016 #
3017 # Private services access must already be configured for the network. If left
3018 # unspecified, the Job is not peered with any network. Learn more about
3019 # connecting a Job to a user network over private IP.
3021 &quot;parameterServerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
3022 # job&#x27;s parameter server.
3023 #
3024 # The supported values are the same as those described in the entry for
3025 # `master_type`.
3026 #
3027 # This value must be consistent with the category of machine type that
3028 # `masterType` uses. In other words, both must be Compute Engine machine
3029 # types or both must be legacy machine types.
3030 #
3031 # This value must be present when `scaleTier` is set to `CUSTOM` and
3032 # `parameter_server_count` is greater than zero.
3033 &quot;workerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
3034 # job&#x27;s worker nodes.
3035 #
3036 # The supported values are the same as those described in the entry for
3037 # `masterType`.
3038 #
3039 # This value must be consistent with the category of machine type that
3040 # `masterType` uses. In other words, both must be Compute Engine machine
3041 # types or both must be legacy machine types.
3042 #
3043 # If you use `cloud_tpu` for this value, see special instructions for
3044 # [configuring a custom TPU
3045 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
3046 #
3047 # This value must be present when `scaleTier` is set to `CUSTOM` and
3048 # `workerCount` is greater than zero.
3049 &quot;masterConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
3050 #
3051 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
3052 # to a Compute Engine machine type. Learn about [restrictions on accelerator
3053 # configurations for
3054 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3055 #
3056 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
3057 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
3058 # about [configuring custom
3059 # containers](/ai-platform/training/docs/distributed-training-containers).
3060 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3061 # [Learn about restrictions on accelerator configurations for
3062 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3063 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3064 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3065 # [accelerators for online
3066 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3067 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
3068 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
3069 },
3070 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
3071 # Registry. Learn more about [configuring custom
3072 # containers](/ai-platform/training/docs/distributed-training-containers).
3073 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
3074 # The following rules apply for container_command and container_args:
3075 # - If you do not supply command or args:
3076 # The defaults defined in the Docker image are used.
3077 # - If you supply a command but no args:
3078 # The default EntryPoint and the default Cmd defined in the Docker image
3079 # are ignored. Your command is run without any arguments.
3080 # - If you supply only args:
3081 # The default Entrypoint defined in the Docker image is run with the args
3082 # that you supplied.
3083 # - If you supply a command and args:
3084 # The default Entrypoint and the default Cmd defined in the Docker image
3085 # are ignored. Your command is run with your args.
3086 # It cannot be set if custom container image is
3087 # not provided.
3088 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3089 # both cannot be set at the same time.
3090 &quot;A String&quot;,
3091 ],
3092 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
3093 # the one used in the custom container. This field is required if the replica
3094 # is a TPU worker that uses a custom container. Otherwise, do not specify
3095 # this field. This must be a [runtime version that currently supports
3096 # training with
3097 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3098 #
3099 # Note that the version of TensorFlow included in a runtime version may
3100 # differ from the numbering of the runtime version itself, because it may
3101 # have a different [patch
3102 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3103 # In this field, you must specify the runtime version (TensorFlow minor
3104 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3105 # specify `1.x`.
3106 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
3107 # If provided, it will override default ENTRYPOINT of the docker image.
3108 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
3109 # It cannot be set if custom container image is
3110 # not provided.
3111 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3112 # both cannot be set at the same time.
3113 &quot;A String&quot;,
3114 ],
3115 },
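      # A hedged sketch of a custom-container master (the image path and args
      # are assumptions for illustration only):
      #
      #   &quot;masterConfig&quot;: {
      #     &quot;imageUri&quot;: &quot;gcr.io/my-project/my-trainer:latest&quot;,
      #     &quot;containerArgs&quot;: [&quot;--epochs=10&quot;],
      #   },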
3116 &quot;evaluatorCount&quot;: &quot;A String&quot;, # Optional. The number of evaluator replicas to use for the training job.
3117 # Each replica in the cluster will be of the type specified in
3118 # `evaluator_type`.
3119 #
3120 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
3121 # set this value, you must also set `evaluator_type`.
3122 #
3123 # The default value is zero.
3124 &quot;args&quot;: [ # Optional. Command-line arguments passed to the training application when it
3125 # starts. If your job uses a custom container, then the arguments are passed
3126 # to the container&#x27;s &lt;a class=&quot;external&quot; target=&quot;_blank&quot;
3127 # href=&quot;https://docs.docker.com/engine/reference/builder/#entrypoint&quot;&gt;
3128 # `ENTRYPOINT`&lt;/a&gt; command.
3129 &quot;A String&quot;,
3130 ],
3131 &quot;pythonModule&quot;: &quot;A String&quot;, # Required. The Python module name to run after installing the packages.
3132 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for training. You must
3133 # either specify this field or specify `masterConfig.imageUri`.
3134 #
3135 # For more information, see the [runtime version
3136 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
3137 # manage runtime versions](/ai-platform/training/docs/versioning).
3138 &quot;parameterServerCount&quot;: &quot;A String&quot;, # Optional. The number of parameter server replicas to use for the training
3139 # job. Each replica in the cluster will be of the type specified in
3140 # `parameter_server_type`.
3141 #
3142 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
3143 # set this value, you must also set `parameter_server_type`.
3144 #
3145 # The default value is zero.
3146 &quot;evaluatorConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
3147 #
3148 # You should only set `evaluatorConfig.acceleratorConfig` if
3149 # `evaluatorType` is set to a Compute Engine machine type. [Learn
3150 # about restrictions on accelerator configurations for
3151 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3152 #
3153 # Set `evaluatorConfig.imageUri` only if you build a custom image for
3154 # your evaluator. If `evaluatorConfig.imageUri` has not been
3155 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
3156 # containers](/ai-platform/training/docs/distributed-training-containers).
3157 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3158 # [Learn about restrictions on accelerator configurations for
3159 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3160 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3161 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3162 # [accelerators for online
3163 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3164 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
3165 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
3166 },
3167 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
3168 # Registry. Learn more about [configuring custom
3169 # containers](/ai-platform/training/docs/distributed-training-containers).
3170 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
3171 # The following rules apply for container_command and container_args:
3172 # - If you do not supply command or args:
3173 # The defaults defined in the Docker image are used.
3174 # - If you supply a command but no args:
3175 # The default EntryPoint and the default Cmd defined in the Docker image
3176 # are ignored. Your command is run without any arguments.
3177 # - If you supply only args:
3178 # The default Entrypoint defined in the Docker image is run with the args
3179 # that you supplied.
3180 # - If you supply a command and args:
3181 # The default Entrypoint and the default Cmd defined in the Docker image
3182 # are ignored. Your command is run with your args.
3183 # It cannot be set if custom container image is
3184 # not provided.
3185 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3186 # both cannot be set at the same time.
3187 &quot;A String&quot;,
3188 ],
3189 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
3190 # the one used in the custom container. This field is required if the replica
3191 # is a TPU worker that uses a custom container. Otherwise, do not specify
3192 # this field. This must be a [runtime version that currently supports
3193 # training with
3194 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3195 #
3196 # Note that the version of TensorFlow included in a runtime version may
3197 # differ from the numbering of the runtime version itself, because it may
3198 # have a different [patch
3199 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3200 # In this field, you must specify the runtime version (TensorFlow minor
3201 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3202 # specify `1.x`.
3203 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
3204 # If provided, it will override default ENTRYPOINT of the docker image.
3205 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
3206 # It cannot be set if custom container image is
3207 # not provided.
3208 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3209 # both cannot be set at the same time.
3210 &quot;A String&quot;,
3211 ],
3212 },
3213    &quot;encryptionConfig&quot;: { # Represents a custom encryption key configuration that can be applied to a resource. # Optional. Options for using customer-managed encryption keys (CMEK) to
3214 # protect resources created by a training job, instead of using Google&#x27;s
3215 # default encryption. If this is set, then all resources created by the
3216 # training job will be encrypted with the customer-managed encryption key
3217 # that you specify.
3218 #
3219 # [Learn how and when to use CMEK with AI Platform
3220 # Training](/ai-platform/training/docs/cmek).
3222 &quot;kmsKeyName&quot;: &quot;A String&quot;, # The Cloud KMS resource identifier of the customer-managed encryption key
3223 # used to protect a resource, such as a training job. It has the following
3224 # format:
3225 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
3226 },
3227 &quot;workerCount&quot;: &quot;A String&quot;, # Optional. The number of worker replicas to use for the training job. Each
3228 # replica in the cluster will be of the type specified in `worker_type`.
3229 #
3230 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
3231 # set this value, you must also set `worker_type`.
3232 #
3233 # The default value is zero.
Bu Sun Kim65020912020-05-20 12:08:20 -07003234 &quot;scheduling&quot;: { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
3235 &quot;maxWaitTime&quot;: &quot;A String&quot;,
3236 &quot;maxRunningTime&quot;: &quot;A String&quot;, # Optional. The maximum job running time, expressed in seconds. The field can
3237 # contain up to nine fractional digits, terminated by `s`. If not specified,
3238 # this field defaults to `604800s` (seven days).
3239 #
3240 # If the training job is still running after this duration, AI Platform
3241 # Training cancels it.
3242 #
3243 # For example, if you want to ensure your job runs for no more than 2 hours,
3244 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
3245 # minute).
3246 #
3247 # If you submit your training job using the `gcloud` tool, you can [provide
3248 # this field in a `config.yaml`
3249 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
3250 # For example:
3251 #
3252 # ```yaml
3253 # trainingInput:
3254 # ...
3255 # scheduling:
3256 # maxRunningTime: 7200s
3257 # ...
3258 # ```
3259 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003260 &quot;scaleTier&quot;: &quot;A String&quot;, # Required. Specifies the machine types, the number of replicas for workers
3261 # and parameter servers.
3262 &quot;packageUris&quot;: [ # Required. The Google Cloud Storage location of the packages with
3263 # the training program and any additional dependencies.
3264 # The maximum number of package URIs is 100.
3265 &quot;A String&quot;,
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04003266 ],
Dan O'Mearadd494642020-05-01 07:42:23 -07003267 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003268 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
3269 # prevent simultaneous updates of a job from overwriting each other.
3270 # It is strongly suggested that systems make use of the `etag` in the
3271 # read-modify-write cycle to perform job updates in order to avoid race
3272 # conditions: An `etag` is returned in the response to `GetJob`, and
3273 # systems are expected to put that etag in the request to `UpdateJob` to
3274 # ensure that their change will be applied to the same version of the job.
3275 &quot;jobId&quot;: &quot;A String&quot;, # Required. The user-specified id of the job.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04003276 },
3277 ],
3278 }</pre>
3279</div>
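<p>For orientation, a minimal sketch of a job body that uses the
<code>trainingInput</code> fields documented above. The client variable
<code>ml</code>, the project id, bucket, and package names are illustrative
assumptions, not values defined by this API.</p>
<pre>
# Assumes a discovery-built client, e.g.:
#   from googleapiclient.discovery import build
#   ml = build(&#x27;ml&#x27;, &#x27;v1&#x27;)
job_body = {
  &#x27;jobId&#x27;: &#x27;my_training_job_001&#x27;,
  &#x27;trainingInput&#x27;: {
    &#x27;scaleTier&#x27;: &#x27;BASIC&#x27;,
    &#x27;region&#x27;: &#x27;us-central1&#x27;,
    &#x27;pythonModule&#x27;: &#x27;trainer.task&#x27;,
    &#x27;packageUris&#x27;: [&#x27;gs://my-bucket/trainer-0.1.tar.gz&#x27;],
    &#x27;runtimeVersion&#x27;: &#x27;2.1&#x27;,
    &#x27;pythonVersion&#x27;: &#x27;3.7&#x27;,
    &#x27;scheduling&#x27;: {&#x27;maxRunningTime&#x27;: &#x27;7200s&#x27;},
  },
}
response = ml.projects().jobs().create(
    parent=&#x27;projects/my-project&#x27;, body=job_body).execute()
</pre>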
3280
3281<div class="method">
3282 <code class="details" id="list_next">list_next(previous_request, previous_response)</code>
3283 <pre>Retrieves the next page of results.
3284
3285Args:
3286 previous_request: The request for the previous page. (required)
3287 previous_response: The response from the request for the previous page. (required)
3288
3289Returns:
Bu Sun Kim65020912020-05-20 12:08:20 -07003290 A request object that you can call &#x27;execute()&#x27; on to request the next
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04003291 page. Returns None if there are no more items in the collection.
3292 </pre>
3293</div>
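<p>A short usage sketch of this pagination pattern. The discovery-built client
<code>ml</code> and the project id are assumptions for illustration.</p>
<pre>
from googleapiclient.discovery import build

ml = build(&#x27;ml&#x27;, &#x27;v1&#x27;)
request = ml.projects().jobs().list(parent=&#x27;projects/my-project&#x27;)
while request is not None:
  response = request.execute()
  for job in response.get(&#x27;jobs&#x27;, []):
    print(job[&#x27;jobId&#x27;], job.get(&#x27;state&#x27;))
  # list_next returns None when there are no more pages.
  request = ml.projects().jobs().list_next(
      previous_request=request, previous_response=response)
</pre>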
3294
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003295<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -07003296 <code class="details" id="patch">patch(name, body=None, updateMask=None, x__xgafv=None)</code>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003297 <pre>Updates a specific job resource.
3298
3299Currently the only supported field to update is `labels`.
3300
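For example, a minimal sketch (the client `ml`, the project id, and the label
values are illustrative assumptions):

  ml.projects().jobs().patch(
      name=&#x27;projects/my-project/jobs/my_training_job_001&#x27;,
      updateMask=&#x27;labels&#x27;,
      body={&#x27;labels&#x27;: {&#x27;team&#x27;: &#x27;research&#x27;}}).execute()
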
3301Args:
3302 name: string, Required. The job name. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -07003303 body: object, The request body.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003304 The object takes the form of:
3305
3306{ # Represents a training or prediction job.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003307 &quot;createTime&quot;: &quot;A String&quot;, # Output only. When the job was created.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07003308 &quot;predictionInput&quot;: { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003309 &quot;dataFormat&quot;: &quot;A String&quot;, # Required. The format of the input data files.
3310 &quot;outputPath&quot;: &quot;A String&quot;, # Required. The output Google Cloud Storage location.
3311 &quot;inputPaths&quot;: [ # Required. The Cloud Storage location of the input data files. May contain
3312 # &lt;a href=&quot;/storage/docs/gsutil/addlhelp/WildcardNames&quot;&gt;wildcards&lt;/a&gt;.
3313 &quot;A String&quot;,
3314 ],
3315 &quot;region&quot;: &quot;A String&quot;, # Required. The Google Compute Engine region to run the prediction job in.
3316 # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
3317 # for AI Platform services.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07003318 &quot;versionName&quot;: &quot;A String&quot;, # Use this field if you want to specify a version of the model to use. The
3319 # string is formatted the same way as `model_version`, with the addition
3320 # of the version information:
3321 #
3322 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION&quot;`
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003323 &quot;uri&quot;: &quot;A String&quot;, # Use this field if you want to specify a Google Cloud Storage path for
3324 # the model to use.
3325 &quot;outputDataFormat&quot;: &quot;A String&quot;, # Optional. Format of the output data files, defaults to JSON.
3326 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for this batch
3327 # prediction. If not set, AI Platform will pick the runtime version used
3328 # during the CreateVersion request for this model version, or choose the
3329 # latest stable version when model version information is not available,
3330 # such as when the model is specified by uri.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07003331 &quot;modelName&quot;: &quot;A String&quot;, # Use this field if you want to use the default version for the specified
3332 # model. The string must use the following format:
3333 #
3334 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL&quot;`
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07003335 &quot;signatureName&quot;: &quot;A String&quot;, # Optional. The name of the signature defined in the SavedModel to use for
3336 # this job. Please refer to
3337 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
3338 # for information about how to use signatures.
3339 #
3340 # Defaults to
3341 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
3342 # , which is &quot;serving_default&quot;.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07003343 &quot;batchSize&quot;: &quot;A String&quot;, # Optional. Number of records per batch, defaults to 64.
3344 # The service buffers batch_size records in memory before invoking one
3345 # TensorFlow prediction call internally, so take the record size and
3346 # available memory into consideration when setting this parameter.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003347 &quot;maxWorkerCount&quot;: &quot;A String&quot;, # Optional. The maximum number of workers to be used for parallel processing.
3348 # Defaults to 10 if not specified.
3349 },
3350  &quot;labels&quot;: { # Optional. One or more labels that you can add to organize your jobs.
3351 # Each label is a key-value pair, where both the key and the value are
3352 # arbitrary strings that you supply.
3353 # For more information, see the documentation on
3354 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
3355 &quot;a_key&quot;: &quot;A String&quot;,
3356 },
3357 &quot;endTime&quot;: &quot;A String&quot;, # Output only. When the job processing was completed.
3358 &quot;trainingOutput&quot;: { # Represents results of a training job. Output only. # The current training job result.
3359 &quot;trials&quot;: [ # Results for individual Hyperparameter trials.
3360 # Only set for hyperparameter tuning jobs.
3361 { # Represents the result of a single hyperparameter tuning trial from a
3362 # training job. The TrainingOutput object that is returned on successful
3363 # completion of a training job with hyperparameter tuning includes a list
3364 # of HyperparameterOutput objects, one for each successful trial.
3365 &quot;endTime&quot;: &quot;A String&quot;, # Output only. End time for the trial.
3366 &quot;finalMetric&quot;: { # An observed value of a metric. # The final objective metric seen for this trial.
3367 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
3368 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
3369 },
3370 &quot;hyperparameters&quot;: { # The hyperparameters given to this trial.
3371 &quot;a_key&quot;: &quot;A String&quot;,
3372 },
3373 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
3374 # Only set for trials of built-in algorithms jobs that have succeeded.
3375 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
3376 # saves the trained model. Only set for successful jobs that don&#x27;t use
3377 # hyperparameter tuning.
3378 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
3379 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
3380 # trained.
3381 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
3382 },
3383 &quot;startTime&quot;: &quot;A String&quot;, # Output only. Start time for the trial.
3384      &quot;allMetrics&quot;: [ # All recorded objective metrics for this trial. This field is not currently
3385 # populated.
3386 { # An observed value of a metric.
3387 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
3388 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
3389 },
3390 ],
3391 &quot;trialId&quot;: &quot;A String&quot;, # The trial id for these results.
3392 &quot;isTrialStoppedEarly&quot;: True or False, # True if the trial is stopped early.
3393 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of the trial.
3394 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07003395 ],
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003396 &quot;completedTrialCount&quot;: &quot;A String&quot;, # The number of hyperparameter tuning trials that completed successfully.
3397 # Only set for hyperparameter tuning jobs.
3398 &quot;isHyperparameterTuningJob&quot;: True or False, # Whether this job is a hyperparameter tuning job.
3399 &quot;isBuiltInAlgorithmJob&quot;: True or False, # Whether this job is a built-in Algorithm job.
3400 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
3401 # Only set for built-in algorithms jobs.
3402 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
3403 # saves the trained model. Only set for successful jobs that don&#x27;t use
3404 # hyperparameter tuning.
3405 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
3406 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
3407 # trained.
3408 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
3409 },
3410 &quot;consumedMLUnits&quot;: 3.14, # The amount of ML units consumed by the job.
3411 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # The TensorFlow summary tag name used for optimizing hyperparameter tuning
3412 # trials. See
3413 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
3414 # for more information. Only set for hyperparameter tuning jobs.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07003415 },
3416 &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003417 &quot;predictionOutput&quot;: { # Represents results of a prediction job. # The current prediction job result.
3418 &quot;errorCount&quot;: &quot;A String&quot;, # The number of data instances which resulted in errors.
3419 &quot;nodeHours&quot;: 3.14, # Node hours used by the batch prediction job.
3420 &quot;outputPath&quot;: &quot;A String&quot;, # The output Google Cloud Storage location provided at the job creation time.
3421 &quot;predictionCount&quot;: &quot;A String&quot;, # The number of generated predictions.
3422 },
3423 &quot;startTime&quot;: &quot;A String&quot;, # Output only. When the job processing was started.
3424 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of a job.
Bu Sun Kim65020912020-05-20 12:08:20 -07003425 &quot;trainingInput&quot;: { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
Dan O'Mearadd494642020-05-01 07:42:23 -07003426 # to submit your training job, you can specify the input parameters as
3427 # command-line arguments and/or in a YAML configuration file referenced from
3428 # the --config command-line argument. For details, see the guide to [submitting
3429 # a training job](/ai-platform/training/docs/training-jobs).
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003430    &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account to use as the workload run-as account.
3431 # Users submitting jobs must have act-as permission on this run-as account.
3432 # If not specified, the AI Platform service agent (CMLE P4SA) is used by default.
Bu Sun Kim65020912020-05-20 12:08:20 -07003433 &quot;workerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
3434 #
3435 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
3436 # to a Compute Engine machine type. [Learn about restrictions on accelerator
3437 # configurations for
3438 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3439 #
3440 # Set `workerConfig.imageUri` only if you build a custom image for your
3441 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
3442 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
3443 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003444 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3445 # [Learn about restrictions on accelerator configurations for
3446 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3447 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3448 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3449 # [accelerators for online
3450 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3451 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
3452 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
3453 },
Bu Sun Kim65020912020-05-20 12:08:20 -07003454 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
3455 # Registry. Learn more about [configuring custom
3456 # containers](/ai-platform/training/docs/distributed-training-containers).
3457 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
3458 # The following rules apply for container_command and container_args:
3459 # - If you do not supply command or args:
3460 # The defaults defined in the Docker image are used.
3461 # - If you supply a command but no args:
3462 # The default EntryPoint and the default Cmd defined in the Docker image
3463 # are ignored. Your command is run without any arguments.
3464 # - If you supply only args:
3465 # The default Entrypoint defined in the Docker image is run with the args
3466 # that you supplied.
3467 # - If you supply a command and args:
3468 # The default Entrypoint and the default Cmd defined in the Docker image
3469 # are ignored. Your command is run with your args.
3470 # It cannot be set if custom container image is
3471 # not provided.
3472 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3473 # both cannot be set at the same time.
3474 &quot;A String&quot;,
3475 ],
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07003476 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
3477 # the one used in the custom container. This field is required if the replica
3478 # is a TPU worker that uses a custom container. Otherwise, do not specify
3479 # this field. This must be a [runtime version that currently supports
3480 # training with
3481 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3482 #
3483 # Note that the version of TensorFlow included in a runtime version may
3484 # differ from the numbering of the runtime version itself, because it may
3485 # have a different [patch
3486 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3487 # In this field, you must specify the runtime version (TensorFlow minor
3488 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3489 # specify `1.x`.
3490 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
3491 # If provided, it will override default ENTRYPOINT of the docker image.
3492 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
3493 # It cannot be set if custom container image is
3494 # not provided.
3495 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3496 # both cannot be set at the same time.
3497 &quot;A String&quot;,
3498 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07003499 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003500 &quot;useChiefInTfConfig&quot;: True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
3501 # variable when training with a custom container. Defaults to `false`. [Learn
3502 # more about this
3503 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
Bu Sun Kim65020912020-05-20 12:08:20 -07003504 #
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003505 # This field has no effect for training jobs that don&#x27;t use a custom
3506 # container.
Bu Sun Kim65020912020-05-20 12:08:20 -07003507 &quot;masterType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
3508 # job&#x27;s master worker. You must specify this field when `scaleTier` is set to
Dan O'Mearadd494642020-05-01 07:42:23 -07003509 # `CUSTOM`.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003510 #
Dan O'Mearadd494642020-05-01 07:42:23 -07003511 # You can use certain Compute Engine machine types directly in this field.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003512 # The following types are supported:
3513 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003514 # - `n1-standard-4`
3515 # - `n1-standard-8`
3516 # - `n1-standard-16`
3517 # - `n1-standard-32`
3518 # - `n1-standard-64`
3519 # - `n1-standard-96`
3520 # - `n1-highmem-2`
3521 # - `n1-highmem-4`
3522 # - `n1-highmem-8`
3523 # - `n1-highmem-16`
3524 # - `n1-highmem-32`
3525 # - `n1-highmem-64`
3526 # - `n1-highmem-96`
3527 # - `n1-highcpu-16`
3528 # - `n1-highcpu-32`
3529 # - `n1-highcpu-64`
3530 # - `n1-highcpu-96`
3531 #
Dan O'Mearadd494642020-05-01 07:42:23 -07003532 # Learn more about [using Compute Engine machine
3533 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003534 #
Dan O'Mearadd494642020-05-01 07:42:23 -07003535 # Alternatively, you can use the following legacy machine types:
3536 #
3537 # - `standard`
3538 # - `large_model`
3539 # - `complex_model_s`
3540 # - `complex_model_m`
3541 # - `complex_model_l`
3542 # - `standard_gpu`
3543 # - `complex_model_m_gpu`
3544 # - `complex_model_l_gpu`
3545 # - `standard_p100`
3546 # - `complex_model_m_p100`
3547 # - `standard_v100`
3548 # - `large_model_v100`
3549 # - `complex_model_m_v100`
3550 # - `complex_model_l_v100`
3551 #
3552 # Learn more about [using legacy machine
3553 # types](/ml-engine/docs/machine-types#legacy-machine-types).
3554 #
3555 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
3556 # field. Learn more about the [special configuration options for training
3557 # with
3558 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003559 &quot;parameterServerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
Bu Sun Kim65020912020-05-20 12:08:20 -07003560 #
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003561 # You should only set `parameterServerConfig.acceleratorConfig` if
3562 # `parameterServerType` is set to a Compute Engine machine type. [Learn
3563 # about restrictions on accelerator configurations for
Bu Sun Kim65020912020-05-20 12:08:20 -07003564 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3565 #
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003566 # Set `parameterServerConfig.imageUri` only if you build a custom image for
3567 # your parameter server. If `parameterServerConfig.imageUri` has not been
3568 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
Bu Sun Kim65020912020-05-20 12:08:20 -07003569 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003570 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3571 # [Learn about restrictions on accelerator configurations for
3572 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3573 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3574 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3575 # [accelerators for online
3576 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3577 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
3578 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
3579 },
Bu Sun Kim65020912020-05-20 12:08:20 -07003580 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
3581 # Registry. Learn more about [configuring custom
3582 # containers](/ai-platform/training/docs/distributed-training-containers).
3583 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
3584 # The following rules apply for container_command and container_args:
3585 # - If you do not supply command or args:
3586 # The defaults defined in the Docker image are used.
3587 # - If you supply a command but no args:
3588 # The default EntryPoint and the default Cmd defined in the Docker image
3589 # are ignored. Your command is run without any arguments.
3590 # - If you supply only args:
3591 # The default Entrypoint defined in the Docker image is run with the args
3592 # that you supplied.
3593 # - If you supply a command and args:
3594 # The default Entrypoint and the default Cmd defined in the Docker image
3595 # are ignored. Your command is run with your args.
3596 # It cannot be set if custom container image is
3597 # not provided.
3598 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3599 # both cannot be set at the same time.
3600 &quot;A String&quot;,
3601 ],
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07003602 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
3603 # the one used in the custom container. This field is required if the replica
3604 # is a TPU worker that uses a custom container. Otherwise, do not specify
3605 # this field. This must be a [runtime version that currently supports
3606 # training with
3607 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3608 #
3609 # Note that the version of TensorFlow included in a runtime version may
3610 # differ from the numbering of the runtime version itself, because it may
3611 # have a different [patch
3612 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3613 # In this field, you must specify the runtime version (TensorFlow minor
3614 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3615 # specify `1.x`.
3616 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
3617 # If provided, it will override default ENTRYPOINT of the docker image.
3618 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
3619 # It cannot be set if custom container image is
3620 # not provided.
3621 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3622 # both cannot be set at the same time.
3623 &quot;A String&quot;,
3624 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07003625 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003626 &quot;region&quot;: &quot;A String&quot;, # Required. The region to run the training job in. See the [available
3627 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
Bu Sun Kim65020912020-05-20 12:08:20 -07003628 &quot;jobDir&quot;: &quot;A String&quot;, # Optional. A Google Cloud Storage path in which to store training outputs
Dan O'Mearadd494642020-05-01 07:42:23 -07003629 # and other data needed for training. This path is passed to your TensorFlow
Bu Sun Kim65020912020-05-20 12:08:20 -07003630 # program as the &#x27;--job-dir&#x27; command-line argument. The benefit of specifying
Dan O'Mearadd494642020-05-01 07:42:23 -07003631 # this field is that Cloud ML validates the path for use in training.
Bu Sun Kim65020912020-05-20 12:08:20 -07003632 &quot;pythonVersion&quot;: &quot;A String&quot;, # Optional. The version of Python used in training. You must either specify
3633 # this field or specify `masterConfig.imageUri`.
3634 #
3635 # The following Python versions are available:
3636 #
3637 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
3638 # later.
3639 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
3640 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
3641 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
3642 # earlier.
3643 #
3644 # Read more about the Python versions available for [each runtime
3645 # version](/ml-engine/docs/runtime-version-list).
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003646 &quot;hyperparameters&quot;: { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
3647 &quot;maxTrials&quot;: 42, # Optional. How many training trials should be attempted to optimize
3648 # the specified hyperparameters.
3649 #
3650 # Defaults to one.
3651 &quot;enableTrialEarlyStopping&quot;: True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
3652 # early stopping.
3653 &quot;params&quot;: [ # Required. The set of parameters to tune.
3654 { # Represents a single hyperparameter to optimize.
3655 &quot;minValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
3656 # should be unset if type is `CATEGORICAL`. This value should be an integer if
3657 # type is `INTEGER`.
3658 &quot;categoricalValues&quot;: [ # Required if type is `CATEGORICAL`. The list of possible categories.
3659 &quot;A String&quot;,
3660 ],
3661 &quot;scaleType&quot;: &quot;A String&quot;, # Optional. How the parameter should be scaled to the hypercube.
3662 # Leave unset for categorical parameters.
3663 # Some kind of scaling is strongly recommended for real or integral
3664 # parameters (e.g., `UNIT_LINEAR_SCALE`).
3665 &quot;discreteValues&quot;: [ # Required if type is `DISCRETE`.
3666 # A list of feasible points.
3667 # The list should be in strictly increasing order. For instance, this
3668 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
3669 # should not contain more than 1,000 values.
3670 3.14,
3671 ],
3672 &quot;type&quot;: &quot;A String&quot;, # Required. The type of the parameter.
3673 &quot;maxValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
3674 # should be unset if type is `CATEGORICAL`. This value should be an integer if
3675 # type is `INTEGER`.
3676 &quot;parameterName&quot;: &quot;A String&quot;, # Required. The parameter name must be unique amongst all ParameterConfigs in
3677 # a HyperparameterSpec message. E.g., &quot;learning_rate&quot;.
3678 },
3679 ],
3680 &quot;maxFailedTrials&quot;: 42, # Optional. The number of failed trials that need to be seen before failing
3681 # the hyperparameter tuning job. You can specify this field to override the
3682 # default failing criteria for AI Platform hyperparameter tuning jobs.
3683 #
3684 # Defaults to zero, which means the service decides when a hyperparameter
3685 # job should fail.
3686 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # Optional. The TensorFlow summary tag name to use for optimizing trials. For
3687 # current versions of TensorFlow, this tag name should exactly match what is
3688 # shown in TensorBoard, including all scopes. For versions of TensorFlow
3689 # prior to 0.12, this should be only the tag passed to tf.Summary.
3690 # By default, &quot;training/hptuning/metric&quot; will be used.
3691 &quot;resumePreviousJobId&quot;: &quot;A String&quot;, # Optional. The ID of a prior hyperparameter tuning job to
3692 # continue. The job ID is used to find the corresponding Vizier
3693 # study GUID and to resume that study.
3694 &quot;goal&quot;: &quot;A String&quot;, # Required. The type of goal to use for tuning. Available types are
3695 # `MAXIMIZE` and `MINIMIZE`.
3696 #
3697 # Defaults to `MAXIMIZE`.
3698 &quot;algorithm&quot;: &quot;A String&quot;, # Optional. The search algorithm specified for the hyperparameter
3699 # tuning job.
3700 # Uses the default AI Platform hyperparameter tuning
3701 # algorithm if unspecified.
3702 &quot;maxParallelTrials&quot;: 42, # Optional. The number of training trials to run concurrently.
3703 # You can reduce the time it takes to perform hyperparameter tuning by adding
3704 # trials in parallel. However, each trial only benefits from the information
3705 # gained in completed trials. That means that a trial does not get access to
3706 # the results of trials running at the same time, which could reduce the
3707 # quality of the overall optimization.
3708 #
3709 # Each trial will use the same scale tier and machine types.
3710 #
3711 # Defaults to one.
3712 },
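# A minimal sketch of a `hyperparameters` block as it might appear in the
# `trainingInput` of the request body. The metric tag and the single
# `learning_rate` parameter below are hypothetical; adapt them to your trainer.
# ```python
# hyperparameters = {
#     'goal': 'MAXIMIZE',
#     'hyperparameterMetricTag': 'accuracy',  # hypothetical summary tag
#     'maxTrials': 10,
#     'maxParallelTrials': 2,
#     'params': [
#         {
#             'parameterName': 'learning_rate',  # hypothetical parameter
#             'type': 'DOUBLE',
#             'minValue': 0.0001,
#             'maxValue': 0.1,
#             'scaleType': 'UNIT_LINEAR_SCALE',
#         },
#     ],
# }
# ```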
3713 &quot;evaluatorType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
3714 # job&#x27;s evaluator nodes.
3715 #
3716 # The supported values are the same as those described in the entry for
3717 # `masterType`.
3718 #
3719 # This value must be consistent with the category of machine type that
3720 # `masterType` uses. In other words, both must be Compute Engine machine
3721 # types or both must be legacy machine types.
3722 #
3723 # This value must be present when `scaleTier` is set to `CUSTOM` and
3724 # `evaluatorCount` is greater than zero.
3725 &quot;network&quot;: &quot;A String&quot;, # Optional. The full name of the Google Compute Engine
3726 # [network](/compute/docs/networks-and-firewalls#networks) to which the Job
3727 # is peered. For example, projects/12345/global/networks/myVPC. Format is of
3728 # the form projects/{project}/global/networks/{network}, where {project} is a
3729 # project number, as in &#x27;12345&#x27;, and {network} is a network name.
3730 #
3731 # Private services access must already be configured for the network. If left
3732 # unspecified, the Job is not peered with any network. Learn more about
3733 # connecting a Job to a user network over private
3734 # IP.
3735 &quot;parameterServerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
3736 # job&#x27;s parameter server.
3737 #
3738 # The supported values are the same as those described in the entry for
3739 # `master_type`.
3740 #
3741 # This value must be consistent with the category of machine type that
3742 # `masterType` uses. In other words, both must be Compute Engine machine
3743 # types or both must be legacy machine types.
3744 #
3745 # This value must be present when `scaleTier` is set to `CUSTOM` and
3746 # `parameter_server_count` is greater than zero.
3747 &quot;workerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
3748 # job&#x27;s worker nodes.
3749 #
3750 # The supported values are the same as those described in the entry for
3751 # `masterType`.
3752 #
3753 # This value must be consistent with the category of machine type that
3754 # `masterType` uses. In other words, both must be Compute Engine machine
3755 # types or both must be legacy machine types.
3756 #
3757 # If you use `cloud_tpu` for this value, see special instructions for
3758 # [configuring a custom TPU
3759 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
3760 #
3761 # This value must be present when `scaleTier` is set to `CUSTOM` and
3762 # `workerCount` is greater than zero.
3763 &quot;masterConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
3764 #
3765 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
3766 # to a Compute Engine machine type. Learn about [restrictions on accelerator
3767 # configurations for
3768 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3769 #
3770 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
3771 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
3772 # about [configuring custom
3773 # containers](/ai-platform/training/docs/distributed-training-containers).
3774 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3775 # [Learn about restrictions on accelerator configurations for
3776 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3777 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3778 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3779 # [accelerators for online
3780 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3781 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
3782 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
3783 },
3784 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
3785 # Registry. Learn more about [configuring custom
3786 # containers](/ai-platform/training/docs/distributed-training-containers).
3787 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
3788 # The following rules apply for container_command and container_args:
3789 # - If you do not supply command or args:
3790 # The defaults defined in the Docker image are used.
3791 # - If you supply a command but no args:
3792 # The default EntryPoint and the default Cmd defined in the Docker image
3793 # are ignored. Your command is run without any arguments.
3794 # - If you supply only args:
3795 # The default Entrypoint defined in the Docker image is run with the args
3796 # that you supplied.
3797 # - If you supply a command and args:
3798 # The default Entrypoint and the default Cmd defined in the Docker image
3799 # are ignored. Your command is run with your args.
3800 # It cannot be set if custom container image is
3801 # not provided.
3802 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3803 # both cannot be set at the same time.
3804 &quot;A String&quot;,
3805 ],
3806 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
3807 # the one used in the custom container. This field is required if the replica
3808 # is a TPU worker that uses a custom container. Otherwise, do not specify
3809 # this field. This must be a [runtime version that currently supports
3810 # training with
3811 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3812 #
3813 # Note that the version of TensorFlow included in a runtime version may
3814 # differ from the numbering of the runtime version itself, because it may
3815 # have a different [patch
3816 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3817 # In this field, you must specify the runtime version (TensorFlow minor
3818 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3819 # specify `1.x`.
3820 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
3821 # If provided, it will override default ENTRYPOINT of the docker image.
3822 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
3823 # It cannot be set if custom container image is
3824 # not provided.
3825 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3826 # both cannot be set at the same time.
3827 &quot;A String&quot;,
3828 ],
3829 },
3830 &quot;evaluatorCount&quot;: &quot;A String&quot;, # Optional. The number of evaluator replicas to use for the training job.
3831 # Each replica in the cluster will be of the type specified in
3832 # `evaluator_type`.
3833 #
3834 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
3835 # set this value, you must also set `evaluator_type`.
3836 #
3837 # The default value is zero.
3838 &quot;args&quot;: [ # Optional. Command-line arguments passed to the training application when it
3839 # starts. If your job uses a custom container, then the arguments are passed
3840 # to the container&#x27;s &lt;a class=&quot;external&quot; target=&quot;_blank&quot;
3841 # href=&quot;https://docs.docker.com/engine/reference/builder/#entrypoint&quot;&gt;
3842 # `ENTRYPOINT`&lt;/a&gt; command.
3843 &quot;A String&quot;,
3844 ],
3845 &quot;pythonModule&quot;: &quot;A String&quot;, # Required. The Python module name to run after installing the packages.
3846 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for training. You must
3847 # either specify this field or specify `masterConfig.imageUri`.
3848 #
3849 # For more information, see the [runtime version
3850 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
3851 # manage runtime versions](/ai-platform/training/docs/versioning).
3852 &quot;parameterServerCount&quot;: &quot;A String&quot;, # Optional. The number of parameter server replicas to use for the training
3853 # job. Each replica in the cluster will be of the type specified in
3854 # `parameter_server_type`.
3855 #
3856 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
3857 # set this value, you must also set `parameter_server_type`.
3858 #
3859 # The default value is zero.
3860 &quot;evaluatorConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
3861 #
3862 # You should only set `evaluatorConfig.acceleratorConfig` if
3863 # `evaluatorType` is set to a Compute Engine machine type. [Learn
3864 # about restrictions on accelerator configurations for
3865 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3866 #
3867 # Set `evaluatorConfig.imageUri` only if you build a custom image for
3868 # your evaluator. If `evaluatorConfig.imageUri` has not been
3869 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
3870 # containers](/ai-platform/training/docs/distributed-training-containers).
3871 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3872 # [Learn about restrictions on accelerator configurations for
3873 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
3874 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
3875 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
3876 # [accelerators for online
3877 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
3878 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
3879 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
3880 },
3881 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
3882 # Registry. Learn more about [configuring custom
3883 # containers](/ai-platform/training/docs/distributed-training-containers).
3884 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
3885 # The following rules apply for container_command and container_args:
3886 # - If you do not supply command or args:
3887 # The defaults defined in the Docker image are used.
3888 # - If you supply a command but no args:
3889 # The default EntryPoint and the default Cmd defined in the Docker image
3890 # are ignored. Your command is run without any arguments.
3891 # - If you supply only args:
3892 # The default Entrypoint defined in the Docker image is run with the args
3893 # that you supplied.
3894 # - If you supply a command and args:
3895 # The default Entrypoint and the default Cmd defined in the Docker image
3896 # are ignored. Your command is run with your args.
3897 # It cannot be set if custom container image is
3898 # not provided.
3899 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3900 # both cannot be set at the same time.
3901 &quot;A String&quot;,
3902 ],
3903 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
3904 # the one used in the custom container. This field is required if the replica
3905 # is a TPU worker that uses a custom container. Otherwise, do not specify
3906 # this field. This must be a [runtime version that currently supports
3907 # training with
3908 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
3909 #
3910 # Note that the version of TensorFlow included in a runtime version may
3911 # differ from the numbering of the runtime version itself, because it may
3912 # have a different [patch
3913 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
3914 # In this field, you must specify the runtime version (TensorFlow minor
3915 # version). For example, if your custom container runs TensorFlow `1.x.y`,
3916 # specify `1.x`.
3917 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
3918 # If provided, it will override default ENTRYPOINT of the docker image.
3919 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
3920 # It cannot be set if custom container image is
3921 # not provided.
3922 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
3923 # both cannot be set at the same time.
3924 &quot;A String&quot;,
3925 ],
3926 },
3927 &quot;encryptionConfig&quot;: { # Represents a custom encryption key configuration that can be applied to # Optional. Options for using customer-managed encryption keys (CMEK) to
3928 # protect resources created by a training job, instead of using Google&#x27;s
3929 # default encryption. If this is set, then all resources created by the
3930 # training job will be encrypted with the customer-managed encryption key
3931 # that you specify.
3932 #
3933 # [Learn how and when to use CMEK with AI Platform
3934 # Training](/ai-platform/training/docs/cmek).
3935 # a resource.
3936 &quot;kmsKeyName&quot;: &quot;A String&quot;, # The Cloud KMS resource identifier of the customer-managed encryption key
3937 # used to protect a resource, such as a training job. It has the following
3938 # format:
3939 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
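# A sketch of a filled-in `encryptionConfig`; all resource names below are
# hypothetical:
# ```python
# encryption_config = {
#     'kmsKeyName': ('projects/example-project/locations/us-central1/'
#                    'keyRings/example-ring/cryptoKeys/example-key'),
# }
# ```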
3940 },
3941 &quot;workerCount&quot;: &quot;A String&quot;, # Optional. The number of worker replicas to use for the training job. Each
3942 # replica in the cluster will be of the type specified in `worker_type`.
3943 #
3944 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
3945 # set this value, you must also set `worker_type`.
3946 #
3947 # The default value is zero.
Bu Sun Kim65020912020-05-20 12:08:20 -07003948 &quot;scheduling&quot;: { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
3949 &quot;maxWaitTime&quot;: &quot;A String&quot;,
3950 &quot;maxRunningTime&quot;: &quot;A String&quot;, # Optional. The maximum job running time, expressed in seconds. The field can
3951 # contain up to nine fractional digits, terminated by `s`. If not specified,
3952 # this field defaults to `604800s` (seven days).
Dan O'Mearadd494642020-05-01 07:42:23 -07003953 #
3954 # If the training job is still running after this duration, AI Platform
3955 # Training cancels it.
3956 #
3957 # For example, if you want to ensure your job runs for no more than 2 hours,
3958 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
3959 # minute).
3960 #
3961 # If you submit your training job using the `gcloud` tool, you can [provide
3962 # this field in a `config.yaml`
3963 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
3964 # For example:
3965 #
3966 # ```yaml
3967 # trainingInput:
3968 # ...
3969 # scheduling:
3970 # maxRunningTime: 7200s
3971 # ...
3972 # ```
3973 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003974 &quot;scaleTier&quot;: &quot;A String&quot;, # Required. Specifies the machine types, the number of replicas for workers
3975 # and parameter servers.
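# As a sketch, a `CUSTOM` tier cluster might be described like this in the
# `trainingInput` of the request body (machine types and counts are
# illustrative, not recommendations):
# ```python
# training_input = {
#     'scaleTier': 'CUSTOM',
#     'masterType': 'n1-highmem-8',
#     'workerType': 'n1-standard-8',
#     'workerCount': '4',
#     'parameterServerType': 'n1-standard-4',
#     'parameterServerCount': '2',
#     # plus packageUris, pythonModule, region, and so on.
# }
# ```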
3976 &quot;packageUris&quot;: [ # Required. The Google Cloud Storage location of the packages with
3977 # the training program and any additional dependencies.
3978 # The maximum number of package URIs is 100.
3979 &quot;A String&quot;,
Bu Sun Kim65020912020-05-20 12:08:20 -07003980 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07003981 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07003982 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
3983 # prevent simultaneous updates of a job from overwriting each other.
3984 # It is strongly suggested that systems make use of the `etag` in the
3985 # read-modify-write cycle to perform job updates in order to avoid race
3986 # conditions: An `etag` is returned in the response to `GetJob`, and
3987 # systems are expected to put that etag in the request to `UpdateJob` to
3988 # ensure that their change will be applied to the same version of the job.
3989 &quot;jobId&quot;: &quot;A String&quot;, # Required. The user-specified id of the job.
Bu Sun Kim65020912020-05-20 12:08:20 -07003990 }
3991
3992 updateMask: string, Required. Specifies the path, relative to `Job`, of the field to update.
3993To adopt etag mechanism, include `etag` field in the mask, and include the
3994`etag` value in your job resource.
3995
3996For example, to change the labels of a job, the `update_mask` parameter
3997would be specified as `labels`, `etag`, and the
3998`PATCH` request body would specify the new value, as follows:
3999 {
4000 &quot;labels&quot;: {
4001 &quot;owner&quot;: &quot;Google&quot;,
4002 &quot;color&quot;: &quot;Blue&quot;
4003 },
4004 &quot;etag&quot;: &quot;33a64df551425fcc55e4d42a148795d9f25f89d4&quot;
4005 }
4006If `etag` matches the one on the server, the labels of the job will be
4007replaced with the given ones, and the server end `etag` will be
4008recalculated.
4009
4010Currently the only supported update masks are `labels` and `etag`.
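For illustration, a minimal sketch of issuing such a patch with the Python
client library (the project and job IDs are hypothetical, and application
default credentials are assumed):

```python
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
job_name = 'projects/example-project/jobs/example_job'  # hypothetical

# Read the job first so the etag reflects the current server state.
job = ml.projects().jobs().get(name=job_name).execute()

patch_body = {
    'labels': {'owner': 'Google', 'color': 'Blue'},
    'etag': job['etag'],
}
updated = ml.projects().jobs().patch(
    name=job_name,
    body=patch_body,
    updateMask='labels,etag',
).execute()
```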
4011 x__xgafv: string, V1 error format.
4012 Allowed values
4013 1 - v1 error format
4014 2 - v2 error format
4015
4016Returns:
4017 An object of the form:
4018
4019 { # Represents a training or prediction job.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004020 &quot;createTime&quot;: &quot;A String&quot;, # Output only. When the job was created.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004021 &quot;predictionInput&quot;: { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004022 &quot;dataFormat&quot;: &quot;A String&quot;, # Required. The format of the input data files.
4023 &quot;outputPath&quot;: &quot;A String&quot;, # Required. The output Google Cloud Storage location.
4024 &quot;inputPaths&quot;: [ # Required. The Cloud Storage location of the input data files. May contain
4025 # &lt;a href=&quot;/storage/docs/gsutil/addlhelp/WildcardNames&quot;&gt;wildcards&lt;/a&gt;.
4026 &quot;A String&quot;,
4027 ],
4028 &quot;region&quot;: &quot;A String&quot;, # Required. The Google Compute Engine region to run the prediction job in.
4029 # See the &lt;a href=&quot;/ml-engine/docs/tensorflow/regions&quot;&gt;available regions&lt;/a&gt;
4030 # for AI Platform services.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004031 &quot;versionName&quot;: &quot;A String&quot;, # Use this field if you want to specify a version of the model to use. The
4032 # string is formatted the same way as `model_version`, with the addition
4033 # of the version information:
4034 #
4035 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION&quot;`
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004036 &quot;uri&quot;: &quot;A String&quot;, # Use this field if you want to specify a Google Cloud Storage path for
4037 # the model to use.
4038 &quot;outputDataFormat&quot;: &quot;A String&quot;, # Optional. Format of the output data files, defaults to JSON.
4039 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for this batch
4040 # prediction. If not set, AI Platform will pick the runtime version used
4041 # during the CreateVersion request for this model version, or choose the
4042 # latest stable version when model version information is not available
4043 # such as when the model is specified by uri.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004044 &quot;modelName&quot;: &quot;A String&quot;, # Use this field if you want to use the default version for the specified
4045 # model. The string must use the following format:
4046 #
4047 # `&quot;projects/YOUR_PROJECT/models/YOUR_MODEL&quot;`
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004048 &quot;signatureName&quot;: &quot;A String&quot;, # Optional. The name of the signature defined in the SavedModel to use for
4049 # this job. Please refer to
4050 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
4051 # for information about how to use signatures.
4052 #
4053 # Defaults to
4054 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
4055 # , which is &quot;serving_default&quot;.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004056 &quot;batchSize&quot;: &quot;A String&quot;, # Optional. Number of records per batch, defaults to 64.
4057 # The service will buffer batch_size number of records in memory before
4058 # invoking one Tensorflow prediction call internally. So take the record
4059 # size and memory available into consideration when setting this parameter.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004060 &quot;maxWorkerCount&quot;: &quot;A String&quot;, # Optional. The maximum number of workers to be used for parallel processing.
4061 # Defaults to 10 if not specified.
4062 },
4063 &quot;labels&quot;: { # Optional. One or more labels that you can add, to organize your jobs.
4064 # Each label is a key-value pair, where both the key and the value are
4065 # arbitrary strings that you supply.
4066 # For more information, see the documentation on
4067 # &lt;a href=&quot;/ml-engine/docs/tensorflow/resource-labels&quot;&gt;using labels&lt;/a&gt;.
4068 &quot;a_key&quot;: &quot;A String&quot;,
4069 },
4070 &quot;endTime&quot;: &quot;A String&quot;, # Output only. When the job processing was completed.
4071 &quot;trainingOutput&quot;: { # Represents results of a training job. Output only. # The current training job result.
4072 &quot;trials&quot;: [ # Results for individual Hyperparameter trials.
4073 # Only set for hyperparameter tuning jobs.
4074 { # Represents the result of a single hyperparameter tuning trial from a
4075 # training job. The TrainingOutput object that is returned on successful
4076 # completion of a training job with hyperparameter tuning includes a list
4077 # of HyperparameterOutput objects, one for each successful trial.
4078 &quot;endTime&quot;: &quot;A String&quot;, # Output only. End time for the trial.
4079 &quot;finalMetric&quot;: { # An observed value of a metric. # The final objective metric seen for this trial.
4080 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
4081 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
4082 },
4083 &quot;hyperparameters&quot;: { # The hyperparameters given to this trial.
4084 &quot;a_key&quot;: &quot;A String&quot;,
4085 },
4086 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
4087 # Only set for trials of built-in algorithms jobs that have succeeded.
4088 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
4089 # saves the trained model. Only set for successful jobs that don&#x27;t use
4090 # hyperparameter tuning.
4091 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
4092 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
4093 # trained.
4094 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
4095 },
4096 &quot;startTime&quot;: &quot;A String&quot;, # Output only. Start time for the trial.
4097 &quot;allMetrics&quot;: [ # All recorded object metrics for this trial. This field is not currently
4098 # populated.
4099 { # An observed value of a metric.
4100 &quot;trainingStep&quot;: &quot;A String&quot;, # The global training step for this metric.
4101 &quot;objectiveValue&quot;: 3.14, # The objective value at this training step.
4102 },
4103 ],
4104 &quot;trialId&quot;: &quot;A String&quot;, # The trial id for these results.
4105 &quot;isTrialStoppedEarly&quot;: True or False, # True if the trial is stopped early.
4106 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of the trial.
4107 },
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004108 ],
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004109 &quot;completedTrialCount&quot;: &quot;A String&quot;, # The number of hyperparameter tuning trials that completed successfully.
4110 # Only set for hyperparameter tuning jobs.
4111 &quot;isHyperparameterTuningJob&quot;: True or False, # Whether this job is a hyperparameter tuning job.
4112 &quot;isBuiltInAlgorithmJob&quot;: True or False, # Whether this job is a built-in Algorithm job.
4113 &quot;builtInAlgorithmOutput&quot;: { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
4114 # Only set for built-in algorithms jobs.
4115 &quot;modelPath&quot;: &quot;A String&quot;, # The Cloud Storage path to the `model/` directory where the training job
4116 # saves the trained model. Only set for successful jobs that don&#x27;t use
4117 # hyperparameter tuning.
4118 &quot;framework&quot;: &quot;A String&quot;, # Framework on which the built-in algorithm was trained.
4119 &quot;runtimeVersion&quot;: &quot;A String&quot;, # AI Platform runtime version on which the built-in algorithm was
4120 # trained.
4121 &quot;pythonVersion&quot;: &quot;A String&quot;, # Python version on which the built-in algorithm was trained.
4122 },
4123 &quot;consumedMLUnits&quot;: 3.14, # The amount of ML units consumed by the job.
4124 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # The TensorFlow summary tag name used for optimizing hyperparameter tuning
4125 # trials. See
4126 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
4127 # for more information. Only set for hyperparameter tuning jobs.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004128 },
4129 &quot;errorMessage&quot;: &quot;A String&quot;, # Output only. The details of a failure or a cancellation.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004130 &quot;predictionOutput&quot;: { # Represents results of a prediction job. # The current prediction job result.
4131 &quot;errorCount&quot;: &quot;A String&quot;, # The number of data instances which resulted in errors.
4132 &quot;nodeHours&quot;: 3.14, # Node hours used by the batch prediction job.
4133 &quot;outputPath&quot;: &quot;A String&quot;, # The output Google Cloud Storage location provided at the job creation time.
4134 &quot;predictionCount&quot;: &quot;A String&quot;, # The number of generated predictions.
4135 },
4136 &quot;startTime&quot;: &quot;A String&quot;, # Output only. When the job processing was started.
4137 &quot;state&quot;: &quot;A String&quot;, # Output only. The detailed state of a job.
Bu Sun Kim65020912020-05-20 12:08:20 -07004138 &quot;trainingInput&quot;: { # Represents input parameters for a training job. When using the gcloud command # Input parameters to create a training job.
4139 # to submit your training job, you can specify the input parameters as
4140 # command-line arguments and/or in a YAML configuration file referenced from
4141 # the --config command-line argument. For details, see the guide to [submitting
4142 # a training job](/ai-platform/training/docs/training-jobs).
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004143 &quot;serviceAccount&quot;: &quot;A String&quot;, # Optional. Specifies the service account for workload run-as account.
4144 # Users submitting jobs must have act-as permission on this run-as account.
4145 # If not specified, the AI Platform service agent (the CMLE P4SA) is used by default.
Bu Sun Kim65020912020-05-20 12:08:20 -07004146 &quot;workerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
4147 #
4148 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
4149 # to a Compute Engine machine type. [Learn about restrictions on accelerator
4150 # configurations for
4151 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
4152 #
4153 # Set `workerConfig.imageUri` only if you build a custom image for your
4154 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
4155 # the value of `masterConfig.imageUri`. Learn more about [configuring custom
4156 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004157 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
4158 # [Learn about restrictions on accelerator configurations for
4159 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
4160 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
4161 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
4162 # [accelerators for online
4163 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
4164 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
4165 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
4166 },
Bu Sun Kim65020912020-05-20 12:08:20 -07004167 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
4168 # Registry. Learn more about [configuring custom
4169 # containers](/ai-platform/training/docs/distributed-training-containers).
4170 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
4171 # The following rules apply for container_command and container_args:
4172 # - If you do not supply command or args:
4173 # The defaults defined in the Docker image are used.
4174 # - If you supply a command but no args:
4175 # The default EntryPoint and the default Cmd defined in the Docker image
4176 # are ignored. Your command is run without any arguments.
4177 # - If you supply only args:
4178 # The default Entrypoint defined in the Docker image is run with the args
4179 # that you supplied.
4180 # - If you supply a command and args:
4181 # The default Entrypoint and the default Cmd defined in the Docker image
4182 # are ignored. Your command is run with your args.
4183 # It cannot be set if custom container image is
4184 # not provided.
4185 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
4186 # both cannot be set at the same time.
4187 &quot;A String&quot;,
4188 ],
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004189 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
4190 # the one used in the custom container. This field is required if the replica
4191 # is a TPU worker that uses a custom container. Otherwise, do not specify
4192 # this field. This must be a [runtime version that currently supports
4193 # training with
4194 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
4195 #
4196 # Note that the version of TensorFlow included in a runtime version may
4197 # differ from the numbering of the runtime version itself, because it may
4198 # have a different [patch
4199 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
4200 # In this field, you must specify the runtime version (TensorFlow minor
4201 # version). For example, if your custom container runs TensorFlow `1.x.y`,
4202 # specify `1.x`.
4203 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
4204 # If provided, it will override default ENTRYPOINT of the docker image.
4205 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
4206 # It cannot be set if custom container image is
4207 # not provided.
4208 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
4209 # both cannot be set at the same time.
4210 &quot;A String&quot;,
4211 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07004212 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004213 &quot;useChiefInTfConfig&quot;: True or False, # Optional. Use `chief` instead of `master` in the `TF_CONFIG` environment
4214 # variable when training with a custom container. Defaults to `false`. [Learn
4215 # more about this
4216 # field.](/ai-platform/training/docs/distributed-training-details#chief-versus-master)
Bu Sun Kim65020912020-05-20 12:08:20 -07004217 #
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004218 # This field has no effect for training jobs that don&#x27;t use a custom
4219 # container.
Bu Sun Kim65020912020-05-20 12:08:20 -07004220 &quot;masterType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
4221 # job&#x27;s master worker. You must specify this field when `scaleTier` is set to
4222 # `CUSTOM`.
4223 #
4224 # You can use certain Compute Engine machine types directly in this field.
4225 # The following types are supported:
4226 #
4227 # - `n1-standard-4`
4228 # - `n1-standard-8`
4229 # - `n1-standard-16`
4230 # - `n1-standard-32`
4231 # - `n1-standard-64`
4232 # - `n1-standard-96`
4233 # - `n1-highmem-2`
4234 # - `n1-highmem-4`
4235 # - `n1-highmem-8`
4236 # - `n1-highmem-16`
4237 # - `n1-highmem-32`
4238 # - `n1-highmem-64`
4239 # - `n1-highmem-96`
4240 # - `n1-highcpu-16`
4241 # - `n1-highcpu-32`
4242 # - `n1-highcpu-64`
4243 # - `n1-highcpu-96`
4244 #
4245 # Learn more about [using Compute Engine machine
4246 # types](/ml-engine/docs/machine-types#compute-engine-machine-types).
4247 #
4248 # Alternatively, you can use the following legacy machine types:
4249 #
4250 # - `standard`
4251 # - `large_model`
4252 # - `complex_model_s`
4253 # - `complex_model_m`
4254 # - `complex_model_l`
4255 # - `standard_gpu`
4256 # - `complex_model_m_gpu`
4257 # - `complex_model_l_gpu`
4258 # - `standard_p100`
4259 # - `complex_model_m_p100`
4260 # - `standard_v100`
4261 # - `large_model_v100`
4262 # - `complex_model_m_v100`
4263 # - `complex_model_l_v100`
4264 #
4265 # Learn more about [using legacy machine
4266 # types](/ml-engine/docs/machine-types#legacy-machine-types).
4267 #
4268 # Finally, if you want to use a TPU for training, specify `cloud_tpu` in this
4269 # field. Learn more about the [special configuration options for training
4270 # with
4271 # TPUs](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
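# As an illustrative sketch, pairing a Compute Engine machine type with a GPU
# on the master worker (the accelerator type and count are examples only):
# ```python
# training_input = {
#     'scaleTier': 'CUSTOM',
#     'masterType': 'n1-standard-8',
#     'masterConfig': {
#         'acceleratorConfig': {
#             'type': 'NVIDIA_TESLA_K80',
#             'count': '1',
#         },
#     },
# }
# ```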
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004272 &quot;parameterServerConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
Bu Sun Kim65020912020-05-20 12:08:20 -07004273 #
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004274 # You should only set `parameterServerConfig.acceleratorConfig` if
4275 # `parameterServerType` is set to a Compute Engine machine type. [Learn
4276 # about restrictions on accelerator configurations for
Bu Sun Kim65020912020-05-20 12:08:20 -07004277 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
4278 #
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004279 # Set `parameterServerConfig.imageUri` only if you build a custom image for
4280 # your parameter server. If `parameterServerConfig.imageUri` has not been
4281 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
Bu Sun Kim65020912020-05-20 12:08:20 -07004282 # containers](/ai-platform/training/docs/distributed-training-containers).
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004283 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
4284 # [Learn about restrictions on accelerator configurations for
4285 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
4286 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
4287 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
4288 # [accelerators for online
4289 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
4290 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
4291 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
4292 },
Bu Sun Kim65020912020-05-20 12:08:20 -07004293 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
4294 # Registry. Learn more about [configuring custom
4295 # containers](/ai-platform/training/docs/distributed-training-containers).
4296 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
4297 # The following rules apply for container_command and container_args:
4298 # - If you do not supply command or args:
4299 # The defaults defined in the Docker image are used.
4300 # - If you supply a command but no args:
4301 # The default EntryPoint and the default Cmd defined in the Docker image
4302 # are ignored. Your command is run without any arguments.
4303 # - If you supply only args:
4304 # The default Entrypoint defined in the Docker image is run with the args
4305 # that you supplied.
4306 # - If you supply a command and args:
4307 # The default Entrypoint and the default Cmd defined in the Docker image
4308 # are ignored. Your command is run with your args.
4309 # It cannot be set if custom container image is
4310 # not provided.
4311 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
4312 # both cannot be set at the same time.
4313 &quot;A String&quot;,
4314 ],
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004315 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
4316 # the one used in the custom container. This field is required if the replica
4317 # is a TPU worker that uses a custom container. Otherwise, do not specify
4318 # this field. This must be a [runtime version that currently supports
4319 # training with
4320 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
4321 #
4322 # Note that the version of TensorFlow included in a runtime version may
4323 # differ from the numbering of the runtime version itself, because it may
4324 # have a different [patch
4325 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
4326 # In this field, you must specify the runtime version (TensorFlow minor
4327 # version). For example, if your custom container runs TensorFlow `1.x.y`,
4328 # specify `1.x`.
4329 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
4330 # If provided, it will override default ENTRYPOINT of the docker image.
4331 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
4332 # It cannot be set if custom container image is
4333 # not provided.
4334 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
4335 # both cannot be set at the same time.
4336 &quot;A String&quot;,
4337 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07004338 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004339 &quot;region&quot;: &quot;A String&quot;, # Required. The region to run the training job in. See the [available
4340 # regions](/ai-platform/training/docs/regions) for AI Platform Training.
Bu Sun Kim65020912020-05-20 12:08:20 -07004341 &quot;jobDir&quot;: &quot;A String&quot;, # Optional. A Google Cloud Storage path in which to store training outputs
4342 # and other data needed for training. This path is passed to your TensorFlow
4343 # program as the &#x27;--job-dir&#x27; command-line argument. The benefit of specifying
4344 # this field is that Cloud ML validates the path for use in training.
4345 &quot;pythonVersion&quot;: &quot;A String&quot;, # Optional. The version of Python used in training. You must either specify
4346 # this field or specify `masterConfig.imageUri`.
4347 #
4348 # The following Python versions are available:
4349 #
4350 # * Python &#x27;3.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
4351 # later.
4352 # * Python &#x27;3.5&#x27; is available when `runtime_version` is set to a version
4353 # from &#x27;1.4&#x27; to &#x27;1.14&#x27;.
4354 # * Python &#x27;2.7&#x27; is available when `runtime_version` is set to &#x27;1.15&#x27; or
4355 # earlier.
4356 #
4357 # Read more about the Python versions available for [each runtime
4358 # version](/ml-engine/docs/runtime-version-list).
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004359 &quot;hyperparameters&quot;: { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
4360 &quot;maxTrials&quot;: 42, # Optional. How many training trials should be attempted to optimize
4361 # the specified hyperparameters.
4362 #
4363 # Defaults to one.
4364 &quot;enableTrialEarlyStopping&quot;: True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
4365 # early stopping.
4366 &quot;params&quot;: [ # Required. The set of parameters to tune.
4367 { # Represents a single hyperparameter to optimize.
4368 &quot;minValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
4369 # should be unset if type is `CATEGORICAL`. This value should be an integer if
4370 # type is `INTEGER`.
4371 &quot;categoricalValues&quot;: [ # Required if type is `CATEGORICAL`. The list of possible categories.
4372 &quot;A String&quot;,
4373 ],
4374 &quot;scaleType&quot;: &quot;A String&quot;, # Optional. How the parameter should be scaled to the hypercube.
4375 # Leave unset for categorical parameters.
4376 # Some kind of scaling is strongly recommended for real or integral
4377 # parameters (e.g., `UNIT_LINEAR_SCALE`).
4378 &quot;discreteValues&quot;: [ # Required if type is `DISCRETE`.
4379 # A list of feasible points.
4380 # The list should be in strictly increasing order. For instance, this
4381 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
4382 # should not contain more than 1,000 values.
4383 3.14,
4384 ],
4385 &quot;type&quot;: &quot;A String&quot;, # Required. The type of the parameter.
4386 &quot;maxValue&quot;: 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
4387 # should be unset if type is `CATEGORICAL`. This value should be an integer if
4388 # type is `INTEGER`.
4389 &quot;parameterName&quot;: &quot;A String&quot;, # Required. The parameter name must be unique amongst all ParameterConfigs in
4390 # a HyperparameterSpec message. E.g., &quot;learning_rate&quot;.
4391 },
4392 ],
4393 &quot;maxFailedTrials&quot;: 42, # Optional. The number of failed trials that need to be seen before failing
4394 # the hyperparameter tuning job. You can specify this field to override the
4395 # default failing criteria for AI Platform hyperparameter tuning jobs.
4396 #
4397 # Defaults to zero, which means the service decides when a hyperparameter
4398 # job should fail.
4399 &quot;hyperparameterMetricTag&quot;: &quot;A String&quot;, # Optional. The TensorFlow summary tag name to use for optimizing trials. For
4400 # current versions of TensorFlow, this tag name should exactly match what is
4401 # shown in TensorBoard, including all scopes. For versions of TensorFlow
4402 # prior to 0.12, this should be only the tag passed to tf.Summary.
4403 # By default, &quot;training/hptuning/metric&quot; will be used.
4404 &quot;resumePreviousJobId&quot;: &quot;A String&quot;, # Optional. The ID of a prior hyperparameter tuning job to
4405 # continue. The job ID is used to find the corresponding Vizier
4406 # study GUID and to resume that study.
4407 &quot;goal&quot;: &quot;A String&quot;, # Required. The type of goal to use for tuning. Available types are
4408 # `MAXIMIZE` and `MINIMIZE`.
4409 #
4410 # Defaults to `MAXIMIZE`.
4411 &quot;algorithm&quot;: &quot;A String&quot;, # Optional. The search algorithm specified for the hyperparameter
4412 # tuning job.
4413 # Uses the default AI Platform hyperparameter tuning
4414 # algorithm if unspecified.
4415 &quot;maxParallelTrials&quot;: 42, # Optional. The number of training trials to run concurrently.
4416 # You can reduce the time it takes to perform hyperparameter tuning by adding
4417 # trials in parallel. However, each trial only benefits from the information
4418 # gained in completed trials. That means that a trial does not get access to
4419 # the results of trials running at the same time, which could reduce the
4420 # quality of the overall optimization.
4421 #
4422 # Each trial will use the same scale tier and machine types.
4423 #
4424 # Defaults to one.
4425 },
4426 &quot;evaluatorType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
4427 # job&#x27;s evaluator nodes.
4428 #
4429 # The supported values are the same as those described in the entry for
4430 # `masterType`.
4431 #
4432 # This value must be consistent with the category of machine type that
4433 # `masterType` uses. In other words, both must be Compute Engine machine
4434 # types or both must be legacy machine types.
4435 #
4436 # This value must be present when `scaleTier` is set to `CUSTOM` and
4437 # `evaluatorCount` is greater than zero.
4438 &quot;network&quot;: &quot;A String&quot;, # Optional. The full name of the Google Compute Engine
4439 # [network](/compute/docs/networks-and-firewalls#networks) to which the Job
4440 # is peered. For example, projects/12345/global/networks/myVPC. Format is of
4441 # the form projects/{project}/global/networks/{network}, where {project} is a
4442 # project number, as in &#x27;12345&#x27;, and {network} is a network name.
4443 #
4444 # Private services access must already be configured for the network. If left
4445 # unspecified, the Job is not peered with any network. Learn more about
4446 # connecting a Job to a user network over private
4447 # IP.
4448 &quot;parameterServerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
4449 # job&#x27;s parameter server.
4450 #
4451 # The supported values are the same as those described in the entry for
4452 # `master_type`.
4453 #
4454 # This value must be consistent with the category of machine type that
4455 # `masterType` uses. In other words, both must be Compute Engine machine
4456 # types or both must be legacy machine types.
4457 #
4458 # This value must be present when `scaleTier` is set to `CUSTOM` and
4459 # `parameter_server_count` is greater than zero.
4460 &quot;workerType&quot;: &quot;A String&quot;, # Optional. Specifies the type of virtual machine to use for your training
4461 # job&#x27;s worker nodes.
4462 #
4463 # The supported values are the same as those described in the entry for
4464 # `masterType`.
4465 #
4466 # This value must be consistent with the category of machine type that
4467 # `masterType` uses. In other words, both must be Compute Engine machine
4468 # types or both must be legacy machine types.
4469 #
4470 # If you use `cloud_tpu` for this value, see special instructions for
4471 # [configuring a custom TPU
4472 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
4473 #
4474 # This value must be present when `scaleTier` is set to `CUSTOM` and
4475 # `workerCount` is greater than zero.
4476 &quot;masterConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
4477 #
4478 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
4479 # to a Compute Engine machine type. Learn about [restrictions on accelerator
4480 # configurations for
4481 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
4482 #
4483 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
4484 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more
4485 # about [configuring custom
4486 # containers](/ai-platform/training/docs/distributed-training-containers).
4487 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
4488 # [Learn about restrictions on accelerator configurations for
4489 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
4490 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
4491 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
4492 # [accelerators for online
4493 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
4494 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
4495 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
4496 },
4497 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
4498 # Registry. Learn more about [configuring custom
4499 # containers](/ai-platform/training/docs/distributed-training-containers).
4500 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
4501 # The following rules apply for container_command and container_args:
4502 # - If you do not supply command or args:
4503 # The defaults defined in the Docker image are used.
4504 # - If you supply a command but no args:
4505 # The default EntryPoint and the default Cmd defined in the Docker image
4506 # are ignored. Your command is run without any arguments.
4507 # - If you supply only args:
4508 # The default Entrypoint defined in the Docker image is run with the args
4509 # that you supplied.
4510 # - If you supply a command and args:
4511 # The default Entrypoint and the default Cmd defined in the Docker image
4512 # are ignored. Your command is run with your args.
4513 # It cannot be set if custom container image is
4514 # not provided.
4515 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
4516 # both cannot be set at the same time.
4517 &quot;A String&quot;,
4518 ],
4519 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
4520 # the one used in the custom container. This field is required if the replica
4521 # is a TPU worker that uses a custom container. Otherwise, do not specify
4522 # this field. This must be a [runtime version that currently supports
4523 # training with
4524 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
4525 #
4526 # Note that the version of TensorFlow included in a runtime version may
4527 # differ from the numbering of the runtime version itself, because it may
4528 # have a different [patch
4529 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
4530 # In this field, you must specify the runtime version (TensorFlow minor
4531 # version). For example, if your custom container runs TensorFlow `1.x.y`,
4532 # specify `1.x`.
4533 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
4534 # If provided, it will override default ENTRYPOINT of the docker image.
4535 # If not provided, the docker image&#x27;s ENTRYPOINT is used.
4536 # It cannot be set if custom container image is
4537 # not provided.
4538 # Note that this field and [TrainingInput.args] are mutually exclusive, i.e.,
4539 # both cannot be set at the same time.
4540 &quot;A String&quot;,
4541 ],
4542 },
4543 &quot;evaluatorCount&quot;: &quot;A String&quot;, # Optional. The number of evaluator replicas to use for the training job.
4544 # Each replica in the cluster will be of the type specified in
4545 # `evaluator_type`.
4546 #
4547 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
4548 # set this value, you must also set `evaluator_type`.
4549 #
4550 # The default value is zero.
4551 &quot;args&quot;: [ # Optional. Command-line arguments passed to the training application when it
4552 # starts. If your job uses a custom container, then the arguments are passed
4553 # to the container&#x27;s &lt;a class=&quot;external&quot; target=&quot;_blank&quot;
4554 # href=&quot;https://docs.docker.com/engine/reference/builder/#entrypoint&quot;&gt;
4555 # `ENTRYPOINT`&lt;/a&gt; command.
4556 &quot;A String&quot;,
4557 ],
4558 &quot;pythonModule&quot;: &quot;A String&quot;, # Required. The Python module name to run after installing the packages.
4559 &quot;runtimeVersion&quot;: &quot;A String&quot;, # Optional. The AI Platform runtime version to use for training. You must
4560 # either specify this field or specify `masterConfig.imageUri`.
4561 #
4562 # For more information, see the [runtime version
4563 # list](/ai-platform/training/docs/runtime-version-list) and learn [how to
4564 # manage runtime versions](/ai-platform/training/docs/versioning).
4565 &quot;parameterServerCount&quot;: &quot;A String&quot;, # Optional. The number of parameter server replicas to use for the training
4566 # job. Each replica in the cluster will be of the type specified in
4567 # `parameter_server_type`.
4568 #
4569 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
4570 # set this value, you must also set `parameter_server_type`.
4571 #
4572 # The default value is zero.
4573 &quot;evaluatorConfig&quot;: { # Represents the configuration for a replica in a cluster. # Optional. The configuration for evaluators.
4574 #
4575 # You should only set `evaluatorConfig.acceleratorConfig` if
4576 # `evaluatorType` is set to a Compute Engine machine type. [Learn
4577 # about restrictions on accelerator configurations for
4578 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
4579 #
4580 # Set `evaluatorConfig.imageUri` only if you build a custom image for
4581 # your evaluator. If `evaluatorConfig.imageUri` has not been
4582 # set, AI Platform uses the value of `masterConfig.imageUri`. Learn more about [configuring custom
4583 # containers](/ai-platform/training/docs/distributed-training-containers).
4584 &quot;acceleratorConfig&quot;: { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
4585 # [Learn about restrictions on accelerator configurations for
4586 # training.](/ai-platform/training/docs/using-gpus#compute-engine-machine-types-with-gpu)
4587 # Note that the AcceleratorConfig can be used in both Jobs and Versions.
4588 # Learn more about [accelerators for training](/ml-engine/docs/using-gpus) and
4589 # [accelerators for online
4590 # prediction](/ml-engine/docs/machine-types-online-prediction#gpus).
4591 &quot;type&quot;: &quot;A String&quot;, # The type of accelerator to use.
4592 &quot;count&quot;: &quot;A String&quot;, # The number of accelerators to attach to each machine running the job.
4593 },
4594 &quot;imageUri&quot;: &quot;A String&quot;, # The Docker image to run on the replica. This image must be in Container
4595 # Registry. Learn more about [configuring custom
4596 # containers](/ai-platform/training/docs/distributed-training-containers).
4597 &quot;containerArgs&quot;: [ # Arguments to the entrypoint command.
4598 # The following rules apply for container_command and container_args:
4599 # - If you do not supply command or args:
4600 # The defaults defined in the Docker image are used.
4601 # - If you supply a command but no args:
4602 # The default EntryPoint and the default Cmd defined in the Docker image
4603 # are ignored. Your command is run without any arguments.
4604 # - If you supply only args:
4605 # The default Entrypoint defined in the Docker image is run with the args
4606 # that you supplied.
4607 # - If you supply a command and args:
4608 # The default Entrypoint and the default Cmd defined in the Docker image
4609 # are ignored. Your command is run with your args.
4610 # This field cannot be set unless a custom container image is
4611 # provided.
4612 # Note that this field and [TrainingInput.args] are mutually exclusive:
4613 # they cannot both be set at the same time.
4614 &quot;A String&quot;,
4615 ],
4616 &quot;tpuTfVersion&quot;: &quot;A String&quot;, # The AI Platform runtime version that includes a TensorFlow version matching
4617 # the one used in the custom container. This field is required if the replica
4618 # is a TPU worker that uses a custom container. Otherwise, do not specify
4619 # this field. This must be a [runtime version that currently supports
4620 # training with
4621 # TPUs](/ml-engine/docs/tensorflow/runtime-version-list#tpu-support).
4622 #
4623 # Note that the version of TensorFlow included in a runtime version may
4624 # differ from the numbering of the runtime version itself, because it may
4625 # have a different [patch
4626 # version](https://www.tensorflow.org/guide/version_compat#semantic_versioning_20).
4627 # In this field, you must specify the runtime version (TensorFlow minor
4628 # version). For example, if your custom container runs TensorFlow `1.x.y`,
4629 # specify `1.x`.
4630 &quot;containerCommand&quot;: [ # The command with which the replica&#x27;s custom container is run.
4631 # If provided, it overrides the default ENTRYPOINT of the Docker image.
4632 # If not provided, the Docker image&#x27;s ENTRYPOINT is used.
4633 # This field cannot be set unless a custom container image is
4634 # provided.
4635 # Note that this field and [TrainingInput.args] are mutually exclusive:
4636 # they cannot both be set at the same time.
4637 &quot;A String&quot;,
4638 ],
4639 },
4640 &quot;encryptionConfig&quot;: { # Represents a custom encryption key configuration that can be applied to a resource. # Optional. Options for using customer-managed encryption keys (CMEK) to
4641 # protect resources created by a training job, instead of using Google&#x27;s
4642 # default encryption. If this is set, then all resources created by the
4643 # training job will be encrypted with the customer-managed encryption key
4644 # that you specify.
4645 #
4646 # [Learn how and when to use CMEK with AI Platform
4647 # Training](/ai-platform/training/docs/cmek).
4649 &quot;kmsKeyName&quot;: &quot;A String&quot;, # The Cloud KMS resource identifier of the customer-managed encryption key
4650 # used to protect a resource, such as a training job. It has the following
4651 # format:
4652 # `projects/{PROJECT_ID}/locations/{REGION}/keyRings/{KEY_RING_NAME}/cryptoKeys/{KEY_NAME}`
4653 },
4654 &quot;workerCount&quot;: &quot;A String&quot;, # Optional. The number of worker replicas to use for the training job. Each
4655 # replica in the cluster will be of the type specified in `worker_type`.
4656 #
4657 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
4658 # set this value, you must also set `worker_type`.
4659 #
4660 # The default value is zero.
Bu Sun Kim65020912020-05-20 12:08:20 -07004661 &quot;scheduling&quot;: { # All parameters related to scheduling of training jobs. # Optional. Scheduling options for a training job.
4662 &quot;maxWaitTime&quot;: &quot;A String&quot;,
4663 &quot;maxRunningTime&quot;: &quot;A String&quot;, # Optional. The maximum job running time, expressed in seconds. The field can
4664 # contain up to nine fractional digits, terminated by `s`. If not specified,
4665 # this field defaults to `604800s` (seven days).
4666 #
4667 # If the training job is still running after this duration, AI Platform
4668 # Training cancels it.
4669 #
4670 # For example, if you want to ensure your job runs for no more than 2 hours,
4671 # set this field to `7200s` (2 hours * 60 minutes / hour * 60 seconds /
4672 # minute).
4673 #
4674 # If you submit your training job using the `gcloud` tool, you can [provide
4675 # this field in a `config.yaml`
4676 # file](/ai-platform/training/docs/training-jobs#formatting_your_configuration_parameters).
4677 # For example:
4678 #
4679 # ```yaml
4680 # trainingInput:
4681 # ...
4682 # scheduling:
4683 # maxRunningTime: 7200s
4684 # ...
4685 # ```
4686 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004687 &quot;scaleTier&quot;: &quot;A String&quot;, # Required. Specifies the machine types, the number of replicas for workers
4688 # and parameter servers.
4689 &quot;packageUris&quot;: [ # Required. The Google Cloud Storage location of the packages with
4690 # the training program and any additional dependencies.
4691 # The maximum number of package URIs is 100.
4692 &quot;A String&quot;,
Bu Sun Kim65020912020-05-20 12:08:20 -07004693 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07004694 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004695 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
4696 # prevent simultaneous updates of a job from overwriting each other.
4697 # It is strongly suggested that systems make use of the `etag` in the
4698 # read-modify-write cycle to perform job updates in order to avoid race
4699 # conditions: An `etag` is returned in the response to `GetJob`, and
4700 # systems are expected to put that etag in the request to `UpdateJob` to
4701 # ensure that their change will be applied to the same version of the job.
4702 &quot;jobId&quot;: &quot;A String&quot;, # Required. The user-specified id of the job.
Bu Sun Kim65020912020-05-20 12:08:20 -07004703 }</pre>
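<p>For illustration only: a minimal sketch, in Python, of the etag-based read-modify-write cycle described above for job updates. It assumes the discovery-based <code>google-api-python-client</code>, application default credentials, and that <code>GetJob</code>/<code>UpdateJob</code> correspond to the <code>get</code> and <code>patch</code> methods of this collection; the project ID, job ID, label change, and <code>updateMask</code> value are placeholders.</p>
<pre>
from googleapiclient import discovery

# Build the AI Platform client (assumes application default credentials).
ml = discovery.build('ml', 'v1')
job_name = 'projects/my-project/jobs/my_training_job'  # placeholder resource name

# Read the job first so its `etag` can be echoed back with the update; this
# keeps a concurrent update from being silently overwritten.
job = ml.projects().jobs().get(name=job_name).execute()

# Illustrative change: add a label to the job.
labels = dict(job.get('labels', {}), phase='tuning')
body = {'labels': labels, 'etag': job['etag']}

updated_job = ml.projects().jobs().patch(
    name=job_name, body=body, updateMask='labels').execute()
print(updated_job['jobId'], updated_job['etag'])
</pre>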
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004704</div>
4705
4706<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -07004707 <code class="details" id="setIamPolicy">setIamPolicy(resource, body=None, x__xgafv=None)</code>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004708 <pre>Sets the access control policy on the specified resource. Replaces any
4709existing policy.
4710
Bu Sun Kim65020912020-05-20 12:08:20 -07004711Can return `NOT_FOUND`, `INVALID_ARGUMENT`, and `PERMISSION_DENIED` errors.
Dan O'Mearadd494642020-05-01 07:42:23 -07004712
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004713Args:
4714 resource: string, REQUIRED: The resource for which the policy is being specified.
4715See the operation documentation for the appropriate value for this field. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -07004716 body: object, The request body.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004717 The object takes the form of:
4718
4719{ # Request message for `SetIamPolicy` method.
Bu Sun Kim65020912020-05-20 12:08:20 -07004720 &quot;policy&quot;: { # An Identity and Access Management (IAM) policy, which specifies access controls for Google Cloud resources. # REQUIRED: The complete policy to be applied to the `resource`. The size of
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004721 # the policy is limited to a few tens of kilobytes. An empty policy is a
4722 # valid policy, but certain Cloud Platform services (such as Projects)
4723 # might reject it.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004725 #
4726 #
Dan O'Mearadd494642020-05-01 07:42:23 -07004727 # A `Policy` is a collection of `bindings`. A `binding` binds one or more
4728 # `members` to a single `role`. Members can be user accounts, service accounts,
4729 # Google groups, and domains (such as G Suite). A `role` is a named list of
4730 # permissions; each `role` can be an IAM predefined role or a user-created
4731 # custom role.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004732 #
Bu Sun Kim65020912020-05-20 12:08:20 -07004733 # For some types of Google Cloud resources, a `binding` can also specify a
4734 # `condition`, which is a logical expression that allows access to a resource
4735 # only if the expression evaluates to `true`. A condition can add constraints
4736 # based on attributes of the request, the resource, or both. To learn which
4737 # resources support conditions in their IAM policies, see the
4738 # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
Dan O'Mearadd494642020-05-01 07:42:23 -07004739 #
4740 # **JSON example:**
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004741 #
4742 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07004743 # &quot;bindings&quot;: [
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004744 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07004745 # &quot;role&quot;: &quot;roles/resourcemanager.organizationAdmin&quot;,
4746 # &quot;members&quot;: [
4747 # &quot;user:mike@example.com&quot;,
4748 # &quot;group:admins@example.com&quot;,
4749 # &quot;domain:google.com&quot;,
4750 # &quot;serviceAccount:my-project-id@appspot.gserviceaccount.com&quot;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004751 # ]
4752 # },
4753 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07004754 # &quot;role&quot;: &quot;roles/resourcemanager.organizationViewer&quot;,
4755 # &quot;members&quot;: [
4756 # &quot;user:eve@example.com&quot;
4757 # ],
4758 # &quot;condition&quot;: {
4759 # &quot;title&quot;: &quot;expirable access&quot;,
4760 # &quot;description&quot;: &quot;Does not grant access after Sep 2020&quot;,
4761 # &quot;expression&quot;: &quot;request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)&quot;,
Dan O'Mearadd494642020-05-01 07:42:23 -07004762 # }
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004763 # }
Dan O'Mearadd494642020-05-01 07:42:23 -07004764 # ],
Bu Sun Kim65020912020-05-20 12:08:20 -07004765 # &quot;etag&quot;: &quot;BwWWja0YfJA=&quot;,
4766 # &quot;version&quot;: 3
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004767 # }
4768 #
Dan O'Mearadd494642020-05-01 07:42:23 -07004769 # **YAML example:**
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004770 #
4771 # bindings:
4772 # - members:
4773 # - user:mike@example.com
4774 # - group:admins@example.com
4775 # - domain:google.com
Dan O'Mearadd494642020-05-01 07:42:23 -07004776 # - serviceAccount:my-project-id@appspot.gserviceaccount.com
4777 # role: roles/resourcemanager.organizationAdmin
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004778 # - members:
Dan O'Mearadd494642020-05-01 07:42:23 -07004779 # - user:eve@example.com
4780 # role: roles/resourcemanager.organizationViewer
4781 # condition:
4782 # title: expirable access
4783 # description: Does not grant access after Sep 2020
Bu Sun Kim65020912020-05-20 12:08:20 -07004784 # expression: request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)
Dan O'Mearadd494642020-05-01 07:42:23 -07004785 # - etag: BwWWja0YfJA=
4786 # - version: 3
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004787 #
4788 # For a description of IAM and its features, see the
Dan O'Mearadd494642020-05-01 07:42:23 -07004789 # [IAM documentation](https://cloud.google.com/iam/docs/).
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004790 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
4791 # prevent simultaneous updates of a policy from overwriting each other.
4792 # It is strongly suggested that systems make use of the `etag` in the
4793 # read-modify-write cycle to perform policy updates in order to avoid race
4794 # conditions: An `etag` is returned in the response to `getIamPolicy`, and
4795 # systems are expected to put that etag in the request to `setIamPolicy` to
4796 # ensure that their change will be applied to the same version of the policy.
Bu Sun Kim65020912020-05-20 12:08:20 -07004797 #
4798 # **Important:** If you use IAM Conditions, you must include the `etag` field
4799 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
4800 # you to overwrite a version `3` policy with a version `1` policy, and all of
4801 # the conditions in the version `3` policy are lost.
Bu Sun Kim65020912020-05-20 12:08:20 -07004802 &quot;auditConfigs&quot;: [ # Specifies cloud audit logging configuration for this policy.
4803 { # Specifies the audit configuration for a service.
4804 # The configuration determines which permission types are logged, and what
4805 # identities, if any, are exempted from logging.
4806 # An AuditConfig must have one or more AuditLogConfigs.
4807 #
4808 # If there are AuditConfigs for both `allServices` and a specific service,
4809 # the union of the two AuditConfigs is used for that service: the log_types
4810 # specified in each AuditConfig are enabled, and the exempted_members in each
4811 # AuditLogConfig are exempted.
4812 #
4813 # Example Policy with multiple AuditConfigs:
4814 #
4815 # {
4816 # &quot;audit_configs&quot;: [
4817 # {
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004818 # &quot;service&quot;: &quot;allServices&quot;,
Bu Sun Kim65020912020-05-20 12:08:20 -07004819 # &quot;audit_log_configs&quot;: [
4820 # {
4821 # &quot;log_type&quot;: &quot;DATA_READ&quot;,
4822 # &quot;exempted_members&quot;: [
4823 # &quot;user:jose@example.com&quot;
4824 # ]
4825 # },
4826 # {
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004827 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;
Bu Sun Kim65020912020-05-20 12:08:20 -07004828 # },
4829 # {
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004830 # &quot;log_type&quot;: &quot;ADMIN_READ&quot;
Bu Sun Kim65020912020-05-20 12:08:20 -07004831 # }
4832 # ]
4833 # },
4834 # {
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004835 # &quot;service&quot;: &quot;sampleservice.googleapis.com&quot;,
Bu Sun Kim65020912020-05-20 12:08:20 -07004836 # &quot;audit_log_configs&quot;: [
4837 # {
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004838 # &quot;log_type&quot;: &quot;DATA_READ&quot;
Bu Sun Kim65020912020-05-20 12:08:20 -07004839 # },
4840 # {
4841 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
4842 # &quot;exempted_members&quot;: [
4843 # &quot;user:aliya@example.com&quot;
4844 # ]
4845 # }
4846 # ]
4847 # }
4848 # ]
4849 # }
4850 #
4851 # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
4852 # logging. It also exempts jose@example.com from DATA_READ logging, and
4853 # aliya@example.com from DATA_WRITE logging.
4854 &quot;service&quot;: &quot;A String&quot;, # Specifies a service that will be enabled for audit logging.
4855 # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
4856 # `allServices` is a special value that covers all services.
4857 &quot;auditLogConfigs&quot;: [ # The configuration for logging of each type of permission.
4858 { # Provides the configuration for logging a type of permissions.
4859 # Example:
4860 #
4861 # {
4862 # &quot;audit_log_configs&quot;: [
4863 # {
4864 # &quot;log_type&quot;: &quot;DATA_READ&quot;,
4865 # &quot;exempted_members&quot;: [
4866 # &quot;user:jose@example.com&quot;
4867 # ]
4868 # },
4869 # {
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004870 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;
Bu Sun Kim65020912020-05-20 12:08:20 -07004871 # }
4872 # ]
4873 # }
4874 #
4875 # This enables &#x27;DATA_READ&#x27; and &#x27;DATA_WRITE&#x27; logging, while exempting
4876 # jose@example.com from DATA_READ logging.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004877 &quot;logType&quot;: &quot;A String&quot;, # The log type that this config enables.
Bu Sun Kim65020912020-05-20 12:08:20 -07004878 &quot;exemptedMembers&quot;: [ # Specifies the identities that do not cause logging for this type of
4879 # permission.
4880 # Follows the same format as Binding.members.
4881 &quot;A String&quot;,
4882 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07004883 },
4884 ],
4885 },
4886 ],
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004887 &quot;version&quot;: 42, # Specifies the format of the policy.
4888 #
4889 # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
4890 # are rejected.
4891 #
4892 # Any operation that affects conditional role bindings must specify version
4893 # `3`. This requirement applies to the following operations:
4894 #
4895 # * Getting a policy that includes a conditional role binding
4896 # * Adding a conditional role binding to a policy
4897 # * Changing a conditional role binding in a policy
4898 # * Removing any role binding, with or without a condition, from a policy
4899 # that includes conditions
4900 #
4901 # **Important:** If you use IAM Conditions, you must include the `etag` field
4902 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
4903 # you to overwrite a version `3` policy with a version `1` policy, and all of
4904 # the conditions in the version `3` policy are lost.
4905 #
4906 # If a policy does not include any conditions, operations on that policy may
4907 # specify any valid version or leave the field unset.
4908 #
4909 # To learn which resources support conditions in their IAM policies, see the
4910 # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
Bu Sun Kim65020912020-05-20 12:08:20 -07004911 &quot;bindings&quot;: [ # Associates a list of `members` with a `role`. Optionally, it may specify a
Dan O'Mearadd494642020-05-01 07:42:23 -07004912 # `condition` that determines how and when the `bindings` are applied. Each
4913 # of the `bindings` must contain at least one member.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004914 { # Associates `members` with a `role`.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004915 &quot;role&quot;: &quot;A String&quot;, # Role that is assigned to `members`.
4916 # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
Bu Sun Kim65020912020-05-20 12:08:20 -07004917 &quot;condition&quot;: { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
4918 #
4919 # If the condition evaluates to `true`, then this binding applies to the
4920 # current request.
4921 #
4922 # If the condition evaluates to `false`, then this binding does not apply to
4923 # the current request. However, a different role binding might grant the same
4924 # role to one or more of the members in this binding.
4925 #
4926 # To learn which resources support conditions in their IAM policies, see the
4927 # [IAM
4928 # documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
4929 # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
4930 # are documented at https://github.com/google/cel-spec.
4931 #
4932 # Example (Comparison):
4933 #
4934 # title: &quot;Summary size limit&quot;
4935 # description: &quot;Determines if a summary is less than 100 chars&quot;
4936 # expression: &quot;document.summary.size() &lt; 100&quot;
4937 #
4938 # Example (Equality):
4939 #
4940 # title: &quot;Requestor is owner&quot;
4941 # description: &quot;Determines if requestor is the document owner&quot;
4942 # expression: &quot;document.owner == request.auth.claims.email&quot;
4943 #
4944 # Example (Logic):
4945 #
4946 # title: &quot;Public documents&quot;
4947 # description: &quot;Determine whether the document should be publicly visible&quot;
4948 # expression: &quot;document.type != &#x27;private&#x27; &amp;&amp; document.type != &#x27;internal&#x27;&quot;
4949 #
4950 # Example (Data Manipulation):
4951 #
4952 # title: &quot;Notification string&quot;
4953 # description: &quot;Create a notification string with a timestamp.&quot;
4954 # expression: &quot;&#x27;New message received at &#x27; + string(document.create_time)&quot;
4955 #
4956 # The exact variables and functions that may be referenced within an expression
4957 # are determined by the service that evaluates it. See the service
4958 # documentation for additional information.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07004959 &quot;expression&quot;: &quot;A String&quot;, # Textual representation of an expression in Common Expression Language
4960 # syntax.
Bu Sun Kim65020912020-05-20 12:08:20 -07004961 &quot;title&quot;: &quot;A String&quot;, # Optional. Title for the expression, i.e. a short string describing
4962 # its purpose. This can be used, e.g., in UIs that allow entering the
4963 # expression.
4964 &quot;location&quot;: &quot;A String&quot;, # Optional. String indicating the location of the expression for error
4965 # reporting, e.g. a file name and a position in the file.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07004966 &quot;description&quot;: &quot;A String&quot;, # Optional. Description of the expression. This is a longer text which
4967 # describes the expression, e.g. when hovered over it in a UI.
Bu Sun Kim65020912020-05-20 12:08:20 -07004968 },
4969 &quot;members&quot;: [ # Specifies the identities requesting access for a Cloud Platform resource.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004970 # `members` can have the following values:
4971 #
4972 # * `allUsers`: A special identifier that represents anyone who is
4973 # on the internet; with or without a Google account.
4974 #
4975 # * `allAuthenticatedUsers`: A special identifier that represents anyone
4976 # who is authenticated with a Google account or a service account.
4977 #
4978 # * `user:{emailid}`: An email address that represents a specific Google
Dan O'Mearadd494642020-05-01 07:42:23 -07004979 # account. For example, `alice@example.com` .
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07004980 #
4981 #
4982 # * `serviceAccount:{emailid}`: An email address that represents a service
4983 # account. For example, `my-other-app@appspot.gserviceaccount.com`.
4984 #
4985 # * `group:{emailid}`: An email address that represents a Google group.
4986 # For example, `admins@example.com`.
4987 #
Dan O'Mearadd494642020-05-01 07:42:23 -07004988 # * `deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique
4989 # identifier) representing a user that has been recently deleted. For
4990 # example, `alice@example.com?uid=123456789012345678901`. If the user is
4991 # recovered, this value reverts to `user:{emailid}` and the recovered user
4992 # retains the role in the binding.
4993 #
4994 # * `deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address (plus
4995 # unique identifier) representing a service account that has been recently
4996 # deleted. For example,
4997 # `my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901`.
4998 # If the service account is undeleted, this value reverts to
4999 # `serviceAccount:{emailid}` and the undeleted service account retains the
5000 # role in the binding.
5001 #
5002 # * `deleted:group:{emailid}?uid={uniqueid}`: An email address (plus unique
5003 # identifier) representing a Google group that has been recently
5004 # deleted. For example, `admins@example.com?uid=123456789012345678901`. If
5005 # the group is recovered, this value reverts to `group:{emailid}` and the
5006 # recovered group retains the role in the binding.
5007 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005008 #
5009 # * `domain:{domain}`: The G Suite domain (primary) that represents all the
5010 # users of that domain. For example, `google.com` or `example.com`.
5011 #
Bu Sun Kim65020912020-05-20 12:08:20 -07005012 &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005013 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005014 },
5015 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005016 },
Bu Sun Kimd059ad82020-07-22 17:02:09 -07005017 &quot;updateMask&quot;: &quot;A String&quot;, # OPTIONAL: A FieldMask specifying which fields of the policy to modify. Only
5018 # the fields in the mask will be modified. If no mask is provided, the
5019 # following default mask is used:
5020 #
5021 # `paths: &quot;bindings, etag&quot;`
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005022 }
5023
5024 x__xgafv: string, V1 error format.
5025 Allowed values
5026 1 - v1 error format
5027 2 - v2 error format
5028
5029Returns:
5030 An object of the form:
5031
Dan O'Mearadd494642020-05-01 07:42:23 -07005032 { # An Identity and Access Management (IAM) policy, which specifies access
5033 # controls for Google Cloud resources.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005034 #
5035 #
Dan O'Mearadd494642020-05-01 07:42:23 -07005036 # A `Policy` is a collection of `bindings`. A `binding` binds one or more
5037 # `members` to a single `role`. Members can be user accounts, service accounts,
5038 # Google groups, and domains (such as G Suite). A `role` is a named list of
5039 # permissions; each `role` can be an IAM predefined role or a user-created
5040 # custom role.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005041 #
Bu Sun Kim65020912020-05-20 12:08:20 -07005042 # For some types of Google Cloud resources, a `binding` can also specify a
5043 # `condition`, which is a logical expression that allows access to a resource
5044 # only if the expression evaluates to `true`. A condition can add constraints
5045 # based on attributes of the request, the resource, or both. To learn which
5046 # resources support conditions in their IAM policies, see the
5047 # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
Dan O'Mearadd494642020-05-01 07:42:23 -07005048 #
5049 # **JSON example:**
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005050 #
5051 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07005052 # &quot;bindings&quot;: [
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005053 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07005054 # &quot;role&quot;: &quot;roles/resourcemanager.organizationAdmin&quot;,
5055 # &quot;members&quot;: [
5056 # &quot;user:mike@example.com&quot;,
5057 # &quot;group:admins@example.com&quot;,
5058 # &quot;domain:google.com&quot;,
5059 # &quot;serviceAccount:my-project-id@appspot.gserviceaccount.com&quot;
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005060 # ]
5061 # },
5062 # {
Bu Sun Kim65020912020-05-20 12:08:20 -07005063 # &quot;role&quot;: &quot;roles/resourcemanager.organizationViewer&quot;,
5064 # &quot;members&quot;: [
5065 # &quot;user:eve@example.com&quot;
5066 # ],
5067 # &quot;condition&quot;: {
5068 # &quot;title&quot;: &quot;expirable access&quot;,
5069 # &quot;description&quot;: &quot;Does not grant access after Sep 2020&quot;,
5070 # &quot;expression&quot;: &quot;request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)&quot;,
Dan O'Mearadd494642020-05-01 07:42:23 -07005071 # }
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005072 # }
Dan O'Mearadd494642020-05-01 07:42:23 -07005073 # ],
Bu Sun Kim65020912020-05-20 12:08:20 -07005074 # &quot;etag&quot;: &quot;BwWWja0YfJA=&quot;,
5075 # &quot;version&quot;: 3
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005076 # }
5077 #
Dan O'Mearadd494642020-05-01 07:42:23 -07005078 # **YAML example:**
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005079 #
5080 # bindings:
5081 # - members:
5082 # - user:mike@example.com
5083 # - group:admins@example.com
5084 # - domain:google.com
Dan O'Mearadd494642020-05-01 07:42:23 -07005085 # - serviceAccount:my-project-id@appspot.gserviceaccount.com
5086 # role: roles/resourcemanager.organizationAdmin
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005087 # - members:
Dan O'Mearadd494642020-05-01 07:42:23 -07005088 # - user:eve@example.com
5089 # role: roles/resourcemanager.organizationViewer
5090 # condition:
5091 # title: expirable access
5092 # description: Does not grant access after Sep 2020
Bu Sun Kim65020912020-05-20 12:08:20 -07005093 # expression: request.time &lt; timestamp(&#x27;2020-10-01T00:00:00.000Z&#x27;)
Dan O'Mearadd494642020-05-01 07:42:23 -07005094 # - etag: BwWWja0YfJA=
5095 # - version: 3
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005096 #
5097 # For a description of IAM and its features, see the
Dan O'Mearadd494642020-05-01 07:42:23 -07005098 # [IAM documentation](https://cloud.google.com/iam/docs/).
Bu Sun Kimd059ad82020-07-22 17:02:09 -07005099 &quot;etag&quot;: &quot;A String&quot;, # `etag` is used for optimistic concurrency control as a way to help
5100 # prevent simultaneous updates of a policy from overwriting each other.
5101 # It is strongly suggested that systems make use of the `etag` in the
5102 # read-modify-write cycle to perform policy updates in order to avoid race
5103 # conditions: An `etag` is returned in the response to `getIamPolicy`, and
5104 # systems are expected to put that etag in the request to `setIamPolicy` to
5105 # ensure that their change will be applied to the same version of the policy.
Bu Sun Kim65020912020-05-20 12:08:20 -07005106 #
5107 # **Important:** If you use IAM Conditions, you must include the `etag` field
5108 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
5109 # you to overwrite a version `3` policy with a version `1` policy, and all of
5110 # the conditions in the version `3` policy are lost.
Bu Sun Kim65020912020-05-20 12:08:20 -07005111 &quot;auditConfigs&quot;: [ # Specifies cloud audit logging configuration for this policy.
5112 { # Specifies the audit configuration for a service.
5113 # The configuration determines which permission types are logged, and what
5114 # identities, if any, are exempted from logging.
5115 # An AuditConfig must have one or more AuditLogConfigs.
5116 #
5117 # If there are AuditConfigs for both `allServices` and a specific service,
5118 # the union of the two AuditConfigs is used for that service: the log_types
5119 # specified in each AuditConfig are enabled, and the exempted_members in each
5120 # AuditLogConfig are exempted.
5121 #
5122 # Example Policy with multiple AuditConfigs:
5123 #
5124 # {
5125 # &quot;audit_configs&quot;: [
5126 # {
Bu Sun Kimd059ad82020-07-22 17:02:09 -07005127 # &quot;service&quot;: &quot;allServices&quot;,
Bu Sun Kim65020912020-05-20 12:08:20 -07005128 # &quot;audit_log_configs&quot;: [
5129 # {
5130 # &quot;log_type&quot;: &quot;DATA_READ&quot;,
5131 # &quot;exempted_members&quot;: [
5132 # &quot;user:jose@example.com&quot;
5133 # ]
5134 # },
5135 # {
Bu Sun Kimd059ad82020-07-22 17:02:09 -07005136 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;
Bu Sun Kim65020912020-05-20 12:08:20 -07005137 # },
5138 # {
Bu Sun Kimd059ad82020-07-22 17:02:09 -07005139 # &quot;log_type&quot;: &quot;ADMIN_READ&quot;
Bu Sun Kim65020912020-05-20 12:08:20 -07005140 # }
5141 # ]
5142 # },
5143 # {
Bu Sun Kimd059ad82020-07-22 17:02:09 -07005144 # &quot;service&quot;: &quot;sampleservice.googleapis.com&quot;,
Bu Sun Kim65020912020-05-20 12:08:20 -07005145 # &quot;audit_log_configs&quot;: [
5146 # {
Bu Sun Kimd059ad82020-07-22 17:02:09 -07005147 # &quot;log_type&quot;: &quot;DATA_READ&quot;
Bu Sun Kim65020912020-05-20 12:08:20 -07005148 # },
5149 # {
5150 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;,
5151 # &quot;exempted_members&quot;: [
5152 # &quot;user:aliya@example.com&quot;
5153 # ]
5154 # }
5155 # ]
5156 # }
5157 # ]
5158 # }
5159 #
5160 # For sampleservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
5161 # logging. It also exempts jose@example.com from DATA_READ logging, and
5162 # aliya@example.com from DATA_WRITE logging.
5163 &quot;service&quot;: &quot;A String&quot;, # Specifies a service that will be enabled for audit logging.
5164 # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
5165 # `allServices` is a special value that covers all services.
5166 &quot;auditLogConfigs&quot;: [ # The configuration for logging of each type of permission.
5167 { # Provides the configuration for logging a type of permissions.
5168 # Example:
5169 #
5170 # {
5171 # &quot;audit_log_configs&quot;: [
5172 # {
5173 # &quot;log_type&quot;: &quot;DATA_READ&quot;,
5174 # &quot;exempted_members&quot;: [
5175 # &quot;user:jose@example.com&quot;
5176 # ]
5177 # },
5178 # {
Bu Sun Kimd059ad82020-07-22 17:02:09 -07005179 # &quot;log_type&quot;: &quot;DATA_WRITE&quot;
Bu Sun Kim65020912020-05-20 12:08:20 -07005180 # }
5181 # ]
5182 # }
5183 #
5184 # This enables &#x27;DATA_READ&#x27; and &#x27;DATA_WRITE&#x27; logging, while exempting
5185 # jose@example.com from DATA_READ logging.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07005186 &quot;logType&quot;: &quot;A String&quot;, # The log type that this config enables.
Bu Sun Kim65020912020-05-20 12:08:20 -07005187 &quot;exemptedMembers&quot;: [ # Specifies the identities that do not cause logging for this type of
5188 # permission.
5189 # Follows the same format as Binding.members.
5190 &quot;A String&quot;,
5191 ],
Bu Sun Kim65020912020-05-20 12:08:20 -07005192 },
5193 ],
5194 },
5195 ],
Bu Sun Kimd059ad82020-07-22 17:02:09 -07005196 &quot;version&quot;: 42, # Specifies the format of the policy.
5197 #
5198 # Valid values are `0`, `1`, and `3`. Requests that specify an invalid value
5199 # are rejected.
5200 #
5201 # Any operation that affects conditional role bindings must specify version
5202 # `3`. This requirement applies to the following operations:
5203 #
5204 # * Getting a policy that includes a conditional role binding
5205 # * Adding a conditional role binding to a policy
5206 # * Changing a conditional role binding in a policy
5207 # * Removing any role binding, with or without a condition, from a policy
5208 # that includes conditions
5209 #
5210 # **Important:** If you use IAM Conditions, you must include the `etag` field
5211 # whenever you call `setIamPolicy`. If you omit this field, then IAM allows
5212 # you to overwrite a version `3` policy with a version `1` policy, and all of
5213 # the conditions in the version `3` policy are lost.
5214 #
5215 # If a policy does not include any conditions, operations on that policy may
5216 # specify any valid version or leave the field unset.
5217 #
5218 # To learn which resources support conditions in their IAM policies, see the
5219 # [IAM documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
Bu Sun Kim65020912020-05-20 12:08:20 -07005220 &quot;bindings&quot;: [ # Associates a list of `members` with a `role`. Optionally, it may specify a
Dan O'Mearadd494642020-05-01 07:42:23 -07005221 # `condition` that determines how and when the `bindings` are applied. Each
5222 # of the `bindings` must contain at least one member.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005223 { # Associates `members` with a `role`.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07005224 &quot;role&quot;: &quot;A String&quot;, # Role that is assigned to `members`.
5225 # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
Bu Sun Kim65020912020-05-20 12:08:20 -07005226 &quot;condition&quot;: { # Represents a textual expression in the Common Expression Language (CEL) # The condition that is associated with this binding.
5227 #
5228 # If the condition evaluates to `true`, then this binding applies to the
5229 # current request.
5230 #
5231 # If the condition evaluates to `false`, then this binding does not apply to
5232 # the current request. However, a different role binding might grant the same
5233 # role to one or more of the members in this binding.
5234 #
5235 # To learn which resources support conditions in their IAM policies, see the
5236 # [IAM
5237 # documentation](https://cloud.google.com/iam/help/conditions/resource-policies).
5238 # syntax. CEL is a C-like expression language. The syntax and semantics of CEL
5239 # are documented at https://github.com/google/cel-spec.
5240 #
5241 # Example (Comparison):
5242 #
5243 # title: &quot;Summary size limit&quot;
5244 # description: &quot;Determines if a summary is less than 100 chars&quot;
5245 # expression: &quot;document.summary.size() &lt; 100&quot;
5246 #
5247 # Example (Equality):
5248 #
5249 # title: &quot;Requestor is owner&quot;
5250 # description: &quot;Determines if requestor is the document owner&quot;
5251 # expression: &quot;document.owner == request.auth.claims.email&quot;
5252 #
5253 # Example (Logic):
5254 #
5255 # title: &quot;Public documents&quot;
5256 # description: &quot;Determine whether the document should be publicly visible&quot;
5257 # expression: &quot;document.type != &#x27;private&#x27; &amp;&amp; document.type != &#x27;internal&#x27;&quot;
5258 #
5259 # Example (Data Manipulation):
5260 #
5261 # title: &quot;Notification string&quot;
5262 # description: &quot;Create a notification string with a timestamp.&quot;
5263 # expression: &quot;&#x27;New message received at &#x27; + string(document.create_time)&quot;
5264 #
5265 # The exact variables and functions that may be referenced within an expression
5266 # are determined by the service that evaluates it. See the service
5267 # documentation for additional information.
Bu Sun Kim4ed7d3f2020-05-27 12:20:54 -07005268 &quot;expression&quot;: &quot;A String&quot;, # Textual representation of an expression in Common Expression Language
5269 # syntax.
Bu Sun Kim65020912020-05-20 12:08:20 -07005270 &quot;title&quot;: &quot;A String&quot;, # Optional. Title for the expression, i.e. a short string describing
5271 # its purpose. This can be used, e.g., in UIs that allow entering the
5272 # expression.
5273 &quot;location&quot;: &quot;A String&quot;, # Optional. String indicating the location of the expression for error
5274 # reporting, e.g. a file name and a position in the file.
Bu Sun Kimd059ad82020-07-22 17:02:09 -07005275 &quot;description&quot;: &quot;A String&quot;, # Optional. Description of the expression. This is a longer text which
5276 # describes the expression, e.g. when hovered over it in a UI.
Bu Sun Kim65020912020-05-20 12:08:20 -07005277 },
5278 &quot;members&quot;: [ # Specifies the identities requesting access for a Cloud Platform resource.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005279 # `members` can have the following values:
5280 #
5281 # * `allUsers`: A special identifier that represents anyone who is
5282 # on the internet; with or without a Google account.
5283 #
5284 # * `allAuthenticatedUsers`: A special identifier that represents anyone
5285 # who is authenticated with a Google account or a service account.
5286 #
5287 # * `user:{emailid}`: An email address that represents a specific Google
Dan O'Mearadd494642020-05-01 07:42:23 -07005288 # account. For example, `alice@example.com` .
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005289 #
5290 #
5291 # * `serviceAccount:{emailid}`: An email address that represents a service
5292 # account. For example, `my-other-app@appspot.gserviceaccount.com`.
5293 #
5294 # * `group:{emailid}`: An email address that represents a Google group.
5295 # For example, `admins@example.com`.
5296 #
Dan O'Mearadd494642020-05-01 07:42:23 -07005297 # * `deleted:user:{emailid}?uid={uniqueid}`: An email address (plus unique
5298 # identifier) representing a user that has been recently deleted. For
5299 # example, `alice@example.com?uid=123456789012345678901`. If the user is
5300 # recovered, this value reverts to `user:{emailid}` and the recovered user
5301 # retains the role in the binding.
5302 #
5303 # * `deleted:serviceAccount:{emailid}?uid={uniqueid}`: An email address (plus
5304 # unique identifier) representing a service account that has been recently
5305 # deleted. For example,
5306 # `my-other-app@appspot.gserviceaccount.com?uid=123456789012345678901`.
5307 # If the service account is undeleted, this value reverts to
5308 # `serviceAccount:{emailid}` and the undeleted service account retains the
5309 # role in the binding.
5310 #
5311 # * `deleted:group:{emailid}?uid={uniqueid}`: An email address (plus unique
5312 # identifier) representing a Google group that has been recently
5313 # deleted. For example, `admins@example.com?uid=123456789012345678901`. If
5314 # the group is recovered, this value reverts to `group:{emailid}` and the
5315 # recovered group retains the role in the binding.
5316 #
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005317 #
5318 # * `domain:{domain}`: The G Suite domain (primary) that represents all the
5319 # users of that domain. For example, `google.com` or `example.com`.
5320 #
Bu Sun Kim65020912020-05-20 12:08:20 -07005321 &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005322 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005323 },
5324 ],
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005325 }</pre>
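<p>For illustration only: a minimal sketch, in Python, of the read-modify-write cycle recommended above for policies. It assumes the discovery-based <code>google-api-python-client</code> and application default credentials; the resource name, role, and member below are placeholders that mirror the generic examples in this document, not a recommendation.</p>
<pre>
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
resource = 'projects/my-project/jobs/my_training_job'  # placeholder resource name

# Read the current policy; the response carries the `etag` that setIamPolicy
# uses to detect conflicting concurrent updates.
policy = ml.projects().jobs().getIamPolicy(resource=resource).execute()

# Illustrative change: grant an additional member a role on this job.
policy.setdefault('bindings', []).append({
    'role': 'roles/viewer',
    'members': ['user:jose@example.com'],
})

# Send the whole modified policy back, including the unchanged `etag`.
updated_policy = ml.projects().jobs().setIamPolicy(
    resource=resource, body={'policy': policy}).execute()
</pre>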
5326</div>
5327
5328<div class="method">
Dan O'Mearadd494642020-05-01 07:42:23 -07005329 <code class="details" id="testIamPermissions">testIamPermissions(resource, body=None, x__xgafv=None)</code>
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005330 <pre>Returns permissions that a caller has on the specified resource.
5331If the resource does not exist, this will return an empty set of
Bu Sun Kim65020912020-05-20 12:08:20 -07005332permissions, not a `NOT_FOUND` error.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005333
5334Note: This operation is designed to be used for building permission-aware
5335UIs and command-line tools, not for authorization checking. This operation
Bu Sun Kim65020912020-05-20 12:08:20 -07005336may &quot;fail open&quot; without warning.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005337
5338Args:
5339 resource: string, REQUIRED: The resource for which the policy detail is being requested.
5340See the operation documentation for the appropriate value for this field. (required)
Dan O'Mearadd494642020-05-01 07:42:23 -07005341 body: object, The request body.
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005342 The object takes the form of:
5343
5344{ # Request message for `TestIamPermissions` method.
Bu Sun Kim65020912020-05-20 12:08:20 -07005345 &quot;permissions&quot;: [ # The set of permissions to check for the `resource`. Permissions with
5346 # wildcards (such as &#x27;*&#x27; or &#x27;storage.*&#x27;) are not allowed. For more
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005347 # information see
5348 # [IAM Overview](https://cloud.google.com/iam/docs/overview#permissions).
Bu Sun Kim65020912020-05-20 12:08:20 -07005349 &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005350 ],
5351 }
5352
5353 x__xgafv: string, V1 error format.
5354 Allowed values
5355 1 - v1 error format
5356 2 - v2 error format
5357
5358Returns:
5359 An object of the form:
5360
5361 { # Response message for `TestIamPermissions` method.
Bu Sun Kim65020912020-05-20 12:08:20 -07005362 &quot;permissions&quot;: [ # A subset of `TestPermissionsRequest.permissions` that the caller is
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005363 # allowed.
Bu Sun Kim65020912020-05-20 12:08:20 -07005364 &quot;A String&quot;,
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07005365 ],
5366 }</pre>
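<p>For illustration only: a minimal sketch, in Python, of calling this method with the discovery-based <code>google-api-python-client</code>. The resource name and the permission strings are placeholders chosen for a training job; adjust them to the permissions you actually need to check.</p>
<pre>
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
resource = 'projects/my-project/jobs/my_training_job'  # placeholder resource name

response = ml.projects().jobs().testIamPermissions(
    resource=resource,
    body={'permissions': ['ml.jobs.get', 'ml.jobs.cancel']},  # illustrative permissions
).execute()

# The response contains only the subset of permissions the caller holds.
print(response.get('permissions', []))
</pre>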
5367</div>
5368
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -04005369</body></html>