<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="ml_v1.html">Cloud Machine Learning Engine</a> . <a href="ml_v1.projects.html">projects</a> . <a href="ml_v1.projects.jobs.html">jobs</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#cancel">cancel(name, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Cancels a running job.</p>
<p class="toc_element">
  <code><a href="#create">create(parent, body, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a training or a batch prediction job.</p>
<p class="toc_element">
  <code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Describes a job.</p>
<p class="toc_element">
  <code><a href="#getIamPolicy">getIamPolicy(resource, x__xgafv=None)</a></code></p>
<p class="firstline">Gets the access control policy for a resource.</p>
<p class="toc_element">
  <code><a href="#list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</a></code></p>
<p class="firstline">Lists the jobs in the project.</p>
<p class="toc_element">
  <code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<p class="toc_element">
  <code><a href="#patch">patch(name, body, updateMask=None, x__xgafv=None)</a></code></p>
<p class="firstline">Updates a specific job resource.</p>
<p class="toc_element">
  <code><a href="#setIamPolicy">setIamPolicy(resource, body, x__xgafv=None)</a></code></p>
<p class="firstline">Sets the access control policy on the specified resource. Replaces any existing policy.</p>
<p class="toc_element">
  <code><a href="#testIamPermissions">testIamPermissions(resource, body, x__xgafv=None)</a></code></p>
<p class="firstline">Returns permissions that a caller has on the specified resource.</p>
<h3>Method Details</h3>
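<p>Each method below is invoked on a client built with the google-api-python-client discovery mechanism. A minimal sketch, assuming the package is installed, Application Default Credentials are configured, and <code>my-project</code> is a placeholder project id:</p>

```python
def jobs_parent(project):
    """Build the 'parent' resource name that list() and create() expect."""
    return 'projects/{}'.format(project)

def list_all_jobs(ml, project):
    """Page through every job in a project using list() / list_next()."""
    jobs = []
    request = ml.projects().jobs().list(parent=jobs_parent(project))
    while request is not None:
        response = request.execute()
        jobs.extend(response.get('jobs', []))
        # list_next() returns None once the last page has been consumed.
        request = ml.projects().jobs().list_next(request, response)
    return jobs

# Typical use (requires network access and credentials):
# from googleapiclient import discovery
# ml = discovery.build('ml', 'v1')
# for job in list_all_jobs(ml, 'my-project'):
#     print(job['jobId'], job.get('state'))
```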
<div class="method">
    <code class="details" id="cancel">cancel(name, body=None, x__xgafv=None)</code>
  <pre>Cancels a running job.

Args:
  name: string, Required. The name of the job to cancel. (required)
  body: object, The request body.
    The object takes the form of:

{ # Request message for the CancelJob method.
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # A generic empty message that you can re-use to avoid defining duplicated
      # empty messages in your APIs. A typical example is to use it as the request
      # or the response type of an API method. For instance:
      #
      #     service Foo {
      #       rpc Bar(google.protobuf.Empty) returns (google.protobuf.Empty);
      #     }
      #
      # The JSON representation for `Empty` is empty JSON object `{}`.
  }</pre>
</div>
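<p>As a hedged sketch (assuming <code>ml</code> is a client from <code>googleapiclient.discovery.build('ml', 'v1')</code>; the project and job ids are placeholders), a cancel call looks like:</p>

```python
def job_name(project, job_id):
    """Build the fully-qualified name cancel() requires: projects/P/jobs/J."""
    return 'projects/{}/jobs/{}'.format(project, job_id)

# The CancelJob request message is empty, so pass body={} (or omit it,
# since body=None is the default). A successful call returns the empty
# object {} described above.
# ml.projects().jobs().cancel(name=job_name('my-project', 'my_job'),
#                             body={}).execute()
```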

<div class="method">
    <code class="details" id="create">create(parent, body, x__xgafv=None)</code>
  <pre>Creates a training or a batch prediction job.

Args:
  parent: string, Required. The project name. (required)
  body: object, The request body. (required)
    The object takes the form of:

{ # Represents a training or prediction job.
    "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
    "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
      "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
          # Only set for hyperparameter tuning jobs.
      "trials": [ # Results for individual Hyperparameter trials.
          # Only set for hyperparameter tuning jobs.
        { # Represents the result of a single hyperparameter tuning trial from a
            # training job. The TrainingOutput object that is returned on successful
            # completion of a training job with hyperparameter tuning includes a list
            # of HyperparameterOutput objects, one for each successful trial.
          "hyperparameters": { # The hyperparameters given to this trial.
            "a_key": "A String",
          },
          "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
            "trainingStep": "A String", # The global training step for this metric.
            "objectiveValue": 3.14, # The objective value at this training step.
          },
          "allMetrics": [ # All recorded object metrics for this trial. This field is not currently
              # populated.
            { # An observed value of a metric.
              "trainingStep": "A String", # The global training step for this metric.
              "objectiveValue": 3.14, # The objective value at this training step.
            },
          ],
          "isTrialStoppedEarly": True or False, # True if the trial is stopped early.
          "trialId": "A String", # The trial id for these results.
          "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
              # Only set for trials of built-in algorithms jobs that have succeeded.
            "framework": "A String", # Framework on which the built-in algorithm was trained.
            "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
                # saves the trained model. Only set for successful jobs that don't use
                # hyperparameter tuning.
            "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
                # trained.
            "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
          },
        },
      ],
      "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
      "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job.
      "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
      "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning
          # trials. See
          # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
          # for more information. Only set for hyperparameter tuning jobs.
      "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
          # Only set for built-in algorithms jobs.
        "framework": "A String", # Framework on which the built-in algorithm was trained.
        "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
            # saves the trained model. Only set for successful jobs that don't use
            # hyperparameter tuning.
        "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
            # trained.
        "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
      },
    },
    "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
      "modelName": "A String", # Use this field if you want to use the default version for the specified
          # model. The string must use the following format:
          #
          # `"projects/YOUR_PROJECT/models/YOUR_MODEL"`
      "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch
          # prediction. If not set, AI Platform will pick the runtime version used
          # during the CreateVersion request for this model version, or choose the
          # latest stable version when model version information is not available
          # such as when the model is specified by uri.
      "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for
          # this job. Please refer to
          # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
          # for information about how to use signatures.
          #
          # Defaults to
          # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
          # , which is "serving_default".
      "batchSize": "A String", # Optional. Number of records per batch, defaults to 64.
          # The service will buffer batch_size number of records in memory before
          # invoking one Tensorflow prediction call internally. So take the record
          # size and memory available into consideration when setting this parameter.
      "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain
          # <a href="/storage/docs/gsutil/addlhelp/WildcardNames">wildcards</a>.
        "A String",
      ],
      "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
          # Defaults to 10 if not specified.
      "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
          # the model to use.
      "outputPath": "A String", # Required. The output Google Cloud Storage location.
      "dataFormat": "A String", # Required. The format of the input data files.
      "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
          # string is formatted the same way as `model_version`, with the addition
          # of the version information:
          #
          # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"`
      "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
          # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
          # for AI Platform services.
      "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON.
    },
    "trainingInput": { # Represents input parameters for a training job. When using the # Input parameters to create a training job.
        # gcloud command to submit your training job, you can specify
        # the input parameters as command-line arguments and/or in a YAML configuration
        # file referenced from the --config command-line argument. For
        # details, see the guide to
        # <a href="/ml-engine/docs/tensorflow/training-jobs">submitting a training
        # job</a>.
      "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
          # job's worker nodes.
          #
          # The supported values are the same as those described in the entry for
          # `masterType`.
          #
          # This value must be consistent with the category of machine type that
          # `masterType` uses. In other words, both must be AI Platform machine
          # types or both must be Compute Engine machine types.
          #
          # If you use `cloud_tpu` for this value, see special instructions for
          # [configuring a custom TPU
          # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
          #
          # This value must be present when `scaleTier` is set to `CUSTOM` and
          # `workerCount` is greater than zero.
      "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
          #
          # You should only set `parameterServerConfig.acceleratorConfig` if
          # `parameterServerConfigType` is set to a Compute Engine machine type. [Learn
          # about restrictions on accelerator configurations for
          # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
          #
          # Set `parameterServerConfig.imageUri` only if you build a custom image for
          # your parameter server. If `parameterServerConfig.imageUri` has not been
          # set, AI Platform uses the value of `masterConfig.imageUri`.
          # Learn more about [configuring custom
          # containers](/ml-engine/docs/distributed-training-containers).
        "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
            # [Learn about restrictions on accelerator configurations for
            # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
          "count": "A String", # The number of accelerators to attach to each machine running the job.
          "type": "A String", # The type of accelerator to use.
        },
        "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
            # Registry. Learn more about [configuring custom
            # containers](/ml-engine/docs/distributed-training-containers).
      },
      "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. If not
          # set, AI Platform uses the default stable version, 1.0. For more
          # information, see the
          # <a href="/ml-engine/docs/runtime-version-list">runtime version list</a>
          # and
          # <a href="/ml-engine/docs/versioning">how to manage runtime versions</a>.
      "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
          # and parameter servers.
      "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
          # job's master worker.
          #
          # The following types are supported:
          #
          # <dl>
          #   <dt>standard</dt>
          #   <dd>
          #   A basic machine configuration suitable for training simple models with
          #   small to moderate datasets.
          #   </dd>
          #   <dt>large_model</dt>
          #   <dd>
          #   A machine with a lot of memory, specially suited for parameter servers
          #   when your model is large (having many hidden layers or layers with very
          #   large numbers of nodes).
          #   </dd>
          #   <dt>complex_model_s</dt>
          #   <dd>
          #   A machine suitable for the master and workers of the cluster when your
          #   model requires more computation than the standard machine can handle
          #   satisfactorily.
          #   </dd>
          #   <dt>complex_model_m</dt>
          #   <dd>
          #   A machine with roughly twice the number of cores and roughly double the
          #   memory of <i>complex_model_s</i>.
          #   </dd>
          #   <dt>complex_model_l</dt>
          #   <dd>
          #   A machine with roughly twice the number of cores and roughly double the
          #   memory of <i>complex_model_m</i>.
          #   </dd>
          #   <dt>standard_gpu</dt>
          #   <dd>
          #   A machine equivalent to <i>standard</i> that
          #   also includes a single NVIDIA Tesla K80 GPU. See more about
          #   <a href="/ml-engine/docs/tensorflow/using-gpus">using GPUs to
          #   train your model</a>.
          #   </dd>
          #   <dt>complex_model_m_gpu</dt>
          #   <dd>
          #   A machine equivalent to <i>complex_model_m</i> that also includes
          #   four NVIDIA Tesla K80 GPUs.
          #   </dd>
          #   <dt>complex_model_l_gpu</dt>
          #   <dd>
          #   A machine equivalent to <i>complex_model_l</i> that also includes
          #   eight NVIDIA Tesla K80 GPUs.
          #   </dd>
          #   <dt>standard_p100</dt>
          #   <dd>
          #   A machine equivalent to <i>standard</i> that
          #   also includes a single NVIDIA Tesla P100 GPU.
          #   </dd>
          #   <dt>complex_model_m_p100</dt>
          #   <dd>
          #   A machine equivalent to <i>complex_model_m</i> that also includes
          #   four NVIDIA Tesla P100 GPUs.
          #   </dd>
          #   <dt>standard_v100</dt>
          #   <dd>
          #   A machine equivalent to <i>standard</i> that
          #   also includes a single NVIDIA Tesla V100 GPU.
          #   </dd>
          #   <dt>large_model_v100</dt>
          #   <dd>
          #   A machine equivalent to <i>large_model</i> that
          #   also includes a single NVIDIA Tesla V100 GPU.
          #   </dd>
          #   <dt>complex_model_m_v100</dt>
          #   <dd>
          #   A machine equivalent to <i>complex_model_m</i> that
          #   also includes four NVIDIA Tesla V100 GPUs.
          #   </dd>
          #   <dt>complex_model_l_v100</dt>
          #   <dd>
          #   A machine equivalent to <i>complex_model_l</i> that
          #   also includes eight NVIDIA Tesla V100 GPUs.
          #   </dd>
          #   <dt>cloud_tpu</dt>
          #   <dd>
          #   A TPU VM including one Cloud TPU. See more about
          #   <a href="/ml-engine/docs/tensorflow/using-tpus">using TPUs to train
          #   your model</a>.
          #   </dd>
          # </dl>
          #
          # You may also use certain Compute Engine machine types directly in this
          # field. The following types are supported:
          #
          # - `n1-standard-4`
          # - `n1-standard-8`
          # - `n1-standard-16`
          # - `n1-standard-32`
          # - `n1-standard-64`
          # - `n1-standard-96`
          # - `n1-highmem-2`
          # - `n1-highmem-4`
          # - `n1-highmem-8`
          # - `n1-highmem-16`
          # - `n1-highmem-32`
          # - `n1-highmem-64`
          # - `n1-highmem-96`
          # - `n1-highcpu-16`
          # - `n1-highcpu-32`
          # - `n1-highcpu-64`
          # - `n1-highcpu-96`
          #
          # See more about [using Compute Engine machine
          # types](/ml-engine/docs/tensorflow/machine-types#compute-engine-machine-types).
          #
          # You must set this value when `scaleTier` is set to `CUSTOM`.
      "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
        "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
            # the specified hyperparameters.
            #
            # Defaults to one.
        "goal": "A String", # Required. The type of goal to use for tuning. Available types are
            # `MAXIMIZE` and `MINIMIZE`.
            #
            # Defaults to `MAXIMIZE`.
        "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter
            # tuning job.
            # Uses the default AI Platform hyperparameter tuning
            # algorithm if unspecified.
        "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing
            # the hyperparameter tuning job. You can specify this field to override the
            # default failing criteria for AI Platform hyperparameter tuning jobs.
            #
            # Defaults to zero, which means the service decides when a hyperparameter
            # job should fail.
        "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
            # early stopping.
        "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to
            # continue with. The job id will be used to find the corresponding vizier
            # study guid and resume the study.
        "params": [ # Required. The set of parameters to tune.
          { # Represents a single hyperparameter to optimize.
            "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
                # should be unset if type is `CATEGORICAL`. This value should be integers if
                # type is `INTEGER`.
            "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
              "A String",
            ],
            "discreteValues": [ # Required if type is `DISCRETE`.
                # A list of feasible points.
                # The list should be in strictly increasing order. For instance, this
                # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
                # should not contain more than 1,000 values.
              3.14,
            ],
            "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
                # a HyperparameterSpec message. E.g., "learning_rate".
            "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
                # should be unset if type is `CATEGORICAL`. This value should be integers if
                # type is INTEGER.
            "type": "A String", # Required. The type of the parameter.
            "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
                # Leave unset for categorical parameters.
                # Some kind of scaling is strongly recommended for real or integral
                # parameters (e.g., `UNIT_LINEAR_SCALE`).
          },
        ],
        "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For
            # current versions of TensorFlow, this tag name should exactly match what is
            # shown in TensorBoard, including all scopes. For versions of TensorFlow
            # prior to 0.12, this should be only the tag passed to tf.Summary.
            # By default, "training/hptuning/metric" will be used.
        "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
            # You can reduce the time it takes to perform hyperparameter tuning by adding
            # trials in parallel. However, each trial only benefits from the information
            # gained in completed trials. That means that a trial does not get access to
            # the results of trials running at the same time, which could reduce the
            # quality of the overall optimization.
            #
            # Each trial will use the same scale tier and machine types.
            #
            # Defaults to one.
      },
      "region": "A String", # Required. The Google Compute Engine region to run the training job in.
          # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
          # for AI Platform services.
      "args": [ # Optional. Command line arguments to pass to the program.
        "A String",
      ],
      "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
      "pythonVersion": "A String", # Optional. The version of Python used in training. If not set, the default
          # version is '2.7'. Python '3.5' is available when `runtime_version` is set
          # to '1.4' and above. Python '2.7' works with all supported
          # <a href="/ml-engine/docs/runtime-version-list">runtime versions</a>.
      "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
          # and other data needed for training. This path is passed to your TensorFlow
          # program as the '--job-dir' command-line argument. The benefit of specifying
          # this field is that Cloud ML validates the path for use in training.
      "packageUris": [ # Required. The Google Cloud Storage location of the packages with
          # the training program and any additional dependencies.
          # The maximum number of package URIs is 100.
        "A String",
      ],
      "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
          # replica in the cluster will be of the type specified in `worker_type`.
          #
          # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
          # set this value, you must also set `worker_type`.
          #
          # The default value is zero.
      "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
          # job's parameter server.
          #
          # The supported values are the same as those described in the entry for
          # `master_type`.
          #
          # This value must be consistent with the category of machine type that
          # `masterType` uses. In other words, both must be AI Platform machine
          # types or both must be Compute Engine machine types.
          #
          # This value must be present when `scaleTier` is set to `CUSTOM` and
          # `parameter_server_count` is greater than zero.
      "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
          #
          # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
          # to a Compute Engine machine type. [Learn about restrictions on accelerator
          # configurations for
          # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
          #
          # Set `workerConfig.imageUri` only if you build a custom image for your
          # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
          # the value of `masterConfig.imageUri`. Learn more about
          # [configuring custom
          # containers](/ml-engine/docs/distributed-training-containers).
        "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
            # [Learn about restrictions on accelerator configurations for
            # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
          "count": "A String", # The number of accelerators to attach to each machine running the job.
          "type": "A String", # The type of accelerator to use.
        },
        "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
            # Registry. Learn more about [configuring custom
            # containers](/ml-engine/docs/distributed-training-containers).
      },
      "maxRunningTime": "A String", # Optional. The maximum job running time. The default is 7 days.
      "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
          #
          # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
          # to a Compute Engine machine type. Learn about [restrictions on accelerator
          # configurations for
          # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
          #
          # Set `masterConfig.imageUri` only if you build a custom image. Only one of
          # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more about
          # [configuring custom
          # containers](/ml-engine/docs/distributed-training-containers).
        "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
            # [Learn about restrictions on accelerator configurations for
            # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
          "count": "A String", # The number of accelerators to attach to each machine running the job.
          "type": "A String", # The type of accelerator to use.
        },
        "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
            # Registry. Learn more about [configuring custom
            # containers](/ml-engine/docs/distributed-training-containers).
      },
      "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
          # job. Each replica in the cluster will be of the type specified in
          # `parameter_server_type`.
          #
          # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
          # set this value, you must also set `parameter_server_type`.
          #
          # The default value is zero.
    },
    "jobId": "A String", # Required. The user-specified id of the job.
    "labels": { # Optional. One or more labels that you can add, to organize your jobs.
        # Each label is a key-value pair, where both the key and the value are
        # arbitrary strings that you supply.
        # For more information, see the documentation on
        # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
      "a_key": "A String",
    },
    "state": "A String", # Output only. The detailed state of a job.
    "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
        # prevent simultaneous updates of a job from overwriting each other.
        # It is strongly suggested that systems make use of the `etag` in the
        # read-modify-write cycle to perform job updates in order to avoid race
        # conditions: An `etag` is returned in the response to `GetJob`, and
        # systems are expected to put that etag in the request to `UpdateJob` to
        # ensure that their change will be applied to the same version of the job.
    "startTime": "A String", # Output only. When the job processing was started.
    "endTime": "A String", # Output only. When the job processing was completed.
    "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
      "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
      "nodeHours": 3.14, # Node hours used by the batch prediction job.
      "predictionCount": "A String", # The number of generated predictions.
      "errorCount": "A String", # The number of data instances which resulted in errors.
    },
    "createTime": "A String", # Output only. When the job was created.
  }
596
597 x__xgafv: string, V1 error format.
598 Allowed values
599 1 - v1 error format
600 2 - v2 error format
601
602Returns:
603 An object of the form:
604
605 { # Represents a training or prediction job.
606 "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
Sai Cheemalapatic30d2b52017-03-13 12:12:03 -0400607 "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
        "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
            # Only set for hyperparameter tuning jobs.
        "trials": [ # Results for individual Hyperparameter trials.
            # Only set for hyperparameter tuning jobs.
          { # Represents the result of a single hyperparameter tuning trial from a
              # training job. The TrainingOutput object that is returned on successful
              # completion of a training job with hyperparameter tuning includes a list
              # of HyperparameterOutput objects, one for each successful trial.
            "hyperparameters": { # The hyperparameters given to this trial.
              "a_key": "A String",
            },
            "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
              "trainingStep": "A String", # The global training step for this metric.
              "objectiveValue": 3.14, # The objective value at this training step.
            },
            "allMetrics": [ # All recorded objective metrics for this trial. This field is not currently
                # populated.
              { # An observed value of a metric.
                "trainingStep": "A String", # The global training step for this metric.
                "objectiveValue": 3.14, # The objective value at this training step.
              },
            ],
            "isTrialStoppedEarly": True or False, # True if the trial is stopped early.
            "trialId": "A String", # The trial id for these results.
            "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
                # Only set for trials of built-in algorithms jobs that have succeeded.
              "framework": "A String", # Framework on which the built-in algorithm was trained.
              "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
                  # saves the trained model. Only set for successful jobs that don't use
                  # hyperparameter tuning.
              "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
                  # trained.
              "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
            },
          },
        ],
        "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
        "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job.
        "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
        "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning
            # trials. See
            # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
            # for more information. Only set for hyperparameter tuning jobs.
        "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
            # Only set for built-in algorithms jobs.
          "framework": "A String", # Framework on which the built-in algorithm was trained.
          "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
              # saves the trained model. Only set for successful jobs that don't use
              # hyperparameter tuning.
          "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
              # trained.
          "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
        },
      },
      "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
        "modelName": "A String", # Use this field if you want to use the default version for the specified
            # model. The string must use the following format:
            #
            # `"projects/YOUR_PROJECT/models/YOUR_MODEL"`
        "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch
            # prediction. If not set, AI Platform will pick the runtime version used
            # during the CreateVersion request for this model version, or choose the
            # latest stable version when model version information is not available
            # such as when the model is specified by uri.
        "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for
            # this job. Please refer to
            # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
            # for information about how to use signatures.
            #
            # Defaults to
            # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
            # , which is "serving_default".
        "batchSize": "A String", # Optional. Number of records per batch, defaults to 64.
            # The service will buffer batch_size number of records in memory before
            # invoking one Tensorflow prediction call internally. So take the record
            # size and memory available into consideration when setting this parameter.
        "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain
            # <a href="/storage/docs/gsutil/addlhelp/WildcardNames">wildcards</a>.
          "A String",
        ],
        "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
            # Defaults to 10 if not specified.
        "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
            # the model to use.
        "outputPath": "A String", # Required. The output Google Cloud Storage location.
        "dataFormat": "A String", # Required. The format of the input data files.
        "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
            # string is formatted the same way as `model_version`, with the addition
            # of the version information:
            #
            # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"`
        "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
            # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
            # for AI Platform services.
        "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON.
      },
      "trainingInput": { # Represents input parameters for a training job. When using the # Input parameters to create a training job.
          # gcloud command to submit your training job, you can specify
          # the input parameters as command-line arguments and/or in a YAML configuration
          # file referenced from the --config command-line argument. For
          # details, see the guide to
          # <a href="/ml-engine/docs/tensorflow/training-jobs">submitting a training
          # job</a>.
        "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
            # job's worker nodes.
            #
            # The supported values are the same as those described in the entry for
            # `masterType`.
            #
            # This value must be consistent with the category of machine type that
            # `masterType` uses. In other words, both must be AI Platform machine
            # types or both must be Compute Engine machine types.
            #
            # If you use `cloud_tpu` for this value, see special instructions for
            # [configuring a custom TPU
            # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
            #
            # This value must be present when `scaleTier` is set to `CUSTOM` and
            # `workerCount` is greater than zero.
        "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
            #
            # You should only set `parameterServerConfig.acceleratorConfig` if
            # `parameterServerType` is set to a Compute Engine machine type. [Learn
            # about restrictions on accelerator configurations for
            # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
            #
            # Set `parameterServerConfig.imageUri` only if you build a custom image for
            # your parameter server. If `parameterServerConfig.imageUri` has not been
            # set, AI Platform uses the value of `masterConfig.imageUri`.
            # Learn more about [configuring custom
            # containers](/ml-engine/docs/distributed-training-containers).
          "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
              # [Learn about restrictions on accelerator configurations for
              # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
            "count": "A String", # The number of accelerators to attach to each machine running the job.
            "type": "A String", # The type of accelerator to use.
          },
          "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
              # Registry. Learn more about [configuring custom
              # containers](/ml-engine/docs/distributed-training-containers).
        },
        "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. If not
            # set, AI Platform uses the default stable version, 1.0. For more
            # information, see the
            # <a href="/ml-engine/docs/runtime-version-list">runtime version list</a>
            # and
            # <a href="/ml-engine/docs/versioning">how to manage runtime versions</a>.
        "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
            # and parameter servers.
        "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
            # job's master worker.
            #
            # The following types are supported:
            #
            # <dl>
            #   <dt>standard</dt>
            #   <dd>
            #   A basic machine configuration suitable for training simple models with
            #   small to moderate datasets.
            #   </dd>
            #   <dt>large_model</dt>
            #   <dd>
            #   A machine with a lot of memory, specially suited for parameter servers
            #   when your model is large (having many hidden layers or layers with very
            #   large numbers of nodes).
            #   </dd>
            #   <dt>complex_model_s</dt>
            #   <dd>
            #   A machine suitable for the master and workers of the cluster when your
            #   model requires more computation than the standard machine can handle
            #   satisfactorily.
            #   </dd>
            #   <dt>complex_model_m</dt>
            #   <dd>
            #   A machine with roughly twice the number of cores and roughly double the
            #   memory of <i>complex_model_s</i>.
            #   </dd>
            #   <dt>complex_model_l</dt>
            #   <dd>
            #   A machine with roughly twice the number of cores and roughly double the
            #   memory of <i>complex_model_m</i>.
            #   </dd>
            #   <dt>standard_gpu</dt>
            #   <dd>
            #   A machine equivalent to <i>standard</i> that
            #   also includes a single NVIDIA Tesla K80 GPU. See more about
            #   <a href="/ml-engine/docs/tensorflow/using-gpus">using GPUs to
            #   train your model</a>.
            #   </dd>
            #   <dt>complex_model_m_gpu</dt>
            #   <dd>
            #   A machine equivalent to <i>complex_model_m</i> that also includes
            #   four NVIDIA Tesla K80 GPUs.
            #   </dd>
            #   <dt>complex_model_l_gpu</dt>
            #   <dd>
            #   A machine equivalent to <i>complex_model_l</i> that also includes
            #   eight NVIDIA Tesla K80 GPUs.
            #   </dd>
            #   <dt>standard_p100</dt>
            #   <dd>
            #   A machine equivalent to <i>standard</i> that
            #   also includes a single NVIDIA Tesla P100 GPU.
            #   </dd>
            #   <dt>complex_model_m_p100</dt>
            #   <dd>
            #   A machine equivalent to <i>complex_model_m</i> that also includes
            #   four NVIDIA Tesla P100 GPUs.
            #   </dd>
            #   <dt>standard_v100</dt>
            #   <dd>
            #   A machine equivalent to <i>standard</i> that
            #   also includes a single NVIDIA Tesla V100 GPU.
            #   </dd>
            #   <dt>large_model_v100</dt>
            #   <dd>
            #   A machine equivalent to <i>large_model</i> that
            #   also includes a single NVIDIA Tesla V100 GPU.
            #   </dd>
            #   <dt>complex_model_m_v100</dt>
            #   <dd>
            #   A machine equivalent to <i>complex_model_m</i> that
            #   also includes four NVIDIA Tesla V100 GPUs.
            #   </dd>
            #   <dt>complex_model_l_v100</dt>
            #   <dd>
            #   A machine equivalent to <i>complex_model_l</i> that
            #   also includes eight NVIDIA Tesla V100 GPUs.
            #   </dd>
            #   <dt>cloud_tpu</dt>
            #   <dd>
            #   A TPU VM including one Cloud TPU. See more about
            #   <a href="/ml-engine/docs/tensorflow/using-tpus">using TPUs to train
            #   your model</a>.
            #   </dd>
            # </dl>
            #
            # You may also use certain Compute Engine machine types directly in this
            # field. The following types are supported:
            #
            # - `n1-standard-4`
            # - `n1-standard-8`
            # - `n1-standard-16`
            # - `n1-standard-32`
            # - `n1-standard-64`
            # - `n1-standard-96`
            # - `n1-highmem-2`
            # - `n1-highmem-4`
            # - `n1-highmem-8`
            # - `n1-highmem-16`
            # - `n1-highmem-32`
            # - `n1-highmem-64`
            # - `n1-highmem-96`
            # - `n1-highcpu-16`
            # - `n1-highcpu-32`
            # - `n1-highcpu-64`
            # - `n1-highcpu-96`
            #
            # See more about [using Compute Engine machine
            # types](/ml-engine/docs/tensorflow/machine-types#compute-engine-machine-types).
            #
            # You must set this value when `scaleTier` is set to `CUSTOM`.
        "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
          "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
              # the specified hyperparameters.
              #
              # Defaults to one.
          "goal": "A String", # Required. The type of goal to use for tuning. Available types are
              # `MAXIMIZE` and `MINIMIZE`.
              #
              # Defaults to `MAXIMIZE`.
          "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter
              # tuning job.
              # Uses the default AI Platform hyperparameter tuning
              # algorithm if unspecified.
          "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing
              # the hyperparameter tuning job. You can specify this field to override the
              # default failing criteria for AI Platform hyperparameter tuning jobs.
              #
              # Defaults to zero, which means the service decides when a hyperparameter
              # job should fail.
          "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
              # early stopping.
          "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to
              # continue with. The job id will be used to find the corresponding vizier
              # study guid and resume the study.
          "params": [ # Required. The set of parameters to tune.
            { # Represents a single hyperparameter to optimize.
              "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
                  # should be unset if type is `CATEGORICAL`. This value should be integers if
                  # type is `INTEGER`.
              "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
                "A String",
              ],
              "discreteValues": [ # Required if type is `DISCRETE`.
                  # A list of feasible points.
                  # The list should be in strictly increasing order. For instance, this
                  # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
                  # should not contain more than 1,000 values.
                3.14,
              ],
              "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
                  # a HyperparameterSpec message. E.g., "learning_rate".
              "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
                  # should be unset if type is `CATEGORICAL`. This value should be integers if
                  # type is INTEGER.
              "type": "A String", # Required. The type of the parameter.
              "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
                  # Leave unset for categorical parameters.
                  # Some kind of scaling is strongly recommended for real or integral
                  # parameters (e.g., `UNIT_LINEAR_SCALE`).
            },
          ],
          "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For
              # current versions of TensorFlow, this tag name should exactly match what is
              # shown in TensorBoard, including all scopes. For versions of TensorFlow
              # prior to 0.12, this should be only the tag passed to tf.Summary.
              # By default, "training/hptuning/metric" will be used.
          "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
              # You can reduce the time it takes to perform hyperparameter tuning by adding
              # trials in parallel. However, each trial only benefits from the information
              # gained in completed trials. That means that a trial does not get access to
              # the results of trials running at the same time, which could reduce the
              # quality of the overall optimization.
              #
              # Each trial will use the same scale tier and machine types.
              #
              # Defaults to one.
        },
        "region": "A String", # Required. The Google Compute Engine region to run the training job in.
            # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
            # for AI Platform services.
        "args": [ # Optional. Command line arguments to pass to the program.
          "A String",
        ],
        "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
        "pythonVersion": "A String", # Optional. The version of Python used in training. If not set, the default
            # version is '2.7'. Python '3.5' is available when `runtime_version` is set
            # to '1.4' and above. Python '2.7' works with all supported
            # <a href="/ml-engine/docs/runtime-version-list">runtime versions</a>.
        "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
            # and other data needed for training. This path is passed to your TensorFlow
            # program as the '--job-dir' command-line argument. The benefit of specifying
            # this field is that Cloud ML validates the path for use in training.
        "packageUris": [ # Required. The Google Cloud Storage location of the packages with
            # the training program and any additional dependencies.
            # The maximum number of package URIs is 100.
          "A String",
        ],
        "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
            # replica in the cluster will be of the type specified in `worker_type`.
            #
            # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
            # set this value, you must also set `worker_type`.
            #
            # The default value is zero.
        "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
            # job's parameter server.
            #
            # The supported values are the same as those described in the entry for
            # `master_type`.
            #
            # This value must be consistent with the category of machine type that
            # `masterType` uses. In other words, both must be AI Platform machine
            # types or both must be Compute Engine machine types.
            #
            # This value must be present when `scaleTier` is set to `CUSTOM` and
            # `parameter_server_count` is greater than zero.
        "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
            #
            # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
            # to a Compute Engine machine type. [Learn about restrictions on accelerator
            # configurations for
            # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
            #
            # Set `workerConfig.imageUri` only if you build a custom image for your
            # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
            # the value of `masterConfig.imageUri`. Learn more about
            # [configuring custom
            # containers](/ml-engine/docs/distributed-training-containers).
          "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
              # [Learn about restrictions on accelerator configurations for
              # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
            "count": "A String", # The number of accelerators to attach to each machine running the job.
            "type": "A String", # The type of accelerator to use.
          },
          "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
              # Registry. Learn more about [configuring custom
              # containers](/ml-engine/docs/distributed-training-containers).
        },
        "maxRunningTime": "A String", # Optional. The maximum job running time. The default is 7 days.
        "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
            #
            # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
            # to a Compute Engine machine type. Learn about [restrictions on accelerator
            # configurations for
            # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
            #
            # Set `masterConfig.imageUri` only if you build a custom image. Only one of
            # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more about
            # [configuring custom
            # containers](/ml-engine/docs/distributed-training-containers).
          "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
              # [Learn about restrictions on accelerator configurations for
              # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
            "count": "A String", # The number of accelerators to attach to each machine running the job.
            "type": "A String", # The type of accelerator to use.
          },
          "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
              # Registry. Learn more about [configuring custom
              # containers](/ml-engine/docs/distributed-training-containers).
        },
        "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
            # job. Each replica in the cluster will be of the type specified in
            # `parameter_server_type`.
            #
            # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
            # set this value, you must also set `parameter_server_type`.
            #
            # The default value is zero.
      },
      "jobId": "A String", # Required. The user-specified id of the job.
      "labels": { # Optional. One or more labels that you can add, to organize your jobs.
          # Each label is a key-value pair, where both the key and the value are
          # arbitrary strings that you supply.
          # For more information, see the documentation on
          # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
        "a_key": "A String",
      },
      "state": "A String", # Output only. The detailed state of a job.
      "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
          # prevent simultaneous updates of a job from overwriting each other.
          # It is strongly suggested that systems make use of the `etag` in the
          # read-modify-write cycle to perform job updates in order to avoid race
          # conditions: An `etag` is returned in the response to `GetJob`, and
          # systems are expected to put that etag in the request to `UpdateJob` to
          # ensure that their change will be applied to the same version of the job.
      "startTime": "A String", # Output only. When the job processing was started.
      "endTime": "A String", # Output only. When the job processing was completed.
      "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
        "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
        "nodeHours": 3.14, # Node hours used by the batch prediction job.
        "predictionCount": "A String", # The number of generated predictions.
        "errorCount": "A String", # The number of data instances which resulted in errors.
      },
      "createTime": "A String", # Output only. When the job was created.
    }</pre>
</div>
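<p>The <code>trainingInput</code> fields documented above make up the request body passed to <code>create</code>. The following sketch assembles a minimal Job body for a <code>BASIC</code> scale-tier training job; the job ID, Cloud Storage paths, module name, and region are placeholder assumptions, not values from this reference.</p>

```python
# Sketch: building a Job resource dict for projects.jobs.create.
# All identifiers (job ID, bucket, module, region) are placeholder
# assumptions, not values taken from this reference.

def make_training_job(job_id, package_uri, module, region, job_dir):
    """Assemble a minimal Job body for a BASIC scale-tier training job."""
    return {
        "jobId": job_id,
        "trainingInput": {
            "scaleTier": "BASIC",          # no custom worker/parameter-server types needed
            "packageUris": [package_uri],  # trainer package uploaded to Cloud Storage
            "pythonModule": module,        # entry point run after package installation
            "region": region,
            "jobDir": job_dir,             # where training outputs are written
            "runtimeVersion": "1.13",
            "pythonVersion": "3.5",
        },
    }

job = make_training_job(
    "census_train_001",
    "gs://example-bucket/trainer-0.1.tar.gz",
    "trainer.task",
    "us-central1",
    "gs://example-bucket/output",
)
# This dict would be passed as `body` to
# projects().jobs().create(parent="projects/PROJECT_ID").
print(job["trainingInput"]["scaleTier"])  # BASIC
```

<p>With the google-api-python-client, the same dict is submitted through a service object built with <code>discovery.build('ml', 'v1')</code>; the returned Job resource carries the output fields (<code>trainingOutput</code>, <code>state</code>, and so on) shown in the structure above.</p>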

<div class="method">
    <code class="details" id="get">get(name, x__xgafv=None)</code>
  <pre>Describes a job.

Args:
  name: string, Required. The name of the job to get the description of. (required)
  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

    { # Represents a training or prediction job.
      "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
      "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
        "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
            # Only set for hyperparameter tuning jobs.
        "trials": [ # Results for individual Hyperparameter trials.
            # Only set for hyperparameter tuning jobs.
          { # Represents the result of a single hyperparameter tuning trial from a
              # training job. The TrainingOutput object that is returned on successful
              # completion of a training job with hyperparameter tuning includes a list
              # of HyperparameterOutput objects, one for each successful trial.
            "hyperparameters": { # The hyperparameters given to this trial.
              "a_key": "A String",
            },
            "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
              "trainingStep": "A String", # The global training step for this metric.
              "objectiveValue": 3.14, # The objective value at this training step.
            },
            "allMetrics": [ # All recorded objective metrics for this trial. This field is not currently
                # populated.
              { # An observed value of a metric.
                "trainingStep": "A String", # The global training step for this metric.
                "objectiveValue": 3.14, # The objective value at this training step.
              },
            ],
            "isTrialStoppedEarly": True or False, # True if the trial is stopped early.
            "trialId": "A String", # The trial id for these results.
            "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
                # Only set for trials of built-in algorithms jobs that have succeeded.
              "framework": "A String", # Framework on which the built-in algorithm was trained.
              "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
                  # saves the trained model. Only set for successful jobs that don't use
                  # hyperparameter tuning.
              "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
                  # trained.
              "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
            },
          },
        ],
        "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
        "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job.
        "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
        "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning
            # trials. See
            # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
            # for more information. Only set for hyperparameter tuning jobs.
        "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
            # Only set for built-in algorithms jobs.
          "framework": "A String", # Framework on which the built-in algorithm was trained.
          "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
              # saves the trained model. Only set for successful jobs that don't use
              # hyperparameter tuning.
          "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
              # trained.
          "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
        },
      },
      "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
        "modelName": "A String", # Use this field if you want to use the default version for the specified
            # model. The string must use the following format:
            #
            # `"projects/YOUR_PROJECT/models/YOUR_MODEL"`
        "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch
            # prediction. If not set, AI Platform will pick the runtime version used
            # during the CreateVersion request for this model version, or choose the
            # latest stable version when model version information is not available
            # such as when the model is specified by uri.
        "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for
            # this job. Please refer to
            # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
            # for information about how to use signatures.
            #
            # Defaults to
            # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
            # , which is "serving_default".
        "batchSize": "A String", # Optional. Number of records per batch, defaults to 64.
            # The service will buffer batch_size number of records in memory before
            # invoking one Tensorflow prediction call internally. So take the record
            # size and memory available into consideration when setting this parameter.
        "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain
            # <a href="/storage/docs/gsutil/addlhelp/WildcardNames">wildcards</a>.
          "A String",
        ],
        "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
            # Defaults to 10 if not specified.
        "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
            # the model to use.
        "outputPath": "A String", # Required. The output Google Cloud Storage location.
        "dataFormat": "A String", # Required. The format of the input data files.
        "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
            # string is formatted the same way as `model_version`, with the addition
            # of the version information:
            #
            # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"`
        "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
            # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
            # for AI Platform services.
        "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON.
      },
      "trainingInput": { # Represents input parameters for a training job. When using the # Input parameters to create a training job.
          # gcloud command to submit your training job, you can specify
          # the input parameters as command-line arguments and/or in a YAML configuration
          # file referenced from the --config command-line argument. For
          # details, see the guide to
          # <a href="/ml-engine/docs/tensorflow/training-jobs">submitting a training
          # job</a>.
        "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
            # job's worker nodes.
            #
            # The supported values are the same as those described in the entry for
            # `masterType`.
            #
            # This value must be consistent with the category of machine type that
            # `masterType` uses. In other words, both must be AI Platform machine
            # types or both must be Compute Engine machine types.
            #
            # If you use `cloud_tpu` for this value, see special instructions for
            # [configuring a custom TPU
            # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
            #
            # This value must be present when `scaleTier` is set to `CUSTOM` and
            # `workerCount` is greater than zero.
        "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
            #
            # You should only set `parameterServerConfig.acceleratorConfig` if
            # `parameterServerType` is set to a Compute Engine machine type. [Learn
            # about restrictions on accelerator configurations for
            # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
            #
            # Set `parameterServerConfig.imageUri` only if you build a custom image for
            # your parameter server. If `parameterServerConfig.imageUri` has not been
            # set, AI Platform uses the value of `masterConfig.imageUri`.
            # Learn more about [configuring custom
            # containers](/ml-engine/docs/distributed-training-containers).
          "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1206 # [Learn about restrictions on accelerator configurations for
1207 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
1208 "count": "A String", # The number of accelerators to attach to each machine running the job.
1209 "type": "A String", # The type of accelerator to use.
1210 },
1211 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
1212 # Registry. Learn more about [configuring custom
1213 # containers](/ml-engine/docs/distributed-training-containers).
1214 },
1215 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. If not
1216 # set, AI Platform uses the default stable version, 1.0. For more
1217 # information, see the
1218 # <a href="/ml-engine/docs/runtime-version-list">runtime version list</a>
1219 # and
1220 # <a href="/ml-engine/docs/versioning">how to manage runtime versions</a>.
1221 "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
1222 # and parameter servers.
1223 "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
1224 # job's master worker.
1225 #
1226 # The following types are supported:
1227 #
1228 # <dl>
1229 # <dt>standard</dt>
1230 # <dd>
1231 # A basic machine configuration suitable for training simple models with
1232 # small to moderate datasets.
1233 # </dd>
1234 # <dt>large_model</dt>
1235 # <dd>
1236 # A machine with a lot of memory, specially suited for parameter servers
1237 # when your model is large (having many hidden layers or layers with very
1238 # large numbers of nodes).
1239 # </dd>
1240 # <dt>complex_model_s</dt>
1241 # <dd>
1242 # A machine suitable for the master and workers of the cluster when your
1243 # model requires more computation than the standard machine can handle
1244 # satisfactorily.
1245 # </dd>
1246 # <dt>complex_model_m</dt>
1247 # <dd>
1248 # A machine with roughly twice the number of cores and roughly double the
1249 # memory of <i>complex_model_s</i>.
1250 # </dd>
1251 # <dt>complex_model_l</dt>
1252 # <dd>
1253 # A machine with roughly twice the number of cores and roughly double the
1254 # memory of <i>complex_model_m</i>.
1255 # </dd>
1256 # <dt>standard_gpu</dt>
1257 # <dd>
1258 # A machine equivalent to <i>standard</i> that
1259 # also includes a single NVIDIA Tesla K80 GPU. See more about
1260 # <a href="/ml-engine/docs/tensorflow/using-gpus">using GPUs to
1261 # train your model</a>.
1262 # </dd>
1263 # <dt>complex_model_m_gpu</dt>
1264 # <dd>
1265 # A machine equivalent to <i>complex_model_m</i> that also includes
1266 # four NVIDIA Tesla K80 GPUs.
1267 # </dd>
1268 # <dt>complex_model_l_gpu</dt>
1269 # <dd>
1270 # A machine equivalent to <i>complex_model_l</i> that also includes
1271 # eight NVIDIA Tesla K80 GPUs.
1272 # </dd>
1273 # <dt>standard_p100</dt>
1274 # <dd>
1275 # A machine equivalent to <i>standard</i> that
1276 # also includes a single NVIDIA Tesla P100 GPU.
1277 # </dd>
1278 # <dt>complex_model_m_p100</dt>
1279 # <dd>
1280 # A machine equivalent to <i>complex_model_m</i> that also includes
1281 # four NVIDIA Tesla P100 GPUs.
1282 # </dd>
1283 # <dt>standard_v100</dt>
1284 # <dd>
1285 # A machine equivalent to <i>standard</i> that
1286 # also includes a single NVIDIA Tesla V100 GPU.
1287 # </dd>
1288 # <dt>large_model_v100</dt>
1289 # <dd>
1290 # A machine equivalent to <i>large_model</i> that
1291 # also includes a single NVIDIA Tesla V100 GPU.
1292 # </dd>
1293 # <dt>complex_model_m_v100</dt>
1294 # <dd>
1295 # A machine equivalent to <i>complex_model_m</i> that
1296 # also includes four NVIDIA Tesla V100 GPUs.
1297 # </dd>
1298 # <dt>complex_model_l_v100</dt>
1299 # <dd>
1300 # A machine equivalent to <i>complex_model_l</i> that
1301 # also includes eight NVIDIA Tesla V100 GPUs.
1302 # </dd>
1303 # <dt>cloud_tpu</dt>
1304 # <dd>
1305 # A TPU VM including one Cloud TPU. See more about
1306 # <a href="/ml-engine/docs/tensorflow/using-tpus">using TPUs to train
1307 # your model</a>.
1308 # </dd>
1309 # </dl>
1310 #
1311 # You may also use certain Compute Engine machine types directly in this
1312 # field. The following types are supported:
1313 #
1314 # - `n1-standard-4`
1315 # - `n1-standard-8`
1316 # - `n1-standard-16`
1317 # - `n1-standard-32`
1318 # - `n1-standard-64`
1319 # - `n1-standard-96`
1320 # - `n1-highmem-2`
1321 # - `n1-highmem-4`
1322 # - `n1-highmem-8`
1323 # - `n1-highmem-16`
1324 # - `n1-highmem-32`
1325 # - `n1-highmem-64`
1326 # - `n1-highmem-96`
1327 # - `n1-highcpu-16`
1328 # - `n1-highcpu-32`
1329 # - `n1-highcpu-64`
1330 # - `n1-highcpu-96`
1331 #
1332 # See more about [using Compute Engine machine
1333 # types](/ml-engine/docs/tensorflow/machine-types#compute-engine-machine-types).
1334 #
1335 # You must set this value when `scaleTier` is set to `CUSTOM`.
1336 "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of hyperparameters to tune.
1337 "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
1338 # the specified hyperparameters.
1339 #
1340 # Defaults to one.
1341 "goal": "A String", # Required. The type of goal to use for tuning. Available types are
1342 # `MAXIMIZE` and `MINIMIZE`.
1343 #
1344 # Defaults to `MAXIMIZE`.
1345 "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter
1346 # tuning job.
1347 # Uses the default AI Platform hyperparameter tuning
1348 # algorithm if unspecified.
1349 "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing
1350 # the hyperparameter tuning job. You can specify this field to override the
1351 # default failing criteria for AI Platform hyperparameter tuning jobs.
1352 #
1353 # Defaults to zero, which means the service decides when a hyperparameter
1354 # job should fail.
1355 "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
1356 # early stopping.
1357 "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to
1358 # continue with. The job id will be used to find the corresponding Vizier
1359 # study guid and resume the study.
1360 "params": [ # Required. The set of parameters to tune.
1361 { # Represents a single hyperparameter to optimize.
1362 "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
1363 # should be unset if type is `CATEGORICAL`. This value should be an
1364 # integer if type is `INTEGER`.
1365 "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
1366 "A String",
1367 ],
1368 "discreteValues": [ # Required if type is `DISCRETE`.
1369 # A list of feasible points.
1370 # The list should be in strictly increasing order. For instance, this
1371 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
1372 # should not contain more than 1,000 values.
1373 3.14,
1374 ],
1375 "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
1376 # a HyperparameterSpec message. E.g., "learning_rate".
1377 "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
1378 # should be unset if type is `CATEGORICAL`. This value should be an
1379 # integer if type is `INTEGER`.
1380 "type": "A String", # Required. The type of the parameter.
1381 "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
1382 # Leave unset for categorical parameters.
1383 # Some kind of scaling is strongly recommended for real or integral
1384 # parameters (e.g., `UNIT_LINEAR_SCALE`).
1385 },
1386 ],
1387 "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For
1388 # current versions of TensorFlow, this tag name should exactly match what is
1389 # shown in TensorBoard, including all scopes. For versions of TensorFlow
1390 # prior to 0.12, this should be only the tag passed to tf.Summary.
1391 # By default, "training/hptuning/metric" will be used.
1392 "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
1393 # You can reduce the time it takes to perform hyperparameter tuning by adding
1394 # trials in parallel. However, each trial only benefits from the information
1395 # gained in completed trials. That means that a trial does not get access to
1396 # the results of trials running at the same time, which could reduce the
1397 # quality of the overall optimization.
1398 #
1399 # Each trial will use the same scale tier and machine types.
1400 #
1401 # Defaults to one.
1402 },
1403 "region": "A String", # Required. The Google Compute Engine region to run the training job in.
1404 # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
1405 # for AI Platform services.
1406 "args": [ # Optional. Command line arguments to pass to the program.
1407       "A String",
1408 ],
1409     "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
1410 "pythonVersion": "A String", # Optional. The version of Python used in training. If not set, the default
1411 # version is '2.7'. Python '3.5' is available when `runtime_version` is set
1412 # to '1.4' and above. Python '2.7' works with all supported
1413 # <a href="/ml-engine/docs/runtime-version-list">runtime versions</a>.
1414 "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
1415 # and other data needed for training. This path is passed to your TensorFlow
1416 # program as the '--job-dir' command-line argument. The benefit of specifying
1417 # this field is that Cloud ML validates the path for use in training.
1418 "packageUris": [ # Required. The Google Cloud Storage location of the packages with
1419 # the training program and any additional dependencies.
1420 # The maximum number of package URIs is 100.
1421 "A String",
1422 ],
1423 "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
1424 # replica in the cluster will be of the type specified in `worker_type`.
1425 #
1426 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
1427 # set this value, you must also set `worker_type`.
1428 #
1429 # The default value is zero.
1430 "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
1431 # job's parameter server.
1432 #
1433 # The supported values are the same as those described in the entry for
1434 # `master_type`.
1435 #
1436 # This value must be consistent with the category of machine type that
1437 # `masterType` uses. In other words, both must be AI Platform machine
1438 # types or both must be Compute Engine machine types.
1439 #
1440 # This value must be present when `scaleTier` is set to `CUSTOM` and
1441 # `parameter_server_count` is greater than zero.
1442 "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
1443 #
1444 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
1445 # to a Compute Engine machine type. [Learn about restrictions on accelerator
1446 # configurations for
1447 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
1448 #
1449 # Set `workerConfig.imageUri` only if you build a custom image for your
1450 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
1451 # the value of `masterConfig.imageUri`. Learn more about
1452 # [configuring custom
1453 # containers](/ml-engine/docs/distributed-training-containers).
1454 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1455 # [Learn about restrictions on accelerator configurations for
1456 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
1457 "count": "A String", # The number of accelerators to attach to each machine running the job.
1458 "type": "A String", # The type of accelerator to use.
1459 },
1460 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
1461 # Registry. Learn more about [configuring custom
1462 # containers](/ml-engine/docs/distributed-training-containers).
1463 },
1464 "maxRunningTime": "A String", # Optional. The maximum job running time. The default is 7 days.
1465 "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
1466 #
1467 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
1468 # to a Compute Engine machine type. Learn about [restrictions on accelerator
1469 # configurations for
1470 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
1471 #
1472 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
1473 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more about
1474 # [configuring custom
1475 # containers](/ml-engine/docs/distributed-training-containers).
1476 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1477 # [Learn about restrictions on accelerator configurations for
1478 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
1479 "count": "A String", # The number of accelerators to attach to each machine running the job.
1480 "type": "A String", # The type of accelerator to use.
1481 },
1482 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
1483 # Registry. Learn more about [configuring custom
1484 # containers](/ml-engine/docs/distributed-training-containers).
1485 },
1486 "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
1487 # job. Each replica in the cluster will be of the type specified in
1488 # `parameter_server_type`.
1489 #
1490 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
1491 # set this value, you must also set `parameter_server_type`.
1492 #
1493 # The default value is zero.
1494   },
1495   "jobId": "A String", # Required. The user-specified id of the job.
1496   "labels": { # Optional. One or more labels that you can add, to organize your jobs.
1497 # Each label is a key-value pair, where both the key and the value are
1498 # arbitrary strings that you supply.
1499 # For more information, see the documentation on
1500 # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
1501 "a_key": "A String",
1502 },
1503   "state": "A String", # Output only. The detailed state of a job.
1504   "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
1505 # prevent simultaneous updates of a job from overwriting each other.
1506 # It is strongly suggested that systems make use of the `etag` in the
1507 # read-modify-write cycle to perform job updates in order to avoid race
1508 # conditions: An `etag` is returned in the response to `GetJob`, and
1509 # systems are expected to put that etag in the request to `UpdateJob` to
1510 # ensure that their change will be applied to the same version of the job.
1511   "startTime": "A String", # Output only. When the job processing was started.
1512   "endTime": "A String", # Output only. When the job processing was completed.
1513 "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
1514 "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
1515 "nodeHours": 3.14, # Node hours used by the batch prediction job.
1516 "predictionCount": "A String", # The number of generated predictions.
1517 "errorCount": "A String", # The number of data instances which resulted in errors.
1518 },
1519 "createTime": "A String", # Output only. When the job was created.
1520   }</pre>
1521</div>
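<p>As an illustrative sketch (not part of the generated reference), the
<code>trainingInput</code> fields documented above can be assembled into a
request body for the <code>create</code> method. The project, bucket, and
package names below are placeholders, and the runtime and Python versions
are assumptions; match them to your own setup.</p>

```python
# Sketch of a training-job request body built from the fields documented
# above. All ids, buckets, and package names are placeholder values.
training_job = {
    "jobId": "census_training_01",
    "trainingInput": {
        # scaleTier CUSTOM requires masterType; workerType must belong to
        # the same machine-type category as masterType.
        "scaleTier": "CUSTOM",
        "masterType": "n1-highmem-8",
        "workerType": "n1-highmem-8",
        "workerCount": "4",
        "region": "us-central1",
        "pythonModule": "trainer.task",
        "packageUris": ["gs://example-bucket/trainer-0.1.tar.gz"],
        "runtimeVersion": "1.14",
        "pythonVersion": "3.5",
        "hyperparameters": {
            "goal": "MAXIMIZE",
            "hyperparameterMetricTag": "accuracy",
            "maxTrials": 10,
            "maxParallelTrials": 2,
            "params": [
                {
                    "parameterName": "learning_rate",
                    "type": "DOUBLE",
                    "minValue": 0.0001,
                    "maxValue": 0.1,
                    "scaleType": "UNIT_LINEAR_SCALE",
                },
            ],
        },
    },
}

# With credentials configured, the job could be submitted via the client
# library (requires google-api-python-client):
#
#   from googleapiclient import discovery
#   ml = discovery.build("ml", "v1")
#   ml.projects().jobs().create(
#       parent="projects/YOUR_PROJECT", body=training_job).execute()
```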
1522
1523<div class="method">
1524    <code class="details" id="getIamPolicy">getIamPolicy(resource, x__xgafv=None)</code>
1525 <pre>Gets the access control policy for a resource.
1526Returns an empty policy if the resource exists and does not have a policy
1527set.
1528
1529Args:
1530  resource: string, REQUIRED: The resource for which the policy is being requested.
1531See the operation documentation for the appropriate value for this field. (required)
1532  x__xgafv: string, V1 error format.
1533 Allowed values
1534 1 - v1 error format
1535 2 - v2 error format
1536
1537Returns:
1538 An object of the form:
1539
1540    { # Defines an Identity and Access Management (IAM) policy. It is used to
1541 # specify access control policies for Cloud Platform resources.
1542 #
1543 #
1544 # A `Policy` consists of a list of `bindings`. A `binding` binds a list of
1545 # `members` to a `role`, where the members can be user accounts, Google groups,
1546 # Google domains, and service accounts. A `role` is a named list of permissions
1547 # defined by IAM.
1548 #
1549 # **JSON Example**
1550 #
1551 # {
1552 # "bindings": [
1553 # {
1554 # "role": "roles/owner",
1555 # "members": [
1556 # "user:mike@example.com",
1557 # "group:admins@example.com",
1558 # "domain:google.com",
1559 # "serviceAccount:my-other-app@appspot.gserviceaccount.com"
1560 # ]
1561 # },
1562 # {
1563 # "role": "roles/viewer",
1564 # "members": ["user:sean@example.com"]
1565 # }
1566 # ]
1567 # }
1568 #
1569 # **YAML Example**
1570 #
1571 # bindings:
1572 # - members:
1573 # - user:mike@example.com
1574 # - group:admins@example.com
1575 # - domain:google.com
1576 # - serviceAccount:my-other-app@appspot.gserviceaccount.com
1577 # role: roles/owner
1578 # - members:
1579 # - user:sean@example.com
1580 # role: roles/viewer
1581 #
1582 #
1583 # For a description of IAM and its features, see the
1584 # [IAM developer's guide](https://cloud.google.com/iam/docs).
1585 "bindings": [ # Associates a list of `members` to a `role`.
1586 # `bindings` with no members will result in an error.
1587 { # Associates `members` with a `role`.
1588 "role": "A String", # Role that is assigned to `members`.
1589 # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
1590 "members": [ # Specifies the identities requesting access for a Cloud Platform resource.
1591 # `members` can have the following values:
1592 #
1593 # * `allUsers`: A special identifier that represents anyone who is
1594 # on the internet; with or without a Google account.
1595 #
1596 # * `allAuthenticatedUsers`: A special identifier that represents anyone
1597 # who is authenticated with a Google account or a service account.
1598 #
1599 # * `user:{emailid}`: An email address that represents a specific Google
1600 # account. For example, `alice@gmail.com` .
1601 #
1602 #
1603 # * `serviceAccount:{emailid}`: An email address that represents a service
1604 # account. For example, `my-other-app@appspot.gserviceaccount.com`.
1605 #
1606 # * `group:{emailid}`: An email address that represents a Google group.
1607 # For example, `admins@example.com`.
1608 #
1609 #
1610 # * `domain:{domain}`: The G Suite domain (primary) that represents all the
1611 # users of that domain. For example, `google.com` or `example.com`.
1612 #
1613 "A String",
1614 ],
1615 "condition": { # Represents an expression text. Example: # The condition that is associated with this binding.
1616 # NOTE: An unsatisfied condition will not allow user access via current
1617 # binding. Different bindings, including their conditions, are examined
1618 # independently.
1619 #
1620 # title: "User account presence"
1621 # description: "Determines whether the request has a user account"
1622 # expression: "size(request.user) > 0"
1623 "description": "A String", # An optional description of the expression. This is a longer text which
1624 # describes the expression, e.g. when hovered over it in a UI.
1625 "expression": "A String", # Textual representation of an expression in
1626 # Common Expression Language syntax.
1627 #
1628 # The application context of the containing message determines which
1629 # well-known feature set of CEL is supported.
1630 "location": "A String", # An optional string indicating the location of the expression for error
1631 # reporting, e.g. a file name and a position in the file.
1632 "title": "A String", # An optional title for the expression, i.e. a short string describing
1633 # its purpose. This can be used e.g. in UIs which allow users to enter the
1634 # expression.
1635 },
1636 },
1637 ],
1638 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
1639 # prevent simultaneous updates of a policy from overwriting each other.
1640 # It is strongly suggested that systems make use of the `etag` in the
1641 # read-modify-write cycle to perform policy updates in order to avoid race
1642 # conditions: An `etag` is returned in the response to `getIamPolicy`, and
1643 # systems are expected to put that etag in the request to `setIamPolicy` to
1644 # ensure that their change will be applied to the same version of the policy.
1645 #
1646 # If no `etag` is provided in the call to `setIamPolicy`, then the existing
1647 # policy is overwritten blindly.
1648 "version": 42, # Deprecated.
1649 "auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
1650 { # Specifies the audit configuration for a service.
1651 # The configuration determines which permission types are logged, and what
1652 # identities, if any, are exempted from logging.
1653 # An AuditConfig must have one or more AuditLogConfigs.
1654 #
1655 # If there are AuditConfigs for both `allServices` and a specific service,
1656 # the union of the two AuditConfigs is used for that service: the log_types
1657 # specified in each AuditConfig are enabled, and the exempted_members in each
1658 # AuditLogConfig are exempted.
1659 #
1660 # Example Policy with multiple AuditConfigs:
1661 #
1662 # {
1663 # "audit_configs": [
1664 # {
1665 # "service": "allServices"
1666 # "audit_log_configs": [
1667 # {
1668 # "log_type": "DATA_READ",
1669 # "exempted_members": [
1670 # "user:foo@gmail.com"
1671 # ]
1672 # },
1673 # {
1674 # "log_type": "DATA_WRITE",
1675 # },
1676 # {
1677 # "log_type": "ADMIN_READ",
1678 # }
1679 # ]
1680 # },
1681 # {
1682 # "service": "fooservice.googleapis.com"
1683 # "audit_log_configs": [
1684 # {
1685 # "log_type": "DATA_READ",
1686 # },
1687 # {
1688 # "log_type": "DATA_WRITE",
1689 # "exempted_members": [
1690 # "user:bar@gmail.com"
1691 # ]
1692 # }
1693 # ]
1694 # }
1695 # ]
1696 # }
1697 #
1698 # For fooservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
1699 # logging. It also exempts foo@gmail.com from DATA_READ logging, and
1700 # bar@gmail.com from DATA_WRITE logging.
1701 "auditLogConfigs": [ # The configuration for logging of each type of permission.
1702 { # Provides the configuration for logging a type of permissions.
1703 # Example:
1704 #
1705 # {
1706 # "audit_log_configs": [
1707 # {
1708 # "log_type": "DATA_READ",
1709 # "exempted_members": [
1710 # "user:foo@gmail.com"
1711 # ]
1712 # },
1713 # {
1714 # "log_type": "DATA_WRITE",
1715 # }
1716 # ]
1717 # }
1718 #
1719 # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
1720 # foo@gmail.com from DATA_READ logging.
1721 "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
1722 # permission.
1723 # Follows the same format of Binding.members.
1724 "A String",
1725           ],
1726           "logType": "A String", # The log type that this config enables.
1727         },
1728 ],
1729       "service": "A String", # Specifies a service that will be enabled for audit logging.
1730 # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
1731 # `allServices` is a special value that covers all services.
1732     },
1733   ],
1734 }</pre>
1735</div>
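<p>As a non-authoritative sketch, a <code>Policy</code> response of the shape
documented above can be inspected with a small helper. The policy dict below
is a hand-written stand-in for a real <code>getIamPolicy</code> result, and
<code>members_with_role</code> is a hypothetical helper, not part of the
client library.</p>

```python
# Hand-written stand-in for a getIamPolicy() response of the documented shape.
policy = {
    "bindings": [
        {"role": "roles/owner",
         "members": ["user:mike@example.com", "group:admins@example.com"]},
        {"role": "roles/viewer",
         "members": ["user:sean@example.com"]},
    ],
    "etag": "BwWK5tLRRTY=",
}

def members_with_role(policy, role):
    """Return every member bound to `role`, or [] if the role is absent."""
    return [member
            for binding in policy.get("bindings", [])
            if binding.get("role") == role
            for member in binding.get("members", [])]

print(members_with_role(policy, "roles/viewer"))  # -> ['user:sean@example.com']
```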
1736
1737<div class="method">
1738    <code class="details" id="list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</code>
1739  <pre>Lists the jobs in the project.
1740
1741If there are no jobs that match the request parameters, the list
1742request returns an empty response body: {}.
1743
1744Args:
1745  parent: string, Required. The name of the project for which to list jobs. (required)
1746  pageToken: string, Optional. A page token to request the next page of results.
1747
1748You get the token from the `next_page_token` field of the response from
1749the previous call.
1750 x__xgafv: string, V1 error format.
1751 Allowed values
1752 1 - v1 error format
1753 2 - v2 error format
1754  pageSize: integer, Optional. The number of jobs to retrieve per "page" of results. If there
1755are more remaining results than this number, the response message will
1756contain a valid value in the `next_page_token` field.
1757
1758The default value is 20, and the maximum page size is 100.
1759 filter: string, Optional. Specifies the subset of jobs to retrieve.
1760You can filter on the value of one or more attributes of the job object.
1761For example, retrieve jobs with a job identifier that starts with 'census':
1762<p><code>gcloud ai-platform jobs list --filter='jobId:census*'</code>
1763<p>List all failed jobs with names that start with 'rnn':
1764<p><code>gcloud ai-platform jobs list --filter='jobId:rnn*
1765AND state:FAILED'</code>
1766<p>For more examples, see the guide to
1767<a href="/ml-engine/docs/tensorflow/monitor-training">monitoring jobs</a>.
1768
1769Returns:
1770 An object of the form:
1771
1772 { # Response message for the ListJobs method.
1773 "nextPageToken": "A String", # Optional. Pass this token as the `page_token` field of the request for a
1774 # subsequent call.
1775 "jobs": [ # The list of jobs.
1776 { # Represents a training or prediction job.
1777         "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
1778 "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
1779 "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
1780 # Only set for hyperparameter tuning jobs.
1781 "trials": [ # Results for individual Hyperparameter trials.
1782 # Only set for hyperparameter tuning jobs.
1783 { # Represents the result of a single hyperparameter tuning trial from a
1784 # training job. The TrainingOutput object that is returned on successful
1785 # completion of a training job with hyperparameter tuning includes a list
1786 # of HyperparameterOutput objects, one for each successful trial.
1787 "hyperparameters": { # The hyperparameters given to this trial.
1788 "a_key": "A String",
1789 },
1790 "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
1791 "trainingStep": "A String", # The global training step for this metric.
1792 "objectiveValue": 3.14, # The objective value at this training step.
1793 },
1794           "allMetrics": [ # All recorded objective metrics for this trial. This field is not currently
1795 # populated.
1796 { # An observed value of a metric.
1797             "trainingStep": "A String", # The global training step for this metric.
1798 "objectiveValue": 3.14, # The objective value at this training step.
1799 },
1800           ],
1801 "isTrialStoppedEarly": True or False, # True if the trial is stopped early.
1802 "trialId": "A String", # The trial id for these results.
1803 "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
1804 # Only set for trials of built-in algorithms jobs that have succeeded.
1805 "framework": "A String", # Framework on which the built-in algorithm was trained.
1806 "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
1807 # saves the trained model. Only set for successful jobs that don't use
1808 # hyperparameter tuning.
1809 "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
1810 # trained.
1811 "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
1812 },
1813 },
1814 ],
1815 "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
1816 "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job.
1817 "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
1818 "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning
1819 # trials. See
1820 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
1821 # for more information. Only set for hyperparameter tuning jobs.
1822 "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
1823 # Only set for built-in algorithms jobs.
1824 "framework": "A String", # Framework on which the built-in algorithm was trained.
1825 "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
1826 # saves the trained model. Only set for successful jobs that don't use
1827 # hyperparameter tuning.
1828 "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
1829 # trained.
1830 "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
1831 },
1832 },
1833 "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
1834 "modelName": "A String", # Use this field if you want to use the default version for the specified
1835 # model. The string must use the following format:
1836 #
1837 # `"projects/YOUR_PROJECT/models/YOUR_MODEL"`
1838 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch
1839 # prediction. If not set, AI Platform will pick the runtime version used
1840 # during the CreateVersion request for this model version, or choose the
1841 # latest stable version when model version information is not available
1842 # such as when the model is specified by uri.
1843 "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for
1844 # this job. Please refer to
1845 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
1846 # for information about how to use signatures.
1847 #
1848 # Defaults to
1849 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
1850 # , which is "serving_default".
1851 "batchSize": "A String", # Optional. Number of records per batch, defaults to 64.
1852 # The service will buffer batch_size number of records in memory before
1853      # invoking one TensorFlow prediction call internally. So take the record
1854 # size and memory available into consideration when setting this parameter.
1855 "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain
1856 # <a href="/storage/docs/gsutil/addlhelp/WildcardNames">wildcards</a>.
1857 "A String",
1858 ],
1859 "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
1860 # Defaults to 10 if not specified.
1861 "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
1862 # the model to use.
1863 "outputPath": "A String", # Required. The output Google Cloud Storage location.
1864 "dataFormat": "A String", # Required. The format of the input data files.
1865 "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
1866 # string is formatted the same way as `model_version`, with the addition
1867 # of the version information:
1868 #
1869 # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"`
1870 "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
1871 # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
1872 # for AI Platform services.
1873 "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON.
1874 },
1875 "trainingInput": { # Represents input parameters for a training job. When using the # Input parameters to create a training job.
1876 # gcloud command to submit your training job, you can specify
1877 # the input parameters as command-line arguments and/or in a YAML configuration
1878 # file referenced from the --config command-line argument. For
1879 # details, see the guide to
1880 # <a href="/ml-engine/docs/tensorflow/training-jobs">submitting a training
1881 # job</a>.
1882 "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
1883 # job's worker nodes.
1884 #
1885 # The supported values are the same as those described in the entry for
1886 # `masterType`.
1887 #
1888 # This value must be consistent with the category of machine type that
1889 # `masterType` uses. In other words, both must be AI Platform machine
1890 # types or both must be Compute Engine machine types.
1891 #
1892 # If you use `cloud_tpu` for this value, see special instructions for
1893 # [configuring a custom TPU
1894 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
1895 #
1896 # This value must be present when `scaleTier` is set to `CUSTOM` and
1897 # `workerCount` is greater than zero.
1898 "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
1899 #
1900 # You should only set `parameterServerConfig.acceleratorConfig` if
1901      # `parameterServerType` is set to a Compute Engine machine type. [Learn
1902 # about restrictions on accelerator configurations for
1903 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
1904 #
1905 # Set `parameterServerConfig.imageUri` only if you build a custom image for
1906 # your parameter server. If `parameterServerConfig.imageUri` has not been
1907 # set, AI Platform uses the value of `masterConfig.imageUri`.
1908 # Learn more about [configuring custom
1909 # containers](/ml-engine/docs/distributed-training-containers).
1910 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
1911 # [Learn about restrictions on accelerator configurations for
1912 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
1913 "count": "A String", # The number of accelerators to attach to each machine running the job.
1914 "type": "A String", # The type of accelerator to use.
1915 },
1916 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
1917 # Registry. Learn more about [configuring custom
1918 # containers](/ml-engine/docs/distributed-training-containers).
1919 },
1920 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. If not
1921 # set, AI Platform uses the default stable version, 1.0. For more
1922 # information, see the
1923 # <a href="/ml-engine/docs/runtime-version-list">runtime version list</a>
1924 # and
1925 # <a href="/ml-engine/docs/versioning">how to manage runtime versions</a>.
1926 "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
1927 # and parameter servers.
1928 "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
1929 # job's master worker.
1930 #
1931 # The following types are supported:
1932 #
1933 # <dl>
1934 # <dt>standard</dt>
1935 # <dd>
1936 # A basic machine configuration suitable for training simple models with
1937 # small to moderate datasets.
1938 # </dd>
1939 # <dt>large_model</dt>
1940 # <dd>
1941 # A machine with a lot of memory, specially suited for parameter servers
1942 # when your model is large (having many hidden layers or layers with very
1943 # large numbers of nodes).
1944 # </dd>
1945 # <dt>complex_model_s</dt>
1946 # <dd>
1947 # A machine suitable for the master and workers of the cluster when your
1948 # model requires more computation than the standard machine can handle
1949 # satisfactorily.
1950 # </dd>
1951 # <dt>complex_model_m</dt>
1952 # <dd>
1953 # A machine with roughly twice the number of cores and roughly double the
1954 # memory of <i>complex_model_s</i>.
1955 # </dd>
1956 # <dt>complex_model_l</dt>
1957 # <dd>
1958 # A machine with roughly twice the number of cores and roughly double the
1959 # memory of <i>complex_model_m</i>.
1960 # </dd>
1961 # <dt>standard_gpu</dt>
1962 # <dd>
1963 # A machine equivalent to <i>standard</i> that
1964 # also includes a single NVIDIA Tesla K80 GPU. See more about
1965 # <a href="/ml-engine/docs/tensorflow/using-gpus">using GPUs to
1966 # train your model</a>.
1967 # </dd>
1968 # <dt>complex_model_m_gpu</dt>
1969 # <dd>
1970 # A machine equivalent to <i>complex_model_m</i> that also includes
1971 # four NVIDIA Tesla K80 GPUs.
1972 # </dd>
1973 # <dt>complex_model_l_gpu</dt>
1974 # <dd>
1975 # A machine equivalent to <i>complex_model_l</i> that also includes
1976 # eight NVIDIA Tesla K80 GPUs.
1977 # </dd>
1978 # <dt>standard_p100</dt>
1979 # <dd>
1980 # A machine equivalent to <i>standard</i> that
1981 # also includes a single NVIDIA Tesla P100 GPU.
1982 # </dd>
1983 # <dt>complex_model_m_p100</dt>
1984 # <dd>
1985 # A machine equivalent to <i>complex_model_m</i> that also includes
1986 # four NVIDIA Tesla P100 GPUs.
1987 # </dd>
1988 # <dt>standard_v100</dt>
1989 # <dd>
1990 # A machine equivalent to <i>standard</i> that
1991 # also includes a single NVIDIA Tesla V100 GPU.
1992 # </dd>
1993 # <dt>large_model_v100</dt>
1994 # <dd>
1995 # A machine equivalent to <i>large_model</i> that
1996 # also includes a single NVIDIA Tesla V100 GPU.
1997 # </dd>
1998 # <dt>complex_model_m_v100</dt>
1999 # <dd>
2000 # A machine equivalent to <i>complex_model_m</i> that
2001 # also includes four NVIDIA Tesla V100 GPUs.
2002 # </dd>
2003 # <dt>complex_model_l_v100</dt>
2004 # <dd>
2005 # A machine equivalent to <i>complex_model_l</i> that
2006 # also includes eight NVIDIA Tesla V100 GPUs.
2007 # </dd>
2008 # <dt>cloud_tpu</dt>
2009 # <dd>
2010 # A TPU VM including one Cloud TPU. See more about
2011 # <a href="/ml-engine/docs/tensorflow/using-tpus">using TPUs to train
2012 # your model</a>.
2013 # </dd>
2014 # </dl>
2015 #
2016 # You may also use certain Compute Engine machine types directly in this
2017 # field. The following types are supported:
2018 #
2019 # - `n1-standard-4`
2020 # - `n1-standard-8`
2021 # - `n1-standard-16`
2022 # - `n1-standard-32`
2023 # - `n1-standard-64`
2024 # - `n1-standard-96`
2025 # - `n1-highmem-2`
2026 # - `n1-highmem-4`
2027 # - `n1-highmem-8`
2028 # - `n1-highmem-16`
2029 # - `n1-highmem-32`
2030 # - `n1-highmem-64`
2031 # - `n1-highmem-96`
2032 # - `n1-highcpu-16`
2033 # - `n1-highcpu-32`
2034 # - `n1-highcpu-64`
2035 # - `n1-highcpu-96`
2036 #
2037 # See more about [using Compute Engine machine
2038 # types](/ml-engine/docs/tensorflow/machine-types#compute-engine-machine-types).
2039 #
2040 # You must set this value when `scaleTier` is set to `CUSTOM`.
2041 "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
2042 "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
2043 # the specified hyperparameters.
2044 #
2045 # Defaults to one.
2046 "goal": "A String", # Required. The type of goal to use for tuning. Available types are
2047 # `MAXIMIZE` and `MINIMIZE`.
2048 #
2049 # Defaults to `MAXIMIZE`.
2050 "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter
2051 # tuning job.
2052 # Uses the default AI Platform hyperparameter tuning
2053 # algorithm if unspecified.
2054 "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing
2055 # the hyperparameter tuning job. You can specify this field to override the
2056 # default failing criteria for AI Platform hyperparameter tuning jobs.
2057 #
2058 # Defaults to zero, which means the service decides when a hyperparameter
2059 # job should fail.
2060 "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
2061 # early stopping.
2062 "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to
2063 # continue with. The job id will be used to find the corresponding vizier
2064 # study guid and resume the study.
2065 "params": [ # Required. The set of parameters to tune.
2066 { # Represents a single hyperparameter to optimize.
2067 "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
2068 # should be unset if type is `CATEGORICAL`. This value should be integers if
2069 # type is `INTEGER`.
2070 "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
2071 "A String",
2072 ],
2073 "discreteValues": [ # Required if type is `DISCRETE`.
2074 # A list of feasible points.
2075 # The list should be in strictly increasing order. For instance, this
2076 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
2077 # should not contain more than 1,000 values.
2078 3.14,
2079 ],
2080 "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
2081 # a HyperparameterSpec message. E.g., "learning_rate".
2082 "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
2083 # should be unset if type is `CATEGORICAL`. This value should be integers if
2084 # type is INTEGER.
2085 "type": "A String", # Required. The type of the parameter.
2086 "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
2087 # Leave unset for categorical parameters.
2088 # Some kind of scaling is strongly recommended for real or integral
2089 # parameters (e.g., `UNIT_LINEAR_SCALE`).
2090        },
2091 ],
2092      "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For
2093 # current versions of TensorFlow, this tag name should exactly match what is
2094 # shown in TensorBoard, including all scopes. For versions of TensorFlow
2095 # prior to 0.12, this should be only the tag passed to tf.Summary.
2096 # By default, "training/hptuning/metric" will be used.
2097 "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
2098 # You can reduce the time it takes to perform hyperparameter tuning by adding
2099      # trials in parallel. However, each trial only benefits from the information
2100 # gained in completed trials. That means that a trial does not get access to
2101 # the results of trials running at the same time, which could reduce the
2102 # quality of the overall optimization.
2103 #
2104 # Each trial will use the same scale tier and machine types.
2105 #
2106 # Defaults to one.
2107     },
2108     "region": "A String", # Required. The Google Compute Engine region to run the training job in.
2109 # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
2110 # for AI Platform services.
2111 "args": [ # Optional. Command line arguments to pass to the program.
2112 "A String",
2113 ],
2114 "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
2115 "pythonVersion": "A String", # Optional. The version of Python used in training. If not set, the default
2116 # version is '2.7'. Python '3.5' is available when `runtime_version` is set
2117 # to '1.4' and above. Python '2.7' works with all supported
2118 # <a href="/ml-engine/docs/runtime-version-list">runtime versions</a>.
2119 "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
2120 # and other data needed for training. This path is passed to your TensorFlow
2121 # program as the '--job-dir' command-line argument. The benefit of specifying
2122 # this field is that Cloud ML validates the path for use in training.
2123 "packageUris": [ # Required. The Google Cloud Storage location of the packages with
2124 # the training program and any additional dependencies.
2125 # The maximum number of package URIs is 100.
2126 "A String",
2127 ],
2128 "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
2129 # replica in the cluster will be of the type specified in `worker_type`.
2130 #
2131 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
2132 # set this value, you must also set `worker_type`.
2133 #
2134 # The default value is zero.
2135 "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
2136 # job's parameter server.
2137 #
2138 # The supported values are the same as those described in the entry for
2139 # `master_type`.
2140 #
2141 # This value must be consistent with the category of machine type that
2142 # `masterType` uses. In other words, both must be AI Platform machine
2143 # types or both must be Compute Engine machine types.
2144 #
2145 # This value must be present when `scaleTier` is set to `CUSTOM` and
2146 # `parameter_server_count` is greater than zero.
2147 "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
2148 #
2149 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
2150 # to a Compute Engine machine type. [Learn about restrictions on accelerator
2151 # configurations for
2152 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
2153 #
2154 # Set `workerConfig.imageUri` only if you build a custom image for your
2155 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
2156 # the value of `masterConfig.imageUri`. Learn more about
2157 # [configuring custom
2158 # containers](/ml-engine/docs/distributed-training-containers).
2159 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
2160 # [Learn about restrictions on accelerator configurations for
2161 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
2162 "count": "A String", # The number of accelerators to attach to each machine running the job.
2163 "type": "A String", # The type of accelerator to use.
2164      },
2165      "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
2166 # Registry. Learn more about [configuring custom
2167 # containers](/ml-engine/docs/distributed-training-containers).
2168     },
2169     "maxRunningTime": "A String", # Optional. The maximum job running time. The default is 7 days.
2170 "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
2171 #
2172 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
2173 # to a Compute Engine machine type. Learn about [restrictions on accelerator
2174 # configurations for
2175 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
2176 #
2177 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
2178 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more about
2179 # [configuring custom
2180 # containers](/ml-engine/docs/distributed-training-containers).
2181 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
2182 # [Learn about restrictions on accelerator configurations for
2183 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
2184 "count": "A String", # The number of accelerators to attach to each machine running the job.
2185 "type": "A String", # The type of accelerator to use.
2186 },
2187 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
2188 # Registry. Learn more about [configuring custom
2189 # containers](/ml-engine/docs/distributed-training-containers).
2190     },
2191     "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
2192 # job. Each replica in the cluster will be of the type specified in
2193 # `parameter_server_type`.
2194 #
2195      # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
2196 # set this value, you must also set `parameter_server_type`.
2197 #
2198 # The default value is zero.
2199    },
2200    "jobId": "A String", # Required. The user-specified id of the job.
2201 "labels": { # Optional. One or more labels that you can add, to organize your jobs.
2202 # Each label is a key-value pair, where both the key and the value are
2203 # arbitrary strings that you supply.
2204 # For more information, see the documentation on
2205 # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
2206 "a_key": "A String",
2207 },
2208 "state": "A String", # Output only. The detailed state of a job.
2209 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
2210 # prevent simultaneous updates of a job from overwriting each other.
2211 # It is strongly suggested that systems make use of the `etag` in the
2212 # read-modify-write cycle to perform job updates in order to avoid race
2213 # conditions: An `etag` is returned in the response to `GetJob`, and
2214 # systems are expected to put that etag in the request to `UpdateJob` to
2215 # ensure that their change will be applied to the same version of the job.
2216 "startTime": "A String", # Output only. When the job processing was started.
2217 "endTime": "A String", # Output only. When the job processing was completed.
2218 "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
2219 "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
2220 "nodeHours": 3.14, # Node hours used by the batch prediction job.
2221 "predictionCount": "A String", # The number of generated predictions.
2222 "errorCount": "A String", # The number of data instances which resulted in errors.
2223 },
2224 "createTime": "A String", # Output only. When the job was created.
2225 },
2226    ],
2227 }</pre>
2228</div>
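<p>The <code>trainingOutput</code> schema above nests the tuning results under <code>trials</code>, with each successful trial carrying a <code>finalMetric</code>. A minimal sketch of selecting the winning trial from such a response dict (plain Python; <code>best_trial</code> is a hypothetical helper, not part of the client library):</p>

```python
def best_trial(training_output, goal="MAXIMIZE"):
    """Return the best completed trial from a TrainingOutput-shaped dict.

    `training_output` follows the schema documented above; trials without a
    `finalMetric` (failed or still-running trials) are skipped.
    """
    scored = [t for t in training_output.get("trials", []) if "finalMetric" in t]
    if not scored:
        return None
    pick = max if goal == "MAXIMIZE" else min
    return pick(scored, key=lambda t: t["finalMetric"]["objectiveValue"])

# A response fragment shaped like the `trainingOutput` documented above.
output = {
    "isHyperparameterTuningJob": True,
    "trials": [
        {"trialId": "1",
         "hyperparameters": {"learning_rate": "0.1"},
         "finalMetric": {"trainingStep": "1000", "objectiveValue": 0.81}},
        {"trialId": "2",
         "hyperparameters": {"learning_rate": "0.01"},
         "finalMetric": {"trainingStep": "1000", "objectiveValue": 0.92}},
    ],
}
```

<p>With <code>goal="MINIMIZE"</code> the same helper selects the lowest objective value, matching the job's <code>HyperparameterSpec.goal</code>.</p>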
2229
2230<div class="method">
2231 <code class="details" id="list_next">list_next(previous_request, previous_response)</code>
2232 <pre>Retrieves the next page of results.
2233
2234Args:
2235 previous_request: The request for the previous page. (required)
2236 previous_response: The response from the request for the previous page. (required)
2237
2238Returns:
2239 A request object that you can call 'execute()' on to request the next
2240 page. Returns None if there are no more items in the collection.
2241 </pre>
2242</div>
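<p>The <code>list</code>/<code>list_next</code> pair implements standard page-token iteration: execute a request, process the page, then ask the collection for the next request until it returns <code>None</code>. A minimal sketch of that loop (the fake classes below stand in for a discovery-built <code>projects().jobs()</code> collection so the pattern can run offline; they are not part of the client library):</p>

```python
class _FakeRequest:
    """Stand-in for an HttpRequest: execute() returns one page of results."""
    def __init__(self, pages, index):
        self.pages, self.index = pages, index

    def execute(self):
        return self.pages[self.index]


class _FakeJobs:
    """Stand-in for projects().jobs() implementing the list/list_next protocol."""
    def __init__(self, pages):
        self._pages = pages

    def list(self, parent):
        return _FakeRequest(self._pages, 0)

    def list_next(self, previous_request, previous_response):
        nxt = previous_request.index + 1
        if nxt >= len(previous_request.pages):
            return None  # no more pages in the collection
        return _FakeRequest(previous_request.pages, nxt)


def iterate_jobs(jobs_api, parent):
    """Yield every job across all pages, following next-page links."""
    request = jobs_api.list(parent=parent)
    while request is not None:
        response = request.execute()
        for job in response.get("jobs", []):
            yield job
        request = jobs_api.list_next(previous_request=request,
                                     previous_response=response)


jobs_api = _FakeJobs([{"jobs": [{"jobId": "a"}]},
                      {"jobs": [{"jobId": "b"}]}])
all_jobs = list(iterate_jobs(jobs_api, "projects/my-project"))
```

<p>With a real client, <code>jobs_api</code> would be <code>build('ml', 'v1').projects().jobs()</code> and <code>iterate_jobs</code> would work unchanged, since the loop relies only on the <code>list</code>/<code>list_next</code>/<code>execute</code> protocol documented here.</p>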
2243
Bu Sun Kim715bd7f2019-06-14 16:50:42 -07002244<div class="method">
2245 <code class="details" id="patch">patch(name, body, updateMask=None, x__xgafv=None)</code>
2246 <pre>Updates a specific job resource.
2247
2248Currently the only supported fields to update are `labels`.
2249
2250Args:
2251 name: string, Required. The job name. (required)
2252 body: object, The request body. (required)
2253 The object takes the form of:
2254
2255{ # Represents a training or prediction job.
2256 "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
2257 "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
2258 "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
2259 # Only set for hyperparameter tuning jobs.
2260 "trials": [ # Results for individual Hyperparameter trials.
2261 # Only set for hyperparameter tuning jobs.
2262 { # Represents the result of a single hyperparameter tuning trial from a
2263 # training job. The TrainingOutput object that is returned on successful
2264 # completion of a training job with hyperparameter tuning includes a list
2265 # of HyperparameterOutput objects, one for each successful trial.
2266 "hyperparameters": { # The hyperparameters given to this trial.
2267 "a_key": "A String",
2268 },
2269 "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
2270 "trainingStep": "A String", # The global training step for this metric.
2271 "objectiveValue": 3.14, # The objective value at this training step.
2272 },
2273      "allMetrics": [ # All recorded objective metrics for this trial. This field is not currently
2274 # populated.
2275 { # An observed value of a metric.
2276 "trainingStep": "A String", # The global training step for this metric.
2277 "objectiveValue": 3.14, # The objective value at this training step.
2278 },
2279 ],
2280 "isTrialStoppedEarly": True or False, # True if the trial is stopped early.
2281 "trialId": "A String", # The trial id for these results.
2282 "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
2283 # Only set for trials of built-in algorithms jobs that have succeeded.
2284 "framework": "A String", # Framework on which the built-in algorithm was trained.
2285 "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
2286 # saves the trained model. Only set for successful jobs that don't use
2287 # hyperparameter tuning.
2288 "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
2289 # trained.
2290 "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
2291 },
2292 },
2293 ],
2294 "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
2295 "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job.
2296 "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
2297 "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning
2298 # trials. See
2299 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
2300 # for more information. Only set for hyperparameter tuning jobs.
2301 "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
2302 # Only set for built-in algorithms jobs.
2303 "framework": "A String", # Framework on which the built-in algorithm was trained.
2304 "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
2305 # saves the trained model. Only set for successful jobs that don't use
2306 # hyperparameter tuning.
2307 "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
2308 # trained.
2309 "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
2310 },
2311 },
2312 "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
2313 "modelName": "A String", # Use this field if you want to use the default version for the specified
2314 # model. The string must use the following format:
2315 #
2316 # `"projects/YOUR_PROJECT/models/YOUR_MODEL"`
2317 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch
2318 # prediction. If not set, AI Platform will pick the runtime version used
2319 # during the CreateVersion request for this model version, or choose the
2320 # latest stable version when model version information is not available
2321 # such as when the model is specified by uri.
2322 "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for
2323 # this job. Please refer to
2324 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
2325 # for information about how to use signatures.
2326 #
2327 # Defaults to
2328 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
2329 # , which is "serving_default".
2330 "batchSize": "A String", # Optional. Number of records per batch, defaults to 64.
2331 # The service will buffer batch_size number of records in memory before
2332      # invoking one TensorFlow prediction call internally. So take the record
2333 # size and memory available into consideration when setting this parameter.
2334 "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain
2335 # <a href="/storage/docs/gsutil/addlhelp/WildcardNames">wildcards</a>.
2336 "A String",
2337 ],
2338 "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
2339 # Defaults to 10 if not specified.
2340 "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
2341 # the model to use.
2342 "outputPath": "A String", # Required. The output Google Cloud Storage location.
2343 "dataFormat": "A String", # Required. The format of the input data files.
2344 "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
2345 # string is formatted the same way as `model_version`, with the addition
2346 # of the version information:
2347 #
2348 # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"`
2349 "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
2350 # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
2351 # for AI Platform services.
2352 "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON.
2353 },
2354 "trainingInput": { # Represents input parameters for a training job. When using the # Input parameters to create a training job.
2355 # gcloud command to submit your training job, you can specify
2356 # the input parameters as command-line arguments and/or in a YAML configuration
2357 # file referenced from the --config command-line argument. For
2358 # details, see the guide to
2359 # <a href="/ml-engine/docs/tensorflow/training-jobs">submitting a training
2360 # job</a>.
2361 "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
2362 # job's worker nodes.
2363 #
2364 # The supported values are the same as those described in the entry for
2365 # `masterType`.
2366 #
2367 # This value must be consistent with the category of machine type that
2368 # `masterType` uses. In other words, both must be AI Platform machine
2369 # types or both must be Compute Engine machine types.
2370 #
2371 # If you use `cloud_tpu` for this value, see special instructions for
2372 # [configuring a custom TPU
2373 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
2374 #
2375 # This value must be present when `scaleTier` is set to `CUSTOM` and
2376 # `workerCount` is greater than zero.
2377 "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
2378 #
2379 # You should only set `parameterServerConfig.acceleratorConfig` if
2380      # `parameterServerType` is set to a Compute Engine machine type. [Learn
2381 # about restrictions on accelerator configurations for
2382 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
2383 #
2384 # Set `parameterServerConfig.imageUri` only if you build a custom image for
2385 # your parameter server. If `parameterServerConfig.imageUri` has not been
2386 # set, AI Platform uses the value of `masterConfig.imageUri`.
2387 # Learn more about [configuring custom
2388 # containers](/ml-engine/docs/distributed-training-containers).
2389 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
2390 # [Learn about restrictions on accelerator configurations for
2391 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
2392 "count": "A String", # The number of accelerators to attach to each machine running the job.
2393 "type": "A String", # The type of accelerator to use.
2394 },
2395 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
2396 # Registry. Learn more about [configuring custom
2397 # containers](/ml-engine/docs/distributed-training-containers).
2398 },
2399 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. If not
2400 # set, AI Platform uses the default stable version, 1.0. For more
2401 # information, see the
2402 # <a href="/ml-engine/docs/runtime-version-list">runtime version list</a>
2403 # and
2404 # <a href="/ml-engine/docs/versioning">how to manage runtime versions</a>.
2405 "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
2406 # and parameter servers.
2407 "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
2408 # job's master worker.
2409 #
2410 # The following types are supported:
2411 #
2412 # <dl>
2413 # <dt>standard</dt>
2414 # <dd>
2415 # A basic machine configuration suitable for training simple models with
2416 # small to moderate datasets.
2417 # </dd>
2418 # <dt>large_model</dt>
2419 # <dd>
2420 # A machine with a lot of memory, specially suited for parameter servers
2421 # when your model is large (having many hidden layers or layers with very
2422 # large numbers of nodes).
2423 # </dd>
2424 # <dt>complex_model_s</dt>
2425 # <dd>
2426 # A machine suitable for the master and workers of the cluster when your
2427 # model requires more computation than the standard machine can handle
2428 # satisfactorily.
2429 # </dd>
2430 # <dt>complex_model_m</dt>
2431 # <dd>
2432 # A machine with roughly twice the number of cores and roughly double the
2433 # memory of <i>complex_model_s</i>.
2434 # </dd>
2435 # <dt>complex_model_l</dt>
2436 # <dd>
2437 # A machine with roughly twice the number of cores and roughly double the
2438 # memory of <i>complex_model_m</i>.
2439 # </dd>
2440 # <dt>standard_gpu</dt>
2441 # <dd>
2442 # A machine equivalent to <i>standard</i> that
2443 # also includes a single NVIDIA Tesla K80 GPU. See more about
2444 # <a href="/ml-engine/docs/tensorflow/using-gpus">using GPUs to
2445 # train your model</a>.
2446 # </dd>
2447 # <dt>complex_model_m_gpu</dt>
2448 # <dd>
2449 # A machine equivalent to <i>complex_model_m</i> that also includes
2450 # four NVIDIA Tesla K80 GPUs.
2451 # </dd>
2452 # <dt>complex_model_l_gpu</dt>
2453 # <dd>
2454 # A machine equivalent to <i>complex_model_l</i> that also includes
2455 # eight NVIDIA Tesla K80 GPUs.
2456 # </dd>
2457 # <dt>standard_p100</dt>
2458 # <dd>
2459 # A machine equivalent to <i>standard</i> that
2460 # also includes a single NVIDIA Tesla P100 GPU.
2461 # </dd>
2462 # <dt>complex_model_m_p100</dt>
2463 # <dd>
2464 # A machine equivalent to <i>complex_model_m</i> that also includes
2465 # four NVIDIA Tesla P100 GPUs.
2466 # </dd>
2467 # <dt>standard_v100</dt>
2468 # <dd>
2469 # A machine equivalent to <i>standard</i> that
2470 # also includes a single NVIDIA Tesla V100 GPU.
2471 # </dd>
2472 # <dt>large_model_v100</dt>
2473 # <dd>
2474 # A machine equivalent to <i>large_model</i> that
2475 # also includes a single NVIDIA Tesla V100 GPU.
2476 # </dd>
2477 # <dt>complex_model_m_v100</dt>
2478 # <dd>
2479 # A machine equivalent to <i>complex_model_m</i> that
2480 # also includes four NVIDIA Tesla V100 GPUs.
2481 # </dd>
2482 # <dt>complex_model_l_v100</dt>
2483 # <dd>
2484 # A machine equivalent to <i>complex_model_l</i> that
2485 # also includes eight NVIDIA Tesla V100 GPUs.
2486 # </dd>
2487 # <dt>cloud_tpu</dt>
2488 # <dd>
2489 # A TPU VM including one Cloud TPU. See more about
2490 # <a href="/ml-engine/docs/tensorflow/using-tpus">using TPUs to train
2491 # your model</a>.
2492 # </dd>
2493 # </dl>
2494 #
2495 # You may also use certain Compute Engine machine types directly in this
2496 # field. The following types are supported:
2497 #
2498 # - `n1-standard-4`
2499 # - `n1-standard-8`
2500 # - `n1-standard-16`
2501 # - `n1-standard-32`
2502 # - `n1-standard-64`
2503 # - `n1-standard-96`
2504 # - `n1-highmem-2`
2505 # - `n1-highmem-4`
2506 # - `n1-highmem-8`
2507 # - `n1-highmem-16`
2508 # - `n1-highmem-32`
2509 # - `n1-highmem-64`
2510 # - `n1-highmem-96`
2511 # - `n1-highcpu-16`
2512 # - `n1-highcpu-32`
2513 # - `n1-highcpu-64`
2514 # - `n1-highcpu-96`
2515 #
2516 # See more about [using Compute Engine machine
2517 # types](/ml-engine/docs/tensorflow/machine-types#compute-engine-machine-types).
2518 #
2519 # You must set this value when `scaleTier` is set to `CUSTOM`.
2520 "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
2521 "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
2522 # the specified hyperparameters.
2523 #
2524 # Defaults to one.
2525 "goal": "A String", # Required. The type of goal to use for tuning. Available types are
2526 # `MAXIMIZE` and `MINIMIZE`.
2527 #
2528 # Defaults to `MAXIMIZE`.
2529 "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter
2530 # tuning job.
2531 # Uses the default AI Platform hyperparameter tuning
2532 # algorithm if unspecified.
2533 "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing
2534 # the hyperparameter tuning job. You can specify this field to override the
2535 # default failing criteria for AI Platform hyperparameter tuning jobs.
2536 #
2537 # Defaults to zero, which means the service decides when a hyperparameter
2538 # job should fail.
2539 "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
2540 # early stopping.
2541 "resumePreviousJobId": "A String", # Optional. The prior hyperparameter tuning job id that users hope to
2542 # continue with. The job id will be used to find the corresponding Vizier
2543 # study GUID and resume the study.
2544 "params": [ # Required. The set of parameters to tune.
2545 { # Represents a single hyperparameter to optimize.
2546 "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
2547 # should be unset if type is `CATEGORICAL`. This value should be an
2548 # integer if type is `INTEGER`.
2549 "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
2550 "A String",
2551 ],
2552 "discreteValues": [ # Required if type is `DISCRETE`.
2553 # A list of feasible points.
2554 # The list should be in strictly increasing order. For instance, this
2555 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
2556 # should not contain more than 1,000 values.
2557 3.14,
2558 ],
2559 "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
2560 # a HyperparameterSpec message. E.g., "learning_rate".
2561 "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
2562 # should be unset if type is `CATEGORICAL`. This value should be an
2563 # integer if type is `INTEGER`.
2564 "type": "A String", # Required. The type of the parameter.
2565 "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
2566 # Leave unset for categorical parameters.
2567 # Some kind of scaling is strongly recommended for real or integral
2568 # parameters (e.g., `UNIT_LINEAR_SCALE`).
2569 },
2570 ],
2571 "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For
2572 # current versions of TensorFlow, this tag name should exactly match what is
2573 # shown in TensorBoard, including all scopes. For versions of TensorFlow
2574 # prior to 0.12, this should be only the tag passed to tf.Summary.
2575 # By default, "training/hptuning/metric" will be used.
2576 "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
2577 # You can reduce the time it takes to perform hyperparameter tuning by adding
2578 # trials in parallel. However, each trial only benefits from the information
2579 # gained in completed trials. That means that a trial does not get access to
2580 # the results of trials running at the same time, which could reduce the
2581 # quality of the overall optimization.
2582 #
2583 # Each trial will use the same scale tier and machine types.
2584 #
2585 # Defaults to one.
2586 },
2587 "region": "A String", # Required. The Google Compute Engine region to run the training job in.
2588 # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
2589 # for AI Platform services.
2590 "args": [ # Optional. Command line arguments to pass to the program.
2591 "A String",
2592 ],
2593 "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
2594 "pythonVersion": "A String", # Optional. The version of Python used in training. If not set, the default
2595 # version is '2.7'. Python '3.5' is available when `runtime_version` is set
2596 # to '1.4' and above. Python '2.7' works with all supported
2597 # <a href="/ml-engine/docs/runtime-version-list">runtime versions</a>.
2598 "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
2599 # and other data needed for training. This path is passed to your TensorFlow
2600 # program as the '--job-dir' command-line argument. The benefit of specifying
2601 # this field is that Cloud ML validates the path for use in training.
2602 "packageUris": [ # Required. The Google Cloud Storage location of the packages with
2603 # the training program and any additional dependencies.
2604 # The maximum number of package URIs is 100.
2605 "A String",
2606 ],
2607 "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
2608 # replica in the cluster will be of the type specified in `worker_type`.
2609 #
2610 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
2611 # set this value, you must also set `worker_type`.
2612 #
2613 # The default value is zero.
2614 "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
2615 # job's parameter server.
2616 #
2617 # The supported values are the same as those described in the entry for
2618 # `master_type`.
2619 #
2620 # This value must be consistent with the category of machine type that
2621 # `masterType` uses. In other words, both must be AI Platform machine
2622 # types or both must be Compute Engine machine types.
2623 #
2624 # This value must be present when `scaleTier` is set to `CUSTOM` and
2625 # `parameter_server_count` is greater than zero.
2626 "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
2627 #
2628 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
2629 # to a Compute Engine machine type. [Learn about restrictions on accelerator
2630 # configurations for
2631 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
2632 #
2633 # Set `workerConfig.imageUri` only if you build a custom image for your
2634 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
2635 # the value of `masterConfig.imageUri`. Learn more about
2636 # [configuring custom
2637 # containers](/ml-engine/docs/distributed-training-containers).
2638 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
2639 # [Learn about restrictions on accelerator configurations for
2640 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
2641 "count": "A String", # The number of accelerators to attach to each machine running the job.
2642 "type": "A String", # The type of accelerator to use.
2643 },
2644 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
2645 # Registry. Learn more about [configuring custom
2646 # containers](/ml-engine/docs/distributed-training-containers).
2647 },
2648 "maxRunningTime": "A String", # Optional. The maximum job running time. The default is 7 days.
2649 "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
2650 #
2651 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
2652 # to a Compute Engine machine type. Learn about [restrictions on accelerator
2653 # configurations for
2654 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
2655 #
2656 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
2657 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more about
2658 # [configuring custom
2659 # containers](/ml-engine/docs/distributed-training-containers).
2660 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
2661 # [Learn about restrictions on accelerator configurations for
2662 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
2663 "count": "A String", # The number of accelerators to attach to each machine running the job.
2664 "type": "A String", # The type of accelerator to use.
2665 },
2666 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
2667 # Registry. Learn more about [configuring custom
2668 # containers](/ml-engine/docs/distributed-training-containers).
2669 },
2670 "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
2671 # job. Each replica in the cluster will be of the type specified in
2672 # `parameter_server_type`.
2673 #
2674 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
2675 # set this value, you must also set `parameter_server_type`.
2676 #
2677 # The default value is zero.
2678 },
2679 "jobId": "A String", # Required. The user-specified id of the job.
2680 "labels": { # Optional. One or more labels that you can add to organize your jobs.
2681 # Each label is a key-value pair, where both the key and the value are
2682 # arbitrary strings that you supply.
2683 # For more information, see the documentation on
2684 # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
2685 "a_key": "A String",
2686 },
2687 "state": "A String", # Output only. The detailed state of a job.
2688 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
2689 # prevent simultaneous updates of a job from overwriting each other.
2690 # It is strongly suggested that systems make use of the `etag` in the
2691 # read-modify-write cycle to perform job updates in order to avoid race
2692 # conditions: An `etag` is returned in the response to `GetJob`, and
2693 # systems are expected to put that etag in the request to `UpdateJob` to
2694 # ensure that their change will be applied to the same version of the job.
2695 "startTime": "A String", # Output only. When the job processing was started.
2696 "endTime": "A String", # Output only. When the job processing was completed.
2697 "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
2698 "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
2699 "nodeHours": 3.14, # Node hours used by the batch prediction job.
2700 "predictionCount": "A String", # The number of generated predictions.
2701 "errorCount": "A String", # The number of data instances which resulted in errors.
2702 },
2703 "createTime": "A String", # Output only. When the job was created.
2704}
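The request body above covers both job kinds; only one of `trainingInput` or `predictionInput` is set on a given job. As a minimal sketch using a subset of the fields documented above (the project, bucket, and module names are hypothetical placeholders, not values from this reference):

```python
# Minimal training-job body built from the fields documented above.
# All concrete names (bucket, module, region) are illustrative only.
job_body = {
    "jobId": "my_training_job_001",
    "trainingInput": {
        "scaleTier": "CUSTOM",
        "masterType": "n1-standard-8",   # a supported Compute Engine machine type
        "workerType": "n1-standard-8",   # must match masterType's category
        "workerCount": "2",              # only valid when scaleTier is CUSTOM
        "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],
        "pythonModule": "trainer.task",
        "region": "us-central1",
        "jobDir": "gs://my-bucket/output",
    },
    "labels": {"team": "research"},
}
```

A dict shaped like this is what the method expects as its `body` argument; because `scaleTier` is `CUSTOM` and `workerCount` is nonzero, `workerType` must be present, as the field descriptions above require.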
2705
2706 updateMask: string, Required. Specifies the path, relative to `Job`, of the field to update.
2707To adopt etag mechanism, include `etag` field in the mask, and include the
2708`etag` value in your job resource.
2709
2710For example, to change the labels of a job, the `update_mask` parameter
2711would be specified as `labels`, `etag`, and the
2712`PATCH` request body would specify the new value, as follows:
2713 {
2714 "labels": {
2715 "owner": "Google",
2716 "color": "Blue"
2717 }
2718 "etag": "33a64df551425fcc55e4d42a148795d9f25f89d4"
2719 }
2720If `etag` matches the one on the server, the labels of the job will be
2721replaced with the given ones, and the server end `etag` will be
2722recalculated.
2723
2724Currently the only supported update masks are `labels` and `etag`.
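Putting the mask and body together: to adopt the etag mechanism, the mask names both `labels` and `etag`, and the body carries the new labels plus the etag returned by a prior `GetJob`. A hedged sketch follows; the job name and etag value are placeholders, and the commented-out call assumes an authenticated discovery client built elsewhere:

```python
# Arguments for patching a job's labels with optimistic concurrency control.
# The etag string below is a placeholder copied from the example above.
name = "projects/my-project/jobs/my_training_job_001"
update_mask = "labels,etag"  # the only masks currently supported
patch_body = {
    "labels": {"owner": "Google", "color": "Blue"},
    "etag": "33a64df551425fcc55e4d42a148795d9f25f89d4",  # from a prior GetJob
}

# With an authenticated client (not constructed here):
# ml = googleapiclient.discovery.build("ml", "v1")
# request = ml.projects().jobs().patch(
#     name=name, updateMask=update_mask, body=patch_body)
# response = request.execute()
```

If the etag in the body no longer matches the server's copy, the patch is rejected rather than silently overwriting a concurrent update.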
2725 x__xgafv: string, V1 error format.
2726 Allowed values
2727 1 - v1 error format
2728 2 - v2 error format
2729
2730Returns:
2731 An object of the form:
2732
2733 { # Represents a training or prediction job.
2734 "errorMessage": "A String", # Output only. The details of a failure or a cancellation.
2735 "trainingOutput": { # Represents results of a training job. Output only. # The current training job result.
2736 "completedTrialCount": "A String", # The number of hyperparameter tuning trials that completed successfully.
2737 # Only set for hyperparameter tuning jobs.
2738 "trials": [ # Results for individual Hyperparameter trials.
2739 # Only set for hyperparameter tuning jobs.
2740 { # Represents the result of a single hyperparameter tuning trial from a
2741 # training job. The TrainingOutput object that is returned on successful
2742 # completion of a training job with hyperparameter tuning includes a list
2743 # of HyperparameterOutput objects, one for each successful trial.
2744 "hyperparameters": { # The hyperparameters given to this trial.
2745 "a_key": "A String",
2746 },
2747 "finalMetric": { # An observed value of a metric. # The final objective metric seen for this trial.
2748 "trainingStep": "A String", # The global training step for this metric.
2749 "objectiveValue": 3.14, # The objective value at this training step.
2750 },
2751 "allMetrics": [ # All recorded objective metrics for this trial. This field is not currently
2752 # populated.
2753 { # An observed value of a metric.
2754 "trainingStep": "A String", # The global training step for this metric.
2755 "objectiveValue": 3.14, # The objective value at this training step.
2756 },
2757 ],
2758 "isTrialStoppedEarly": True or False, # True if the trial is stopped early.
2759 "trialId": "A String", # The trial id for these results.
2760 "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
2761 # Only set for trials of built-in algorithms jobs that have succeeded.
2762 "framework": "A String", # Framework on which the built-in algorithm was trained.
2763 "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
2764 # saves the trained model. Only set for successful jobs that don't use
2765 # hyperparameter tuning.
2766 "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
2767 # trained.
2768 "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
2769 },
2770 },
2771 ],
2772 "isHyperparameterTuningJob": True or False, # Whether this job is a hyperparameter tuning job.
2773 "isBuiltInAlgorithmJob": True or False, # Whether this job is a built-in Algorithm job.
2774 "consumedMLUnits": 3.14, # The amount of ML units consumed by the job.
2775 "hyperparameterMetricTag": "A String", # The TensorFlow summary tag name used for optimizing hyperparameter tuning
2776 # trials. See
2777 # [`HyperparameterSpec.hyperparameterMetricTag`](#HyperparameterSpec.FIELDS.hyperparameter_metric_tag)
2778 # for more information. Only set for hyperparameter tuning jobs.
2779 "builtInAlgorithmOutput": { # Represents output related to a built-in algorithm Job. # Details related to built-in algorithms jobs.
2780 # Only set for built-in algorithms jobs.
2781 "framework": "A String", # Framework on which the built-in algorithm was trained.
2782 "modelPath": "A String", # The Cloud Storage path to the `model/` directory where the training job
2783 # saves the trained model. Only set for successful jobs that don't use
2784 # hyperparameter tuning.
2785 "runtimeVersion": "A String", # AI Platform runtime version on which the built-in algorithm was
2786 # trained.
2787 "pythonVersion": "A String", # Python version on which the built-in algorithm was trained.
2788 },
2789 },
2790 "predictionInput": { # Represents input parameters for a prediction job. # Input parameters to create a prediction job.
2791 "modelName": "A String", # Use this field if you want to use the default version for the specified
2792 # model. The string must use the following format:
2793 #
2794 # `"projects/YOUR_PROJECT/models/YOUR_MODEL"`
2795 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this batch
2796 # prediction. If not set, AI Platform will pick the runtime version used
2797 # during the CreateVersion request for this model version, or choose the
2798 # latest stable version when model version information is not available
2799 # such as when the model is specified by uri.
2800 "signatureName": "A String", # Optional. The name of the signature defined in the SavedModel to use for
2801 # this job. Please refer to
2802 # [SavedModel](https://tensorflow.github.io/serving/serving_basic.html)
2803 # for information about how to use signatures.
2804 #
2805 # Defaults to
2806 # [DEFAULT_SERVING_SIGNATURE_DEF_KEY](https://www.tensorflow.org/api_docs/python/tf/saved_model/signature_constants)
2807 # , which is "serving_default".
2808 "batchSize": "A String", # Optional. Number of records per batch, defaults to 64.
2809 # The service will buffer batch_size number of records in memory before
2810 # invoking one TensorFlow prediction call internally. So take the record
2811 # size and memory available into consideration when setting this parameter.
2812 "inputPaths": [ # Required. The Cloud Storage location of the input data files. May contain
2813 # <a href="/storage/docs/gsutil/addlhelp/WildcardNames">wildcards</a>.
2814 "A String",
2815 ],
2816 "maxWorkerCount": "A String", # Optional. The maximum number of workers to be used for parallel processing.
2817 # Defaults to 10 if not specified.
2818 "uri": "A String", # Use this field if you want to specify a Google Cloud Storage path for
2819 # the model to use.
2820 "outputPath": "A String", # Required. The output Google Cloud Storage location.
2821 "dataFormat": "A String", # Required. The format of the input data files.
2822 "versionName": "A String", # Use this field if you want to specify a version of the model to use. The
2823 # string is formatted the same way as `model_version`, with the addition
2824 # of the version information:
2825 #
2826 # `"projects/YOUR_PROJECT/models/YOUR_MODEL/versions/YOUR_VERSION"`
2827 "region": "A String", # Required. The Google Compute Engine region to run the prediction job in.
2828 # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
2829 # for AI Platform services.
2830 "outputDataFormat": "A String", # Optional. Format of the output data files, defaults to JSON.
2831 },
2832 "trainingInput": { # Represents input parameters for a training job. When using the # Input parameters to create a training job.
2833 # gcloud command to submit your training job, you can specify
2834 # the input parameters as command-line arguments and/or in a YAML configuration
2835 # file referenced from the --config command-line argument. For
2836 # details, see the guide to
2837 # <a href="/ml-engine/docs/tensorflow/training-jobs">submitting a training
2838 # job</a>.
2839 "workerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
2840 # job's worker nodes.
2841 #
2842 # The supported values are the same as those described in the entry for
2843 # `masterType`.
2844 #
2845 # This value must be consistent with the category of machine type that
2846 # `masterType` uses. In other words, both must be AI Platform machine
2847 # types or both must be Compute Engine machine types.
2848 #
2849 # If you use `cloud_tpu` for this value, see special instructions for
2850 # [configuring a custom TPU
2851 # machine](/ml-engine/docs/tensorflow/using-tpus#configuring_a_custom_tpu_machine).
2852 #
2853 # This value must be present when `scaleTier` is set to `CUSTOM` and
2854 # `workerCount` is greater than zero.
2855 "parameterServerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for parameter servers.
2856 #
2857 # You should only set `parameterServerConfig.acceleratorConfig` if
2858 # `parameterServerType` is set to a Compute Engine machine type. [Learn
2859 # about restrictions on accelerator configurations for
2860 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
2861 #
2862 # Set `parameterServerConfig.imageUri` only if you build a custom image for
2863 # your parameter server. If `parameterServerConfig.imageUri` has not been
2864 # set, AI Platform uses the value of `masterConfig.imageUri`.
2865 # Learn more about [configuring custom
2866 # containers](/ml-engine/docs/distributed-training-containers).
2867 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
2868 # [Learn about restrictions on accelerator configurations for
2869 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
2870 "count": "A String", # The number of accelerators to attach to each machine running the job.
2871 "type": "A String", # The type of accelerator to use.
2872 },
2873 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
2874 # Registry. Learn more about [configuring custom
2875 # containers](/ml-engine/docs/distributed-training-containers).
2876 },
2877 "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for training. If not
2878 # set, AI Platform uses the default stable version, 1.0. For more
2879 # information, see the
2880 # <a href="/ml-engine/docs/runtime-version-list">runtime version list</a>
2881 # and
2882 # <a href="/ml-engine/docs/versioning">how to manage runtime versions</a>.
2883 "scaleTier": "A String", # Required. Specifies the machine types, the number of replicas for workers
2884 # and parameter servers.
2885 "masterType": "A String", # Optional. Specifies the type of virtual machine to use for your training
2886 # job's master worker.
2887 #
2888 # The following types are supported:
2889 #
2890 # <dl>
2891 # <dt>standard</dt>
2892 # <dd>
2893 # A basic machine configuration suitable for training simple models with
2894 # small to moderate datasets.
2895 # </dd>
2896 # <dt>large_model</dt>
2897 # <dd>
2898 # A machine with a lot of memory, specially suited for parameter servers
2899 # when your model is large (having many hidden layers or layers with very
2900 # large numbers of nodes).
2901 # </dd>
2902 # <dt>complex_model_s</dt>
2903 # <dd>
2904 # A machine suitable for the master and workers of the cluster when your
2905 # model requires more computation than the standard machine can handle
2906 # satisfactorily.
2907 # </dd>
2908 # <dt>complex_model_m</dt>
2909 # <dd>
2910 # A machine with roughly twice the number of cores and roughly double the
2911 # memory of <i>complex_model_s</i>.
2912 # </dd>
2913 # <dt>complex_model_l</dt>
2914 # <dd>
2915 # A machine with roughly twice the number of cores and roughly double the
2916 # memory of <i>complex_model_m</i>.
2917 # </dd>
2918 # <dt>standard_gpu</dt>
2919 # <dd>
2920 # A machine equivalent to <i>standard</i> that
2921 # also includes a single NVIDIA Tesla K80 GPU. See more about
2922 # <a href="/ml-engine/docs/tensorflow/using-gpus">using GPUs to
2923 # train your model</a>.
2924 # </dd>
2925 # <dt>complex_model_m_gpu</dt>
2926 # <dd>
2927 # A machine equivalent to <i>complex_model_m</i> that also includes
2928 # four NVIDIA Tesla K80 GPUs.
2929 # </dd>
2930 # <dt>complex_model_l_gpu</dt>
2931 # <dd>
2932 # A machine equivalent to <i>complex_model_l</i> that also includes
2933 # eight NVIDIA Tesla K80 GPUs.
2934 # </dd>
2935 # <dt>standard_p100</dt>
2936 # <dd>
2937 # A machine equivalent to <i>standard</i> that
2938 # also includes a single NVIDIA Tesla P100 GPU.
2939 # </dd>
2940 # <dt>complex_model_m_p100</dt>
2941 # <dd>
2942 # A machine equivalent to <i>complex_model_m</i> that also includes
2943 # four NVIDIA Tesla P100 GPUs.
2944 # </dd>
2945 # <dt>standard_v100</dt>
2946 # <dd>
2947 # A machine equivalent to <i>standard</i> that
2948 # also includes a single NVIDIA Tesla V100 GPU.
2949 # </dd>
2950 # <dt>large_model_v100</dt>
2951 # <dd>
2952 # A machine equivalent to <i>large_model</i> that
2953 # also includes a single NVIDIA Tesla V100 GPU.
2954 # </dd>
2955 # <dt>complex_model_m_v100</dt>
2956 # <dd>
2957 # A machine equivalent to <i>complex_model_m</i> that
2958 # also includes four NVIDIA Tesla V100 GPUs.
2959 # </dd>
2960 # <dt>complex_model_l_v100</dt>
2961 # <dd>
2962 # A machine equivalent to <i>complex_model_l</i> that
2963 # also includes eight NVIDIA Tesla V100 GPUs.
2964 # </dd>
2965 # <dt>cloud_tpu</dt>
2966 # <dd>
2967 # A TPU VM including one Cloud TPU. See more about
2968 # <a href="/ml-engine/docs/tensorflow/using-tpus">using TPUs to train
2969 # your model</a>.
2970 # </dd>
2971 # </dl>
2972 #
2973 # You may also use certain Compute Engine machine types directly in this
2974 # field. The following types are supported:
2975 #
2976 # - `n1-standard-4`
2977 # - `n1-standard-8`
2978 # - `n1-standard-16`
2979 # - `n1-standard-32`
2980 # - `n1-standard-64`
2981 # - `n1-standard-96`
2982 # - `n1-highmem-2`
2983 # - `n1-highmem-4`
2984 # - `n1-highmem-8`
2985 # - `n1-highmem-16`
2986 # - `n1-highmem-32`
2987 # - `n1-highmem-64`
2988 # - `n1-highmem-96`
2989 # - `n1-highcpu-16`
2990 # - `n1-highcpu-32`
2991 # - `n1-highcpu-64`
2992 # - `n1-highcpu-96`
2993 #
2994 # See more about [using Compute Engine machine
2995 # types](/ml-engine/docs/tensorflow/machine-types#compute-engine-machine-types).
2996 #
2997 # You must set this value when `scaleTier` is set to `CUSTOM`.
2998 "hyperparameters": { # Represents a set of hyperparameters to optimize. # Optional. The set of Hyperparameters to tune.
2999 "maxTrials": 42, # Optional. How many training trials should be attempted to optimize
3000 # the specified hyperparameters.
3001 #
3002 # Defaults to one.
3003 "goal": "A String", # Required. The type of goal to use for tuning. Available types are
3004 # `MAXIMIZE` and `MINIMIZE`.
3005 #
3006 # Defaults to `MAXIMIZE`.
3007 "algorithm": "A String", # Optional. The search algorithm specified for the hyperparameter
3008 # tuning job.
3009 # Uses the default AI Platform hyperparameter tuning
3010 # algorithm if unspecified.
3011 "maxFailedTrials": 42, # Optional. The number of failed trials that need to be seen before failing
3012 # the hyperparameter tuning job. You can specify this field to override the
3013 # default failing criteria for AI Platform hyperparameter tuning jobs.
3014 #
3015 # Defaults to zero, which means the service decides when a hyperparameter
3016 # job should fail.
3017 "enableTrialEarlyStopping": True or False, # Optional. Indicates if the hyperparameter tuning job enables auto trial
3018 # early stopping.
3019 "resumePreviousJobId": "A String", # Optional. The ID of a previous hyperparameter tuning job
3020 # to resume. The job ID is used to find the corresponding Vizier study
3021 # GUID and resume that study.
3022 "params": [ # Required. The set of parameters to tune.
3023 { # Represents a single hyperparameter to optimize.
3024 "maxValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
3025 # should be unset if type is `CATEGORICAL`. This value should be an
3026 # integer if type is `INTEGER`.
3027 "categoricalValues": [ # Required if type is `CATEGORICAL`. The list of possible categories.
3028 "A String",
3029 ],
3030 "discreteValues": [ # Required if type is `DISCRETE`.
3031 # A list of feasible points.
3032 # The list should be in strictly increasing order. For instance, this
3033 # parameter might have possible settings of 1.5, 2.5, and 4.0. This list
3034 # should not contain more than 1,000 values.
3035 3.14,
3036 ],
3037 "parameterName": "A String", # Required. The parameter name must be unique amongst all ParameterConfigs in
3038 # a HyperparameterSpec message. E.g., "learning_rate".
3039 "minValue": 3.14, # Required if type is `DOUBLE` or `INTEGER`. This field
3040 # should be unset if type is `CATEGORICAL`. This value should be an
3041 # integer if type is `INTEGER`.
3042 "type": "A String", # Required. The type of the parameter.
3043 "scaleType": "A String", # Optional. How the parameter should be scaled to the hypercube.
3044 # Leave unset for categorical parameters.
3045 # Some kind of scaling is strongly recommended for real or integral
3046 # parameters (e.g., `UNIT_LINEAR_SCALE`).
3047 },
3048 ],
3049 "hyperparameterMetricTag": "A String", # Optional. The TensorFlow summary tag name to use for optimizing trials. For
3050 # current versions of TensorFlow, this tag name should exactly match what is
3051 # shown in TensorBoard, including all scopes. For versions of TensorFlow
3052 # prior to 0.12, this should be only the tag passed to tf.Summary.
3053 # By default, "training/hptuning/metric" will be used.
3054 "maxParallelTrials": 42, # Optional. The number of training trials to run concurrently.
3055 # You can reduce the time it takes to perform hyperparameter tuning by adding
3056 # trials in parallel. However, each trial only benefits from the information
3057 # gained in completed trials. That means that a trial does not get access to
3058 # the results of trials running at the same time, which could reduce the
3059 # quality of the overall optimization.
3060 #
3061 # Each trial will use the same scale tier and machine types.
3062 #
3063 # Defaults to one.
3064 },
3065 "region": "A String", # Required. The Google Compute Engine region to run the training job in.
3066 # See the <a href="/ml-engine/docs/tensorflow/regions">available regions</a>
3067 # for AI Platform services.
3068 "args": [ # Optional. Command line arguments to pass to the program.
3069 "A String",
3070 ],
3071 "pythonModule": "A String", # Required. The Python module name to run after installing the packages.
3072 "pythonVersion": "A String", # Optional. The version of Python used in training. If not set, the default
3073 # version is '2.7'. Python '3.5' is available when `runtime_version` is set
3074 # to '1.4' or later. Python '2.7' works with all supported
3075 # <a href="/ml-engine/docs/runtime-version-list">runtime versions</a>.
3076 "jobDir": "A String", # Optional. A Google Cloud Storage path in which to store training outputs
3077 # and other data needed for training. This path is passed to your TensorFlow
3078 # program as the '--job-dir' command-line argument. The benefit of specifying
3079 # this field is that Cloud ML validates the path for use in training.
3080 "packageUris": [ # Required. The Google Cloud Storage location of the packages with
3081 # the training program and any additional dependencies.
3082 # The maximum number of package URIs is 100.
3083 "A String",
3084 ],
3085 "workerCount": "A String", # Optional. The number of worker replicas to use for the training job. Each
3086 # replica in the cluster will be of the type specified in `worker_type`.
3087 #
3088 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
3089 # set this value, you must also set `worker_type`.
3090 #
3091 # The default value is zero.
3092 "parameterServerType": "A String", # Optional. Specifies the type of virtual machine to use for your training
3093 # job's parameter server.
3094 #
3095 # The supported values are the same as those described in the entry for
3096 # `master_type`.
3097 #
3098 # This value must be consistent with the category of machine type that
3099 # `masterType` uses. In other words, both must be AI Platform machine
3100 # types or both must be Compute Engine machine types.
3101 #
3102 # This value must be present when `scaleTier` is set to `CUSTOM` and
3103 # `parameter_server_count` is greater than zero.
3104 "workerConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for workers.
3105 #
3106 # You should only set `workerConfig.acceleratorConfig` if `workerType` is set
3107 # to a Compute Engine machine type. [Learn about restrictions on accelerator
3108 # configurations for
3109 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
3110 #
3111 # Set `workerConfig.imageUri` only if you build a custom image for your
3112 # worker. If `workerConfig.imageUri` has not been set, AI Platform uses
3113 # the value of `masterConfig.imageUri`. Learn more about
3114 # [configuring custom
3115 # containers](/ml-engine/docs/distributed-training-containers).
3116 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3117 # [Learn about restrictions on accelerator configurations for
3118 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
3119 "count": "A String", # The number of accelerators to attach to each machine running the job.
3120 "type": "A String", # The type of accelerator to use.
3121 },
3122 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
3123 # Registry. Learn more about [configuring custom
3124 # containers](/ml-engine/docs/distributed-training-containers).
3125 },
3126 "maxRunningTime": "A String", # Optional. The maximum job running time. The default is 7 days.
3127 "masterConfig": { # Represents the configuration for a replica in a cluster. # Optional. The configuration for your master worker.
3128 #
3129 # You should only set `masterConfig.acceleratorConfig` if `masterType` is set
3130 # to a Compute Engine machine type. Learn about [restrictions on accelerator
3131 # configurations for
3132 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
3133 #
3134 # Set `masterConfig.imageUri` only if you build a custom image. Only one of
3135 # `masterConfig.imageUri` and `runtimeVersion` should be set. Learn more about
3136 # [configuring custom
3137 # containers](/ml-engine/docs/distributed-training-containers).
3138 "acceleratorConfig": { # Represents a hardware accelerator request config. # Represents the type and number of accelerators used by the replica.
3139 # [Learn about restrictions on accelerator configurations for
3140 # training.](/ml-engine/docs/tensorflow/using-gpus#compute-engine-machine-types-with-gpu)
3141 "count": "A String", # The number of accelerators to attach to each machine running the job.
3142 "type": "A String", # The type of accelerator to use.
3143 },
3144 "imageUri": "A String", # The Docker image to run on the replica. This image must be in Container
3145 # Registry. Learn more about [configuring custom
3146 # containers](/ml-engine/docs/distributed-training-containers).
3147 },
3148 "parameterServerCount": "A String", # Optional. The number of parameter server replicas to use for the training
3149 # job. Each replica in the cluster will be of the type specified in
3150 # `parameter_server_type`.
3151 #
3152 # This value can only be used when `scale_tier` is set to `CUSTOM`. If you
3153 # set this value, you must also set `parameter_server_type`.
3154 #
3155 # The default value is zero.
3156 },
3157 "jobId": "A String", # Required. The user-specified id of the job.
3158 "labels": { # Optional. One or more labels that you can add to organize your jobs.
3159 # Each label is a key-value pair, where both the key and the value are
3160 # arbitrary strings that you supply.
3161 # For more information, see the documentation on
3162 # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
3163 "a_key": "A String",
3164 },
3165 "state": "A String", # Output only. The detailed state of a job.
3166 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
3167 # prevent simultaneous updates of a job from overwriting each other.
3168 # It is strongly suggested that systems make use of the `etag` in the
3169 # read-modify-write cycle to perform job updates in order to avoid race
3170 # conditions: An `etag` is returned in the response to `GetJob`, and
3171 # systems are expected to put that etag in the request to `UpdateJob` to
3172 # ensure that their change will be applied to the same version of the job.
3173 "startTime": "A String", # Output only. When the job processing was started.
3174 "endTime": "A String", # Output only. When the job processing was completed.
3175 "predictionOutput": { # Represents results of a prediction job. # The current prediction job result.
3176 "outputPath": "A String", # The output Google Cloud Storage location provided at the job creation time.
3177 "nodeHours": 3.14, # Node hours used by the batch prediction job.
3178 "predictionCount": "A String", # The number of generated predictions.
3179 "errorCount": "A String", # The number of data instances which resulted in errors.
3180 },
3181 "createTime": "A String", # Output only. When the job was created.
3182 }</pre>
3183</div>
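To show how the `trainingInput` and `hyperparameters` fields documented above fit together, here is a minimal sketch that builds a job body as a plain Python dictionary. The job ID, region, bucket path, and module name are placeholder assumptions, not values from this reference; the machine type and scale type are drawn from the lists above.

```python
# Minimal sketch of a training-job body with hyperparameter tuning.
# All concrete values (job ID, bucket, module name) are placeholders.
def build_tuning_job(job_id, region, package_uri, module_name):
    """Return a job body shaped like the representation documented above."""
    return {
        "jobId": job_id,
        "trainingInput": {
            "scaleTier": "CUSTOM",
            "masterType": "n1-standard-8",  # one of the Compute Engine types listed above
            "region": region,
            "packageUris": [package_uri],
            "pythonModule": module_name,
            "hyperparameters": {
                "goal": "MAXIMIZE",
                "hyperparameterMetricTag": "training/hptuning/metric",
                "maxTrials": 10,
                "maxParallelTrials": 2,
                "params": [
                    {
                        "parameterName": "learning_rate",
                        "type": "DOUBLE",
                        "minValue": 0.0001,
                        "maxValue": 0.1,
                        "scaleType": "UNIT_LINEAR_SCALE",
                    }
                ],
            },
        },
    }

job = build_tuning_job("my_job_001", "us-central1",
                       "gs://my-bucket/trainer-0.1.tar.gz", "trainer.task")
```

With a discovery-based client, a body like this would be passed to `create`, e.g. `ml.projects().jobs().create(parent='projects/PROJECT_ID', body=job).execute()` (project ID placeholder).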
3184
3185<div class="method">
3186 <code class="details" id="setIamPolicy">setIamPolicy(resource, body, x__xgafv=None)</code>
3187 <pre>Sets the access control policy on the specified resource. Replaces any
3188existing policy.
3189
3190Args:
3191 resource: string, REQUIRED: The resource for which the policy is being specified.
3192See the operation documentation for the appropriate value for this field. (required)
3193 body: object, The request body. (required)
3194 The object takes the form of:
3195
3196{ # Request message for `SetIamPolicy` method.
3197 "policy": { # Defines an Identity and Access Management (IAM) policy. It is used to # REQUIRED: The complete policy to be applied to the `resource`. The size of
3198 # the policy is limited to a few tens of kilobytes. An empty policy is a
3199 # valid policy, but certain Cloud Platform services (such as Projects)
3200 # might reject it.
3201 # specify access control policies for Cloud Platform resources.
3202 #
3203 #
3204 # A `Policy` consists of a list of `bindings`. A `binding` binds a list of
3205 # `members` to a `role`, where the members can be user accounts, Google groups,
3206 # Google domains, and service accounts. A `role` is a named list of permissions
3207 # defined by IAM.
3208 #
3209 # **JSON Example**
3210 #
3211 # {
3212 # "bindings": [
3213 # {
3214 # "role": "roles/owner",
3215 # "members": [
3216 # "user:mike@example.com",
3217 # "group:admins@example.com",
3218 # "domain:google.com",
3219 # "serviceAccount:my-other-app@appspot.gserviceaccount.com"
3220 # ]
3221 # },
3222 # {
3223 # "role": "roles/viewer",
3224 # "members": ["user:sean@example.com"]
3225 # }
3226 # ]
3227 # }
3228 #
3229 # **YAML Example**
3230 #
3231 # bindings:
3232 # - members:
3233 # - user:mike@example.com
3234 # - group:admins@example.com
3235 # - domain:google.com
3236 # - serviceAccount:my-other-app@appspot.gserviceaccount.com
3237 # role: roles/owner
3238 # - members:
3239 # - user:sean@example.com
3240 # role: roles/viewer
3241 #
3242 #
3243 # For a description of IAM and its features, see the
3244 # [IAM developer's guide](https://cloud.google.com/iam/docs).
3245 "bindings": [ # Associates a list of `members` to a `role`.
3246 # `bindings` with no members will result in an error.
3247 { # Associates `members` with a `role`.
3248 "role": "A String", # Role that is assigned to `members`.
3249 # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
3250 "members": [ # Specifies the identities requesting access for a Cloud Platform resource.
3251 # `members` can have the following values:
3252 #
3253 # * `allUsers`: A special identifier that represents anyone who is
3254 # on the internet, with or without a Google account.
3255 #
3256 # * `allAuthenticatedUsers`: A special identifier that represents anyone
3257 # who is authenticated with a Google account or a service account.
3258 #
3259 # * `user:{emailid}`: An email address that represents a specific Google
3260 # account. For example, `alice@gmail.com` .
3261 #
3262 #
3263 # * `serviceAccount:{emailid}`: An email address that represents a service
3264 # account. For example, `my-other-app@appspot.gserviceaccount.com`.
3265 #
3266 # * `group:{emailid}`: An email address that represents a Google group.
3267 # For example, `admins@example.com`.
3268 #
3269 #
3270 # * `domain:{domain}`: The G Suite domain (primary) that represents all the
3271 # users of that domain. For example, `google.com` or `example.com`.
3272 #
3273 "A String",
3274 ],
3275 "condition": { # Represents an expression text. Example: # The condition that is associated with this binding.
3276 # NOTE: An unsatisfied condition will not allow user access via current
3277 # binding. Different bindings, including their conditions, are examined
3278 # independently.
3279 #
3280 # title: "User account presence"
3281 # description: "Determines whether the request has a user account"
3282 # expression: "size(request.user) > 0"
3283 "description": "A String", # An optional description of the expression. This is a longer text which
3284 # describes the expression, e.g. when hovered over it in a UI.
3285 "expression": "A String", # Textual representation of an expression in
3286 # Common Expression Language syntax.
3287 #
3288 # The application context of the containing message determines which
3289 # well-known feature set of CEL is supported.
3290 "location": "A String", # An optional string indicating the location of the expression for error
3291 # reporting, e.g. a file name and a position in the file.
3292 "title": "A String", # An optional title for the expression, i.e. a short string describing
3293 # its purpose. This can be used, e.g., in UIs which allow users to enter the
3294 # expression.
3295 },
3296 },
3297 ],
3298 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
3299 # prevent simultaneous updates of a policy from overwriting each other.
3300 # It is strongly suggested that systems make use of the `etag` in the
3301 # read-modify-write cycle to perform policy updates in order to avoid race
3302 # conditions: An `etag` is returned in the response to `getIamPolicy`, and
3303 # systems are expected to put that etag in the request to `setIamPolicy` to
3304 # ensure that their change will be applied to the same version of the policy.
3305 #
3306 # If no `etag` is provided in the call to `setIamPolicy`, then the existing
3307 # policy is overwritten blindly.
3308 "version": 42, # Deprecated.
3309 "auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
3310 { # Specifies the audit configuration for a service.
3311 # The configuration determines which permission types are logged, and what
3312 # identities, if any, are exempted from logging.
3313 # An AuditConfig must have one or more AuditLogConfigs.
3314 #
3315 # If there are AuditConfigs for both `allServices` and a specific service,
3316 # the union of the two AuditConfigs is used for that service: the log_types
3317 # specified in each AuditConfig are enabled, and the exempted_members in each
3318 # AuditLogConfig are exempted.
3319 #
3320 # Example Policy with multiple AuditConfigs:
3321 #
3322 # {
3323 # "audit_configs": [
3324 # {
3325 # "service": "allServices"
3326 # "audit_log_configs": [
3327 # {
3328 # "log_type": "DATA_READ",
3329 # "exempted_members": [
3330 # "user:foo@gmail.com"
3331 # ]
3332 # },
3333 # {
3334 # "log_type": "DATA_WRITE",
3335 # },
3336 # {
3337 # "log_type": "ADMIN_READ",
3338 # }
3339 # ]
3340 # },
3341 # {
3342 # "service": "fooservice.googleapis.com"
3343 # "audit_log_configs": [
3344 # {
3345 # "log_type": "DATA_READ",
3346 # },
3347 # {
3348 # "log_type": "DATA_WRITE",
3349 # "exempted_members": [
3350 # "user:bar@gmail.com"
3351 # ]
3352 # }
3353 # ]
3354 # }
3355 # ]
3356 # }
3357 #
3358 # For fooservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
3359 # logging. It also exempts foo@gmail.com from DATA_READ logging, and
3360 # bar@gmail.com from DATA_WRITE logging.
3361 "auditLogConfigs": [ # The configuration for logging of each type of permission.
3362 { # Provides the configuration for logging a type of permissions.
3363 # Example:
3364 #
3365 # {
3366 # "audit_log_configs": [
3367 # {
3368 # "log_type": "DATA_READ",
3369 # "exempted_members": [
3370 # "user:foo@gmail.com"
3371 # ]
3372 # },
3373 # {
3374 # "log_type": "DATA_WRITE",
3375 # }
3376 # ]
3377 # }
3378 #
3379 # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
3380 # foo@gmail.com from DATA_READ logging.
3381 "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
3382 # permission.
3383 # Follows the same format of Binding.members.
3384 "A String",
3385 ],
3386 "logType": "A String", # The log type that this config enables.
3387 },
3388 ],
3389 "service": "A String", # Specifies a service that will be enabled for audit logging.
3390 # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
3391 # `allServices` is a special value that covers all services.
3392 },
3393 ],
3394 },
3395 "updateMask": "A String", # OPTIONAL: A FieldMask specifying which fields of the policy to modify. Only
3396 # the fields in the mask will be modified. If no mask is provided, the
3397 # following default mask is used:
3398 # paths: "bindings, etag"
3399 # This field is only used by Cloud IAM.
3400 }
3401
3402 x__xgafv: string, V1 error format.
3403 Allowed values
3404 1 - v1 error format
3405 2 - v2 error format
3406
3407Returns:
3408 An object of the form:
3409
3410 { # Defines an Identity and Access Management (IAM) policy. It is used to
3411 # specify access control policies for Cloud Platform resources.
3412 #
3413 #
3414 # A `Policy` consists of a list of `bindings`. A `binding` binds a list of
3415 # `members` to a `role`, where the members can be user accounts, Google groups,
3416 # Google domains, and service accounts. A `role` is a named list of permissions
3417 # defined by IAM.
3418 #
3419 # **JSON Example**
3420 #
3421 # {
3422 # "bindings": [
3423 # {
3424 # "role": "roles/owner",
3425 # "members": [
3426 # "user:mike@example.com",
3427 # "group:admins@example.com",
3428 # "domain:google.com",
3429 # "serviceAccount:my-other-app@appspot.gserviceaccount.com"
3430 # ]
3431 # },
3432 # {
3433 # "role": "roles/viewer",
3434 # "members": ["user:sean@example.com"]
3435 # }
3436 # ]
3437 # }
3438 #
3439 # **YAML Example**
3440 #
3441 # bindings:
3442 # - members:
3443 # - user:mike@example.com
3444 # - group:admins@example.com
3445 # - domain:google.com
3446 # - serviceAccount:my-other-app@appspot.gserviceaccount.com
3447 # role: roles/owner
3448 # - members:
3449 # - user:sean@example.com
3450 # role: roles/viewer
3451 #
3452 #
3453 # For a description of IAM and its features, see the
3454 # [IAM developer's guide](https://cloud.google.com/iam/docs).
3455 "bindings": [ # Associates a list of `members` to a `role`.
3456 # `bindings` with no members will result in an error.
3457 { # Associates `members` with a `role`.
3458 "role": "A String", # Role that is assigned to `members`.
3459 # For example, `roles/viewer`, `roles/editor`, or `roles/owner`.
3460 "members": [ # Specifies the identities requesting access for a Cloud Platform resource.
3461 # `members` can have the following values:
3462 #
3463 # * `allUsers`: A special identifier that represents anyone who is
3464 # on the internet, with or without a Google account.
3465 #
3466 # * `allAuthenticatedUsers`: A special identifier that represents anyone
3467 # who is authenticated with a Google account or a service account.
3468 #
3469 # * `user:{emailid}`: An email address that represents a specific Google
3470 # account. For example, `alice@gmail.com` .
3471 #
3472 #
3473 # * `serviceAccount:{emailid}`: An email address that represents a service
3474 # account. For example, `my-other-app@appspot.gserviceaccount.com`.
3475 #
3476 # * `group:{emailid}`: An email address that represents a Google group.
3477 # For example, `admins@example.com`.
3478 #
3479 #
3480 # * `domain:{domain}`: The G Suite domain (primary) that represents all the
3481 # users of that domain. For example, `google.com` or `example.com`.
3482 #
3483 "A String",
3484 ],
3485 "condition": { # Represents an expression text. Example: # The condition that is associated with this binding.
3486 # NOTE: An unsatisfied condition will not allow user access via current
3487 # binding. Different bindings, including their conditions, are examined
3488 # independently.
3489 #
3490 # title: "User account presence"
3491 # description: "Determines whether the request has a user account"
3492 # expression: "size(request.user) > 0"
3493 "description": "A String", # An optional description of the expression. This is a longer text which
3494 # describes the expression, e.g. when hovered over it in a UI.
3495 "expression": "A String", # Textual representation of an expression in
3496 # Common Expression Language syntax.
3497 #
3498 # The application context of the containing message determines which
3499 # well-known feature set of CEL is supported.
3500 "location": "A String", # An optional string indicating the location of the expression for error
3501 # reporting, e.g. a file name and a position in the file.
3502 "title": "A String", # An optional title for the expression, i.e. a short string describing
3503 # its purpose. This can be used, e.g., in UIs which allow users to enter the
3504 # expression.
3505 },
3506 },
3507 ],
3508 "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
3509 # prevent simultaneous updates of a policy from overwriting each other.
3510 # It is strongly suggested that systems make use of the `etag` in the
3511 # read-modify-write cycle to perform policy updates in order to avoid race
3512 # conditions: An `etag` is returned in the response to `getIamPolicy`, and
3513 # systems are expected to put that etag in the request to `setIamPolicy` to
3514 # ensure that their change will be applied to the same version of the policy.
3515 #
3516 # If no `etag` is provided in the call to `setIamPolicy`, then the existing
3517 # policy is overwritten blindly.
3518 "version": 42, # Deprecated.
3519 "auditConfigs": [ # Specifies cloud audit logging configuration for this policy.
3520 { # Specifies the audit configuration for a service.
3521 # The configuration determines which permission types are logged, and what
3522 # identities, if any, are exempted from logging.
3523 # An AuditConfig must have one or more AuditLogConfigs.
3524 #
3525 # If there are AuditConfigs for both `allServices` and a specific service,
3526 # the union of the two AuditConfigs is used for that service: the log_types
3527 # specified in each AuditConfig are enabled, and the exempted_members in each
3528 # AuditLogConfig are exempted.
3529 #
3530 # Example Policy with multiple AuditConfigs:
3531 #
3532 # {
3533 # "audit_configs": [
3534 # {
3535 # "service": "allServices"
3536 # "audit_log_configs": [
3537 # {
3538 # "log_type": "DATA_READ",
3539 # "exempted_members": [
3540 # "user:foo@gmail.com"
3541 # ]
3542 # },
3543 # {
3544 # "log_type": "DATA_WRITE",
3545 # },
3546 # {
3547 # "log_type": "ADMIN_READ",
3548 # }
3549 # ]
3550 # },
3551 # {
3552 # "service": "fooservice.googleapis.com"
3553 # "audit_log_configs": [
3554 # {
3555 # "log_type": "DATA_READ",
3556 # },
3557 # {
3558 # "log_type": "DATA_WRITE",
3559 # "exempted_members": [
3560 # "user:bar@gmail.com"
3561 # ]
3562 # }
3563 # ]
3564 # }
3565 # ]
3566 # }
3567 #
3568 # For fooservice, this policy enables DATA_READ, DATA_WRITE and ADMIN_READ
3569 # logging. It also exempts foo@gmail.com from DATA_READ logging, and
3570 # bar@gmail.com from DATA_WRITE logging.
3571 "auditLogConfigs": [ # The configuration for logging of each type of permission.
3572 { # Provides the configuration for logging a type of permissions.
3573 # Example:
3574 #
3575 # {
3576 # "audit_log_configs": [
3577 # {
3578 # "log_type": "DATA_READ",
3579 # "exempted_members": [
3580 # "user:foo@gmail.com"
3581 # ]
3582 # },
3583 # {
3584 # "log_type": "DATA_WRITE",
3585 # }
3586 # ]
3587 # }
3588 #
3589 # This enables 'DATA_READ' and 'DATA_WRITE' logging, while exempting
3590 # foo@gmail.com from DATA_READ logging.
3591 "exemptedMembers": [ # Specifies the identities that do not cause logging for this type of
3592 # permission.
3593 # Follows the same format of Binding.members.
3594 "A String",
3595 ],
3596 "logType": "A String", # The log type that this config enables.
3597 },
3598 ],
3599 "service": "A String", # Specifies a service that will be enabled for audit logging.
3600 # For example, `storage.googleapis.com`, `cloudsql.googleapis.com`.
3601 # `allServices` is a special value that covers all services.
3602 },
3603 ],
3604 }</pre>
3605</div>
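The `etag` read-modify-write cycle described above can be sketched without any network calls: fetch a policy, modify a copy, and send the modified policy back with the original `etag` so the service can detect concurrent updates. The helper below is hypothetical (not part of the client library), and the sample policy and etag values are illustrative only.

```python
import copy

def add_binding(policy, role, member):
    """Return a deep copy of `policy` with `member` added under `role`,
    preserving the etag for the subsequent setIamPolicy request."""
    updated = copy.deepcopy(policy)
    for binding in updated.setdefault("bindings", []):
        if binding["role"] == role:
            if member not in binding["members"]:
                binding["members"].append(member)
            return updated
    updated["bindings"].append({"role": role, "members": [member]})
    return updated

# Shape of what a getIamPolicy call might return (illustrative values):
current = {
    "etag": "BwWabc123=",
    "bindings": [{"role": "roles/viewer", "members": ["user:sean@example.com"]}],
}
updated = add_binding(current, "roles/viewer", "user:alice@example.com")
# The unchanged etag is carried into the SetIamPolicy request body:
request_body = {"policy": updated}
```

If the policy changed on the server between the read and the write, the stale `etag` lets the service reject the update instead of overwriting the newer version blindly.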
3606
3607<div class="method">
3608 <code class="details" id="testIamPermissions">testIamPermissions(resource, body, x__xgafv=None)</code>
3609 <pre>Returns permissions that a caller has on the specified resource.
3610If the resource does not exist, this will return an empty set of
3611permissions, not a NOT_FOUND error.
3612
3613Note: This operation is designed to be used for building permission-aware
3614UIs and command-line tools, not for authorization checking. This operation
3615may "fail open" without warning.
3616
3617Args:
3618 resource: string, REQUIRED: The resource for which the policy detail is being requested.
3619See the operation documentation for the appropriate value for this field. (required)
3620 body: object, The request body. (required)
3621 The object takes the form of:
3622
3623{ # Request message for `TestIamPermissions` method.
3624 "permissions": [ # The set of permissions to check for the `resource`. Permissions with
3625 # wildcards (such as '*' or 'storage.*') are not allowed. For more
3626 # information see
3627 # [IAM Overview](https://cloud.google.com/iam/docs/overview#permissions).
3628 "A String",
3629 ],
3630 }
3631
3632 x__xgafv: string, V1 error format.
3633 Allowed values
3634 1 - v1 error format
3635 2 - v2 error format
3636
3637Returns:
3638 An object of the form:
3639
3640 { # Response message for `TestIamPermissions` method.
3641 "permissions": [ # A subset of `TestPermissionsRequest.permissions` that the caller is
3642 # allowed.
3643 "A String",
3644 ],
3645 }</pre>
3646</div>
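Since the response's `permissions` list is a subset of those sent in the request, a permission-aware UI typically intersects the two to decide which controls to show. The helper and permission names below are hypothetical illustrations, not part of the client library.

```python
def allowed_subset(requested, response):
    """Given the permissions sent in a TestIamPermissions request and the
    response body, return the ones that were allowed, in request order."""
    granted = set(response.get("permissions", []))
    return [p for p in requested if p in granted]

requested = ["ml.jobs.get", "ml.jobs.cancel"]   # illustrative permission names
response = {"permissions": ["ml.jobs.get"]}      # shape of an API response
allowed = allowed_subset(requested, response)
```

Note the caveat above: because this operation may "fail open", the result is suitable for tailoring UIs, not for enforcing authorization.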
3647
3648</body></html>