<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="ml_v1.html">Cloud Machine Learning Engine</a> . <a href="ml_v1.projects.html">projects</a> . <a href="ml_v1.projects.models.html">models</a> . <a href="ml_v1.projects.models.versions.html">versions</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
<code><a href="#create">create(parent, body, x__xgafv=None)</a></code></p>
<p class="firstline">Creates a new version of a model from a trained TensorFlow model.</p>
<p class="toc_element">
<code><a href="#delete">delete(name, x__xgafv=None)</a></code></p>
<p class="firstline">Deletes a model version.</p>
<p class="toc_element">
<code><a href="#get">get(name, x__xgafv=None)</a></code></p>
<p class="firstline">Gets information about a model version.</p>
<p class="toc_element">
<code><a href="#list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</a></code></p>
<p class="firstline">Gets basic information about all the versions of a model.</p>
<p class="toc_element">
<code><a href="#list_next">list_next(previous_request, previous_response)</a></code></p>
<p class="firstline">Retrieves the next page of results.</p>
<p class="toc_element">
<code><a href="#patch">patch(name, body, updateMask=None, x__xgafv=None)</a></code></p>
<p class="firstline">Updates the specified Version resource.</p>
<p class="toc_element">
<code><a href="#setDefault">setDefault(name, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Designates a version to be the default for the model.</p>
<h3>Method Details</h3>
<div class="method">
<code class="details" id="create">create(parent, body, x__xgafv=None)</code>
<pre>Creates a new version of a model from a trained TensorFlow model.

If the version created in the cloud by this call is the first deployed
version of the specified model, it will be made the default version of the
model. When you add a version to a model that already has one or more
versions, the default version does not automatically change. If you want a
new version to be the default, you must call
[projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).

Args:
parent: string, Required. The name of the model. (required)
body: object, The request body. (required)
The object takes the form of:

{ # Represents a version of the model.
#
# Each version is a trained model deployed in the cloud, ready to handle
# prediction requests. A model can have multiple versions. You can get
# information about all of the versions of a given model by calling
# [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
"errorMessage": "A String", # Output only. The details of a failure or a cancellation.
"labels": { # Optional. One or more labels that you can add, to organize your model
# versions. Each label is a key-value pair, where both the key and the value
# are arbitrary strings that you supply.
# For more information, see the documentation on
# <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
"a_key": "A String",
},
"machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
# applies to online prediction service.
# <dl>
# <dt>mls1-c1-m2</dt>
# <dd>
# The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated
# name for this machine type is "mls1-highmem-1".
# </dd>
# <dt>mls1-c4-m2</dt>
# <dd>
# In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The
# deprecated name for this machine type is "mls1-highcpu-4".
# </dd>
# </dl>
"description": "A String", # Optional. The description specified for the version when it was created.
"runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment.
# If not set, AI Platform uses the default stable version, 1.0. For more
# information, see the
# [runtime version list](/ml-engine/docs/runtime-version-list) and
# [how to manage runtime versions](/ml-engine/docs/versioning).
"manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
# model. You should generally use `auto_scaling` with an appropriate
# `min_nodes` instead, but this option is available if you want more
# predictable billing. Beware that latency and error rates will increase
# if the traffic exceeds the capability of the system to serve it based
# on the selected number of nodes.
"nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
# starting from the time the model is deployed, so the cost of operating
# this model will be proportional to `nodes` * number of hours since
# last billing cycle plus the cost for each prediction performed.
},
"predictionClass": "A String", # Optional. The fully qualified name
# (<var>module_name</var>.<var>class_name</var>) of a class that implements
# the Predictor interface described in this reference field. The module
# containing this class should be included in a package provided to the
# [`packageUris` field](#Version.FIELDS.package_uris).
#
# Specify this field if and only if you are deploying a [custom prediction
# routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
# If you specify this field, you must set
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
#
# The following code sample provides the Predictor interface:
#
# ```py
# class Predictor(object):
# """Interface for constructing custom predictors."""
#
# def predict(self, instances, **kwargs):
# """Performs custom prediction.
#
# Instances are the decoded values from the request. They have already
# been deserialized from JSON.
#
# Args:
# instances: A list of prediction input instances.
# **kwargs: A dictionary of keyword args provided as additional
# fields on the predict request body.
#
# Returns:
# A list of outputs containing the prediction results. This list must
# be JSON serializable.
# """
# raise NotImplementedError()
#
# @classmethod
# def from_path(cls, model_dir):
# """Creates an instance of Predictor using the given path.
#
# Loading of the predictor should be done in this method.
#
# Args:
# model_dir: The local directory that contains the exported model
# file along with any additional files uploaded when creating the
# version resource.
#
# Returns:
# An instance implementing this Predictor class.
# """
# raise NotImplementedError()
# ```
#
# Learn more about [the Predictor interface and custom prediction
# routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
"autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
# response to increases and decreases in traffic. Care should be
# taken to ramp up traffic according to the model's ability to scale
# or you will start seeing increases in latency and 429 response codes.
"minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
# nodes are always up, starting from the time the model is deployed.
# Therefore, the cost of operating this model will be at least
# `rate` * `min_nodes` * number of hours since last billing cycle,
# where `rate` is the cost per node-hour as documented in the
# [pricing guide](/ml-engine/docs/pricing),
# even if no predictions are performed. There is additional cost for each
# prediction performed.
#
# Unlike manual scaling, if the load gets too heavy for the nodes
# that are up, the service will automatically add nodes to handle the
# increased load as well as scale back as traffic drops, always maintaining
# at least `min_nodes`. You will be charged for the time in which additional
# nodes are used.
#
# If not specified, `min_nodes` defaults to 0, in which case, when traffic
# to a model stops (and after a cool-down period), nodes will be shut down
# and no charges will be incurred until traffic to the model resumes.
#
# You can set `min_nodes` when creating the model version, and you can also
# update `min_nodes` for an existing version:
# <pre>
# update_body.json:
# {
# 'autoScaling': {
# 'minNodes': 5
# }
# }
# </pre>
# HTTP request:
# <pre>
# PATCH
# https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
# -d @./update_body.json
# </pre>
},
"serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
"state": "A String", # Output only. The state of a version.
"pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default
# version is '2.7'. Python '3.5' is available when `runtime_version` is set
# to '1.4' and above. Python '2.7' works with all supported runtime versions.
"framework": "A String", # Optional. The machine learning framework AI Platform uses to train
# this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
# `XGBOOST`. If you do not specify a framework, AI Platform
# will analyze files in the deployment_uri to determine a framework. If you
# choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
# of the model to 1.4 or greater.
#
# Do **not** specify a framework if you're deploying a [custom
# prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
"packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
# prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
# or [scikit-learn pipelines with custom
# code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
#
# For a custom prediction routine, one of these packages must contain your
# Predictor class (see
# [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
# include any dependencies used by your Predictor or scikit-learn pipeline
# that are not already included in your selected [runtime
# version](/ml-engine/docs/tensorflow/runtime-version-list).
#
# If you specify this field, you must also set
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
"A String",
],
"etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
# prevent simultaneous updates of a model from overwriting each other.
# It is strongly suggested that systems make use of the `etag` in the
# read-modify-write cycle to perform model updates in order to avoid race
# conditions: An `etag` is returned in the response to `GetVersion`, and
# systems are expected to put that etag in the request to `UpdateVersion` to
# ensure that their change will be applied to the model as intended.
"lastUseTime": "A String", # Output only. The time the version was last used for prediction.
"deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
# create the version. See the
# [guide to model
# deployment](/ml-engine/docs/tensorflow/deploying-models) for more
# information.
#
# When passing Version to
# [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
# the model service uses the specified location as the source of the model.
# Once deployed, the model version is hosted by the prediction service, so
# this location is useful only as a historical record.
# The total number of model files can't exceed 1000.
"createTime": "A String", # Output only. The time the version was created.
"isDefault": True or False, # Output only. If true, this version will be used to handle prediction
# requests that do not specify a version.
#
# You can change the default version by calling
# [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
"name": "A String", # Required. The name specified for the version when it was created.
#
# The version name must be unique within the model it is created in.
}

x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format

Returns:
An object of the form:

{ # This resource represents a long-running operation that is the result of a
# network API call.
"metadata": { # Service-specific metadata associated with the operation. It typically
# contains progress information and common metadata such as create time.
# Some services might not provide such metadata. Any method that returns a
# long-running operation should document the metadata type, if any.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
"error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
# different programming environments, including REST APIs and RPC APIs. It is
# used by [gRPC](https://github.com/grpc). Each `Status` message contains
# three pieces of data: error code, error message, and error details.
#
# You can find out more about this error model and how to work with it in the
# [API Design Guide](https://cloud.google.com/apis/design/errors).
"message": "A String", # A developer-facing error message, which should be in English. Any
# user-facing error message should be localized and sent in the
# google.rpc.Status.details field, or localized by the client.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of
# message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
},
"done": True or False, # If the value is `false`, it means the operation is still in progress.
# If `true`, the operation is completed, and either `error` or `response` is
# available.
"response": { # The normal response of the operation in case of success. If the original
# method returns no data on success, such as `Delete`, the response is
# `google.protobuf.Empty`. If the original method is standard
# `Get`/`Create`/`Update`, the response should be the resource. For other
# methods, the response should have the type `XxxResponse`, where `Xxx`
# is the original method name. For example, if the original method name
# is `TakeSnapshot()`, the inferred response type is
# `TakeSnapshotResponse`.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
"name": "A String", # The server-assigned name, which is only unique within the same service that
# originally returns it. If you use the default HTTP mapping, the
# `name` should be a resource name ending with `operations/{unique_id}`.
}</pre>
</div>
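<p>Usage sketch (not part of the generated reference): a minimal example of calling this method with the Python API client. It assumes <code>googleapiclient</code> is installed and Application Default Credentials are configured; the project, model, version, and bucket names are placeholders.</p>
<pre>
# Minimal sketch: build the discovery-based client and create a version from a
# trained model exported to Cloud Storage. All resource names are placeholders.
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')

request = ml.projects().models().versions().create(
    parent='projects/my-project/models/my_model',    # placeholder model name
    body={
        'name': 'v1',                                # placeholder version name
        'deploymentUri': 'gs://my-bucket/model-dir'  # placeholder model location
    })
operation = request.execute()  # create() returns a long-running Operation
print(operation['name'])
</pre>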

<div class="method">
<code class="details" id="delete">delete(name, x__xgafv=None)</code>
<pre>Deletes a model version.

Each model can have multiple versions deployed and in use at any given
time. Use this method to remove a single version.

Note: You cannot delete the version that is set as the default version
of the model unless it is the only remaining version.

Args:
name: string, Required. The name of the version. You can get the names of all the
versions of a model by calling
[projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list). (required)
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format

Returns:
An object of the form:

{ # This resource represents a long-running operation that is the result of a
# network API call.
"metadata": { # Service-specific metadata associated with the operation. It typically
# contains progress information and common metadata such as create time.
# Some services might not provide such metadata. Any method that returns a
# long-running operation should document the metadata type, if any.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
"error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation.
# different programming environments, including REST APIs and RPC APIs. It is
# used by [gRPC](https://github.com/grpc). Each `Status` message contains
# three pieces of data: error code, error message, and error details.
#
# You can find out more about this error model and how to work with it in the
# [API Design Guide](https://cloud.google.com/apis/design/errors).
"message": "A String", # A developer-facing error message, which should be in English. Any
# user-facing error message should be localized and sent in the
# google.rpc.Status.details field, or localized by the client.
"code": 42, # The status code, which should be an enum value of google.rpc.Code.
"details": [ # A list of messages that carry the error details. There is a common set of
# message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
},
"done": True or False, # If the value is `false`, it means the operation is still in progress.
# If `true`, the operation is completed, and either `error` or `response` is
# available.
"response": { # The normal response of the operation in case of success. If the original
# method returns no data on success, such as `Delete`, the response is
# `google.protobuf.Empty`. If the original method is standard
# `Get`/`Create`/`Update`, the response should be the resource. For other
# methods, the response should have the type `XxxResponse`, where `Xxx`
# is the original method name. For example, if the original method name
# is `TakeSnapshot()`, the inferred response type is
# `TakeSnapshotResponse`.
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
"name": "A String", # The server-assigned name, which is only unique within the same service that
# originally returns it. If you use the default HTTP mapping, the
# `name` should be a resource name ending with `operations/{unique_id}`.
}</pre>
</div>
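<p>Usage sketch (not part of the generated reference), reusing the placeholder <code>ml</code> client built in the <code>create</code> sketch above:</p>
<pre>
# Minimal sketch: delete one version by its full resource name (placeholder).
operation = ml.projects().models().versions().delete(
    name='projects/my-project/models/my_model/versions/v1').execute()
# Deletion also returns a long-running Operation that can be polled.
</pre>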

<div class="method">
<code class="details" id="get">get(name, x__xgafv=None)</code>
<pre>Gets information about a model version.

Models can have multiple versions. You can call
[projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list)
to get the same information that this method returns for all of the
versions of a model.

Args:
name: string, Required. The name of the version. (required)
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format

Returns:
An object of the form:

{ # Represents a version of the model.
#
# Each version is a trained model deployed in the cloud, ready to handle
# prediction requests. A model can have multiple versions. You can get
# information about all of the versions of a given model by calling
# [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
"errorMessage": "A String", # Output only. The details of a failure or a cancellation.
"labels": { # Optional. One or more labels that you can add, to organize your model
# versions. Each label is a key-value pair, where both the key and the value
# are arbitrary strings that you supply.
# For more information, see the documentation on
# <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
"a_key": "A String",
},
"machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
# applies to online prediction service.
# <dl>
# <dt>mls1-c1-m2</dt>
# <dd>
# The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated
# name for this machine type is "mls1-highmem-1".
# </dd>
# <dt>mls1-c4-m2</dt>
# <dd>
# In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The
# deprecated name for this machine type is "mls1-highcpu-4".
# </dd>
# </dl>
"description": "A String", # Optional. The description specified for the version when it was created.
"runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment.
# If not set, AI Platform uses the default stable version, 1.0. For more
# information, see the
# [runtime version list](/ml-engine/docs/runtime-version-list) and
# [how to manage runtime versions](/ml-engine/docs/versioning).
"manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
# model. You should generally use `auto_scaling` with an appropriate
# `min_nodes` instead, but this option is available if you want more
# predictable billing. Beware that latency and error rates will increase
# if the traffic exceeds the capability of the system to serve it based
# on the selected number of nodes.
"nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
# starting from the time the model is deployed, so the cost of operating
# this model will be proportional to `nodes` * number of hours since
# last billing cycle plus the cost for each prediction performed.
},
"predictionClass": "A String", # Optional. The fully qualified name
# (<var>module_name</var>.<var>class_name</var>) of a class that implements
# the Predictor interface described in this reference field. The module
# containing this class should be included in a package provided to the
# [`packageUris` field](#Version.FIELDS.package_uris).
#
# Specify this field if and only if you are deploying a [custom prediction
# routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
# If you specify this field, you must set
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
#
# The following code sample provides the Predictor interface:
#
# ```py
# class Predictor(object):
# """Interface for constructing custom predictors."""
#
# def predict(self, instances, **kwargs):
# """Performs custom prediction.
#
# Instances are the decoded values from the request. They have already
# been deserialized from JSON.
#
# Args:
# instances: A list of prediction input instances.
# **kwargs: A dictionary of keyword args provided as additional
# fields on the predict request body.
#
# Returns:
# A list of outputs containing the prediction results. This list must
# be JSON serializable.
# """
# raise NotImplementedError()
#
# @classmethod
# def from_path(cls, model_dir):
# """Creates an instance of Predictor using the given path.
#
# Loading of the predictor should be done in this method.
#
# Args:
# model_dir: The local directory that contains the exported model
# file along with any additional files uploaded when creating the
# version resource.
#
# Returns:
# An instance implementing this Predictor class.
# """
# raise NotImplementedError()
# ```
#
# Learn more about [the Predictor interface and custom prediction
# routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
"autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
# response to increases and decreases in traffic. Care should be
# taken to ramp up traffic according to the model's ability to scale
# or you will start seeing increases in latency and 429 response codes.
"minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
# nodes are always up, starting from the time the model is deployed.
# Therefore, the cost of operating this model will be at least
# `rate` * `min_nodes` * number of hours since last billing cycle,
# where `rate` is the cost per node-hour as documented in the
# [pricing guide](/ml-engine/docs/pricing),
# even if no predictions are performed. There is additional cost for each
# prediction performed.
#
# Unlike manual scaling, if the load gets too heavy for the nodes
# that are up, the service will automatically add nodes to handle the
# increased load as well as scale back as traffic drops, always maintaining
# at least `min_nodes`. You will be charged for the time in which additional
# nodes are used.
#
# If not specified, `min_nodes` defaults to 0, in which case, when traffic
# to a model stops (and after a cool-down period), nodes will be shut down
# and no charges will be incurred until traffic to the model resumes.
#
# You can set `min_nodes` when creating the model version, and you can also
# update `min_nodes` for an existing version:
# <pre>
# update_body.json:
# {
# 'autoScaling': {
# 'minNodes': 5
# }
# }
# </pre>
# HTTP request:
# <pre>
# PATCH
# https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
# -d @./update_body.json
# </pre>
},
"serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
"state": "A String", # Output only. The state of a version.
"pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default
# version is '2.7'. Python '3.5' is available when `runtime_version` is set
# to '1.4' and above. Python '2.7' works with all supported runtime versions.
"framework": "A String", # Optional. The machine learning framework AI Platform uses to train
# this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
# `XGBOOST`. If you do not specify a framework, AI Platform
# will analyze files in the deployment_uri to determine a framework. If you
# choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
# of the model to 1.4 or greater.
#
# Do **not** specify a framework if you're deploying a [custom
# prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
"packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
# prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
# or [scikit-learn pipelines with custom
# code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
#
# For a custom prediction routine, one of these packages must contain your
# Predictor class (see
# [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
# include any dependencies used by your Predictor or scikit-learn pipeline
# that are not already included in your selected [runtime
# version](/ml-engine/docs/tensorflow/runtime-version-list).
#
# If you specify this field, you must also set
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
"A String",
],
"etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
# prevent simultaneous updates of a model from overwriting each other.
# It is strongly suggested that systems make use of the `etag` in the
# read-modify-write cycle to perform model updates in order to avoid race
# conditions: An `etag` is returned in the response to `GetVersion`, and
# systems are expected to put that etag in the request to `UpdateVersion` to
# ensure that their change will be applied to the model as intended.
"lastUseTime": "A String", # Output only. The time the version was last used for prediction.
"deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
# create the version. See the
# [guide to model
# deployment](/ml-engine/docs/tensorflow/deploying-models) for more
# information.
#
# When passing Version to
# [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
# the model service uses the specified location as the source of the model.
# Once deployed, the model version is hosted by the prediction service, so
# this location is useful only as a historical record.
# The total number of model files can't exceed 1000.
"createTime": "A String", # Output only. The time the version was created.
"isDefault": True or False, # Output only. If true, this version will be used to handle prediction
# requests that do not specify a version.
#
# You can change the default version by calling
# [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
"name": "A String", # Required. The name specified for the version when it was created.
#
# The version name must be unique within the model it is created in.
}</pre>
</div>
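<p>Usage sketch (not part of the generated reference), with the same placeholder client and resource names as the sketches above:</p>
<pre>
# Minimal sketch: fetch a single Version resource by name (placeholder).
version = ml.projects().models().versions().get(
    name='projects/my-project/models/my_model/versions/v1').execute()
print(version.get('state'), version.get('deploymentUri'))
</pre>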

<div class="method">
<code class="details" id="list">list(parent, pageToken=None, x__xgafv=None, pageSize=None, filter=None)</code>
<pre>Gets basic information about all the versions of a model.

If you expect that a model has many versions, or if you need to handle
only a limited number of results at a time, you can request that the list
be retrieved in batches (called pages).

If there are no versions that match the request parameters, the list
request returns an empty response body: {}.

Args:
parent: string, Required. The name of the model for which to list versions. (required)
pageToken: string, Optional. A page token to request the next page of results.

You get the token from the `next_page_token` field of the response from
the previous call.
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
2 - v2 error format
pageSize: integer, Optional. The number of versions to retrieve per "page" of results. If
there are more remaining results than this number, the response message
will contain a valid value in the `next_page_token` field.

The default value is 20, and the maximum page size is 100.
filter: string, Optional. Specifies the subset of versions to retrieve.

Returns:
An object of the form:

{ # Response message for the ListVersions method.
"nextPageToken": "A String", # Optional. Pass this token as the `page_token` field of the request for a
# subsequent call.
"versions": [ # The list of versions.
{ # Represents a version of the model.
#
# Each version is a trained model deployed in the cloud, ready to handle
# prediction requests. A model can have multiple versions. You can get
# information about all of the versions of a given model by calling
# [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list).
"errorMessage": "A String", # Output only. The details of a failure or a cancellation.
"labels": { # Optional. One or more labels that you can add, to organize your model
# versions. Each label is a key-value pair, where both the key and the value
# are arbitrary strings that you supply.
# For more information, see the documentation on
# <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>.
"a_key": "A String",
},
"machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only
# applies to online prediction service.
# <dl>
# <dt>mls1-c1-m2</dt>
# <dd>
# The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated
# name for this machine type is "mls1-highmem-1".
# </dd>
# <dt>mls1-c4-m2</dt>
# <dd>
# In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The
# deprecated name for this machine type is "mls1-highcpu-4".
# </dd>
# </dl>
"description": "A String", # Optional. The description specified for the version when it was created.
"runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment.
# If not set, AI Platform uses the default stable version, 1.0. For more
# information, see the
# [runtime version list](/ml-engine/docs/runtime-version-list) and
# [how to manage runtime versions](/ml-engine/docs/versioning).
"manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the
# model. You should generally use `auto_scaling` with an appropriate
# `min_nodes` instead, but this option is available if you want more
# predictable billing. Beware that latency and error rates will increase
# if the traffic exceeds the capability of the system to serve it based
# on the selected number of nodes.
"nodes": 42, # The number of nodes to allocate for this model. These nodes are always up,
# starting from the time the model is deployed, so the cost of operating
# this model will be proportional to `nodes` * number of hours since
# last billing cycle plus the cost for each prediction performed.
},
"predictionClass": "A String", # Optional. The fully qualified name
# (<var>module_name</var>.<var>class_name</var>) of a class that implements
# the Predictor interface described in this reference field. The module
# containing this class should be included in a package provided to the
# [`packageUris` field](#Version.FIELDS.package_uris).
#
# Specify this field if and only if you are deploying a [custom prediction
# routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines).
# If you specify this field, you must set
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
#
# The following code sample provides the Predictor interface:
#
# ```py
# class Predictor(object):
# """Interface for constructing custom predictors."""
#
# def predict(self, instances, **kwargs):
# """Performs custom prediction.
#
# Instances are the decoded values from the request. They have already
# been deserialized from JSON.
#
# Args:
# instances: A list of prediction input instances.
# **kwargs: A dictionary of keyword args provided as additional
# fields on the predict request body.
#
# Returns:
# A list of outputs containing the prediction results. This list must
# be JSON serializable.
# """
# raise NotImplementedError()
#
# @classmethod
# def from_path(cls, model_dir):
# """Creates an instance of Predictor using the given path.
#
# Loading of the predictor should be done in this method.
#
# Args:
# model_dir: The local directory that contains the exported model
# file along with any additional files uploaded when creating the
# version resource.
#
# Returns:
# An instance implementing this Predictor class.
# """
# raise NotImplementedError()
# ```
#
# Learn more about [the Predictor interface and custom prediction
# routines](/ml-engine/docs/tensorflow/custom-prediction-routines).
"autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in
# response to increases and decreases in traffic. Care should be
# taken to ramp up traffic according to the model's ability to scale
# or you will start seeing increases in latency and 429 response codes.
"minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These
# nodes are always up, starting from the time the model is deployed.
# Therefore, the cost of operating this model will be at least
# `rate` * `min_nodes` * number of hours since last billing cycle,
# where `rate` is the cost per node-hour as documented in the
# [pricing guide](/ml-engine/docs/pricing),
# even if no predictions are performed. There is additional cost for each
# prediction performed.
#
# Unlike manual scaling, if the load gets too heavy for the nodes
# that are up, the service will automatically add nodes to handle the
# increased load as well as scale back as traffic drops, always maintaining
# at least `min_nodes`. You will be charged for the time in which additional
# nodes are used.
#
# If not specified, `min_nodes` defaults to 0, in which case, when traffic
# to a model stops (and after a cool-down period), nodes will be shut down
# and no charges will be incurred until traffic to the model resumes.
#
# You can set `min_nodes` when creating the model version, and you can also
# update `min_nodes` for an existing version:
# <pre>
# update_body.json:
# {
# 'autoScaling': {
# 'minNodes': 5
# }
# }
# </pre>
# HTTP request:
# <pre>
# PATCH
# https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes
# -d @./update_body.json
# </pre>
},
"serviceAccount": "A String", # Optional. Specifies the service account for resource access control.
"state": "A String", # Output only. The state of a version.
"pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default
# version is '2.7'. Python '3.5' is available when `runtime_version` is set
# to '1.4' and above. Python '2.7' works with all supported runtime versions.
"framework": "A String", # Optional. The machine learning framework AI Platform uses to train
# this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`,
# `XGBOOST`. If you do not specify a framework, AI Platform
# will analyze files in the deployment_uri to determine a framework. If you
# choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version
# of the model to 1.4 or greater.
#
# Do **not** specify a framework if you're deploying a [custom
# prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines).
"packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom
# prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines)
# or [scikit-learn pipelines with custom
# code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code).
#
# For a custom prediction routine, one of these packages must contain your
# Predictor class (see
# [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally,
# include any dependencies used by your Predictor or scikit-learn pipeline
# that are not already included in your selected [runtime
# version](/ml-engine/docs/tensorflow/runtime-version-list).
#
# If you specify this field, you must also set
# [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater.
"A String",
],
"etag": "A String", # `etag` is used for optimistic concurrency control as a way to help
# prevent simultaneous updates of a model from overwriting each other.
# It is strongly suggested that systems make use of the `etag` in the
# read-modify-write cycle to perform model updates in order to avoid race
# conditions: An `etag` is returned in the response to `GetVersion`, and
# systems are expected to put that etag in the request to `UpdateVersion` to
# ensure that their change will be applied to the model as intended.
"lastUseTime": "A String", # Output only. The time the version was last used for prediction.
"deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to
# create the version. See the
# [guide to model
# deployment](/ml-engine/docs/tensorflow/deploying-models) for more
# information.
#
# When passing Version to
# [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create)
# the model service uses the specified location as the source of the model.
# Once deployed, the model version is hosted by the prediction service, so
# this location is useful only as a historical record.
# The total number of model files can't exceed 1000.
"createTime": "A String", # Output only. The time the version was created.
"isDefault": True or False, # Output only. If true, this version will be used to handle prediction
# requests that do not specify a version.
#
# You can change the default version by calling
Sai Cheemalapati | e833b79 | 2017-03-24 15:06:46 -0700 | [diff] [blame] | 881 | # [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
Thomas Coffee | 2f24537 | 2017-03-27 10:39:26 -0700 | [diff] [blame] | 882 | "name": "A String", # Required. The name specified for the version when it was created.
| 883 | # |
| 884 | # The version name must be unique within the model it is created in. |
Sai Cheemalapati | c30d2b5 | 2017-03-13 12:12:03 -0400 | [diff] [blame] | 885 | }, |
| 886 | ], |
| 887 | }</pre> |
| 888 | </div> |
| 889 | |
| 890 | <div class="method"> |
| 891 | <code class="details" id="list_next">list_next(previous_request, previous_response)</code> |
| 892 | <pre>Retrieves the next page of results. |
| 893 | |
| 894 | Args: |
| 895 | previous_request: The request for the previous page. (required) |
| 896 | previous_response: The response from the request for the previous page. (required) |
| 897 | |
| 898 | Returns: |
| 899 | A request object that you can call 'execute()' on to request the next |
| 900 | page. Returns None if there are no more items in the collection. |
| 901 | </pre> |
| 902 | </div> |
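<p>The following is a minimal usage sketch showing how <code>list</code> and <code>list_next</code> can be combined to page through every version of a model. The project and model names, and the use of default credentials via <code>discovery.build</code>, are placeholder assumptions rather than values taken from this reference.</p>
<pre>
# Minimal paging sketch (hypothetical project and model names).
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
versions = ml.projects().models().versions()

request = versions.list(parent='projects/my-project/models/my_model')
while request is not None:
    response = request.execute()
    for version in response.get('versions', []):
        print(version['name'], version.get('state'))
    # list_next returns None when no further pages remain.
    request = versions.list_next(previous_request=request,
                                 previous_response=response)
</pre>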
| 903 | |
| 904 | <div class="method"> |
Bu Sun Kim | 715bd7f | 2019-06-14 16:50:42 -0700 | [diff] [blame^] | 905 | <code class="details" id="patch">patch(name, body, updateMask=None, x__xgafv=None)</code> |
| 906 | <pre>Updates the specified Version resource. |
| 907 | |
| 908 | Currently the only updatable fields are `description` and
| 909 | `autoScaling.minNodes`. |
| 910 | |
| 911 | Args: |
| 912 | name: string, Required. The name of the model version to update. (required)
| 913 | body: object, The request body. (required) |
| 914 | The object takes the form of: |
| 915 | |
| 916 | { # Represents a version of the model. |
| 917 | # |
| 918 | # Each version is a trained model deployed in the cloud, ready to handle |
| 919 | # prediction requests. A model can have multiple versions. You can get |
| 920 | # information about all of the versions of a given model by calling |
| 921 | # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list). |
| 922 | "errorMessage": "A String", # Output only. The details of a failure or a cancellation. |
| 923 | "labels": { # Optional. One or more labels that you can add, to organize your model |
| 924 | # versions. Each label is a key-value pair, where both the key and the value |
| 925 | # are arbitrary strings that you supply. |
| 926 | # For more information, see the documentation on |
| 927 | # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>. |
| 928 | "a_key": "A String", |
| 929 | }, |
| 930 | "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only |
| 931 | # applies to online prediction service. |
| 932 | # <dl> |
| 933 | # <dt>mls1-c1-m2</dt> |
| 934 | # <dd> |
| 935 | # The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated |
| 936 | # name for this machine type is "mls1-highmem-1". |
| 937 | # </dd> |
| 938 | # <dt>mls1-c4-m2</dt> |
| 939 | # <dd> |
| 940 | # In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The |
| 941 | # deprecated name for this machine type is "mls1-highcpu-4". |
| 942 | # </dd> |
| 943 | # </dl> |
| 944 | "description": "A String", # Optional. The description specified for the version when it was created. |
| 945 | "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment. |
| 946 | # If not set, AI Platform uses the default stable version, 1.0. For more |
| 947 | # information, see the |
| 948 | # [runtime version list](/ml-engine/docs/runtime-version-list) and |
| 949 | # [how to manage runtime versions](/ml-engine/docs/versioning). |
| 950 | "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the |
| 951 | # model. You should generally use `auto_scaling` with an appropriate |
| 952 | # `min_nodes` instead, but this option is available if you want more |
| 953 | # predictable billing. Beware that latency and error rates will increase |
| 954 | # if the traffic exceeds the capacity of the system to serve it based
| 955 | # on the selected number of nodes. |
| 956 | "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up, |
| 957 | # starting from the time the model is deployed, so the cost of operating |
| 958 | # this model will be proportional to `nodes` * number of hours since |
| 959 | # last billing cycle plus the cost for each prediction performed. |
| 960 | }, |
| 961 | "predictionClass": "A String", # Optional. The fully qualified name |
| 962 | # (<var>module_name</var>.<var>class_name</var>) of a class that implements |
| 963 | # the Predictor interface described in this reference field. The module |
| 964 | # containing this class should be included in a package provided to the |
| 965 | # [`packageUris` field](#Version.FIELDS.package_uris). |
| 966 | # |
| 967 | # Specify this field if and only if you are deploying a [custom prediction |
| 968 | # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines). |
| 969 | # If you specify this field, you must set |
| 970 | # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater. |
| 971 | # |
| 972 | # The following code sample provides the Predictor interface: |
| 973 | # |
| 974 | # ```py |
| 975 | # class Predictor(object): |
| 976 | # """Interface for constructing custom predictors.""" |
| 977 | # |
| 978 | # def predict(self, instances, **kwargs): |
| 979 | # """Performs custom prediction. |
| 980 | # |
| 981 | # Instances are the decoded values from the request. They have already |
| 982 | # been deserialized from JSON. |
| 983 | # |
| 984 | # Args: |
| 985 | # instances: A list of prediction input instances. |
| 986 | # **kwargs: A dictionary of keyword args provided as additional |
| 987 | # fields on the predict request body. |
| 988 | # |
| 989 | # Returns: |
| 990 | # A list of outputs containing the prediction results. This list must |
| 991 | # be JSON serializable. |
| 992 | # """ |
| 993 | # raise NotImplementedError() |
| 994 | # |
| 995 | # @classmethod |
| 996 | # def from_path(cls, model_dir): |
| 997 | # """Creates an instance of Predictor using the given path. |
| 998 | # |
| 999 | # Loading of the predictor should be done in this method. |
| 1000 | # |
| 1001 | # Args: |
| 1002 | # model_dir: The local directory that contains the exported model |
| 1003 | # file along with any additional files uploaded when creating the |
| 1004 | # version resource. |
| 1005 | # |
| 1006 | # Returns: |
| 1007 | # An instance implementing this Predictor class. |
| 1008 | # """ |
| 1009 | # raise NotImplementedError() |
| 1010 | # ``` |
| 1011 | # |
| 1012 | # Learn more about [the Predictor interface and custom prediction |
| 1013 | # routines](/ml-engine/docs/tensorflow/custom-prediction-routines). |
| 1014 | "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in |
| 1015 | # response to increases and decreases in traffic. Care should be |
| 1016 | # taken to ramp up traffic according to the model's ability to scale |
| 1017 | # or you will start seeing increases in latency and 429 response codes. |
| 1018 | "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These |
| 1019 | # nodes are always up, starting from the time the model is deployed. |
| 1020 | # Therefore, the cost of operating this model will be at least |
| 1021 | # `rate` * `min_nodes` * number of hours since last billing cycle, |
| 1022 | # where `rate` is the cost per node-hour as documented in the |
| 1023 | # [pricing guide](/ml-engine/docs/pricing), |
| 1024 | # even if no predictions are performed. There is additional cost for each |
| 1025 | # prediction performed. |
| 1026 | # |
| 1027 | # Unlike manual scaling, if the load gets too heavy for the nodes |
| 1028 | # that are up, the service will automatically add nodes to handle the |
| 1029 | # increased load as well as scale back as traffic drops, always maintaining |
| 1030 | # at least `min_nodes`. You will be charged for the time in which additional |
| 1031 | # nodes are used. |
| 1032 | # |
| 1033 | # If not specified, `min_nodes` defaults to 0, in which case, when traffic |
| 1034 | # to a model stops (and after a cool-down period), nodes will be shut down |
| 1035 | # and no charges will be incurred until traffic to the model resumes. |
| 1036 | # |
| 1037 | # You can set `min_nodes` when creating the model version, and you can also |
| 1038 | # update `min_nodes` for an existing version: |
| 1039 | # <pre> |
| 1040 | # update_body.json: |
| 1041 | # { |
| 1042 | # "autoScaling": {
| 1043 | # "minNodes": 5
| 1044 | # } |
| 1045 | # } |
| 1046 | # </pre> |
| 1047 | # HTTP request: |
| 1048 | # <pre> |
| 1049 | # PATCH |
| 1050 | # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes |
| 1051 | # -d @./update_body.json |
| 1052 | # </pre> |
| 1053 | }, |
| 1054 | "serviceAccount": "A String", # Optional. Specifies the service account for resource access control. |
| 1055 | "state": "A String", # Output only. The state of a version. |
| 1056 | "pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default |
| 1057 | # version is '2.7'. Python '3.5' is available when `runtime_version` is set |
| 1058 | # to '1.4' and above. Python '2.7' works with all supported runtime versions. |
| 1059 | "framework": "A String", # Optional. The machine learning framework AI Platform uses to train |
| 1060 | # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`, |
| 1061 | # `XGBOOST`. If you do not specify a framework, AI Platform |
| 1062 | # will analyze files in the deployment_uri to determine a framework. If you |
| 1063 | # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version |
| 1064 | # of the model to 1.4 or greater. |
| 1065 | # |
| 1066 | # Do **not** specify a framework if you're deploying a [custom |
| 1067 | # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines). |
| 1068 | "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom |
| 1069 | # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines) |
| 1070 | # or [scikit-learn pipelines with custom |
| 1071 | # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code). |
| 1072 | # |
| 1073 | # For a custom prediction routine, one of these packages must contain your |
| 1074 | # Predictor class (see |
| 1075 | # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally, |
| 1076 | # include any dependencies that your Predictor or scikit-learn pipeline
| 1077 | # uses that are not already included in your selected [runtime |
| 1078 | # version](/ml-engine/docs/tensorflow/runtime-version-list). |
| 1079 | # |
| 1080 | # If you specify this field, you must also set |
| 1081 | # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater. |
| 1082 | "A String", |
| 1083 | ], |
| 1084 | "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help |
| 1085 | # prevent simultaneous updates of a model from overwriting each other. |
| 1086 | # It is strongly suggested that systems make use of the `etag` in the |
| 1087 | # read-modify-write cycle to perform model updates in order to avoid race |
| 1088 | # conditions: An `etag` is returned in the response to `GetVersion`, and |
| 1089 | # systems are expected to put that etag in the request to `UpdateVersion` to |
| 1090 | # ensure that their change will be applied to the model as intended. |
| 1091 | "lastUseTime": "A String", # Output only. The time the version was last used for prediction. |
| 1092 | "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to |
| 1093 | # create the version. See the |
| 1094 | # [guide to model |
| 1095 | # deployment](/ml-engine/docs/tensorflow/deploying-models) for more |
| 1096 | # information. |
| 1097 | # |
| 1098 | # When passing Version to |
| 1099 | # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create) |
| 1100 | # the model service uses the specified location as the source of the model. |
| 1101 | # Once deployed, the model version is hosted by the prediction service, so |
| 1102 | # this location is useful only as a historical record. |
| 1103 | # The total number of model files can't exceed 1000. |
| 1104 | "createTime": "A String", # Output only. The time the version was created. |
| 1105 | "isDefault": True or False, # Output only. If true, this version will be used to handle prediction |
| 1106 | # requests that do not specify a version. |
| 1107 | # |
| 1108 | # You can change the default version by calling |
| 1109 | # [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
| 1110 | "name": "A String", # Required.The name specified for the version when it was created. |
| 1111 | # |
| 1112 | # The version name must be unique within the model it is created in. |
| 1113 | } |
| 1114 | |
| 1115 | updateMask: string, Required. Specifies the path, relative to `Version`, of the field to |
| 1116 | update. Must be present and non-empty. |
| 1117 | |
| 1118 | For example, to change the description of a version to "foo", the |
| 1119 | `update_mask` parameter would be specified as `description`, and the |
| 1120 | `PATCH` request body would specify the new value, as follows: |
| 1121 | { |
| 1122 | "description": "foo" |
| 1123 | } |
| 1124 | |
| 1125 | Currently the only supported update mask fields are `description` and |
| 1126 | `autoScaling.minNodes`. |
| 1127 | x__xgafv: string, V1 error format. |
| 1128 | Allowed values |
| 1129 | 1 - v1 error format |
| 1130 | 2 - v2 error format |
| 1131 | |
| 1132 | Returns: |
| 1133 | An object of the form: |
| 1134 | |
| 1135 | { # This resource represents a long-running operation that is the result of a |
| 1136 | # network API call. |
| 1137 | "metadata": { # Service-specific metadata associated with the operation. It typically |
| 1138 | # contains progress information and common metadata such as create time. |
| 1139 | # Some services might not provide such metadata. Any method that returns a |
| 1140 | # long-running operation should document the metadata type, if any. |
| 1141 | "a_key": "", # Properties of the object. Contains field @type with type URL. |
| 1142 | }, |
| 1143 | "error": { # The `Status` type defines a logical error model that is suitable for # The error result of the operation in case of failure or cancellation. |
| 1144 | # different programming environments, including REST APIs and RPC APIs. It is |
| 1145 | # used by [gRPC](https://github.com/grpc). Each `Status` message contains |
| 1146 | # three pieces of data: error code, error message, and error details. |
| 1147 | # |
| 1148 | # You can find out more about this error model and how to work with it in the |
| 1149 | # [API Design Guide](https://cloud.google.com/apis/design/errors). |
| 1150 | "message": "A String", # A developer-facing error message, which should be in English. Any |
| 1151 | # user-facing error message should be localized and sent in the |
| 1152 | # google.rpc.Status.details field, or localized by the client. |
| 1153 | "code": 42, # The status code, which should be an enum value of google.rpc.Code. |
| 1154 | "details": [ # A list of messages that carry the error details. There is a common set of |
| 1155 | # message types for APIs to use. |
| 1156 | { |
| 1157 | "a_key": "", # Properties of the object. Contains field @type with type URL. |
| 1158 | }, |
| 1159 | ], |
| 1160 | }, |
| 1161 | "done": True or False, # If the value is `false`, it means the operation is still in progress. |
| 1162 | # If `true`, the operation is completed, and either `error` or `response` is |
| 1163 | # available. |
| 1164 | "response": { # The normal response of the operation in case of success. If the original |
| 1165 | # method returns no data on success, such as `Delete`, the response is |
| 1166 | # `google.protobuf.Empty`. If the original method is standard |
| 1167 | # `Get`/`Create`/`Update`, the response should be the resource. For other |
| 1168 | # methods, the response should have the type `XxxResponse`, where `Xxx` |
| 1169 | # is the original method name. For example, if the original method name |
| 1170 | # is `TakeSnapshot()`, the inferred response type is |
| 1171 | # `TakeSnapshotResponse`. |
| 1172 | "a_key": "", # Properties of the object. Contains field @type with type URL. |
| 1173 | }, |
| 1174 | "name": "A String", # The server-assigned name, which is only unique within the same service that |
| 1175 | # originally returns it. If you use the default HTTP mapping, the |
| 1176 | # `name` should be a resource name ending with `operations/{unique_id}`. |
| 1177 | }</pre> |
| 1178 | </div> |
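<p>A minimal sketch of calling <code>patch</code> to raise <code>autoScaling.minNodes</code>, mirroring the JSON example above; the version resource name is a placeholder, and the same pattern applies to the <code>description</code> field. The call returns a long-running operation.</p>
<pre>
# Hypothetical sketch: update autoScaling.minNodes on an existing version.
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
name = 'projects/my-project/models/my_model/versions/v1'  # placeholder

body = {'autoScaling': {'minNodes': 5}}
operation = ml.projects().models().versions().patch(
    name=name,
    body=body,
    updateMask='autoScaling.minNodes').execute()
print(operation['name'])  # long-running operation; poll until 'done' is True
</pre>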
| 1179 | |
| 1180 | <div class="method"> |
| 1181 | <code class="details" id="setDefault">setDefault(name, body=None, x__xgafv=None)</code> |
Sai Cheemalapati | c30d2b5 | 2017-03-13 12:12:03 -0400 | [diff] [blame] | 1182 | <pre>Designates a version to be the default for the model. |
| 1183 | |
| 1184 | The default version is used for prediction requests made against the model |
| 1185 | that don't specify a version. |
| 1186 | |
| 1187 | The first version to be created for a model is automatically set as the |
| 1188 | default. You must make any subsequent changes to the default version |
| 1189 | setting manually using this method. |
| 1190 | |
| 1191 | Args: |
| 1192 | name: string, Required. The name of the version to make the default for the model. You |
| 1193 | can get the names of all the versions of a model by calling |
Bu Sun Kim | 715bd7f | 2019-06-14 16:50:42 -0700 | [diff] [blame^] | 1194 | [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list). (required) |
| 1195 | body: object, The request body. |
Sai Cheemalapati | c30d2b5 | 2017-03-13 12:12:03 -0400 | [diff] [blame] | 1196 | The object takes the form of: |
| 1197 | |
| 1198 | { # Request message for the SetDefaultVersion request. |
| 1199 | } |
| 1200 | |
| 1201 | x__xgafv: string, V1 error format. |
| 1202 | Allowed values |
| 1203 | 1 - v1 error format |
| 1204 | 2 - v2 error format |
| 1205 | |
| 1206 | Returns: |
| 1207 | An object of the form: |
| 1208 | |
| 1209 | { # Represents a version of the model. |
| 1210 | # |
| 1211 | # Each version is a trained model deployed in the cloud, ready to handle |
| 1212 | # prediction requests. A model can have multiple versions. You can get |
| 1213 | # information about all of the versions of a given model by calling |
Sai Cheemalapati | e833b79 | 2017-03-24 15:06:46 -0700 | [diff] [blame] | 1214 | # [projects.models.versions.list](/ml-engine/reference/rest/v1/projects.models.versions/list). |
Bu Sun Kim | 715bd7f | 2019-06-14 16:50:42 -0700 | [diff] [blame^] | 1215 | "errorMessage": "A String", # Output only. The details of a failure or a cancellation. |
| 1216 | "labels": { # Optional. One or more labels that you can add, to organize your model |
| 1217 | # versions. Each label is a key-value pair, where both the key and the value |
| 1218 | # are arbitrary strings that you supply. |
| 1219 | # For more information, see the documentation on |
| 1220 | # <a href="/ml-engine/docs/tensorflow/resource-labels">using labels</a>. |
| 1221 | "a_key": "A String", |
| 1222 | }, |
| 1223 | "machineType": "A String", # Optional. The type of machine on which to serve the model. Currently only |
| 1224 | # applies to online prediction service. |
| 1225 | # <dl> |
| 1226 | # <dt>mls1-c1-m2</dt> |
| 1227 | # <dd> |
| 1228 | # The <b>default</b> machine type, with 1 core and 2 GB RAM. The deprecated |
| 1229 | # name for this machine type is "mls1-highmem-1". |
| 1230 | # </dd> |
| 1231 | # <dt>mls1-c4-m2</dt> |
| 1232 | # <dd> |
| 1233 | # In <b>Beta</b>. This machine type has 4 cores and 2 GB RAM. The |
| 1234 | # deprecated name for this machine type is "mls1-highcpu-4". |
| 1235 | # </dd> |
| 1236 | # </dl> |
Thomas Coffee | 2f24537 | 2017-03-27 10:39:26 -0700 | [diff] [blame] | 1237 | "description": "A String", # Optional. The description specified for the version when it was created. |
Bu Sun Kim | 715bd7f | 2019-06-14 16:50:42 -0700 | [diff] [blame^] | 1238 | "runtimeVersion": "A String", # Optional. The AI Platform runtime version to use for this deployment. |
| 1239 | # If not set, AI Platform uses the default stable version, 1.0. For more |
| 1240 | # information, see the |
| 1241 | # [runtime version list](/ml-engine/docs/runtime-version-list) and |
| 1242 | # [how to manage runtime versions](/ml-engine/docs/versioning). |
Sai Cheemalapati | 4ba8c23 | 2017-06-06 18:46:08 -0400 | [diff] [blame] | 1243 | "manualScaling": { # Options for manually scaling a model. # Manually select the number of nodes to use for serving the |
Bu Sun Kim | 715bd7f | 2019-06-14 16:50:42 -0700 | [diff] [blame^] | 1244 | # model. You should generally use `auto_scaling` with an appropriate |
Sai Cheemalapati | 4ba8c23 | 2017-06-06 18:46:08 -0400 | [diff] [blame] | 1245 | # `min_nodes` instead, but this option is available if you want more |
| 1246 | # predictable billing. Beware that latency and error rates will increase |
| 1247 | # if the traffic exceeds the capacity of the system to serve it based
| 1248 | # on the selected number of nodes. |
Sai Cheemalapati | c30d2b5 | 2017-03-13 12:12:03 -0400 | [diff] [blame] | 1249 | "nodes": 42, # The number of nodes to allocate for this model. These nodes are always up, |
| 1250 | # starting from the time the model is deployed, so the cost of operating |
Sai Cheemalapati | 4ba8c23 | 2017-06-06 18:46:08 -0400 | [diff] [blame] | 1251 | # this model will be proportional to `nodes` * number of hours since |
| 1252 | # last billing cycle plus the cost for each prediction performed. |
Sai Cheemalapati | c30d2b5 | 2017-03-13 12:12:03 -0400 | [diff] [blame] | 1253 | }, |
Bu Sun Kim | 715bd7f | 2019-06-14 16:50:42 -0700 | [diff] [blame^] | 1254 | "predictionClass": "A String", # Optional. The fully qualified name |
| 1255 | # (<var>module_name</var>.<var>class_name</var>) of a class that implements |
| 1256 | # the Predictor interface described in this reference field. The module |
| 1257 | # containing this class should be included in a package provided to the |
| 1258 | # [`packageUris` field](#Version.FIELDS.package_uris). |
Sai Cheemalapati | c30d2b5 | 2017-03-13 12:12:03 -0400 | [diff] [blame] | 1259 | # |
Bu Sun Kim | 715bd7f | 2019-06-14 16:50:42 -0700 | [diff] [blame^] | 1260 | # Specify this field if and only if you are deploying a [custom prediction |
| 1261 | # routine (beta)](/ml-engine/docs/tensorflow/custom-prediction-routines). |
| 1262 | # If you specify this field, you must set |
| 1263 | # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater. |
| 1264 | # |
| 1265 | # The following code sample provides the Predictor interface: |
| 1266 | # |
| 1267 | # ```py |
| 1268 | # class Predictor(object): |
| 1269 | # """Interface for constructing custom predictors.""" |
| 1270 | # |
| 1271 | # def predict(self, instances, **kwargs): |
| 1272 | # """Performs custom prediction. |
| 1273 | # |
| 1274 | # Instances are the decoded values from the request. They have already |
| 1275 | # been deserialized from JSON. |
| 1276 | # |
| 1277 | # Args: |
| 1278 | # instances: A list of prediction input instances. |
| 1279 | # **kwargs: A dictionary of keyword args provided as additional |
| 1280 | # fields on the predict request body. |
| 1281 | # |
| 1282 | # Returns: |
| 1283 | # A list of outputs containing the prediction results. This list must |
| 1284 | # be JSON serializable. |
| 1285 | # """ |
| 1286 | # raise NotImplementedError() |
| 1287 | # |
| 1288 | # @classmethod |
| 1289 | # def from_path(cls, model_dir): |
| 1290 | # """Creates an instance of Predictor using the given path. |
| 1291 | # |
| 1292 | # Loading of the predictor should be done in this method. |
| 1293 | # |
| 1294 | # Args: |
| 1295 | # model_dir: The local directory that contains the exported model |
| 1296 | # file along with any additional files uploaded when creating the |
| 1297 | # version resource. |
| 1298 | # |
| 1299 | # Returns: |
| 1300 | # An instance implementing this Predictor class. |
| 1301 | # """ |
| 1302 | # raise NotImplementedError() |
| 1303 | # ``` |
| 1304 | # |
| 1305 | # Learn more about [the Predictor interface and custom prediction |
| 1306 | # routines](/ml-engine/docs/tensorflow/custom-prediction-routines). |
| 1307 | "autoScaling": { # Options for automatically scaling a model. # Automatically scale the number of nodes used to serve the model in |
Sai Cheemalapati | 4ba8c23 | 2017-06-06 18:46:08 -0400 | [diff] [blame] | 1308 | # response to increases and decreases in traffic. Care should be |
| 1309 | # taken to ramp up traffic according to the model's ability to scale |
| 1310 | # or you will start seeing increases in latency and 429 response codes. |
| 1311 | "minNodes": 42, # Optional. The minimum number of nodes to allocate for this model. These |
Bu Sun Kim | 715bd7f | 2019-06-14 16:50:42 -0700 | [diff] [blame^] | 1312 | # nodes are always up, starting from the time the model is deployed. |
| 1313 | # Therefore, the cost of operating this model will be at least |
Sai Cheemalapati | 4ba8c23 | 2017-06-06 18:46:08 -0400 | [diff] [blame] | 1314 | # `rate` * `min_nodes` * number of hours since last billing cycle, |
Bu Sun Kim | 715bd7f | 2019-06-14 16:50:42 -0700 | [diff] [blame^] | 1315 | # where `rate` is the cost per node-hour as documented in the |
| 1316 | # [pricing guide](/ml-engine/docs/pricing), |
Sai Cheemalapati | 4ba8c23 | 2017-06-06 18:46:08 -0400 | [diff] [blame] | 1317 | # even if no predictions are performed. There is additional cost for each |
| 1318 | # prediction performed. |
| 1319 | # |
| 1320 | # Unlike manual scaling, if the load gets too heavy for the nodes |
| 1321 | # that are up, the service will automatically add nodes to handle the |
| 1322 | # increased load as well as scale back as traffic drops, always maintaining |
| 1323 | # at least `min_nodes`. You will be charged for the time in which additional |
| 1324 | # nodes are used. |
| 1325 | # |
| 1326 | # If not specified, `min_nodes` defaults to 0, in which case, when traffic |
| 1327 | # to a model stops (and after a cool-down period), nodes will be shut down |
| 1328 | # and no charges will be incurred until traffic to the model resumes. |
Bu Sun Kim | 715bd7f | 2019-06-14 16:50:42 -0700 | [diff] [blame^] | 1329 | # |
| 1330 | # You can set `min_nodes` when creating the model version, and you can also |
| 1331 | # update `min_nodes` for an existing version: |
| 1332 | # <pre> |
| 1333 | # update_body.json: |
| 1334 | # { |
| 1335 | # "autoScaling": {
| 1336 | # "minNodes": 5
| 1337 | # } |
| 1338 | # } |
| 1339 | # </pre> |
| 1340 | # HTTP request: |
| 1341 | # <pre> |
| 1342 | # PATCH |
| 1343 | # https://ml.googleapis.com/v1/{name=projects/*/models/*/versions/*}?update_mask=autoScaling.minNodes |
| 1344 | # -d @./update_body.json |
| 1345 | # </pre> |
Sai Cheemalapati | 4ba8c23 | 2017-06-06 18:46:08 -0400 | [diff] [blame] | 1346 | }, |
Bu Sun Kim | 715bd7f | 2019-06-14 16:50:42 -0700 | [diff] [blame^] | 1347 | "serviceAccount": "A String", # Optional. Specifies the service account for resource access control. |
| 1348 | "state": "A String", # Output only. The state of a version. |
| 1349 | "pythonVersion": "A String", # Optional. The version of Python used in prediction. If not set, the default |
| 1350 | # version is '2.7'. Python '3.5' is available when `runtime_version` is set |
| 1351 | # to '1.4' and above. Python '2.7' works with all supported runtime versions. |
| 1352 | "framework": "A String", # Optional. The machine learning framework AI Platform uses to train |
| 1353 | # this version of the model. Valid values are `TENSORFLOW`, `SCIKIT_LEARN`, |
| 1354 | # `XGBOOST`. If you do not specify a framework, AI Platform |
| 1355 | # will analyze files in the deployment_uri to determine a framework. If you |
| 1356 | # choose `SCIKIT_LEARN` or `XGBOOST`, you must also set the runtime version |
| 1357 | # of the model to 1.4 or greater. |
| 1358 | # |
| 1359 | # Do **not** specify a framework if you're deploying a [custom |
| 1360 | # prediction routine](/ml-engine/docs/tensorflow/custom-prediction-routines). |
| 1361 | "packageUris": [ # Optional. Cloud Storage paths (`gs://…`) of packages for [custom |
| 1362 | # prediction routines](/ml-engine/docs/tensorflow/custom-prediction-routines) |
| 1363 | # or [scikit-learn pipelines with custom |
| 1364 | # code](/ml-engine/docs/scikit/exporting-for-prediction#custom-pipeline-code). |
| 1365 | # |
| 1366 | # For a custom prediction routine, one of these packages must contain your |
| 1367 | # Predictor class (see |
| 1368 | # [`predictionClass`](#Version.FIELDS.prediction_class)). Additionally, |
| 1369 | # include any dependencies that your Predictor or scikit-learn pipeline
| 1370 | # uses that are not already included in your selected [runtime |
| 1371 | # version](/ml-engine/docs/tensorflow/runtime-version-list). |
| 1372 | # |
| 1373 | # If you specify this field, you must also set |
| 1374 | # [`runtimeVersion`](#Version.FIELDS.runtime_version) to 1.4 or greater. |
| 1375 | "A String", |
| 1376 | ], |
| 1377 | "etag": "A String", # `etag` is used for optimistic concurrency control as a way to help |
| 1378 | # prevent simultaneous updates of a model from overwriting each other. |
| 1379 | # It is strongly suggested that systems make use of the `etag` in the |
| 1380 | # read-modify-write cycle to perform model updates in order to avoid race |
| 1381 | # conditions: An `etag` is returned in the response to `GetVersion`, and |
| 1382 | # systems are expected to put that etag in the request to `UpdateVersion` to |
| 1383 | # ensure that their change will be applied to the model as intended. |
| 1384 | "lastUseTime": "A String", # Output only. The time the version was last used for prediction. |
| 1385 | "deploymentUri": "A String", # Required. The Cloud Storage location of the trained model used to |
| 1386 | # create the version. See the |
| 1387 | # [guide to model |
| 1388 | # deployment](/ml-engine/docs/tensorflow/deploying-models) for more |
| 1389 | # information. |
| 1390 | # |
| 1391 | # When passing Version to |
| 1392 | # [projects.models.versions.create](/ml-engine/reference/rest/v1/projects.models.versions/create) |
| 1393 | # the model service uses the specified location as the source of the model. |
| 1394 | # Once deployed, the model version is hosted by the prediction service, so |
| 1395 | # this location is useful only as a historical record. |
| 1396 | # The total number of model files can't exceed 1000. |
Sai Cheemalapati | c30d2b5 | 2017-03-13 12:12:03 -0400 | [diff] [blame] | 1397 | "createTime": "A String", # Output only. The time the version was created. |
| 1398 | "isDefault": True or False, # Output only. If true, this version will be used to handle prediction |
| 1399 | # requests that do not specify a version. |
| 1400 | # |
| 1401 | # You can change the default version by calling |
Sai Cheemalapati | e833b79 | 2017-03-24 15:06:46 -0700 | [diff] [blame] | 1402 | # [projects.models.versions.setDefault](/ml-engine/reference/rest/v1/projects.models.versions/setDefault).
Thomas Coffee | 2f24537 | 2017-03-27 10:39:26 -0700 | [diff] [blame] | 1403 | "name": "A String", # Required. The name specified for the version when it was created.
| 1404 | # |
| 1405 | # The version name must be unique within the model it is created in. |
Sai Cheemalapati | c30d2b5 | 2017-03-13 12:12:03 -0400 | [diff] [blame] | 1406 | }</pre> |
| 1407 | </div> |
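<p>A minimal sketch of calling <code>setDefault</code> to promote a version; the resource name is a placeholder. On success, the returned Version resource has <code>isDefault</code> set to true.</p>
<pre>
# Hypothetical sketch: make version v2 the model's default.
from googleapiclient import discovery

ml = discovery.build('ml', 'v1')
name = 'projects/my-project/models/my_model/versions/v2'  # placeholder

version = ml.projects().models().versions().setDefault(
    name=name, body={}).execute()
print(version['isDefault'])
</pre>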
| 1408 | |
| 1409 | </body></html> |