Regen docs (#364)
diff --git a/docs/dyn/ml_v1beta1.projects.html b/docs/dyn/ml_v1beta1.projects.html
index 6aab1f0..677fdd3 100644
--- a/docs/dyn/ml_v1beta1.projects.html
+++ b/docs/dyn/ml_v1beta1.projects.html
@@ -72,7 +72,7 @@
</style>
-<h1><a href="ml_v1beta1.html">Google Cloud Machine Learning</a> . <a href="ml_v1beta1.projects.html">projects</a></h1>
+<h1><a href="ml_v1beta1.html">Google Cloud Machine Learning Engine</a> . <a href="ml_v1beta1.projects.html">projects</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
<code><a href="ml_v1beta1.projects.jobs.html">jobs()</a></code>
@@ -125,51 +125,7 @@
<code class="details" id="predict">predict(name=None, body, x__xgafv=None)</code>
<pre>Performs prediction on the data in the request.
-Responses are very similar to requests. There are two top-level fields,
-each of which are JSON lists:
-
-<dl>
- <dt>predictions</dt>
- <dd>The list of predictions, one per instance in the request.</dd>
- <dt>error</dt>
- <dd>An error message returned instead of a prediction list if any
- instance produced an error.</dd>
-</dl>
-
-If the call is successful, the response body will contain one prediction
-entry per instance in the request body. If prediction fails for any
-instance, the response body will contain no predictions and will contian
-a single error entry instead.
-
-Even though there is one prediction per instance, the format of a
-prediction is not directly related to the format of an instance.
-Predictions take whatever format is specified in the outputs collection
-defined in the model. The collection of predictions is returned in a JSON
-list. Each member of the list can be a simple value, a list, or a JSON
-object of any complexity. If your model has more than one output tensor,
-each prediction will be a JSON object containing a name/value pair for each
-output. The names identify the output aliases in the graph.
-
-The following examples show some possible responses:
-
-A simple set of predictions for three input instances, where each
-prediction is an integer value:
-<pre>
-{"predictions": [5, 4, 3]}
-</pre>
-A more complex set of predictions, each containing two named values that
-correspond to output tensors, named **label** and **scores** respectively.
-The value of **label** is the predicted category ("car" or "beach") and
-**scores** contains a list of probabilities for that instance across the
-possible categories.
-<pre>
-{"predictions": [{"label": "beach", "scores": [0.1, 0.9]},
- {"label": "car", "scores": [0.75, 0.25]}]}
-</pre>
-A response when there is an error processing an input instance:
-<pre>
-{"error": "Divide by zero"}
-</pre>
+**** REMOVE FROM GENERATED DOCUMENTATION
Args:
name: string, Required. The resource name of a model or a version.
@@ -193,7 +149,7 @@
# model's input definition. Instances can include named inputs or can contain
# only unlabeled values.
#
- # Most data does not include named inputs. Some instances will be simple
+ # Not all data includes named inputs. Some instances will be simple
# JSON values (boolean, number, or string). However, instances are often lists
# of simple values, or complex nested lists. Here are some examples of request
# bodies:
@@ -208,7 +164,13 @@
# </pre>
# Sentences encoded as lists of words (vectors of strings):
# <pre>
- # {"instances": [["the","quick","brown"], ["la","bruja","le"]]}
+ # {
+ # "instances": [
+ # ["the","quick","brown"],
+ # ["la","bruja","le"],
+ # ...
+ # ]
+ # }
# </pre>
# Floating point scalar values:
# <pre>
@@ -216,22 +178,53 @@
# </pre>
# Vectors of integers:
# <pre>
- # {"instances": [[0, 1, 2], [3, 4, 5],...]}
+ # {
+ # "instances": [
+ # [0, 1, 2],
+ # [3, 4, 5],
+ # ...
+ # ]
+ # }
# </pre>
# Tensors (in this case, two-dimensional tensors):
# <pre>
- # {"instances": [[[0, 1, 2], [3, 4, 5]], ...]}
+ # {
+ # "instances": [
+ # [
+ # [0, 1, 2],
+ # [3, 4, 5]
+ # ],
+ # ...
+ # ]
+ # }
# </pre>
- # Images represented as a three-dimensional list. In this encoding scheme the
- # first two dimensions represent the rows and columns of the image, and the
- # third contains the R, G, and B values for each pixel.
+ # Images can be represented in different ways. In this encoding scheme the first
+ # two dimensions represent the rows and columns of the image, and the third
+ # contains lists (vectors) of the R, G, and B values for each pixel.
# <pre>
- # {"instances": [[[[138, 30, 66], [130, 20, 56], ...]]]]}
+ # {
+ # "instances": [
+ # [
+ # [
+ # [138, 30, 66],
+ # [130, 20, 56],
+ # ...
+ # ],
+ # [
+ # [126, 38, 61],
+ # [122, 24, 57],
+ # ...
+ # ],
+ # ...
+ # ],
+ # ...
+ # ]
+ # }
# </pre>
- # Data must be encoded as UTF-8. If your data uses another character encoding,
- # you must base64 encode the data and mark it as binary. To mark a JSON string
- # as binary, replace it with an object with a single attribute named `b`:
- # <pre>{"b": "..."} </pre>
+ # JSON strings must be encoded as UTF-8. To send binary data, you must
+ # base64-encode the data and mark it as binary. To mark a JSON string
+ # as binary, replace it with a JSON object with a single attribute named `b64`:
+ # <pre>{"b64": "..."} </pre>
# For example:
#
# Two Serialized tf.Examples (fake data, for illustrative purposes only):
@@ -247,8 +240,20 @@
#
# JSON input data to be preprocessed:
# <pre>
- # {"instances": [{"a": 1.0, "b": true, "c": "x"},
- # {"a": -2.0, "b": false, "c": "y"}]}
+ # {
+ # "instances": [
+ # {
+ # "a": 1.0,
+ # "b": true,
+ # "c": "x"
+ # },
+ # {
+ # "a": -2.0,
+ # "b": false,
+ # "c": "y"
+ # }
+ # ]
+ # }
# </pre>
# Some models have an underlying TensorFlow graph that accepts multiple input
# tensors. In this case, you should use the names of JSON name/value pairs to
@@ -257,14 +262,59 @@
# For a graph with input tensor aliases "tag" (string) and "image"
# (base64-encoded string):
# <pre>
- # {"instances": [{"tag": "beach", "image": {"b64": "ASa8asdf"}},
- # {"tag": "car", "image": {"b64": "JLK7ljk3"}}]}
+ # {
+ # "instances": [
+ # {
+ # "tag": "beach",
+ # "image": {"b64": "ASa8asdf"}
+ # },
+ # {
+ # "tag": "car",
+ # "image": {"b64": "JLK7ljk3"}
+ # }
+ # ]
+ # }
# </pre>
# For a graph with input tensor aliases "tag" (string) and "image"
# (3-dimensional array of 8-bit ints):
# <pre>
- # {"instances": [{"tag": "beach", "image": [[[263, 1, 10], [262, 2, 11], ...]]},
- # {"tag": "car", "image": [[[10, 11, 24], [23, 10, 15], ...]]}]}
+ # {
+ # "instances": [
+ # {
+ # "tag": "beach",
+ # "image": [
+ # [
+ # [138, 30, 66],
+ # [130, 20, 56],
+ # ...
+ # ],
+ # [
+ # [126, 38, 61],
+ # [122, 24, 57],
+ # ...
+ # ],
+ # ...
+ # ]
+ # },
+ # {
+ # "tag": "car",
+ # "image": [
+ # [
+ # [255, 0, 102],
+ # [255, 0, 97],
+ # ...
+ # ],
+ # [
+ # [254, 1, 101],
+ # [254, 2, 93],
+ # ...
+ # ],
+ # ...
+ # ]
+ # },
+ # ...
+ # ]
+ # }
# </pre>
# If the call is successful, the response body will contain one prediction
# entry per instance in the request body. If prediction fails for any