docs: update docs/dyn (#1096)
This PR was generated using Autosynth. :rainbow:
Synth log will be available here:
https://source.cloud.google.com/results/invocations/6f0f288a-a1e8-4b2d-a85f-00b1c6150185/targets
- [ ] To automatically regenerate this PR, check this box.
Source-Link: https://github.com/googleapis/synthtool/commit/39b7149da4026765385403632db3c6f63db96b2c
Source-Link: https://github.com/googleapis/synthtool/commit/9a7d9fbb7045c34c9d3d22c1ff766eeae51f04c9
Source-Link: https://github.com/googleapis/synthtool/commit/dc9903a8c30c3662b6098f0e4a97f221d67268b2
Source-Link: https://github.com/googleapis/synthtool/commit/7fcc405a579d5d53a726ff3da1b7c8c08f0f2d58
Source-Link: https://github.com/googleapis/synthtool/commit/d5fc0bcf9ea9789c5b0e3154a9e3b29e5cea6116
Source-Link: https://github.com/googleapis/synthtool/commit/e89175cf074dccc4babb4eca66ae913696e47a71
Source-Link: https://github.com/googleapis/synthtool/commit/7d652819519dfa24da9e14548232e4aaba71a11c
Source-Link: https://github.com/googleapis/synthtool/commit/7db8a6c5ffb12a6e4c2f799c18f00f7f3d60e279
Source-Link: https://github.com/googleapis/synthtool/commit/1f1148d3c7a7a52f0c98077f976bd9b3c948ee2b
Source-Link: https://github.com/googleapis/synthtool/commit/2c8aecedd55b0480fb4e123b6e07fa5b12953862
Source-Link: https://github.com/googleapis/synthtool/commit/3d3e94c4e02370f307a9a200b0c743c3d8d19f29
Source-Link: https://github.com/googleapis/synthtool/commit/c7824ea48ff6d4d42dfae0849aec8a85acd90bd9
Source-Link: https://github.com/googleapis/synthtool/commit/ba9918cd22874245b55734f57470c719b577e591
Source-Link: https://github.com/googleapis/synthtool/commit/b19b401571e77192f8dd38eab5fb2300a0de9324
Source-Link: https://github.com/googleapis/synthtool/commit/6542bd723403513626f61642fc02ddca528409aa
diff --git a/docs/dyn/datalabeling_v1beta1.projects.evaluations.html b/docs/dyn/datalabeling_v1beta1.projects.evaluations.html
index c12f63e..173c48e 100644
--- a/docs/dyn/datalabeling_v1beta1.projects.evaluations.html
+++ b/docs/dyn/datalabeling_v1beta1.projects.evaluations.html
@@ -78,7 +78,7 @@
<code><a href="#close">close()</a></code></p>
<p class="firstline">Close httplib2 connections.</p>
<p class="toc_element">
- <code><a href="#search">search(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)</a></code></p>
+ <code><a href="#search">search(parent, pageSize=None, pageToken=None, filter=None, x__xgafv=None)</a></code></p>
<p class="firstline">Searches evaluations within a project.</p>
<p class="toc_element">
<code><a href="#search_next">search_next(previous_request, previous_response)</a></code></p>
@@ -90,14 +90,14 @@
</div>
<div class="method">
- <code class="details" id="search">search(parent, filter=None, pageSize=None, pageToken=None, x__xgafv=None)</code>
+ <code class="details" id="search">search(parent, pageSize=None, pageToken=None, filter=None, x__xgafv=None)</code>
<pre>Searches evaluations within a project.
Args:
parent: string, Required. Evaluation search parent (project ID). Format: "projects/ {project_id}" (required)
- filter: string, Optional. To search evaluations, you can filter by the following: * evaluation_job.evaluation_job_id (the last part of EvaluationJob.name) * evaluation_job.model_id (the {model_name} portion of EvaluationJob.modelVersion) * evaluation_job.evaluation_job_run_time_start (Minimum threshold for the evaluationJobRunTime that created the evaluation) * evaluation_job.evaluation_job_run_time_end (Maximum threshold for the evaluationJobRunTime that created the evaluation) * evaluation_job.job_state (EvaluationJob.state) * annotation_spec.display_name (the Evaluation contains a metric for the annotation spec with this displayName) To filter by multiple criteria, use the `AND` operator or the `OR` operator. The following example shows a string that filters by several criteria: "evaluation_job.evaluation_job_id = {evaluation_job_id} AND evaluation_job.model_id = {model_name} AND evaluation_job.evaluation_job_run_time_start = {timestamp_1} AND evaluation_job.evaluation_job_run_time_end = {timestamp_2} AND annotation_spec.display_name = {display_name}"
pageSize: integer, Optional. Requested page size. Server may return fewer results than requested. Default value is 100.
pageToken: string, Optional. A token identifying a page of results for the server to return. Typically obtained by the nextPageToken of the response to a previous search request. If you don't specify this field, the API call requests the first page of the search.
+ filter: string, Optional. To search evaluations, you can filter by the following: * evaluation_job.evaluation_job_id (the last part of EvaluationJob.name) * evaluation_job.model_id (the {model_name} portion of EvaluationJob.modelVersion) * evaluation_job.evaluation_job_run_time_start (Minimum threshold for the evaluationJobRunTime that created the evaluation) * evaluation_job.evaluation_job_run_time_end (Maximum threshold for the evaluationJobRunTime that created the evaluation) * evaluation_job.job_state (EvaluationJob.state) * annotation_spec.display_name (the Evaluation contains a metric for the annotation spec with this displayName) To filter by multiple criteria, use the `AND` operator or the `OR` operator. The following example shows a string that filters by several criteria: "evaluation_job.evaluation_job_id = {evaluation_job_id} AND evaluation_job.model_id = {model_name} AND evaluation_job.evaluation_job_run_time_start = {timestamp_1} AND evaluation_job.evaluation_job_run_time_end = {timestamp_2} AND annotation_spec.display_name = {display_name}"
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
@@ -110,88 +110,88 @@
"nextPageToken": "A String", # A token to retrieve next page of results.
"evaluations": [ # The list of evaluations matching the search.
{ # Describes an evaluation between a machine learning model's predictions and ground truth labels. Created when an EvaluationJob runs successfully.
- "evaluationMetrics": { # Output only. Metrics comparing predictions to ground truth labels.
- "classificationMetrics": { # Metrics calculated for a classification model.
- "prCurve": { # Precision-recall curve based on ground truth labels, predicted labels, and scores for the predicted labels.
- "meanAveragePrecision": 3.14, # Mean average prcision of this curve.
- "annotationSpec": { # Container of information related to one possible annotation that can be used in a labeling task. For example, an image classification task where images are labeled as `dog` or `cat` must reference an AnnotationSpec for `dog` and an AnnotationSpec for `cat`. # The annotation spec of the label for which the precision-recall curve calculated. If this field is empty, that means the precision-recall curve is an aggregate curve for all labels.
- "displayName": "A String", # Required. The display name of the AnnotationSpec. Maximum of 64 characters.
- "description": "A String", # Optional. User-provided description of the annotation specification. The description can be up to 10,000 characters long.
- "index": 42, # Output only. This is the integer index of the AnnotationSpec. The index for the whole AnnotationSpecSet is sequential starting from 0. For example, an AnnotationSpecSet with classes `dog` and `cat`, might contain one AnnotationSpec with `{ display_name: "dog", index: 0 }` and one AnnotationSpec with `{ display_name: "cat", index: 1 }`. This is especially useful for model training as it encodes the string labels into numeric values.
- },
- "areaUnderCurve": 3.14, # Area under the precision-recall curve. Not to be confused with area under a receiver operating characteristic (ROC) curve.
- "confidenceMetricsEntries": [ # Entries that make up the precision-recall graph. Each entry is a "point" on the graph drawn for a different `confidence_threshold`.
- {
- "precisionAt1": 3.14, # Precision value for entries with label that has highest score.
- "recallAt5": 3.14, # Recall value for entries with label that has highest 5 scores.
- "recallAt1": 3.14, # Recall value for entries with label that has highest score.
- "recall": 3.14, # Recall value.
- "precisionAt5": 3.14, # Precision value for entries with label that has highest 5 scores.
- "confidenceThreshold": 3.14, # Threshold used for this entry. For classification tasks, this is a classification threshold: a predicted label is categorized as positive or negative (in the context of this point on the PR curve) based on whether the label's score meets this threshold. For image object detection (bounding box) tasks, this is the [intersection-over-union (IOU)](/vision/automl/object-detection/docs/evaluate#intersection-over-union) threshold for the context of this point on the PR curve.
- "f1ScoreAt5": 3.14, # The harmonic mean of recall_at5 and precision_at5.
- "f1Score": 3.14, # Harmonic mean of recall and precision.
- "precision": 3.14, # Precision value.
- "f1ScoreAt1": 3.14, # The harmonic mean of recall_at1 and precision_at1.
- },
- ],
- },
- "confusionMatrix": { # Confusion matrix of the model running the classification. Only applicable when the metrics entry aggregates multiple labels. Not applicable when the entry is for a single label. # Confusion matrix of predicted labels vs. ground truth labels.
- "row": [
- { # A row in the confusion matrix. Each entry in this row has the same ground truth label.
- "annotationSpec": { # Container of information related to one possible annotation that can be used in a labeling task. For example, an image classification task where images are labeled as `dog` or `cat` must reference an AnnotationSpec for `dog` and an AnnotationSpec for `cat`. # The annotation spec of the ground truth label for this row.
- "displayName": "A String", # Required. The display name of the AnnotationSpec. Maximum of 64 characters.
- "description": "A String", # Optional. User-provided description of the annotation specification. The description can be up to 10,000 characters long.
- "index": 42, # Output only. This is the integer index of the AnnotationSpec. The index for the whole AnnotationSpecSet is sequential starting from 0. For example, an AnnotationSpecSet with classes `dog` and `cat`, might contain one AnnotationSpec with `{ display_name: "dog", index: 0 }` and one AnnotationSpec with `{ display_name: "cat", index: 1 }`. This is especially useful for model training as it encodes the string labels into numeric values.
- },
- "entries": [ # A list of the confusion matrix entries. One entry for each possible predicted label.
- {
- "annotationSpec": { # Container of information related to one possible annotation that can be used in a labeling task. For example, an image classification task where images are labeled as `dog` or `cat` must reference an AnnotationSpec for `dog` and an AnnotationSpec for `cat`. # The annotation spec of a predicted label.
- "displayName": "A String", # Required. The display name of the AnnotationSpec. Maximum of 64 characters.
- "description": "A String", # Optional. User-provided description of the annotation specification. The description can be up to 10,000 characters long.
- "index": 42, # Output only. This is the integer index of the AnnotationSpec. The index for the whole AnnotationSpecSet is sequential starting from 0. For example, an AnnotationSpecSet with classes `dog` and `cat`, might contain one AnnotationSpec with `{ display_name: "dog", index: 0 }` and one AnnotationSpec with `{ display_name: "cat", index: 1 }`. This is especially useful for model training as it encodes the string labels into numeric values.
- },
- "itemCount": 42, # Number of items predicted to have this label. (The ground truth label for these items is the `Row.annotationSpec` of this entry's parent.)
- },
- ],
- },
- ],
- },
- },
- "objectDetectionMetrics": { # Metrics calculated for an image object detection (bounding box) model.
- "prCurve": { # Precision-recall curve.
- "meanAveragePrecision": 3.14, # Mean average prcision of this curve.
- "annotationSpec": { # Container of information related to one possible annotation that can be used in a labeling task. For example, an image classification task where images are labeled as `dog` or `cat` must reference an AnnotationSpec for `dog` and an AnnotationSpec for `cat`. # The annotation spec of the label for which the precision-recall curve calculated. If this field is empty, that means the precision-recall curve is an aggregate curve for all labels.
- "displayName": "A String", # Required. The display name of the AnnotationSpec. Maximum of 64 characters.
- "description": "A String", # Optional. User-provided description of the annotation specification. The description can be up to 10,000 characters long.
- "index": 42, # Output only. This is the integer index of the AnnotationSpec. The index for the whole AnnotationSpecSet is sequential starting from 0. For example, an AnnotationSpecSet with classes `dog` and `cat`, might contain one AnnotationSpec with `{ display_name: "dog", index: 0 }` and one AnnotationSpec with `{ display_name: "cat", index: 1 }`. This is especially useful for model training as it encodes the string labels into numeric values.
- },
- "areaUnderCurve": 3.14, # Area under the precision-recall curve. Not to be confused with area under a receiver operating characteristic (ROC) curve.
- "confidenceMetricsEntries": [ # Entries that make up the precision-recall graph. Each entry is a "point" on the graph drawn for a different `confidence_threshold`.
- {
- "precisionAt1": 3.14, # Precision value for entries with label that has highest score.
- "recallAt5": 3.14, # Recall value for entries with label that has highest 5 scores.
- "recallAt1": 3.14, # Recall value for entries with label that has highest score.
- "recall": 3.14, # Recall value.
- "precisionAt5": 3.14, # Precision value for entries with label that has highest 5 scores.
- "confidenceThreshold": 3.14, # Threshold used for this entry. For classification tasks, this is a classification threshold: a predicted label is categorized as positive or negative (in the context of this point on the PR curve) based on whether the label's score meets this threshold. For image object detection (bounding box) tasks, this is the [intersection-over-union (IOU)](/vision/automl/object-detection/docs/evaluate#intersection-over-union) threshold for the context of this point on the PR curve.
- "f1ScoreAt5": 3.14, # The harmonic mean of recall_at5 and precision_at5.
- "f1Score": 3.14, # Harmonic mean of recall and precision.
- "precision": 3.14, # Precision value.
- "f1ScoreAt1": 3.14, # The harmonic mean of recall_at1 and precision_at1.
- },
- ],
- },
- },
- },
- "name": "A String", # Output only. Resource name of an evaluation. The name has the following format: "projects/{project_id}/datasets/{dataset_id}/evaluations/ {evaluation_id}'
"annotationType": "A String", # Output only. Type of task that the model version being evaluated performs, as defined in the evaluationJobConfig.inputConfig.annotationType field of the evaluation job that created this evaluation.
+ "name": "A String", # Output only. Resource name of an evaluation. The name has the following format: "projects/{project_id}/datasets/{dataset_id}/evaluations/ {evaluation_id}'
+ "evaluationJobRunTime": "A String", # Output only. Timestamp for when the evaluation job that created this evaluation ran.
"config": { # Configuration details used for calculating evaluation metrics and creating an Evaluation. # Output only. Options used in the evaluation job that created this evaluation.
"boundingBoxEvaluationOptions": { # Options regarding evaluation between bounding boxes. # Only specify this field if the related model performs image object detection (`IMAGE_BOUNDING_BOX_ANNOTATION`). Describes how to evaluate bounding boxes.
"iouThreshold": 3.14, # Minimum [intersection-over-union (IOU)](/vision/automl/object-detection/docs/evaluate#intersection-over-union) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
},
},
"createTime": "A String", # Output only. Timestamp for when this evaluation was created.
- "evaluationJobRunTime": "A String", # Output only. Timestamp for when the evaluation job that created this evaluation ran.
+ "evaluationMetrics": { # Output only. Metrics comparing predictions to ground truth labels.
+ "objectDetectionMetrics": { # Metrics calculated for an image object detection (bounding box) model.
+ "prCurve": { # Precision-recall curve.
+ "meanAveragePrecision": 3.14, # Mean average prcision of this curve.
+ "annotationSpec": { # Container of information related to one possible annotation that can be used in a labeling task. For example, an image classification task where images are labeled as `dog` or `cat` must reference an AnnotationSpec for `dog` and an AnnotationSpec for `cat`. # The annotation spec of the label for which the precision-recall curve calculated. If this field is empty, that means the precision-recall curve is an aggregate curve for all labels.
+ "description": "A String", # Optional. User-provided description of the annotation specification. The description can be up to 10,000 characters long.
+ "displayName": "A String", # Required. The display name of the AnnotationSpec. Maximum of 64 characters.
+ "index": 42, # Output only. This is the integer index of the AnnotationSpec. The index for the whole AnnotationSpecSet is sequential starting from 0. For example, an AnnotationSpecSet with classes `dog` and `cat`, might contain one AnnotationSpec with `{ display_name: "dog", index: 0 }` and one AnnotationSpec with `{ display_name: "cat", index: 1 }`. This is especially useful for model training as it encodes the string labels into numeric values.
+ },
+ "areaUnderCurve": 3.14, # Area under the precision-recall curve. Not to be confused with area under a receiver operating characteristic (ROC) curve.
+ "confidenceMetricsEntries": [ # Entries that make up the precision-recall graph. Each entry is a "point" on the graph drawn for a different `confidence_threshold`.
+ {
+ "recallAt1": 3.14, # Recall value for entries with label that has highest score.
+ "f1Score": 3.14, # Harmonic mean of recall and precision.
+ "precision": 3.14, # Precision value.
+ "precisionAt1": 3.14, # Precision value for entries with label that has highest score.
+ "f1ScoreAt1": 3.14, # The harmonic mean of recall_at1 and precision_at1.
+ "confidenceThreshold": 3.14, # Threshold used for this entry. For classification tasks, this is a classification threshold: a predicted label is categorized as positive or negative (in the context of this point on the PR curve) based on whether the label's score meets this threshold. For image object detection (bounding box) tasks, this is the [intersection-over-union (IOU)](/vision/automl/object-detection/docs/evaluate#intersection-over-union) threshold for the context of this point on the PR curve.
+ "precisionAt5": 3.14, # Precision value for entries with label that has highest 5 scores.
+ "f1ScoreAt5": 3.14, # The harmonic mean of recall_at5 and precision_at5.
+ "recallAt5": 3.14, # Recall value for entries with label that has highest 5 scores.
+ "recall": 3.14, # Recall value.
+ },
+ ],
+ },
+ },
+ "classificationMetrics": { # Metrics calculated for a classification model.
+ "prCurve": { # Precision-recall curve based on ground truth labels, predicted labels, and scores for the predicted labels.
+ "meanAveragePrecision": 3.14, # Mean average prcision of this curve.
+ "annotationSpec": { # Container of information related to one possible annotation that can be used in a labeling task. For example, an image classification task where images are labeled as `dog` or `cat` must reference an AnnotationSpec for `dog` and an AnnotationSpec for `cat`. # The annotation spec of the label for which the precision-recall curve calculated. If this field is empty, that means the precision-recall curve is an aggregate curve for all labels.
+ "description": "A String", # Optional. User-provided description of the annotation specification. The description can be up to 10,000 characters long.
+ "displayName": "A String", # Required. The display name of the AnnotationSpec. Maximum of 64 characters.
+ "index": 42, # Output only. This is the integer index of the AnnotationSpec. The index for the whole AnnotationSpecSet is sequential starting from 0. For example, an AnnotationSpecSet with classes `dog` and `cat`, might contain one AnnotationSpec with `{ display_name: "dog", index: 0 }` and one AnnotationSpec with `{ display_name: "cat", index: 1 }`. This is especially useful for model training as it encodes the string labels into numeric values.
+ },
+ "areaUnderCurve": 3.14, # Area under the precision-recall curve. Not to be confused with area under a receiver operating characteristic (ROC) curve.
+ "confidenceMetricsEntries": [ # Entries that make up the precision-recall graph. Each entry is a "point" on the graph drawn for a different `confidence_threshold`.
+ {
+ "recallAt1": 3.14, # Recall value for entries with label that has highest score.
+ "f1Score": 3.14, # Harmonic mean of recall and precision.
+ "precision": 3.14, # Precision value.
+ "precisionAt1": 3.14, # Precision value for entries with label that has highest score.
+ "f1ScoreAt1": 3.14, # The harmonic mean of recall_at1 and precision_at1.
+ "confidenceThreshold": 3.14, # Threshold used for this entry. For classification tasks, this is a classification threshold: a predicted label is categorized as positive or negative (in the context of this point on the PR curve) based on whether the label's score meets this threshold. For image object detection (bounding box) tasks, this is the [intersection-over-union (IOU)](/vision/automl/object-detection/docs/evaluate#intersection-over-union) threshold for the context of this point on the PR curve.
+ "precisionAt5": 3.14, # Precision value for entries with label that has highest 5 scores.
+ "f1ScoreAt5": 3.14, # The harmonic mean of recall_at5 and precision_at5.
+ "recallAt5": 3.14, # Recall value for entries with label that has highest 5 scores.
+ "recall": 3.14, # Recall value.
+ },
+ ],
+ },
+ "confusionMatrix": { # Confusion matrix of the model running the classification. Only applicable when the metrics entry aggregates multiple labels. Not applicable when the entry is for a single label. # Confusion matrix of predicted labels vs. ground truth labels.
+ "row": [
+ { # A row in the confusion matrix. Each entry in this row has the same ground truth label.
+ "entries": [ # A list of the confusion matrix entries. One entry for each possible predicted label.
+ {
+ "annotationSpec": { # Container of information related to one possible annotation that can be used in a labeling task. For example, an image classification task where images are labeled as `dog` or `cat` must reference an AnnotationSpec for `dog` and an AnnotationSpec for `cat`. # The annotation spec of a predicted label.
+ "description": "A String", # Optional. User-provided description of the annotation specification. The description can be up to 10,000 characters long.
+ "displayName": "A String", # Required. The display name of the AnnotationSpec. Maximum of 64 characters.
+ "index": 42, # Output only. This is the integer index of the AnnotationSpec. The index for the whole AnnotationSpecSet is sequential starting from 0. For example, an AnnotationSpecSet with classes `dog` and `cat`, might contain one AnnotationSpec with `{ display_name: "dog", index: 0 }` and one AnnotationSpec with `{ display_name: "cat", index: 1 }`. This is especially useful for model training as it encodes the string labels into numeric values.
+ },
+ "itemCount": 42, # Number of items predicted to have this label. (The ground truth label for these items is the `Row.annotationSpec` of this entry's parent.)
+ },
+ ],
+ "annotationSpec": { # Container of information related to one possible annotation that can be used in a labeling task. For example, an image classification task where images are labeled as `dog` or `cat` must reference an AnnotationSpec for `dog` and an AnnotationSpec for `cat`. # The annotation spec of the ground truth label for this row.
+ "description": "A String", # Optional. User-provided description of the annotation specification. The description can be up to 10,000 characters long.
+ "displayName": "A String", # Required. The display name of the AnnotationSpec. Maximum of 64 characters.
+ "index": 42, # Output only. This is the integer index of the AnnotationSpec. The index for the whole AnnotationSpecSet is sequential starting from 0. For example, an AnnotationSpecSet with classes `dog` and `cat`, might contain one AnnotationSpec with `{ display_name: "dog", index: 0 }` and one AnnotationSpec with `{ display_name: "cat", index: 1 }`. This is especially useful for model training as it encodes the string labels into numeric values.
+ },
+ },
+ ],
+ },
+ },
+ },
"evaluatedItemCount": "A String", # Output only. The number of items in the ground truth dataset that were used for this evaluation. Only populated when the evaulation is for certain AnnotationTypes.
},
],
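For reference, the reordered `search(parent, pageSize=None, pageToken=None, filter=None, x__xgafv=None)` signature and `search_next(previous_request, previous_response)` pager can be exercised with the generated client as sketched below. This is a minimal usage sketch, not part of the generated docs: it assumes Application Default Credentials are available to `googleapiclient.discovery.build`, and `my-project` plus the braced filter values are placeholders to replace with real identifiers.

```python
from googleapiclient.discovery import build

# Assumes Application Default Credentials; 'my-project' is a placeholder project ID.
service = build('datalabeling', 'v1beta1')
evaluations = service.projects().evaluations()

# Filter built from the criteria documented above; the braced values are
# placeholders for a real evaluation job ID and model name.
filter_str = (
    'evaluation_job.evaluation_job_id = {evaluation_job_id} '
    'AND evaluation_job.model_id = {model_name}'
)

request = evaluations.search(
    parent='projects/my-project',
    filter=filter_str,
    pageSize=100,
)
while request is not None:
    response = request.execute()
    for evaluation in response.get('evaluations', []):
        print(evaluation['name'], evaluation.get('evaluatedItemCount'))
    # search_next returns None once there are no further pages.
    request = evaluations.search_next(
        previous_request=request, previous_response=response)
```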
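The `evaluationMetrics` payload above carries the full precision-recall curve. A small hypothetical helper like the following picks the operating point with the best `f1Score`; it assumes a classification evaluation, so `classificationMetrics` is populated (object detection results expose entries of the same shape under `objectDetectionMetrics`). As the field comments note, `f1Score` is the harmonic mean of precision and recall, i.e. 2 * precision * recall / (precision + recall).

```python
def best_operating_point(evaluation):
    """Return (confidenceThreshold, precision, recall, f1Score) for the
    PR-curve entry with the highest f1Score, or None if the curve is empty.

    `evaluation` is one item from response['evaluations'] above; assumes a
    classification model so classificationMetrics is present.
    """
    pr_curve = evaluation['evaluationMetrics']['classificationMetrics']['prCurve']
    entries = pr_curve.get('confidenceMetricsEntries', [])
    if not entries:
        return None
    best = max(entries, key=lambda e: e.get('f1Score', 0.0))
    # f1Score is the harmonic mean of precision and recall:
    #   f1 = 2 * precision * recall / (precision + recall)
    return (best['confidenceThreshold'], best['precision'],
            best['recall'], best['f1Score'])
```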
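Similarly, the aggregate `confusionMatrix` can be reduced to per-label counts. The sketch below is illustrative only; it assumes each row carries its ground-truth `annotationSpec` plus one entry per predicted label, as documented in the response structure above.

```python
def confusion_matrix_summary(evaluation):
    """Map each ground-truth displayName to (correctly_predicted, total) item counts."""
    classification = evaluation['evaluationMetrics']['classificationMetrics']
    matrix = classification.get('confusionMatrix', {})
    summary = {}
    for row in matrix.get('row', []):
        truth = row['annotationSpec']['displayName']
        entries = row.get('entries', [])
        total = sum(entry['itemCount'] for entry in entries)
        correct = sum(entry['itemCount'] for entry in entries
                      if entry['annotationSpec']['displayName'] == truth)
        summary[truth] = (correct, total)
    return summary
```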