chore: update docs/dyn (#1106)

diff --git a/docs/dyn/datalabeling_v1beta1.projects.evaluations.html b/docs/dyn/datalabeling_v1beta1.projects.evaluations.html
index 173c48e..36c3307 100644
--- a/docs/dyn/datalabeling_v1beta1.projects.evaluations.html
+++ b/docs/dyn/datalabeling_v1beta1.projects.evaluations.html
@@ -78,7 +78,7 @@
   <code><a href="#close">close()</a></code></p>
 <p class="firstline">Close httplib2 connections.</p>
 <p class="toc_element">
-  <code><a href="#search">search(parent, pageSize=None, pageToken=None, filter=None, x__xgafv=None)</a></code></p>
+  <code><a href="#search">search(parent, pageSize=None, filter=None, pageToken=None, x__xgafv=None)</a></code></p>
 <p class="firstline">Searches evaluations within a project.</p>
 <p class="toc_element">
   <code><a href="#search_next">search_next(previous_request, previous_response)</a></code></p>
@@ -90,14 +90,14 @@
 </div>
 
 <div class="method">
-    <code class="details" id="search">search(parent, pageSize=None, pageToken=None, filter=None, x__xgafv=None)</code>
+    <code class="details" id="search">search(parent, pageSize=None, filter=None, pageToken=None, x__xgafv=None)</code>
   <pre>Searches evaluations within a project.
 
 Args:
   parent: string, Required. Evaluation search parent (project ID). Format: &quot;projects/{project_id}&quot; (required)
   pageSize: integer, Optional. Requested page size. Server may return fewer results than requested. Default value is 100.
-  pageToken: string, Optional. A token identifying a page of results for the server to return. Typically obtained from the nextPageToken of the response to a previous search request. If you don&#x27;t specify this field, the API call requests the first page of the search.
   filter: string, Optional. To search evaluations, you can filter by the following: * evaluation_job.evaluation_job_id (the last part of EvaluationJob.name) * evaluation_job.model_id (the {model_name} portion of EvaluationJob.modelVersion) * evaluation_job.evaluation_job_run_time_start (Minimum threshold for the evaluationJobRunTime that created the evaluation) * evaluation_job.evaluation_job_run_time_end (Maximum threshold for the evaluationJobRunTime that created the evaluation) * evaluation_job.job_state (EvaluationJob.state) * annotation_spec.display_name (the Evaluation contains a metric for the annotation spec with this displayName) To filter by multiple criteria, use the `AND` operator or the `OR` operator. The following example shows a string that filters by several criteria: &quot;evaluation_job.evaluation_job_id = {evaluation_job_id} AND evaluation_job.model_id = {model_name} AND evaluation_job.evaluation_job_run_time_start = {timestamp_1} AND evaluation_job.evaluation_job_run_time_end = {timestamp_2} AND annotation_spec.display_name = {display_name}&quot;
+  pageToken: string, Optional. A token identifying a page of results for the server to return. Typically obtained from the nextPageToken of the response to a previous search request. If you don&#x27;t specify this field, the API call requests the first page of the search.
   x__xgafv: string, V1 error format.
     Allowed values
       1 - v1 error format
@@ -110,76 +110,67 @@
     &quot;nextPageToken&quot;: &quot;A String&quot;, # A token to retrieve the next page of results.
     &quot;evaluations&quot;: [ # The list of evaluations matching the search.
       { # Describes an evaluation between a machine learning model&#x27;s predictions and ground truth labels. Created when an EvaluationJob runs successfully.
-        &quot;annotationType&quot;: &quot;A String&quot;, # Output only. Type of task that the model version being evaluated performs, as defined in the evaluationJobConfig.inputConfig.annotationType field of the evaluation job that created this evaluation.
-        &quot;name&quot;: &quot;A String&quot;, # Output only. Resource name of an evaluation. The name has the following format: &quot;projects/{project_id}/datasets/{dataset_id}/evaluations/{evaluation_id}&quot;
-        &quot;evaluationJobRunTime&quot;: &quot;A String&quot;, # Output only. Timestamp for when the evaluation job that created this evaluation ran.
-        &quot;config&quot;: { # Configuration details used for calculating evaluation metrics and creating an Evaluation. # Output only. Options used in the evaluation job that created this evaluation.
-          &quot;boundingBoxEvaluationOptions&quot;: { # Options regarding evaluation between bounding boxes. # Only specify this field if the related model performs image object detection (`IMAGE_BOUNDING_BOX_ANNOTATION`). Describes how to evaluate bounding boxes.
-            &quot;iouThreshold&quot;: 3.14, # Minimum [intersection-over-union (IOU)](/vision/automl/object-detection/docs/evaluate#intersection-over-union) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
-          },
-        },
-        &quot;createTime&quot;: &quot;A String&quot;, # Output only. Timestamp for when this evaluation was created.
         &quot;evaluationMetrics&quot;: { # Output only. Metrics comparing predictions to ground truth labels.
           &quot;objectDetectionMetrics&quot;: { # Metrics calculated for an image object detection (bounding box) model.
             &quot;prCurve&quot;: { # Precision-recall curve.
               &quot;meanAveragePrecision&quot;: 3.14, # Mean average precision of this curve.
+              &quot;confidenceMetricsEntries&quot;: [ # Entries that make up the precision-recall graph. Each entry is a &quot;point&quot; on the graph drawn for a different `confidence_threshold`.
+                {
+                  &quot;recallAt1&quot;: 3.14, # Recall value for entries with the label that has the highest score.
+                  &quot;recall&quot;: 3.14, # Recall value.
+                  &quot;recallAt5&quot;: 3.14, # Recall value for entries with the labels that have the 5 highest scores.
+                  &quot;confidenceThreshold&quot;: 3.14, # Threshold used for this entry. For classification tasks, this is a classification threshold: a predicted label is categorized as positive or negative (in the context of this point on the PR curve) based on whether the label&#x27;s score meets this threshold. For image object detection (bounding box) tasks, this is the [intersection-over-union (IOU)](/vision/automl/object-detection/docs/evaluate#intersection-over-union) threshold for the context of this point on the PR curve.
+                  &quot;f1ScoreAt1&quot;: 3.14, # The harmonic mean of recall_at1 and precision_at1.
+                  &quot;f1ScoreAt5&quot;: 3.14, # The harmonic mean of recall_at5 and precision_at5.
+                  &quot;precisionAt5&quot;: 3.14, # Precision value for entries with the labels that have the 5 highest scores.
+                  &quot;precisionAt1&quot;: 3.14, # Precision value for entries with the label that has the highest score.
+                  &quot;f1Score&quot;: 3.14, # Harmonic mean of recall and precision.
+                  &quot;precision&quot;: 3.14, # Precision value.
+                },
+              ],
               &quot;annotationSpec&quot;: { # Container of information related to one possible annotation that can be used in a labeling task. For example, an image classification task where images are labeled as `dog` or `cat` must reference an AnnotationSpec for `dog` and an AnnotationSpec for `cat`. # The annotation spec of the label for which the precision-recall curve is calculated. If this field is empty, the precision-recall curve is an aggregate curve for all labels.
                 &quot;description&quot;: &quot;A String&quot;, # Optional. User-provided description of the annotation specification. The description can be up to 10,000 characters long.
                 &quot;displayName&quot;: &quot;A String&quot;, # Required. The display name of the AnnotationSpec. Maximum of 64 characters.
                 &quot;index&quot;: 42, # Output only. This is the integer index of the AnnotationSpec. The index for the whole AnnotationSpecSet is sequential starting from 0. For example, an AnnotationSpecSet with classes `dog` and `cat` might contain one AnnotationSpec with `{ display_name: &quot;dog&quot;, index: 0 }` and one AnnotationSpec with `{ display_name: &quot;cat&quot;, index: 1 }`. This is especially useful for model training as it encodes the string labels into numeric values.
               },
               &quot;areaUnderCurve&quot;: 3.14, # Area under the precision-recall curve. Not to be confused with area under a receiver operating characteristic (ROC) curve.
-              &quot;confidenceMetricsEntries&quot;: [ # Entries that make up the precision-recall graph. Each entry is a &quot;point&quot; on the graph drawn for a different `confidence_threshold`.
-                {
-                  &quot;recallAt1&quot;: 3.14, # Recall value for entries with the label that has the highest score.
-                  &quot;f1Score&quot;: 3.14, # Harmonic mean of recall and precision.
-                  &quot;precision&quot;: 3.14, # Precision value.
-                  &quot;precisionAt1&quot;: 3.14, # Precision value for entries with the label that has the highest score.
-                  &quot;f1ScoreAt1&quot;: 3.14, # The harmonic mean of recall_at1 and precision_at1.
-                  &quot;confidenceThreshold&quot;: 3.14, # Threshold used for this entry. For classification tasks, this is a classification threshold: a predicted label is categorized as positive or negative (in the context of this point on the PR curve) based on whether the label&#x27;s score meets this threshold. For image object detection (bounding box) tasks, this is the [intersection-over-union (IOU)](/vision/automl/object-detection/docs/evaluate#intersection-over-union) threshold for the context of this point on the PR curve.
-                  &quot;precisionAt5&quot;: 3.14, # Precision value for entries with the labels that have the 5 highest scores.
-                  &quot;f1ScoreAt5&quot;: 3.14, # The harmonic mean of recall_at5 and precision_at5.
-                  &quot;recallAt5&quot;: 3.14, # Recall value for entries with the labels that have the 5 highest scores.
-                  &quot;recall&quot;: 3.14, # Recall value.
-                },
-              ],
             },
           },
           &quot;classificationMetrics&quot;: { # Metrics calculated for a classification model.
             &quot;prCurve&quot;: { # Precision-recall curve based on ground truth labels, predicted labels, and scores for the predicted labels.
               &quot;meanAveragePrecision&quot;: 3.14, # Mean average precision of this curve.
+              &quot;confidenceMetricsEntries&quot;: [ # Entries that make up the precision-recall graph. Each entry is a &quot;point&quot; on the graph drawn for a different `confidence_threshold`.
+                {
+                  &quot;recallAt1&quot;: 3.14, # Recall value for entries with the label that has the highest score.
+                  &quot;recall&quot;: 3.14, # Recall value.
+                  &quot;recallAt5&quot;: 3.14, # Recall value for entries with the labels that have the 5 highest scores.
+                  &quot;confidenceThreshold&quot;: 3.14, # Threshold used for this entry. For classification tasks, this is a classification threshold: a predicted label is categorized as positive or negative (in the context of this point on the PR curve) based on whether the label&#x27;s score meets this threshold. For image object detection (bounding box) tasks, this is the [intersection-over-union (IOU)](/vision/automl/object-detection/docs/evaluate#intersection-over-union) threshold for the context of this point on the PR curve.
+                  &quot;f1ScoreAt1&quot;: 3.14, # The harmonic mean of recall_at1 and precision_at1.
+                  &quot;f1ScoreAt5&quot;: 3.14, # The harmonic mean of recall_at5 and precision_at5.
+                  &quot;precisionAt5&quot;: 3.14, # Precision value for entries with the labels that have the 5 highest scores.
+                  &quot;precisionAt1&quot;: 3.14, # Precision value for entries with the label that has the highest score.
+                  &quot;f1Score&quot;: 3.14, # Harmonic mean of recall and precision.
+                  &quot;precision&quot;: 3.14, # Precision value.
+                },
+              ],
               &quot;annotationSpec&quot;: { # Container of information related to one possible annotation that can be used in a labeling task. For example, an image classification task where images are labeled as `dog` or `cat` must reference an AnnotationSpec for `dog` and an AnnotationSpec for `cat`. # The annotation spec of the label for which the precision-recall curve is calculated. If this field is empty, the precision-recall curve is an aggregate curve for all labels.
                 &quot;description&quot;: &quot;A String&quot;, # Optional. User-provided description of the annotation specification. The description can be up to 10,000 characters long.
                 &quot;displayName&quot;: &quot;A String&quot;, # Required. The display name of the AnnotationSpec. Maximum of 64 characters.
                 &quot;index&quot;: 42, # Output only. This is the integer index of the AnnotationSpec. The index for the whole AnnotationSpecSet is sequential starting from 0. For example, an AnnotationSpecSet with classes `dog` and `cat` might contain one AnnotationSpec with `{ display_name: &quot;dog&quot;, index: 0 }` and one AnnotationSpec with `{ display_name: &quot;cat&quot;, index: 1 }`. This is especially useful for model training as it encodes the string labels into numeric values.
               },
               &quot;areaUnderCurve&quot;: 3.14, # Area under the precision-recall curve. Not to be confused with area under a receiver operating characteristic (ROC) curve.
-              &quot;confidenceMetricsEntries&quot;: [ # Entries that make up the precision-recall graph. Each entry is a &quot;point&quot; on the graph drawn for a different `confidence_threshold`.
-                {
-                  &quot;recallAt1&quot;: 3.14, # Recall value for entries with the label that has the highest score.
-                  &quot;f1Score&quot;: 3.14, # Harmonic mean of recall and precision.
-                  &quot;precision&quot;: 3.14, # Precision value.
-                  &quot;precisionAt1&quot;: 3.14, # Precision value for entries with the label that has the highest score.
-                  &quot;f1ScoreAt1&quot;: 3.14, # The harmonic mean of recall_at1 and precision_at1.
-                  &quot;confidenceThreshold&quot;: 3.14, # Threshold used for this entry. For classification tasks, this is a classification threshold: a predicted label is categorized as positive or negative (in the context of this point on the PR curve) based on whether the label&#x27;s score meets this threshold. For image object detection (bounding box) tasks, this is the [intersection-over-union (IOU)](/vision/automl/object-detection/docs/evaluate#intersection-over-union) threshold for the context of this point on the PR curve.
-                  &quot;precisionAt5&quot;: 3.14, # Precision value for entries with the labels that have the 5 highest scores.
-                  &quot;f1ScoreAt5&quot;: 3.14, # The harmonic mean of recall_at5 and precision_at5.
-                  &quot;recallAt5&quot;: 3.14, # Recall value for entries with the labels that have the 5 highest scores.
-                  &quot;recall&quot;: 3.14, # Recall value.
-                },
-              ],
             },
             &quot;confusionMatrix&quot;: { # Confusion matrix of the model running the classification. Only applicable when the metrics entry aggregates multiple labels. Not applicable when the entry is for a single label. # Confusion matrix of predicted labels vs. ground truth labels.
               &quot;row&quot;: [
                 { # A row in the confusion matrix. Each entry in this row has the same ground truth label.
                   &quot;entries&quot;: [ # A list of the confusion matrix entries. One entry for each possible predicted label.
                     {
+                      &quot;itemCount&quot;: 42, # Number of items predicted to have this label. (The ground truth label for these items is the `Row.annotationSpec` of this entry&#x27;s parent.)
                       &quot;annotationSpec&quot;: { # Container of information related to one possible annotation that can be used in a labeling task. For example, an image classification task where images are labeled as `dog` or `cat` must reference an AnnotationSpec for `dog` and an AnnotationSpec for `cat`. # The annotation spec of a predicted label.
                         &quot;description&quot;: &quot;A String&quot;, # Optional. User-provided description of the annotation specification. The description can be up to 10,000 characters long.
                         &quot;displayName&quot;: &quot;A String&quot;, # Required. The display name of the AnnotationSpec. Maximum of 64 characters.
                         &quot;index&quot;: 42, # Output only. This is the integer index of the AnnotationSpec. The index for the whole AnnotationSpecSet is sequential starting from 0. For example, an AnnotationSpecSet with classes `dog` and `cat` might contain one AnnotationSpec with `{ display_name: &quot;dog&quot;, index: 0 }` and one AnnotationSpec with `{ display_name: &quot;cat&quot;, index: 1 }`. This is especially useful for model training as it encodes the string labels into numeric values.
                       },
-                      &quot;itemCount&quot;: 42, # Number of items predicted to have this label. (The ground truth label for these items is the `Row.annotationSpec` of this entry&#x27;s parent.)
                     },
                   ],
                   &quot;annotationSpec&quot;: { # Container of information related to one possible annotation that can be used in a labeling task. For example, an image classification task where images are labeled as `dog` or `cat` must reference an AnnotationSpec for `dog` and an AnnotationSpec for `cat`. # The annotation spec of the ground truth label for this row.
@@ -192,6 +183,15 @@
             },
           },
         },
+        &quot;name&quot;: &quot;A String&quot;, # Output only. Resource name of an evaluation. The name has the following format: &quot;projects/{project_id}/datasets/{dataset_id}/evaluations/{evaluation_id}&quot;
+        &quot;evaluationJobRunTime&quot;: &quot;A String&quot;, # Output only. Timestamp for when the evaluation job that created this evaluation ran.
+        &quot;config&quot;: { # Configuration details used for calculating evaluation metrics and creating an Evaluation. # Output only. Options used in the evaluation job that created this evaluation.
+          &quot;boundingBoxEvaluationOptions&quot;: { # Options regarding evaluation between bounding boxes. # Only specify this field if the related model performs image object detection (`IMAGE_BOUNDING_BOX_ANNOTATION`). Describes how to evaluate bounding boxes.
+            &quot;iouThreshold&quot;: 3.14, # Minimum [intersection-over-union (IOU)](/vision/automl/object-detection/docs/evaluate#intersection-over-union) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
+          },
+        },
+        &quot;createTime&quot;: &quot;A String&quot;, # Output only. Timestamp for when this evaluation was created.
+        &quot;annotationType&quot;: &quot;A String&quot;, # Output only. Type of task that the model version being evaluated performs, as defined in the evaluationJobConfig.inputConfig.annotationType field of the evaluation job that created this evaluation.
         &quot;evaluatedItemCount&quot;: &quot;A String&quot;, # Output only. The number of items in the ground truth dataset that were used for this evaluation. Only populated when the evaluation is for certain AnnotationTypes.
       },
     ],
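
Illustrative usage, as a minimal sketch: driving the documented `search`/`search_next` pair from Python via google-api-python-client. The project ID, page size, and filter value below are hypothetical placeholders, and the sketch assumes Application Default Credentials are configured in the environment.

```python
from googleapiclient.discovery import build

# Build the Data Labeling client (assumes Application Default Credentials).
service = build("datalabeling", "v1beta1")
evaluations = service.projects().evaluations()

# Hypothetical parent and filter; the parameter names match the search()
# signature documented above.
request = evaluations.search(
    parent="projects/my-project-id",
    pageSize=50,
    filter="evaluation_job.model_id = my_model",
)

# search_next(previous_request, previous_response) returns the request for
# the next page of results, or None once every page has been consumed.
while request is not None:
    response = request.execute()
    for evaluation in response.get("evaluations", []):
        print(evaluation["name"], evaluation.get("evaluatedItemCount"))
    request = evaluations.search_next(
        previous_request=request, previous_response=response
    )
```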
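
The `filter` argument shown above accepts multiple criteria joined with the `AND` or `OR` operator; a composite filter string might be assembled like this (all values are hypothetical placeholders):

```python
# Field names follow the filter documentation above; values are placeholders.
criteria = [
    "evaluation_job.evaluation_job_id = my_job_id",
    "evaluation_job.model_id = my_model",
    "annotation_spec.display_name = dog",
]
search_filter = " AND ".join(criteria)  # `OR` is also accepted between criteria
```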