chore: Update discovery artifacts (#1400)
## Discovery Artifact Change Summary:
- feat(compute): update the api https://github.com/googleapis/google-api-python-client/commit/b8ce2754752f8157b84091a99594f9a45a8f8eed
- feat(container): update the api https://github.com/googleapis/google-api-python-client/commit/a73f41e49d7ab6258bd722b4ee6d022c195975c2
- feat(dataproc): update the api https://github.com/googleapis/google-api-python-client/commit/be0dde6ee43f4ff05396d33b16e0af2a1fabfc28
- feat(lifesciences): update the api https://github.com/googleapis/google-api-python-client/commit/c524c0a316e4206c8b0e0075e3ed5eceb7e60016
- feat(osconfig): update the api https://github.com/googleapis/google-api-python-client/commit/5dbaaad34dec45eb5f5a9e98710b3ec05b4d5429
- feat(pagespeedonline): update the api https://github.com/googleapis/google-api-python-client/commit/47d41c544376b1911261410235b63ffe3e5faa91
- feat(privateca): update the api https://github.com/googleapis/google-api-python-client/commit/8f7ad0d176d61f9e9a409d7fe35b20c5f1c239a5
diff --git a/docs/dyn/container_v1beta1.projects.zones.clusters.nodePools.html b/docs/dyn/container_v1beta1.projects.zones.clusters.nodePools.html
index f1c71c3..6129bf4 100644
--- a/docs/dyn/container_v1beta1.projects.zones.clusters.nodePools.html
+++ b/docs/dyn/container_v1beta1.projects.zones.clusters.nodePools.html
@@ -210,7 +210,7 @@
{ # CreateNodePoolRequest creates a node pool for a cluster.
"clusterId": "A String", # Required. Deprecated. The name of the cluster. This field has been deprecated and replaced by the parent field.
- "nodePool": { # NodePool contains the name and configuration for a cluster's node pool. Node pools are a set of nodes (i.e. VM's), with a common configuration and specification, under the control of the cluster master. They may have a set of Kubernetes labels applied to them, which may be used to reference them during pod scheduling. They may also be resized up or down, to accommodate the workload. # Required. The node pool to create.
+ "nodePool": { # NodePool contains the name and configuration for a cluster's node pool. Node pools are a set of nodes (i.e. VM's), with a common configuration and specification, under the control of the cluster master. They may have a set of Kubernetes labels applied to them, which may be used to reference them during pod scheduling. They may also be resized up or down, to accommodate the workload. These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade. maxUnavailable controls the number of nodes that can be simultaneously unavailable. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.) Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available. # Required. The node pool to create.
"autoscaling": { # NodePoolAutoscaling contains information required by cluster autoscaler to adjust the size of the node pool to the current cluster usage. # Autoscaler configuration for this NodePool. Autoscaler is enabled only if a valid configuration is present.
"autoprovisioned": True or False, # Can this node pool be deleted automatically.
"enabled": True or False, # Is autoscaling enabled for this node pool.
@@ -322,7 +322,7 @@
"selfLink": "A String", # [Output only] Server-defined URL for the resource.
"status": "A String", # [Output only] The status of the nodes in this pool instance.
"statusMessage": "A String", # [Output only] Deprecated. Use conditions instead. Additional information about the current status of this node pool instance, if available.
- "upgradeSettings": { # These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade. maxUnavailable controls the number of nodes that can be simultaneously unavailable. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.) Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available. # Upgrade settings control disruption and speed of the upgrade.
+ "upgradeSettings": { # Upgrade settings control disruption and speed of the upgrade.
"maxSurge": 42, # The maximum number of nodes that can be created beyond the current size of the node pool during the upgrade process.
"maxUnavailable": 42, # The maximum number of nodes that can be simultaneously unavailable during the upgrade process. A node is considered available if its status is Ready.
},
@@ -483,7 +483,7 @@
Returns:
An object of the form:
- { # NodePool contains the name and configuration for a cluster's node pool. Node pools are a set of nodes (i.e. VM's), with a common configuration and specification, under the control of the cluster master. They may have a set of Kubernetes labels applied to them, which may be used to reference them during pod scheduling. They may also be resized up or down, to accommodate the workload.
+ { # NodePool contains the name and configuration for a cluster's node pool. Node pools are a set of nodes (i.e. VM's), with a common configuration and specification, under the control of the cluster master. They may have a set of Kubernetes labels applied to them, which may be used to reference them during pod scheduling. They may also be resized up or down, to accommodate the workload. These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade. maxUnavailable controls the number of nodes that can be simultaneously unavailable. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.) Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available.
"autoscaling": { # NodePoolAutoscaling contains information required by cluster autoscaler to adjust the size of the node pool to the current cluster usage. # Autoscaler configuration for this NodePool. Autoscaler is enabled only if a valid configuration is present.
"autoprovisioned": True or False, # Can this node pool be deleted automatically.
"enabled": True or False, # Is autoscaling enabled for this node pool.
@@ -595,7 +595,7 @@
"selfLink": "A String", # [Output only] Server-defined URL for the resource.
"status": "A String", # [Output only] The status of the nodes in this pool instance.
"statusMessage": "A String", # [Output only] Deprecated. Use conditions instead. Additional information about the current status of this node pool instance, if available.
- "upgradeSettings": { # These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade. maxUnavailable controls the number of nodes that can be simultaneously unavailable. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.) Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available. # Upgrade settings control disruption and speed of the upgrade.
+ "upgradeSettings": { # Upgrade settings control disruption and speed of the upgrade.
"maxSurge": 42, # The maximum number of nodes that can be created beyond the current size of the node pool during the upgrade process.
"maxUnavailable": 42, # The maximum number of nodes that can be simultaneously unavailable during the upgrade process. A node is considered available if its status is Ready.
},
@@ -622,7 +622,7 @@
{ # ListNodePoolsResponse is the result of ListNodePoolsRequest.
"nodePools": [ # A list of node pools for a cluster.
- { # NodePool contains the name and configuration for a cluster's node pool. Node pools are a set of nodes (i.e. VM's), with a common configuration and specification, under the control of the cluster master. They may have a set of Kubernetes labels applied to them, which may be used to reference them during pod scheduling. They may also be resized up or down, to accommodate the workload.
+ { # NodePool contains the name and configuration for a cluster's node pool. Node pools are a set of nodes (i.e. VM's), with a common configuration and specification, under the control of the cluster master. They may have a set of Kubernetes labels applied to them, which may be used to reference them during pod scheduling. They may also be resized up or down, to accommodate the workload. These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade. maxUnavailable controls the number of nodes that can be simultaneously unavailable. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.) Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available.
"autoscaling": { # NodePoolAutoscaling contains information required by cluster autoscaler to adjust the size of the node pool to the current cluster usage. # Autoscaler configuration for this NodePool. Autoscaler is enabled only if a valid configuration is present.
"autoprovisioned": True or False, # Can this node pool be deleted automatically.
"enabled": True or False, # Is autoscaling enabled for this node pool.
@@ -734,7 +734,7 @@
"selfLink": "A String", # [Output only] Server-defined URL for the resource.
"status": "A String", # [Output only] The status of the nodes in this pool instance.
"statusMessage": "A String", # [Output only] Deprecated. Use conditions instead. Additional information about the current status of this node pool instance, if available.
- "upgradeSettings": { # These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade. maxUnavailable controls the number of nodes that can be simultaneously unavailable. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.) Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available. # Upgrade settings control disruption and speed of the upgrade.
+ "upgradeSettings": { # Upgrade settings control disruption and speed of the upgrade.
"maxSurge": 42, # The maximum number of nodes that can be created beyond the current size of the node pool during the upgrade process.
"maxUnavailable": 42, # The maximum number of nodes that can be simultaneously unavailable during the upgrade process. A node is considered available if its status is Ready.
},
@@ -1047,7 +1047,7 @@
},
],
},
- "upgradeSettings": { # These upgrade settings control the level of parallelism and the level of disruption caused by an upgrade. maxUnavailable controls the number of nodes that can be simultaneously unavailable. maxSurge controls the number of additional nodes that can be added to the node pool temporarily for the time of the upgrade to increase the number of available nodes. (maxUnavailable + maxSurge) determines the level of parallelism (how many nodes are being upgraded at the same time). Note: upgrades inevitably introduce some disruption since workloads need to be moved from old nodes to new, upgraded ones. Even if maxUnavailable=0, this holds true. (Disruption stays within the limits of PodDisruptionBudget, if it is configured.) Consider a hypothetical node pool with 5 nodes having maxSurge=2, maxUnavailable=1. This means the upgrade process upgrades 3 nodes simultaneously. It creates 2 additional (upgraded) nodes, then it brings down 3 old (not yet upgraded) nodes at the same time. This ensures that there are always at least 4 nodes available. # Upgrade settings control disruption and speed of the upgrade.
+ "upgradeSettings": { # Upgrade settings control disruption and speed of the upgrade.
"maxSurge": 42, # The maximum number of nodes that can be created beyond the current size of the node pool during the upgrade process.
"maxUnavailable": 42, # The maximum number of nodes that can be simultaneously unavailable during the upgrade process. A node is considered available if its status is Ready.
},
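For reference, here is a minimal, hypothetical sketch (not part of this change) of how the `upgradeSettings` fields documented in the hunks above could be supplied when creating a node pool with the generated client. The project, zone, cluster, and node pool values are placeholders, and credentials are assumed to come from Application Default Credentials.

```python
# Hypothetical usage sketch; values below are placeholders, not from this PR.
from googleapiclient import discovery

# Build the container v1beta1 client (uses Application Default Credentials).
service = discovery.build("container", "v1beta1")

# CreateNodePoolRequest body using the fields documented above:
# upgradeSettings.maxSurge / maxUnavailable bound the upgrade parallelism.
body = {
    "nodePool": {
        "name": "example-pool",  # placeholder node pool name
        "initialNodeCount": 5,
        "upgradeSettings": {
            "maxSurge": 2,        # extra nodes created during the upgrade
            "maxUnavailable": 1,  # nodes that may be down at the same time
        },
    },
}

request = service.projects().zones().clusters().nodePools().create(
    projectId="my-project",    # placeholder
    zone="us-central1-a",      # placeholder
    clusterId="my-cluster",    # placeholder
    body=body,
)
response = request.execute()
```

With `maxSurge=2` and `maxUnavailable=1` on a 5-node pool, up to 3 nodes (2 + 1) are upgraded in parallel while at least 4 nodes stay available, matching the worked example in the updated docstring.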