autoscaling_v2beta2

ContainerResourceMetricSource

class lightkube.models.autoscaling_v2beta2.ContainerResourceMetricSource(container, name, target)

ContainerResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set.

parameters

  • container str - container is the name of the container in the pods of the scaling target
  • name str - name is the name of the resource in question.
  • target MetricTarget - target specifies the target value for the given metric
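
A minimal construction sketch, assuming a container named "app" in the target pods and a 70% CPU utilization goal (both values are illustrative):

    from lightkube.models.autoscaling_v2beta2 import (
        ContainerResourceMetricSource,
        MetricTarget,
    )

    # Scale on the CPU usage of one named container in each pod,
    # targeting 70% average utilization of the container's CPU request.
    cpu_per_container = ContainerResourceMetricSource(
        container="app",   # hypothetical container name
        name="cpu",
        target=MetricTarget(type="Utilization", averageUtilization=70),
    )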

ContainerResourceMetricStatus

class lightkube.models.autoscaling_v2beta2.ContainerResourceMetricStatus(container, current, name)

ContainerResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing a single container in each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source.

parameters

  • container str - Container is the name of the container in the pods of the scaling target
  • current MetricValueStatus - current contains the current value for the given metric
  • name str - Name is the name of the resource in question.

CrossVersionObjectReference

class lightkube.models.autoscaling_v2beta2.CrossVersionObjectReference(kind, name, apiVersion=None)

CrossVersionObjectReference contains enough information to let you identify the referred resource.

parameters

  • kind str - Kind of the referent; More info
  • name str - Name of the referent; More info
  • apiVersion str - (optional) API version of the referent

ExternalMetricSource

class lightkube.models.autoscaling_v2beta2.ExternalMetricSource(metric, target)

ExternalMetricSource indicates how to scale on a metric not associated with any Kubernetes object (for example length of queue in cloud messaging service, or QPS from loadbalancer running outside of cluster).

parameters

  • metric MetricIdentifier - metric identifies the target metric by name and selector
  • target MetricTarget - target specifies the target value for the given metric
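
A sketch of an external-metric source; the metric name, selector labels, and target value are hypothetical and depend on what your metrics adapter exposes:

    from lightkube.models.autoscaling_v2beta2 import (
        ExternalMetricSource,
        MetricIdentifier,
        MetricTarget,
    )
    from lightkube.models.meta_v1 import LabelSelector

    # Scale on a queue-length metric served by an external metrics adapter,
    # aiming for an average of 30 pending messages per replica.
    queue_length = ExternalMetricSource(
        metric=MetricIdentifier(
            name="queue_messages_ready",  # hypothetical external metric name
            selector=LabelSelector(matchLabels={"queue": "worker-tasks"}),
        ),
        # Quantity given as a plain string; check how resource.Quantity is
        # represented in your lightkube version.
        target=MetricTarget(type="AverageValue", averageValue="30"),
    )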

ExternalMetricStatus

class lightkube.models.autoscaling_v2beta2.ExternalMetricStatus(current, metric)

ExternalMetricStatus indicates the current value of a global metric not associated with any Kubernetes object.

parameters

  • current MetricValueStatus - current contains the current value for the given metric
  • metric MetricIdentifier - metric identifies the target metric by name and selector

HPAScalingPolicy

class lightkube.models.autoscaling_v2beta2.HPAScalingPolicy(periodSeconds, type, value)

HPAScalingPolicy is a single policy which must hold true for a specified past interval.

parameters

  • periodSeconds int - PeriodSeconds specifies the window of time for which the policy should hold true. PeriodSeconds must be greater than zero and less than or equal to 1800 (30 min).
  • type str - Type is used to specify the scaling policy.
  • value int - Value contains the amount of change which is permitted by the policy. It must be greater than zero

HPAScalingRules

class lightkube.models.autoscaling_v2beta2.HPAScalingRules(policies=None, selectPolicy=None, stabilizationWindowSeconds=None)

HPAScalingRules configures the scaling behavior for one direction. These Rules are applied after calculating DesiredReplicas from metrics for the HPA. They can limit the scaling velocity by specifying scaling policies. They can prevent flapping by specifying the stabilization window, so that the number of replicas is not set instantly, instead, the safest value from the stabilization window is chosen.

parameters

  • policies HPAScalingPolicy - (optional) policies is a list of potential scaling policies which can be used during scaling. At least one policy must be specified, otherwise the HPAScalingRules will be discarded as invalid
  • selectPolicy str - (optional) selectPolicy is used to specify which policy should be used. If not set, the default value MaxPolicySelect is used.
  • stabilizationWindowSeconds int - (optional) StabilizationWindowSeconds is the number of seconds for which past recommendations should be considered while scaling up or scaling down. StabilizationWindowSeconds must be greater than or equal to zero and less than or equal to 3600 (one hour). If not set, use the default values: - For scale up: 0 (i.e. no stabilization is done). - For scale down: 300 (i.e. the stabilization window is 300 seconds long).
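
A sketch of a conservative scale-down rule built from the two classes above; the numbers are illustrative:

    from lightkube.models.autoscaling_v2beta2 import HPAScalingPolicy, HPAScalingRules

    # Remove at most 2 pods per 60-second window, pick the most restrictive
    # policy when several apply, and require 5 minutes of stable
    # recommendations before scaling down.
    scale_down_rules = HPAScalingRules(
        policies=[HPAScalingPolicy(type="Pods", value=2, periodSeconds=60)],
        selectPolicy="Min",
        stabilizationWindowSeconds=300,
    )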

HorizontalPodAutoscaler

class lightkube.models.autoscaling_v2beta2.HorizontalPodAutoscaler(apiVersion=None, kind=None, metadata=None, spec=None, status=None)

HorizontalPodAutoscaler is the configuration for a horizontal pod autoscaler, which automatically manages the replica count of any resource implementing the scale subresource based on the metrics specified.

parameters

  • apiVersion str - (optional) APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info
  • kind str - (optional) Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info
  • metadata meta_v1.ObjectMeta - (optional) metadata is the standard object metadata. More info
  • spec HorizontalPodAutoscalerSpec - (optional) spec is the specification for the behaviour of the autoscaler. More info
  • status HorizontalPodAutoscalerStatus - (optional) status is the current information about the autoscaler.
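
A small read-only sketch using the lightkube client, assuming your lightkube version ships the matching v2beta2 resource class and that an HPA named "web" exists in the default namespace (both are assumptions):

    from lightkube import Client
    from lightkube.resources.autoscaling_v2beta2 import HorizontalPodAutoscaler

    client = Client()
    hpa = client.get(HorizontalPodAutoscaler, name="web", namespace="default")
    # status is optional and may be None right after creation.
    if hpa.status is not None:
        print(hpa.status.currentReplicas, "->", hpa.status.desiredReplicas)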

HorizontalPodAutoscalerBehavior

class lightkube.models.autoscaling_v2beta2.HorizontalPodAutoscalerBehavior(scaleDown=None, scaleUp=None)

HorizontalPodAutoscalerBehavior configures the scaling behavior of the target in both Up and Down directions (scaleUp and scaleDown fields respectively).

parameters

  • scaleDown HPAScalingRules - (optional) scaleDown is scaling policy for scaling Down. If not set, the default value is to allow to scale down to minReplicas pods, with a 300 second stabilization window (i.e., the highest recommendation for the last 300sec is used).
  • scaleUp HPAScalingRules - (optional) scaleUp is scaling policy for scaling Up. If not set, the default value is the higher of:
    • increase no more than 4 pods per 60 seconds
    • double the number of pods per 60 seconds
    No stabilization is used.
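
A sketch combining both directions; the values are illustrative:

    from lightkube.models.autoscaling_v2beta2 import (
        HorizontalPodAutoscalerBehavior,
        HPAScalingPolicy,
        HPAScalingRules,
    )

    behavior = HorizontalPodAutoscalerBehavior(
        # Scale up by at most 100% of the current replica count per minute.
        scaleUp=HPAScalingRules(
            policies=[HPAScalingPolicy(type="Percent", value=100, periodSeconds=60)],
        ),
        # Wait 10 minutes of stable recommendations before scaling down.
        scaleDown=HPAScalingRules(stabilizationWindowSeconds=600),
    )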

HorizontalPodAutoscalerCondition

class lightkube.models.autoscaling_v2beta2.HorizontalPodAutoscalerCondition(status, type, lastTransitionTime=None, message=None, reason=None)

HorizontalPodAutoscalerCondition describes the state of a HorizontalPodAutoscaler at a certain point.

parameters

  • status str - status is the status of the condition (True, False, Unknown)
  • type str - type describes the current condition
  • lastTransitionTime meta_v1.Time - (optional) lastTransitionTime is the last time the condition transitioned from one status to another
  • message str - (optional) message is a human-readable explanation containing details about the transition
  • reason str - (optional) reason is the reason for the condition's last transition.

HorizontalPodAutoscalerList

class lightkube.models.autoscaling_v2beta2.HorizontalPodAutoscalerList(items, apiVersion=None, kind=None, metadata=None)

HorizontalPodAutoscalerList is a list of horizontal pod autoscaler objects.

parameters

  • items HorizontalPodAutoscaler - items is the list of horizontal pod autoscaler objects.
  • apiVersion str - (optional) APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info
  • kind str - (optional) Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info
  • metadata meta_v1.ListMeta - (optional) metadata is the standard list metadata.

HorizontalPodAutoscalerSpec

class lightkube.models.autoscaling_v2beta2.HorizontalPodAutoscalerSpec(maxReplicas, scaleTargetRef, behavior=None, metrics=None, minReplicas=None)

HorizontalPodAutoscalerSpec describes the desired functionality of the HorizontalPodAutoscaler.

parameters

  • maxReplicas int - maxReplicas is the upper limit for the number of replicas to which the autoscaler can scale up. It cannot be less than minReplicas.
  • scaleTargetRef CrossVersionObjectReference - scaleTargetRef points to the target resource to scale, and is used to identify the pods for which metrics should be collected, as well as to actually change the replica count.
  • behavior HorizontalPodAutoscalerBehavior - (optional) behavior configures the scaling behavior of the target in both Up and Down directions (scaleUp and scaleDown fields respectively). If not set, the default HPAScalingRules for scale up and scale down are used.
  • metrics MetricSpec - (optional) metrics contains the specifications used to calculate the desired replica count (the maximum replica count across all metrics will be used). The desired replica count is calculated by multiplying the ratio between the target value and the current value by the current number of pods. Ergo, metrics used must decrease as the pod count is increased, and vice-versa. See the individual metric source types for more information about how each type of metric must respond. If not set, the default metric will be set to 80% average CPU utilization.
  • minReplicas int - (optional) minReplicas is the lower limit for the number of replicas to which the autoscaler can scale down. It defaults to 1 pod. minReplicas is allowed to be 0 if the alpha feature gate HPAScaleToZero is enabled and at least one Object or External metric is configured. Scaling is active as long as at least one metric value is available.
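
A sketch of a complete spec targeting a hypothetical Deployment named "web", scaling between 2 and 10 replicas on 80% average CPU utilization:

    from lightkube.models.autoscaling_v2beta2 import (
        CrossVersionObjectReference,
        HorizontalPodAutoscalerSpec,
        MetricSpec,
        MetricTarget,
        ResourceMetricSource,
    )

    spec = HorizontalPodAutoscalerSpec(
        scaleTargetRef=CrossVersionObjectReference(
            apiVersion="apps/v1", kind="Deployment", name="web",
        ),
        minReplicas=2,
        maxReplicas=10,
        metrics=[
            MetricSpec(
                type="Resource",
                resource=ResourceMetricSource(
                    name="cpu",
                    target=MetricTarget(type="Utilization", averageUtilization=80),
                ),
            )
        ],
    )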

HorizontalPodAutoscalerStatus

class lightkube.models.autoscaling_v2beta2.HorizontalPodAutoscalerStatus(currentReplicas, desiredReplicas, conditions=None, currentMetrics=None, lastScaleTime=None, observedGeneration=None)

HorizontalPodAutoscalerStatus describes the current status of a horizontal pod autoscaler.

parameters

  • currentReplicas int - currentReplicas is current number of replicas of pods managed by this autoscaler, as last seen by the autoscaler.
  • desiredReplicas int - desiredReplicas is the desired number of replicas of pods managed by this autoscaler, as last calculated by the autoscaler.
  • conditions HorizontalPodAutoscalerCondition - (optional) conditions is the set of conditions required for this autoscaler to scale its target, and indicates whether or not those conditions are met.
  • currentMetrics MetricStatus - (optional) currentMetrics is the last read state of the metrics used by this autoscaler.
  • lastScaleTime meta_v1.Time - (optional) lastScaleTime is the last time the HorizontalPodAutoscaler scaled the number of pods, used by the autoscaler to control how often the number of pods is changed.
  • observedGeneration int - (optional) observedGeneration is the most recent generation observed by this autoscaler.

MetricIdentifier

class lightkube.models.autoscaling_v2beta2.MetricIdentifier(name, selector=None)

MetricIdentifier defines the name and optionally selector for a metric

parameters

  • name str - name is the name of the given metric
  • selector meta_v1.LabelSelector - (optional) selector is the string-encoded form of a standard kubernetes label selector for the given metric. When set, it is passed as an additional parameter to the metrics server for more specific metrics scoping. When unset, just the metricName will be used to gather metrics.
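
A sketch with a hypothetical custom metric name and selector labels:

    from lightkube.models.autoscaling_v2beta2 import MetricIdentifier
    from lightkube.models.meta_v1 import LabelSelector

    # Identify a custom metric and narrow it to the series labelled verb=GET.
    http_requests = MetricIdentifier(
        name="http_requests",  # hypothetical metric name
        selector=LabelSelector(matchLabels={"verb": "GET"}),
    )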

MetricSpec

class lightkube.models.autoscaling_v2beta2.MetricSpec(type, containerResource=None, external=None, object=None, pods=None, resource=None)

MetricSpec specifies how to scale based on a single metric (only type and one other matching field should be set at once).

parameters

  • type str - type is the type of metric source. It should be one of "ContainerResource", "External", "Object", "Pods" or "Resource", each mapping to a matching field in the object. Note: the "ContainerResource" type is available only when the HPAContainerMetrics feature gate is enabled.
  • containerResource ContainerResourceMetricSource - (optional) container resource refers to a resource metric (such as those specified in requests and limits) known to Kubernetes describing a single container in each pod of the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. This is an alpha feature and can be enabled by the HPAContainerMetrics feature flag.
  • external ExternalMetricSource - (optional) external refers to a global metric that is not associated with any Kubernetes object. It allows autoscaling based on information coming from components running outside of cluster (for example length of queue in cloud messaging service, or QPS from loadbalancer running outside of cluster).
  • object ObjectMetricSource - (optional) object refers to a metric describing a single kubernetes object (for example, hits-per-second on an Ingress object).
  • pods PodsMetricSource - (optional) pods refers to a metric describing each pod in the current scale target (for example, transactions-processed-per-second). The values will be averaged together before being compared to the target value.
  • resource ResourceMetricSource - (optional) resource refers to a resource metric (such as those specified in requests and limits) known to Kubernetes describing each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source.
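
A sketch of a memory-based spec; only type and the matching resource field are set, as the description above requires, and the target quantity is illustrative:

    from lightkube.models.autoscaling_v2beta2 import (
        MetricSpec,
        MetricTarget,
        ResourceMetricSource,
    )

    memory_metric = MetricSpec(
        type="Resource",  # must name the one source field populated below
        resource=ResourceMetricSource(
            name="memory",
            # Quantity given as a plain string; check how resource.Quantity is
            # represented in your lightkube version.
            target=MetricTarget(type="AverageValue", averageValue="500Mi"),
        ),
    )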

MetricStatus

class lightkube.models.autoscaling_v2beta2.MetricStatus(type, containerResource=None, external=None, object=None, pods=None, resource=None)

MetricStatus describes the last-read state of a single metric.

parameters

  • type str - type is the type of metric source. It will be one of "ContainerResource", "External", "Object", "Pods" or "Resource", each corresponding to a matching field in the object. Note: the "ContainerResource" type is available only when the HPAContainerMetrics feature gate is enabled.
  • containerResource ContainerResourceMetricStatus - (optional) container resource refers to a resource metric (such as those specified in requests and limits) known to Kubernetes describing a single container in each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source.
  • external ExternalMetricStatus - (optional) external refers to a global metric that is not associated with any Kubernetes object. It allows autoscaling based on information coming from components running outside of cluster (for example length of queue in cloud messaging service, or QPS from loadbalancer running outside of cluster).
  • object ObjectMetricStatus - (optional) object refers to a metric describing a single kubernetes object (for example, hits-per-second on an Ingress object).
  • pods PodsMetricStatus - (optional) pods refers to a metric describing each pod in the current scale target (for example, transactions-processed-per-second). The values will be averaged together before being compared to the target value.
  • resource ResourceMetricStatus - (optional) resource refers to a resource metric (such as those specified in requests and limits) known to Kubernetes describing each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source.

MetricTarget

class lightkube.models.autoscaling_v2beta2.MetricTarget(type, averageUtilization=None, averageValue=None, value=None)

MetricTarget defines the target value, average value, or average utilization of a specific metric

parameters

  • type str - type represents whether the metric type is Utilization, Value, or AverageValue
  • averageUtilization int - (optional) averageUtilization is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type
  • averageValue resource.Quantity - (optional) averageValue is the target value of the average of the metric across all relevant pods (as a quantity)
  • value resource.Quantity - (optional) value is the target value of the metric (as a quantity).
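
A sketch of the three variants; exactly one value field should be populated, matching type. The quantities are given as plain strings here, an assumption about how resource.Quantity is represented in your lightkube version:

    from lightkube.models.autoscaling_v2beta2 import MetricTarget

    by_utilization = MetricTarget(type="Utilization", averageUtilization=75)
    by_average = MetricTarget(type="AverageValue", averageValue="100m")
    by_value = MetricTarget(type="Value", value="1k")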

MetricValueStatus

class lightkube.models.autoscaling_v2beta2.MetricValueStatus(averageUtilization=None, averageValue=None, value=None)

MetricValueStatus holds the current value for a metric

parameters

  • averageUtilization int - (optional) averageUtilization is the current value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods.
  • averageValue resource.Quantity - (optional) averageValue is the current value of the average of the metric across all relevant pods (as a quantity)
  • value resource.Quantity - (optional) value is the current value of the metric (as a quantity).

ObjectMetricSource

class lightkube.models.autoscaling_v2beta2.ObjectMetricSource(describedObject, metric, target)

ObjectMetricSource indicates how to scale on a metric describing a kubernetes object (for example, hits-per-second on an Ingress object).

parameters

  • describedObject CrossVersionObjectReference - describedObject is the reference to the Kubernetes object for which the metric is described.
  • metric MetricIdentifier - metric identifies the target metric by name and selector
  • target MetricTarget - target specifies the target value for the given metric

ObjectMetricStatus

class lightkube.models.autoscaling_v2beta2.ObjectMetricStatus(current, describedObject, metric)

ObjectMetricStatus indicates the current value of a metric describing a kubernetes object (for example, hits-per-second on an Ingress object).

parameters

  • current MetricValueStatus - current contains the current value for the given metric
  • describedObject CrossVersionObjectReference - describedObject is the reference to the Kubernetes object for which the metric is described.
  • metric MetricIdentifier - metric identifies the target metric by name and selector

PodsMetricSource

class lightkube.models.autoscaling_v2beta2.PodsMetricSource(metric, target)

PodsMetricSource indicates how to scale on a metric describing each pod in the current scale target (for example, transactions-processed-per-second). The values will be averaged together before being compared to the target value.

parameters

  • metric MetricIdentifier - metric identifies the target metric by name and selector
  • target MetricTarget - target specifies the target value for the given metric
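
A sketch with a hypothetical per-pod metric, targeting an average of 10 per pod:

    from lightkube.models.autoscaling_v2beta2 import (
        MetricIdentifier,
        MetricTarget,
        PodsMetricSource,
    )

    per_pod = PodsMetricSource(
        metric=MetricIdentifier(name="transactions_processed_per_second"),
        # Quantity given as a plain string; see the note on resource.Quantity above.
        target=MetricTarget(type="AverageValue", averageValue="10"),
    )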

PodsMetricStatus

class lightkube.models.autoscaling_v2beta2.PodsMetricStatus(current, metric)

PodsMetricStatus indicates the current value of a metric describing each pod in the current scale target (for example, transactions-processed-per-second).

parameters

  • current MetricValueStatus - current contains the current value for the given metric
  • metric MetricIdentifier - metric identifies the target metric by name and selector

ResourceMetricSource

class lightkube.models.autoscaling_v2beta2.ResourceMetricSource(name, target)

ResourceMetricSource indicates how to scale on a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). The values will be averaged together before being compared to the target. Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source. Only one "target" type should be set.

parameters

  • name str - name is the name of the resource in question.
  • target MetricTarget - target specifies the target value for the given metric

ResourceMetricStatus

class lightkube.models.autoscaling_v2beta2.ResourceMetricStatus(current, name)

ResourceMetricStatus indicates the current value of a resource metric known to Kubernetes, as specified in requests and limits, describing each pod in the current scale target (e.g. CPU or memory). Such metrics are built in to Kubernetes, and have special scaling options on top of those available to normal per-pod metrics using the "pods" source.

parameters

  • current MetricValueStatus - current contains the current value for the given metric
  • name str - Name is the name of the resource in question.