# Pod Disruption Budgets
This document explains why CloudZero Agent uses Pod Disruption Budgets (PDBs) and how to handle them during cluster maintenance operations.
Pod Disruption Budgets are Kubernetes resources that limit the number of pods that can be voluntarily disrupted during cluster maintenance operations. They help ensure application availability during:
- Node maintenance
- Cluster upgrades
- Pod evictions
- Voluntary disruptions
For more information, see the Kubernetes documentation on Pod Disruption Budgets.
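As a concrete illustration, here is a minimal PDB manifest for a hypothetical single-replica application (the names `my-app` and `my-app-pdb` are placeholders, not part of the CloudZero chart):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1        # keep at least one matching pod running during voluntary disruptions
  selector:
    matchLabels:
      app: my-app        # must match the labels on the pods being protected
```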
## Why the CloudZero Agent Uses PDBs

The CloudZero Agent ships a PDB with `minAvailable: 1` for each of its components. Without these PDBs, a component could be accidentally scaled down to 0 replicas during cluster operations, causing:
- Complete loss of metrics collection
- Loss of cost attribution data
- Service unavailability
You may have noticed that some CloudZero Agent components (like the main agent and kube-state-metrics) run with only 1 replica but still have PDBs with `minAvailable: 1`. This is intentional and follows Kubernetes guidance for single-instance stateful applications.
As noted in the Kubernetes documentation on single-instance stateful applications, such applications may want to set `minAvailable: 1` in the PDB to prevent the pod from being evicted, but this will also prevent the node from being drained.
The PDB prevents:
- Accidental scaling to 0: Prevents the component from being accidentally scaled down during cluster operations
- Data loss: Ensures continuous metrics collection and cost attribution
- Service disruption: Maintains critical CloudZero functionality
However, this also means:
- Node draining is blocked: The PDB prevents pods from being evicted during node maintenance
- Cluster upgrades may be delayed: The autoscaler cannot drain nodes with these pods
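In practice, a blocked drain surfaces as a retried eviction failure. A sketch of what this looks like (`<node-name>` is a placeholder; the exact output varies by kubectl version):

```shell
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
# kubectl retries the eviction and reports an error similar to:
#   Cannot evict pod as it would violate the pod's disruption budget.
```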
If you're using autoscaling with clustered mode (`components.agent.mode: clustered`), the metrics collection component (Alloy) can scale horizontally and handle pod disruptions gracefully. This eliminates the PDB concern for metrics collection.
However, Kube State Metrics (KSM) currently cannot scale horizontally in the same way, so the KSM PDB remains a consideration. We are actively working on improving this situation.
There are two valid approaches to handling PDBs, depending on your environment and tolerance for brief data gaps:
## Option 1: Keep PDBs Enabled, Disable Temporarily for Maintenance

This is the default configuration and is appropriate when:
- You require uninterrupted metrics collection
- Brief gaps in data are unacceptable for your use case
- You can coordinate maintenance windows with CloudZero Agent updates
With this approach, you temporarily disable PDBs before planned maintenance and re-enable them afterwards. The Kubernetes documentation recommends this pattern for single-instance applications: the cluster operator agrees to consult you before termination; when contacted, you prepare for downtime, delete the PDB to signal readiness for disruption, and recreate it afterwards.
You can disable PDBs globally or individually:
Disable all PDBs globally:

```yaml
defaults:
  podDisruptionBudget:
    enabled: false
kubeStateMetrics:
  podDisruptionBudget: null
```

Or disable PDBs individually:

```yaml
components:
  agent:
    podDisruptionBudget:
      enabled: false
  aggregator:
    podDisruptionBudget:
      enabled: false
  webhookServer:
    podDisruptionBudget:
      enabled: false
kubeStateMetrics:
  podDisruptionBudget: null
```

Apply the change:

```shell
helm upgrade <release-name> ./helm -f your-values.yaml
```

Then perform your maintenance operation (cluster upgrades, node patching, node draining, etc.).
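Before draining, you can verify that the PDBs were actually removed (`<namespace>` is a placeholder for the namespace the chart is installed in):

```shell
# Should list no CloudZero Agent PDBs once they are disabled
kubectl get pdb -n <namespace>
```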
Remove or comment out the PDB overrides from your values file, then apply again:

```shell
helm upgrade <release-name> ./helm -f your-values.yaml
```

## Option 2: Permanently Disable PDBs

Permanently disabling some or all PDBs is a valid choice for many environments. This is particularly appropriate when:
- You use node autoscalers like Karpenter where nodes are frequently added and removed
- You prefer automated cluster operations over manual coordination
- You can tolerate brief gaps in data collection (typically a few minutes during pod migrations)
Understanding the trade-offs:
When a pod is evicted without a PDB, there will be a brief window where that component is unavailable. The impact varies by component:
- KSM metrics: KSM collects point-in-time Kubernetes object state. Brief gaps are generally less impactful because this data represents current state rather than sampled measurements. The main concern is short-lived pods that might be missed entirely during the gap.
- cAdvisor metrics: These are time-series samples collected at regular intervals. Missing samples can affect the accuracy of resource utilization calculations. If you enable clustered mode with autoscaling, this concern is eliminated because Alloy handles disruptions gracefully.
Recommended configuration for environments with frequent node churn:
```yaml
# Use clustered mode with autoscaling for metrics collection
components:
  agent:
    mode: clustered
    autoscaling:
      enabled: true

# Disable the KSM PDB to allow automated node draining
kubeStateMetrics:
  podDisruptionBudget: null
```

This configuration gives you the best of both worlds: horizontally scalable metrics collection that handles disruptions gracefully, combined with automated KSM pod migration at the cost of occasional brief gaps in KSM data.
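After applying this configuration, a quick sanity check (`<namespace>` is a placeholder) confirms both halves of the setup:

```shell
# Autoscaling: the metrics collection component should be managed by an HPA
kubectl get hpa -n <namespace>

# The KSM PDB should no longer appear in the list
kubectl get pdb -n <namespace>
```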
## Troubleshooting: PDBs Blocking Node Drains

If you encounter issues where PDBs prevent node draining:
1. Check which PDBs are blocking:

   ```shell
   kubectl get pdb
   ```

2. Temporarily disable the blocking PDB:

   ```shell
   kubectl patch pdb <pdb-name> -p '{"spec":{"minAvailable":0}}'
   ```

3. Perform your maintenance operation.

4. Re-enable the PDB:

   ```shell
   kubectl patch pdb <pdb-name> -p '{"spec":{"minAvailable":1}}'
   ```
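For repeatable maintenance, these steps can be combined into a small script. This is only a sketch: the PDB name, namespace, and node are arguments you supply, and the trap restores the PDB even if the drain fails partway:

```shell
#!/usr/bin/env sh
# Usage: ./drain-with-pdb.sh <pdb-name> <namespace> <node-name>
set -eu

PDB_NAME="$1"
NAMESPACE="$2"
NODE="$3"

# Restore the PDB on exit, even if the drain below fails
trap 'kubectl patch pdb "$PDB_NAME" -n "$NAMESPACE" -p "{\"spec\":{\"minAvailable\":1}}"' EXIT

# Relax the PDB so evictions can proceed
kubectl patch pdb "$PDB_NAME" -n "$NAMESPACE" -p '{"spec":{"minAvailable":0}}'

# Drain the node; evicted pods are rescheduled on other nodes
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data
```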
## Best Practices

If using Option 1 (temporary PDB disabling):
- Plan maintenance windows: Coordinate with your team before disabling PDBs
- Re-enable promptly: Always re-enable PDBs after maintenance to prevent accidental data gaps
- Monitor during maintenance: Ensure CloudZero functionality is restored after PDBs are re-enabled
- Document the process: Keep track of when PDBs are disabled and re-enabled
If using Option 2 (permanent PDB disabling):
- Use clustered mode: Enable `components.agent.mode: clustered` with autoscaling to minimize the impact of pod disruptions on metrics collection
- Monitor for data gaps: Be aware that KSM data may have brief gaps during node migrations
- Consider your workload: If you have many short-lived pods, the brief KSM gaps may cause some pods to be missed entirely
## References

- Kubernetes Pod Disruption Budgets documentation
- Kubernetes documentation on single-instance stateful applications
- Autoscaling: horizontal pod autoscaling for CloudZero Agent components