EKS Worker Node Autoscaling with Karpenter
Karpenter is now AWS's standard recommendation for EKS cluster autoscaling. With version 1.0, Karpenter stabilized its core APIs, splitting configuration into two main Custom Resources: NodePool and EC2NodeClass.
A. The Core Concepts
- NodePool (formerly Provisioner): This is the Kubernetes-native side of the configuration. It dictates the rules for what kinds of nodes can be created and the lifecycle of those nodes. You can define taints, tolerations, topology spreads, and limits.
- Example constraints: “Only use instances with at least 16 GB of RAM,” or “Only use Spot instances in the us-east-1a zone.”
- EC2NodeClass (formerly AWSNodeTemplate): This contains the AWS-specific configuration for the underlying EC2 instances. It tells Karpenter exactly how to configure the virtual machines it provisions.
- Example settings: Subnet IDs, Security Group IDs, IAM Roles, EBS volume sizes, and AMI selection.
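The “at least 16 GB of RAM” constraint mentioned above can be written directly as a NodePool requirement using Karpenter's well-known instance labels. A minimal sketch (the NodePool name is illustrative):

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: memory-heavy   # illustrative name
spec:
  template:
    spec:
      requirements:
        # karpenter.k8s.aws/instance-memory is measured in MiB, and Gt is
        # exclusive, so "greater than 16383" means at least 16 GiB of RAM
        - key: "karpenter.k8s.aws/instance-memory"
          operator: Gt
          values: ["16383"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default-node-class
```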
B. Configuration Example (Karpenter v1 API)
Here is a practical, production-ready example of how these two resources link together to provision flexible, cost-effective nodes:
```yaml
# 1. AWS-Specific Configuration (EC2NodeClass)
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default-node-class
spec:
  # Instructs Karpenter to use the latest AL2023 EKS-optimized AMI
  amiSelectorTerms:
    - alias: al2023@latest
  # Automatically discover subnets and security groups via AWS tags
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: my-eks-cluster
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: my-eks-cluster
  # The IAM role the EC2 instance will assume
  role: "KarpenterNodeRole-my-eks-cluster"
  # Block device mappings (disk size)
  blockDeviceMappings:
    - deviceName: /dev/xvda
      ebs:
        volumeSize: 50Gi
        volumeType: gp3
---
# 2. Kubernetes Node Constraints (NodePool)
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: general-compute
spec:
  template:
    spec:
      # Link this NodePool to the EC2NodeClass defined above
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default-node-class
      # Define the instance types Karpenter is allowed to use
      requirements:
        - key: "karpenter.sh/capacity-type"
          operator: In
          values: ["spot", "on-demand"]
        - key: "karpenter.k8s.aws/instance-family"
          operator: In
          values: ["m5", "m6i", "m6g", "c5", "c6i"]
        - key: "kubernetes.io/arch"
          operator: In
          values: ["amd64", "arm64"] # Mix Intel/AMD and Graviton!
  # Disruption controls (consolidation)
  disruption:
    # Note: the v1 API renamed WhenUnderutilized to WhenEmptyOrUnderutilized
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 30s
```
Why this is powerful: the above configuration allows Karpenter to mix Spot and On-Demand instances, switch between Intel/AMD and ARM architectures based on what your pods tolerate, and aggressively consolidate pods onto fewer, cheaper nodes when traffic drops, saving you a tremendous amount of money.
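Whether a workload actually lands on Graviton Spot capacity is decided by the pods themselves. A minimal sketch of a Deployment that opts in via standard node labels (the Deployment name and image are illustrative; the image must be built for arm64 or be multi-arch):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server   # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-server
  template:
    metadata:
      labels:
        app: api-server
    spec:
      nodeSelector:
        # Pin these pods to Spot capacity on ARM nodes; Karpenter will
        # only launch instances matching both labels for them
        karpenter.sh/capacity-type: spot
        kubernetes.io/arch: arm64
      containers:
        - name: api
          image: my-registry/api:latest   # illustrative; needs an arm64 build
```

Pods without a nodeSelector can schedule onto any node the NodePool allows, which is what lets Karpenter pick the cheapest combination that fits.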