
Kubernetes LimitRange Controller

LimitRanges for resources

In the world of Kubernetes, if you don’t set rules, a single container can act like a “noisy neighbor,” consuming all the CPU and memory on a node and crashing other important applications.

While a ResourceQuota sets limits for the whole Namespace (the whole team), a LimitRange sets the specific rules for individual Pods and Containers (the individual player). It ensures every container is “sized just right”: not too small to fail, and not too big to waste resources. If a developer forgets to set CPU or memory requests, the LimitRange automatically applies safe defaults for them.
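
As a minimal sketch, a LimitRange that only injects defaults could look like this (the name, namespace, and values are illustrative, not prescriptive):

```yaml
# Minimal LimitRange: only injects defaults for containers
# that omit their own requests/limits.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-sizing   # illustrative name
  namespace: dev         # illustrative namespace
spec:
  limits:
    - type: Container
      default:           # injected as the container's limits
        cpu: "500m"
        memory: "512Mi"
      defaultRequest:    # injected as the container's requests
        cpu: "100m"
        memory: "128Mi"
```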

Think of a Kubernetes Namespace as a Shared Office Cafeteria.

  • ResourceQuota is the budget for the entire department (e.g., “The Engineering team gets 500 plates of food total per day”).
  • LimitRange is the rule for the individual plate:
    • Min: You must take at least 1 spoon of rice (so you don’t starve/crash).
    • Max: You cannot take more than 2 full plates (so you don’t leave others hungry).
    • Default: If you don’t say what you want, we automatically give you a standard “Thali” (Standard Default Request/Limit).
Easy-to-Remember Cheat Sheet
  1. “LimitRange is for the Pod; ResourceQuota is for the Namespace.”
  2. “No limits defined? LimitRange fills in the blanks.”
  3. “It stops the ‘Goldilocks’ problem: Not too big, not too small.”
  4. “It acts as a gatekeeper at the door: if a Pod breaks the rules, it gets rejected immediately.”
Key Characteristics
  • Scope: Applied at the Namespace level, but enforces rules on individual Containers and Pods.
  • Defaults: Can inject default requests and limits if the user creates a Pod without them.
  • Validation: Rejects Pod creation if the requested resources violate the Min/Max constraints (see the example after this list).
  • Ratio Control: Can enforce a strict ratio between Request (minimum needed) and Limit (maximum allowed) to prevent over-commitment.
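
For example, assuming a LimitRange in the namespace caps containers at 2 CPUs, a Pod like the following hypothetical one would be rejected at admission time with a Forbidden error:

```yaml
# Hypothetical Pod that violates a Container max of cpu: "2".
apiVersion: v1
kind: Pod
metadata:
  name: monster-pod      # illustrative name
spec:
  containers:
    - name: app
      image: nginx
      resources:
        limits:
          cpu: "4"       # exceeds the max of 2 -> Pod creation is denied
          memory: "1Gi"
```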
| Feature | LimitRange | ResourceQuota |
|---|---|---|
| Target | Individual Container / Pod / PVC | Entire Namespace (aggregate) |
| Primary Job | Set defaults & Min/Max bounds | Cap total usage (hard limits) |
| Action on Violation | Denies Pod creation immediately | Denies Pod creation if quota exceeded |
| Auto-configuration | Yes (injects default values) | No (just counts and blocks) |
| Analogy | Per-person plate limit | Total buffet budget |

At an Architect level, LimitRanges are your first line of defense against Denial of Service (DoS) attacks from internal bad actors or misconfigured CI/CD pipelines.

  • LimitRange is implemented as an admission plugin (the LimitRanger admission controller), which runs in the API server. When a request comes in:
    • Mutating Phase: If the pod has no limits, the LimitRange mutates the pod spec to inject the defaults.
    • Validating Phase: It checks the final numbers. If they are outside the Min/Max, it rejects the pod.
  • Strategy: Never allow a Namespace to exist without a LimitRange. This ensures that even if a developer is lazy, their pods are “capped” by the default limits.
  • Integration: Combine LimitRanges with OPA Gatekeeper or Kyverno. While the LimitRange handles the numeric values, Kyverno can enforce which LimitRange profile is applied to which team; see the sketch after this list.
  • Security: By forcing low CPU and memory limits, you force developers to optimize their code. This also reduces the blast radius if a container is compromised and tries to mine cryptocurrency (it will hit the CPU limit immediately).
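
As a sketch of that strategy (based on Kyverno’s documented generate rules; the policy name and default values are illustrative, and field names should be verified against your Kyverno version), a ClusterPolicy can auto-create a default LimitRange in every new Namespace so that none exists without one:

```yaml
# Sketch: Kyverno generate rule that stamps a default LimitRange
# into every newly created Namespace.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-limitrange   # illustrative name
spec:
  rules:
    - name: generate-limitrange
      match:
        any:
          - resources:
              kinds:
                - Namespace
      generate:
        apiVersion: v1
        kind: LimitRange
        name: default-limitrange
        namespace: "{{request.object.metadata.name}}"
        synchronize: true        # re-create it if someone deletes it
        data:
          spec:
            limits:
              - type: Container
                default:
                  cpu: "500m"
                  memory: "512Mi"
                defaultRequest:
                  cpu: "100m"
                  memory: "128Mi"
```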
Use Cases
  • Multi-tenant Clusters: When Team A and Team B share a cluster, LimitRanges prevent Team A from creating a “Monster Pod” that takes up an entire 64GB RAM node.
  • Development Environments: Developers often forget to add resource requests and limits to their YAML. A LimitRange fixes this by auto-injecting “small t-shirt size” resources (e.g., 200m CPU, 512Mi RAM), as illustrated below.
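
As an illustration of this auto-injection (assuming the defaults from the minimal sketch earlier; the Pod name is hypothetical), a Pod submitted without a resources section comes back from the API server with requests and limits filled in. You can confirm this with kubectl get pod lazy-pod -o yaml:

```yaml
# Pod as submitted: no resources section at all.
apiVersion: v1
kind: Pod
metadata:
  name: lazy-pod         # illustrative name
spec:
  containers:
    - name: app
      image: nginx
      # After admission, the LimitRanger plugin injects roughly:
      #   resources:
      #     requests: { cpu: 100m, memory: 128Mi }  # from defaultRequest
      #     limits:   { cpu: 500m, memory: 512Mi }  # from default
```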
Benefits
  • Stability: Prevents node starvation.
  • Cost Control: Prevents accidental provisioning of massive containers.
  • Standardization: Enforces a baseline “T-shirt sizing” for applications.
Limitations
  • No Retroactive Action: If you apply a LimitRange today, it does not kill or resize existing Pods that violate the rule. It only affects new Pods.
  • Node Capacity Ignorance: A LimitRange allows you to set a Max of 100GB RAM even if your biggest node only has 64GB. It validates the number, not the physical reality: the Pod will pass admission but then be stuck in a Pending state, because no node can schedule it (see the commands below).
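
A quick way to spot Pods that passed admission but cannot be scheduled (namespace and Pod names are placeholders):

```bash
# List Pods stuck in Pending in a given namespace
kubectl get pods -n <namespace> --field-selector status.phase=Pending

# Inspect the scheduling events (look for FailedScheduling with
# "Insufficient cpu" or "Insufficient memory")
kubectl describe pod <pod-name> -n <namespace>
```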
Common Issues, Problems, and Solutions
  • Problem: “My Pod creation is failing with a Forbidden error.”
    • Solution: The Pod requested resources outside the Min/Max range. Check kubectl describe limitrange and adjust your Pod spec (see the commands below).
  • Problem: “I didn’t set limits, but my Pod has limits now.”
    • Solution: This is the Default feature working as intended. If you don’t want these values, explicitly define limits in your Pod or remove the default section from the LimitRange.
  • Problem: “OOMKilled (Out of Memory) errors.”
    • Solution: The injected default memory limit (the default: section) might be too small for your application. Increase it in the LimitRange.
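
A few diagnostic commands that help with all three problems (namespace and Pod names are placeholders):

```bash
# Show the Min/Max/Default values currently enforced in the namespace
kubectl describe limitrange -n <namespace>

# Dump the full LimitRange spec
kubectl get limitrange -n <namespace> -o yaml

# Check whether a running Pod had defaults injected
kubectl get pod <pod-name> -n <namespace> \
  -o jsonpath='{.spec.containers[*].resources}'
```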
YAML file to create a LimitRange
```yaml
# ==============================================================================
# MASTER LIMITRANGE CONFIGURATION
# Purpose: Enforce strict resource governance on Containers, Pods, and Storage.
# ==============================================================================
apiVersion: v1
kind: LimitRange
metadata:
  name: master-governance-policy
  # CHANGE THIS: Apply this to your target namespace
  namespace: default
spec:
  limits:
    # --------------------------------------------------------------------------
    # SECTION 1: CONTAINER LEVEL RULES
    # Applies to every individual container (App, Sidecar, InitContainer)
    # --------------------------------------------------------------------------
    - type: Container

      # 1. DEFAULT LIMIT (The Ceiling)
      # If a user forgets to specify 'limits', this value is injected automatically.
      # Why? To prevent a container from consuming infinite resources and crashing the node.
      default:
        cpu: "500m"        # 500 millicores (0.5 CPU)
        memory: "512Mi"    # 512 Mebibytes

      # 2. DEFAULT REQUEST (The Guarantee)
      # If a user forgets to specify 'requests', this value is injected automatically.
      # Why? Ensures the scheduler reserves at least this much space for the container.
      defaultRequest:
        cpu: "100m"        # 100 millicores (0.1 CPU)
        memory: "128Mi"    # 128 Mebibytes

      # 3. MAX (The Hard Stop)
      # No container in this namespace can EVER be larger than this.
      # Why? Forces developers to split large monolithic apps into smaller microservices.
      max:
        cpu: "2"           # Max 2 cores allowed per container
        memory: "2Gi"      # Max 2GB RAM allowed per container

      # 4. MIN (The Floor)
      # No container can request less than this.
      # Why? Prevents "spamming" the scheduler with tiny, useless containers.
      min:
        cpu: "10m"         # Minimum 10 millicores
        memory: "32Mi"     # Minimum 32MB RAM

      # 5. RATIO (The Burst Controller)
      # Calculation: Limit / Request <= Ratio
      # If Request is 1GB, Limit cannot be more than 2GB (because the memory ratio is 2).
      # Why? Prevents massive "bursting" where a container reserves little but uses a lot.
      maxLimitRequestRatio:
        cpu: "4"           # Limit can be max 4x the Request
        memory: "2"        # Limit can be max 2x the Request

    # --------------------------------------------------------------------------
    # SECTION 2: POD LEVEL RULES
    # Applies to the SUM of all containers inside a single Pod.
    # --------------------------------------------------------------------------
    - type: Pod

      # 1. MAX TOTAL (The Group Limit)
      # The combined resources of all containers in the Pod cannot exceed this.
      # Why? Useful if you have many sidecars (Istio, logging) and need to cap the whole group.
      max:
        cpu: "4"           # The whole Pod cannot use more than 4 cores
        memory: "4Gi"      # The whole Pod cannot use more than 4GB RAM

      # 2. MIN TOTAL (The Group Floor)
      # The combined resources must be at least this much.
      min:
        cpu: "50m"
        memory: "64Mi"

      # 3. RATIO (Pod Level Bursting)
      maxLimitRequestRatio:
        cpu: "10"

    # --------------------------------------------------------------------------
    # SECTION 3: STORAGE (PVC) LEVEL RULES
    # Applies to PersistentVolumeClaims requested by the Pods.
    # --------------------------------------------------------------------------
    - type: PersistentVolumeClaim

      # 1. MAX STORAGE (The Disk Cap)
      # No single PVC can request more than this size.
      # Why? Prevents accidental requests for massive, expensive volumes (e.g., 10TB).
      max:
        storage: "50Gi"    # Max 50GB per volume

      # 2. MIN STORAGE (The Disk Floor)
      # No single PVC can be smaller than this.
      # Why? Some storage providers (like AWS EBS) have minimum size requirements
      # or performance issues with tiny disks.
      min:
        storage: "1Gi"     # Min 1GB per volume
```
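
Assuming the manifest above is saved as limitrange.yaml (the filename is arbitrary), apply and verify it with:

```bash
# Create/update the LimitRange in the target namespace
kubectl apply -f limitrange.yaml

# Verify the enforced Min/Max/Default values
kubectl describe limitrange master-governance-policy -n default
```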

Difference between ResourceQuota and LimitRange:

| Feature | ResourceQuota | LimitRange |
|---|---|---|
| Scope | Aggregate (whole Namespace) | Individual (single Pod/Container) |
| Primary Goal | Prevent one team from using the whole cluster | Prevent one Pod from using the whole node |
| Defaulting? | No (creates no values) | Yes (injects default CPU/memory) |
| Object Counts? | Yes (can limit count of Pods, Services, PVCs) | No (cannot limit number of objects) |
| When it fails | When total usage > quota | When a request/limit violates Min/Max |
| Analogy | Total bank balance | ATM withdrawal limit |
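
To make the contrast concrete, here is a sketch of a ResourceQuota that might sit alongside the LimitRange above (the name and values are illustrative):

```yaml
# Namespace-wide ceiling: the aggregate of ALL Pods in the namespace.
# The LimitRange bounds each Pod; this caps the whole team.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-budget            # illustrative name
  namespace: default
spec:
  hard:
    requests.cpu: "10"         # sum of all CPU requests <= 10 cores
    requests.memory: "20Gi"    # sum of all memory requests <= 20Gi
    limits.cpu: "20"
    limits.memory: "40Gi"
    pods: "50"                 # object count: max 50 Pods (LimitRange cannot do this)
```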
