Namespace & Multi-Tenancy
Namespace
Think of a Kubernetes Namespace as a way to divide a single big cluster into multiple smaller, virtual clusters. It helps you organize your resources (like Pods, Services, and Deployments) so that different teams or projects don’t interfere with each other. It also provides a scope for names, meaning you can have a “database” service in the Dev namespace and another “database” service in the Prod namespace, and they won’t clash.
Imagine a Large Office Building (The Kubernetes Cluster). Inside this building, you have different Departments like HR, IT, and Finance (The Namespaces).
- Isolation: People in the HR department (Pods in HR namespace) work in their own cabin. The noise they make doesn’t disturb the IT team.
- Resource Sharing: Everyone shares the same building electricity and water (Node Resources: CPU/RAM), but you can set rules so the IT team doesn’t use up all the coffee (ResourceQuotas).
- Naming: Both HR and IT can have a manager named “Ramesh” (Service Name). If you call “Ramesh” inside the HR cabin, the HR manager responds. If you call “Ramesh” in IT, the IT manager responds.
Key Characteristics to Remember
- Namespaces are “Virtual Clusters” backed by the same physical hardware.
- They provide a scope for names; names must be unique within a namespace, but can repeat across different namespaces.
- Not all resources are in a namespace (Nodes and PersistentVolumes are global!).
- DNS for a Service follows the pattern `<service-name>.<namespace-name>.svc.cluster.local`.
- Scope: Logical isolation.
- Multi-tenancy: Allows many users/teams to share one cluster.
- Resource Control: Can limit how much CPU/RAM a namespace uses.
- Security: Acts as a boundary for Access Control (RBAC).
| Feature | Description | Simple Trick to Remember |
| --- | --- | --- |
| Isolation | Separates resources logically. | Like separate folders on a laptop. |
| Quota | Limits total CPU/Memory for the namespace. | Like a monthly pocket money limit. |
| RBAC | Controls who can access what inside. | Like an ID card for a specific lab. |
| DNS | Automatic naming for discovery. | `<service-name>.<namespace-name>.svc.cluster.local` |
The “Built-in” Namespaces
When you first set up Kubernetes, it comes with default namespaces. You should know them:
- default: Where your stuff goes if you don’t specify a namespace.
- kube-system: For Kubernetes’ own components (like the scheduler and kube-dns). Warning: Don’t touch this unless you know exactly what you are doing!
- kube-public: Strictly for public data (rarely used by users).
- kube-node-lease: Used by nodes to send “heartbeats” to the control plane (to say “I am alive”).
As an Architect, you should treat namespaces as your first line of defense and organization. You don’t just “create” them; you govern them.
- ResourceQuotas: You must apply these to prevent a “noisy neighbor” issue where a Dev environment eats up all CPU, starving Production.
- LimitRanges: While Quota is for the whole namespace, LimitRange sets the default and max size for individual pods inside that namespace.
- Network Policies: By default, namespaces can talk to each other. To strictly isolate them (Security), you must implement NetworkPolicies (deny-all by default).
- Service Mesh Interaction: If using Istio, you can enforce mTLS strictly per namespace.
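As a sketch of that deny-all baseline, a NetworkPolicy like the following selects every Pod in a namespace and declares both policy types with no allow rules, so all traffic is denied (the namespace name here is illustrative):

```yaml
# Illustrative default-deny policy for one namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: dev          # hypothetical namespace
spec:
  podSelector: {}         # empty selector = applies to all Pods in the namespace
  policyTypes:
    - Ingress
    - Egress              # no ingress/egress rules listed, so all traffic is blocked
```

You would then add narrower NetworkPolicies on top of this to allow only the traffic each workload actually needs.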
Namespace Resource Quotas
In Kubernetes, a cluster is like a big shared computer. If one team runs a very heavy application, it might accidentally use up all the CPU and RAM, causing other teams’ applications to crash.
Namespace Resource Quotas are simply the “limits” or “budgets” you set for a specific team (Namespace). It tells Kubernetes: “This team is allowed to use only this much CPU and this much Memory, and not a single byte more.” This ensures fairness and prevents one bad application from bringing down the whole cluster.
Key Characteristics to Remember
- “Quota is the Ceiling” – It sets the maximum limit for the whole room (Namespace), not just one person (Pod).
- “Requests allow entry, Limits stop abuse” – Quotas calculate the total of all Pods’ requests and limits to see if they fit.
- “No Ticket, No Entry” – If a Quota is active, every single Pod must have resource requests defined, or it will be rejected (unless you have a LimitRange).
- Aggregate Level: Quotas apply to the sum of all resources in the namespace.
- Hard Limits: Kubernetes creates a hard stop. You cannot exceed the quota.
- Resource Types: It covers Compute (CPU, Memory), Storage, and Object Counts (e.g., max 10 pods).
- Scope: Strictly bound to a specific Namespace.
| Feature | Description | Real-World Check |
| --- | --- | --- |
| Compute Quota | Limits total CPU & RAM usage. | “You have 4GB RAM total for this project.” |
| Object Quota | Limits the count of resources (e.g., Pods, Services). | “You can only create 10 servers max.” |
| ScopeSelector | Apply quotas only to specific pod priorities. | “Only Gold-tier apps get unlimited resources.” |
| Enforcement | Immediate rejection of new Pods if over quota. | “Transaction Declined: Insufficient Funds.” |
Kubernetes Resource Quotas are critical governance objects defined in the core v1 API group. They provide constraints that limit the aggregate resource consumption per Namespace. When a ResourceQuota is applied, the Kubernetes API server inspects every pod creation request. If the new pod’s resource requirement would push the namespace usage over the set Quota, the API server returns a 403 Forbidden error.
This mechanism is vital for multi-tenant environments where you have Development, Staging, and Production workloads running on the same physical hardware. It is the primary defense against the “Noisy Neighbor” problem.
The most important concept is limiting Compute Resources:
- requests.cpu: The minimum CPU guaranteed to the namespace.
- limits.cpu: The maximum CPU the namespace can ever reach.
- requests.memory & limits.memory: Same logic for RAM.
At an architect level, you must consider Quota Scopes and Object Counts to prevent control plane abuse.
- BestEffort vs. NotBestEffort: You can set different quotas for pods that have limits versus those that don’t (Quality of Service).
- Object Count Quotas: Prevent “resource exhaustion attacks” where a loop creates 10,000 tiny pods, crashing the API server even if CPU usage is low. You can limit counts of `pods`, `services`, `secrets`, `configmaps`, etc.
- LimitRange Integration: A Quota rejects any Pod that doesn’t have a resource request. Architects use LimitRange to automatically inject default requests/limits so the Quota system doesn’t reject valid deployments purely due to missing YAML fields.
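The object-count idea can be sketched as a ResourceQuota like the one below (the namespace and numbers are illustrative):

```yaml
# Illustrative object-count quota to guard the control plane.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: dev            # hypothetical namespace
spec:
  hard:
    pods: "50"              # at most 50 Pods may exist in this namespace
    services: "10"
    secrets: "20"
    configmaps: "20"
```

Even if every Pod is tiny, the 51st creation request would be rejected, which blocks the runaway-loop scenario described above.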
Limitations
- No “Borrowing”: Namespace A cannot borrow unused quota from Namespace B, even if the cluster is empty. It is a hard wall.
- CPU Throttling: If you hit CPU limits, the app slows down (throttles). If you hit Memory limits, the app crashes (OOMKilled).
Common Issues, Problems and Solutions
| Problem | Root Cause | Solution |
| --- | --- | --- |
| “Forbidden: exceeded quota” | The new pod requests more resources than available in the remaining quota. | Increase the Quota or optimize the Pod’s resource requests. |
| Deployment stuck at 0 replicas | Quota is full, so the ReplicaSet cannot create the Pod. | Check `kubectl describe quota` to see which resource is exhausted. |
| Pods failing without explicit error | Often caused by Ephemeral Storage limits being hit. | Add `requests.ephemeral-storage` to your quota monitoring. |
YAML file to create a Namespace and its Quota
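A minimal sketch of such a file, with illustrative names and numbers, covering the four compute fields discussed above:

```yaml
# Create the namespace first, then the quota inside it.
apiVersion: v1
kind: Namespace
metadata:
  name: dev                    # hypothetical namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-compute-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "2"          # total guaranteed CPU across all Pods
    requests.memory: 4Gi       # total guaranteed RAM
    limits.cpu: "4"            # total CPU ceiling for the namespace
    limits.memory: 8Gi         # total RAM ceiling
```

After applying it with `kubectl apply -f`, you can inspect usage versus the ceiling with `kubectl describe quota -n dev`.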
LimitRanges for resources
In the world of Kubernetes, if you don’t set rules, a single container can act like a “noisy neighbor,” eating up all the CPU and Memory on a node. This can crash other important applications.
While ResourceQuotas set limits for the whole Namespace (the whole team), LimitRanges are the specific rules for individual Pods and Containers (the individual player). They ensure every container is sized just right: not too small to fail, and not too big to waste resources. If a developer forgets to set a CPU or Memory request, the LimitRange automatically applies a safe default setting for them.
Think of a Kubernetes Namespace as a Shared Office Cafeteria.
- ResourceQuota is the budget for the entire department (e.g., “The Engineering team gets 500 plates of food total per day”).
- LimitRange is the rule for the individual plate:
- Min: You must take at least 1 spoon of rice (so you don’t starve/crash).
- Max: You cannot take more than 2 full plates (so you don’t leave others hungry).
- Default: If you don’t say what you want, we automatically give you a standard “Thali” (Standard Default Request/Limit).
Easy Remember Cheat Sheet
- “LimitRange is for the Pod; ResourceQuota is for the Namespace.”
- “No limits defined? LimitRange fills in the blanks.”
- “It stops the ‘Goldilocks’ problem: Not too big, not too small.”
- “It acts as a gatekeeper at the door: if a Pod breaks the rules, it gets rejected immediately.”
- Scope: Applied at the Namespace level but enforces rules on individual Containers and Pods.
- Defaults: Can inject default `requests` and `limits` if the user creates a Pod without them.
- Validation: It rejects Pod creation if the requested resources violate the Min/Max constraints.
- Ratio Control: It can enforce a strict ratio between Request (minimum needed) and Limit (maximum allowed) to prevent over-commitment.
| Feature | LimitRange | ResourceQuota |
| --- | --- | --- |
| Target | Individual Container / Pod / PVC | Entire Namespace aggregate |
| Primary Job | Set Defaults & Min/Max bounds | Cap total usage (Hard limits) |
| Action on Violation | Denies Pod creation immediately | Denies Pod creation if quota exceeded |
| Auto-configuration | YES (Injects default values) | NO (Just counts and blocks) |
| Analogy | Per-person plate limit | Total buffet budget |
At an Architect level, LimitRanges are your first line of defense against Denial of Service (DoS) attacks from internal bad actors or misconfigured CI/CD pipelines.
- The LimitRange is actually an Admission Plugin. It runs in the API server. When a request comes in:
- Mutating Phase: If the pod has no limits, the LimitRange mutates the pod spec to inject the defaults.
- Validating Phase: It checks the final numbers. If they are outside the Min/Max, it rejects the pod.
- Strategy: Never allow a Namespace to exist without a LimitRange. This ensures that even if a developer is lazy, their pods are “capped” by the default limits.
- Integration: Combine LimitRanges with OPA Gatekeeper or Kyverno. While LimitRange handles the numeric values, Kyverno can enforce which LimitRange profile is applied to which team.
- Security Context: By forcing a low `CPU` and `Memory` limit, you force developers to optimize their code. This reduces the blast radius if a container is compromised and tries to mine crypto (it will hit the CPU limit immediately).
- Kyverno (Policy engine to enforce presence of LimitRanges)
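To illustrate the mutating phase described above, suppose the namespace already has a LimitRange with container defaults. A Pod submitted without a `resources` section would be persisted with those defaults injected; the values shown in the comments are illustrative, not what any particular cluster would inject:

```yaml
# Pod submitted without resources, in a namespace governed by a LimitRange.
apiVersion: v1
kind: Pod
metadata:
  name: demo
  namespace: dev            # hypothetical namespace with a LimitRange applied
spec:
  containers:
    - name: app
      image: nginx
      # The developer omitted resources. At admission, the LimitRange plugin
      # would mutate the spec to add its defaults, e.g.:
      # resources:
      #   requests: {cpu: 200m, memory: 256Mi}
      #   limits:   {cpu: 500m, memory: 512Mi}
```

You can confirm the injected values after creation with `kubectl get pod demo -n dev -o yaml`.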
Use Case
- Multi-tenant Clusters: When Team A and Team B share a cluster, LimitRanges prevent Team A from creating a “Monster Pod” that takes up an entire 64GB RAM node.
- Development Environments: Developers often forget to add resource requests in their YAML. LimitRange fixes this by auto-injecting “small t-shirt size” resources (e.g., 200m CPU, 512Mi RAM).
Benefits
- Stability: Prevents node starvation.
- Cost Control: Prevents accidental provisioning of massive containers.
- Standardization: Enforces a baseline “T-shirt sizing” for applications.
Limitations
- No Retroactive Action: If you apply a LimitRange today, it does not kill or resize existing Pods that violate the rule. It only affects new Pods.
- Node Capacity Ignorance: A LimitRange allows you to set a Max of 100GB RAM even if your biggest node is only 64GB. It validates the number, not the physical reality (though the Pod will be stuck in `Pending` state later).
Common Issues, Problems, and Solutions
- Problem: “My Pod creation is failing with a `Forbidden` error.”
  - Solution: The Pod requested resources outside the Min/Max range. Check `kubectl describe limitrange` and adjust your Pod spec.
- Problem: “I didn’t set limits, but my Pod has limits now.”
  - Solution: This is the `default` feature working as intended. If you don’t want this, you must explicitly define limits in your Pod or remove the `default` section from the LimitRange.
- Problem: “OOMKilled (Out of Memory) errors.”
  - Solution: The default limit might be too aggressive (too small). Increase the default memory limit in the LimitRange.
- Configure Default Memory Requests and Limits for a Namespace
- Configure Default CPU Requests and Limits for a Namespace
- Limit Ranges API Reference
YAML file to create a LimitRange
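A sketch of such a file, with illustrative values, covering defaults, Min/Max bounds, and the request-to-limit ratio discussed earlier:

```yaml
# Illustrative LimitRange for one namespace.
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
  namespace: dev                 # hypothetical namespace
spec:
  limits:
    - type: Container
      default:                   # injected as limits when none are set
        cpu: 500m
        memory: 512Mi
      defaultRequest:            # injected as requests when none are set
        cpu: 200m
        memory: 256Mi
      min:                       # a container may not request less than this
        cpu: 100m
        memory: 128Mi
      max:                       # a container may not exceed this
        cpu: "2"
        memory: 2Gi
      maxLimitRequestRatio:
        cpu: "4"                 # limit may be at most 4x the request
```

Apply it with `kubectl apply -f` and verify the active bounds with `kubectl describe limitrange -n dev`.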
Difference between Quota and LimitRange:
| Feature | ResourceQuota | LimitRange |
| --- | --- | --- |
| Scope | Aggregate (Whole Namespace) | Individual (Single Pod/Container) |
| Primary Goal | Prevent one team from using the whole cluster. | Prevent one Pod from using the whole node. |
| Defaulting? | No (It creates no values). | Yes (Injects default CPU/Mem). |
| Object Counts? | Yes (Can limit count of Pods, Services, PVCs). | No (Cannot limit number of objects). |
| When it fails | When the total usage > Quota. | When the request/limit violates Min/Max. |
| Analogy | Total Bank Balance | ATM Withdrawal Limit |