Kubernetes Pods


The Atom of Kubernetes: Pods & the Life of a Container

We finally start running applications! In Kubernetes, you don’t run “Containers” directly; you run Pods.

A Pod is the smallest unit in Kubernetes. It is like a “pea pod” that can hold one or more peas (containers).

If you master the Pod Lifecycle and Multi-Container patterns here, you will solve 80% of production errors quickly.


A Pod is not just a container; it is a logical host. Think of a Pod like a Virtual Machine (VM) and the containers inside it as processes running on that VM.

  • Shared Network: All containers in a Pod share the same Network Namespace. They have the same IP address and can talk to each other via localhost.
  • Shared Storage: Containers can access the same shared volumes, allowing data exchange (e.g., a log shipper reading logs written by an app).
  • Shared Lifecycle: They are scheduled together, start together, and die together.

The Pod

The Pod is the smallest deployable computing unit, but it is much more than a wrapper for a single container. It acts as a logical host, creating an encapsulated environment that mimics a Virtual Machine (VM). While containers provide isolation from the outside world, the Pod provides a shared context for the containers inside it.

To truly understand a Pod, you must view it through the lens of the “VM Analogy”:

  • The Pod is the VM. It owns the IP address, the network ports, and the storage volumes.
  • The Containers are the Processes. Just as multiple processes (e.g., an app server and a log agent) run on the same server and share resources, multiple containers run inside a Pod and share the Pod’s environment.

This architecture is achieved through three fundamental pillars of sharing: Networking, Storage, and Lifecycle.

  • “One Pod = One IP Address.” No matter how many containers are inside, they all share that single IP.
  • “Pods are Mortal.” They are born, they do their job, and they die. They are not resurrected; they are replaced.
  • “Localhost Communication.” Containers in the same Pod talk to each other using localhost, just like processes on your laptop.
  • “Atomic Unit.” You never deploy a “container” directly in Kubernetes; you always deploy a Pod.
  • Ephemeral: Pods are temporary. If a node dies, the Pod dies.
  • Co-located: Containers in a Pod are always scheduled on the same Node.
  • Shared Context: They share Network, Storage, and Lifecycle.
Containers share the following within the Pod:

Feature | Description | Analogy
Network | Shared IP & Ports | Roommates sharing the same landline phone number.
Storage | Shared Volumes | Roommates sharing the same bookshelf.
Lifecycle | Born & Die Together | Siamese twins; they move together everywhere.
Communication | localhost | Talking to someone in the same room.
Scaling | Scale Pods, not containers | You add more “Dorm Rooms,” not just more students into one room.

Shared Network: The “IP-per-Pod” Model

In a traditional VM or physical server, every process running on the host shares the same network interface. Kubernetes applies this exact model to Pods.

  • The Network Namespace: When a Pod starts, Kubernetes first launches a tiny, hidden container often called the “pause container” (or infrastructure container). Its sole job is to hold the Network Namespace open. All subsequent user containers (your app, your sidecars) join this exact same namespace.
  • Single Identity: Because they share the namespace, every container in the Pod shares the same IP address and MAC address. To the rest of the cluster, the Pod is a single addressable endpoint.
  • Localhost Communication: Containers within the Pod can communicate with each other using localhost.
    • Example: If you have a web server container listening on port 80 and a database proxy container in the same Pod, the web server can connect to localhost:5432 to reach the proxy.
  • Port Coordination: Just like processes on a VM, containers in a Pod cannot bind to the same port. You cannot run two containers that both listen on port 80 inside the same Pod; you will get a “Port already in use” error.
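To make the shared network concrete, here is a minimal sketch (the names and images are illustrative, not taken from this article): two containers in one Pod, where the second reaches the first purely over localhost.

apiVersion: v1
kind: Pod
metadata:
  name: shared-network-demo          # hypothetical name
spec:
  containers:
  - name: web                        # binds port 80 inside the shared network namespace
    image: nginx
    ports:
    - containerPort: 80
  - name: checker                    # reaches the web container via localhost, no Service needed
    image: busybox
    command: ['sh', '-c', 'while true; do wget -qO- http://localhost:80 > /dev/null && echo "web is reachable"; sleep 10; done']

If you added a second container that also tried to bind port 80, it would fail with a “port already in use” error, exactly as described above.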

Shared Storage: Data Gravity and Volumes

Containers are ephemeral; when they crash, their filesystem is lost. Pods solve this by providing Volumes: storage abstractions that are bound to the lifecycle of the Pod, not the individual container.

  • Volume Mounting: A Volume is defined at the Pod level (like mounting an external hard drive to a VM). Individual containers can then “mount” this volume into their own internal file systems.
  • Data Exchange patterns: This capability is the backbone of multi-container design patterns.
    • The Sidecar Pattern: A primary container writes logs to a shared volume (e.g., /var/log/app). A secondary “sidecar” container (like Fluentd or Logstash) mounts that same directory, reads the logs, and pushes them to a central logging system.
    • The Init Container Pattern: An init container spins up, downloads a configuration file or a dataset to a shared volume, and then terminates. The main application container starts up, mounts that volume, and uses the data.
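A hedged sketch of the Init Container pattern above (names and file paths are illustrative): an init container writes a file into an emptyDir volume defined at the Pod level, and the main container mounts the same volume to read it.

apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-demo           # hypothetical name
spec:
  volumes:
  - name: workdir                    # defined once, at the Pod level
    emptyDir: {}
  initContainers:
  - name: fetch-config               # prepares data in the shared volume, then exits
    image: busybox
    command: ['sh', '-c', 'echo "setting=production" > /work/config.txt']
    volumeMounts:
    - name: workdir
      mountPath: /work
  containers:
  - name: app                        # reads the data the init container prepared
    image: busybox
    command: ['sh', '-c', 'cat /work/config.txt && sleep 3600']
    volumeMounts:
    - name: workdir
      mountPath: /work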

Shared Lifecycle: Atomic Scheduling

The Pod is the atomic unit of scheduling in Kubernetes. The scheduler does not place individual containers; it only sees and places Pods.

  • Co-Scheduling: A Pod is either scheduled to a node or it isn’t. You will never have a situation where “Container A” of a Pod is running on Node 1 while “Container B” of the same Pod is running on Node 2. They are physically colocated on the same hardware.
  • Symbiotic Existence:
    • Startup: Containers in a Pod start in a defined order (Init containers first, then app containers).
    • Termination: When a Pod is deleted, all containers receive termination signals (SIGTERM) simultaneously.
  • Restart vs. Recreate: If a container crashes (process fails), the kubelet on the node will restart that specific container (keeping the Pod and its IP address alive). However, if the Pod itself is evicted or deleted, the IP is lost, and a brand new Pod must be created elsewhere.

Kubernetes Pods – Official Docs


Pod Lifecycle: From Birth to Death

Think of a Kubernetes Pod like a firecracker 🧨. It is designed to be lit, do its job (sparkle or bang), and then it is finished. You don’t try to “fix” a burnt-out firecracker; you just get a new one.

Unlike a Virtual Machine (VM) which you might reboot to fix, a Pod is designed to be disposable. It is born, it serves its purpose, and it dies. If a Pod “dies,” Kubernetes doesn’t resurrect it; it replaces it with a new one.

  • “Pods don’t heal, they are replaced.” –  If a Pod dies, the controller creates a new one with a new ID and IP.
  • “Pending means Waiting.” – Usually waiting for a Node to open up or an image to download.
  • “Unknown is scary but usually just network lag.” – The Master has lost contact with the Worker Node; the Pod might actually be fine.
  • “Running ≠ Ready.” – Just because the engine is on (Running) doesn’t mean the car is ready to drive (Ready to take traffic).
  • Ephemeral Nature: Pods are disposable entities.
  • Phase vs. State: “Phase” is the high-level summary (e.g., Running), while “Container State” is the low-level detail (e.g., CrashLoopBackOff).
  • Graceful Termination: Kubernetes gives Pods 30 seconds (default) to pack up and leave before force-killing them.
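The 30-second grace period can be tuned per Pod. A minimal sketch (the name is illustrative) of overriding it:

apiVersion: v1
kind: Pod
metadata:
  name: graceful-demo                # hypothetical name
spec:
  terminationGracePeriodSeconds: 60  # give the app 60s instead of the default 30s to shut down
  containers:
  - name: app
    image: nginx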

The 5 Phases of a Pod
Phase | Analogy | What’s Happening? | Common Error/Status
Pending | The Waiting Room ⏳ | API accepted it, but it’s not on a Node yet. | ErrImagePull, ImagePullBackOff, Pending
Running | The Active Job 🏃‍♂️ | Assigned to a Node, container process started. | CrashLoopBackOff, Running
Succeeded | Mission Accomplished ✅ | All containers finished successfully (Exit Code 0). | Completed (Common in CronJobs)
Failed | The Crash 💥 | Containers stopped, at least one failed. | Error, OOMKilled (Out of Memory)
Unknown | The Ghost 👻 | Master cannot talk to the Worker Node (Kubelet). | Unknown (Network partition)

  1. Pending (The Waiting Room): The API Server has created the Pod object, but it hasn’t been scheduled or started yet.
    • Scheduling: The Scheduler is looking for a Node with sufficient resources (CPU/RAM).
    • Image Pulling: The Node is currently downloading the container image (this can take time for large images).
  2. Running (The Active State): The Pod has been assigned to a Node, and at least one container inside it is active or in the process of starting / restarting.
    • Crucial Distinction: Running ≠ Ready.
      • Running means the container process has started.
      • Ready means the application is actually capable of accepting traffic (passed its Readiness Probe).
  3. Succeeded (The “Job Done” State): All containers in the Pod have terminated successfully (Exit Code 0) and will not restart.
    • Use Case: This is common for Batch Jobs or CronJobs (e.g., a database backup script that runs, finishes, and stops).
  4. Failed (The Crash): All containers have terminated, and at least one terminated with an error (Exit Code ≠ 0).
    • Implication: The application crashed or the process was killed. Depending on your restartPolicy, Kubernetes may try to create a new Pod to replace it.
  5. Unknown (The Mystery): The Control Plane (Master) cannot retrieve the status of the Pod.
    • Common Cause: Usually a network partition where the Kubelet on the Worker Node cannot communicate with the API Server. The Pod might still be running, but the Master doesn’t know.

Static Pods

Static Pods bypass this entire “brain” (the API Server, Scheduler, and controllers). They are pods that are managed directly by the Kubelet daemon on a specific node, without the API Server observing or managing the initial configuration.

How Do They Work?

The mechanism relies entirely on the filesystem of the node.

    1. The Manifest Folder: The Kubelet configuration file (/var/lib/kubelet/config.yaml) has a field, staticPodPath, that points to a directory of Pod manifests. By default, this is usually set to /etc/kubernetes/manifests.
    2. Creation: If you place a valid Pod YAML file into this folder, the Kubelet daemon (which scans this folder periodically) automatically creates and starts the Pod.
    3. Deletion: To delete the Pod, you simply remove the file from that folder. Kubelet notices the file is gone and terminates the Pod.
    4. Mirror Pods: Even though the Kubelet creates the pod locally, it tries to create a Mirror Pod on the Kubernetes API Server.
      • Purpose: This allows you to see the static pod when you run kubectl get pods.
      • Limitation: These mirror pods are effectively read-only. You cannot manage them via the API Server: kubectl delete pod may remove the mirror object, but the actual pod keeps running (and the mirror reappears) because the source file still exists on the node.
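    For example, a minimal sketch (the filename and contents are illustrative): dropping a manifest like this into the manifest folder makes the Kubelet start it as a static pod, and terminate it again when the file is removed.

    # /etc/kubernetes/manifests/static-web.yaml   (hypothetical file)
    apiVersion: v1
    kind: Pod
    metadata:
      name: static-web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80

    The mirror pod typically shows up in kubectl get pods with the node name appended (e.g., static-web-node01).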

    Why Do We Need Them?

    The primary use case for Static Pods is Bootstrapping the Control Plane.

    This helps solve the “Chicken and Egg” problem: How do you run the Kubernetes Control Plane components (which are Pods) if Kubernetes isn’t running yet?

    You use Static Pods to start the essential components on the Master Node:

    • kube-apiserver
    • etcd
    • kube-controller-manager
    • kube-scheduler

    Since these are defined as static manifests on the Master Node’s disk, the Kubelet can start them up immediately when the machine boots, bringing the cluster to life.

    Limitations
    • No Scheduling: You cannot ask a Static Pod to “move” to another node. You have to manually move the file.
    • Limited Health Checks: The Kubelet still performs liveness checks and restarts failed containers, but there is no controller-level health management, rescheduling, or rolling update as with standard Deployments.
    • ConfigMap/Secret Dependency: Static Pods running the Control Plane cannot mount ConfigMaps or Secrets from the API Server (because the API server might not be ready!). They usually rely on local files for configuration.

    Static Pods and Kubelet Config (staticPodPath)


    Multi-Container Patterns

    Imagine a specialized surgery. You have the Lead Surgeon (Main Container) doing the actual operation. But they cannot do everything alone. They need an Anesthesiologist (Sidecar) to monitor the patient’s vitals continuously, and a Nurse (Ambassador) to fetch tools from the outside storage room so the surgeon doesn’t have to leave the table.

    In Kubernetes, a Pod is that operation theatre. It usually holds one main container, but sometimes we add “helper” containers to assist the main one. They live together, share the same network IP (talk via localhost), and can see the same storage volumes.

    1. Sidecar: The “Assistant.” It extends functionality (like logging) without changing the main app.
    2. Adapter: The “Translator.” It converts output from your app into a format the monitoring tool understands.
    3. Ambassador: The “Proxy.” It handles connections to the outside world, so your app handles simple localhost requests.
    • Tightly Coupled: These containers live and die together. If the Pod dies, all containers inside it die.
    • Shared Context: They share the same Network Namespace (IP address) and IPC (Inter-Process Communication).
    • Shared Storage: They can mount the same Volume to read/write shared files.
    Pattern | Role | Best Analogy | Primary Use Case
    Sidecar | Enhancer | A sidecar attached to a motorbike. | Log forwarding, config syncing, file watching.
    Adapter | Standardizer | A universal travel power adapter. | Normalizing metrics, formatting output.
    Ambassador | Gateway | A receptionist handling calls. | Database proxy, service discovery, authentication.

    While 90% of the time you run a “Single-Container Pod,” the “Multi-Container Pod” is a powerful feature for separation of concerns.

    Instead of jamming all logic (logging, monitoring, proxying) into one giant, messy application code, we split them.

    1. The Sidecar Pattern (The Assistant)

    The “Main Container” serves the users, and the “Sidecar Container” handles boring tasks like collecting logs, updating configuration files, or handling security (HTTPS).

    Feature | Description | Real-World Example
    Primary Goal | Offload non-business tasks. | Moving log shipping logic out of Java code.
    Coupling | Tightly coupled in deployment, loosely coupled in code. | Deployed together in one YAML, but code is separate.
    Networking | Shares localhost. | Sidecar can talk to Main App on localhost:8080.
    Lifecycle | Lifecycle is tied to the Pod. | If the Pod is killed, the Sidecar is killed too.
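    A hedged sketch of the Sidecar pattern (images and paths are illustrative; busybox’s tail stands in for a real log shipper such as Fluentd):

    apiVersion: v1
    kind: Pod
    metadata:
      name: sidecar-demo             # hypothetical name
    spec:
      volumes:
      - name: logs
        emptyDir: {}
      containers:
      - name: main-app               # writes logs to the shared volume
        image: busybox
        command: ['sh', '-c', 'while true; do echo "$(date) request served" >> /var/log/app/app.log; sleep 5; done']
        volumeMounts:
        - name: logs
          mountPath: /var/log/app
      - name: log-shipper            # sidecar: reads the very same log file
        image: busybox
        command: ['sh', '-c', 'touch /var/log/app/app.log; tail -f /var/log/app/app.log']
        volumeMounts:
        - name: logs
          mountPath: /var/log/app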


    2. The Adapter Pattern (The Translator)

    In Kubernetes, the “Adapter Pattern” standardizes the output of your application. If your app speaks “Language A” but your monitoring tool expects “Language B,” the Adapter sits in the middle and translates A to B.

    • The Translator: It changes the appearance or format of the main container’s output.
    • Standardizer: It ensures all Pods in your cluster look the same to the outside world, even if the apps inside are different.
    • Hides Complexity: The outside world doesn’t need to know the messy details of the main app.
    • Interface Transformation: The primary goal is to match an external interface requirement (like Prometheus metrics).
    • Shared Network: Uses localhost to communicate with the main container.
    • Shared Volumes: Often reads files (logs or status files) written by the main container to transform them.
    Feature | Description | Real-World Example
    Primary Goal | Standardization & Translation. | Converting “Error: 500” text to a JSON Metric { "status": 500 }.
    Coupling | Tightly coupled. | Works directly with the specific output of the main app.
    Networking | Masks the Main App. | The monitoring system talks to the Adapter, not the Main App.
    Lifecycle | Symbiotic. | If the main app stops producing data, the adapter has nothing to translate.
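    A hedged sketch of the Adapter pattern (images, paths, and the “format” are illustrative; a real adapter would usually expose Prometheus-style metrics): the main app writes a raw status file, and the adapter rewrites it as JSON on a shared volume.

    apiVersion: v1
    kind: Pod
    metadata:
      name: adapter-demo             # hypothetical name
    spec:
      volumes:
      - name: status
        emptyDir: {}
      containers:
      - name: main-app               # writes status in its own ad-hoc format
        image: busybox
        command: ['sh', '-c', 'while true; do echo "Error: 500" > /status/raw.txt; sleep 10; done']
        volumeMounts:
        - name: status
          mountPath: /status
      - name: adapter                # translates the raw text into JSON for the monitoring system
        image: busybox
        command: ['sh', '-c', 'while true; do code=$(cut -d" " -f2 /status/raw.txt 2>/dev/null); echo "{\"status\": $code}" > /status/metrics.json; sleep 10; done']
        volumeMounts:
        - name: status
          mountPath: /status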

    3. The Ambassador Pattern (The Proxy)

    In Kubernetes, your Main App wants to talk to a complex database cluster. Instead of coding complex logic (like “which shard is active?” or “how to handle retries?”) into your app, your app simply talks to localhost. The Ambassador container listens on localhost, takes the request, and proxies it to the correct destination in the outside world.

    It acts as a gateway for outbound traffic.

    • The Proxy: It sits between your app and the outside world.
    • The Smart Router: It knows where to send your traffic (Sharding, Failover).
    • The Bodyguard: It handles security (mTLS, Authentication) so your app doesn’t have to.
    • Outbound Gateway: Unlike the Adapter (which focuses on output/monitoring), the Ambassador focuses on connections your app makes to other services.
    • Localhost Abstraction: Your app thinks it is talking to a local service (localhost:3306), but it’s actually talking to a remote cluster (db-prod-01.aws...).
    • Language Agnostic: Use an Ambassador written in Go (like Envoy) to handle networking for a Main App written in Java or Python.
    • Common Use Case: Database proxies or Service Mesh proxies (like Envoy or Istio). The app talks to localhost, and the Ambassador handles the complex logic of routing to the correct database shard or handling circuit breaking.
    Feature | Description | Real-World Example
    Primary Goal | Proxy & Abstract Connections. | App talks to localhost, Proxy sends to CloudDB.
    Coupling | Loosely coupled. | App just needs a TCP connection; doesn’t care who handles it.
    Networking | Handles “Outbound” traffic. | Managing connection pools to a Redis cluster.
    Lifecycle | Symbiotic. | If the Ambassador dies, the Main App loses internet/DB access.
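    A hedged, structural sketch of the Ambassador pattern (both images are placeholders; in practice the proxy would be something like Envoy or a cloud SQL proxy): the app only ever talks to 127.0.0.1, and the ambassador forwards the traffic to the real backend.

    apiVersion: v1
    kind: Pod
    metadata:
      name: ambassador-demo          # hypothetical name
    spec:
      containers:
      - name: main-app               # connects to “the database” on localhost only
        image: my-company/app:1.0    # placeholder image
        env:
        - name: DB_HOST
          value: "127.0.0.1"         # the app never learns the real DB address
        - name: DB_PORT
          value: "3306"
      - name: db-ambassador          # proxies localhost:3306 to the remote database cluster
        image: example/db-proxy:1.0  # placeholder image (Envoy or a cloud SQL proxy in practice)
        ports:
        - containerPort: 3306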

    Benefits
    • Reusability: Write the helper container once, use it with many different main applications.
    • Isolation: If the logging agent crashes, your main web server keeps running (usually).
    • Team Independence: The “Ops” team manages the logging container image; the “Dev” team manages the app image.

    InitContainers (The Setup Crew)

    In Kubernetes, InitContainers are specialized containers that run before your application starts. While your main application containers might run in parallel, InitContainers run sequentially and must complete successfully before the main application is even allowed to start.

    Why Use InitContainers?

    There are two primary reasons to use them: Dependencies and Security.

    1. Waiting for Dependencies (The “Blocker”)

    Modern distributed apps are race conditions waiting to happen. If your App starts before your Database is ready, your App crashes.

    • The Problem: Kubernetes starts Pods fast. Databases take time to load.
    • The Solution: An InitContainer runs a script that repeatedly checks “Is the DB there?” It blocks the main app from starting until the answer is “Yes.”

    🧪 Lab 5.1: InitContainer in Action

    Copy this YAML and run it. It demonstrates blocking the app start until a condition is met.

    YAML

    apiVersion: v1
    kind: Pod
    metadata:
      name: init-demo
    spec:
      containers:
      - name: my-app                 # main container: only starts after all init containers succeed
        image: busybox
        command: ['sh', '-c', 'echo The app is running! && sleep 3600']
      initContainers:
      - name: init-myservice         # runs first, must exit 0 before my-app is started
        image: busybox
        command: ['sh', '-c', 'echo "Init: Setting up environment..."; sleep 5; echo "Init: Done!"']
    

    What happens?

    1. Run kubectl apply -f pod.yaml.
    2. Run kubectl get pods -w.
    3. You will see the status progress: Init:0/1 ➔ PodInitializing ➔ Running.
    4. The main app does not start until the init container’s “sleep 5” finishes.

    The Probes:

    • Liveness Probe: “Are you alive?” If no, restart the container.
    • Readiness Probe: “Can you work?” If no, don’t send traffic, but don’t kill it.
    • Startup Probe: “Have you started?” Used for slow-starting apps.
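    A hedged snippet showing all three probes on one container (paths, ports, and timings are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: probe-demo               # hypothetical name
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
        startupProbe:                # allow a slow start: up to 30 x 10s before other probes kick in
          httpGet: { path: /, port: 80 }
          failureThreshold: 30
          periodSeconds: 10
        livenessProbe:               # if this keeps failing, the container is restarted
          httpGet: { path: /, port: 80 }
          periodSeconds: 10
        readinessProbe:              # if this fails, traffic stops, but the container is not killed
          httpGet: { path: /, port: 80 }
          periodSeconds: 5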

    RestartPolicy: This dictates what happens when containers exit, i.e., how Kubernetes behaves in the Failed or Succeeded phases.

    • Always (Default): Even if it finishes successfully, restart it. (Good for web servers).
    • OnFailure: Only restart if it crashes. (Good for AI training jobs).
    • Never: Never restart. (Good for one-time testing).
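    A minimal sketch (name and command are illustrative) of a one-shot Pod that is only restarted if it fails:

    apiVersion: v1
    kind: Pod
    metadata:
      name: one-shot-task            # hypothetical name
    spec:
      restartPolicy: OnFailure       # retry only if the script exits non-zero
      containers:
      - name: task
        image: busybox
        command: ['sh', '-c', 'echo "doing one-time work"; exit 0']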

    Resource limits: Each container can declare CPU/memory requests (what the scheduler reserves) and limits (hard caps). A container that exceeds its memory limit is OOMKilled.

    Security Context: Defines privilege and access-control settings for the Pod or its containers, such as the user ID to run as or whether privilege escalation is allowed.
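    A hedged sketch combining both (the name and the numbers are illustrative, not recommendations):

    apiVersion: v1
    kind: Pod
    metadata:
      name: constrained-demo         # hypothetical name
    spec:
      securityContext:
        runAsNonRoot: true           # Pod-level: refuse to run any container as root
        runAsUser: 1000
      containers:
      - name: app
        image: busybox
        command: ['sh', '-c', 'sleep 3600']
        resources:
          requests:                  # what the scheduler reserves on the Node
            cpu: "250m"
            memory: "128Mi"
          limits:                    # hard caps; exceeding the memory limit means OOMKilled
            cpu: "500m"
            memory: "256Mi"
        securityContext:
          allowPrivilegeEscalation: false   # container-level setting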

    💀 The Nightmare: CrashLoopBackOff

    This is the most common error you will see.

    • What it means: The Pod started ➔ It crashed ➔ K8s restarted it ➔ It crashed again ➔ K8s backs off (10s, then 20s, 40s… up to 5 minutes) ➔ Restarts ➔ Crashes…
    • Common Causes:
      1. Application error (Code bug).
      2. Missing configuration (Env variable or ConfigMap missing).
      3. Port conflict.
      4. Container ran out of memory (OOMKilled).
