Kubernetes Secrets for Decoupling Configuration
Sensitive Data Management
In Kubernetes, Secrets are dedicated objects designed to store sensitive information such as passwords, tokens, and keys. They function very similarly to ConfigMaps (which store non-sensitive configuration), but Secrets are intended to hold confidential data. This separation helps you manage your application's security better, ensuring that passwords aren't just lying around in plain text in your code.
Key Characteristics to Remember
- “Secrets are for passwords, keys, and tokens; ConfigMaps are for non-sensitive settings.”
- “Base64 encoding is NOT encryption; it’s just a translation format.”
- “Always enable Encryption at Rest to protect secrets stored in etcd.”
- “Mount secrets as volumes rather than environment variables for better security.”
| Feature | Description |
| --- | --- |
| Object Type | kind: Secret |
| Primary Use | Storing passwords, OAuth tokens, SSH keys, TLS certificates |
| Storage Limit | 1 MiB (to prevent large memory consumption) |
| Default Security | Low (Base64 encoded only) |
| Best Practice | Use external vaults (Vault, AWS Secrets Manager) for production |
Kubernetes Secrets solve the problem of decoupling credentials from code. However, there is a massive misconception among beginners that “Secrets are encrypted by default.” They are not.
The Base64 Trap
When you create a secret, Kubernetes converts the data into Base64.
- Encoding is just changing the format (like translating Hindi to English). It is easily reversible.
- Encryption is scrambling the data so it cannot be read without a key.
If a hacker gets access to your cluster and runs: kubectl get secrets my-secret -o yaml
They will see a string like c3VwZXJzZWNyZXQ=. They can simply copy-paste this into a decoder and get your password in clear text.
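You can see for yourself how weak this is. Decoding the string from the example above requires nothing but the standard base64 utility, no key of any kind:

```shell
# Base64 "protection" reversed in one line -- no key required.
# (the value below is the string a hacker would copy out of the YAML)
encoded="c3VwZXJzZWNyZXQ="
printf '%s' "$encoded" | base64 --decode
# prints: supersecret
```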
How to actually secure it?
- RBAC (Role-Based Access Control): Don't let developers run kubectl get secrets.
- Encryption at Rest: Configure Kubernetes to encrypt the secret before writing it to the etcd database.
- External Vaults: Don’t store secrets in Kubernetes at all. Use tools like HashiCorp Vault.
Types of Secrets
Kubernetes has built-in types for specific use cases. Using the correct type helps validations and automated behaviors.
Opaque (The Default)
The Opaque secret type is the workhorse of Kubernetes configuration. It is defined as type: Opaque in the YAML, and it is also the default if no type is provided.
Method A: From Literal (Command Line). Best for quick, simple passwords.
# Syntax
kubectl create secret generic [secret-name] --from-literal=[key]=[value]
# Example
kubectl create secret generic backend-user --from-literal=username=admin
Method B: From File (The Real World Way). In production, you often have a file (like ssh-private-key.pem or config.json) that you want to upload as a secret.
# This creates a secret where Key = filename, Value = file content
kubectl create secret generic ssh-key-secret --from-file=./id_rsa
The “Generic” Alias
You will notice the command is kubectl create secret generic.
- CLI Command: generic
- YAML Type: Opaque
They are the same thing. generic is just the CLI alias for creating an Opaque secret.
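To make the alias concrete, here is a sketch of the manifest that the backend-user command shown earlier effectively generates. Note that the type field says Opaque even though the CLI said generic:

```shell
# Sketch: the manifest equivalent of the "generic" CLI command above.
# The type is Opaque, not "generic"; the value is Base64-encoded.
cat <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: backend-user
type: Opaque
data:
  username: $(printf '%s' "admin" | base64)   # YWRtaW4=
EOF
```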
Why use "stringData"?
Beginners often struggle with manually converting their passwords to Base64 to put them into a YAML file.
- The Hard Way (data field): You must encode the string secret123 -> c2VjcmV0MTIz yourself and paste it in.
- The Easy Way (stringData field): You can write plain text in your YAML using the stringData field. When you apply it, Kubernetes automatically converts it to Base64 and moves it to the data field for you.
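You can reproduce the conversion Kubernetes performs, using the example string above:

```shell
# What Kubernetes does to every stringData value on apply:
printf '%s' "secret123" | base64
# prints: c2VjcmV0MTIz
```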
apiVersion: v1
kind: Secret
metadata:
name: my-app-secret
type: Opaque
stringData:
# You can write plain text here!
database_url: "postgres://user:pass@localhost:5432/db"
  api_key: "my-super-secret-key"
Use Cases
- Database Connection Strings.
- SaaS API Keys (Stripe, SendGrid, Twilio).
- SSH Private Keys for git access.
- Basic Auth (.htpasswd) files.
https://kubernetes.io/docs/concepts/configuration/secret/#opaque-secrets
https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl
kubernetes.io/dockerconfigjson (Private Registry Auth)
When a Kubelet (the agent on the node) tries to pull a private image from a registry (like Docker Hub private repos, AWS ECR, or Google Artifact Registry), the registry challenges it for credentials.
- Kubelet looks at the Pod’s spec.
- It finds the imagePullSecrets list.
- It retrieves the referenced Secret from the API server.
- It decodes the .dockerconfigjson data.
- It uses those credentials to authenticate with the registry and pull the image.
If any step fails (secret missing, wrong password, wrong namespace), the pod status enters ImagePullBackOff (it tries, fails, waits, tries again…).
Method A: The CLI Shortcut (Recommended)
This command generates the correct JSON structure for you.
kubectl create secret docker-registry my-registry-key \
--docker-server=https://index.docker.io/v1/ \
--docker-username=janedoe \
--docker-password=mypassword123 \
  --docker-email=jane@example.com
- Note: For Docker Hub, use https://index.docker.io/v1/. For AWS ECR, it looks like 123456789.dkr.ecr.us-east-1.amazonaws.com.
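Under the hood, the command above packs the credentials into a small JSON document stored under the .dockerconfigjson key. A minimal sketch of that structure, using the example values from above (the "auth" field is simply the Base64 of "username:password"):

```shell
# Sketch of the JSON stored under the .dockerconfigjson key.
auth=$(printf '%s' "janedoe:mypassword123" | base64)
cat <<EOF
{
  "auths": {
    "https://index.docker.io/v1/": {
      "username": "janedoe",
      "password": "mypassword123",
      "auth": "$auth"
    }
  }
}
EOF
```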
The ServiceAccount Strategy (The Pro Move)
Adding imagePullSecrets to every single Pod YAML is tedious and error-prone. Better Approach: Add the secret to the default ServiceAccount of the namespace.
- Create the secret.
- Patch the ServiceAccount:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "my-registry-key"}]}'
Result: Any Pod created in this namespace will automatically inherit this secret and can pull images without extra YAML configuration.
Cloud Provider Helpers (ECR/ACR/GCR)
The Problem: Docker Hub passwords are static. But AWS ECR tokens expire every 12 hours. If you use a static secret for ECR, your pods will fail to pull images the next day. The Solution: You need a “Refresher” mechanism.
- AWS ECR Credential Helper: A tool installed on the nodes.
- External CronJob: A Kubernetes CronJob that runs aws ecr get-login-password, deletes the old secret, and creates a fresh one every 10 hours.
- Kubelet Credential Provider: (Newer feature) Allows the Kubelet to dynamically fetch credentials from a binary.
Inspecting the Secret (Debugging)
If you inspect this secret, you will see one key: .dockerconfigjson.
kubectl get secret my-registry-key -o jsonpath="{.data.\.dockerconfigjson}" | base64 --decode
Use Cases
- Pulling proprietary software from a vendor’s private registry.
- CI/CD pipelines pulling internal application images.
- Mirrored registries (pull-through cache) requiring authentication.
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry
kubernetes.io/tls (SSL/TLS Certificates)
The primary consumer of kubernetes.io/tls secrets is the Ingress Controller.
When you configure an Ingress resource to serve HTTPS, you reference the secret name in the tls section. The Ingress Controller (like Nginx) watches for this secret, extracts the certificate and key, and configures the underlying Nginx server to handle the SSL handshake.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-secure-ingress
spec:
tls:
- hosts:
- secure-app.com
secretName: my-site-tls # <--- References the Secret
rules:
- host: secure-app.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-service
port:
          number: 80
Method A: The CLI Shortcut. This checks that the files exist and creates the secret with the correct keys automatically.
kubectl create secret tls my-tls-secret \
--cert=path/to/cert.pem \
--key=path/to/key.pem
Method B: The "Bundle" Trick. If your certificate authority (CA) gives you a "root" certificate and an "intermediate" certificate, you must bundle them into one file before creating the secret.
cat website.crt intermediate.crt root.crt > full-chain.crt
kubectl create secret tls my-tls-secret --cert=full-chain.crt --key=private.key
Why? Browsers only ship with Root CA certificates. Your certificate is signed by an Intermediate CA they may not know, so the server must present the full chain of trust.
DevSecOps Architect Level
Cert-Manager (The Automation King)
Manually creating TLS secrets is an anti-pattern in modern DevSecOps because certificates expire (usually every 90 days for Let's Encrypt). Cert-Manager is an operator that runs in your cluster.
- It talks to Let’s Encrypt (or HashiCorp Vault).
- It performs the validation challenge (proving you own the domain).
- It automatically creates and updates the kubernetes.io/tls secret for you.
- It renews the certificate 30 days before expiry automatically.
- Official Website: https://cert-manager.io/
- GitHub: https://github.com/cert-manager/cert-manager
Wildcard Certificates
Instead of creating one secret per subdomain (api.google.com, mail.google.com), you create one Wildcard Secret (*.google.com).
- Strategy: Create one wildcard-tls secret in a specific namespace (or replicate it using tools like Reflector) and share it across all Ingress resources.
Mounting TLS in Pods (mTLS)
Sometimes you need “End-to-End Encryption” (User -> Ingress -> Pod). In this case, you mount the TLS secret directly into the Pod.
volumes:
- name: tls-volume
secret:
secretName: my-site-tls
containers:
- name: app
volumeMounts:
- name: tls-volume
mountPath: "/etc/ssl/certs"
readOnly: true
The app will see two files: /etc/ssl/certs/tls.crt and /etc/ssl/certs/tls.key.
Use Cases
- Ingress Controller: Terminating HTTPS traffic at the edge.
- Service Mesh (Istio/Linkerd): Managing mTLS identity certificates between microservices.
- Secure Pods: Running a web server (like Tomcat or Nginx) inside a pod that serves HTTPS directly.
Common Issues
- Certificate Expiry: The #1 cause of outages. The secret doesn't update itself.
- Solution: Use Cert-Manager. https://cert-manager.io/docs
- Key Mismatch: You generated a new Certificate but kept the old Private Key (or vice versa). Nginx will crash or fail to reload.
- Solution: Always regenerate both, or verify that the modulus of both files matches using openssl.
- Wrong Keys: Creating the secret with keys like cert.pem instead of tls.crt. The Ingress controller is hardcoded to look for tls.crt. If it's not there, it fails.
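Here is a quick local check for a key mismatch. The sketch generates a throwaway self-signed pair just so it runs anywhere; against a real pair, swap in your own tls.crt and tls.key:

```shell
# Generate a throwaway cert/key pair, then compare modulus digests.
# If the two digests differ, Nginx will refuse the pair.
openssl req -x509 -newkey rsa:2048 -nodes -keyout tls.key -out tls.crt \
  -days 1 -subj "/CN=example.test" 2>/dev/null
cert_mod=$(openssl x509 -noout -modulus -in tls.crt | openssl md5)
key_mod=$(openssl rsa -noout -modulus -in tls.key | openssl md5)
[ "$cert_mod" = "$key_mod" ] && echo "cert and key match"
```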
- https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets
- https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
DevSecOps Architect Level
As an Architect, you should rarely rely on native Kubernetes Secrets alone for high-compliance environments (like Banking or Healthcare). Here are the enterprise-grade tools you must know:
RBAC (Role-Based Access Control)
This is your first line of defense.
- Principle: Developers might need to deploy apps that use secrets, but they rarely need to read the secret values themselves.
- Action: Create Roles that do not grant the get, list, and watch verbs on secrets resources to standard users.
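A minimal sketch of such a Role (the role and namespace names here are hypothetical). Because RBAC is allow-list based, simply leaving secrets out of the resources list denies access to it:

```shell
# A namespace Role for developers: Pods and ConfigMaps are readable,
# but the secrets resource is deliberately absent, so reads are denied.
cat <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer-readonly
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods", "configmaps"]
  verbs: ["get", "list", "watch"]
EOF
```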
Encryption at Rest (Etcd Encryption)
By default, etcd stores data in plain text. If someone steals the physical hard drive or takes a backup snapshot of the master node, they have all your passwords.
- Solution: Enable Encryption at Rest. This is a cluster-level setting (configured via an EncryptionConfiguration file) that forces the API server to encrypt secrets before writing them to etcd.
- Algorithms: Use strong providers like aescbc or kms (Key Management Service), which integrates with cloud providers (AWS KMS, Google Cloud KMS).
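A minimal sketch of such an EncryptionConfiguration file, generating a fresh 32-byte AES key on the fly; the API server is then started with --encryption-provider-config pointing at the saved file:

```shell
# Generate a random 32-byte key and emit a minimal aescbc configuration.
key=$(head -c 32 /dev/urandom | base64)
cat <<EOF
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
- resources: ["secrets"]
  providers:
  - aescbc:
      keys:
      - name: key1
        secret: $key
  - identity: {}   # fallback so previously unencrypted data stays readable
EOF
```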
External Secret Stores (The Enterprise Way)
For banking-grade security, do not store secrets in Kubernetes at all.
- Tool: Use the Secrets Store CSI Driver.
- How it works: Your secrets live in HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault. They are only mounted into the Pod as a volume at runtime. They never actually exist in the Kubernetes etcd database.
Secrets Store CSI Driver
This allows you to mount secrets/keys/certs stored in enterprise vaults directly into the Pod as a volume using the CSI (Container Storage Interface).
- Benefit: The secret is never stored as a Kubernetes Secret object or environment variable. It exists only in the Pod's memory/file system.
- Official Website: https://secrets-store-csi-driver.sigs.k8s.io/
External Secrets Operator (ESO) – The Industry Standard
Instead of manually creating secrets, you store them in a cloud provider’s secure vault (AWS/Azure/GCP). The ESO controller fetches them and creates the Kubernetes Secret automatically.
- Official Website: https://external-secrets.io/
- Supported Backends:
- AWS Secrets Manager: https://aws.amazon.com/secrets-manager/
- Azure Key Vault: https://azure.microsoft.com/en-us/products/key-vault/
- Google Secret Manager: https://cloud.google.com/secret-manager
HashiCorp Vault – The Gold Standard
This is the most popular tool for managing secrets in a platform-agnostic way (works on any cloud or on-premise). You can inject secrets into Pods using the Vault Agent Sidecar Injector.
- Official Website: https://www.vaultproject.io/
- Vault K8s Integration: https://developer.hashicorp.com/vault/docs/platform/k8s
Remember: since Base64 provides no real protection, you must combine the layers of defense above: RBAC, encryption at rest, and external secret stores.
Additional Details
- Immutable Secrets: Just like ConfigMaps, you can mark a Secret as immutable: true. This protects against accidental updates and improves API performance (the Kubelet stops watching for changes).
- Volume vs. Env Var Updates:
  - If you mount a Secret as a Volume and later update the Secret, the file inside the Pod is updated automatically via a symlink swap (eventually).
  - If you use Environment Variables, the Pod will not see the update until you restart the Pod.
- SubPath Mounting: Be careful using subPath to mount a single file from a Secret. It creates a static copy and breaks auto-updates.
- Dot-prefixed files: If mounting secrets as a volume, K8s uses hidden ..data symlinks to handle atomic updates.
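An immutable Secret is just one extra field; a minimal sketch (the name and value are placeholders):

```shell
# Once applied, any change to the data is rejected by the API server;
# to rotate the value you must delete and recreate the Secret
# (and restart the Pods that consume it).
cat <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: locked-secret
type: Opaque
immutable: true
stringData:
  token: "abc123"
EOF
```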
Key Components
- Data Field: Contains the Base64 encoded strings.
- StringData Field: Allows you to write plain text in your YAML (K8s converts it to Base64 automatically on apply). Use this for easier editing!
- Type: Defines the schema (Opaque, TLS, etc.).
Use Cases
- Storing database credentials (User/Pass).
- Storing API Tokens for third-party services (Stripe, SendGrid).
- Storing TLS Certificates for HTTPS.
- Authenticating with private Docker Registries.
Best Practices
- Never commit Secrets to Git. Use .gitignore or Sealed Secrets.
- Use Least Privilege. Pods should only mount the specific secrets they need.
- Enable Auditing. Turn on Kubernetes Audit Logs to track who accessed a secret.
Common Issues
- Git Leakage: Developers accidentally committing secret.yaml to GitHub.
  - Solution: Use git-secrets to prevent commits that contain patterns like passwords.
- Size Limits: Storing large certificates or config files over 1 MiB fails.
- Solution: Compress data or use external object storage (like S3) and pass the URL.
- Consumption Complexity: Applications sometimes expect a specific config file format, not just key-value pairs. You might need an initContainer to format the secret properly.
Troubleshooting
- “CrashLoopBackOff”: Often caused because the app cannot find the Environment Variable (did you name the secret key correctly?).
- “ImagePullBackOff”: Usually means the imagePullSecret is missing or invalid for the private registry.
https://kubernetes.io/docs/concepts/configuration/secret
https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data
Configure a Pod to Use Secrets
To configure a Pod, you edit the .yaml file. You never “send” the secret to the pod manually; you declare it in the configuration.
- Environment Variables: Best for passwords and API keys.
- Volumes: Best for certificate files or large config files.
Method A: As Environment Variables (Most Common)
This is the standard way to inject database passwords or API tokens. Kubernetes automatically decodes the secret and sets it as a regular environment variable.
Copy the code below into a file named secret-env.yaml.
# 1. Create the Secret
apiVersion: v1
kind: Secret
metadata:
name: db-credentials
type: Opaque
stringData:
# 'stringData' allows you to write plain text in YAML.
# Kubernetes will Base64 encode it automatically when saved.
username: "admin_user"
password: "SuperSecretPassword123!"
---
# 2. Create the Pod
apiVersion: v1
kind: Pod
metadata:
name: secret-env-demo
spec:
containers:
- name: my-app
image: busybox
command: [ "sleep", "3600" ] # Keep alive for testing
env:
# Inject the USERNAME
- name: DB_USER
valueFrom:
secretKeyRef:
name: db-credentials # Name of the Secret
key: username # Key inside the Secret
# Inject the PASSWORD
- name: DB_PASS
valueFrom:
secretKeyRef:
name: db-credentials # Name of the Secret
key: password # Key inside the Secret
How to Run and Verify:
# 1. Apply
kubectl apply -f secret-env.yaml
# 2. Verify (You will see the PLAINTEXT password)
kubectl exec secret-env-demo -- env | grep DB_
Expected Output:
DB_USER=admin_user
DB_PASS=SuperSecretPassword123!
Method B: As a Volume (Best for Files)
Use this when your application expects a file, such as an SSH key or an SSL certificate.
Copy the code below into a file named secret-volume.yaml.
# 1. Create the Secret (simulating an SSH key)
apiVersion: v1
kind: Secret
metadata:
name: ssh-key-secret
type: Opaque
stringData:
id_rsa: |
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
EXAMPLE_KEY_CONTENT_DO_NOT_USE_IN_PROD
-----END OPENSSH PRIVATE KEY-----
---
# 2. Create the Pod
apiVersion: v1
kind: Pod
metadata:
name: secret-vol-demo
spec:
containers:
- name: my-app
image: busybox
command: [ "sleep", "3600" ]
volumeMounts:
- name: secret-vol
mountPath: "/etc/certs" # Where the file will appear
readOnly: true # Always good practice for secrets
volumes:
- name: secret-vol
secret:
secretName: ssh-key-secret
How to Run and Verify:
# 1. Apply
kubectl apply -f secret-volume.yaml
# 2. Verify (Check if the file exists)
kubectl exec secret-vol-demo -- ls -l /etc/certs/
Expected Output:
lrwxrwxrwx 1 root root ... id_rsa -> ..data/id_rsa
3. Read the content:
kubectl exec secret-vol-demo -- cat /etc/certs/id_rsa