EKS Secrets: Custom Init Containers
How to Implement It (The emptyDir Approach)
The most common way to pass data from the `initContainer` to the main container is via an `emptyDir` volume.

- **The Volume:** An in-memory `emptyDir` volume is mounted to both the `initContainer` and the main application container.
- **The Fetch:** The `initContainer` (often a lightweight image with the AWS CLI installed) runs a script to pull the parameters from AWS SSM.
- **The Write:** It formats those parameters (e.g., as a `.env` file) and saves them into the shared volume.
- **The Execution:** The `initContainer` terminates successfully. The main container starts, reads the `.env` file from the shared volume, and starts the application process.
Example YAML Outline:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      serviceAccountName: my-app-sa  # Needs IRSA configured for AWS SSM access
      volumes:
        - name: shared-secrets
          emptyDir:
            medium: Memory  # Keeps secrets in RAM, not on disk
      initContainers:
        - name: fetch-ssm-secrets
          image: amazon/aws-cli:latest
          volumeMounts:
            - name: shared-secrets
              mountPath: /secrets
          command:
            - /bin/sh
            - -c
            - |
              # Fetch from SSM and write a KEY=VALUE line to the shared volume
              echo "DB_PASSWORD=$(aws ssm get-parameter --name "/my-app/db-password" --with-decryption --query "Parameter.Value" --output text)" > /secrets/.env
      containers:
        - name: my-app-container
          image: my-app-image:latest
          volumeMounts:
            - name: shared-secrets
              mountPath: /app/secrets
          command:
            - /bin/sh
            - -c
            - |
              # Export the secrets and start the app
              export $(cat /app/secrets/.env | xargs)
              npm start
```
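The handoff between the two containers can be simulated locally with plain shell before touching the cluster. This is a minimal sketch: a temporary directory stands in for the shared `emptyDir` mount, and a hard-coded value stands in for the SSM fetch.

```shell
# A temp dir stands in for the shared emptyDir mount.
SHARED=$(mktemp -d)

# "initContainer" step: fetch the secret (hard-coded here) and write the .env file.
printf 'DB_PASSWORD=%s\n' 's3cr3t' > "$SHARED/.env"

# "main container" step: export everything in the file, then start the app.
set -a
. "$SHARED/.env"
set +a
echo "$DB_PASSWORD"   # the app process now sees the secret in its environment
```

The `set -a` / `set +a` pair marks every variable assigned while sourcing the file for export, which is more robust than `export $(cat ... | xargs)` when values contain spaces.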
Additional Considerations
- **Security (IRSA):** For this to work securely, you must use IAM Roles for Service Accounts (IRSA) or EKS Pod Identity. This ensures the pod gets exactly the AWS permissions it needs to read from SSM, without hardcoding AWS credentials.
- **In-Memory Storage:** Notice the `medium: Memory` in the `emptyDir` definition above. This creates a `tmpfs` (RAM-backed) file system, ensuring your sensitive SSM parameters are never written to the underlying node's physical disk.
- **The Update Problem:** As you noted in your "Cons," if an SSM parameter changes, the pod won't know. You have to manually trigger a rollout restart (`kubectl rollout restart deployment my-app`) to force the `initContainer` to run again and fetch the fresh values.
First, we will compare the `initContainer` approach to the External Secrets Operator (ESO) so you can see the trade-offs, and then provide a robust script for your implementation.
Comparison: initContainer vs. External Secrets Operator (ESO)
The choice usually comes down to whether you want to manage “Infrastructure as Code” (ESO) or “Logic as Code” (initContainer).
| Feature | Custom initContainer | External Secrets Operator (ESO) |
| --- | --- | --- |
| Performance | Slower. Adds seconds to every pod startup while it calls the AWS API. | Faster. Secrets are pre-synced to K8s; pods start instantly. |
| Complexity | Low. No extra cluster components to install or manage. | Moderate. Requires installing an operator and CRDs. |
| Security | Secrets stay in memory/tmpfs (if configured). Not in K8s Secrets. | Secrets live in K8s Secrets (etcd). Requires encryption at rest. |
| Updates | Manual. You must restart pods to pick up new SSM values. | Automatic. ESO polls AWS and updates K8s Secrets automatically. |
| Error Handling | If the AWS API is down, the pod fails to start. | If the AWS API is down, the pod uses the last cached secret. |
Production-Ready initContainer Script
If you decide to stick with the `initContainer`, you shouldn't pull one variable at a time. Using `get-parameters-by-path` is much more efficient, since it retrieves an entire parameter tree in a single API call.
Here is a script designed to fetch all parameters under a specific path (e.g., `/app/production/`) and format them for an `.env` file.
The Script (`fetch-secrets.sh`):
```bash
#!/bin/bash
set -euo pipefail  # Exit on errors, unset variables, and failures inside pipelines

# Configuration
SECRET_PATH="/app/production/"
OUTPUT_FILE="/secrets/.env"

echo "Fetching parameters from SSM path: $SECRET_PATH"

# Start from an empty file so a rerun doesn't append duplicate entries
: > "$OUTPUT_FILE"

# Fetch all parameters in the path, decrypt them, and format as KEY=VALUE
# We use --recursive to get nested paths if necessary
aws ssm get-parameters-by-path \
  --path "$SECRET_PATH" \
  --with-decryption \
  --recursive \
  --query "Parameters[*].[Name,Value]" \
  --output text | while read -r name value; do
    # Strip the path prefix to get just the variable name
    # e.g., /app/production/DB_PASSWORD -> DB_PASSWORD
    var_name=$(echo "$name" | awk -F'/' '{print $NF}')
    echo "$var_name=$value" >> "$OUTPUT_FILE"
done

echo "Successfully wrote secrets to $OUTPUT_FILE"
```
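The KEY=VALUE transformation in that loop can be exercised without any AWS access by feeding it sample `--output text` lines (tab-separated name/value pairs). The names and values below are made-up stand-ins:

```shell
# Sample lines standing in for `aws ssm get-parameters-by-path --output text`
sample=$(printf '/app/production/DB_PASSWORD\thunter2\n/app/production/DB_USER\tadmin\n')

# Same transformation as the script: strip the path prefix, emit KEY=VALUE
result=$(echo "$sample" | while read -r name value; do
  echo "${name##*/}=$value"   # pure-shell equivalent of the awk call
done)

echo "$result"
```

`read -r name value` splits on the tab and assigns everything after the first field to `value`, so values containing spaces survive intact.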
Deployment Configuration
To use this script effectively, make sure your emptyDir is backed by memory to ensure secrets never touch the disk.
```yaml
spec:
  volumes:
    - name: shared-secrets
      emptyDir:
        medium: Memory  # Secrets stay in RAM
  initContainers:
    - name: fetch-ssm-secrets
      image: amazon/aws-cli:latest
      volumeMounts:
        - name: shared-secrets
          mountPath: /secrets
      command: ["/bin/bash", "-c"]
      args:
        - |
          # (Insert the script above here or mount it via ConfigMap)
  containers:
    - name: app
      volumeMounts:
        - name: shared-secrets
          mountPath: /app/secrets
      command: ["/bin/sh", "-c"]
      args:
        - |
          # Use `.` rather than `source`, which is a bashism /bin/sh may not support
          set -a; . /app/secrets/.env; set +a
          exec ./my-application-binary
```
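Rather than inlining the script, you can ship it in a ConfigMap and mount it into the `initContainer`. This is a sketch, assuming the script above is stored under the key `fetch-secrets.sh`; the ConfigMap and volume names are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fetch-secrets-script
data:
  fetch-secrets.sh: |
    #!/bin/bash
    # (contents of the fetch-secrets.sh script above)
---
# In the pod spec, mount it executable and run it as the initContainer command:
#   volumes:
#     - name: fetch-script
#       configMap:
#         name: fetch-secrets-script
#         defaultMode: 0755  # make the script executable
#   initContainers:
#     - name: fetch-ssm-secrets
#       image: amazon/aws-cli:latest
#       command: ["/scripts/fetch-secrets.sh"]
#       volumeMounts:
#         - name: fetch-script
#           mountPath: /scripts
```

This keeps the script versioned alongside your other manifests and lets you update it without rebuilding any image.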
A quick note on sequencing: the main application container needs a way to "know" when the secret file is ready. Since initContainers must run to completion before the main container starts, Kubernetes handles this ordering for you automatically.
The IRSA setup requires three distinct parts: the permissions, the trust relationship, and the Kubernetes configuration.
1. The IAM Permission Policy
First, you need an IAM Policy that grants access to your specific SSM paths. If your parameters are encrypted using a Customer Managed KMS Key (CMK), you must also grant kms:Decrypt permissions.
Create an IAM policy with the following JSON:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParametersByPath",
        "ssm:GetParameters",
        "ssm:GetParameter"
      ],
      "Resource": "arn:aws:ssm:<REGION>:<ACCOUNT_ID>:parameter/app/production/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt"
      ],
      "Resource": "arn:aws:kms:<REGION>:<ACCOUNT_ID>:key/<KMS_KEY_ID>"
    }
  ]
}
```
(Note: If you are using the default AWS-managed KMS key for SSM (alias/aws/ssm), the KMS block is usually not required as long as the role is in the same account.)
2. The IAM Trust Policy
Next, create an IAM Role, attach the policy you just created, and configure its Trust Relationship. This is the core of IRSA: it tells AWS that a specific Kubernetes Service Account (via your cluster's OIDC provider) is allowed to assume this role.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT_ID>:oidc-provider/<OIDC_PROVIDER_URL>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "<OIDC_PROVIDER_URL>:sub": "system:serviceaccount:<NAMESPACE>:<SERVICE_ACCOUNT_NAME>",
          "<OIDC_PROVIDER_URL>:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
```
Make sure to replace the placeholders with your cluster’s OIDC URL, your target Kubernetes namespace, and the name of the Service Account you intend to use.
3. The Kubernetes Service Account
Finally, tell Kubernetes to link the Service Account to the AWS IAM Role using an annotation.
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-sa
  namespace: my-namespace
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<IAM_ROLE_NAME>
```
Once this is applied, ensure `serviceAccountName: my-app-sa` is defined in your Deployment's Pod spec. When the `initContainer` starts, Kubernetes will automatically inject a temporary web identity token into the pod, which the AWS CLI uses to authenticate.
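If you use `eksctl`, the role, trust policy, and annotated Service Account from steps 2 and 3 can be generated in one command. A sketch, assuming a cluster named `my-cluster` and the policy from step 1 already created (cluster, namespace, and policy names are placeholders):

```shell
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace my-namespace \
  --name my-app-sa \
  --attach-policy-arn arn:aws:iam::<ACCOUNT_ID>:policy/<POLICY_NAME> \
  --approve
```

This also creates the cluster's OIDC provider association if it doesn't exist yet, which is the part people most often miss when wiring IRSA by hand.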
A Modern Alternative: EKS Pod Identity
If you are running a newer cluster, AWS recently released EKS Pod Identity, which acts as a successor to IRSA. It eliminates the need to configure OIDC providers and complex Trust Policies entirely. Instead, you use an AWS API to map an IAM Role directly to a K8s Service Account, and an AWS-managed agent handles the rest.
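With Pod Identity, the mapping is a single API call, shown here via the AWS CLI (cluster and role names are placeholders), assuming the `eks-pod-identity-agent` add-on is installed on the cluster:

```shell
aws eks create-pod-identity-association \
  --cluster-name my-cluster \
  --namespace my-namespace \
  --service-account my-app-sa \
  --role-arn arn:aws:iam::<ACCOUNT_ID>:role/<IAM_ROLE_NAME>
```

Note that with Pod Identity the IAM role's trust policy trusts the `pods.eks.amazonaws.com` service principal instead of your cluster's OIDC provider, so the trust policy from step 2 above does not carry over unchanged.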