AWS Cloud to EKS Authentication & Access Control
If you have ever wondered how your AWS user account is securely allowed to run kubectl commands on an Amazon EKS cluster, you are in the right place!
EKS Authentication and Access Control is simply the process of verifying who you are using AWS IAM and then deciding what you are allowed to do inside the cluster using native Kubernetes RBAC. These two mechanisms operate on a strictly decoupled model, ensuring high security and easy management.
Prerequisite: Before continuing with this topic, ensure you have a foundational understanding of native Kubernetes Role-Based Access Control (RBAC).
The Corporate Building Analogy
To understand this decoupled flow, think of a highly secure corporate office building:
- AWS IAM (Authentication/AuthN): This is the main security gate at the ground floor. You show your ID card, and the security guard confirms you are who you claim to be.
- EKS Identity Mapping: This is the receptionist who checks your verified ID against a pre-registration database and hands you a specific, color-coded visitor badge.
- Kubernetes RBAC (Authorization/AuthZ): These are the electronic locks on specific rooms (like the server room). The locks do not care about your original ID card; they only look at the color of your badge (your mapped Kubernetes group) to decide if the door opens.
Quick Reference
| Concept / Feature | Handled By | Purpose |
| --- | --- | --- |
| Authentication (AuthN) | AWS IAM | Proves your identity. Defines the user or machine role. |
| Authorization (AuthZ) | Kubernetes RBAC | Dictates your permissions (e.g., read pods, create deployments). |
| Token Generation | AWS CLI / Authenticator | Creates a temporary token to talk to the EKS API. |
| Identity Mapping (Legacy) | aws-auth ConfigMap | The legacy way to map IAM users/roles to Kubernetes users/groups. |
| Identity Mapping (Modern) | EKS Access API | The modern way to map IAM to K8s directly via AWS API. |
| Workload Identity (Legacy) | IRSA | IAM Roles for Service Accounts (using OIDC) to give pods AWS permissions. |
| Workload Identity (Modern) | EKS Pod Identity | The simplified agent-based way to grant AWS permissions to your applications. |
User to Cluster Access (Inbound): Authentication, Mapping, and Authorization
When you execute a command like kubectl get pods against an Amazon Elastic Kubernetes Service (EKS) cluster, you are initiating a sophisticated, multi-stage security handshake. Kubernetes natively lacks a built-in user identity database; it does not know what a password is or how to store user credentials. Instead, EKS operates on a strictly decoupled security model: it delegates Authentication (AuthN) to AWS Identity and Access Management (IAM) and relies on native Kubernetes Role-Based Access Control (RBAC) for Authorization (AuthZ).
Understanding the exact mechanics of this flow, especially the transition from the legacy aws-auth ConfigMap to the modern EKS Access Management API, is critical for securing modern cloud-native environments.
1. The Authentication Handshake (AuthN): Cryptographic Mechanics
The process of proving your identity to the cluster relies on a short-lived, cryptographically signed token generated on your local machine and validated by the EKS Control Plane.
Here is the exact step-by-step technical workflow:
Step 1: The Request & Token Generation: When you execute a kubectl command, the executable reads your kubeconfig file. In an EKS environment, this file is typically configured with an exec plugin that invokes the aws eks get-token command (or the aws-iam-authenticator tool).
- The AWS CLI uses your local AWS credentials to construct a pre-signed URL for the AWS Security Token Service (STS) `GetCallerIdentity` API action.
- This pre-signed URL contains cryptographic signatures proving you hold the private keys for the IAM identity, but it does not transmit the keys themselves.
- To make this URL compatible with Kubernetes token authentication, it is Base64-URL encoded and prefixed with the string `k8s-aws-v1.`. The resulting string becomes your Bearer token.
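The encoding in Step 1 can be reproduced locally as a sketch. The pre-signed URL below is a hypothetical placeholder, not a real signed request; in practice the AWS CLI's SigV4 signer produces it.

```shell
# Hypothetical pre-signed STS URL (a real one carries SigV4 signature parameters)
PRESIGNED_URL='https://sts.us-east-1.amazonaws.com/?Action=GetCallerIdentity&X-Amz-Expires=60'

# Base64-URL encode: strip newlines and padding, swap '+' -> '-' and '/' -> '_'
ENCODED=$(printf '%s' "$PRESIGNED_URL" | base64 | tr -d '\n=' | tr '+/' '-_')

# Prefix with the scheme identifier to form the Bearer token
TOKEN="k8s-aws-v1.${ENCODED}"
echo "$TOKEN"
```

In a real environment, `aws eks get-token --cluster-name <name>` performs the signing and emits this token inside an ExecCredential JSON document that `kubectl` consumes via the kubeconfig `exec` plugin.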
Step 2: Transmission: The kubectl client injects this Bearer token into the standard HTTP Authorization header and transmits the API request (e.g., GET /api/v1/namespaces/default/pods) to the EKS API Server over TLS.
Step 3: Webhook Interception & Validation: The Kubernetes API Server receives the request and extracts the token. Because the API Server is configured with a Webhook Token Authenticator, it forwards the token to a dedicated authentication webhook running inside the EKS Control Plane.
- The webhook removes the `k8s-aws-v1.` prefix and decodes the Base64 payload back into the pre-signed STS URL.
- Crucially, the webhook does not just blindly trust the URL. It executes the HTTP `GET` request against the regional AWS STS endpoint.
Step 4: STS Confirmation: AWS STS receives the pre-signed request, validates the cryptographic signature against its own IAM database, and verifies that the URL has not expired (these tokens typically expire in 15 minutes).
- If valid, STS returns an HTTP 200 OK response containing your AWS Account ID, IAM User ID, and the full IAM Amazon Resource Name (ARN) (e.g., `arn:aws:iam::123456789012:user/Alice` or `arn:aws:iam::123456789012:role/DevOpsRole`).
2. Identity Translation: Bridging AWS and Kubernetes
At this point, the EKS Control Plane knows your AWS identity, but Kubernetes RBAC has no concept of an “IAM ARN.” The cluster must translate this IAM ARN into a Kubernetes identity (a username and a set of groups).
EKS supports two distinct mechanisms for this mapping. While both can coexist during a migration phase, EKS evaluates the modern Access Entries first.
The Legacy Mechanism: aws-auth ConfigMap
For years, the standard method for mapping identities was modifying a Kubernetes ConfigMap named aws-auth located in the kube-system namespace.
- How it works: It utilizes YAML arrays (`mapRoles` and `mapUsers`) to statically link IAM ARNs to Kubernetes usernames and groups (like `system:masters`).
- The Drawbacks: This approach is inherently flawed for enterprise DevSecOps. It requires you to already have administrative access to the cluster to grant access to others. Furthermore, modifying YAML via `kubectl` is highly prone to syntax errors. A single misplaced space or indentation error in the `aws-auth` ConfigMap can instantly sever access for all users, resulting in a catastrophic cluster lockout. It also suffers from configuration drift when managed via Infrastructure as Code (IaC).
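For reference, a minimal `aws-auth` mapping sketch is shown below. All ARNs, usernames, and groups are hypothetical placeholders; the manifest is written to a file rather than applied.

```shell
# Sketch of a legacy aws-auth ConfigMap (hypothetical ARNs and groups)
cat > aws-auth.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/EKSNodeInstanceRole
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::123456789012:user/Alice
      username: alice
      groups:
        - dev-team
EOF

# Against a real cluster this would be applied with:
#   kubectl apply -f aws-auth.yaml
```

Note how the node role mapping carries the `system:nodes` group: this is exactly the indentation-sensitive YAML where a single misplaced space can lock everyone out.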
Lab: Test your `aws-auth` ConfigMap knowledge in your local sandbox environment.
The Modern Standard: EKS Access Management API
To solve the brittleness of the ConfigMap, AWS introduced the EKS Access Management API. This shifts the identity mapping entirely out of the Kubernetes data plane and into the AWS Control Plane.
- How it works: Cluster administrators create Access Entries directly via the AWS API, AWS CLI, AWS Console, or Terraform. An Access Entry acts as a direct, managed link between an IAM Principal and a Kubernetes identity.
- AWS-Managed Policies: Alongside custom RBAC group mappings, you can attach AWS-managed Access Policies (such as `AmazonEKSClusterAdminPolicy`, `AmazonEKSEditPolicy`, or `AmazonEKSViewPolicy`) directly to an Access Entry. This completely bypasses the need to manually write Kubernetes RBAC YAML for standard cluster roles.
- The Benefits: This mechanism standardizes access control. Because it uses AWS APIs, inputs are strictly validated, making syntax-based cluster lockouts virtually impossible. It also centralizes audit logging in AWS CloudTrail and allows you to manage EKS access exactly like you manage access to an S3 bucket or an EC2 instance.
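As a sketch, assuming a cluster named `my-cluster` and a hypothetical `DevOpsRole`, the Access Entry workflow uses two AWS CLI calls. They are wrapped in a function here so the snippet can be reviewed without live credentials; invoke `grant_view_access` only against a real cluster.

```shell
# Create an access entry for an IAM role, then attach the AWS-managed
# view-only policy scoped to a single namespace. Requires AWS credentials.
grant_view_access() {
  aws eks create-access-entry \
    --cluster-name my-cluster \
    --principal-arn arn:aws:iam::123456789012:role/DevOpsRole

  aws eks associate-access-policy \
    --cluster-name my-cluster \
    --principal-arn arn:aws:iam::123456789012:role/DevOpsRole \
    --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
    --access-scope type=namespace,namespaces=dev
}
```

Because both calls are plain AWS API operations, they are validated server-side and logged in CloudTrail, unlike a hand-edited ConfigMap.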
Lab: Test the EKS Access Management API functionality using the AWS CLI or Console.
Crucial Rules of EKS Identity Mapping
- Rule #1: AWS IAM does NOT grant in-cluster permissions. Attaching an `AdministratorAccess` IAM policy to an AWS user does not make them a Kubernetes administrator. You must explicitly map them inside EKS.
- Rule #2: The Cluster Creator Exemption. The IAM identity that initially provisions the EKS cluster is automatically granted `system:masters` privileges at the control plane level. This is invisible to standard configuration outputs.
- Rule #3: The `system:nodes` requirement. EKS worker nodes also use this mapping process! The IAM Role attached to the EC2 instances must be mapped to the `system:nodes` group so the kubelets can register themselves with the control plane.
- Rule #4: Canonical ARN Matching. The legacy `aws-auth` ConfigMap requires exact string matching. Always map the canonical ARN returned by `aws sts get-caller-identity`.
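Rule #4 trips up many teams because, when you are operating under an assumed role, `aws sts get-caller-identity` reports an STS `assumed-role` ARN, while the `aws-auth` mapping needs the underlying IAM role ARN. A shell sketch of the rewrite (ARN values are hypothetical):

```shell
# What STS reports for an assumed-role session (hypothetical example)
STS_ARN='arn:aws:sts::123456789012:assumed-role/DevOpsRole/alice-session'

# The canonical IAM role ARN that the aws-auth ConfigMap must contain:
# drop the session name and switch the service/resource type back to iam/role
CANONICAL=$(printf '%s' "$STS_ARN" \
  | sed -E 's|^arn:aws:sts::([0-9]+):assumed-role/([^/]+)/.*$|arn:aws:iam::\1:role/\2|')
echo "$CANONICAL"   # arn:aws:iam::123456789012:role/DevOpsRole
```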
3. Authorization Execution: Kubernetes RBAC
Once the Identity Translation phase provides the EKS API Server with your Kubernetes username and assigned groups, the request officially enters the native Kubernetes Authorization module.
AWS IAM’s job is completely finished. AWS IAM cannot dictate whether you are allowed to delete a Pod in a specific namespace. That is strictly the domain of Kubernetes RBAC.
The API Server evaluates your requested action against the configured RBAC objects:
- The Request Parameters: The system identifies the action you are trying to perform. It breaks it down into a Verb (e.g., `get`, `list`, `create`, `delete`), an API Group (e.g., `apps`), and a Resource (e.g., `deployments`).
- Role and ClusterRole Evaluation: The system searches for `Roles` (namespace-scoped permissions) or `ClusterRoles` (cluster-wide permissions) that match the requested Verb, API Group, and Resource.
- Binding Evaluation: The system then checks `RoleBindings` and `ClusterRoleBindings` to see if your mapped Kubernetes `username` or `groups` are attached to any of those valid Roles.
If a matching allowance rule is found, the authorization module approves the request, and the API Server proceeds to execute your command (after passing it through any configured Admission Controllers). If no matching rule exists, the API Server immediately returns an HTTP 403 Forbidden response.
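As an illustration of the objects the authorizer evaluates, here is a minimal namespace-scoped Role and a RoleBinding for a hypothetical mapped group `dev-team` (all names are placeholders; the manifest is written to a file for review):

```shell
# Sketch: read-only pod access in the "dev" namespace for group "dev-team"
cat > pod-reader.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
  - apiGroups: [""]          # core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: dev
subjects:
  - kind: Group
    name: dev-team           # must match the group from identity mapping
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF

# Against a real cluster: kubectl apply -f pod-reader.yaml
```

A user mapped to `dev-team` could then `kubectl get pods -n dev`, but a `kubectl delete pod` in that namespace would return 403 Forbidden, since `delete` is not in the Role's verbs.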
Cluster to AWS Access (Outbound): How Your Apps Get Out (Workload Identity)
Securing Pod Access to AWS Services
While understanding how users authenticate into an Amazon EKS cluster is critical, modern DevSecOps architectures require an equally robust strategy for Workload Identity. When applications running within EKS pods need to interact with external AWS services (such as a Python application using boto3 to upload files to an S3 bucket, read secrets from AWS Secrets Manager, or write records to DynamoDB), they require valid, securely scoped AWS credentials.
Currently, two primary architectures dictate how pods securely obtain AWS credentials: the legacy IAM Roles for Service Accounts (IRSA) and the modern Amazon EKS Pod Identity.
1. The Legacy Architecture: IAM Roles for Service Accounts (IRSA)
Introduced in 2019, IRSA was the first secure mechanism to grant fine-grained IAM roles at the pod level. It relies heavily on OpenID Connect (OIDC) federation.
How IRSA Works: The Cryptographic Handshake
- OIDC Provider Configuration: The EKS cluster is configured with a public OIDC discovery endpoint. You must create an IAM OIDC Identity Provider in your AWS account that trusts this cluster endpoint.
- Kubernetes ServiceAccount Annotation: You create a Kubernetes
ServiceAccountand annotate it with the target IAM Role ARN (e.g.,eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/s3-reader-role). - Mutating Admission Webhook: When a pod is scheduled using this
ServiceAccount, an EKS-managed mutating admission webhook intercepts the pod creation request. - Environment Variable Injection: The webhook automatically injects two critical environment variables into the pod’s containers:
AWS_ROLE_ARNandAWS_WEB_IDENTITY_TOKEN_FILE. It also mounts a projected volume containing a cryptographically signed JSON Web Token (JWT) generated by the Kubernetes API server. - Token Exchange: When the AWS SDK (e.g., Python’s
boto3, Go V2) initializes inside the pod, it detects these environment variables. It reads the JWT from the mounted file and calls the AWS STSAssumeRoleWithWebIdentityAPI. - Validation: AWS STS validates the JWT against the cluster’s OIDC provider. If the signature matches and the IAM Role’s trust policy allows it, STS returns short-lived AWS credentials to the pod.
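The annotation in step 2 looks like this in practice; the account ID, role name, and namespace below are hypothetical placeholders, and the manifest is only written to a file here:

```shell
# Sketch of an IRSA-annotated ServiceAccount
cat > irsa-sa.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-reader
  namespace: app
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/s3-reader-role
EOF

# Pods declaring serviceAccountName: s3-reader then receive AWS_ROLE_ARN and
# AWS_WEB_IDENTITY_TOKEN_FILE from the EKS mutating webhook automatically.
```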
DevSecOps Pain Points with IRSA
While secure, IRSA introduces significant operational overhead. It requires managing OIDC thumbprints, which occasionally rotate and can break automation if not carefully monitored. Furthermore, the IAM Trust Policies required for IRSA are notoriously complex, requiring strict StringEquals conditions matching the exact Kubernetes namespace and ServiceAccount name, making Infrastructure as Code (IaC) configuration cumbersome.
2. The Modern Architecture: Amazon EKS Pod Identity
To eliminate the operational friction of OIDC federation, AWS released EKS Pod Identity. This is now the recommended standard for workload identity, utilizing a simplified, agent-based architecture that shifts the complexity entirely to the AWS Control Plane.
How EKS Pod Identity Works
- The DaemonSet Agent: You deploy the Amazon EKS Pod Identity Agent as a DaemonSet across your worker nodes.
- API Associations: Instead of annotating Kubernetes
ServiceAccounts, you use the AWS EKS API (or Terraform) to create a direct “Pod Identity Association.” This maps an IAM Role directly to a Kubernetes Namespace and ServiceAccount pair. - IMDS Interception: When the AWS SDK inside the pod attempts to retrieve credentials, it defaults to querying the EC2 Instance Metadata Service (IMDS) via
169.254.169.254. - Local Provisioning: The EKS Pod Identity Agent running on the node intercepts this IMDS network call. The agent verifies the pod’s identity with the local
kubelet, then securely requests temporary credentials from the EKS Auth API (a specialized control plane service). - Seamless Delivery: The agent delivers these temporary STS credentials back to the pod. The application remains completely unaware that the credentials were mathematically scoped to its specific ServiceAccount rather than the underlying EC2 node.
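A sketch of the IAM trust policy Pod Identity relies on, plus the single association call. The cluster, namespace, ServiceAccount, and role names are hypothetical; the policy is only written to a local file here.

```shell
# IAM trust policy: the role trusts the Pod Identity service principal
cat > pod-identity-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "pods.eks.amazonaws.com" },
      "Action": ["sts:AssumeRole", "sts:TagSession"]
    }
  ]
}
EOF

# With credentials in place, the mapping itself is one AWS API call:
#   aws eks create-pod-identity-association \
#     --cluster-name my-cluster --namespace app \
#     --service-account s3-reader \
#     --role-arn arn:aws:iam::111122223333:role/s3-reader-role
```

Compare this flat trust policy to the IRSA equivalent, which must pin the cluster's OIDC provider ARN and carry `StringEquals` conditions on the namespace and ServiceAccount name.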
Architecture Comparison Summary
| Feature / Mechanism | IAM Roles for Service Accounts (IRSA) | EKS Pod Identity (Modern Standard) |
| --- | --- | --- |
| Core Technology | OIDC Federation & Mutating Webhook | DaemonSet Agent & EKS Auth API |
| Credential Retrieval | STS AssumeRoleWithWebIdentity | IMDS Interception by local Node Agent |
| IAM Trust Policy Principal | The specific EKS Cluster’s OIDC Provider | pods.eks.amazonaws.com |
| Cluster Dependency | Requires public OIDC endpoint creation | No external OIDC dependencies |
| Best Used For | Legacy clusters, or multi-cloud workloads | All modern Amazon EKS deployments |
The DevSecOps Advantage & Architect Considerations
By migrating from IRSA to EKS Pod Identity, platform teams can eliminate brittle IAM configurations, reduce the surface area for configuration drift, and establish a cleaner, more resilient DevSecOps pipeline for machine identities.
- No OIDC Management: You completely bypass OIDC discovery endpoints and thumbprint management.
- Simplified IAM Trust Policies: The IAM Role simply needs to trust the principal `pods.eks.amazonaws.com`.
- Streamlined IaC: Managing these mappings in Terraform is highly efficient, using the `aws_eks_pod_identity_association` resource without needing to parse complex JSON trust relationships.
- Webhook Latency Mitigation: STS token validation introduces a slight latency to API calls. Caching mechanisms within the authenticator mitigate this for subsequent requests from the same user.
- Impersonation Risks: If a user can modify the `aws-auth` ConfigMap or EKS Access Entries, they can escalate their privileges by mapping their own IAM user to the `system:masters` group. Access to these mechanisms must be strictly audited.
DevSecOps Architect Level Best Practices
At a production-grade DevSecOps level, you must automate and secure this pipeline using Infrastructure as Code (IaC) and follow the principle of least privilege.
- Eliminate Direct IAM User Access via Role Assumption: Never map individual IAM Users directly into the cluster or use them for daily operations. Map IAM Roles to EKS instead, and enforce engineers to assume those roles via AWS SSO/Identity Center.
- Deprecate `aws-auth` with IaC (Shift-Left Security): Actively migrate existing clusters away from the `aws-auth` ConfigMap. Never hardcode IAM ARNs in manual edits. Use Terraform (the `aws_eks_access_entry` and `aws_eks_access_policy_association` resources) to map IAM roles to EKS seamlessly and prevent configuration drift.
- Granular RBAC Over `system:masters` (Least Privilege): Avoid the `AmazonEKSClusterAdminPolicy` and mapping roles to the `system:masters` group unless absolutely necessary for break-glass administration, as it bypasses all RBAC checks. Instead, create granular Kubernetes `Roles` and map users to restricted namespaces using `RoleBindings`.
- Mandate Modern Machine Identities: Mandate the Amazon EKS Pod Identity Agent for new workloads instead of older IRSA methods to reduce IAM trust policy complexity and OIDC management overhead.
- Unified Auditing & Compliance: Enable EKS Control Plane Logging (specifically the `authenticator` and `audit` log streams) and route them to Amazon CloudWatch. Correlate these logs with AWS CloudTrail to actively monitor modifications to EKS Access Entries and maintain a complete chain of custody.
- Integrate Security Tooling:
- Checkov by Prisma Cloud: A static analysis tool to scan Terraform code and ensure you aren’t over-provisioning EKS Access Entries.
- Teleport / HashiCorp Boundary: For highly secure environments, consider bypassing direct IAM-to-EKS mapping entirely using an identity-aware proxy for short-lived, Just-In-Time (JIT) Kubernetes access.
- OIDC Federation: Modern clusters often integrate external Identity Providers (Okta, Azure AD, Google Workspace) directly into Kubernetes, mapping SSO groups directly to K8s RBAC.
Tooling Integration:
- AWS IAM Authenticator: The core engine behind the mapping.
- Cross-Account Access: You can map IAM roles from different AWS accounts into a single EKS cluster, provided the cluster’s IAM role trusts the external account (heavily used in Hub-and-Spoke architectures).
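Tying the IaC recommendations together, a minimal Terraform sketch using the `aws_eks_access_entry` and `aws_eks_access_policy_association` resources might look like the following (cluster name and ARNs are hypothetical; the config is only written to a file for review, not applied):

```shell
cat > access.tf <<'EOF'
# Map a hypothetical DevOpsRole into the cluster...
resource "aws_eks_access_entry" "devops" {
  cluster_name  = "my-cluster"
  principal_arn = "arn:aws:iam::123456789012:role/DevOpsRole"
}

# ...and scope it to read-only access in the "dev" namespace.
resource "aws_eks_access_policy_association" "devops_view" {
  cluster_name  = "my-cluster"
  principal_arn = aws_eks_access_entry.devops.principal_arn
  policy_arn    = "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy"

  access_scope {
    type       = "namespace"
    namespaces = ["dev"]
  }
}
EOF
```

Because the mapping lives in Terraform state rather than a hand-edited ConfigMap, drift is detected on every `terraform plan`.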
Additional Details
- Key Components
- AWS STS: Security Token Service, validates the identity.
- EKS API Server: The Kubernetes control plane entry point.
- Access Entries: AWS-native mapping for IAM to K8s.
- RBAC (Role / RoleBinding): K8s native authorization objects.
- Key Characteristics
- Decoupled: AuthN (AWS) and AuthZ (K8s) are handled by separate systems.
- Short-lived: STS tokens typically expire in 15 minutes, ensuring high security.
- API-Driven: With modern Access Entries, identity mappings can be managed via AWS APIs without direct cluster access.
- Use Case
- Granting a developer read-only access to a specific Kubernetes namespace to view application logs without giving them access to production secrets or other namespaces.
- Allowing a CI/CD pipeline (like GitHub Actions via OIDC) to assume an IAM role and deploy Helm charts into the cluster.
- Benefits
- Centralized Identity: No need to create separate user accounts inside Kubernetes; you leverage your existing AWS IAM infrastructure.
- Enhanced Security: No static passwords or long-lived credentials stored in kubeconfig files.
- Compliance: Easy tracking of who did what using AWS CloudTrail and K8s Audit Logs.
- Best Practices
- Never use IAM Users directly for daily operations; map IAM Roles to EKS, and have users assume those roles.
- Adopt the EKS Access API and migrate away from the `aws-auth` ConfigMap.
- Create granular K8s Roles bound to specific namespaces rather than using ClusterRoles.
- Regularly rotate the IAM keys of any service accounts used in external CI/CD systems (or better yet, use OIDC federation).
- Technical Challenges
- Troubleshooting `Unauthorized` errors can be tricky because the failure could be at the AWS CLI token generation, the IAM mapping, or the RBAC permissions.
- Managing state drift if some engineers manually edit the `aws-auth` ConfigMap while Terraform expects a different state.
- Limitations
- IAM policies cannot directly restrict access to specific Kubernetes resources (e.g., you cannot write an AWS IAM policy to restrict access to a K8s namespace; IAM only gets you into the cluster, RBAC must do the rest).
- Common Issues
- “You must be logged in to the server (Unauthorized)”: Usually means your AWS token expired, your AWS CLI is using the wrong profile, or the IAM entity is not mapped in EKS.
- “Forbidden: User cannot list resource in API group”: Means authentication succeeded, but K8s RBAC is blocking you. You need a RoleBinding.
- Problems and Solutions
- Problem: Accidentally deleting the `aws-auth` ConfigMap and locking everyone out.
- Solution: As long as you have access to the IAM Role that created the cluster, you can still log in and fix it. Better yet, switch to EKS Access Entries, where AWS APIs prevent accidental lockout of cluster administrators.
Conclusion
Mastering EKS Authentication and Access Control bridges the gap between AWS cloud security and Kubernetes native security. By understanding the flow from IAM STS tokens to Kubernetes RBAC, and adopting the modern EKS Access Entries API, you are setting up a robust, scalable, and highly secure platform. Keep practicing, keep automating, and stay secure!