If you're running your applications on AWS in EKS (Amazon's managed Kubernetes service), chances are the apps in the EKS cluster need to access AWS resources. For example, your app might take backups and upload them to S3. Or you might run your CI/CD toolchain in EKS, which then interacts with AWS services to launch EC2 nodes, deploy code, manage infrastructure, and so on.
To grant access to apps running in EKS, the first thing that comes to mind is probably access keys, since they have been around for so long that it's hard to get them out of our heads. However, we should never put access keys in our apps: by definition, they are long-term credentials, and that is a security risk.
So, we need to access AWS securely from an EKS cluster. Read on.
1 Access AWS from Local Machines Securely
Before we do anything, first things first: since we'll use the AWS CLI to create resources, we should make sure we're using the most secure AWS CLI authentication option.
Many years ago, access keys were the most intuitive way to configure AWS CLI access, and here's how we used to do it: create an IAM user, create an access key, and set the Access Key ID/Secret Access Key in the profile.
As of 2025, never do this. Based on the description, you already know where it could go wrong, and you are right: This approach uses long-lived keys, which brings security risks and does not align with security best practices in the modern DevSecOps era, especially when working with a production environment.
So, instead, use short-term credentials, such as AWS IAM Identity Center, whenever possible (and it is possible in almost all situations). Furthermore, the duration of a short-term token from IAM Identity Center can be configured (usually in hours), meaning that even if the token is compromised, the blast radius is limited. For more information, see the official documentation here.
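If you haven't set this up yet, here's a rough sketch of what it looks like with the AWS CLI (the profile name below is a made-up placeholder; the setup prompts will ask for your Identity Center start URL and region):
# One-time setup of an SSO-backed profile (interactive prompts follow)
aws configure sso --profile my-sso-profile
# Log in; this opens a browser and issues short-lived credentials
aws sso login --profile my-sso-profile
# Use the profile as usual; no long-lived keys are stored on disk
aws sts get-caller-identity --profile my-sso-profile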
2 Creating an AWS EKS Cluster
If you don't have an EKS cluster, you can create one using eksctl, which is a CLI tool for managing EKS clusters.
For example, if you are using macOS, install eksctl with brew:
brew tap weaveworks/tap
brew install weaveworks/tap/eksctl
For users of other operating systems, refer to the official documentation here.
Prepare a config file named config.yaml with the following content:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: test
  region: ap-southeast-1
nodeGroups:
  - name: ng-1
    instanceType: m5.large
    desiredCapacity: 2
Run:
eksctl create cluster -f config.yaml
And voila, we have a K8s cluster named "test" (adjust accordingly), and even our kubeconfig is already automatically updated.
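As a quick sanity check (optional), we can confirm the cluster and nodes are up:
# Should list the two m5.large nodes from the node group
kubectl get nodes
# Should show the "test" cluster
eksctl get cluster --region ap-southeast-1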
3 EKS Pod Identity
We've established that using long-term credentials to access AWS is bad, and since we wouldn't do it even on local machines, we surely shouldn't do it in Pods. This means we've got to use short-lived tokens, and the obvious answer is "IAM roles for service accounts": create an IAM role with just the right amount of permissions, associate the role with a service account, then use the service account in our Pods so that the Pods can access AWS without long-lived credentials.
Previously, we would achieve this by using OpenID Connect (OIDC) for EKS, which does exactly what's described above. But OIDC comes with operational overhead:
- In many companies, creating OIDC providers is a responsibility of a different team than the one managing the EKS clusters.
- If we have multiple EKS clusters, we'd have to create one IAM role per cluster, meaning either there are duplicated roles, or we have to update the role's trust policy every time we add a new cluster. This approach is hard to scale.
Now, there is a better option: EKS Pod Identity Association.
Under the hood, Pod Identity Association still works using IAM roles for service accounts, as described above. But with Pod Identity Association, IAM roles are no longer tied to a single cluster's OIDC provider; we don't even need OIDC providers anymore. A role can be used across multiple EKS clusters without updating trust policies each time a new cluster is created. This reduces operational overhead, and less overhead is always a good thing.
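To make this more concrete: such a role's trust policy trusts the EKS Pod Identity service principal rather than a per-cluster OIDC provider, which is why one role can serve many clusters. Here's a rough sketch of creating such a role (the role name is a made-up placeholder, and permission policies are still attached separately):
# Sketch only: a Pod Identity role trusts pods.eks.amazonaws.com,
# not a cluster-specific OIDC provider
aws iam create-role \
  --role-name my-pod-identity-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "pods.eks.amazonaws.com" },
      "Action": ["sts:AssumeRole", "sts:TagSession"]
    }]
  }'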
4 Using EKS Pod Identity Association
First, we need to ensure the eks-pod-identity-agent addon is enabled in our EKS cluster.
If you followed the previous section to create the cluster, all we need to do now is run:
eksctl create addon --cluster test --name eks-pod-identity-agent
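To confirm the agent is actually running, we can check its DaemonSet (assuming the default name used by the managed addon):
# The agent runs as a DaemonSet in kube-system, one pod per node
kubectl get daemonset eks-pod-identity-agent -n kube-system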
If you are using EKS Auto Mode, you can skip the addon creation command, as Auto Mode will have preinstalled the agent already. To use EKS Auto Mode, set autoModeConfig.enabled to true in the cluster config when creating the cluster. Read more about using EKS Auto Mode here.
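For reference, in the eksctl cluster config that looks roughly like this (a sketch; check the eksctl schema for the exact fields and how Auto Mode interacts with node groups):
# ... same metadata as in section 2
autoModeConfig:
  enabled: true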
Once the eks-pod-identity-agent addon is enabled, we can create a podidentityassociation with eksctl. To do so, first, add the following section into the cluster's config.yaml:
# ... omitted, same as in section 2
iam:
  podIdentityAssociations:
    - namespace: default
      serviceAccountName: s3-reader
      createServiceAccount: true
      permissionPolicyARNs: ["arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"]
Then, run eksctl create podidentityassociation -f config.yaml.
This handles the IAM role, the service account, and everything else automatically for you; the service account will be named "s3-reader", in the "default" namespace, with S3 read-only access.
Alternatively, we can use the eksctl CLI to achieve the same:
eksctl create podidentityassociation \
  --cluster test \
  --namespace default \
  --service-account-name s3-reader \
  --permission-policy-arns="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
Now, if we associate the service account "s3-reader" in the "default" namespace with a Pod, that Pod automatically gets S3 read-only access. To verify this, create a file pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: awscli
  namespace: default
spec:
  serviceAccountName: s3-reader
  containers:
    - name: awscli-container
      image: amazon/aws-cli
      command: ["sleep", "3600"]
Apply it: kubectl apply -f pod.yaml.
Now comes the magic:
$ kubectl exec -it awscli -- aws s3 ls
2025-08-08 12:29:21 test-bucket-ironcore864
As we can see, the Pod now has S3 read access. Simple as that: no need to do anything on the IAM side, which is another plus for EKS Pod Identity Association, because everything can be achieved solely through the EKS API, with no need to call the IAM API directly.
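If you're curious how this works under the hood, the Pod Identity Agent exposes credentials to the Pod through the container credentials endpoint, and the AWS CLI/SDK picks them up via injected environment variables. You can peek at them like this (exact output may vary; this is just an illustration):
# Expect variables such as AWS_CONTAINER_CREDENTIALS_FULL_URI and
# AWS_CONTAINER_AUTHORIZATION_TOKEN_FILE pointing at the local agent
kubectl exec awscli -- env | grep AWS_CONTAINER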
5 ServiceAccount is a Dangerous Thing
So far, we have successfully gained access to AWS from a Pod in EKS without any long-lived token, and everything seems secure.
But is it?
There is a service account in the Pod, and if the Pod is compromised, hackers can do dangerous things. To simulate the situation, let's create an admin service account and associate it with a Pod. Create a test.yaml file with the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: super-admin-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: super-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin # Built-in cluster role with full permissions
subjects:
  - kind: ServiceAccount
    name: super-admin-sa
    namespace: default
---
apiVersion: v1
kind: Pod
metadata:
  name: super-admin-pod
spec:
  serviceAccountName: super-admin-sa
  containers:
    - name: busybox
      image: busybox:1.35
      command: ["sleep", "3600"]
      resources:
        requests:
          cpu: "10m"
          memory: "32Mi"
This creates an admin service account with the cluster admin role.
Deploy it: kubectl apply -f test.yaml.
If hackers manage to get into the Pod, they can get the service account token (and the API server's certificate), which is enough to gain access to the cluster. Let's simulate this:
# Get the service account token (from the pod)
kubectl exec super-admin-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/token > eks-token.txt
# Get the CA certificate
kubectl exec super-admin-pod -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt > eks-ca.crt
# Get API endpoint
export EKS_ENDPOINT=$(aws eks describe-cluster --name test --query "cluster.endpoint" --output text)
# With the service account token, we can do all sorts of things even if we are not in the cluster
curl --cacert eks-ca.crt \
-H "Authorization: Bearer $(cat eks-token.txt)" \
-H "Accept: application/json" \
"$EKS_ENDPOINT/api/v1/pods"
As we see, getting hold of the Pod/ServiceAccount is basically getting access to the whole cluster. This leads to the next section: K8s hardening.
6 Harden Your K8s Cluster
As demonstrated above, even with service accounts and short-lived tokens, it's not secure enough.
Luckily, there are many tools that we can use to harden our K8s clusters:
- The obvious improvement is to remove shell binaries from images. We could also disable kubectl exec in Kubernetes using RBAC, by binding subjects to a custom Role or ClusterRole that does not grant the create verb on the pods/exec resource (see the sketch after this list).
- On the service account token's side, we can configure a shorter expiration time, see more here.
- From a networking standpoint, we can enable only private access to the EKS API endpoint. With NetworkPolicies, we can also deny Pods' external (egress) access.
- We can also use Pod Security Standards (PSS) to enhance the security of our Pods, like using the "Restricted" profile, which enforces strong security best practices. Read Pod Security Standards and Apply Pod Security Standards at the Cluster Level.
- From a logging/monitoring standpoint, we can enable EKS API logging and audit logging, and make sure we constantly monitor and periodically audit them.
- Last but not least, there are many security tools for K8s (or, container runtime security platforms) that take K8s security to the next level, like the Falco project. For example, it can be configured to detect reads of service account token files.
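As an illustration of the first point about kubectl exec, here is a minimal sketch with made-up names (not from this article's setup). Since RBAC is additive, the way to "disable" exec is simply to bind workloads and users to roles that never grant pods/exec, instead of broad roles like cluster-admin:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader-no-exec
rules:
  # Read-only access to Pods and their logs; there is deliberately no rule
  # for pods/exec, so "kubectl exec" is denied for subjects bound to this role
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-reader-no-exec-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pod-reader-no-exec
subjects:
  - kind: ServiceAccount
    name: some-app-sa # hypothetical service account
    namespace: default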
If you are interested in K8s hardening, read the Threat Model and Guidelines and follow the tutorial for hardening Pods, Network, and Authn, Authz, Logging & Auditing.
