In our EKS Kubernetes cluster we want multiple AWS users to be able to use the kubectl command to examine resources and, for now, even have admin access for a few select groups.
The way I've always done this in the past is to create a new stanza in the aws-auth configMap in the kube-system namespace. This is how AWS tells you to do it in their documentation.
The problem with this is that you are modifying an obscure file, and most people administering AWS can't really see it. Today I've been trying out roles instead to see if I can get better results. I used some documentation on nextlinklabs.com, but most of the commands didn't work for me, so I figure they wouldn't work for most people either. So here goes.
1. Create a new role
This role will be the role people will assume when they want to access Kubernetes. My role looks as follows:
{
  "Role": {
    "Path": "/",
    "RoleName": "eks-admin-role",
    "RoleId": "XXX",
    "Arn": "arn:aws:iam::XXXXXX:role/eks-admin-role",
    "CreateDate": "2021-08-17T01:00:58+00:00",
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": [
              "arn:aws:iam::XXXX:user/vallard",
              "arn:aws:iam::XXXX:user/test"
            ]
          },
          "Action": "sts:AssumeRole",
          "Condition": {}
        }
      ]
    },
    "MaxSessionDuration": 3600,
    "RoleLastUsed": {
      "LastUsedDate": "2021-08-17T19:30:56+00:00",
      "Region": "us-west-2"
    }
  }
}
Notice that I need to list in this role's trust policy all the users I would otherwise have put in the aws-auth configMap. I was hoping I could just list the groups, but unless I use something a little fancier than AWS IAM user groups, that isn't possible.
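For reference, the role could be created from the command line with something like the following sketch. The account IDs and user names are placeholders, and the trust policy file is just the AssumeRolePolicyDocument shown above.
# Trust policy matching the AssumeRolePolicyDocument above (placeholders for account IDs)
cat > eks-admin-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::XXXX:user/vallard",
          "arn:aws:iam::XXXX:user/test"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role with that trust policy
aws iam create-role \
  --role-name eks-admin-role \
  --assume-role-policy-document file://eks-admin-trust-policy.json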
2. Allow Users to Assume the Role
I created another policy that I attach to the users so they can actually assume the role. It looks as follows:
{
  "Policy": {
    "PolicyName": "eks-admin-assume-role-policy",
    "PolicyId": "XXXX",
    "Arn": "arn:aws:iam::XXXX:policy/eks-admin-assume-role-policy",
    "Path": "/",
    "DefaultVersionId": "v1",
    "AttachmentCount": 0,
    "PermissionsBoundaryUsageCount": 0,
    "IsAttachable": true,
    "CreateDate": "2021-08-17T01:10:21+00:00",
    "UpdateDate": "2021-08-17T01:10:21+00:00",
    "Tags": []
  }
}
With the permissions set as:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAssumeOrganizationAccountRole",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::XXXX:role/eks-admin-role"
    }
  ]
}
Now we attach this policy to a group that we want to have EKS access and assign the users to that group.
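From the CLI that could look roughly like this sketch. The group name eks-admins is just an example, and the policy document file contains the permissions shown above.
# Create the assume-role policy from the permissions document shown above
aws iam create-policy \
  --policy-name eks-admin-assume-role-policy \
  --policy-document file://eks-admin-assume-role-policy.json

# Attach it to a group (group name is an example) and add a user to that group
aws iam attach-group-policy \
  --group-name eks-admins \
  --policy-arn arn:aws:iam::XXXX:policy/eks-admin-assume-role-policy

aws iam add-user-to-group \
  --group-name eks-admins \
  --user-name vallard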
3. Update Kubernetes aws-auth
Now we need to add this role to the list of roles in the aws-auth configMap. This is done with:
kubectl edit cm/aws-auth -n kube-system
And we simply add our new role to this. It now looks as follows:
apiVersion: v1
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::XXXX:role/wg_eks_node_role_stage
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - system:masters
      rolearn: arn:aws:iam::XXXX:role/eks-admin-role
      username: eks-admin
kind: ConfigMap
metadata:
  creationTimestamp: "2020-04-16T17:44:52Z"
  name: aws-auth
  namespace: kube-system
Now this new role has access to the Kubernetes cluster via the system:masters group, meaning it can do everything in Kubernetes.
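To sanity-check the mapping, a quick sketch (with placeholder ARNs) is to assume the role and ask Kubernetes what you're allowed to do:
# Assume the new role (ARN and session name are placeholders)
aws sts assume-role \
  --role-arn arn:aws:iam::XXXX:role/eks-admin-role \
  --role-session-name eks-admin-check

# Export the returned AccessKeyId, SecretAccessKey, and SessionToken, then:
kubectl auth can-i '*' '*'   # should print "yes" for system:masters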
4. Fix ~/.kube/config file to use role
Lastly, we add some arguments to make our kubectl commands work correctly.
- name: arn:aws:eks:us-west-2:XXXX:cluster/eks-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - us-west-2
      - eks
      - get-token
      - --cluster-name
      - eks-stage
      - --role
      - arn:aws:iam::XXXX:role/eks-admin-role
      command: aws
      env:
      - name: AWS_PROFILE
        value: testro
      provideClusterInfo: false
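As an aside, a similar stanza can be generated rather than hand-edited; this is a sketch assuming the cluster is named eks-stage in us-west-2:
# Generate/refresh a kubeconfig entry that authenticates through the role
aws eks update-kubeconfig \
  --name eks-stage \
  --region us-west-2 \
  --role-arn arn:aws:iam::XXXX:role/eks-admin-role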
This setup seems to work, but to throw one more wrench into the works, our users must use MFA when connecting to the cluster from the CLI. To make this happen, I have to run a special script that first gets my CLI session authorized with MFA. Then I can finally run kubectl commands!
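That script isn't shown here, but the core of it is something like this sketch using aws sts get-session-token; the MFA device ARN, token code, and profile names are placeholders.
# Get temporary credentials backed by an MFA code (serial number is a placeholder)
aws sts get-session-token \
  --serial-number arn:aws:iam::XXXX:mfa/vallard \
  --token-code 123456

# Write the returned AccessKeyId, SecretAccessKey, and SessionToken into the
# "testro" profile in ~/.aws/credentials; the exec plugin above (AWS_PROFILE=testro)
# then runs "aws eks get-token --role ..." with the MFA requirement already satisfied.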