How to Spin Up an AWS EKS Cluster and Deploy Your Applications in It
This blog lays out the steps needed to spin up an AWS EKS cluster with EC2 nodes. It is aimed primarily at beginners and at small-scale proofs of concept; suggestions for improvement from expert practitioners are always welcome.
Although these steps are well laid out in the AWS documentation, a few nuances are not presented chronologically, and a beginner might find it difficult to get the right information in the right order. My attempt here is to bring that information into order, so that it can be followed to achieve the desired result without running into any strange errors.
Caution : Running EKS clusters along with the related services (especially the NAT Gateways and the EC2 instances used as nodes) will incur significant cost if the services are not terminated after use.
Pre-requisites
- An active AWS account.
- Access to a local machine, e.g. a laptop or Ubuntu/Linux VM (I used a MacBook Pro).
Set up AWS CLI
- For any system other than macOS, follow the instructions here.
- For macOS, you can directly download the official package from here and then follow the installation instructions by invoking the installer.
Set up kubectl
- Follow the instructions here. My suggestion would be to choose a version one step lower than the most recent one, so as to avoid any bugs that might show up in the latest release. Also keep in mind that kubectl is supported within one minor version (older or newer) of your cluster's Kubernetes version.
Setup an IAM Principal
- Create an IAM user in the AWS account. This is the IAM Principal that will be used to create the EKS Cluster. By default, only this user will have access to the EKS Cluster after it is commissioned.
- Configure the aws CLI with this user by following the instructions here.
- Verify the configuration by running the following command; it should show the IAM user’s ARN
aws sts get-caller-identity
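The output will look something like the following (with your own account ID and the ARN of the IAM user you configured; the values here are placeholders):
{
    "UserId": "AIDAEXAMPLEUSERID",
    "Account": "111122223333",
    "Arn": "arn:aws:iam::111122223333:user/your-iam-user"
}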
Step 1 : EKS Cluster creation
Details of the steps below are taken from here
- Create the VPC stack
I have used my own names; feel free to replace them with yours
aws cloudformation create-stack \
--region us-west-2 \
--stack-name orch-lord-eks-vpc-stack \
--template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml
Go to the CloudFormation console and check the status of your stack. Proceed to the next step only after the status turns to CREATE_COMPLETE
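If you prefer the CLI over the console, you can also poll the stack status with a command along these lines:
aws cloudformation describe-stacks \
--region us-west-2 \
--stack-name orch-lord-eks-vpc-stack \
--query "Stacks[0].StackStatus" \
--output text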
The next steps deal with some of the core concepts of AWS architecture; the details of those are beyond the scope of this blog. To learn more, I suggest you take an AWS Solutions Architect course or an equivalent course/book.
- Create a cluster IAM role and attach the required Amazon EKS IAM managed policy to it
Copy the following contents to a file named eks-cluster-role-trust-policy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
Create the role
aws iam create-role \
--role-name orchLordEKSClusterRole \
--assume-role-policy-document file://"eks-cluster-role-trust-policy.json"
Attach the required Amazon EKS managed IAM policy to the role
aws iam attach-role-policy \
--policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy \
--role-name orchLordEKSClusterRole
- Follow steps 3–8 as mentioned here.
Wait until the Cluster becomes Active
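You can also poll the cluster status from the CLI (assuming the cluster name used in this blog):
aws eks describe-cluster \
--name orch-lord-eks-cluster \
--query "cluster.status" \
--output text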
Step 2 : Configure your local system to communicate with your cluster
- Create or update a kubeconfig file for your cluster.
aws eks update-kubeconfig --region us-west-2 --name orch-lord-eks-cluster
- Test your configuration
kubectl get svc
The output should show the default kubernetes ClusterIP service.
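As an illustrative example (the cluster IP and age will differ in your cluster):
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   5m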
Step 3 : Create nodes
We will use AWS EC2 instances as nodes in this blog. Using Fargate nodes is tricky, especially when it comes to attaching storage to the Pods.
- Create a node IAM role and attach the required Amazon EKS IAM managed policy to it
Copy the following contents to a file named node-role-trust-policy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
Create the node IAM role
aws iam create-role \
--role-name orchLordEKSNodeRole \
--assume-role-policy-document file://"node-role-trust-policy.json"
Attach the required managed IAM policies to the role.
aws iam attach-role-policy \
--policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy \
--role-name orchLordEKSNodeRole
aws iam attach-role-policy \
--policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly \
--role-name orchLordEKSNodeRole
aws iam attach-role-policy \
--policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
--role-name orchLordEKSNodeRole
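You can verify that all three policies are attached to the node role with:
aws iam list-attached-role-policies --role-name orchLordEKSNodeRole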
- Follow steps 2–9 under Step 3 : Create Nodes (Managed Nodes — Linux tab)
Select t3.large as the Instance Type
Wait until the Node Group becomes Active
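Once the node group is active, you can confirm that the worker nodes have joined the cluster:
kubectl get nodes -o wide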
Step 4 : Creating an IAM OIDC provider for your cluster
- Copy the value of your cluster’s OpenID Connect provider URL. You can find it in the EKS console under your cluster’s details, or retrieve it with the aws eks describe-cluster command shown in Step 5 below
- Open the IAM Console here
- In the left navigation pane, choose Identity Providers under Access management
- To create a provider, choose Add provider
- For Provider type, select OpenID Connect
- For Provider URL, enter the OIDC provider URL for your cluster (the value you copied in the first step), and then choose Get thumbprint
- For Audience, enter sts.amazonaws.com and choose Add provider
Step 5 : Creating the Amazon EBS CSI driver IAM role
- View your cluster’s OIDC provider URL
aws eks describe-cluster \
--name orch-lord-eks-cluster \
--query "cluster.identity.oidc.issuer" \
--output text
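The output is the issuer URL of your cluster’s OIDC provider; it will look something like this (the ID at the end is a placeholder):
https://oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE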
- Create the IAM role
Copy the following contents to a file named aws-ebs-csi-driver-trust-policy.json. Replace 111122223333 with your account ID, region-code with your AWS Region, and EXAMPLED539D4633E53DE1B71EXAMPLE with the ID at the end of the OIDC issuer URL returned in the previous step. If your cluster is in the AWS GovCloud (US-East) or AWS GovCloud (US-West) AWS Regions, then replace arn:aws: with arn:aws-us-gov:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com",
"oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:kube-system:ebs-csi-controller-sa"
}
}
}
]
}
- Create the role
aws iam create-role \
--role-name OrchLordEKS_EBS_CSI_DriverRole \
--assume-role-policy-document file://"aws-ebs-csi-driver-trust-policy.json"
- Attach the required AWS managed policy to the role with the following command. If your cluster is in the AWS GovCloud (US-East) or AWS GovCloud (US-West) AWS Regions, then replace arn:aws: with arn:aws-us-gov:
aws iam attach-role-policy \
--policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
--role-name OrchLordEKS_EBS_CSI_DriverRole
- Annotate the ebs-csi-controller-sa Kubernetes service account with the ARN of the IAM role. Replace 111122223333 with your account ID. (The ebs-csi-controller-sa service account is created when the EBS CSI driver is installed; if it does not exist yet, run this command and the restart below after creating the add-on in Step 6. Creating the add-on with --service-account-role-arn also applies this annotation for you.)
kubectl annotate serviceaccount ebs-csi-controller-sa \
-n kube-system \
eks.amazonaws.com/role-arn=arn:aws:iam::111122223333:role/OrchLordEKS_EBS_CSI_DriverRole
- Restart the ebs-csi-controller deployment for the annotation to take effect
kubectl rollout restart deployment ebs-csi-controller -n kube-system
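You can wait for the restarted deployment to roll out completely with:
kubectl rollout status deployment ebs-csi-controller -n kube-system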
Step 6 : Managing the Amazon EBS CSI driver as an Amazon EKS add-on
- To see the required platform version, run the following command
aws eks describe-addon-versions --addon-name aws-ebs-csi-driver
- Add the Amazon EBS CSI add-on using the AWS CLI
Run the following command. Replace 111122223333 with your account ID. If your cluster is in the AWS GovCloud (US-East) or AWS GovCloud (US-West) AWS Regions, then replace arn:aws: with arn:aws-us-gov:
aws eks create-addon --cluster-name orch-lord-eks-cluster --addon-name aws-ebs-csi-driver \
--service-account-role-arn arn:aws:iam::111122223333:role/OrchLordEKS_EBS_CSI_DriverRole
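The add-on takes a couple of minutes to install; you can check its status (it should eventually report ACTIVE) with:
aws eks describe-addon --cluster-name orch-lord-eks-cluster --addon-name aws-ebs-csi-driver \
--query "addon.status" --output text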
- Check the current version of your Amazon EBS CSI add-on. Replace my-cluster with your cluster name
aws eks describe-addon --cluster-name my-cluster --addon-name aws-ebs-csi-driver --query "addon.addonVersion" --output text
Sample output
v1.18.0-eksbuild.1
- Determine which versions of the Amazon EBS CSI add-on are available for your cluster version (my cluster version was 1.25)
aws eks describe-addon-versions --addon-name aws-ebs-csi-driver --kubernetes-version 1.25 \
--query "addons[].addonVersions[].[addonVersion, compatibilities[].defaultVersion]" --output text
Sample output
v1.18.0-eksbuild.1
True
v1.17.0-eksbuild.1
False
v1.16.1-eksbuild.1
False
v1.16.0-eksbuild.1
False
v1.15.1-eksbuild.1
False
The version with True underneath is the default version deployed when the add-on is created; this might not be the latest available version. In the previous output, the latest version happens to be the default, so that is what gets deployed when the add-on is created.
If required, update the add-on to the version with True that was returned in the output of the previous step. If it was returned in the output, you can also update to a later version.
aws eks update-addon --cluster-name my-cluster --addon-name aws-ebs-csi-driver --addon-version v1.18.0-eksbuild.1 \
--resolve-conflicts PRESERVE
Step 7 : Enable dynamic volume provisioning
All the following K8s objects in this section will be created in the kube-system namespace
- Create a StorageClass
Copy the following content in a file named storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
Run the following command
kubectl apply -f storageclass.yaml -n kube-system
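You can confirm that the StorageClass was created (StorageClasses are cluster-scoped, so the namespace flag is not strictly needed) with:
kubectl get storageclass ebs-sc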
- Create a PersistentVolumeClaim
Copy the following content in a file named claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orch-lord-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 10Gi
Run the following command
kubectl apply -f claim.yaml -n kube-system
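You can check the claim with the command below. Because the StorageClass uses WaitForFirstConsumer, the claim will stay in Pending status until a Pod that uses it is scheduled, which is expected:
kubectl get pvc orch-lord-pv-claim -n kube-system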
Step 8 : Deploy a PostgreSQL db in the cluster
- Create a ConfigMap to store the userid and password of the PostgreSQL db that we are about to deploy within a Pod in the cluster.
Caution : Using a ConfigMap to store secrets like credentials is not ideal and should never be done in a production scenario. I am only following a simple route to demonstrate the overall process.
Copy the following content in a file postgres_configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: pg-creds-map
data:
  username: postgres
  password: <<your preferred password>>
Run the following command
kubectl apply -f postgres_configmap.yaml -n kube-system
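As noted in the caution above, a production setup would typically use a Kubernetes Secret instead of a ConfigMap for credentials. A minimal sketch of the equivalent Secret is below (the name pg-creds-secret is my own choice; the Pod would then reference it with secretKeyRef instead of configMapKeyRef):
apiVersion: v1
kind: Secret
metadata:
  name: pg-creds-secret
type: Opaque
stringData:
  username: postgres
  password: <<your preferred password>>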
- Create the Pod definition for the PostgreSQL container
Copy the following content in a file postgrespod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgresdb-pod
  labels:
    app: postgresdb-app
spec:
  volumes:
    - name: postgres-storage
      persistentVolumeClaim:
        claimName: orch-lord-pv-claim
  containers:
    - name: postgres-container
      image: postgres:15.3-alpine
      ports:
        - containerPort: 5432
      volumeMounts:
        - mountPath: /pg-data/var/lib/postgresql/data
          name: postgres-storage
      env:
        - name: POSTGRES_USER
          valueFrom:
            configMapKeyRef:
              name: pg-creds-map
              key: username
        - name: POSTGRES_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: pg-creds-map
              key: password
        - name: POSTGRES_DB
          value: orchlordbackend
        # PGDATA points to a subdirectory of the mounted volume so that the
        # database files actually land on the dynamically provisioned EBS
        # volume (the mount root may contain lost+found, which initdb rejects)
        - name: PGDATA
          value: /pg-data/var/lib/postgresql/data/pgdata
  restartPolicy: Always
Run the following command
kubectl apply -f postgrespod.yaml -n kube-system
- Check the status of the Pod
kubectl get pods -n kube-system
You can also see the dynamically provisioned storage
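For example, you can inspect the PersistentVolumeClaim and the PersistentVolume that was dynamically created and bound to it once the Pod is running:
kubectl get pvc orch-lord-pv-claim -n kube-system
kubectl get pv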
- Verify that the PostgreSQL db is running
Get into the Pod using kubectl
kubectl exec -it postgresdb-pod -n kube-system -- bash
Get the IP address of the Pod (for example, with kubectl get pod postgresdb-pod -n kube-system -o wide run from your local machine) and connect to the database from inside the Pod
psql -h <<Pod IP>> -U postgres -p 5432
Provide the password when prompted and voilà! You should land at the psql prompt.
Quite obviously, there are no tables yet, but the database exists and is running.
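As a quick sanity check, you can list the databases from the psql prompt; the orchlordbackend database created via the POSTGRES_DB environment variable should appear in the list:
\l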
Conclusion
If you have made it this far, then great: you now have a working PostgreSQL db running within a Pod in your EKS cluster. In an upcoming blog, I will lay out the steps to connect to this db from an application deployed in a different Pod within the EKS cluster.