Elastic Kubernetes Service (EKS) — Deploy Scalable Enterprise Applications on Amazon Web Services (AWS)
Deploying a scalable enterprise application for backend microservices involves many considerations beyond the development stage; these fall into performance, response time, and scalability. Amazon Web Services (AWS) offers a variety of ways to deploy applications within its Elastic Compute Cloud offerings, along with a cloud-native approach using Kubernetes.
This guide walks through one of those downstream choices: deploying with Elastic Kubernetes Service (EKS).
Install KubeCTL and EKSCTL CLI Tools
Creating, deploying, and operating AWS resources from the Command Line Interface (CLI) requires installing two additional tools: one for Kubernetes and another for EKS.
In your terminal, using Chocolatey, install the Kubernetes Command Line Tool (KubeCTL) to access and control the Kubernetes resources linked by your configuration. Paste and enter the below snippet to install KubeCTL.
choco install kubernetes-cli
Once KubeCTL is installed, check that it has been properly set up. In your terminal, paste and enter the below snippet to check the client version of KubeCTL.
kubectl version --client
Next, install the Elastic Kubernetes Service Command Line Tool (EKSCTL) for controlling the AWS-based Kubernetes resources linked by configuration. Paste and enter the below snippet to install EKSCTL.
choco install eksctl
After EKSCTL is installed, check that it has been properly set up. In your terminal, paste and enter the below snippet to check the version of EKSCTL.
eksctl version
Now that the required CLI tools have been installed, you can begin provisioning resources with AWS CloudFormation through EKSCTL and deploying images to those resources with KubeCTL.
Create an Elastic Kubernetes Service (EKS) Cluster Through EKSCTL With CloudFormation
AWS has pre-built CloudFormation templates for deploying infrastructure with built-in best practices, which can be provisioned and started with a few steps in the EKSCTL CLI. Once the resources are provisioned, they can be updated, reconfigured, and adjusted as infrastructure changes.
In your terminal, paste and enter the below snippet to start the cluster creation.
eksctl create cluster --name=cluster --version=1.27 --nodes=1 --node-type=t3.medium
As the CloudFormation template provisions the infrastructure resources, the command line will show a log of the process. Once the process has completed, the Kube Configuration will automatically be added to your KubeCTL resources; check the cluster to make sure the node groups have started.
In your terminal, paste and enter the below snippet to receive information about the node groups.
kubectl get nodes
Once the node groups have started, you can begin preparing the application for deployment through .YAML resource configurations. You can see your Kubernetes cluster in the AWS console by visiting the EKS service. On the EKS service page, click “Clusters” in the left menu, then click your recently created cluster to see the service settings.
From the EKS service in the AWS console, you can update, delete, and add dependent services such as node groups and configurations.
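Node groups can also be managed from the CLI. As a sketch, assuming the cluster name used above (list your node group names first, since EKSCTL generates them; the name "ng-1" below is illustrative):

```shell
# List the node groups attached to the cluster
eksctl get nodegroup --cluster=cluster

# Scale a node group; replace "ng-1" with a name from the listing above
eksctl scale nodegroup --cluster=cluster --name=ng-1 --nodes=2 --nodes-min=1 --nodes-max=3
```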
Create an Elastic Container Registry (ECR) Repository
AWS supports public and private container registries with repositories for container image artifacts built with a variety of configurations. A common practice is to create a Dockerfile that containerizes the built or compiled code to run as a Linux container within Kubernetes and other container engines.
Start by creating an ECR repository via the AWS console. On the ECR page, click the “Get started” button to begin creating a repository. On the “Create repository” page, enter a repository name, enable “Image scan settings” by clicking the toggle, enable “Encryption settings” by clicking the toggle, then click “Create repository”.
Once the repository is created, images can be built, tagged, and pushed to the repository so that a container runtime within AWS can pull them.
Create, Build, and Push a Docker Image
In the root directory of your application code, create a Dockerfile for implementing the environment, containerization, and runtime command instructions. Start by pasting the sample code below into a file named Dockerfile to create a production-ready Java Spring Boot container.
# Build stage: compile the application with Gradle
FROM gradle:7.6.1-jdk17-alpine AS build
COPY --chown=gradle:gradle . /home/gradle/src
WORKDIR /home/gradle/src
RUN gradle build --no-daemon

# Runtime stage: run the built jar on Amazon Corretto
FROM amazoncorretto:17
RUN mkdir /app
COPY --from=build /home/gradle/src/build/libs/*.jar /app/spring-boot-application.jar
EXPOSE 8080
ENTRYPOINT ["java","-Dspring.profiles.active=prod","-jar","/app/spring-boot-application.jar"]
In your terminal, from the root of your application folder, update the repository and image name, then paste and enter the below snippet to build your Docker image.
docker build -t <repository>/<image> .
The above command builds the image and stores it within your Docker image cache on your local machine. You can either build the image under a temporary name and rename it while tagging, or build it under its final name and then tag it, before pushing the image.
In your terminal, update the repository, image name, AWS account ID, and region, and add a tag such as “latest”, then paste and enter the below snippet to tag your Docker image for the repository.
docker tag <repository>/<image> <AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/<image>:<tag>
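The ECR reference above follows a fixed format: account ID, the regional ECR hostname, then the image name and tag. A minimal sketch of assembling it in the shell (all values are placeholders):

```shell
# Assemble an ECR image reference from its parts (placeholder values)
AWS_ACCOUNT_ID="123456789012"
REGION="us-east-1"
IMAGE="backend"
TAG="latest"
ECR_URI="${AWS_ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${IMAGE}:${TAG}"
echo "${ECR_URI}"
```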
Tagging in Docker images is used to establish versioning or precedence for the pull command to reference. Often, when storing images historically, the previous image is retagged or untagged, and the newest image is tagged “latest” for organizational purposes.
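As a sketch of that retagging flow (repository, image name, and the dated tag are placeholders):

```shell
# Preserve the previous "latest" under a dated tag before replacing it
docker tag <repository>/<image>:latest <repository>/<image>:2024-01-01

# Point "latest" at the newly built image
docker tag <repository>/<image>:new-build <repository>/<image>:latest
```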
The first step in pushing an image to AWS ECR is authenticating through the Docker CLI. To do this on Windows, you first need to install the AWS Tools for PowerShell. In your terminal, paste and enter the below snippet to install the required module.
Install-Module -Name AWS.Tools.ECR
Once you have installed the AWS Tools for PowerShell, you can use PowerShell to pipe the authentication token returned by AWS into the Docker CLI, so that Docker is authenticated to push to the repository.
In your terminal, paste and enter the below snippet to authenticate Docker with your ECR repository.
(Get-ECRLoginCommand).Password | docker login --username AWS --password-stdin <AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com
Once your Docker CLI is authenticated, you can push the tagged image to your AWS ECR repository with the token that has been provided to the command line interface.
In your terminal, paste and enter the below snippet to push your Docker image to your ECR repository.
docker push <AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/<image>:<tag>
Your Docker image should now appear within your ECR repository and can be pulled through the CLI or from container runtimes in AWS.
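To confirm the push from the command line (this assumes the AWS CLI is installed and configured, which this guide does not otherwise use), a sketch:

```shell
# List the images stored in the repository (repository name and region are placeholders)
aws ecr describe-images --repository-name <image> --region <REGION>
```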
Create a Namespace, Deployment, Service, and Ingress for Your EKS Application
Establishing container services for Kubernetes through .YAML files creates the required environment for deploying your application. In most scenarios, you will need a namespace, a deployment for the container image, a service to listen on that environment's exposed port, and an ingress or load balancer to connect that service's target port to the internet.
For background, see the Kubernetes documentation on services and networking: https://kubernetes.io/docs/concepts/services-networking/
To begin creating your Kubernetes service array, create a namespace by saving the code below to a file called namespace.yaml. After the file is created, you can use KubeCTL to apply the file and create the resource in Kubernetes.
apiVersion: v1
kind: Namespace
metadata:
  name: backend-namespace
  labels:
    name: backend-namespace
In your terminal, paste and enter the below snippet to apply the namespace file with KubeCTL.
kubectl apply -f namespace.yaml
Once the namespace has been created, you can move on to the deployment file, which will pull the container image you previously built and start the program through the container runtime. If you need to add secrets to a storage volume or pass command line arguments, these can be included in the .YAML file. Create the deployment file by saving the code below to a file called deployment.yaml, making sure to update the image name with the image you pushed to ECR.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
  namespace: backend-namespace
  labels:
    app: backend-application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend-application
  template:
    metadata:
      name: backend-pod
      labels:
        app: backend-application
    spec:
      containers:
        - name: backend-container
          image: <AWS_ACCOUNT_ID>.dkr.ecr.<REGION>.amazonaws.com/<image>:<tag>
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "180Mi"
              cpu: "300m"
            limits:
              memory: "360Mi"
              cpu: "600m"
      restartPolicy: Always
In your terminal, paste and enter the below snippet to apply the deployment file with KubeCTL.
kubectl apply -f deployment.yaml
Now that the deployment is up and running, you can start to add services on top of this application layer in the Kubernetes array. A service allows Kubernetes to listen on a deployment's exposed port and forward it to another port that is accessible within the cluster. Create the service file by saving the code below to a file called service.yaml.
apiVersion: v1
kind: Service
metadata:
  name: backend-service
  namespace: backend-namespace
spec:
  type: NodePort
  selector:
    app: backend-application
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      name: http
In your terminal, paste and enter the below snippet to apply the service file with KubeCTL.
kubectl apply -f service.yaml
In the Kubernetes cluster, if the files applied successfully, there is now a namespace with a deployment and a service. For the final piece, you need a method for exposing the service's target port to the internet through AWS. Create the ingress file by saving the code below to a file called ingress.yaml, making sure to update the host with your domain or a placeholder.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-ingress
  namespace: backend-namespace
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  ingressClassName: alb
  rules:
    - host: <domain>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend-service
                port:
                  number: 80
In your terminal, paste and enter the below snippet to apply the ingress file with KubeCTL.
kubectl apply -f ingress.yaml
AWS will begin to provision an Application Load Balancer (ALB) that connects to the ingress in your cluster and provides an internet-accessible address. Once the ALB is successfully provisioned, you can view your application from the internet at the DNS name AWS assigns to it. Note that provisioning the ALB requires the AWS Load Balancer Controller add-on to be installed in the cluster; if the ingress is not assigned an address, install the add-on, which creates the necessary services within your Virtual Private Cloud (VPC).
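Assuming the resource names used above, you can retrieve the address AWS assigns to the ingress with a KubeCTL query (it may take a few minutes for the address to populate):

```shell
# Print the DNS name assigned to the ingress by the ALB
kubectl get ingress backend-ingress --namespace=backend-namespace \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```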
You can check the status of each resource by pasting the correlated command into your terminal; each resource should report a ready count with the numerator equal to the denominator (for example, 1/1), along with additional service-based information.
Namespace
kubectl get namespaces --show-labels
Deployment
kubectl get deployments --show-labels --namespace=backend-namespace
Service
kubectl get services --show-labels --namespace=backend-namespace
Ingress
kubectl get ingress --show-labels --namespace=backend-namespace
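If any resource does not report as ready, the pods and their logs are a useful next place to look. A sketch, assuming the names used in this guide:

```shell
# List the pods created by the deployment
kubectl get pods --namespace=backend-namespace

# Tail the logs of the deployment's pods
kubectl logs deployment/backend-deployment --namespace=backend-namespace
```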