Containerized deployments in the cloud have become a favorite among organizations because they deliver on the promise of digital transformation with unparalleled agility and flexibility.

All cloud service providers have a portfolio of services dedicated to containerized workloads, such as managed Kubernetes services, container registry services, or serverless compute engines. Managed services based on K8s and serverless technologies help organizations focus on applications and innovation, rather than on the deployment and maintenance of hosting environments.

Elastic Kubernetes Service (EKS) is the managed Kubernetes service offering from AWS. It automates administrative tasks like deployment of the K8s control plane, update management, patching, node provisioning, and more. This allows customers to focus on packaging and deploying their applications to the cluster.

AWS Fargate takes this abstraction one step further by providing a serverless hosting solution for containers that can be integrated with EKS. In this article, we’ll explore the features of AWS Fargate and how it can be used to run serverless pods with EKS.

What Is AWS Fargate?

AWS Fargate enables customers to deploy containers without the hassle of creating and managing servers. It can be integrated with both ECS and EKS to run container workloads. Only the compute resources required to run the containers are created, thereby eliminating overprovisioning and waste.

This optimized approach to deployment ensures that you pay only for the resources your application requires. Costs can be further reduced through Fargate Spot and Compute Savings Plans: savings can reach up to 70% for workloads tolerant of interruptions and up to 50% for persistent workloads.

Fargate creates the backend infrastructure for hosting the containers and keeps it patched and up to date. All infrastructure maintenance activities, like scaling, patching, and securing the environment, are abstracted from users, thereby eliminating the associated operational overhead.

The pods deployed in Fargate are secure, as they run in isolated kernel runtime environments without sharing resources with other pods. Observability into container runtime metrics and application logs is provided through out-of-the-box integration with Amazon’s CloudWatch Container Insights.

Run Serverless Pods Using AWS Fargate and EKS

EKS provides a fully managed Kubernetes service in AWS and is certified K8s conformant for a standardized experience. It can be used for greenfield cloud-native applications, or for migrating containers from on-premises K8s clusters to AWS.

The control plane of K8s is managed by EKS and deployed across multiple AWS Availability Zones to ensure high availability, backed by a 99.95% uptime SLA. Security patches are automatically applied to the K8s control plane, thereby ensuring the security of hosted workloads. The service also has a built-in capability to scale compute capacity, either through managed node groups or through serverless compute via Fargate.

EKS can be used to run pods on Amazon Fargate by integrating with upstream K8s APIs. This helps extend your EKS cluster on demand, while ensuring manageability through existing tools.

When running serverless pods on AWS Fargate, you get the benefits of both services: the serverless capabilities of Fargate and the manageability of EKS. This helps with the rightsizing of applications and ensures that customers pay only for the resources used by the deployed pods.

The cost components include vCPU and memory charges for Fargate plus standard EKS charges for the cluster. An added advantage is that users don’t have to be K8s experts to benefit from the service; they can simply package the application to be deployed on Fargate. This helps accelerate the build and release of applications, making it a strong fit for cloud-native architectures.

That said, there are some constraints associated with this deployment model that you should be aware of. Each pod is limited to 4 vCPU and 30 GB of memory. Privileged pods, pods that use HostNetwork or HostPort, and DaemonSets are not supported. While extending EKS with Fargate, you can only use an Application Load Balancer or a Network Load Balancer as ingress, with IP targets only.
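Within these limits, Fargate provisions capacity for each pod based on the resource requests declared in its spec, rounding up to the nearest supported vCPU/memory combination, so it is worth setting requests explicitly rather than accepting the smallest default (0.25 vCPU / 0.5 GB). A minimal sketch of a container resources fragment; the values and file name here are illustrative:

```shell
# Write an illustrative resources fragment; Fargate rounds these requests
# up to the closest supported vCPU/memory combination it offers.
cat <<'EOF' > resources-fragment.yaml
resources:
  requests:
    cpu: "1"        # request 1 vCPU
    memory: "2Gi"   # request 2 GiB of memory
EOF
```

Merging a fragment like this into each container spec keeps the capacity Fargate provisions aligned with what the workload actually needs.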

How to Extend EKS Using AWS Fargate

Let’s look at the step-by-step process of extending the EKS cluster using AWS Fargate.

Your first step is to ensure that the IAM principals used to run the commands below have VPC access and the required IAM permissions to work with Amazon EKS, IAM service-linked roles, and AWS CloudFormation.

Prerequisites

You need to have the following tools available to deploy and configure the EKS cluster and AWS Fargate:

  • kubectl command line utility for managing K8s clusters (version 1.20 or above)
  • eksctl utility to manage EKS clusters (version 0.51.0 or above)

The commands below are run in AWS CloudShell for demonstration purposes. AWS CloudShell helps you run AWS CLI commands directly in a browser-based shell interface, which is preauthenticated with AWS console credentials.

  1. Download the kubectl binary from Amazon S3 using the command below:
$ curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.20.4/2021-04-12/bin/linux/amd64/kubectl
  2. Using the command below, apply execute permissions to the binary, copy the binary to a folder in your PATH, and add PATH to your shell initialization file.
$ chmod +x ./kubectl

$ mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin

$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc

After this step, the version can be verified using the following command:

$ kubectl version --short --client
Client Version: v1.22.6-eks-7d68063
  3. Download and extract the eksctl utility and move it to /usr/local/bin using the following commands:
$ curl --silent --location https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz | tar xz -C /tmp

$ sudo mv /tmp/eksctl /usr/local/bin

The version can then be verified using this command:

$ eksctl version  
0.109.0

Cluster Deployment

  1. Deploy the EKS cluster with an associated Fargate profile and no worker nodes using the following command:
$ eksctl create cluster --name demo --region eu-central-1 --fargate
2022-09-01 19:10:57 [ℹ]  eksctl version 0.109.0
2022-09-01 19:10:57 [ℹ]  using region eu-central-1
2022-09-01 19:10:57 [ℹ]  setting availability zones to [eu-central-1c eu-central-1b eu-central-1a]
2022-09-01 19:10:57 [ℹ]  subnets for eu-central-1c - public:192.168.0.0/19 private:192.168.96.0/19
2022-09-01 19:10:57 [ℹ]  subnets for eu-central-1b - public:192.168.32.0/19 private:192.168.128.0/19
2022-09-01 19:10:57 [ℹ]  subnets for eu-central-1a - public:192.168.64.0/19 private:192.168.160.0/19
2022-09-01 19:10:57 [ℹ]  using Kubernetes version 1.22
2022-09-01 19:10:57 [ℹ]  creating EKS cluster "demo" in "eu-central-1" region with Fargate profile
2022-09-01 19:10:57 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-central-1 --cluster=demo'
2022-09-01 19:10:57 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "demo" in "eu-central-1"
2022-09-01 19:10:57 [ℹ]  CloudWatch logging will not be enabled for cluster "demo" in "eu-central-1"
2022-09-01 19:10:57 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=eu-central-1 --cluster=demo'
2022-09-01 19:10:57 [ℹ]  
2 sequential tasks: { create cluster control plane "demo", 
    2 sequential sub-tasks: { 
        wait for control plane to become ready,
        create fargate profiles,
    } 
}
2022-09-01 19:10:57 [ℹ]  building cluster stack "eksctl-demo-cluster"
2022-09-01 19:10:58 [ℹ]  deploying stack "eksctl-demo-cluster"
2022-09-01 19:11:28 [ℹ]  waiting for CloudFormation stack "eksctl-demo-cluster"
2022-09-01 19:11:58 [ℹ]  waiting for CloudFormation stack "eksctl-demo-cluster"
2022-09-01 19:12:58 [ℹ]  waiting for CloudFormation stack "eksctl-demo-cluster"
2022-09-01 19:13:58 [ℹ]  waiting for CloudFormation stack "eksctl-demo-cluster"
2022-09-01 19:14:58 [ℹ]  waiting for CloudFormation stack "eksctl-demo-cluster"
2022-09-01 19:15:58 [ℹ]  waiting for CloudFormation stack "eksctl-demo-cluster"
2022-09-01 19:16:58 [ℹ]  waiting for CloudFormation stack "eksctl-demo-cluster"
2022-09-01 19:17:58 [ℹ]  waiting for CloudFormation stack "eksctl-demo-cluster"
2022-09-01 19:18:58 [ℹ]  waiting for CloudFormation stack "eksctl-demo-cluster"
2022-09-01 19:19:58 [ℹ]  waiting for CloudFormation stack "eksctl-demo-cluster"
2022-09-01 19:20:58 [ℹ]  waiting for CloudFormation stack "eksctl-demo-cluster"
2022-09-01 19:22:59 [ℹ]  creating Fargate profile "fp-default" on EKS cluster "demo"
2022-09-01 19:27:17 [ℹ]  created Fargate profile "fp-default" on EKS cluster "demo"
W0901 19:27:47.788334     600 warnings.go:70] spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key: beta.kubernetes.io/os is deprecated since v1.14; use "kubernetes.io/os" instead
W0901 19:27:47.788353     600 warnings.go:70] spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[1].key: beta.kubernetes.io/arch is deprecated since v1.14; use "kubernetes.io/arch" instead
2022-09-01 19:27:47 [ℹ]  "coredns" is now schedulable onto Fargate
2022-09-01 19:28:50 [ℹ]  "coredns" is now scheduled onto Fargate
2022-09-01 19:28:50 [ℹ]  "coredns" pods are now scheduled onto Fargate
2022-09-01 19:28:50 [ℹ]  waiting for the control plane availability...
2022-09-01 19:28:51 [✔]  saved kubeconfig as "/home/cloudshell-user/.kube/config"
2022-09-01 19:28:51 [ℹ]  no tasks
2022-09-01 19:28:51 [✔]  all EKS cluster resources for "demo" have been created
2022-09-01 19:28:52 [ℹ]  kubectl command should work with "/home/cloudshell-user/.kube/config", try 'kubectl get nodes'
2022-09-01 19:28:52 [✔]  EKS cluster "demo" in "eu-central-1" region is ready

The above command creates a cluster named demo in the eu-central-1 region and associates a Fargate profile with it. This profile specifies which pods should run on Fargate. The command also creates the subnets the pods connect to and the IAM pod execution role AmazonEKSFargatePodExecutionRole.

The AmazonEKSFargatePodExecutionRole is required to pull container images and perform actions on the user’s behalf. If you open the EKS console and browse to Configuration > Fargate profiles, you’ll see the profile listed there.
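You can also inspect and manage Fargate profiles from the command line. As a sketch, the commands below list the profile eksctl created and add a second profile targeting a dedicated namespace; the fp-app profile name and app namespace are illustrative:

```shell
# List the Fargate profiles associated with the demo cluster.
eksctl get fargateprofile --cluster demo --region eu-central-1

# Create an additional profile so that pods in the (illustrative)
# "app" namespace are also scheduled onto Fargate.
eksctl create fargateprofile \
  --cluster demo \
  --region eu-central-1 \
  --name fp-app \
  --namespace app
```

These commands require a live cluster and the same IAM permissions used for cluster creation.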

  2. Use the following command to create the kubeconfig file, which is required to connect to the created cluster:
$ aws eks --region eu-central-1 update-kubeconfig --name demo
Added new context arn:aws:eks:eu-central-1:726133447647:cluster/demo to /home/cloudshell-user/.kube/config
  3. You can run any kubectl command to test connectivity. For example, if you run the command to get pods in the cluster, you won’t see any resources listed.
$ kubectl get pods
No resources found in default namespace. 

Deploy Sample App

  1. Create a sample-service.yaml file locally with the following content to deploy a sample nginx application:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: default
  labels:
    app: my-demoapp
spec:
  selector:
    app: my-demoapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: default
  labels:
    app: my-demoapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-demoapp
  template:
    metadata:
      labels:
        app: my-demoapp
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - amd64
                - arm64
      containers:
      - name: nginx
        image: public.ecr.aws/z9d2n7e1/nginx:1.19.5
        ports:
        - containerPort: 80

Upload the file to AWS CloudShell using the Actions dropdown menu.

  2. Deploy the application with the following command:
$ kubectl apply -f sample-service.yaml
Warning: spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[0].matchExpressions[0].key: beta.kubernetes.io/arch is deprecated since v1.14; use "kubernetes.io/arch" instead
deployment.apps/my-deployment created
  3. Run the following command to get the status of the pods:
$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
my-deployment-854b54d8-dwzgh   1/1     Running   0          5m38s
my-deployment-854b54d8-nnrjc   1/1     Running   0          5m38s
my-deployment-854b54d8-t5kg7   1/1     Running   0          5m38s
  4. Describe the pod to view its details:
$ kubectl describe pod my-deployment-854b54d8-dwzgh
Name:                 my-deployment-854b54d8-dwzgh
Namespace:            default
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 fargate-ip-192-168-161-44.eu-central-1.compute.internal/192.168.161.44
Start Time:           Thu, 01 Sep 2022 19:51:00 +0000
Labels:               app=my-demoapp
                      eks.amazonaws.com/fargate-profile=fp-default
                      pod-template-hash=854b54d8
Annotations:          CapacityProvisioned: 0.25vCPU 0.5GB
                      Logging: LoggingDisabled: LOGGING_CONFIGMAP_NOT_FOUND
                      kubernetes.io/psp: eks.privileged
Status:               Running
IP:                   192.168.161.44
IPs:
  IP:           192.168.161.44
Controlled By:  ReplicaSet/my-deployment-854b54d8
Containers:
  nginx:
    Container ID:   containerd://dd8e0f2064ea165652e3189b70f1e20f01910f68968ef6a733fd6076f3398333
    Image:          public.ecr.aws/z9d2n7e1/nginx:1.19.5
    Image ID:       public.ecr.aws/z9d2n7e1/nginx@sha256:21dc5d03243e1aa6197e6a7fefeefe11d20371239570d8a74631fe3ee3fd526a
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 01 Sep 2022 19:51:08 +0000
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7k89n (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-7k89n:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason           Age    From               Message
  ----     ------           ----   ----               -------
  Warning  LoggingDisabled  9m40s  fargate-scheduler  Disabled logging because aws-logging configmap was not found. configmap "aws-logging" not found
  Normal   Scheduled        8m48s  fargate-scheduler  Successfully assigned default/my-deployment-854b54d8-dwzgh to fargate-ip-192-168-161-44.eu-central-1.compute.internal
  Normal   Pulling          8m47s  kubelet            Pulling image "public.ecr.aws/z9d2n7e1/nginx:1.19.5"
  Normal   Pulled           8m42s  kubelet            Successfully pulled image "public.ecr.aws/z9d2n7e1/nginx:1.19.5" in 5.217210027s
  Normal   Created          8m41s  kubelet            Created container nginx
  Normal   Started          8m41s  kubelet            Started container nginx

The pod is now deployed to Fargate.
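The LoggingDisabled warning in the events above appears because Fargate’s built-in log router wasn’t configured. Log forwarding can be enabled by creating an aws-observability namespace containing an aws-logging ConfigMap, as sketched below; the log group name is illustrative, and the pod execution role also needs permission to write to CloudWatch Logs:

```shell
# Create the aws-observability namespace (the label is required) and the
# aws-logging ConfigMap that configures Fargate's Fluent Bit log router
# to ship container logs to CloudWatch Logs.
cat <<'EOF' | kubectl apply -f -
kind: Namespace
apiVersion: v1
metadata:
  name: aws-observability
  labels:
    aws-observability: enabled
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region eu-central-1
        log_group_name /eks/demo/fargate
        auto_create_group true
EOF
```

Pods launched after this ConfigMap exists will show Logging enabled in their annotations instead of the LOGGING_CONFIGMAP_NOT_FOUND message.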

  5. Next, run the following command to see the nodes associated with the EKS cluster:
$ kubectl get nodes
NAME                                                       STATUS   ROLES    AGE   VERSION
fargate-ip-192-168-123-152.eu-central-1.compute.internal   Ready    <none>   32m   v1.22.6-eks-14c7a48
fargate-ip-192-168-128-209.eu-central-1.compute.internal   Ready    <none>   10m   v1.22.6-eks-14c7a48
fargate-ip-192-168-139-105.eu-central-1.compute.internal   Ready    <none>   10m   v1.22.6-eks-14c7a48
fargate-ip-192-168-161-44.eu-central-1.compute.internal    Ready    <none>   10m   v1.22.6-eks-14c7a48
fargate-ip-192-168-180-241.eu-central-1.compute.internal   Ready    <none>   32m   v1.22.6-eks-14c7a48 

You can see five nodes: two created earlier for CoreDNS and three for the nginx replicas just deployed.

  6. Once testing is complete, delete the deployment and check the number of nodes; you’ll see that the additional nodes created for the nginx application are no longer listed.
$ kubectl delete deployment my-deployment
deployment.apps "my-deployment" deleted
$ kubectl get nodes
NAME                                                       STATUS   ROLES    AGE   VERSION
fargate-ip-192-168-123-152.eu-central-1.compute.internal   Ready    <none>   35m   v1.22.6-eks-14c7a48
fargate-ip-192-168-180-241.eu-central-1.compute.internal   Ready    <none>   35m   v1.22.6-eks-14c7a48
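When you are done experimenting, you can remove the demo cluster and its associated CloudFormation stacks so that no further EKS or Fargate charges accrue:

```shell
# Delete the cluster, its Fargate profiles, and the CloudFormation stacks
# that eksctl created for it.
eksctl delete cluster --name demo --region eu-central-1
```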

Conclusion

As shown above, you can easily extend an EKS cluster using Fargate, with compute resources created on demand for your deployments. This ensures the rightsizing of compute as well as optimal cost for running your containerized workloads. Furthermore, pods deployed on Fargate offer enhanced security because they run in isolated environments with no shared resources.

Deploying serverless Kubernetes pods with AWS EKS and Fargate is beneficial in burst scenarios, where you can add capacity to your EKS cluster on demand without paying for idle backend nodes. It is also useful in test and development scenarios, where you need to quickly spin up test environments and tear them down after testing with minimal overhead.

Extending your EKS clusters through AWS Fargate offers a hassle-free way to achieve quick development and deployment of containerized workloads.
