Creating a Highly Available Secured Kubernetes Cluster on AWS with Kops

Learn how to create your own production-ready Kubernetes cluster on AWS.

Arve Knudsen

October 6, 2017

Kubernetes is a popular system for orchestrating clusters of Docker containers. It is open source and can run on a number of platforms, such as Amazon Web Services (AWS) and Google Compute Engine (GCE), in addition to your own hardware.

There are managed services where Kubernetes comes pre-installed and is administered on your behalf, for example Google Container Engine (GKE). On AWS however, the platform I will be discussing in this article, you have to install Kubernetes yourself.

Installing Kubernetes by hand is a daunting task, and considering that it’s a continuously evolving system with many moving parts, you will want a tool to do it for you. Perhaps the most popular tool for provisioning (and maintaining) Kubernetes clusters at the time of writing is kops, an official open source offering from the Kubernetes organization.

In this article I will guide you through creating your very own highly available, secured Kubernetes cluster on AWS with kops (presently at version 1.7).

The cluster will be highly available in that it will have three Kubernetes masters spread across three AWS availability zones (AZs), with the worker nodes spread across the same AZs. This is considered a Kubernetes best practice, and ensures that if one or two AZs become unavailable, masters and workers remain in the others to service requests. For high availability there should be at least three masters, so that a quorum can still be reached among them if one of them fails.

The cluster will also be secured: its nodes are contained in private subnets and can only be reached over SSH via a dedicated bastion host (another best practice), and its control plane/API requires client certificate based authentication and is configured with RBAC authorization.

Creating the Cluster Itself

The very first step is to bring the cluster up, which kops makes quite simple. The kops invocation below creates a highly available cluster on AWS, with five worker nodes spread among three AZs and three master nodes in the same AZs. For security, all master and worker nodes are placed in private subnets and are not exposed to the Internet. We also instantiate a bastion host as the sole SSH entry point into the cluster, and the cluster is configured with RBAC as its authorization mode. I chose Flannel as the networking system because my experience with it has been good; I have never noticed any problems with it.

> kops --state s3://$CLUSTER create cluster \  
--zones eu-central-1a,eu-central-1b,eu-central-1c \  
--master-zones eu-central-1a,eu-central-1b,eu-central-1c \  
--topology private --networking flannel --master-size m4.large \  
--node-size m4.large --node-count 5 --bastion --cloud aws \  
--ssh-public-key id_rsa.pub --authorization RBAC --yes $CLUSTER

We use $CLUSTER as a placeholder for the cluster name throughout this article; note that in the command above the kops S3 state store bucket is also named after the cluster. Later on, $BUCKET refers to that state store bucket and $USERNAME to the user we will create credentials for.
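
For completeness, here is a minimal sketch of the preparation assumed by the command above; the domain and region are hypothetical placeholders, and the S3 state store bucket is named after the cluster to match the --state s3://$CLUSTER flag:

# Hypothetical example values; substitute your own domain and region.  
> export CLUSTER=k8s.example.com  
# Create the S3 bucket kops uses as its state store.  
> aws s3 mb s3://$CLUSTER --region eu-central-1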

Exporting the Kubectl Configuration

After creating the cluster, we generate a configuration file that lets kubectl operate against it. We do this with the following command:

> KUBECONFIG=$CLUSTER.kubeconfig kops export kubecfg $CLUSTER
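
Before going further, it can be worth waiting until kops considers the cluster healthy. This is not part of the original procedure, but kops ships a validation command that can use the exported kubeconfig:

> KUBECONFIG=$CLUSTER.kubeconfig kops --state s3://$CLUSTER \  
validate cluster $CLUSTER

Repeat the command until all masters and nodes are reported as ready.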

Configuring Cluster Components for RBAC

In order for certain cluster components to function with RBAC enabled, some configuration is required. Essentially, we need to bind the right roles to service accounts so that they can perform certain tasks on behalf of pods.

Once the cluster is ready, configure it by applying the manifest shown in the section below, saved as kube-system-rbac.yaml:

> kubectl --kubeconfig $CLUSTER.kubeconfig apply -f \  
kube-system-rbac.yaml

Default System Service Account

The default service account in the kube-system namespace must be bound to the cluster-admin role:

apiVersion: rbac.authorization.k8s.io/v1  
kind: ClusterRoleBinding  
metadata:  
  name: system:default-sa  
subjects:  
  - kind: ServiceAccount  
    name: default  
    namespace: kube-system  
roleRef:  
  kind: ClusterRole  
  name: cluster-admin  
  apiGroup: rbac.authorization.k8s.io
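
As a quick sanity check (not part of the original procedure), you can ask the API server, via impersonation, whether the kube-system default service account is now allowed to perform arbitrary operations:

> kubectl --kubeconfig $CLUSTER.kubeconfig auth can-i '*' '*' \  
--as=system:serviceaccount:kube-system:default

Once the binding is in place, the command should answer yes.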

Client Certificate Based Authentication

I decided to implement user authentication via TLS certificates, as this is directly supported in the kubectl tool and ties in easily with RBAC authorization. The trick here is to get hold of the certificate authority (CA) certificate and key that kops used when creating the cluster, as these will allow us to generate valid user certificates. Luckily, these files are stored in kops’ S3 bucket. The following commands copy the CA key and certificate to the local directory:

> aws s3 cp s3://$BUCKET/$CLUSTER/pki/private/ca/$KEY ca.key  
> aws s3 cp s3://$BUCKET/$CLUSTER/pki/issued/ca/$CERT ca.crt
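
The exact object names behind the $KEY and $CERT placeholders vary from cluster to cluster; if in doubt, you can list the relevant S3 folders first to find them:

> aws s3 ls s3://$BUCKET/$CLUSTER/pki/private/ca/  
> aws s3 ls s3://$BUCKET/$CLUSTER/pki/issued/ca/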

Now that we have the CA key and certificate, we can generate a user certificate with the openssl command line tool. The procedure consists of first generating a private key, then using that key to create a certificate signing request for a user with username $USERNAME, and finally signing the certificate with the CA key and certificate. The commands below produce a key and certificate named user.key and user.crt, respectively:

> openssl genrsa -out user.key 4096  
> openssl req -new -key user.key -out user.csr -subj \  
"/CN=$USERNAME/O=developer"  
> openssl x509 -req -in user.csr -CA ca.crt -CAkey ca.key \  
-CAcreateserial -out user.crt -days 365
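
To verify the result, you can have openssl print the subject and validity period of the newly signed certificate:

> openssl x509 -in user.crt -noout -subject -dates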

Granting Cluster Administrator Rights to User

We would like for our user, as represented by the certificate, to have cluster administrator rights, meaning that they are permitted any operation on the cluster. The way to do this is to create a ClusterRoleBinding that gives the cluster-admin role to the new user, as in the following command:

> kubectl --kubeconfig $CLUSTER.kubeconfig create \  
clusterrolebinding $USERNAME-cluster-admin-binding \  
--clusterrole=cluster-admin --user=$USERNAME
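
If you would rather keep such a binding in version control, a roughly equivalent declarative manifest can be applied instead. This is just a sketch of an alternative, using the same binding name as the command above:

> kubectl --kubeconfig $CLUSTER.kubeconfig apply -f - <<EOF  
apiVersion: rbac.authorization.k8s.io/v1  
kind: ClusterRoleBinding  
metadata:  
  name: $USERNAME-cluster-admin-binding  
subjects:  
  - kind: User  
    name: $USERNAME  
    apiGroup: rbac.authorization.k8s.io  
roleRef:  
  kind: ClusterRole  
  name: cluster-admin  
  apiGroup: rbac.authorization.k8s.io  
EOF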

After granting the user this role, we can start using that user instead of the default admin user, as you’ll see in the next section.

Identifying User through Certificate via Kubectl

Given the certificate we created for the user previously, and having assigned the user the cluster-admin role, we can now identify ourselves to Kubernetes by modifying the kubectl configuration. The following commands configure kubectl to authenticate to the cluster with the certificate:

> kubectl --kubeconfig $CLUSTER.kubeconfig config set-credentials \  
$USERNAME --client-key=user.key --client-certificate=user.crt  
> kubectl --kubeconfig $CLUSTER.kubeconfig config set-context \  
$CLUSTER --user $USERNAME  
> kubectl --kubeconfig $CLUSTER.kubeconfig config use-context \  
$CLUSTER

After configuring kubectl with the previous commands, you should be able to operate on the cluster as the new user. Try for example listing all pods:

> kubectl --kubeconfig $CLUSTER.kubeconfig get pods --all-namespaces

If the above command worked, you should now have a fully functional cluster on AWS in which to deploy your applications. Have fun!

Adding Labels to Kubernetes Nodes

We recommend you make a final change to your cluster in anticipation of enabling a full logging system, i.e. the more or less standard combination of Elasticsearch, Fluentd and Kibana. This solution will give you a searchable database of logs with a comprehensive graphical interface, but in order for your logs to make it into this system, you need to enable Fluentd on your worker nodes.

The way to do this is to first open kops’ configuration editor for the nodes instance group:

> kops --state s3://$CLUSTER edit --name $CLUSTER ig nodes

Then, you must add a label beta.kubernetes.io/fluentd-ds-ready under spec.nodeLabels:

spec:  
  nodeLabels:  
    beta.kubernetes.io/fluentd-ds-ready: "true"

After saving your changes and closing the editor, you’ll need to update the cluster configuration and then force a rolling update of the cluster so that the label gets applied to the nodes:

> kops --state s3://$CLUSTER update cluster $CLUSTER --yes  
> kops --state s3://$CLUSTER rolling-update cluster $CLUSTER --yes \  
--force

The rolling update should take some time to finish, but when it does your worker nodes should all be labelled beta.kubernetes.io/fluentd-ds-ready=true and be ready for log consumption!
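
To confirm that the label has been applied, you can filter the node list by it:

> kubectl --kubeconfig $CLUSTER.kubeconfig get nodes \  
-l beta.kubernetes.io/fluentd-ds-ready=true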

Scripted Cluster Creation

In order to automate the procedure of creating Kubernetes clusters (on AWS) according to the opinionated stack outlined in this article, we at Coder Society have created a wrapper script for kops. It’s written in Python and hosted on GitHub as k8s-aws. This script invokes kops and sets up RBAC according to the scheme described above, and in addition it installs other facilities in the cluster that we standardize on.

We’ve used it to create many clusters, and it should save you time if you want to create clusters according to the guidelines in this article.

In future articles we will detail installation of the EFK logging stack and the Prometheus Operator monitoring stack, plus other Kubernetes practices that we make use of at Coder Society. Stay tuned!

  • kubernetes
  • aws
  • kops