Getting Started with Kubernetes on Amazon Web Services (AWS)

Wednesday Aug 23rd 2017 by Deepak Vohra

The process of installing Kubernetes on CoreOS on AWS is simplified by following these steps.

Kubernetes is a Docker container orchestration platform that offers several benefits, including the ability to create a service from a Docker image, load balancing across multiple cluster nodes, scaling, rolling updates, monitoring, and logging. In this two-article tutorial, we shall discuss using Jenkins on Kubernetes. In this first article, we shall install Kubernetes on CoreOS on AWS, using an AWS CloudFormation stack to create the Kubernetes cluster.

Setting the Environment

This tutorial requires the following software to be installed:

  • AWS Command Line Interface (CLI)
  • The CloudFormation Generator tool kube-aws
  • The kubectl Binaries
  • CoreOS Application Signing Key

The Amazon Linux AMI has the AWS Command Line Interface (CLI) pre-installed. Create an Amazon EC2 instance using the Amazon Linux AMI. The Security Group Inbound/Outbound rules should be set to allow traffic for all protocols in the port range 0-65535, from any source and to any destination. Obtain the Public IP address or the Public DNS name of the EC2 instance, as shown in Figure 1.

Amazon Linux EC2 Instance Public IP Address
Figure 1: Amazon Linux EC2 Instance Public IP Address

SSH Login to the EC2 instance by using the key pair used to launch the EC2 instance and the Public DNS name (or the Public IP Address).

ssh -i "jenkins.pem" ec2-user@<Public DNS name>

Another prerequisite is to register a domain name with a domain registrar; it is used as the external DNS name on which the Kubernetes cluster API server is made accessible. We have used the external DNS name NOSQLSEARCH.COM. Because NOSQLSEARCH.COM is already registered, use a different domain name and substitute it in the subsequent settings in which NOSQLSEARCH.COM is used.

Configuring AWS Credentials

Create a set of AWS security credentials, which are used to configure the EC2 instance on which the CloudFormation stack is launched. In the AWS Management Console, click Security Credentials for the user account and click Create New Access Key to create an access key. Copy the AWS Access Key ID and the AWS Secret Access Key. After logging in to the Amazon Linux instance with SSH, run the following command to configure the instance with the AWS credentials:

aws configure

Specify the Access Key ID and Secret Access Key when prompted. Specify the default region name (us-east-1) and the output format (json), as shown in Figure 2.

Configuring AWS Credentials
Figure 2: Configuring AWS Credentials
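The aws configure command stores these values in two INI files under ~/.aws/. A sketch of the resulting files (the key values below are AWS's documented placeholder examples, not real credentials):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[default]
region = us-east-1
output = json
```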

Creating an EC2 Key Pair

An EC2 Key pair is required as a cluster parameter in creating a CloudFormation stack for the Kubernetes cluster. To create the EC2 key pair, the AWS credentials need to be configured, which we already did. Run the following command to create a key pair called kubernetes-coreos and save it as kubernetes-coreos.pem.

aws ec2 create-key-pair --key-name kubernetes-coreos \
   --query 'KeyMaterial' --output text > kubernetes-coreos.pem

The access permissions of the key pair need to be modified to allow only read by the owner by using 400 as the mode.

chmod 400 kubernetes-coreos.pem

The EC2 key pair gets created and the mode gets set to 400, as shown in Figure 3.

Creating an EC2 Key Pair
Figure 3: Creating an EC2 Key Pair
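To confirm the permissions took effect, the octal mode can be printed with stat. A minimal sketch using a stand-in file (demo-key.pem is hypothetical, standing in for kubernetes-coreos.pem):

```shell
# Create a stand-in key file and restrict it to owner read-only.
touch demo-key.pem
chmod 400 demo-key.pem
# Print the octal mode; on Linux this prints "400 demo-key.pem".
stat -c '%a %n' demo-key.pem
```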

Installing the CoreOS Application Signing Key

As of March 2016, CoreOS applications hosted on GitHub and packaged into AppC images are signed with the CoreOS Application Signing Key. Import the CoreOS Application Signing Key:

gpg2 --keyserver <keyserver URL> --recv-key FC8A365E

Next, validate the key by outputting its fingerprint.

gpg2 --fingerprint FC8A365E

The key fingerprint should be 18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E.
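Comparing the 40 hex digits by eye is error-prone; one way to check mechanically is to strip the grouping spaces gpg2 inserts and compare against the expected value. A sketch, where the printed variable stands in for the line output by gpg2 --fingerprint:

```shell
expected="18AD5014C99EF7E3BA5F6CE950BDD3E0FC8A365E"
# Stand-in for the fingerprint line printed by gpg2 --fingerprint FC8A365E.
printed="18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E"
# Remove the grouping spaces and compare.
if [ "$(printf '%s' "$printed" | tr -d ' ')" = "$expected" ]; then
  echo "fingerprint OK"
else
  echo "fingerprint MISMATCH"
fi
```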

Installing the kube-aws CloudFormation Generator

Download the latest release tarball and detached signature (.sig) for kube-aws from its GitHub releases page.


Validate the tarball's GPG signature.

sudo gpg2 --verify kube-aws-linux-amd64.tar.gz.sig

The Primary key fingerprint should be 18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E. Extract the binary from the tar.gz file.

tar zxvf kube-aws-linux-amd64.tar.gz

Move the kube-aws binary to a directory on the PATH, such as /usr/local/bin.

sudo mv linux-amd64/kube-aws /usr/local/bin

Creating a KMS Key

Next, create a KMS key by using the aws command line interface (CLI). The KMS key is used to encrypt/decrypt cluster TLS assets and is identified by an Arn string. Specify region (us-east-1) with the --region option.

aws kms --region=us-east-1 create-key \
   --description="kube-aws assets"

A KMS Key gets created. Copy the KeyMetadata.Arn string, which starts with arn:aws:kms:<region>, in which <region> is a variable. The KeyMetadata.Arn string is to be used later to initialize the cluster CloudFormation stack.
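The Arn can also be pulled out of the JSON response programmatically rather than copied by hand. A minimal sketch, using a sample response in place of the real one (the account id and key id below are made up):

```shell
# Sample create-key response standing in for the real one.
cat > key.json <<'EOF'
{"KeyMetadata": {"KeyId": "abcd1234-ef56-7890-abcd-1234567890ab",
 "Arn": "arn:aws:kms:us-east-1:123456789012:key/abcd1234-ef56-7890-abcd-1234567890ab"}}
EOF
# Extract KeyMetadata.Arn from the JSON.
python3 -c "import json; print(json.load(open('key.json'))['KeyMetadata']['Arn'])"
```

Alternatively, the AWS CLI can return just the Arn directly by adding --query KeyMetadata.Arn --output text to the create-key command, the same --query mechanism used earlier when creating the key pair.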

Creating a CloudFormation Stack for the Kubernetes Cluster

The procedure to create a CloudFormation stack for the Kubernetes cluster is as follows:

  1. Create an asset directory.
  2. Initialize the CloudFormation stack.
  3. Render the contents of the asset directory.
  4. Optionally, customize the cluster in the cluster.yaml file.
  5. Validate the CloudFormation stack and the cloud-config user data files.
  6. Launch the CloudFormation stack.

Create a directory on the Amazon Linux EC2 instance for the generated assets and cd (change directory) to the asset directory.

mkdir coreos-cluster
cd coreos-cluster

Initialize the CloudFormation stack using the Amazon EC2 key pair, KMS Key Arn string, and external DNS name.

kube-aws init --cluster-name=kubernetes-coreos-cluster \
   --external-dns-name=NOSQLSEARCH.COM --region=us-east-1 \
   --availability-zone=us-east-1c --key-name=kubernetes-coreos

The CloudFormation stack assets get created with the configuration file cluster.yaml created in the coreos-cluster directory.

Render (generate) the cluster assets consisting of templates and credentials that are used to create, update, and interact with the Kubernetes cluster.

kube-aws render

The CloudFormation template stack-template.json gets created; we shall use it to create the Kubernetes cluster. The cluster.yaml file may be customized to add or modify cluster settings, including the cluster name, the external DNS name, whether to create a Route 53 A record automatically, the hosted zone, the AWS region and availability zone, the EC2 instance types for the Kubernetes controller and worker nodes, the number of worker nodes, and the Kubernetes version. As an example, set workerCount to 3 by editing cluster.yaml with vi. Modifying cluster.yaml does not require the assets to be re-rendered, but if any of the user data files or the stack template is modified, the cluster assets must be re-rendered with kube-aws render.
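As a sketch, the relevant cluster.yaml entries might look as follows (key names can vary across kube-aws versions; the values mirror the settings used in this tutorial, with workerCount raised to 3):

```yaml
clusterName: kubernetes-coreos-cluster
externalDNSName: NOSQLSEARCH.COM
keyName: kubernetes-coreos
region: us-east-1
availabilityZone: us-east-1c
# Number of worker node EC2 instances to create.
workerCount: 3
```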

Validate the CloudFormation stack:

kube-aws validate

The output should indicate that the user data is valid and the stack template is valid. Launch the CloudFormation stack.

kube-aws up

A CloudFormation stack with the same name as the one being created must not already exist, or the error listed in Figure 4 gets generated. It takes a few minutes for the CloudFormation stack to get created and the Kubernetes controller and worker nodes to become available. The kube-aws up command does not complete until the cluster has launched. The controller IP gets listed when the cluster is launched, as shown in Figure 4. Find the cluster status with the following command:

kube-aws status

Using the controller IP, SSH into the Kubernetes controller node:

ssh -i "kubernetes-coreos.pem" core@<controller IP>

The Kubernetes controller command prompt is displayed (see Figure 4).

Launching the CloudFormation Stack
Figure 4: Launching the CloudFormation Stack

The AWS CloudFormation > Stacks table should list the stack created, as shown in Figure 5.

CloudFormation Stack for Kubernetes Cluster
Figure 5: CloudFormation Stack for Kubernetes Cluster

For a single Kubernetes master (controller) and three worker nodes, the AWS EC2 instances are shown in Figure 6.

Kubernetes Controller and Worker Node Instances on AWS EC2
Figure 6: Kubernetes Controller and Worker Node Instances on AWS EC2

Configuring an External DNS

Next, configure the external DNS, NOSQLSEARCH.COM (which would be different for different users), to add an A record for the Public IP address of the controller. Obtain the Public IP address of the controller from the EC2 console, as shown in Figure 6.

The procedure to add an A record would be different for different domain registrars. Essentially, the DNS Zone File needs to be modified for the external DNS NOSQLSEARCH.COM A record. In the Edit Zone Record, specify the Kubernetes cluster controller Public IP address in the Points To field and click Finish, as shown in Figure 7. Click Save Changes to save the modifications to the A record.

Adding an A (Host) record for Domain
Figure 7: Adding an A (Host) record for Domain

The A record should list the Points To as the Public IP address of the Controller instance (see Figure 8).

Modified A Record for domain
Figure 8: Modified A Record for domain

Downloading kubectl Binaries

Next, download kubectl, which is a command line tool to run commands against a Kubernetes cluster.

sudo wget <kubectl binary download URL>
sudo chmod +x ./kubectl

The kubectl command reference may be consulted for the subsequent commands.

Listing Cluster Nodes

List the nodes in the Kubernetes cluster:

./kubectl get nodes

The single master node and three worker nodes get listed, as shown in Figure 9.

Listing Kubernetes Cluster Nodes
Figure 9: Listing Kubernetes Cluster Nodes
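The node listing can also be checked mechanically, for example by counting the Ready nodes. A sketch using sample output in place of a live cluster (the node names below are made up; depending on the Kubernetes version, a controller's STATUS may instead read Ready,SchedulingDisabled):

```shell
# Sample `kubectl get nodes` output standing in for a live cluster.
cat > nodes.txt <<'EOF'
NAME                          STATUS    AGE
ip-10-0-0-50.ec2.internal     Ready     5m
ip-10-0-0-51.ec2.internal     Ready     5m
ip-10-0-0-52.ec2.internal     Ready     5m
ip-10-0-0-53.ec2.internal     Ready     5m
EOF
# Skip the header row and count nodes whose STATUS column is Ready.
awk 'NR > 1 && $2 == "Ready"' nodes.txt | wc -l   # prints 4
```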


In this tutorial, we discussed installing Kubernetes on AWS using a CloudFormation stack. In a subsequent article, we shall install Jenkins on the Kubernetes cluster.

Copyright 2017 © QuinStreet Inc. All Rights Reserved