Kubernetes is a Docker container orchestration platform that offers several benefits, including creating a service from a Docker image, load balancing across multiple cluster nodes, scaling, rolling updates, monitoring, and logging. In this two-article tutorial, we shall discuss using Jenkins on Kubernetes. In this first article, we shall start off by installing Kubernetes on CoreOS on AWS, using an AWS CloudFormation stack to create the Kubernetes cluster. This article has the following sections:
- Setting the Environment
- Configuring AWS Credentials
- Creating an EC2 Key Pair
- Installing the CoreOS Application Signing Key
- Installing the kube-aws CloudFormation Generator
- Creating a KMS Key
- Creating a CloudFormation Stack for the Kubernetes Cluster
- Configuring an External DNS
- Downloading kubectl Binaries
- Listing Cluster Nodes
This tutorial requires the following software to be installed:
- AWS Command Line Interface (CLI)
- The CloudFormation Generator tool kube-aws
- The kubectl Binaries
- CoreOS Application Signing Key
The Amazon Linux AMI has the AWS Command Line Interface (CLI) pre-installed. Create an Amazon EC2 instance using the Amazon Linux AMI. The Security Group Inbound/Outbound rules should be set to allow traffic for all protocols in the port range 0-65535 from any source and to any destination. Obtain the Public IP address or the Public DNS name of the EC2 instance, as shown in Figure 1.
Figure 1: Amazon Linux EC2 Instance Public IP Address
SSH Login to the EC2 instance by using the key pair used to launch the EC2 instance and the Public DNS name (or the Public IP Address).
ssh -i "jenkins.pem" ec2-user@<Public DNS name>
Another prerequisite is to register a domain name with a domain registrar, to be used as the external DNS name at which the Kubernetes cluster API server is made accessible. We have used the external DNS name NOSQLSEARCH.COM. Because NOSQLSEARCH.COM is already registered, use a different domain name and substitute it in the subsequent settings in which NOSQLSEARCH.COM is used.
Create a set of AWS security credentials, which are used to configure the EC2 instance on which the CloudFormation stack is launched. In the AWS Management Console, click Security Credentials for the user account and click Create New Access Key to create an access key. Copy the AWS Access Key ID and the AWS Secret Access Key. After SSH logging into the Amazon Linux instance, run the following command to configure the instance with the AWS credentials:
aws configure
Specify the Access Key ID and Secret Access Key when prompted. Specify the default region name (us-east-1) and the output format (json), as shown in Figure 2.
Figure 2: Configuring AWS Credentials
An EC2 Key pair is required as a cluster parameter in creating a CloudFormation stack for the Kubernetes cluster. To create the EC2 key pair, the AWS credentials need to be configured, which we already did. Run the following command to create a key pair called kubernetes-coreos and save it as kubernetes-coreos.pem.
aws ec2 create-key-pair --key-name kubernetes-coreos --query 'KeyMaterial' --output text > kubernetes-coreos.pem
The access permissions on the key pair file need to be restricted to read-only access by the owner, by using 400 as the mode.
chmod 400 kubernetes-coreos.pem
The EC2 key pair gets created and the mode gets set to 400, as shown in Figure 3.
Figure 3: Creating an EC2 Key Pair
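As a quick sanity check, the resulting file mode can be verified from the shell; a minimal sketch, using a throwaway file in place of the real key (stat -c is GNU coreutils):

```shell
# Create a stand-in for the downloaded private key file
touch demo-key.pem
# Restrict it to owner read-only, as required for SSH private keys
chmod 400 demo-key.pem
# Print the octal mode to confirm
stat -c %a demo-key.pem   # 400
```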
As of March 2016, CoreOS applications hosted on GitHub and packaged into AppC images are signed with the CoreOS Application Signing Key. Import the CoreOS Application Signing Key.
gpg2 --keyserver pgp.mit.edu --recv-key FC8A365E
Next, validate the key by outputting its fingerprint.
gpg2 --fingerprint FC8A365E
The key fingerprint should be 18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E.
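The comparison can also be done mechanically. gpg2's --with-colons option emits machine-readable output in which the fpr: record carries the fingerprint; the sketch below parses a sample of that output (the embedded line stands in for a real gpg2 invocation, which requires the key to have been imported):

```shell
# Expected fingerprint of the CoreOS Application Signing Key, without spaces
EXPECTED="18AD5014C99EF7E3BA5F6CE950BDD3E0FC8A365E"
# In a live session: GPG_OUT=$(gpg2 --with-colons --fingerprint FC8A365E)
# Sample fpr: record standing in for the real output:
GPG_OUT="fpr:::::::::18AD5014C99EF7E3BA5F6CE950BDD3E0FC8A365E:"
# Field 10 of the fpr: record is the fingerprint
ACTUAL=$(printf '%s\n' "$GPG_OUT" | awk -F: '/^fpr:/ {print $10; exit}')
if [ "$ACTUAL" = "$EXPECTED" ]; then echo "Fingerprint OK"; else echo "Fingerprint MISMATCH"; fi
```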
Download the latest release tarball and detached signature (.sig) for kube-aws.
wget https://github.com/coreos/coreos-kubernetes/releases/download/v0.7.1/kube-aws-linux-amd64.tar.gz
wget https://github.com/coreos/coreos-kubernetes/releases/download/v0.7.1/kube-aws-linux-amd64.tar.gz.sig
Validate the tarball's GPG signature.
sudo gpg2 --verify kube-aws-linux-amd64.tar.gz.sig kube-aws-linux-amd64.tar.gz
The Primary key fingerprint should be 18AD 5014 C99E F7E3 BA5F 6CE9 50BD D3E0 FC8A 365E. Extract the binary from the tar.gz file.
tar zxvf kube-aws-linux-amd64.tar.gz
Move the kube-aws binary to a directory on the PATH, such as /usr/local/bin.
sudo mv linux-amd64/kube-aws /usr/local/bin
Next, create a KMS key by using the AWS Command Line Interface (CLI). The KMS key is used to encrypt/decrypt cluster TLS assets and is identified by an Arn string. Specify the region (us-east-1) with the --region option.
aws kms --region=us-east-1 create-key --description="kube-aws assets"
A KMS Key gets created. Copy the KeyMetadata.Arn string, which starts with arn:aws:kms:<region>, in which <region> is a variable. The KeyMetadata.Arn string is to be used later to initialize the cluster CloudFormation stack.
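Instead of copying the Arn out of the JSON output by hand, the AWS CLI's --query option can extract it directly (shown in a comment below, since it needs live credentials); the sketch then pulls the region back out of a sample Arn to illustrate the string's structure:

```shell
# With configured credentials, the Arn can be captured in one step:
#   KMS_KEY_ARN=$(aws kms create-key --region us-east-1 \
#       --description "kube-aws assets" \
#       --query KeyMetadata.Arn --output text)
# Sample Arn in the KeyMetadata.Arn format (account ID and key ID are made up):
KMS_KEY_ARN="arn:aws:kms:us-east-1:123456789012:key/51627475-67cc-4ac5-b378-05a833111116"
# The Arn is colon-separated: arn:aws:kms:<region>:<account-id>:key/<key-id>
echo "$KMS_KEY_ARN" | cut -d: -f4   # us-east-1
```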
The procedure to create a CloudFormation stack for the Kubernetes cluster is as follows:
- Create an asset directory.
- Initialize the CloudFormation stack.
- Render the contents of the asset directory.
- Optionally, customize the cluster in the cluster.yaml file.
- Validate the CloudFormation stack and the cloud-config user data files.
- Launch the CloudFormation stack.
Create a directory on the Amazon Linux EC2 instance for the generated assets and cd (change directory) to the asset directory.
mkdir coreos-cluster
cd coreos-cluster
Initialize the CloudFormation stack using the Amazon EC2 key pair, KMS Key Arn string, and external DNS name.
kube-aws init --cluster-name=kubernetes-coreos-cluster \
 --external-dns-name=NOSQLSEARCH.COM \
 --region=us-east-1 \
 --availability-zone=us-east-1c \
 --key-name=kubernetes-coreos \
 --kms-key-arn="arn:aws:kms:us-east-1:672593526685:key/51627475-67cc-4ac5-b378-05a833111116"
The CloudFormation stack assets get created with the configuration file cluster.yaml created in the coreos-cluster directory.
Render (generate) the cluster assets, consisting of the templates and credentials that are used to create, update, and interact with the Kubernetes cluster:
kube-aws render
The CloudFormation template stack-template.json gets created; we shall use it to create the Kubernetes cluster. The cluster.yaml file may be customized to add or modify cluster settings, including the cluster name, the external DNS name, whether to automatically create a Route 53 A record, the hosted zone, the AWS region and availability zone, the EC2 instance type for the instances on which the Kubernetes controller and worker nodes run, the number of worker nodes, and the Kubernetes version. As an example, set workerCount to 3 by editing cluster.yaml with vi. Modifying cluster.yaml does not require the assets to be re-rendered, but if any of the user data files or the stack template are modified, the cluster assets must be re-rendered with kube-aws render.
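For scripted setups, the same edit can be made non-interactively with sed; a minimal sketch, using a throwaway file standing in for the generated cluster.yaml (the real file contains many more settings):

```shell
# Stand-in for the cluster.yaml generated by kube-aws init
cat > cluster.yaml <<'EOF'
clusterName: kubernetes-coreos-cluster
externalDNSName: NOSQLSEARCH.COM
workerCount: 1
EOF
# Set the number of worker nodes to 3
sed -i 's/^workerCount: .*/workerCount: 3/' cluster.yaml
grep '^workerCount' cluster.yaml   # workerCount: 3
```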
Validate the CloudFormation stack and the cloud-config user data files:
kube-aws validate
The output should indicate that the user data is valid and the stack template is valid. Launch the CloudFormation stack:
kube-aws up
A CloudFormation stack with the same name as the one being created must not already exist, or the error shown in Figure 4 gets generated. It takes a few minutes for the CloudFormation stack to get created and for the Kubernetes controller and worker nodes to become available. The kube-aws up command does not complete until the cluster has launched. The controller IP gets listed when the cluster is launched, as shown in Figure 4. Find the cluster status with the following command:
kube-aws status
Using the controller IP, SSH into the Kubernetes controller node (the login user on CoreOS is core):
ssh -i "kubernetes-coreos.pem" core@<controller Public IP>
The Kubernetes controller command prompt is displayed (see Figure 4).
Figure 4: Launching the CloudFormation Stack
The AWS CloudFormation>Stacks should list the stack created, as shown in Figure 5.
Figure 5: CloudFormation Stack for Kubernetes Cluster
For a single Kubernetes master (controller) and three worker nodes, the AWS EC2 instances are shown in Figure 6.
Figure 6: Kubernetes Controller and Worker Node Instances on AWS EC2
Next, configure the external DNS, NOSQLSEARCH.COM (which would be different for different users), to add an A record for the Public IP address of the controller. Obtain the Public IP address of the controller from the EC2 console, as shown in Figure 6.
The procedure to add an A record would be different for different domain registrars. Essentially, the DNS Zone File needs to be modified for the external DNS NOSQLSEARCH.COM A record. In the Edit Zone Record, specify the Kubernetes cluster controller Public IP address in the Points To field and click Finish, as shown in Figure 7. Click Save Changes to save the modifications to the A record.
Figure 7: Adding an A (Host) record for Domain
The A record should list the Points To as the Public IP address of the Controller instance (see Figure 8).
Figure 8: Modified A Record for domain
Next, download kubectl, which is a command line tool to run commands against a Kubernetes cluster.
sudo wget https://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/linux/amd64/kubectl
sudo chmod +x ./kubectl
The kubectl command reference may be consulted when running subsequent commands.
List the nodes in the Kubernetes cluster:
./kubectl get nodes
The single master node and three worker nodes get listed, as shown in Figure 9.
Figure 9: Listing Kubernetes Cluster Nodes
In this tutorial, we discussed installing Kubernetes on AWS using a CloudFormation stack. In a subsequent article, we shall install Jenkins on the Kubernetes cluster.