Kubernetes was introduced in an earlier article, "Getting Started with Kubernetes on Amazon Web Services (AWS)." Kubernetes was also discussed in another article, "Using Kubernetes (K8s) on IBM Bluemix." Kubernetes may be installed on bare metal on almost any OS including Fedora, CentOS, Ubuntu, and CoreOS for development purposes.
Kubernetes installation on bare metal involves running several commands for setting up a master node, worker nodes, pod network, and etcd.
Kubernetes 1.4 introduces a new tool called kubeadm for bootstrapping a Kubernetes cluster. kubeadm bootstraps a Kubernetes cluster with just two commands: after installing Docker, kubectl, and kubelet, the master node is started with kubeadm init and worker nodes are added with kubeadm join.
In this article, we shall use the following procedure to install and bootstrap a Kubernetes cluster and subsequently test the cluster:
- Start three new Ubuntu instances on Amazon EC2.
- On all of the Ubuntu instances, install Docker, kubeadm, kubectl, and kubelet.
- From one of the Ubuntu instances, initialize the Kubernetes cluster master with kubeadm init.
- Apply Calico Pod network policy kubeadm/calico.yaml.
- Join the other two Ubuntu instances (nodes) with master with kubeadm join --token=<token> <master ip>.
- On the master, list the three nodes with kubectl get nodes.
- Run an application on master:
kubectl -s http://localhost:8080 run nginx --image=nginx --replicas=3 --port=80
- List the pods:
kubectl get pods -o wide
- Uninstall the Kubernetes cluster.
This article has the following sections:
- Setting the Environment
- Installing Docker, kubeadm, kubectl, and kubelet on Each Host
- Initializing the Master
- Installing the Calico Pod Network
- Joining Nodes to the Cluster
- Installing a Sample Application
- Uninstalling the Cluster
- Further Developments in kubeadm
The kubeadm tool requires the following machines, each running Ubuntu 16.04+, HypriotOS v1.0.1+, or CentOS 7:
- One machine for the master node
- One or more machines for the worker nodes
At least 1GB of RAM is required on each of the machines. We have used three Ubuntu machines running on Amazon EC2 to bootstrap a cluster with a single master node and two worker nodes. The three Ubuntu machines are shown in Figure 1.
Figure 1: Ubuntu machines
In this section we shall install Docker, kubelet, kubectl, and kubeadm on each of the three machines. The components installed are discussed in Table 1.
| Component | Description |
|---|---|
| Docker | The container runtime. Version 1.11.2 is recommended; v1.10.3 and v1.12.1 also work. Required on all machines in the cluster. |
| kubelet | The core component of Kubernetes that runs on all the machines in the cluster and starts containers and Pods. Required on all machines in the cluster. |
| kubectl | The command-line tool to manage a cluster. Required only on the master node, but useful if installed on all nodes. |
| kubeadm | The tool to bootstrap a cluster. Required on all machines in the cluster. |

Table 1: Components to Install
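Once the installation steps below are complete, a quick way to confirm that all four components are on the PATH of each machine is a small shell loop. This is a convenience sketch, not part of the original procedure:

```shell
# Report whether each required component is on the PATH.
for tool in docker kubelet kubeadm kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: installed"
  else
    echo "$tool: not found"
  fi
done
```

Any component reported as "not found" should be installed before proceeding.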
Obtain the public IP address of each of the three machines and log in to each machine with SSH:
ssh -i "docker.pem" email@example.com
ssh -i "docker.pem" firstname.lastname@example.org
ssh -i "docker.pem" email@example.com
The commands to install the binaries must be run as root; therefore, switch to the root user:
sudo su -
Run the following commands on each of the machines:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
The first command adds the GPG key for the Kubernetes package repository, and the cat command adds the repository to the apt sources, as shown in the output in Figure 2.
Figure 2: Downloading packages for Kubernetes
The apt-get update command downloads the package lists from the repositories and updates them with the newest versions of the packages.
The output is shown in Figure 3.
Figure 3: Updating repository packages
Next, install Docker:
# Install Docker if you don't have it already.
apt-get install -y docker.io
Docker gets installed, as shown in the command output in Figure 4.
Figure 4: Installing Docker
And, subsequently install kubelet (core component of Kubernetes), kubeadm (bootstrapping tool), kubectl (cluster management tool), and kubernetes-cni (network plugin):
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
The output from the preceding commands is shown in Figure 5.
Figure 5: Installing kubelet, kubeadm, kubectl, and kubernetes-cni
Next, initialize the master, on which the etcd database and the API server run; the kubelet starts the Pods that run these components. Run the following command, which auto-detects the IP address:
kubeadm init
As shown in the command output, first some pre-flight checks are run to validate the system state. Subsequently, a token is generated (under master/tokens) that serves as a mutual authentication key for worker nodes that want to join the cluster. Next, a self-signed Certificate Authority key and certificate are generated to provide an identity to each node in the cluster, and an API server key and certificate are created for communication with clients. A util/kubeconfig file is created for the kubelet to connect to the API server, and another util/kubeconfig file is created for administration. Subsequently, the API client configuration is created. The output from the kubeadm init command is shown in Figure 6.
Figure 6: Running kubeadm init
All control plane components become ready. The first node becomes ready and a test deployment is made. Essential add-on components kube-discovery, kube-proxy, and kube-dns also get created, as shown in the command output in Figure 7. The Kubernetes master gets initialized successfully. A command with the following syntax is generated; it must be run on machines (nodes) that are to join the cluster.
kubeadm join --token=<token> <IP Address of the master node>
The preceding command must be copied and kept for subsequent use on worker nodes.
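The token embedded in the join command is the shared secret for mutual authentication, so it is worth verifying that a saved token was copied intact. Tokens generally have the form <6 characters>.<16 characters>, using lowercase letters and digits only; the token value below is a made-up example, not one issued by kubeadm:

```shell
# Hypothetical example token; the real one is printed by kubeadm init.
token="abcdef.0123456789abcdef"

# Validate the <6>.<16> lowercase alphanumeric format.
if echo "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "token format OK"
else
  echo "token format invalid"
fi
```

A token that fails this check was most likely truncated or mistyped when it was copied.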
Figure 7: Kubernetes master initialized
By default, the master node is not schedulable for workloads; it is marked with the "dedicated" taint. The master node can be made schedulable with the following command:
kubectl taint nodes --all dedicated-
The kubeadm command supports some other options (see Table 2) that we did not need to use but that can be used to override the defaults.
| Option | Description | Default |
|---|---|---|
| --skip-preflight-checks | Skips the preflight checks. | Preflight checks are performed |
| --use-kubernetes-version | Sets the Kubernetes version to use. | v1.5.1 |
| --api-advertise-addresses | The kubeadm init command auto-detects the IP address of the default network interface and uses it to generate certificates for the API server. This parameter may be used to override the default with one or more IP addresses on which the API server is to be validated. | Auto-detected |
| --api-external-dns-names | Overrides the default network interface with one or more hostnames on which the API server is to be validated. Only one of IP addresses or external DNS names may be used. | |
| --cloud-provider | Specifies a cloud provider; one of "aws", "azure", "cloudstack", "gce", "mesos", "openstack", "ovirt", "rackspace", or "vsphere". Cloud provider configuration may be provided in the /etc/kubernetes/cloud-config file. Using a cloud provider also has the advantage of enabling persistent volumes and load balancing. | No auto-detection of a cloud provider |
| --pod-network-cidr | Allocates network ranges (CIDRs) to each node; useful for certain networking solutions, including Flannel, and for cloud providers. | |
| --service-cidr | Overrides the subnet Kubernetes uses to assign IP addresses to services. The /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file must also be modified. | 10.96.0.0/12 |
| --service-dns-domain | Overrides the DNS name suffix for assigning services DNS names of the format <service_name>.<namespace>.svc.cluster.local. The /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file must also be modified. | cluster.local |
| --token | Specifies the token to be used for mutual authentication between the master and the nodes joining the cluster. | Auto-generated |

Table 2: kubeadm Command Options
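For illustration only, several of these options could be combined on the command line. The values below are hypothetical and are not used in this article's setup, which runs a plain kubeadm init:

```shell
# Hypothetical invocation combining Table 2 options; not run in this article.
kubeadm init \
  --use-kubernetes-version=v1.5.1 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-dns-domain=cluster.local \
  --token=abcdef.0123456789abcdef
```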
For Pods to be able to communicate with each other, a Pod network add-on must be installed. Calico provides a kubeadm-hosted install configuration in the form of a ConfigMap at http://docs.projectcalico.org/master/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml that we shall use in this section to install a Pod network. Run the following command on the master node to install the Pod network:
kubectl apply -f http://docs.projectcalico.org/master/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml
Alternatively, download the calico.yaml and copy to the master node:
scp -i "docker.pem" calico.yaml firstname.lastname@example.org:~
Subsequently, run the following command:
kubectl apply -f calico.yaml
Calico and a single node etcd cluster get installed, as shown in Figure 8.
Figure 8: Installing Calico Policy
Subsequently, list all Pods in all Kubernetes namespaces:
kubectl get pods --all-namespaces
The kube-dns Pod must be running, as listed in Figure 9.
Figure 9: Listing Pods in all namespaces
In this section, we shall join worker nodes to the cluster by using the kubeadm join command, which has the following syntax:
kubeadm join --token=<token> <master-ip>
Optionally, the kubeadm join command may be run with the --skip-preflight-checks option to skip the preliminary validation.
The kubeadm join command uses the supplied token to communicate with the API server and get the root CA certificate, and creates a local key pair. Subsequently, a certificate signing request (CSR) is sent to the API server for signing and the local kubelet is configured to connect to the API server.
Run the kubeadm join command copied from the output of the kubeadm init command on each of the Ubuntu machines that are to join the cluster.
First, SSH log in to the Ubuntu instance/s:
ssh -i "docker.pem" email@example.com
ssh -i "docker.pem" firstname.lastname@example.org
Subsequently, run the kubeadm join command. First, some pre-flight checks are performed and the provided token is validated. Next, node discovery runs: a cluster-info discovery client is created and requests info from the API server, and the cluster-info object it receives is verified against the given token's signature. With the cluster-info signature and contents found valid, node discovery is complete. Node bootstrapping follows, in which a connection is established to the API endpoint https://10.0.0.129:6443. A certificate signing request (CSR) is then made through an API client to obtain a unique certificate for the node. Once the signed certificate is received from the API server, a kubelet configuration file is generated. The "Node join complete" message listed in Figure 10 indicates that the node has joined the cluster.
Figure 10: Joining a node to the cluster
Similarly, run the same command on the other Ubuntu machine. The other node also joins the cluster, as indicated by the output in Figure 11.
Figure 11: Joining second node to cluster
On the master node, run the following command to list the nodes:
kubectl get nodes
The master node and the two worker nodes should get listed, as shown in Figure 12.
Figure 12: Listing Kubernetes cluster nodes
Next, we shall test the cluster. Run the following command to run an nginx-based application consisting of three Pod replicas:
kubectl -s http://localhost:8080 run nginx --image=nginx --replicas=3 --port=80
List the deployments:
kubectl get deployments
List the cluster-wide Pods:
kubectl get pods -o wide
Expose the deployment as a service of type LoadBalancer:
kubectl expose deployment nginx --port=80 --type=LoadBalancer
List the services:
kubectl get services
The output from the preceding commands indicates the nginx deployment was created, and the three Pods run across the two worker nodes in the cluster. A service called "nginx" also gets created, as shown in Figure 13.
Figure 13: Running an nginx Pod cluster
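For reference, the imperative kubectl run and kubectl expose commands above could instead be expressed as a declarative manifest. The sketch below approximates the objects those commands create, assuming the API versions of the Kubernetes release used here (extensions/v1beta1 for Deployments); later releases use apps/v1:

```yaml
# Hypothetical declarative equivalent of the kubectl run/expose commands above.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    run: nginx
```

Saved as, for example, nginx.yaml, the manifest could be applied with kubectl apply -f nginx.yaml to create the same Deployment and Service.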
Copy the cluster IP of the service and run the curl command to invoke the service:
curl <cluster-ip>
The HTML markup from the service gets output, as shown in Figure 14.
Figure 14: Invoking nginx service
To uninstall the cluster installed by kubeadm, run the following command:
kubeadm reset
The cluster gets uninstalled, as shown in Figure 15.
Figure 15: Uninstalling/Resetting Kubernetes cluster
kubeadm has several limitations and is recommended only for development use. The limitations of kubeadm are as follows:
- Only a few operating systems are supported: Ubuntu 16.04+, CentOS 7, HypriotOS v1.0.1+.
- Not suitable for production use.
- Cloud provider integration is experimental.
- Only a cluster with a single master, with a single etcd database on it, is created. High availability is not supported, implying that the master is a single point of failure (SPOF).
- HostPort and HostIP functionality are not supported.
- There are other known issues when kubeadm is used with RHEL/CentOS 7 and VirtualBox.
kubeadm was alpha in Kubernetes v1.5 and has been beta since Kubernetes 1.6. Minor fixes and improvements continue to be made to kubeadm with each new Kubernetes version:
- With Kubernetes 1.7, modifications to cluster-internal resources installed with kubeadm are overwritten when upgrading from v1.6 to v1.7.
- In Kubernetes 1.8, the default bootstrap token created with kubeadm init expires and is deleted 24 hours after being created, to limit the exposure of the valuable credential. The kubeadm join command delegates TLS bootstrapping to the kubelet itself instead of reimplementing the process, and writes the bootstrap KubeConfig file to /etc/kubernetes/bootstrap-kubelet.conf.
In this article, we used the kubeadm tool, available since Kubernetes v1.4, to bootstrap a Kubernetes cluster. First, the required binaries for Docker, kubectl, kubelet, and kubeadm were installed. Subsequently, the kubeadm init command was used to initialize the master node, and the kubeadm join command was used to join worker nodes to the cluster. Finally, a sample nginx application was run to test the cluster.