Overview

This post is meant to help you get a Kubernetes cluster up and running with kubeadm in no time. This guide uses Vultr to deploy two servers, one master and one worker; however, you can deploy as many servers as you would like.

So what is kubeadm?

Kubeadm is a tool developed by the Kubernetes project that creates a minimum viable cluster by following best practices. It only bootstraps your cluster; it does not provision machines. Things such as add-ons, the Kubernetes dashboard, and monitoring solutions are not something kubeadm will do for you.

Hosting?

With this guide you can essentially deploy this Kubernetes cluster on any cloud provider.

I however decided to go with Vultr to provision the servers for this cluster. There are a few requirements for the servers we will deploy.

  • One or more machines running a deb/rpm-compatible OS (We will be using CentOS)
  • 2 GB or more of RAM per machine
  • 2 CPUs or more on the master
  • Full network connectivity among all machines in the cluster
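
Before provisioning, you can quickly confirm a machine meets these requirements with standard Linux tools:

```shell
# CPU count (the master needs 2 or more)
nproc

# Total RAM in MB (should be roughly 2048 or more)
free -m | awk '/^Mem:/ {print $2}'
```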

So the two servers I deployed were the following:

  • 1 CPU 2GB RAM with CentOS 7 (Worker node)
  • 2 CPU 4GB RAM with CentOS 7 (Master node)

With this amount of RAM on both servers, Kubernetes will have plenty of room to breathe.

Configuring the worker and master

Here are the steps we will have to take on both the master and worker nodes:

  • Yum update & packages
  • Install docker
  • Disable selinux
  • Disable swap
  • Disable Firewall
  • Update IPTables
  • Install kubelet/kubeadm/kubectl
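
One item from the list above, disabling the firewall, doesn't get its own section later, so here it is for completeness. On CentOS 7 the default firewall is firewalld, which can be stopped and disabled via systemctl (in a production setup you would open the required Kubernetes ports instead of disabling the firewall entirely):

```shell
# Stop firewalld now and keep it from starting on boot
systemctl stop firewalld
systemctl disable firewalld
```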

Installing Docker

We’ll be using version 1.14 of Kubernetes in this tutorial. For this version, Kubernetes recommends running Docker 18.06.2. Be sure to check the recommended Docker version for your version of Kubernetes.

To do this, we will add the Docker repository to yum and install 18.06.2 specifically.

Once Docker is installed, we’ll configure the Docker daemon with the settings recommended by Kubernetes.

# Add the Docker repository.
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Install Docker CE.
yum update && yum install docker-ce-18.06.2.ce

# Create the /etc/docker directory.
mkdir /etc/docker

# Set up the daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl enable docker.service
systemctl restart docker
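
Once Docker is back up, it's worth confirming that the daemon actually picked up the systemd cgroup driver, since a mismatch between Docker's and the kubelet's cgroup drivers is a common cause of kubeadm failures:

```shell
# Should print something like "Cgroup Driver: systemd"
docker info | grep -i "cgroup driver"
```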

Disable SELinux

Since we are using CentOS, we need to disable SELinux enforcement (set it to permissive mode). This is required to allow containers to access the host filesystem, which pod networks need.

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Disable Swap

Swap needs to be disabled to allow kubelet to work properly.

sed -i '/swap/d' /etc/fstab
swapoff -a

Update IPTables

Kubernetes recommends ensuring net.bridge.bridge-nf-call-iptables is set to 1. On RHEL/CentOS 7 there have been issues with traffic being routed incorrectly because it bypasses iptables.

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
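
After sysctl --system runs, you can confirm the settings took effect; both keys should report 1. If sysctl complains that the keys don't exist, the br_netfilter kernel module may need to be loaded first (modprobe br_netfilter):

```shell
# Each of these should print "... = 1"
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
```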

Install kubelet/kubeadm/kubectl

We will need to add the Kubernetes repo to yum. Once we do that, we just need to run the install command and enable kubelet.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
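
A quick sanity check after the install is to print the versions; kubeadm, kubelet, and kubectl should all report the same minor version (v1.14.x in this guide):

```shell
kubeadm version -o short
kubelet --version
kubectl version --client --short
```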

Creating the cluster

We have now fully configured both the master and worker nodes, so we can initialize our master node and join our worker nodes to it!

Note that if you wanted to add more worker nodes the process above would have to be done on all of those nodes as well.

Master Node setup

We want to initialize our master node by running the following command. You’ll want to substitute your master node’s IP address in the command below. Additionally, we’ll pass in --pod-network-cidr=10.244.0.0/16, the default range Flannel expects, to make things easier later when we install the Flannel network overlay.

kubeadm init --apiserver-advertise-address=YOUR_IP_HERE --pod-network-cidr=10.244.0.0/16

This may take a while to complete, but once it finishes you should see something like this at the end of the output:

kubeadm join YOUR_IP:6443 --token 4if8c2.pbqh82zxcg8rswui \
    --discovery-token-ca-cert-hash sha256:a0b2bb2b31bf7b06bb5058540f02724240fc9447b0e457e049e59d2ce19fcba2

This command is what your worker nodes need to execute to join the cluster so take note of it.

Next up on the master node, copy the admin kubeconfig file into $HOME/.kube so you can execute kubectl commands:

mkdir $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config

Finally we have Flannel.

Flannel is what enables pod-to-pod communication. There are various other network overlays you could install, but for simplicity I decided to go with Flannel.

To install Flannel, run the following command:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

With the config copied over and Flannel installed, you should be able to run kubectl get cs and get a response:

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}

Your master node is set and ready to go! Onto the worker node!

Worker Node

At this point there is no extra work needed on the worker node. All we need to do is run the kubeadm join command that we got from our kubeadm init.

If by some chance you misplaced the kubeadm join command, you can generate another one on the master node by running:

kubeadm token create --print-join-command

Once you run the kubeadm join command, running kubectl get nodes on the master should show output like this:

NAME        STATUS   ROLES    AGE    VERSION
k8-master   Ready    master   107m   v1.14.2
k8-worker   Ready    <none>   45m    v1.14.2

Wrapping up

Just like that, you have bootstrapped a Kubernetes cluster using kubeadm. One takeaway: you could also do this over a private network. Vultr (along with other cloud providers) allows for private networks!

Also, if you want to execute kubectl commands against your cluster from your local machine, you can install kubectl locally and pull the .kube/config file down from the cluster to $HOME/.kube/config on your local machine.
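
As a sketch, assuming you have SSH access as root to the master (substitute your master's IP), copying the config down could look like this:

```shell
# Copy the admin kubeconfig from the master to your local machine
mkdir -p $HOME/.kube
scp root@YOUR_MASTER_IP:/etc/kubernetes/admin.conf $HOME/.kube/config

# Verify you can reach the cluster from your local machine
kubectl get nodes
```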

Hopefully this guide helps you traverse kubeadm and gets you playing with kubernetes in no time!

Useful links