The final setup option I wanted to cover is kubeadm. Kubeadm is a bit more advanced than Minikube in the sense that it is used for clustering. This means that we are no longer talking about a single-node setup with all the components on one node. We will need to decide on a setup path here. The first step is to select which machine will be the master and let the rest be worker nodes. In my case I will use the first node as the master node, and 02 and 03 will be used as worker nodes. As with Minikube I will be making use of the documentation available from Kubernetes itself. You can find it here.
The next step is to set up the 3 nodes with a supported OS. I will use CentOS again, this time the latest 7.x release, which is 7.7.
Prerequisites
There are some other prerequisite steps to take care of before we start. These steps need to be done on each host you are installing:
- Verify that the nodes have different MAC addresses.
- The product_uuid must be different for each node.
- Swap must be disabled, otherwise the kubelet may not work properly.
- Make sure the required firewall ports are open. You can find a reference here for controller and worker nodes (a firewalld example follows at the end of this section).
From what I can see, the first two points specifically come down to the use of templates to deploy your VMs.
This is the output from one of the nodes; you can see the MAC address on the third-to-last line and the UUID on the last line. Each node has a different UUID and MAC address in my case, so time to continue.
You can check the MAC addresses via ip link, and you need to do a cat on /sys/class/dmi/id/product_uuid to verify that they indeed are different. The reason it is so important that these values are different is that K8s uses them to identify the nodes.
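For reference, these are the two checks run on each node; note that the product_uuid file typically needs root to read:
# List the network interfaces and their MAC addresses
ip link
# Print the product UUID of the machine (needs root on most systems)
sudo cat /sys/class/dmi/id/product_uuid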
To disable swap you need to edit fstab:
- Run sudo vi /etc/fstab
- Comment out the line that contains the swap drive and save the file.
- Verify that swap shows 0 by checking free -h from the CLI.
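If you prefer doing this from the command line end to end, something like the following achieves the same result. Note that the fstab change alone only takes effect after a reboot, which is why swapoff -a is useful as well:
# Turn swap off immediately (fstab changes alone only apply after a reboot)
sudo swapoff -a
# Comment out the swap entry in /etc/fstab so it stays off
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
# Verify that swap shows 0
free -h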
The next part has to do with the firewall. This is shown on the last line of the first screenshot above. It is basically about setting iptables to legacy mode. This is apparently one reason why RHEL/CentOS 8 is not supported: it does not support the legacy mode.
To do this, run the following command:
update-alternatives --set iptables /usr/sbin/iptables-legacy
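While we are on the topic of the firewall, here is a sketch of how the ports from the reference linked earlier can be opened with firewalld on CentOS. This is based on the port list in the Kubernetes documentation rather than on my own run, so double-check it against the reference:
# On the controller (master) node
sudo firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
sudo firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd server client API
sudo firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
sudo firewall-cmd --permanent --add-port=10251/tcp       # kube-scheduler
sudo firewall-cmd --permanent --add-port=10252/tcp       # kube-controller-manager
sudo firewall-cmd --reload
# On the worker nodes
sudo firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
sudo firewall-cmd --permanent --add-port=30000-32767/tcp # NodePort services
sudo firewall-cmd --reload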
CRI installation
Next we need the CRI to be installed. We will, as mentioned in the lab considerations, go with Docker here. The easiest way is via a script that is already on the Kubernetes page. You can find the script here. This must be done on each of the three nodes in our cluster.
# Install Docker CE
## Set up the repository
### Install required packages.
yum install -y yum-utils device-mapper-persistent-data lvm2
### Add Docker repository.
yum-config-manager --add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
## Install Docker CE.
yum update -y && yum install -y \
containerd.io-1.2.10 \
docker-ce-19.03.4 \
docker-ce-cli-19.03.4
## Create /etc/docker directory.
mkdir /etc/docker
# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart Docker
systemctl daemon-reload
systemctl restart docker
So, on each of your K8s hosts, create a file and copy the script into it. Do a chmod +x on the file and then run it with sudo privileges.
Finally run the command sudo ./cri.sh and let the script do its thing. It should not take more than a few moments to run.
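In other words, something like this on each node (cri.sh is simply the name I gave the file):
vi cri.sh        # paste the script from above and save
chmod +x cri.sh
sudo ./cri.sh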
Kubectl, kubelet, and kubeadm installation
For the next part we need to install three tools: kubectl, kubelet, and kubeadm.
It is again done via a script that can be found here.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
Again the steps are similar to the CRI installation above: create the script file, copy the script into it, do a chmod +x on the file, and then run it with sudo.
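Before moving on you can quickly confirm the tools are in place. A word of warning: kubelet will restart in a loop until kubeadm init has been run, so do not worry if its status looks unhappy at this point:
kubeadm version
kubectl version --client
systemctl status kubelet    # keeps restarting until the cluster is initialized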
After running this on all three we have done the common actions for now. The next step is to create the cluster with kubeadm.
Create the Cluster
Now that most things are installed on each node, it is time to use kubeadm to create the control node. This is the node that you select to run the cluster administration components and also the etcd database. In my case I just want the one controller for the cluster. In a production environment it would most likely make sense to have at least two controllers (ideally three, so etcd keeps quorum) with a load balancer in front of them.
It is recommended to run a sudo yum update before starting this part.
Another thing I found was that I had not enabled docker.service, which resulted in a couple of fun errors. I fixed it by running sudo systemctl enable docker.service.
I then went on to initialize the cluster via kubeadm. For me and my network settings that meant running the following command:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.0.102.144
The --pod-network-cidr part is relevant for flannel and must be set to 10.244.0.0/16 for flannel's default configuration to work; the network add-on itself will be covered after the init of the cluster. The --apiserver-advertise-address part makes sure that communication goes to the correct master node. This matters mostly when you have multiple network adapters or adapters that are using DHCP.
The kubeadm init finished successfully, with some warnings.
At the bottom of the wall of text there are some additional instructions to carry out:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Also at the bottom you can see that there is a line to join worker nodes to the cluster. For now it’s a good thing to copy this somewhere safe, like a notepad (or WordPress in my case :D. Just kidding, not a good place to put tokens for certs and so on).
kubeadm join 10.0.102.144:6443 --token t3tgr1.h71xhtzmk1bk3w4n \
    --discovery-token-ca-cert-hash sha256:9bc9ca08f9392c38f7e3ba64bd1fbe6af3637c19cad7d0dfa5ead55fdd4ec954
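If you lose this output you do not need to redo anything; kubeadm can print a fresh join command on the master node at any time:
sudo kubeadm token create --print-join-command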
The join command should be run on each of the two worker nodes as root, but first we need to do something about the networking part. We need to deploy a CNI (Container Network Interface) network plugin to allow pods to talk to each other. This is also a requirement for the CoreDNS service. I want to cover CNI in more detail a bit later; for now I just want to finish the installation of the cluster. In my case I went with flannel. It was quite simple, I only needed to run a single command:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
Note: We are still only doing things on the Master node for now. Nothing in the “Create the Cluster” topic so far needs to be run on the worker nodes.
You can confirm that the installation was successful by running the following command:
kubectl get pods --all-namespaces
This will give you an output like this:
Make sure the services are running; this can take a moment, so be patient.
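If you would rather watch the pods come up than re-run the command, kubectl can watch for changes:
# Watch the system pods until coredns and the flannel pods show Running
kubectl get pods --all-namespaces -w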
Next it is time to use the kubeadm join command from above. We need to join the two worker nodes to the cluster, so head to the first of the two worker nodes. Do run this as root and not just with sudo, as the preflight checks will otherwise fail.
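As a sketch, this is roughly what it looks like on a worker node. The token and hash are the ones from my init output above; yours will differ:
# Switch to a root shell, then paste the saved join command
sudo -i
kubeadm join 10.0.102.144:6443 --token t3tgr1.h71xhtzmk1bk3w4n \
    --discovery-token-ca-cert-hash sha256:9bc9ca08f9392c38f7e3ba64bd1fbe6af3637c19cad7d0dfa5ead55fdd4ec954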
So it looks good. Repeat the step on the second worker node.
To verify that all is good, you can run kubectl get nodes on the master node, and you should get a result like this:
This also looks good.
So let's try to run a test.
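As a simple smoke test, something like this creates an nginx deployment (the name matches the delete command below) and shows which node the pod lands on:
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide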
To remove the deployment, issue the command kubectl delete deployment/nginx.
That completes this post on deploying a cluster with kubeadm.