Installing Kubernetes 1.30

2024-08-09 · Tutorial · Kubernetes · Certification

Kubernetes on your Laptop (3-part series)

This is the third and final post in my “Kubernetes on your Laptop” series. In this post, I will show you how to install Kubernetes on an Ubuntu server. If you haven’t read the previous posts, I recommend you go back and make sure you have an Ubuntu server running on your laptop before proceeding.

At this point, you could go with “Kubernetes The Hard Way” by @kelseyhightower. But if you aren’t ready for that yet, this step-by-step guide will help you get a Kubernetes cluster up and running on your laptop using kubeadm and Cilium.

Most of the installation commands come straight from Kubernetes’ “Installing kubeadm” guide, so I guess you could say this is “Kubernetes The Easy Way”… okay maybe not that easy 😜 but we’ll muscle through it 💪

Let’s go!

Configure the control and worker nodes

For each of the nodes (control and workers), you will need to install the necessary packages and configure the system. Each node needs to have containerd, crictl, kubeadm, kubelet, and kubectl installed.

Start by SSH’ing into each node and making sure you are running as root.

As you may recall from my previous post, the network the virtual machines run on uses DHCP, so each VM is assigned an IP address as it boots. When you start your virtual machines, I recommend starting the control node first so you can note its IP address before bringing up the worker nodes.

sudo -i

System Updates

Kubernetes keeps data such as the contents of Secret objects in tmpfs (memory-backed storage), so “swap to disk” must be disabled to keep that data from ever landing on disk. See @thockin’s comment and the kubeadm prerequisites for more on why swap must be disabled.

Reference: https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
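
To confirm swap is off, swapon should print nothing and free should report 0B of swap.

swapon --show
free -h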

Update the system and install necessary packages.

apt-get update && apt-get upgrade -y

Install containerd

Kubernetes uses the Container Runtime Interface (CRI) to interact with container runtimes, and containerd is the runtime we will use (in the past this was dockerd, by way of the now-removed dockershim).

We will need to install containerd on each node from the Docker repository and configure it to use systemd as the cgroup driver.

Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime

Add the Docker GPG key and repository.

install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  tee /etc/apt/sources.list.d/docker.list > /dev/null

Update packages and install containerd.

apt-get update
apt-get install containerd.io -y

Configure containerd to use systemd as the cgroup driver.

Reference: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
sed -e 's/SystemdCgroup = false/SystemdCgroup = true/g' -i /etc/containerd/config.toml
systemctl restart containerd
systemctl enable containerd
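
To double-check that the cgroup driver change took, grep the config; it should now show SystemdCgroup = true.

grep SystemdCgroup /etc/containerd/config.toml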

Configure the system to load the overlay and br_netfilter kernel modules at boot. containerd uses the overlay filesystem for container layers, and br_netfilter lets iptables see bridged pod traffic.

cat <<EOF | tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Update kernel network settings to allow traffic to be forwarded.

Reference: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#prerequisite-ipv4-forwarding-optional

cat << EOF | tee /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Load the kernel modules and apply the sysctl settings so the changes take effect on the running system.

modprobe overlay
modprobe br_netfilter
sysctl --system
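
You can verify the modules are loaded and the forwarding setting took effect.

lsmod | grep -e overlay -e br_netfilter
sysctl net.ipv4.ip_forward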

Verify containerd is running.

systemctl status containerd

Install kubeadm, kubelet, and kubectl

The following tools are necessary for a successful Kubernetes installation:

  • kubeadm: the command to bootstrap the cluster.
  • kubelet: the component that runs on all of the machines in the cluster and does things like starting pods and containers.
  • kubectl: the command line tool to interact with the cluster.

Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl

Add the Kubernetes repository.

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list
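
If you want to see which patch releases the repository offers before pinning a version, you can ask apt (run apt-get update first so it picks up the new repo).

apt-get update
apt-cache madison kubeadm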

Update the system, install the Kubernetes packages, and lock the versions.

apt-get update
apt-get install -y kubelet=1.30.3-1.1 kubeadm=1.30.3-1.1 kubectl=1.30.3-1.1
apt-mark hold kubelet kubeadm kubectl

Enable kubelet.

systemctl enable --now kubelet
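
Don’t be alarmed if kubelet immediately starts crash-looping; it is waiting for instructions from kubeadm and will settle down once kubeadm init (or kubeadm join) has run. You can watch its logs in the meantime.

journalctl -u kubelet -f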

Install crictl

crictl is a command-line client for CRI-compatible container runtimes such as containerd. It lets you inspect and debug pods, images, and containers directly on a node, without going through the Kubernetes API.

Reference: https://github.com/kubernetes-sigs/cri-tools

Install crictl by downloading the binary to the system.

export CRICTL_VERSION="v1.30.1"
export CRICTL_ARCH=$(dpkg --print-architecture)
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$CRICTL_VERSION/crictl-$CRICTL_VERSION-linux-$CRICTL_ARCH.tar.gz
tar zxvf crictl-$CRICTL_VERSION-linux-$CRICTL_ARCH.tar.gz -C /usr/local/bin
rm -f crictl-$CRICTL_VERSION-linux-$CRICTL_ARCH.tar.gz
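
By default, crictl probes a list of runtime endpoints and may print warnings before it finds the right one. Since containerd is our runtime, you can point crictl directly at its socket (the path below is the default containerd socket location).

cat <<EOF | tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF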

Verify crictl is installed.

crictl version

⛔️ Repeat these steps for each worker node before proceeding.

Install Kubernetes on the control node

The control node is where the Kubernetes control plane components will be installed. This includes the API server, controller manager, scheduler, and etcd. The control node will also run the CNI plugin to provide networking for the cluster.

Control plane installation with kubeadm

Using kubeadm, install Kubernetes with the kubeadm init command. This will install the control plane components and create the necessary configuration files. Note that the --pod-network-cidr range is handed out to pods, so it must not overlap with the network your nodes are on.

Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

kubeadm init --kubernetes-version 1.30.3 --pod-network-cidr 192.168.0.0/16 --v=5

Export the kubeconfig file so the root user can access the cluster.

export KUBECONFIG=/etc/kubernetes/admin.conf
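
With the kubeconfig exported, you can check that the control plane pods are coming up. Don’t worry if the coredns pods sit in Pending; they will stay that way until we install a CNI plugin in the next step.

kubectl get pods -n kube-system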

CNI plugin installation with Cilium CLI

Cilium will be used as the CNI plugin for the cluster. It is a powerful, efficient, and secure networking layer that also provides observability and troubleshooting capabilities.

Reference: https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/#install-the-cilium-cli

Install the Cilium CLI.

export CILIUM_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
export CILIUM_ARCH=$(dpkg --print-architecture)
# Download the Cilium CLI binary and its sha256sum
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/$CILIUM_VERSION/cilium-linux-$CILIUM_ARCH.tar.gz{,.sha256sum}

# Verify sha256sum
sha256sum --check cilium-linux-$CILIUM_ARCH.tar.gz.sha256sum

# Move binary to correct location and remove tarball
tar xzvf cilium-linux-$CILIUM_ARCH.tar.gz -C /usr/local/bin 
rm cilium-linux-$CILIUM_ARCH.tar.gz{,.sha256sum}

Verify the Cilium CLI is installed.

cilium version --client

Install the network plugin.

cilium install

Wait for the CNI plugin to be installed.

cilium status --wait

After a few minutes, you should see output like this which shows the Cilium CNI plugin is installed and running.

    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    OK
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

DaemonSet              cilium-envoy       Desired: 1, Ready: 1/1, Available: 1/1
DaemonSet              cilium             Desired: 1, Ready: 1/1, Available: 1/1
Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium             Running: 1
                       cilium-envoy       Running: 1
                       cilium-operator    Running: 1
Cluster Pods:          0/2 managed by Cilium
Helm chart version:
Image versions         cilium             quay.io/cilium/cilium:v1.16.0@sha256:46ffa4ef3cf6d8885dcc4af5963b0683f7d59daa90d49ed9fb68d3b1627fe058: 1
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.29.7-39a2a56bbd5b3a591f69dbca51d3e30ef97e0e51@sha256:bd5ff8c66716080028f414ec1cb4f7dc66f40d2fb5a009fff187f4a9b90b566b: 1
                       cilium-operator    quay.io/cilium/operator-generic:v1.16.0@sha256:d6621c11c4e4943bf2998af7febe05be5ed6fdcf812b27ad4388f47022190316: 1

Exit the root shell and run the following commands to configure kubectl for your normal user account.

exit

Configure kubectl to connect to the cluster.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verify you can connect to the cluster.

kubectl get nodes

You should see the control node listed as Ready. This means the CNI plugin is running and the control node is ready to accept workloads.
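
The output will look something like this (the node name comes from your VM’s hostname; k8s-control here is just an example):

NAME          STATUS   ROLES           AGE   VERSION
k8s-control   Ready    control-plane   5m    v1.30.3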

Print the join command and run it on the worker nodes.

kubeadm token create --print-join-command

Join the worker nodes to the cluster

Log into each worker node, make sure you are in the root shell, and paste and run the join command that you copied from the control node.

Reference: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#join-nodes

If your shell does not show root@worker, you can run sudo -i to switch to the root user.

Run a command similar to the following on each worker node.

kubeadm join 192.168.120.130:6443 --token xxxxxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxx

After the worker nodes have joined the cluster, log back into the control node and verify the nodes are listed as Ready.

kubectl get nodes -w

At this point you probably don’t need to log into the worker nodes again. You can work with the cluster from the control node.
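
If you want extra confidence that pod networking works end to end across the nodes, the Cilium CLI ships with a connectivity test. Be aware that it deploys test workloads into their own namespace and takes several minutes to complete.

cilium connectivity test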

Configure kubectl and tools

Log back into the control node as your normal user and configure kubectl. These configurations are optional but will make your life easier when working with Kubernetes.

Reference: https://kubernetes.io/docs/reference/kubectl/quick-reference/

Install kubectl completion.

source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Set a kubectl alias so you can use k instead of kubectl.

cat <<EOF | tee -a ~/.bashrc
alias k=kubectl
complete -o default -F __start_kubectl k
EOF

Reload the bash profile.

source ~/.bashrc
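
Verify the alias and completion work.

k get nodes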

Install jq to help with formatting output and strace for debugging.

sudo apt-get install jq strace -y
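
For example, jq makes it easy to pull a single field out of kubectl’s JSON output.

kubectl get nodes -o json | jq '.items[].metadata.name'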

Install Helm to help with installing applications.

Reference: https://helm.sh/docs/intro/install/

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
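
Verify helm is installed.

helm version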

Install the etcdctl tool to interact with the etcd database.

sudo apt-get install etcd-client -y
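
etcdctl talks to etcd over TLS, so you need to pass it the certificates kubeadm generated. For example, to list the etcd cluster members (the certificate paths below are the kubeadm defaults):

sudo ETCDCTL_API=3 etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key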

Configure .vimrc to help with editing YAML files.

cat <<EOF | tee -a ~/.vimrc
set tabstop=2
set expandtab
set shiftwidth=2
EOF

Conclusion

Congratulations! You now have a local Kubernetes cluster running on your laptop. You are now ready to start deploying applications and learning more about Kubernetes.

Happy Kuberneting! 🚀