Managed Kubernetes offers a safe way to deploy and scale your clusters without worrying too much about the underlying compute. Providers such as Civo make it possible to spin up a cluster in under 90 seconds. However, if you clicked on this tutorial, you're likely curious about how Kubernetes clusters are assembled, or perhaps you have cost or compute-related reasons for setting up your own. Whatever the case, in this post we take a peek under the hood and spin up our own Kubernetes cluster using kubeadm.
If you’re unfamiliar with kubeadm, it’s a tool designed to streamline Kubernetes cluster setup. While many other tools exist, at the time of writing, kubeadm is the only one developed and released as part of Kubernetes itself, and it is the tool the official Kubernetes documentation walks through for bootstrapping a cluster.
A Refresher on Kubernetes Components
Before jumping into the implementation, it's important to take a step back and understand the basic function of each component we will be installing. At a high level, a Kubernetes cluster is split into two parts, the control plane and the worker nodes, and each part runs its own subset of components.
Control plane
Component | Description |
---|---|
kube-apiserver | Exposes the Kubernetes HTTP API. All requests from clients such as kubectl go through the API server. |
etcd | Highly available key-value store for all API server data. |
kube-scheduler | Looks for Pods not yet bound to a node and assigns each Pod to a suitable node. |
kube-controller-manager | Runs controllers to implement Kubernetes API behavior. |
cloud-controller-manager | Implements cloud-specific features such as instances, zones, and load balancers. Example: Civo Cloud Controller Manager. |
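Once the cluster is bootstrapped later in this tutorial, these control plane components run as static Pods in the kube-system namespace. As a quick sanity check (after your kubeconfig is set up), you can list them with:
kubectl get pods -n kube-system -o wide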
Worker nodes
Each worker node runs a few components. Here’s a quick table of each component and its function:
Component | Description |
---|---|
kubelet | Ensures that Pods are running, including their containers. |
kube-proxy | Maintains network rules on nodes to implement Services. |
Container runtime | Runs the containers themselves. Not to be confused with the kubelet, although closely related: the kubelet communicates with the container runtime to start, stop, and manage containers as directed by the control plane. |
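Later on, once nodes have joined the cluster, kubectl can show which container runtime each node reports, which is a handy way to confirm the kubelet and runtime are talking to each other:
kubectl get nodes -o wide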
Infrastructure Setup
We will begin by creating a network in which we will provision all our nodes. To do this using the Civo CLI, run the following command:
civo networks create --create-default-firewall kubeadm
This creates a network called kubeadm along with a set of default firewall rules.
Output is similar to:

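If you want to confirm that the network and its default firewall exist before provisioning instances, the Civo CLI can list them (subcommand aliases may vary slightly by CLI version):
civo network ls
civo firewall ls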
Provision your Nodes
One of the great things about managed Kubernetes is that you never have to worry about standing up your own nodes; well, now we do. For this demonstration, we will be using three nodes: one for the control plane and the other two as worker nodes. Create node one:
civo instance create joestar -t ubuntu-jammy --network kubeadm --size g3.medium --wait --firewall kubeadm
The command above provisions a medium-sized Ubuntu instance called joestar. Using the --network and --firewall flags, we specify the network and firewall we want attached to the instance.
Create node two:
civo instance create brando -t ubuntu-jammy --network kubeadm --size g3.medium --wait --firewall kubeadm
Create node three:
civo instance create speedwagon -t ubuntu-jammy --network kubeadm --size g3.medium --wait --firewall kubeadm
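Before moving on, it's worth confirming that all three instances are up and attached to the kubeadm network:
civo instance ls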
As you will log into your nodes frequently, let’s set up SSH key authentication to make things slightly easier.
Export node IP addresses
export SPEEDWAGON=$(civo instance show -o json speedwagon | jq -r .public_ip)
export JOESTAR=$(civo instance show -o json joestar | jq -r .public_ip)
export BRANDO=$(civo instance show -o json brando | jq -r .public_ip)
Alternatively, you can look up each address with civo instance show <instance name>.
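The export commands above assume jq is available on your local machine; if it isn't, install it first and then confirm the variables are populated:
sudo apt install -y jq   # or your platform's package manager
echo "$JOESTAR $BRANDO $SPEEDWAGON"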
Copy SSH keys
ssh-copy-id civo@$SPEEDWAGON
ssh-copy-id civo@$BRANDO
ssh-copy-id civo@$JOESTAR
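To confirm key-based authentication is working, you can force SSH to skip password authentication and run a quick command:
ssh -o PasswordAuthentication=no civo@$JOESTAR hostname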
Configure Hostnames
To avoid using IP addresses everywhere, it is much more convenient to use hostnames. To do this, SSH into each of your nodes and add the following entry to /etc/hosts, substituting your own instances' IP addresses:
sudo tee -a /etc/hosts <<EOF
# kubeadm nodes
212.2.240.207 brando
212.2.240.98 joestar
212.2.245.125 speedwagon
EOF
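If you would rather not repeat this on each node by hand, a small loop run from your local machine achieves the same thing. This is a sketch that assumes the civo user can run sudo non-interactively (if it cannot, run the command on each node as shown above) and uses the example IP addresses; substitute your own:
for host in $JOESTAR $BRANDO $SPEEDWAGON; do
  ssh civo@$host 'sudo tee -a /etc/hosts' <<EOF
# kubeadm nodes
212.2.240.207 brando
212.2.240.98 joestar
212.2.245.125 speedwagon
EOF
done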
Upon completion, you should be able to ping each of the nodes via its hostname. SSH into joestar:
ssh civo@$JOESTAR
Ping brando:
ping brando
Output is similar to:
Disabling Swap
Kubernetes requires swap to be disabled for the kubelet to start with its default configuration. On each of the nodes, run the following commands:
sudo swapoff -a
(crontab -l 2>/dev/null; echo "@reboot /sbin/swapoff -a") | crontab - || true
This will disable swap and add a crontab entry to ensure it is disabled upon reboot of a node.
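If your instances have a swap entry in /etc/fstab, you can also comment it out so swap is never re-enabled at boot; a one-liner sketch, assuming the default fstab layout:
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab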
Enable IPv4 Packet Forwarding
By default, the Linux kernel does not allow IPv4 packets to be routed between interfaces. We need to enable IP packet forwarding to avoid issues when nodes and pods try to communicate.
Run the following command on each node:
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
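You can verify that the setting took effect with:
sysctl net.ipv4.ip_forward
The command should print net.ipv4.ip_forward = 1.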
Installing a Container Runtime
Container runtimes are responsible for running and managing containers on a host system. When you send a request to the Kubernetes API server, the Kubelet receives instructions on creating or managing containers. The Kubelet then communicates with the container runtime, which executes the actual container operations.
In December 2020, the Kubernetes project deprecated dockershim, the component that allowed the kubelet to use Docker Engine as its container runtime, in favor of runtimes that implement the Container Runtime Interface (CRI). For this demonstration, we will be using containerd, an industry-standard runtime. If you're curious about container runtimes, Ivan Velichko has a great guide you can read here.
On each node, run the following commands:
Install Dependencies:
sudo apt install curl gnupg2 software-properties-common apt-transport-https ca-certificates -y
Add GPG Keys:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Add APT Repository:
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Install Containerd:
sudo apt update && sudo apt install containerd.io -y
Generate a Configuration File:
containerd config default | sudo tee /etc/containerd/config.toml
Enable the systemd cgroup driver (containerd must use the same cgroup driver as the kubelet, which kubeadm configures to use systemd by default):
sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml
Enable and Restart Containerd:
sudo systemctl enable containerd && sudo systemctl restart containerd
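Before moving on, it's worth confirming that containerd is healthy and that the cgroup change was applied:
sudo systemctl is-active containerd
grep SystemdCgroup /etc/containerd/config.toml
The first command should print active, and the second should show SystemdCgroup = true.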
Install Kubeadm, Kubelet, and Kubectl
Next, we can install some of the components we described earlier. First, download the public signing key for the Kubernetes package repositories; on each node, run the following command:
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Add the Kubernetes apt repository:
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Install Kubelet, Kubeadm, and Kubectl:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
Enable the kubelet:
sudo systemctl enable --now kubelet
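Don't worry if the kubelet restarts in a crash loop at this point; it is waiting for configuration that kubeadm init (or kubeadm join) will generate later. You can confirm the installed versions with:
kubeadm version -o short
kubectl version --client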
Select a Control Plane Node
Of the three nodes created, we will need to select a single node to use as our control plane. This is where components such as the API server will live. For this demonstration, we will be using the node named joestar, but feel free to select whichever you prefer.
Next, run the following command on the joestar node to initialize the control plane:
sudo kubeadm init --pod-network-cidr=10.1.1.0/24 --apiserver-advertise-address=<private IP address of joestar>
You can run civo instance show joestar to display the instance's IP addresses.
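Alternatively, you can grab just the private IP from your local machine with the JSON output and jq. Note that the .private_ip field name is an assumption about the CLI's JSON output; verify it against civo instance show -o json joestar if it comes back empty:
civo instance show -o json joestar | jq -r .private_ip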
Output is similar to:
Export your kubeconfig:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
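With the kubeconfig in place, a quick way to confirm that kubectl can reach the new API server is:
kubectl cluster-info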
Initializing the control plane should also return a command that worker nodes can use to join the cluster. The output is similar to:
At this point, if you run kubectl get nodes, your output should be similar to:
In the output above, the control plane node isn't ready yet. This is because we do not have a Container Network Interface (CNI) plugin installed yet. If you're unsure what that is, check out this video by Alex Jones on the Civo Academy.
Install a CNI
Our CNI of choice for this demonstration is Cilium, a high-performance eBPF-based CNI. If you want an alternative to Cilium, check out this portion of the Kubernetes documentation for more options.
On your control plane node, run the following commands:
Download Cilium:
curl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz
Extract the tarball:
sudo tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
Install Cilium:
cilium install --set ipam.operator.clusterPoolIPv4PodCIDRList=10.1.1.0/24
We use the --set ipam.operator.clusterPoolIPv4PodCIDRList flag because we used a custom pod CIDR when initializing the cluster.
After a couple of seconds, run:
cilium status
Output is similar to:
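If you'd prefer the command to block until Cilium reports a healthy state, recent versions of the Cilium CLI accept a --wait flag:
cilium status --wait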
At this point, your control plane node should be in the Ready state:
kubectl get nodes
Output is similar to:
Adding Worker Nodes
With the control plane up and running, we are finally ready to add our worker nodes. SSH into speedwagon and brando and run the kubeadm join command:
sudo kubeadm join 192.168.1.7:6443 --token 3drqdd.uhp0byzflstl0lfb \
--discovery-token-ca-cert-hash sha256:b2754d60513e144bc464944219024b6b919015db4b9d318e4ea74002acea3762
Note that your token and certificate hash will differ. To print the join command for your own cluster, run the following on the control plane:
kubeadm token create --print-join-command
Output is similar to:
Hop back on to your control plane and run:
kubectl get nodes
Output is similar to:
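You'll notice the worker nodes show a ROLES value of <none>. This is purely cosmetic, but if you'd like them labeled, you can set the role label yourself (the worker role name here is just a convention, not something Kubernetes requires):
kubectl label node brando node-role.kubernetes.io/worker=
kubectl label node speedwagon node-role.kubernetes.io/worker=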
To verify that the cluster can schedule workloads, create a Deployment:
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  namespace: default
spec:
  replicas: 4
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: traefik/whoami
        ports:
        - containerPort: 80
EOF
kubectl get pods
Output is similar to:
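To confirm the Pods were scheduled across the worker nodes rather than piling onto one, add -o wide to see which node each Pod landed on:
kubectl get pods -o wide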
Clean up
Upon completing this tutorial, you may want to remove the resources we provisioned. To delete the instances, run the following commands:
civo instance rm joestar
civo instance rm speedwagon
civo instance rm brando
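The kubeadm network we created at the start can usually be removed the same way once no instances are attached to it; check civo network --help if the subcommand differs in your CLI version:
civo network rm kubeadm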
Considerations for Selecting a Node
When selecting nodes for your clusters, here are some things to keep in mind.
Size: The Kubernetes docs recommend a minimum of 2 GB of RAM per node and at least 2 CPUs for control plane nodes; be sure to assess your workloads and plan accordingly.
Operating System: Selecting a stable distro will go a long way in improving the stability of your nodes; Debian/Ubuntu-based nodes are always a solid choice. For users looking to run workloads on Windows, take a look at this section of the docs. Remember, this will also affect your choice of CNI, as there are limited options.
Beware of the Control Plane: The control plane is the brain of your Kubernetes cluster, and special consideration should be given to these nodes.
By default, Kubernetes does not schedule user workloads on control plane nodes. This is for a good reason: it helps maintain the stability and performance of critical cluster components.
Control plane nodes are automatically tainted to prevent regular pods from being scheduled on them. If you need to run specific workloads on control plane nodes (which is generally not recommended), you'll need to add appropriate tolerations to those pods.
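For illustration, here is a minimal sketch of what that looks like. It assumes the default taint kubeadm applies (node-role.kubernetes.io/control-plane:NoSchedule) and uses a throwaway Pod name:
# Inspect the taints kubeadm applied to the control plane node
kubectl get node joestar -o jsonpath='{.spec.taints}'
# A Pod that tolerates the control plane taint (generally not recommended)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo
spec:
  tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
  containers:
  - name: whoami
    image: traefik/whoami
EOF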
For production environments, running more than one control plane node is crucial. This provides redundancy and ensures your cluster remains operational even if one control plane node fails. Kubernetes supports running multiple control plane nodes in a high-availability (HA) configuration.
Conclusion
Kubernetes is a lot easier when you don’t have to worry about some of the underlying infrastructure. In this tutorial, we assembled a cluster from the ground up using kubeadm. If you’re interested in learning more about Kubernetes internals, here are some ideas: