Kubernetes (K8s) is an open-source platform for automating the deployment, scaling, and management of containerized applications. Setting up a Kubernetes cluster on Ubuntu is a straightforward process when using tools like kubeadm. This guide provides a comprehensive, step-by-step approach to creating a multi-node Kubernetes cluster on Ubuntu, suitable for beginners and experienced users alike. We’ll use kubeadm to set up a cluster with one control-plane (master) node and at least one worker node, and deploy a pod network using Calico.
Prerequisites
Before starting, ensure you have the following:
- Hardware Requirements:
- At least two Ubuntu machines (one for the control-plane node, one or more for worker nodes).
- Minimum specs per node: 2 CPUs, 2GB RAM, 20GB free disk space.
- 64-bit Ubuntu 20.04, 22.04, or 24.04 (server or desktop).
- Software Requirements:
- SSH access to all nodes with a user having sudo privileges.
- Internet connectivity for downloading packages.
- A container runtime such as containerd (we’ll install containerd in Step 2).
- Network Requirements:
- Full network connectivity between nodes (public or private network).
- Firewall rules allowing necessary Kubernetes ports (see below).
- Node Setup:
- For this guide, we’ll assume a setup with:
- Control-plane node: k8s-master (e.g., IP: 192.168.1.100).
- Worker nodes: k8s-worker-1, k8s-worker-2 (e.g., IPs: 192.168.1.101, 192.168.1.102).
Step-by-Step Guide to Creating a Kubernetes Cluster
Step 1: Prepare All Nodes
Perform these steps on all nodes (control-plane and workers) unless specified otherwise.
1.1 Update and Upgrade the System
Ensure your system is up-to-date to avoid compatibility issues.
sudo apt-get update
sudo apt-get upgrade -y
1.2 Set Hostnames
Configure unique hostnames for each node to simplify communication.
- On the control-plane node:
sudo hostnamectl set-hostname k8s-master
- On worker nodes (adjust for each):
sudo hostnamectl set-hostname k8s-worker-1
sudo hostnamectl set-hostname k8s-worker-2
1.3 Configure /etc/hosts
Edit /etc/hosts on all nodes to resolve hostnames to IP addresses.
sudo nano /etc/hosts
Add entries like:
192.168.1.100 k8s-master
192.168.1.101 k8s-worker-1
192.168.1.102 k8s-worker-2
Save and exit. Verify connectivity:
ping -c 3 k8s-master
ping -c 3 k8s-worker-1
ping -c 3 k8s-worker-2
1.4 Disable Swap
Kubernetes requires swap to be disabled; in its default configuration, the kubelet will not start if swap is enabled.
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
Verify swap is disabled:
free -m # Swap should show 0
1.5 Enable Kernel Modules and Networking
Load required kernel modules and configure networking for Kubernetes.
sudo modprobe overlay
sudo modprobe br_netfilter
sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
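Confirm both modules are loaded:
lsmod | grep -E 'overlay|br_netfilter'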
Configure sysctl settings:
sudo tee /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
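Verify the settings are active:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward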
1.6 Configure Firewall (Optional)
If using UFW, open required ports. For the control-plane node:
sudo ufw allow 6443/tcp
sudo ufw allow 2379:2380/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 10259/tcp
sudo ufw allow 10257/tcp
sudo ufw allow OpenSSH
sudo ufw enable
For worker nodes:
sudo ufw allow 10250/tcp
sudo ufw allow 30000:32767/tcp
sudo ufw allow OpenSSH
sudo ufw enable
Alternatively, disable the firewall for testing:
sudo ufw disable
Step 2: Install Container Runtime
Kubernetes requires a CRI-compatible container runtime such as containerd or CRI-O; Docker Engine works only through the cri-dockerd shim since dockershim was removed in Kubernetes v1.24. We’ll use containerd for this guide.
2.1 Install containerd
Install containerd from Ubuntu’s default repositories (the containerd.io package comes from Docker’s apt repository, which this guide does not add):
sudo apt-get update
sudo apt-get install -y containerd
2.2 Configure containerd
Generate a default configuration:
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
Modify the configuration to use systemd as the cgroup driver, which is required for Kubernetes:
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
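Confirm the change took effect:
grep SystemdCgroup /etc/containerd/config.toml   # should print: SystemdCgroup = true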
Restart containerd and enable it to start on boot:
sudo systemctl restart containerd
sudo systemctl enable containerd
Verify containerd is running:
sudo systemctl status containerd
Step 3: Install Kubernetes Components
Install kubeadm, kubelet, and kubectl on all nodes. kubeadm initializes the cluster, kubelet runs containers on nodes, and kubectl is the command-line tool for interacting with the cluster.
3.1 Add Kubernetes APT Repository
Install dependencies and add the Kubernetes repository GPG key:
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Add the Kubernetes repository (the pkgs.k8s.io community repository uses the same path for all Ubuntu releases, so no codename is needed):
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
3.2 Install Kubernetes Components
Update the package list and install the required packages:
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
The apt-mark hold command prevents these packages from being automatically upgraded, which could break the cluster.
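You can confirm the hold is in place at any time:
apt-mark showhold   # should list kubeadm, kubectl, kubelet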
Verify versions:
kubeadm version
kubectl version --client
kubelet --version
Step 4: Initialize the Control-Plane Node
Perform this step only on the control-plane node (k8s-master).
4.1 Initialize the Cluster with kubeadm
Run the kubeadm init command to set up the control-plane node. Specify the pod network CIDR for compatibility with Calico (a popular pod network add-on):
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
Note: 192.168.0.0/16 is Calico’s default pool and overlaps the example node IPs (192.168.1.x) used in this guide. If your nodes really do sit in that range, pick a non-overlapping pod CIDR (e.g., 10.244.0.0/16) and set CALICO_IPV4POOL_CIDR to match in the Calico manifest.
This command:
- Initializes the Kubernetes control plane.
- Generates a token for worker nodes to join the cluster.
- Sets up the kube-apiserver, etcd, kube-scheduler, and kube-controller-manager.
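If the control-plane node has more than one network interface, it can help to pin the API server to the intended address explicitly. A minimal variant, assuming the example IP from the prerequisites:
sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.1.100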
After successful initialization, you’ll see output similar to:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.100:6443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
4.2 Configure kubectl for the Admin User
Set up the Kubernetes configuration file for kubectl:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Verify the cluster is running:
kubectl get nodes
You should see the control-plane node with a NotReady status (because the pod network is not yet installed).
4.3 Save the Join Command
The kubeadm init output includes a kubeadm join command with a token and CA certificate hash. Save this command, as you’ll need it to join worker nodes. If you lose it, you can regenerate a token later on the control-plane node:
sudo kubeadm token create --print-join-command
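You can also inspect existing tokens and their expiry on the control-plane node:
sudo kubeadm token list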
Step 5: Deploy a Pod Network (Calico)
Kubernetes requires a Container Network Interface (CNI) plugin to enable communication between pods. We’ll use Calico, a popular choice.
On the control-plane node, apply the Calico manifest:
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/calico.yaml
Wait a few moments for the Calico pods to start:
kubectl get pods -n kube-system
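If you’d rather block until Calico’s node agents are ready than poll by hand, kubectl wait can do it (this assumes the calico-node pods carry the k8s-app=calico-node label set by this manifest):
kubectl wait -n kube-system --for=condition=Ready pods -l k8s-app=calico-node --timeout=180s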
Check the node status again:
kubectl get nodes
The control-plane node should now show as Ready.
Step 6: Join Worker Nodes to the Cluster
Perform this step on each worker node (k8s-worker-1, k8s-worker-2, etc.).
Run the kubeadm join command provided by the kubeadm init output. It will look like:
sudo kubeadm join 192.168.1.100:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
Replace <token> and <hash> with the values from the control-plane node.
After running the command, the worker node will join the cluster. Verify from the control-plane node:
kubectl get nodes
You should see all nodes (k8s-master, k8s-worker-1, etc.) with a Ready status.
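If a worker never shows up in the list, inspect the kubelet on that worker directly; its logs usually name the problem (certificates, connectivity, swap):
sudo systemctl status kubelet
sudo journalctl -u kubelet --no-pager | tail -n 50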
Step 7: Verify the Cluster
To ensure the cluster is fully operational:
- Check node status:
kubectl get nodes -o wide
- Check running pods in all namespaces:
kubectl get pods --all-namespaces -o wide
- Deploy a test pod to confirm functionality:
kubectl run nginx --image=nginx --restart=Never
kubectl get pods -o wide
If the nginx pod is in the Running state, your cluster is operational.
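To also exercise pod-to-pod networking over Calico, you can expose the pod as a Service and curl it from a second pod. A quick sketch (the curlimages/curl image is an assumption; any image with curl works):
kubectl expose pod nginx --port=80 --name=nginx
kubectl run curl --rm -it --restart=Never --image=curlimages/curl -- curl -s http://nginx
kubectl delete service nginx && kubectl delete pod nginx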
Step 8: Optional Configurations
8.1 Install a Dashboard (Optional)
The Kubernetes Dashboard provides a web-based UI for cluster management:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Access the dashboard:
kubectl proxy
Open http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ in a browser. Create a token to log in (see the sketch after the command for creating the admin-user ServiceAccount it references):
kubectl -n kubernetes-dashboard create token admin-user
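The recommended.yaml manifest does not create the admin-user ServiceAccount. A minimal sketch for creating it and binding it to cluster-admin (a very broad grant, appropriate only for test clusters); run this before creating the token:
kubectl create serviceaccount admin-user -n kubernetes-dashboard
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user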
8.2 Set Up Cluster Autoscaling (Optional)
For production environments, consider integrating a cluster autoscaler or monitoring tools like Prometheus and Grafana.
Troubleshooting Common Issues
- Nodes NotReady: Ensure the pod network (Calico) is installed and the pods in kube-system are running (kubectl get pods -n kube-system).
- Join Command Fails: Verify the token and CA hash. Regenerate with kubeadm token create --print-join-command.
- CNI Issues: Confirm the correct pod network CIDR was used during kubeadm init.
- Firewall Blocking: Check that required ports are open (e.g., 6443 for the API server, 10250 for the kubelet).
- Resource Constraints: Increase CPU/RAM if nodes fail to start pods.
- containerd Errors: Verify SystemdCgroup = true in /etc/containerd/config.toml.
For detailed logs:
sudo journalctl -u kubelet
kubectl describe node <node-name>
Post-Installation Notes
- Backup kubeconfig: Save /etc/kubernetes/admin.conf securely, as it grants full cluster access.
- Cluster Maintenance: The Kubernetes packages are held, so a plain sudo apt-get upgrade will not touch them; upgrade deliberately with kubeadm (see the sketch after this list) and monitor cluster health.
- Security: Restrict access to the control-plane node and use RBAC for kubectl users.
- Next Steps: Explore deploying applications, setting up Ingress controllers, or integrating with CI/CD pipelines.
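A minimal sketch of upgrading the control-plane packages within the v1.31 series (the apply target v1.31.x is a placeholder; substitute the version that kubeadm upgrade plan reports):
sudo apt-mark unhold kubeadm
sudo apt-get update && sudo apt-get install -y kubeadm
sudo apt-mark hold kubeadm
sudo kubeadm upgrade plan   # then: sudo kubeadm upgrade apply v1.31.x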
Conclusion
You’ve successfully set up a Kubernetes cluster on Ubuntu using kubeadm! Your cluster is now ready to deploy containerized applications. Start by experimenting with simple deployments, such as the nginx pod above, or explore advanced topics like Helm charts, persistent storage, or autoscaling. For further learning, refer to the official Kubernetes documentation or community resources like the Kubernetes Slack or forums.
If you encounter issues, the Kubernetes community and tools like kubectl describe or journalctl are invaluable for debugging. Happy clustering!
Note: This guide is based on Kubernetes v1.31 and Ubuntu 22.04/24.04 as of August 15, 2025. Always check the official Kubernetes documentation for the latest recommendations and updates.