Set Up a Kubernetes Multi-Control-Plane Cluster with Ansible (Kubespray)
This guide explains how to set up a Kubernetes cluster with High Availability (HA) using Kubespray. The cluster consists of 3 control plane nodes, 2 worker nodes, and 1 external load balancer.
Versions used:
- Kubernetes v1.33.7
- Ubuntu 22.04 LTS
- CRI containerd
1. Infrastructure Preparation
Minimum Specifications
| Node | CPU | RAM |
|---|---|---|
| Ansible + Kubespray | 2 vCPU | 4 GB |
| Load Balancer API | 2 vCPU | 4 GB |
| Control Plane (x3) | 2 vCPU | 4 GB |
| Worker (x2) | 2 vCPU | 8 GB |
General Prerequisites
- Ubuntu Server 22.04 LTS
- Swap permanently disabled
- Sudo user with NOPASSWD
- All nodes are connected (open internal ports)
- Python3 available on all nodes
2. Topology & IP Addresses
| Name | IP | OS |
|---|---|---|
| Ansible | 192.168.100.12 | Ubuntu 22.04 |
| LB-master | 172.16.21.154 | Ubuntu 22.04 |
| k8s-master-node-1 | 172.16.21.132 | Ubuntu 22.04 |
| k8s-master-node-2 | 172.16.21.244 | Ubuntu 22.04 |
| k8s-master-node-3 | 172.16.21.178 | Ubuntu 22.04 |
| k8s-worker-1 | 172.16.21.202 | Ubuntu 22.04 |
| k8s-worker-2 | 172.16.21.231 | Ubuntu 22.04 |
3. Load Balancer Configuration (NGINX TCP)
apt update -y
apt install nginx -y
Edit /etc/nginx/nginx.conf and add the following at the bottom, outside the http block (the stream block must sit at the top level):
stream {
    upstream k8s_api {
        least_conn;
        server 172.16.21.132:6443;
        server 172.16.21.244:6443;
        server 172.16.21.178:6443;
    }

    server {
        listen 6443;
        proxy_pass k8s_api;
        proxy_connect_timeout 3s;
        proxy_timeout 10s;
    }
}
Reload NGINX:
nginx -t && systemctl reload nginx
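Once reloaded, it is worth confirming the TCP listener is actually up (run on the load balancer; until the control plane is deployed, connections are accepted but the upstreams will fail):

```shell
# Confirm the config parses and something is listening on the API port.
sudo nginx -t
sudo ss -tlnp | grep ':6443' || echo "nothing listening on 6443 yet"
```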
4. Basic Configuration for All Nodes (Master & Worker)
Disable swap:
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
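The sed expression comments out any fstab line whose mount-point field is swap. A quick way to preview the effect on a throwaway copy before touching /etc/fstab (the sample content is hypothetical):

```shell
# Preview the swap-commenting edit on a sample file instead of /etc/fstab.
cat > /tmp/fstab.sample <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF
sed '/ swap / s/^/#/' /tmp/fstab.sample
```

The swap line comes back commented out while the root filesystem line is untouched.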
Sudo without password:
visudo
Add:
ubuntu ALL=(ALL) NOPASSWD:ALL
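An alternative to editing the main sudoers file is a drop-in under /etc/sudoers.d, which keeps the change isolated and lets you validate it before it takes effect (a sketch, assuming the ubuntu user):

```shell
# Create a sudoers drop-in with the correct permissions, then validate it.
echo 'ubuntu ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/ubuntu-nopasswd
sudo chmod 440 /etc/sudoers.d/ubuntu-nopasswd
sudo visudo -cf /etc/sudoers.d/ubuntu-nopasswd
```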
5. Set Up SSH from the Ansible Node
ssh-keygen
for ip in 154 132 244 178 202 231; do
ssh-copy-id ubuntu@172.16.21.$ip
done
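After copying the keys, a loop with BatchMode=yes confirms passwordless access works: ssh fails fast instead of prompting (the ubuntu user is assumed, matching the sudoers entry above):

```shell
# Verify key-based SSH to every node without password prompts.
for ip in 154 132 244 178 202 231; do
  ssh -o BatchMode=yes -o ConnectTimeout=5 ubuntu@172.16.21.$ip hostname \
    || echo "SSH to 172.16.21.$ip failed"
done
```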
6. Kubespray Installation
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
git checkout release-2.29
Newer Kubespray releases no longer ship the contrib/inventory_builder helper used below, so restore contrib/ from an older branch:
git checkout release-2.25 contrib/
Install dependencies:
sudo apt install -y python3-pip
pip3 install -r requirements.txt
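On newer Ubuntu releases pip refuses to install into the system Python (PEP 668), so a virtual environment is the safer route; a sketch:

```shell
# Install Kubespray's Python dependencies in an isolated virtualenv.
sudo apt install -y python3-venv
python3 -m venv ~/kubespray-venv
source ~/kubespray-venv/bin/activate
pip install -U pip
pip install -r requirements.txt
```

Remember to re-activate the virtualenv in any new shell before running ansible-playbook.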
7. Generate Inventory
cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(172.16.21.132 172.16.21.244 172.16.21.178 172.16.21.202 172.16.21.231)
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
Edit inventory/mycluster/hosts.yml and ensure correct grouping: kube_control_plane and etcd contain the three master nodes, and kube_node contains the two workers:
all:
  hosts:
    k8s-master-node-1:
      ansible_host: 172.16.21.132
      ip: 172.16.21.132
      access_ip: 172.16.21.132
    k8s-master-node-2:
      ansible_host: 172.16.21.244
      ip: 172.16.21.244
      access_ip: 172.16.21.244
    k8s-master-node-3:
      ansible_host: 172.16.21.178
      ip: 172.16.21.178
      access_ip: 172.16.21.178
    k8s-worker-node-1:
      ansible_host: 172.16.21.202
      ip: 172.16.21.202
      access_ip: 172.16.21.202
    k8s-worker-node-2:
      ansible_host: 172.16.21.231
      ip: 172.16.21.231
      access_ip: 172.16.21.231
  children:
    kube_control_plane:
      hosts:
        k8s-master-node-1:
        k8s-master-node-2:
        k8s-master-node-3:
    kube_node:
      hosts:
        k8s-worker-node-1:
        k8s-worker-node-2:
    etcd:
      hosts:
        k8s-master-node-1:
        k8s-master-node-2:
        k8s-master-node-3:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}
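Before deploying, Ansible itself can confirm the grouping and connectivity (run from the kubespray directory; the ping module checks SSH and Python on each host, not ICMP):

```shell
# Show the resolved group structure, then check Ansible can reach every host.
ansible-inventory -i inventory/mycluster/hosts.yml --graph
ansible -i inventory/mycluster/hosts.yml all -m ping -u ubuntu
```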
8. Kubernetes & Containerd Configuration
Edit:
inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
Ensure:
container_manager: containerd
kube_version: v1.33.7
9. External API Load Balancer Configuration
Edit:
inventory/mycluster/group_vars/all/all.yml
apiserver_loadbalancer_domain_name: "k8s-api.example.local"
loadbalancer_apiserver:
  address: 172.16.21.154
  port: 6443
loadbalancer_apiserver_localhost: false
Add to /etc/hosts on the Ansible node:
echo "172.16.21.154 k8s-api.example.local" | sudo tee -a /etc/hosts
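A quick check that the name now resolves on the Ansible node:

```shell
# Should print the load balancer address for the API endpoint name.
getent hosts k8s-api.example.local
```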
10. Deploy Cluster
ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml --become
If your SSH private key lives at a custom path, pass the user and key file explicitly:
ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml --become -u ubuntu -e ansible_ssh_private_key_file=/path/to/custom/key
11. Verification
kubectl get nodes -o wide
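Beyond node status, it helps to confirm system pods are healthy and that the API answers through the load balancer rather than a single master (the /version endpoint is typically reachable anonymously in default setups; -k skips TLS verification for a quick probe):

```shell
# All kube-system pods should be Running or Completed.
kubectl get pods -n kube-system
# Hit the API server via the external LB name to prove the HA path works.
curl -k https://k8s-api.example.local:6443/version
```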
12. Containerd Notes
Docker command replacements:
- crictl ps (replaces docker ps)
- crictl images (replaces docker images)
- ctr -n k8s.io containers list (low-level containerd view)
Bonus
1. Reset cluster
Reset the entire cluster to its initial state:
ansible-playbook -i inventory/mycluster/hosts.yml reset.yml --become \
-u ubuntu \
-e ansible_ssh_private_key_file=/path/to/custom/key
2. Redeploy cluster
Redeploy cluster after reset:
ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml --become \
-u ubuntu \
-e ansible_ssh_private_key_file=/path/to/custom/key
3. Scale cluster
Add new nodes to existing cluster:
ansible-playbook -i inventory/mycluster/hosts.yml scale.yml --become \
-u ubuntu \
-e ansible_ssh_private_key_file=/path/to/custom/key
4. Remove node
Remove nodes from cluster:
ansible-playbook -i inventory/mycluster/hosts.yml remove_node.yml --become \
-u ubuntu \
-e ansible_ssh_private_key_file=/path/to/custom/key \
-e node=node1,node2
5. Upgrade cluster
Upgrade Kubernetes cluster version:
ansible-playbook -i inventory/mycluster/hosts.yml upgrade_cluster.yml --become \
-u ubuntu \
-e ansible_ssh_private_key_file=/path/to/custom/key
6. Recover control plane
Recover a failed control plane:
ansible-playbook -i inventory/mycluster/hosts.yml recover_control_plane.yml --become \
-u ubuntu \
-e ansible_ssh_private_key_file=/path/to/custom/key
Kubernetes v1.33.7 HA cluster is ready to use.