Setting Up a Kubernetes Multi Control Plane Cluster with Ansible (Kubespray)

3 min read

This guide walks through setting up a highly available (HA) Kubernetes cluster with Kubespray. The cluster consists of 3 control plane nodes, 2 worker nodes, and 1 external load balancer.

Versions used:

  • Kubernetes v1.33.7
  • Ubuntu 22.04 LTS
  • containerd (CRI runtime)

1. Infrastructure Preparation

Minimum Specifications

| Node                | CPU    | RAM  |
|---------------------|--------|------|
| Ansible + Kubespray | 2 vCPU | 4 GB |
| API Load Balancer   | 2 vCPU | 4 GB |
| Control Plane (x3)  | 2 vCPU | 4 GB |
| Worker (x2)         | 2 vCPU | 8 GB |

General Prerequisites

  • Ubuntu Server 22.04 LTS
  • Swap permanently disabled
  • A sudo user with NOPASSWD
  • All nodes reachable from each other (internal ports open)
  • Python3 available on all nodes

2. Topology & IP Addresses

| Name              | IP             | OS           |
|-------------------|----------------|--------------|
| Ansible           | 192.168.100.12 | Ubuntu 22.04 |
| LB-master         | 172.16.21.154  | Ubuntu 22.04 |
| k8s-master-node-1 | 172.16.21.132  | Ubuntu 22.04 |
| k8s-master-node-2 | 172.16.21.244  | Ubuntu 22.04 |
| k8s-master-node-3 | 172.16.21.178  | Ubuntu 22.04 |
| k8s-worker-node-1 | 172.16.21.202  | Ubuntu 22.04 |
| k8s-worker-node-2 | 172.16.21.231  | Ubuntu 22.04 |

3. Load Balancer Configuration (NGINX TCP)

apt update -y
apt install nginx -y

Edit /etc/nginx/nginx.conf and add the following at the bottom of the file (the stream block must sit at the top level, outside the http block):

stream {
  upstream k8s_api {
    least_conn;
    server 172.16.21.132:6443;
    server 172.16.21.244:6443;
    server 172.16.21.178:6443;
  }

  server {
    listen 6443;
    proxy_pass k8s_api;
    proxy_connect_timeout 3s;
    proxy_timeout 10s;
  }
}

Reload NGINX:

nginx -t && systemctl reload nginx
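As a quick sanity check (a sketch, assuming the config was appended exactly as shown above), count the backends registered in the k8s_api upstream; with three control plane nodes this prints 3:

```shell
# Count backend servers in the k8s_api upstream (expects 3).
grep -A5 'upstream k8s_api' /etc/nginx/nginx.conf | grep -c 'server 172.16.21'
```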

4. Base Configuration for All Nodes (Master & Worker)

Disable swap:

swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
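The sed rule comments out any fstab line containing a swap mount; here is a minimal demonstration on a throwaway copy (the fstab contents are hypothetical):

```shell
# Demonstrate the sed rule on a throwaway copy of a hypothetical fstab;
# only the line containing " swap " receives a leading '#'.
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF
sed -i '/ swap / s/^/#/' /tmp/fstab.demo
cat /tmp/fstab.demo
```

After the next reboot, `swapon --show` should print nothing.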

Passwordless sudo:

visudo

Add the following line (assuming the deploy user is ubuntu):

ubuntu ALL=(ALL) NOPASSWD:ALL

5. Ansible SSH Setup

On the Ansible node, generate a key pair and copy it to every node:

ssh-keygen
for ip in 154 132 244 178 202 231; do
  ssh-copy-id [email protected].$ip
done
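Before running Ansible, it is worth confirming that passwordless login works to every node (a sketch; BatchMode makes ssh fail fast instead of prompting for a password):

```shell
# Verify passwordless SSH to each node; failures are reported
# instead of silently hanging on a password prompt.
for ip in 154 132 244 178 202 231; do
  ssh -o BatchMode=yes -o ConnectTimeout=5 "[email protected].$ip" hostname \
    || echo "SSH to 172.16.21.$ip failed"
done
```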

6. Installing Kubespray

git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
git checkout release-2.29
# contrib/inventory_builder is not present in newer releases,
# so restore contrib/ from release-2.25
git checkout release-2.25 -- contrib/

Install dependencies:

sudo apt install -y python3-pip
pip3 install -r requirements.txt

7. Generate Inventory

cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(172.16.21.132 172.16.21.244 172.16.21.178 172.16.21.202 172.16.21.231)
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

Edit inventory/mycluster/hosts.yml and make sure the hosts are grouped correctly:

  • kube_control_plane: master-1,2,3
  • etcd: master-1,2,3
  • kube_node: worker-1,2

The final file should look like this:
all:
  hosts:
    k8s-master-node-1:
      ansible_host: 172.16.21.132
      ip: 172.16.21.132
      access_ip: 172.16.21.132
    k8s-master-node-2:
      ansible_host: 172.16.21.244
      ip: 172.16.21.244
      access_ip: 172.16.21.244
    k8s-master-node-3:
      ansible_host: 172.16.21.178
      ip: 172.16.21.178
      access_ip: 172.16.21.178
    k8s-worker-node-1:
      ansible_host: 172.16.21.202
      ip: 172.16.21.202
      access_ip: 172.16.21.202
    k8s-worker-node-2:
      ansible_host: 172.16.21.231
      ip: 172.16.21.231
      access_ip: 172.16.21.231

  children:
    kube_control_plane:
      hosts:
        k8s-master-node-1:
        k8s-master-node-2:
        k8s-master-node-3:

    kube_node:
      hosts:
        k8s-worker-node-1:
        k8s-worker-node-2:

    etcd:
      hosts:
        k8s-master-node-1:
        k8s-master-node-2:
        k8s-master-node-3:

    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:

    calico_rr:
      hosts: {}
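Before deploying, it can help to print the resolved group tree (a sketch; ansible-inventory is installed along with the pip requirements from step 6). kube_control_plane and etcd should each list the three masters, kube_node the two workers:

```shell
# Print the inventory as a group tree to verify the grouping.
ansible-inventory -i inventory/mycluster/hosts.yml --graph
```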

8. Kubernetes & Containerd Configuration

Edit:

inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml

Make sure:

container_manager: containerd
kube_version: v1.33.7

9. External API Load Balancer Configuration

Edit:

inventory/mycluster/group_vars/all/all.yml

Set:

apiserver_loadbalancer_domain_name: "k8s-api.example.local"

loadbalancer_apiserver:
  address: 172.16.21.154
  port: 6443

loadbalancer_apiserver_localhost: false

Add the entry to /etc/hosts on the Ansible node:

echo "172.16.21.154 k8s-api.example.local" | sudo tee -a /etc/hosts

10. Deploy Cluster

ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml --become

If your SSH private key lives at a custom path, run the following instead:

ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml --become -u ubuntu -e ansible_ssh_private_key_file=/path/to/custom/key

11. Verification

Run on one of the control plane nodes (or copy /etc/kubernetes/admin.conf to the Ansible node first):

kubectl get nodes -o wide
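A few more checks worth running (a sketch; these assume kubectl is pointed at the admin kubeconfig):

```shell
# The API endpoint reported here should be the load balancer address.
kubectl cluster-info
# All kube-system pods (apiserver x3, etcd x3, CNI, kube-proxy) should be Running.
kubectl get pods -n kube-system
```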

12. Containerd Notes

Replacements for common Docker commands:

  • crictl ps
  • crictl images
  • ctr -n k8s.io containers list
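A few more Docker-to-crictl equivalents for day-to-day debugging (a sketch; `<container-id>` is a placeholder taken from `crictl ps` output):

```shell
crictl ps -a                        # docker ps -a
crictl logs <container-id>          # docker logs <id>
crictl exec -it <container-id> sh   # docker exec -it <id> sh
crictl inspect <container-id>       # docker inspect <id>
```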

Bonus

1. Reset cluster

Reset the entire cluster back to a clean state:

ansible-playbook -i inventory/mycluster/hosts.yml reset.yml --become \
  -u ubuntu \
  -e ansible_ssh_private_key_file=/path/to/custom/key

2. Redeploy cluster

Redeploy the cluster after a reset:

ansible-playbook -i inventory/mycluster/hosts.yml cluster.yml --become \
  -u ubuntu \
  -e ansible_ssh_private_key_file=/path/to/custom/key

3. Scale cluster

Add new nodes to an existing cluster:

ansible-playbook -i inventory/mycluster/hosts.yml scale.yml --become \
  -u ubuntu \
  -e ansible_ssh_private_key_file=/path/to/custom/key

4. Remove node

Remove nodes from the cluster (the names passed via -e node= must match the inventory hostnames):

ansible-playbook -i inventory/mycluster/hosts.yml remove_node.yml --become \
  -u ubuntu \
  -e ansible_ssh_private_key_file=/path/to/custom/key \
  -e node=node1,node2

5. Upgrade cluster

Upgrade the Kubernetes cluster version:

ansible-playbook -i inventory/mycluster/hosts.yml upgrade_cluster.yml --become \
  -u ubuntu \
  -e ansible_ssh_private_key_file=/path/to/custom/key

6. Recover control plane

Recover a failed control plane:

ansible-playbook -i inventory/mycluster/hosts.yml recover_control_plane.yml --become \
  -u ubuntu \
  -e ansible_ssh_private_key_file=/path/to/custom/key


The Kubernetes v1.33.7 HA cluster is now ready to use.
