Why Proxmox for Kubernetes?
Proxmox VE is a free, open-source, enterprise-grade hypervisor that runs on almost any x86 hardware. Running Proxmox as the base OS and carving it into VMs is a practical way to build a real multi-node K8s cluster on a single physical machine.
This setup gives you a real cluster for:
- CKA/CKAD exam practice
- Testing Helm charts and operators
- Learning without paying for EKS or GKE
Proxmox VE dashboard — 3 VMs running as Kubernetes nodes on one physical machine
Recommended Hardware
Any x86 machine with at least:
- 4+ CPU cores
- 16GB+ RAM (32GB recommended)
- 256GB+ SSD
Cost: ~$100–200 for a secondhand mini PC or workstation
Step 1 — Install Proxmox
Download the Proxmox VE ISO from proxmox.com, flash it to a USB drive, and install it on your machine.
Access the web UI at: https://YOUR_IP:8006
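If you are flashing the USB drive from a Linux machine, `dd` is enough. A sketch — the ISO filename and the `/dev/sdX` target are placeholders; confirm the device with `lsblk` first, because writing to the wrong disk is destructive:

```shell
# Identify the USB device before writing anything
lsblk

# Write the installer image (placeholder filename and device -- adjust both)
sudo dd if=proxmox-ve.iso of=/dev/sdX bs=4M status=progress conv=fsync
```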
Step 2 — Create VMs for the Cluster
Create 3 VMs:
| Node | Role | CPU | RAM | Disk |
|---|---|---|---|---|
| k8s-master | Control Plane | 2 vCPU | 4GB | 32GB |
| k8s-worker1 | Worker | 2 vCPU | 6GB | 32GB |
| k8s-worker2 | Worker | 2 vCPU | 6GB | 32GB |
Use Ubuntu 22.04 LTS as the OS for all three nodes. Note that 4GB + 6GB + 6GB fully consumes a 16GB host, and Proxmox itself needs 1–2GB — on a 16GB machine, trim each VM slightly or go with the recommended 32GB.
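The VMs can be created in the web UI, or scripted from the Proxmox host shell with `qm`. A provisioning sketch — the VM IDs, the `local-lvm` storage name, the `vmbr0` bridge, and the ISO path are assumptions; adjust them to your setup:

```shell
# k8s-master: 2 vCPU, 4GB RAM, 32GB disk (IDs and storage names are assumptions)
qm create 101 --name k8s-master --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:32 --scsihw virtio-scsi-pci \
  --ide2 local:iso/ubuntu-22.04-live-server-amd64.iso,media=cdrom \
  --boot order='scsi0;ide2' --ostype l26

# k8s-worker1: same shape, 6GB RAM (repeat with ID 103 for k8s-worker2)
qm create 102 --name k8s-worker1 --memory 6144 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:32 --scsihw virtio-scsi-pci \
  --ide2 local:iso/ubuntu-22.04-live-server-amd64.iso,media=cdrom \
  --boot order='scsi0;ide2' --ostype l26

qm start 101 && qm start 102
```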
Step 3 — Install kubeadm on All Nodes
Run on every node:
```shell
# Disable swap (the kubelet will not start with swap enabled)
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# Kernel settings required for pod networking
modprobe br_netfilter
echo 'br_netfilter' > /etc/modules-load.d/k8s.conf
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system

# Install container runtime (containerd)
apt-get update
apt-get install -y containerd

# Install kubeadm, kubelet, kubectl from the v1.29 package repo
apt-get install -y apt-transport-https ca-certificates curl gpg
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
```
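One gotcha worth handling before `kubeadm init`: Kubernetes 1.29 defaults to the systemd cgroup driver, while Ubuntu's containerd package defaults to cgroupfs, which leads to pods crash-looping later. A sketch of aligning containerd — the `sed` pattern assumes the stock generated config:

```shell
# Regenerate the default containerd config, then switch to the systemd cgroup driver
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd
```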
Step 4 — Initialize the Control Plane
On the master node only:
```shell
# 10.244.0.0/16 matches Flannel's default pod CIDR (Step 5)
kubeadm init --pod-network-cidr=10.244.0.0/16
```
Set up kubectl access:
```shell
mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```
Step 5 — Install Flannel Network Plugin
```shell
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
```
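Flannel runs as a DaemonSet, one pod per node, and nodes stay NotReady until the CNI is up — usually under a minute. A quick check (the `kube-flannel` namespace matches the current upstream manifest):

```shell
# One kube-flannel pod per node should reach Running
kubectl get pods -n kube-flannel -o wide

# Watch nodes flip from NotReady to Ready once the CNI is up
kubectl get nodes -w
```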
Step 6 — Join Worker Nodes
Copy the join command from the kubeadm init output and run it on each worker:
```shell
kubeadm join MASTER_IP:6443 --token TOKEN --discovery-token-ca-cert-hash sha256:HASH
```
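The bootstrap token expires after 24 hours. If you add a worker later, or lost the original output, regenerate the whole join command on the control-plane node:

```shell
# Prints a ready-to-paste `kubeadm join ...` line with a fresh token
kubeadm token create --print-join-command
```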
All nodes showing Ready status — the cluster is up and running
Step 7 — Verify the Cluster
```shell
kubectl get nodes
```
Expected output:
```
NAME          STATUS   ROLES           AGE   VERSION
k8s-master    Ready    control-plane   5m    v1.29.0
k8s-worker1   Ready    <none>          3m    v1.29.0
k8s-worker2   Ready    <none>          3m    v1.29.0
```
Your cluster is ready. Deploy a test app:
```shell
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
```
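To confirm the service actually answers, look up the NodePort that was assigned and curl any node. A sketch — the worker IP is a placeholder for whatever your VMs got from DHCP:

```shell
# Find the randomly assigned NodePort (30000-32767 range)
kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'

# Any node's IP works for a NodePort service
curl http://WORKER_IP:NODEPORT
```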
What I Practice on This Cluster
- RBAC and namespace isolation
- Helm chart deployment and customization
- Ingress controllers (Nginx)
- Persistent volumes and storage classes
- Prometheus + Grafana monitoring stack
- Pod autoscaling (HPA)
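As one concrete example from that list, an HPA on the test nginx deployment is a single command — though note it only acts on metrics if metrics-server is installed, which kubeadm does not do by default:

```shell
# Scale nginx between 1 and 5 replicas, targeting 50% average CPU
kubectl autoscale deployment nginx --cpu-percent=50 --min=1 --max=5
kubectl get hpa nginx
```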
Pods scaling up across the cluster — this is what Kubernetes is built for
Cost
Running this cluster costs me nothing extra — it runs on the same OptiPlex as my Ollama setup. Total electricity draw is under 80W for the whole machine.
If you are preparing for CKA or just want to learn Kubernetes properly, build this setup. A real cluster beats any simulator.