Creating a One-Time Setup for a Kubernetes Cluster with Worker Nodes Using HAProxy

Setting up a Kubernetes cluster on-premise is an essential task for many DevOps professionals and system administrators. This blog will guide you through creating a one-time setup that will not only deploy a Kubernetes cluster with worker nodes but also ensure high availability (HA) using HAProxy for load balancing.
Prerequisites
Before we start, make sure you have the following prerequisites:
At least three servers: one master node and two worker nodes. For a highly available control plane (as configured in Step 2), use three master nodes; you can scale the cluster as needed.
A load balancer server to run HAProxy (preferably on a separate machine).
A basic understanding of Kubernetes, Linux commands, and networking.
Tools such as kubeadm, kubelet, kubectl, docker, and HAProxy installed (covered in Step 1).
A static IP assigned to each server.
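As a concrete example, a minimal address plan for this setup might look like the following (the hostnames and IPs are hypothetical; substitute your own):
# Example address plan (hypothetical values)
# 192.168.1.5    lb        HAProxy load balancer
# 192.168.1.10   master1   control-plane node
# 192.168.1.11   master2   control-plane node (optional, for HA)
# 192.168.1.12   master3   control-plane node (optional, for HA)
# 192.168.1.20   worker1   worker node
# 192.168.1.21   worker2   worker node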
Step 1: Prepare the Environment
You need to prepare the environment by installing the required software on all nodes.
1.1. Installing Docker
Kubernetes needs a container runtime; this guide uses Docker. Install Docker on all the nodes (master and worker nodes).
# Update and install dependencies
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
# Add Docker’s official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Add Docker repository
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Install Docker
sudo apt-get update && sudo apt-get install -y docker-ce
# Enable Docker service
sudo systemctl enable docker && sudo systemctl start docker
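kubeadm expects the kubelet and the container runtime to use the same cgroup driver, and the Kubernetes documentation recommends systemd on systemd-based distributions. A minimal sketch of that Docker setting (adjust to your distribution):
# Switch Docker to the systemd cgroup driver (recommended by the Kubernetes docs)
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker
# Verify the driver took effect
sudo docker info | grep -i cgroup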
1.2. Installing Kubernetes Components
Install the Kubernetes components (kubeadm, kubelet, kubectl) on all the nodes.
# Add the Kubernetes repository
# Note: this legacy repository (packages.cloud.google.com / apt.kubernetes.io) has been
# deprecated and frozen; for current Kubernetes releases, use the community-owned
# repositories at pkgs.k8s.io instead.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
# Install Kubernetes components
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
# Mark Kubernetes packages to hold at the current version
sudo apt-mark hold kubelet kubeadm kubectl
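kubeadm also refuses to run while swap is enabled, so disable it on every node before initializing or joining the cluster:
# Disable swap now and keep it off across reboots
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
# Confirm the tools are installed
kubeadm version
kubectl version --client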
1.3. Installing HAProxy on the Load Balancer Node
The load balancer node will run HAProxy, which will handle the traffic distribution to the master nodes.
# Install HAProxy
sudo apt-get update
sudo apt-get install -y haproxy
# Enable HAProxy service
sudo systemctl enable haproxy && sudo systemctl start haproxy
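You can confirm the installation before moving on:
# Check the installed HAProxy version and service status
haproxy -v
sudo systemctl status haproxy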
Step 2: Setting Up HAProxy for Load Balancing
The role of HAProxy in this setup is to distribute traffic across multiple Kubernetes master nodes for high availability. First, configure HAProxy to forward traffic to the Kubernetes API servers on the master nodes.
2.1. Configure HAProxy
Edit the HAProxy configuration file (/etc/haproxy/haproxy.cfg) and add the following load-balancing configuration.
# Define the frontend for load balancing
frontend kubernetes-api
    bind *:6443
    mode tcp
    default_backend kubernetes-backend

# Define the backend with the master nodes
backend kubernetes-backend
    mode tcp
    balance roundrobin
    server master1 <MASTER1_IP>:6443 check
    server master2 <MASTER2_IP>:6443 check
    server master3 <MASTER3_IP>:6443 check
Replace <MASTER1_IP>, <MASTER2_IP>, and <MASTER3_IP> with the actual IP addresses of your master nodes. This configuration distributes API traffic across the master nodes, ensuring high availability.
2.2. Restart HAProxy
After making changes to the configuration, restart HAProxy to apply the changes.
sudo systemctl restart haproxy
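It's worth validating the configuration file before restarting, and confirming that HAProxy is actually listening on the API port:
# Validate the configuration file syntax
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
# Confirm HAProxy is listening on port 6443
sudo ss -tlnp | grep 6443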
Step 3: Initializing the Kubernetes Master Node
The cluster must be initialized on the first master node before anything else joins. Use kubeadm to initialize it, pointing the control-plane endpoint at the load balancer.
# Initialize the master node
sudo kubeadm init --control-plane-endpoint="<LOAD_BALANCER_IP>:6443" --upload-certs
# Set up kubeconfig for the user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# To allow scheduling regular workloads on the master node (optional, depending on your use case)
kubectl taint nodes --all node-role.kubernetes.io/master-
# On newer Kubernetes versions the taint is named control-plane instead:
# kubectl taint nodes --all node-role.kubernetes.io/control-plane-
Make sure to replace <LOAD_BALANCER_IP> with the IP address of your HAProxy load balancer.
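For the additional master nodes listed in the HAProxy backend to serve traffic, they must join as control-plane members. kubeadm init --upload-certs prints a dedicated join command for this; it looks roughly like the sketch below, where the token, hash, and certificate key are placeholders taken from your own kubeadm output:
# Run on master2 and master3 to join them as control-plane nodes
sudo kubeadm join <LOAD_BALANCER_IP>:6443 --token <your-token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>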
3.1. Install a Pod Network (Weave Net)
Kubernetes requires a network plugin to facilitate communication between pods across nodes. You can install the Weave Net plugin or any other compatible network plugin. (The git.io URL shortener used in older guides has been retired; apply the manifest from the Weave Net GitHub releases instead.)
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
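Once applied, the Weave Net pods should reach the Running state in the kube-system namespace:
# Check that the Weave Net pods come up on every node
kubectl get pods -n kube-system | grep weave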
Step 4: Join the Worker Nodes to the Cluster
Once the master node is initialized, you need to join the worker nodes to the cluster. After the kubeadm init command completes, kubeadm outputs a join command with a token that you will use on the worker nodes.
On each worker node, run the following command:
sudo kubeadm join <LOAD_BALANCER_IP>:6443 --token <your-token> --discovery-token-ca-cert-hash sha256:<hash>
This will securely join the worker nodes to the Kubernetes cluster.
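Join tokens expire after 24 hours by default. If you add a worker later and the original token has expired, you can generate a fresh join command on a master node:
# Print a new, complete join command (run on a master node)
kubeadm token create --print-join-command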
Step 5: Verify the Cluster
After joining the worker nodes to the cluster, verify that everything is working by checking the node status:
kubectl get nodes
You should see your master and worker nodes listed with the status Ready.
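It's also worth checking that the system pods (API server, etcd, Weave Net, and so on) are healthy:
# All pods in kube-system should be Running
kubectl get pods -n kube-system -o wide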
Step 6: Deploying Applications
Now that your cluster is up and running with high availability, you can deploy applications and use kubectl to manage resources in the cluster.
For example, to deploy a sample application:
kubectl create deployment nginx --image=nginx --replicas=2 --port=80
kubectl expose deployment nginx --port=80 --type=NodePort
This creates an Nginx Deployment with two replicas and exposes it through a NodePort service. (Recent kubectl versions dropped the --replicas flag from kubectl run, so kubectl create deployment is the supported way to request multiple replicas.)
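To test the service, look up the NodePort that Kubernetes assigned and curl any node's IP on that port (the node IP and port below are placeholders):
# Find the assigned NodePort
kubectl get svc nginx
# Request the application through any node (replace <NODE_IP> and <NODE_PORT>)
curl http://<NODE_IP>:<NODE_PORT>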
This entire process can also be automated with a playbook, eliminating the need for a cloud-based load balancer. Instead, the playbook leverages HAProxy in front of the Kubernetes master nodes to manage traffic, ensuring high availability. The playbook supports one-time setup as well as resets, allowing flexibility in managing your cluster. It's a powerful solution for those deploying Kubernetes on bare-metal hardware, streamlining cluster setup with automation.
For more details, visit the GitHub repository.
Conclusion
In this blog, we've walked through setting up a high-availability Kubernetes cluster on-premise with worker nodes using HAProxy for load balancing. This setup ensures that the Kubernetes master nodes are highly available and can handle failure gracefully, providing reliability and scalability to your workloads. The steps above allow you to deploy a stable and fault-tolerant Kubernetes environment with minimal ongoing management.