Creating a One-Time Setup for Kubernetes Cluster with Worker Nodes Using HAProxy
Setting up a Kubernetes cluster on-premise is an essential task for many DevOps professionals and system administrators. This blog will guide you through creating a one-time setup that will not only deploy a Kubernetes cluster with worker nodes but also ensure high availability (HA) using HAProxy for load balancing.
Before we start, make sure you have the following prerequisites:
At least three servers: one master node and two worker nodes (for a truly highly available control plane, use three master nodes). You can scale the cluster as needed.
A load balancer server to run HAProxy (preferably on a separate machine).
Basic understanding of Kubernetes, Linux commands, and networking.
Tools like `kubeadm`, `kubelet`, `kubectl`, `docker`, and HAProxy installed.
Each server should have a static IP assigned.
You need to prepare the environment by installing the required software on all nodes.
1.1. Installing Docker
Kubernetes requires a container runtime; this guide uses Docker. Install Docker on all the nodes (master and worker nodes).
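A minimal install sketch for Ubuntu/Debian (adapt the package manager and package name for your distribution):

```shell
# Install Docker from the distro repositories (Ubuntu/Debian example).
sudo apt-get update
sudo apt-get install -y docker.io

# Start Docker now and enable it on boot.
sudo systemctl enable --now docker

# Verify the installation.
sudo docker --version
```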
1.2. Installing Kubernetes Components
Install the Kubernetes components (`kubeadm`, `kubelet`, `kubectl`) on all the nodes.
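One way to do this on Ubuntu/Debian is via the upstream Kubernetes package repository (`pkgs.k8s.io`); the version segment in the URLs below is an example and should match the Kubernetes release you intend to run:

```shell
# Prerequisites for adding the repository.
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# Add the Kubernetes apt repository signing key (v1.29 used as an example).
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /" | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install and pin the components so they are not upgraded accidentally.
sudo apt-get update
sudo apt-get install -y kubeadm kubelet kubectl
sudo apt-mark hold kubeadm kubelet kubectl
```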
1.3. Installing HAProxy on the Load Balancer Node
The load balancer node will run HAProxy, which will handle the traffic distribution to the master nodes.
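On an Ubuntu/Debian load balancer node, installation is a single package (adapt for your distribution):

```shell
# Install HAProxy on the load balancer node.
sudo apt-get update
sudo apt-get install -y haproxy

# Confirm the installed version.
haproxy -v
```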
The role of HAProxy in this setup is to distribute traffic across multiple Kubernetes master nodes for high availability. First, configure HAProxy to forward traffic to the Kubernetes API servers on the master nodes.
2.1. Configure HAProxy
Edit the HAProxy configuration file (`/etc/haproxy/haproxy.cfg`) and configure the load balancing.
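A minimal TCP-mode sketch for load-balancing the Kubernetes API (the `frontend`/`backend` names are illustrative; port 6443 is the default API server port):

```cfg
frontend kubernetes-api
    bind *:6443
    mode tcp
    option tcplog
    default_backend kubernetes-masters

backend kubernetes-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server master1 <MASTER1_IP>:6443 check fall 3 rise 2
    server master2 <MASTER2_IP>:6443 check fall 3 rise 2
    server master3 <MASTER3_IP>:6443 check fall 3 rise 2
```

TCP mode (rather than HTTP mode) is used here because the Kubernetes API is served over TLS, and HAProxy should pass the encrypted traffic through untouched.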
Replace `<MASTER1_IP>`, `<MASTER2_IP>`, and `<MASTER3_IP>` with the actual IP addresses of your master nodes. This configuration will distribute traffic to the master nodes, ensuring high availability.
2.2. Restart HAProxy
After making changes to the configuration, restart HAProxy to apply the changes.
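Validating the configuration before restarting avoids taking the load balancer down with a syntax error:

```shell
# Check the configuration file for errors first.
sudo haproxy -c -f /etc/haproxy/haproxy.cfg

# Restart HAProxy and confirm it is running.
sudo systemctl restart haproxy
sudo systemctl status haproxy --no-pager
```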
The Kubernetes master node should be initialized first. Use `kubeadm` to initialize the Kubernetes cluster.
Make sure to replace `<LOAD_BALANCER_IP>` with the IP address of your HAProxy load balancer.
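A typical init invocation looks like the following; `--control-plane-endpoint` points at the load balancer so all nodes reach the API through HAProxy, and `--upload-certs` lets additional master nodes join later:

```shell
# Initialize the first control-plane node through the load balancer endpoint.
sudo kubeadm init \
  --control-plane-endpoint "<LOAD_BALANCER_IP>:6443" \
  --upload-certs

# Set up kubectl for the current user, as kubeadm's own output suggests.
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```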
3.1. Install a Pod Network (Weave Net)
Kubernetes requires a network plugin to facilitate communication between pods across nodes. You can install the Weave Net plugin or any other compatible network plugin.
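Weave Net can be installed with a single `kubectl apply`; the release URL below is one published manifest version, so check the Weave Net documentation for the manifest matching your Kubernetes version:

```shell
# Install the Weave Net CNI plugin from the project's GitHub releases.
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml

# Watch the weave-net pods come up on every node.
kubectl get pods -n kube-system -l name=weave-net -w
```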
Once the master node is initialized, you need to join the worker nodes to the cluster. After the `kubeadm init` command completes, `kubeadm` will output a join command with a token that you will use on the worker nodes.
On each worker node, run the following command:
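The command has the following shape; the token and CA certificate hash are placeholders here, and the real values come from your own `kubeadm init` output:

```shell
# Join a worker node to the cluster via the load balancer endpoint.
# <TOKEN> and <HASH> are placeholders printed by `kubeadm init`.
sudo kubeadm join <LOAD_BALANCER_IP>:6443 \
  --token <TOKEN> \
  --discovery-token-ca-cert-hash sha256:<HASH>
```

If the token has expired (they are valid for 24 hours by default), a fresh join command can be printed on the master with `kubeadm token create --print-join-command`.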
This will securely join the worker nodes to the Kubernetes cluster.
After joining the worker nodes to the cluster, verify that everything is working by checking the node status:
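Run this from the master node (or any machine with the admin kubeconfig):

```shell
# List all nodes and their status, roles, and versions.
kubectl get nodes -o wide
```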
You should see your master and worker nodes listed, with the status `Ready`.
Now that your cluster is up and running with high availability, you can deploy applications. You can use `kubectl` to manage resources in the cluster.
For example, to deploy a sample application:
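One simple sketch using `kubectl` imperative commands (the deployment name `nginx` is just an example):

```shell
# Create an Nginx deployment and expose it on a NodePort.
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort

# Find the assigned NodePort (a port in the 30000-32767 range by default).
kubectl get svc nginx
```

The application is then reachable on `http://<ANY_NODE_IP>:<NODE_PORT>`.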
This deploys an Nginx application and exposes it through a NodePort.
The entire process can also be automated with a playbook, eliminating the need for a cloud-based load balancer. Instead, the playbook leverages HAProxy in front of the Kubernetes master nodes to manage traffic, ensuring high availability. The playbook supports one-time setup as well as resets, allowing flexibility in managing your cluster. It is a powerful approach for those deploying Kubernetes on bare-metal hardware, streamlining cluster setup with automation.
In this blog, we've walked through setting up a high-availability Kubernetes cluster on-premise with worker nodes using HAProxy for load balancing. This setup ensures that the Kubernetes master nodes are highly available and can handle failure gracefully, providing reliability and scalability to your workloads. The steps above allow you to deploy a stable and fault-tolerant Kubernetes environment with minimal ongoing management.