Set up an HA Cluster Using Keepalived and HAProxy

    This tutorial demonstrates how to configure Keepalived and HAProxy for load balancing and achieve high availability. The steps are as follows:

    1. Prepare hosts.
    2. Configure Keepalived and HAProxy.
    3. Use KubeKey to set up a Kubernetes cluster and install KubeSphere.

    The example cluster has three master nodes, three worker nodes, two nodes for load balancing, and one virtual IP address. The virtual IP address in this example is also called a “floating IP address”, meaning that in the event of node failures, the IP address can be passed between nodes, allowing for failover and thus high availability.

    Note that in this example, Keepalived and HAProxy are not installed on any of the master nodes. Admittedly, you can do that and high availability can still be achieved. That said, configuring two dedicated nodes for load balancing (you can add more nodes of this kind as needed) is more secure. Only Keepalived and HAProxy will be installed on these two nodes, avoiding any potential conflicts with Kubernetes components and services.

    Prepare Hosts

    For more information about requirements for nodes, network, and dependencies, see the multi-node installation requirements.
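
    Based on the addresses used throughout this example, the hosts are assumed to be as follows; adjust them to your own environment.

      Host name    IP address     Role
      lb1          172.16.0.2     Load balancer (Keepalived + HAProxy)
      lb2          172.16.0.3     Load balancer (Keepalived + HAProxy)
      master1      172.16.0.4     Master node
      master2      172.16.0.5     Master node
      master3      172.16.0.6     Master node
      worker1      172.16.0.7     Worker node
      worker2      172.16.0.8     Worker node
      worker3      172.16.0.9     Worker node
      VIP          172.16.0.10    Virtual IP address (floating)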

    Keepalived provides a VRRP implementation and allows you to configure Linux machines for load balancing, preventing single points of failure. HAProxy, providing reliable, high-performance load balancing, works perfectly with Keepalived.

    As Keepalived and HAProxy are installed on lb1 and lb2, if either one goes down, the virtual IP address (i.e. the floating IP address) will be automatically associated with the other node so that the cluster still functions, thus achieving high availability. If you want, you can add more nodes with Keepalived and HAProxy installed for that purpose.

    Install Keepalived and HAProxy on both machines for load balancing first.
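
    The exact command depends on your distribution. On a yum-based system such as CentOS, the following installs both packages plus psmisc, which provides the killall command used by the Keepalived health check below:

      yum install keepalived haproxy psmisc -y

    On apt-based systems, the equivalent is apt install keepalived haproxy psmisc.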

    1. The configuration of HAProxy is exactly the same on the two machines for load balancing. Run the following command to configure HAProxy.

      vi /etc/haproxy/haproxy.cfg
    2. Here is an example configuration for your reference (pay attention to the server fields; note that 6443 is the apiserver port):

      global
          log /dev/log local0 warning
          chroot /var/lib/haproxy
          pidfile /var/run/haproxy.pid
          maxconn 4000
          user haproxy
          group haproxy
          daemon

          stats socket /var/lib/haproxy/stats

      defaults
          log global
          option httplog
          option dontlognull
          timeout connect 5000
          timeout client 50000
          timeout server 50000

      frontend kube-apiserver
          bind *:6443
          mode tcp
          option tcplog
          default_backend kube-apiserver

      backend kube-apiserver
          mode tcp
          option tcplog
          option tcp-check
          balance roundrobin
          default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
          server kube-apiserver-1 172.16.0.4:6443 check # Replace the IP address with your own.
          server kube-apiserver-2 172.16.0.5:6443 check # Replace the IP address with your own.
          server kube-apiserver-3 172.16.0.6:6443 check # Replace the IP address with your own.
    3. Save the file and run the following command to restart HAProxy.

      systemctl restart haproxy
    4. Make it persist through reboots:

      systemctl enable haproxy
    5. Make sure you configure HAProxy on the other machine (lb2) as well.
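
    As a quick sanity check on either machine, you can also ask HAProxy to validate the configuration file before (re)starting the service; it exits non-zero and reports the offending line if the file is invalid:

      haproxy -c -f /etc/haproxy/haproxy.cfg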

    Keepalived must be installed on both machines, while its configuration differs slightly between them.

    1. Run the following command to configure Keepalived.

      vi /etc/keepalived/keepalived.conf

    Here is an example configuration (for lb1) for your reference:

      global_defs {
        notification_email {
        }
        router_id LVS_DEVEL
        vrrp_garp_interval 0
        vrrp_gna_interval 0
      }

      vrrp_script chk_haproxy {
        script "killall -0 haproxy"
        interval 2
        weight 2
      }

      vrrp_instance haproxy-vip {
        state BACKUP
        priority 100
        interface eth0                 # Network card
        virtual_router_id 60
        advert_int 1
        authentication {
          auth_type PASS
          auth_pass 1111
        }
        unicast_src_ip 172.16.0.2      # The IP address of this machine
        unicast_peer {
          172.16.0.3                   # The IP address of peer machines
        }

        virtual_ipaddress {
          172.16.0.10/24               # The VIP address
        }

        track_script {
          chk_haproxy
        }
      }

      Note

      • For the interface field, you must provide your own network card information. You can run ifconfig on your machine to get the value.

      • The IP address provided for unicast_src_ip is the IP address of the current machine. For the other machines where HAProxy and Keepalived are also installed for load balancing, their IP addresses must be entered in the unicast_peer field.
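
      • The chk_haproxy script relies on killall -0, which sends no actual signal but succeeds only if a haproxy process exists. Keepalived runs it every 2 seconds and, while it passes, adds the configured weight to the node's priority. You can try the check manually:

        killall -0 haproxy && echo "haproxy is running" || echo "haproxy is down"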

    2. Save the file and run the following command to restart Keepalived.

      systemctl restart keepalived

    3. Make it persist through reboots:

      systemctl enable keepalived
    4. Make sure you configure Keepalived on the other machine (lb2) as well.
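
    Based on the addresses in this example, the Keepalived configuration on lb2 is identical except that unicast_src_ip and unicast_peer are swapped (and interface must match lb2's own network card):

      unicast_src_ip 172.16.0.3      # The IP address of this machine (lb2)
      unicast_peer {
        172.16.0.2                   # The IP address of the peer machine (lb1)
      }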

    Verify High Availability

    Before you start to create your Kubernetes cluster, make sure you have tested high availability.

    1. On the machine lb1, run the following command:

      [root@lb1 ~]# ip a s
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
          link/ether 52:54:9e:27:38:c8 brd ff:ff:ff:ff:ff:ff
          inet 172.16.0.2/24 brd 172.16.0.255 scope global noprefixroute dynamic eth0
             valid_lft 73334sec preferred_lft 73334sec
          inet 172.16.0.10/24 scope global secondary eth0 # The VIP address
             valid_lft forever preferred_lft forever
          inet6 fe80::510e:f96:98b2:af40/64 scope link noprefixroute
             valid_lft forever preferred_lft forever
    2. As you can see above, the virtual IP address is successfully added. Simulate a failure on this node:

      systemctl stop haproxy
    3. Check the floating IP address again and you can see it has disappeared on lb1.

      [root@lb1 ~]# ip a s
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
          link/ether 52:54:9e:27:38:c8 brd ff:ff:ff:ff:ff:ff
          inet 172.16.0.2/24 brd 172.16.0.255 scope global noprefixroute dynamic eth0
             valid_lft 72802sec preferred_lft 72802sec
          inet6 fe80::510e:f96:98b2:af40/64 scope link noprefixroute
             valid_lft forever preferred_lft forever
    4. If the configuration is successful, the virtual IP address will have failed over to the other machine (lb2). On lb2, run the following command and here is the expected output:

      [root@lb2 ~]# ip a s
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
          link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
          inet 127.0.0.1/8 scope host lo
             valid_lft forever preferred_lft forever
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
          link/ether 52:54:9e:3f:51:ba brd ff:ff:ff:ff:ff:ff
          inet 172.16.0.3/24 brd 172.16.0.255 scope global noprefixroute dynamic eth0
             valid_lft 72690sec preferred_lft 72690sec
          inet 172.16.0.10/24 scope global secondary eth0 # The VIP address
             valid_lft forever preferred_lft forever
          inet6 fe80::f67c:bd4f:d6d5:1d9b/64 scope link noprefixroute
             valid_lft forever preferred_lft forever
    5. As you can see above, high availability is successfully configured.
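
    Before moving on, remember to start HAProxy on lb1 again so that both load balancers are active:

      systemctl start haproxy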

    Use KubeKey to Create a Kubernetes Cluster

    KubeKey is an efficient and convenient tool to create a Kubernetes cluster. Follow the steps below to download KubeKey.

    Download KubeKey from its GitHub Release Page or use the following command directly.

      curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.0 sh -

    If you have poor network connections to Googleapis, run the following command first to make sure you download KubeKey from the correct zone.

      export KKZONE=cn
      curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.0 sh -

    Note

    After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run export KKZONE=cn again before you proceed with the steps below.

    Note

    The commands above download the latest release (v1.1.0) of KubeKey. You can change the version number in the command to download a specific version.

    Make kk executable:

      chmod +x kk
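
    Optionally, check that the binary runs; the version subcommand is assumed to be available in this KubeKey release:

      ./kk version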

    Create an example configuration file with default configurations. Here Kubernetes v1.20.4 is used as an example.

      ./kk create config --with-kubesphere v3.1.0 --with-kubernetes v1.20.4

    Note

    • Recommended Kubernetes versions for KubeSphere v3.1.0: v1.17.9, v1.18.8, v1.19.8, and v1.20.4. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.19.8 by default. For more information about supported Kubernetes versions, see the KubeKey support matrix.

    • If you do not add the flag --with-kubesphere in the command in this step, KubeSphere will not be deployed unless you install it using the addons field in the configuration file or add this flag again when you use ./kk create cluster later.

    • If you add the flag --with-kubesphere without specifying a KubeSphere version, the latest version of KubeSphere will be installed.

    Deploy KubeSphere and Kubernetes

    After you run the commands above, a configuration file config-sample.yaml will be created. Edit the file to add machine information, configure the load balancer and more.

    Note

    The file name may be different if you customize it.

      ...
      spec:
        hosts:
        - {name: master1, address: 172.16.0.4, internalAddress: 172.16.0.4, user: root, password: Testing123}
        - {name: master2, address: 172.16.0.5, internalAddress: 172.16.0.5, user: root, password: Testing123}
        - {name: master3, address: 172.16.0.6, internalAddress: 172.16.0.6, user: root, password: Testing123}
        - {name: worker1, address: 172.16.0.7, internalAddress: 172.16.0.7, user: root, password: Testing123}
        - {name: worker2, address: 172.16.0.8, internalAddress: 172.16.0.8, user: root, password: Testing123}
        - {name: worker3, address: 172.16.0.9, internalAddress: 172.16.0.9, user: root, password: Testing123}
        roleGroups:
          etcd:
          - master1
          - master2
          - master3
          master:
          - master1
          - master2
          - master3
          worker:
          - worker1
          - worker2
          - worker3
        controlPlaneEndpoint:
          domain: lb.kubesphere.local
          address: 172.16.0.10   # The VIP address
          port: 6443
      ...

    Note

    • Replace the value of controlPlaneEndpoint.address with your own VIP address.
    • For more information about different parameters in this configuration file, see the KubeKey configuration file documentation.

    After you complete the configuration, you can execute the following command to start the installation:

      ./kk create cluster -f config-sample.yaml
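
    When the installation finishes, you can follow the installer logs to confirm that KubeSphere has come up. This is the standard KubeSphere inspection command, which assumes the ks-installer pod runs in the kubesphere-system namespace:

      kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f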