Installing HA Master and Etcd Cluster

    A Load Balancer can distribute traffic from multiple web services across multiple backend hosts, and can automatically detect and isolate unavailable hosts, improving the service capability and availability of your business. Alternatively, an HA cluster can also be implemented with Keepalived and HAProxy.

    At least one etcd node needs to be deployed, but multiple etcd nodes (only an odd number is supported) make the cluster more reliable. This document walks you through how to create a highly available cluster by configuring the QingCloud Load Balancer.
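    The odd-number rule comes from etcd's quorum requirement: writes need a majority of members, so a cluster of n members tolerates floor((n-1)/2) failures, and an even-sized cluster tolerates no more failures than the next smaller odd size. A quick shell illustration of the arithmetic (not any real etcd command):

    ```shell
    # etcd needs a majority (quorum) of members to stay writable, so a cluster of
    # n members tolerates floor((n-1)/2) failures. An even member count adds no
    # extra fault tolerance, which is why only odd sizes are recommended.
    for n in 1 2 3 4 5; do
      echo "$n etcd member(s): quorum $(( n/2 + 1 )), tolerates $(( (n-1)/2 )) failure(s)"
    done
    ```

    For example, 3 members tolerate 1 failure, while 4 members still tolerate only 1 — the fourth member adds risk without adding resilience.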

    • Please make sure that you have already read . This page only explains how to configure an HA cluster. As an optional configuration item, it does not elaborate on the general procedures and parameters; see Multi-Node for details.

    Highly Available Architecture

    This example prepares 6 hosts; the master and etcd clusters will be deployed on 3 of them. You can configure multiple master and etcd nodes in .

    This page shows an example of creating a Load Balancer on the QingCloud platform and briefly explains the creation steps. For details, please refer to the QingCloud documentation.

    Step 1: Create a Load Balancer

    Note: You need to apply for a VxNet first; then you can create a Load Balancer in that VxNet.

    Log in to the QingCloud platform, select Network & CDN → Load Balancers, then click the create button and fill in the basic information. The following example describes how to create a Load Balancer:

    • Available Zones: For example, Beijing Zone 3 - C
    • Deployment Mode: Single Zone
    • Network: Select the private network of the VPC

    Other settings can be kept as default. After filling in the basic information, click Submit to complete the creation.

    LB info

    Step 2: Create the Listener

    • Name: Define a concise and clear name for this Listener
    • Listener Protocol: Select TCP protocol
    • Port: 6443
    • Load mode: Poll (round robin)

    Step 3: Add the Backend Server

    Under the current Load Balancer, click Add Backend

    1. Select the Managed VxNet network and the VxNet where the cluster hosts are located (e.g. kubesphere), then click Advanced Search. You can check multiple backend hosts at a time; for example, to achieve high availability of the master nodes, check master1, master2, and master3 as listed.

    2. Finally, click Submit when you’re done.

    Add Backend

    After adding the backend, click Apply Changes for the configuration to take effect. You can then verify the three added master nodes of the current Load Balancer in the list.
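    Once the changes are applied (and after the cluster has been installed), you can run a quick sanity check from any host in the VxNet. This is a sketch: the LB_IP value is a placeholder for your Load Balancer's internal IP, and it assumes the apiserver's /healthz endpoint is readable by unauthenticated clients, which is the default on recent Kubernetes versions:

    ```shell
    # Placeholder: substitute your Load Balancer's internal IP address.
    LB_IP="<your-lb-internal-ip>"

    # Query kube-apiserver's health endpoint through the Load Balancer on port 6443.
    # -k skips TLS verification, since the LB forwards the apiserver's own certificate.
    curl -k "https://${LB_IP}:6443/healthz"
    ```

    A healthy cluster responds with "ok"; a connection failure usually means the listener or backend configuration above is incomplete.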

    The host names master1, master2, and master3 can be filled in the [kube-master] and [etcd] sections of the following example as highly available Master and etcd nodes.

    Note that the number of etcd nodes must be odd. Since etcd's memory usage is relatively high, running it on worker nodes may leave them short of resources; therefore, deploying the etcd cluster on worker nodes is not recommended.

    To manage the deployment process and target machines, configure all hosts in hosts.ini as in the following example. It is recommended to install as the root user; the example below shows a configuration for CentOS 7.5 using the root user. Note that each host's information must occupy a single line and cannot be wrapped manually.

    hosts.ini configuration example
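    An inventory along the following lines would match the description above. All host names, IP addresses, and the SSH password are hypothetical placeholders, and the [kube-node] section is an assumption based on the installer's usual Ansible inventory layout; only [kube-master] and [etcd] are named explicitly in this document:

    ```ini
    # Hypothetical sketch; replace host names, IPs, and credentials with your own.
    [all]
    master1 ansible_connection=local  ip=192.168.0.1
    master2 ansible_host=192.168.0.2  ip=192.168.0.2  ansible_user=root  ansible_ssh_pass=PASSWORD
    master3 ansible_host=192.168.0.3  ip=192.168.0.3  ansible_user=root  ansible_ssh_pass=PASSWORD
    node1   ansible_host=192.168.0.4  ip=192.168.0.4  ansible_user=root  ansible_ssh_pass=PASSWORD
    node2   ansible_host=192.168.0.5  ip=192.168.0.5  ansible_user=root  ansible_ssh_pass=PASSWORD
    node3   ansible_host=192.168.0.6  ip=192.168.0.6  ansible_user=root  ansible_ssh_pass=PASSWORD

    [kube-master]
    master1
    master2
    master3

    [etcd]
    master1
    master2
    master3

    [kube-node]
    node1
    node2
    node3
    ```

    Each host line stays on a single line, as noted above; the [kube-master] and [etcd] groups list the same three masters behind the Load Balancer.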

    Configure the LB Parameters

    Finally, after preparing the Load Balancer, you need to modify the relevant parameters in vars.yml. Assume the internal IP address of the Load Balancer is (replace it with your actual Load Balancer IP address) and the listening port of the TCP protocol is 6443; then the parameters in conf/vars.yml can be modified as in the following example (loadbalancer_apiserver is an optional configuration item and should be uncommented in the configuration file).

    vars.yml configuration sample

    ## External LB example config
    ## apiserver_loadbalancer_domain_name: "lb.kubesphere.local"
    loadbalancer_apiserver:
      address: # your Load Balancer's internal IP address
      port: 6443

    After completing the high availability configuration, see Multi-Node to configure the persistent storage parameters in vars.yml and complete the remaining multi-node installation steps.