Storage Configuration Instruction

    • QingCloud Block Storage
    • QingStor NeonSAN
    • Ceph RBD
    • NFS
    • NFS in Kubernetes (Multi-node installation test only)
    • Local Volume (All-in-One installation test only)

    The Installer integrates the QingCloud-CSI (block storage) plugin and the NeonSAN-CSI plugin. It can connect to QingCloud Block Storage or QingStor NeonSAN as the underlying storage with only a simple configuration before installation.

    Make sure you have a QingCloud account. In addition, the Installer also integrates storage clients such as NFS, GlusterFS and Ceph RBD. Users need to prepare the relevant storage server in advance, and then configure the corresponding parameters in conf/vars.yml to connect to that storage server.

    The versions of the open source storage servers and clients that have been tested with the Installer, as well as the CSI plugins, are listed as follows:

    After preparing the storage server, refer to the parameter descriptions in the following tables, then modify the corresponding storage class section in the configuration file (conf/vars.yml) according to your storage server.

    The following is a brief description of the storage-related parameter configuration in conf/vars.yml.

    KubeSphere supports QingCloud Block Storage as the platform storage service. If you would like to experience dynamic provisioning when creating volumes, it is recommended to use QingCloud Block Storage. KubeSphere integrates the QingCloud-CSI plugin, which allows you to use the different performance levels of block storage on the QingCloud platform.

    After the plugin installation completes, users can create volumes based on several types of disk, such as super high performance disks, high performance disks and high capacity disks, with the ReadWriteOnce access mode, and mount those volumes on workloads.

    The parameters for configuring the QingCloud-CSI plugin are described below.

    QingCloud-CSI | Description
    qingcloud_csi_enabled | Determines whether to use QingCloud-CSI as the persistent storage volume provisioner; can be set to true or false. Defaults to false
    qingcloud_csi_is_default_class | Determines whether to set QingCloud-CSI as the default storage class; can be set to true or false. Defaults to false. Note: when there are multiple storage classes in the system, only one can be set as the default
    qingcloud_access_key_id, qingcloud_secret_access_key | Obtained from the QingCloud Cloud Platform Console
    qingcloud_zone | zone should be the same zone in which the Kubernetes cluster is installed; the CSI plugin will operate on the storage volumes of this zone. For example, zone can be set to sh1a (Shanghai 1-A), sh1b (Shanghai 1-B), pek2 (Beijing 2), pek3a (Beijing 3-A), pek3b (Beijing 3-B), pek3c (Beijing 3-C), gd1 (Guangdong 1), gd2a (Guangdong 2-A), ap1 (Asia Pacific 1) or ap2a (Asia Pacific 2-A)
    type | The volume type on the QingCloud IaaS platform. On the QingCloud public cloud platform, 0 represents high performance volume, 3 represents super high performance volume, and 1 or 2 represents high capacity volume, depending on the cluster's zone
    maxSize, minSize | Limit the range of volume sizes, in GiB
    stepSize | Set the increment of volume sizes, in GiB
    fsType | The file system of the storage volume; ext3, ext4 and xfs are supported. The default is ext4
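
    Putting the parameters above together, a QingCloud-CSI section of conf/vars.yml might look like the sketch below. All values (access keys, zone, size limits) are placeholders to replace with your own, and the exact layout of vars.yml may differ between Installer versions.

    ```yaml
    # QingCloud-CSI sketch for conf/vars.yml (placeholder values)
    qingcloud_csi_enabled: true
    qingcloud_csi_is_default_class: true
    qingcloud_access_key_id: "YOUR_ACCESS_KEY_ID"         # from the QingCloud console
    qingcloud_secret_access_key: "YOUR_SECRET_ACCESS_KEY" # from the QingCloud console
    qingcloud_zone: "pek3a"     # must match the zone of the Kubernetes cluster
    type: 0                     # 0 = high performance volume on the public cloud
    maxSize: 500                # upper volume size limit, GiB
    minSize: 10                 # lower volume size limit, GiB
    stepSize: 10                # volume size increment, GiB
    fsType: "ext4"
    ```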

    QingStor NeonSAN

    The NeonSAN-CSI plugin supports QingStor NeonSAN, an enterprise-level distributed storage product, as the platform storage service. If you have prepared a NeonSAN server, you can configure the NeonSAN-CSI plugin to connect to it in conf/vars.yml; see the NeonSAN-CSI reference for details.
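
    This section does not list the NeonSAN-CSI parameters, so the key names in the sketch below are only illustrative placeholders for the general shape of such a section; verify every key against the NeonSAN-CSI reference for your Installer version before use.

    ```yaml
    # Hypothetical NeonSAN-CSI sketch; key names are placeholders,
    # check them against the NeonSAN-CSI reference
    neonsan_csi_enabled: true            # placeholder name
    neonsan_csi_is_default_class: false  # placeholder name
    ```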

    Ceph RBD

    Ceph RBD, an open source distributed storage system, can be configured in conf/vars.yml. Assuming you have prepared Ceph storage servers in advance, you can reference the following definition. See the Kubernetes documentation for more details.

    Ceph_RBD | Description
    ceph_rbd_enabled | Determines whether to use Ceph RBD as the persistent storage; can be set to true or false. Defaults to false
    ceph_rbd_storage_class | Storage class name
    ceph_rbd_is_default_class | Determines whether to set Ceph RBD as the default storage class; can be set to true or false. Defaults to false. Note: when there are multiple storage classes in the system, only one can be set as the default
    ceph_rbd_monitors | Ceph monitors, comma delimited. This parameter is required and depends on the Ceph RBD server parameters
    ceph_rbd_admin_id | Ceph client ID that is capable of creating images in the pool. Default is "admin"
    ceph_rbd_admin_secret | Secret for adminId. This parameter is required, and the provided secret must have type "kubernetes.io/rbd"
    ceph_rbd_pool | Ceph RBD pool. Default is "rbd"
    ceph_rbd_user_id | Ceph client ID that is used to map the RBD image. Default is the same as adminId
    ceph_rbd_user_secret | Secret for userId; it is required to create this secret in the namespace that uses the RBD image
    ceph_rbd_fsType | fsType that is supported by Kubernetes. Default: "ext4"
    ceph_rbd_imageFormat | Ceph RBD image format, "1" or "2". Default is "1"
    ceph_rbd_imageFeatures | This parameter is optional and should only be used if you set imageFormat to "2". Currently, layering is the only supported feature. Default is "", and no features are turned on
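
    Combining the parameters above, a Ceph RBD section of conf/vars.yml might look like the following sketch. The monitor addresses and secret values are placeholders; they come from your own Ceph cluster, and the secret entries must match the "kubernetes.io/rbd" typed secrets described in the table.

    ```yaml
    # Ceph RBD sketch for conf/vars.yml (placeholder values from your Ceph cluster)
    ceph_rbd_enabled: true
    ceph_rbd_storage_class: "rbd"
    ceph_rbd_is_default_class: false
    ceph_rbd_monitors: "192.168.0.10:6789,192.168.0.11:6789,192.168.0.12:6789"
    ceph_rbd_admin_id: "admin"
    ceph_rbd_admin_secret: "YOUR_ADMIN_KEYRING_SECRET"   # must have type kubernetes.io/rbd
    ceph_rbd_pool: "rbd"
    ceph_rbd_user_id: "admin"                            # defaults to adminId
    ceph_rbd_user_secret: "YOUR_USER_KEYRING_SECRET"     # created in the consuming namespace
    ceph_rbd_fsType: "ext4"
    ceph_rbd_imageFormat: "2"
    ceph_rbd_imageFeatures: "layering"                   # only valid with imageFormat "2"
    ```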


    GlusterFS

    GlusterFS is a scalable network filesystem suitable for data-intensive tasks such as cloud storage and media streaming. Assuming you have prepared GlusterFS storage servers in advance, you can reference the following definition; see the Kubernetes documentation for more details.
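
    This section does not include a GlusterFS parameter table, so the key names below are only illustrative placeholders for what a GlusterFS client section in conf/vars.yml could contain (a Heketi REST URL is what the Kubernetes GlusterFS provisioner expects); check them against your Installer's reference before use.

    ```yaml
    # Illustrative GlusterFS sketch; key names are placeholders,
    # verify them against your Installer's reference
    glusterfs_provisioner_enabled: true                        # placeholder name
    glusterfs_provisioner_resturl: "http://192.168.0.20:8080"  # Heketi REST endpoint (placeholder)
    ```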


    NFS

    An NFS volume allows an existing NFS (Network File System) share to be mounted into your Pods. NFS can be configured in conf/vars.yml, assuming you have prepared the NFS storage server in advance. Note that you can also use QingCloud vNAS as the NFS server.

    NFS | Description
    nfs_client_enable | Determines whether to use NFS as the persistent storage; can be set to true or false. Defaults to false
    nfs_client_is_default_class | Determines whether to set NFS as the default storage class; can be set to true or false. Defaults to false. Note: when there are multiple storage classes in the system, only one can be set as the default
    nfs_server | The NFS server address, either IP or hostname
    nfs_path | The NFS shared directory, i.e. the file directory shared on the server
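
    An NFS section of conf/vars.yml built from the parameters above might look like this sketch; the server address and export path are placeholders for your own NFS server (or QingCloud vNAS endpoint).

    ```yaml
    # NFS client sketch for conf/vars.yml (placeholder server and path)
    nfs_client_enable: true
    nfs_client_is_default_class: false
    nfs_server: "192.168.0.30"     # NFS server IP or hostname, e.g. a QingCloud vNAS endpoint
    nfs_path: "/mnt/shared_dir"    # directory exported by the server
    ```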

    NFS in Kubernetes (Multi-node installation test only)

    This kind of storage installs the NFS-client provisioner in the Kubernetes cluster, which is an out-of-tree dynamic provisioner for Kubernetes and requires the Kubernetes nodes to have enough disk space. It is configured in conf/vars.yml.

    Local Volume (All-in-One installation test only)

    Local volume | Description
    local_volume_provisioner_enabled | Determines whether to use the local volume as the persistent storage; can be set to true or false. Defaults to true
    local_volume_provisioner_storage_class | Storage class name. Default: local
    local_volume_is_default_class | Determines whether to set the local volume as the default storage class; can be set to true or false. Defaults to true. Note: when there are multiple storage classes in the system, only one can be set as the default
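
    For an All-in-One test installation, the local volume section of conf/vars.yml can be sketched from the parameters above; these are the documented defaults, shown here only as a worked example.

    ```yaml
    # Local volume sketch for conf/vars.yml (All-in-One test defaults)
    local_volume_provisioner_enabled: true
    local_volume_provisioner_storage_class: "local"
    local_volume_is_default_class: true
    ```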