Persistent Storage Class Configuration in Kubernetes

    • Network storage

      The network storage medium is not on the current node but is mounted to the node through the network. Generally, there are redundant replicas to guarantee high availability. When the node fails, the corresponding network storage can be re-mounted to another node for further use.

    • Local storage

      The local storage medium is on the current node and typically can provide lower latency than the network storage. Because there are no redundant replicas, once the node fails, data might be lost. If it is an IDC server, data can be restored to a certain extent. If it is a virtual machine using the local disk on the public cloud, data cannot be retrieved after the node fails.

    PVs are created by the system administrator or automatically provisioned by a volume provisioner. PVs and Pods are bound through PersistentVolumeClaims (PVCs). Instead of creating a PV directly, users request a PV through a PVC; the corresponding volume provisioner then creates a PV that meets the requirements of the PVC and binds the PV to the PVC.
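
    For example, a PVC like the following (the name is a placeholder for illustration) requests 10 Gi from the local-storage StorageClass; a PV of that class with sufficient capacity is then bound to it:

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: example-pvc          # hypothetical name
      spec:
        storageClassName: local-storage
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi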

    Warning

    Do not delete a PV under any circumstances unless you are familiar with the underlying volume provisioner. Deleting a PV manually can cause orphaned volumes and unexpected behavior.

    TiKV uses the Raft protocol to replicate data. When a node fails, PD automatically schedules data to fill the missing data replicas. TiKV requires low read and write latency, so local SSD storage is strongly recommended in the production environment.

    PD also uses Raft to replicate data. PD is not an I/O-intensive application, but a database for storing cluster meta information, so a local SAS disk or network SSD storage such as EBS General Purpose SSD (gp2) volumes on AWS or SSD persistent disks on GCP can meet the requirements.

    To ensure availability, it is recommended to use network storage for components such as TiDB monitoring, TiDB Binlog, and tidb-backup because they do not have redundant replicas. TiDB Binlog’s Pump and Drainer components are I/O-intensive applications that require low read and write latency, so it is recommended to use high-performance network storage such as EBS Provisioned IOPS SSD (io1) volumes on AWS or SSD persistent disks on GCP.

    When deploying TiDB clusters or tidb-backup with TiDB Operator, you can configure a StorageClass for the components that require persistent storage via the corresponding storageClassName field in the values.yaml configuration file. The storageClassName is set to local-storage by default.
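
    For example, a minimal sketch of the relevant fields (the exact field layout depends on the chart version, so treat the paths below as an assumption to verify against your own values.yaml):

      # Sketch only: field paths assumed from a typical tidb-cluster values.yaml
      pd:
        storageClassName: local-storage
      tikv:
        storageClassName: local-storage
      tidb:
        storageClassName: local-storage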

    Kubernetes 1.11 and later versions support volume expansion of PersistentVolumes, but you need to run the following command to enable volume expansion for the corresponding StorageClass:
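
    For example, you can set allowVolumeExpansion to true on the StorageClass (replace ${storage_class} with the name of the StorageClass used by the component):

      kubectl patch storageclass ${storage_class} -p '{"allowVolumeExpansion": true}'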

    After volume expansion is enabled, expand the PV using the following method:

    1. Edit the PersistentVolumeClaim (PVC) object:

      Suppose the PVC is 10 Gi and now we need to expand it to 100 Gi.

      kubectl patch pvc -n ${namespace} ${pvc_name} -p '{"spec": {"resources": {"requests": {"storage": "100Gi"}}}}'
    2. View the size of the PV:

      After the expansion, the size displayed by running kubectl get pvc -n ${namespace} ${pvc_name} is still the original one. But if you run the following command to view the size of the PV, it shows that the size has been expanded to the expected one.

      kubectl get pv | grep ${pvc_name}
    • For a disk that stores TiKV data, you can mount the disk into the /mnt/ssd directory.

      To achieve high performance, it is recommended to dedicate the disk to TiKV, and the recommended disk type is SSD.

    • For a disk that stores PD data, follow the steps to mount the disk. First, create multiple directories in the disk, and bind mount the directories into the /mnt/sharedssd directory.

      Note

      The number of directories you create depends on the planned number of TiDB clusters, and the number of PD servers in each cluster. For each directory, a corresponding PV will be created. Each PD server uses one PV.

    • For a disk that stores monitoring data, follow the steps to mount the disk. First, create multiple directories in the disk, and bind mount the directories into the /mnt/monitoring directory.

      Note

      The number of directories you create depends on the planned number of TiDB clusters. For each directory, a corresponding PV will be created. The monitoring data in each TiDB cluster uses one PV.

    • For a disk that stores TiDB Binlog and backup data, follow the steps to mount the disk. First, create multiple directories in the disk, and bind mount the directories into the /mnt/backup directory.

      Note

      The number of directories you create depends on the planned number of TiDB clusters, the number of Pumps in each cluster, and your backup method. For each directory, a corresponding PV will be created. Each Pump uses one PV and each Drainer uses one PV. All Ad-hoc full backup tasks and all scheduled full backup tasks share one PV.

    The /mnt/ssd, /mnt/sharedssd, /mnt/monitoring, and /mnt/backup directories mentioned above are discovery directories used by local-volume-provisioner. local-volume-provisioner creates a corresponding PV for each subdirectory in the discovery directory (see the sketch below).
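
    The following shell sketch illustrates the bind-mount pattern for the /mnt/sharedssd discovery directory. The disk mount point (/mnt/disks) and the number of subdirectories (3) are assumptions for illustration; adjust them to your own disk layout and cluster plan, and repeat the same pattern for the other discovery directories. To make the mounts survive a reboot, also add the corresponding entries to /etc/fstab.

      # Assumption: the data disk is already formatted and mounted at /mnt/disks.
      # Create one subdirectory per planned PD server and bind mount each one into
      # the /mnt/sharedssd discovery directory, so that local-volume-provisioner
      # creates one PV per subdirectory.
      for i in $(seq 1 3); do
        mkdir -p /mnt/disks/pd${i} /mnt/sharedssd/pd${i}
        mount --bind /mnt/disks/pd${i} /mnt/sharedssd/pd${i}
      done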

    Online deployment

    1. Download the deployment file for local-volume-provisioner.

      wget https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/local-pv/local-volume-provisioner.yaml
    2. If you use the same discovery directories as described above, you can skip this step. If you use different discovery directory paths, you need to modify the ConfigMap and DaemonSet spec as follows.

      • Modify the data.storageClassMap field in the ConfigMap spec:
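
        The exact ConfigMap content depends on your deployment file, but the mapping typically pairs a StorageClass name with a discovery directory, as in the following sketch (the class names mirror those used later in this document; verify them against your own file):

          data:
            storageClassMap: |
              ssd-storage:
                hostDir: /mnt/ssd
                mountDir: /mnt/ssd
              shared-ssd-storage:
                hostDir: /mnt/sharedssd
                mountDir: /mnt/sharedssd
              monitoring-storage:
                hostDir: /mnt/monitoring
                mountDir: /mnt/monitoring
              backup-storage:
                hostDir: /mnt/backup
                mountDir: /mnt/backup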

        For more configuration about local-volume-provisioner, refer to Configuration.

      • Modify volumes and volumeMounts fields in the DaemonSet spec to ensure the discovery directory can be mounted to the corresponding directory in the Pod:

        volumeMounts:
          - mountPath: /mnt/ssd
            name: local-ssd
            mountPropagation: "HostToContainer"
          - mountPath: /mnt/sharedssd
            name: local-sharedssd
            mountPropagation: "HostToContainer"
          - mountPath: /mnt/backup
            name: local-backup
            mountPropagation: "HostToContainer"
          - mountPath: /mnt/monitoring
            name: local-monitoring
            mountPropagation: "HostToContainer"
        volumes:
          - name: local-ssd
            hostPath:
              path: /mnt/ssd
          - name: local-sharedssd
            hostPath:
              path: /mnt/sharedssd
          - name: local-backup
            hostPath:
              path: /mnt/backup
          - name: local-monitoring
            hostPath:
              path: /mnt/monitoring
        ......
      Then deploy local-volume-provisioner with the deployment file:

      kubectl apply -f local-volume-provisioner.yaml
    3. Check status of Pod and PV.

      kubectl get po -n kube-system -l app=local-volume-provisioner && \
      kubectl get pv | grep -e ssd-storage -e shared-ssd-storage -e monitoring-storage -e backup-storage

      local-volume-provisioner creates a PV for each mounting point under the discovery directory.

      Note

      If no mount point is in the discovery directory, no PV is created and the output is empty.

    For more information, refer to the local-static-provisioner documentation.

    Offline deployment

    The steps of offline deployment are the same as those of online deployment, except the following:

    • Download the local-volume-provisioner.yaml file on a machine with Internet access, then upload it to the server and install it.

    • local-volume-provisioner is a DaemonSet that starts a Pod on every Kubernetes worker node. The Pod uses the local-volume-provisioner image. If the server does not have access to the Internet, download this Docker image on a machine with Internet access:
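
      The exact image reference depends on the image field in your local-volume-provisioner.yaml; the registry path below is an assumption, so adjust it to match your deployment file:

        # Assumed image reference; check the image field in local-volume-provisioner.yaml
        docker pull quay.io/external_storage/local-volume-provisioner:v2.3.4
        docker save -o local-volume-provisioner-v2.3.4.tar quay.io/external_storage/local-volume-provisioner:v2.3.4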

      Copy the local-volume-provisioner-v2.3.4.tar file to the server, and execute the docker load command to load the image on the server:

      docker load -i local-volume-provisioner-v2.3.4.tar
    • A local PV’s path is its unique identifier. To avoid conflicts, it is recommended to use the UUID of the device to generate a unique path (see the sketch after this list).
    • For I/O isolation, a dedicated physical disk per PV is recommended to ensure hardware-based isolation.
    • For capacity isolation, a partition per PV or a physical disk per PV is recommended.
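
    For the UUID-based path recommendation above, a minimal sketch (the device name /dev/nvme1n1 is a placeholder):

      # Query the filesystem UUID of the placeholder device
      uuid=$(blkid -s UUID -o value /dev/nvme1n1)
      # Mount the disk under the discovery directory using the UUID as the path,
      # so that the local PV path is unique and stable across reboots
      mkdir -p /mnt/ssd/${uuid}
      mount /dev/disk/by-uuid/${uuid} /mnt/ssd/${uuid}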

    For more information on local PV in Kubernetes, refer to Best Practices.

    In general, after a PVC is no longer used and deleted, the PV bound to it is reclaimed and placed in the resource pool for scheduling by the provisioner. To avoid accidental data loss, you can globally configure the reclaim policy of the StorageClass to Retain or only change the reclaim policy of a single PV to Retain. With the Retain policy, a PV is not automatically reclaimed.

    • Configure globally:

      The reclaim policy of a StorageClass is set at creation time and cannot be updated afterwards. If it was not set as needed at creation, you can create another StorageClass with the same provisioner. For example, the default reclaim policy of the StorageClass for persistent disks on Google Kubernetes Engine (GKE) is Delete. You can create another StorageClass named pd-standard with its reclaim policy set to Retain, and change the storageClassName of the corresponding component to pd-standard when creating a TiDB cluster.

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: pd-standard
      parameters:
        type: pd-standard
      provisioner: kubernetes.io/gce-pd
      reclaimPolicy: Retain
      volumeBindingMode: Immediate
    • Configure a single PV:

      kubectl patch pv ${pv_name} -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

    Note

    By default, to ensure data safety, TiDB Operator automatically changes the reclaim policy of the PVs of PD and TiKV to Retain.

    When the reclaim policy of PVs is set to Retain, if you have confirmed that the data of a PV can be deleted, you can delete this PV and the corresponding data by strictly taking the following steps:

    1. Delete the PVC object corresponding to the PV:
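
      For example (${namespace} and ${pvc_name} are placeholders for the actual namespace and PVC name):

      kubectl delete pvc -n ${namespace} ${pvc_name}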

    For more details, refer to the Kubernetes documentation on changing the reclaim policy of a PersistentVolume.