Recommended etcd practices

    Because etcd writes data to disk and persists proposals on disk, its performance depends on disk performance. Although etcd is not particularly I/O intensive, it requires a low latency block device for optimal performance and stability. Because etcd’s consensus protocol depends on persistently storing metadata to a log (WAL), etcd is sensitive to disk-write latency. Slow disks and disk activity from other processes can cause long fsync latencies.

    Those latencies can cause etcd to miss heartbeats, fail to commit new proposals to the disk on time, and ultimately experience request timeouts and temporary leader loss. High write latencies also lead to OpenShift API slowness, which affects cluster performance. For these reasons, avoid colocating other workloads on the control-plane nodes.

    In terms of latency, run etcd on top of a block device that can write at least 50 sequential IOPS of 8000 bytes, that is, with a latency of 10 ms. Keep in mind that etcd uses fdatasync to synchronize each write in the WAL. For heavily loaded clusters, sequential 500 IOPS of 8000 bytes (2 ms) are recommended. To measure those numbers, you can use a benchmarking tool, such as fio.
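    For example, the following fio invocation approximates etcd's WAL write pattern by issuing small sequential writes with an fdatasync after each one. This is a minimal sketch: the target directory, file size, and block size are illustrative assumptions, and the fdatasync latency percentiles in the output are what you compare against the 10 ms target.

      # Small sequential writes with an fdatasync after each write, similar to etcd's WAL.
      # The directory, size, and block size are illustrative assumptions.
      $ fio --rw=write --ioengine=sync --fdatasync=1 --directory=/var/lib/etcd/fio-test --size=22m --bs=2300 --name=etcd-wal-test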

    To achieve such performance, run etcd on machines that are backed by SSD or NVMe disks with low latency and high throughput. Consider single-level cell (SLC) solid-state drives (SSDs), which provide 1 bit per memory cell, are durable and reliable, and are ideal for write-intensive workloads.

    The following hard disk features provide optimal etcd performance:

    • Low latency to support fast read operations.

    • High-bandwidth writes for faster compactions and defragmentation.

    • High-bandwidth reads for faster recovery from failures.

    • Solid-state drives as a minimum selection; however, NVMe drives are preferred.

    • Server-grade hardware from various manufacturers for increased reliability.

    • RAID 0 technology for increased performance.

    • Dedicated etcd drives. Do not place log files or other heavy workloads on etcd drives.

    Avoid NAS or SAN setups and spinning drives. Always benchmark by using utilities such as fio. Continuously monitor the cluster performance as the cluster load increases.

    Some key metrics to monitor on a deployed OKD cluster are the 99th percentile (p99) of etcd disk write-ahead log (WAL) fsync duration and the number of etcd leader changes. Use Prometheus to track these metrics.
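    As a sketch of how to track these metrics, the following PromQL queries can be used; the 5-minute and 1-hour rate windows are illustrative choices, not required values:

      # p99 of the etcd WAL fsync duration, per instance
      histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m]))

      # etcd leader changes observed over the last hour
      increase(etcd_server_leader_changes_seen_total[1h])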

    The etcd member database sizes can vary in a cluster during normal operations. This difference does not affect cluster upgrades, even if the leader size is different from the other members.

    To validate the hardware for etcd before or after you create the OKD cluster, you can use fio.

    Prerequisites

    • Container runtimes such as Podman or Docker are installed on the machine that you’re testing.

    • Data is written to the /var/lib/etcd path.

    Procedure

    • Run fio and analyze the results:

      • If you use Podman, run this command:

        $ sudo podman run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf

      • If you use Docker, run this command:

        $ sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale/etcd-perf

    The output reports whether the disk is fast enough to host etcd by checking whether the 99th percentile of the fsync metric captured from the run is less than 10 ms. A few of the most important etcd metrics that might be affected by I/O performance are as follows:

    • etcd_disk_wal_fsync_duration_seconds_bucket metric reports etcd's WAL fsync duration

    • etcd_disk_backend_commit_duration_seconds_bucket metric reports the etcd backend commit latency duration

    • etcd_server_leader_changes_seen_total metric reports the leader changes

    Because etcd replicates the requests among all the members, its performance strongly depends on network input/output (I/O) latency. High network latencies result in etcd heartbeats taking longer than the election timeout, which results in leader elections that are disruptive to the cluster. A key metric to monitor on a deployed OKD cluster is the 99th percentile of etcd network peer latency on each etcd cluster member. Use Prometheus to track the metric.

    The histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket[2m])) metric reports the round trip time for etcd to finish replicating the client requests between the members. Ensure that it is less than 50 ms.


    The Machine Config Operator (MCO) is responsible for mounting a secondary disk for OKD 4.13 container storage. Use the following steps to move etcd to a different device:

    Prerequisites

    • The MachineConfigPool must match metadata.labels[machineconfiguration.openshift.io/role]. This applies to a controller, worker, or a custom pool.

    • The node's auxiliary storage device, such as /dev/sdb, must match the sdb reference used throughout the machine config file. If your device name differs, change this reference in all places in the file.

    Procedure

    1. Create a machineconfig YAML file named etcd-mc.yml and add the following information:

      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      metadata:
        labels:
          machineconfiguration.openshift.io/role: master
        name: 98-var-lib-etcd
      spec:
        config:
          ignition:
            version: 3.2.0
          systemd:
            units:
            - contents: |
                [Unit]
                Description=Make File System on /dev/sdb
                DefaultDependencies=no
                BindsTo=dev-sdb.device
                After=dev-sdb.device var.mount
                Before=systemd-fsck@dev-sdb.service

                [Service]
                Type=oneshot
                RemainAfterExit=yes
                ExecStart=/usr/lib/systemd/systemd-makefs xfs /dev/sdb
                TimeoutSec=0

                [Install]
                WantedBy=var-lib-containers.mount
              enabled: true
              name: systemd-mkfs@dev-sdb.service
            - contents: |
                [Unit]
                Description=Mount /dev/sdb to /var/lib/etcd
                Before=local-fs.target
                Requires=systemd-mkfs@dev-sdb.service
                After=systemd-mkfs@dev-sdb.service var.mount

                [Mount]
                What=/dev/sdb
                Where=/var/lib/etcd
                Type=xfs
                Options=defaults,prjquota

                [Install]
                WantedBy=local-fs.target
              enabled: true
              name: var-lib-etcd.mount
            - contents: |
                [Unit]
                Description=Sync etcd data if new mount is empty
                DefaultDependencies=no
                After=var-lib-etcd.mount var.mount
                Before=crio.service

                [Service]
                Type=oneshot
                RemainAfterExit=yes
                ExecCondition=/usr/bin/test ! -d /var/lib/etcd/member
                ExecStart=/usr/sbin/setenforce 0
                ExecStart=/bin/rsync -ar /sysroot/ostree/deploy/rhcos/var/lib/etcd/ /var/lib/etcd/
                ExecStart=/usr/sbin/setenforce 1
                TimeoutSec=0

                [Install]
                WantedBy=multi-user.target graphical.target
              enabled: true
              name: sync-var-lib-etcd-to-etcd.service
            - contents: |
                [Unit]
                Description=Restore recursive SELinux security contexts
                DefaultDependencies=no
                After=var-lib-etcd.mount
                Before=crio.service

                [Service]
                Type=oneshot
                RemainAfterExit=yes
                ExecStart=/sbin/restorecon -R /var/lib/etcd/
                TimeoutSec=0

                [Install]
                WantedBy=multi-user.target graphical.target
              enabled: true
              name: restorecon-var-lib-etcd.service
    2. Create the machine configuration by entering the following commands:

      $ oc login -u ${ADMIN} -p ${ADMINPASSWORD} ${API}
      [... output omitted ...]

      $ oc create -f etcd-mc.yml
      machineconfig.machineconfiguration.openshift.io/98-var-lib-etcd created

      The nodes are updated and rebooted. After the reboot completes, the following events occur:

      • An XFS file system is created on the specified disk.

      • The disk mounts to /var/lib/etcd.

      • The content from /sysroot/ostree/deploy/rhcos/var/lib/etcd syncs to /var/lib/etcd.

      • A restore of SELinux labels is forced for /var/lib/etcd.

      • The old content is not removed.
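      To confirm that the new mount is in place on a node, you can open a debug shell, as in the following sketch; <node_name> is a placeholder for one of your control-plane nodes:

        # Verify that /var/lib/etcd is now a separate XFS mount; <node_name> is a placeholder
        $ oc debug node/<node_name> -- chroot /host findmnt /var/lib/etcd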

    3. After the nodes are on a separate disk, update the machine configuration file, etcd-mc.yml, with the following information:

      apiVersion: machineconfiguration.openshift.io/v1
      kind: MachineConfig
      metadata:
        labels:
          machineconfiguration.openshift.io/role: master
        name: 98-var-lib-etcd
      spec:
        config:
          ignition:
            version: 3.2.0
          systemd:
            units:
            - contents: |
                [Unit]
                Description=Mount /dev/sdb to /var/lib/etcd
                Before=local-fs.target
                Requires=systemd-mkfs@dev-sdb.service
                After=systemd-mkfs@dev-sdb.service var.mount

                [Mount]
                What=/dev/sdb
                Where=/var/lib/etcd
                Type=xfs
                Options=defaults,prjquota

                [Install]
                WantedBy=local-fs.target
              enabled: true
              name: var-lib-etcd.mount
    4. Apply the modified version that removes the logic for creating and syncing the device by entering the following command:

      $ oc replace -f etcd-mc.yml

      This modified version prevents the nodes from rebooting.
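      To watch the master machine config pool while the change rolls out, you can use the following standard command; this verification step is a sketch and is not part of the original procedure:

        # Watch the master pool until UPDATED becomes True
        $ oc get machineconfigpool master -w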


    For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes.

    Monitor these key metrics:

    • etcd_server_quota_backend_bytes, which is the current quota limit

    • etcd_mvcc_db_total_size_in_use_in_bytes, which indicates the actual database usage after a history compaction

    • etcd_mvcc_db_total_size_in_bytes, which shows the database size, including free space waiting for defragmentation
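    As a sketch, the following PromQL expression combines the first and third metrics to report the share of the backend quota that each member currently consumes; the percentage form is an illustrative convenience, not a metric defined by etcd:

      # Percentage of the etcd backend quota currently consumed, per member
      (etcd_mvcc_db_total_size_in_bytes / etcd_server_quota_backend_bytes) * 100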

    Defragment etcd data to reclaim disk space after events that cause disk fragmentation, such as etcd history compaction.

    History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system.

    Defragmentation occurs automatically, but you can also trigger it manually.

    Automatic defragmentation is good for most cases, because the etcd Operator uses cluster information to determine the most efficient operation for the user. The etcd Operator defragments disks automatically, so no manual intervention is needed.

    Verify that the defragmentation process is successful by viewing one of these logs:

    • etcd logs

    • cluster-etcd-operator pod

    • operator status error log

    Example log output for successful defragmentation

      etcd member has been defragmented: <member_name>, memberID: <member_id>

    Example log output for unsuccessful defragmentation

      failed defrag on member: <member_name>, memberID: <member_id>: <error_message>

    Manual defragmentation

    A Prometheus alert indicates when you need to use manual defragmentation. The alert is displayed in two cases:

    • When etcd uses more than 50% of its available space for more than 10 minutes

    • When etcd is actively using less than 50% of its total database size for more than 10 minutes

    You can also determine whether defragmentation is needed by checking the etcd database size in MB that will be freed by defragmentation with the following PromQL expression:

      (etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024

    Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover.

    Follow this procedure to defragment etcd data on each etcd member.

    Prerequisites

    • You have access to the cluster as a user with the cluster-admin role.

    Procedure

    1. Determine which etcd member is the leader, because the leader should be defragmented last.

      1. Get the list of etcd pods:

        $ oc -n openshift-etcd get pods -l k8s-app=etcd -o wide

        Example output

        etcd-ip-10-0-159-225.example.redhat.com   3/3   Running   0   175m   10.0.159.225   ip-10-0-159-225.example.redhat.com   <none>   <none>
        etcd-ip-10-0-191-37.example.redhat.com    3/3   Running   0   173m   10.0.191.37    ip-10-0-191-37.example.redhat.com    <none>   <none>
        etcd-ip-10-0-199-170.example.redhat.com   3/3   Running   0   176m   10.0.199.170   ip-10-0-199-170.example.redhat.com   <none>   <none>
      2. Choose a pod and run the following command to determine which etcd member is the leader:

        $ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com etcdctl endpoint status --cluster -w table

        Example output

        Defaulting container name to etcdctl.
        Use 'oc describe pod/etcd-ip-10-0-159-225.example.redhat.com -n openshift-etcd' to see all of the containers in this pod.
        +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
        |         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
        +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
        |  https://10.0.191.37:2379 | 251cd44483d811c3 |   3.4.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
        | https://10.0.159.225:2379 | 264c7c58ecbdabee |   3.4.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
        | https://10.0.199.170:2379 | 9ac311f93915cc79 |   3.4.9 |  104 MB |      true |      false |         7 |      91624 |              91624 |        |
        +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

        Based on the IS LEADER column of this output, the https://10.0.199.170:2379 endpoint is the leader. Matching this endpoint with the output of the previous step, the pod name of the leader is etcd-ip-10-0-199-170.example.redhat.com.

    2. Defragment an etcd member.

      1. Connect to the running etcd container, passing in the name of a pod that is not the leader:

        $ oc rsh -n openshift-etcd etcd-ip-10-0-159-225.example.redhat.com
      2. Unset the ETCDCTL_ENDPOINTS environment variable:

        sh-4.4# unset ETCDCTL_ENDPOINTS

      3. Defragment the etcd member:

        sh-4.4# etcdctl --command-timeout=30s --endpoints=https://localhost:2379 defrag

        Example output

        Finished defragmenting etcd member[https://localhost:2379]

        If a timeout error occurs, increase the value for --command-timeout until the command succeeds.

      4. Verify that the database size was reduced:

        sh-4.4# etcdctl endpoint status -w table --cluster

        Example output

        +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
        |         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
        +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
        |  https://10.0.191.37:2379 | 251cd44483d811c3 |   3.4.9 |  104 MB |     false |      false |         7 |      91624 |              91624 |        |
        | https://10.0.159.225:2379 | 264c7c58ecbdabee |   3.4.9 |   41 MB |     false |      false |         7 |      91624 |              91624 |        |
        | https://10.0.199.170:2379 | 9ac311f93915cc79 |   3.4.9 |  104 MB |      true |      false |         7 |      91624 |              91624 |        |
        +---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

        This example shows that the database size for this etcd member is now 41 MB as opposed to the starting size of 104 MB.

      5. Repeat these steps to connect to each of the other etcd members and defragment them. Always defragment the leader last.

        Wait at least one minute between defragmentation actions to allow the etcd pod to recover. Until the etcd pod recovers, the etcd member will not respond.

    3. If any NOSPACE alarms were triggered due to the space quota being exceeded, clear them.

      1. Check if there are any NOSPACE alarms:

        sh-4.4# etcdctl alarm list

        Example output

        memberID:12345678912345678912 alarm:NOSPACE
      2. Clear the alarms:

        sh-4.4# etcdctl alarm disarm

    Next steps

    After defragmentation, if etcd still uses more than 50% of its available space, consider increasing the disk quota for etcd.