Using Ceph RBD for dynamic provisioning

    Persistent Storage Using Ceph Rados Block Device provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and how to use Ceph Rados Block Device (RBD) as persistent storage.

    1. Install the latest ceph-common package:

      The ceph-common library must be installed on all schedulable OKD nodes.
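      For example, on RPM-based nodes:

      # yum install -y ceph-common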

    2. From an administrator or MON node, create a new pool for dynamic volumes, for example:

      $ ceph osd pool create kube 1024
      $ ceph auth get-or-create client.kube mon 'allow r, allow command "osd blacklist"' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring

    To use an existing Ceph cluster for dynamic persistent storage:

      $ ceph auth get client.admin
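      The output resembles the following; the caps shown will vary by cluster. The key value is what gets base64 encoded into the secret below (this example key matches the encoded value in the secret definition):

      [client.admin]
          key = AQA8QvJVayBPERAAh/Kg0OYZAHOBz7jFpzLqtg==
          caps mds = "allow *"
          caps mon = "allow *"
          caps osd = "allow *"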

      Ceph secret definition example

      apiVersion: v1
      kind: Secret
      metadata:
        name: ceph-secret
        namespace: kube-system
      data:
        key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ== (1)
      type: kubernetes.io/rbd (2)
      (1) This base64 key is generated on one of the Ceph MON nodes using the ceph auth get-key client.admin | base64 command; copy the output and paste it as the secret key's value.
      (2) This value is required for Ceph RBD to work with dynamic provisioning.
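      For example, on a MON node (the output is the base64 encoding of the example key above):

      $ ceph auth get-key client.admin | base64
      QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ==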
    1. Create the Ceph secret for the client.admin:

      $ oc create -f ceph-secret.yaml
      secret "ceph-secret" created
    2. Verify that the secret was created:
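      For example (the AGE value will vary):

      $ oc get secret ceph-secret
      NAME          TYPE                DATA      AGE
      ceph-secret   kubernetes.io/rbd   1         5d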

    3. Create the storage class:

      $ oc create -f ceph-storageclass.yaml
      storageclass "dynamic" created

      Ceph storage class example

      apiVersion: storage.k8s.io/v1beta1
      kind: StorageClass
      metadata:
        name: dynamic
        annotations:
          storageclass.kubernetes.io/is-default-class: "true"
      provisioner: kubernetes.io/rbd
      parameters:
        monitors: 192.168.1.11:6789,192.168.1.12:6789,192.168.1.13:6789 (1)
        adminId: admin (2)
        adminSecretName: ceph-secret (3)
        adminSecretNamespace: kube-system (4)
        pool: kube (5)
        userId: kube (6)
        userSecretName: ceph-user-secret (7)
      (1) Comma-delimited list of Ceph monitors. Required.
      (2) Ceph client ID that is capable of creating images in the pool. Defaults to admin.
      (3) Name of the secret for adminId, created above in the kube-system namespace. Required.
      (4) Namespace of the admin secret. Defaults to default.
      (5) Ceph RBD pool, created earlier in this procedure. Defaults to rbd.
      (6) Ceph client ID used to map the RBD image. Defaults to the same value as adminId.
      (7) Name of the Ceph secret for userId. It must exist in the same project (namespace) as the PVCs that use this storage class.
    4. Verify that the storage class was created:

      $ oc get storageclasses
      NAME                TYPE
      dynamic (default)   kubernetes.io/rbd
    5. Create the PVC object definition:

      PVC object definition example

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: ceph-claim-dynamic
      spec:
        accessModes: (1)
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi (2)
      (1) The accessModes do not enforce access rights, but instead act as labels to match a PV to a PVC.
      (2) This claim looks for PVs that offer 2Gi or greater capacity.
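      Create the claim from this definition. The file name ceph-claim.yaml is an assumption for this example:

      $ oc create -f ceph-claim.yaml
      persistentvolumeclaim "ceph-claim-dynamic" created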
    6. Verify that the PVC was created and bound to the expected PV:

      $ oc get pvc
      NAME                 STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
      ceph-claim-dynamic   Bound     pvc-f548d663-3cac-11e7-9937-0024e8650c7a   2Gi        RWO           1m
    7. Create the pod object definition:

      Pod object definition example

      apiVersion: v1
      kind: Pod
      metadata:
        name: ceph-pod1 (1)
      spec:
        containers:
        - name: ceph-busybox
          image: busybox (2)
          command: ["sleep", "60000"]
          volumeMounts:
          - name: ceph-vol1 (3)
            mountPath: /usr/share/busybox (4)
            readOnly: false
        volumes:
        - name: ceph-vol1
          persistentVolumeClaim:
            claimName: ceph-claim-dynamic (5)
      (1) The name of this pod as displayed by oc get pod.
      (2) The image run by this pod; here, busybox is set to sleep.
      (3) The name of the volume. It must be the same in both the containers and volumes sections.
      (4) The mount path as seen in the container.
      (5) The PVC that is bound to the Ceph RBD cluster.
    8. Create the pod:

      $ oc create -f ceph-pod1.yaml
      pod "ceph-pod1" created
    9. Verify that the pod was created:

      $ oc get pod
      NAME        READY     STATUS    RESTARTS   AGE
      ceph-pod1   1/1       Running   0          2m

    After a minute or so, the pod status changes to Running.

    To make persistent storage available to every project, you must modify the default project template. Adding the Ceph user secret to the default project template gives every user with access to create a project access to the Ceph cluster. See Modifying the Default Project Template for more information.

    Default project example
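    The following is a minimal sketch of a project request template that injects a Ceph user secret into every new project. It assumes the default bootstrap project template layout and that the secret name matches the userSecretName value (ceph-user-secret) in the storage class; the base64 user key can be generated with ceph auth get-key client.kube | base64:

      apiVersion: v1
      kind: Template
      metadata:
        name: project-request
      objects:
      - apiVersion: v1
        kind: Project
        metadata:
          annotations:
            openshift.io/description: ${PROJECT_DESCRIPTION}
            openshift.io/display-name: ${PROJECT_DISPLAYNAME}
            openshift.io/requester: ${PROJECT_REQUESTING_USER}
          name: ${PROJECT_NAME}
      - apiVersion: v1
        kind: Secret
        metadata:
          name: ceph-user-secret
        data:
          key: <base64-encoded Ceph user key> (1)
        type: kubernetes.io/rbd
      parameters:
      - name: PROJECT_NAME
      - name: PROJECT_DISPLAYNAME
      - name: PROJECT_DESCRIPTION
      - name: PROJECT_REQUESTING_USER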

    (1) Place your Ceph user key here in base64 format.