Complete Example Using Ceph RBD


    This topic provides an end-to-end example of using an existing Ceph cluster as an OKD persistent store. It is assumed that a working Ceph cluster is already set up. If not, consult the Overview of Red Hat Ceph Storage.

    The Persistent Storage Using Ceph RBD topic provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and using Ceph RBD as persistent storage.

    Installing the ceph-common Package

    The ceph-common library must be installed on all schedulable OKD nodes:
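    For example, on each schedulable node (assuming a yum-based host with the Ceph repositories available):

    # yum install -y ceph-common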

    The OKD all-in-one host is not often used to run pod workloads and, thus, is not included as a schedulable node.

    Creating the Ceph Secret

    The ceph auth get-key command is run on a Ceph MON node to display the key value for the client.admin user.
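    For example, pipe the key through base64 on a MON node so the output can be pasted into the secret definition that follows:

    $ ceph auth get-key client.admin | base64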

    Example 1. Ceph Secret Object Definition

    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-secret
    data:
      key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ== (1)

    1 This base64 key is generated on one of the Ceph MON nodes using the ceph auth get-key client.admin | base64 command, then copying the output and pasting it as the secret key’s value.

    Save the secret definition to a file, for example ceph-secret.yaml, then create the secret:

    $ oc create -f ceph-secret.yaml
    secret "ceph-secret" created

    Verify that the secret was created:

    # oc get secret ceph-secret
    NAME          TYPE      DATA      AGE
    ceph-secret   Opaque    1         23d

    Creating the Persistent Volume

    Next, before creating the PV object in OKD, define the persistent volume file:

    Example 2. Persistent Volume Object Definition Using Ceph RBD
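    A minimal sketch of such a PV definition is shown below. The Ceph monitor address, pool, and image names (192.168.122.133:6789, rbd, and ceph-image) are placeholders for illustration; substitute values from your own cluster. The capacity and access mode match the oc get pv output shown later, and secretRef points to the ceph-secret created above:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: ceph-pv
    spec:
      capacity:
        storage: 2Gi
      accessModes:
        - ReadWriteOnce
      rbd:
        monitors:
          - 192.168.122.133:6789
        pool: rbd
        image: ceph-image
        user: admin
        secretRef:
          name: ceph-secret
        fsType: ext4
        readOnly: false
      persistentVolumeReclaimPolicy: Retain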

    Save the PV definition to a file, for example ceph-pv.yaml, and create the persistent volume:

    # oc create -f ceph-pv.yaml
    persistentvolume "ceph-pv" created

    Verify that the persistent volume was created:

    # oc get pv
    NAME      LABELS    CAPACITY     ACCESSMODES   STATUS      CLAIM     REASON    AGE
    ceph-pv   <none>    2147483648   RWO           Available                       2s

    Creating the Persistent Volume Claim

    A persistent volume claim (PVC) specifies the desired access mode and storage capacity. Currently, based on only these two attributes, a PVC is bound to a single PV. Once a PV is bound to a PVC, that PV is essentially tied to the PVC’s project and cannot be bound by another PVC. There is a one-to-one mapping of PVs and PVCs. However, multiple pods in the same project can use the same PVC.

    Example 3. PVC Object Definition

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: ceph-claim
    spec:
      accessModes: (1)
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi (2)

    1 As mentioned above for PVs, the accessModes do not enforce access rights, but rather act as labels to match a PV to a PVC.
    2 This claim will look for PVs offering 2Gi or greater capacity.

    Save the PVC definition to a file, for example ceph-claim.yaml, and create the PVC:
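    For example (the oc get pvc output below is illustrative; the reported capacity and age will vary):

    # oc create -f ceph-claim.yaml
    persistentvolumeclaim "ceph-claim" created

    # oc get pvc
    NAME         LABELS    STATUS    VOLUME        CAPACITY   ACCESSMODES   AGE
    ceph-claim   <none>    Bound     ceph-pv (1)   2Gi        RWO           21s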

    1 The claim was bound to the ceph-pv PV.

    Creating the Pod

    A pod definition file or a template file can be used to define a pod. Below is a pod specification that creates a single container and mounts the Ceph RBD volume for read-write access:

    Example 4. Pod Object Definition

    apiVersion: v1
    kind: Pod
    metadata:
      name: ceph-pod1 (1)
    spec:
      containers:
      - name: ceph-busybox
        image: busybox (2)
        command: ["sleep", "60000"]
        volumeMounts:
        - name: ceph-vol1 (3)
          mountPath: /usr/share/busybox (4)
          readOnly: false
      volumes:
      - name: ceph-vol1 (3)
        persistentVolumeClaim:
          claimName: ceph-claim (5)

    1 The name of this pod as displayed by oc get pod.
    2 The image run by this pod.
    3 The name of the volume. This name must be the same in both the containers and volumes sections.
    4 The path where the Ceph RBD volume is mounted in the container.
    5 The PVC that is bound to the Ceph RBD PV.

    Save the pod definition to a file, for example ceph-pod1.yaml, and create the pod:

    # oc create -f ceph-pod1.yaml
    pod "ceph-pod1" created

    # verify pod was created
    # oc get pod
    NAME        READY     STATUS    RESTARTS   AGE
    ceph-pod1   1/1       Running   0          2m
                          (1)

    1 After a minute or so, the pod will be in the Running state.
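    Optionally, you can confirm that the RBD volume is mounted at the expected path inside the container created above; for example:

    # oc exec ceph-pod1 -- df -h /usr/share/busybox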

    Defining Group and Owner IDs (Optional)

    When using block storage, such as Ceph RBD, the physical block storage is managed by the pod. The group ID defined in the pod becomes the group ID of both the Ceph RBD mount inside the container, and the group ID of the actual storage itself. Thus, it is usually unnecessary to define a group ID in the pod specification. However, if a group ID is desired, it can be defined using fsGroup, as shown in the following pod definition fragment:

    Example 5. Group ID Pod Definition

    ...
    spec:
      containers:
        - name:
        ...
      securityContext: (1)
        fsGroup: 7777 (2)
    ...

    1 The securityContext must be defined at the pod level, not under a specific container.
    2 All containers in the pod will have the same fsGroup ID.

    Setting ceph-user-secret as Default for Projects

    If you would like to make the persistent storage available to every project, you must modify the default project template. Read more on modifying the default project template. Adding this to your default project template gives every user who has permission to create a project access to the Ceph cluster.
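    The following fragment is a sketch of the idea: add a Secret object named ceph-user-secret to the objects list of your modified project request template so it is created in every new project. The key value is a placeholder; generate it on a MON node with ceph auth get-key for the Ceph user you intend projects to use, piped through base64:

    ...
    objects:
    - apiVersion: v1
      kind: Secret
      metadata:
        name: ceph-user-secret
      data:
        key: <base64-encoded Ceph user key>
      type: kubernetes.io/rbd
    ...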