Using Ceph RBD for dynamic provisioning
Persistent Storage Using Ceph Rados Block Device provides an explanation of persistent volumes (PVs), persistent volume claims (PVCs), and how to use Ceph Rados Block Device (RBD) as persistent storage.
To use an existing Ceph cluster for dynamic persistent storage:

Install the latest ceph-common package. The ceph-common library must be installed on all schedulable OKD nodes.
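For example, on an RPM-based node (assuming the Ceph repositories are already configured on the host):

# yum install -y ceph-common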
From an administrator or MON node, create a new pool for dynamic volumes, for example:
$ ceph osd pool create kube 1024
$ ceph auth get-or-create client.kube mon 'allow r, allow command "osd blacklist"' osd 'allow class-read object_prefix rbd_children, allow rwx pool=kube' -o ceph.client.kube.keyring
Retrieve the Ceph client.admin key, which is used in the secret definition below:
$ ceph auth get client.admin
Ceph secret definition example
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: kube-system
data:
  key: QVFBOFF2SlZheUJQRVJBQWgvS2cwT1laQUhPQno3akZwekxxdGc9PQ== (1)
type: kubernetes.io/rbd (2)
1 This base64 key is generated on one of the Ceph MON nodes using the ceph auth get-key client.admin | base64 command, then copying the output and pasting it as the secret key's value.
2 This value is required for Ceph RBD to work with dynamic provisioning.

Create the Ceph secret for the client.admin user:
$ oc create -f ceph-secret.yaml
secret "ceph-secret" created
Verify that the secret was created:
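For example (the DATA and AGE columns will vary with your cluster):

$ oc get secret ceph-secret
NAME          TYPE                DATA      AGE
ceph-secret   kubernetes.io/rbd   1         5d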
Create the storage class:
Ceph storage class example
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: dynamic
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.1.11:6789,192.168.1.12:6789,192.168.1.13:6789 (1)
  adminId: admin (2)
  adminSecretName: ceph-secret (3)
  adminSecretNamespace: kube-system (4)
  pool: kube (5)
  userId: kube (6)
  userSecretName: ceph-user-secret (7)
1 The Ceph monitors, comma-delimited. This value is required.
2 The Ceph client ID that is capable of creating images in the pool. The default is admin.
3 The secret name for adminId. This value is required. The secret must be of type kubernetes.io/rbd.
4 The namespace for adminSecretName. The default is default.
5 The Ceph RBD pool. The default is rbd, but using kube is recommended here.
6 The Ceph client ID used to map the RBD image. The default is the same as adminId.
7 The name of the Ceph secret for userId to map the RBD image. It must exist in the same namespace as the PVCs that use it.

$ oc create -f ceph-storageclass.yaml
storageclass "dynamic" created
Verify that the storage class was created:
$ oc get storageclasses
NAME                TYPE
dynamic (default)   kubernetes.io/rbd
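Dynamic provisioning with this storage class also requires the ceph-user-secret referenced by userSecretName to exist in each project that creates PVCs. A minimal sketch, assuming the client.kube user created earlier; the key value is illustrative, and you can generate yours with ceph auth get-key client.kube | base64:

apiVersion: v1
kind: Secret
metadata:
  name: ceph-user-secret
  namespace: default
data:
  key: QVFCbEV4OVpmaGJtQ0JBQW55d2Z0NHZtcWZ2UXNwNG9GNHVGdHc9PQ==
type: kubernetes.io/rbd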
Create the PVC object definition:
PVC object definition example
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim-dynamic
spec:
  accessModes: (1)
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi (2)
1 The accessModes do not enforce access rights, but instead act as labels to match a PV to a PVC.
2 This claim looks for PVs that offer 2Gi or greater capacity.
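Create the claim; the file name ceph-pvc.yaml is an assumption for the definition above:

$ oc create -f ceph-pvc.yaml
persistentvolumeclaim "ceph-claim-dynamic" created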
Verify that the PVC was created and bound to the expected PV:
$ oc get pvc
NAME                 STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
ceph-claim-dynamic   Bound     pvc-f548d663-3cac-11e7-9937-0024e8650c7a   2Gi        RWO           1m
Create the pod object definition:
Pod object definition example
apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1 (1)
spec:
  containers:
  - name: ceph-busybox
    image: busybox (2)
    command: ["sleep", "60000"]
    volumeMounts:
    - name: ceph-vol1 (3)
      mountPath: /usr/share/busybox (4)
      readOnly: false
  volumes:
  - name: ceph-vol1
    persistentVolumeClaim:
      claimName: ceph-claim-dynamic (5)
1 The name of this pod as displayed by oc get pod.
2 The image run by this pod. In this case, busybox is set to sleep.
3 The name of the volume. This name must be the same in both the containers and volumes sections.
4 The mount path in the container.
5 The PVC that is bound to the Ceph RBD cluster.
Create the pod:
$ oc create -f ceph-pod1.yaml
pod "ceph-pod1" created
Verify that the pod was created:
$ oc get pod
NAME        READY     STATUS    RESTARTS   AGE
ceph-pod1   1/1       Running   0          2m

After a minute or so, the pod status changes to Running.
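To confirm that the RBD volume is mounted and writable, you can write a file through the pod. The mount path matches the pod definition above; the file name test is only an example:

$ oc exec ceph-pod1 -- touch /usr/share/busybox/test
$ oc exec ceph-pod1 -- ls /usr/share/busybox
test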
To make persistent storage available to every project, you must modify the default project template. Adding the Ceph user secret to the default project template gives every user who has permission to create a project access to the Ceph cluster. See the documentation on modifying the default project template for more information.
Default project example
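A minimal sketch of a project-request template that includes the Ceph user secret. Everything except the ceph-user-secret Secret object follows the stock project-request template, and the key value is illustrative:

apiVersion: v1
kind: Template
metadata:
  name: project-request
objects:
- apiVersion: v1
  kind: Project
  metadata:
    annotations:
      openshift.io/description: ${PROJECT_DESCRIPTION}
      openshift.io/display-name: ${PROJECT_DISPLAYNAME}
      openshift.io/requester: ${PROJECT_REQUESTING_USER}
    name: ${PROJECT_NAME}
- apiVersion: v1
  kind: Secret
  metadata:
    name: ceph-user-secret
  data:
    key: QVFCbEV4OVpmaGJtQ0JBQW55d2Z0NHZtcWZ2UXNwNG9GNHVGdHc9PQ== (1)
  type: kubernetes.io/rbd
parameters:
- name: PROJECT_NAME
- name: PROJECT_DISPLAYNAME
- name: PROJECT_DESCRIPTION
- name: PROJECT_REQUESTING_USER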
1 Place your Ceph user key here, in base64 format.