Using rbd-provisioner to provide RBD persistent storage

    Some users deploy their clusters with kubeadm, or run kube-controller-manager as a container. In this setup, Kubernetes has no trouble creating Ceph RBD PVs/PVCs statically, but dynamic provisioning fails with the error: "rbd: create volume failed, err: failed to create rbd image: executable file not found in $PATH:"

    The root cause is that the kube-controller-manager image published on gcr.io does not bundle the ceph-common package, so the rbd binary is missing and the controller cannot create an rbd image for the pod. According to the related discussions on GitHub, upstream Kubernetes addresses this class of problem with External Provisioners in the kubernetes-incubator/external-storage project.

    This article targets that problem, using rbd-provisioner to make Ceph RBD dynamic provisioning work.

    • Adjust the namespace of rbd-provisioner to suit your environment; see the deployment sketch below.
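
    The deployment manifests live in the external-storage repository. A minimal sketch of rolling them out, assuming the ceph/rbd/deploy/rbac layout of that repository (paths may differ across revisions):

    # Fetch the manifests shipped with the external-storage project
    git clone https://github.com/kubernetes-incubator/external-storage.git
    cd external-storage/ceph/rbd/deploy
    # Point the RBAC bindings at the namespace you chose, then apply everything
    NAMESPACE=kube-system
    sed -r -i "s/namespace: [^ ]+/namespace: $NAMESPACE/g" ./rbac/clusterrolebinding.yaml ./rbac/rolebinding.yaml
    kubectl -n $NAMESPACE apply -f ./rbac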

    After deploying, inspect the rbd-provisioner deployment and make sure it is up and running:

    [root@k8s01 ~]# kubectl describe deployments.apps -n kube-system rbd-provisioner
    Name:               rbd-provisioner
    Namespace:          kube-system
    CreationTimestamp:  Sat, 13 Oct 2018 20:08:45 +0800
    Labels:             app=rbd-provisioner
    Annotations:        deployment.kubernetes.io/revision: 1
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"rbd-provisioner","namespace":"kube-system"},"s...
    Selector:           app=rbd-provisioner
    Replicas:           1 desired | 1 updated | 1 total | 1 available | 0 unavailable
    StrategyType:       Recreate
    MinReadySeconds:    0
    Pod Template:
      Labels:           app=rbd-provisioner
      Service Account:  rbd-provisioner
      Containers:
       rbd-provisioner:
        Image:      quay.io/external_storage/rbd-provisioner:latest
        Port:       <none>
        Host Port:  <none>
        Environment:
          PROVISIONER_NAME:  ceph.com/rbd
        Mounts:    <none>
      Volumes:     <none>
    Conditions:
      Type           Status  Reason
      ----           ------  ------
      Available      True    MinimumReplicasAvailable
    OldReplicaSets:  <none>
    NewReplicaSet:   rbd-provisioner-db574c5c (1/1 replicas created)
    Events:          <none>

    With rbd-provisioner deployed, we still need a StorageClass. Before creating the SC, create the secrets for the Ceph users it will reference:

    [root@k8s01 ~]# vi secrets.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-admin-secret
      namespace: kube-system
    type: "kubernetes.io/rbd"
    data:
      # ceph auth get-key client.admin | base64
      key: QVFCdng4QmJKQkFsSFJBQWl1c1o0TGdOV250NlpKQ1BSMHFCa1E9PQ==
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-secret
      namespace: kube-system
    type: "kubernetes.io/rbd"
    data:
      # ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube'
      # ceph auth get-key client.kube | base64
      key: QVFCTHdNRmJueFZ4TUJBQTZjd1MybEJ2Q0JUcmZhRk4yL2tJQVE9PQ==
    [root@k8s01 ~]# kubectl create -f secrets.yaml
    [root@k8s01 ~]# vi secrets-default.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: ceph-secret
    type: "kubernetes.io/rbd"
    data:
      # ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=kube'
      # ceph auth get-key client.kube | base64
      key: QVFCTHdNRmJueFZ4TUJBQTZjd1MybEJ2Q0JUcmZhRk4yL2tJQVE9PQ==
    [root@k8s01 ~]# kubectl create -f secrets-default.yaml -n default
    • The secrets store the keys of the client.admin and client.kube users. Both can live in the kube-system namespace, but any other namespace that wants to use Ceph RBD dynamic provisioning needs its own secret holding the client.kube key;
    • Everything else matches an ordinary Ceph RBD StorageClass, except that provisioner must be set to ceph.com/rbd rather than the default kubernetes.io/rbd, so that rbd requests are handled by rbd-provisioner; see the sketch after this list;
    • For compatibility, disable as many rbd image features as possible, and keep the ceph-common version on the kubelet nodes as close as possible to the Ceph server version; my environment runs the L (Luminous) release on both sides.
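
    The StorageClass itself is not shown above; as a reference, here is a minimal sketch. The monitor address is a placeholder, and the pool, user, and secret names follow the secrets created earlier; adjust them to your cluster:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ceph-rbd
    provisioner: ceph.com/rbd            # handled by rbd-provisioner, not the in-tree driver
    parameters:
      monitors: 192.168.1.10:6789        # placeholder, use your Ceph monitor addresses
      adminId: admin
      adminSecretName: ceph-admin-secret
      adminSecretNamespace: kube-system
      pool: kube
      userId: kube
      userSecretName: ceph-secret
      fsType: ext4
      imageFormat: "2"
      imageFeatures: layering            # keep features minimal for kernel-client compatibility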

    Create a pod in both the kube-system and default namespaces: start a busybox instance and mount the Ceph RBD image at /usr/share/busybox. Note that the PVC below carries no storageClassName, so this assumes ceph-rbd has been marked as the default StorageClass; otherwise, add storageClassName: ceph-rbd to the claim spec.

    [root@k8s01 ~]# vi test-pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: ceph-pod1
    spec:
      containers:
      - name: ceph-busybox
        image: busybox
        command: ["sleep", "60000"]
        volumeMounts:
        - name: ceph-vol1
          mountPath: /usr/share/busybox
          readOnly: false
      volumes:
      - name: ceph-vol1
        persistentVolumeClaim:
          claimName: ceph-claim
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: ceph-claim
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
    [root@k8s01 ~]# kubectl create -f test-pod.yaml -n kube-system
    pod/ceph-pod1 created
    persistentvolumeclaim/ceph-claim created
    [root@k8s01 ~]# kubectl create -f test-pod.yaml -n default
    pod/ceph-pod1 created
    persistentvolumeclaim/ceph-claim created
    [root@k8s01 ~]# kubectl get pvc
    NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    ceph-claim   Bound    pvc-ee0f1c35-cef7-11e8-8484-005056a33f16   2Gi        RWO            ceph-rbd       25s
    [root@k8s01 ~]# kubectl get pvc -n kube-system
    NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    ceph-claim   Bound    pvc-ea377cad-cef7-11e8-8484-005056a33f16   2Gi        RWO            ceph-rbd       36s
    [root@k8s01 ~]# kubectl get pv
    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
    pvc-ea377cad-cef7-11e8-8484-005056a33f16   2Gi        RWO            Delete           Bound    kube-system/ceph-claim   ceph-rbd                40s
    pvc-ee0f1c35-cef7-11e8-8484-005056a33f16   2Gi        RWO            Delete           Bound    default/ceph-claim       ceph-rbd                32s

    On the Ceph server, verify that the rbd images were created and inspect their details.
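
    For example, assuming the images land in the kube pool configured in the StorageClass (the Ceph host prompt and image name are placeholders):

    [root@ceph01 ~]# rbd ls -p kube
    [root@ceph01 ~]# rbd info kube/<image-name>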

    Inside busybox, check the filesystem mounts and usage to confirm everything works:

    [root@k8s01 ~]# kubectl exec -it ceph-pod1 mount |grep rbd
    /dev/rbd0 on /usr/share/busybox type ext4 (rw,seclabel,relatime,stripe=1024,data=ordered)
    [root@k8s01 ~]# kubectl exec -it -n kube-system ceph-pod1 mount |grep rbd
    /dev/rbd0 on /usr/share/busybox type ext4 (rw,seclabel,relatime,stripe=1024,data=ordered)
    [root@k8s01 ~]# kubectl exec -it -n kube-system ceph-pod1 df |grep rbd
    /dev/rbd0              1998672      6144   1976144   0% /usr/share/busybox
    [root@k8s01 ~]# kubectl exec -it ceph-pod1 df |grep rbd
    /dev/rbd0              1998672      6144   1976144   0% /usr/share/busybox

    Finally, test that deleting the pod and its PVC automatically removes the PV. The PVs above were created with the Delete reclaim policy, so the backing rbd images are removed as well; be careful with this in production and choose a reclaim policy that fits your needs.

    [root@k8s01 ~]# kubectl delete -f test-pod.yaml
    pod "ceph-pod1" deleted
    persistentvolumeclaim "ceph-claim" deleted
    [root@k8s01 ~]# kubectl delete -f test-pod.yaml -n kube-system
    pod "ceph-pod1" deleted
    persistentvolumeclaim "ceph-claim" deleted
    [root@k8s01 ~]# kubectl get pv
    No resources found.
    [root@k8s01 ~]# kubectl get pvc
    No resources found.
    [root@k8s01 ~]# kubectl get pvc -n kube-system
    No resources found.
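
    If a volume needs to survive deletion of its claim, the reclaim policy of a bound PV can be flipped to Retain before the claim is removed, for example (using one of the PV names listed earlier):

    [root@k8s01 ~]# kubectl patch pv pvc-ee0f1c35-cef7-11e8-8484-005056a33f16 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'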

    In most cases, we do not need rbd-provisioner at all to get Ceph RBD dynamic provisioning. In my tests on OpenShift, Rancher, SUSE CaaS, and the binary deployment described in this Handbook, dynamic provisioning works out of the box with a StorageClass using the in-tree kubernetes.io/rbd provisioner, as long as the ceph-common package is installed.
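
    In that setup the StorageClass looks like the earlier sketch with only the provisioner swapped; the monitor address remains a placeholder:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ceph-rbd
    provisioner: kubernetes.io/rbd        # in-tree driver; works when ceph-common is installed on the hosts
    parameters:
      monitors: 192.168.1.10:6789         # placeholder, use your Ceph monitor addresses
      adminId: admin
      adminSecretName: ceph-admin-secret
      adminSecretNamespace: kube-system
      pool: kube
      userId: kube
      userSecretName: ceph-secret
      fsType: ext4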