NFS

    • Ceph file system (preferably latest stable luminous or higher versions)

    • In the NFS server host machine, ‘libcephfs2’ (preferably latest stable luminous or higher), ‘nfs-ganesha’ and ‘nfs-ganesha-ceph’ packages (latest ganesha v2.5 stable or higher versions)

    • NFS-Ganesha server host connected to the Ceph public network

    NFS-Ganesha provides a File System Abstraction Layer (FSAL) to plug in different storage backends. FSAL_CEPH is the plugin FSAL for CephFS. For each NFS-Ganesha export, FSAL_CEPH uses a libcephfs client, a user-space CephFS client, to mount the CephFS path that NFS-Ganesha exports.

    Setting up NFS-Ganesha with CephFS involves setting up NFS-Ganesha’s configuration file, and also setting up a Ceph configuration file and cephx access credentials for the Ceph clients created by NFS-Ganesha to access CephFS.

    A sample ganesha.conf configured with FSAL_CEPH can be found here. It is suitable for a standalone NFS-Ganesha server, or an active/passive configuration of NFS-Ganesha servers managed by some sort of clustering software (e.g., Pacemaker). Important details about the options are added as comments in the sample conf. There are options to do the following (a minimal export sketch is shown after this list):

    • minimize Ganesha caching wherever possible since the libcephfs clients (of FSAL_CEPH) also cache aggressively

    • read from Ganesha config files stored in RADOS objects

    • store client recovery data in RADOS OMAP key-value interface

    • mandate NFSv4.1+ access

    • enable read delegations (need at least v13.0.1 ‘libcephfs2’ package and v2.6.0 stable ‘nfs-ganesha’ and ‘nfs-ganesha-ceph’ packages)
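
    For orientation, here is a minimal sketch of what a FSAL_CEPH export block might look like; the Export_ID, paths, config file location, and User_Id below are illustrative assumptions, and the sample conf remains the reference for the full set of options:

        $ cat /etc/ganesha/ganesha.conf
        EXPORT
        {
            Export_ID = 100;          # unique export identifier (illustrative)
            Path = "/";               # CephFS path to export
            Pseudo = "/cephfs";       # NFS pseudo path that clients mount
            Protocols = 4;            # restrict access to NFSv4
            Transports = TCP;
            Access_Type = RW;
            Squash = No_Root_Squash;
            FSAL {
                Name = CEPH;          # FSAL_CEPH drives a libcephfs client for this export
                User_Id = "admin";    # cephx user the libcephfs client authenticates as (assumed)
            }
        }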

    Configuration for libcephfs clients

    Required ceph.conf for libcephfs clients includes:

    • a [client] section with the mon_host option set to let the clients connect to the Ceph cluster’s monitors, usually generated via ceph config generate-minimal-conf, e.g.,
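
    The generated file will contain something like the following (the monitor addresses are illustrative and will be those of your own cluster):

        [client]
                mon_host = [v2:192.168.1.7:3300,v1:192.168.1.7:6789]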

    It is preferred to mount the NFS-Ganesha exports using NFSv4.1+ protocols to get the benefit of sessions.

    Conventions for mounting NFS resources are platform-specific. The following conventions work on Linux and some Unix platforms:

    From the command line:

        mount -t nfs -o nfsvers=4.1,proto=tcp <ganesha-host-name>:<ganesha-pseudo-path> <mount-point>

    • Per running ganesha daemon, FSAL_CEPH can only export one Ceph file system, although multiple directories in a Ceph file system may be exported.

    This tutorial assumes you have a kubernetes cluster deployed. If not, minikube can be used to set up a single node cluster. In this tutorial minikube is used.


    Clone the rook repository:

        git clone https://github.com/rook/rook.git
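
    The YAML manifests used in the following steps (common.yaml, operator.yaml, cluster-test.yaml, toolbox.yaml) ship inside the cloned repository; assuming a Rook 1.x layout, change into the Ceph example directory first:

        cd rook/cluster/examples/kubernetes/ceph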

    Deploy the rook operator:

        kubectl create -f common.yaml
        kubectl create -f operator.yaml

    Note

    The Nautilus release or the latest Ceph image should be used.

    Before proceeding, check that the pods are running:

        kubectl -n rook-ceph get pod

    Note

    For troubleshooting on any pod use:

        kubectl describe -n rook-ceph pod <pod-name>

    If using a minikube cluster, change the dataDirHostPath to /data/rook in the cluster-test.yaml file. This is to make sure data persists across reboots.

    Deploy the ceph cluster:
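
    Assuming the cluster-test.yaml manifest mentioned above is used:

        kubectl create -f cluster-test.yaml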

    To interact with Ceph daemons, let’s deploy the toolbox:

        kubectl create -f ./toolbox.yaml

    Exec into the rook-ceph-tools pod:

        kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash

    Check that you have one Ceph monitor, one manager, and one OSD running, and that the cluster is healthy:

        [root@minikube /]# ceph -s
          cluster:
            id:     3a30f44c-a9ce-4c26-9f25-cc6fd23128d0
            health: HEALTH_OK

          services:
            mon: 1 daemons, quorum a (age 14m)
            mgr: a(active, since 13m)
            osd: 1 osds: 1 up (since 13m), 1 in (since 13m)

          data:
            pools:   0 pools, 0 pgs
            objects: 0 objects, 0 B
            usage:   5.0 GiB used, 11 GiB / 16 GiB avail
            pgs:

    Note

    A single monitor should never be used in a real production deployment, as it can be a single point of failure.

    Create a Ceph File System

    Using the ceph-mgr volumes module, we will create a Ceph file system:

        [root@minikube /]# ceph fs volume create myfs
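
    To confirm that the file system and its backing pools were created (their names are used in the next step), list them from the toolbox:

        [root@minikube /]# ceph fs ls
        [root@minikube /]# ceph osd lspools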

    By default, the replicated size for a pool is 3. Since we are using only one OSD, this can cause errors. Let’s fix this by setting the replicated size to 1:

        [root@minikube /]# ceph osd pool set cephfs.myfs.meta size 1
        [root@minikube /]# ceph osd pool set cephfs.myfs.data size 1


    Check the cluster status again:
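
    Re-running the status command from before should now show the newly created pools under the data section:

        [root@minikube /]# ceph -s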

    Add Storage for NFS-Ganesha Servers to prevent recovery conflicts:

        [root@minikube /]# ceph osd pool create nfs-ganesha
        pool 'nfs-ganesha' created
        [root@minikube /]# ceph osd pool set nfs-ganesha size 1
        [root@minikube /]# ceph orchestrator nfs add mynfs nfs-ganesha ganesha

    Here we have created an NFS-Ganesha cluster called “mynfs” in the “ganesha” namespace with the “nfs-ganesha” OSD pool.

    Scale out NFS-Ganesha cluster:

        [root@minikube /]# ceph orchestrator nfs update mynfs 2
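
    To confirm that a second ganesha daemon came up, list the NFS server pods; assuming Rook names them after the cluster (rook-ceph-nfs-mynfs-...), a simple filter is enough:

        kubectl -n rook-ceph get pod | grep nfs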

    Configure NFS-Ganesha Exports

    Initially, rook creates a ClusterIP service for the dashboard. With this service type, only pods in the same kubernetes cluster can access it.

    Expose Ceph Dashboard port:

        kubectl patch service -n rook-ceph -p '{"spec":{"type": "NodePort"}}' rook-ceph-mgr-dashboard
        kubectl get service -n rook-ceph rook-ceph-mgr-dashboard
        NAME                      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
        rook-ceph-mgr-dashboard   NodePort   10.108.183.148   <none>        8443:31727/TCP   117m

    This changes the service type to NodePort and makes the dashboard reachable from outside the kubernetes cluster.
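
    With the NodePort shown in the output above, the dashboard would be reachable from the host at, for example, https://$(minikube ip):31727; the actual port is assigned by Kubernetes and will differ between clusters.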

    Create JSON file for dashboard:

        $ cat ~/export.json
        {
            "cluster_id": "mynfs",
            "path": "/",
            "fsal": {"name": "CEPH", "user_id": "admin", "fs_name": "myfs", "sec_label_xattr": null},
            "pseudo": "/cephfs",
            "tag": null,
            "access_type": "RW",
            "squash": "no_root_squash",
            "protocols": [4],
            "transports": ["TCP"],
            "security_label": true,
            "daemons": ["mynfs.a", "mynfs.b"],
            "clients": []
        }

    Note

    Don’t use this JSON file for a real production deployment, as here the ganesha servers are given client admin access rights.

    We need to download and run this script to pass the JSON file contents. The dashboard creates the NFS-Ganesha export file based on this JSON file:

        ./run-backend-rook-api-request.sh POST /api/nfs-ganesha/export "$(cat <json-file-path>)"

    Expose the NFS Servers:
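
    A sketch of exposing the ganesha server services the same way as the dashboard; the service names below assume Rook's rook-ceph-nfs-<cluster>-<instance> naming for the two daemons created earlier:

        kubectl patch service -n rook-ceph -p '{"spec":{"type": "NodePort"}}' rook-ceph-nfs-mynfs-a
        kubectl patch service -n rook-ceph -p '{"spec":{"type": "NodePort"}}' rook-ceph-nfs-mynfs-b
        kubectl get services -n rook-ceph | grep nfs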

    Note

    Ports are chosen at random by Kubernetes from a certain range. A specific port number can be added to the nodePort field in the spec.

    Open a root shell on the host and mount one of the NFS servers:

        mount -t nfs -o port=31013 $(minikube ip):/cephfs /mnt/rook

    Normal file operations can be performed on /mnt/rook if the mount is successful.
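
    For example, a quick sanity check after mounting (the file name is arbitrary):

        df -h /mnt/rook
        touch /mnt/rook/hello
        ls -l /mnt/rook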
