Persistent storage using NFS

    Storage must exist in the underlying infrastructure before it can be mounted as a volume in OKD. To provision NFS volumes, a list of NFS servers and export paths is all that is required.

    Procedure

    1. Create an object definition for the PV:

      Each NFS volume must be mountable by all schedulable nodes in the cluster.
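
      For example, a minimal PV object definition for an NFS export might look like the following. The name, capacity, access mode, server, and path values here are illustrative placeholders; substitute the values for your environment:

        apiVersion: v1
        kind: PersistentVolume
        metadata:
          name: pv0001
        spec:
          capacity:
            storage: 5Gi
          accessModes:
            - ReadWriteOnce
          nfs:
            server: nfs.example.com      # placeholder NFS server
            path: "/exports/pv0001"      # placeholder export path
          persistentVolumeReclaimPolicy: Retain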

    2. Verify that the PV was created:

      $ oc get pv

      Example output

      NAME      LABELS    CAPACITY   ACCESSMODES   STATUS      CLAIM   REASON   AGE
      pv0001    <none>    5Gi        RWO           Available                    31s
    3. Create a persistent volume claim that binds to the new PV:

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: nfs-claim1
      spec:
        accessModes:
          - ReadWriteOnce (1)
        resources:
          requests:
            storage: 5Gi (2)
        volumeName: pv0001
        storageClassName: ""

      (1) The access modes do not enforce security, but rather act as labels to match a PV to a PVC.
      (2) This claim looks for PVs offering 5Gi or greater capacity.
    4. Verify that the persistent volume claim was created:

        $ oc get pvc

        Example output

        NAME         STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
        nfs-claim1   Bound    pv0001   5Gi        RWO                           2m

      You can use disk partitions to enforce disk quotas and size constraints. Each partition can be its own export. Each export is one PV. OKD enforces unique names for PVs, but the uniqueness of the NFS volume’s server and path is up to the administrator.

      Enforcing quotas in this way allows the developer to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity.
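
      As an illustration, assuming a server with two partitions mounted at the hypothetical paths /exports/disk1 and /exports/disk2, the /etc/exports file on the NFS server could expose each partition as its own export, and each export then backs one PV:

        /exports/disk1 *(rw,root_squash)
        /exports/disk2 *(rw,root_squash)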

      This section covers NFS volume security, including matching permissions and SELinux considerations. The user is expected to understand the basics of POSIX permissions, process UIDs, supplemental groups, and SELinux.

      Developers request NFS storage by referencing either a PVC by name or the NFS volume plug-in directly in the volumes section of their Pod definition.
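
      As a sketch of the first approach, a pod can mount the nfs-claim1 PVC created earlier by referencing it in its volumes section. The pod name, container image, and mount path below are illustrative:

        apiVersion: v1
        kind: Pod
        metadata:
          name: nfs-app                                # illustrative pod name
        spec:
          containers:
            - name: app
              image: registry.example.com/app:latest   # illustrative image
              volumeMounts:
                - name: nfs-data
                  mountPath: /data                     # illustrative mount path
          volumes:
            - name: nfs-data
              persistentVolumeClaim:
                claimName: nfs-claim1                  # PVC created in the procedure above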

      The /etc/exports file on the NFS server contains the accessible NFS directories. The target NFS directory has POSIX owner and group IDs. The OKD NFS plug-in mounts the container’s NFS directory with the same POSIX ownership and permissions found on the exported NFS directory. However, the container is not run with its effective UID equal to the owner of the NFS mount, which is the desired behavior.

      As an example, if the target NFS directory appears on the NFS server as:

      $ ls -lZ /opt/nfs -d

      Example output

      drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0   /opt/nfs

      $ id nfsnobody

      Example output

      uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)

      Then the container must match SELinux labels, and either run with a UID of 65534, the nfsnobody owner, or with 5555 in its supplemental groups to access the directory.

      The recommended way to handle NFS access, assuming it is not an option to change permissions on the NFS export, is to use supplemental groups. Supplemental groups in OKD are used for shared storage, of which NFS is an example. In contrast, block storage such as iSCSI uses the fsGroup SCC strategy and the fsGroup value in the securityContext of the pod.

      To gain access to persistent storage, it is generally preferable to use supplemental group IDs versus user IDs.

      Because the group ID on the example target NFS directory is 5555, the Pod can define that group ID using supplementalGroups under the securityContext definition of the pod. For example:

      containers:
      - name:
        ...
      securityContext: (1)
        supplementalGroups: [5555] (2)

      (1) securityContext must be defined at the pod level, not under a specific container.
      (2) An array of GIDs defined for the pod. In this case, there is one element in the array. Additional GIDs would be comma-separated.

      Assuming there are no custom SCCs that might satisfy the pod requirements, the pod likely matches the restricted SCC. This SCC has the supplementalGroups strategy set to RunAsAny, meaning that any supplied group ID is accepted without range checking.

      As a result, the above pod passes admissions and is launched. However, if group ID range checking is desired, a custom SCC is the preferred solution. A custom SCC can be created such that minimum and maximum group IDs are defined, group ID range checking is enforced, and a group ID of 5555 is allowed.
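
      For example, a custom SCC along the following lines would enforce group ID range checking while allowing a group ID of 5555. The SCC name and the range boundaries are hypothetical, and only the fields relevant to this example are shown:

        kind: SecurityContextConstraints
        apiVersion: security.openshift.io/v1
        metadata:
          name: nfs-supplemental-scc     # hypothetical SCC name
        allowPrivilegedContainer: false
        runAsUser:
          type: MustRunAsRange
        seLinuxContext:
          type: MustRunAs
        fsGroup:
          type: MustRunAs
        supplementalGroups:
          type: MustRunAs                # enforces group ID range checking
          ranges:
            - min: 5000                  # hypothetical minimum group ID
              max: 6000                  # hypothetical maximum; 5555 falls inside this range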

      User IDs can be defined in the container image or in the Pod definition.

      It is generally preferable to use supplemental group IDs to gain access to persistent storage versus using user IDs.

      In the example target NFS directory shown above, the container needs its UID set to 65534, ignoring group IDs for the moment, so the following can be added to the Pod definition:

      spec:
        containers: (1)
        - name:
          ...
          securityContext:
            runAsUser: 65534 (2)

      (1) Pods contain a securityContext definition specific to each container and a pod's securityContext which applies to all containers defined in the pod.
      (2) 65534 is the nfsnobody user.

      Assuming that the project is default and the SCC is restricted, the user ID of 65534 as requested by the pod is not allowed. Therefore, the pod fails for the following reasons:

      • It requests 65534 as its user ID.

      • All SCCs available to the Pod are examined to see which SCC allows a user ID of 65534. While all policies of the SCCs are checked, the focus here is on user ID.

      • Because all available SCCs use MustRunAsRange for their runAsUser strategy, UID range checking is required.

      • 65534 is not included in the SCC or project’s user ID range.

      It is generally considered a good practice not to modify the predefined SCCs. The preferred way to fix this situation is to create a custom SCC. A custom SCC can be created such that minimum and maximum user IDs are defined, UID range checking is still enforced, and the UID of 65534 is allowed.
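
      For example, a custom SCC along the following lines would keep UID range checking in place while allowing a UID of 65534. The SCC name and the range boundaries are hypothetical, and only the fields relevant to this example are shown:

        kind: SecurityContextConstraints
        apiVersion: security.openshift.io/v1
        metadata:
          name: nfs-runasuser-scc        # hypothetical SCC name
        allowPrivilegedContainer: false
        runAsUser:
          type: MustRunAsRange           # UID range checking is still enforced
          uidRangeMin: 65000             # hypothetical minimum UID
          uidRangeMax: 65534             # includes the nfsnobody UID
        seLinuxContext:
          type: MustRunAs
        fsGroup:
          type: MustRunAs
        supplementalGroups:
          type: RunAsAny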

      Fedora and Fedora CoreOS (FCOS) systems are configured to use SELinux on remote NFS servers by default.

      For non-Fedora and non-FCOS systems, SELinux does not allow writing from a pod to a remote NFS server. The NFS volume mounts correctly but it is read-only. You will need to enable the correct SELinux permissions by using the following procedure.

      Prerequisites

      • The container-selinux package must be installed. This package provides the virt_use_nfs SELinux boolean.

      Procedure

      • Enable the virt_use_nfs boolean using the following command. The -P option makes this boolean persistent across reboots.

        # setsebool -P virt_use_nfs 1

      To enable arbitrary container users to read and write the volume, each exported volume on the NFS server should conform to the following conditions:

      • Every export must be exported using the following format:

        /<example_fs> *(rw,root_squash)
      • The firewall must be configured to allow traffic to the mount point.

        • For NFSv4, configure the default port 2049 (nfs).

          NFSv4

            # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT

        • For NFSv3, there are three ports to configure: 2049 (nfs), 20048 (mountd), and 111 (portmapper).

          NFSv3

            # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT
            # iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT
            # iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT
        • The NFS export and directory must be set up so that they are accessible by the target pods. Either set the export to be owned by the container’s primary UID, or supply the pod group access using supplementalGroups, as shown in the group IDs above.

        NFS implements the OKD Recyclable plug-in interface. Automatic processes handle reclamation tasks based on policies set on each persistent volume.

        By default, PVs are set to Retain.
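
        If you need to confirm or change the reclaim policy of an existing PV, the persistentVolumeReclaimPolicy field can be inspected and patched. The following commands use the pv0001 example from the procedure above:

          $ oc get pv pv0001 -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'

          $ oc patch pv pv0001 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'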

        Once the PVC bound to a PV is deleted, and the PV is released, the PV object should not be reused. Instead, a new PV should be created with the same basic volume details as the original.

        For example, the administrator creates a PV named nfs1:

        apiVersion: v1
        kind: PersistentVolume
        metadata:
          name: nfs1
        spec:
          capacity:
            storage: 1Mi
          accessModes:
            - ReadWriteMany
          nfs:
            server: 192.168.1.1
            path: "/"

        The user creates PVC1, which binds to nfs1. The user then deletes PVC1, releasing the claim on nfs1. This results in nfs1 being Released. If the administrator wants to make the same NFS share available, they should create a new PV with the same NFS server details, but a different PV name:

        apiVersion: v1
        kind: PersistentVolume
        metadata:
          name: nfs2
        spec:
          capacity:
            storage: 1Mi
          accessModes:
            - ReadWriteMany
          nfs:
            server: 192.168.1.1
            path: "/"

        Deleting the original PV and re-creating it with the same name is discouraged. Attempting to manually change the status of a PV from Released to Available causes errors and potential data loss.

        Depending on what version of NFS is being used and how it is configured, there may be additional configuration steps needed for proper export and security mapping. The following are some that may apply:

        NFSv4 mount incorrectly shows all files with ownership of nobody:nobody

        • Could be attributed to the ID mapping settings, found in /etc/idmapd.conf on your NFS server; see the sketch below.

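        As a sketch, assuming the mismatch is caused by the NFSv4 ID mapping domain, the Domain setting in /etc/idmapd.conf must match on both the NFS server and the clients. The domain value below is a placeholder:

          # /etc/idmapd.conf (on both the NFS server and the clients)
          [General]
          Domain = example.com        # placeholder; use your NFS domain

        After changing the domain, restarting the ID mapping service, for example with systemctl restart nfs-idmapd, and clearing the client keyring with nfsidmap -c is one common way to apply the change.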