Prerequisites for Installing the Container Storage Interface (CSI) Storage Plugin

Lists the prerequisites for installing and using the Container Storage Interface (CSI) Storage Plugin.

Hardware and Software Requirements

To install and use the Container Storage Interface (CSI) Storage Plugin, you must have the following components and supported versions:
HPE Ezmeral Data Fabric: File Store 6.1.0 or later. For additional version compatibility information, see CSI Version Compatibility.
Ezmeral Ecosystem Pack (EEP): Any EEP supported by data-fabric 6.1.0 or later. See EEP Support by MapR Core Version.
Kubernetes Software: 1.17 and later*
OS (Kubernetes nodes): All nodes in the Kubernetes cluster must use the same Linux OS. Configuration files are available to support:
  • CentOS
  • RHEL (use CentOS configuration file)
  • Ubuntu
NOTE: Docker for Mac with Kubernetes is not supported as a development platform for containers that use data-fabric for Kubernetes.
CSI Driver: FUSE and Loopback NFS drivers, implementing version 1.3.0 of the CSI spec. The download location shows the latest version of the driver.
Sidecar Containers: The CSI plugin pod uses:
  • csi-node-driver-registrar — v1.3.0
  • livenessprobe — v2.2.0
The CSI provisioner pod uses:
  • csi-attacher — v2.2.0
  • csi-provisioner — v1.6.0
  • csi-snapshotter — v3.0.2
  • snapshot-controller — v3.0.2
  • livenessprobe — v2.2.0
  • csi-resizer — v0.5.0
POSIX License: The Basic POSIX client package is included by default when you install data-fabric for Kubernetes. The Platinum POSIX client package can be enabled by specifying a parameter in the pod specification.

To enable the Platinum POSIX client package, see Enabling the Platinum Posix Client for Kubernetes Interfaces for Data Fabric FlexVolume Driver. For a comparison of the Basic and Platinum POSIX client packages, see Preparing for Installation (HPE Ezmeral Data Fabric POSIX Client).
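
As an illustration only (the option name and placement below are assumptions; the topics linked above document the actual procedure), enabling the Platinum client for a pod that mounts a data-fabric volume through the FlexVolume driver might look like the following fragment:

# Illustrative pod-spec volume fragment; the "platinum" option name is an assumption
volumes:
  - name: maprflexvolume
    flexVolume:
      driver: "mapr.com/maprfs"    # Data Fabric FlexVolume driver name (assumed)
      options:
        platinum: "true"           # assumed option that selects the Platinum POSIX client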

*Kubernetes alpha features are not supported.

Before You Install

The installation procedure for the Container Storage Interface (CSI) Storage Plugin assumes that the Kubernetes cluster is already installed and functioning normally. In addition, before you install:

  1. Ensure that all Kubernetes nodes use the same Linux distribution.

    For example, all nodes can be CentOS nodes, or all nodes can be Ubuntu nodes. A cluster with a mixture of CentOS and Ubuntu nodes is not supported.

  2. Configure your Kubernetes cluster to allow privileged pods by running the following commands:
    $ ./kube-apiserver ... --allow-privileged=true ...
    $ ./kubelet ... --allow-privileged=true ...
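
    With privileged pods allowed, a container can request privileged mode through a standard Kubernetes securityContext. A minimal sketch (the pod name and image are placeholders):
    # Illustrative Pod requesting privileged mode
    apiVersion: v1
    kind: Pod
    metadata:
      name: privileged-test          # placeholder name
    spec:
      containers:
        - name: test
          image: busybox             # placeholder image
          command: ["sleep", "3600"]
          securityContext:
            privileged: true         # permitted only when privileged pods are enabled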
  3. Enable mount propagation to share volumes mounted by one container with other containers in the same pod and other pods on the same node.

    See Mount Propagation for more information.
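
    Mount propagation is configured per volume mount in the pod specification. A minimal sketch using standard Kubernetes fields (the names, image, and paths are placeholders):
    # Illustrative Pod using bidirectional mount propagation
    apiVersion: v1
    kind: Pod
    metadata:
      name: mount-propagation-test
    spec:
      containers:
        - name: test
          image: busybox                             # placeholder image
          command: ["sleep", "3600"]
          securityContext:
            privileged: true                         # Bidirectional propagation requires a privileged container
          volumeMounts:
            - name: host-mount-dir
              mountPath: /hostdir
              mountPropagation: Bidirectional        # mounts made here propagate to the host and to other pods
      volumes:
        - name: host-mount-dir
          hostPath:
            path: /mnt/shared                        # placeholder host path
            type: DirectoryOrCreate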

  4. Apply the volume snapshot CRDs (CustomResourceDefinitions) to your Kubernetes cluster if they are not already present:
    Kubernetes 1.20 and Later
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v4.2.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v4.2.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/v4.2.1/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
    Kubernetes 1.19 and Earlier
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-3.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-3.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
    kubectl apply -f https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/release-3.0/client/config/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
    For more information, see Snapshot Controller.
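
    After applying the CRDs, you can confirm that they are registered:
    kubectl get crd | grep snapshot.storage.k8s.io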
  5. For OpenShift, install the SecurityContextConstraints by applying deploy/openshift/csi-scc.yaml in the mapr-csi GitHub repository:
    oc apply -f deploy/openshift/csi-scc.yaml
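
    For example, assuming the repository has been cloned locally (the repository URL shown here is an assumption):
    # Clone the mapr-csi repository and apply the SecurityContextConstraints
    git clone https://github.com/mapr/mapr-csi.git
    cd mapr-csi
    oc apply -f deploy/openshift/csi-scc.yaml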
  6. Create the state volume-mount path, and update the CSI driver yaml. In prior releases, the state of dynamically provisioned volumes and their snapshots was held in memory, so the provisioner lost this state whenever the controller pod was restarted or upgraded. After a restart, the provisioner could not take snapshots, restore snapshots, or resize or clone previously created volumes.

    With the latest version of the CSI driver, the provisioner persists the encrypted state of the dynamically provisioned volumes and their snapshots in a volume on the data-fabric cluster. If the controller pod is restarted, the state is automatically recovered, and operations on previously created volumes work as intended.

    You can change the state volume-mount prefix by updating the --statevolmountprefix=/path/to/dir argument for the mapr-kdfprovisioner container in the CSI driver yaml.

    NOTE: The directory you specify must be readable and writable by all users who provision volumes on the data-fabric cluster using CSI drivers:
    # Create state volume mount path
    hadoop fs -mkdir /apps/k8s
    hadoop fs -chmod 777 /apps/k8s
    
    # Update csi driver yaml
    --statevolmountprefix=/apps/k8s
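
    In the driver yaml, this flag is one entry in the argument list of the mapr-kdfprovisioner container; a sketch (other arguments omitted):
    # Sketch of the provisioner container arguments in the CSI driver yaml
    - name: mapr-kdfprovisioner
      args:
        - "--statevolmountprefix=/apps/k8s"   # state volume-mount path created above
        # ...existing provisioner arguments remain unchanged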
  7. Understand the number of volume mounts per node that your application requires. The CSI driver default is 20 volume mounts per node. You can modify the number of volume mounts per node by adjusting the value of the maxvolumepernode parameter in the csi-maprkdf-<version>.yaml or csi-maprnfskdf-<version>.yaml file.
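
    For example, to allow up to 50 volume mounts per node, you could adjust the corresponding argument for the node plugin container in the driver yaml (a sketch; exact placement may vary by driver version):
    # Sketch: raising the per-node volume mount limit from the default of 20
    args:
      - "--maxvolumepernode=50"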