Direct Access NFS

Describes the data-fabric direct access filesystem.

The data-fabric direct access file system enables real-time read/write data flows using the Network File System (NFS) protocol. Standard applications and tools can directly access the filesystem storage layer using NFS, legacy systems can access data, and traditional file I/O operations work as expected on a conventional UNIX filesystem. A remote client can easily mount a data-fabric cluster over NFS to move data to and from the cluster. Application servers can write log files and other data directly to the data-fabric cluster's storage layer instead of caching the data on external direct-attached or network-attached storage.

You can mount a data-fabric cluster directly over NFS from a Linux or Mac client. When you mount a data-fabric cluster, applications can read and write data directly in the cluster with standard tools, applications, and scripts. Data Fabric enables direct file modification and multiple concurrent reads and writes with POSIX semantics. For example, you can run a MapReduce application that outputs to a CSV file, and then import the CSV file directly into a SQL database through NFS.
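A mount from a Linux client might look like the following sketch. The host name, cluster name, and mount options here are placeholders for illustration; check your cluster's NFS gateway address and recommended options before using them.

```shell
# Create a local mount point for the cluster.
sudo mkdir -p /mapr

# Mount the cluster's NFS export (nfs-gw.example.com is a hypothetical
# NFS gateway host). The hard and nolock options are commonly used for
# this kind of mount; verify against your cluster's documentation.
sudo mount -o hard,nolock nfs-gw.example.com:/mapr /mapr

# Standard tools now read and write cluster data directly,
# e.g. copying a local file into the cluster:
cp results.csv /mapr/my.cluster.com/user/alice/results.csv
```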

Data Fabric exports each cluster as the directory /mapr/<cluster name>. If you create a mount point with the local path /mapr, Hadoop FS paths and NFS paths to the cluster will be the same. This makes it easy to work on the same files through NFS and Hadoop. In a multi-cluster setting, the clusters share a single namespace. You can see them all by mounting the top-level /mapr directory.
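The path equivalence described above can be sketched as follows. The cluster name `my.cluster.com` and the file layout are hypothetical; the point is that with the cluster mounted at the local path /mapr, the Hadoop FS path and the local NFS path to a file are string-identical.

```shell
# Hypothetical paths for a cluster named my.cluster.com mounted at /mapr.
HADOOP_PATH="/mapr/my.cluster.com/user/alice/results.csv"
NFS_PATH="/mapr/my.cluster.com/user/alice/results.csv"

# The same file is reachable either way:
#   hadoop fs -cat "$HADOOP_PATH"
#   cat "$NFS_PATH"
[ "$HADOOP_PATH" = "$NFS_PATH" ] && echo "paths match"
```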