Discusses the features of the data-fabric filesystem, and provides a comparison with the Hadoop Distributed File System (HDFS).

The data-fabric filesystem provides a unified data solution for structured data (tables) and unstructured data (files).

The data-fabric filesystem is a distributed filesystem with full random read-write semantics, allowing applications to concurrently read from and write directly to disk. The Hadoop Distributed File System (HDFS), by contrast, supports only append writes and can read only from closed files. Because HDFS is layered over the existing Linux filesystem, a large number of input/output (I/O) operations degrades cluster performance. The data-fabric filesystem also eliminates the NameNode, a single point of failure in other Hadoop distributions, and enables special features for data management and high availability.
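To illustrate what random read-write access means in practice, the sketch below overwrites bytes in the middle of an existing file in place using standard POSIX calls, an operation HDFS's append-only model does not allow. The file path is a local stand-in for a file on a mounted data-fabric filesystem (an assumption for the demo).

```python
import os
import tempfile

# Create a 16-byte file as the starting point.
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"A" * 16)

# Random write: patch 2 bytes at offset 4 without rewriting the file.
fd = os.open(path, os.O_RDWR)
os.pwrite(fd, b"XY", 4)          # in-place overwrite at an arbitrary offset
patched = os.pread(fd, 16, 0)    # random read of the whole file
os.close(fd)

print(patched)                   # b'AAAAXYAAAAAAAAAA'
```

On a data-fabric cluster the same calls work against files under the NFS or FUSE mount point, because the filesystem exposes POSIX semantics directly.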

The storage system architecture used by the data-fabric filesystem is written in C/C++, which eliminates the performance impact of Java garbage collection, and is designed to prevent locking contention.

The following list highlights some of the features of the data-fabric filesystem:

Storage pools: A group of disks to which the data-fabric filesystem writes data.

Containers: An abstract entity that stores files and directories in the data-fabric filesystem. A container always belongs to exactly one volume and can hold namespace information, file chunks, or table chunks for that volume.

CLDB: The Container Location Database, a service that tracks the location of every container.

Volumes: A management entity that stores and organizes containers. Volumes are used to distribute metadata, set permissions on data in the cluster, and back up data. A volume consists of a single name container and a number of data containers.

Direct Access NFS: Enables applications to read and write data directly to the cluster over NFS.

POSIX clients: The loopbacknfs and FUSE-based POSIX clients connect to one or more data-fabric clusters and allow app servers, web servers, and applications to write data directly and securely to the data-fabric cluster.