This site contains the main documentation for Version 6.1 of the MapR Converged Data Platform, including installation, configuration, administration, and reference information.
This section contains information about installing and upgrading MapR software. It also contains information about how to migrate data and applications from an Apache Hadoop cluster to a MapR cluster.
MapR Data Platform is the industry-leading data platform for AI and analytics that solves enterprise business needs.
This section describes how to manage the nodes and services that make up a cluster.
This section contains information related to application development for Ezmeral ecosystem components and MapR Data Platform products, including the file system, Database (Key-Value and JSON), and Event Streams.
This section contains release-independent information, including: MapR Installer documentation, Ecosystem release notes, interoperability matrices, security vulnerabilities, and links to other MapR version documentation.
Definitions for commonly used terms in MapR Converged Data Platform environments.
A special file in every directory, for controlling the compression and chunk size used for the directory and its subdirectories.
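In the MapR file system this special file is named `.dfs_attributes`. A minimal sketch of its contents (the key names follow the commonly documented format; the values here are illustrative only):

```
# Lines beginning with # are treated as comments.
Compression=lz4
ChunkSize=268435456
```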
A special mount point in the root-level volume (or read-only mirror) that points to the writable original copy of the volume.
A special directory in the top level of each volume, containing all the snapshots for that volume.
A Boolean expression that defines a combination of users, groups, or roles that have access to an object stored natively such as a directory, file, or MapR Database table.
A list of permissions attached to an object. An ACL specifies users or system processes that can perform specific actions on an object.
In the Control System, a user or group whose use of a volume can be subject to quotas. Using the Control System, you can set or modify quotas that limit the space used by all the volumes owned by an accountable entity.
In the CLI, a user or group whose use of a volume can be subject to quotas. Using the CLI, you can set or modify quotas that limit the space used by all the volumes owned by the accounting entity.
A user or users with special privileges to administer the cluster or cluster resources. Administrative functions can include managing hardware resources, users, data, services, security, and availability.
An advisory disk capacity limit that can be set for a volume, user, or group. When disk usage exceeds the advisory quota, an alert is sent.
Physical isolation between a computer system and unsecured networks. To enhance security, air-gapped computer systems are disconnected from other systems and networks.
Lightweight, stand-alone executables that include everything needed to run an application. Application containers are typically available for Linux and Windows applications.
Key-value and columnar database with HBase API. Supports Apache HBase tables and databases and also provides a native implementation of the HBase API for optimized performance on the MapR platform.
A binary number in which each bit controls a single toggle.
Files in the filesystem are split into chunks (similar to Hadoop blocks) that are 256 MB by default. Any multiple of 65,536 bytes is a valid chunk size, but tuning the size correctly is important. Files inherit the chunk-size setting of the directory that contains them, as do subdirectories on which chunk size has not been explicitly set. Any file written by a Hadoop application, whether via the file APIs or over NFS, uses the chunk size specified by the settings for the directory where the file is written.
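The validity rule above (any positive multiple of 65,536 bytes) can be sketched as follows; this is an illustrative check, not MapR code:

```python
# A chunk size is valid if it is a positive multiple of 65,536 bytes (64 KB).
CHUNK_GRANULARITY = 65_536
DEFAULT_CHUNK_SIZE = 256 * 1024 * 1024  # 256 MB default

def is_valid_chunk_size(size_bytes: int) -> bool:
    return size_bytes > 0 and size_bytes % CHUNK_GRANULARITY == 0

print(is_valid_chunk_size(DEFAULT_CHUNK_SIZE))  # True: 256 MB is 4096 * 64 KB
print(is_valid_chunk_size(100_000_000))         # False: not a multiple of 64 KB
```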
A node that runs the mapr-client that can access every cluster node and is used to access the cluster. Also referred to as an "edge node." Client nodes and edge nodes are NOT part of a data-fabric cluster.
The MapR user.
A node that is part of a data-fabric cluster. Cluster nodes can be used for data, compute, or both data and compute.
The interval of time during which READ, WRITE, or GETATTR operations on one file from one IP address or UID are logged only once for a particular operation, if auditing is enabled.
A compute node is used to process data using a compute engine (for example, YARN, Hive, Spark, or Drill). A compute node is by definition a data-fabric cluster node.
The unit of shared storage in a MapR cluster. Every container is either a name container or a data container.
A service, running on one or more MapR nodes, that maintains the locations of services, containers, and other cluster information.
In Kubernetes, the plan or blueprint for building and maintaining an application. Custom resources are specified as .yaml files.
In Kubernetes, a list of valid fields that defines the shape of a custom resource (CR).
A process that enables users to remove empty or deleted space in the database and to compact the database to occupy contiguous space.
One of the two types of containers in a MapR cluster. Data containers typically have a cascaded configuration (master replicates to replica1, replica1 replicates to replica2, and so on). Every data container is either a master container, an intermediate container, or a tail container depending on its replication role.
A data node stores data and always runs the FileServer service. A data node is by definition a data-fabric cluster node.
The number of copies of a volume that should be maintained by the MapR cluster for normal operation.
A label for a feature or collection of features that have usage restrictions. Developer previews are not tested for production environments, and should be used with caution.
The disk space balancer is a tool that balances disk space usage on a cluster by moving containers between storage pools. Whenever a storage pool is over 70% full (or a threshold defined by the cldb.balancer.disk.threshold.percentage parameter), the disk space balancer distributes containers to other storage pools that have lower utilization than the average for that cluster. The disk space balancer aims to ensure that the percentage of space used on all of the disks in the node is similar.
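The selection rule described above can be sketched as a toy function; this is illustrative only, not the CLDB's actual algorithm, and the pool names are placeholders:

```python
# Toy sketch of the disk space balancer's selection rule (illustrative only).
# A pool whose utilization exceeds the threshold sheds containers to pools
# whose utilization is below the cluster average.
THRESHOLD_PCT = 70  # default for cldb.balancer.disk.threshold.percentage

def pick_targets(pool_utilization_pct: dict) -> dict:
    average = sum(pool_utilization_pct.values()) / len(pool_utilization_pct)
    overfull = [p for p, u in pool_utilization_pct.items() if u > THRESHOLD_PCT]
    targets = [p for p, u in pool_utilization_pct.items() if u < average]
    return {p: targets for p in overfull}

pools = {"sp0": 85.0, "sp1": 40.0, "sp2": 55.0}  # average utilization: 60%
print(pick_targets(pools))  # {'sp0': ['sp1', 'sp2']}
```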
A file on each node, containing a list of the node's disks that have been configured for use by the file system.
The application containers used by Docker software. Docker is a leading proponent of OS virtualization using application containers ("containerization").
A file containing data from a volume for distribution or restoration. There are two types of dump files: full dump files containing all data in a volume, and incremental dump files that contain changes to a volume between two points in time.
A small-footprint edition of the HPE Ezmeral Data Fabric designed to capture, process, and analyze IoT data close to the source of the data.
A node that runs the mapr-client that can access every cluster node and is used to access the cluster. Also referred to as a "client node." Client nodes and edge nodes are NOT part of a data-fabric cluster.
A user or group.
A sequence number that identifies the copies that have the latest updates for a container. The larger the number, the more up-to-date the copy of the container. The CLDB uses the epoch to ensure that an out-of-date copy cannot become the master for the container.
A filelet, also called an fid, is a 256 MB shard of a file. A 1 GB file, for instance, consists of the following filelets: a 64 KB primary fid, one filelet of (256 MB - 64 KB), and three filelets of 256 MB each.
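The arithmetic in the example can be verified directly; the filelet sizes sum to exactly 1 GB:

```python
# The filelets of a 1 GB file: a 64 KB primary fid, one (256 MB - 64 KB)
# filelet, and three full 256 MB filelets.
KB, MB, GB = 1024, 1024**2, 1024**3
filelets = [64 * KB, 256 * MB - 64 * KB, 256 * MB, 256 * MB, 256 * MB]
print(len(filelets), sum(filelets) == GB)  # 5 True
```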
A node on which a mapr-gateway is installed. A gateway node is by definition a data-fabric cluster node.
A distributed storage system, designed to scale to a very large size, for managing massive amounts of structured data.
A small-footprint edition of the HPE Ezmeral Data Fabric designed to capture, process, and analyze IoT data close to the source of the data. Also referred to as an "edge cluster."
A signal sent by each FileServer and NFS node every second to provide information to the CLDB about the node's health and resource usage.
A program that simplifies installation of the HPE Ezmeral Data Fabric. The Installer guides you through the process of installing a cluster with data-fabric services and ecosystem components. You can also use the Installer to update a previously installed cluster with additional nodes, services, and ecosystem components, and to upgrade a cluster to a newer core version, provided the cluster was installed using the Installer or an Installer Stanza.
The node on which you run the Installer program. The Installer node can be a node in the cluster that you plan to install, or it can be a node that is not part of the cluster. However, certain prerequisites must be met if the Installer node is not one of the nodes in the cluster to be installed.
A process that purges messages previously published to a topic partition, retaining the latest version.
The minimum complement of software packages required to construct a MapR cluster. These packages include mapr-core, mapr-core-internal, mapr-cldb, mapr-apiserver, mapr-fileserver, mapr-zookeeper, and others. Note that ecosystem components are not part of MapR core.
The NFS-mountable, distributed, high-performance MapR data-storage system.
The "MapR user." The user that cluster services run as (typically named mapr or hadoop) on each node.
A service that acts as a proxy and gateway for translating requests between lightweight client applications and the MapR cluster.
A set of Docker containers that provide persistent storage for Kubernetes objects through the MapR Filesystem. Once the Docker containers are installed, both a Kubernetes FlexVolume Driver and a Kubernetes Dynamic Volume Provisioner are available for static and dynamic provisioning of MapR storage.
A selected set of stable, interoperable, and widely used components from the Hadoop Ecosystem that are fully supported on the MapR platform.
A gateway that supports table and stream replication. The MapR gateway mediates one-way communication between a source MapR cluster and a destination cluster. The MapR gateway also applies updates from JSON tables to their secondary indexes and propagates Change Data Capture (CDC) logs.
The user that cluster services run as (typically named mapr or hadoop) on each node. The MapR user, also known as the "MapR admin," has full privileges to administer the cluster. The administrative privilege, with varying levels of control, can be assigned to other users as well.
A gateway that serves as a centralized entry point for all the operations that need to be performed on tiered storage.
The minimum number of copies of a volume that should be maintained by the MapR cluster for normal operation. When the replication factor falls below this minimum, re-replication occurs as aggressively as possible to restore the replication level. If any containers in the CLDB volume fall below the minimum replication factor, writes are disabled until aggressive re-replication restores the minimum level of replication.
A read-only physical copy of a volume.
A container in a MapR cluster that holds a volume's namespace information and file chunk locations, and the first 64 KB of each file in the volume.
A protocol that allows a user on a client computer to access files over a network as though they were stored locally.
An individual server (physical or virtual machine) in a cluster.
A data service that works with the ResourceManager to host the YARN resource containers that run on each data node.
In Kubernetes, a way to install and manage an application. Kubernetes operators handle not just application installation, but also the entire application lifecycle, including complex upgrades. An operator consists of a combination of two real Kubernetes objects: a controller and a custom resource.
A Docker-based application container image that includes a container-optimized MapR client. The PACC provides seamless access to cluster services, including the MapR File System, MapR Database, and MapR Event Store For Apache Kafka. The PACC makes it fast and easy to run containerized applications that access data in the cluster.
A disk capacity limit that can be set for a volume, user, or group. When disk usage exceeds the quota, no more data can be written.
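The difference between this hard quota and the advisory quota defined earlier can be sketched as follows; this is illustrative logic, not MapR code, and the function name is a placeholder:

```python
# Sketch of hard vs. advisory quota behavior (illustrative, not MapR code).
def check_write(used_bytes, write_bytes, advisory_quota, hard_quota):
    """Return (allowed, alert). A write that would exceed the hard quota is
    refused; exceeding only the advisory quota merely raises an alert."""
    new_usage = used_bytes + write_bytes
    if new_usage > hard_quota:
        return False, True          # write refused, alert raised
    return True, new_usage > advisory_quota

print(check_write(90, 5, 80, 100))  # (True, True): over advisory, under hard
print(check_write(98, 5, 80, 100))  # (False, True): would exceed hard quota
```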
The maximum allowable data loss, expressed as a point in time. If the recovery point objective is two hours, then the maximum acceptable amount of data loss is two hours of work.
The maximum allowable time to recovery after data loss. If the recovery time objective is five hours, then it must be possible to restore data up to the recovery point objective within five hours.
The number of copies of a volume.
The replication role of a container determines how that container is replicated to other storage pools in the cluster.
The replication role balancer is a tool that switches the replication roles of containers to ensure that every node has an equal share of master and replica containers (for name containers) and an equal share of master, intermediate, and tail containers (for data containers).
Re-replication occurs whenever the number of available replica containers drops below the number prescribed by that volume's replication factor. Re-replication may occur for a variety of reasons including replica container corruption, node unavailability, hard disk failure, or an increase in replication factor.
A YARN service that manages cluster resources and schedules applications.
The service that the node runs in a cluster. You can use a node for one, or a combination of the following roles: CLDB, JobTracker, WebServer, ResourceManager, Zookeeper, FileServer, TaskTracker, NFS, and HBase.
A Kubernetes object that holds sensitive information, such as passwords, tokens, and keys. Pods that require this sensitive information reference the secret in their pod definition. Secrets are the method Kubernetes uses to move sensitive data into pods.
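A minimal Secret manifest, using standard Kubernetes fields (the name and values here are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-secret        # placeholder name
type: Opaque
data:
  password: cGFzc3dvcmQ=      # values under data are base64-encoded
stringData:
  token: my-token             # values under stringData are plain text
```

A pod references the secret by name in its definition, for example through an environment variable or a mounted volume.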
The MapR platform and supported ecosystem components are designed to be secure by default; security is implemented unless the user takes specific steps to turn off security options.
A group of rules that specifies recurring points in time at which certain actions occur.
A read-only logical image of a volume at a specific point in time.
A unit of storage made up of one or more disks. By default, MapR storage pools contain two or three disks. For high-volume reads and writes, you can create larger storage pools when initially formatting storage during cluster creation.
The number of disks in a storage pool.
The group that has administrative access to the MapR cluster.
The user that has administrative access to the MapR cluster.
In the MapR platform, a file that contains keys used to authenticate users and cluster servers. Tickets are created using the maprlogin or configure.sh utilities and are encrypted to protect their contents. Different types of tickets are provided for users and services. For example, every user who wants to access a cluster must have a user ticket, and every node in a cluster must have a server ticket.
A Kubernetes secret that contains a ticket.
A tree of files and directories grouped for the purpose of applying a policy or set of policies to all of them at once.
A MapR process that coordinates the starting and stopping of configured services on a node.
A unit of memory allocated for use by YARN to process each map or reduce task.
ZooKeeper is a coordination service for distributed applications. It provides a shared hierarchical namespace that is organized like a standard file system.