config
Manages configuration values for the MapR cluster.
Configuration Fields
The following fields are configurable.
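On a live cluster, the current values of these fields can be inspected with the `maprcli config load` command. The following is a minimal sketch, assuming a running MapR cluster with the `maprcli` tool on the PATH; the exact output depends on your cluster:

```shell
# List all configuration fields and their current values as JSON
# (requires a running MapR cluster; run as a cluster admin user).
maprcli config load -json

# Load only specific keys (comma-separated list of field names):
maprcli config load -keys cldb.mfs.heartbeat.timeout.multiple
```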
Field | Default Value | Description
---|---|---
| | 10 | The maximum number of containers that can be balanced in parallel by the Disk Balancer, expressed as a percentage of the number of nodes in the system.
| | 1 | Enables (1) or disables (0) the Disk Balancer.
| | 2 * 60 | The sleep interval (in seconds) between two successive runs of the Disk Balancer.
| | 70 | The percentage of used space at which containers in a storage pool are redistributed to other, less used storage pools.
| | 0 | Disables (0) or enables (1) the logging of messages by the Disk Balancer and Role Balancer.
| | 10 | The percentage (of the number of nodes in the system) used to determine the maximum number of containers whose roles (Masters and Tails) are balanced in parallel by the Role Balancer. For example, with 500 nodes and a value of 10(%), the number of containers whose roles are balanced in parallel is (10/100) * 500 = 50.
| | 1 | Enables (1) or disables (0) the Role Balancer.
| | 15 * 60 | The sleep interval (in seconds) between two successive runs of the Role Balancer.
| | 30 * 60 | The initial startup delay of the Role Balancer for existing clusters.
| | 90 | The percentage at which the
| | 0 | The allocation algorithm to use when creating new containers. Values can be:
| | 85 | The percentage of used space at which a fileserver is classified as FULL.
| | 16 * 1024 | The maximum size for containers. This is a soft limit.
| | 256 | The size of each chunk that makes up a file in MapR-FS.
| | | The default topology for new volumes.
| | 365 | The retention period (in days) of the files used to record Dialhome metrics. Files that are past their retention period are automatically deleted.
| | 60 * 60 | The number of seconds a node can fail to heartbeat before it is considered dead. Once a node is considered dead, the CLDB re-replicates any data contained on the node.
| | 60 | The frequency (in minutes) at which the CLDB logs messages about a fileserver's time skew.
| | 3 | The number of container replicas that can resync in parallel from the source for low_latency (star-replicated) volumes.
cldb.mfs.heartbeat.timeout.multiple | 10 | Specifies the heartbeat timeout multiple. For small clusters, the heartbeat interval is 1 second and the multiple is 10 by default, which makes the heartbeat timeout 10 seconds.
| | 0 | Disables (0) or enables (1) the processing of critically under-replicated containers. If enabled, critically under-replicated containers are processed on a priority basis to increase the number of copies.
| | 1200 | The number of containers that can be replicated in parallel, expressed as a percentage of the number of active nodes. For example, a value of 1200 allows 12 times as many containers as active nodes to be replicated in parallel.
| | 0 | Disables (0) or enables (1) the processing of over-replicated containers. Over-replicated containers, which have more copies than the desired replication factor, are processed to delete the extra copies.
| | 15 | The delay between CLDB startup and Replication Manager startup, which allows all nodes to register and send heartbeats.
| | 4 | The maximum number of containers that can be in transit on a storage pool (SP). Containers serving as either the source or the destination of a resync operation are considered in transit.
| | 15 | The sleep duration (in seconds) between consecutive runs of the Replication Manager.
| | 2 * 60 | The sleep duration (in seconds) between consecutive runs of the Replication Scanner. The Replication Scanner bucketizes containers into different classes, while the Manager thread either replicates containers or removes extra copies.
| | 90 | The threshold percentage used to raise alarms when the used space on the nodes of a topology exceeds that percentage of total space.
| | 3 | The default replication factor for the CLDB volume.
| | 2 | The default minimum replication factor. Containers with fewer copies than this value are replicated on a priority basis.
| | 3 | The desired replication factor for data.
| | "bz2,gz,tgz,tbz2,zip,z,Z,mp3,jpg,jpeg,mpg,mpeg,avi,gif,png,lzo,jar" | The file types that should not be compressed. See File Extensions of Compressed Files.
| | root | The super group of the MapR-FS layer.
| | mapr | The super user of the MapR-FS layer.
| | | Sets the current version of the MapR distribution. If this variable is not set after an upgrade, alarms are missed when the nodes in a cluster are not all at the same software version.
mfs.feature.db.json.support | 1 for new MapR installations, 0 for upgraded MapR installations | Disables (0) or enables (1) MapR Streams and MapR-DB support for JSON documents and tables.
mfs.feature.devicefile.support | 1 | Specifies whether named pipes can be used over NFS.
pernode.numcntrs.alarm.thr | 50000 | The maximum number of RW containers on each node beyond which performance may not be optimal. The optimal number of RW and snapshot containers combined is 10 times the value of this parameter.
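Values for these fields are changed with the `maprcli config save` command, which takes a JSON map of field names to new values. A minimal sketch, assuming a running MapR cluster and using one of the field names listed above; the chosen value of 15 is illustrative only:

```shell
# Raise the heartbeat timeout multiple from the default 10 to 15.
# With a 1-second heartbeat interval, this extends the heartbeat
# timeout from 10 seconds to 15 seconds.
maprcli config save -values '{"cldb.mfs.heartbeat.timeout.multiple":"15"}'

# Verify the change took effect:
maprcli config load -keys cldb.mfs.heartbeat.timeout.multiple
```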