configure.sh

Describes the syntax and parameters of the configure.sh script that you run for a number of tasks including setting up data-fabric client nodes, and configuring services for a node.

You run configure.sh to set up a data-fabric cluster node, or to set up a data-fabric client node for communication with one or more clusters. You can also run configure.sh to update the configuration of a node. For example, you can use configure.sh to change the services running on a node, or specify the user that runs data-fabric services.

Attention: On a Windows client, the configure.sh script is named configure.bat. The script requires the -c parameter and does not accept the -Z parameter, but otherwise works the same way as on a Linux client.

Steps Performed by configure.sh

configure.sh performs the following steps each time you run it:
  • Updates /opt/mapr/conf/mapr-clusters.conf with the cluster name. It creates or modifies a line in /opt/mapr/conf/mapr-clusters.conf containing a cluster name followed by a list of CLDB nodes. New entries are added to mapr-clusters.conf when the cluster name passed to the -N parameter is different from the existing cluster name in that file.
  • Checks that the node has at least 4GB of RAM, and that the /tmp and /opt partitions each have at least 1 GB of free space. If these conditions are not met, the script asks for confirmation before continuing.
  • Disables standard NFS daemons. If the node has the mapr-nfs role, the script disables the standard Linux NFS daemon, since both NFS processes cannot run on the same node.
  • Updates additional *.conf and *.xml files related to the cluster and the services running on the node. For example, yarn-site.xml, warden.conf, and cldb.conf may be updated based on input to configure.sh.
  • On cluster nodes, it creates a group named shadow, adds the data-fabric user to this group, and then enables members of the shadow group to view the /etc/shadow file. Read access to the /etc/shadow file enables data-fabric users to authenticate with the data-fabric cluster.
  • Starts newly installed services. If Warden is running when you run configure.sh, the script automatically starts the newly installed services.
  • All changes to configuration options or system files are logged to /opt/mapr/logs/configure.log. You can use the -L parameter to specify a different log file name.
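The RAM and free-space checks in the list above can be sketched in shell. This is an illustrative approximation, not the script's actual code; the function name and argument convention are invented for the example, while the 4 GB and 1 GB thresholds come from the description above:

```shell
#!/bin/sh
# Sketch of the prerequisite checks described above (illustrative only).
# Usage: check_prereqs RAM_KB TMP_FREE_KB OPT_FREE_KB
check_prereqs() {
    # At least 4 GB of RAM (4 * 1024 * 1024 KB)
    [ "$1" -ge 4194304 ] || echo "WARN: less than 4 GB of RAM"
    # At least 1 GB (1048576 KB) of free space in /tmp and /opt
    [ "$2" -ge 1048576 ] || echo "WARN: /tmp has less than 1 GB free"
    [ "$3" -ge 1048576 ] || echo "WARN: /opt has less than 1 GB free"
}

# On a live system, the inputs would come from /proc/meminfo and df -Pk.
check_prereqs 8388608 2097152 524288   # 8 GB RAM, 2 GB /tmp, 0.5 GB /opt
```

The sample call reports only the /opt shortfall; configure.sh itself asks for confirmation before continuing when a check fails.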

When you include disk-setup options (-D or -F) on nodes with the mapr-fileserver role, the script performs the following additional steps:

  • Runs disksetup to create the disktab file. configure.sh takes the value that you specify in the -disk-opts option and passes it to disksetup. For example, if you include -disk-opts FW5 when you run configure.sh, configure.sh runs disksetup -F -W5. If disksetup fails, configure.sh exits with an error.
  • Starts Zookeeper and Warden. When the configure.sh script starts services, the message starting <servicename> is echoed to the standard output to enable the user to see which services are starting. When Warden starts, the Warden and ZooKeeper services are added to the inittab file as the first available inittab IDs, enabling these services to restart automatically on failure.

    You can specify the -no-autostart option to prevent the script from starting Zookeeper or Warden when you run configure.sh with the -F or -D options.

Syntax

/opt/mapr/server/configure.sh  
        -C <cldb_list>  
        -Z <zookeeper_list> 
        -EZ <ext_zookeeper_list>  
        [<parameters>]
/opt/mapr/server/configure.sh  
        -C <cldb_list> 
        [ -M <cldb_mh_list ...> ] 
        -Z <zookeeper_list>  
        [<parameters>]
/opt/mapr/server/configure.sh 
        -c 
        [ -R ] 
        [<parameters>]
/opt/mapr/server/configure.sh 
        -R 
        [ -c ] 
        [<parameters>]

Options

-C
Use the -C option for CLDB servers that only have a single IP address. This option takes a comma-separated list of the CLDB nodes that this machine uses to connect to the data-fabric cluster. The list is in the following format:
hostname[:port_no][,hostname[:port_no]...]
-c
Specifies client setup. The -C option is required, while the -Z option is optional. See set up a data-fabric client node for communication with one or more clusters.
-EZ
The -EZ option is optional when configuring the cluster, and is not applicable when configuring a client. This option takes a comma-separated list of the external IP addresses of the ZooKeeper nodes in the cluster. The list is in the following format:
hostname[:port_no][,hostname[:port_no] ...]
-M
Use the -M option only for multihomed CLDB servers that have more than one IP address. This option takes a comma-separated list of the multihomed CLDB nodes that this machine uses to connect to the data-fabric cluster. The list is in the following format:
hostname[:port_no][,hostname[:port_no]...]
-R
After initial node configuration, specifies that configure.sh should use the previously configured ZooKeeper and CLDB nodes. The -C and -Z parameters are not required when you specify -R. When -R is specified, the CLDB credentials are read from mapr-clusters.conf, while the ZooKeeper credentials are read from warden.conf. Use the -R option when you make changes to the services configured on a node without changing the CLDB and ZooKeeper nodes. Specify the --noRecalcMem parameter to skip recalculating memory settings when refreshing roles.
Note: This parameter impacts the JMX parameters in /opt/mapr/conf/env_override.sh in the following ways:
  • When you set MAPR_JMXLOCALBINDING to true, running /opt/mapr/server/configure.sh -R sets MAPR_JMXAUTH to false, since JMX is only accessible from the local machine and does not require authentication.
  • When you set MAPR_JMXLOCALBINDING to false but set MAPR_JMXLOCALHOST to true, running /opt/mapr/server/configure.sh -R sets MAPR_JMXAUTH to true and MAPR_JMXSSL to false, since JMX is only accessible from the local network and does not require secure authentication.
  • When you set MAPR_JMXLOCALBINDING to false but set MAPR_JMXREMOTEHOST to true, running /opt/mapr/server/configure.sh -R sets MAPR_JMXAUTH to true and MAPR_JMXSSL to true, since JMX is now accessible remotely and requires secure authentication.
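The three JMX cases above can be summarized with a small shell sketch. The function name and calling convention are invented for illustration; the decision table itself follows the bullets above:

```shell
#!/bin/sh
# Sketch of the JMX settings derived by configure.sh -R, per the rules above.
# Arguments mirror MAPR_JMXLOCALBINDING, MAPR_JMXLOCALHOST, MAPR_JMXREMOTEHOST.
jmx_derive() {   # $1=localbinding $2=localhost $3=remotehost
    if [ "$1" = "true" ]; then
        # Local binding only: no authentication required
        echo "MAPR_JMXAUTH=false"
    elif [ "$2" = "true" ]; then
        # Local network: authentication, but no SSL
        echo "MAPR_JMXAUTH=true MAPR_JMXSSL=false"
    elif [ "$3" = "true" ]; then
        # Remote access: authentication and SSL
        echo "MAPR_JMXAUTH=true MAPR_JMXSSL=true"
    fi
}

jmx_derive false false true   # prints "MAPR_JMXAUTH=true MAPR_JMXSSL=true"
```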
-Z
The -Z option is required unless you specify the -c (lowercase), or the -R option. The -Z option takes a comma-separated list of the ZooKeeper nodes in the cluster. The list is in the following format:
hostname[:port_no][,hostname[:port_no]...]

Parameters

-certdomain
Specifies a DNS domain for generated SSL wildcard certificates. This domain overrides the default DNS domain.
--create-user | -a
Creates a local user to run data-fabric services, using the user specified either with the -u parameter, or from the environment variable $MAPR_USER.
-D
Specifies a comma-delimited list of disks to use with the data-fabric filesystem. With the -D option, you cannot specify partitions. By default, the configure.sh script automatically starts cluster services, after the configuration finishes successfully. If you do not want cluster services to be restarted, include the -no-autostart option along with the -D option.
-d
Specifies the host and port of the MySQL database used for storing data-fabric Metrics data.
-dare
Enables on-disk encryption at the cluster-level. When run on the first CLDB node with the -genkeys option, the utility generates the data-at-rest encryption master key file at /opt/mapr/conf/dare.master.key.
-defaultdb
Sets the default database (HBase or HPE Ezmeral Data Fabric Database) to which the HBase clients connect. If you do not explicitly configure this option, it defaults to hbase (HBase) when you have mapr-hbase-regionserver or mapr-hbase-master installed on the node. Otherwise, it defaults to maprdb (HPE Ezmeral Data Fabric Database). You can also change the database setting using hbase-site.xml or the HBase client connection. For more information, see Configure the Default Database for HBase Clients.
-disk-opts
Denotes disksetup formatting options. Do not include spaces or commas between the disksetup options. For example, you can specify -disk-opts FW5 to format the disks (F), and configure five disks per storage pool (W5).
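As an illustration of how a packed -disk-opts value maps to separate disksetup flags (for example, FW5 becoming -F -W5 as described under the disk-setup steps), consider the following sketch. This is not configure.sh's actual parsing code; the function name is invented:

```shell
#!/bin/sh
# Sketch: expand a packed -disk-opts value such as "FW5" into separate
# disksetup flags ("-F -W5"). Illustrative only.
expand_disk_opts() {
    # Prefix every letter with " -", then trim the leading space.
    echo "$1" | sed 's/[A-Za-z]/ -&/g' | sed 's/^ //'
}

expand_disk_opts FW5   # prints "-F -W5"
```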
-disableSsl
Disables SSL for ZooKeeper nodes on secure clusters.

The new ZooKeeper (ZK version 3.5.6) supports SSL encryption for server-to-server communication. When you install a new clean 6.2 secure cluster, SSL between ZooKeeper servers is enabled automatically.

However, when you perform a rolling upgrade, some nodes are upgraded to data-fabric 6.2 (with the new ZooKeeper server), while other nodes still run data-fabric 6.1, where ZooKeeper is at version 3.4.11 and cannot use SSL.

To get this hybrid cluster to work, you must disable SSL by using this option. Enable SSL for ZooKeeper only AFTER you upgrade all nodes to data-fabric 6.2.

You can use this option even when refreshing roles. For example: configure.sh -R -disableSsl.

Running configure.sh without this option enables SSL. To turn on SSL:
  1. Shutdown the cluster.
  2. On every ZooKeeper node, run configure.sh -R (without this disableSsl parameter).
  3. Start the cluster.
The sslQuorum parameter in zoo.cfg controls whether the ZooKeeper nodes can use SSL for communication.

To verify that ZooKeeper nodes are communicating over SSL, check the ZooKeeper log for messages such as SSL handshake complete with … and/or Accepted TLS connection from....
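A quick way to perform the log check described above is a grep for those messages. The helper below is a sketch; the log file location varies by installation and ZooKeeper version, so the path shown in the comment is an assumption:

```shell
#!/bin/sh
# Sketch: check a ZooKeeper log for the SSL-handshake messages noted above.
check_zk_ssl() {   # $1 = path to the ZooKeeper log file
    if grep -Eq 'SSL handshake complete|Accepted TLS connection' "$1" 2>/dev/null; then
        echo "ZooKeeper is communicating over SSL"
    else
        echo "No SSL handshake messages found"
    fi
}

# Example (assumed log path; adjust for your installation):
# check_zk_ssl /opt/mapr/zookeeper/zookeeper-3.5.6/logs/zookeeper.log
```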

-dp
Specifies the password for logging into the MySQL database used for storing data-fabric Metrics data.
-ds
Specifies the name of the database schema to use for the MySQL database used for storing data-fabric Metrics data. The default schema name is metrics.
-du
Specifies the username for logging into the MySQL database used for storing data-fabric Metrics data.
-EC
Specifies a host or hosts that contain the Hive Metastore. Use this parameter and the ‑hiveMetastoreHost argument to configure an ecosystem component, such as Drill, to communicate with the Hive Metastore. Use the following format to specify a list of hosts:
hostname[:port_no][,hostname[:port_no]...]

See the -EC parameter example later on this page. If you do not specify a host port number, Drill uses the default Hive Metastore port number (9083) for every host.
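The default-port behavior described above can be illustrated with a short sketch that fills in port 9083 for any host that lacks an explicit port. This is not Drill's or configure.sh's actual code; the function name is invented:

```shell
#!/bin/sh
# Sketch: apply a default port to each host in a comma-separated list
# that has no explicit port (9083 is the default Hive Metastore port).
apply_default_port() {   # $1 = host list, $2 = default port
    echo "$1" | tr ',' '\n' \
        | sed "s/^\([^:]*\)\$/\1:$2/" \
        | paste -s -d, -
}

apply_default_port "nodeA,nodeB:9084" 9083   # prints "nodeA:9083,nodeB:9084"
```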

-EP
Specifies an option that is passed directly to an ecosystem configure.sh script. These commands follow the form ‑EP<ecosystem component name> <option>. In general, ‑EP options are not documented, and should be used only if the documentation specifically instructs you to use them.
In data-fabric 6.0 and later, some ecosystem components have their own configure.sh scripts. The server configure.sh script, or a user, can pass options directly to the ecosystem component by using the ‑EP syntax. For example, in the following command:
/opt/mapr/server/configure.sh -R -EPkibana '-kibanaPort 5610'

-EPkibana '-kibanaPort 5610' changes the default port for Kibana to 5610.

As ecosystem components are updated more frequently than Data Fabric Core (which contains the server configure.sh script), implementing some configure.sh functions through an ecosystem configure.sh script can accelerate the introduction of new features.

-ES
Specifies a comma-separated list of host names or IP addresses that identify the Elasticsearch nodes. The Elasticsearch nodes can either be part of the current data-fabric cluster, or part of a different data-fabric cluster. Do not use this option when you configure a node for the first time. Use this option along with the -R parameter.
The list is in the following format:
hostname/IPaddress[:port_no][,hostname/IPaddress[:port_no]...]
Note: The default Elasticsearch port is 9200. If you want to use a different port, specify the port number when you list the Elasticsearch nodes.
-ESDB
Specifies a non-default location for writing index data on Elasticsearch nodes. To configure an index location, you only need to include this parameter on Elasticsearch nodes.

Elasticsearch requires a lot of disk space; therefore, a separate filesystem for the index is recommended. Do not store index data under the / or /var file systems.

For more information, see Log Aggregation and Storage.
-F
Specifies a path to a text file that lists the disks and partitions to use with the data-fabric filesystem. By default, the configure.sh script automatically starts cluster services after the configuration finishes successfully. If you do not want cluster services to be restarted, include the -no-autostart option along with the -F option.
-f
Specifies that the node should be configured without performing the system prerequisite check.
-forceSecurityDefaults
Instructs configure.sh to undo any custom security settings for a cluster, and reconfigure security to the default data-fabric values for -unsecure or -secure. You must specify either the -secure or the -unsecure option. Using the -forceSecurityDefaults option removes the /opt/mapr/conf/.customSecure file. Use the following syntax:
/opt/mapr/server/configure.sh -forceSecurityDefaults [ -unsecure | -secure ] -C <CLDB_node> -Z <ZK_node>

For more information, see Customizing Security in HPE Ezmeral Data Fabric.

Important: It is possible that the -forceSecurityDefaults operation might not undo all custom security settings since configure.sh cannot know all of the custom settings that were implemented. Therefore, you might have to edit some configuration files and settings to restore the cluster to full functionality.
-G
The group ID to use when creating $MAPR_USER with the --create-user or -a option; corresponds to the -g or --gid option of the useradd command in Linux.
-g
The group name under which data-fabric services run.
-genkeys
Generates needed keys and certificates for the initial CLDB node in a secure cluster. If specified with the -dare option, the -genkeys option generates a master key at /opt/mapr/conf/dare.master.key on the first CLDB node. Without the master key, you cannot start the cluster, nor can you access the data.
-H
Specifies the HTTPS port number for connecting to the CLDB node. The default port is 7443.
-HS
Specifies the IP address or hostname of the node in the cluster that performs the HistoryServer role. This parameter is required only when a node in the cluster performs the HistoryServer role. In data-fabric 5.1 and later, this parameter is expanded to support the Mesos DNS-style name format for Job History: <myriad-fwk-name>.mesos. For example, if the -MF parameter is myriadA, the name is jobhistory.myriadA.mesos. Myriad is not supported in data-fabric 6.2.0 and later.
--isvm
Specifies the virtual machine setup. Required when configure.sh is run on a cluster node that is a virtual machine. This option configures the script to use less memory.
-J
Specifies the JMX port for the CLDB. Default: 7220
-JMXEnable
Globally enables JMX support for services on the node. JMX is enabled by default.
-JMXDisable
Globally disables JMX support for services on the node.
-JMXLocalBindingEnable
Enables local binding for JMX connections.
-JMXLocalBindingDisable
Disables local binding for JMX connections.
-JMXLocalHostEnable
Enables the local-host TCP port for JMX. This setting is mutually exclusive with JMXRemoteHostEnable.
-JMXLocalHostDisable
Disables the local-host TCP port for JMX.
-JMXRemoteHostEnable
Enables the remote TCP port for JMX. This setting is mutually exclusive with JMXLocalHostEnable.
-JMXRemoteHostDisable
Disables the remote TCP port for JMX.
-K | -kerberosEnable
Indicates that Kerberos security has been enabled. Kerberos security is disabled by default.
-L
Specifies a log file. If not specified, configure.sh logs errors to /opt/mapr/logs/configure.log.
-label
Default Value: default
Possible Values: Any registered label
Description: The label to use for the storage pool. See Using Storage Labels for more information on labels.
The label should contain only the following characters:
A-Z a-z 0-9 _ - .
Attention: This option is meant to be run ONLY on CLDB nodes. However, if you intend to use this option on data nodes, ensure that you first register the label on the data node before using this option.
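A label can be checked against the allowed character set above before you pass it to configure.sh. The helper below is an illustrative sketch, not part of the product:

```shell
#!/bin/sh
# Sketch: validate a storage-pool label against the allowed characters
# (A-Z, a-z, 0-9, underscore, hyphen, and period).
valid_label() {
    echo "$1" | grep -Eq '^[A-Za-z0-9_.-]+$' && echo valid || echo invalid
}

valid_label "ssd_pool-1"   # prints "valid"
valid_label "ssd pool"     # prints "invalid"
```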
--logHTTPFS
Specifies the hostname to enable centralized logging using fluentd.
-MCL
Specifies the top-level directory where all staging data and shuffle data are written for a specific Myriad framework. Used when multiple clusters implement Myriad. Myriad is not supported in data-fabric 6.2.0 and later.
-MP
Specifies the name of the Myriad framework that is displayed in the Mesos UI. Myriad is not supported in data-fabric 6.2.0 and later.
-MHA
Enables Myriad high availability. Myriad is not supported in data-fabric 6.2.0 and later.
-M7
Deprecated as of data-fabric 4.0.1.
-maprpam
When specified, the configure.sh script installs the data-fabric version of Pluggable Authentication Modules (PAM). This option is ignored if -S is not set.
-N

Specifies the cluster name. If you do not specify a name, configure.sh applies a default name (my.cluster.com) to the cluster. Whenever you run configure.sh, be aware of the existing cluster name or names in mapr-clusters.conf and specify the -N parameter accordingly. If you specify a name that does not exist, a new line is created in mapr-clusters.conf and is treated as a configuration for a separate cluster.

Subsequent runs of configure.sh without the -N parameter operate on this default cluster. If you specify a name when you first run configure.sh, you can modify the CLDB and ZooKeeper settings corresponding to the named cluster by specifying the same name and running configure.sh again. Whenever you need to re-run configure.sh on a given cluster (to add or rename nodes, for example), be sure to specify the same cluster name that you used when you ran configure.sh for the first time.
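Because the first field of each line in mapr-clusters.conf is the cluster name, you can list the names already configured on a node before choosing a -N value. The helper below is a sketch; on a configured node you would point it at /opt/mapr/conf/mapr-clusters.conf:

```shell
#!/bin/sh
# Sketch: print the cluster names (first field of each line) from a
# mapr-clusters.conf file so that -N can be chosen to match.
list_cluster_names() {   # $1 = path to mapr-clusters.conf
    awk '{print $1}' "$1"
}

# On a configured node:
# list_cluster_names /opt/mapr/conf/mapr-clusters.conf
```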

-no-autostart
Specifies that the script should not start Zookeeper or Warden when you run configure.sh.
-no-auto-permission-update
Pass this option to prevent MapR from silently altering permissions in /etc/shadow.
-nocerts
When specified, the configure.sh script does not generate SSL certificates even when the -genkeys option is specified.
-noDB
Specifies that HPE Ezmeral Data Fabric Database is not in use.
-noRecalcMem
Skips recalculating memory settings when refreshing roles. Can be used only with the -R option.
-OT
Specifies a comma-separated list of host names or IP addresses that identify the OpenTSDB nodes. The OpenTSDB nodes can be part of the current data-fabric cluster or part of a different data-fabric cluster. Do not use this option when you configure a node for the first time. Use this option along with the -R parameter. The Warden service must be running when you use configure.sh -R -OT.
Use the following format to list the hostnames:
hostname/IP address[:port_no][,hostname/IP address[:port_no]...]
Note: The default OpenTSDB port is 4242. If you want to use a different port, specify the port number when you list the OpenTSDB nodes.
-on-prompt-cont
Specify:
  • y to automatically respond Yes to all prompts.
  • n to automatically respond No to all prompts.
-P
Specifies the Kerberos instance that is used to form a CLDB Kerberos principal in the form of mapr/<instance-name>@<realm-name>. Enclose this value in quotes ("). This value is ignored if Kerberos security is not enabled.
-QS
Use the -QS option to configure the OJAI Distributed Query Service. See Configure the OJAI Distributed Query Service.
-RM

In data-fabric 5.1, this parameter is expanded to support the Mesos DNS-style hostname for Myriad configuration. The Mesos-style hostname is <application name>.marathon.mesos. For example, when you start the ResourceManager from Marathon with the application name rm, the hostname is rm.marathon.mesos.

In data-fabric 4.0.2, this parameter is not required unless you want to configure manual or automatic failover; zero configuration failover is enabled by default. In data-fabric 4.0.1, this parameter specifies the nodes in the cluster with the ResourceManager role.

List the nodes in the following format: hostname[,hostname]...]

For more information, see ResourceManager High Availability. Myriad is not supported in data-fabric 6.2.0 and later.

-S | -secure
Specifies that this cluster is a secure cluster, and configures security on the platform and on all ecosystem components that support security. Default: unsecure.
-syschk
Configures the system checks to be enabled or disabled. Value: Y/N.
-TL
Specifies the single node on which the timeline server is installed for the Hive-on-Tez user interface. When you install Tez manually, you must also install the timeline server and run configure.sh -TL <timeline_server_node> on all nodes to indicate where the timeline server resides.
-U
The user ID to use when creating $MAPR_USER with either the --create-user or -a option; corresponds to the -u or --uid option of the useradd command in Linux.
-u
The user name under which data-fabric services run.
-unsecure
Specifies that this cluster is not secure. Default: unsecure.
-v
In addition to logging information, also prints to stdout.

HSM Parameters - For more information, see Setting Up the External KMIP Keystore

-hsm

Performs HSM configuration. This option always runs the mrhsm init command to initialize the HSM if it is not already initialized. The -hsmlabel option is required the first time you specify the -hsm option.

When used with the -genkeys option, -hsm invokes the mrhsm enable command to generate the CLDB keys and, if the -dare option is specified, the DARE keys.

Otherwise, -hsm configures the settings specified by the -hsmip, -hsmport, -hsmcacert, -hsmclientcert, -hsmclientkey and -hsmkmipversion options, but does not enable the HSM feature or generate any keys.

-hsmip <ip-address>

The comma-separated list of host names or IP addresses of the external HSM. This parameter is required only when no IP addresses have been configured, or when you need to modify the IP addresses of the external HSM.

-hsmport <port>
The KMIP port of the external HSM.

This parameter is optional. If omitted, this defaults to the standard KMIP port of 5696.

-hsmcacert </path/to/cert>
The full path name of the file containing the HSM CA certificate downloaded from the HSM.

This parameter is required only when no CA certificate has been configured, or when you need to modify the CA certificate.

-hsmclientcert </path/to/cert>
The full path name of the file containing the KMIP-enabled client certificate.

This parameter is required only when no client certificate has been configured, or when you need to modify the client certificate.

-hsmclientkey </path/to/key>
The full path name of the file containing the KMIP-enabled client key.

This parameter is required only when no client key has been configured, or when you need to modify the client key.

-hsmlabel <label>

The KMIP token label. This is an ASCII string of 1 to 32 characters that describes the KMIP token; for example, Utimaco ESKM.

This parameter is only needed when initializing the KMIP token for the first time. It is ignored for subsequent invocations.

-hsmsopin <so-pin>

PIN for the Security Officer (SO). The PIN must be between 4 and 255 characters, inclusive. The SO PIN is set in the KMIP token during the initial invocation.

In subsequent invocations, the SO PIN entered into this utility must match the configured SO PIN. If this argument is not specified, you are prompted to enter it.

-hsmkmipversion <version>
The KMIP version number to use for all communication with the external KMIP-enabled key store. Supported values are 1.0, 1.1, 1.2, 1.3 and 1.4. The default value is 1.1.

When you use the HSM parameters, the HSM should be up and running at the end of the configure.sh script. Use the mrhsm info command to check the HSM status.

Protection of Java key stores is NOT supported in the HSM for data-fabric 6.2. In later releases, configure.sh will generate PKCS#12 key stores instead of JCEKS key stores.

Examples

  1. Add a node (not CLDB or ZooKeeper) to a cluster that is running the CLDB and ZooKeeper on three nodes:

    On the new node, run the following command:

    /opt/mapr/server/configure.sh -C nodeA,nodeB,nodeC -Z nodeA,nodeB,nodeC
  2. Configure a client to work with cluster my.cluster.com, which has one CLDB at nodeA:

    On a Linux client, run the following command:

    /opt/mapr/server/configure.sh -N my.cluster.com -c -C nodeA

    On a Windows 7 client, run the following command:

    C:\opt\mapr\server\configure.bat -N my.cluster.com -c -C nodeA
  3. Add a second cluster to the configuration:

    On a node in the second cluster your.cluster.com, run the following command:

    /opt/mapr/server/configure.sh -C nodeZ -N your.cluster.com -Z <zkNodeA,zkNodeB,zkNodeC>
  4. Add CLDB servers with multiple IP addresses to a cluster:

    In this example, the cluster my.cluster.com has CLDB servers at nodeA, nodeB, nodeC, and nodeD. The CLDB servers nodeB and nodeD have two NICs each at eth0 and eth1.

    On a node in the cluster my.cluster.com, run the following command:

    /opt/mapr/server/configure.sh -N my.cluster.com -C nodeAeth0,nodeCeth0 -M \
              nodeBeth0,nodeBeth1 -M nodeDeth0,nodeDeth1 -Z zknodeA
  5. Start a cluster in secure mode using configure.sh

    In this example, the cluster my.cluster.com has two CLDB servers at nodeA and nodeB. The ZooKeeper node for this cluster is at nodeC. To start the cluster in secure mode, run the following command on nodeA:

    /opt/mapr/server/configure.sh -N my.cluster.com -C nodeA,nodeB -Z nodeC -secure \
              -genkeys -F <disklist file>

    This command creates the ssl_truststore, ssl_keystore, maprserverticket, and cldb.key files. Copy these files from nodeA's /opt/mapr/conf directory to nodeB's /opt/mapr/conf directory.

    On nodeB, change the permissions on the ssl_keystore, maprserverticket, and cldb.key files to 600 (the mapr user) by using the following command:

    chmod 600 ssl_keystore maprserverticket cldb.key

    On the ssl_truststore file, change the permissions to 644 (world readable):

    chmod 644 ssl_truststore

    On nodeB, run the following command:

    /opt/mapr/server/configure.sh -N my.cluster.com -C nodeA,nodeB -Z nodeC -secure -F \
              <disklist file>
  6. Configure the Drill storage plugin to communicate with a Hive Metastore:

    This example uses the -EC parameter to configure the Drill storage plugin to communicate with a Hive Metastore located on nodeA:

    /opt/mapr/server/configure.sh -EC '-hiveMetastoreHost nodeA'
  7. Configure HSM:

    A sample session transcript using the /opt/mapr/server/configure.sh script with DARE enabled is as follows. Some lines of the transcript relate to the common HSM features, and others to the DARE-specific features:

    # /opt/mapr/server/configure.sh  -secure -genkeys -N test96.cluster.com -C perfnode96.lab:7222 -Z perfnode96.lab:5181 -F disks.txt -dare -hsm -hsmip 10.10.30.129 -hsmlabel "SafeNet KeySecure" -hsmsopin 12345678 -hsmclientcert /root/safenet-keysecure/client.pem -hsmcacert /root/safenet-keysecure/CA.pem -hsmclientkey /root/safenet-keysecure/key.pem
    create /opt/mapr/conf/conf.old
    CLDB node list: perfnode96.lab:7222
    Zookeeper node list: perfnode96.lab:5181
    External Zookeeper node list: 
    Node setup configuration:  cldb fileserver hadoop-util zookeeper
    Log can be found at:  /opt/mapr/logs/configure.log
    Initializing HSM with label SafeNet KeySecure
    Generated random user PIN B$V5g%$2#%8Kc6SL
    Obtained cluster name test96.cluster.com from mapr-clusters.conf
    Enabling MapR HSM on cluster test96.cluster.com
    Successfully generated Core KEK, UUID CF9FE63E85EF233B583972FB6265DB33067E8DBBB300297FF8F562DFCF7EA904
    Successfully generated Common KEK, UUID 32A903E6D0DF67FDBCD953A33FC2547F50D35C18666E2A0A0B5CF749FBF84D6A
    Successfully set encrypted CLDB key in KMIP configuration
    Successfully set encrypted DARE key in KMIP configuration
    
    ##############################################################################
    # NOTE: The DARE master key for data at rest encryption is protected by the  #
    # HSM. All keys in the HSM, including the DARE master key, should be safely  #
    # backed up. Without the DARE master key, cluster cannot be started and data #
    # cannot be accessed.                                                        #
    ##############################################################################
    
    Creating 100 year self signed certificate with subjectDN='CN=*.lab'
    Configuring hadoop-util
    /dev/sdb added.
    /dev/sdc added.
    /dev/sdd added.
    Zookeeper found on this node, and it is not running. Starting Zookeeper
    Warden is not running. Starting mapr-warden. Warden will then start all other configured services on this node
    ... Starting cldb
    ... Starting fileserver
    ... Starting hadoop-util
    To further manage the system, use "maprcli", or connect browser to https://{webserver host name}:8443/
    To stop and start this node, use "systemctl start/stop mapr-warden "
    No need to set label returning from SetDiskLabel 

Troubleshooting configure.sh

When you run configure.sh with the -OT option for the first time, you might encounter an error message such as directory /opt/mapr/conf/proxy is not owned by root. You can safely ignore this transient error message. If you repeatedly see this error during client operations, re-run configure.sh with the -R option.