Step 10: Install Log Monitoring

Installing the logging components of monitoring is optional. The logging components enable the collection, storage, and visualization of core logs, system logs, and ecosystem component logs. Monitoring components are available as part of the Ecosystem Pack (EEP) that you selected for the cluster.

About this task

Complete the steps to install the logging components as the root user or using sudo. Installing logging components on a client node or edge node is not supported.

Procedure

  1. For log monitoring, install the following packages:
    • fluentd: Install the mapr-fluentd package on each node in the cluster.
    • Elasticsearch: Install the mapr-elasticsearch package on at least three nodes in the cluster to allow failover of log storage if one Elasticsearch node is unavailable.
    • Kibana: Install the mapr-kibana package on at least one node in the cluster.
    For example, on a three-node cluster, you can run the following commands to install log packages:
    • For CentOS/RedHat:
      • Node A: yum install mapr-fluentd mapr-elasticsearch
      • Node B: yum install mapr-fluentd mapr-elasticsearch
      • Node C: yum install mapr-fluentd mapr-elasticsearch mapr-kibana
    • For Ubuntu:
      • Node A: apt-get install mapr-fluentd mapr-elasticsearch
      • Node B: apt-get install mapr-fluentd mapr-elasticsearch
      • Node C: apt-get install mapr-fluentd mapr-elasticsearch mapr-kibana
    • For SLES:
      • Node A: zypper install mapr-fluentd mapr-elasticsearch
      • Node B: zypper install mapr-fluentd mapr-elasticsearch
      • Node C: zypper install mapr-fluentd mapr-elasticsearch mapr-kibana
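To confirm that the packages landed where you expect, a quick query of the package manager can help. This is an illustrative check, not part of the official procedure; it assumes an RPM-based node (use dpkg -l on Ubuntu or zypper se -i on SLES):

```shell
# List any monitoring packages already installed on this node
# (RPM-based systems only). Prints a message if none are found.
rpm -qa 2>/dev/null | grep -E 'mapr-(fluentd|elasticsearch|kibana)' \
  || echo "no monitoring packages found on this node"
```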
  2. For secure HPE Ezmeral Data Fabric clusters, run maprlogin print to verify that you have a user ticket for both the HPE Ezmeral Data Fabric user and the root user. These user tickets are required for a successful installation. If you need to generate an HPE Ezmeral Data Fabric user ticket, run maprlogin password. For more information, see Generating a HPE Ezmeral Data Fabric User Ticket.
  3. For secure data-fabric clusters, verify that the following keystore, truststore, and pem files are present on all nodes. If the files are not present, copy them from the security master node to all other nodes. If the /opt/mapr/conf/ca directory does not exist, create it:
    • /opt/mapr/conf/ssl_userkeystore
    • /opt/mapr/conf/ssl_userkeystore.csr
    • /opt/mapr/conf/ssl_userkeystore.p12
    • /opt/mapr/conf/ssl_userkeystore.pem
    • /opt/mapr/conf/ssl_userkeystore-signed.pem
    • /opt/mapr/conf/ssl_usertruststore
    • /opt/mapr/conf/ssl_usertruststore.p12
    • /opt/mapr/conf/ssl_usertruststore.pem
    • /opt/mapr/conf/ca/root-ca.pem
    • /opt/mapr/conf/ca/chain-ca.pem
    • /opt/mapr/conf/ca/signing-ca.pem
    For more information about these files, see Understanding the Key Store and Trust Store Files.
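As a convenience, the presence check above can be scripted. The following sketch is a hypothetical helper, not part of the official procedure; the check_security_files name is invented, and the file list mirrors the one above:

```shell
# Hypothetical helper: report any of the required security files that
# are missing under the given configuration directory.
check_security_files() {
  conf="${1:-/opt/mapr/conf}"
  missing=0
  for f in ssl_userkeystore ssl_userkeystore.csr ssl_userkeystore.p12 \
           ssl_userkeystore.pem ssl_userkeystore-signed.pem \
           ssl_usertruststore ssl_usertruststore.p12 ssl_usertruststore.pem \
           ca/root-ca.pem ca/chain-ca.pem ca/signing-ca.pem; do
    [ -f "$conf/$f" ] || { echo "MISSING: $conf/$f"; missing=1; }
  done
  if [ "$missing" -eq 0 ]; then
    echo "All security files present in $conf"
  fi
  return "$missing"
}

# Report missing files on this node; copy any files it lists from the
# security master node.
check_security_files /opt/mapr/conf || echo "Copy the missing files from the security master node."
```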
  4. For secure HPE Ezmeral Data Fabric clusters, configure a password for the Elasticsearch admin user so that end users can authenticate when they use Kibana to search the Elasticsearch log index. You must provide this password when you run configure.sh. If you do not specify a password, it defaults to admin, the default password before MEP 5.0.0. Use one of the following methods to pass the password to Elasticsearch and Kibana:
    • On each node where Fluentd, Elasticsearch, or Kibana is installed, export the password as an environment variable before calling configure.sh:
      export ES_ADMIN_PASSWORD="<newElasticsearchPassword>"

      Then run configure.sh as you normally would run it (go to step 5).

    • Add the following options to the configure.sh command in step 5. This method explicitly passes the password on the configure.sh command line:
      -EPelasticsearch '-password <newElasticsearchPassword>' -EPkibana '-password <newElasticsearchPassword>' -EPfluentd '-password <newElasticsearchPassword>'
      Example
      /opt/mapr/server/configure.sh -R -v -ES mfs74.qa.lab -ESDB /opt/mapr/es_db -OT mfs74.qa.lab -C mfs74.qa.lab -Z mfs74.qa.lab -EPelasticsearch '-password helloMapR' -EPkibana '-password helloMapR' -EPfluentd '-password helloMapR'
    • Add the following options to the configure.sh command in step 5. This method explicitly passes the password on the configure.sh command line by specifying a file:
      -EPelasticsearch '-password <name of local file containing new password>'  -EPkibana '-password <name of local file containing new password>' -EPfluentd '-password <name of local file containing new password>'
      Example
      /opt/mapr/server/configure.sh -R -v -ES mfs74.qa.lab -ESDB /opt/mapr/es_db -OT mfs74.qa.lab -C mfs74.qa.lab -Z mfs74.qa.lab -EPelasticsearch '-password /tmp/es_password' -EPkibana '-password /tmp/es_password' -EPfluentd '-password /tmp/es_password'
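If you use the file-based method, it is sensible to create the password file with owner-only permissions so the password never appears in shell history searches by other users or in world-readable files. This is an illustrative sketch; the /tmp/es_password path and helloMapR password simply match the example above:

```shell
# Create the password file readable only by its owner.
umask 077                           # new files get owner-only permissions
printf '%s' 'helloMapR' > /tmp/es_password
chmod 600 /tmp/es_password          # enforce owner read/write only
ls -l /tmp/es_password              # verify permissions: -rw-------
```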
  5. Run configure.sh on each node in the HPE Ezmeral Data Fabric cluster with the -R and -ES parameters, adding parameters to configure the Fluentd/Elasticsearch/Kibana password as needed. Optionally, you can include the -ESDB parameter to specify the location for writing index data. A Warden service must be running when you use configure.sh -R.
    /opt/mapr/server/configure.sh -R -ES <comma-separated list of Elasticsearch nodes> [-ESDB <filepath>]
    The parameters are as follows:
    • -ES: Specifies a comma-separated list of host names or IP addresses that identify the Elasticsearch nodes. The Elasticsearch nodes can be part of the current HPE Ezmeral Data Fabric cluster or part of a different HPE Ezmeral Data Fabric cluster. The list uses the following format:
      hostname/IPaddress[:port_no][,hostname/IPaddress[:port_no]...]
      NOTE The default Elasticsearch port is 9200. If you want to use a different port, specify the port number when you list the Elasticsearch nodes.
    • -ESDB: Specifies a non-default location for writing index data on Elasticsearch nodes. Include this parameter only on Elasticsearch nodes. By default, the Elasticsearch index is written to /opt/mapr/elasticsearch/elasticsearch-<version>/var/lib/MaprMonitoring/.
      NOTE Elasticsearch requires substantial disk space, so a separate filesystem for the index is strongly recommended. Do not store index data under the / or /var file systems.

    Upgrading to a new version of monitoring removes the /opt/mapr/elasticsearch/elasticsearch-<version>/var/lib/MaprMonitoring/ directory. If you want to retain Elasticsearch index data through an upgrade, you must use the -ESDB parameter to specify a separate filesystem or back up the default directory before upgrading. The Pre-Upgrade Steps for Monitoring include this step.

    • -OT: Specifies a comma-separated list of host names or IP addresses that identify the OpenTSDB nodes. The OpenTSDB nodes can be part of the current HPE Ezmeral Data Fabric cluster or part of a different HPE Ezmeral Data Fabric cluster. Do not use this option when you configure a node for the first time; use it along with the -R parameter. A Warden service must be running when you use configure.sh -R -OT.
    The hostname list should use the following format:
    hostname/IP address[:port_no][,hostname/IP address[:port_no]...]
    NOTE The default OpenTSDB port is 4242. If you want to use a different port, specify the port number when you list the OpenTSDB nodes.
    • -R: After initial node configuration, specifies that configure.sh should use the previously configured ZooKeeper and CLDB nodes.
    For example, to configure monitoring components you can run one of the following commands:
    • In this example, a location is specified for the Elasticsearch index directory, and default ports are used for Elasticsearch nodes:
      /opt/mapr/server/configure.sh -R -ES NodeA,NodeB,NodeC -ESDB /opt/mapr/myindexlocation
    • In this example, non-default ports are specified for Elasticsearch, and the default location is used for the Elasticsearch index directory:
      /opt/mapr/server/configure.sh -R -ES NodeA:9595,NodeB:9595,NodeC:9595
    If errors are displayed after you run configure.sh -R, see Troubleshoot Monitoring Installation Errors.
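After configure.sh completes without errors, a quick way to confirm that the Elasticsearch nodes are answering is to probe their REST port. This is a hypothetical sanity check, not part of the official procedure; NodeA through NodeC and port 9200 match the examples above, and -k skips TLS certificate verification for brevity only:

```shell
# Probe each Elasticsearch node's REST port and report the HTTP status.
# A code of 000 means the node did not respond.
for node in NodeA NodeB NodeC; do
  code=$(curl -sk -m 5 -o /dev/null -w '%{http_code}' "https://$node:9200" || true)
  echo "$node: HTTP ${code:-000}"
done
```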
  6. If you installed Kibana, perform the following steps:
    1. Use one of the following methods to load the Kibana URL:
      • From the Control System, select the Kibana view. After you select the Kibana view, you may also need to select the Pop-out page into a tab option.
      • From a web browser, launch the following URL: https://<IPaddressOfKibanaNode>:5601
    2. When the Kibana page loads, it displays a Configure an index pattern screen. Provide the following values:
      NOTE The Index contains time-based events option is selected by default and should remain selected.
      • Index name or pattern: mapr_monitoring-*
      • Time-field: @timestamp
    3. Click Create.
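Once the index pattern is created, you can also confirm from the command line that Kibana is reachable on its port. This is an illustrative check, not part of the official procedure; KIBANA_NODE is a placeholder you set to your Kibana node's address, and -k skips TLS certificate verification for brevity only:

```shell
# Probe the Kibana port (5601, as above) and report the HTTP status.
# A code of 000 means Kibana did not respond.
KIBANA_NODE="${KIBANA_NODE:-localhost}"     # set to your Kibana node
code=$(curl -sk -m 5 -o /dev/null -w '%{http_code}' "https://$KIBANA_NODE:5601" || true)
echo "Kibana at $KIBANA_NODE: HTTP ${code:-000}"
```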