Step 1: Restart and Check Cluster Services

After upgrading core with either the manual offline or manual rolling upgrade method (that is, not with the Installer) and upgrading your ecosystem components, configure and restart the cluster and services.

About this task

NOTE This task is applicable only to manual offline and rolling upgrade methods.
IMPORTANT Before restarting cluster services, upgrade any existing ecosystem packages to versions compatible with the upgraded data-fabric release. For more information, see EEP Components and OS Support.

This procedure configures and restarts the cluster and services, including ecosystem components, remounts the NFS share, and checks that all packages have been upgraded on all nodes.

In this procedure, you configure each node in the cluster without changing the list of services that will run on the node. If you want to change the list of services, do so after completing the upgrade. After you have upgraded packages on all nodes, perform this procedure on all nodes to restart the cluster. Upon completion of this procedure, core services are running on all nodes.

After finishing this procedure, run non-trivial health checks, such as performance benchmarks relevant to the cluster’s typical workload or a suite of common jobs. It is a good idea to run these types of checks when the cluster is idle.

Procedure

  1. Merge any custom edits that you made to your cluster environment variables into the new /opt/mapr/conf/env_override.sh file before restarting the cluster. This is because the upgrade process replaces your original /opt/mapr/conf/env.sh file with a new copy of env.sh that is appropriate for the data-fabric release to which you are upgrading. The new env.sh does not include any custom edits you might have made to the original env.sh. However, a backup of your original env.sh file is saved as /opt/mapr/conf/env.sh<timestamp>. Before restarting the cluster, you must add any custom entries from /opt/mapr/conf/env.sh<timestamp> into /opt/mapr/conf/env_override.sh, and copy the updated env_override.sh to all other nodes in the cluster. See About env_override.sh.
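    For example, assuming the standard diff and scp utilities are available, you can compare the backup with the new file and then distribute the updated env_override.sh to another node (replace <timestamp> and <other_node> with your values):
    # diff /opt/mapr/conf/env.sh<timestamp> /opt/mapr/conf/env.sh
    # scp /opt/mapr/conf/env_override.sh <other_node>:/opt/mapr/conf/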
  2. On each node in the cluster, remove the mapruserticket file. For manual upgrades, the file must be removed to ensure that impersonation works properly. The mapruserticket file is re-created automatically when you restart Warden. For more information, see Installation and Upgrade Notes (Release 6.2.0).
    # rm /opt/mapr/conf/mapruserticket
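    If a parallel shell such as clush happens to be installed on the cluster (not assumed by this procedure), you can remove the file from all nodes at once; for example:
    # clush -a 'rm -f /opt/mapr/conf/mapruserticket'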
  3. If you are upgrading from core 6.0.x to 6.2.0, create the ssl_truststore.pem and ssl_keystore.pem files. These files are used by the Data Access Gateway, Grafana, and Hue components. This step is necessary only for manual upgrades because upgrades performed with the Installer distribute the files automatically. Use these commands:
    1. Use the manageSSLKeys.sh utility to generate the files:
      /opt/mapr/server/manageSSLKeys.sh convert -N my.cluster.com /opt/mapr/conf/ssl_truststore /opt/mapr/conf/ssl_truststore.pem
      
      /opt/mapr/server/manageSSLKeys.sh convert -N my.cluster.com /opt/mapr/conf/ssl_keystore /opt/mapr/conf/ssl_keystore.pem
    2. Copy the generated ssl_keystore.pem and ssl_truststore.pem files to the /opt/mapr/conf/ directory on all the nodes in the cluster.
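      For example, you might copy both files to another node with scp (replace <other_node> with each node's hostname):
      scp /opt/mapr/conf/ssl_keystore.pem /opt/mapr/conf/ssl_truststore.pem <other_node>:/opt/mapr/conf/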
  4. If you are upgrading to release 6.2.0 or later, use the following command to create the new user keystores and user truststores for log monitoring. Run this command once on any node, and then copy the resulting files to all other nodes in the cluster:
    /opt/mapr/server/manageSSLKeys.sh createusercerts -ug <cluster_admin_ID>:<cluster_admin_group> -N <cluster_name>
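    For example, assuming the generated user keystore and truststore files are written to /opt/mapr/conf/ with names that begin with ssl_user (verify the actual file names on your system), you could copy them to another node with:
    scp /opt/mapr/conf/ssl_user* <other_node>:/opt/mapr/conf/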
  5. On each node in the cluster, run configure.sh with the -R option:
    # /opt/mapr/server/configure.sh -R -HS <hostname>
  6. If ZooKeeper is installed on the node, start it:
    # service mapr-zookeeper start
  7. Start Warden:
    # service mapr-warden start
  8. Run a simple health check targeting only the filesystem and MapReduce services. Address any issues or alerts that come up at this point.
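    For example, a minimal filesystem and MapReduce check, assuming the Hadoop MapReduce examples JAR shipped with your installation is present (replace <version> with your Hadoop version):
    # hadoop fs -ls /
    # yarn jar /opt/mapr/hadoop/hadoop-<version>/share/hadoop/mapreduce/hadoop-mapreduce-examples-<version>.jar pi 2 10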
  9. Set the new cluster version in the /opt/mapr/MapRBuildVersion file by running the following command on any node in the cluster:
    # maprcli config save -values {mapr.targetversion:"`cat /opt/mapr/MapRBuildVersion`"}
  10. Verify the new cluster version:
    For example:
    # maprcli config load -keys mapr.targetversion
    mapr.targetversion
    6.2.0.0.20200915234957.GA
  11. Remount the data-fabric NFS share:
    The following example assumes that the cluster is mounted at /mapr:
    # mount -o hard,nolock <hostname>:/mapr /mapr
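    To confirm that the share is mounted, you can list its contents:
    # ls /mapr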
  12. Run commands, as shown in the example, to check that the packages have been upgraded successfully:
    Check the following:
    • All expected nodes show up in a cluster node list, and the expected services are configured on each node.
    • A master CLDB is active, and all nodes return the same result.
    • Only one ZooKeeper service claims to be the ZooKeeper leader, and all other ZooKeepers are followers; a way to confirm this is shown after the example output.
    For example:
    # maprcli node list -columns hostname,csvc
    hostname configuredservice ip
    centos55 nodemanager,cldb,fileserver,hoststats 10.10.82.55
    centos56 nodemanager,cldb,fileserver,hoststats 10.10.82.56
    centos57 fileserver,nodemanager,hoststats,resourcemanager 10.10.82.57
    centos58 fileserver,nodemanager,webserver,nfs,hoststats,resourcemanager 10.10.82.58
    ...more nodes...
                    
    # maprcli node cldbmaster
    cldbmaster
    ServerID: 8851109109619685455 HostName: centos56
                    
    # service mapr-zookeeper status
    Redirecting to /bin/systemctl status mapr-zookeeper.service
    ● mapr-zookeeper.service - MapR Technologies, Inc. zookeeper service
       Loaded: loaded (/etc/systemd/system/mapr-zookeeper.service; enabled; vendor preset: disabled)
       Active: active (running) since Wed 2021-05-26 09:18:54 PDT; 1 months 9 days ago
      Process: 2215 ExecStart=/opt/mapr/initscripts/zookeeper start (code=exited, status=0/SUCCESS)
     Main PID: 2510 (java)
        Tasks: 0 (limit: 410335)
       Memory: 4.5M
       CGroup: /system.slice/mapr-zookeeper.service
               ‣ 2510 /usr/lib/jvm/java-11-openjdk-11.0.9.11-3.el8_3.x86_64/bin/java -Dzookeeper.log.dir=/opt/mapr/zookeeper/zookeeper-3.5.6/logs -Dzookeeper.lo>
    
    May 26 09:18:53 <node> systemd[1]: Starting MapR Technologies, Inc. zookeeper service...
    May 26 09:18:53 <node> su[2459]: (to mapr) root on none
    May 26 09:18:53 <node> su[2459]: pam_unix(su:session): session opened for user mapr by (uid=0)
    May 26 09:18:53 <node> zookeeper[2215]: JMX disabled by user request
    May 26 09:18:53 <node> zookeeper[2215]: Using config: /opt/mapr/zookeeper/zookeeper-3.5.6/conf/zoo.cfg
    May 26 09:18:54 <node> zookeeper[2215]: Starting zookeeper ... STARTED
    May 26 09:18:54 <node> su[2459]: pam_unix(su:session): session closed for user mapr
    May 26 09:18:54 <node> systemd[1]: Started MapR Technologies, Inc. zookeeper service.
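    To confirm which ZooKeeper is the leader, you can run the zkServer.sh script included with the ZooKeeper installation on each ZooKeeper node (replace <version> with your ZooKeeper version); the output reports Mode: leader or Mode: follower:
    # /opt/mapr/zookeeper/zookeeper-<version>/bin/zkServer.sh status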