Removing One or More Nodes

Describes how to decommission a node from service.

Prerequisites

Perform the following prerequisite steps before removing a node using the Control System, the CLI, or the REST API:
  1. Drain the node of data by moving it to the /decommissioned physical topology. All the data on a node in the /decommissioned topology is migrated to other volumes and nodes in the appropriate topologies. (See the first example after these steps.)
  2. Run the following command to check if a given volume is present on the node:
    maprcli dump volumenodes -volumename <volume> -json | grep IP:Port
    For example, consider the volume rocky on a node with the IP address 10.163.167.212. To check whether this volume exists on the node, run:
     maprcli dump volumenodes -volumename rocky -json
    {
            "timestamp":1606879372378,
            "timeofday":"2020-12-01 07:22:52.378 GMT-0800 PM",
            "status":"OK",
            "total":1,
            "data":[
                    {
                            "Servers":{
                                    "IP:Port":"10.163.167.212:5660-192.168.122.1:5660--3-VALID"
                            }
                    }
            ]
    }
    

    The output shows that the volume rocky exists on node 10.163.167.212, which is accessible on port 5660.

    To return only the IP and port, pipe the output through grep as follows:
     maprcli dump volumenodes -volumename rocky -json | grep IP:Port
     "IP:Port":"10.163.167.212:5660-192.168.122.1:5660--3-VALID"
    Run this command for each non-local volume in your cluster to verify that the node being removed does not store any volume data. (A loop that automates this check is sketched after these steps.)
  3. If the node you are removing is a CLDB or ZooKeeper node, install CLDB or ZooKeeper on another node and run configure.sh with the -C and -Z options. (See the configure.sh example after these steps.)

    This ensures that the ZooKeeper quorum is maintained and that an optimal number of CLDB nodes is available for high availability.
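
The following is a minimal sketch of the topology move in step 1. The node list columns shown are typical, and <server ID> is a placeholder; look up the ID of the node you are draining before running node move:

    maprcli node list -columns id,hostname
    maprcli node move -serverids <server ID> -topology /decommissioned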
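
To automate the per-volume check in step 2, a loop along the following lines can be used. This is a sketch: it assumes that maprcli volume list -columns volumename prints a header line followed by one volume name per line, and the node IP is a placeholder for the node being removed:

    NODE_IP=10.163.167.212
    # Skip the header line, then check every volume for data on the node.
    for vol in $(maprcli volume list -columns volumename | tail -n +2); do
      if maprcli dump volumenodes -volumename "$vol" -json | grep -q "$NODE_IP"; then
        echo "volume $vol still has data on $NODE_IP"
      fi
    done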
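
For step 3, after installing the replacement CLDB or ZooKeeper service, run configure.sh along these lines. The hostnames are placeholders; pass the complete, updated lists of CLDB and ZooKeeper nodes for your cluster:

    /opt/mapr/server/configure.sh -C cldb1.example.com,cldb2.example.com,cldb3.example.com \
      -Z zk1.example.com,zk2.example.com,zk3.example.com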

About this task

You can remove one or more nodes using the Control System, the CLI, or the REST API.

Removing Multiple Nodes Using the Control System

About this task

To remove one or more nodes:

Procedure

  1. Log in to the Control System and click Nodes.
    NOTE The Nodes menu is not available on the Kubernetes version of the Control System.
  2. Select the nodes from the list of nodes in the Nodes pane and click Remove Node(s).
    The Remove Node(s) dialog displays.
  3. Verify the list of nodes to remove and click Remove Nodes.

Removing a Node Using the Control System

About this task

To remove a node:

Procedure

  1. Go to the Viewing Node Details page and click Remove Node.
    The Remove Node(s) confirmation dialog displays.
  2. Click Remove Node.

Removing One or More Nodes Using the CLI or REST API

About this task

Use the node remove command to remove one or more server nodes from the cluster. To run this command, you must have full control (fc) or administrator (a) permission. The syntax is:

maprcli node remove -nodes <node names>
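For example, to remove two nodes (the hostnames below are placeholders):

maprcli node remove -nodes node1.example.com,node2.example.com

The REST API mirrors the CLI. The following sketch assumes the default webserver port 8443 and uses -k to skip certificate verification; substitute your own credentials and webserver host:

curl -k -u <user>:<password> 'https://<webserver-host>:8443/rest/node/remove?nodes=node1.example.com,node2.example.com'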
If the following error is generated, you must wait for the state duration of the CLDB master node to reach 15 minutes or more; otherwise, the node remove command fails:
node remove failed for node <node_name>, Error: Resource temporarily unavailable; CLDB just became master, node removed not allowed until sometime
To check the state duration value, use this command:
maprcli dump cldbstate -json
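
To see only the duration field, you can filter the JSON output. This assumes the output includes a stateDuration entry; verify the exact field name against the full -json output on your cluster:

maprcli dump cldbstate -json | grep -i stateduration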

After you issue the node remove command, wait several minutes to ensure that the nodes have been completely removed.

TIP To ensure that a removed node does not rejoin the cluster on reboot, either remove all data-fabric packages from the node, or remove the cluster configuration in the /opt/mapr/conf/mapr-clusters.conf file on the node, as sketched below.
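
A minimal sketch of both options, run on the removed node itself. The yum command assumes a RHEL-based system; package-manager syntax differs on other distributions:

# Option 1: purge all data-fabric packages (RHEL-style example).
yum remove 'mapr-*'

# Option 2: keep the packages but delete the cluster configuration.
rm /opt/mapr/conf/mapr-clusters.conf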