Installing HPE Ezmeral Data Fabric and Kubernetes Software on the Same Nodes

This section describes how to install the configuration files for the HPE Ezmeral Data Fabric for Kubernetes. In this configuration, HPE Ezmeral Data Fabric and Kubernetes software can coexist on the same nodes if certain version requirements are met.

IMPORTANT Some versions of the HPE Ezmeral Data Fabric for Kubernetes do not support installing Data Fabric and Kubernetes software on the same nodes. To ensure that you are using a version that supports this feature, see the MapR Data Fabric for Kubernetes release notes.

Before Installation

Before installing the HPE Ezmeral Data Fabric for Kubernetes, note these preinstallation requirements:
  • This procedure assumes that the Kubernetes cluster is already installed and functioning normally.
  • Ensure that all Kubernetes nodes use the same Linux distribution. For example, all nodes can be CentOS nodes or all nodes can be Ubuntu nodes, but a cluster with a mixture of CentOS and Ubuntu nodes is not supported. A quick way to check this is sketched after this list.
  • This procedure requires stopping Warden and Zookeeper on all nodes in the Data Fabric cluster and then restarting Warden and Zookeeper on all nodes. You cannot perform the steps one node at a time while the rest of the cluster remains online.
  • Do not install the Data Fabric client on a node where the volume plug-in configuration file is already installed. A node in the Kubernetes cluster can have the Data Fabric client, but you must install the client before installing the HPE Ezmeral Data Fabric for Kubernetes on that cluster.
CAUTION Do not try to install the volume plug-in without following the steps below. Doing so can cause Data Fabric libraries to be overwritten.
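
To confirm that all Kubernetes nodes report the same Linux distribution, you can run a quick check such as the following minimal sketch. The node names (k8s-node1 through k8s-node3) and the use of passwordless SSH are assumptions; substitute your own host list or an existing clush or pdsh setup.

    # Print each node's distribution ID and version from /etc/os-release
    for node in k8s-node1 k8s-node2 k8s-node3; do
      ssh "$node" '. /etc/os-release; echo "$(hostname): $ID $VERSION_ID"'
    done

All nodes should report the same ID value (for example, centos or ubuntu) before you proceed.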

Install the Data Fabric 6.0.1 or Later Cluster on the Kubernetes Nodes

Use any of the methods described in Installing with the Installer to install a Data Fabric 6.0.1 or later cluster on the existing Kubernetes nodes.

Install the HPE Ezmeral Data Fabric for Kubernetes

Use these steps to install the HPE Ezmeral Data Fabric for Kubernetes on the Kubernetes cluster:
  1. Stop all running jobs on the Data Fabric cluster.
  2. Stop Warden on all Data Fabric cluster nodes by running the following command on each node:
    service mapr-warden stop
  3. Stop Zookeeper on all Data Fabric Zookeeper nodes by running the following command on each node:
    service mapr-zookeeper stop
  4. Deploy the HPE Ezmeral Data Fabric for Kubernetes components by using steps 1 through 6 of Installing HPE Ezmeral Data Fabric and Kubernetes Software on Separate Nodes.
  5. On each node, configure the MAPR_SUBNETS environment variable so that Data Fabric software does not use the docker0 network interface. See Designating NICs for HPE Ezmeral Data Fabric.

    If MAPR_SUBNETS is not set, the CLDB uses all NICs present on the node. When Docker is installed on a node, the docker0 bridge is created as a virtual NIC for use by the Docker containers. You must configure the MAPR_SUBNETS setting to include the physical NICs that you want the CLDB to use and to exclude the docker0 network interface. In this way, you avoid issues with duplicate or non-routable IP addresses. For more information about docker0, see Docker container networking. A sketch of this setting appears after these steps.

  6. Start Zookeeper on all Data Fabric Zookeeper nodes by running the following command on each node:
    service mapr-zookeeper start
  7. Start Warden on all Data Fabric cluster nodes by running the following command on each node:
    service mapr-warden start
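
The following is a minimal sketch of the MAPR_SUBNETS setting referenced in step 5. The interface name (eth0), the subnet value (10.10.30.0/24), and the file path are examples and assumptions; list the CIDR subnets of the physical NICs on your own nodes, and note that some releases read the setting from /opt/mapr/conf/env_override.sh rather than env.sh. The docker0 bridge typically uses the 172.17.0.0/16 range, which should not appear in the list.

    # Identify the subnet of a physical NIC on this node (example interface: eth0)
    ip -4 addr show eth0

    # In /opt/mapr/conf/env.sh (or env_override.sh), restrict Data Fabric to the
    # physical subnets so that the CLDB ignores the docker0 bridge. Example value only:
    export MAPR_SUBNETS=10.10.30.0/24

Because this environment variable is read when the services start, apply the setting on every node before restarting Zookeeper and Warden in steps 6 and 7.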