Installing Spark Standalone

This topic describes how to use package managers to download and install Spark Standalone from the EEP repository.

Prerequisites

To set up the EEP repository, see Step 10: Install Ecosystem Components Manually.

About this task

Spark is distributed as four separate packages:

mapr-spark
    Install this package on any node where you want to install Spark. This package depends on the mapr-client package.

mapr-spark-master
    Install this package on Spark master nodes. Spark master nodes must be able to communicate with Spark worker nodes over SSH without using passwords. This package depends on the mapr-spark and mapr-core packages.

mapr-spark-historyserver
    Install this optional package on Spark History Server nodes. This package depends on the mapr-spark and mapr-core packages.

mapr-spark-thriftserver
    Install this optional package on Spark Thrift Server nodes. This package is available starting in the EEP 4.0 release. It depends on the mapr-spark and mapr-core packages.

Run the following commands as root or using sudo.

Procedure

  1. Create the /apps/spark directory on the cluster filesystem, and set the correct permissions on the directory.
    hadoop fs -mkdir /apps/spark
    hadoop fs -chmod 777 /apps/spark
    NOTE Beginning with EEP 6.2.0, the configure.sh script creates the /apps/spark directory automatically.
  2. Install Spark using the appropriate commands for your operating system:
    On CentOS / Red Hat
    yum install mapr-spark mapr-spark-master mapr-spark-historyserver mapr-spark-thriftserver
    On Ubuntu
    apt-get install mapr-spark mapr-spark-master mapr-spark-historyserver mapr-spark-thriftserver
    On SLES
    zypper install mapr-spark mapr-spark-master mapr-spark-historyserver mapr-spark-thriftserver
    NOTE The mapr-spark-historyserver, mapr-spark-master, and mapr-spark-thriftserver packages are optional.

    Spark is installed into the /opt/mapr/spark directory.

  3. For Spark 2.x:
    Copy /opt/mapr/spark/spark-<version>/conf/slaves.template to /opt/mapr/spark/spark-<version>/conf/slaves, and add the hostnames of the Spark worker nodes, one hostname per line.
    For example:
    localhost
    worker-node-1
    worker-node-2
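    The copy-and-edit in this step can be sketched as follows. The scratch directory stands in for /opt/mapr/spark/spark-<version>/conf on a real node, and the worker hostnames are the illustrative ones above:

    ```shell
    # Illustrative sketch only: CONF stands in for
    # /opt/mapr/spark/spark-<version>/conf on a real node.
    CONF=$(mktemp -d)
    printf 'localhost\n' > "$CONF/slaves.template"   # stand-in for the shipped template
    # Copy the template, then append one worker hostname per line:
    cp "$CONF/slaves.template" "$CONF/slaves"
    printf 'worker-node-1\nworker-node-2\n' >> "$CONF/slaves"
    cat "$CONF/slaves"
    ```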
  4. Set up passwordless SSH for the mapr user so that the Spark master node can access all worker nodes listed in the conf/slaves file for Spark 2.x.
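    One common way to set this up is sketched below, with an illustrative key location; on a real master node you would run this as the mapr user and copy the public key to every host in conf/slaves:

    ```shell
    # Sketch of passwordless SSH setup. SSH_DIR is a scratch stand-in for
    # the mapr user's ~/.ssh directory.
    SSH_DIR=$(mktemp -d)
    ssh-keygen -q -t rsa -N '' -f "$SSH_DIR/id_rsa"   # key with no passphrase
    # On a real cluster, push the public key to each worker in conf/slaves, e.g.:
    #   ssh-copy-id -i "$SSH_DIR/id_rsa.pub" mapr@worker-node-1
    ls "$SSH_DIR"
    ```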
  5. As the mapr user, start the worker nodes by running the following command on the master node. Because the Master daemon is managed by the Warden daemon, do not use the start-all.sh or stop-all.sh commands.
    For Spark 2.x:
    /opt/mapr/spark/spark-<version>/sbin/start-slaves.sh
  6. If you want to integrate Spark with MapR Event Store For Apache Kafka, install the Streams Client on each Spark node:
    • On Ubuntu:
       apt-get install mapr-kafka
    • On CentOS / Red Hat:
      yum install mapr-kafka
  7. If you want to use a Streaming Producer, add the spark-streaming-kafka-producer_2.11.jar from the MapR Data Platform Maven repository to the Spark classpath (/opt/mapr/spark/spark-<version>/jars/).
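    This step amounts to copying the jar into the jars directory. A scratch-directory sketch, with the real destination noted in the comments:

    ```shell
    # Illustrative sketch only: JARS_DIR stands in for
    # /opt/mapr/spark/spark-<version>/jars/ on a real node.
    JARS_DIR=$(mktemp -d)
    WORK=$(mktemp -d)
    : > "$WORK/spark-streaming-kafka-producer_2.11.jar"   # stand-in for the downloaded jar
    cp "$WORK/spark-streaming-kafka-producer_2.11.jar" "$JARS_DIR/"
    ls "$JARS_DIR"
    ```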
  8. After installing Spark Standalone, but before running your Spark jobs, follow the steps in Configuring Spark Standalone.