Installer Known Issues
This topic describes known issues with the Installer that you should be aware of when troubleshooting.
Issue | Description |
---|---|
IN-2784 & MFS-11853 | Stopping a cluster by stopping ZooKeeper and Warden can cause clients that are
accessing the file system through POSIX (for example, the Object Store) to hang if Loopback NFS is installed on a
cluster node and is not stopped first. Note that beginning with Installer 1.15, the
Installer installs Loopback NFS on all cluster nodes unless NFS is enabled.
Workaround (sketch below): If Loopback NFS is running and you need to stop the cluster, you must
first unmount |
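A minimal sketch of the stop sequence, assuming Loopback NFS is mounted at /mapr and managed by the mapr-loopbacknfs service (the mount point and service name are assumptions; adjust for your cluster):

```bash
# Unmount the loopback NFS mount point before stopping the cluster
# (assumes the default /mapr mount point).
sudo umount /mapr

# Stop the Loopback NFS service so POSIX clients do not hang
# (service name assumed; verify on your nodes).
sudo service mapr-loopbacknfs stop

# Now it is safe to stop Warden and ZooKeeper.
sudo service mapr-warden stop
sudo service mapr-zookeeper stop
```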
IN-1343 | In a new installation of six nodes or more using the
Installer, if only data nodes fail to install, retrying the installation can
fail. Workaround: Use the Installer uninstall feature and retry the installation from scratch. See Uninstalling Software Using the Installer Uninstall Button. |
IN-2132 | On a SUSE cluster, version 1.10 or earlier of the mapr-setup.sh
script can complete successfully even if
sshpass is not installed. Workaround:
Upgrade to the latest Installer. You must use version 1.11 or later of the
mapr-setup.sh script. |
IN-2008 | When you upgrade from a secure 6.0.0 cluster to 6.0.1 using Installer 1.9, a
security certificate for log monitoring is overwritten. As a result, Elasticsearch can fail
to start after the upgrade. This issue is not present during a new installation of 6.0.0 or 6.0.1 or during an
upgrade to 6.1.0. This issue is fixed in Installer 1.10 and later. Workaround: To resolve the issue, you must
remove the .keystore_password file, re-run the command to generate new
Elasticsearch certificates, and then redistribute the certificates. Use these steps:
|
IN-2443 | An internal server error that includes a NullPointerException can be generated if you
install a cluster on Ubuntu 16 using an Installer Stanza. The
error appears if Hive is installed but no password for Hive is included in the
.yaml installation file. Workaround: Add the Hive password to the
.yaml installation file. |
IN-18 | When using the -v or --verbose options with Installer Stanzas, detailed error information is not provided on
the command line. For example, if a mapr user or group is not present on a
host during a new installation, the mapr-installer-cli reports
"Verification Error" on the command line. Workaround: To view more detailed error
information when using the |
IN-2200 | Deploying a MapR 6.0.1 cluster on AWS fails when the following parameters are
specified:
Workaround: Try using a |
IN-2152 | During an Installer upgrade from any release to 6.0.1, core files can be generated for ecosystem components,
which can cause alarms in the Control System following the
upgrade. This happens because the upgrade sequence shuts down the cluster, upgrades Core packages, and then restarts Core. Restarting Core is necessary to upgrade some
ecosystem components. When the old ecosystem components are started, version
incompatibilities with the new version of Core can cause core
dumps. This is a temporary issue: upgrading the ecosystem components, which happens later in
the upgrade process, resolves it. The issue does not exist in 6.1 and later releases, which can prevent services from
restarting during an upgrade. Workaround (sketch below): Ignore the Control System alarms, or upgrade to 6.1 or later, which should not generate core-file alarms. |
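If you want to confirm and clear the core-file alarms once the ecosystem upgrade has finished, a hedged maprcli sketch (the alarm name NODE_ALARM_CORE_PRESENT and the hostname are assumptions to verify against your cluster):

```bash
# List current alarms and look for core-file alarms raised during the upgrade.
maprcli alarm list -json | grep -i core

# Clear the core-present alarm for an affected node after the ecosystem
# components have been upgraded (alarm name and hostname are examples).
maprcli alarm clear -alarm NODE_ALARM_CORE_PRESENT -entity node1.example.com
```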
IN-1940 | In Installer versions 1.9 and earlier, the probe command can fail because of a runtime
error if you have installed the Operational Applications with HPE Ezmeral Data Fabric Database
template. The error is caused by the presence of the
mapr-drill-internal package. Any node running the Data Access Gateway requires the mapr-drill-internal package to
be installed even though Drill is not installed as a service. The
mapr-drill-internal package provides a set of client libraries used by the
Data Access Gateway. Workaround: Before using the
|
IN-1635 | In Installer Stanza versions 1.9 and earlier, the probe command was hard-coded with a cluster
admin user of mapr. If you configured a cluster admin user other than
mapr, the probe-generated YAML file could not be imported
using the import command. Workaround: Before using the
|
IN-2123 | In a secure cluster, the Extend Cluster operation fails if you
try to extend the control group. The new control node cannot join the cluster because it
inadvertently receives a new set of keys. This issue affects versions 1.7 through 1.10 of
the Installer and is fixed in Installer 1.10.0.201812181130 and later versions.
Workaround (sketch below): You can resolve the issue by manually copying mapruserticket into the |
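A hedged sketch of the manual copy, assuming the ticket lives in /opt/mapr/conf on an existing control node and that the same directory on the new node is the intended destination (the destination is an assumption because the original step is truncated above):

```bash
# Copy mapruserticket from an existing control node to the new control node.
# Paths, hostnames, and ownership are assumptions; verify for your cluster.
scp /opt/mapr/conf/mapruserticket newnode.example.com:/tmp/
ssh newnode.example.com 'sudo cp /tmp/mapruserticket /opt/mapr/conf/ && \
  sudo chown mapr:mapr /opt/mapr/conf/mapruserticket'
```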
IN-2141 | The following issue applies to Installer versions 1.7
through 1.10, but not all 1.10 versions. The issue is fixed in Installer 1.10.0.201812181130 and later versions. An extend cluster (add node) operation
can fail when you:
The extend cluster operation fails because the keystore, truststore, and server ticket (maprserverticket) files are not present on the installer node. Workaround (sketch below): Before attempting the extend cluster
operation, copy the keystore, truststore, and server ticket
(maprserverticket) files from any CLDB node to
/opt/mapr/installer/data/tmp on the installer node. The files that need
to be copied are:
If metrics monitoring is configured on the cluster, you must also copy the tickets related to Fluentd, Kibana, and Elasticsearch to the same location. |
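A minimal sketch of the copy, assuming the security files live in /opt/mapr/conf on the CLDB node; only the files named in this entry are shown, and your cluster may require additional files (plus the monitoring tickets if metrics monitoring is configured):

```bash
# Run on the installer node as a user that can write to
# /opt/mapr/installer/data/tmp (the CLDB hostname is an example).
mkdir -p /opt/mapr/installer/data/tmp
for f in maprserverticket ssl_keystore ssl_truststore; do
  scp cldb1.example.com:/opt/mapr/conf/$f /opt/mapr/installer/data/tmp/
done
```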
IN-2217 | During an upgrade to MEP 6.1.0 using the Installer, the
Installer does not back up the Drill conf,
log, and jar directories into
${MAPR_HOME}/drill/OLD_DRILL_VERSIONS. This can happen when you upgrade
Drill from an old version (for example, Drill 1.10 in MEP 3.0) to Drill 1.15.0.0 in MEP
6.1.0. Recent packaging changes in Drill contribute to this issue. Drill 1.10 consists
only of the mapr-drill package. Workaround (sketch below): Before upgrading, perform the following steps:
|
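As a hedged illustration of the manual backup (the directory names follow the pattern described in this entry; the Drill version directory on your nodes may differ):

```bash
# Back up the old Drill conf, log, and jar directories before upgrading
# (run on each Drill node; the drill-1.10.0 path is an assumption).
OLD_DRILL=/opt/mapr/drill/drill-1.10.0
BACKUP=/opt/mapr/drill/OLD_DRILL_VERSIONS/drill-1.10.0
sudo mkdir -p "$BACKUP"
for d in conf logs jars; do
  sudo cp -a "$OLD_DRILL/$d" "$BACKUP/"
done
```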
IN-1915 | During an upgrade using the Installer, refreshing the
browser page can cause the Installer to forget upgrade parameters that were specified before
the refresh. Workaround: Avoid refreshing the browser page during an upgrade operation. If you must refresh the page, go back to the first page of the upgrade operation and start over to ensure that the Installer has the correct parameters before it begins the Verify phase of the upgrade. |
IN-2035 | During a version upgrade using the Installer, if you
select the Advanced Configuration button and then click
Previous (one or more times) followed by
Abort, the Installer can indicate that the upgrade completed even
though the upgrade was aborted. Workaround: If this happens, you must reset the
installer and reload the last known state. Follow these steps to reset the cluster state:
|
IN-2065 | /mapr sometimes does not get mounted after you enable NFS (v3 or v4)
using the Installer Incremental Install function. The
Incremental Install function is an online operation. Enabling NFS using an Incremental
Install can create a race condition between when the mapr_fstab file gets
created and when NFS is started by Warden. If NFS is started by Warden before the
mapr_fstab file is created, /mapr does not get
mounted. Workaround (sketch below): If the time stamp of the mapr_fstab file is older than the Warden time
stamp:
|
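A hedged sketch of one way to check the timestamps and remount, assuming the default /opt/mapr/conf/mapr_fstab location and that NFS is managed by Warden (the log file used as a time reference and the service restart command are assumptions to verify for your release):

```bash
# Compare the mapr_fstab timestamp with Warden's activity
# (the warden log is only a rough proxy for Warden's start time).
ls -l /opt/mapr/conf/mapr_fstab /opt/mapr/logs/warden.log

# If the timestamps indicate the race described above occurred, restart NFS
# through Warden so it re-reads mapr_fstab and mounts /mapr
# (use -name nfs4 if you enabled NFSv4).
maprcli node services -nodes "$(hostname -f)" -name nfs -action restart
```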
INFO-420 | The procedure for Configuring Storage using disksetup does not work for new installations of DARE-enabled 6.1 clusters. With DARE enabled,
disksetup fails on any node that is not a CLDB node because there is no
local copy of the dare.master.key file. When you use disksetup, non-CLDB
nodes try to contact the CLDB, which must be running when the nodes attempt
contact. Workaround (sketch below): After running configure.sh, you must:
|
IN-2057 | A fresh install of 6.0.0 using the
sample_advanced.yaml file for Installer
Stanzas (Installer version 1.9) can fail with the following error
message: The
error is generated because the .yaml file is missing an entry for
mapr-data-access-gateway in the MASTER services section. The
mapr-data-access-gateway service is needed for HPE Ezmeral Data Fabric Database installations. Workaround (sketch below): In the MASTER services
section of the .yaml file, add an entry for mapr-data-access-gateway. |
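A quick way to check the Stanza file before running it (shell sketch; the YAML layout of the MASTER group varies by Installer version, so this only verifies that the service is mentioned somewhere in the file):

```bash
# Verify that the Stanza file lists the Data Access Gateway service; if the
# grep prints nothing, add mapr-data-access-gateway to the MASTER services
# list before installing.
grep -n "mapr-data-access-gateway" sample_advanced.yaml \
  || echo "mapr-data-access-gateway is missing: add it to the MASTER services section"
```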
IN-1272 | During an upgrade to 6.0 or later (Drill 1.11),
configure.sh sometimes fails to disable the storage plugin for HBase. The
HBase server is not supported in Core 6.0 or later, so the
HBase storage plugin should be disabled before a cluster is upgraded to 6.0 or later. Otherwise, Drill queries against HBase will
hang. Workaround (sketch below): Before upgrading to Drill 1.11 or later, manually disable the
HBase storage plugin. To manually disable the plugin, you can use the Drill Web Console
or Drill REST API commands. You can disable the HBase storage plugin on the
Storage page of the Drill Web Console at
http(s)://<drill-hostname>:8047. For more information, see this
page:
|
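If you prefer the REST API, a hedged example using the storage-plugin enable endpoint that Drill releases of this era document (the endpoint form and authentication handling may differ on your Drill version, and secure clusters require authentication, so verify before use):

```bash
# Disable the HBase storage plugin through the Drill web port
# (hostname is an example; add authentication on secure clusters).
curl -X GET "http://drill-node.example.com:8047/storage/hbase/enable/false"

# Confirm the plugin now reports "enabled": false.
curl -s "http://drill-node.example.com:8047/storage/hbase.json"
```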
IN-1747 | If you use the Installer 1.10
Uninstall button to uninstall software and a node is unreachable, you will not be able to uninstall the node later
when the node becomes reachable. Workaround: Uninstall the software on the node manually. See Decommissioning a Node and Uninstalling Data Fabric Software from the Command-line. |
IN-2015 | A fresh install of 6.0.0 with MEP 4.1.1 using Installer
1.9 can fail with the following error
message:
Workaround: Update the Installer to version 1.10 or later, and retry the operation. |
IN-2018 | Logging on to Kibana results in an authentication failure. This can happen on a CentOS
cluster if you use Installer 1.10 to install 6.0.1 with MEP 5.0.0 and
then upgrade to 6.1.0 and MEP 6.0.0. Workaround: Try using Installer 1.9 to install the 6.0.1 cluster and Installer 1.10 to upgrade the cluster. See Updating the MapR Installer. |
CORE-150 | After using the Incremental Install function of the Installer to apply security to an Azure-based 6.1.0 cluster, the Hue and Spark-thrift server links are not
accessible in the Installer interface. This issue can occur on an Azure-provisioned cluster
whose internal DNS suffix starts with a number rather than a letter. Workaround: Re-create the cluster in Azure so that the internal DNS suffix starts with a letter and not a number. |
IN-2025 | The Extend Cluster operation can fail during the
Verify Nodes phase with an error indicating Unscalable host
groups found. This error can occur when the MASTER group is missing or a
single-instance service (for example, Grafana) has been moved out of the MASTER group. The
mapr-installer.log reveals which MapR services are supposed to be in the
MASTER group. Workaround: Move any original MASTER services that caused the error
back to the MASTER group. The |
IN-2006 | On a cluster with mapr-drill installed, the probe
command can return the wrong database type value. Workaround: After using the
probe command, check whether the resulting YAML file has the correct
mapr_db setting. Possible settings are:
If necessary, change the setting in the YAML file to match the value from the probed cluster. |
IN-1955 | If you install MapR software using the Installer in a
browser tab, then update the Installer and attempt an upgrade in the same
tab without starting a new browser session, the stale browser cache can cause
upgrade errors. Workaround: Clear your browser cache or open a new browser tab whenever you update the MapR Installer and perform a new Installer operation. |
IN-1983 | After an upgrade from MapR 5.x to MapR 6.1 and MEP 6.0.0 using the MapR Installer, the
kafka-connect service fails to start. This issue has been noticed on
platforms that use systemd. Workaround: Stop the
|
IN-1972 | During an upgrade from MapR 5.x to MapR 6.1, the MapR Installer prompts you for the
MySQL user ID and password. If you enter a password that is different from the password you
provided when you originally configured MySQL through the MapR Installer, the upgrade fails
with this error: "Unable to connect to database.…" Workaround: When the MapR Installer prompts you for the MySQL user ID and password, enter the password that you specified when you first installed the cluster. If you did not specify a password for MySQL when you installed MapR 5.x, leave the password field blank. |
IN-1904 | If you initiate a system startup by clicking the Startup button
on the MapR Installer web interface, the Authentication screen is
displayed. If you subsequently click the Previous button, the
following buttons are shown as active even though they are not usable during system startup:
Workaround: Do not use the Previous button during startup. |
IN-1657 | After updating MapR Installer 1.7 or later, the Installer
can lose awareness that a cluster was previously installed. For example, the MapR Installer
might indicate the need for a fresh install. Workaround: If this happens, do NOT
proceed with installation or upgrade operations. Follow these steps to reset the cluster
state:
|
IN-1084 | For MapR 6.0 or later clusters, enabling security by using the Incremental Install function can overwrite custom
certificates in the ssl_truststore and ssl_keystore files.
When you turn on security, the MapR Installer runs the configure.sh script
on the CLDB primary node to generate security keys and then distributes the keys to all the
other CLDB nodes. The Installer also distributes certificates to all the other nodes. This
process can cause custom certificates to be overwritten. However, before enabling security,
the MapR Installer makes a backup of the existing ssl_keystore and
ssl_truststore files. Workaround (sketch below): After enabling security, locate
the backup of the ssl_keystore and ssl_truststore files.
The backup uses this format:
Extract any custom
certificates from the backup files, and manually merge or add them into the new
ssl_keystore and ssl_truststore files. To merge
the files, you can use the |
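As a hedged illustration of the merge, using the standard Java keytool (the alias name, store passwords, and backup filename are placeholders; the Installer's actual backup naming format is not shown above):

```bash
# List what is in the backup keystore to find your custom certificate aliases.
keytool -list -keystore /opt/mapr/conf/ssl_keystore.backup -storepass <storepass>

# Copy one custom entry from the backup into the regenerated keystore
# (repeat per alias; do the same for ssl_truststore).
keytool -importkeystore \
  -srckeystore  /opt/mapr/conf/ssl_keystore.backup -srcstorepass <storepass> \
  -destkeystore /opt/mapr/conf/ssl_keystore        -deststorepass <storepass> \
  -srcalias my_custom_cert
```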
IN-997 | When using MapR Installer 1.9 with Ubuntu distributions, an upgrade of the MapR
Installer definitions requires a restart of the installer service. The restart is needed
because the MapR Installer services version is not updated properly when you use either of
the following commands:
Workaround (sketch below): After installing or reloading the MapR Installer definitions,
issue one of these commands to fix the services version:
|
IN-1671 | For MapR Installer 1.8 and earlier, installation on Ubuntu 16.04 can fail if Python 2
is not available or if the default is set to Python 3. The installer requires
python and python-yaml to be installed on all
nodes. Workaround (sketch below): To install the Python packages
manually:
|
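A minimal sketch for Ubuntu 16.04, assuming the stock python (Python 2) and python-yaml packages are what the Installer expects:

```bash
# Install Python 2 and the YAML bindings on every node (Ubuntu 16.04).
sudo apt-get update
sudo apt-get install -y python python-yaml
```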
IN-1336 | The MapR Installer Retry function can be affected if the
installer operation fails. Suppose you deselect a service during an Incremental
Install operation. If the Incremental Install fails and
you need to Retry, it is possible that the service will not be
deselected (uninstalled). Workaround (sketch below): Manually remove (uninstall) the service by
using one of these commands:
|
IN-1392 | During an Extend Cluster (add node) operation using the MapR
Installer, if installation of the added node fails and you abort the operation, the
installer can display the added node even though it did not get
installed. Workaround: When the MapR Installer indicates that a node did not get added correctly (typically during the Installation phase), select the node and click Remove Node. Then retry adding the node. |
IN-1396 | An installation using the MapR Installer fails with the following Ansible module
error:
Workaround:
Check for syntax errors in the |
IN-1398 | In the MapR Installer Verify Nodes page, if you click a host,
the Disks Selected for MapR box in the right pane displays the disks
that were specified for the host either manually or automatically. If you deselect a disk in
the right pane and click Retry, the deselection is not always
implemented. Workaround: Click Previous to go back to the Node Configuration page, and re-specify the disks that you want. Then continue with the operation. |
IN-1386 | On a secure MapR cluster, YARN jobs can fail if you specify IP addresses rather than
host names when you configure nodes using the MapR Installer. Workaround (sketch below): Do not
use an IP address for node configuration with the MapR Installer. If you already used an IP
address, change the IP address in the yarn-site.xml file
on all nodes. In the following example, the 10.10.10.7 IP address must be changed to a host
name, such as
|
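A hedged shell sketch for locating and replacing a hard-coded IP in yarn-site.xml (the Hadoop directory pattern and the example hostname node7.example.com are assumptions; review every change before restarting services):

```bash
# Find the hard-coded IP in yarn-site.xml on a node.
grep -n "10\.10\.10\.7" /opt/mapr/hadoop/hadoop-*/etc/hadoop/yarn-site.xml

# Replace it with the node's host name (a .bak copy of the file is kept).
sudo sed -i.bak 's/10\.10\.10\.7/node7.example.com/g' \
  /opt/mapr/hadoop/hadoop-*/etc/hadoop/yarn-site.xml
```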
IN-1333 | On Ubuntu clusters, the mapr-setup.sh script fails to reload the MapR
Installer definitions during an update of the MapR Installer and
definitions. Workaround: After updating, restart the installer to load the
definitions:
|
IN-907 | The MapR Installer service fails if the mapr user or
root user already exists and is configured to use a shell other than
bash. For more information about user requirements, see the MapR Installer Prerequisites and Guidelines. |
IN-1079 | Verification fails when the installed language pack is for a language other than
English. Workaround: Remove the non-English language pack and install the English
language pack. In the following example, the non-English language pack is shown as German.
Also, make sure your system locale is set to en_us, as described in Infrastructure.
|
IN-804 | Using the Incremental Install operation to add a third node to a CONTROL group
generates an error: ERROR: configure_refresh.sh failed. This issue applies
to MapR Installer versions 1.6 and earlier. Workaround: Update the MapR Installer to version 1.7 or later, and retry the operation. See Updating the Installer. |
IN-1314 | When you use the MapR Installer to install ecosystem components that require a MySQL
component, such as Hive, Oozie, or Hue, the passwords you provide to install the MySQL
database are displayed in the mapr-installer.log and
<nodename>.log files. Beginning with MapR Installer 1.7, the
permissions for the mapr-installer.log and
<nodename>.log files are changed so that these passwords are not world
readable. However, the passwords are still present in log files created with earlier
versions of the MapR Installer. Workaround: For increased security, remove the earlier logs or change the user permissions for them. |
IN-646 | An upgrade using the MapR Installer can fail if Ansible processes hang. Performing a
service mapr-installer restart does not help. The
installer-process.log indicates Lock file
/opt/mapr/installer/data/mapr-installer.lock exists. See the recovery sketch below.
|
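The original workaround text is not shown above; as a hedged recovery sketch only (assumptions: the hung processes are the Installer's Ansible runs, and the lock file path is the one reported in the log), one possible sequence is:

```bash
# Stop the installer service and any hung Ansible runs it spawned.
sudo service mapr-installer stop
sudo pkill -f ansible-playbook      # verify with 'ps -ef | grep ansible' first

# Remove the stale lock file reported in installer-process.log.
sudo rm -f /opt/mapr/installer/data/mapr-installer.lock

# Restart the installer service and retry the upgrade.
sudo service mapr-installer start
```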
IN-1042 | Installation of the 5.2.x mapr-metrics package on SUSE 12 SP2 fails
because the libmysqlclient16 package is not present. This can happen when
mapr-metrics is installed manually or using the MapR Installer. This issue
was detected during installations of MapR 5.2.x with MEP 3.0.0. Workaround: None. |
IN-870 | If your cluster uses 2-digit MEPs, and you
use the MapR Installer Extend Cluster button to add a
node, the node can be added with a patch version that is different from the patch version of
other nodes in the cluster. See Understanding Two-Digit and Three-Digit MEPs. A one-time change can prevent this issue. After updating the MapR Installer from version 1.5 to version 1.6 or later, but before performing any MapR Installer operations, use an Incremental Install to change your 2-digit MEP version to the equivalent 3-digit MEP version. See Updating the Installer. |
ES-27, IN-1387 | On a new installation of a secure cluster using the MapR Installer, Elasticsearch
fails to start, and logs indicate that Elasticsearch key generation failed. When this
happens, Kibana and Fluentd also do not start. The MapR Installer allows the installation to
complete. Workaround: Check the installer log for a message indicating that Elasticsearch could not be secured. Use the Incremental Install feature of the MapR Installer to retry installation of the MapR Monitoring logging components. Alternatively, you can configure security for the logging components manually. See Step 9: Install Log Monitoring. |
IN-1332 | On clusters with less than the recommended memory configuration (16 GB per node),
services added during an Incremental
Install operation might fail to start because Warden allocated the available memory to
the MapR filesystem. The MapR Installer might not indicate a problem with the newly added
services. If this issue occurs, the MapR filesystem cannot relinquish memory without
restarting Warden. Note: This issue can also occur on clusters with more than 16 GB of
memory per node if the installed services require more memory than is currently
installed. Workaround (sketch below): Use MCS or the
|
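The command reference is truncated above; as a hedged sketch, restarting Warden on the affected node lets memory be redistributed so the new services can start (expect a brief service interruption on that node):

```bash
# Restart Warden on the node where the newly added services failed to start.
sudo service mapr-warden restart

# Then check that the services came up on that node.
maprcli service list -node "$(hostname -f)"
```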
IN-1339 | Installation fails with the MapR Installer reporting an Unexpected failure during
module execution, and the following entry is present in the Logs for the Installer:
Workaround: Change the ssh settings as described in known issue IN-405 later on this page, and retry the installation. |
IN-553 | New installations on Ubuntu 14.04 using MapR Installer 1.6 or 1.7 can fail because of
a JDK 1.8 issue. Workaround: If you are installing on Ubuntu 14.04, you must install Java JDK 1.8 before running the MapR Installer. For more information, see this website. If you are installing on RHEL/CentOS or SUSE, the MapR Installer installs Java JDK 1.8 for you. |
IN-405 | MapR installation or cluster
import fails with the error message: "Failed to resolve remote temporary directory
from ansible-tmp- ...." Workaround (sketch below): To proceed using the MapR Installer, disable
SSH connection reuse by including this entry underneath the
This workaround can lead to longer install times. We recommend that you resolve any network connectivity issues in your environment. |
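The config entry and its location are truncated above; as a hedged sketch, SSH connection reuse in Ansible is normally disabled with a ControlMaster=no ssh argument under the [ssh_connection] section of the ansible.cfg the Installer uses (the file location under /opt/mapr/installer is an assumption):

```bash
# Locate the ansible.cfg used by the Installer (path assumed).
sudo find /opt/mapr/installer -name ansible.cfg

# Then add the following under its [ssh_connection] section:
#
#   [ssh_connection]
#   ssh_args = -o ControlMaster=no
```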
IN-250 | An upgrade to a new MapR core version and a new MEP using the MapR Installer can fail if the cluster being
upgraded was initially installed with Hive Metastore but not with Hive. The Hive Metastore
package has an installation dependency on Hive, but the Hive Metastore definitions do not
enforce the dependency, resulting in inconsistencies in the installer database. This issue
has been observed on Ubuntu platforms. Workaround: Before upgrading, if you have Hive Metastore installed by itself, use the Incremental Install feature of the MapR Installer to install Hive. Then proceed with the upgrade. Performing an upgrade
without doing the Incremental Install of Hive will cause the upgrade to fail. In
this scenario, you will have to reinstall or rebuild the database by using Stanza commands.
You can use the |
N/A | The MapR Installer Web Interface can inadvertently deselect services that you have selected, preventing them from being installed. For example, suppose you select an auto-provisioning template on the Select Services page, also select additional services (for example, Streams Tools), and go to the next page. When you return to the Select Services page, Streams Tools is deselected, and you must reselect it to ensure that it is installed. |
MapR-22652 | The MapR Installer does not prevent the selection of Impala 2.2 when the cluster will be installed on Ubuntu nodes. Since Impala 2.2 is not supported on Ubuntu, the installation will not complete successfully. |
MapR-20606 | The Configure Service Layout page may assign services to a group with the name
"Unprovisioned Services." Workaround: In the MapR Installer Web Interface, click Restore Default. |
N/A | You cannot use the MapR Installer after you upgrade the cluster using the command
line. After you use the command line to upgrade a cluster that you installed with the MapR Installer, the MapR Installer is not aware that the cluster runs the upgraded version. Therefore, the MapR Installer does not install nodes and ecosystem components that apply to the upgraded version. Workaround: Use the MapR Installer Stanzas
|
MapR-19727 | On nodes running RedHat/CentOS 6.5 or older, JDK 1.7 may cause an SSL connection error
when you attempt to access the Installer web
interface. Workaround:
|
MapR-18668 | Hue does not work on RedHat/CentOS 7 when it is configured to use a MySQL
database. When this issue occurs, the MapR Control System (MCS) displays the
Workaround:
|
MapR-18574 | Installation may fail on nodes where a package manager is running or a stale
lock file exists. When this type of failure occurs, the installation log for that node may
contain an error message similar to the following:
Workaround (sketch below):
To clean up a running package manager process, run apt-get or
yum in a shell and follow the instructions provided by the package
manager. Note: When you select a node on the "Installing MapR" page, the option to view the
installation log for that node displays in the left panel. |
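A hedged sketch of how to check for a running package manager or a stale lock before retrying (lock-file paths vary by distribution; the ones shown are common defaults):

```bash
# Look for a package manager still running on the node.
ps -ef | egrep 'apt-get|dpkg|yum|zypper' | grep -v grep

# Common stale-lock locations (only remove a lock after confirming
# that no package manager process is still running):
#   RHEL/CentOS: /var/run/yum.pid
#   Ubuntu:      /var/lib/dpkg/lock
```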
MapR-18507 | Metrics does not work on these operating systems: Ubuntu, RedHat 7, and SUSE/SLES 12. |
MapR-18388 | Known issues with spark-standalone and multiple Spark primary instances:
Workaround (sketch below): To start Spark workers, run the following command as the mapr user
on each worker
node:
Example:
It is important to list all the Spark primary node hostnames so that the worker nodes can connect to the active primary node. You can run the following command to determine the nodes that run the Spark primary instances:
|
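The commands themselves are truncated above; a hedged sketch of the usual Spark standalone pattern, where each worker is started against a comma-separated list of primary (master) URLs and the primaries are found from the cluster service list (the script path, Spark version directory, port, and hostnames are assumptions to verify for your cluster):

```bash
# Find the nodes that run the Spark primary (master) instances.
maprcli node list -columns hostname,csvc | grep -i spark

# On each worker node, start the worker as the mapr user, listing every
# Spark primary so the worker can attach to whichever one is active.
/opt/mapr/spark/spark-2.3.1/sbin/start-slave.sh \
  spark://primary1.example.com:7077,primary2.example.com:7077
```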