Known Issues at Release (MapR 6.1.1)

Lists the known issues that users should be aware of before installing or using release 6.1.1.

You may encounter the following known issues after installing or upgrading to release 6.1.1. This list is current as of the release date. For a dynamic list of all known issues for all MapR product releases, see Support notices of known issues.

Where available, the workaround for an issue is also documented in this topic. MapR regularly issues maintenance releases and patches to fix reported issues. Check the release notes for subsequent maintenance releases to see whether any of these issues have been fixed.

Installation and Configuration Issues

For generic installation issues, see MapR Installer Known Issues.
Keytool error on SLES 12 SP4 Node
Running the configure.sh script on a SLES 12 SP4 node can fail with the following error:
keytool error: java.security.ProviderException: Could not initialize NSS
The installation fails because the mozilla-nss dependency is not installed.
Workaround: Install the mozilla-nss package using the following command, and rerun configure.sh:
zypper install mozilla-nss
IN-2637
After a manual installation, Oozie and Hive services can fail to connect to a MySQL or MariaDB database because the server time-zone value is unrecognized or represents more than one time zone. The issue affects your installation if you applied the mapr-patch released on or after February 21, 2021 (including the latest mapr-patch). This issue affects manual installations but is fixed in Installer 1.14.0.0.
Workaround: For manual installations, configure either the server or the JDBC driver (using the serverTimezone configuration property) to use a more specific time-zone value if you want to utilize time-zone support. After running configure.sh but before starting the Oozie or Hive services, update the serverTimezone parameter in the hive-site.xml or oozie-site.xml file. For more information, see MySQL Bug #95036.
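For example, in hive-site.xml the serverTimezone value can be appended to the metastore's JDBC connection URL. This is a sketch only: the database host, database name, and time-zone value below are placeholders that you must replace with your own settings.

```xml
<!-- hive-site.xml: serverTimezone appended to the JDBC URL.
     db-host, hive, and America/New_York are placeholder values. -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://db-host:3306/hive?serverTimezone=America/New_York</value>
</property>
```

For Oozie, apply the same serverTimezone query parameter to the JDBC URL configured in oozie-site.xml.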

Monitoring

ES-75
Elasticsearch purge jobs fail in MapR 6.1.0 with EEP 6.3.1 because certain curator files are missing.
Workaround: A patch is required for this issue. See Known Issue article 4812 in the Support notices of known issues. To download a patch, see Downloading a Patch. EEP 6.3.3 and later include the fix for this issue.
ES-76
Running the curator purge script in Elasticsearch in EEP 6.3.2 or EEP 7.0.1 returns a syntax error.
Workaround: A patch is required for this issue. See Known Issue article 4812 in the Support notices of known issues. To download a patch, see Downloading a Patch. EEP 6.3.3 and later include the fix for this issue.
ES-77
During an upgrade from release 6.1.0 and EEP 6.3.x to release 6.2.0 and EEP 7.0.1, Elasticsearch, Kibana, and Grafana packages do not get upgraded and remain at the EEP 6.3.x package version. The issue occurs because of a misnumbered fourth digit in the EEP 7.0.1 packages for Elasticsearch, Kibana, and Grafana. This issue affects manual upgrades and version upgrades using the Installer. This issue does not affect new installations.
Workaround: After the upgrade, manually uninstall the mapr-elasticsearch and mapr-kibana packages using a command such as yum remove. Then reinstall the packages. If you installed using the Installer, run an Incremental Install to reinstall the packages. If you installed the cluster manually, use a command such as yum install to reinstall the mapr-elasticsearch and mapr-kibana packages. For a guide to the component package versions, see Component Versions for Released EEPs.
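On a manually installed node that uses yum, the uninstall and reinstall steps described above might look like the following. This is a sketch, not a verified procedure: run it as root on each affected node, and confirm the repository configuration points at the EEP 7.0.1 packages first. The configure.sh -R step is an assumption based on the usual follow-up after package changes on a configured cluster.

```shell
# Remove the monitoring packages left at the EEP 6.3.x version
yum remove mapr-elasticsearch mapr-kibana

# Reinstall them from the EEP 7.0.1 repository
yum install mapr-elasticsearch mapr-kibana

# Re-run configure.sh with existing settings so the reinstalled
# services are registered with the cluster (-R reuses the current
# cluster configuration)
/opt/mapr/server/configure.sh -R
```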

NFSv4

MFS-11919
nfs-ganesha crashes when the NFSv4 server is started on SLES 12 SP4/SP5 nodes. However, NFSv4 subsequently comes up and works correctly.
Workaround: None.

Upgrade

IN-2824
During a version upgrade using the Installer, the upgrade fails during the Verification phase with a message that the OS is not supported. Hovering over the error in the right-navigation pane indicates that your OS is not supported for the core version you selected, and the core version is displayed as 5.1.0. Troubleshooting indicates that the install.json file includes EEP components but no core services in the services list. This can happen because of a timing issue in the Installer user interface.
Workaround: You might be able to avoid this issue by incorporating a brief delay when you use the Maintenance Update or Version Upgrade function. After specifying the core version and the EEP version, wait a minute or two before clicking Next to advance to the next screen. This delay gives the Installer time to process the selections that you made.
To recover from the upgrade failure:
  1. Reset the Installer database as described in Resetting the Installer Database.
  2. Import the last known state by following the steps for importing the cluster state in Importing or Exporting the Cluster State.
  3. Retry the upgrade through the Installer.

Impersonation

MFS-11943
Impersonation does not work by default on insecure clusters because the admin file is not automatically created in the /opt/mapr/conf/proxy directory upon initial cluster configuration.
Workaround: Manually create an empty admin file in /opt/mapr/conf/proxy, for example /opt/mapr/conf/proxy/mapr, and then restart services.
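A minimal sketch of this workaround, assuming the cluster administrative user is mapr and Warden manages the services (adjust the user name and ownership to match your cluster):

```shell
# Create an empty proxy file named after the admin user (mapr here)
sudo mkdir -p /opt/mapr/conf/proxy
sudo touch /opt/mapr/conf/proxy/mapr
sudo chown mapr:mapr /opt/mapr/conf/proxy/mapr

# Restart services so the new proxy file is picked up
sudo service mapr-warden restart
```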

File System

MFS-14890
Issuing hadoop mfs -lsfid with a non-mapr user ticket returns an Operation not permitted error. For example:
2022-03-15 18:12:23,6663 DEBUG Cidcache fc/cidcache.cc:4315 Thread: 18743 Enter GetVolumeMountPoint 138102135
2022-03-15 18:12:23,6663 DEBUG Cidcache fc/cidcache.cc:4754 Thread: 18743 Sending RPC to CLDB: 10.163.172.130:7222
2022-03-15 18:12:23,6672 DEBUG Cidcache fc/cidcache.cc:4193 Thread: 18743 GetVolumeProperties returns: 0, numTries:0
2022-03-15 18:12:23,6673 ERROR Cidcache fc/cidcache.cc:4277 Thread: 18743 VolumePropertiesLookupRequest failed, cldb returned err Operation not permitted(1), CLDB: 10.163.172.130:7222
2022-03-15 18:12:23,6673 DEBUG Client fc/client.cc:11139 Thread: 18743 GetVolumeMountPoint returns: 0
In a secure cluster, this happens because the hadoop mfs -lsfid command works only when the UID associated with the ticket has either READ permission or full permission at the cluster level.
Workaround: Use an ACL to grant the non-mapr user login permission, which provides read access at the cluster level. For example:
# maprcli acl edit -type cluster -user user1:login
# maprcli acl show -type cluster
Allowed actions             Principal
[login, ss, cv, a, fc, cp]  User mapr
[login, ss, cv, a, fc, cp]  User root
[login]                     User user1