Known Issues (Release 7.2.0)

You might encounter the following known issues after upgrading to release 7.2.0. This list is current as of the release date.

IMPORTANT The "Support notices of known issues" tool is no longer available, but you can obtain the same information by logging on to the HPE Support Center. See Support Articles in the HPE Support Center.

Where available, the workaround for an issue is also documented. HPE regularly publishes maintenance releases and patches to fix issues. We recommend checking the release notes for subsequent maintenance releases to see whether any of these issues have been fixed.

Clients

CORE-960
In a Java 17 environment, installing the Data Fabric client for Windows generates the following error during client configuration:
C:\>C:\opt\mapr\server\configure.bat -N mycluster -c -secure -C node1:7222 node2:7222
Don't forget to copy conf\ssl_truststore from a server on your cluster.

java.lang.reflect.InaccessibleObjectException: Unable to make protected final java.lang.Class java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain) throws java.lang.ClassFormatError accessible: module java.base does not "opens java.lang" to unnamed module @491666ad

at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:199)
at java.base/java.lang.reflect.Method.setAccessible(Method.java:193)
at com.mapr.fs.ShimLoader.injectNativeLoader(ShimLoader.java:281)
at com.mapr.fs.ShimLoader.load(ShimLoader.java:225)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.apache.hadoop.conf.CoreDefaultProperties.<clinit>(CoreDefaultProperties.java:63)
at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:803)
at org.apache.hadoop.util.ShutdownHookManager$HookEntry.<init>(ShutdownHookManager.java:206)
at org.apache.hadoop.util.ShutdownHookManager.addShutdownHook(ShutdownHookManager.java:304)
at org.apache.hadoop.util.RunJar.run(RunJar.java:301)
at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
java.lang.RuntimeException: Failure loading MapRClient.
at com.mapr.fs.ShimLoader.injectNativeLoader(ShimLoader.java:307)
at com.mapr.fs.ShimLoader.load(ShimLoader.java:225)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.apache.hadoop.conf.CoreDefaultProperties.<clinit>(CoreDefaultProperties.java:63)
at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:803)
at org.apache.hadoop.util.ShutdownHookManager$HookEntry.<init>(ShutdownHookManager.java:206)
at org.apache.hadoop.util.ShutdownHookManager.addShutdownHook(ShutdownHookManager.java:304)
at org.apache.hadoop.util.RunJar.run(RunJar.java:301)
at org.apache.hadoop.util.RunJar.main(RunJar.java:236)

Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make protected final java.lang.Class java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain) throws java.lang.ClassFormatError accessible: module java.base does not "opens java.lang" to unnamed module @491666ad

at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:199)
at java.base/java.lang.reflect.Method.setAccessible(Method.java:193)
at com.mapr.fs.ShimLoader.injectNativeLoader(ShimLoader.java:281)
... 11 more
2024-01-15 13:23:36,761 INFO conf.CoreDefaultProperties: Cannot execute load() method
java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at org.apache.hadoop.conf.CoreDefaultProperties.<clinit>(CoreDefaultProperties.java:63)
at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:803)
at org.apache.hadoop.util.ShutdownHookManager$HookEntry.<init>(ShutdownHookManager.java:206)
at org.apache.hadoop.util.ShutdownHookManager.addShutdownHook(ShutdownHookManager.java:304)
at org.apache.hadoop.util.RunJar.run(RunJar.java:301)
at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: java.lang.ExceptionInInitializerError
at com.mapr.fs.ShimLoader.load(ShimLoader.java:245)
... 10 more

Caused by: java.lang.RuntimeException: Failure loading MapRClient.

at com.mapr.fs.ShimLoader.injectNativeLoader(ShimLoader.java:307)
at com.mapr.fs.ShimLoader.load(ShimLoader.java:225)
... 10 more
The error occurs because the Java 17 module system blocks the reflective access to java.lang that the client configuration requires.
Workaround: Set the HADOOP_OPTS environment variable to --add-opens java.base/java.lang=ALL-UNNAMED.
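For example, on Windows you can set the variable for the current session before rerunning the configuration command (a minimal sketch; to make the setting permanent, define HADOOP_OPTS in the system environment variables instead):
C:\>set HADOOP_OPTS=--add-opens java.base/java.lang=ALL-UNNAMED
C:\>C:\opt\mapr\server\configure.bat -N mycluster -c -secure -C node1:7222 node2:7222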
MFS-15570
Using the Data Fabric client on Mac OS X fails with the following warning:
2023-01-17 10:13:51,170 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
fs/common/ossl_dynlib.cc:185: dlopen(libssl.1.1.dylib, 6): image not found
This warning is returned when you use the client to run the hadoop fs -ls command.
Workaround: Create the following symlinks:
# ln -s /usr/local/opt/openssl@1.1/lib/libssl.1.1.dylib /usr/local/lib/libssl.1.1.dylib
# ln -s /usr/local/opt/openssl@1.1/lib/libcrypto.1.1.dylib /usr/local/lib/libcrypto.1.1.dylib
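After the symlinks are in place, rerunning a client command should no longer produce the warning. For example (a quick check; any hadoop fs command works):
# hadoop fs -ls /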

Control System

For known issues related to the Control System, see Control System - 7.2.0.0 Release Notes.

Installer

See IN-3223 in Installer Known Issues.

HPE Ezmeral Data Fabric File Store

MFS-14392
When used with the -macs 1 parameter, the maprcli virtualip list command mistakenly lists the POSIX service if the node has an active FUSE mount. For example:
# maprcli virtualip list -macs 1 -json
{
    "timestamp":1638100744415,
    "timeofday":"2021-11-28 03:59:04.415 GMT-0800 AM",
    "status":"OK",
    "total":0,
    "data":[
        {
            "h":0,
            "service":"POSIX_CLIENT_FUSE",
            "hn":"m2-hux6k-01-n1.mip.storage.hpecorp.net",
            "ip":"10.163.160.141",
            "mac":"50:5d:ac:2e:44:1b"
        },
        {
            "h":0,
            "service":"NFS_V3",
            "hn":"m2-hux6k-01-n1.mip.storage.hpecorp.net",
            "ip":"10.163.160.141",
            "mac":"50:5d:ac:2e:44:1b"
        },
        {
            "h":0,
            "service":"S3",
            "hn":"m2-hux6k-01-n2.mip.storage.hpecorp.net",
            "ip":"10.163.160.142",
            "mac":"50:5d:ac:2e:44:4b"
        },
        {
            "h":0,
            "service":"S3",
            "hn":"m2-hux6k-01-n3.mip.storage.hpecorp.net",
            "ip":"10.163.160.143",
            "mac":"60:de:f3:04:e2:1c"
        },
Workaround: To prevent the POSIX listing from displaying, stop the POSIX service on the node, and rerun the command.
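For example, on a systemd-based node (a sketch that assumes the basic POSIX client package is installed; if the node runs the platinum edition, substitute mapr-posix-client-platinum):
# systemctl stop mapr-posix-client-basic
# maprcli virtualip list -macs 1 -json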
MFS-15672
In maprcli commands, the filter option supports only the "==" (exact match) and "!=" (does not match) operators; other operators do not work.

Workaround: This issue is fixed in a patch. To obtain the fix, apply the latest mapr-patch for your Data Fabric release to your installation.

HPE Ezmeral Data Fabric Database

MAPRDB-2583

The High File Server Memory alarm is raised during replication of JSON tables when the tables have a row size of less than 1 KB and secondary indexes have been created on the tables being replicated.

Workaround: See Addressing High Memory File Server Alarm for JSON Table Replication.

Ranger

RAN-260
Because of conflicts between the Ranger Debian packages in EEP 9.1.0, installing or upgrading multiple Ranger packages to EEP 9.1.0 on the same node causes the following installation error:
trying to overwrite '/opt/mapr/ranger/rangerversion', which is also in package <package_name>
Workaround: To avoid this issue, pass the --force-overwrite option to dpkg when installing the packages. You can accomplish this through apt by using the -o DPkg::options::="--force-overwrite" option.
For example, the following command installs the mapr-ranger and mapr-ranger-usersync packages by passing the --force-overwrite option to dpkg through apt:
sudo apt install mapr-ranger mapr-ranger-usersync -o DPkg::options::="--force-overwrite"
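The --force-overwrite option tells dpkg to allow one package to overwrite a file that is owned by another installed package, which is why it resolves the conflict over /opt/mapr/ranger/rangerversion.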

Upgrade

OTSDB-147
After upgrading OpenTSDB from version 2.4.0 to version 2.4.1, the crontab on each OpenTSDB node is not updated and continues to point to the previous OpenTSDB version.
Workaround: To fix the crontab, run the following commands on each OpenTSDB node, replacing $MAPR_USER with the name of the cluster admin (typically mapr):
  • RHEL
    export CRONTAB="/var/spool/cron/$MAPR_USER"
    sed -i 's/2.4.0/2.4.1/' $CRONTAB
  • SLES
    export CRONTAB="/var/spool/cron/tabs/$MAPR_USER"
    sed -i 's/2.4.0/2.4.1/' $CRONTAB
  • Ubuntu
    export CRONTAB="/var/spool/cron/crontabs/$MAPR_USER"
    sed -i 's/2.4.0/2.4.1/' $CRONTAB
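To verify the fix, you can list the crontab entries for the cluster admin and confirm that they now reference version 2.4.1 (a quick check; run it as root):
    crontab -u $MAPR_USER -l | grep 2.4.1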
COMSECURE-615
Upgrading directly from release 6.1.x to release 7.x.x can fail because the upgrade process reads password information from the default Hadoop ssl-server.xml and ssl-client.xml files rather than the original .xml files. Note that upgrades from release 6.2.0 to 7.x.x are not affected by this issue.
The issue does not occur, and the upgrade succeeds, if either of the following conditions is true:
  • The existing password is mapr123 (the default value) when the EEP upgrade is initiated.
  • You upgrade the cluster first to release 6.2.0 and then subsequently to release 7.x.x.
Understanding the Upgrade Process and Workaround: The workaround in this section modifies the release 6.1.x-to-7.x.x upgrade so that it works like the 6.2.0-to-7.x.x upgrade.
Upgrading to core 7.x.x requires installing the mapr-hadoop-util package. Before the upgrade, Hadoop files are stored in a subdirectory such as hadoop-2.7.0. Installation of the mapr-hadoop-util package:
  • Creates a subdirectory to preserve the original .xml files. This subdirectory has the same name as the original Hadoop directory and a timestamp suffix (for example, hadoop-2.7.0.20210324131839.GA).
  • Creates a subdirectory for the new Hadoop version (hadoop-2.7.6).
  • Deletes the original hadoop-2.7.0 directory.
During the upgrade, a special file called /opt/mapr/hadoop/prior_hadoop_dir needs to be created to store the location of the prior Hadoop directory. The configure.sh script uses this location to copy the ssl-server.xml and ssl-client.xml files to the new hadoop-2.7.6 subdirectory.
In a release 6.1.x-to-7.x.x upgrade, the prior_hadoop_dir file does not get created, and configure.sh uses the default ssl-server.xml and ssl-client.xml files provided with Hadoop 2.7.6. In this scenario, any customization in the original .xml files is not applied.
The following workaround restores the missing prior_hadoop_dir file. With the file restored, configure.sh -R consumes the prior_hadoop_dir file and copies the original ssl-server.xml and ssl-client.xml files into the hadoop-2.7.6 directory, replacing the files that contain the default mapr123 password.
Workaround: After upgrading the ecosystem packages, but before running configure.sh -R:
  1. Create a file named prior_hadoop_dir that contains the Hadoop directory path. For example:
    # cat /opt/mapr/hadoop/prior_hadoop_dir
    /opt/mapr/hadoop/hadoop-2.7.0.20210324131839.GA
    If multiple directories are present, specify the directory with the most recent timestamp. (A sketch of the commands for creating this file follows these steps.)
  2. Run the configure.sh -R command as instructed to complete the EEP upgrade.
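For example, the following commands locate the preserved directory and write its path into the file (a sketch that uses the example path above and assumes the preserved directory keeps the .GA timestamp suffix; verify the path on your node before writing it):
# ls -d /opt/mapr/hadoop/hadoop-*.GA
# echo "/opt/mapr/hadoop/hadoop-2.7.0.20210324131839.GA" > /opt/mapr/hadoop/prior_hadoop_dir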