Known Issues at Release (Release 7.0.0)

You may encounter the following known issues after upgrading to Release 7.0.0. This list is current as of the release date.

IMPORTANT The "Support notices of known issues" tool is no longer available, but you can obtain the same information by logging on to the HPE Support Center. See Support Articles in the HPE Support Center.

Where available, the workaround for an issue is also documented. HPE regularly releases maintenance releases and patches to fix issues. We recommend checking the release notes for any subsequent maintenance releases to see if one or more of these issues are fixed.

HPE Ezmeral Data Fabric File Store

CORE-645
The FileMigrate feature does not work with HPE Ezmeral Data Fabric versions 6.2, 6.2.1, and 7.0.0. The FileMigrate feature needs to be upgraded to support JDK 11 and AWS S3 1.11.X versions.
MFS-15087
Running configure.sh on a SLES 15 SP3 node returns the following disksetup error:
  # /opt/mapr/server/configure.sh -C 10.163.164.140 -Z 10.163.164.140 -RM 10.163.164.140 -HS 10.163.164.140 -N c.140 -secure -F ~/disklist.txt -dare -genkeys
  grep: /opt/mapr/conf/disktab: No such file or directory
  Error 1, Operation not permitted. Fileserver start
The error occurs because sudo is not installed by default on SLES 15 SP3.
Workaround: Before installing HPE Ezmeral Data Fabric on the node, install sudo manually by using zypper install sudo.
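As a quick pre-install check, you can confirm that sudo is present before running configure.sh. The following sketch is illustrative only (the have_cmd helper is not part of the product) and simply prints a hint when sudo is missing:

```shell
# Illustrative helper: test whether a command is on the PATH.
have_cmd() { command -v "$1" >/dev/null 2>&1; }

# On minimal SLES 15 SP3 installs, sudo may be absent by default.
if have_cmd sudo; then
  echo "sudo present; safe to run configure.sh"
else
  echo "sudo missing; run 'zypper install sudo' first"
fi
```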
MFS-14347
cliframework CLIInterface leaks objects on streams commands.
MFS-11105
Unable to list the active clients of NFS Gateways.
MFS-6077
MFS assert failure in mapr::fs::BlockAllocator::Free.
MFS-2703
cldb crash at CryptoPP::HashFilter::HashFilter.
MFS-2032
Each thread in the FileClient sending RPCs to the cluster will issue redundant container lookups when the container is not available.
MFS-15672
For the 'filter' option in maprcli commands, operators other than "==" (exact match) and "!=" (does not match) do not work.

Workaround: This issue has been fixed with a patch. To obtain the fix, apply the latest mapr-patch for your Data Fabric release version to your Data Fabric installation.

HPE Ezmeral Data Fabric Database

MAPRDB-2529
If you run the following commands without a license, the commands hang:
  • maprcli stream replica add
  • maprcli stream edit (Without a license, this command only hangs when you run it with the -compact option.)

Workaround: Add a license as described in Adding a License.

MAPRDB-2506
Query conditions on masked fields that a user does not have permission to read return data based on the masked value and not the actual values of the fields.
For example, suppose field a.b.c = 135 in table t1. The field is masked and appears as a.b.c = 0 to users who do not have permission to read the masked data. Suppose that you do not have permission to read the masked data and you want to find all the documents in t1 where a.b.c is greater than 10. The search does not return any documents:
  mapr dbshell find /tmp/t1 --c '{"$gt":{"a.b.c":10}}'
  0 document(s) found
If the search instead specifies the masked value in the query condition (a.b.c = 0), it matches:
  mapr dbshell find /tmp/t1 --c '{"$eq":{"a.b.c":0}}'
  {"_id":"d1","a":{"b":{"c":0,"c1":"string1"}}}
  1 document(s) found.
The search returns a document only because the query condition matched the masked value, not the actual value.

Workaround: Do not include masked fields in query conditions unless you have permission to read the data in the masked fields. Masking permissions are set through the defaultunmaskedreadperm and unmaskedreadperm options at the table, column family, or column level. See Dynamic Data Masking for more information.

MAPRDB-2458
Hitting assert in RBType_RB_REMOVE (elm=0x0, head=0x46472c0) at fs/server/db/rbtree.h.
MAPRDB-2583

The High File Server Memory Alarm is raised during replication of JSON tables when the row size is less than 1 KB and secondary indexes have been created on the JSON table being replicated.

Workaround: See Addressing High Memory File Server Alarm for JSON Table Replication.

Installation and Configuration Issues

The following features are still in development and are not currently available in release 7.0.0:

PACC Feature Is Unavailable

The Persistent Application Client Container (PACC) is currently unavailable for use with release 7.0.0. PACC information will be added to the release 7.0.0 documentation as soon as the PACC feature becomes available.

Development Environment for HPE Ezmeral Data Fabric Is Unavailable

The Development Environment for HPE Ezmeral Data Fabric is currently unavailable for use with release 7.0.0. The development environment script will be updated and documented for release 7.0.0 as soon as the updated Docker image is available.

FUSE-Based Client Issues

The mounting process can be inhibited, or access to the FUSE mount by using available commands can hang, if security software blocks the FUSE subsystem or the kernel.

MCS and Object Store Interface Issues

MON-8065

Users who have login permission through group membership are unable to log in to the Control System (MCS).

Workaround: Grant login permission to each user individually so that the user can log in to the Control System (MCS).

MON-7568
If you are using SPNEGO authentication, the Control System (MCS) UI becomes unresponsive after the session timeout.
This issue is fixed when you apply the mapr-webserver-7.0.0.6.20220526103815-1.noarch and mapr-apiserver-7.0.0.6.20220526103815-1.noarch patches.
Workaround: Open a new browser tab after the session timeout.
MFS-14802
The Control System (MCS) and commands such as maprcli and mrconfig incorrectly label binary byte values with decimal units. For example, the system calculates disk utilization in mebibytes (MiB) but displays an MB (megabyte) label next to the number when it should display MiB.
Workaround: None.
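The difference is small but real: 1 MiB is 1,048,576 bytes, while 1 MB is 1,000,000 bytes, so a value computed in MiB but labeled MB understates the true decimal-megabyte figure by about 4.9%. A minimal sketch of the conversion (the mib_to_mb function is hypothetical, for illustration only):

```shell
# Convert a size computed in binary mebibytes (MiB) to decimal megabytes (MB).
# 1 MiB = 1048576 bytes; 1 MB = 1000000 bytes.
mib_to_mb() {
  awk -v m="$1" 'BEGIN { printf "%.2f\n", m * 1048576 / 1000000 }'
}

mib_to_mb 100   # 100 MiB is about 104.86 MB
```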
Lack of a License Prevents the Use of Some Features
If you have not applied a license, some MCS and Object Store user-interface features are not available. For example, MCS displays the Register Now button until you apply a community license or other license. Clicking the button does not provide license information.
The Object Store interface displays an Upgrade Now! button and prevents you from creating or editing an account until you apply an enterprise license or enterprise trial license. Clicking the button does not provide license information.
Workaround: To obtain a community license, enterprise license, or enterprise trial license, navigate to the My Clusters page, and create an account. To apply the license, see Adding a License.

Permissions

MFS-5219
Permission denied for the execute operation when the user's group is specified in ACEs.

Upgrade Issues

OTSDB-147
After upgrading OpenTSDB from version 2.4.0 to version 2.4.1, the crontab on each OpenTSDB node is not updated and continues to point to the previous OpenTSDB version.
Workaround: To fix the crontab, run the following commands on each OpenTSDB node, replacing $MAPR_USER with the name of the cluster admin (typically mapr):
  • RHEL
    export CRONTAB="/var/spool/cron/$MAPR_USER"
    sed -i 's/2.4.0/2.4.1/' $CRONTAB
  • SLES
    export CRONTAB="/var/spool/cron/tabs/$MAPR_USER"
    sed -i 's/2.4.0/2.4.1/' $CRONTAB
  • Ubuntu
    export CRONTAB="/var/spool/cron/crontabs/$MAPR_USER"
    sed -i 's/2.4.0/2.4.1/' $CRONTAB
COMSECURE-615
Upgrading directly from release 6.1.x to release 7.0.0 can fail because the upgrade process reads password information from the default Hadoop ssl-server.xml and ssl-client.xml files rather than the original .xml files. Note that upgrades from release 6.2.0 to 7.0.0 are not affected by this issue.
The issue does not occur, and the upgrade succeeds, if either of the following conditions is true:
  • The existing password is mapr123 (the default value) when the EEP upgrade is initiated.
  • You upgrade the cluster first to release 6.2.0 and then subsequently to release 7.0.0.
Understanding the Upgrade Process and Workaround: The workaround in this section modifies the release 6.1.x-to-7.0.0 upgrade so that it works like the 6.2.0-to-7.0.0 upgrade.
Upgrading to core 7.0.0 requires installing the mapr-hadoop-util package. Before the upgrade, Hadoop files are stored in a subdirectory such as hadoop-2.7.0. Installation of the mapr-hadoop-util package:
  • Creates a subdirectory to preserve the original .xml files. This subdirectory has the same name as the original Hadoop directory and a timestamp suffix (for example, hadoop-2.7.0.20210324131839.GA).
  • Creates a subdirectory for the new Hadoop version (hadoop-2.7.6).
  • Deletes the original hadoop-2.7.0 directory.
During the upgrade, a special file, /opt/mapr/hadoop/prior_hadoop_dir, must be created to store the location of the prior Hadoop directory. The configure.sh script uses this location to copy the ssl-server.xml and ssl-client.xml files to the new hadoop-2.7.6 subdirectory.
In a release 6.1.x-to-7.0.0 upgrade, the prior_hadoop_dir file does not get created, and configure.sh uses the default ssl-server.xml and ssl-client.xml files provided with Hadoop 2.7.6. In this scenario, any customization in the original .xml files is not applied.
The following workaround restores the missing prior_hadoop_dir file. With the file restored, configure.sh -R consumes the prior_hadoop_dir file and copies the original ssl-server.xml and ssl-client.xml files into the hadoop-2.7.6 directory, replacing the files that contain the default mapr123 password.
Workaround: After upgrading the ecosystem packages, but before running configure.sh -R:
  1. Create a file named prior_hadoop_dir that contains the Hadoop directory path. For example:
    # cat /opt/mapr/hadoop/prior_hadoop_dir
    /opt/mapr/hadoop/hadoop-2.7.0.20210324131839.GA
    If multiple directories are present, specify the directory with the most recent timestamp.
  2. Run the configure.sh -R command as instructed to complete the EEP upgrade.
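Step 1 can be scripted: because the backup directory name embeds a timestamp, a lexical sort picks the most recent one. The sketch below runs against a temporary tree rather than the real /opt/mapr, so the directory names here are examples, not values from your cluster:

```shell
# Temporary stand-in for /opt/mapr, so the sketch is self-contained.
MAPR_HOME=$(mktemp -d)
mkdir -p "$MAPR_HOME/hadoop/hadoop-2.7.0.20200101000000.GA" \
         "$MAPR_HOME/hadoop/hadoop-2.7.0.20210324131839.GA"

# The timestamp suffix sorts lexically, so the last entry is the newest backup.
ls -d "$MAPR_HOME"/hadoop/hadoop-2.7.0.* | sort | tail -n 1 \
  > "$MAPR_HOME/hadoop/prior_hadoop_dir"

cat "$MAPR_HOME/hadoop/prior_hadoop_dir"
```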