Known Issues (Release 7.7)

You might encounter the following known issues after upgrading to release 7.7. This list is current as of the release date.

IMPORTANT The "Support notices of known issues" tool is no longer available, but you can obtain the same information by logging on to the HPE Support Center. See Support Articles in the HPE Support Center.

Where available, the workaround for an issue is also documented. HPE regularly releases maintenance releases and patches to fix issues. We recommend checking the release notes for any subsequent maintenance releases to see if one or more of these issues are fixed.

Client Libraries

MFS-18249

The FUSE-based POSIX client remains in a dead/inactive state when the ticket expires.

Workaround: To generate a new ticket, manually update the JWT access and refresh tokens.

MFS-18258

When you add a new cluster to a cluster group, the FUSE-based POSIX client and the loopbacknfs POSIX client take about five minutes to load or list the newly added cluster.

Workaround: None.

Data Fabric UI

Sign-in Issues

DFUI-160
If you sign in to the Data Fabric UI as an SSO user but you do not have fabric-level login permission, a sign-in page for the "Managed Control System" (MCS) is displayed. The "Managed Control System" sign-in is not usable for the consumption-based HPE Ezmeral Data Fabric.
Workaround: Use one of the following workarounds:
  • Edit the MCS URL, and retry logging in. For example, change this URL:
    https://<host-name>:8443/app/mcs/#/app/overview
    To this:
    https://<host-name>:8443/app/dfui
  • Try signing in as a user who has fabric-level login permission.
  • Dismiss the MCS page, clear your browser cache, and retry signing in.
DFUI-437
If you sign in to the Data Fabric UI as a non-SSO user and then sign out and try to sign in as an SSO user, a sign-in page for the "Managed Control System" (MCS) is displayed. The "Managed Control System" sign-in is not usable for the consumption-based HPE Ezmeral Data Fabric.
Workaround: Use one of the following workarounds:
  • Edit the MCS URL, and retry logging in. For example, change this URL:
    https://<host-name>:8443/app/mcs/#/app/overview
    To this:
    https://<host-name>:8443/app/dfui
  • Dismiss the "Managed Control System" sign-in screen, and retry signing in as a non-SSO user.
  • Dismiss the MCS page, clear your browser cache, and retry signing in.
DFUI-811
If you launch the Data Fabric UI, sign out, wait 5 to 10 minutes, and then attempt to sign in, a sign-in page for the "Managed Control System" (MCS) is displayed.
Workaround: See the workaround for DFUI-437.
DFUI-826
In a cloud fabric, an empty page is displayed after a session expires and you subsequently click on a fabric name. The browser can display the following URL:
https://<hostname>:8443/oath/login
Workaround: None.
DFUI-874
Sometimes when you attempt to sign in to the Data Fabric UI, the "Managed Control System" (MCS) is displayed, or the Object Store UI is displayed.
Workaround: See the workaround for DFUI-437.
DFUI-897
A user with no assigned role cannot sign in to the Data Fabric UI.
Workaround: Using your SSO provider software, assign a role to the user, and retry the sign-in operation.
DFUI-902
Incorrect resource data is displayed when an LDAP user signs in to the Data Fabric UI without any SSO roles.
Workaround: See the workaround for DFUI-897.
DFUI-1123
Attempting to sign in to the Data Fabric UI as a group results in a login error message in the browser. For example:
https://<hostname>:8443/login?error
Workaround: None.
DFUI-1135
The Data Fabric UI does not allow an SSO user to log in after an unsuccessful login attempt.
Workaround: None.

Mirroring Issues

DFUI-1227
If you create a mirror volume with a security policy, an error is generated when you try to remove the security policy.
Workaround: None.
DFUI-1229
Data ACEs (Access Control Expressions) on a mirror volume cannot be edited.
Workaround: None.

Display Issues

DFUI-1186
After you complete the SSO setup for a new fabric, fabric resources such as volumes and mirrors are not immediately displayed in the Data Fabric UI.
Workaround: Wait at least 20 minutes for the Data Fabric UI to display the fabric details.
DFUI-1221
If a fabric includes a large number of resources, loading the resources to display in the Resources card on the home page can take a long time.
Workaround: None.
DFUI-2102
When you create a table replica on a primary cluster with the source table on a secondary cluster, the replication operation times out. However, the table replica is successfully created on the primary cluster. The table replica appears in the Replication tab, but does not appear in the Data Fabric UI Graph or Table view for the primary cluster.

This behavior is the same for both a source table on the primary cluster and the replica on the secondary cluster.

Workaround: None.
DFUI-2099
When you delete a table replica from the Data Fabric UI Home page, the table replica remains listed in the Replication tab. When you select the table on the Replication tab, a message returns stating that the requested file does not exist.
Workaround: None.

External S3

DFUI-2157
Editing buckets on external S3 servers is not supported.
Workaround: None.
MFS-18893
The s3cmd cp command returns an error when copying a large or jumbo object from the Data Fabric to an external S3 bucket, or across external S3 buckets, even though the object is copied successfully.
Workaround: None.
MFS-18905
The copy object operation fails intermittently when you use the AWS CLI to copy an object across S3 buckets from various cloud providers to the Data Fabric, and vice versa.
Workaround: None.

Installation or Fabric Creation

MFS-18734
Release 7.7.0 of the HPE Ezmeral Data Fabric has a dependency on the libssl1.1 package, which is not included in Ubuntu 22.04. As a result, you must apply the package manually to Ubuntu 22.04 nodes before installing Data Fabric software.
Workaround: On every node in the fabric or cluster:
NOTE The following steps are required for cluster nodes but are not required for client nodes.
  1. Download the libssl1.1 package:
    wget http://archive.ubuntu.com/ubuntu/pool/main/o/openssl/libssl1.1_1.1.0g-2ubuntu4_amd64.deb
  2. Use the following command to install the package:
    sudo dpkg -i libssl1.1_1.1.0g-2ubuntu4_amd64.deb
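For convenience, the two steps above can be combined into a small script to run on each node. This is a sketch only; the dry-run argument is a convention of the sketch, not a real installer flag:

```shell
# install_libssl1: download and install the libssl1.1 package on an Ubuntu
# 22.04 node. Pass "echo" as the first argument for a dry run that only
# prints the commands instead of executing them.
install_libssl1() {
  run=${1:-}
  pkg=libssl1.1_1.1.0g-2ubuntu4_amd64.deb
  url=http://archive.ubuntu.com/ubuntu/pool/main/o/openssl/$pkg
  $run wget "$url"
  $run sudo dpkg -i "$pkg"
}

# Dry run: prints the two commands without downloading or installing anything.
install_libssl1 echo
```

Run `install_libssl1` with no argument on each node to perform the actual download and installation.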
MFS-18437
Fabric creation can fail if host-name resolution takes more than 200 ms.
Workaround: Check your host-name resolution time, and take steps to improve it. See Troubleshoot Fabric Creation. Then retry fabric deployment.
DFUI-565, EZINDFAAS-169
Installation or fabric creation can fail if a proxy is used for internet traffic with the HPE Ezmeral Data Fabric.
Workaround: Export the following proxy settings, and retry the operation:
# cat /etc/environment
export http_proxy=http://<proxy_server_hostname_or_IP>:<proxy_port>
export https_proxy=http://<proxy_server_hostname_or_IP>:<proxy_port>
export HTTP_PROXY=http://<proxy_server_hostname_or_IP>:<proxy_port>
export HTTPS_PROXY=http://<proxy_server_hostname_or_IP>:<proxy_port>

NFSv4

MFS-18264
Attempts to mount the NFS4 server fail and return the following error:
Mount.nfs4: Stale file handle
Workaround:
  1. Update the EXPORT section of /opt/mapr/conf/nfs4server.conf as follows:
    EXPORT
    {
      # Export Id (mandatory, each EXPORT must have a unique Export_Id)
      Export_Id = 30;
    
      # Exported path (mandatory). Specify /mapr/<cluster-name>, not just /mapr
      Path = /mapr/clustername;
    
      # Pseudo Path (required for NFS v4)
      Pseudo = /mapr;
    
      Squash = No_Root_Squash;
    
      # Required for access (default is None)
      # Could use CLIENT blocks instead
      Access_Type = RW;
    
      # Security type (krb5,krb5i,krb5p)
      SecType = sys;
    
      # Exporting FSAL
      FSAL {
        Name = MAPR;
      }
    
      #SuperUser_Uid = 0;
    }
    For more information about the /opt/mapr/conf/nfs4server.conf file, see Configuring NFSv4 Server.
  2. Restart the NFSv4 server:
    maprcli node services -nodes <node names> -nfs4 restart
    For more information about starting or restarting NFSv4, see Starting, Stopping, and Restarting HPE Ezmeral Data Fabric NFSv4.

Object Store

MFS-17233
On cloud (AWS, Azure, or GCP) fabrics, if an instance is rebooted, the public IP addresses can change. If this happens, the MOSS certificates must be regenerated to include the new IP addresses, and the changes must be propagated to all fabric nodes.
Workaround: To regenerate the MOSS certificates:
  1. Identify the new external IP address for each cloud instance.
  2. On each cloud instance:
    1. Log on as a sudo user.
    2. Update the certificate using the following manageSSLKeys.sh command:
      /opt/mapr/server/manageSSLKeys.sh createusercert -u moss -ug mapr:mapr -k <ssl_keystore_password> -p <ssl_truststore_password> -ips "<new external ip of the instance>" -a moss -w
    3. Restart the MOSS service:
maprcli node services -nodes $(hostname -f) -name s3server -action restart -json
    NOTE You can obtain the ssl_keystore_password and ssl_truststore_password from the node where the configure.sh -secure -genkeys command was issued. In the /opt/mapr/conf/store-passwords.txt file, the passwords are listed under keys as ssl.server.keystore.keypassword and ssl.server.truststore.password.
    Use the following commands to ensure correct file ownership:
    chown mapr:mapr /opt/mapr/conf/ssl_usertruststore.p12
    chmod 0444 /opt/mapr/conf/ssl_usertruststore.p12
    chown mapr:mapr /opt/mapr/conf/ssl_userkeystore.p12
    chmod 0400 /opt/mapr/conf/ssl_userkeystore.p12
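The NOTE above points to /opt/mapr/conf/store-passwords.txt for the keystore and truststore passwords. The following helper is a sketch for looking them up; it assumes the file uses key=value lines (verify the format on your node), and the function name is illustrative:

```shell
# ssl_password: print the value for one key in a store-passwords.txt-style
# file of key=value lines. The default path is the one given in the NOTE.
ssl_password() {
  key=$1
  file=${2:-/opt/mapr/conf/store-passwords.txt}
  sed -n "s/^${key}=//p" "$file"
}

# Example (on the node where configure.sh -secure -genkeys was issued):
# ssl_password ssl.server.keystore.keypassword
# ssl_password ssl.server.truststore.password
```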
DFUI-519
An SSO user cannot create buckets in the Data Fabric UI or the Object Store. This applies to an SSO user with any role, such as infrastructure administrator, fabric manager, or developer.
Workaround: Create an IAM policy that includes all permissions in the user account. You must do this by using the MinIO Client or the Object Store UI. Assign the IAM policy to the SSO user. Then log in to the Data Fabric UI, and create or view a bucket.
DFUI-577
Downloading a large file (1 GB or larger) can fail with the following error:
Unable to download file "<filename>": Request failed with status code 500
Workaround: Instead of using the Data Fabric UI to download a large file, use a MinIO Client (mc) command. For more information about mc commands, see MinIO Client (mc) Commands.
MFS-18250
The S3 server crashes when you copy a jumbo object (object size greater than 256 MB) from one bucket to another bucket across fabrics by using the AWS S3 CLI.
Workaround: Set the max_concurrent_requests parameter to 1 in the AWS CLI configuration file.
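This setting lives in the AWS CLI configuration file, typically ~/.aws/config. A snippet for the default profile:

```ini
[default]
s3 =
    max_concurrent_requests = 1
```

Equivalently, `aws configure set default.s3.max_concurrent_requests 1` writes the same entry.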

Online Help

DFUI-459
If a proxy is used for internet traffic with the HPE Ezmeral Data Fabric, online help screens can time out or fail to fetch help content.
Workaround: Add the following proxy servers to the /opt/mapr/apiserver/conf/properties.cfg file:
  • http.proxy=<proxyServer>:<proxyPort>
  • https.proxy=<proxyServer>:<proxyPort>

Security Policies

MFS-18154
A security policy created on a cloud-based primary fabric (such as AWS) is not replicated on to a secondary fabric created on another cloud provider (such as GCP).
Workaround: None.

Topics

DFUI-637
A non-LDAP SSO user authenticating to Keycloak cannot create a topic in the Data Fabric UI.
Workaround: None.
DFUI-639
A non-LDAP SSO user authenticating to Keycloak cannot create a volume or stream using the Data Fabric UI.
Workaround: None. Non-LDAP and SSO local users are not currently supported.

Upgrade

COMSECURE-615
Upgrading directly from release 6.1.x to release 7.x.x can fail because the upgrade process reads password information from the default Hadoop ssl-server.xml and ssl-client.xml files rather than the original .xml files. Note that upgrades from release 6.2.0 to 7.x.x are not affected by this issue.
The issue does not occur, and the upgrade succeeds, if either of the following conditions is true:
  • The existing password is mapr123 (the default value) when the EEP upgrade is initiated.
  • You upgrade the cluster first to release 6.2.0 and then subsequently to release 7.x.x.
Understanding the Upgrade Process and Workaround: The workaround in this section modifies the release 6.1.x-to-7.x.x upgrade so that it works like the 6.2.0-to-7.x.x upgrade.
Upgrading to core 7.x.x requires installing the mapr-hadoop-util package. Before the upgrade, Hadoop files are stored in a subdirectory such as hadoop-2.7.0. Installation of the mapr-hadoop-util package:
  • Creates a subdirectory to preserve the original .xml files. This subdirectory has the same name as the original Hadoop directory and a timestamp suffix (for example, hadoop-2.7.0.20210324131839.GA).
  • Creates a subdirectory for the new Hadoop version (hadoop-2.7.6).
  • Deletes the original hadoop-2.7.0 directory.
During the upgrade, a special file called /opt/mapr/hadoop/prior_hadoop_dir needs to be created to store the location of the prior Hadoop directory. The configure.sh script uses this location to copy the ssl-server.xml and ssl-client.xml files to the new hadoop-2.7.6 subdirectory.
In a release 6.1.x-to-7.x.x upgrade, the prior_hadoop_dir file does not get created, and configure.sh uses the default ssl-server.xml and ssl-client.xml files provided with Hadoop 2.7.6. In this scenario, any customization in the original .xml files is not applied.
The following workaround restores the missing prior_hadoop_dir file. With the file restored, configure.sh -R consumes the prior_hadoop_dir file and copies the original ssl-server.xml and ssl-client.xml files into the hadoop-2.7.6 directory, replacing the files that contain the default mapr123 password.
Workaround: After upgrading the ecosystem packages, but before running configure.sh -R:
  1. Create a file named prior_hadoop_dir that contains the Hadoop directory path. For example:
    # cat /opt/mapr/hadoop/prior_hadoop_dir
    /opt/mapr/hadoop/hadoop-2.7.0.20210324131839.GA
    If multiple directories are present, specify the directory with the most recent timestamp.
  2. Run the configure.sh -R command as instructed to complete the EEP upgrade.
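Step 1 above can be sketched as a small helper that selects the preserved Hadoop directory with the most recent timestamp. The function name and base-directory argument are illustrative:

```shell
# record_prior_hadoop: write the newest preserved hadoop-2.7.0.* directory
# path into prior_hadoop_dir under the given base directory.
record_prior_hadoop() {
  base=${1:-/opt/mapr/hadoop}
  # Timestamped directory names sort chronologically, so the last sorted
  # entry is the most recent one.
  prior=$(ls -d "$base"/hadoop-2.7.0.* 2>/dev/null | sort | tail -n 1)
  if [ -n "$prior" ]; then
    echo "$prior" > "$base/prior_hadoop_dir"
  fi
}

# On a cluster node: record_prior_hadoop /opt/mapr/hadoop
```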
EZINDFAAS-793
In an AWS deployment, after an upgrade from release 7.6.1 to 7.7.0, SSO authentication can be disabled. This can be caused by a missing Keycloak certificate. To communicate with Keycloak, the API server needs the Keycloak certificate to be part of the local ssl_truststore.
Workaround: On all nodes where the API server is running – except for the CLDB master – manually update the ssl_truststore with the Keycloak certificate. See Identifying All CLDB Nodes.
EZINDFAAS-811
Upgrading from release 7.6.1 to 7.7.0 fails if you initiate the upgrade from a Data Fabric UI URL that is not the URL provided by the seed node when you created the fabric. The seed node indicates the API server node that is the primary installer host.
Workaround: Use either of the following workarounds:
  • Initiate the upgrade from the Data Fabric UI URL provided by the seed node when the fabric was created. This URL uses the API server node with the running installer service.
  • If you must use an API server node other than the primary installer host:
    1. Copy the .pem file from the /infrastructure/terraform/ directory of the primary installer host to the /tmp directory of the secondary installer host where you want to initiate the upgrade.
    2. Restart the installer service on the secondary installer host:
      sudo service mapr-installer restart
    3. Initiate the upgrade as described in Upgrading a Data Fabric.
MFS-17624
An upgrade from release 7.5.0 or earlier to 7.6.0 or later can terminate with a fatal error detected by the Java Runtime Environment.
Workaround: None.
MFS-18920
Upgrading from release 7.5.0 to 7.7.0 can change the valid duration of the JWT access token. Normally the token should be valid for two hours. After an upgrade operation, the valid duration can change from 2 hours to 20 minutes. When this happens, exporting the MAPR_JWT_TOKEN_LOCATION, MAPR_JWT_REFRESH_TOKEN_LOCATION variables does not correct the issue.
Workaround: After upgrading, manually reset the valid duration of the JWT access token by using the steps in Access and Refresh Tokens.
DFUI-2163
After upgrading from HPE Ezmeral Data Fabric release 7.5 to release 7.6, SSO authentication is not enabled for the Data Fabric UI.
Workaround: Restart the API server after the upgrade.

Volumes

DFUI-638
A non-LDAP SSO user authenticating to Keycloak cannot create a volume in the Data Fabric UI.

Workaround: Create the volume by using the Data Fabric MinIO Client.