Configuring the S3 Gateway Client

Describes how to configure the S3 gateway client and provides configuration and operation examples.

NOTICE: The S3 gateway is included in the EEP 6.0.0 through EEP 8.0.0 repositories. The S3 gateway is not supported in HPE Ezmeral Data Fabric 7.0.0 and later, which introduces a native object storage solution. For more information, see HPE Ezmeral Data Fabric Object Store.

EEP 7.1.0 and later support S3 gateway 2.1.0 and later. S3 gateway 2.2.0.0 is available starting with EEP 8.0.0.

The S3 gateway (MinIO) client is located at /opt/mapr/objectstore-client/objectstore-client-<version>/util/mc.
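For example, you can verify that the client binary runs by checking its version. The version directory shown here is hypothetical; substitute your installed version:
# Verify the mc client; the 2.1.0 directory name is an example
/opt/mapr/objectstore-client/objectstore-client-2.1.0/util/mc --version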

Before you perform the tasks described in the following sections, complete the steps in Configuring S3 Gateway.

Add an S3-Compatible Service

To add an S3-compatible service, run the mc alias set command, as shown:
mc alias set ALIAS HOSTNAME ADMIN_ACCESS_KEY ADMIN_SECRET_KEY
Example
mc alias set myminio http://localhost:9000 minioadmin minioadmin
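To confirm that the alias was added, you can list the configured aliases or the buckets reachable through the new alias. A minimal check, assuming the myminio alias from the example above:
# Show all configured aliases
mc alias list
# List the buckets available through the new alias
mc ls myminio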

Create a User

You can create a user with or without a UID and GID. When you create a user without a UID and GID, the UID and GID of the S3 gateway process are used.
  • Creating a user without a UID and GID
    Run the mc admin user add command, as shown:
    mc admin user add ALIAS USERNAME PASSWORD
    Example
    mc admin user add myminio test qwerty78
  • Creating a user with a UID and GID
    Run the mc admin user add command with the UID and GID, as shown:
    mc admin user add ALIAS USERNAME PASSWORD UID GID
    Example
    mc admin user add myminio test qwerty78 1000 1000
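After creating a user, you can confirm that the user exists and is enabled. A minimal check, assuming the myminio alias and the test user from the examples above:
# List all users known to the gateway
mc admin user list myminio
# Show the status of a specific user
mc admin user info myminio test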

For more information, see the MinIO Admin Complete Guide.

Examples

After you configure the client, you can perform bucket and object operations in S3 through Java, Python, Hadoop, and Spark.

This section provides examples that use built-in users. The examples use the default admin user, minioadmin.

The examples demonstrate how to perform the following tasks:
  • list buckets
  • create a bucket
  • delete a bucket
  • check that a bucket exists
  • list files
  • upload a file
  • delete a file
  • check that a file exists
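For a quick command-line check, the same operations can also be performed with the mc client. The following sketch assumes the myminio alias from the earlier examples and a hypothetical bucket and file:
mc ls myminio                       # list buckets
mc mb myminio/mybucket              # create a bucket
mc ls myminio/mybucket              # list files (also confirms the bucket exists)
mc cp ./data.txt myminio/mybucket   # upload a file
mc stat myminio/mybucket/data.txt   # check that a file exists
mc rm myminio/mybucket/data.txt     # delete a file
mc rb myminio/mybucket              # delete a bucket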
Java Example
See Java Examples.
Python Example
See Python Examples.
Hadoop Example
For Hadoop, provide the access key, secret key, host, and port through the following properties:
-Dfs.s3a.access.key=ACCESS_KEY -Dfs.s3a.secret.key=SECRET_KEY -Dfs.s3a.endpoint=http(s)://HOST:PORT -Dfs.s3a.path.style.access=true -Dfs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
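For example, a complete command that lists a bucket with these properties might look like the following. The credentials, endpoint, and bucket name are placeholders drawn from the earlier examples, not required values:
hadoop fs \
  -Dfs.s3a.access.key=minioadmin \
  -Dfs.s3a.secret.key=minioadmin \
  -Dfs.s3a.endpoint=http://localhost:9000 \
  -Dfs.s3a.path.style.access=true \
  -Dfs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
  -ls s3a://mybucket/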
Spark Example
Because Spark uses the Hadoop libraries to access S3, you must provide the same access key, secret key, host, and port, as shown:
--conf spark.hadoop.fs.s3a.access.key=ACCESS_KEY --conf spark.hadoop.fs.s3a.secret.key=SECRET_KEY --conf spark.hadoop.fs.s3a.endpoint=http(s)://HOST:PORT --conf spark.hadoop.fs.s3a.path.style.access=true --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
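For example, a spark-submit invocation with these settings might look like the following. The application file and bucket path are hypothetical:
spark-submit \
  --conf spark.hadoop.fs.s3a.access.key=minioadmin \
  --conf spark.hadoop.fs.s3a.secret.key=minioadmin \
  --conf spark.hadoop.fs.s3a.endpoint=http://localhost:9000 \
  --conf spark.hadoop.fs.s3a.path.style.access=true \
  --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
  my_app.py s3a://mybucket/input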