Configuring the S3 Gateway Client
Describes how to configure the client and provides client configuration and operation examples.
EEP 7.1.0 and later support S3 gateway 2.1.0 and higher. S3 gateway 2.2.0.0 is available starting in EEP 8.0.0.
The S3 gateway (MinIO) client is located in /opt/mapr/objectstore-client/objectstore-client-<version>/util/mc.
Before you perform the tasks described in the following sections, complete the tasks in Configuring S3 Gateway.
Add an S3-Compatible Service
Run the mc alias set command, as shown:
mc alias set ALIAS HOSTNAME ADMIN_ACCESS_KEY ADMIN_SECRET_KEY
Example
mc alias set myminio http://localhost:9000 minioadmin minioadmin
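After adding the alias, you can optionally confirm that the client can reach the gateway. A minimal sketch, assuming the myminio alias created above and a gateway running on localhost:9000:

```shell
# Show server status for the alias added above
mc admin info myminio

# List the buckets visible to the admin user
mc ls myminio
```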
Create a User
- Creating a user without the UID and GID
  Run the mc admin user add command, as shown:
  mc admin user add ALIAS USERNAME PASSWORD
  Example
  mc admin user add myminio test qwerty78
- Creating a user with the UID and GID
  Run the mc admin user add command with the UID and GID, as shown:
  mc admin user add ALIAS USERNAME PASSWORD UID GID
  Example
  mc admin user add myminio test qwerty78 1000 1000
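A newly created user cannot perform operations until a policy is attached to it. As a sketch, you can attach the built-in readwrite policy to the test user created above (note that newer mc releases use mc admin policy attach instead):

```shell
# Attach the built-in readwrite policy to the user created above
mc admin policy set myminio readwrite user=test

# Verify the user status and attached policy
mc admin user info myminio test
```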
For more information, see MinIO Admin Complete Guide.
Examples
After you configure the client, you can perform bucket and object operations in S3 through Java, Python, Hadoop, and Spark.
This section provides examples with built-in users. In the examples, the default admin user, minioadmin, is used. The examples show how to:
- list buckets
- create a bucket
- delete a bucket
- check that a bucket exists
- list files
- upload a file
- delete a file
- check that a file exists
- Java Example
- See Java Examples.
- Python Example
- See Python Examples.
- Hadoop Example
- For Hadoop, provide the accessKey, secretKey, host, and port, as
shown:
-Dfs.s3a.access.key=ACCESS_KEY -Dfs.s3a.secret.key=PASSWORD -Dfs.s3a.endpoint=http(s)://HOST:PORT -Dfs.s3a.path.style.access=true -Dfs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
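For example, the options above can be passed to a hadoop fs command to list the contents of a bucket. In this sketch, the endpoint and the bucket name mybucket are placeholders; substitute your own values:

```shell
hadoop fs \
  -Dfs.s3a.access.key=ACCESS_KEY \
  -Dfs.s3a.secret.key=PASSWORD \
  -Dfs.s3a.endpoint=http://localhost:9000 \
  -Dfs.s3a.path.style.access=true \
  -Dfs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
  -ls s3a://mybucket/
```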
- Spark Example
- Since Spark uses Hadoop libraries to work with S3, you must provide the accessKey,
secretKey, host, and port, as
shown:
--conf spark.hadoop.fs.s3a.access.key=ACCESS_KEY --conf spark.hadoop.fs.s3a.secret.key=PASSWORD --conf spark.hadoop.fs.s3a.endpoint=http(s)://HOST:PORT --conf spark.hadoop.fs.s3a.path.style.access=true --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem
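For example, the configuration above can be supplied on a spark-submit invocation. In this sketch, the endpoint and the application name myapp.py are placeholders; substitute your own values:

```shell
spark-submit \
  --conf spark.hadoop.fs.s3a.access.key=ACCESS_KEY \
  --conf spark.hadoop.fs.s3a.secret.key=PASSWORD \
  --conf spark.hadoop.fs.s3a.endpoint=http://localhost:9000 \
  --conf spark.hadoop.fs.s3a.path.style.access=true \
  --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
  myapp.py
```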