MapR Connector Import and Export Options
Import Options
| Import Option | Description |
| --- | --- |
| `--as-avrodatafile`, `--as-textfile` | The format of the data file to be imported into MapR File System. An `hcat` or `hive` job type supports the `rcfile`, `orcfile`, and `textfile` formats. To set the file format, use the `-D` command-line option, for example `-D tdch.fileformat="fileformat"`. **NOTE:** If an import to MapR File System in Avro format fails, include `-Dmapreduce.job.user.classpath.first=true` in your command. |
| `--target-dir` | The MapR File System destination directory. |
| `--num-mappers` | The number of mappers for the import job. The default value is 4. |
| `--query` | The SQL query used to select data from a Teradata database. This option works only with the `textfile` and `avrofile` formats. |
| `--table` | The name of the source table in a Teradata system from which the data is imported. |
| `--columns` | The names of the columns to import from the source table in a Teradata system, in comma-separated format. For example: `--columns "col1,col2,col3"`. |
| `--hive-table` | The name of the target table in Hive or HCatalog. |
| `--fields-terminated-by` | The field separator to use in the imported files. This option applies only to the `textfile` format. The default value is `\t`. |
| `--split-by` | The column of the table used to split work units. |
| `--map-column-hive` | Overrides the mapping from SQL to Hive types for the configured columns. |
| `--where` | The WHERE clause to use during import. |
| `--staging-table` ¹ | The table for staging data before insertion into the destination table. Only applicable when using input-method `split.by.partition`. |
| `--num-partitions-for-staging-table` ¹ | The number of partitions to create when auto-creating the staging table. Only applicable when using input-method `split.by.partition`. |
| `--staging-database` ¹ | The database in which to create the staging table. Only applicable when using input-method `split.by.partition`. |
| `--staging-force` ¹ | Forces the connector to create a staging table, if supported. Only applicable when using input-method `split.by.partition`. |
| `--input-method` ¹ | The input method used to transfer data from Teradata. Supported values: `split.by.amp`, `split.by.hash`, `split.by.partition`, and `split.by.value`. |
| `--batch-size` ¹ | The number of rows processed per batch. |
| `--access-lock` ¹ | Applies an access lock on the database. |
| `--query-band` ¹ | Arbitrary query bands to set for all queries the connector runs. Specify the query bands in semicolon-separated `key=value` format. |
| `--skip-xviews` ¹ | Switches to the non-X versions of the system views to obtain metadata. |
| `--date-format` ¹ | Custom format for date columns. |
| `--time-format` ¹ | Custom format for time columns. |
| `--timestamp-format` ¹ | Custom format for timestamp columns. |
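The semicolon-separated `key=value` format that `--query-band` expects can be sketched with a small helper. This is an illustrative example only, not part of the connector, and the band names shown are hypothetical:

```python
# Build a query-band string in the semicolon-separated key=value
# format expected by --query-band. The band names below are
# hypothetical examples, not connector-defined names.
def build_query_band(bands):
    """Join a dict of band names/values into 'k1=v1;k2=v2;' form."""
    return "".join(f"{key}={value};" for key, value in bands.items())

band = build_query_band({"ApplicationName": "nightly_import", "Importance": "high"})
# Pass the result to Sqoop as: --query-band "<band>"
```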
Use the `-D` command-line option to set the file format. For example: `-D tdch.fileformat="fileformat"`
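Putting the options together, a minimal import invocation might look like the following sketch. The JDBC URL, credentials, table, and directory names are placeholder values for illustration:

```bash
# Illustrative import command; host, database, credentials, and
# paths are placeholders, not values from this documentation.
sqoop import \
  -D tdch.fileformat="textfile" \
  --connect jdbc:teradata://td-host/DATABASE=sales \
  --username dbuser --password dbpass \
  --table orders \
  --target-dir /user/mapr/orders \
  --num-mappers 4
```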
The following Sqoop import options are unsupported:
- `--append`
- `--compression-codec`
- `--direct`
- `--direct-split-size`
- `--compress`, `-z`
- `--check-column`
- `--incremental`
- `--last-value`
- `--mysql-delimiters`
- `--optionally-enclosed-by`
- `--hive-delims-replacement`
- `--hive-drop-import-delims`
- `--hive-partition-key`
- `--hive-partition-value`
- `--column-family`
- `--hbase-create-table`
- `--hbase-row-key`
- `--hbase-table`
- `--map-column-java`
- `--fetch-size`
- `--as-sequencefile`
Export Options
| Export Option | Description |
| --- | --- |
| `--table` | The name of the target table in a Teradata system. |
| `--export-dir` | The MapR File System directory containing the source files to be exported. |
| `--num-mappers` | The number of mappers for the export job. The default value is 4. |
| `--columns` | The names of the fields to export to the target table in a Teradata system, in comma-separated format. For exports from MapR File System, you can use this option only with the `avrofile` format. |
| `--staging-table` | The table in which data is staged before being inserted into the destination table. |
| `--keep-staging-table` ¹ | Specifies that the connector retain the staging table after a failure. |
| `--staging-database` ¹ | The database in which to create the staging table. |
| `--staging-force` ¹ | Forces the connector to create a staging table, if supported. |
| `--output-method` ¹ | The output method used to transfer data to Teradata. Supported values: `batch.insert` and `internal.fastload`. |
| `--query-band` ¹ | Arbitrary query bands to set for all queries that the connector runs. Specify the query bands in semicolon-separated `key=value` format. |
| `--error-table` ¹ | The prefix name for error tables. Only applicable when using output-method `internal.fastload`. |
| `--error-database` ¹ | Overrides the default error database name. Only applicable when using output-method `internal.fastload`. |
| `--fastload-socket-hostname` ¹ | The hostname or IP address of the host on which Sqoop runs. If not set, the connector auto-detects the host. Only applicable when using output-method `internal.fastload`. |
| `--fastload-socket-port` ¹ | The host port that fastload tasks use to synchronize state. Only applicable when using output-method `internal.fastload`. |
| `--fastload-socket-timeout` ¹ | The timeout value the server socket uses for fastload task connections. Only applicable when using output-method `internal.fastload`. |
| `--skip-xviews` ¹ | Switches to the non-X versions of the system views to obtain metadata. |
| `--date-format` ¹ | Custom format for date columns. |
| `--time-format` ¹ | Custom format for time columns. |
| `--timestamp-format` ¹ | Custom format for timestamp columns. |
Use the `-D` command-line option to set the file format. For example: `-D tdch.fileformat="fileformat"`

Supported export file format values are:
- textfile (default format)
- avrofile
- orcfile
- rcfile
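A minimal export invocation, using only the standard options from the table above, might look like the following sketch. The JDBC URL, credentials, table, and directory names are placeholder values for illustration:

```bash
# Illustrative export command; host, database, credentials, and
# paths are placeholders, not values from this documentation.
sqoop export \
  -D tdch.fileformat="textfile" \
  --connect jdbc:teradata://td-host/DATABASE=sales \
  --username dbuser --password dbpass \
  --table orders \
  --export-dir /user/mapr/orders \
  --num-mappers 4
```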
The following Sqoop export options are unsupported:
- `--batch`
- `--clear-staging-table`
- `--direct`
- `--update-key`
- `--update-mode`
- `--input-lines-terminated-by`
- `--input-optionally-enclosed-by`
- `--map-column-java`
- `--as-sequencefile`
¹ Only available starting in Sqoop-1.4.6-1707.