File external table
This topic describes how to use file external tables to directly query Parquet and ORC data files in AWS S3.
Usage notes
- File external tables must be created in databases within `default_catalog`.
- Only Parquet and ORC data files are supported.
- You can only use file external tables to query data. INSERT, DELETE, and DROP operations on the target data file are not supported.
Create a file external table
Syntax
```SQL
CREATE EXTERNAL TABLE <table_name>
(
    <col_name> <col_type> [NULL | NOT NULL] [COMMENT "<comment>"]
)
ENGINE=file
[COMMENT "<comment>"]
PROPERTIES
(
    FileLayoutParams,
    StorageCredentialParams
)
```
Parameter | Required | Description |
---|---|---|
table_name | Yes | The name of the file external table. Naming conventions: the name can contain letters, digits (0-9), and underscores (_); it must start with a letter; it cannot exceed 64 characters in length. |
col_name | Yes | The column name in the file external table. The column names in the file external table must be the same as those in the target data file but are not case-sensitive. The order of columns in the file external table can be different from that in the target data file. |
col_type | Yes | The column type in the file external table. You need to specify this parameter based on the column type in the target data file. For more information, see Mapping of column types. |
NULL \| NOT NULL | No | Whether the column in the file external table is allowed to be NULL. Specify this modifier based on whether the corresponding column in the target data file allows NULL values. |
comment | No | The comment of the column in the file external table. |
ENGINE | Yes | The type of engine. Set the value to file . |
comment | No | The description of the file external table. |
You also need to configure `FileLayoutParams` and `StorageCredentialParams` in `PROPERTIES` to describe how to read the data files and how to integrate with the AWS authentication service (AWS IAM).
FileLayoutParams
```SQL
FileLayoutParams ::=
    "path" = "<file_path>",
    "format" = "<file_format>"
```
Parameter | Required | Description |
---|---|---|
path | Yes | The path of the data file stored in Amazon S3, in the format s3://&lt;bucket name&gt;/&lt;folder&gt;/ . |
format | Yes | The format of the data file. Only Parquet and ORC are supported. |
enable_recursive_listing | No | Specifies whether to recursively traverse all files under the current path. Default value: false . |
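For instance, to query every Parquet file under a prefix, including files in nested subfolders, you could enable recursive listing. This is a sketch; the bucket, path, table, and column names below are hypothetical, and the credential parameters shown are explained in the next section:

```SQL
-- Hypothetical table that reads all Parquet files under s3://my-bucket/events/,
-- including those in subfolders, because recursive listing is enabled.
CREATE EXTERNAL TABLE example_events
(
    event_id BIGINT,
    payload  STRING
)
ENGINE=file
PROPERTIES
(
    "path" = "s3://my-bucket/events/",
    "format" = "parquet",
    "enable_recursive_listing" = "true",
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.region" = "us-west-2"
);
```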
StorageCredentialParams
AWS S3
If you choose AWS S3 as file storage, configure the following parameters:
```SQL
-- Use instance profile as the credential method when accessing S3.
StorageCredentialParams (for AWS S3) ::=
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.region" = "<aws_s3_region>"

-- Use assumed role as the credential method when accessing S3.
StorageCredentialParams (for AWS S3) ::=
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.iam_role_arn" = "<ARN of your assumed role>",
    "aws.s3.region" = "<aws_s3_region>"
```
Parameter name | Required | Description |
---|---|---|
aws.s3.use_instance_profile | Yes | Specifies whether to enable the instance profile and assumed role credential methods for authentication when you access AWS S3. Valid values: true and false . Setting this parameter to true allows CelerData to use both credential methods. Default value: true . |
aws.s3.iam_role_arn | Yes | The ARN of the IAM role that has privileges on the S3 bucket in which the target data file is stored. If you want to use assumed role as the credential method for accessing AWS S3, you must specify this parameter. CelerData assumes this role when you query the target data file. |
aws.s3.region | Yes | The region in which the S3 bucket resides, for example, us-west-1 . |
For information about how to choose a credential method for accessing AWS S3 and how to configure an access control policy in the AWS IAM Console, see Authentication parameters for accessing AWS S3.
Mapping of column types
The following table provides the mapping of column types between the target data file and the file external table.
Data file | File external table |
---|---|
INT/INTEGER | INT |
BIGINT | BIGINT |
TIMESTAMP | DATETIME. Note that TIMESTAMP values are converted to DATETIME values without a time zone, based on the time zone setting of the current session, and lose some precision. |
STRING | STRING |
VARCHAR | VARCHAR |
CHAR | CHAR |
DOUBLE | DOUBLE |
FLOAT | FLOAT |
DECIMAL | DECIMAL |
BOOLEAN | BOOLEAN |
ARRAY | ARRAY |
MAP | MAP |
STRUCT | STRUCT |
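As an illustration of these mappings, a Parquet file containing BIGINT, DECIMAL, and ARRAY columns could be exposed through a file external table like the one below. This is a hypothetical sketch: the table name, column names, bucket, and path are assumptions, not values from your environment:

```SQL
-- Hypothetical table mapping Parquet BIGINT, DECIMAL, and ARRAY columns
-- to the corresponding file external table column types.
CREATE EXTERNAL TABLE example_orders
(
    order_id BIGINT,
    price    DECIMAL(10, 2),
    tags     ARRAY<STRING>
)
ENGINE=file
PROPERTIES
(
    "path" = "s3://my-bucket/orders/",
    "format" = "parquet",
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.region" = "us-west-2"
);
```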
Analyze data by using a file external table
```SQL
SELECT COUNT(*) FROM <file_external_table>;
```
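Beyond counting rows, a file external table supports ordinary read-only SELECT queries, including filters and aggregations. The query below is a sketch against the `table_1` schema used in the Examples section; the filter and aggregation shown are illustrative, not prescribed:

```SQL
-- Filter out NULL names, then count rows per id and return the ten largest groups.
SELECT id, COUNT(*) AS cnt
FROM table_1
WHERE name IS NOT NULL
GROUP BY id
ORDER BY cnt DESC
LIMIT 10;
```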
Examples
Example 1: Create a file external table and use the instance profile-based credential method to access a single Parquet file in AWS S3.
```SQL
CREATE EXTERNAL TABLE table_1
(
    name string,
    id int
)
ENGINE=file
PROPERTIES
(
    "path" = "s3://bucket-test/folder1/raw_0.parquet",
    "format" = "parquet",
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.region" = "us-west-2"
);
```
Example 2: Create a file external table and use the assumed role-based credential method to access all the ORC files under the target file path in AWS S3.
```SQL
CREATE EXTERNAL TABLE table_1
(
    name string,
    id int
)
ENGINE=file
PROPERTIES
(
    "path" = "s3://bucket-test/folder1/",
    "format" = "orc",
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.iam_role_arn" = "arn:aws:iam::51234343412:role/role_name_in_aws_iam",
    "aws.s3.region" = "us-west-2"
);
```