
File external table

This topic describes how to use file external tables to directly query Parquet and ORC data files in AWS S3.

Usage notes

  • File external tables must be created in databases within default_catalog.
  • Only Parquet and ORC data files are supported.
  • You can only use file external tables to query data. INSERT, DELETE, and DROP operations on the target data file are not supported.

Create a file external table

Syntax

CREATE EXTERNAL TABLE <table_name>
(
    <col_name> <col_type> [NULL | NOT NULL] [COMMENT "<comment>"]
)
ENGINE=file
COMMENT ["<comment>"]
PROPERTIES
(
    FileLayoutParams,
    StorageCredentialParams
)
The parameters are described as follows:

  • table_name (required): The name of the file external table. The naming conventions are as follows:
    • The name can contain letters, digits (0-9), and underscores (_). It must start with a letter.
    • The name cannot exceed 64 characters in length.
  • col_name (required): The column name in the file external table. The column names in the file external table must be the same as those in the target data file but are not case-sensitive. The order of columns in the file external table can be different from that in the target data file.
  • col_type (required): The column type in the file external table. Specify this parameter based on the column type in the target data file. For more information, see Mapping of column types.
  • NULL | NOT NULL (optional): Whether the column in the file external table is allowed to be NULL.
    • NULL: NULL is allowed.
    • NOT NULL: NULL is not allowed.
    Specify the NULL | NOT NULL modifier based on the following rules:
    • If NULL | NOT NULL is not specified for a column in the target data file, you can either leave NULL | NOT NULL unspecified for the corresponding column in the file external table or specify NULL for it.
    • If NULL is specified for a column in the target data file, you can either leave NULL | NOT NULL unspecified for the corresponding column in the file external table or specify NULL for it.
    • If NOT NULL is specified for a column in the target data file, you must also specify NOT NULL for the corresponding column in the file external table.
  • comment (optional): The comment of the column in the file external table.
  • ENGINE (required): The type of engine. Set the value to file.
  • comment (optional): The description of the file external table.

You also need to configure FileLayoutParams and StorageCredentialParams in PROPERTIES to describe how the data files are read and how to integrate with the AWS authentication service (AWS IAM).
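
For illustration, a complete statement that applies the column rules above might look like the following sketch. The table name, column definitions, bucket path, and region are hypothetical; FileLayoutParams and StorageCredentialParams are described in the following sections.

CREATE EXTERNAL TABLE example_events
(
    event_id BIGINT NOT NULL COMMENT "Identifier column; NOT NULL matches the data file",
    event_name STRING NULL COMMENT "Event name; NULL values are allowed"
)
ENGINE=file
COMMENT "File external table over a single Parquet file"
PROPERTIES
(
    "path" = "s3://example-bucket/events/events_0.parquet",
    "format" = "parquet",
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.region" = "us-west-2"
);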

FileLayoutParams

FileLayoutParams ::=
    "path" = "<file_path>",
    "format" = "<file_format>"

The parameters are described as follows:

  • path (required): The path of the data file stored in Amazon S3. The path format is s3://<bucket name>/<folder>/. Pay attention to the following rules when you enter the path:
    • If the value of the path parameter ends with a slash (/), such as s3://<bucket name>/<folder>/, Celerdata treats it as a path. When you execute a query, Celerdata traverses all data files under the path. By default, it does not recursively traverse data files in subdirectories.
    • If the value of the path parameter does not end with a slash (/), such as s3://<bucket name>/<folder>, Celerdata treats it as a single data file. When you execute a query, Celerdata scans only this data file.
  • format (required): The format of the data file. Only Parquet and ORC are supported.
  • enable_recursive_listing (optional): Specifies whether to recursively traverse all data files under the current path. Default value: false.
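
As an illustration, the following sketch shows FileLayoutParams in the PROPERTIES clause of a full statement, combined with instance profile-based authentication (described in the next section). The table name, bucket, folder, and region are hypothetical.

CREATE EXTERNAL TABLE example_logs
(
    log_line STRING
)
ENGINE=file
PROPERTIES
(
    -- The trailing slash makes Celerdata treat the value as a path, not a single file.
    "path" = "s3://example-bucket/logs/",
    "format" = "orc",
    -- Optionally traverse data files in nested subdirectories under the path.
    "enable_recursive_listing" = "true",
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.region" = "us-west-2"
);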

StorageCredentialParams

AWS S3

If you choose AWS S3 as file storage, configure the following parameters:

-- Use instance profile as the credential method to access AWS S3.
StorageCredentialParams (for AWS S3) ::=
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.region" = "<aws_s3_region>"

-- Use assumed role as the credential method to access AWS S3.
StorageCredentialParams (for AWS S3) ::=
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.iam_role_arn" = "<ARN of your assumed role>",
    "aws.s3.region" = "<aws_s3_region>"
The parameters are described as follows:

  • aws.s3.use_instance_profile (required): Specifies whether to enable the instance profile and assumed role as credential methods for authentication when you access AWS S3. Valid values: true and false. The value true allows Celerdata to use either of the two credential methods. Default value: true.
  • aws.s3.iam_role_arn (required): The ARN of the IAM role that has privileges on the S3 bucket in which the target data file is stored. If you want to use assumed role as the credential method for accessing AWS S3, you must specify this parameter. Celerdata assumes this role when it accesses the target data file.
  • aws.s3.region (required): The region in which the S3 bucket resides, for example, us-west-1.

For information about how to choose a credential method for accessing AWS S3 and how to configure an access control policy in the AWS IAM Console, see Authentication parameters for accessing AWS S3.

Mapping of column types

The following table provides the mapping of column types between the target data file and the file external table.

Data file    | File external table
INT/INTEGER  | INT
BIGINT       | BIGINT
TIMESTAMP    | DATETIME. Note that TIMESTAMP is converted to DATETIME without a time zone based on the time zone setting of the current session and loses some of its precision.
STRING       | STRING
VARCHAR      | VARCHAR
CHAR         | CHAR
DOUBLE       | DOUBLE
FLOAT        | FLOAT
DECIMAL      | DECIMAL
BOOLEAN      | BOOLEAN
ARRAY        | ARRAY
MAP          | MAP
STRUCT       | STRUCT
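
For example, a Parquet file that contains TIMESTAMP, DECIMAL, and ARRAY columns could be mapped as in the following sketch. The table name, column names, file path, and region are hypothetical.

CREATE EXTERNAL TABLE example_orders
(
    order_id BIGINT,
    created_at DATETIME COMMENT "Mapped from a TIMESTAMP column in the Parquet file",
    total_amount DECIMAL(10, 2) COMMENT "Mapped from a DECIMAL column in the Parquet file",
    item_ids ARRAY<INT> COMMENT "Mapped from an ARRAY column in the Parquet file"
)
ENGINE=file
PROPERTIES
(
    "path" = "s3://example-bucket/orders/orders_0.parquet",
    "format" = "parquet",
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.region" = "us-west-2"
);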

Analyze data by using a file external table

SELECT COUNT(*) FROM <file_external_table>
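
Beyond counting rows, you can run standard SELECT statements with filters, aggregations, and sorting against a file external table. The following sketch assumes the table_1 definition from the Examples section below (columns name and id).

-- Filter and sort rows read from the data file.
SELECT name, id
FROM table_1
WHERE id > 100
ORDER BY id
LIMIT 10;

-- Aggregate rows by a column.
SELECT name, COUNT(*) AS cnt
FROM table_1
GROUP BY name;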

Examples

Example 1: Create a file external table and use the instance profile-based credential method to access a single Parquet file in AWS S3.

CREATE EXTERNAL TABLE table_1
(
name string,
id int
)
ENGINE=file
PROPERTIES
(
"path" = "s3://bucket-test/folder1/raw_0.parquet",
"format" = "parquet",
"aws.s3.use_instance_profile" = "true",
"aws.s3.region" = "us-west-2"
);

Example 2: Create a file external table and use the assumed role-based credential method to access all the ORC files under the target file path in AWS S3.

CREATE EXTERNAL TABLE table_1
(
name string,
id int
)
ENGINE=file
PROPERTIES
(
"path" = "s3://bucket-test/folder1/",
"format" = "orc",
"aws.s3.use_instance_profile" = "true",
"aws.s3.iam_role_arn" = "arn:aws:iam::51234343412:role/role_name_in_aws_iam",
"aws.s3.region" = "us-west-2"
);