# Iceberg catalog

An Iceberg catalog is a type of external catalog that is supported by StarRocks from v2.4 onwards. With Iceberg catalogs, you can:
- Directly query data stored in Iceberg without the need to manually create tables.
- Use INSERT INTO or asynchronous materialized views (which are supported from v2.5 onwards) to process data stored in Iceberg and load the data into StarRocks.
- Perform operations on StarRocks to create or drop Iceberg databases and tables, or sink data from StarRocks tables to Parquet-formatted Iceberg tables by using INSERT INTO (this feature is supported from v3.1 onwards).
To ensure successful SQL workloads on your Iceberg cluster, your StarRocks cluster must be able to access the storage system and metastore of your Iceberg cluster. StarRocks supports the following storage systems and metastores:

- Distributed file system (HDFS) or object storage like AWS S3, Microsoft Azure Storage, Google GCS, or other S3-compatible storage systems (for example, MinIO)
- Metastore like Hive metastore, AWS Glue, or Tabular

Note the following points:

- If you choose AWS S3 as storage, you can use HMS or AWS Glue as metastore. If you choose any other storage system, you can only use HMS as metastore.
- If you choose Tabular as metastore, you need to use the Iceberg REST catalog.
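For orientation, data in an Iceberg catalog is addressed as `<catalog_name>.<database_name>.<table_name>`, just like internal tables. The following is a hedged sketch of the query and load workflows described above; the catalog, database, and table names (`iceberg_catalog`, `sales_db`, `orders`, `orders_local`) are hypothetical:

```SQL
-- Switch to the Iceberg catalog (hypothetical name) and browse it.
SET CATALOG iceberg_catalog;
SHOW DATABASES;

-- Query Iceberg data directly; no manual table creation is needed.
SELECT order_id, amount
FROM iceberg_catalog.sales_db.orders
LIMIT 10;

-- Load Iceberg data into an internal StarRocks table with INSERT INTO.
INSERT INTO default_catalog.analytics_db.orders_local
SELECT * FROM iceberg_catalog.sales_db.orders;
```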
## Usage notes

Take note of the following points when you use StarRocks to query data from Iceberg:
| File format | Compression format | Iceberg table version |
|---|---|---|
| Parquet | SNAPPY, LZ4, ZSTD, GZIP, and NO_COMPRESSION | v1 and v2 |
| ORC | ZLIB, SNAPPY, LZO, LZ4, ZSTD, and NO_COMPRESSION | v1 and v2 |
## Integration preparation

Before you create an Iceberg catalog, make sure your StarRocks cluster can integrate with the storage system and metastore of your Iceberg cluster.

### Storage

Select the tab that matches your storage type:
- AWS S3
- HDFS
#### AWS S3

If your Iceberg cluster uses AWS S3 as storage or AWS Glue as metastore, choose your suitable authentication method and make the required preparations to ensure that your StarRocks cluster can access the related AWS cloud resources.
The following authentication methods are recommended:
- Instance profile
- Assumed role
- IAM user
Of the above-mentioned three authentication methods, instance profile is the most widely used.
For more information, see Preparation for authentication in AWS IAM.
#### HDFS

If you choose HDFS as storage, configure your StarRocks cluster as follows:

- (Optional) Set the username that is used to access your HDFS cluster and Hive metastore. By default, StarRocks uses the username of the FE and BE or CN processes to access your HDFS cluster and Hive metastore. You can also set the username by adding `export HADOOP_USER_NAME="<user_name>"` at the beginning of the fe/conf/hadoop_env.sh file of each FE and at the beginning of the be/conf/hadoop_env.sh file of each BE or the cn/conf/hadoop_env.sh file of each CN. After you set the username in these files, restart each FE and each BE or CN to make the parameter settings take effect. You can set only one username for each StarRocks cluster.
- When you query Iceberg data, the FEs and BEs or CNs of your StarRocks cluster use the HDFS client to access your HDFS cluster. In most cases, you do not need to configure your StarRocks cluster to achieve that purpose, and StarRocks starts the HDFS client using the default configurations. You need to configure your StarRocks cluster only in the following situations:
  - High availability (HA) is enabled for your HDFS cluster: Add the hdfs-site.xml file of your HDFS cluster to the $FE_HOME/conf path of each FE and to the $BE_HOME/conf path of each BE or the $CN_HOME/conf path of each CN.
  - View File System (ViewFs) is enabled for your HDFS cluster: Add the core-site.xml file of your HDFS cluster to the $FE_HOME/conf path of each FE and to the $BE_HOME/conf path of each BE or the $CN_HOME/conf path of each CN.

If an error indicating an unknown host is returned when you send a query, you must add the mapping between the host names and IP addresses of your HDFS cluster nodes to the /etc/hosts path.
#### Kerberos authentication

If Kerberos authentication is enabled for your HDFS cluster or Hive metastore, configure your StarRocks cluster as follows:

- Run the `kinit -kt keytab_path principal` command on each FE and each BE or CN to obtain a Ticket Granting Ticket (TGT) from the Key Distribution Center (KDC). To run this command, you must have the permissions to access your HDFS cluster and Hive metastore. Note that the TGT obtained with this command is time-sensitive. Therefore, you need to use cron to run this command periodically.
- Add `JAVA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf"` to the $FE_HOME/conf/fe.conf file of each FE and to the $BE_HOME/conf/be.conf file of each BE or the $CN_HOME/conf/cn.conf file of each CN. In this example, `/etc/krb5.conf` is the save path of the krb5.conf file. You can modify the path based on your needs.
## Create an Iceberg catalog

### Syntax

```SQL
CREATE EXTERNAL CATALOG <catalog_name>
[COMMENT <comment>]
PROPERTIES
(
    "type" = "iceberg",
    [SecurityParams],
    MetastoreParams,
    StorageCredentialParams,
    MetadataRelatedParams
)
```
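As a concrete sketch of this syntax, the following statement creates a minimal Iceberg catalog backed by a Hive metastore; the catalog name and metastore URI are hypothetical, and the available property sets are detailed in the Parameters section:

```SQL
-- Minimal example: catalog name and metastore address are placeholders.
CREATE EXTERNAL CATALOG iceberg_demo
COMMENT "Example Iceberg catalog"
PROPERTIES
(
    "type" = "iceberg",
    "iceberg.catalog.type" = "hive",
    "hive.metastore.uris" = "thrift://192.168.0.10:9083"
);
```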
### Parameters

#### catalog_name

The name of the Iceberg catalog. The naming conventions are as follows:

- The name can contain letters, digits (0-9), and underscores (_). It must start with a letter.
- The name is case-sensitive and cannot exceed 1023 characters in length.

#### comment

The description of the Iceberg catalog. This parameter is optional.

#### type

The type of your data source. Set the value to `iceberg`.
#### SecurityParams

A set of parameters about how StarRocks manages data access to the catalog.

##### catalog.access.control

The data access control policy. Valid values:

- `native` (Default): The StarRocks built-in data access control system is used.
- `allowall`: All data access checks are delegated to the catalog itself.
- `ranger`: Data access checks are delegated to Apache Ranger.
#### MetastoreParams

A set of parameters about how StarRocks integrates with the metastore of your data source. Choose the tab that matches your metastore type:

- Hive metastore
- AWS Glue
- REST
- JDBC
##### Hive metastore

If you choose Hive metastore as the metastore of your data source, configure `MetastoreParams` as follows:

```SQL
"iceberg.catalog.type" = "hive",
"hive.metastore.uris" = "<hive_metastore_uri>"
```

Before querying Iceberg data, you must add the mapping between the host names and IP addresses of your Hive metastore nodes to the /etc/hosts path. Otherwise, StarRocks may fail to access your Hive metastore when you start a query.

The following describes the parameters you need to configure in `MetastoreParams`.
- `iceberg.catalog.type`
  - Required: Yes
  - Description: The type of metastore that you use for your Iceberg cluster. Set the value to `hive`.
- `hive.metastore.uris`
  - Required: Yes
  - Description: The URI of your Hive metastore. Format: `thrift://<metastore_IP_address>:<metastore_port>`. If high availability (HA) is enabled for your Hive metastore, you can specify multiple metastore URIs and separate them with commas (,), for example, `"thrift://<metastore_IP_address_1>:<metastore_port_1>,thrift://<metastore_IP_address_2>:<metastore_port_2>,thrift://<metastore_IP_address_3>:<metastore_port_3>"`.
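Putting the pieces together, a Hive-metastore-backed Iceberg catalog might be created as follows. This is a sketch with placeholder values; the `aws.s3.*` properties assume S3 storage with instance profile-based authentication, which is covered in the StorageCredentialParams section:

```SQL
CREATE EXTERNAL CATALOG iceberg_hms_catalog
PROPERTIES
(
    "type" = "iceberg",
    -- Metastore: Hive metastore at a placeholder address.
    "iceberg.catalog.type" = "hive",
    "hive.metastore.uris" = "thrift://192.168.0.10:9083",
    -- Storage credentials: S3 with instance profile (see StorageCredentialParams).
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.region" = "us-west-2"
);
```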
##### AWS Glue

If you choose AWS Glue as the metastore of your data source, which is supported only when you choose AWS S3 as storage, take one of the following actions:

- To choose the instance profile-based authentication method, configure `MetastoreParams` as follows:

  ```SQL
  "iceberg.catalog.type" = "glue",
  "aws.glue.use_instance_profile" = "true",
  "aws.glue.region" = "<aws_glue_region>"
  ```

- To choose the assumed role-based authentication method, configure `MetastoreParams` as follows:

  ```SQL
  "iceberg.catalog.type" = "glue",
  "aws.glue.use_instance_profile" = "true",
  "aws.glue.iam_role_arn" = "<iam_role_arn>",
  "aws.glue.region" = "<aws_glue_region>"
  ```

- To choose the IAM user-based authentication method, configure `MetastoreParams` as follows:

  ```SQL
  "iceberg.catalog.type" = "glue",
  "aws.glue.use_instance_profile" = "false",
  "aws.glue.access_key" = "<iam_user_access_key>",
  "aws.glue.secret_key" = "<iam_user_secret_key>",
  "aws.glue.region" = "<aws_glue_region>"
  ```
MetastoreParams for AWS Glue:

- `iceberg.catalog.type`
  - Required: Yes
  - Description: The type of metastore that you use for your Iceberg cluster. Set the value to `glue`.
- `aws.glue.use_instance_profile`
  - Required: Yes
  - Description: Specifies whether to enable the instance profile-based authentication method and the assumed role-based authentication method. Valid values: `true` and `false`. Default value: `false`.
- `aws.glue.iam_role_arn`
  - Required: No
  - Description: The ARN of the IAM role that has privileges on your AWS Glue Data Catalog. If you use the assumed role-based authentication method to access AWS Glue, you must specify this parameter.
- `aws.glue.region`
  - Required: Yes
  - Description: The region in which your AWS Glue Data Catalog resides. Example: `us-west-1`.
- `aws.glue.access_key`
  - Required: No
  - Description: The access key of your AWS IAM user. If you use the IAM user-based authentication method to access AWS Glue, you must specify this parameter.
- `aws.glue.secret_key`
  - Required: No
  - Description: The secret key of your AWS IAM user. If you use the IAM user-based authentication method to access AWS Glue, you must specify this parameter.
- `aws.glue.catalog_id`
  - Required: No
  - Description: The ID of the AWS Glue Data Catalog to use. When not specified, the catalog in the current AWS account is used. You must specify this parameter when you need to access a Glue Data Catalog in a different AWS account (cross-account access).
For information about how to choose an authentication method for accessing AWS Glue and how to configure an access control policy in the AWS IAM Console, see Authentication parameters for accessing AWS Glue.
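As an illustration, a Glue-backed Iceberg catalog with instance profile-based authentication for both the metastore and S3 storage might look like the following sketch; the catalog name and region are hypothetical:

```SQL
CREATE EXTERNAL CATALOG iceberg_glue_catalog
PROPERTIES
(
    "type" = "iceberg",
    -- Metastore: AWS Glue, authenticated via instance profile.
    "iceberg.catalog.type" = "glue",
    "aws.glue.use_instance_profile" = "true",
    "aws.glue.region" = "us-west-1",
    -- Storage: AWS S3, also authenticated via instance profile.
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.region" = "us-west-1"
);
```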
##### REST

For detailed instructions on creating an Iceberg REST catalog for S3 tables, see Create Iceberg REST Catalog for AWS S3 tables.

If you use REST as metastore, you must specify the metastore type as REST (`"iceberg.catalog.type" = "rest"`). Configure `MetastoreParams` as follows:

```SQL
"iceberg.catalog.type" = "rest",
"iceberg.catalog.uri" = "<rest_server_api_endpoint>",
"iceberg.catalog.security" = "oauth2",
"iceberg.catalog.oauth2.credential" = "<credential>",
"iceberg.catalog.warehouse" = "<identifier_or_path_to_warehouse>"
```
MetastoreParams for REST catalog:

- `iceberg.catalog.type`
  - Required: Yes
  - Description: The type of metastore that you use for your Iceberg cluster. Set the value to `rest`.
- `iceberg.catalog.uri`
  - Required: Yes
  - Description: The URI of the REST service endpoint. Example: `https://api.tabular.io/ws`.
- `iceberg.catalog.view-endpoints-supported`
  - Required: No
  - Description: Whether to use the view endpoints to support view-related operations when a REST service of an earlier version does not return endpoints in `CatalogConfig`. This parameter is used for backward compatibility with REST servers of early versions. Default: `false`.
- `iceberg.catalog.security`
  - Required: No
  - Description: The type of authorization protocol to use. Default: `NONE`. Valid value: `OAUTH2`, which requires either a `token` or a `credential`.
- `iceberg.catalog.oauth2.token`
  - Required: No
  - Description: The bearer token used for interactions with the server. A `token` or `credential` is required for the `OAUTH2` authorization protocol. Example: `AbCdEf123456`.
- `iceberg.catalog.oauth2.credential`
  - Required: No
  - Description: The credential to exchange for a token in the OAuth2 client credentials flow with the server. A `token` or `credential` is required for the `OAUTH2` authorization protocol. Example: `AbCdEf123456`.
- `iceberg.catalog.oauth2.scope`
  - Required: No
  - Description: The scope to be used when communicating with the REST catalog. Applicable only when `credential` is used.
- `iceberg.catalog.oauth2.server-uri`
  - Required: No
  - Description: The endpoint from which to retrieve the access token from the OAuth2 server.
- `iceberg.catalog.vended-credentials-enabled`
  - Required: No
  - Description: Whether to use credentials provided by the REST backend for file system access. Default: `true`.
- `iceberg.catalog.warehouse`
  - Required: No
  - Description: The warehouse location or identifier of the Iceberg catalog. Example: `s3://my_bucket/warehouse_location` or `sandbox`.
- `iceberg.catalog.rest.nested-namespace-enabled`
  - Required: No
  - Description: Whether to support querying objects under nested namespaces. Default: `false`.
The following example creates an Iceberg catalog named `tabular` that uses Tabular as metastore:

```SQL
CREATE EXTERNAL CATALOG tabular
PROPERTIES
(
    "type" = "iceberg",
    "iceberg.catalog.type" = "rest",
    "iceberg.catalog.uri" = "https://api.tabular.io/ws",
    "iceberg.catalog.oauth2.credential" = "t-5Ii8e3FIbT9m0:aaaa-3bbbbbbbbbbbbbbbbbbb",
    "iceberg.catalog.warehouse" = "sandbox"
);
```
The following example creates an Iceberg catalog named `smith_polaris` that uses Polaris as metastore:

```SQL
CREATE EXTERNAL CATALOG smith_polaris
PROPERTIES (
    "iceberg.catalog.uri" = "http://xxx.xx.xx.xxx:8181/api/catalog",
    "type" = "iceberg",
    "iceberg.catalog.type" = "rest",
    "iceberg.catalog.warehouse" = "starrocks_catalog",
    "iceberg.catalog.security" = "oauth2",
    "iceberg.catalog.oauth2.credential" = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "iceberg.catalog.oauth2.scope" = "PRINCIPAL_ROLE:ALL"
);
```

```SQL
-- `ns1.ns2.tpch_namespace` is a nested namespace.
CREATE TABLE smith_polaris.`ns1.ns2.tpch_namespace`.tbl (c1 string);

mysql> SELECT * FROM smith_polaris.`ns1.ns2.tpch_namespace`.tbl;
+------+
| c1   |
+------+
| 1    |
| 2    |
| 3    |
+------+
3 rows in set (0.34 sec)
```
The following example creates an Iceberg catalog named `r2` that uses Cloudflare R2 Data Catalog as metastore:

```SQL
CREATE EXTERNAL CATALOG r2
PROPERTIES
(
    "type" = "iceberg",
    "iceberg.catalog.type" = "rest",
    "iceberg.catalog.uri" = "<r2_catalog_uri>",
    "iceberg.catalog.security" = "oauth2",
    "iceberg.catalog.oauth2.token" = "<r2_api_token>",
    "iceberg.catalog.warehouse" = "<r2_warehouse_name>"
);

SET CATALOG r2;

CREATE DATABASE testdb;

SHOW DATABASES FROM r2;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| testdb             |
+--------------------+
2 rows in set (0.66 sec)
```

The `<r2_warehouse_name>`, `<r2_api_token>`, and `<r2_catalog_uri>` values are obtained from the Cloudflare Dashboard, as detailed in the Cloudflare documentation.
##### JDBC

If you choose JDBC as the metastore of your data source, configure `MetastoreParams` as follows:

```SQL
"iceberg.catalog.type" = "jdbc",
"iceberg.catalog.uri" = "<jdbc_uri>",
"iceberg.catalog.warehouse" = "<warehouse_location>"
```

The following describes the parameters you need to configure in `MetastoreParams`.
- `iceberg.catalog.type`
  - Required: Yes
  - Description: The type of metastore that you use for your Iceberg cluster. Set the value to `jdbc`.
- `iceberg.catalog.uri`
  - Required: Yes
  - Description: The URI of your database. Format: `jdbc:[mysql|postgresql]://<DB_IP_address>:<DB_PORT>/<DB_NAME>`.
- `iceberg.catalog.warehouse`
  - Required: Yes
  - Description: The warehouse location or identifier of the Iceberg catalog. Example: `s3://my_bucket/warehouse_location`.
- `iceberg.catalog.jdbc.user`
  - Required: No
  - Description: The username for the database.
- `iceberg.catalog.jdbc.password`
  - Required: No
  - Description: The password for the database.
- `iceberg.catalog.jdbc.init-catalog-tables`
  - Required: No
  - Description: Whether to create the tables `iceberg_namespace_properties` and `iceberg_tables` for storing metadata in the database specified by `iceberg.catalog.uri`. The default value is `false`. Specify `true` if these two tables have not yet been created in the database specified by `iceberg.catalog.uri`.
The following example creates an Iceberg catalog named `iceberg_jdbc` that uses JDBC as metastore:

```SQL
CREATE EXTERNAL CATALOG iceberg_jdbc
PROPERTIES
(
    "type" = "iceberg",
    "iceberg.catalog.type" = "jdbc",
    "iceberg.catalog.warehouse" = "s3://my_bucket/warehouse_location",
    "iceberg.catalog.uri" = "jdbc:mysql://ip:port/db_name",
    "iceberg.catalog.jdbc.user" = "username",
    "iceberg.catalog.jdbc.password" = "password",
    "aws.s3.endpoint" = "<s3_endpoint>",
    "aws.s3.access_key" = "<iam_user_access_key>",
    "aws.s3.secret_key" = "<iam_user_secret_key>"
);
```

If you use MySQL or another custom JDBC driver, you must place the corresponding JAR files in the fe/lib and be/lib/jni-packages directories.
#### StorageCredentialParams

A set of parameters about how StarRocks integrates with your storage system. This parameter set is optional.

Note the following points:

- If you use HDFS as storage, you do not need to configure `StorageCredentialParams` and can skip this section. If you use AWS S3, another S3-compatible storage system, Microsoft Azure Storage, or Google GCS as storage, you must configure `StorageCredentialParams`.
- If you use Tabular as metastore, you do not need to configure `StorageCredentialParams` and can skip this section. If you use HMS or AWS Glue as metastore, you must configure `StorageCredentialParams`.
Choose the tab that matches your storage type:
- AWS S3
- HDFS
- MinIO
- Microsoft Azure Blob Storage
- Google GCS
##### AWS S3

If you choose AWS S3 as storage for your Iceberg cluster, take one of the following actions:

- To choose the instance profile-based authentication method, configure `StorageCredentialParams` as follows:

  ```SQL
  "aws.s3.use_instance_profile" = "true",
  "aws.s3.region" = "<aws_s3_region>"
  ```

- To choose the assumed role-based authentication method, configure `StorageCredentialParams` as follows:

  ```SQL
  "aws.s3.use_instance_profile" = "true",
  "aws.s3.iam_role_arn" = "<iam_role_arn>",
  "aws.s3.region" = "<aws_s3_region>"
  ```

- To choose the IAM user-based authentication method, configure `StorageCredentialParams` as follows:

  ```SQL
  "aws.s3.use_instance_profile" = "false",
  "aws.s3.access_key" = "<iam_user_access_key>",
  "aws.s3.secret_key" = "<iam_user_secret_key>",
  "aws.s3.region" = "<aws_s3_region>"
  ```
StorageCredentialParams for AWS S3:

- `aws.s3.use_instance_profile`
  - Required: Yes
  - Description: Specifies whether to enable the instance profile-based authentication method and the assumed role-based authentication method. Valid values: `true` and `false`. Default value: `false`.