SHOW LOAD
Description
Displays the information of all load jobs or specified load jobs in a database. This statement can only display load jobs that are created by using Broker Load and INSERT.
However, we now recommend that you use the SELECT statement to query the results of Broker Load or INSERT jobs from the loads table in the information_schema database. For more information, see Batch load data from Amazon S3.
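The recommended SELECT-based approach can be sketched as follows. The JOB_ID and LABEL column names are assumed from the loads table's schema, and the label value is a placeholder:

```sql
-- Sketch: query load job results from information_schema.loads instead of SHOW LOAD.
-- JOB_ID and LABEL are assumed column names of the loads table; 'my_label' is a placeholder.
SELECT * FROM information_schema.loads
WHERE LABEL = 'my_label'
ORDER BY JOB_ID DESC;
```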
In addition to the preceding loading methods, CelerData supports using Stream Load and Routine Load to load data. Stream Load is a synchronous operation that directly returns the information of Stream Load jobs. Routine Load is an asynchronous operation for which you can use the SHOW ROUTINE LOAD statement to display the information of Routine Load jobs.
Syntax
SHOW LOAD [ FROM db_name ]
[
WHERE [ LABEL { = "label_name" | LIKE "label_matcher" } ]
[ [AND] STATE = { "PENDING" | "ETL" | "LOADING" | "FINISHED" | "CANCELLED" } ]
]
[ ORDER BY field_name [ ASC | DESC ] ]
[ LIMIT { [offset, ] limit | limit OFFSET offset } ]
Note
You can add the \G option to the statement (such as SHOW LOAD WHERE LABEL = "label1"\G;) to display the output vertically rather than in the usual horizontal table format. For more information, see Examples.
Parameters
Parameter | Required | Description |
---|---|---|
db_name | No | The database name. If this parameter is not specified, your current database is used by default. |
LABEL = "label_name" | No | The labels of load jobs. |
LABEL LIKE "label_matcher" | No | If this parameter is specified, the information of load jobs whose labels contain label_matcher is returned. |
AND | No | The keyword that joins the LABEL and STATE filter conditions when both are specified. |
STATE | No | The state of the load job. The valid states vary based on loading methods. If the STATE parameter is not specified, the information of load jobs in all states is returned by default. If the STATE parameter is specified, only the information of load jobs in the given state is returned. For example, STATE = "PENDING" returns the information of load jobs in the PENDING state. |
ORDER BY field_name [ASC | DESC] | No | If this parameter is specified, the output is sorted in ascending or descending order based on a field. The following fields are supported: JobId, Label, State, Progress, Type, EtlInfo, TaskInfo, ErrorMsg, CreateTime, EtlStartTime, EtlFinishTime, LoadStartTime, LoadFinishTime, URL, and JobDetails. The output is sorted by JobId by default. |
LIMIT limit | No | The maximum number of load jobs to display. If this parameter is not specified, the information of all load jobs that match the filter conditions is displayed. If this parameter is specified, for example, LIMIT 10, only the information of 10 load jobs that match the filter conditions is returned. |
OFFSET offset | No | The offset parameter defines the number of load jobs to be skipped. For example, OFFSET 5 skips the first five load jobs and returns the rest. The value of the offset parameter defaults to 0 . |
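Putting the clauses above together, a query might look like the following sketch. The database name and the label pattern are hypothetical examples:

```sql
-- Hypothetical example combining the clauses described above:
-- filter by label substring and state, then sort and paginate.
SHOW LOAD FROM example_db
WHERE LABEL LIKE "broker" AND STATE = "FINISHED"
ORDER BY CreateTime DESC
LIMIT 10 OFFSET 5;
```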
Output
+-------+-------+-------+----------+------+----------+---------+----------+----------+------------+--------------+---------------+---------------+----------------+-----+------------+
| JobId | Label | State | Progress | Type | Priority | EtlInfo | TaskInfo | ErrorMsg | CreateTime | EtlStartTime | EtlFinishTime | LoadStartTime | LoadFinishTime | URL | JobDetails |
+-------+-------+-------+----------+------+----------+---------+----------+----------+------------+--------------+---------------+---------------+----------------+-----+------------+
The output of this statement varies based on loading methods.
Field | Broker Load | INSERT |
---|---|---|
JobId | The unique ID assigned by CelerData to identify the load job in your CelerData cloud account. | The field has the same meaning in an INSERT job as it does in a Broker Load job. |
Label | The label of the load job. The label of a load job is unique within a database but can be duplicated across different databases. | The field has the same meaning in an INSERT job as it does in a Broker Load job. |
State | The state of the load job. | The state of the load job. |
Progress | The stage of the load job. A Broker Load job only has the LOAD stage, which ranges from 0% to 100% to describe the progress of the stage. When the load job enters the LOAD stage, LOADING is returned for the State field. | The stage of the load job. An INSERT job only has the LOAD stage, which ranges from 0% to 100% to describe the progress of the stage. When the load job enters the LOAD stage, LOADING is returned for the State field. |
Type | The method of the load job. For a Broker Load job, the value is BROKER. | The method of the load job. For an INSERT job, the value is INSERT. |
Priority | The priority of the load job. Valid values: LOWEST, LOW, NORMAL, HIGH, and HIGHEST. | - |
EtlInfo | The metrics related to ETL, such as unselected.rows, dpp.abnorm.ALL, and dpp.norm.ALL. You can use these metrics to check whether the ratio of unqualified data exceeds the max-filter-ratio parameter: dpp.abnorm.ALL/(unselected.rows + dpp.abnorm.ALL + dpp.norm.ALL). | The metrics related to ETL. An INSERT job does not have the ETL stage. Therefore, NULL is returned. |
TaskInfo | The parameters that are specified when you create the load job, such as resource, timeout, and max_filter_ratio. | The parameters that are specified when you create the load job. |
ErrorMsg | The error message returned when the load job fails. When the state of the load job is PENDING, LOADING, or FINISHED, NULL is returned for the ErrorMsg field. When the state of the load job is CANCELLED, the value returned for the ErrorMsg field consists of two parts: type and msg. | The error message returned when the load job fails. When the state of the load job is FINISHED, NULL is returned for the ErrorMsg field. When the state of the load job is CANCELLED, the value returned for the ErrorMsg field consists of two parts: type and msg. |
CreateTime | The time at which the load job was created. | The field has the same meaning in an INSERT job as it does in a Broker Load job. |
EtlStartTime | A Broker Load job does not have the ETL stage. Therefore, the value of this field is the same as the value of the LoadStartTime field. | An INSERT job does not have the ETL stage. Therefore, the value of this field is the same as the value of the LoadStartTime field. |
EtlFinishTime | A Broker Load job does not have the ETL stage. Therefore, the value of this field is the same as the value of the LoadStartTime field. | An INSERT job does not have the ETL stage. Therefore, the value of this field is the same as the value of the LoadStartTime field. |
LoadStartTime | The time at which the LOAD stage starts. | The field has the same meaning in an INSERT job as it does in a Broker Load job. |
LoadFinishTime | The time at which the load job finishes. | The field has the same meaning in an INSERT job as it does in a Broker Load job. |
URL | The URL that is used to access the unqualified data detected in the load job. You can use the curl or wget command to access the URL and obtain the unqualified data. If no unqualified data is detected, NULL is returned. | The field has the same meaning in an INSERT job as it does in a Broker Load job. |
JobDetails | Other information related to the load job, such as the number and size of the loaded files, the number of scanned rows, the number of tasks, and the backends involved. | The field has the same meaning in an INSERT job as it does in a Broker Load job. |
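As a concrete illustration of the unqualified-data formula described for the EtlInfo field, the metric values from the example output at the end of this page (unselected.rows=0, dpp.abnorm.ALL=0, dpp.norm.ALL=65546) give a ratio of zero:

```sql
-- Unqualified-data ratio = dpp.abnorm.ALL / (unselected.rows + dpp.abnorm.ALL + dpp.norm.ALL)
-- The values below come from the EtlInfo line in the example output on this page.
SELECT 0 / (0 + 0 + 65546) AS unqualified_data_ratio;
```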
Usage notes
- The information returned by the SHOW LOAD statement is valid for 3 days from the LoadFinishTime of a load job. After 3 days, the information cannot be displayed. You can use the label_keep_max_second parameter to modify the default validity period: ADMIN SET FRONTEND CONFIG ("label_keep_max_second" = "value");
- If the value of the LoadStartTime field is N/A for a long time, load jobs are heavily piling up. We recommend that you reduce the frequency at which you create load jobs.
- Total time consumed by a load job = LoadFinishTime - CreateTime.
- Total time consumed by a load job in the LOAD stage = LoadFinishTime - LoadStartTime.
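The two duration formulas above can be evaluated with the TIMEDIFF function, using the timestamps from the example output below as sample inputs:

```sql
-- Total time consumed by the load job: LoadFinishTime - CreateTime
SELECT TIMEDIFF('2022-10-17 19:35:06', '2022-10-17 19:35:00') AS total_time;
-- Time consumed in the LOAD stage: LoadFinishTime - LoadStartTime
SELECT TIMEDIFF('2022-10-17 19:35:06', '2022-10-17 19:35:04') AS load_stage_time;
```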
Examples
Vertically display all load jobs in your current database.
SHOW LOAD\G;
*************************** 1. row ***************************
JobId: 976331
Label: duplicate_table_with_null
State: FINISHED
Progress: ETL:100%; LOAD:100%
Type: BROKER
Priority: NORMAL
EtlInfo: unselected.rows=0; dpp.abnorm.ALL=0; dpp.norm.ALL=65546
TaskInfo: resource:N/A; timeout(s):300; max_filter_ratio:0.0
ErrorMsg: NULL
CreateTime: 2022-10-17 19:35:00
EtlStartTime: 2022-10-17 19:35:04
EtlFinishTime: 2022-10-17 19:35:04
LoadStartTime: 2022-10-17 19:35:04
LoadFinishTime: 2022-10-17 19:35:06
URL: NULL
JobDetails: {"Unfinished backends":{"b90a703c-6e5a-4fcb-a8e1-94eca5be0b8f":[]},"ScannedRows":65546,"TaskNumber":1,"All backends":{"b90a703c-6e5a-4fcb-a8e1-94eca5be0b8f":[10004]},"FileNumber":1,"FileSize":548622}