Scale a CelerData cluster
CelerData supports scaling clusters with no downtime. As your workloads grow or shrink, you can view the details of a cluster and then decide whether to scale the cluster to maintain the necessary performance levels at minimum cost.
Both classic and elastic clusters support vertical scaling, horizontal scaling, and storage scaling.
NOTE
CelerData supports deploying classic clusters on both AWS Cloud and Azure Cloud, but it supports deploying elastic clusters only on AWS Cloud.
Introduction to scaling operations
Vertical scaling
You can vertically scale your cluster up or down by upgrading or downgrading the instance type of cluster nodes to increase or decrease computing power and storage capacity. Consider a scale-up in the following scenarios:
- Your workloads are hitting CPU or I/O limits, which increases query latency and decreases concurrency, but storage capacity is sufficient.
- You need to react quickly to fix performance issues that cannot be resolved by using classic optimization techniques.
Horizontal scaling
You can horizontally scale your cluster out or in by adding or removing cluster nodes to increase or decrease computing power and storage capacity. Consider a scale-out in the following scenarios:
- Your workloads are hitting CPU, I/O, or storage limits, which increases query latency and decreases concurrency.
- You cannot meet your performance requirements even with the highest-performance instance type available.
- Your data cannot fit into the current number of nodes.
Storage scaling
You can scale the storage of your cluster up or down to suit the needs of spikes and dips in cluster activity as your data volume changes.
You can only scale the disk size for BE nodes in classic clusters. For FE nodes in classic clusters and Coordinator Nodes in elastic clusters, you can edit the disk IOPS and throughput of the disks in addition to the disk size.
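The rules above can be summarized in a small lookup sketch (the names below are illustrative, not part of any CelerData API):

```python
# Editable storage settings per node type, summarizing the rules above.
EDITABLE_STORAGE_SETTINGS = {
    "BE (classic)": {"disk size"},
    "FE (classic)": {"disk size", "disk IOPS", "disk throughput"},
    "Coordinator Node (elastic)": {"disk size", "disk IOPS", "disk throughput"},
}

def can_edit(node_type: str, setting: str) -> bool:
    """Return True if the given storage setting is editable for the node type."""
    return setting in EDITABLE_STORAGE_SETTINGS.get(node_type, set())
```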
Scale a classic cluster
For a classic cluster, CelerData supports vertical scaling, horizontal scaling, and storage scaling.
Vertical scaling
Take note of the following points:
- If your cluster uses EBS volumes as storage, the cluster nodes will restart on a rolling basis during a scale-up and you may experience query or data loading failures. Therefore, we recommend that you perform a scale-up during off-peak hours.
- If your cluster uses instance store volumes as storage, the amount of time taken by a scale-up varies depending on the volume of data in your cluster.
Follow these steps:

1. Sign in to the CelerData Cloud BYOC console.
2. On the Clusters page, click the cluster that you want to scale.
3. On the cluster details page, click Manage and choose Edit cluster.

   NOTE
   - You can only scale clusters that are in the Running state. If a cluster is not in the Running state, the Edit cluster menu item is disabled.
   - If you are scaling a cluster created in your Free Developer Tier, a dialog box is displayed, prompting you to unlock the cluster before you can continue. For more information, see Use Free Developer Tier.

4. On the page that appears, select the type of node that you want to scale from the Node type drop-down list, select Scale up/down from the Operation type drop-down list, select the instance type that you want to scale to, and then click Subscribe.
5. In the message that appears, confirm your scaling settings and click Subscribe.

   The cluster enters the Updating state.

CelerData requires some time to launch instances of the new instance type and migrate your data and workloads from the original instances to the new instances. During this time, charges are still calculated based on the original instance type.

When the scaling operation is complete, the cluster returns to the Running state.
Horizontal scaling
For a scale-out or scale-in, you can set the number of FE nodes only to 1, 3, or 5. For BE nodes, we recommend keeping at least 3 nodes in production environments.
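These node-count constraints can be expressed as a small validation sketch (the function name is illustrative, not part of any CelerData API):

```python
def validate_classic_node_counts(fe_nodes, be_nodes, production=True):
    """Check FE/BE node counts against the documented constraints.

    Returns a list of violation messages; an empty list means the counts are valid.
    """
    issues = []
    # FE node count must be exactly 1, 3, or 5.
    if fe_nodes not in (1, 3, 5):
        issues.append(f"FE node count must be 1, 3, or 5, got {fe_nodes}")
    # At least 3 BE nodes are recommended for production environments.
    if production and be_nodes < 3:
        issues.append(f"at least 3 BE nodes recommended in production, got {be_nodes}")
    return issues
```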
Follow these steps:

1. Sign in to the CelerData Cloud BYOC console.
2. On the Clusters page, click the cluster that you want to scale.
3. On the cluster details page, click Manage and choose Edit cluster.

   NOTE
   - You can only scale clusters that are in the Running state. If a cluster is not in the Running state, the Edit cluster menu item is disabled.
   - If you are scaling a cluster created in your Free Developer Tier, a dialog box is displayed, prompting you to unlock the cluster before you can continue. For more information, see Use Free Developer Tier.

4. On the page that appears, select the type of node that you want to scale from the Node type drop-down list, select Scale in/out from the Operation type drop-down list, specify the number of nodes that you want to have, and then click Subscribe.
5. In the message that appears, confirm your scaling settings and click Subscribe.

   The cluster enters the Updating state.

CelerData requires some time to release or launch instances of the current instance type. During this time, charges are still calculated based on the original number of nodes.

When the scaling operation is complete, the cluster returns to the Running state.
Storage scaling
For BE nodes, you can only scale the disk size. For FE nodes, you can edit the disk IOPS and throughput of the disks in addition to the disk size.
Follow these steps:

1. Sign in to the CelerData Cloud BYOC console.
2. On the Clusters page, click the cluster that you want to scale.
3. On the cluster details page, click Manage and choose Edit cluster.

   NOTE
   - You can only scale clusters that are in the Running state. If a cluster is not in the Running state, the Edit cluster menu item is disabled.
   - If you are scaling a cluster created in your Free Developer Tier, a dialog box is displayed, prompting you to unlock the cluster before you can continue. For more information, see Use Free Developer Tier.

4. On the page that appears, select the type of node for which you want to change the storage size from the Node type drop-down list, and select Edit storage from the Operation type drop-down list. Then, specify the Disk IOPS and Disk throughput if you have selected FE node as the Node type, specify the Disk size for the storage that you want to scale, and click Subscribe.

   NOTE
   - The minimum disk IOPS per FE node is 3000.
   - The minimum disk throughput per FE node is 150 MB/s.
   - The minimum disk size per FE node is 30 GB.
   - The minimum storage size per BE node is 500 GB.

5. In the message that appears, confirm your scaling settings and click Subscribe.

   The cluster enters the Updating state.

CelerData requires some time to release or launch storage resources. During this time, charges are still calculated based on the original storage size.

When the scaling operation is complete, the cluster returns to the Running state.
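The storage minimums listed in the note above can be checked with a small validation sketch (the constant and function names are illustrative, not part of any CelerData API):

```python
# Documented minimums for classic-cluster storage scaling.
FE_MIN_IOPS = 3000        # disk IOPS per FE node
FE_MIN_THROUGHPUT = 150   # disk throughput per FE node, in MB/s
FE_MIN_DISK_GB = 30       # disk size per FE node, in GB
BE_MIN_DISK_GB = 500      # storage size per BE node, in GB

def validate_storage_request(node_type, disk_gb, iops=None, throughput_mbps=None):
    """Return a list of violations of the documented storage minimums."""
    issues = []
    if node_type == "FE":
        if iops is not None and iops < FE_MIN_IOPS:
            issues.append(f"FE disk IOPS must be >= {FE_MIN_IOPS}")
        if throughput_mbps is not None and throughput_mbps < FE_MIN_THROUGHPUT:
            issues.append(f"FE disk throughput must be >= {FE_MIN_THROUGHPUT} MB/s")
        if disk_gb < FE_MIN_DISK_GB:
            issues.append(f"FE disk size must be >= {FE_MIN_DISK_GB} GB")
    elif node_type == "BE":
        # BE nodes in classic clusters support disk-size scaling only.
        if disk_gb < BE_MIN_DISK_GB:
            issues.append(f"BE storage size must be >= {BE_MIN_DISK_GB} GB")
    return issues
```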
Scale an elastic cluster
For an elastic cluster, CelerData supports vertical scaling, horizontal scaling, and Coordinator Node storage scaling. You can also enable Auto Scaling for each warehouse to allow the system to automatically scale the number of Compute Nodes based on the CPU utilization of the warehouse.
Vertical scaling
Take note of the following points:
- If your cluster uses EBS volumes as storage, the cluster nodes will restart on a rolling basis during a scale-up and you may experience query or data loading failures. Therefore, we recommend that you perform a scale-up during off-peak hours.
- If your cluster uses instance store volumes as storage, the amount of time taken by a scale-up varies depending on the volume of data in your cluster.
Follow these steps:

1. Sign in to the CelerData Cloud BYOC console.
2. On the Clusters page, click the cluster that you want to scale.
3. On the cluster details page, click Manage and choose Edit cluster.

   NOTE
   You can only scale clusters that are in the Running state. If a cluster is not in the Running state, the Edit cluster menu item is disabled.

4. On the page that appears, select the type of node that you want to scale from the Node type drop-down list, select Scale up/down from the Operation type drop-down list, and select the name of the warehouse if you have selected Compute Node as the Node type. Then, select the instance type that you want to scale to, and click Subscribe.
5. In the message that appears, confirm your scaling settings and click Subscribe.

   The cluster enters the Updating state.

CelerData requires some time to launch instances of the new instance type and migrate your data and workloads from the original instances to the new instances. During this time, charges are still calculated based on the original instance type.

When the scaling operation is complete, the cluster returns to the Running state.
Horizontal scaling
For a scale-out or scale-in, you can set the number of Coordinator Nodes only to 1, 3, or 5. You can edit the number of Compute Nodes in a warehouse.
Follow these steps:

1. Sign in to the CelerData Cloud BYOC console.
2. On the Clusters page, click the cluster that you want to scale.
3. On the cluster details page, click Manage and choose Edit cluster.

   NOTE
   You can only scale clusters that are in the Running state. If a cluster is not in the Running state, the Edit cluster menu item is disabled.

4. On the page that appears, select the type of node that you want to scale from the Node type drop-down list, select Scale in/out from the Operation type drop-down list, and select the name of the warehouse if you have selected Compute Node as the Node type. Then, specify the number of nodes that you want to have, and click Subscribe.
5. In the message that appears, confirm your scaling settings and click Subscribe.

   The cluster enters the Updating state.

CelerData requires some time to release or launch instances of the current instance type. During this time, charges are still calculated based on the original number of nodes.

When the scaling operation is complete, the cluster returns to the Running state.
Storage scaling
You can scale the storage only for Coordinator Nodes. In addition to the disk size, you can edit the disk IOPS and throughput of the disks.
Follow these steps:

1. Sign in to the CelerData Cloud BYOC console.
2. On the Clusters page, click the cluster that you want to scale.
3. On the cluster details page, click Manage and choose Edit cluster.

   NOTE
   You can only scale clusters that are in the Running state. If a cluster is not in the Running state, the Edit cluster menu item is disabled.

4. On the page that appears, select Coordinator Node from the Node type drop-down list, select Edit storage from the Operation type drop-down list, specify the Disk IOPS, Disk throughput, and Disk size for the storage that you want to scale, and then click Subscribe.

   NOTE
   - The minimum disk IOPS per Coordinator Node is 3000.
   - The minimum disk throughput per Coordinator Node is 150 MB/s.
   - The minimum disk size per Coordinator Node is 30 GB.

5. In the message that appears, confirm your scaling settings and click Subscribe.

   The cluster enters the Updating state.

CelerData requires some time to release or launch storage resources. During this time, charges are still calculated based on the original storage size.

When the scaling operation is complete, the cluster returns to the Running state.
Auto Scaling
You can define Auto Scaling policies for each warehouse to let it adaptively adjust the number of its Compute Nodes. CelerData assesses the CPU utilization of the warehouse in real time and scales the Compute Nodes in or out based on the policies you have defined, helping you maintain steady, predictable performance at the lowest possible cost.
Follow these steps:

1. Sign in to the CelerData Cloud BYOC console.
2. On the Clusters page, click the elastic cluster where the warehouse for which you want to enable Auto Scaling resides.
3. On the Warehouses tab of the cluster detail page, move the cursor to the lower-right corner of the card for the warehouse to display the View more details button, and then click the button.
4. Click the Resource Scheduling tab. Then, click Edit in the Autoscaling Policy section.
5. In the Edit autoscaling policy dialog box, turn on the switch next to Autoscaling, and specify the Scaling range (the minimum and maximum number of Compute Nodes in the warehouse). Then, set the Scale out and Scale in policies as follows:

   a. In the Scale out policy section, set the CPU utilization upper limit, the time threshold, and the number of Compute Nodes to be scaled out per step.
   b. In the Scale in policy section, set the CPU utilization lower limit, the time threshold, and the number of Compute Nodes to be scaled in per step.

   NOTE
   - The lower bound of the Scaling range is 1, and the upper bound is 100.
   - Auto Scaling policies take effect only when the warehouse is running.
   - The Scale out policy takes effect only when the current Compute Node count is less than the upper bound of the Scaling range you have defined.
   - The Scale in policy takes effect only when the current Compute Node count is greater than the lower bound of the Scaling range you have defined.
   - The CPU utilization upper limit of the Scale out policy must be greater than the CPU utilization lower limit of the Scale in policy.
   - To avoid significant fluctuations in cluster performance, a maximum of two Compute Nodes can be scaled in per step.

6. Click Save changes to save the policies.
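The policy semantics described above can be sketched as a simple decision function. This is illustrative only: the function name, parameters, and sampling model are assumptions, not the CelerData implementation.

```python
def autoscale_decision(cpu_samples, current_nodes,
                       min_nodes, max_nodes,
                       out_cpu_upper, out_step,
                       in_cpu_lower, in_step):
    """Return the new Compute Node count after applying the Scale out and
    Scale in policies to CPU samples covering the policy's time threshold."""
    assert 1 <= min_nodes <= max_nodes <= 100   # Scaling range bounds
    assert in_step <= 2                         # at most 2 nodes scaled in per step
    assert out_cpu_upper > in_cpu_lower         # upper limit must exceed lower limit
    # Scale out: CPU stayed above the upper limit for the whole threshold window,
    # and the current count is below the range maximum.
    if all(c > out_cpu_upper for c in cpu_samples) and current_nodes < max_nodes:
        return min(current_nodes + out_step, max_nodes)
    # Scale in: CPU stayed below the lower limit for the whole threshold window,
    # and the current count is above the range minimum.
    if all(c < in_cpu_lower for c in cpu_samples) and current_nodes > min_nodes:
        return max(current_nodes - in_step, min_nodes)
    return current_nodes
```

Note how the result is always clamped to the Scaling range, so repeated steps can never move the warehouse outside the bounds you defined.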