# clusters

Operations on a `clusters` resource.
## Overview

| Property | Value |
|---|---|
| Name | clusters |
| Type | Resource |
| Id | databricks_workspace.compute.clusters |
## Fields

Name | Datatype |
---|---|
autotermination_minutes | integer |
aws_attributes | object |
cluster_id | string |
cluster_name | string |
cluster_source | string |
creator_user_name | string |
default_tags | object |
disk_spec | object |
driver_instance_source | object |
driver_node_type_id | string |
enable_elastic_disk | boolean |
enable_local_disk_encryption | boolean |
init_scripts_safe_mode | boolean |
instance_source | object |
last_state_loss_time | integer |
node_type_id | string |
num_workers | integer |
spark_context_id | integer |
spark_version | string |
start_time | integer |
state | string |
state_message | string |
terminated_time | integer |
termination_reason | object |
## Methods

Name | Accessible by | Required Params | Description |
---|---|---|---|
get | SELECT | cluster_id, deployment_name | Retrieves the information for a cluster given its identifier. Clusters can be described while they are running, or up to 60 days after they are terminated. |
list | SELECT | deployment_name | Return information about all pinned and active clusters, and all clusters terminated within the last 30 days. Clusters terminated prior to this period are not included. |
create | INSERT | deployment_name | Creates a new Spark cluster. This method will acquire new instances from the cloud provider if necessary. This method is asynchronous; the returned `cluster_id` can be used to poll the cluster status. |
delete | DELETE | deployment_name | Terminates the Spark cluster with the specified ID. The cluster is removed asynchronously. Once the termination has completed, the cluster will be in a `TERMINATED` state. |
update | UPDATE | deployment_name | Updates the configuration of a cluster to match the partial set of attributes and size. Denote which fields to update using the `update_mask` field. |
edit | REPLACE | deployment_name | Updates the configuration of a cluster to match the provided attributes and size. A cluster can be updated if it is in a `RUNNING` or `TERMINATED` state. |
changeowner | EXEC | deployment_name | Change the owner of the cluster. You must be an admin and the cluster must be terminated to perform this operation. The service principal application ID can be supplied as an argument to `owner_username`. |
permanentdelete | EXEC | deployment_name | Permanently deletes a Spark cluster. This cluster is terminated and resources are asynchronously removed. |
pin | EXEC | deployment_name | Pinning a cluster ensures that the cluster will always be returned by the ListClusters API. Pinning a cluster that is already pinned will have no effect. This API can only be called by workspace admins. |
resize | EXEC | deployment_name | Resizes a cluster to have a desired number of workers. This will fail unless the cluster is in a `RUNNING` state. |
restart | EXEC | deployment_name | Restarts a Spark cluster with the supplied ID. If the cluster is not currently in a `RUNNING` state, nothing will happen. |
start | EXEC | deployment_name | Starts a terminated Spark cluster with the supplied ID. This works similar to `create`, except the previous cluster attributes are preserved. |
unpin | EXEC | deployment_name | Unpinning a cluster will allow the cluster to eventually be removed from the ListClusters API. Unpinning a cluster that is not pinned will have no effect. This API can only be called by workspace admins. |
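Methods accessible by `EXEC` are invoked with StackQL's `EXEC` statement. The sketch below pins a running cluster; the request-body shape (passing `cluster_id` via `@@json`) is an assumption based on the method's parameters, not confirmed by this page.

```sql
/*+ exec */
-- sketch: pin a cluster so it is always returned by list (admin only);
-- the @@json body shape is an assumed convention, not documented above
EXEC databricks_workspace.compute.clusters.pin
@deployment_name = '{{ deployment_name }}'
@@json = '{"cluster_id": "{{ cluster_id }}"}';
```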
## SELECT examples
- clusters (list)
- clusters (get)
```sql
SELECT
autotermination_minutes,
aws_attributes,
cluster_id,
cluster_name,
cluster_source,
creator_user_name,
default_tags,
disk_spec,
driver_instance_source,
driver_node_type_id,
enable_elastic_disk,
enable_local_disk_encryption,
init_scripts_safe_mode,
instance_source,
last_state_loss_time,
node_type_id,
num_workers,
spark_context_id,
spark_version,
start_time,
state,
state_message,
terminated_time,
termination_reason
FROM databricks_workspace.compute.clusters
WHERE deployment_name = '{{ deployment_name }}';
```
```sql
SELECT
autotermination_minutes,
aws_attributes,
cluster_id,
cluster_name,
cluster_source,
creator_user_name,
default_tags,
disk_spec,
driver_instance_source,
driver_node_type_id,
enable_elastic_disk,
enable_local_disk_encryption,
init_scripts_safe_mode,
instance_source,
last_state_loss_time,
node_type_id,
num_workers,
spark_context_id,
spark_version,
start_time,
state,
state_message,
terminated_time,
termination_reason
FROM databricks_workspace.compute.clusters
WHERE cluster_id = '{{ cluster_id }}' AND
deployment_name = '{{ deployment_name }}';
```
## INSERT example

Use the following StackQL query and manifest file to create a new `clusters` resource.
- clusters
- Manifest
```sql
/*+ create */
INSERT INTO databricks_workspace.compute.clusters (
deployment_name,
data__cluster_name,
data__is_single_node,
data__kind,
data__spark_version,
data__node_type_id,
data__aws_attributes
)
SELECT
'{{ deployment_name }}',
'{{ cluster_name }}',
'{{ is_single_node }}',
'{{ kind }}',
'{{ spark_version }}',
'{{ node_type_id }}',
'{{ aws_attributes }}'
;
```
```yaml
- name: your_resource_model_name
  props:
    - name: cluster_name
      value: single-node-with-kind-cluster
    - name: is_single_node
      value: true
    - name: kind
      value: CLASSIC_PREVIEW
    - name: spark_version
      value: 14.3.x-scala2.12
    - name: node_type_id
      value: i3.xlarge
    - name: aws_attributes
      value:
        first_on_demand: 1
        availability: SPOT_WITH_FALLBACK
        zone_id: auto
        spot_bid_price_percent: 100
        ebs_volume_count: 0
```
## UPDATE example

Updates a `clusters` resource.
```sql
/*+ update */
-- replace field1, field2, etc. with the fields you want to update
UPDATE databricks_workspace.compute.clusters
SET field1 = '{{ value1 }}',
field2 = '{{ value2 }}', ...
WHERE deployment_name = '{{ deployment_name }}';
```
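As a concrete sketch, the partial update below changes only the auto-termination window. The field name comes from the Fields table above; whether this particular field is updatable through this method is an assumption.

```sql
/*+ update */
-- sketch: shorten the idle auto-termination window to 60 minutes
-- (assumes autotermination_minutes is accepted by the update method)
UPDATE databricks_workspace.compute.clusters
SET autotermination_minutes = 60
WHERE deployment_name = '{{ deployment_name }}';
```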
## REPLACE example

Replaces a `clusters` resource.
```sql
/*+ update */
-- replace field1, field2, etc. with the fields you want to update
REPLACE databricks_workspace.compute.clusters
SET field1 = '{{ value1 }}',
field2 = '{{ value2 }}', ...
WHERE deployment_name = '{{ deployment_name }}';
```
## DELETE example

Deletes a `clusters` resource.
```sql
/*+ delete */
DELETE FROM databricks_workspace.compute.clusters
WHERE deployment_name = '{{ deployment_name }}';
```
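Per the Methods table, `delete` only terminates the cluster; removing it entirely requires the `permanentdelete` method via `EXEC`. A sketch follows; as with the other `EXEC` examples, passing `cluster_id` in the `@@json` body is an assumed convention.

```sql
/*+ exec */
-- sketch: permanently delete a terminated cluster so it no longer
-- appears in list results (cluster_id body shape is an assumption)
EXEC databricks_workspace.compute.clusters.permanentdelete
@deployment_name = '{{ deployment_name }}'
@@json = '{"cluster_id": "{{ cluster_id }}"}';
```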