Create a new Apache Spark cluster. See https://docs.gcp.databricks.com/dev-tools/api/latest/clusters.html#create for more in-depth documentation.
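As an illustration, here is a minimal Python sketch of a create request. It assumes the Clusters API 2.0 REST endpoint /api/2.0/clusters/create, a workspace URL and personal access token supplied via the (assumed) DATABRICKS_HOST and DATABRICKS_TOKEN environment variables, and placeholder values for the runtime version and node type.

import os
import requests

host = os.environ["DATABRICKS_HOST"]  # e.g. https://<workspace>.gcp.databricks.com (assumed env var)
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

# Minimal cluster spec; spark_version and node_type_id are placeholders --
# pick real values from the runtime-versions and list-node-types operations below.
spec = {
    "cluster_name": "example-cluster",
    "spark_version": "<runtime-version-key>",
    "node_type_id": "<node-type-id>",
    "num_workers": 2,
}

resp = requests.post(f"{host}/api/2.0/clusters/create", headers=headers, json=spec)
resp.raise_for_status()
print(resp.json()["cluster_id"])  # ID of the newly created cluster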
Edit the configuration of a cluster to match the provided attributes and size. See https://docs.gcp.databricks.com/dev-tools/api/latest/clusters.html#edit for more in-depth documentation.
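A sketch of an edit request under the same assumptions (endpoint /api/2.0/clusters/edit, the DATABRICKS_HOST and DATABRICKS_TOKEN environment variables, placeholder IDs and values). The full desired configuration is sent; the cluster is updated to match it.

import os
import requests

host = os.environ["DATABRICKS_HOST"]
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

# Send the full desired configuration; the cluster is edited to match these attributes.
edited = {
    "cluster_id": "1234-567890-abcde123",    # placeholder cluster ID
    "cluster_name": "example-cluster",
    "spark_version": "<runtime-version-key>",
    "node_type_id": "<node-type-id>",
    "num_workers": 4,                        # e.g. grow the cluster from 2 to 4 workers
}

requests.post(f"{host}/api/2.0/clusters/edit", headers=headers, json=edited).raise_for_status()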
Retrieve a list of events about the activity of a cluster. You can retrieve events from active clusters (running, pending, or reconfiguring) and terminated clusters within 30 days of their last termination. This API is paginated. If there are more events to read, the response includes all the parameters necessary to request the next page of events. See https://docs.gcp.databricks.com/dev-tools/api/latest/clusters.html#events for more in-depth documentation.
Type: object
{
  "cluster_id" : "The ID of the cluster to retrieve events about. This field is required.",
  "start_time" : "The start time in epoch milliseconds. If empty, returns events starting from the beginning of time.",
  "end_time" : "The end time in epoch milliseconds. If empty, returns events up to the current time.",
  "event_types" : [ "string" ],
  "order" : "string. Possible values: ASC | DESC"
}
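To show how the pagination described above can be consumed, here is a rough Python loop. It assumes the endpoint /api/2.0/clusters/events, the DATABRICKS_HOST and DATABRICKS_TOKEN environment variables, and that each response carries a next_page object holding the parameters for the following request (the field name is an assumption; adapt it to what your workspace actually returns).

import os
import requests

host = os.environ["DATABRICKS_HOST"]
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

body = {"cluster_id": "1234-567890-abcde123", "order": "DESC"}  # placeholder cluster ID
events = []
while True:
    resp = requests.post(f"{host}/api/2.0/clusters/events", headers=headers, json=body)
    resp.raise_for_status()
    page = resp.json()
    events.extend(page.get("events", []))
    if "next_page" not in page:   # no further pages to fetch
        break
    body = page["next_page"]      # the response supplies the parameters for the next request

print(f"collected {len(events)} events")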
Retrieve the information for a cluster given its identifier. Clusters can be described while they are running or up to 30 days after they are terminated. See https://docs.gcp.databricks.com/dev-tools/api/latest/clusters.html#get for more in-depth documentation.
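For completeness, a short sketch of the get call, assuming /api/2.0/clusters/get accepts the cluster ID as a query parameter and the same environment variables as above.

import os
import requests

host = os.environ["DATABRICKS_HOST"]
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

resp = requests.get(f"{host}/api/2.0/clusters/get", headers=headers,
                    params={"cluster_id": "1234-567890-abcde123"})  # placeholder cluster ID
resp.raise_for_status()
print(resp.json().get("state"))  # e.g. PENDING, RUNNING, TERMINATED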
Return information about all pinned clusters, active clusters, up to 150 of the most recently terminated all-purpose clusters in the past 30 days, and up to 30 of the most recently terminated job clusters in the past 30 days. For example, if there is 1 pinned cluster, 4 active clusters, 45 terminated all-purpose clusters in the past 30 days, and 50 terminated job clusters in the past 30 days, then this API returns the 1 pinned cluster, 4 active clusters, all 45 terminated all-purpose clusters, and the 30 most recently terminated job clusters. See https://docs.gcp.databricks.com/dev-tools/api/latest/clusters.html#list for more in-depth documentation.
This operation has no parameters
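A sketch of the list call, assuming /api/2.0/clusters/list; the parameterless listing operations below (node types, availability zones, runtime versions) follow the same GET pattern against their own endpoints.

import os
import requests

host = os.environ["DATABRICKS_HOST"]
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

resp = requests.get(f"{host}/api/2.0/clusters/list", headers=headers)
resp.raise_for_status()
for cluster in resp.json().get("clusters", []):
    print(cluster["cluster_id"], cluster.get("cluster_name"), cluster.get("state"))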
Return a list of supported Spark node types. These node types can be used to launch a cluster. See https://docs.gcp.databricks.com/dev-tools/api/latest/clusters.html#list-node-types for more in-depth documentation.
This operation has no parameters
Return a list of availability zones where clusters can be created (for example, us-west-2a). These zones can be used to launch a cluster. See https://docs.gcp.databricks.com/dev-tools/api/latest/clusters.html#list-zones for more in-depth documentation.
This operation has no parameters
Permanently delete a cluster. If the cluster is running, it is terminated and its resources are asynchronously removed. If the cluster is already terminated, it is removed immediately. You cannot perform any action on a permanently deleted cluster, including retrieving its permissions, and a permanently deleted cluster is no longer returned in the cluster list. See https://docs.gcp.databricks.com/dev-tools/api/latest/clusters.html#permanent-delete for more in-depth documentation.
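A sketch of a permanent delete, assuming the endpoint /api/2.0/clusters/permanent-delete and a placeholder cluster ID; unlike the terminate operation further below, this call is irreversible.

import os
import requests

host = os.environ["DATABRICKS_HOST"]
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

# Irreversible: the cluster disappears from the cluster list and cannot be acted on afterwards.
requests.post(f"{host}/api/2.0/clusters/permanent-delete", headers=headers,
              json={"cluster_id": "1234-567890-abcde123"}).raise_for_status()  # placeholder ID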
Ensure that an all-purpose cluster configuration is retained even after a cluster has been terminated for more than 30 days. Pinning ensures that the cluster is always returned by the List API. Pinning a cluster that is already pinned has no effect. You must be a Databricks administrator to invoke this API. See https://docs.gcp.databricks.com/dev-tools/api/latest/clusters.html#pin for more in-depth documentation.
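Pin (and the symmetric unpin at the end of this section) is a single POST carrying the cluster ID; a sketch assuming /api/2.0/clusters/pin and a token belonging to a workspace administrator.

import os
import requests

host = os.environ["DATABRICKS_HOST"]
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}  # token of an admin user

requests.post(f"{host}/api/2.0/clusters/pin", headers=headers,
              json={"cluster_id": "1234-567890-abcde123"}).raise_for_status()  # placeholder ID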
Resize a cluster to have a desired number of workers. The cluster must be in the RUNNING state. Specify either the num_workers or the autoscale property, but not both. See https://docs.gcp.databricks.com/dev-tools/api/latest/clusters.html#resize for more in-depth documentation.
Type: object
{
  "cluster_id" : "The ID of the cluster to be resized.",
  "num_workers" : "Number of worker nodes that this cluster should have. A cluster has one Spark driver and num_workers executors for a total of num_workers + 1 Spark nodes.",
  "autoscale" : {
    "min_workers" : "The minimum number of workers to which the cluster can scale down when underutilized. It is also the initial number of workers the cluster will have after creation.",
    "max_workers" : "The maximum number of workers to which the cluster can scale up when overloaded. max_workers must be strictly greater than min_workers."
  }
}
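To illustrate the either/or between num_workers and autoscale, a rough sketch assuming /api/2.0/clusters/resize and a placeholder cluster ID.

import os
import requests

host = os.environ["DATABRICKS_HOST"]
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}
cluster_id = "1234-567890-abcde123"  # placeholder; the cluster must be RUNNING

# Fixed-size resize: specify num_workers ...
requests.post(f"{host}/api/2.0/clusters/resize", headers=headers,
              json={"cluster_id": cluster_id, "num_workers": 8}).raise_for_status()

# ... or switch to autoscaling by specifying autoscale instead (not both).
requests.post(f"{host}/api/2.0/clusters/resize", headers=headers,
              json={"cluster_id": cluster_id,
                    "autoscale": {"min_workers": 2, "max_workers": 8}}).raise_for_status()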
Restart a cluster given its ID. The cluster must be in the RUNNING state. See https://docs.gcp.databricks.com/dev-tools/api/latest/clusters.html#restart for more in-depth documentation.
Return the list of available runtime versions. These versions can be used to launch a cluster. See https://docs.gcp.databricks.com/dev-tools/api/latest/clusters.html#runtime-versions for more in-depth documentation.
This operation has no parameters
Start a terminated cluster given its ID. See https://docs.gcp.databricks.com/dev-tools/api/latest/clusters.html#start for more in-depth documentation.
Terminate a cluster given its ID. The cluster is removed asynchronously. Once the termination has completed, the cluster will be in the TERMINATED state. If the cluster is already in a TERMINATING or TERMINATED state, nothing will happen. Unless a cluster is pinned, 30 days after the cluster is terminated, it is permanently deleted. See https://docs.gcp.databricks.com/dev-tools/api/latest/clusters.html#delete-terminate for more in-depth documentation.
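Restart, start, and terminate all take just the cluster ID in the POST body. A sketch of terminate, assuming the terminate endpoint is /api/2.0/clusters/delete and that restart and start live at /api/2.0/clusters/restart and /api/2.0/clusters/start.

import os
import requests

host = os.environ["DATABRICKS_HOST"]
headers = {"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"}

# Asynchronous: the call returns before the cluster reaches the TERMINATED state.
requests.post(f"{host}/api/2.0/clusters/delete", headers=headers,
              json={"cluster_id": "1234-567890-abcde123"}).raise_for_status()  # placeholder ID

# Start and restart follow the same shape, e.g.:
# requests.post(f"{host}/api/2.0/clusters/start", headers=headers,
#               json={"cluster_id": "1234-567890-abcde123"}).raise_for_status()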
Unpin a cluster, allowing it to eventually be removed from the list returned by the List API. Unpinning a cluster that is not pinned has no effect. You must be a Databricks administrator to invoke this API. See https://docs.gcp.databricks.com/dev-tools/api/latest/clusters.html#unpin for more in-depth documentation.