databricks

package module
v0.2.0
Published: Dec 23, 2022 License: Apache-2.0 Imports: 29 Imported by: 51

README

Databricks SDK for Go

Stability: Experimental

The Databricks SDK for Go includes functionality to accelerate development with Go for the Databricks Lakehouse. It covers all public Databricks REST API operations. The SDK's internal HTTP client is robust and handles failures on different levels by performing intelligent retries.


Getting started

  1. On your local development machine with Go already installed and a Go code project active, create a go.mod file to track your Go code's dependencies by running the go mod init command, for example:

    go mod init sample
    
  2. Take a dependency on the Databricks SDK for Go package by running the go mod edit -require command:

    go mod edit -require github.com/databricks/databricks-sdk-go@v0.2.0
    

    Your go.mod file should now look like this:

    module sample
    
    go 1.18
    
    require github.com/databricks/databricks-sdk-go v0.2.0
    
    // Indirect dependencies will go here.
    
  3. Within your project, create a Go code file that imports the Databricks SDK for Go. The following example, in a file named main.go, lists all the clusters in your Databricks workspace:

    package main
    
    import (
      "context"
    
      "github.com/databricks/databricks-sdk-go"
      "github.com/databricks/databricks-sdk-go/service/clusters"
    )
    
    func main() {
      w := databricks.Must(databricks.NewWorkspaceClient())
      all, err := w.Clusters.ListAll(context.Background(), clusters.List{})
      if err != nil {
        panic(err)
      }
      for _, c := range all {
        println(c.ClusterName)
      }
    }
    
  4. Add any missing module dependencies by running the go mod tidy command:

    go mod tidy
    

    Note: If you get the error go: warning: "all" matched no packages, you forgot to add the preceding Go code file that imports the Databricks SDK for Go.

  5. Grab copies of all packages needed to support builds and tests of packages in your main module by running the go mod vendor command:

    go mod vendor
    
  6. Set up Databricks authentication on your local development machine by running the databricks configure command, if you have not done so already. For details, see the next section, Authentication.

  7. Run your Go code file, assuming a file named main.go, by running the go run command:

    go run main.go
    

    Assuming the preceding example code is run, the output is:

    [TRACE] Loading config via environment
    [TRACE] Loading config via config-file
    ...
    [TRACE] Attempting to configure auth: pat
    [TRACE] Attempting to configure auth: basic
    [TRACE] Attempting to configure auth: azure-client-secret
    ...
    

Authentication

If you use Databricks configuration profiles or Databricks-specific environment variables for Databricks authentication, the only code required to start working with a Databricks workspace is the following code snippet, which instructs the Databricks SDK for Go to use its default authentication flow:

w := databricks.Must(databricks.NewWorkspaceClient())
w./*press TAB for autocompletion*/

The conventional name for the variable that holds the workspace-level client of the Databricks SDK for Go is w, which is shorthand for workspace.

Default authentication flow

If you run the Databricks Terraform Provider, the Databricks CLI, or applications that target the Databricks SDKs for other languages, most likely they will all interoperate nicely together. By default, the Databricks SDK for Go tries the following authentication methods, in the following order, until one of them succeeds:

  1. Databricks native authentication
  2. Azure native authentication
  3. Google Cloud Platform native authentication
  4. If the SDK is unsuccessful at this point, it returns an authentication error and stops running.

You can instruct the Databricks SDK for Go to use a specific authentication method by setting the AuthType field in *databricks.Config as described in the following sections.
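
For example, a minimal sketch (assuming a Databricks personal access token is already available through environment variables or a configuration profile) that forces the SDK to use only Databricks token authentication:

w := databricks.Must(databricks.NewWorkspaceClient(&databricks.Config{
  AuthType: "pat", // skip all other authentication methods
}))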

For each authentication method, the SDK searches for compatible authentication credentials in the following locations, in the following order. Once the SDK finds a compatible set of credentials that it can use, it stops searching:

  1. Credentials that are hard-coded into *databricks.Config.

    Caution: Databricks does not recommend hard-coding credentials into *databricks.Config, as they can be exposed in plain text in version control systems. Use environment variables or configuration profiles instead.

  2. Credentials in Databricks-specific environment variables.

  3. For Databricks native authentication, credentials in the .databrickscfg file's DEFAULT configuration profile from its default file location (~ for Linux or macOS, and %USERPROFILE% for Windows).

  4. For Azure or Google Cloud Platform native authentication, the SDK searches for credentials through the Azure CLI or Google Cloud CLI as needed.

Depending on the Databricks authentication method, the SDK uses the following information. The sections below list the relevant *databricks.Config arguments, their descriptions, and any corresponding environment variables and .databrickscfg file fields.

Databricks native authentication

By default, the Databricks SDK for Go initially tries Databricks token authentication (AuthType: "pat" in *databricks.Config). If the SDK is unsuccessful, it then tries Databricks basic (username/password) authentication (AuthType: "basic" in *databricks.Config).

  • For Databricks token authentication, you must provide Host and Token; or their environment variable or .databrickscfg file field equivalents.
  • For Databricks basic authentication, you must provide Host, Username, and Password (for AWS workspace-level operations); or Host, AccountID, Username, and Password (for AWS, Azure, or GCP account-level operations); or their environment variable or .databrickscfg file field equivalents.

*databricks.Config arguments, with their environment variable and .databrickscfg file field equivalents:

  • Host (String): The Databricks host URL for either the Databricks workspace endpoint or the Databricks accounts endpoint. (Environment variable: DATABRICKS_HOST; .databrickscfg field: host.)
  • AccountID (String): The Databricks account ID for the Databricks accounts endpoint. Only has effect when Host is either https://accounts.cloud.databricks.com/ (AWS), https://accounts.azuredatabricks.net/ (Azure), or https://accounts.gcp.databricks.com/ (GCP). (Environment variable: DATABRICKS_ACCOUNT_ID; .databrickscfg field: account_id.)
  • Token (String): The Databricks personal access token (PAT) (AWS, Azure, and GCP) or Azure Active Directory (Azure AD) token (Azure). (Environment variable: DATABRICKS_TOKEN; .databrickscfg field: token.)
  • Username (String): The Databricks username part of basic authentication. Only possible when Host is *.cloud.databricks.com (AWS). (Environment variable: DATABRICKS_USERNAME; .databrickscfg field: username.)
  • Password (String): The Databricks password part of basic authentication. Only possible when Host is *.cloud.databricks.com (AWS). (Environment variable: DATABRICKS_PASSWORD; .databrickscfg field: password.)

For example, to use Databricks token authentication:

package main

import (
	"bufio"
	"context"
	"fmt"
	"os"
	"strings"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/config"
)

func main() {
	// Perform Databricks token authentication for a Databricks workspace.
	w, err := databricks.NewWorkspaceClient(&databricks.Config{
		Host:        askFor("Host:"),                  // workspace url
		Token:       askFor("Personal Access Token:"), // PAT
		Credentials: config.PatCredentials{},          // enforce PAT auth
	})
	if err != nil {
		panic(err)
	}
	me, err := w.CurrentUser.Me(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("Hello, my name is %s!\n", me.DisplayName)
}

func askFor(prompt string) string {
	var s string
	r := bufio.NewReader(os.Stdin)
	for {
		fmt.Fprint(os.Stdout, prompt+" ")
		s, _ = r.ReadString('\n')
		s = strings.TrimSpace(s)
		if s != "" {
			break
		}
	}
	return s
}
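
Similarly, here is a minimal sketch of Databricks basic authentication for AWS workspace-level operations, reusing the askFor helper defined above; with only Host, Username, and Password set, the SDK selects basic authentication:

w, err := databricks.NewWorkspaceClient(&databricks.Config{
  Host:     askFor("Host:"),
  Username: askFor("Username:"),
  Password: askFor("Password:"),
})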
Azure native authentication

By default, the Databricks SDK for Go first tries Azure client secret authentication (AuthType: "azure-client-secret" in *databricks.Config). If the SDK is unsuccessful, it then tries Azure CLI authentication (AuthType: "azure-cli" in *databricks.Config). See Manage service principals.

The Databricks SDK for Go picks up an Azure CLI token if you've previously authenticated as an Azure user by running az login on your machine. See Get Azure AD tokens for users by using the Azure CLI.
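
For example, a minimal sketch (with a hypothetical workspace URL) that forces Azure CLI authentication after you have run az login:

w, err := databricks.NewWorkspaceClient(&databricks.Config{
  Host:     "https://adb-1234567890123456.7.azuredatabricks.net", // hypothetical workspace URL
  AuthType: "azure-cli",
})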

To authenticate as an Azure Active Directory (Azure AD) service principal, you must provide one of the following (see also Add a service principal to your Azure Databricks account):

  • AzureResourceID, AzureClientSecret, AzureClientID, and AzureTenantID; or their environment variable or .databrickscfg file field equivalents.
  • AzureResourceID and AzureUseMSI; or their environment variable or .databrickscfg file field equivalents.

*databricks.Config arguments, with their environment variable and .databrickscfg file field equivalents:

  • AzureResourceID (String): The Azure Resource Manager ID for the Azure Databricks workspace, which is exchanged for a Databricks host URL. (Environment variable: DATABRICKS_AZURE_RESOURCE_ID; .databrickscfg field: azure_workspace_resource_id.)
  • AzureUseMSI (Boolean): true to use Azure Managed Service Identity passwordless authentication flow for service principals. This feature is not yet implemented in the Databricks SDK for Go. (Environment variable: ARM_USE_MSI; .databrickscfg field: azure_use_msi.)
  • AzureClientSecret (String): The Azure AD service principal's client secret. (Environment variable: ARM_CLIENT_SECRET; .databrickscfg field: azure_client_secret.)
  • AzureClientID (String): The Azure AD service principal's application ID. (Environment variable: ARM_CLIENT_ID; .databrickscfg field: azure_client_id.)
  • AzureTenantID (String): The Azure AD service principal's tenant ID. (Environment variable: ARM_TENANT_ID; .databrickscfg field: azure_tenant_id.)
  • AzureEnvironment (String): The Azure environment type (such as Public, UsGov, China, and Germany) for a specific set of API endpoints. Defaults to PUBLIC. (Environment variable: ARM_ENVIRONMENT; .databrickscfg field: azure_environment.)

For example, to use Azure client secret authentication:

w, err := databricks.NewWorkspaceClient(&databricks.Config{
  Host:              askFor("Host:"),
  AzureResourceID:   askFor("Azure Resource ID:"),
  AzureTenantID:     askFor("AAD Tenant ID:"),
  AzureClientID:     askFor("AAD Client ID:"),
  AzureClientSecret: askFor("AAD Client Secret:"),
  Credentials:       config.AzureClientSecretCredentials{},
})
Google Cloud Platform native authentication

By default, the Databricks SDK for Go first tries Google Cloud Platform (GCP) ID authentication (AuthType: "google-id" in *databricks.Config). If the SDK is unsuccessful, it then tries GCP credentials authentication (AuthType: "google-credentials" in *databricks.Config).

The Databricks SDK for Go picks up an OAuth token in the scope of the Google Application Default Credentials (ADC) flow. This means that if you have run gcloud auth application-default login on your development machine, or run the application on compute that is allowed to impersonate the Google Cloud service account specified in GoogleServiceAccount, authentication should work out of the box. See Creating and managing service accounts.

To authenticate as a Google Cloud service account, you must provide one of the following:

  • Host and GoogleServiceAccount; or their environment variable or .databrickscfg file field equivalents.
  • Host and GoogleCredentials; or their environment variable or .databrickscfg file field equivalents.

*databricks.Config arguments, with their environment variable and .databrickscfg file field equivalents:

  • GoogleServiceAccount (String): The Google Cloud Platform (GCP) service account e-mail used for impersonation in the Application Default Credentials flow that does not require a password. (Environment variable: DATABRICKS_GOOGLE_SERVICE_ACCOUNT; .databrickscfg field: google_service_account.)
  • GoogleCredentials (String): GCP Service Account Credentials JSON, or the location of these credentials on the local filesystem. (Environment variable: GOOGLE_CREDENTIALS; .databrickscfg field: google_credentials.)

For example, to use Google ID authentication:

w, err := databricks.NewWorkspaceClient(&databricks.Config{
  Host:                 askFor("Host:"),
  GoogleServiceAccount: askFor("Google Service Account:"),
  Credentials:          config.GoogleDefaultCredentials{},
})
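
Or, as a sketch of GCP credentials authentication (the path below is hypothetical), pass the service account key JSON, or its location on the local filesystem, through GoogleCredentials:

w, err := databricks.NewWorkspaceClient(&databricks.Config{
  Host:              askFor("Host:"),
  GoogleCredentials: "/path/to/service-account-key.json", // hypothetical path; raw JSON content is also accepted
})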
Overriding .databrickscfg

For Databricks native authentication, you can override the default behavior in *databricks.Config for using .databrickscfg as follows:

*databricks.Config arguments, with their environment variable equivalents:

  • Profile (String): A connection profile specified within .databrickscfg to use instead of DEFAULT. (Environment variable: DATABRICKS_CONFIG_PROFILE.)
  • ConfigFile (String): A non-default location of the Databricks CLI credentials file. (Environment variable: DATABRICKS_CONFIG_FILE.)

For example, to use a profile named MYPROFILE instead of DEFAULT:

w := databricks.Must(databricks.NewWorkspaceClient(&databricks.Config{
  Profile:  "MYPROFILE",
}))
// Now call the Databricks workspace APIs as desired...
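
For reference, a minimal sketch of what such a profile might look like in ~/.databrickscfg (hypothetical values; the field names match the tables above):

[MYPROFILE]
host  = https://my-workspace.cloud.databricks.com
token = <personal-access-token>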
Additional authentication configuration options

For all authentication methods, you can override the default behavior in *databricks.Config as follows:

*databricks.Config arguments, with their environment variable equivalents:

  • AuthType (String): When multiple auth attributes are available in the environment, use the auth type specified by this argument. This argument also holds the currently selected auth. (No environment variable.)
  • HTTPTimeoutSeconds (Integer): Number of seconds for HTTP timeout. Default is 60. (No environment variable.)
  • RetryTimeoutSeconds (Integer): Number of seconds to keep retrying HTTP requests. Default is 300 (5 minutes). (No environment variable.)
  • DebugTruncateBytes (Integer): Truncate JSON fields in debug logs above this limit. Default is 96. (Environment variable: DATABRICKS_DEBUG_TRUNCATE_BYTES.)
  • DebugHeaders (Boolean): true to debug HTTP headers of requests made by the application. Default is false, as headers contain sensitive data, such as access tokens. (Environment variable: DATABRICKS_DEBUG_HEADERS.)
  • RateLimit (Integer): Maximum number of requests per second made to the Databricks REST API. (Environment variable: DATABRICKS_RATE_LIMIT.)

For example, to turn on debug HTTP headers:

w := databricks.Must(databricks.NewWorkspaceClient(&databricks.Config{
  DebugHeaders: true,
}))  
// Now call the Databricks workspace APIs as desired...
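
Or, as another sketch that uses only fields documented above, to increase the HTTP timeout and the overall retry timeout:

w := databricks.Must(databricks.NewWorkspaceClient(&databricks.Config{
  HTTPTimeoutSeconds:  120, // per-request HTTP timeout
  RetryTimeoutSeconds: 600, // keep retrying failed requests for up to 10 minutes
}))
// Now call the Databricks workspace APIs as desired...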
Custom credentials provider

In some cases, you may want to have deeper control over authentication to Databricks. This can be achieved by creating your own credentials provider that returns an HTTP request visitor:

package main

import (
	"context"
	"fmt"
	"net/http"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/config"
)

type CustomCredentials struct{}

// Name identifies this credentials provider in logs.
func (c *CustomCredentials) Name() string {
	return "custom"
}

// Configure returns an HTTP request visitor that adds an Authorization
// header to every outgoing request.
func (c *CustomCredentials) Configure(ctx context.Context, cfg *config.Config) (func(*http.Request) error, error) {
	return func(r *http.Request) error {
		token := "..."
		r.Header.Set("Authorization", fmt.Sprintf("Bearer %s", token))
		return nil
	}, nil
}

func main() {
	w := databricks.Must(databricks.NewWorkspaceClient(&databricks.Config{
		Credentials: &CustomCredentials{},
	}))
	// Now call the Databricks workspace APIs as desired...
	_ = w
}

Code examples

To find code examples that demonstrate how to call the Databricks SDK for Go, see the top-level examples folder within this repository.

Long-running operations

More than 20 methods across different Databricks APIs are long-running operations for managing things like clusters, command execution, jobs, libraries, Delta Live Tables pipelines, and Databricks SQL warehouses. For example, in the Clusters API, once you create a cluster, you receive a cluster ID, and the cluster is in the PENDING state while Databricks takes care of provisioning virtual machines from the cloud provider in the background. But the cluster is only usable in the RUNNING state. Another example is the API for running a job or repairing a run: right after the run starts, the run is in the PENDING state, though the job is considered to be finished only when it is in the TERMINATED or SKIPPED state. And of course you would want to know the error message when the long-running operation times out, or why things fail. And sometimes you want to configure a custom timeout other than the default of 20 minutes.

To hide all of this integration-specific complexity from the end user, the Databricks SDK for Go provides a high-level API for triggering long-running operations and waiting for the related entities to reach the right state, or returning the error message about the problem in case of failure. All long-running operations follow the XxxAndWait naming pattern, where Xxx is the operation name. All these generated methods return information about the relevant entity once the operation is finished. You can configure a custom timeout for XxxAndWait by providing a functional option argument constructed by the retries.Timeout[Zzz](time.Duration) function, where Zzz is the result type of XxxAndWait.

In the following example, CreateAndWait returns ClusterInfo only once the cluster is in the RUNNING state; otherwise it times out after 10 minutes:

clusterInfo, err = w.Clusters.CreateAndWait(ctx, clusters.CreateCluster{
    ClusterName:            "Created cluster",
    SparkVersion:           latestLTS,
    NodeTypeId:             smallestWithDisk,
    AutoterminationMinutes: 10,
    NumWorkers:             1,
}, retries.Timeout[clusters.ClusterInfo](10*time.Minute))
Command execution on clusters

You can run Python, Scala, R, or SQL code on running interactive Databricks clusters and get the results back. All supplied code has its leading whitespace removed, so that you can easily embed Python code into Go applications. This high-level wrapper comes from the Databricks Terraform provider, where it was tested for over two years for use cases such as DBFS mounts and SQL permissions. The interface hides the intricate complexity of all the internal APIs involved, to simplify the unit-testing experience for command execution. Databricks does not recommend that you use lower-level interfaces for command execution. The execution timeout is 20 minutes and cannot be overridden for the sake of interface simplicity, meaning that you should only use this API if you have some relatively complex executions to perform. Use jobs in case your commands must run longer than 20 minutes, or use the Databricks SQL Driver for Go in case your workload type is purely for business intelligence.

res := w.CommandExecutor.Execute(ctx, clusterId, "python", "print(1)")
if res.Failed() {
    return fmt.Errorf("command failed: %w", res.Err())
}
println(res.Text())
// Out: 1
Cluster library management

You can install or uninstall libraries on running Databricks clusters. UpdateAndWait follows all conventions of long-running operations: it wraps the Install and Uninstall operations, then checks the installation status on the cluster and exposes error messages back in a simplified way. This high-level wrapper came from the Databricks Terraform provider, where it was tested for over two years in the databricks_cluster and databricks_library resources. Databricks recommends that you use UpdateAndWait as the only API for cluster library management.

err = w.Libraries.UpdateAndWait(ctx, libraries.Update{
    ClusterId: clusterId,
    Install: []libraries.Library{
        {
            Pypi: &libraries.PythonPyPiLibrary{
                Package: "dbl-tempo",
            },
        },
    },
})
Advanced usage

You can track the intermediate state of a long-running operation while waiting to reach the correct state by supplying the func(i *retries.Info[Zzz]) functional option, where Zzz is the return type of the XxxAndWait method:

clusterInfo, err = w.Clusters.CreateAndWait(ctx, clusters.CreateCluster{
    // ...
}, func(i *retries.Info[clusters.ClusterInfo]) {
    updateIntermediateState(i.Info.StateMessage)
})

Paginated responses

On the platform side, some Databricks APIs have result pagination, and some do not. Some APIs follow offset-plus-limit pagination, some start their offsets from 0 and some from 1, some use cursor-based iteration, and others just return all results in a single response. The Databricks SDK for Go hides this intricate complexity and generates a higher-level interface for retrieving all results of a certain entity type. The naming pattern is XxxAll, where Xxx is the name of the method that retrieves a single page of results.

all, err := w.Repos.ListAll(ctx, repos.List{})
if err != nil {
    return fmt.Errorf("list repos: %w", err)
}
for _, repo := range all {
    println(repo.Path)
}

GetByName utility methods

On the platform side, most Databricks objects are retrieved primarily by their identifiers. In some common workflows, it's easier to reason about workspace objects by their names. To simplify the development experience and speed up proofs of concept, the Databricks SDK for Go generates code for GetByName client-side utilities. Keep in mind that some Databricks APIs don't enforce unique names on objects, and these generated helpers return an error whenever a duplicate name is detected.

repo, err := w.Repos.GetByPath(ctx, path)
if err != nil {
    return err
}
return w.Repos.Update(ctx, repos.UpdateRepo{
    RepoId: repo.Id,
    Branch: tag,
})

Node type and Databricks Runtime selectors

The Databricks SDK for Go provides selector methods that make developing multi-cloud applications easier: instead of hard-coding node type IDs or runtime versions, you rely on characteristics of the virtual machine, such as the number of cores or the availability of local disks, or always pick up the latest Databricks Runtime for an interactive or per-job cluster.

// Fetch the list of spark runtime versions.
sparkVersions, err := w.Clusters.SparkVersions(ctx)
if err != nil {
    return err
}

// Select the latest LTS version.
latestLTS, err := sparkVersions.Select(clusters.SparkVersionRequest{
    Latest:          true,
    LongTermSupport: true,
})
if err != nil {
    return err
}

// Fetch the list of available node types.
nodeTypes, err := w.Clusters.ListNodeTypes(ctx)
if err != nil {
    return err
}

// Select the smallest node type ID.
smallestWithDisk, err := nodeTypes.Smallest(clusters.NodeTypeRequest{
    LocalDisk: true,
})
if err != nil {
    return err
}

// Create the cluster and wait for it to start properly.
runningCluster, err := w.Clusters.CreateAndWait(ctx, clusters.CreateCluster{
    ClusterName:            clusterName,
    SparkVersion:           latestLTS,
    NodeTypeId:             smallestWithDisk,
    AutoterminationMinutes: 15,
    NumWorkers:             1,
})

Integration with io interfaces for DBFS

You can open a file on DBFS for reading or writing with w.Dbfs.Open. This function returns a dbfs.Handle that is compatible with a subset of io interfaces for reading, writing, and closing.

Uploading a file from an io.Reader:

upload, _ := os.Open("/path/to/local/file.ext")
remote, _ := w.Dbfs.Open(ctx, "/path/to/remote/file", dbfs.FileModeWrite|dbfs.FileModeOverwrite)
_, _ = io.Copy(remote, upload)
_ = remote.Close()

Downloading a file to an io.Writer:

download, _ := os.Create("/path/to/local")
remote, _ := w.Dbfs.Open(ctx, "/path/to/remote/file", dbfs.FileModeRead)
_, _ = io.Copy(download, remote)
Reading into and writing from buffers

You can read from or write to a DBFS file directly from a byte slice through the convenience functions w.Dbfs.ReadFile and w.Dbfs.WriteFile.

Uploading a file from a byte slice:

err := w.Dbfs.WriteFile(ctx, "/path/to/remote/file", []byte("Hello world!"))

Downloading a file into a byte slice:

buf, err := w.Dbfs.ReadFile(ctx, "/path/to/remote/file")

pflag.Value for enums

Databricks SDK for Go loosely integrates with spf13/pflag by implementing pflag.Value for all enum types.

Logging

By default, the Databricks SDK for Go uses logger.SimpleLogger, which is a leveled proxy to log.Printf, printing to os.Stderr. You can disable logging completely by adding log.SetOutput(io.Discard) to your init() function. You are encouraged to override logger.DefaultLogger with your own implementation that follows the logger.Logger interface.
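
For example, a minimal sketch that disables SDK logging completely, as described above (assuming the standard library log and io packages are imported):

func init() {
	// Discard all output written through the standard logger, including the SDK's.
	log.SetOutput(io.Discard)
}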

The current Logger interface will evolve in future versions of the Databricks SDK for Go.

Interface stability

During the Experimental period, Databricks is actively working on stabilizing the Databricks SDK for Go's interfaces. API clients for all services are generated from specification files that are synchronized from the main platform. You are highly encouraged to pin the exact version in the go.mod file and to read the changelog, where Databricks documents the changes. Some types of interfaces are more stable than others. For interfaces that are not yet nightly tested, Databricks may make minor documented backward-incompatible changes, such as fixing mapping correctness from int to int64, or renaming methods or some type names for more consistency.

Documentation

Index

Constants

This section is empty.

Variables

var ErrNotAccountClient = errors.New("invalid Databricks Account configuration")

Functions

func Must

func Must[T any](c T, err error) T

Must panics if error is not nil. It's intended to be used with databricks.NewWorkspaceClient and databricks.NewAccountClient.

func Version

func Version() string

Version of this SDK

func WithProduct

func WithProduct(name, version string)

WithProduct is expected to be set by developers to differentiate their app from others.

Example setting is:

func init() {
	databricks.WithProduct("your-product", "0.0.1")
}

Types

type AccountClient

type AccountClient struct {
	Config *config.Config

	// This API allows you to download billable usage logs for the specified
	// account and date range. This feature works with all account types.
	BillableUsage *billing.BillableUsageAPI

	// These APIs manage budget configuration including notifications for
	// exceeding a budget for a period. They can also retrieve the status of
	// each budget.
	Budgets *billing.BudgetsAPI

	// These APIs manage credential configurations for this workspace.
	// Databricks needs access to a cross-account service IAM role in your AWS
	// account so that Databricks can deploy clusters in the appropriate VPC for
	// the new workspace. A credential configuration encapsulates this role
	// information, and its ID is used when creating a new workspace.
	Credentials *deployment.CredentialsAPI

	// These APIs manage encryption key configurations for this workspace
	// (optional). A key configuration encapsulates the AWS KMS key information
	// and some information about how the key configuration can be used. There
	// are two possible uses for key configurations:
	//
	// * Managed services: A key configuration can be used to encrypt a
	// workspace's notebook and secret data in the control plane, as well as
	// Databricks SQL queries and query history. * Storage: A key configuration
	// can be used to encrypt a workspace's DBFS and EBS data in the data plane.
	//
	// In both of these cases, the key configuration's ID is used when creating
	// a new workspace. This Preview feature is available if your account is on
	// the E2 version of the platform. Updating a running workspace with
	// workspace storage encryption requires that the workspace is on the E2
	// version of the platform. If you have an older workspace, it might not be
	// on the E2 version of the platform. If you are not sure, contact your
	// Databricks representative.
	EncryptionKeys *deployment.EncryptionKeysAPI

	// Groups simplify identity management, making it easier to assign access to
	// Databricks Account, data, and other securable objects.
	//
	// It is best practice to assign access to workspaces and access-control
	// policies in Unity Catalog to groups, instead of to users individually.
	// All Databricks Account identities can be assigned as members of groups,
	// and members inherit permissions that are assigned to their group.
	Groups *scim.AccountGroupsAPI

	// These APIs manage log delivery configurations for this account. The two
	// supported log types for this API are _billable usage logs_ and _audit
	// logs_. This feature is in Public Preview. This feature works with all
	// account ID types.
	//
	// Log delivery works with all account types. However, if your account is on
	// the E2 version of the platform or on a select custom plan that allows
	// multiple workspaces per account, you can optionally configure different
	// storage destinations for each workspace. Log delivery status is also
	// provided to know the latest status of log delivery attempts. The
	// high-level flow of billable usage delivery:
	//
	// 1. **Create storage**: In AWS, [create a new AWS S3 bucket] with a
	// specific bucket policy. Using Databricks APIs, call the Account API to
	// create a [storage configuration object](#operation/create-storage-config)
	// that uses the bucket name. 2. **Create credentials**: In AWS, create the
	// appropriate AWS IAM role. For full details, including the required IAM
	// role policies and trust relationship, see [Billable usage log delivery].
	// Using Databricks APIs, call the Account API to create a [credential
	// configuration object](#operation/create-credential-config) that uses the
	// IAM role's ARN. 3. **Create log delivery configuration**: Using
	// Databricks APIs, call the Account API to [create a log delivery
	// configuration](#operation/create-log-delivery-config) that uses the
	// credential and storage configuration objects from previous steps. You can
	// specify if the logs should include all events of that log type in your
	// account (_Account level_ delivery) or only events for a specific set of
	// workspaces (_workspace level_ delivery). Account level log delivery
	// applies to all current and future workspaces plus account level logs,
	// while workspace level log delivery solely delivers logs related to the
	// specified workspaces. You can create multiple types of delivery
	// configurations per account.
	//
	// For billable usage delivery: * For more information about billable usage
	// logs, see [Billable usage log delivery]. For the CSV schema, see the
	// [Usage page]. * The delivery location is
	// `<bucket-name>/<prefix>/billable-usage/csv/`, where `<prefix>` is the
	// name of the optional delivery path prefix you set up during log delivery
	// configuration. Files are named
	// `workspaceId=<workspace-id>-usageMonth=<month>.csv`. * All billable usage
	// logs apply to specific workspaces (_workspace level_ logs). You can
	// aggregate usage for your entire account by creating an _account level_
	// delivery configuration that delivers logs for all current and future
	// workspaces in your account. * The files are delivered daily by
	// overwriting the month's CSV file for each workspace.
	//
	// For audit log delivery: * For more information about audit log
	// delivery, see [Audit log delivery], which includes information about the
	// used JSON schema. * The delivery location is
	// `<bucket-name>/<delivery-path-prefix>/workspaceId=<workspaceId>/date=<yyyy-mm-dd>/auditlogs_<internal-id>.json`.
	// Files may get overwritten with the same content multiple times to achieve
	// exactly-once delivery. * If the audit log delivery configuration included
	// specific workspace IDs, only _workspace-level_ audit logs for those
	// workspaces are delivered. If the log delivery configuration applies to
	// the entire account (_account level_ delivery configuration), the audit
	// log delivery includes workspace-level audit logs for all workspaces in
	// the account as well as account-level audit logs. See [Audit log delivery]
	// for details. * Auditable events are typically available in logs within 15
	// minutes.
	//
	// [Audit log delivery]: https://docs.databricks.com/administration-guide/account-settings/audit-logs.html
	// [Billable usage log delivery]: https://docs.databricks.com/administration-guide/account-settings/billable-usage-delivery.html
	// [Usage page]: https://docs.databricks.com/administration-guide/account-settings/usage.html
	// [create a new AWS S3 bucket]: https://docs.databricks.com/administration-guide/account-api/aws-storage.html
	LogDelivery *billing.LogDeliveryAPI

	// These APIs manage network configurations for customer-managed VPCs
	// (optional). A network configuration encapsulates the IDs for AWS VPCs,
	// subnets, and security groups. Its ID is used when creating a new
	// workspace if you use customer-managed VPCs.
	Networks *deployment.NetworksAPI

	// These APIs manage private access settings for this account. A private
	// access settings object specifies how your workspace is accessed using AWS
	// PrivateLink. Each workspace that has any PrivateLink connections must
	// include the ID for a private access settings object in its workspace
	// configuration object. Your account must be enabled for PrivateLink to use
	// these APIs. Before configuring PrivateLink, it is important to read the
	// [Databricks article about PrivateLink].
	//
	// [Databricks article about PrivateLink]: https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html
	PrivateAccess *deployment.PrivateAccessAPI

	// Identities for use with jobs, automated tools, and systems such as
	// scripts, apps, and CI/CD platforms. Databricks recommends creating
	// service principals to run production jobs or modify production data. If
	// all processes that act on production data run with service principals,
	// interactive users do not need any write, delete, or modify privileges in
	// production. This eliminates the risk of a user overwriting production
	// data by accident.
	ServicePrincipals *scim.AccountServicePrincipalsAPI

	// These APIs manage storage configurations for this workspace. A root
	// storage S3 bucket in your account is required to store objects like
	// cluster logs, notebook revisions, and job results. You can also use the
	// root storage S3 bucket for storage of non-production DBFS data. A storage
	// configuration encapsulates this bucket information, and its ID is used
	// when creating a new workspace.
	Storage *deployment.StorageAPI

	// User identities recognized by Databricks and represented by email
	// addresses.
	//
	// Databricks recommends using SCIM provisioning to sync users and groups
	// automatically from your identity provider to your Databricks Account.
	// SCIM streamlines onboarding a new employee or team by using your identity
	// provider to create users and groups in Databricks Account and give them
	// the proper level of access. When a user leaves your organization or no
	// longer needs access to Databricks Account, admins can terminate the user
	// in your identity provider and that user’s account will also be removed
	// from Databricks Account. This ensures a consistent offboarding process
	// and prevents unauthorized users from accessing sensitive data.
	Users *scim.AccountUsersAPI

	// These APIs manage VPC endpoint configurations for this account. This
	// object registers an AWS VPC endpoint in your Databricks account so your
	// workspace can use it with AWS PrivateLink. Your VPC endpoint connects to
	// one of two VPC endpoint services -- one for workspace (both for front-end
	// connection and for back-end connection to REST APIs) and one for the
	// back-end secure cluster connectivity relay from the data plane. Your
	// account must be enabled for PrivateLink to use these APIs. Before
	// configuring PrivateLink, it is important to read the [Databricks article
	// about PrivateLink].
	//
	// [Databricks article about PrivateLink]: https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html
	VpcEndpoints *deployment.VpcEndpointsAPI

	// Databricks Workspace Assignment REST API
	WorkspaceAssignment *permissions.WorkspaceAssignmentAPI

	// These APIs manage workspaces for this account. A Databricks workspace is
	// an environment for accessing all of your Databricks assets. The workspace
	// organizes objects (notebooks, libraries, and experiments) into folders,
	// and provides access to data and computational resources such as clusters
	// and jobs.
	//
	// These endpoints are available if your account is on the E2 version of the
	// platform or on a select custom plan that allows multiple workspaces per
	// account.
	Workspaces *deployment.WorkspacesAPI
}

func NewAccountClient

func NewAccountClient(c ...*Config) (*AccountClient, error)

NewAccountClient creates a new Databricks SDK client for Accounts, or returns an error if the configuration is wrong.
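
For example, a minimal sketch of constructing an account-level client for AWS with hypothetical placeholder values (see the Authentication section above for the corresponding environment variables and .databrickscfg fields):

a, err := databricks.NewAccountClient(&databricks.Config{
	Host:      "https://accounts.cloud.databricks.com/",
	AccountID: "<databricks-account-id>", // hypothetical placeholder
	Username:  "<username>",              // hypothetical placeholder
	Password:  "<password>",              // hypothetical placeholder
})
if err != nil {
	panic(err)
}
// Now call the account-level APIs, such as a.Workspaces, as desired...
_ = a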

type Config

type Config config.Config

type WorkspaceClient

type WorkspaceClient struct {
	Config *config.Config

	// The alerts API can be used to perform CRUD operations on alerts. An alert
	// is a Databricks SQL object that periodically runs a query, evaluates a
	// condition of its result, and notifies one or more users and/or alert
	// destinations if the condition was met.
	Alerts *sql.AlertsAPI

	// A catalog is the first layer of Unity Catalog’s three-level namespace.
	// It’s used to organize your data assets. Users can see all catalogs on
	// which they have been assigned the USE_CATALOG data permission.
	//
	// In Unity Catalog, admins and data stewards manage users and their access
	// to data centrally across all of the workspaces in a Databricks account.
	// Users in different workspaces can share access to the same data,
	// depending on privileges granted centrally in Unity Catalog.
	Catalogs *unitycatalog.CatalogsAPI

	// Cluster policy limits the ability to configure clusters based on a set of
	// rules. The policy rules limit the attributes or attribute values
	// available for cluster creation. Cluster policies have ACLs that limit
	// their use to specific users and groups.
	//
	// Cluster policies let you limit users to create clusters with prescribed
	// settings, simplify the user interface and enable more users to create
	// their own clusters (by fixing and hiding some values), control cost by
	// limiting per cluster maximum cost (by setting limits on attributes whose
	// values contribute to hourly price).
	//
	// Cluster policy permissions limit which policies a user can select in the
	// Policy drop-down when the user creates a cluster: - A user who has
	// cluster create permission can select the Unrestricted policy and create
	// fully-configurable clusters. - A user who has both cluster create
	// permission and access to cluster policies can select the Unrestricted
	// policy and policies they have access to. - A user that has access to only
	// cluster policies, can select the policies they have access to.
	//
	// If no policies have been created in the workspace, the Policy drop-down
	// does not display.
	//
	// Only admin users can create, edit, and delete policies. Admin users also
	// have access to all policies.
	ClusterPolicies *clusterpolicies.ClusterPoliciesAPI

	// The Clusters API allows you to create, start, edit, list, terminate, and
	// delete clusters.
	//
	// Databricks maps cluster node instance types to compute units known as
	// DBUs. See the instance type pricing page for a list of the supported
	// instance types and their corresponding DBUs.
	//
	// A Databricks cluster is a set of computation resources and configurations
	// on which you run data engineering, data science, and data analytics
	// workloads, such as production ETL pipelines, streaming analytics, ad-hoc
	// analytics, and machine learning.
	//
	// You run these workloads as a set of commands in a notebook or as an
	// automated job. Databricks makes a distinction between all-purpose
	// clusters and job clusters. You use all-purpose clusters to analyze data
	// collaboratively using interactive notebooks. You use job clusters to run
	// fast and robust automated jobs.
	//
	// You can create an all-purpose cluster using the UI, CLI, or REST API. You
	// can manually terminate and restart an all-purpose cluster. Multiple users
	// can share such clusters to do collaborative interactive analysis.
	//
	// IMPORTANT: Databricks retains cluster configuration information for up to
	// 200 all-purpose clusters terminated in the last 30 days and up to 30 job
	// clusters recently terminated by the job scheduler. To keep an all-purpose
	// cluster configuration even after it has been terminated for more than 30
	// days, an administrator can pin a cluster to the cluster list.
	Clusters *clusters.ClustersAPI

	// This API allows executing commands on running clusters.
	CommandExecutor commands.CommandExecutor

	// This API allows retrieving information about currently authenticated user
	// or service principal.
	CurrentUser *scim.CurrentUserAPI

	// In general, there is little need to modify dashboards using the API.
	// However, it can be useful to use dashboard objects to look-up a
	// collection of related query IDs. The API can also be used to duplicate
	// multiple dashboards at once since you can get a dashboard definition with
	// a GET request and then POST it to create a new one.
	Dashboards *sql.DashboardsAPI

	// This API is provided to assist you in making new query objects. When
	// creating a query object, you may optionally specify a `data_source_id`
	// for the SQL warehouse against which it will run. If you don't already
	// know the `data_source_id` for your desired SQL warehouse, this API will
	// help you find it.
	//
	// This API does not support searches. It returns the full list of SQL
	// warehouses in your workspace. We advise you to use any text editor, REST
	// client, or `grep` to search the response from this API for the name of
	// your SQL warehouse as it appears in Databricks SQL.
	DataSources *sql.DataSourcesAPI

	// DBFS API makes it simple to interact with various data sources without
	// having to include a user's credentials every time to read a file.
	Dbfs *dbfs.DbfsAPI

	// The SQL Permissions API is similar to the endpoints of the
	// :method:permissions/setobjectpermissions. However, this exposes only one
	// endpoint, which gets the Access Control List for a given object. You
	// cannot modify any permissions using this API.
	//
	// There are three levels of permission:
	//
	// - `CAN_VIEW`: Allows read-only access
	//
	// - `CAN_RUN`: Allows read access and run access (superset of `CAN_VIEW`)
	//
	// - `CAN_MANAGE`: Allows all actions: read, run, edit, delete, modify
	// permissions (superset of `CAN_RUN`)
	DbsqlPermissions *sql.DbsqlPermissionsAPI

	Experiments *mlflow.ExperimentsAPI

	// An external location is an object that combines a cloud storage path with
	// a storage credential that authorizes access to the cloud storage path.
	// Each external location is subject to Unity Catalog access-control
	// policies that control which users and groups can access the credential.
	// If a user does not have access to an external location in Unity Catalog,
	// the request fails and Unity Catalog does not attempt to authenticate to
	// your cloud tenant on the user’s behalf.
	//
	// Databricks recommends using external locations rather than using storage
	// credentials directly.
	//
	// To create external locations, you must be a metastore admin or a user
	// with the CREATE_EXTERNAL_LOCATION privilege.
	ExternalLocations *unitycatalog.ExternalLocationsAPI

	// Registers personal access token for Databricks to do operations on behalf
	// of the user.
	//
	// See [more info].
	//
	// [more info]: https://docs.databricks.com/repos/get-access-tokens-from-git-provider.html
	GitCredentials *gitcredentials.GitCredentialsAPI

	// The Global Init Scripts API enables Workspace administrators to configure
	// global initialization scripts for their workspace. These scripts run on
	// every node in every cluster in the workspace.
	//
	// **Important:** Existing clusters must be restarted to pick up any changes
	// made to global init scripts. Global init scripts are run in order. If the
	// init script returns with a bad exit code, the Apache Spark container
	// fails to launch and init scripts with later position are skipped. If
	// enough containers fail, the entire cluster fails with a
	// `GLOBAL_INIT_SCRIPT_FAILURE` error code.
	GlobalInitScripts *globalinitscripts.GlobalInitScriptsAPI

	// In Unity Catalog, data is secure by default. Initially, users have no
	// access to data in a metastore. Access can be granted by either a
	// metastore admin, the owner of an object, or the owner of the catalog or
	// schema that contains the object. Securable objects in Unity Catalog are
	// hierarchical and privileges are inherited downward.
	//
	// Initially, users have no access to data in a metastore. Access can be
	// granted by either a metastore admin, the owner of an object, or the owner
	// of the catalog or schema that contains the object.
	//
	// Securable objects in Unity Catalog are hierarchical and privileges are
	// inherited downward. This means that granting a privilege on the catalog
	// automatically grants the privilege to all current and future objects
	// within the catalog. Similarly, privileges granted on a schema are
	// inherited by all current and future objects within that schema.
	Grants *unitycatalog.GrantsAPI

	// Groups simplify identity management, making it easier to assign access to
	// Databricks Workspace, data, and other securable objects.
	//
	// It is best practice to assign access to workspaces and access-control
	// policies in Unity Catalog to groups, instead of to users individually.
	// All Databricks Workspace identities can be assigned as members of groups,
	// and members inherit permissions that are assigned to their group.
	Groups *scim.GroupsAPI

	// The Instance Pools API is used to create, edit, delete, and list instance
	// pools, which use ready-to-use cloud instances to reduce cluster start and
	// auto-scaling times.
	//
	// Databricks pools reduce cluster start and auto-scaling times by
	// maintaining a set of idle, ready-to-use instances. When a cluster is
	// attached to a pool, cluster nodes are created using the pool’s idle
	// instances. If the pool has no idle instances, the pool expands by
	// allocating a new instance from the instance provider in order to
	// accommodate the cluster’s request. When a cluster releases an instance,
	// it returns to the pool and is free for another cluster to use. Only
	// clusters attached to a pool can use that pool’s idle instances.
	//
	// You can specify a different pool for the driver node and worker nodes, or
	// use the same pool for both.
	//
	// Databricks does not charge DBUs while instances are idle in the pool.
	// Instance provider billing does apply. See pricing.
	InstancePools *instancepools.InstancePoolsAPI

	// The Instance Profiles API allows admins to add, list, and remove instance
	// profiles that users can launch clusters with. Regular users can list the
	// instance profiles available to them. See [Secure access to S3 buckets]
	// using instance profiles for more information.
	//
	// [Secure access to S3 buckets]: https://docs.databricks.com/administration-guide/cloud-configurations/aws/instance-profiles.html
	InstanceProfiles *clusters.InstanceProfilesAPI

	// The IP Access List API enables Databricks admins to configure IP access
	// lists for a workspace.
	//
	// IP access lists affect web application access and REST API access to this
	// workspace only. If the feature is disabled for a workspace, all access is
	// allowed for this workspace. There is support for allow lists (inclusion)
	// and block lists (exclusion).
	//
	// When a connection is attempted: 1. **First, all block lists are
	// checked.** If the connection IP address matches any block list, the
	// connection is rejected. 2. **If the connection was not rejected by block
	// lists**, the IP address is compared with the allow lists.
	//
	// If there is at least one allow list for the workspace, the connection is
	// allowed only if the IP address matches an allow list. If there are no
	// allow lists for the workspace, all IP addresses are allowed.
	//
	// For all allow lists and block lists combined, the workspace supports a
	// maximum of 1000 IP/CIDR values, where one CIDR counts as a single value.
	//
	// After changes to the IP access list feature, it can take a few minutes
	// for changes to take effect.
	IpAccessLists *ipaccesslists.IpAccessListsAPI

	// The Jobs API allows you to create, edit, and delete jobs.
	//
	// You can use a Databricks job to run a data processing or data analysis
	// task in a Databricks cluster with scalable resources. Your job can
	// consist of a single task or can be a large, multi-task workflow with
	// complex dependencies. Databricks manages the task orchestration, cluster
	// management, monitoring, and error reporting for all of your jobs. You can
	// run your jobs immediately or periodically through an easy-to-use
	// scheduling system. You can implement job tasks using notebooks, JARS,
	// Delta Live Tables pipelines, or Python, Scala, Spark submit, and Java
	// applications.
	//
	// You should never hard code secrets or store them in plain text. Use the
	// :service:secrets to manage secrets in the [Databricks CLI]. Use the
	// [Secrets utility] to reference secrets in notebooks and jobs.
	//
	// [Databricks CLI]: https://docs.databricks.com/dev-tools/cli/index.html
	// [Secrets utility]: https://docs.databricks.com/dev-tools/databricks-utils.html#dbutils-secrets
	Jobs *jobs.JobsAPI

	// The Libraries API allows you to install and uninstall libraries and get
	// the status of libraries on a cluster.
	//
	// To make third-party or custom code available to notebooks and jobs
	// running on your clusters, you can install a library. Libraries can be
	// written in Python, Java, Scala, and R. You can upload Java, Scala, and
	// Python libraries and point to external packages in PyPI, Maven, and CRAN
	// repositories.
	//
	// Cluster libraries can be used by all notebooks running on a cluster. You
	// can install a cluster library directly from a public repository such as
	// PyPI or Maven, using a previously installed workspace library, or using
	// an init script.
	//
	// When you install a library on a cluster, a notebook already attached to
	// that cluster will not immediately see the new library. You must first
	// detach and then reattach the notebook to the cluster.
	//
	// When you uninstall a library from a cluster, the library is removed only
	// when you restart the cluster. Until you restart the cluster, the status
	// of the uninstalled library appears as Uninstall pending restart.
	Libraries *libraries.LibrariesAPI

	MLflowArtifacts *mlflow.MLflowArtifactsAPI

	// These endpoints are modified versions of the MLflow API that accept
	// additional input parameters or return additional information.
	MLflowDatabricks *mlflow.MLflowDatabricksAPI

	MLflowMetrics *mlflow.MLflowMetricsAPI

	MLflowRuns *mlflow.MLflowRunsAPI

	// A metastore is the top-level container of objects in Unity Catalog. It
	// stores data assets (tables and views) and the permissions that govern
	// access to them. Databricks account admins can create metastores and
	// assign them to Databricks workspaces to control which workloads use each
	// metastore. For a workspace to use Unity Catalog, it must have a Unity
	// Catalog metastore attached.
	//
	// Each metastore is configured with a root storage location in a cloud
	// storage account. This storage location is used for metadata and managed
	// tables data.
	//
	// NOTE: This metastore is distinct from the metastore included in
	// Databricks workspaces created before Unity Catalog was released. If your
	// workspace includes a legacy Hive metastore, the data in that metastore is
	// available in Unity Catalog in a catalog named hive_metastore.
	Metastores *unitycatalog.MetastoresAPI

	ModelVersionComments *mlflow.ModelVersionCommentsAPI

	ModelVersions *mlflow.ModelVersionsAPI

	// The Permissions API is used to create read, write, edit, update, and
	// manage access for various users on different objects and endpoints.
	Permissions *permissions.PermissionsAPI

	// The Delta Live Tables API allows you to create, edit, delete, start, and
	// view details about pipelines.
	//
	// Delta Live Tables is a framework for building reliable, maintainable, and
	// testable data processing pipelines. You define the transformations to
	// perform on your data, and Delta Live Tables manages task orchestration,
	// cluster management, monitoring, data quality, and error handling.
	//
	// Instead of defining your data pipelines using a series of separate Apache
	// Spark tasks, Delta Live Tables manages how your data is transformed based
	// on a target schema you define for each processing step. You can also
	// enforce data quality with Delta Live Tables expectations. Expectations
	// allow you to define expected data quality and specify how to handle
	// records that fail those expectations.
	Pipelines *pipelines.PipelinesAPI

	// View available policy families. A policy family contains a policy
	// definition providing best practices for configuring clusters for a
	// particular use case.
	//
	// Databricks manages and provides policy families for several common
	// cluster use cases. You cannot create, edit, or delete policy families.
	//
	// Policy families cannot be used directly to create clusters. Instead, you
	// create cluster policies using a policy family. Cluster policies created
	// using a policy family inherit the policy family's policy definition.
	PolicyFamilies *clusterpolicies.PolicyFamiliesAPI

	// Databricks Delta Sharing: Providers REST API
	Providers *unitycatalog.ProvidersAPI

	// These endpoints are used for CRUD operations on query definitions. Query
	// definitions include the target SQL warehouse, query text, name,
	// description, tags, execution schedule, parameters, and visualizations.
	Queries *sql.QueriesAPI

	// Access the history of queries through SQL warehouses.
	QueryHistory *sql.QueryHistoryAPI

	// Databricks Delta Sharing: Recipient Activation REST API
	RecipientActivation *unitycatalog.RecipientActivationAPI

	// Databricks Delta Sharing: Recipients REST API
	Recipients *unitycatalog.RecipientsAPI

	RegisteredModels *mlflow.RegisteredModelsAPI

	RegistryWebhooks *mlflow.RegistryWebhooksAPI

	// The Repos API allows users to manage their git repos. Users can use the
	// API to access all repos that they have manage permissions on.
	//
	// Databricks Repos is a visual Git client in Databricks. It supports common
	// Git operations such as cloning a repository, committing and pushing,
	// pulling, branch management, and visual comparison of diffs when
	// committing.
	//
	// Within Repos you can develop code in notebooks or other files and follow
	// data science and engineering code development best practices using Git
	// for version control, collaboration, and CI/CD.
	Repos *repos.ReposAPI

	// A schema (also called a database) is the second layer of Unity
	// Catalog’s three-level namespace. A schema organizes tables and views.
	// To access (or list) a table or view in a schema, users must have the
	// USE_SCHEMA data permission on the schema and its parent catalog, and they
	// must have the SELECT permission on the table or view.
	Schemas *unitycatalog.SchemasAPI

	// The Secrets API allows you to manage secrets, secret scopes, and access
	// permissions.
	//
	// Sometimes accessing data requires that you authenticate to external data
	// sources through JDBC. Instead of directly entering your credentials into
	// a notebook, use Databricks secrets to store your credentials and
	// reference them in notebooks and jobs.
	//
	// Administrators, secret creators, and users granted permission can read
	// Databricks secrets. While Databricks makes an effort to redact secret
	// values that might be displayed in notebooks, it is not possible to
	// prevent such users from reading secrets.
	Secrets *secrets.SecretsAPI

	// Identities for use with jobs, automated tools, and systems such as
	// scripts, apps, and CI/CD platforms. Databricks recommends creating
	// service principals to run production jobs or modify production data. If
	// all processes that act on production data run with service principals,
	// interactive users do not need any write, delete, or modify privileges in
	// production. This eliminates the risk of a user overwriting production
	// data by accident.
	ServicePrincipals *scim.ServicePrincipalsAPI

	// Databricks Delta Sharing: Shares REST API
	Shares *unitycatalog.SharesAPI

	// A storage credential represents an authentication and authorization
	// mechanism for accessing data stored on your cloud tenant, using an IAM
	// role. Each storage credential is subject to Unity Catalog access-control
	// policies that control which users and groups can access the credential.
	// If a user does not have access to a storage credential in Unity Catalog,
	// the request fails and Unity Catalog does not attempt to authenticate to
	// your cloud tenant on the user’s behalf.
	//
	// Databricks recommends using external locations rather than using storage
	// credentials directly.
	//
	// To create storage credentials, you must be a Databricks account admin.
	// The account admin who creates the storage credential can delegate
	// ownership to another user or group to manage permissions on it.
	StorageCredentials *unitycatalog.StorageCredentialsAPI

	// A table resides in the third layer of Unity Catalog’s three-level
	// namespace. It contains rows of data. To create a table, users must have
	// CREATE_TABLE and USE_SCHEMA permissions on the schema, and they must have
	// the USE_CATALOG permission on its parent catalog. To query a table, users
	// must have the SELECT permission on the table, and they must have the
	// USE_CATALOG permission on its parent catalog and the USE_SCHEMA
	// permission on its parent schema.
	//
	// A table can be managed or external.
	Tables *unitycatalog.TablesAPI

	// Enables administrators to get all tokens and delete tokens for other
	// users. Admins can either get every token, get a specific token by ID, or
	// get all tokens for a particular user.
	TokenManagement *tokenmanagement.TokenManagementAPI

	// The Token API allows you to create, list, and revoke tokens that can be
	// used to authenticate and access Databricks REST APIs.
	Tokens *tokens.TokensAPI

	// Manage MLflow model version stage transition requests.
	TransitionRequests *mlflow.TransitionRequestsAPI

	// User identities recognized by Databricks and represented by email
	// addresses.
	//
	// Databricks recommends using SCIM provisioning to sync users and groups
	// automatically from your identity provider to your Databricks Workspace.
	// SCIM streamlines onboarding a new employee or team by using your identity
	// provider to create users and groups in Databricks Workspace and give them
	// the proper level of access. When a user leaves your organization or no
	// longer needs access to Databricks Workspace, admins can terminate the
	// user in your identity provider and that user’s account will also be
	// removed from Databricks Workspace. This ensures a consistent offboarding
	// process and prevents unauthorized users from accessing sensitive data.
	Users *scim.UsersAPI

	// A SQL warehouse is a compute resource that lets you run SQL commands on
	// data objects within Databricks SQL. Compute resources are infrastructure
	// resources that provide processing capabilities in the cloud.
	Warehouses *sql.WarehousesAPI

	// The Workspace API allows you to list, import, export, and delete
	// notebooks and folders.
	//
	// A notebook is a web-based interface to a document that contains runnable
	// code, visualizations, and explanatory text.
	Workspace *workspace.WorkspaceAPI

	// This API allows updating known workspace settings for advanced users.
	WorkspaceConf *workspaceconf.WorkspaceConfAPI
}
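
To illustrate how the service fields above are used, the following sketch stores a JDBC password with the Secrets field, so notebooks and jobs can reference the secret instead of a plaintext credential. The CreateScope and PutSecret method and request-struct names are assumptions modeled on the Secrets REST operations; check the secrets package for the exact signatures.

	package main

	import (
		"context"
		"os"

		"github.com/databricks/databricks-sdk-go"
		"github.com/databricks/databricks-sdk-go/service/secrets"
	)

	func main() {
		ctx := context.Background()
		w := databricks.Must(databricks.NewWorkspaceClient())

		// Create a scope to hold the credential. CreateScope is an assumed
		// method/request name based on the Secrets REST API.
		err := w.Secrets.CreateScope(ctx, secrets.CreateScope{Scope: "jdbc"})
		if err != nil {
			panic(err)
		}

		// Store the password so notebooks and jobs reference it by scope and
		// key instead of embedding the value. PutSecret is likewise assumed.
		err = w.Secrets.PutSecret(ctx, secrets.PutSecret{
			Scope:       "jdbc",
			Key:         "password",
			StringValue: os.Getenv("JDBC_PASSWORD"), // hypothetical environment variable
		})
		if err != nil {
			panic(err)
		}
	}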
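
A second sketch walks Unity Catalog's three-level namespace with the Schemas and Tables fields. The catalog and schema names are hypothetical, and the ListSchemasRequest and ListTablesRequest types, the ListAll helpers, and the Name fields are assumptions that follow the SDK's generated naming conventions.

	package main

	import (
		"context"

		"github.com/databricks/databricks-sdk-go"
		"github.com/databricks/databricks-sdk-go/service/unitycatalog"
	)

	func main() {
		ctx := context.Background()
		w := databricks.Must(databricks.NewWorkspaceClient())

		// List the schemas of a catalog ("main" is a hypothetical name).
		schemas, err := w.Schemas.ListAll(ctx, unitycatalog.ListSchemasRequest{
			CatalogName: "main",
		})
		if err != nil {
			panic(err)
		}
		for _, s := range schemas {
			println(s.Name)
		}

		// List the tables of one schema in that catalog.
		tables, err := w.Tables.ListAll(ctx, unitycatalog.ListTablesRequest{
			CatalogName: "main",
			SchemaName:  "default",
		})
		if err != nil {
			panic(err)
		}
		for _, t := range tables {
			println(t.Name)
		}
	}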
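
Finally, a sketch of the Tokens field creating a short-lived personal access token. The CreateTokenRequest type, its Comment and LifetimeSeconds fields, and the response's TokenInfo field mirror the REST payload and are assumptions; check the tokens package for the exact types.

	package main

	import (
		"context"

		"github.com/databricks/databricks-sdk-go"
		"github.com/databricks/databricks-sdk-go/service/tokens"
	)

	func main() {
		ctx := context.Background()
		w := databricks.Must(databricks.NewWorkspaceClient())

		// Request a token that expires after one hour.
		t, err := w.Tokens.Create(ctx, tokens.CreateTokenRequest{
			Comment:         "temporary token for CI", // hypothetical comment
			LifetimeSeconds: 3600,
		})
		if err != nil {
			panic(err)
		}
		// The token value is only returned at creation time; here we just
		// print the identifier of the created token.
		println(t.TokenInfo.TokenId)
	}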

func NewWorkspaceClient

func NewWorkspaceClient(c ...*Config) (*WorkspaceClient, error)

NewWorkspaceClient creates a new Databricks SDK client for Workspaces, or returns an error if the configuration is wrong.
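
Because the parameter is variadic, you can also pass an explicit *Config instead of relying on the default authentication flow. A minimal sketch, assuming the databricks and os packages are imported and that Config exposes the unified Host and Token attributes (the workspace URL below is hypothetical):

	w, err := databricks.NewWorkspaceClient(&databricks.Config{
		Host:  "https://my-workspace.cloud.databricks.com", // hypothetical workspace URL
		Token: os.Getenv("DATABRICKS_TOKEN"),
	})
	if err != nil {
		panic(err)
	}
	// w.Clusters, w.Secrets, etc. are now authenticated against that workspace.
	_ = w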

Directories

Path Synopsis
examples
internal
env
code
Package holds higher-level abstractions on top of OpenAPI that are used to generate code via text/template for Databricks SDK in different languages.
gen
Usage: openapi-codegen
service
Databricks SDK for Go APIs
billing
These APIs allow you to manage Billable Usage, Budgets, Log Delivery, etc.
clusterpolicies
These APIs allow you to manage Cluster Policies, Policy Families, etc.
clusters
These APIs allow you to manage Clusters, Instance Profiles, etc.
commands
This API allows execution of Python, Scala, SQL, or R commands on running Databricks Clusters.
dbfs
The DBFS API makes it simple to interact with various data sources without having to include a user's credentials every time a file is read.
deployment
These APIs allow you to manage Credentials, Encryption Keys, Networks, Private Access, Storage, Vpc Endpoints, Workspaces, etc.
gitcredentials
Registers personal access tokens for Databricks to perform operations on behalf of the user.
globalinitscripts
The Global Init Scripts API enables Workspace administrators to configure global initialization scripts for their workspace.
instancepools
The Instance Pools API is used to create, edit, delete, and list instance pools; pools of ready-to-use cloud instances reduce cluster start and auto-scaling times.
ipaccesslists
The IP Access List API enables Databricks admins to configure IP access lists for a workspace.
jobs
The Jobs API allows you to create, edit, and delete jobs.
libraries
The Libraries API allows you to install and uninstall libraries and get the status of libraries on a cluster.
mlflow
These APIs allow you to manage Experiments, MLflow Artifacts, MLflow Databricks, MLflow Metrics, MLflow Runs, Model Version Comments, Model Versions, Registered Models, Registry Webhooks, Transition Requests, etc.
permissions
These APIs allow you to manage Permissions, Workspace Assignment, etc.
pipelines
The Delta Live Tables API allows you to create, edit, delete, start, and view details about pipelines.
repos
The Repos API allows users to manage their git repos.
scim
These APIs allow you to manage Account Groups, Account Service Principals, Account Users, Current User, Groups, Service Principals, Users, etc.
secrets
The Secrets API allows you to manage secrets, secret scopes, and access permissions.
sql
These APIs allow you to manage Alerts, Dashboards, Data Sources, Dbsql Permissions, Queries, Query History, Warehouses, etc.
tokenmanagement
Enables administrators to get all tokens and delete tokens for other users.
tokens
The Token API allows you to create, list, and revoke tokens that can be used to authenticate and access Databricks REST APIs.
unitycatalog
These APIs allow you to manage Catalogs, External Locations, Grants, Metastores, Providers, Recipient Activation, Recipients, Schemas, Shares, Storage Credentials, Tables, etc.
workspace
The Workspace API allows you to list, import, export, and delete notebooks and folders.
workspaceconf
This API allows updating known workspace settings for advanced users.
