catalog

package v0.41.0

Published: May 21, 2024 License: Apache-2.0 Imports: 7 Imported by: 35

Documentation

Overview

These APIs allow you to manage Account Metastore Assignments, Account Metastores, Account Storage Credentials, Artifact Allowlists, Catalogs, Connections, External Locations, Functions, Grants, Metastores, Model Versions, Online Tables, Quality Monitors, Registered Models, Schemas, Storage Credentials, System Schemas, Table Constraints, Tables, Volumes, Workspace Bindings, etc.

Index

Examples

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type AccountMetastoreAssignmentsAPI

type AccountMetastoreAssignmentsAPI struct {
	// contains filtered or unexported fields
}

These APIs manage metastore assignments to a workspace.

func NewAccountMetastoreAssignments

func NewAccountMetastoreAssignments(client *client.DatabricksClient) *AccountMetastoreAssignmentsAPI

func (*AccountMetastoreAssignmentsAPI) Create

Assigns a workspace to a metastore.

Creates an assignment to a metastore for a workspace.
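
A minimal sketch of calling Create through an account-level client. The MetastoreAssignments field name on databricks.AccountClient and the ID values are assumptions not shown on this page.

ctx := context.Background()
a, err := databricks.NewAccountClient()
if err != nil {
	panic(err)
}

// MetastoreId and WorkspaceId are hypothetical values; both are sent as path
// parameters, which is why AccountsCreateMetastoreAssignment tags them with
// `json:"-" url:"-"`.
err = a.MetastoreAssignments.Create(ctx, catalog.AccountsCreateMetastoreAssignment{
	MetastoreId: "11111111-2222-3333-4444-555555555555",
	WorkspaceId: 1234567890,
})
if err != nil {
	panic(err)
}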

func (*AccountMetastoreAssignmentsAPI) Delete

Delete a metastore assignment.

Deletes a metastore assignment to a workspace, leaving the workspace with no metastore.

func (*AccountMetastoreAssignmentsAPI) DeleteByWorkspaceIdAndMetastoreId

func (a *AccountMetastoreAssignmentsAPI) DeleteByWorkspaceIdAndMetastoreId(ctx context.Context, workspaceId int64, metastoreId string) error

Delete a metastore assignment.

Deletes a metastore assignment to a workspace, leaving the workspace with no metastore.

func (*AccountMetastoreAssignmentsAPI) Get

Gets the metastore assignment for a workspace.

Gets the metastore assignment, if any, for the workspace specified by ID. If the workspace is assigned a metastore, the mapping will be returned. If no metastore is assigned to the workspace, the assignment will not be found and a 404 is returned.

func (*AccountMetastoreAssignmentsAPI) GetByWorkspaceId

func (a *AccountMetastoreAssignmentsAPI) GetByWorkspaceId(ctx context.Context, workspaceId int64) (*AccountsMetastoreAssignment, error)

Gets the metastore assignment for a workspace.

Gets the metastore assignment, if any, for the workspace specified by ID. If the workspace is assigned a metastore, the mapping will be returned. If no metastore is assigned to the workspace, the assignment will not be found and a 404 is returned.
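
A short sketch of looking up the assignment for a workspace and treating the 404 case separately. The apierr.IsMissing helper and the MetastoreAssignments field on the account client are assumptions not documented on this page.

ctx := context.Background()
a, err := databricks.NewAccountClient()
if err != nil {
	panic(err)
}

assignment, err := a.MetastoreAssignments.GetByWorkspaceId(ctx, 1234567890)
if apierr.IsMissing(err) {
	// No metastore is assigned to this workspace: the API answers with a 404.
	logger.Infof(ctx, "workspace has no metastore assignment")
} else if err != nil {
	panic(err)
} else {
	logger.Infof(ctx, "found %v", assignment)
}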

func (*AccountMetastoreAssignmentsAPI) Impl

Impl returns the low-level AccountMetastoreAssignments API implementation. Deprecated: use MockAccountMetastoreAssignmentsInterface instead.

func (*AccountMetastoreAssignmentsAPI) List

Get all workspaces assigned to a metastore.

Gets a list of all Databricks workspace IDs that have been assigned to the given metastore.

This method is generated by Databricks SDK Code Generator.

func (*AccountMetastoreAssignmentsAPI) ListAll added in v0.22.0

Get all workspaces assigned to a metastore.

Gets a list of all Databricks workspace IDs that have been assigned to the given metastore.

This method is generated by Databricks SDK Code Generator.

func (*AccountMetastoreAssignmentsAPI) ListByMetastoreId

Get all workspaces assigned to a metastore.

Gets a list of all Databricks workspace IDs that have been assigned to the given metastore.

func (*AccountMetastoreAssignmentsAPI) Update

Updates a metastore assignment to a workspace.

Updates an assignment to a metastore for a workspace. Currently, only the default catalog may be updated.
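
A sketch of changing the default catalog for an existing assignment. The DefaultCatalogName field on UpdateMetastoreAssignment and the account-client field name are assumptions, since neither is shown on this page.

ctx := context.Background()
a, err := databricks.NewAccountClient()
if err != nil {
	panic(err)
}

// DefaultCatalogName is assumed to be the only property the API currently
// allows to change.
err = a.MetastoreAssignments.Update(ctx, catalog.AccountsUpdateMetastoreAssignment{
	MetastoreId: "11111111-2222-3333-4444-555555555555",
	WorkspaceId: 1234567890,
	MetastoreAssignment: &catalog.UpdateMetastoreAssignment{
		DefaultCatalogName: "main",
	},
})
if err != nil {
	panic(err)
}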

func (*AccountMetastoreAssignmentsAPI) WithImpl

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockAccountMetastoreAssignmentsInterface instead.

type AccountMetastoreAssignmentsInterface added in v0.29.0

type AccountMetastoreAssignmentsInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockAccountMetastoreAssignmentsInterface instead.
	WithImpl(impl AccountMetastoreAssignmentsService) AccountMetastoreAssignmentsInterface

	// Impl returns low-level AccountMetastoreAssignments API implementation
	// Deprecated: use MockAccountMetastoreAssignmentsInterface instead.
	Impl() AccountMetastoreAssignmentsService

	// Assigns a workspace to a metastore.
	//
	// Creates an assignment to a metastore for a workspace.
	Create(ctx context.Context, request AccountsCreateMetastoreAssignment) error

	// Delete a metastore assignment.
	//
	// Deletes a metastore assignment to a workspace, leaving the workspace with no
	// metastore.
	Delete(ctx context.Context, request DeleteAccountMetastoreAssignmentRequest) error

	// Delete a metastore assignment.
	//
	// Deletes a metastore assignment to a workspace, leaving the workspace with no
	// metastore.
	DeleteByWorkspaceIdAndMetastoreId(ctx context.Context, workspaceId int64, metastoreId string) error

	// Gets the metastore assignment for a workspace.
	//
	// Gets the metastore assignment, if any, for the workspace specified by ID. If
	// the workspace is assigned a metastore, the mapping will be returned. If no
	// metastore is assigned to the workspace, the assignment will not be found and
	// a 404 is returned.
	Get(ctx context.Context, request GetAccountMetastoreAssignmentRequest) (*AccountsMetastoreAssignment, error)

	// Gets the metastore assignment for a workspace.
	//
	// Gets the metastore assignment, if any, for the workspace specified by ID. If
	// the workspace is assigned a metastore, the mapping will be returned. If no
	// metastore is assigned to the workspace, the assignment will not be found and
	// a 404 is returned.
	GetByWorkspaceId(ctx context.Context, workspaceId int64) (*AccountsMetastoreAssignment, error)

	// Get all workspaces assigned to a metastore.
	//
	// Gets a list of all Databricks workspace IDs that have been assigned to the
	// given metastore.
	//
	// This method is generated by Databricks SDK Code Generator.
	List(ctx context.Context, request ListAccountMetastoreAssignmentsRequest) listing.Iterator[int64]

	// Get all workspaces assigned to a metastore.
	//
	// Gets a list of all Databricks workspace IDs that have been assigned to the
	// given metastore.
	//
	// This method is generated by Databricks SDK Code Generator.
	ListAll(ctx context.Context, request ListAccountMetastoreAssignmentsRequest) ([]int64, error)

	// Get all workspaces assigned to a metastore.
	//
	// Gets a list of all Databricks workspace IDs that have been assigned to the
	// given metastore.
	ListByMetastoreId(ctx context.Context, metastoreId string) (*ListAccountMetastoreAssignmentsResponse, error)

	// Updates a metastore assignment to a workspace.
	//
	// Updates an assignment to a metastore for a workspace. Currently, only the
	// default catalog may be updated.
	Update(ctx context.Context, request AccountsUpdateMetastoreAssignment) error
}

type AccountMetastoreAssignmentsService

type AccountMetastoreAssignmentsService interface {

	// Assigns a workspace to a metastore.
	//
	// Creates an assignment to a metastore for a workspace.
	Create(ctx context.Context, request AccountsCreateMetastoreAssignment) error

	// Delete a metastore assignment.
	//
	// Deletes a metastore assignment to a workspace, leaving the workspace with
	// no metastore.
	Delete(ctx context.Context, request DeleteAccountMetastoreAssignmentRequest) error

	// Gets the metastore assignment for a workspace.
	//
	// Gets the metastore assignment, if any, for the workspace specified by ID.
	// If the workspace is assigned a metastore, the mapping will be returned. If
	// no metastore is assigned to the workspace, the assignment will not be
	// found and a 404 is returned.
	Get(ctx context.Context, request GetAccountMetastoreAssignmentRequest) (*AccountsMetastoreAssignment, error)

	// Get all workspaces assigned to a metastore.
	//
	// Gets a list of all Databricks workspace IDs that have been assigned to
	// the given metastore.
	//
	// Use ListAll() to get all WorkspaceId instances
	List(ctx context.Context, request ListAccountMetastoreAssignmentsRequest) (*ListAccountMetastoreAssignmentsResponse, error)

	// Updates a metastore assignment to a workspace.
	//
	// Updates an assignment to a metastore for a workspace. Currently, only the
	// default catalog may be updated.
	Update(ctx context.Context, request AccountsUpdateMetastoreAssignment) error
}

These APIs manage metastore assignments to a workspace.

type AccountMetastoresAPI

type AccountMetastoresAPI struct {
	// contains filtered or unexported fields
}

These APIs manage Unity Catalog metastores for an account. A metastore contains catalogs that can be associated with workspaces

func NewAccountMetastores

func NewAccountMetastores(client *client.DatabricksClient) *AccountMetastoresAPI

func (*AccountMetastoresAPI) Create

Create metastore.

Creates a Unity Catalog metastore.
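
A minimal sketch of creating an account-level metastore. The Metastores field on databricks.AccountClient and the CreateMetastore fields (Name, Region, StorageRoot) are assumptions not documented on this page.

ctx := context.Background()
a, err := databricks.NewAccountClient()
if err != nil {
	panic(err)
}

created, err := a.Metastores.Create(ctx, catalog.AccountsCreateMetastore{
	MetastoreInfo: &catalog.CreateMetastore{
		Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
		Region:      "us-east-1",
		StorageRoot: "s3://my-bucket/metastore",
	},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)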

func (*AccountMetastoresAPI) Delete

Delete a metastore.

Deletes a Unity Catalog metastore for an account, both specified by ID.

func (*AccountMetastoresAPI) DeleteByMetastoreId

func (a *AccountMetastoresAPI) DeleteByMetastoreId(ctx context.Context, metastoreId string) error

Delete a metastore.

Deletes a Unity Catalog metastore for an account, both specified by ID.

func (*AccountMetastoresAPI) Get

Get a metastore.

Gets a Unity Catalog metastore from an account, both specified by ID.

func (*AccountMetastoresAPI) GetByMetastoreId

func (a *AccountMetastoresAPI) GetByMetastoreId(ctx context.Context, metastoreId string) (*AccountsMetastoreInfo, error)

Get a metastore.

Gets a Unity Catalog metastore from an account, both specified by ID.

func (*AccountMetastoresAPI) Impl

Impl returns the low-level AccountMetastores API implementation. Deprecated: use MockAccountMetastoresInterface instead.

func (*AccountMetastoresAPI) List

Get all metastores associated with an account.

Gets all Unity Catalog metastores associated with an account specified by ID.

This method is generated by Databricks SDK Code Generator.

func (*AccountMetastoresAPI) ListAll added in v0.18.0

Get all metastores associated with an account.

Gets all Unity Catalog metastores associated with an account specified by ID.

This method is generated by Databricks SDK Code Generator.

func (*AccountMetastoresAPI) Update

Update a metastore.

Updates an existing Unity Catalog metastore.

func (*AccountMetastoresAPI) WithImpl

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockAccountMetastoresInterface instead.

type AccountMetastoresInterface added in v0.29.0

type AccountMetastoresInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockAccountMetastoresInterface instead.
	WithImpl(impl AccountMetastoresService) AccountMetastoresInterface

	// Impl returns low-level AccountMetastores API implementation
	// Deprecated: use MockAccountMetastoresInterface instead.
	Impl() AccountMetastoresService

	// Create metastore.
	//
	// Creates a Unity Catalog metastore.
	Create(ctx context.Context, request AccountsCreateMetastore) (*AccountsMetastoreInfo, error)

	// Delete a metastore.
	//
	// Deletes a Unity Catalog metastore for an account, both specified by ID.
	Delete(ctx context.Context, request DeleteAccountMetastoreRequest) error

	// Delete a metastore.
	//
	// Deletes a Unity Catalog metastore for an account, both specified by ID.
	DeleteByMetastoreId(ctx context.Context, metastoreId string) error

	// Get a metastore.
	//
	// Gets a Unity Catalog metastore from an account, both specified by ID.
	Get(ctx context.Context, request GetAccountMetastoreRequest) (*AccountsMetastoreInfo, error)

	// Get a metastore.
	//
	// Gets a Unity Catalog metastore from an account, both specified by ID.
	GetByMetastoreId(ctx context.Context, metastoreId string) (*AccountsMetastoreInfo, error)

	// Get all metastores associated with an account.
	//
	// Gets all Unity Catalog metastores associated with an account specified by ID.
	//
	// This method is generated by Databricks SDK Code Generator.
	List(ctx context.Context) listing.Iterator[MetastoreInfo]

	// Get all metastores associated with an account.
	//
	// Gets all Unity Catalog metastores associated with an account specified by ID.
	//
	// This method is generated by Databricks SDK Code Generator.
	ListAll(ctx context.Context) ([]MetastoreInfo, error)

	// Update a metastore.
	//
	// Updates an existing Unity Catalog metastore.
	Update(ctx context.Context, request AccountsUpdateMetastore) (*AccountsMetastoreInfo, error)
}

type AccountMetastoresService

type AccountMetastoresService interface {

	// Create metastore.
	//
	// Creates a Unity Catalog metastore.
	Create(ctx context.Context, request AccountsCreateMetastore) (*AccountsMetastoreInfo, error)

	// Delete a metastore.
	//
	// Deletes a Unity Catalog metastore for an account, both specified by ID.
	Delete(ctx context.Context, request DeleteAccountMetastoreRequest) error

	// Get a metastore.
	//
	// Gets a Unity Catalog metastore from an account, both specified by ID.
	Get(ctx context.Context, request GetAccountMetastoreRequest) (*AccountsMetastoreInfo, error)

	// Get all metastores associated with an account.
	//
	// Gets all Unity Catalog metastores associated with an account specified by
	// ID.
	//
	// Use ListAll() to get all MetastoreInfo instances
	List(ctx context.Context) (*ListMetastoresResponse, error)

	// Update a metastore.
	//
	// Updates an existing Unity Catalog metastore.
	Update(ctx context.Context, request AccountsUpdateMetastore) (*AccountsMetastoreInfo, error)
}

These APIs manage Unity Catalog metastores for an account. A metastore contains catalogs that can be associated with workspaces

type AccountStorageCredentialsAPI

type AccountStorageCredentialsAPI struct {
	// contains filtered or unexported fields
}

These APIs manage storage credentials for a particular metastore.

func NewAccountStorageCredentials

func NewAccountStorageCredentials(client *client.DatabricksClient) *AccountStorageCredentialsAPI

func (*AccountStorageCredentialsAPI) Create

Create a storage credential.

Creates a new storage credential. The request object is specific to the cloud:

* **AwsIamRole** for AWS credentials
* **AzureServicePrincipal** for Azure credentials
* **GcpServiceAccountKey** for GCP credentials.

The caller must be a metastore admin and have the **CREATE_STORAGE_CREDENTIAL** privilege on the metastore.
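
A sketch of creating an AWS storage credential at the account level. The StorageCredentials field on the account client and the Name and AwsIamRole fields of CreateStorageCredential are assumptions; only AwsIamRoleRequest itself is documented further down this page.

ctx := context.Background()
a, err := databricks.NewAccountClient()
if err != nil {
	panic(err)
}

created, err := a.StorageCredentials.Create(ctx, catalog.AccountsCreateStorageCredential{
	MetastoreId: "11111111-2222-3333-4444-555555555555",
	CredentialInfo: &catalog.CreateStorageCredential{
		Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
		// The request object is cloud-specific; AwsIamRole is the AWS variant.
		AwsIamRole: &catalog.AwsIamRoleRequest{
			RoleArn: "arn:aws:iam::123456789012:role/unity-catalog-access",
		},
	},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)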

func (*AccountStorageCredentialsAPI) Delete added in v0.9.0

Delete a storage credential.

Deletes a storage credential from the metastore. The caller must be an owner of the storage credential.

func (*AccountStorageCredentialsAPI) DeleteByMetastoreIdAndStorageCredentialName added in v0.23.0

func (a *AccountStorageCredentialsAPI) DeleteByMetastoreIdAndStorageCredentialName(ctx context.Context, metastoreId string, storageCredentialName string) error

Delete a storage credential.

Deletes a storage credential from the metastore. The caller must be an owner of the storage credential.

func (*AccountStorageCredentialsAPI) Get

Gets the named storage credential.

Gets a storage credential from the metastore. The caller must be a metastore admin, the owner of the storage credential, or have a level of privilege on the storage credential.

func (*AccountStorageCredentialsAPI) GetByMetastoreIdAndStorageCredentialName added in v0.23.0

func (a *AccountStorageCredentialsAPI) GetByMetastoreIdAndStorageCredentialName(ctx context.Context, metastoreId string, storageCredentialName string) (*AccountsStorageCredentialInfo, error)

Gets the named storage credential.

Gets a storage credential from the metastore. The caller must be a metastore admin, the owner of the storage credential, or have a level of privilege on the storage credential.

func (*AccountStorageCredentialsAPI) GetByName added in v0.18.0

GetByName calls AccountStorageCredentialsAPI.StorageCredentialInfoNameToIdMap and returns a single StorageCredentialInfo.

Returns an error if there's more than one StorageCredentialInfo with the same .Name.

Note: All StorageCredentialInfo instances are loaded into memory before returning matching by name.

This method is generated by Databricks SDK Code Generator.

func (*AccountStorageCredentialsAPI) Impl

Impl returns the low-level AccountStorageCredentials API implementation. Deprecated: use MockAccountStorageCredentialsInterface instead.

func (*AccountStorageCredentialsAPI) List

Get all storage credentials assigned to a metastore.

Gets a list of all storage credentials that have been assigned to the given metastore.

func (*AccountStorageCredentialsAPI) ListByMetastoreId

func (a *AccountStorageCredentialsAPI) ListByMetastoreId(ctx context.Context, metastoreId string) ([]StorageCredentialInfo, error)

Get all storage credentials assigned to a metastore.

Gets a list of all storage credentials that have been assigned to the given metastore.

func (*AccountStorageCredentialsAPI) StorageCredentialInfoNameToIdMap added in v0.18.0

func (a *AccountStorageCredentialsAPI) StorageCredentialInfoNameToIdMap(ctx context.Context, request ListAccountStorageCredentialsRequest) (map[string]string, error)

StorageCredentialInfoNameToIdMap calls AccountStorageCredentialsAPI.List and creates a map of results with StorageCredentialInfo.Name as key and StorageCredentialInfo.Id as value.

Returns an error if there's more than one StorageCredentialInfo with the same .Name.

Note: All StorageCredentialInfo instances are loaded into memory before creating a map.

This method is generated by Databricks SDK Code Generator.
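
A short sketch of building the name-to-ID map and resolving a single credential. The MetastoreId field on ListAccountStorageCredentialsRequest, the credential name, and the account-client field name are assumptions.

ctx := context.Background()
a, err := databricks.NewAccountClient()
if err != nil {
	panic(err)
}

// All StorageCredentialInfo instances for the metastore are loaded into memory
// before the map is built.
nameToId, err := a.StorageCredentials.StorageCredentialInfoNameToIdMap(ctx, catalog.ListAccountStorageCredentialsRequest{
	MetastoreId: "11111111-2222-3333-4444-555555555555",
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "credential id: %s", nameToId["my-credential"])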

func (*AccountStorageCredentialsAPI) Update added in v0.9.0

Updates a storage credential.

Updates a storage credential on the metastore. The caller must be the owner of the storage credential. If the caller is a metastore admin, only the __owner__ credential can be changed.

func (*AccountStorageCredentialsAPI) WithImpl

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockAccountStorageCredentialsInterface instead.

type AccountStorageCredentialsInterface added in v0.29.0

type AccountStorageCredentialsInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockAccountStorageCredentialsInterface instead.
	WithImpl(impl AccountStorageCredentialsService) AccountStorageCredentialsInterface

	// Impl returns low-level AccountStorageCredentials API implementation
	// Deprecated: use MockAccountStorageCredentialsInterface instead.
	Impl() AccountStorageCredentialsService

	// Create a storage credential.
	//
	// Creates a new storage credential. The request object is specific to the
	// cloud:
	//
	// * **AwsIamRole** for AWS credentials * **AzureServicePrincipal** for Azure
	// credentials * **GcpServiceAccountKey** for GCP credentials.
	//
	// The caller must be a metastore admin and have the
	// **CREATE_STORAGE_CREDENTIAL** privilege on the metastore.
	Create(ctx context.Context, request AccountsCreateStorageCredential) (*AccountsStorageCredentialInfo, error)

	// Delete a storage credential.
	//
	// Deletes a storage credential from the metastore. The caller must be an owner
	// of the storage credential.
	Delete(ctx context.Context, request DeleteAccountStorageCredentialRequest) error

	// Delete a storage credential.
	//
	// Deletes a storage credential from the metastore. The caller must be an owner
	// of the storage credential.
	DeleteByMetastoreIdAndStorageCredentialName(ctx context.Context, metastoreId string, storageCredentialName string) error

	// Gets the named storage credential.
	//
	// Gets a storage credential from the metastore. The caller must be a metastore
	// admin, the owner of the storage credential, or have a level of privilege on
	// the storage credential.
	Get(ctx context.Context, request GetAccountStorageCredentialRequest) (*AccountsStorageCredentialInfo, error)

	// Gets the named storage credential.
	//
	// Gets a storage credential from the metastore. The caller must be a metastore
	// admin, the owner of the storage credential, or have a level of privilege on
	// the storage credential.
	GetByMetastoreIdAndStorageCredentialName(ctx context.Context, metastoreId string, storageCredentialName string) (*AccountsStorageCredentialInfo, error)

	// Get all storage credentials assigned to a metastore.
	//
	// Gets a list of all storage credentials that have been assigned to the
	// given metastore.
	List(ctx context.Context, request ListAccountStorageCredentialsRequest) ([]StorageCredentialInfo, error)

	// StorageCredentialInfoNameToIdMap calls [AccountStorageCredentialsAPI.List] and creates a map of results with [StorageCredentialInfo].Name as key and [StorageCredentialInfo].Id as value.
	//
	// Returns an error if there's more than one [StorageCredentialInfo] with the same .Name.
	//
	// Note: All [StorageCredentialInfo] instances are loaded into memory before creating a map.
	//
	// This method is generated by Databricks SDK Code Generator.
	StorageCredentialInfoNameToIdMap(ctx context.Context, request ListAccountStorageCredentialsRequest) (map[string]string, error)

	// GetByName calls [AccountStorageCredentialsAPI.StorageCredentialInfoNameToIdMap] and returns a single [StorageCredentialInfo].
	//
	// Returns an error if there's more than one [StorageCredentialInfo] with the same .Name.
	//
	// Note: All [StorageCredentialInfo] instances are loaded into memory before returning matching by name.
	//
	// This method is generated by Databricks SDK Code Generator.
	GetByName(ctx context.Context, name string) (*StorageCredentialInfo, error)

	// Get all storage credentials assigned to a metastore.
	//
	// Gets a list of all storage credentials that have been assigned to the
	// given metastore.
	ListByMetastoreId(ctx context.Context, metastoreId string) ([]StorageCredentialInfo, error)

	// Updates a storage credential.
	//
	// Updates a storage credential on the metastore. The caller must be the owner
	// of the storage credential. If the caller is a metastore admin, only the
	// __owner__ credential can be changed.
	Update(ctx context.Context, request AccountsUpdateStorageCredential) (*AccountsStorageCredentialInfo, error)
}

type AccountStorageCredentialsService

type AccountStorageCredentialsService interface {

	// Create a storage credential.
	//
	// Creates a new storage credential. The request object is specific to the
	// cloud:
	//
	// * **AwsIamRole** for AWS credentials * **AzureServicePrincipal** for
	// Azure credentials * **GcpServiceAccountKey** for GCP credentials.
	//
	// The caller must be a metastore admin and have the
	// **CREATE_STORAGE_CREDENTIAL** privilege on the metastore.
	Create(ctx context.Context, request AccountsCreateStorageCredential) (*AccountsStorageCredentialInfo, error)

	// Delete a storage credential.
	//
	// Deletes a storage credential from the metastore. The caller must be an
	// owner of the storage credential.
	Delete(ctx context.Context, request DeleteAccountStorageCredentialRequest) error

	// Gets the named storage credential.
	//
	// Gets a storage credential from the metastore. The caller must be a
	// metastore admin, the owner of the storage credential, or have a level of
	// privilege on the storage credential.
	Get(ctx context.Context, request GetAccountStorageCredentialRequest) (*AccountsStorageCredentialInfo, error)

	// Get all storage credentials assigned to a metastore.
	//
	// Gets a list of all storage credentials that have been assigned to the
	// given metastore.
	List(ctx context.Context, request ListAccountStorageCredentialsRequest) ([]StorageCredentialInfo, error)

	// Updates a storage credential.
	//
	// Updates a storage credential on the metastore. The caller must be the
	// owner of the storage credential. If the caller is a metastore admin, only
	// the __owner__ credential can be changed.
	Update(ctx context.Context, request AccountsUpdateStorageCredential) (*AccountsStorageCredentialInfo, error)
}

These APIs manage storage credentials for a particular metastore.

type AccountsCreateMetastore added in v0.10.0

type AccountsCreateMetastore struct {
	MetastoreInfo *CreateMetastore `json:"metastore_info,omitempty"`
}

type AccountsCreateMetastoreAssignment added in v0.10.0

type AccountsCreateMetastoreAssignment struct {
	MetastoreAssignment *CreateMetastoreAssignment `json:"metastore_assignment,omitempty"`
	// Unity Catalog metastore ID
	MetastoreId string `json:"-" url:"-"`
	// Workspace ID.
	WorkspaceId int64 `json:"-" url:"-"`
}

type AccountsCreateStorageCredential added in v0.10.0

type AccountsCreateStorageCredential struct {
	CredentialInfo *CreateStorageCredential `json:"credential_info,omitempty"`
	// Unity Catalog metastore ID
	MetastoreId string `json:"-" url:"-"`
}

type AccountsMetastoreAssignment added in v0.10.0

type AccountsMetastoreAssignment struct {
	MetastoreAssignment *MetastoreAssignment `json:"metastore_assignment,omitempty"`
}

type AccountsMetastoreInfo added in v0.10.0

type AccountsMetastoreInfo struct {
	MetastoreInfo *MetastoreInfo `json:"metastore_info,omitempty"`
}

type AccountsStorageCredentialInfo added in v0.16.0

type AccountsStorageCredentialInfo struct {
	CredentialInfo *StorageCredentialInfo `json:"credential_info,omitempty"`
}

type AccountsUpdateMetastore added in v0.10.0

type AccountsUpdateMetastore struct {
	// Unity Catalog metastore ID
	MetastoreId string `json:"-" url:"-"`

	MetastoreInfo *UpdateMetastore `json:"metastore_info,omitempty"`
}

type AccountsUpdateMetastoreAssignment added in v0.10.0

type AccountsUpdateMetastoreAssignment struct {
	MetastoreAssignment *UpdateMetastoreAssignment `json:"metastore_assignment,omitempty"`
	// Unity Catalog metastore ID
	MetastoreId string `json:"-" url:"-"`
	// Workspace ID.
	WorkspaceId int64 `json:"-" url:"-"`
}

type AccountsUpdateStorageCredential added in v0.10.0

type AccountsUpdateStorageCredential struct {
	CredentialInfo *UpdateStorageCredential `json:"credential_info,omitempty"`
	// Unity Catalog metastore ID
	MetastoreId string `json:"-" url:"-"`
	// Name of the storage credential.
	StorageCredentialName string `json:"-" url:"-"`
}

type ArtifactAllowlistInfo added in v0.17.0

type ArtifactAllowlistInfo struct {
	// A list of allowed artifact match patterns.
	ArtifactMatchers []ArtifactMatcher `json:"artifact_matchers,omitempty"`
	// Time at which this artifact allowlist was set, in epoch milliseconds.
	CreatedAt int64 `json:"created_at,omitempty"`
	// Username of the user who set the artifact allowlist.
	CreatedBy string `json:"created_by,omitempty"`
	// Unique identifier of parent metastore.
	MetastoreId string `json:"metastore_id,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ArtifactAllowlistInfo) MarshalJSON added in v0.23.0

func (s ArtifactAllowlistInfo) MarshalJSON() ([]byte, error)

func (*ArtifactAllowlistInfo) UnmarshalJSON added in v0.23.0

func (s *ArtifactAllowlistInfo) UnmarshalJSON(b []byte) error

type ArtifactAllowlistsAPI added in v0.17.0

type ArtifactAllowlistsAPI struct {
	// contains filtered or unexported fields
}

In Databricks Runtime 13.3 and above, you can add libraries and init scripts to the `allowlist` in UC so that users can leverage these artifacts on compute configured with shared access mode.

func NewArtifactAllowlists added in v0.17.0

func NewArtifactAllowlists(client *client.DatabricksClient) *ArtifactAllowlistsAPI

func (*ArtifactAllowlistsAPI) Get added in v0.17.0

Get an artifact allowlist.

Get the artifact allowlist of a certain artifact type. The caller must be a metastore admin or have the **MANAGE ALLOWLIST** privilege on the metastore.

func (*ArtifactAllowlistsAPI) GetByArtifactType added in v0.17.0

func (a *ArtifactAllowlistsAPI) GetByArtifactType(ctx context.Context, artifactType ArtifactType) (*ArtifactAllowlistInfo, error)

Get an artifact allowlist.

Get the artifact allowlist of a certain artifact type. The caller must be a metastore admin or have the **MANAGE ALLOWLIST** privilege on the metastore.

func (*ArtifactAllowlistsAPI) Impl added in v0.17.0

Impl returns the low-level ArtifactAllowlists API implementation. Deprecated: use MockArtifactAllowlistsInterface instead.

func (*ArtifactAllowlistsAPI) Update added in v0.17.0

Set an artifact allowlist.

Set the artifact allowlist of a certain artifact type. The whole artifact allowlist is replaced with the new allowlist. The caller must be a metastore admin or have the **MANAGE ALLOWLIST** privilege on the metastore.
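
A sketch of replacing the init-script allowlist through a workspace client. The SetArtifactAllowlist fields and the MatchTypePrefixMatch constant are assumptions, since neither is documented on this page; the path is a hypothetical example.

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

// The whole allowlist for the artifact type is replaced with the matchers below.
updated, err := w.ArtifactAllowlists.Update(ctx, catalog.SetArtifactAllowlist{
	ArtifactType: catalog.ArtifactTypeInitScript,
	ArtifactMatchers: []catalog.ArtifactMatcher{
		{
			Artifact:  "/Volumes/main/default/scripts/",
			MatchType: catalog.MatchTypePrefixMatch,
		},
	},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", updated)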

func (*ArtifactAllowlistsAPI) WithImpl added in v0.17.0

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockArtifactAllowlistsInterface instead.

type ArtifactAllowlistsInterface added in v0.29.0

type ArtifactAllowlistsInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockArtifactAllowlistsInterface instead.
	WithImpl(impl ArtifactAllowlistsService) ArtifactAllowlistsInterface

	// Impl returns low-level ArtifactAllowlists API implementation
	// Deprecated: use MockArtifactAllowlistsInterface instead.
	Impl() ArtifactAllowlistsService

	// Get an artifact allowlist.
	//
	// Get the artifact allowlist of a certain artifact type. The caller must be a
	// metastore admin or have the **MANAGE ALLOWLIST** privilege on the metastore.
	Get(ctx context.Context, request GetArtifactAllowlistRequest) (*ArtifactAllowlistInfo, error)

	// Get an artifact allowlist.
	//
	// Get the artifact allowlist of a certain artifact type. The caller must be a
	// metastore admin or have the **MANAGE ALLOWLIST** privilege on the metastore.
	GetByArtifactType(ctx context.Context, artifactType ArtifactType) (*ArtifactAllowlistInfo, error)

	// Set an artifact allowlist.
	//
	// Set the artifact allowlist of a certain artifact type. The whole artifact
	// allowlist is replaced with the new allowlist. The caller must be a metastore
	// admin or have the **MANAGE ALLOWLIST** privilege on the metastore.
	Update(ctx context.Context, request SetArtifactAllowlist) (*ArtifactAllowlistInfo, error)
}

type ArtifactAllowlistsService added in v0.17.0

type ArtifactAllowlistsService interface {

	// Get an artifact allowlist.
	//
	// Get the artifact allowlist of a certain artifact type. The caller must be
	// a metastore admin or have the **MANAGE ALLOWLIST** privilege on the
	// metastore.
	Get(ctx context.Context, request GetArtifactAllowlistRequest) (*ArtifactAllowlistInfo, error)

	// Set an artifact allowlist.
	//
	// Set the artifact allowlist of a certain artifact type. The whole artifact
	// allowlist is replaced with the new allowlist. The caller must be a
	// metastore admin or have the **MANAGE ALLOWLIST** privilege on the
	// metastore.
	Update(ctx context.Context, request SetArtifactAllowlist) (*ArtifactAllowlistInfo, error)
}

In Databricks Runtime 13.3 and above, you can add libraries and init scripts to the `allowlist` in UC so that users can leverage these artifacts on compute configured with shared access mode.

type ArtifactMatcher added in v0.17.0

type ArtifactMatcher struct {
	// The artifact path or maven coordinate
	Artifact string `json:"artifact"`
	// The pattern matching type of the artifact
	MatchType MatchType `json:"match_type"`
}

type ArtifactType added in v0.17.0

type ArtifactType string

The artifact type

const ArtifactTypeInitScript ArtifactType = `INIT_SCRIPT`
const ArtifactTypeLibraryJar ArtifactType = `LIBRARY_JAR`
const ArtifactTypeLibraryMaven ArtifactType = `LIBRARY_MAVEN`

func (*ArtifactType) Set added in v0.17.0

func (f *ArtifactType) Set(v string) error

Set raw string value and validate it against allowed values

func (*ArtifactType) String added in v0.17.0

func (f *ArtifactType) String() string

String representation for fmt.Print

func (*ArtifactType) Type added in v0.17.0

func (f *ArtifactType) Type() string

Type always returns ArtifactType to satisfy [pflag.Value] interface
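
Because ArtifactType satisfies the [pflag.Value] interface, a raw string can be validated and assigned with Set; a minimal sketch using only the methods documented above:

var t catalog.ArtifactType
if err := t.Set("LIBRARY_JAR"); err != nil {
	panic(err)
}
// Prints the raw value and the flag type name.
fmt.Println(t.String(), t.Type())

// A value outside the allowed set is rejected by Set.
if err := t.Set("NOT_A_TYPE"); err != nil {
	fmt.Println(err)
}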

type AssignResponse added in v0.34.0

type AssignResponse struct {
}

type AwsIamRoleRequest added in v0.35.0

type AwsIamRoleRequest struct {
	// The Amazon Resource Name (ARN) of the AWS IAM role for S3 data access.
	RoleArn string `json:"role_arn"`
}

type AwsIamRoleResponse added in v0.35.0

type AwsIamRoleResponse struct {
	// The external ID used in role assumption to prevent the confused deputy
	// problem.
	ExternalId string `json:"external_id,omitempty"`
	// The Amazon Resource Name (ARN) of the AWS IAM role for S3 data access.
	RoleArn string `json:"role_arn"`
	// The Amazon Resource Name (ARN) of the AWS IAM user managed by Databricks.
	// This is the identity that is going to assume the AWS IAM role.
	UnityCatalogIamArn string `json:"unity_catalog_iam_arn,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (AwsIamRoleResponse) MarshalJSON added in v0.35.0

func (s AwsIamRoleResponse) MarshalJSON() ([]byte, error)

func (*AwsIamRoleResponse) UnmarshalJSON added in v0.35.0

func (s *AwsIamRoleResponse) UnmarshalJSON(b []byte) error

type AzureManagedIdentityRequest added in v0.38.0

type AzureManagedIdentityRequest struct {
	// The Azure resource ID of the Azure Databricks Access Connector. Use the
	// format
	// /subscriptions/{guid}/resourceGroups/{rg-name}/providers/Microsoft.Databricks/accessConnectors/{connector-name}.
	AccessConnectorId string `json:"access_connector_id"`
	// The Azure resource ID of the managed identity. Use the format
	// /subscriptions/{guid}/resourceGroups/{rg-name}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identity-name}.
	// This is only available for user-assigned identities. For system-assigned
	// identities, the access_connector_id is used to identify the identity. If
	// this field is not provided, then we assume the AzureManagedIdentity is
	// for a system-assigned identity.
	ManagedIdentityId string `json:"managed_identity_id,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (AzureManagedIdentityRequest) MarshalJSON added in v0.38.0

func (s AzureManagedIdentityRequest) MarshalJSON() ([]byte, error)

func (*AzureManagedIdentityRequest) UnmarshalJSON added in v0.38.0

func (s *AzureManagedIdentityRequest) UnmarshalJSON(b []byte) error

type AzureManagedIdentityResponse added in v0.38.0

type AzureManagedIdentityResponse struct {
	// The Azure resource ID of the Azure Databricks Access Connector. Use the
	// format
	// /subscriptions/{guid}/resourceGroups/{rg-name}/providers/Microsoft.Databricks/accessConnectors/{connector-name}.
	AccessConnectorId string `json:"access_connector_id"`
	// The Databricks internal ID that represents this managed identity.
	CredentialId string `json:"credential_id,omitempty"`
	// The Azure resource ID of the managed identity. Use the format
	// /subscriptions/{guid}/resourceGroups/{rg-name}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identity-name}.
	// This is only available for user-assigned identities. For system-assigned
	// identities, the access_connector_id is used to identify the identity. If
	// this field is not provided, then we assume the AzureManagedIdentity is
	// for a system-assigned identity.
	ManagedIdentityId string `json:"managed_identity_id,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (AzureManagedIdentityResponse) MarshalJSON added in v0.38.0

func (s AzureManagedIdentityResponse) MarshalJSON() ([]byte, error)

func (*AzureManagedIdentityResponse) UnmarshalJSON added in v0.38.0

func (s *AzureManagedIdentityResponse) UnmarshalJSON(b []byte) error

type AzureServicePrincipal

type AzureServicePrincipal struct {
	// The application ID of the application registration within the referenced
	// AAD tenant.
	ApplicationId string `json:"application_id"`
	// The client secret generated for the above app ID in AAD.
	ClientSecret string `json:"client_secret"`
	// The directory ID corresponding to the Azure Active Directory (AAD) tenant
	// of the application.
	DirectoryId string `json:"directory_id"`
}

type CancelRefreshRequest added in v0.31.0

type CancelRefreshRequest struct {
	// ID of the refresh.
	RefreshId string `json:"-" url:"-"`
	// Full name of the table.
	TableName string `json:"-" url:"-"`
}

Cancel refresh

type CancelRefreshResponse added in v0.34.0

type CancelRefreshResponse struct {
}

type CatalogInfo

type CatalogInfo struct {
	// Indicates whether the principal is limited to retrieving metadata for the
	// associated object through the BROWSE privilege when include_browse is
	// enabled in the request.
	BrowseOnly bool `json:"browse_only,omitempty"`
	// The type of the catalog.
	CatalogType CatalogType `json:"catalog_type,omitempty"`
	// User-provided free-form text description.
	Comment string `json:"comment,omitempty"`
	// The name of the connection to an external data source.
	ConnectionName string `json:"connection_name,omitempty"`
	// Time at which this catalog was created, in epoch milliseconds.
	CreatedAt int64 `json:"created_at,omitempty"`
	// Username of catalog creator.
	CreatedBy string `json:"created_by,omitempty"`

	EffectivePredictiveOptimizationFlag *EffectivePredictiveOptimizationFlag `json:"effective_predictive_optimization_flag,omitempty"`
	// Whether predictive optimization should be enabled for this object and
	// objects under it.
	EnablePredictiveOptimization EnablePredictiveOptimization `json:"enable_predictive_optimization,omitempty"`
	// The full name of the catalog. Corresponds with the name field.
	FullName string `json:"full_name,omitempty"`
	// Whether the current securable is accessible from all workspaces or a
	// specific set of workspaces.
	IsolationMode IsolationMode `json:"isolation_mode,omitempty"`
	// Unique identifier of parent metastore.
	MetastoreId string `json:"metastore_id,omitempty"`
	// Name of catalog.
	Name string `json:"name,omitempty"`
	// A map of key-value properties attached to the securable.
	Options map[string]string `json:"options,omitempty"`
	// Username of current owner of catalog.
	Owner string `json:"owner,omitempty"`
	// A map of key-value properties attached to the securable.
	Properties map[string]string `json:"properties,omitempty"`
	// The name of delta sharing provider.
	//
	// A Delta Sharing catalog is a catalog that is based on a Delta share on a
	// remote sharing server.
	ProviderName string `json:"provider_name,omitempty"`
	// Status of an asynchronously provisioned resource.
	ProvisioningInfo *ProvisioningInfo `json:"provisioning_info,omitempty"`
	// Kind of catalog securable.
	SecurableKind CatalogInfoSecurableKind `json:"securable_kind,omitempty"`

	SecurableType string `json:"securable_type,omitempty"`
	// The name of the share under the share provider.
	ShareName string `json:"share_name,omitempty"`
	// Storage Location URL (full path) for managed tables within catalog.
	StorageLocation string `json:"storage_location,omitempty"`
	// Storage root URL for managed tables within catalog.
	StorageRoot string `json:"storage_root,omitempty"`
	// Time at which this catalog was last modified, in epoch milliseconds.
	UpdatedAt int64 `json:"updated_at,omitempty"`
	// Username of user who last modified catalog.
	UpdatedBy string `json:"updated_by,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (CatalogInfo) MarshalJSON added in v0.23.0

func (s CatalogInfo) MarshalJSON() ([]byte, error)

func (*CatalogInfo) UnmarshalJSON added in v0.23.0

func (s *CatalogInfo) UnmarshalJSON(b []byte) error
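
The ForceSendFields slice controls whether zero-valued optional fields are serialized by the generated MarshalJSON; a brief sketch of the assumed behavior:

info := catalog.CatalogInfo{
	Name:            "main",
	Comment:         "", // zero-valued optional fields are normally omitted
	ForceSendFields: []string{"Comment"}, // ...unless they are listed here
}
b, err := json.Marshal(info)
if err != nil {
	panic(err)
}
// The payload includes "comment" even though it is empty.
fmt.Println(string(b))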

type CatalogInfoSecurableKind added in v0.18.0

type CatalogInfoSecurableKind string

Kind of catalog securable.

const CatalogInfoSecurableKindCatalogDeltasharing CatalogInfoSecurableKind = `CATALOG_DELTASHARING`
const CatalogInfoSecurableKindCatalogForeignBigquery CatalogInfoSecurableKind = `CATALOG_FOREIGN_BIGQUERY`
const CatalogInfoSecurableKindCatalogForeignDatabricks CatalogInfoSecurableKind = `CATALOG_FOREIGN_DATABRICKS`
const CatalogInfoSecurableKindCatalogForeignMysql CatalogInfoSecurableKind = `CATALOG_FOREIGN_MYSQL`
const CatalogInfoSecurableKindCatalogForeignPostgresql CatalogInfoSecurableKind = `CATALOG_FOREIGN_POSTGRESQL`
const CatalogInfoSecurableKindCatalogForeignRedshift CatalogInfoSecurableKind = `CATALOG_FOREIGN_REDSHIFT`
const CatalogInfoSecurableKindCatalogForeignSnowflake CatalogInfoSecurableKind = `CATALOG_FOREIGN_SNOWFLAKE`
const CatalogInfoSecurableKindCatalogForeignSqldw CatalogInfoSecurableKind = `CATALOG_FOREIGN_SQLDW`
const CatalogInfoSecurableKindCatalogForeignSqlserver CatalogInfoSecurableKind = `CATALOG_FOREIGN_SQLSERVER`
const CatalogInfoSecurableKindCatalogInternal CatalogInfoSecurableKind = `CATALOG_INTERNAL`
const CatalogInfoSecurableKindCatalogOnline CatalogInfoSecurableKind = `CATALOG_ONLINE`
const CatalogInfoSecurableKindCatalogOnlineIndex CatalogInfoSecurableKind = `CATALOG_ONLINE_INDEX`
const CatalogInfoSecurableKindCatalogStandard CatalogInfoSecurableKind = `CATALOG_STANDARD`
const CatalogInfoSecurableKindCatalogSystem CatalogInfoSecurableKind = `CATALOG_SYSTEM`
const CatalogInfoSecurableKindCatalogSystemDeltasharing CatalogInfoSecurableKind = `CATALOG_SYSTEM_DELTASHARING`

func (*CatalogInfoSecurableKind) Set added in v0.18.0

Set raw string value and validate it against allowed values

func (*CatalogInfoSecurableKind) String added in v0.18.0

func (f *CatalogInfoSecurableKind) String() string

String representation for fmt.Print

func (*CatalogInfoSecurableKind) Type added in v0.18.0

func (f *CatalogInfoSecurableKind) Type() string

Type always returns CatalogInfoSecurableKind to satisfy [pflag.Value] interface

type CatalogType

type CatalogType string

The type of the catalog.

const CatalogTypeDeltasharingCatalog CatalogType = `DELTASHARING_CATALOG`
const CatalogTypeManagedCatalog CatalogType = `MANAGED_CATALOG`
const CatalogTypeSystemCatalog CatalogType = `SYSTEM_CATALOG`

func (*CatalogType) Set

func (f *CatalogType) Set(v string) error

Set raw string value and validate it against allowed values

func (*CatalogType) String

func (f *CatalogType) String() string

String representation for fmt.Print

func (*CatalogType) Type

func (f *CatalogType) Type() string

Type always returns CatalogType to satisfy [pflag.Value] interface

type CatalogsAPI

type CatalogsAPI struct {
	// contains filtered or unexported fields
}

A catalog is the first layer of Unity Catalog’s three-level namespace. It’s used to organize your data assets. Users can see all catalogs on which they have been assigned the USE_CATALOG data permission.

In Unity Catalog, admins and data stewards manage users and their access to data centrally across all of the workspaces in a Databricks account. Users in different workspaces can share access to the same data, depending on privileges granted centrally in Unity Catalog.

func NewCatalogs

func NewCatalogs(client *client.DatabricksClient) *CatalogsAPI

func (*CatalogsAPI) Create

func (a *CatalogsAPI) Create(ctx context.Context, request CreateCatalog) (*CatalogInfo, error)

Create a catalog.

Creates a new catalog instance in the parent metastore if the caller is a metastore admin or has the **CREATE_CATALOG** privilege.

Example (CatalogWorkspaceBindings)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

created, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

// cleanup

err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  created.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

Example (Catalogs)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

created, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

// cleanup

err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  created.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

Example (Schemas)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

newCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", newCatalog)

// cleanup

err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  newCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

Example (Shares)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

createdCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdCatalog)

// cleanup

err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  createdCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

Example (Tables)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

createdCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdCatalog)

// cleanup

err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  createdCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

Example (Volumes)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

createdCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdCatalog)

// cleanup

err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  createdCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

func (*CatalogsAPI) Delete

func (a *CatalogsAPI) Delete(ctx context.Context, request DeleteCatalogRequest) error

Delete a catalog.

Deletes the catalog that matches the supplied name. The caller must be a metastore admin or the owner of the catalog.

func (*CatalogsAPI) DeleteByName

func (a *CatalogsAPI) DeleteByName(ctx context.Context, name string) error

Delete a catalog.

Deletes the catalog that matches the supplied name. The caller must be a metastore admin or the owner of the catalog.

func (*CatalogsAPI) Get

func (a *CatalogsAPI) Get(ctx context.Context, request GetCatalogRequest) (*CatalogInfo, error)

Get a catalog.

Gets the specified catalog in a metastore. The caller must be a metastore admin, the owner of the catalog, or a user that has the **USE_CATALOG** privilege set for their account.

Example (Catalogs)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

created, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

_, err = w.Catalogs.GetByName(ctx, created.Name)
if err != nil {
	panic(err)
}

// cleanup

err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  created.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

func (*CatalogsAPI) GetByName

func (a *CatalogsAPI) GetByName(ctx context.Context, name string) (*CatalogInfo, error)

Get a catalog.

Gets the specified catalog in a metastore. The caller must be a metastore admin, the owner of the catalog, or a user that has the **USE_CATALOG** privilege set for their account.

func (*CatalogsAPI) Impl

func (a *CatalogsAPI) Impl() CatalogsService

Impl returns the low-level Catalogs API implementation. Deprecated: use MockCatalogsInterface instead.

func (*CatalogsAPI) List added in v0.24.0

List catalogs.

Gets an array of catalogs in the metastore. If the caller is the metastore admin, all catalogs will be retrieved. Otherwise, only catalogs owned by the caller (or for which the caller has the **USE_CATALOG** privilege) will be retrieved. There is no guarantee of a specific ordering of the elements in the array.

This method is generated by Databricks SDK Code Generator.
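
List returns a paginated iterator rather than a slice; a minimal sketch of draining it, assuming the HasNext/Next contract of the SDK's listing.Iterator:

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

it := w.Catalogs.List(ctx, catalog.ListCatalogsRequest{})
for it.HasNext(ctx) {
	c, err := it.Next(ctx)
	if err != nil {
		panic(err)
	}
	logger.Infof(ctx, "found %v", c.Name)
}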

func (*CatalogsAPI) ListAll

func (a *CatalogsAPI) ListAll(ctx context.Context, request ListCatalogsRequest) ([]CatalogInfo, error)

List catalogs.

Gets an array of catalogs in the metastore. If the caller is the metastore admin, all catalogs will be retrieved. Otherwise, only catalogs owned by the caller (or for which the caller has the **USE_CATALOG** privilege) will be retrieved. There is no guarantee of a specific ordering of the elements in the array.

This method is generated by Databricks SDK Code Generator.

Example (Catalogs)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

all, err := w.Catalogs.ListAll(ctx, catalog.ListCatalogsRequest{})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", all)
Output:

func (*CatalogsAPI) Update

func (a *CatalogsAPI) Update(ctx context.Context, request UpdateCatalog) (*CatalogInfo, error)

Update a catalog.

Updates the catalog that matches the supplied name. The caller must be either the owner of the catalog, or a metastore admin (when changing the owner field of the catalog).

Example (CatalogWorkspaceBindings)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

created, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

_, err = w.Catalogs.Update(ctx, catalog.UpdateCatalog{
	Name:          created.Name,
	IsolationMode: catalog.IsolationModeIsolated,
})
if err != nil {
	panic(err)
}

// cleanup

err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  created.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

Example (Catalogs)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

created, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

_, err = w.Catalogs.Update(ctx, catalog.UpdateCatalog{
	Name:    created.Name,
	Comment: "updated",
})
if err != nil {
	panic(err)
}

// cleanup

err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  created.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

func (*CatalogsAPI) WithImpl

func (a *CatalogsAPI) WithImpl(impl CatalogsService) CatalogsInterface

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockCatalogsInterface instead.

type CatalogsInterface added in v0.29.0

type CatalogsInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockCatalogsInterface instead.
	WithImpl(impl CatalogsService) CatalogsInterface

	// Impl returns low-level Catalogs API implementation
	// Deprecated: use MockCatalogsInterface instead.
	Impl() CatalogsService

	// Create a catalog.
	//
	// Creates a new catalog instance in the parent metastore if the caller is a
	// metastore admin or has the **CREATE_CATALOG** privilege.
	Create(ctx context.Context, request CreateCatalog) (*CatalogInfo, error)

	// Delete a catalog.
	//
	// Deletes the catalog that matches the supplied name. The caller must be a
	// metastore admin or the owner of the catalog.
	Delete(ctx context.Context, request DeleteCatalogRequest) error

	// Delete a catalog.
	//
	// Deletes the catalog that matches the supplied name. The caller must be a
	// metastore admin or the owner of the catalog.
	DeleteByName(ctx context.Context, name string) error

	// Get a catalog.
	//
	// Gets the specified catalog in a metastore. The caller must be a metastore
	// admin, the owner of the catalog, or a user that has the **USE_CATALOG**
	// privilege set for their account.
	Get(ctx context.Context, request GetCatalogRequest) (*CatalogInfo, error)

	// Get a catalog.
	//
	// Gets the specified catalog in a metastore. The caller must be a metastore
	// admin, the owner of the catalog, or a user that has the **USE_CATALOG**
	// privilege set for their account.
	GetByName(ctx context.Context, name string) (*CatalogInfo, error)

	// List catalogs.
	//
	// Gets an array of catalogs in the metastore. If the caller is the metastore
	// admin, all catalogs will be retrieved. Otherwise, only catalogs owned by the
	// caller (or for which the caller has the **USE_CATALOG** privilege) will be
	// retrieved. There is no guarantee of a specific ordering of the elements in
	// the array.
	//
	// This method is generated by Databricks SDK Code Generator.
	List(ctx context.Context, request ListCatalogsRequest) listing.Iterator[CatalogInfo]

	// List catalogs.
	//
	// Gets an array of catalogs in the metastore. If the caller is the metastore
	// admin, all catalogs will be retrieved. Otherwise, only catalogs owned by the
	// caller (or for which the caller has the **USE_CATALOG** privilege) will be
	// retrieved. There is no guarantee of a specific ordering of the elements in
	// the array.
	//
	// This method is generated by Databricks SDK Code Generator.
	ListAll(ctx context.Context, request ListCatalogsRequest) ([]CatalogInfo, error)

	// Update a catalog.
	//
	// Updates the catalog that matches the supplied name. The caller must be either
	// the owner of the catalog, or a metastore admin (when changing the owner field
	// of the catalog).
	Update(ctx context.Context, request UpdateCatalog) (*CatalogInfo, error)
}

type CatalogsService

type CatalogsService interface {

	// Create a catalog.
	//
	// Creates a new catalog instance in the parent metastore if the caller is a
	// metastore admin or has the **CREATE_CATALOG** privilege.
	Create(ctx context.Context, request CreateCatalog) (*CatalogInfo, error)

	// Delete a catalog.
	//
	// Deletes the catalog that matches the supplied name. The caller must be a
	// metastore admin or the owner of the catalog.
	Delete(ctx context.Context, request DeleteCatalogRequest) error

	// Get a catalog.
	//
	// Gets the specified catalog in a metastore. The caller must be a metastore
	// admin, the owner of the catalog, or a user that has the **USE_CATALOG**
	// privilege set for their account.
	Get(ctx context.Context, request GetCatalogRequest) (*CatalogInfo, error)

	// List catalogs.
	//
	// Gets an array of catalogs in the metastore. If the caller is the
	// metastore admin, all catalogs will be retrieved. Otherwise, only catalogs
	// owned by the caller (or for which the caller has the **USE_CATALOG**
	// privilege) will be retrieved. There is no guarantee of a specific
	// ordering of the elements in the array.
	//
	// Use ListAll() to get all CatalogInfo instances
	List(ctx context.Context, request ListCatalogsRequest) (*ListCatalogsResponse, error)

	// Update a catalog.
	//
	// Updates the catalog that matches the supplied name. The caller must be
	// either the owner of the catalog, or a metastore admin (when changing the
	// owner field of the catalog).
	Update(ctx context.Context, request UpdateCatalog) (*CatalogInfo, error)
}

A catalog is the first layer of Unity Catalog’s three-level namespace. It’s used to organize your data assets. Users can see all catalogs on which they have been assigned the USE_CATALOG data permission.

In Unity Catalog, admins and data stewards manage users and their access to data centrally across all of the workspaces in a Databricks account. Users in different workspaces can share access to the same data, depending on privileges granted centrally in Unity Catalog.

type CloudflareApiToken added in v0.27.0

type CloudflareApiToken struct {
	// The Cloudflare access key id of the token.
	AccessKeyId string `json:"access_key_id"`
	// The account id associated with the API token.
	AccountId string `json:"account_id"`
	// The secret access token generated for the access key id
	SecretAccessKey string `json:"secret_access_key"`
}

type ColumnInfo

type ColumnInfo struct {
	// User-provided free-form text description.
	Comment string `json:"comment,omitempty"`

	Mask *ColumnMask `json:"mask,omitempty"`
	// Name of Column.
	Name string `json:"name,omitempty"`
	// Whether field may be Null (default: true).
	Nullable bool `json:"nullable,omitempty"`
	// Partition index for column.
	PartitionIndex int `json:"partition_index,omitempty"`
	// Ordinal position of column (starting at position 0).
	Position int `json:"position,omitempty"`
	// Format of IntervalType.
	TypeIntervalType string `json:"type_interval_type,omitempty"`
	// Full data type specification, JSON-serialized.
	TypeJson string `json:"type_json,omitempty"`
	// Name of type (INT, STRUCT, MAP, etc.).
	TypeName ColumnTypeName `json:"type_name,omitempty"`
	// Digits of precision; required for DecimalTypes.
	TypePrecision int `json:"type_precision,omitempty"`
	// Digits to right of decimal; required for DecimalTypes.
	TypeScale int `json:"type_scale,omitempty"`
	// Full data type specification as SQL/catalogString text.
	TypeText string `json:"type_text,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ColumnInfo) MarshalJSON added in v0.23.0

func (s ColumnInfo) MarshalJSON() ([]byte, error)

func (*ColumnInfo) UnmarshalJSON added in v0.23.0

func (s *ColumnInfo) UnmarshalJSON(b []byte) error
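
ColumnInfo, like most request and response types in this package, drops zero-valued fields tagged omitempty when marshaled; listing a Go field name in ForceSendFields is the conventional way to send an explicit zero value. A minimal sketch, assuming encoding/json is imported:

ctx := context.Background()
col := catalog.ColumnInfo{
	Name:     "id",
	TypeName: catalog.ColumnTypeNameInt,
	TypeText: "int",
	// Nullable is false (the zero value), so it would normally be omitted from
	// the JSON; naming the Go field in ForceSendFields asks MarshalJSON to emit
	// "nullable": false explicitly.
	Nullable:        false,
	ForceSendFields: []string{"Nullable"},
}
b, err := json.Marshal(col)
if err != nil {
	panic(err)
}
logger.Infof(ctx, "payload: %s", b)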

type ColumnMask

type ColumnMask struct {
	// The full name of the column mask SQL UDF.
	FunctionName string `json:"function_name,omitempty"`
	// The list of additional table columns to be passed as input to the column
	// mask function. The first arg of the mask function should be of the type
	// of the column being masked and the types of the rest of the args should
	// match the types of columns in 'using_column_names'.
	UsingColumnNames []string `json:"using_column_names,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ColumnMask) MarshalJSON added in v0.23.0

func (s ColumnMask) MarshalJSON() ([]byte, error)

func (*ColumnMask) UnmarshalJSON added in v0.23.0

func (s *ColumnMask) UnmarshalJSON(b []byte) error
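
As a concrete illustration of the argument convention above, the sketch below attaches a hypothetical masking UDF to a column: the UDF's first argument receives the masked column's value and its second argument receives the region column named in UsingColumnNames.

col := catalog.ColumnInfo{
	Name: "ssn",
	Mask: &catalog.ColumnMask{
		// Hypothetical SQL UDF with signature mask_ssn(ssn STRING, region STRING).
		FunctionName:     "main.default.mask_ssn",
		UsingColumnNames: []string{"region"},
	},
}
_ = col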

type ColumnTypeName

type ColumnTypeName string

Name of type (INT, STRUCT, MAP, etc.).

const ColumnTypeNameArray ColumnTypeName = `ARRAY`
const ColumnTypeNameBinary ColumnTypeName = `BINARY`
const ColumnTypeNameBoolean ColumnTypeName = `BOOLEAN`
const ColumnTypeNameByte ColumnTypeName = `BYTE`
const ColumnTypeNameChar ColumnTypeName = `CHAR`
const ColumnTypeNameDate ColumnTypeName = `DATE`
const ColumnTypeNameDecimal ColumnTypeName = `DECIMAL`
const ColumnTypeNameDouble ColumnTypeName = `DOUBLE`
const ColumnTypeNameFloat ColumnTypeName = `FLOAT`
const ColumnTypeNameInt ColumnTypeName = `INT`
const ColumnTypeNameInterval ColumnTypeName = `INTERVAL`
const ColumnTypeNameLong ColumnTypeName = `LONG`
const ColumnTypeNameMap ColumnTypeName = `MAP`
const ColumnTypeNameNull ColumnTypeName = `NULL`
const ColumnTypeNameShort ColumnTypeName = `SHORT`
const ColumnTypeNameString ColumnTypeName = `STRING`
const ColumnTypeNameStruct ColumnTypeName = `STRUCT`
const ColumnTypeNameTableType ColumnTypeName = `TABLE_TYPE`
const ColumnTypeNameTimestamp ColumnTypeName = `TIMESTAMP`
const ColumnTypeNameTimestampNtz ColumnTypeName = `TIMESTAMP_NTZ`
const ColumnTypeNameUserDefinedType ColumnTypeName = `USER_DEFINED_TYPE`

func (*ColumnTypeName) Set

func (f *ColumnTypeName) Set(v string) error

Set raw string value and validate it against allowed values

func (*ColumnTypeName) String

func (f *ColumnTypeName) String() string

String representation for fmt.Print

func (*ColumnTypeName) Type

func (f *ColumnTypeName) Type() string

Type always returns ColumnTypeName to satisfy [pflag.Value] interface
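
Because ColumnTypeName implements [pflag.Value], it can back a command-line flag directly; Set rejects values outside the constant list above. A minimal sketch:

var t catalog.ColumnTypeName
if err := t.Set("STRING"); err != nil {
	panic(err)
}
// t now equals catalog.ColumnTypeNameString; an unsupported value such as
// "VARCHAR2" would make Set return an error instead.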

type ConnectionInfo added in v0.10.0

type ConnectionInfo struct {
	// User-provided free-form text description.
	Comment string `json:"comment,omitempty"`
	// Unique identifier of the Connection.
	ConnectionId string `json:"connection_id,omitempty"`
	// The type of connection.
	ConnectionType ConnectionType `json:"connection_type,omitempty"`
	// Time at which this connection was created, in epoch milliseconds.
	CreatedAt int64 `json:"created_at,omitempty"`
	// Username of connection creator.
	CreatedBy string `json:"created_by,omitempty"`
	// The type of credential.
	CredentialType CredentialType `json:"credential_type,omitempty"`
	// Full name of connection.
	FullName string `json:"full_name,omitempty"`
	// Unique identifier of parent metastore.
	MetastoreId string `json:"metastore_id,omitempty"`
	// Name of the connection.
	Name string `json:"name,omitempty"`
	// A map of key-value properties attached to the securable.
	Options map[string]string `json:"options,omitempty"`
	// Username of current owner of the connection.
	Owner string `json:"owner,omitempty"`
	// An object containing map of key-value properties attached to the
	// connection.
	Properties map[string]string `json:"properties,omitempty"`
	// Status of an asynchronously provisioned resource.
	ProvisioningInfo *ProvisioningInfo `json:"provisioning_info,omitempty"`
	// If the connection is read only.
	ReadOnly bool `json:"read_only,omitempty"`
	// Kind of connection securable.
	SecurableKind ConnectionInfoSecurableKind `json:"securable_kind,omitempty"`

	SecurableType string `json:"securable_type,omitempty"`
	// Time at which this connection was updated, in epoch milliseconds.
	UpdatedAt int64 `json:"updated_at,omitempty"`
	// Username of user who last modified connection.
	UpdatedBy string `json:"updated_by,omitempty"`
	// URL of the remote data source, extracted from options.
	Url string `json:"url,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ConnectionInfo) MarshalJSON added in v0.23.0

func (s ConnectionInfo) MarshalJSON() ([]byte, error)

func (*ConnectionInfo) UnmarshalJSON added in v0.23.0

func (s *ConnectionInfo) UnmarshalJSON(b []byte) error

type ConnectionInfoSecurableKind added in v0.16.0

type ConnectionInfoSecurableKind string

Kind of connection securable.

const ConnectionInfoSecurableKindConnectionBigquery ConnectionInfoSecurableKind = `CONNECTION_BIGQUERY`
const ConnectionInfoSecurableKindConnectionDatabricks ConnectionInfoSecurableKind = `CONNECTION_DATABRICKS`
const ConnectionInfoSecurableKindConnectionMysql ConnectionInfoSecurableKind = `CONNECTION_MYSQL`
const ConnectionInfoSecurableKindConnectionOnlineCatalog ConnectionInfoSecurableKind = `CONNECTION_ONLINE_CATALOG`
const ConnectionInfoSecurableKindConnectionPostgresql ConnectionInfoSecurableKind = `CONNECTION_POSTGRESQL`
const ConnectionInfoSecurableKindConnectionRedshift ConnectionInfoSecurableKind = `CONNECTION_REDSHIFT`
const ConnectionInfoSecurableKindConnectionSnowflake ConnectionInfoSecurableKind = `CONNECTION_SNOWFLAKE`
const ConnectionInfoSecurableKindConnectionSqldw ConnectionInfoSecurableKind = `CONNECTION_SQLDW`
const ConnectionInfoSecurableKindConnectionSqlserver ConnectionInfoSecurableKind = `CONNECTION_SQLSERVER`

func (*ConnectionInfoSecurableKind) Set added in v0.16.0

Set raw string value and validate it against allowed values

func (*ConnectionInfoSecurableKind) String added in v0.16.0

func (f *ConnectionInfoSecurableKind) String() string

String representation for fmt.Print

func (*ConnectionInfoSecurableKind) Type added in v0.16.0

Type always returns ConnectionInfoSecurableKind to satisfy [pflag.Value] interface

type ConnectionType added in v0.10.0

type ConnectionType string

The type of connection.

const ConnectionTypeBigquery ConnectionType = `BIGQUERY`
const ConnectionTypeDatabricks ConnectionType = `DATABRICKS`
const ConnectionTypeMysql ConnectionType = `MYSQL`
const ConnectionTypePostgresql ConnectionType = `POSTGRESQL`
const ConnectionTypeRedshift ConnectionType = `REDSHIFT`
const ConnectionTypeSnowflake ConnectionType = `SNOWFLAKE`
const ConnectionTypeSqldw ConnectionType = `SQLDW`
const ConnectionTypeSqlserver ConnectionType = `SQLSERVER`

func (*ConnectionType) Set added in v0.10.0

func (f *ConnectionType) Set(v string) error

Set raw string value and validate it against allowed values

func (*ConnectionType) String added in v0.10.0

func (f *ConnectionType) String() string

String representation for fmt.Print

func (*ConnectionType) Type added in v0.10.0

func (f *ConnectionType) Type() string

Type always returns ConnectionType to satisfy [pflag.Value] interface

type ConnectionsAPI added in v0.10.0

type ConnectionsAPI struct {
	// contains filtered or unexported fields
}

Connections allow for creating a connection to an external data source.

A connection is an abstraction of an external data source that Databricks Compute can connect to. Creating a connection object is the first step in managing external data sources within Unity Catalog; the second step is creating a data object (catalog, schema, or table) that uses the connection. Data objects derived from a connection can be written to or read from in the same way as other Unity Catalog data objects based on cloud storage. Users may create different types of connections, each with a unique set of configuration options to support credential management and other settings.

func NewConnections added in v0.10.0

func NewConnections(client *client.DatabricksClient) *ConnectionsAPI

func (*ConnectionsAPI) ConnectionInfoNameToFullNameMap added in v0.10.0

func (a *ConnectionsAPI) ConnectionInfoNameToFullNameMap(ctx context.Context, request ListConnectionsRequest) (map[string]string, error)

ConnectionInfoNameToFullNameMap calls ConnectionsAPI.ListAll and creates a map of results with ConnectionInfo.Name as key and ConnectionInfo.FullName as value.

Returns an error if there's more than one ConnectionInfo with the same .Name.

Note: All ConnectionInfo instances are loaded into memory before creating a map.

This method is generated by Databricks SDK Code Generator.
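
A minimal usage sketch; the connection name looked up at the end is hypothetical.

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

nameToFullName, err := w.Connections.ConnectionInfoNameToFullNameMap(ctx, catalog.ListConnectionsRequest{})
if err != nil {
	panic(err)
}
// "my-connection" is a placeholder; substitute a connection that exists in
// your metastore.
logger.Infof(ctx, "full name: %s", nameToFullName["my-connection"])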

func (*ConnectionsAPI) Create added in v0.10.0

func (a *ConnectionsAPI) Create(ctx context.Context, request CreateConnection) (*ConnectionInfo, error)

Create a connection.

Creates a new connection

Creates a new connection to an external data source. It allows users to specify connection details and configurations for interaction with the external server.

Example (Connections)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

connCreate, err := w.Connections.Create(ctx, catalog.CreateConnection{
	Comment:        "Go SDK Acceptance Test Connection",
	ConnectionType: catalog.ConnectionTypeDatabricks,
	Name:           fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	Options:        map[string]string{"host": fmt.Sprintf("%s-fake-workspace.cloud.databricks.com", fmt.Sprintf("sdk-%x", time.Now().UnixNano())), "httpPath": fmt.Sprintf("/sql/1.0/warehouses/%s", fmt.Sprintf("sdk-%x", time.Now().UnixNano())), "personalAccessToken": fmt.Sprintf("sdk-%x", time.Now().UnixNano())},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", connCreate)

// cleanup

err = w.Connections.Delete(ctx, catalog.DeleteConnectionRequest{
	Name: connCreate.Name,
})
if err != nil {
	panic(err)
}
Output:

func (*ConnectionsAPI) Delete added in v0.10.0

Delete a connection.

Deletes the connection that matches the supplied name.

func (*ConnectionsAPI) DeleteByName added in v0.32.0

func (a *ConnectionsAPI) DeleteByName(ctx context.Context, name string) error

Delete a connection.

Deletes the connection that matches the supplied name.

func (*ConnectionsAPI) Get added in v0.10.0

Get a connection.

Gets a connection from its name.

Example (Connections)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

connCreate, err := w.Connections.Create(ctx, catalog.CreateConnection{
	Comment:        "Go SDK Acceptance Test Connection",
	ConnectionType: catalog.ConnectionTypeDatabricks,
	Name:           fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	Options:        map[string]string{"host": fmt.Sprintf("%s-fake-workspace.cloud.databricks.com", fmt.Sprintf("sdk-%x", time.Now().UnixNano())), "httpPath": fmt.Sprintf("/sql/1.0/warehouses/%s", fmt.Sprintf("sdk-%x", time.Now().UnixNano())), "personalAccessToken": fmt.Sprintf("sdk-%x", time.Now().UnixNano())},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", connCreate)

connUpdate, err := w.Connections.Update(ctx, catalog.UpdateConnection{
	Name:    connCreate.Name,
	Options: map[string]string{"host": fmt.Sprintf("%s-fake-workspace.cloud.databricks.com", fmt.Sprintf("sdk-%x", time.Now().UnixNano())), "httpPath": fmt.Sprintf("/sql/1.0/warehouses/%s", fmt.Sprintf("sdk-%x", time.Now().UnixNano())), "personalAccessToken": fmt.Sprintf("sdk-%x", time.Now().UnixNano())},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", connUpdate)

conn, err := w.Connections.Get(ctx, catalog.GetConnectionRequest{
	Name: connUpdate.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", conn)

// cleanup

err = w.Connections.Delete(ctx, catalog.DeleteConnectionRequest{
	Name: connCreate.Name,
})
if err != nil {
	panic(err)
}
Output:

func (*ConnectionsAPI) GetByName added in v0.10.0

func (a *ConnectionsAPI) GetByName(ctx context.Context, name string) (*ConnectionInfo, error)

Get a connection.

Gets a connection from its name.

func (*ConnectionsAPI) Impl added in v0.10.0

Impl returns the low-level Connections API implementation. Deprecated: use MockConnectionsInterface instead.

func (*ConnectionsAPI) List added in v0.24.0

List connections.

List all connections.

This method is generated by Databricks SDK Code Generator.

func (*ConnectionsAPI) ListAll added in v0.10.0

List connections.

List all connections.

This method is generated by Databricks SDK Code Generator.

Example (Connections)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

connList, err := w.Connections.ListAll(ctx, catalog.ListConnectionsRequest{})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", connList)
Output:

func (*ConnectionsAPI) Update added in v0.10.0

func (a *ConnectionsAPI) Update(ctx context.Context, request UpdateConnection) (*ConnectionInfo, error)

Update a connection.

Updates the connection that matches the supplied name.

Example (Connections)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

connCreate, err := w.Connections.Create(ctx, catalog.CreateConnection{
	Comment:        "Go SDK Acceptance Test Connection",
	ConnectionType: catalog.ConnectionTypeDatabricks,
	Name:           fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	Options:        map[string]string{"host": fmt.Sprintf("%s-fake-workspace.cloud.databricks.com", fmt.Sprintf("sdk-%x", time.Now().UnixNano())), "httpPath": fmt.Sprintf("/sql/1.0/warehouses/%s", fmt.Sprintf("sdk-%x", time.Now().UnixNano())), "personalAccessToken": fmt.Sprintf("sdk-%x", time.Now().UnixNano())},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", connCreate)

connUpdate, err := w.Connections.Update(ctx, catalog.UpdateConnection{
	Name:    connCreate.Name,
	Options: map[string]string{"host": fmt.Sprintf("%s-fake-workspace.cloud.databricks.com", fmt.Sprintf("sdk-%x", time.Now().UnixNano())), "httpPath": fmt.Sprintf("/sql/1.0/warehouses/%s", fmt.Sprintf("sdk-%x", time.Now().UnixNano())), "personalAccessToken": fmt.Sprintf("sdk-%x", time.Now().UnixNano())},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", connUpdate)

// cleanup

err = w.Connections.Delete(ctx, catalog.DeleteConnectionRequest{
	Name: connCreate.Name,
})
if err != nil {
	panic(err)
}
Output:

func (*ConnectionsAPI) WithImpl added in v0.10.0

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockConnectionsInterface instead.

type ConnectionsInterface added in v0.29.0

type ConnectionsInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockConnectionsInterface instead.
	WithImpl(impl ConnectionsService) ConnectionsInterface

	// Impl returns low-level Connections API implementation
	// Deprecated: use MockConnectionsInterface instead.
	Impl() ConnectionsService

	// Create a connection.
	//
	// Creates a new connection
	//
	// Creates a new connection to an external data source. It allows users to
	// specify connection details and configurations for interaction with the
	// external server.
	Create(ctx context.Context, request CreateConnection) (*ConnectionInfo, error)

	// Delete a connection.
	//
	// Deletes the connection that matches the supplied name.
	Delete(ctx context.Context, request DeleteConnectionRequest) error

	// Delete a connection.
	//
	// Deletes the connection that matches the supplied name.
	DeleteByName(ctx context.Context, name string) error

	// Get a connection.
	//
	// Gets a connection from its name.
	Get(ctx context.Context, request GetConnectionRequest) (*ConnectionInfo, error)

	// Get a connection.
	//
	// Gets a connection from its name.
	GetByName(ctx context.Context, name string) (*ConnectionInfo, error)

	// List connections.
	//
	// List all connections.
	//
	// This method is generated by Databricks SDK Code Generator.
	List(ctx context.Context, request ListConnectionsRequest) listing.Iterator[ConnectionInfo]

	// List connections.
	//
	// List all connections.
	//
	// This method is generated by Databricks SDK Code Generator.
	ListAll(ctx context.Context, request ListConnectionsRequest) ([]ConnectionInfo, error)

	// ConnectionInfoNameToFullNameMap calls [ConnectionsAPI.ListAll] and creates a map of results with [ConnectionInfo].Name as key and [ConnectionInfo].FullName as value.
	//
	// Returns an error if there's more than one [ConnectionInfo] with the same .Name.
	//
	// Note: All [ConnectionInfo] instances are loaded into memory before creating a map.
	//
	// This method is generated by Databricks SDK Code Generator.
	ConnectionInfoNameToFullNameMap(ctx context.Context, request ListConnectionsRequest) (map[string]string, error)

	// Update a connection.
	//
	// Updates the connection that matches the supplied name.
	Update(ctx context.Context, request UpdateConnection) (*ConnectionInfo, error)
}

type ConnectionsService added in v0.10.0

type ConnectionsService interface {

	// Create a connection.
	//
	// Creates a new connection
	//
	// Creates a new connection to an external data source. It allows users to
	// specify connection details and configurations for interaction with the
	// external server.
	Create(ctx context.Context, request CreateConnection) (*ConnectionInfo, error)

	// Delete a connection.
	//
	// Deletes the connection that matches the supplied name.
	Delete(ctx context.Context, request DeleteConnectionRequest) error

	// Get a connection.
	//
	// Gets a connection from its name.
	Get(ctx context.Context, request GetConnectionRequest) (*ConnectionInfo, error)

	// List connections.
	//
	// List all connections.
	//
	// Use ListAll() to get all ConnectionInfo instances, which will iterate over every result page.
	List(ctx context.Context, request ListConnectionsRequest) (*ListConnectionsResponse, error)

	// Update a connection.
	//
	// Updates the connection that matches the supplied name.
	Update(ctx context.Context, request UpdateConnection) (*ConnectionInfo, error)
}

Connections allow for creating a connection to an external data source.

A connection is an abstraction of an external data source that Databricks Compute can connect to. Creating a connection object is the first step in managing external data sources within Unity Catalog; the second step is creating a data object (catalog, schema, or table) that uses the connection. Data objects derived from a connection can be written to or read from in the same way as other Unity Catalog data objects based on cloud storage. Users may create different types of connections, each with a unique set of configuration options to support credential management and other settings.

type ContinuousUpdateStatus added in v0.33.0

type ContinuousUpdateStatus struct {
	// Progress of the initial data synchronization.
	InitialPipelineSyncProgress *PipelineProgress `json:"initial_pipeline_sync_progress,omitempty"`
	// The last source table Delta version that was synced to the online table.
	// Note that this Delta version may not be completely synced to the online
	// table yet.
	LastProcessedCommitVersion int64 `json:"last_processed_commit_version,omitempty"`
	// The timestamp of the last time any data was synchronized from the source
	// table to the online table.
	Timestamp string `json:"timestamp,omitempty"`

	ForceSendFields []string `json:"-"`
}

Detailed status of an online table. Shown if the online table is in the ONLINE_CONTINUOUS_UPDATE or the ONLINE_UPDATING_PIPELINE_RESOURCES state.

func (ContinuousUpdateStatus) MarshalJSON added in v0.33.0

func (s ContinuousUpdateStatus) MarshalJSON() ([]byte, error)

func (*ContinuousUpdateStatus) UnmarshalJSON added in v0.33.0

func (s *ContinuousUpdateStatus) UnmarshalJSON(b []byte) error

type CreateCatalog

type CreateCatalog struct {
	// User-provided free-form text description.
	Comment string `json:"comment,omitempty"`
	// The name of the connection to an external data source.
	ConnectionName string `json:"connection_name,omitempty"`
	// Name of catalog.
	Name string `json:"name"`
	// A map of key-value properties attached to the securable.
	Options map[string]string `json:"options,omitempty"`
	// A map of key-value properties attached to the securable.
	Properties map[string]string `json:"properties,omitempty"`
	// The name of delta sharing provider.
	//
	// A Delta Sharing catalog is a catalog that is based on a Delta share on a
	// remote sharing server.
	ProviderName string `json:"provider_name,omitempty"`
	// The name of the share under the share provider.
	ShareName string `json:"share_name,omitempty"`
	// Storage root URL for managed tables within catalog.
	StorageRoot string `json:"storage_root,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (CreateCatalog) MarshalJSON added in v0.23.0

func (s CreateCatalog) MarshalJSON() ([]byte, error)

func (*CreateCatalog) UnmarshalJSON added in v0.23.0

func (s *CreateCatalog) UnmarshalJSON(b []byte) error
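
A minimal sketch of creating (and cleaning up) a standard catalog with the fields above; a Delta Sharing catalog would instead set ProviderName and ShareName, and a catalog backed by a connection would set ConnectionName plus Options.

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

created, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name:    fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	Comment: "created via SDK",
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

// cleanup

err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  created.Name,
	Force: true,
})
if err != nil {
	panic(err)
}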

type CreateConnection added in v0.10.0

type CreateConnection struct {
	// User-provided free-form text description.
	Comment string `json:"comment,omitempty"`
	// The type of connection.
	ConnectionType ConnectionType `json:"connection_type"`
	// Name of the connection.
	Name string `json:"name"`
	// A map of key-value properties attached to the securable.
	Options map[string]string `json:"options"`
	// An object containing map of key-value properties attached to the
	// connection.
	Properties map[string]string `json:"properties,omitempty"`
	// If the connection is read only.
	ReadOnly bool `json:"read_only,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (CreateConnection) MarshalJSON added in v0.23.0

func (s CreateConnection) MarshalJSON() ([]byte, error)

func (*CreateConnection) UnmarshalJSON added in v0.23.0

func (s *CreateConnection) UnmarshalJSON(b []byte) error

type CreateExternalLocation

type CreateExternalLocation struct {
	// The AWS access point to use when accessing S3 for this external location.
	AccessPoint string `json:"access_point,omitempty"`
	// User-provided free-form text description.
	Comment string `json:"comment,omitempty"`
	// Name of the storage credential used with this location.
	CredentialName string `json:"credential_name"`
	// Encryption options that apply to clients connecting to cloud storage.
	EncryptionDetails *EncryptionDetails `json:"encryption_details,omitempty"`
	// Name of the external location.
	Name string `json:"name"`
	// Indicates whether the external location is read-only.
	ReadOnly bool `json:"read_only,omitempty"`
	// Skips validation of the storage credential associated with the external
	// location.
	SkipValidation bool `json:"skip_validation,omitempty"`
	// Path URL of the external location.
	Url string `json:"url"`

	ForceSendFields []string `json:"-"`
}

func (CreateExternalLocation) MarshalJSON added in v0.23.0

func (s CreateExternalLocation) MarshalJSON() ([]byte, error)

func (*CreateExternalLocation) UnmarshalJSON added in v0.23.0

func (s *CreateExternalLocation) UnmarshalJSON(b []byte) error

type CreateFunction

type CreateFunction struct {
	// Name of parent catalog.
	CatalogName string `json:"catalog_name"`
	// User-provided free-form text description.
	Comment string `json:"comment,omitempty"`
	// Scalar function return data type.
	DataType ColumnTypeName `json:"data_type"`
	// External function language.
	ExternalLanguage string `json:"external_language,omitempty"`
	// External function name.
	ExternalName string `json:"external_name,omitempty"`
	// Pretty printed function data type.
	FullDataType string `json:"full_data_type"`

	InputParams FunctionParameterInfos `json:"input_params"`
	// Whether the function is deterministic.
	IsDeterministic bool `json:"is_deterministic"`
	// Function null call.
	IsNullCall bool `json:"is_null_call"`
	// Name of function, relative to parent schema.
	Name string `json:"name"`
	// Function parameter style. **S** is the value for SQL.
	ParameterStyle CreateFunctionParameterStyle `json:"parameter_style"`
	// JSON-serialized key-value pair map, encoded (escaped) as a string.
	Properties string `json:"properties,omitempty"`
	// Table function return parameters.
	ReturnParams FunctionParameterInfos `json:"return_params"`
	// Function language. When **EXTERNAL** is used, the language of the routine
	// function should be specified in the __external_language__ field, and the
	// __return_params__ of the function cannot be used (as **TABLE** return
	// type is not supported), and the __sql_data_access__ field must be
	// **NO_SQL**.
	RoutineBody CreateFunctionRoutineBody `json:"routine_body"`
	// Function body.
	RoutineDefinition string `json:"routine_definition"`
	// Function dependencies.
	RoutineDependencies DependencyList `json:"routine_dependencies"`
	// Name of parent schema relative to its parent catalog.
	SchemaName string `json:"schema_name"`
	// Function security type.
	SecurityType CreateFunctionSecurityType `json:"security_type"`
	// Specific name of the function; Reserved for future use.
	SpecificName string `json:"specific_name"`
	// Function SQL data access.
	SqlDataAccess CreateFunctionSqlDataAccess `json:"sql_data_access"`
	// List of schemas whose objects can be referenced without qualification.
	SqlPath string `json:"sql_path,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (CreateFunction) MarshalJSON added in v0.23.0

func (s CreateFunction) MarshalJSON() ([]byte, error)

func (*CreateFunction) UnmarshalJSON added in v0.23.0

func (s *CreateFunction) UnmarshalJSON(b []byte) error

type CreateFunctionParameterStyle

type CreateFunctionParameterStyle string

Function parameter style. **S** is the value for SQL.

const CreateFunctionParameterStyleS CreateFunctionParameterStyle = `S`

func (*CreateFunctionParameterStyle) Set

Set raw string value and validate it against allowed values

func (*CreateFunctionParameterStyle) String

String representation for fmt.Print

func (*CreateFunctionParameterStyle) Type

Type always returns CreateFunctionParameterStyle to satisfy [pflag.Value] interface

type CreateFunctionRequest added in v0.25.0

type CreateFunctionRequest struct {
	// Partial __FunctionInfo__ specifying the function to be created.
	FunctionInfo CreateFunction `json:"function_info"`
}
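
The sketch below assembles a CreateFunctionRequest for a zero-argument SQL scalar function, assuming w.Functions.Create accepts this wrapper; the catalog and schema names are hypothetical.

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

fn, err := w.Functions.Create(ctx, catalog.CreateFunctionRequest{
	FunctionInfo: catalog.CreateFunction{
		CatalogName:         "main",    // hypothetical catalog
		SchemaName:          "default", // hypothetical schema
		Name:                "answer",
		SpecificName:        "answer",
		DataType:            catalog.ColumnTypeNameInt,
		FullDataType:        "INT",
		InputParams:         catalog.FunctionParameterInfos{},
		ReturnParams:        catalog.FunctionParameterInfos{},
		RoutineBody:         catalog.CreateFunctionRoutineBodySql,
		RoutineDefinition:   "SELECT 42",
		RoutineDependencies: catalog.DependencyList{},
		ParameterStyle:      catalog.CreateFunctionParameterStyleS,
		SecurityType:        catalog.CreateFunctionSecurityTypeDefiner,
		SqlDataAccess:       catalog.CreateFunctionSqlDataAccessContainsSql,
		IsDeterministic:     true,
		IsNullCall:          false,
	},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", fn)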

type CreateFunctionRoutineBody

type CreateFunctionRoutineBody string

Function language. When **EXTERNAL** is used, the language of the routine function should be specified in the __external_language__ field, and the __return_params__ of the function cannot be used (as **TABLE** return type is not supported), and the __sql_data_access__ field must be **NO_SQL**.

const CreateFunctionRoutineBodyExternal CreateFunctionRoutineBody = `EXTERNAL`
const CreateFunctionRoutineBodySql CreateFunctionRoutineBody = `SQL`

func (*CreateFunctionRoutineBody) Set

Set raw string value and validate it against allowed values

func (*CreateFunctionRoutineBody) String

func (f *CreateFunctionRoutineBody) String() string

String representation for fmt.Print

func (*CreateFunctionRoutineBody) Type

Type always returns CreateFunctionRoutineBody to satisfy [pflag.Value] interface

type CreateFunctionSecurityType

type CreateFunctionSecurityType string

Function security type.

const CreateFunctionSecurityTypeDefiner CreateFunctionSecurityType = `DEFINER`

func (*CreateFunctionSecurityType) Set

Set raw string value and validate it against allowed values

func (*CreateFunctionSecurityType) String

func (f *CreateFunctionSecurityType) String() string

String representation for fmt.Print

func (*CreateFunctionSecurityType) Type

Type always returns CreateFunctionSecurityType to satisfy [pflag.Value] interface

type CreateFunctionSqlDataAccess

type CreateFunctionSqlDataAccess string

Function SQL data access.

const CreateFunctionSqlDataAccessContainsSql CreateFunctionSqlDataAccess = `CONTAINS_SQL`
const CreateFunctionSqlDataAccessNoSql CreateFunctionSqlDataAccess = `NO_SQL`
const CreateFunctionSqlDataAccessReadsSqlData CreateFunctionSqlDataAccess = `READS_SQL_DATA`

func (*CreateFunctionSqlDataAccess) Set

Set raw string value and validate it against allowed values

func (*CreateFunctionSqlDataAccess) String

func (f *CreateFunctionSqlDataAccess) String() string

String representation for fmt.Print

func (*CreateFunctionSqlDataAccess) Type

Type always returns CreateFunctionSqlDataAccess to satisfy [pflag.Value] interface

type CreateMetastore

type CreateMetastore struct {
	// The user-specified name of the metastore.
	Name string `json:"name"`
	// Cloud region which the metastore serves (e.g., `us-west-2`, `westus`). If
	// this field is omitted, the region of the workspace receiving the request
	// will be used.
	Region string `json:"region,omitempty"`
	// The storage root URL for the metastore.
	StorageRoot string `json:"storage_root,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (CreateMetastore) MarshalJSON added in v0.23.0

func (s CreateMetastore) MarshalJSON() ([]byte, error)

func (*CreateMetastore) UnmarshalJSON added in v0.23.0

func (s *CreateMetastore) UnmarshalJSON(b []byte) error

type CreateMetastoreAssignment

type CreateMetastoreAssignment struct {
	// The name of the default catalog in the metastore.
	DefaultCatalogName string `json:"default_catalog_name"`
	// The unique ID of the metastore.
	MetastoreId string `json:"metastore_id"`
	// A workspace ID.
	WorkspaceId int64 `json:"-" url:"-"`
}

type CreateMonitor added in v0.30.0

type CreateMonitor struct {
	// The directory to store monitoring assets (e.g. dashboard, metric tables).
	AssetsDir string `json:"assets_dir"`
	// Name of the baseline table from which drift metrics are computed.
	// Columns in the monitored table should also be present in the baseline
	// table.
	BaselineTableName string `json:"baseline_table_name,omitempty"`
	// Custom metrics to compute on the monitored table. These can be aggregate
	// metrics, derived metrics (from already computed aggregate metrics), or
	// drift metrics (comparing metrics across time windows).
	CustomMetrics []MonitorMetric `json:"custom_metrics,omitempty"`
	// The data classification config for the monitor.
	DataClassificationConfig *MonitorDataClassificationConfig `json:"data_classification_config,omitempty"`
	// Configuration for monitoring inference logs.
	InferenceLog *MonitorInferenceLog `json:"inference_log,omitempty"`
	// The notification settings for the monitor.
	Notifications *MonitorNotifications `json:"notifications,omitempty"`
	// Schema where output metric tables are created.
	OutputSchemaName string `json:"output_schema_name"`
	// The schedule for automatically updating and refreshing metric tables.
	Schedule *MonitorCronSchedule `json:"schedule,omitempty"`
	// Whether to skip creating a default dashboard summarizing data quality
	// metrics.
	SkipBuiltinDashboard bool `json:"skip_builtin_dashboard,omitempty"`
	// List of column expressions to slice data with for targeted analysis. The
	// data is grouped by each expression independently, resulting in a separate
	// slice for each predicate and its complements. For high-cardinality
	// columns, only the top 100 unique values by frequency will generate
	// slices.
	SlicingExprs []string `json:"slicing_exprs,omitempty"`
	// Configuration for monitoring snapshot tables.
	Snapshot *MonitorSnapshot `json:"snapshot,omitempty"`
	// Full name of the table.
	TableName string `json:"-" url:"-"`
	// Configuration for monitoring time series tables.
	TimeSeries *MonitorTimeSeries `json:"time_series,omitempty"`
	// Optional argument to specify the warehouse for dashboard creation. If not
	// specified, the first running warehouse will be used.
	WarehouseId string `json:"warehouse_id,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (CreateMonitor) MarshalJSON added in v0.30.0

func (s CreateMonitor) MarshalJSON() ([]byte, error)

func (*CreateMonitor) UnmarshalJSON added in v0.30.0

func (s *CreateMonitor) UnmarshalJSON(b []byte) error
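
A minimal sketch of a snapshot-profile monitor, assuming the QualityMonitors service accepts CreateMonitor as its create request; the table name, assets directory, and output schema are hypothetical.

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

monitor, err := w.QualityMonitors.Create(ctx, catalog.CreateMonitor{
	TableName:        "main.default.my_table", // hypothetical monitored table
	AssetsDir:        "/Shared/quality-monitors/my_table",
	OutputSchemaName: "main.default",
	Snapshot:         &catalog.MonitorSnapshot{},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", monitor)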

type CreateOnlineTableRequest added in v0.35.0

type CreateOnlineTableRequest struct {
	// Full three-part (catalog, schema, table) name of the table.
	Name string `json:"name,omitempty"`
	// Specification of the online table.
	Spec *OnlineTableSpec `json:"spec,omitempty"`

	ForceSendFields []string `json:"-"`
}

Online Table information.

func (CreateOnlineTableRequest) MarshalJSON added in v0.35.0

func (s CreateOnlineTableRequest) MarshalJSON() ([]byte, error)

func (*CreateOnlineTableRequest) UnmarshalJSON added in v0.35.0

func (s *CreateOnlineTableRequest) UnmarshalJSON(b []byte) error

type CreateRegisteredModelRequest added in v0.18.0

type CreateRegisteredModelRequest struct {
	// The name of the catalog where the schema and the registered model reside
	CatalogName string `json:"catalog_name"`
	// The comment attached to the registered model
	Comment string `json:"comment,omitempty"`
	// The name of the registered model
	Name string `json:"name"`
	// The name of the schema where the registered model resides
	SchemaName string `json:"schema_name"`
	// The storage location on the cloud under which model version data files
	// are stored
	StorageLocation string `json:"storage_location,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (CreateRegisteredModelRequest) MarshalJSON added in v0.23.0

func (s CreateRegisteredModelRequest) MarshalJSON() ([]byte, error)

func (*CreateRegisteredModelRequest) UnmarshalJSON added in v0.23.0

func (s *CreateRegisteredModelRequest) UnmarshalJSON(b []byte) error
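
A minimal sketch, assuming the RegisteredModels service accepts this request type; the catalog and schema names are hypothetical.

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

model, err := w.RegisteredModels.Create(ctx, catalog.CreateRegisteredModelRequest{
	CatalogName: "main",    // hypothetical catalog
	SchemaName:  "default", // hypothetical schema
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	Comment:     "created via SDK",
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", model)

// cleanup

err = w.RegisteredModels.Delete(ctx, catalog.DeleteRegisteredModelRequest{
	FullName: model.FullName,
})
if err != nil {
	panic(err)
}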

type CreateResponse added in v0.34.0

type CreateResponse struct {
}

type CreateSchema

type CreateSchema struct {
	// Name of parent catalog.
	CatalogName string `json:"catalog_name"`
	// User-provided free-form text description.
	Comment string `json:"comment,omitempty"`
	// Name of schema, relative to parent catalog.
	Name string `json:"name"`
	// A map of key-value properties attached to the securable.
	Properties map[string]string `json:"properties,omitempty"`
	// Storage root URL for managed tables within schema.
	StorageRoot string `json:"storage_root,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (CreateSchema) MarshalJSON added in v0.23.0

func (s CreateSchema) MarshalJSON() ([]byte, error)

func (*CreateSchema) UnmarshalJSON added in v0.23.0

func (s *CreateSchema) UnmarshalJSON(b []byte) error
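
A minimal sketch of creating a schema in a hypothetical parent catalog named "main" and removing it afterwards.

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

schema, err := w.Schemas.Create(ctx, catalog.CreateSchema{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CatalogName: "main", // hypothetical parent catalog
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", schema)

// cleanup

err = w.Schemas.Delete(ctx, catalog.DeleteSchemaRequest{
	FullName: schema.FullName,
})
if err != nil {
	panic(err)
}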

type CreateStorageCredential

type CreateStorageCredential struct {
	// The AWS IAM role configuration.
	AwsIamRole *AwsIamRoleRequest `json:"aws_iam_role,omitempty"`
	// The Azure managed identity configuration.
	AzureManagedIdentity *AzureManagedIdentityRequest `json:"azure_managed_identity,omitempty"`
	// The Azure service principal configuration.
	AzureServicePrincipal *AzureServicePrincipal `json:"azure_service_principal,omitempty"`
	// The Cloudflare API token configuration.
	CloudflareApiToken *CloudflareApiToken `json:"cloudflare_api_token,omitempty"`
	// Comment associated with the credential.
	Comment string `json:"comment,omitempty"`
	// The Databricks managed GCP service account configuration.
	DatabricksGcpServiceAccount *DatabricksGcpServiceAccountRequest `json:"databricks_gcp_service_account,omitempty"`
	// The credential name. The name must be unique within the metastore.
	Name string `json:"name"`
	// Whether the storage credential is only usable for read operations.
	ReadOnly bool `json:"read_only,omitempty"`
	// Supplying true to this argument skips validation of the created
	// credential.
	SkipValidation bool `json:"skip_validation,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (CreateStorageCredential) MarshalJSON added in v0.23.0

func (s CreateStorageCredential) MarshalJSON() ([]byte, error)

func (*CreateStorageCredential) UnmarshalJSON added in v0.23.0

func (s *CreateStorageCredential) UnmarshalJSON(b []byte) error

type CreateTableConstraint

type CreateTableConstraint struct {
	// A table constraint, as defined by *one* of the following fields being
	// set: __primary_key_constraint__, __foreign_key_constraint__,
	// __named_table_constraint__.
	Constraint TableConstraint `json:"constraint"`
	// The full name of the table referenced by the constraint.
	FullNameArg string `json:"full_name_arg"`
}
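
A minimal sketch of adding a primary-key constraint, assuming the TableConstraints service accepts this request and that PrimaryKeyConstraint takes a constraint name plus child columns; the table and column names are hypothetical.

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

constraint, err := w.TableConstraints.Create(ctx, catalog.CreateTableConstraint{
	FullNameArg: "main.default.my_table", // hypothetical table
	Constraint: catalog.TableConstraint{
		PrimaryKeyConstraint: &catalog.PrimaryKeyConstraint{
			Name:         "pk_my_table",
			ChildColumns: []string{"id"},
		},
	},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", constraint)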

type CreateVolumeRequestContent

type CreateVolumeRequestContent struct {
	// The name of the catalog where the schema and the volume are
	CatalogName string `json:"catalog_name"`
	// The comment attached to the volume
	Comment string `json:"comment,omitempty"`
	// The name of the volume
	Name string `json:"name"`
	// The name of the schema where the volume is
	SchemaName string `json:"schema_name"`
	// The storage location on the cloud
	StorageLocation string `json:"storage_location,omitempty"`

	VolumeType VolumeType `json:"volume_type"`

	ForceSendFields []string `json:"-"`
}

func (CreateVolumeRequestContent) MarshalJSON added in v0.23.0

func (s CreateVolumeRequestContent) MarshalJSON() ([]byte, error)

func (*CreateVolumeRequestContent) UnmarshalJSON added in v0.23.0

func (s *CreateVolumeRequestContent) UnmarshalJSON(b []byte) error
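
A minimal sketch of creating a managed volume and deleting it by its fully qualified name; the catalog and schema are hypothetical.

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

volume, err := w.Volumes.Create(ctx, catalog.CreateVolumeRequestContent{
	CatalogName: "main",    // hypothetical catalog
	SchemaName:  "default", // hypothetical schema
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	VolumeType:  catalog.VolumeTypeManaged,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", volume)

// cleanup

err = w.Volumes.Delete(ctx, catalog.DeleteVolumeRequest{
	Name: volume.FullName,
})
if err != nil {
	panic(err)
}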

type CredentialType added in v0.10.0

type CredentialType string

The type of credential.

const CredentialTypeUsernamePassword CredentialType = `USERNAME_PASSWORD`

func (*CredentialType) Set added in v0.10.0

func (f *CredentialType) Set(v string) error

Set raw string value and validate it against allowed values

func (*CredentialType) String added in v0.10.0

func (f *CredentialType) String() string

String representation for fmt.Print

func (*CredentialType) Type added in v0.10.0

func (f *CredentialType) Type() string

Type always returns CredentialType to satisfy [pflag.Value] interface

type CurrentWorkspaceBindings added in v0.9.0

type CurrentWorkspaceBindings struct {
	// A list of workspace IDs.
	Workspaces []int64 `json:"workspaces,omitempty"`
}

Currently assigned workspaces

type DataSourceFormat

type DataSourceFormat string

Data source format

const DataSourceFormatAvro DataSourceFormat = `AVRO`
const DataSourceFormatCsv DataSourceFormat = `CSV`
const DataSourceFormatDelta DataSourceFormat = `DELTA`
const DataSourceFormatDeltasharing DataSourceFormat = `DELTASHARING`
const DataSourceFormatJson DataSourceFormat = `JSON`
const DataSourceFormatOrc DataSourceFormat = `ORC`
const DataSourceFormatParquet DataSourceFormat = `PARQUET`
const DataSourceFormatText DataSourceFormat = `TEXT`
const DataSourceFormatUnityCatalog DataSourceFormat = `UNITY_CATALOG`

func (*DataSourceFormat) Set

func (f *DataSourceFormat) Set(v string) error

Set raw string value and validate it against allowed values

func (*DataSourceFormat) String

func (f *DataSourceFormat) String() string

String representation for fmt.Print

func (*DataSourceFormat) Type

func (f *DataSourceFormat) Type() string

Type always returns DataSourceFormat to satisfy [pflag.Value] interface

type DatabricksGcpServiceAccountRequest added in v0.34.0

type DatabricksGcpServiceAccountRequest struct {
}

type DatabricksGcpServiceAccountResponse added in v0.10.0

type DatabricksGcpServiceAccountResponse struct {
	// The Databricks internal ID that represents this service account. This is
	// an output-only field.
	CredentialId string `json:"credential_id,omitempty"`
	// The email of the service account. This is an output-only field.
	Email string `json:"email,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (DatabricksGcpServiceAccountResponse) MarshalJSON added in v0.23.0

func (s DatabricksGcpServiceAccountResponse) MarshalJSON() ([]byte, error)

func (*DatabricksGcpServiceAccountResponse) UnmarshalJSON added in v0.23.0

func (s *DatabricksGcpServiceAccountResponse) UnmarshalJSON(b []byte) error

type DeleteAccountMetastoreAssignmentRequest

type DeleteAccountMetastoreAssignmentRequest struct {
	// Unity Catalog metastore ID
	MetastoreId string `json:"-" url:"-"`
	// Workspace ID.
	WorkspaceId int64 `json:"-" url:"-"`
}

Delete a metastore assignment

type DeleteAccountMetastoreRequest

type DeleteAccountMetastoreRequest struct {
	// Force deletion even if the metastore is not empty. Default is false.
	Force bool `json:"-" url:"force,omitempty"`
	// Unity Catalog metastore ID
	MetastoreId string `json:"-" url:"-"`

	ForceSendFields []string `json:"-"`
}

Delete a metastore

func (DeleteAccountMetastoreRequest) MarshalJSON added in v0.23.0

func (s DeleteAccountMetastoreRequest) MarshalJSON() ([]byte, error)

func (*DeleteAccountMetastoreRequest) UnmarshalJSON added in v0.23.0

func (s *DeleteAccountMetastoreRequest) UnmarshalJSON(b []byte) error

type DeleteAccountStorageCredentialRequest added in v0.9.0

type DeleteAccountStorageCredentialRequest struct {
	// Force deletion even if the Storage Credential is not empty. Default is
	// false.
	Force bool `json:"-" url:"force,omitempty"`
	// Unity Catalog metastore ID
	MetastoreId string `json:"-" url:"-"`
	// Name of the storage credential.
	StorageCredentialName string `json:"-" url:"-"`

	ForceSendFields []string `json:"-"`
}

Delete a storage credential

func (DeleteAccountStorageCredentialRequest) MarshalJSON added in v0.23.0

func (s DeleteAccountStorageCredentialRequest) MarshalJSON() ([]byte, error)

func (*DeleteAccountStorageCredentialRequest) UnmarshalJSON added in v0.23.0

func (s *DeleteAccountStorageCredentialRequest) UnmarshalJSON(b []byte) error

type DeleteAliasRequest added in v0.18.0

type DeleteAliasRequest struct {
	// The name of the alias
	Alias string `json:"-" url:"-"`
	// The three-level (fully qualified) name of the registered model
	FullName string `json:"-" url:"-"`
}

Delete a Registered Model Alias

type DeleteAliasResponse added in v0.34.0

type DeleteAliasResponse struct {
}

type DeleteCatalogRequest

type DeleteCatalogRequest struct {
	// Force deletion even if the catalog is not empty.
	Force bool `json:"-" url:"force,omitempty"`
	// The name of the catalog.
	Name string `json:"-" url:"-"`

	ForceSendFields []string `json:"-"`
}

Delete a catalog

func (DeleteCatalogRequest) MarshalJSON added in v0.23.0

func (s DeleteCatalogRequest) MarshalJSON() ([]byte, error)

func (*DeleteCatalogRequest) UnmarshalJSON added in v0.23.0

func (s *DeleteCatalogRequest) UnmarshalJSON(b []byte) error

type DeleteConnectionRequest added in v0.10.0

type DeleteConnectionRequest struct {
	// The name of the connection to be deleted.
	Name string `json:"-" url:"-"`
}

Delete a connection

type DeleteExternalLocationRequest

type DeleteExternalLocationRequest struct {
	// Force deletion even if there are dependent external tables or mounts.
	Force bool `json:"-" url:"force,omitempty"`
	// Name of the external location.
	Name string `json:"-" url:"-"`

	ForceSendFields []string `json:"-"`
}

Delete an external location

func (DeleteExternalLocationRequest) MarshalJSON added in v0.23.0

func (s DeleteExternalLocationRequest) MarshalJSON() ([]byte, error)

func (*DeleteExternalLocationRequest) UnmarshalJSON added in v0.23.0

func (s *DeleteExternalLocationRequest) UnmarshalJSON(b []byte) error

type DeleteFunctionRequest

type DeleteFunctionRequest struct {
	// Force deletion even if the function is not empty.
	Force bool `json:"-" url:"force,omitempty"`
	// The fully-qualified name of the function (of the form
	// __catalog_name__.__schema_name__.__function_name__).
	Name string `json:"-" url:"-"`

	ForceSendFields []string `json:"-"`
}

Delete a function

func (DeleteFunctionRequest) MarshalJSON added in v0.23.0

func (s DeleteFunctionRequest) MarshalJSON() ([]byte, error)

func (*DeleteFunctionRequest) UnmarshalJSON added in v0.23.0

func (s *DeleteFunctionRequest) UnmarshalJSON(b []byte) error

type DeleteMetastoreRequest

type DeleteMetastoreRequest struct {
	// Force deletion even if the metastore is not empty. Default is false.
	Force bool `json:"-" url:"force,omitempty"`
	// Unique ID of the metastore.
	Id string `json:"-" url:"-"`

	ForceSendFields []string `json:"-"`
}

Delete a metastore

func (DeleteMetastoreRequest) MarshalJSON added in v0.23.0

func (s DeleteMetastoreRequest) MarshalJSON() ([]byte, error)

func (*DeleteMetastoreRequest) UnmarshalJSON added in v0.23.0

func (s *DeleteMetastoreRequest) UnmarshalJSON(b []byte) error

type DeleteModelVersionRequest added in v0.18.0

type DeleteModelVersionRequest struct {
	// The three-level (fully qualified) name of the model version
	FullName string `json:"-" url:"-"`
	// The integer version number of the model version
	Version int `json:"-" url:"-"`
}

Delete a Model Version

type DeleteOnlineTableRequest added in v0.33.0

type DeleteOnlineTableRequest struct {
	// Full three-part (catalog, schema, table) name of the table.
	Name string `json:"-" url:"-"`
}

Delete an Online Table

type DeleteQualityMonitorRequest added in v0.41.0

type DeleteQualityMonitorRequest struct {
	// Full name of the table.
	TableName string `json:"-" url:"-"`
}

Delete a table monitor

type DeleteRegisteredModelRequest added in v0.18.0

type DeleteRegisteredModelRequest struct {
	// The three-level (fully qualified) name of the registered model
	FullName string `json:"-" url:"-"`
}

Delete a Registered Model

type DeleteResponse added in v0.34.0

type DeleteResponse struct {
}

type DeleteSchemaRequest

type DeleteSchemaRequest struct {
	// Full name of the schema.
	FullName string `json:"-" url:"-"`
}

Delete a schema

type DeleteStorageCredentialRequest

type DeleteStorageCredentialRequest struct {
	// Force deletion even if there are dependent external locations or external
	// tables.
	Force bool `json:"-" url:"force,omitempty"`
	// Name of the storage credential.
	Name string `json:"-" url:"-"`

	ForceSendFields []string `json:"-"`
}

Delete a credential

func (DeleteStorageCredentialRequest) MarshalJSON added in v0.23.0

func (s DeleteStorageCredentialRequest) MarshalJSON() ([]byte, error)

func (*DeleteStorageCredentialRequest) UnmarshalJSON added in v0.23.0

func (s *DeleteStorageCredentialRequest) UnmarshalJSON(b []byte) error

type DeleteTableConstraintRequest

type DeleteTableConstraintRequest struct {
	// If true, try deleting all child constraints of the current constraint. If
	// false, reject this operation if the current constraint has any child
	// constraints.
	Cascade bool `json:"-" url:"cascade"`
	// The name of the constraint to delete.
	ConstraintName string `json:"-" url:"constraint_name"`
	// Full name of the table referenced by the constraint.
	FullName string `json:"-" url:"-"`
}

Delete a table constraint

type DeleteTableRequest

type DeleteTableRequest struct {
	// Full name of the table.
	FullName string `json:"-" url:"-"`
}

Delete a table

type DeleteVolumeRequest

type DeleteVolumeRequest struct {
	// The three-level (fully qualified) name of the volume
	Name string `json:"-" url:"-"`
}

Delete a Volume

type DeltaRuntimePropertiesKvPairs added in v0.9.0

type DeltaRuntimePropertiesKvPairs struct {
	// A map of key-value properties attached to the securable.
	DeltaRuntimeProperties map[string]string `json:"delta_runtime_properties"`
}

Properties pertaining to the current state of the delta table as given by the commit server. This does not contain **delta.*** (input) properties in __TableInfo.properties__.

type Dependency

type Dependency struct {
	// A function that is dependent on a SQL object.
	Function *FunctionDependency `json:"function,omitempty"`
	// A table that is dependent on a SQL object.
	Table *TableDependency `json:"table,omitempty"`
}

A dependency of a SQL object. Either the __table__ field or the __function__ field must be defined.

type DependencyList added in v0.25.0

type DependencyList struct {
	// Array of dependencies.
	Dependencies []Dependency `json:"dependencies,omitempty"`
}

A list of dependencies.
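
A short literal showing the either/or rule above: each Dependency sets exactly one of its fields. The referenced table and function names are hypothetical, and the TableFullName/FunctionFullName field names are assumed from the corresponding dependency types in this package.

deps := catalog.DependencyList{
	Dependencies: []catalog.Dependency{
		{Table: &catalog.TableDependency{TableFullName: "main.default.orders"}},
		{Function: &catalog.FunctionDependency{FunctionFullName: "main.default.mask_ssn"}},
	},
}
_ = deps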

type DisableRequest added in v0.10.0

type DisableRequest struct {
	// The metastore ID under which the system schema lives.
	MetastoreId string `json:"-" url:"-"`
	// Full name of the system schema.
	SchemaName string `json:"-" url:"-"`
}

Disable a system schema

type DisableResponse added in v0.34.0

type DisableResponse struct {
}

type EffectivePermissionsList

type EffectivePermissionsList struct {
	// The privileges conveyed to each principal (either directly or via
	// inheritance)
	PrivilegeAssignments []EffectivePrivilegeAssignment `json:"privilege_assignments,omitempty"`
}

type EffectivePredictiveOptimizationFlag added in v0.17.0

type EffectivePredictiveOptimizationFlag struct {
	// The name of the object from which the flag was inherited. If there was no
	// inheritance, this field is left blank.
	InheritedFromName string `json:"inherited_from_name,omitempty"`
	// The type of the object from which the flag was inherited. If there was no
	// inheritance, this field is left blank.
	InheritedFromType EffectivePredictiveOptimizationFlagInheritedFromType `json:"inherited_from_type,omitempty"`
	// Whether predictive optimization should be enabled for this object and
	// objects under it.
	Value EnablePredictiveOptimization `json:"value"`

	ForceSendFields []string `json:"-"`
}

func (EffectivePredictiveOptimizationFlag) MarshalJSON added in v0.23.0

func (s EffectivePredictiveOptimizationFlag) MarshalJSON() ([]byte, error)

func (*EffectivePredictiveOptimizationFlag) UnmarshalJSON added in v0.23.0

func (s *EffectivePredictiveOptimizationFlag) UnmarshalJSON(b []byte) error

type EffectivePredictiveOptimizationFlagInheritedFromType added in v0.17.0

type EffectivePredictiveOptimizationFlagInheritedFromType string

The type of the object from which the flag was inherited. If there was no inheritance, this field is left blank.

const EffectivePredictiveOptimizationFlagInheritedFromTypeCatalog EffectivePredictiveOptimizationFlagInheritedFromType = `CATALOG`
const EffectivePredictiveOptimizationFlagInheritedFromTypeSchema EffectivePredictiveOptimizationFlagInheritedFromType = `SCHEMA`

func (*EffectivePredictiveOptimizationFlagInheritedFromType) Set added in v0.17.0

Set raw string value and validate it against allowed values

func (*EffectivePredictiveOptimizationFlagInheritedFromType) String added in v0.17.0

String representation for fmt.Print

func (*EffectivePredictiveOptimizationFlagInheritedFromType) Type added in v0.17.0

Type always returns EffectivePredictiveOptimizationFlagInheritedFromType to satisfy [pflag.Value] interface

type EffectivePrivilege

type EffectivePrivilege struct {
	// The full name of the object that conveys this privilege via inheritance.
	// This field is omitted when privilege is not inherited (it's assigned to
	// the securable itself).
	InheritedFromName string `json:"inherited_from_name,omitempty"`
	// The type of the object that conveys this privilege via inheritance. This
	// field is omitted when privilege is not inherited (it's assigned to the
	// securable itself).
	InheritedFromType SecurableType `json:"inherited_from_type,omitempty"`
	// The privilege assigned to the principal.
	Privilege Privilege `json:"privilege,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (EffectivePrivilege) MarshalJSON added in v0.23.0

func (s EffectivePrivilege) MarshalJSON() ([]byte, error)

func (*EffectivePrivilege) UnmarshalJSON added in v0.23.0

func (s *EffectivePrivilege) UnmarshalJSON(b []byte) error

type EffectivePrivilegeAssignment

type EffectivePrivilegeAssignment struct {
	// The principal (user email address or group name).
	Principal string `json:"principal,omitempty"`
	// The privileges conveyed to the principal (either directly or via
	// inheritance).
	Privileges []EffectivePrivilege `json:"privileges,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (EffectivePrivilegeAssignment) MarshalJSON added in v0.23.0

func (s EffectivePrivilegeAssignment) MarshalJSON() ([]byte, error)

func (*EffectivePrivilegeAssignment) UnmarshalJSON added in v0.23.0

func (s *EffectivePrivilegeAssignment) UnmarshalJSON(b []byte) error

type EnablePredictiveOptimization added in v0.17.0

type EnablePredictiveOptimization string

Whether predictive optimization should be enabled for this object and objects under it.

const EnablePredictiveOptimizationDisable EnablePredictiveOptimization = `DISABLE`
const EnablePredictiveOptimizationEnable EnablePredictiveOptimization = `ENABLE`
const EnablePredictiveOptimizationInherit EnablePredictiveOptimization = `INHERIT`

func (*EnablePredictiveOptimization) Set added in v0.17.0

Set raw string value and validate it against allowed values

func (*EnablePredictiveOptimization) String added in v0.17.0

String representation for fmt.Print

func (*EnablePredictiveOptimization) Type added in v0.17.0

Type always returns EnablePredictiveOptimization to satisfy [pflag.Value] interface

type EnableRequest added in v0.11.0

type EnableRequest struct {
	// The metastore ID under which the system schema lives.
	MetastoreId string `json:"-" url:"-"`
	// Full name of the system schema.
	SchemaName string `json:"-" url:"-"`
}

Enable a system schema

type EnableResponse added in v0.34.0

type EnableResponse struct {
}

type EncryptionDetails added in v0.14.0

type EncryptionDetails struct {
	// Server-Side Encryption properties for clients communicating with AWS s3.
	SseEncryptionDetails *SseEncryptionDetails `json:"sse_encryption_details,omitempty"`
}

Encryption options that apply to clients connecting to cloud storage.

type ExistsRequest added in v0.30.0

type ExistsRequest struct {
	// Full name of the table.
	FullName string `json:"-" url:"-"`
}

Get boolean reflecting if table exists

type ExternalLocationInfo

type ExternalLocationInfo struct {
	// The AWS access point to use when accessing S3 for this external location.
	AccessPoint string `json:"access_point,omitempty"`
	// Indicates whether the principal is limited to retrieving metadata for the
	// associated object through the BROWSE privilege when include_browse is
	// enabled in the request.
	BrowseOnly bool `json:"browse_only,omitempty"`
	// User-provided free-form text description.
	Comment string `json:"comment,omitempty"`
	// Time at which this external location was created, in epoch milliseconds.
	CreatedAt int64 `json:"created_at,omitempty"`
	// Username of external location creator.
	CreatedBy string `json:"created_by,omitempty"`
	// Unique ID of the location's storage credential.
	CredentialId string `json:"credential_id,omitempty"`
	// Name of the storage credential used with this location.
	CredentialName string `json:"credential_name,omitempty"`
	// Encryption options that apply to clients connecting to cloud storage.
	EncryptionDetails *EncryptionDetails `json:"encryption_details,omitempty"`
	// Unique identifier of metastore hosting the external location.
	MetastoreId string `json:"metastore_id,omitempty"`
	// Name of the external location.
	Name string `json:"name,omitempty"`
	// The owner of the external location.
	Owner string `json:"owner,omitempty"`
	// Indicates whether the external location is read-only.
	ReadOnly bool `json:"read_only,omitempty"`
	// Time at which this external location was last modified, in epoch
	// milliseconds.
	UpdatedAt int64 `json:"updated_at,omitempty"`
	// Username of user who last modified the external location.
	UpdatedBy string `json:"updated_by,omitempty"`
	// Path URL of the external location.
	Url string `json:"url,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ExternalLocationInfo) MarshalJSON added in v0.23.0

func (s ExternalLocationInfo) MarshalJSON() ([]byte, error)

func (*ExternalLocationInfo) UnmarshalJSON added in v0.23.0

func (s *ExternalLocationInfo) UnmarshalJSON(b []byte) error

type ExternalLocationsAPI

type ExternalLocationsAPI struct {
	// contains filtered or unexported fields
}

An external location is an object that combines a cloud storage path with a storage credential that authorizes access to the cloud storage path. Each external location is subject to Unity Catalog access-control policies that control which users and groups can access the credential. If a user does not have access to an external location in Unity Catalog, the request fails and Unity Catalog does not attempt to authenticate to your cloud tenant on the user’s behalf.

Databricks recommends using external locations rather than using storage credentials directly.

To create external locations, you must be a metastore admin or a user with the **CREATE_EXTERNAL_LOCATION** privilege.

func NewExternalLocations

func NewExternalLocations(client *client.DatabricksClient) *ExternalLocationsAPI

func (*ExternalLocationsAPI) Create

Create an external location.

Creates a new external location entry in the metastore. The caller must be a metastore admin or have the **CREATE_EXTERNAL_LOCATION** privilege on both the metastore and the associated storage credential.

Example (ExternalLocationsOnAws)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

credential, err := w.StorageCredentials.Create(ctx, catalog.CreateStorageCredential{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	AwsIamRole: &catalog.AwsIamRoleRequest{
		RoleArn: os.Getenv("TEST_METASTORE_DATA_ACCESS_ARN"),
	},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", credential)

created, err := w.ExternalLocations.Create(ctx, catalog.CreateExternalLocation{
	Name:           fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CredentialName: credential.Name,
	Url:            fmt.Sprintf("s3://%s/%s", os.Getenv("TEST_BUCKET"), fmt.Sprintf("sdk-%x", time.Now().UnixNano())),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

// cleanup

err = w.StorageCredentials.DeleteByName(ctx, credential.Name)
if err != nil {
	panic(err)
}
err = w.ExternalLocations.DeleteByName(ctx, created.Name)
if err != nil {
	panic(err)
}
Output:

Example (Volumes)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

storageCredential, err := w.StorageCredentials.Create(ctx, catalog.CreateStorageCredential{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	AwsIamRole: &catalog.AwsIamRoleRequest{
		RoleArn: os.Getenv("TEST_METASTORE_DATA_ACCESS_ARN"),
	},
	Comment: "created via SDK",
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", storageCredential)

externalLocation, err := w.ExternalLocations.Create(ctx, catalog.CreateExternalLocation{
	Name:           fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CredentialName: storageCredential.Name,
	Comment:        "created via SDK",
	Url:            "s3://" + os.Getenv("TEST_BUCKET") + "/" + fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", externalLocation)

// cleanup

err = w.StorageCredentials.DeleteByName(ctx, storageCredential.Name)
if err != nil {
	panic(err)
}
err = w.ExternalLocations.DeleteByName(ctx, externalLocation.Name)
if err != nil {
	panic(err)
}
Output:

func (*ExternalLocationsAPI) Delete

Delete an external location.

Deletes the specified external location from the metastore. The caller must be the owner of the external location.

func (*ExternalLocationsAPI) DeleteByName

func (a *ExternalLocationsAPI) DeleteByName(ctx context.Context, name string) error

Delete an external location.

Deletes the specified external location from the metastore. The caller must be the owner of the external location.

func (*ExternalLocationsAPI) Get

Get an external location.

Gets an external location from the metastore. The caller must be either a metastore admin, the owner of the external location, or a user that has some privilege on the external location.

Example (ExternalLocationsOnAws)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

credential, err := w.StorageCredentials.Create(ctx, catalog.CreateStorageCredential{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	AwsIamRole: &catalog.AwsIamRoleRequest{
		RoleArn: os.Getenv("TEST_METASTORE_DATA_ACCESS_ARN"),
	},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", credential)

created, err := w.ExternalLocations.Create(ctx, catalog.CreateExternalLocation{
	Name:           fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CredentialName: credential.Name,
	Url:            fmt.Sprintf("s3://%s/%s", os.Getenv("TEST_BUCKET"), fmt.Sprintf("sdk-%x", time.Now().UnixNano())),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

_, err = w.ExternalLocations.GetByName(ctx, created.Name)
if err != nil {
	panic(err)
}

// cleanup

err = w.StorageCredentials.DeleteByName(ctx, credential.Name)
if err != nil {
	panic(err)
}
err = w.ExternalLocations.DeleteByName(ctx, created.Name)
if err != nil {
	panic(err)
}
Output:

func (*ExternalLocationsAPI) GetByName

Get an external location.

Gets an external location from the metastore. The caller must be either a metastore admin, the owner of the external location, or a user that has some privilege on the external location.

func (*ExternalLocationsAPI) Impl

Impl returns low-level ExternalLocations API implementation. Deprecated: use MockExternalLocationsInterface instead.

func (*ExternalLocationsAPI) List added in v0.24.0

List external locations.

Gets an array of external locations (__ExternalLocationInfo__ objects) from the metastore. The caller must be a metastore admin, the owner of the external location, or a user that has some privilege on the external location. There is no guarantee of a specific ordering of the elements in the array.

This method is generated by Databricks SDK Code Generator.
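
Because List returns a [listing.Iterator], results can also be consumed page by page without materializing the whole array. A minimal sketch, assuming the iterator's HasNext/Next methods from the listing package:

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

it := w.ExternalLocations.List(ctx, catalog.ListExternalLocationsRequest{})
for it.HasNext(ctx) {
	location, err := it.Next(ctx)
	if err != nil {
		panic(err)
	}
	logger.Infof(ctx, "found %v", location.Name)
}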

func (*ExternalLocationsAPI) ListAll

List external locations.

Gets an array of external locations (__ExternalLocationInfo__ objects) from the metastore. The caller must be a metastore admin, the owner of the external location, or a user that has some privilege on the external location. There is no guarantee of a specific ordering of the elements in the array.

This method is generated by Databricks SDK Code Generator.

Example (ExternalLocationsOnAws)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

all, err := w.ExternalLocations.ListAll(ctx, catalog.ListExternalLocationsRequest{})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", all)
Output:

func (*ExternalLocationsAPI) Update

Update an external location.

Updates an external location in the metastore. The caller must be the owner of the external location, or be a metastore admin. In the second case, the admin can only update the name of the external location.

Example (ExternalLocationsOnAws)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

credential, err := w.StorageCredentials.Create(ctx, catalog.CreateStorageCredential{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	AwsIamRole: &catalog.AwsIamRoleRequest{
		RoleArn: os.Getenv("TEST_METASTORE_DATA_ACCESS_ARN"),
	},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", credential)

created, err := w.ExternalLocations.Create(ctx, catalog.CreateExternalLocation{
	Name:           fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CredentialName: credential.Name,
	Url:            fmt.Sprintf("s3://%s/%s", os.Getenv("TEST_BUCKET"), fmt.Sprintf("sdk-%x", time.Now().UnixNano())),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

_, err = w.ExternalLocations.Update(ctx, catalog.UpdateExternalLocation{
	Name:           created.Name,
	CredentialName: credential.Name,
	Url:            fmt.Sprintf("s3://%s/%s", os.Getenv("TEST_BUCKET"), fmt.Sprintf("sdk-%x", time.Now().UnixNano())),
})
if err != nil {
	panic(err)
}

// cleanup

err = w.StorageCredentials.DeleteByName(ctx, credential.Name)
if err != nil {
	panic(err)
}
err = w.ExternalLocations.DeleteByName(ctx, created.Name)
if err != nil {
	panic(err)
}
Output:

func (*ExternalLocationsAPI) WithImpl

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockExternalLocationsInterface instead.

type ExternalLocationsInterface added in v0.29.0

type ExternalLocationsInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockExternalLocationsInterface instead.
	WithImpl(impl ExternalLocationsService) ExternalLocationsInterface

	// Impl returns low-level ExternalLocations API implementation
	// Deprecated: use MockExternalLocationsInterface instead.
	Impl() ExternalLocationsService

	// Create an external location.
	//
	// Creates a new external location entry in the metastore. The caller must be a
	// metastore admin or have the **CREATE_EXTERNAL_LOCATION** privilege on both
	// the metastore and the associated storage credential.
	Create(ctx context.Context, request CreateExternalLocation) (*ExternalLocationInfo, error)

	// Delete an external location.
	//
	// Deletes the specified external location from the metastore. The caller must
	// be the owner of the external location.
	Delete(ctx context.Context, request DeleteExternalLocationRequest) error

	// Delete an external location.
	//
	// Deletes the specified external location from the metastore. The caller must
	// be the owner of the external location.
	DeleteByName(ctx context.Context, name string) error

	// Get an external location.
	//
	// Gets an external location from the metastore. The caller must be either a
	// metastore admin, the owner of the external location, or a user that has some
	// privilege on the external location.
	Get(ctx context.Context, request GetExternalLocationRequest) (*ExternalLocationInfo, error)

	// Get an external location.
	//
	// Gets an external location from the metastore. The caller must be either a
	// metastore admin, the owner of the external location, or a user that has some
	// privilege on the external location.
	GetByName(ctx context.Context, name string) (*ExternalLocationInfo, error)

	// List external locations.
	//
	// Gets an array of external locations (__ExternalLocationInfo__ objects) from
	// the metastore. The caller must be a metastore admin, the owner of the
	// external location, or a user that has some privilege on the external
	// location. There is no guarantee of a specific ordering of the elements in the
	// array.
	//
	// This method is generated by Databricks SDK Code Generator.
	List(ctx context.Context, request ListExternalLocationsRequest) listing.Iterator[ExternalLocationInfo]

	// List external locations.
	//
	// Gets an array of external locations (__ExternalLocationInfo__ objects) from
	// the metastore. The caller must be a metastore admin, the owner of the
	// external location, or a user that has some privilege on the external
	// location. There is no guarantee of a specific ordering of the elements in the
	// array.
	//
	// This method is generated by Databricks SDK Code Generator.
	ListAll(ctx context.Context, request ListExternalLocationsRequest) ([]ExternalLocationInfo, error)

	// Update an external location.
	//
	// Updates an external location in the metastore. The caller must be the owner
	// of the external location, or be a metastore admin. In the second case, the
	// admin can only update the name of the external location.
	Update(ctx context.Context, request UpdateExternalLocation) (*ExternalLocationInfo, error)
}

type ExternalLocationsService

type ExternalLocationsService interface {

	// Create an external location.
	//
	// Creates a new external location entry in the metastore. The caller must
	// be a metastore admin or have the **CREATE_EXTERNAL_LOCATION** privilege
	// on both the metastore and the associated storage credential.
	Create(ctx context.Context, request CreateExternalLocation) (*ExternalLocationInfo, error)

	// Delete an external location.
	//
	// Deletes the specified external location from the metastore. The caller
	// must be the owner of the external location.
	Delete(ctx context.Context, request DeleteExternalLocationRequest) error

	// Get an external location.
	//
	// Gets an external location from the metastore. The caller must be either a
	// metastore admin, the owner of the external location, or a user that has
	// some privilege on the external location.
	Get(ctx context.Context, request GetExternalLocationRequest) (*ExternalLocationInfo, error)

	// List external locations.
	//
	// Gets an array of external locations (__ExternalLocationInfo__ objects)
	// from the metastore. The caller must be a metastore admin, the owner of
	// the external location, or a user that has some privilege on the external
	// location. There is no guarantee of a specific ordering of the elements in
	// the array.
	//
	// Use ListAll() to get all ExternalLocationInfo instances, which will iterate over every result page.
	List(ctx context.Context, request ListExternalLocationsRequest) (*ListExternalLocationsResponse, error)

	// Update an external location.
	//
	// Updates an external location in the metastore. The caller must be the
	// owner of the external location, or be a metastore admin. In the second
	// case, the admin can only update the name of the external location.
	Update(ctx context.Context, request UpdateExternalLocation) (*ExternalLocationInfo, error)
}

An external location is an object that combines a cloud storage path with a storage credential that authorizes access to the cloud storage path. Each external location is subject to Unity Catalog access-control policies that control which users and groups can access the credential. If a user does not have access to an external location in Unity Catalog, the request fails and Unity Catalog does not attempt to authenticate to your cloud tenant on the user’s behalf.

Databricks recommends using external locations rather than using storage credentials directly.

To create external locations, you must be a metastore admin or a user with the **CREATE_EXTERNAL_LOCATION** privilege.

type FailedStatus added in v0.33.0

type FailedStatus struct {
	// The last source table Delta version that was synced to the online table.
	// Note that this Delta version may only be partially synced to the online
	// table. Only populated if the table is still online and available for
	// serving.
	LastProcessedCommitVersion int64 `json:"last_processed_commit_version,omitempty"`
	// The timestamp of the last time any data was synchronized from the source
	// table to the online table. Only populated if the table is still online
	// and available for serving.
	Timestamp string `json:"timestamp,omitempty"`

	ForceSendFields []string `json:"-"`
}

Detailed status of an online table. Shown if the online table is in the OFFLINE_FAILED or the ONLINE_PIPELINE_FAILED state.

func (FailedStatus) MarshalJSON added in v0.33.0

func (s FailedStatus) MarshalJSON() ([]byte, error)

func (*FailedStatus) UnmarshalJSON added in v0.33.0

func (s *FailedStatus) UnmarshalJSON(b []byte) error

type ForeignKeyConstraint

type ForeignKeyConstraint struct {
	// Column names for this constraint.
	ChildColumns []string `json:"child_columns"`
	// The name of the constraint.
	Name string `json:"name"`
	// Column names for this constraint.
	ParentColumns []string `json:"parent_columns"`
	// The full name of the parent table.
	ParentTable string `json:"parent_table"`
}
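
A foreign key is typically attached to a table through the table-constraints API. A hedged sketch, assuming a CreateTableConstraint request that wraps a TableConstraint as elsewhere in this package (the table and column names are made up):

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

constraint, err := w.TableConstraints.Create(ctx, catalog.CreateTableConstraint{
	FullNameArg: "main.default.orders",
	Constraint: catalog.TableConstraint{
		ForeignKeyConstraint: &catalog.ForeignKeyConstraint{
			Name:          "orders_customer_fk",
			ChildColumns:  []string{"customer_id"},
			ParentTable:   "main.default.customers",
			ParentColumns: []string{"id"},
		},
	},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", constraint)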

type FunctionDependency

type FunctionDependency struct {
	// Full name of the dependent function, in the form of
	// __catalog_name__.__schema_name__.__function_name__.
	FunctionFullName string `json:"function_full_name"`
}

A function that is dependent on a SQL object.

type FunctionInfo

type FunctionInfo struct {
	// Indicates whether the principal is limited to retrieving metadata for the
	// associated object through the BROWSE privilege when include_browse is
	// enabled in the request.
	BrowseOnly bool `json:"browse_only,omitempty"`
	// Name of parent catalog.
	CatalogName string `json:"catalog_name,omitempty"`
	// User-provided free-form text description.
	Comment string `json:"comment,omitempty"`
	// Time at which this function was created, in epoch milliseconds.
	CreatedAt int64 `json:"created_at,omitempty"`
	// Username of function creator.
	CreatedBy string `json:"created_by,omitempty"`
	// Scalar function return data type.
	DataType ColumnTypeName `json:"data_type,omitempty"`
	// External function language.
	ExternalLanguage string `json:"external_language,omitempty"`
	// External function name.
	ExternalName string `json:"external_name,omitempty"`
	// Pretty printed function data type.
	FullDataType string `json:"full_data_type,omitempty"`
	// Full name of function, in form of
	// __catalog_name__.__schema_name__.__function_name__
	FullName string `json:"full_name,omitempty"`
	// Id of Function, relative to parent schema.
	FunctionId string `json:"function_id,omitempty"`

	InputParams *FunctionParameterInfos `json:"input_params,omitempty"`
	// Whether the function is deterministic.
	IsDeterministic bool `json:"is_deterministic,omitempty"`
	// Function null call.
	IsNullCall bool `json:"is_null_call,omitempty"`
	// Unique identifier of parent metastore.
	MetastoreId string `json:"metastore_id,omitempty"`
	// Name of function, relative to parent schema.
	Name string `json:"name,omitempty"`
	// Username of current owner of function.
	Owner string `json:"owner,omitempty"`
	// Function parameter style. **S** is the value for SQL.
	ParameterStyle FunctionInfoParameterStyle `json:"parameter_style,omitempty"`
	// JSON-serialized key-value pair map, encoded (escaped) as a string.
	Properties string `json:"properties,omitempty"`
	// Table function return parameters.
	ReturnParams *FunctionParameterInfos `json:"return_params,omitempty"`
	// Function language. When **EXTERNAL** is used, the language of the routine
	// function should be specified in the __external_language__ field, and the
	// __return_params__ of the function cannot be used (as **TABLE** return
	// type is not supported), and the __sql_data_access__ field must be
	// **NO_SQL**.
	RoutineBody FunctionInfoRoutineBody `json:"routine_body,omitempty"`
	// Function body.
	RoutineDefinition string `json:"routine_definition,omitempty"`
	// Function dependencies.
	RoutineDependencies *DependencyList `json:"routine_dependencies,omitempty"`
	// Name of parent schema relative to its parent catalog.
	SchemaName string `json:"schema_name,omitempty"`
	// Function security type.
	SecurityType FunctionInfoSecurityType `json:"security_type,omitempty"`
	// Specific name of the function; Reserved for future use.
	SpecificName string `json:"specific_name,omitempty"`
	// Function SQL data access.
	SqlDataAccess FunctionInfoSqlDataAccess `json:"sql_data_access,omitempty"`
	// List of schemas whose objects can be referenced without qualification.
	SqlPath string `json:"sql_path,omitempty"`
	// Time at which this function was last modified, in epoch milliseconds.
	UpdatedAt int64 `json:"updated_at,omitempty"`
	// Username of user who last modified function.
	UpdatedBy string `json:"updated_by,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (FunctionInfo) MarshalJSON added in v0.23.0

func (s FunctionInfo) MarshalJSON() ([]byte, error)

func (*FunctionInfo) UnmarshalJSON added in v0.23.0

func (s *FunctionInfo) UnmarshalJSON(b []byte) error

type FunctionInfoParameterStyle

type FunctionInfoParameterStyle string

Function parameter style. **S** is the value for SQL.

const FunctionInfoParameterStyleS FunctionInfoParameterStyle = `S`

func (*FunctionInfoParameterStyle) Set

Set raw string value and validate it against allowed values

func (*FunctionInfoParameterStyle) String

func (f *FunctionInfoParameterStyle) String() string

String representation for fmt.Print

func (*FunctionInfoParameterStyle) Type

Type always returns FunctionInfoParameterStyle to satisfy [pflag.Value] interface

type FunctionInfoRoutineBody

type FunctionInfoRoutineBody string

Function language. When **EXTERNAL** is used, the language of the routine function should be specified in the __external_language__ field, and the __return_params__ of the function cannot be used (as **TABLE** return type is not supported), and the __sql_data_access__ field must be **NO_SQL**.

const FunctionInfoRoutineBodyExternal FunctionInfoRoutineBody = `EXTERNAL`
const FunctionInfoRoutineBodySql FunctionInfoRoutineBody = `SQL`

func (*FunctionInfoRoutineBody) Set

Set raw string value and validate it against allowed values

func (*FunctionInfoRoutineBody) String

func (f *FunctionInfoRoutineBody) String() string

String representation for fmt.Print

func (*FunctionInfoRoutineBody) Type

func (f *FunctionInfoRoutineBody) Type() string

Type always returns FunctionInfoRoutineBody to satisfy [pflag.Value] interface

type FunctionInfoSecurityType

type FunctionInfoSecurityType string

Function security type.

const FunctionInfoSecurityTypeDefiner FunctionInfoSecurityType = `DEFINER`

func (*FunctionInfoSecurityType) Set

Set raw string value and validate it against allowed values

func (*FunctionInfoSecurityType) String

func (f *FunctionInfoSecurityType) String() string

String representation for fmt.Print

func (*FunctionInfoSecurityType) Type

func (f *FunctionInfoSecurityType) Type() string

Type always returns FunctionInfoSecurityType to satisfy [pflag.Value] interface

type FunctionInfoSqlDataAccess

type FunctionInfoSqlDataAccess string

Function SQL data access.

const FunctionInfoSqlDataAccessContainsSql FunctionInfoSqlDataAccess = `CONTAINS_SQL`
const FunctionInfoSqlDataAccessNoSql FunctionInfoSqlDataAccess = `NO_SQL`
const FunctionInfoSqlDataAccessReadsSqlData FunctionInfoSqlDataAccess = `READS_SQL_DATA`

func (*FunctionInfoSqlDataAccess) Set

Set raw string value and validate it against allowed values

func (*FunctionInfoSqlDataAccess) String

func (f *FunctionInfoSqlDataAccess) String() string

String representation for fmt.Print

func (*FunctionInfoSqlDataAccess) Type

Type always returns FunctionInfoSqlDataAccess to satisfy [pflag.Value] interface

type FunctionParameterInfo

type FunctionParameterInfo struct {
	// User-provided free-form text description.
	Comment string `json:"comment,omitempty"`
	// Name of parameter.
	Name string `json:"name"`
	// Default value of the parameter.
	ParameterDefault string `json:"parameter_default,omitempty"`
	// The mode of the function parameter.
	ParameterMode FunctionParameterMode `json:"parameter_mode,omitempty"`
	// The type of function parameter.
	ParameterType FunctionParameterType `json:"parameter_type,omitempty"`
	// Ordinal position of column (starting at position 0).
	Position int `json:"position"`
	// Format of IntervalType.
	TypeIntervalType string `json:"type_interval_type,omitempty"`
	// Full data type spec, JSON-serialized.
	TypeJson string `json:"type_json,omitempty"`
	// Name of type (INT, STRUCT, MAP, etc.).
	TypeName ColumnTypeName `json:"type_name"`
	// Digits of precision; required on Create for DecimalTypes.
	TypePrecision int `json:"type_precision,omitempty"`
	// Digits to right of decimal; Required on Create for DecimalTypes.
	TypeScale int `json:"type_scale,omitempty"`
	// Full data type spec, SQL/catalogString text.
	TypeText string `json:"type_text"`

	ForceSendFields []string `json:"-"`
}

func (FunctionParameterInfo) MarshalJSON added in v0.23.0

func (s FunctionParameterInfo) MarshalJSON() ([]byte, error)

func (*FunctionParameterInfo) UnmarshalJSON added in v0.23.0

func (s *FunctionParameterInfo) UnmarshalJSON(b []byte) error

type FunctionParameterInfos added in v0.25.0

type FunctionParameterInfos struct {
	// The array of __FunctionParameterInfo__ definitions of the function's
	// parameters.
	Parameters []FunctionParameterInfo `json:"parameters,omitempty"`
}

type FunctionParameterMode

type FunctionParameterMode string

The mode of the function parameter.

const FunctionParameterModeIn FunctionParameterMode = `IN`

func (*FunctionParameterMode) Set

Set raw string value and validate it against allowed values

func (*FunctionParameterMode) String

func (f *FunctionParameterMode) String() string

String representation for fmt.Print

func (*FunctionParameterMode) Type

func (f *FunctionParameterMode) Type() string

Type always returns FunctionParameterMode to satisfy [pflag.Value] interface

type FunctionParameterType

type FunctionParameterType string

The type of function parameter.

const FunctionParameterTypeColumn FunctionParameterType = `COLUMN`
const FunctionParameterTypeParam FunctionParameterType = `PARAM`

func (*FunctionParameterType) Set

Set raw string value and validate it against allowed values

func (*FunctionParameterType) String

func (f *FunctionParameterType) String() string

String representation for fmt.Print

func (*FunctionParameterType) Type

func (f *FunctionParameterType) Type() string

Type always returns FunctionParameterType to satisfy [pflag.Value] interface

type FunctionsAPI

type FunctionsAPI struct {
	// contains filtered or unexported fields
}

Functions implement User-Defined Functions (UDFs) in Unity Catalog.

The function implementation can be any SQL expression or Query, and it can be invoked wherever a table reference is allowed in a query. In Unity Catalog, a function resides at the same level as a table, so it can be referenced with the form __catalog_name__.__schema_name__.__function_name__.
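
For example, the functions in a given schema can be enumerated and then referenced by their three-level names. A minimal sketch (the catalog and schema names are illustrative):

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

all, err := w.Functions.ListAll(ctx, catalog.ListFunctionsRequest{
	CatalogName: "main",
	SchemaName:  "default",
})
if err != nil {
	panic(err)
}
for _, fn := range all {
	logger.Infof(ctx, "found %v", fn.FullName)
}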

func NewFunctions

func NewFunctions(client *client.DatabricksClient) *FunctionsAPI

func (*FunctionsAPI) Create

Create a function.

Creates a new function

The user must have the following permissions in order for the function to be created: - **USE_CATALOG** on the function's parent catalog - **USE_SCHEMA** and **CREATE_FUNCTION** on the function's parent schema

func (*FunctionsAPI) Delete

func (a *FunctionsAPI) Delete(ctx context.Context, request DeleteFunctionRequest) error

Delete a function.

Deletes the function that matches the supplied name. For the deletion to succeed, the user must satisfy one of the following conditions: - Is the owner of the function's parent catalog - Is the owner of the function's parent schema and have the **USE_CATALOG** privilege on its parent catalog - Is the owner of the function itself and have both the **USE_CATALOG** privilege on its parent catalog and the **USE_SCHEMA** privilege on its parent schema

func (*FunctionsAPI) DeleteByName

func (a *FunctionsAPI) DeleteByName(ctx context.Context, name string) error

Delete a function.

Deletes the function that matches the supplied name. For the deletion to succeed, the user must satisfy one of the following conditions: - Is the owner of the function's parent catalog - Is the owner of the function's parent schema and have the **USE_CATALOG** privilege on its parent catalog - Is the owner of the function itself and have both the **USE_CATALOG** privilege on its parent catalog and the **USE_SCHEMA** privilege on its parent schema

func (*FunctionsAPI) FunctionInfoNameToFullNameMap added in v0.10.0

func (a *FunctionsAPI) FunctionInfoNameToFullNameMap(ctx context.Context, request ListFunctionsRequest) (map[string]string, error)

FunctionInfoNameToFullNameMap calls FunctionsAPI.ListAll and creates a map of results with FunctionInfo.Name as key and FunctionInfo.FullName as value.

Returns an error if there's more than one FunctionInfo with the same .Name.

Note: All FunctionInfo instances are loaded into memory before creating a map.

This method is generated by Databricks SDK Code Generator.
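
A minimal sketch of resolving a short function name to its full three-level name (the catalog, schema, and function names are illustrative):

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

nameToFullName, err := w.Functions.FunctionInfoNameToFullNameMap(ctx, catalog.ListFunctionsRequest{
	CatalogName: "main",
	SchemaName:  "default",
})
if err != nil {
	panic(err)
}
fullName, ok := nameToFullName["my_udf"]
if !ok {
	panic("function not found")
}
logger.Infof(ctx, "found %v", fullName)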

func (*FunctionsAPI) Get

Get a function.

Gets a function from within a parent catalog and schema. For the fetch to succeed, the user must satisfy one of the following requirements: - Is a metastore admin - Is an owner of the function's parent catalog - Have the **USE_CATALOG** privilege on the function's parent catalog and be the owner of the function - Have the **USE_CATALOG** privilege on the function's parent catalog, the **USE_SCHEMA** privilege on the function's parent schema, and the **EXECUTE** privilege on the function itself

func (*FunctionsAPI) GetByName

func (a *FunctionsAPI) GetByName(ctx context.Context, name string) (*FunctionInfo, error)

Get a function.

Gets a function from within a parent catalog and schema. For the fetch to succeed, the user must satisfy one of the following requirements: - Is a metastore admin - Is an owner of the function's parent catalog - Have the **USE_CATALOG** privilege on the function's parent catalog and be the owner of the function - Have the **USE_CATALOG** privilege on the function's parent catalog, the **USE_SCHEMA** privilege on the function's parent schema, and the **EXECUTE** privilege on the function itself

func (*FunctionsAPI) Impl

func (a *FunctionsAPI) Impl() FunctionsService

Impl returns low-level Functions API implementation. Deprecated: use MockFunctionsInterface instead.

func (*FunctionsAPI) List

List functions.

List functions within the specified parent catalog and schema. If the user is a metastore admin, all functions are returned in the output list. Otherwise, the user must have the **USE_CATALOG** privilege on the catalog and the **USE_SCHEMA** privilege on the schema, and the output list contains only functions for which either the user has the **EXECUTE** privilege or the user is the owner. There is no guarantee of a specific ordering of the elements in the array.

This method is generated by Databricks SDK Code Generator.

func (*FunctionsAPI) ListAll added in v0.10.0

func (a *FunctionsAPI) ListAll(ctx context.Context, request ListFunctionsRequest) ([]FunctionInfo, error)

List functions.

List functions within the specified parent catalog and schema. If the user is a metastore admin, all functions are returned in the output list. Otherwise, the user must have the **USE_CATALOG** privilege on the catalog and the **USE_SCHEMA** privilege on the schema, and the output list contains only functions for which either the user has the **EXECUTE** privilege or the user is the owner. There is no guarantee of a specific ordering of the elements in the array.

This method is generated by Databricks SDK Code Generator.

func (*FunctionsAPI) Update

func (a *FunctionsAPI) Update(ctx context.Context, request UpdateFunction) (*FunctionInfo, error)

Update a function.

Updates the function that matches the supplied name. Only the owner of the function can be updated. If the user is not a metastore admin, the user must be a member of the group that is the new function owner. In addition, the caller must satisfy one of the following conditions: - Is a metastore admin - Is the owner of the function's parent catalog - Is the owner of the function's parent schema and has the **USE_CATALOG** privilege on its parent catalog - Is the owner of the function itself and has the **USE_CATALOG** privilege on its parent catalog as well as the **USE_SCHEMA** privilege on the function's parent schema.
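
A minimal sketch of transferring ownership of a function (the function name and new owner are illustrative):

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

updated, err := w.Functions.Update(ctx, catalog.UpdateFunction{
	Name:  "main.default.my_udf",
	Owner: "data-engineering",
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", updated)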

func (*FunctionsAPI) WithImpl

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockFunctionsInterface instead.

type FunctionsInterface added in v0.29.0

type FunctionsInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockFunctionsInterface instead.
	WithImpl(impl FunctionsService) FunctionsInterface

	// Impl returns low-level Functions API implementation
	// Deprecated: use MockFunctionsInterface instead.
	Impl() FunctionsService

	// Create a function.
	//
	// Creates a new function
	//
	// The user must have the following permissions in order for the function to be
	// created: - **USE_CATALOG** on the function's parent catalog - **USE_SCHEMA**
	// and **CREATE_FUNCTION** on the function's parent schema
	Create(ctx context.Context, request CreateFunctionRequest) (*FunctionInfo, error)

	// Delete a function.
	//
	// Deletes the function that matches the supplied name. For the deletion to
	// succeed, the user must satisfy one of the following conditions: - Is the
	// owner of the function's parent catalog - Is the owner of the function's
	// parent schema and have the **USE_CATALOG** privilege on its parent catalog -
	// Is the owner of the function itself and have both the **USE_CATALOG**
	// privilege on its parent catalog and the **USE_SCHEMA** privilege on its
	// parent schema
	Delete(ctx context.Context, request DeleteFunctionRequest) error

	// Delete a function.
	//
	// Deletes the function that matches the supplied name. For the deletion to
	// succeed, the user must satisfy one of the following conditions: - Is the
	// owner of the function's parent catalog - Is the owner of the function's
	// parent schema and have the **USE_CATALOG** privilege on its parent catalog -
	// Is the owner of the function itself and have both the **USE_CATALOG**
	// privilege on its parent catalog and the **USE_SCHEMA** privilege on its
	// parent schema
	DeleteByName(ctx context.Context, name string) error

	// Get a function.
	//
	// Gets a function from within a parent catalog and schema. For the fetch to
	// succeed, the user must satisfy one of the following requirements: - Is a
	// metastore admin - Is an owner of the function's parent catalog - Have the
	// **USE_CATALOG** privilege on the function's parent catalog and be the owner
	// of the function - Have the **USE_CATALOG** privilege on the function's parent
	// catalog, the **USE_SCHEMA** privilege on the function's parent schema, and
	// the **EXECUTE** privilege on the function itself
	Get(ctx context.Context, request GetFunctionRequest) (*FunctionInfo, error)

	// Get a function.
	//
	// Gets a function from within a parent catalog and schema. For the fetch to
	// succeed, the user must satisfy one of the following requirements: - Is a
	// metastore admin - Is an owner of the function's parent catalog - Have the
	// **USE_CATALOG** privilege on the function's parent catalog and be the owner
	// of the function - Have the **USE_CATALOG** privilege on the function's parent
	// catalog, the **USE_SCHEMA** privilege on the function's parent schema, and
	// the **EXECUTE** privilege on the function itself
	GetByName(ctx context.Context, name string) (*FunctionInfo, error)

	// List functions.
	//
	// List functions within the specified parent catalog and schema. If the user is
	// a metastore admin, all functions are returned in the output list. Otherwise,
	// the user must have the **USE_CATALOG** privilege on the catalog and the
	// **USE_SCHEMA** privilege on the schema, and the output list contains only
	// functions for which either the user has the **EXECUTE** privilege or the user
	// is the owner. There is no guarantee of a specific ordering of the elements in
	// the array.
	//
	// This method is generated by Databricks SDK Code Generator.
	List(ctx context.Context, request ListFunctionsRequest) listing.Iterator[FunctionInfo]

	// List functions.
	//
	// List functions within the specified parent catalog and schema. If the user is
	// a metastore admin, all functions are returned in the output list. Otherwise,
	// the user must have the **USE_CATALOG** privilege on the catalog and the
	// **USE_SCHEMA** privilege on the schema, and the output list contains only
	// functions for which either the user has the **EXECUTE** privilege or the user
	// is the owner. There is no guarantee of a specific ordering of the elements in
	// the array.
	//
	// This method is generated by Databricks SDK Code Generator.
	ListAll(ctx context.Context, request ListFunctionsRequest) ([]FunctionInfo, error)

	// FunctionInfoNameToFullNameMap calls [FunctionsAPI.ListAll] and creates a map of results with [FunctionInfo].Name as key and [FunctionInfo].FullName as value.
	//
	// Returns an error if there's more than one [FunctionInfo] with the same .Name.
	//
	// Note: All [FunctionInfo] instances are loaded into memory before creating a map.
	//
	// This method is generated by Databricks SDK Code Generator.
	FunctionInfoNameToFullNameMap(ctx context.Context, request ListFunctionsRequest) (map[string]string, error)

	// Update a function.
	//
	// Updates the function that matches the supplied name. Only the owner of the
	// function can be updated. If the user is not a metastore admin, the user must
	// be a member of the group that is the new function owner. - Is a metastore
	// admin - Is the owner of the function's parent catalog - Is the owner of the
	// function's parent schema and has the **USE_CATALOG** privilege on its parent
	// catalog - Is the owner of the function itself and has the **USE_CATALOG**
	// privilege on its parent catalog as well as the **USE_SCHEMA** privilege on
	// the function's parent schema.
	Update(ctx context.Context, request UpdateFunction) (*FunctionInfo, error)
}

type FunctionsService

type FunctionsService interface {

	// Create a function.
	//
	// Creates a new function
	//
	// The user must have the following permissions in order for the function to
	// be created: - **USE_CATALOG** on the function's parent catalog -
	// **USE_SCHEMA** and **CREATE_FUNCTION** on the function's parent schema
	Create(ctx context.Context, request CreateFunctionRequest) (*FunctionInfo, error)

	// Delete a function.
	//
	// Deletes the function that matches the supplied name. For the deletion to
	// succeed, the user must satisfy one of the following conditions: - Is the
	// owner of the function's parent catalog - Is the owner of the function's
	// parent schema and have the **USE_CATALOG** privilege on its parent
	// catalog - Is the owner of the function itself and have both the
	// **USE_CATALOG** privilege on its parent catalog and the **USE_SCHEMA**
	// privilege on its parent schema
	Delete(ctx context.Context, request DeleteFunctionRequest) error

	// Get a function.
	//
	// Gets a function from within a parent catalog and schema. For the fetch to
	// succeed, the user must satisfy one of the following requirements: - Is a
	// metastore admin - Is an owner of the function's parent catalog - Have the
	// **USE_CATALOG** privilege on the function's parent catalog and be the
	// owner of the function - Have the **USE_CATALOG** privilege on the
	// function's parent catalog, the **USE_SCHEMA** privilege on the function's
	// parent schema, and the **EXECUTE** privilege on the function itself
	Get(ctx context.Context, request GetFunctionRequest) (*FunctionInfo, error)

	// List functions.
	//
	// List functions within the specified parent catalog and schema. If the
	// user is a metastore admin, all functions are returned in the output list.
	// Otherwise, the user must have the **USE_CATALOG** privilege on the
	// catalog and the **USE_SCHEMA** privilege on the schema, and the output
	// list contains only functions for which either the user has the
	// **EXECUTE** privilege or the user is the owner. There is no guarantee of
	// a specific ordering of the elements in the array.
	//
	// Use ListAll() to get all FunctionInfo instances, which will iterate over every result page.
	List(ctx context.Context, request ListFunctionsRequest) (*ListFunctionsResponse, error)

	// Update a function.
	//
	// Updates the function that matches the supplied name. Only the owner of
	// the function can be updated. If the user is not a metastore admin, the
	// user must be a member of the group that is the new function owner. - Is a
	// metastore admin - Is the owner of the function's parent catalog - Is the
	// owner of the function's parent schema and has the **USE_CATALOG**
	// privilege on its parent catalog - Is the owner of the function itself and
	// has the **USE_CATALOG** privilege on its parent catalog as well as the
	// **USE_SCHEMA** privilege on the function's parent schema.
	Update(ctx context.Context, request UpdateFunction) (*FunctionInfo, error)
}

Functions implement User-Defined Functions (UDFs) in Unity Catalog.

The function implementation can be any SQL expression or Query, and it can be invoked wherever a table reference is allowed in a query. In Unity Catalog, a function resides at the same level as a table, so it can be referenced with the form __catalog_name__.__schema_name__.__function_name__.

type GetAccountMetastoreAssignmentRequest

type GetAccountMetastoreAssignmentRequest struct {
	// Workspace ID.
	WorkspaceId int64 `json:"-" url:"-"`
}

Gets the metastore assignment for a workspace

type GetAccountMetastoreRequest

type GetAccountMetastoreRequest struct {
	// Unity Catalog metastore ID
	MetastoreId string `json:"-" url:"-"`
}

Get a metastore

type GetAccountStorageCredentialRequest

type GetAccountStorageCredentialRequest struct {
	// Unity Catalog metastore ID
	MetastoreId string `json:"-" url:"-"`
	// Name of the storage credential.
	StorageCredentialName string `json:"-" url:"-"`
}

Gets the named storage credential

type GetArtifactAllowlistRequest added in v0.17.0

type GetArtifactAllowlistRequest struct {
	// The artifact type of the allowlist.
	ArtifactType ArtifactType `json:"-" url:"-"`
}

Get an artifact allowlist

type GetBindingsRequest added in v0.23.0

type GetBindingsRequest struct {
	// The name of the securable.
	SecurableName string `json:"-" url:"-"`
	// The type of the securable.
	SecurableType string `json:"-" url:"-"`
}

Get securable workspace bindings

type GetByAliasRequest added in v0.18.0

type GetByAliasRequest struct {
	// The name of the alias
	Alias string `json:"-" url:"-"`
	// The three-level (fully qualified) name of the registered model
	FullName string `json:"-" url:"-"`
}

Get Model Version By Alias

type GetCatalogRequest

type GetCatalogRequest struct {
	// Whether to include catalogs in the response for which the principal can
	// only access selective metadata.
	IncludeBrowse bool `json:"-" url:"include_browse,omitempty"`
	// The name of the catalog.
	Name string `json:"-" url:"-"`

	ForceSendFields []string `json:"-"`
}

Get a catalog

func (GetCatalogRequest) MarshalJSON added in v0.35.0

func (s GetCatalogRequest) MarshalJSON() ([]byte, error)

func (*GetCatalogRequest) UnmarshalJSON added in v0.35.0

func (s *GetCatalogRequest) UnmarshalJSON(b []byte) error

type GetConnectionRequest added in v0.10.0

type GetConnectionRequest struct {
	// Name of the connection.
	Name string `json:"-" url:"-"`
}

Get a connection

type GetEffectiveRequest

type GetEffectiveRequest struct {
	// Full name of securable.
	FullName string `json:"-" url:"-"`
	// If provided, only the effective permissions for the specified principal
	// (user or group) are returned.
	Principal string `json:"-" url:"principal,omitempty"`
	// Type of securable.
	SecurableType SecurableType `json:"-" url:"-"`

	ForceSendFields []string `json:"-"`
}

Get effective permissions

func (GetEffectiveRequest) MarshalJSON added in v0.23.0

func (s GetEffectiveRequest) MarshalJSON() ([]byte, error)

func (*GetEffectiveRequest) UnmarshalJSON added in v0.23.0

func (s *GetEffectiveRequest) UnmarshalJSON(b []byte) error

type GetExternalLocationRequest

type GetExternalLocationRequest struct {
	// Whether to include external locations in the response for which the
	// principal can only access selective metadata.
	IncludeBrowse bool `json:"-" url:"include_browse,omitempty"`
	// Name of the external location.
	Name string `json:"-" url:"-"`

	ForceSendFields []string `json:"-"`
}

Get an external location

func (GetExternalLocationRequest) MarshalJSON added in v0.35.0

func (s GetExternalLocationRequest) MarshalJSON() ([]byte, error)

func (*GetExternalLocationRequest) UnmarshalJSON added in v0.35.0

func (s *GetExternalLocationRequest) UnmarshalJSON(b []byte) error

type GetFunctionRequest

type GetFunctionRequest struct {
	// Whether to include functions in the response for which the principal can
	// only access selective metadata.
	IncludeBrowse bool `json:"-" url:"include_browse,omitempty"`
	// The fully-qualified name of the function (of the form
	// __catalog_name__.__schema_name__.__function_name__).
	Name string `json:"-" url:"-"`

	ForceSendFields []string `json:"-"`
}

Get a function

func (GetFunctionRequest) MarshalJSON added in v0.35.0

func (s GetFunctionRequest) MarshalJSON() ([]byte, error)

func (*GetFunctionRequest) UnmarshalJSON added in v0.35.0

func (s *GetFunctionRequest) UnmarshalJSON(b []byte) error

type GetGrantRequest

type GetGrantRequest struct {
	// Full name of securable.
	FullName string `json:"-" url:"-"`
	// If provided, only the permissions for the specified principal (user or
	// group) are returned.
	Principal string `json:"-" url:"principal,omitempty"`
	// Type of securable.
	SecurableType SecurableType `json:"-" url:"-"`

	ForceSendFields []string `json:"-"`
}

Get permissions

func (GetGrantRequest) MarshalJSON added in v0.23.0

func (s GetGrantRequest) MarshalJSON() ([]byte, error)

func (*GetGrantRequest) UnmarshalJSON added in v0.23.0

func (s *GetGrantRequest) UnmarshalJSON(b []byte) error

type GetMetastoreRequest

type GetMetastoreRequest struct {
	// Unique ID of the metastore.
	Id string `json:"-" url:"-"`
}

Get a metastore

type GetMetastoreSummaryResponse

type GetMetastoreSummaryResponse struct {
	// Cloud vendor of the metastore home shard (e.g., `aws`, `azure`, `gcp`).
	Cloud string `json:"cloud,omitempty"`
	// Time at which this metastore was created, in epoch milliseconds.
	CreatedAt int64 `json:"created_at,omitempty"`
	// Username of metastore creator.
	CreatedBy string `json:"created_by,omitempty"`
	// Unique identifier of the metastore's (Default) Data Access Configuration.
	DefaultDataAccessConfigId string `json:"default_data_access_config_id,omitempty"`
	// The organization name of a Delta Sharing entity, to be used in
	// Databricks-to-Databricks Delta Sharing as the official name.
	DeltaSharingOrganizationName string `json:"delta_sharing_organization_name,omitempty"`
	// The lifetime of the Delta Sharing recipient token, in seconds.
	DeltaSharingRecipientTokenLifetimeInSeconds int64 `json:"delta_sharing_recipient_token_lifetime_in_seconds,omitempty"`
	// The scope of Delta Sharing enabled for the metastore.
	DeltaSharingScope GetMetastoreSummaryResponseDeltaSharingScope `json:"delta_sharing_scope,omitempty"`
	// Globally unique metastore ID across clouds and regions, of the form
	// `cloud:region:metastore_id`.
	GlobalMetastoreId string `json:"global_metastore_id,omitempty"`
	// Unique identifier of metastore.
	MetastoreId string `json:"metastore_id,omitempty"`
	// The user-specified name of the metastore.
	Name string `json:"name,omitempty"`
	// The owner of the metastore.
	Owner string `json:"owner,omitempty"`
	// Privilege model version of the metastore, of the form `major.minor`
	// (e.g., `1.0`).
	PrivilegeModelVersion string `json:"privilege_model_version,omitempty"`
	// Cloud region which the metastore serves (e.g., `us-west-2`, `westus`).
	Region string `json:"region,omitempty"`
	// The storage root URL for the metastore.
	StorageRoot string `json:"storage_root,omitempty"`
	// UUID of storage credential to access the metastore storage_root.
	StorageRootCredentialId string `json:"storage_root_credential_id,omitempty"`
	// Name of the storage credential to access the metastore storage_root.
	StorageRootCredentialName string `json:"storage_root_credential_name,omitempty"`
	// Time at which the metastore was last modified, in epoch milliseconds.
	UpdatedAt int64 `json:"updated_at,omitempty"`
	// Username of user who last modified the metastore.
	UpdatedBy string `json:"updated_by,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (GetMetastoreSummaryResponse) MarshalJSON added in v0.23.0

func (s GetMetastoreSummaryResponse) MarshalJSON() ([]byte, error)

func (*GetMetastoreSummaryResponse) UnmarshalJSON added in v0.23.0

func (s *GetMetastoreSummaryResponse) UnmarshalJSON(b []byte) error

type GetMetastoreSummaryResponseDeltaSharingScope

type GetMetastoreSummaryResponseDeltaSharingScope string

The scope of Delta Sharing enabled for the metastore.

const GetMetastoreSummaryResponseDeltaSharingScopeInternal GetMetastoreSummaryResponseDeltaSharingScope = `INTERNAL`
const GetMetastoreSummaryResponseDeltaSharingScopeInternalAndExternal GetMetastoreSummaryResponseDeltaSharingScope = `INTERNAL_AND_EXTERNAL`

func (*GetMetastoreSummaryResponseDeltaSharingScope) Set

Set raw string value and validate it against allowed values

func (*GetMetastoreSummaryResponseDeltaSharingScope) String

String representation for fmt.Print

func (*GetMetastoreSummaryResponseDeltaSharingScope) Type

Type always returns GetMetastoreSummaryResponseDeltaSharingScope to satisfy [pflag.Value] interface

type GetModelVersionRequest added in v0.18.0

type GetModelVersionRequest struct {
	// The three-level (fully qualified) name of the model version
	FullName string `json:"-" url:"-"`
	// Whether to include model versions in the response for which the principal
	// can only access selective metadata.
	IncludeBrowse bool `json:"-" url:"include_browse,omitempty"`
	// The integer version number of the model version
	Version int `json:"-" url:"-"`

	ForceSendFields []string `json:"-"`
}

Get a Model Version

func (GetModelVersionRequest) MarshalJSON added in v0.35.0

func (s GetModelVersionRequest) MarshalJSON() ([]byte, error)

func (*GetModelVersionRequest) UnmarshalJSON added in v0.35.0

func (s *GetModelVersionRequest) UnmarshalJSON(b []byte) error

type GetOnlineTableRequest added in v0.33.0

type GetOnlineTableRequest struct {
	// Full three-part (catalog, schema, table) name of the table.
	Name string `json:"-" url:"-"`
}

Get an Online Table

type GetQualityMonitorRequest added in v0.41.0

type GetQualityMonitorRequest struct {
	// Full name of the table.
	TableName string `json:"-" url:"-"`
}

Get a table monitor

type GetRefreshRequest added in v0.31.0

type GetRefreshRequest struct {
	// ID of the refresh.
	RefreshId string `json:"-" url:"-"`
	// Full name of the table.
	TableName string `json:"-" url:"-"`
}

Get refresh

type GetRegisteredModelRequest added in v0.18.0

type GetRegisteredModelRequest struct {
	// The three-level (fully qualified) name of the registered model
	FullName string `json:"-" url:"-"`
	// Whether to include registered models in the response for which the
	// principal can only access selective metadata.
	IncludeBrowse bool `json:"-" url:"include_browse,omitempty"`

	ForceSendFields []string `json:"-"`
}

Get a Registered Model

func (GetRegisteredModelRequest) MarshalJSON added in v0.35.0

func (s GetRegisteredModelRequest) MarshalJSON() ([]byte, error)

func (*GetRegisteredModelRequest) UnmarshalJSON added in v0.35.0

func (s *GetRegisteredModelRequest) UnmarshalJSON(b []byte) error

type GetSchemaRequest

type GetSchemaRequest struct {
	// Full name of the schema.
	FullName string `json:"-" url:"-"`
	// Whether to include schemas in the response for which the principal can
	// only access selective metadata.
	IncludeBrowse bool `json:"-" url:"include_browse,omitempty"`

	ForceSendFields []string `json:"-"`
}

Get a schema

func (GetSchemaRequest) MarshalJSON added in v0.35.0

func (s GetSchemaRequest) MarshalJSON() ([]byte, error)

func (*GetSchemaRequest) UnmarshalJSON added in v0.35.0

func (s *GetSchemaRequest) UnmarshalJSON(b []byte) error

type GetStorageCredentialRequest

type GetStorageCredentialRequest struct {
	// Name of the storage credential.
	Name string `json:"-" url:"-"`
}

Get a credential

type GetTableRequest

type GetTableRequest struct {
	// Full name of the table.
	FullName string `json:"-" url:"-"`
	// Whether to include tables in the response for which the principal can
	// only access selective metadata.
	IncludeBrowse bool `json:"-" url:"include_browse,omitempty"`
	// Whether delta metadata should be included in the response.
	IncludeDeltaMetadata bool `json:"-" url:"include_delta_metadata,omitempty"`

	ForceSendFields []string `json:"-"`
}

Get a table

func (GetTableRequest) MarshalJSON added in v0.23.0

func (s GetTableRequest) MarshalJSON() ([]byte, error)

func (*GetTableRequest) UnmarshalJSON added in v0.23.0

func (s *GetTableRequest) UnmarshalJSON(b []byte) error

type GetWorkspaceBindingRequest added in v0.9.0

type GetWorkspaceBindingRequest struct {
	// The name of the catalog.
	Name string `json:"-" url:"-"`
}

Get catalog workspace bindings

type GrantsAPI

type GrantsAPI struct {
	// contains filtered or unexported fields
}

In Unity Catalog, data is secure by default. Initially, users have no access to data in a metastore. Access can be granted by either a metastore admin, the owner of an object, or the owner of the catalog or schema that contains the object.

Securable objects in Unity Catalog are hierarchical and privileges are inherited downward. This means that granting a privilege on the catalog automatically grants the privilege to all current and future objects within the catalog. Similarly, privileges granted on a schema are inherited by all current and future objects within that schema.

func NewGrants

func NewGrants(client *client.DatabricksClient) *GrantsAPI

func (*GrantsAPI) Get

func (a *GrantsAPI) Get(ctx context.Context, request GetGrantRequest) (*PermissionsList, error)

Get permissions.

Gets the permissions for a securable.

func (*GrantsAPI) GetBySecurableTypeAndFullName

func (a *GrantsAPI) GetBySecurableTypeAndFullName(ctx context.Context, securableType SecurableType, fullName string) (*PermissionsList, error)

Get permissions.

Gets the permissions for a securable.
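
A minimal sketch of reading the grants on a table by securable type and full name (the table name is illustrative, and iterating PrivilegeAssignments on the returned PermissionsList is an assumption about its shape):

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

permissions, err := w.Grants.GetBySecurableTypeAndFullName(ctx, catalog.SecurableTypeTable, "main.default.orders")
if err != nil {
	panic(err)
}
for _, assignment := range permissions.PrivilegeAssignments {
	logger.Infof(ctx, "found %v", assignment)
}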

func (*GrantsAPI) GetEffective

func (a *GrantsAPI) GetEffective(ctx context.Context, request GetEffectiveRequest) (*EffectivePermissionsList, error)

Get effective permissions.

Gets the effective permissions for a securable.

Example (Tables)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

tableName := fmt.Sprintf("sdk-%x", time.Now().UnixNano())

createdCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdCatalog)

createdSchema, err := w.Schemas.Create(ctx, catalog.CreateSchema{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CatalogName: createdCatalog.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdSchema)

_, err = w.StatementExecution.ExecuteAndWait(ctx, sql.ExecuteStatementRequest{
	WarehouseId: os.Getenv("TEST_DEFAULT_WAREHOUSE_ID"),
	Catalog:     createdCatalog.Name,
	Schema:      createdSchema.Name,
	Statement:   fmt.Sprintf("CREATE TABLE %s AS SELECT 2+2 as four", tableName),
})
if err != nil {
	panic(err)
}

tableFullName := fmt.Sprintf("%s.%s.%s", createdCatalog.Name, createdSchema.Name, tableName)

createdTable, err := w.Tables.GetByFullName(ctx, tableFullName)
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdTable)

grants, err := w.Grants.GetEffectiveBySecurableTypeAndFullName(ctx, catalog.SecurableTypeTable, createdTable.FullName)
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", grants)

// cleanup

err = w.Schemas.DeleteByFullName(ctx, createdSchema.FullName)
if err != nil {
	panic(err)
}
err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  createdCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
err = w.Tables.DeleteByFullName(ctx, tableFullName)
if err != nil {
	panic(err)
}
Output:

func (*GrantsAPI) GetEffectiveBySecurableTypeAndFullName

func (a *GrantsAPI) GetEffectiveBySecurableTypeAndFullName(ctx context.Context, securableType SecurableType, fullName string) (*EffectivePermissionsList, error)

Get effective permissions.

Gets the effective permissions for a securable.

func (*GrantsAPI) Impl

func (a *GrantsAPI) Impl() GrantsService

Impl returns low-level Grants API implementation. Deprecated: use MockGrantsInterface instead.

func (*GrantsAPI) Update

func (a *GrantsAPI) Update(ctx context.Context, request UpdatePermissions) (*PermissionsList, error)

Update permissions.

Updates the permissions for a securable.

Example (Tables)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

tableName := fmt.Sprintf("sdk-%x", time.Now().UnixNano())

createdCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdCatalog)

createdSchema, err := w.Schemas.Create(ctx, catalog.CreateSchema{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CatalogName: createdCatalog.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdSchema)

_, err = w.StatementExecution.ExecuteAndWait(ctx, sql.ExecuteStatementRequest{
	WarehouseId: os.Getenv("TEST_DEFAULT_WAREHOUSE_ID"),
	Catalog:     createdCatalog.Name,
	Schema:      createdSchema.Name,
	Statement:   fmt.Sprintf("CREATE TABLE %s AS SELECT 2+2 as four", tableName),
})
if err != nil {
	panic(err)
}

tableFullName := fmt.Sprintf("%s.%s.%s", createdCatalog.Name, createdSchema.Name, tableName)

accountLevelGroupName := os.Getenv("TEST_DATA_ENG_GROUP")

createdTable, err := w.Tables.GetByFullName(ctx, tableFullName)
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdTable)

x, err := w.Grants.Update(ctx, catalog.UpdatePermissions{
	FullName:      createdTable.FullName,
	SecurableType: catalog.SecurableTypeTable,
	Changes: []catalog.PermissionsChange{{
		Add:       []catalog.Privilege{catalog.PrivilegeModify, catalog.PrivilegeSelect},
		Principal: accountLevelGroupName,
	}},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", x)

// cleanup

// delete the table before its parent schema and catalog
err = w.Tables.DeleteByFullName(ctx, tableFullName)
if err != nil {
	panic(err)
}
err = w.Schemas.DeleteByFullName(ctx, createdSchema.FullName)
if err != nil {
	panic(err)
}
err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  createdCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

func (*GrantsAPI) WithImpl

func (a *GrantsAPI) WithImpl(impl GrantsService) GrantsInterface

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockGrantsInterface instead.

type GrantsInterface added in v0.29.0

type GrantsInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockGrantsInterface instead.
	WithImpl(impl GrantsService) GrantsInterface

	// Impl returns low-level Grants API implementation
	// Deprecated: use MockGrantsInterface instead.
	Impl() GrantsService

	// Get permissions.
	//
	// Gets the permissions for a securable.
	Get(ctx context.Context, request GetGrantRequest) (*PermissionsList, error)

	// Get permissions.
	//
	// Gets the permissions for a securable.
	GetBySecurableTypeAndFullName(ctx context.Context, securableType SecurableType, fullName string) (*PermissionsList, error)

	// Get effective permissions.
	//
	// Gets the effective permissions for a securable.
	GetEffective(ctx context.Context, request GetEffectiveRequest) (*EffectivePermissionsList, error)

	// Get effective permissions.
	//
	// Gets the effective permissions for a securable.
	GetEffectiveBySecurableTypeAndFullName(ctx context.Context, securableType SecurableType, fullName string) (*EffectivePermissionsList, error)

	// Update permissions.
	//
	// Updates the permissions for a securable.
	Update(ctx context.Context, request UpdatePermissions) (*PermissionsList, error)
}

type GrantsService

type GrantsService interface {

	// Get permissions.
	//
	// Gets the permissions for a securable.
	Get(ctx context.Context, request GetGrantRequest) (*PermissionsList, error)

	// Get effective permissions.
	//
	// Gets the effective permissions for a securable.
	GetEffective(ctx context.Context, request GetEffectiveRequest) (*EffectivePermissionsList, error)

	// Update permissions.
	//
	// Updates the permissions for a securable.
	Update(ctx context.Context, request UpdatePermissions) (*PermissionsList, error)
}

In Unity Catalog, data is secure by default. Initially, users have no access to data in a metastore. Access can be granted by either a metastore admin, the owner of an object, or the owner of the catalog or schema that contains the object. Securable objects in Unity Catalog are hierarchical and privileges are inherited downward: granting a privilege on a catalog automatically grants the privilege to all current and future objects within the catalog, and privileges granted on a schema are inherited by all current and future objects within that schema.

type IsolationMode added in v0.9.0

type IsolationMode string

Whether the current securable is accessible from all workspaces or a specific set of workspaces.

const IsolationModeIsolated IsolationMode = `ISOLATED`
const IsolationModeOpen IsolationMode = `OPEN`

func (*IsolationMode) Set added in v0.9.0

func (f *IsolationMode) Set(v string) error

Set raw string value and validate it against allowed values

func (*IsolationMode) String added in v0.9.0

func (f *IsolationMode) String() string

String representation for fmt.Print

func (*IsolationMode) Type added in v0.9.0

func (f *IsolationMode) Type() string

Type always returns IsolationMode to satisfy [pflag.Value] interface
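
Because IsolationMode (like the other enum types in this package) implements pflag.Value, a raw string can be parsed and validated with Set. A minimal sketch:

var mode catalog.IsolationMode
if err := mode.Set("ISOLATED"); err != nil {
	// Set returns an error for anything other than ISOLATED or OPEN.
	panic(err)
}
fmt.Printf("%s is a valid %s\n", mode.String(), mode.Type())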

type ListAccountMetastoreAssignmentsRequest

type ListAccountMetastoreAssignmentsRequest struct {
	// Unity Catalog metastore ID
	MetastoreId string `json:"-" url:"-"`
}

Get all workspaces assigned to a metastore

type ListAccountMetastoreAssignmentsResponse added in v0.22.0

type ListAccountMetastoreAssignmentsResponse struct {
	WorkspaceIds []int64 `json:"workspace_ids,omitempty"`
}

The list of workspaces to which the given metastore is assigned.

type ListAccountStorageCredentialsRequest

type ListAccountStorageCredentialsRequest struct {
	// Unity Catalog metastore ID
	MetastoreId string `json:"-" url:"-"`
}

Get all storage credentials assigned to a metastore

type ListCatalogsRequest added in v0.35.0

type ListCatalogsRequest struct {
	// Whether to include catalogs in the response for which the principal can
	// only access selective metadata.
	IncludeBrowse bool `json:"-" url:"include_browse,omitempty"`

	ForceSendFields []string `json:"-"`
}

List catalogs

func (ListCatalogsRequest) MarshalJSON added in v0.35.0

func (s ListCatalogsRequest) MarshalJSON() ([]byte, error)

func (*ListCatalogsRequest) UnmarshalJSON added in v0.35.0

func (s *ListCatalogsRequest) UnmarshalJSON(b []byte) error

type ListCatalogsResponse

type ListCatalogsResponse struct {
	// An array of catalog information objects.
	Catalogs []CatalogInfo `json:"catalogs,omitempty"`
}

type ListConnectionsRequest added in v0.41.0

type ListConnectionsRequest struct {
	// Maximum number of connections to return. - If not set, all connections
	// are returned (not recommended). - when set to a value greater than 0, the
	// page length is the minimum of this value and a server configured value; -
	// when set to 0, the page length is set to a server configured value
	// (recommended); - when set to a value less than 0, an invalid parameter
	// error is returned;
	MaxResults int `json:"-" url:"max_results,omitempty"`
	// Opaque pagination token to go to next page based on previous query.
	PageToken string `json:"-" url:"page_token,omitempty"`

	ForceSendFields []string `json:"-"`
}

List connections
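
A sketch of listing connections with a bounded page size, assuming the Connections API exposes the same generated ListAll helper as the other list APIs in this package (following next_page_token until every page is consumed):

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

all, err := w.Connections.ListAll(ctx, catalog.ListConnectionsRequest{
	MaxResults: 100, // at most 100 connections per page
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", all)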

func (ListConnectionsRequest) MarshalJSON added in v0.41.0

func (s ListConnectionsRequest) MarshalJSON() ([]byte, error)

func (*ListConnectionsRequest) UnmarshalJSON added in v0.41.0

func (s *ListConnectionsRequest) UnmarshalJSON(b []byte) error

type ListConnectionsResponse added in v0.10.0

type ListConnectionsResponse struct {
	// An array of connection information objects.
	Connections []ConnectionInfo `json:"connections,omitempty"`
	// Opaque token to retrieve the next page of results. Absent if there are no
	// more pages. __page_token__ should be set to this value for the next
	// request (for the next page of results).
	NextPageToken string `json:"next_page_token,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ListConnectionsResponse) MarshalJSON added in v0.41.0

func (s ListConnectionsResponse) MarshalJSON() ([]byte, error)

func (*ListConnectionsResponse) UnmarshalJSON added in v0.41.0

func (s *ListConnectionsResponse) UnmarshalJSON(b []byte) error

type ListExternalLocationsRequest added in v0.29.0

type ListExternalLocationsRequest struct {
	// Whether to include external locations in the response for which the
	// principal can only access selective metadata.
	IncludeBrowse bool `json:"-" url:"include_browse,omitempty"`
	// Maximum number of external locations to return. If not set, all the
	// external locations are returned (not recommended). - when set to a value
	// greater than 0, the page length is the minimum of this value and a server
	// configured value; - when set to 0, the page length is set to a server
	// configured value (recommended); - when set to a value less than 0, an
	// invalid parameter error is returned;
	MaxResults int `json:"-" url:"max_results,omitempty"`
	// Opaque pagination token to go to next page based on previous query.
	PageToken string `json:"-" url:"page_token,omitempty"`

	ForceSendFields []string `json:"-"`
}

List external locations

func (ListExternalLocationsRequest) MarshalJSON added in v0.29.0

func (s ListExternalLocationsRequest) MarshalJSON() ([]byte, error)

func (*ListExternalLocationsRequest) UnmarshalJSON added in v0.29.0

func (s *ListExternalLocationsRequest) UnmarshalJSON(b []byte) error

type ListExternalLocationsResponse

type ListExternalLocationsResponse struct {
	// An array of external locations.
	ExternalLocations []ExternalLocationInfo `json:"external_locations,omitempty"`
	// Opaque token to retrieve the next page of results. Absent if there are no
	// more pages. __page_token__ should be set to this value for the next
	// request (for the next page of results).
	NextPageToken string `json:"next_page_token,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ListExternalLocationsResponse) MarshalJSON added in v0.29.0

func (s ListExternalLocationsResponse) MarshalJSON() ([]byte, error)

func (*ListExternalLocationsResponse) UnmarshalJSON added in v0.29.0

func (s *ListExternalLocationsResponse) UnmarshalJSON(b []byte) error

type ListFunctionsRequest

type ListFunctionsRequest struct {
	// Name of parent catalog for functions of interest.
	CatalogName string `json:"-" url:"catalog_name"`
	// Whether to include functions in the response for which the principal can
	// only access selective metadata.
	IncludeBrowse bool `json:"-" url:"include_browse,omitempty"`
	// Maximum number of functions to return. If not set, all the functions are
	// returned (not recommended). - when set to a value greater than 0, the
	// page length is the minimum of this value and a server configured value; -
	// when set to 0, the page length is set to a server configured value
	// (recommended); - when set to a value less than 0, an invalid parameter
	// error is returned;
	MaxResults int `json:"-" url:"max_results,omitempty"`
	// Opaque pagination token to go to next page based on previous query.
	PageToken string `json:"-" url:"page_token,omitempty"`
	// Parent schema of functions.
	SchemaName string `json:"-" url:"schema_name"`

	ForceSendFields []string `json:"-"`
}

List functions

func (ListFunctionsRequest) MarshalJSON added in v0.29.0

func (s ListFunctionsRequest) MarshalJSON() ([]byte, error)

func (*ListFunctionsRequest) UnmarshalJSON added in v0.29.0

func (s *ListFunctionsRequest) UnmarshalJSON(b []byte) error

type ListFunctionsResponse

type ListFunctionsResponse struct {
	// An array of function information objects.
	Functions []FunctionInfo `json:"functions,omitempty"`
	// Opaque token to retrieve the next page of results. Absent if there are no
	// more pages. __page_token__ should be set to this value for the next
	// request (for the next page of results).
	NextPageToken string `json:"next_page_token,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ListFunctionsResponse) MarshalJSON added in v0.29.0

func (s ListFunctionsResponse) MarshalJSON() ([]byte, error)

func (*ListFunctionsResponse) UnmarshalJSON added in v0.29.0

func (s *ListFunctionsResponse) UnmarshalJSON(b []byte) error

type ListMetastoresResponse

type ListMetastoresResponse struct {
	// An array of metastore information objects.
	Metastores []MetastoreInfo `json:"metastores,omitempty"`
}

type ListModelVersionsRequest added in v0.18.0

type ListModelVersionsRequest struct {
	// The full three-level name of the registered model under which to list
	// model versions
	FullName string `json:"-" url:"-"`
	// Whether to include model versions in the response for which the principal
	// can only access selective metadata.
	IncludeBrowse bool `json:"-" url:"include_browse,omitempty"`
	// Maximum number of model versions to return. If not set, the page length
	// is set to a server configured value (100, as of 1/3/2024). - when set to
	// a value greater than 0, the page length is the minimum of this value and
	// a server configured value(1000, as of 1/3/2024); - when set to 0, the
	// page length is set to a server configured value (100, as of 1/3/2024)
	// (recommended); - when set to a value less than 0, an invalid parameter
	// error is returned;
	MaxResults int `json:"-" url:"max_results,omitempty"`
	// Opaque pagination token to go to next page based on previous query.
	PageToken string `json:"-" url:"page_token,omitempty"`

	ForceSendFields []string `json:"-"`
}

List Model Versions

func (ListModelVersionsRequest) MarshalJSON added in v0.23.0

func (s ListModelVersionsRequest) MarshalJSON() ([]byte, error)

func (*ListModelVersionsRequest) UnmarshalJSON added in v0.23.0

func (s *ListModelVersionsRequest) UnmarshalJSON(b []byte) error

type ListModelVersionsResponse added in v0.18.0

type ListModelVersionsResponse struct {
	ModelVersions []ModelVersionInfo `json:"model_versions,omitempty"`
	// Opaque token to retrieve the next page of results. Absent if there are no
	// more pages. __page_token__ should be set to this value for the next
	// request (for the next page of results).
	NextPageToken string `json:"next_page_token,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ListModelVersionsResponse) MarshalJSON added in v0.23.0

func (s ListModelVersionsResponse) MarshalJSON() ([]byte, error)

func (*ListModelVersionsResponse) UnmarshalJSON added in v0.23.0

func (s *ListModelVersionsResponse) UnmarshalJSON(b []byte) error

type ListRefreshesRequest added in v0.31.0

type ListRefreshesRequest struct {
	// Full name of the table.
	TableName string `json:"-" url:"-"`
}

List refreshes

type ListRegisteredModelsRequest added in v0.18.0

type ListRegisteredModelsRequest struct {
	// The identifier of the catalog under which to list registered models. If
	// specified, schema_name must be specified.
	CatalogName string `json:"-" url:"catalog_name,omitempty"`
	// Whether to include registered models in the response for which the
	// principal can only access selective metadata.
	IncludeBrowse bool `json:"-" url:"include_browse,omitempty"`
	// Max number of registered models to return.
	//
	// If both catalog and schema are specified: - when max_results is not
	// specified, the page length is set to a server configured value (10000, as
	// of 4/2/2024). - when set to a value greater than 0, the page length is
	// the minimum of this value and a server configured value (10000, as of
	// 4/2/2024); - when set to 0, the page length is set to a server configured
	// value (10000, as of 4/2/2024); - when set to a value less than 0, an
	// invalid parameter error is returned;
	//
	// If neither schema nor catalog is specified: - when max_results is not
	// specified, the page length is set to a server configured value (100, as
	// of 4/2/2024). - when set to a value greater than 0, the page length is
	// the minimum of this value and a server configured value (1000, as of
	// 4/2/2024); - when set to 0, the page length is set to a server configured
	// value (100, as of 4/2/2024); - when set to a value less than 0, an
	// invalid parameter error is returned;
	MaxResults int `json:"-" url:"max_results,omitempty"`
	// Opaque token to send for the next page of results (pagination).
	PageToken string `json:"-" url:"page_token,omitempty"`
	// The identifier of the schema under which to list registered models. If
	// specified, catalog_name must be specified.
	SchemaName string `json:"-" url:"schema_name,omitempty"`

	ForceSendFields []string `json:"-"`
}

List Registered Models
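
Note that catalog_name and schema_name must be provided together (or both omitted). A sketch listing the registered models in one schema, assuming a generated ListAll helper on the Registered Models API; the catalog and schema names are placeholders:

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

models, err := w.RegisteredModels.ListAll(ctx, catalog.ListRegisteredModelsRequest{
	CatalogName: "main", // placeholder
	SchemaName:  "ml",   // placeholder
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", models)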

func (ListRegisteredModelsRequest) MarshalJSON added in v0.23.0

func (s ListRegisteredModelsRequest) MarshalJSON() ([]byte, error)

func (*ListRegisteredModelsRequest) UnmarshalJSON added in v0.23.0

func (s *ListRegisteredModelsRequest) UnmarshalJSON(b []byte) error

type ListRegisteredModelsResponse added in v0.18.0

type ListRegisteredModelsResponse struct {
	// Opaque token for pagination. Omitted if there are no more results.
	// page_token should be set to this value for fetching the next page.
	NextPageToken string `json:"next_page_token,omitempty"`

	RegisteredModels []RegisteredModelInfo `json:"registered_models,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ListRegisteredModelsResponse) MarshalJSON added in v0.23.0

func (s ListRegisteredModelsResponse) MarshalJSON() ([]byte, error)

func (*ListRegisteredModelsResponse) UnmarshalJSON added in v0.23.0

func (s *ListRegisteredModelsResponse) UnmarshalJSON(b []byte) error

type ListSchemasRequest

type ListSchemasRequest struct {
	// Parent catalog for schemas of interest.
	CatalogName string `json:"-" url:"catalog_name"`
	// Whether to include schemas in the response for which the principal can
	// only access selective metadata.
	IncludeBrowse bool `json:"-" url:"include_browse,omitempty"`
	// Maximum number of schemas to return. If not set, all the schemas are
	// returned (not recommended). - when set to a value greater than 0, the
	// page length is the minimum of this value and a server configured value; -
	// when set to 0, the page length is set to a server configured value
	// (recommended); - when set to a value less than 0, an invalid parameter
	// error is returned;
	MaxResults int `json:"-" url:"max_results,omitempty"`
	// Opaque pagination token to go to next page based on previous query.
	PageToken string `json:"-" url:"page_token,omitempty"`

	ForceSendFields []string `json:"-"`
}

List schemas

func (ListSchemasRequest) MarshalJSON added in v0.29.0

func (s ListSchemasRequest) MarshalJSON() ([]byte, error)

func (*ListSchemasRequest) UnmarshalJSON added in v0.29.0

func (s *ListSchemasRequest) UnmarshalJSON(b []byte) error

type ListSchemasResponse

type ListSchemasResponse struct {
	// Opaque token to retrieve the next page of results. Absent if there are no
	// more pages. __page_token__ should be set to this value for the next
	// request (for the next page of results).
	NextPageToken string `json:"next_page_token,omitempty"`
	// An array of schema information objects.
	Schemas []SchemaInfo `json:"schemas,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ListSchemasResponse) MarshalJSON added in v0.29.0

func (s ListSchemasResponse) MarshalJSON() ([]byte, error)

func (*ListSchemasResponse) UnmarshalJSON added in v0.29.0

func (s *ListSchemasResponse) UnmarshalJSON(b []byte) error

type ListStorageCredentialsRequest added in v0.29.0

type ListStorageCredentialsRequest struct {
	// Maximum number of storage credentials to return. If not set, all the
	// storage credentials are returned (not recommended). - when set to a value
	// greater than 0, the page length is the minimum of this value and a server
	// configured value; - when set to 0, the page length is set to a server
	// configured value (recommended); - when set to a value less than 0, an
	// invalid parameter error is returned;
	MaxResults int `json:"-" url:"max_results,omitempty"`
	// Opaque pagination token to go to next page based on previous query.
	PageToken string `json:"-" url:"page_token,omitempty"`

	ForceSendFields []string `json:"-"`
}

List credentials

func (ListStorageCredentialsRequest) MarshalJSON added in v0.29.0

func (s ListStorageCredentialsRequest) MarshalJSON() ([]byte, error)

func (*ListStorageCredentialsRequest) UnmarshalJSON added in v0.29.0

func (s *ListStorageCredentialsRequest) UnmarshalJSON(b []byte) error

type ListStorageCredentialsResponse added in v0.9.0

type ListStorageCredentialsResponse struct {
	// Opaque token to retrieve the next page of results. Absent if there are no
	// more pages. __page_token__ should be set to this value for the next
	// request (for the next page of results).
	NextPageToken string `json:"next_page_token,omitempty"`

	StorageCredentials []StorageCredentialInfo `json:"storage_credentials,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ListStorageCredentialsResponse) MarshalJSON added in v0.29.0

func (s ListStorageCredentialsResponse) MarshalJSON() ([]byte, error)

func (*ListStorageCredentialsResponse) UnmarshalJSON added in v0.29.0

func (s *ListStorageCredentialsResponse) UnmarshalJSON(b []byte) error

type ListSummariesRequest

type ListSummariesRequest struct {
	// Name of parent catalog for tables of interest.
	CatalogName string `json:"-" url:"catalog_name"`
	// Maximum number of summaries for tables to return. If not set, the page
	// length is set to a server configured value (10000, as of 1/5/2024). -
	// when set to a value greater than 0, the page length is the minimum of
	// this value and a server configured value (10000, as of 1/5/2024); - when
	// set to 0, the page length is set to a server configured value (10000, as
	// of 1/5/2024) (recommended); - when set to a value less than 0, an invalid
	// parameter error is returned;
	MaxResults int `json:"-" url:"max_results,omitempty"`
	// Opaque pagination token to go to next page based on previous query.
	PageToken string `json:"-" url:"page_token,omitempty"`
	// A SQL LIKE pattern (% and _) for schema names. All schemas will be
	// returned if not set or empty.
	SchemaNamePattern string `json:"-" url:"schema_name_pattern,omitempty"`
	// A SQL LIKE pattern (% and _) for table names. All tables will be returned
	// if not set or empty.
	TableNamePattern string `json:"-" url:"table_name_pattern,omitempty"`

	ForceSendFields []string `json:"-"`
}

List table summaries
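
A sketch that narrows the summaries with the LIKE patterns, assuming the Tables API exposes a generated ListSummariesAll helper analogous to the other ListAll methods; the catalog name and patterns are placeholders:

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

// % matches any sequence of characters, _ matches a single character.
summaries, err := w.Tables.ListSummariesAll(ctx, catalog.ListSummariesRequest{
	CatalogName:       "main",    // placeholder
	SchemaNamePattern: "sales_%", // placeholder
	TableNamePattern:  "%_daily", // placeholder
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", summaries)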

func (ListSummariesRequest) MarshalJSON added in v0.23.0

func (s ListSummariesRequest) MarshalJSON() ([]byte, error)

func (*ListSummariesRequest) UnmarshalJSON added in v0.23.0

func (s *ListSummariesRequest) UnmarshalJSON(b []byte) error

type ListSystemSchemasRequest added in v0.10.0

type ListSystemSchemasRequest struct {
	// The ID for the metastore in which the system schema resides.
	MetastoreId string `json:"-" url:"-"`
}

List system schemas

type ListSystemSchemasResponse added in v0.10.0

type ListSystemSchemasResponse struct {
	// An array of system schema information objects.
	Schemas []SystemSchemaInfo `json:"schemas,omitempty"`
}

type ListTableSummariesResponse

type ListTableSummariesResponse struct {
	// Opaque token to retrieve the next page of results. Absent if there are no
	// more pages. __page_token__ should be set to this value for the next
	// request (for the next page of results).
	NextPageToken string `json:"next_page_token,omitempty"`
	// List of table summaries.
	Tables []TableSummary `json:"tables,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ListTableSummariesResponse) MarshalJSON added in v0.23.0

func (s ListTableSummariesResponse) MarshalJSON() ([]byte, error)

func (*ListTableSummariesResponse) UnmarshalJSON added in v0.23.0

func (s *ListTableSummariesResponse) UnmarshalJSON(b []byte) error

type ListTablesRequest

type ListTablesRequest struct {
	// Name of parent catalog for tables of interest.
	CatalogName string `json:"-" url:"catalog_name"`
	// Whether to include tables in the response for which the principal can
	// only access selective metadata.
	IncludeBrowse bool `json:"-" url:"include_browse,omitempty"`
	// Whether delta metadata should be included in the response.
	IncludeDeltaMetadata bool `json:"-" url:"include_delta_metadata,omitempty"`
	// Maximum number of tables to return. If not set, all the tables are
	// returned (not recommended). - when set to a value greater than 0, the
	// page length is the minimum of this value and a server configured value; -
	// when set to 0, the page length is set to a server configured value
	// (recommended); - when set to a value less than 0, an invalid parameter
	// error is returned;
	MaxResults int `json:"-" url:"max_results,omitempty"`
	// Whether to omit the columns of the table from the response or not.
	OmitColumns bool `json:"-" url:"omit_columns,omitempty"`
	// Whether to omit the properties of the table from the response or not.
	OmitProperties bool `json:"-" url:"omit_properties,omitempty"`
	// Opaque token to send for the next page of results (pagination).
	PageToken string `json:"-" url:"page_token,omitempty"`
	// Parent schema of tables.
	SchemaName string `json:"-" url:"schema_name"`

	ForceSendFields []string `json:"-"`
}

List tables
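
A sketch listing the tables of a single schema while skipping column metadata, assuming a generated ListAll helper on the Tables API; the catalog and schema names are placeholders:

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

tables, err := w.Tables.ListAll(ctx, catalog.ListTablesRequest{
	CatalogName: "main",    // placeholder
	SchemaName:  "default", // placeholder
	OmitColumns: true,      // keep the response small
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", tables)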

func (ListTablesRequest) MarshalJSON added in v0.23.0

func (s ListTablesRequest) MarshalJSON() ([]byte, error)

func (*ListTablesRequest) UnmarshalJSON added in v0.23.0

func (s *ListTablesRequest) UnmarshalJSON(b []byte) error

type ListTablesResponse

type ListTablesResponse struct {
	// Opaque token to retrieve the next page of results. Absent if there are no
	// more pages. __page_token__ should be set to this value for the next
	// request (for the next page of results).
	NextPageToken string `json:"next_page_token,omitempty"`
	// An array of table information objects.
	Tables []TableInfo `json:"tables,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ListTablesResponse) MarshalJSON added in v0.23.0

func (s ListTablesResponse) MarshalJSON() ([]byte, error)

func (*ListTablesResponse) UnmarshalJSON added in v0.23.0

func (s *ListTablesResponse) UnmarshalJSON(b []byte) error

type ListVolumesRequest

type ListVolumesRequest struct {
	// The identifier of the catalog
	CatalogName string `json:"-" url:"catalog_name"`
	// Whether to include volumes in the response for which the principal can
	// only access selective metadata.
	IncludeBrowse bool `json:"-" url:"include_browse,omitempty"`
	// Maximum number of volumes to return (page length).
	//
	// If not set, the page length is set to a server configured value (10000,
	// as of 1/29/2024). - when set to a value greater than 0, the page length
	// is the minimum of this value and a server configured value (10000, as of
	// 1/29/2024); - when set to 0, the page length is set to a server
	// configured value (10000, as of 1/29/2024) (recommended); - when set to a
	// value less than 0, an invalid parameter error is returned;
	//
	// Note: this parameter controls only the maximum number of volumes to
	// return. The actual number of volumes returned in a page may be smaller
	// than this value, including 0, even if there are more pages.
	MaxResults int `json:"-" url:"max_results,omitempty"`
	// Opaque token returned by a previous request. It must be included in the
	// request to retrieve the next page of results (pagination).
	PageToken string `json:"-" url:"page_token,omitempty"`
	// The identifier of the schema
	SchemaName string `json:"-" url:"schema_name"`

	ForceSendFields []string `json:"-"`
}

List Volumes
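
Because a page can contain fewer volumes than max_results (even zero) while more pages remain, iterate over results rather than inspecting page sizes. A sketch, assuming the Volumes API exposes a generated List method returning a listing.Iterator with HasNext/Next, as the other generated list methods in this package do; the catalog and schema names are placeholders:

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

it := w.Volumes.List(ctx, catalog.ListVolumesRequest{
	CatalogName: "main",    // placeholder
	SchemaName:  "default", // placeholder
})
for it.HasNext(ctx) {
	v, err := it.Next(ctx)
	if err != nil {
		panic(err)
	}
	logger.Infof(ctx, "found %v", v)
}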

func (ListVolumesRequest) MarshalJSON added in v0.32.0

func (s ListVolumesRequest) MarshalJSON() ([]byte, error)

func (*ListVolumesRequest) UnmarshalJSON added in v0.32.0

func (s *ListVolumesRequest) UnmarshalJSON(b []byte) error

type ListVolumesResponseContent

type ListVolumesResponseContent struct {
	// Opaque token to retrieve the next page of results. Absent if there are no
	// more pages. __page_token__ should be set to this value for the next
	// request to retrieve the next page of results.
	NextPageToken string `json:"next_page_token,omitempty"`

	Volumes []VolumeInfo `json:"volumes,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ListVolumesResponseContent) MarshalJSON added in v0.32.0

func (s ListVolumesResponseContent) MarshalJSON() ([]byte, error)

func (*ListVolumesResponseContent) UnmarshalJSON added in v0.32.0

func (s *ListVolumesResponseContent) UnmarshalJSON(b []byte) error

type MatchType added in v0.17.0

type MatchType string

The artifact pattern matching type

const MatchTypePrefixMatch MatchType = `PREFIX_MATCH`

func (*MatchType) Set added in v0.17.0

func (f *MatchType) Set(v string) error

Set raw string value and validate it against allowed values

func (*MatchType) String added in v0.17.0

func (f *MatchType) String() string

String representation for fmt.Print

func (*MatchType) Type added in v0.17.0

func (f *MatchType) Type() string

Type always returns MatchType to satisfy [pflag.Value] interface

type MetastoreAssignment

type MetastoreAssignment struct {
	// The name of the default catalog in the metastore.
	DefaultCatalogName string `json:"default_catalog_name,omitempty"`
	// The unique ID of the metastore.
	MetastoreId string `json:"metastore_id"`
	// The unique ID of the Databricks workspace.
	WorkspaceId int64 `json:"workspace_id"`

	ForceSendFields []string `json:"-"`
}

func (MetastoreAssignment) MarshalJSON added in v0.23.0

func (s MetastoreAssignment) MarshalJSON() ([]byte, error)

func (*MetastoreAssignment) UnmarshalJSON added in v0.23.0

func (s *MetastoreAssignment) UnmarshalJSON(b []byte) error

type MetastoreInfo

type MetastoreInfo struct {
	// Cloud vendor of the metastore home shard (e.g., `aws`, `azure`, `gcp`).
	Cloud string `json:"cloud,omitempty"`
	// Time at which this metastore was created, in epoch milliseconds.
	CreatedAt int64 `json:"created_at,omitempty"`
	// Username of metastore creator.
	CreatedBy string `json:"created_by,omitempty"`
	// Unique identifier of the metastore's (Default) Data Access Configuration.
	DefaultDataAccessConfigId string `json:"default_data_access_config_id,omitempty"`
	// The organization name of a Delta Sharing entity, to be used in
	// Databricks-to-Databricks Delta Sharing as the official name.
	DeltaSharingOrganizationName string `json:"delta_sharing_organization_name,omitempty"`
	// The lifetime of delta sharing recipient token in seconds.
	DeltaSharingRecipientTokenLifetimeInSeconds int64 `json:"delta_sharing_recipient_token_lifetime_in_seconds,omitempty"`
	// The scope of Delta Sharing enabled for the metastore.
	DeltaSharingScope MetastoreInfoDeltaSharingScope `json:"delta_sharing_scope,omitempty"`
	// Globally unique metastore ID across clouds and regions, of the form
	// `cloud:region:metastore_id`.
	GlobalMetastoreId string `json:"global_metastore_id,omitempty"`
	// Unique identifier of metastore.
	MetastoreId string `json:"metastore_id,omitempty"`
	// The user-specified name of the metastore.
	Name string `json:"name,omitempty"`
	// The owner of the metastore.
	Owner string `json:"owner,omitempty"`
	// Privilege model version of the metastore, of the form `major.minor`
	// (e.g., `1.0`).
	PrivilegeModelVersion string `json:"privilege_model_version,omitempty"`
	// Cloud region which the metastore serves (e.g., `us-west-2`, `westus`).
	Region string `json:"region,omitempty"`
	// The storage root URL for metastore
	StorageRoot string `json:"storage_root,omitempty"`
	// UUID of storage credential to access the metastore storage_root.
	StorageRootCredentialId string `json:"storage_root_credential_id,omitempty"`
	// Name of the storage credential to access the metastore storage_root.
	StorageRootCredentialName string `json:"storage_root_credential_name,omitempty"`
	// Time at which the metastore was last modified, in epoch milliseconds.
	UpdatedAt int64 `json:"updated_at,omitempty"`
	// Username of user who last modified the metastore.
	UpdatedBy string `json:"updated_by,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (MetastoreInfo) MarshalJSON added in v0.23.0

func (s MetastoreInfo) MarshalJSON() ([]byte, error)

func (*MetastoreInfo) UnmarshalJSON added in v0.23.0

func (s *MetastoreInfo) UnmarshalJSON(b []byte) error

type MetastoreInfoDeltaSharingScope

type MetastoreInfoDeltaSharingScope string

The scope of Delta Sharing enabled for the metastore.

const MetastoreInfoDeltaSharingScopeInternal MetastoreInfoDeltaSharingScope = `INTERNAL`
const MetastoreInfoDeltaSharingScopeInternalAndExternal MetastoreInfoDeltaSharingScope = `INTERNAL_AND_EXTERNAL`

func (*MetastoreInfoDeltaSharingScope) Set

func (f *MetastoreInfoDeltaSharingScope) Set(v string) error

Set raw string value and validate it against allowed values

func (*MetastoreInfoDeltaSharingScope) String

func (f *MetastoreInfoDeltaSharingScope) String() string

String representation for fmt.Print

func (*MetastoreInfoDeltaSharingScope) Type

func (f *MetastoreInfoDeltaSharingScope) Type() string

Type always returns MetastoreInfoDeltaSharingScope to satisfy [pflag.Value] interface

type MetastoresAPI

type MetastoresAPI struct {
	// contains filtered or unexported fields
}

A metastore is the top-level container of objects in Unity Catalog. It stores data assets (tables and views) and the permissions that govern access to them. Databricks account admins can create metastores and assign them to Databricks workspaces to control which workloads use each metastore. For a workspace to use Unity Catalog, it must have a Unity Catalog metastore attached.

Each metastore is configured with a root storage location in a cloud storage account. This storage location is used for metadata and managed table data.

NOTE: This metastore is distinct from the metastore included in Databricks workspaces created before Unity Catalog was released. If your workspace includes a legacy Hive metastore, the data in that metastore is available in a catalog named hive_metastore.

func NewMetastores

func NewMetastores(client *client.DatabricksClient) *MetastoresAPI

func (*MetastoresAPI) Assign

func (a *MetastoresAPI) Assign(ctx context.Context, request CreateMetastoreAssignment) error

Create an assignment.

Creates a new metastore assignment. If an assignment for the same __workspace_id__ exists, it will be overwritten by the new __metastore_id__ and __default_catalog_name__. The caller must be an account admin.

Example (Metastores)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

workspaceId := func(v string) int64 {
	i, err := strconv.ParseInt(v, 10, 64)
	if err != nil {
		panic(fmt.Sprintf("`%s` is not int64: %s", v, err))
	}
	return i
}(os.Getenv("DUMMY_WORKSPACE_ID"))

created, err := w.Metastores.Create(ctx, catalog.CreateMetastore{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	StorageRoot: fmt.Sprintf("s3://%s/%s", os.Getenv("TEST_BUCKET"), fmt.Sprintf("sdk-%x", time.Now().UnixNano())),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

err = w.Metastores.Assign(ctx, catalog.CreateMetastoreAssignment{
	MetastoreId: created.MetastoreId,
	WorkspaceId: workspaceId,
})
if err != nil {
	panic(err)
}

// cleanup

err = w.Metastores.Delete(ctx, catalog.DeleteMetastoreRequest{
	Id:    created.MetastoreId,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

func (*MetastoresAPI) Create

func (a *MetastoresAPI) Create(ctx context.Context, request CreateMetastore) (*MetastoreInfo, error)

Create a metastore.

Creates a new metastore based on a provided name and optional storage root path. By default (if the __owner__ field is not set), the owner of the new metastore is the user calling the __createMetastore__ API. If the __owner__ field is set to the empty string (**""**), the ownership is assigned to the System User instead.

Example (Metastores)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

created, err := w.Metastores.Create(ctx, catalog.CreateMetastore{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	StorageRoot: fmt.Sprintf("s3://%s/%s", os.Getenv("TEST_BUCKET"), fmt.Sprintf("sdk-%x", time.Now().UnixNano())),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

// cleanup

err = w.Metastores.Delete(ctx, catalog.DeleteMetastoreRequest{
	Id:    created.MetastoreId,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

func (*MetastoresAPI) Current

func (a *MetastoresAPI) Current(ctx context.Context) (*MetastoreAssignment, error)

Get metastore assignment for workspace.

Gets the metastore assignment for the workspace being accessed.

Example (Metastores)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

currentMetastore, err := w.Metastores.Current(ctx)
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", currentMetastore)
Output:

func (*MetastoresAPI) Delete

func (a *MetastoresAPI) Delete(ctx context.Context, request DeleteMetastoreRequest) error

Delete a metastore.

Deletes a metastore. The caller must be a metastore admin.

func (*MetastoresAPI) DeleteById

func (a *MetastoresAPI) DeleteById(ctx context.Context, id string) error

Delete a metastore.

Deletes a metastore. The caller must be a metastore admin.

func (*MetastoresAPI) Get

func (a *MetastoresAPI) Get(ctx context.Context, request GetMetastoreRequest) (*MetastoreInfo, error)

Get a metastore.

Gets a metastore that matches the supplied ID. The caller must be a metastore admin to retrieve this info.

Example (Metastores)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

created, err := w.Metastores.Create(ctx, catalog.CreateMetastore{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	StorageRoot: fmt.Sprintf("s3://%s/%s", os.Getenv("TEST_BUCKET"), fmt.Sprintf("sdk-%x", time.Now().UnixNano())),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

_, err = w.Metastores.GetById(ctx, created.MetastoreId)
if err != nil {
	panic(err)
}

// cleanup

err = w.Metastores.Delete(ctx, catalog.DeleteMetastoreRequest{
	Id:    created.MetastoreId,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

func (*MetastoresAPI) GetById

func (a *MetastoresAPI) GetById(ctx context.Context, id string) (*MetastoreInfo, error)

Get a metastore.

Gets a metastore that matches the supplied ID. The caller must be a metastore admin to retrieve this info.

func (*MetastoresAPI) GetByName

func (a *MetastoresAPI) GetByName(ctx context.Context, name string) (*MetastoreInfo, error)

GetByName calls MetastoresAPI.MetastoreInfoNameToMetastoreIdMap and returns a single MetastoreInfo.

Returns an error if there's more than one MetastoreInfo with the same .Name.

Note: All MetastoreInfo instances are loaded into memory before returning matching by name.

This method is generated by Databricks SDK Code Generator.
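
A minimal sketch resolving a metastore by its user-facing name (the name is a placeholder); because every MetastoreInfo is listed first, prefer GetById when the ID is already known:

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

m, err := w.Metastores.GetByName(ctx, "primary-metastore") // placeholder name
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", m.MetastoreId)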

func (*MetastoresAPI) Impl

func (a *MetastoresAPI) Impl() MetastoresService

Impl returns the low-level Metastores API implementation. Deprecated: use MockMetastoresInterface instead.

func (*MetastoresAPI) List added in v0.24.0

func (a *MetastoresAPI) List(ctx context.Context) listing.Iterator[MetastoreInfo]

List metastores.

Gets an array of the available metastores (as __MetastoreInfo__ objects). The caller must be an admin to retrieve this info. There is no guarantee of a specific ordering of the elements in the array.

This method is generated by Databricks SDK Code Generator.

func (*MetastoresAPI) ListAll

func (a *MetastoresAPI) ListAll(ctx context.Context) ([]MetastoreInfo, error)

List metastores.

Gets an array of the available metastores (as __MetastoreInfo__ objects). The caller must be an admin to retrieve this info. There is no guarantee of a specific ordering of the elements in the array.

This method is generated by Databricks SDK Code Generator.

Example (Metastores)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

all, err := w.Metastores.ListAll(ctx)
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", all)
Output:

func (*MetastoresAPI) MetastoreInfoNameToMetastoreIdMap

func (a *MetastoresAPI) MetastoreInfoNameToMetastoreIdMap(ctx context.Context) (map[string]string, error)

MetastoreInfoNameToMetastoreIdMap calls MetastoresAPI.ListAll and creates a map of results with MetastoreInfo.Name as key and MetastoreInfo.MetastoreId as value.

Returns an error if there's more than one MetastoreInfo with the same .Name.

Note: All MetastoreInfo instances are loaded into memory before creating a map.

This method is generated by Databricks SDK Code Generator.

func (*MetastoresAPI) Summary

func (a *MetastoresAPI) Summary(ctx context.Context) (*GetMetastoreSummaryResponse, error)

Get a metastore summary.

Gets information about a metastore. This summary includes the storage credential, the cloud vendor, the cloud region, and the global metastore ID.

Example (Metastores)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

summary, err := w.Metastores.Summary(ctx)
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", summary)
Output:

func (*MetastoresAPI) Unassign

func (a *MetastoresAPI) Unassign(ctx context.Context, request UnassignRequest) error

Delete an assignment.

Deletes a metastore assignment. The caller must be an account administrator.

Example (Metastores)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

workspaceId := func(v string) int64 {
	i, err := strconv.ParseInt(v, 10, 64)
	if err != nil {
		panic(fmt.Sprintf("`%s` is not int64: %s", v, err))
	}
	return i
}(os.Getenv("DUMMY_WORKSPACE_ID"))

created, err := w.Metastores.Create(ctx, catalog.CreateMetastore{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	StorageRoot: fmt.Sprintf("s3://%s/%s", os.Getenv("TEST_BUCKET"), fmt.Sprintf("sdk-%x", time.Now().UnixNano())),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

err = w.Metastores.Unassign(ctx, catalog.UnassignRequest{
	MetastoreId: created.MetastoreId,
	WorkspaceId: workspaceId,
})
if err != nil {
	panic(err)
}

// cleanup

err = w.Metastores.Delete(ctx, catalog.DeleteMetastoreRequest{
	Id:    created.MetastoreId,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

func (*MetastoresAPI) UnassignByWorkspaceId

func (a *MetastoresAPI) UnassignByWorkspaceId(ctx context.Context, workspaceId int64) error

Delete an assignment.

Deletes a metastore assignment. The caller must be an account administrator.

func (*MetastoresAPI) Update

func (a *MetastoresAPI) Update(ctx context.Context, request UpdateMetastore) (*MetastoreInfo, error)

Update a metastore.

Updates information for a specific metastore. The caller must be a metastore admin. If the __owner__ field is set to the empty string (**""**), the ownership is updated to the System User.

Example (Metastores)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

created, err := w.Metastores.Create(ctx, catalog.CreateMetastore{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	StorageRoot: fmt.Sprintf("s3://%s/%s", os.Getenv("TEST_BUCKET"), fmt.Sprintf("sdk-%x", time.Now().UnixNano())),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

_, err = w.Metastores.Update(ctx, catalog.UpdateMetastore{
	Id:      created.MetastoreId,
	NewName: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}

// cleanup

err = w.Metastores.Delete(ctx, catalog.DeleteMetastoreRequest{
	Id:    created.MetastoreId,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

func (*MetastoresAPI) UpdateAssignment

func (a *MetastoresAPI) UpdateAssignment(ctx context.Context, request UpdateMetastoreAssignment) error

Update an assignment.

Updates a metastore assignment. This operation can be used to update __metastore_id__ or __default_catalog_name__ for a specified Workspace, if the Workspace is already assigned a metastore. The caller must be an account admin to update __metastore_id__; otherwise, the caller can be a Workspace admin.
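
A sketch that changes only the default catalog for a workspace that already has a metastore assigned; the workspace ID and catalog name are placeholders, and UpdateMetastoreAssignment is assumed to mirror the WorkspaceId and DefaultCatalogName fields of MetastoreAssignment above:

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

err = w.Metastores.UpdateAssignment(ctx, catalog.UpdateMetastoreAssignment{
	WorkspaceId:        1234567890123456, // placeholder workspace ID
	DefaultCatalogName: "main",           // placeholder catalog name
})
if err != nil {
	panic(err)
}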

func (*MetastoresAPI) WithImpl

func (a *MetastoresAPI) WithImpl(impl MetastoresService) MetastoresInterface

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockMetastoresInterface instead.

type MetastoresInterface added in v0.29.0

type MetastoresInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockMetastoresInterface instead.
	WithImpl(impl MetastoresService) MetastoresInterface

	// Impl returns low-level Metastores API implementation
	// Deprecated: use MockMetastoresInterface instead.
	Impl() MetastoresService

	// Create an assignment.
	//
	// Creates a new metastore assignment. If an assignment for the same
	// __workspace_id__ exists, it will be overwritten by the new __metastore_id__
	// and __default_catalog_name__. The caller must be an account admin.
	Assign(ctx context.Context, request CreateMetastoreAssignment) error

	// Create a metastore.
	//
	// Creates a new metastore based on a provided name and optional storage root
	// path. By default (if the __owner__ field is not set), the owner of the new
	// metastore is the user calling the __createMetastore__ API. If the __owner__
	// field is set to the empty string (**""**), the ownership is assigned to the
	// System User instead.
	Create(ctx context.Context, request CreateMetastore) (*MetastoreInfo, error)

	// Get metastore assignment for workspace.
	//
	// Gets the metastore assignment for the workspace being accessed.
	Current(ctx context.Context) (*MetastoreAssignment, error)

	// Delete a metastore.
	//
	// Deletes a metastore. The caller must be a metastore admin.
	Delete(ctx context.Context, request DeleteMetastoreRequest) error

	// Delete a metastore.
	//
	// Deletes a metastore. The caller must be a metastore admin.
	DeleteById(ctx context.Context, id string) error

	// Get a metastore.
	//
	// Gets a metastore that matches the supplied ID. The caller must be a metastore
	// admin to retrieve this info.
	Get(ctx context.Context, request GetMetastoreRequest) (*MetastoreInfo, error)

	// Get a metastore.
	//
	// Gets a metastore that matches the supplied ID. The caller must be a metastore
	// admin to retrieve this info.
	GetById(ctx context.Context, id string) (*MetastoreInfo, error)

	// List metastores.
	//
	// Gets an array of the available metastores (as __MetastoreInfo__ objects). The
	// caller must be an admin to retrieve this info. There is no guarantee of a
	// specific ordering of the elements in the array.
	//
	// This method is generated by Databricks SDK Code Generator.
	List(ctx context.Context) listing.Iterator[MetastoreInfo]

	// List metastores.
	//
	// Gets an array of the available metastores (as __MetastoreInfo__ objects). The
	// caller must be an admin to retrieve this info. There is no guarantee of a
	// specific ordering of the elements in the array.
	//
	// This method is generated by Databricks SDK Code Generator.
	ListAll(ctx context.Context) ([]MetastoreInfo, error)

	// MetastoreInfoNameToMetastoreIdMap calls [MetastoresAPI.ListAll] and creates a map of results with [MetastoreInfo].Name as key and [MetastoreInfo].MetastoreId as value.
	//
	// Returns an error if there's more than one [MetastoreInfo] with the same .Name.
	//
	// Note: All [MetastoreInfo] instances are loaded into memory before creating a map.
	//
	// This method is generated by Databricks SDK Code Generator.
	MetastoreInfoNameToMetastoreIdMap(ctx context.Context) (map[string]string, error)

	// GetByName calls [MetastoresAPI.MetastoreInfoNameToMetastoreIdMap] and returns a single [MetastoreInfo].
	//
	// Returns an error if there's more than one [MetastoreInfo] with the same .Name.
	//
	// Note: All [MetastoreInfo] instances are loaded into memory before returning matching by name.
	//
	// This method is generated by Databricks SDK Code Generator.
	GetByName(ctx context.Context, name string) (*MetastoreInfo, error)

	// Get a metastore summary.
	//
	// Gets information about a metastore. This summary includes the storage
	// credential, the cloud vendor, the cloud region, and the global metastore ID.
	Summary(ctx context.Context) (*GetMetastoreSummaryResponse, error)

	// Delete an assignment.
	//
	// Deletes a metastore assignment. The caller must be an account administrator.
	Unassign(ctx context.Context, request UnassignRequest) error

	// Delete an assignment.
	//
	// Deletes a metastore assignment. The caller must be an account administrator.
	UnassignByWorkspaceId(ctx context.Context, workspaceId int64) error

	// Update a metastore.
	//
	// Updates information for a specific metastore. The caller must be a metastore
	// admin. If the __owner__ field is set to the empty string (**""**), the
	// ownership is updated to the System User.
	Update(ctx context.Context, request UpdateMetastore) (*MetastoreInfo, error)

	// Update an assignment.
	//
	// Updates a metastore assignment. This operation can be used to update
	// __metastore_id__ or __default_catalog_name__ for a specified Workspace, if
	// the Workspace is already assigned a metastore. The caller must be an account
	// admin to update __metastore_id__; otherwise, the caller can be a Workspace
	// admin.
	UpdateAssignment(ctx context.Context, request UpdateMetastoreAssignment) error
}

type MetastoresService

type MetastoresService interface {

	// Create an assignment.
	//
	// Creates a new metastore assignment. If an assignment for the same
	// __workspace_id__ exists, it will be overwritten by the new
	// __metastore_id__ and __default_catalog_name__. The caller must be an
	// account admin.
	Assign(ctx context.Context, request CreateMetastoreAssignment) error

	// Create a metastore.
	//
	// Creates a new metastore based on a provided name and optional storage
	// root path. By default (if the __owner__ field is not set), the owner of
	// the new metastore is the user calling the __createMetastore__ API. If the
	// __owner__ field is set to the empty string (**""**), the ownership is
	// assigned to the System User instead.
	Create(ctx context.Context, request CreateMetastore) (*MetastoreInfo, error)

	// Get metastore assignment for workspace.
	//
	// Gets the metastore assignment for the workspace being accessed.
	Current(ctx context.Context) (*MetastoreAssignment, error)

	// Delete a metastore.
	//
	// Deletes a metastore. The caller must be a metastore admin.
	Delete(ctx context.Context, request DeleteMetastoreRequest) error

	// Get a metastore.
	//
	// Gets a metastore that matches the supplied ID. The caller must be a
	// metastore admin to retrieve this info.
	Get(ctx context.Context, request GetMetastoreRequest) (*MetastoreInfo, error)

	// List metastores.
	//
	// Gets an array of the available metastores (as __MetastoreInfo__ objects).
	// The caller must be an admin to retrieve this info. There is no guarantee
	// of a specific ordering of the elements in the array.
	//
	// Use ListAll() to get all MetastoreInfo instances
	List(ctx context.Context) (*ListMetastoresResponse, error)

	// Get a metastore summary.
	//
	// Gets information about a metastore. This summary includes the storage
	// credential, the cloud vendor, the cloud region, and the global metastore
	// ID.
	Summary(ctx context.Context) (*GetMetastoreSummaryResponse, error)

	// Delete an assignment.
	//
	// Deletes a metastore assignment. The caller must be an account
	// administrator.
	Unassign(ctx context.Context, request UnassignRequest) error

	// Update a metastore.
	//
	// Updates information for a specific metastore. The caller must be a
	// metastore admin. If the __owner__ field is set to the empty string
	// (**""**), the ownership is updated to the System User.
	Update(ctx context.Context, request UpdateMetastore) (*MetastoreInfo, error)

	// Update an assignment.
	//
	// Updates a metastore assignment. This operation can be used to update
	// __metastore_id__ or __default_catalog_name__ for a specified Workspace,
	// if the Workspace is already assigned a metastore. The caller must be an
	// account admin to update __metastore_id__; otherwise, the caller can be a
	// Workspace admin.
	UpdateAssignment(ctx context.Context, request UpdateMetastoreAssignment) error
}

A metastore is the top-level container of objects in Unity Catalog. It stores data assets (tables and views) and the permissions that govern access to them. Databricks account admins can create metastores and assign them to Databricks workspaces to control which workloads use each metastore. For a workspace to use Unity Catalog, it must have a Unity Catalog metastore attached.

Each metastore is configured with a root storage location in a cloud storage account. This storage location is used for metadata and managed table data.

NOTE: This metastore is distinct from the metastore included in Databricks workspaces created before Unity Catalog was released. If your workspace includes a legacy Hive metastore, the data in that metastore is available in a catalog named hive_metastore.

type ModelVersionInfo added in v0.18.0

type ModelVersionInfo struct {
	// Indicates whether the principal is limited to retrieving metadata for the
	// associated object through the BROWSE privilege when include_browse is
	// enabled in the request.
	BrowseOnly bool `json:"browse_only,omitempty"`
	// The name of the catalog containing the model version
	CatalogName string `json:"catalog_name,omitempty"`
	// The comment attached to the model version
	Comment string `json:"comment,omitempty"`

	CreatedAt int64 `json:"created_at,omitempty"`
	// The identifier of the user who created the model version
	CreatedBy string `json:"created_by,omitempty"`
	// The unique identifier of the model version
	Id string `json:"id,omitempty"`
	// The unique identifier of the metastore containing the model version
	MetastoreId string `json:"metastore_id,omitempty"`
	// The name of the parent registered model of the model version, relative to
	// parent schema
	ModelName string `json:"model_name,omitempty"`
	// Model version dependencies, for feature-store packaged models
	ModelVersionDependencies *DependencyList `json:"model_version_dependencies,omitempty"`
	// MLflow run ID used when creating the model version, if `source` was
	// generated by an experiment run stored in an MLflow tracking server
	RunId string `json:"run_id,omitempty"`
	// ID of the Databricks workspace containing the MLflow run that generated
	// this model version, if applicable
	RunWorkspaceId int `json:"run_workspace_id,omitempty"`
	// The name of the schema containing the model version, relative to parent
	// catalog
	SchemaName string `json:"schema_name,omitempty"`
	// URI indicating the location of the source artifacts (files) for the model
	// version
	Source string `json:"source,omitempty"`
	// Current status of the model version. Newly created model versions start
	// in PENDING_REGISTRATION status, then move to READY status once the model
	// version files are uploaded and the model version is finalized. Only model
	// versions in READY status can be loaded for inference or served.
	Status ModelVersionInfoStatus `json:"status,omitempty"`
	// The storage location on the cloud under which model version data files
	// are stored
	StorageLocation string `json:"storage_location,omitempty"`

	UpdatedAt int64 `json:"updated_at,omitempty"`
	// The identifier of the user who updated the model version last time
	UpdatedBy string `json:"updated_by,omitempty"`
	// Integer model version number, used to reference the model version in API
	// requests.
	Version int `json:"version,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ModelVersionInfo) MarshalJSON added in v0.23.0

func (s ModelVersionInfo) MarshalJSON() ([]byte, error)

func (*ModelVersionInfo) UnmarshalJSON added in v0.23.0

func (s *ModelVersionInfo) UnmarshalJSON(b []byte) error

type ModelVersionInfoStatus added in v0.18.0

type ModelVersionInfoStatus string

Current status of the model version. Newly created model versions start in PENDING_REGISTRATION status, then move to READY status once the model version files are uploaded and the model version is finalized. Only model versions in READY status can be loaded for inference or served.

const ModelVersionInfoStatusFailedRegistration ModelVersionInfoStatus = `FAILED_REGISTRATION`
const ModelVersionInfoStatusPendingRegistration ModelVersionInfoStatus = `PENDING_REGISTRATION`
const ModelVersionInfoStatusReady ModelVersionInfoStatus = `READY`

func (*ModelVersionInfoStatus) Set added in v0.18.0

func (f *ModelVersionInfoStatus) Set(v string) error

Set raw string value and validate it against allowed values

func (*ModelVersionInfoStatus) String added in v0.18.0

func (f *ModelVersionInfoStatus) String() string

String representation for fmt.Print

func (*ModelVersionInfoStatus) Type added in v0.18.0

func (f *ModelVersionInfoStatus) Type() string

Type always returns ModelVersionInfoStatus to satisfy [pflag.Value] interface

type ModelVersionsAPI added in v0.18.0

type ModelVersionsAPI struct {
	// contains filtered or unexported fields
}

Databricks provides a hosted version of MLflow Model Registry in Unity Catalog. Models in Unity Catalog provide centralized access control, auditing, lineage, and discovery of ML models across Databricks workspaces.

This API reference documents the REST endpoints for managing model versions in Unity Catalog. For more details, see the [registered models API docs](/api/workspace/registeredmodels).
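As a rough, hedged illustration of how this API is typically reached, the sketch below builds a workspace client from environment-based authentication and fetches a single model version. The `w.ModelVersions` accessor and the three-part model name `main.default.my_model` are assumptions made for the example, not part of this reference.

package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
)

func main() {
	ctx := context.Background()

	// Credentials are assumed to come from the environment
	// (e.g. DATABRICKS_HOST and DATABRICKS_TOKEN).
	w, err := databricks.NewWorkspaceClient()
	if err != nil {
		panic(err)
	}

	// Fetch version 1 of a hypothetical registered model.
	got, err := w.ModelVersions.GetByFullNameAndVersion(ctx, "main.default.my_model", 1)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", got)
}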

func NewModelVersions added in v0.18.0

func NewModelVersions(client *client.DatabricksClient) *ModelVersionsAPI

func (*ModelVersionsAPI) Delete added in v0.18.0

Delete a Model Version.

Deletes a model version from the specified registered model. Any aliases assigned to the model version will also be deleted.

The caller must be a metastore admin or an owner of the parent registered model. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

func (*ModelVersionsAPI) DeleteByFullNameAndVersion added in v0.18.0

func (a *ModelVersionsAPI) DeleteByFullNameAndVersion(ctx context.Context, fullName string, version int) error

Delete a Model Version.

Deletes a model version from the specified registered model. Any aliases assigned to the model version will also be deleted.

The caller must be a metastore admin or an owner of the parent registered model. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

func (*ModelVersionsAPI) Get added in v0.18.0

Get a Model Version.

Get a model version.

The caller must be a metastore admin or an owner of (or have the **EXECUTE** privilege on) the parent registered model. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

func (*ModelVersionsAPI) GetByAlias added in v0.18.0

func (a *ModelVersionsAPI) GetByAlias(ctx context.Context, request GetByAliasRequest) (*ModelVersionInfo, error)

Get Model Version By Alias.

Get a model version by alias.

The caller must be a metastore admin or an owner of (or have the **EXECUTE** privilege on) the registered model. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

func (*ModelVersionsAPI) GetByAliasByFullNameAndAlias added in v0.18.0

func (a *ModelVersionsAPI) GetByAliasByFullNameAndAlias(ctx context.Context, fullName string, alias string) (*ModelVersionInfo, error)

Get Model Version By Alias.

Get a model version by alias.

The caller must be a metastore admin or an owner of (or have the **EXECUTE** privilege on) the registered model. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.
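A minimal sketch of resolving an alias follows; the alias name "champion" and the model name are placeholders, and the client setup mirrors the earlier ModelVersionsAPI example.

package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
)

func main() {
	ctx := context.Background()
	w, err := databricks.NewWorkspaceClient()
	if err != nil {
		panic(err)
	}

	// Resolve whichever version the hypothetical "champion" alias points at.
	mv, err := w.ModelVersions.GetByAliasByFullNameAndAlias(ctx, "main.default.my_model", "champion")
	if err != nil {
		panic(err)
	}
	fmt.Println(mv.Version, mv.Status)
}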

func (*ModelVersionsAPI) GetByFullNameAndVersion added in v0.18.0

func (a *ModelVersionsAPI) GetByFullNameAndVersion(ctx context.Context, fullName string, version int) (*RegisteredModelInfo, error)

Get a Model Version.

Get a model version.

The caller must be a metastore admin or an owner of (or have the **EXECUTE** privilege on) the parent registered model. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

func (*ModelVersionsAPI) Impl added in v0.18.0

Impl returns low-level ModelVersions API implementation. Deprecated: use MockModelVersionsInterface instead.

func (*ModelVersionsAPI) List added in v0.24.0

List Model Versions.

List model versions. You can list model versions under a particular schema, or list all model versions in the current metastore.

The returned models are filtered based on the privileges of the calling user. For example, the metastore admin is able to list all the model versions. A regular user needs to be the owner or have the **EXECUTE** privilege on the parent registered model to receive the model versions in the response. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

There is no guarantee of a specific ordering of the elements in the response. The elements in the response will not contain any aliases or tags.

This method is generated by Databricks SDK Code Generator.

func (*ModelVersionsAPI) ListAll added in v0.18.0

List Model Versions.

List model versions. You can list model versions under a particular schema, or list all model versions in the current metastore.

The returned models are filtered based on the privileges of the calling user. For example, the metastore admin is able to list all the model versions. A regular user needs to be the owner or have the **EXECUTE** privilege on the parent registered model to receive the model versions in the response. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

There is no guarantee of a specific ordering of the elements in the response. The elements in the response will not contain any aliases or tags.

This method is generated by Databricks SDK Code Generator.
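For example, listing every version of one registered model might look like the following sketch; the FullName field on the request is assumed to carry the three-part model name, and the model name itself is a placeholder.

package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/catalog"
)

func main() {
	ctx := context.Background()
	w, err := databricks.NewWorkspaceClient()
	if err != nil {
		panic(err)
	}

	// List all versions of a hypothetical registered model.
	versions, err := w.ModelVersions.ListAll(ctx, catalog.ListModelVersionsRequest{
		FullName: "main.default.my_model",
	})
	if err != nil {
		panic(err)
	}
	for _, v := range versions {
		fmt.Println(v.Version, v.Status)
	}
}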

func (*ModelVersionsAPI) ListByFullName added in v0.18.0

func (a *ModelVersionsAPI) ListByFullName(ctx context.Context, fullName string) (*ListModelVersionsResponse, error)

List Model Versions.

List model versions. You can list model versions under a particular schema, or list all model versions in the current metastore.

The returned models are filtered based on the privileges of the calling user. For example, the metastore admin is able to list all the model versions. A regular user needs to be the owner or have the **EXECUTE** privilege on the parent registered model to receive the model versions in the response. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

There is no guarantee of a specific ordering of the elements in the response. The elements in the response will not contain any aliases or tags.

func (*ModelVersionsAPI) Update added in v0.18.0

Update a Model Version.

Updates the specified model version.

The caller must be a metastore admin or an owner of the parent registered model. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

Currently only the comment of the model version can be updated.
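Since only the comment can change, an update is a small call. In this hedged sketch, the FullName, Version, and Comment fields on UpdateModelVersionRequest are assumed, and the model name and comment text are placeholders.

package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/catalog"
)

func main() {
	ctx := context.Background()
	w, err := databricks.NewWorkspaceClient()
	if err != nil {
		panic(err)
	}

	// Attach a new comment to version 1 of a hypothetical model.
	updated, err := w.ModelVersions.Update(ctx, catalog.UpdateModelVersionRequest{
		FullName: "main.default.my_model",
		Version:  1,
		Comment:  "validated against the Q3 holdout set",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(updated.Comment)
}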

func (*ModelVersionsAPI) WithImpl added in v0.18.0

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockModelVersionsInterface instead.

type ModelVersionsInterface added in v0.29.0

type ModelVersionsInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockModelVersionsInterface instead.
	WithImpl(impl ModelVersionsService) ModelVersionsInterface

	// Impl returns low-level ModelVersions API implementation
	// Deprecated: use MockModelVersionsInterface instead.
	Impl() ModelVersionsService

	// Delete a Model Version.
	//
	// Deletes a model version from the specified registered model. Any aliases
	// assigned to the model version will also be deleted.
	//
	// The caller must be a metastore admin or an owner of the parent registered
	// model. For the latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	Delete(ctx context.Context, request DeleteModelVersionRequest) error

	// Delete a Model Version.
	//
	// Deletes a model version from the specified registered model. Any aliases
	// assigned to the model version will also be deleted.
	//
	// The caller must be a metastore admin or an owner of the parent registered
	// model. For the latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	DeleteByFullNameAndVersion(ctx context.Context, fullName string, version int) error

	// Get a Model Version.
	//
	// Get a model version.
	//
	// The caller must be a metastore admin or an owner of (or have the **EXECUTE**
	// privilege on) the parent registered model. For the latter case, the caller
	// must also be the owner or have the **USE_CATALOG** privilege on the parent
	// catalog and the **USE_SCHEMA** privilege on the parent schema.
	Get(ctx context.Context, request GetModelVersionRequest) (*RegisteredModelInfo, error)

	// Get a Model Version.
	//
	// Get a model version.
	//
	// The caller must be a metastore admin or an owner of (or have the **EXECUTE**
	// privilege on) the parent registered model. For the latter case, the caller
	// must also be the owner or have the **USE_CATALOG** privilege on the parent
	// catalog and the **USE_SCHEMA** privilege on the parent schema.
	GetByFullNameAndVersion(ctx context.Context, fullName string, version int) (*RegisteredModelInfo, error)

	// Get Model Version By Alias.
	//
	// Get a model version by alias.
	//
	// The caller must be a metastore admin or an owner of (or have the **EXECUTE**
	// privilege on) the registered model. For the latter case, the caller must also
	// be the owner or have the **USE_CATALOG** privilege on the parent catalog and
	// the **USE_SCHEMA** privilege on the parent schema.
	GetByAlias(ctx context.Context, request GetByAliasRequest) (*ModelVersionInfo, error)

	// Get Model Version By Alias.
	//
	// Get a model version by alias.
	//
	// The caller must be a metastore admin or an owner of (or have the **EXECUTE**
	// privilege on) the registered model. For the latter case, the caller must also
	// be the owner or have the **USE_CATALOG** privilege on the parent catalog and
	// the **USE_SCHEMA** privilege on the parent schema.
	GetByAliasByFullNameAndAlias(ctx context.Context, fullName string, alias string) (*ModelVersionInfo, error)

	// List Model Versions.
	//
	// List model versions. You can list model versions under a particular schema,
	// or list all model versions in the current metastore.
	//
	// The returned models are filtered based on the privileges of the calling user.
	// For example, the metastore admin is able to list all the model versions. A
	// regular user needs to be the owner or have the **EXECUTE** privilege on the
	// parent registered model to receive the model versions in the response. For
	// the latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	//
	// There is no guarantee of a specific ordering of the elements in the response.
	// The elements in the response will not contain any aliases or tags.
	//
	// This method is generated by Databricks SDK Code Generator.
	List(ctx context.Context, request ListModelVersionsRequest) listing.Iterator[ModelVersionInfo]

	// List Model Versions.
	//
	// List model versions. You can list model versions under a particular schema,
	// or list all model versions in the current metastore.
	//
	// The returned models are filtered based on the privileges of the calling user.
	// For example, the metastore admin is able to list all the model versions. A
	// regular user needs to be the owner or have the **EXECUTE** privilege on the
	// parent registered model to receive the model versions in the response. For
	// the latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	//
	// There is no guarantee of a specific ordering of the elements in the response.
	// The elements in the response will not contain any aliases or tags.
	//
	// This method is generated by Databricks SDK Code Generator.
	ListAll(ctx context.Context, request ListModelVersionsRequest) ([]ModelVersionInfo, error)

	// List Model Versions.
	//
	// List model versions. You can list model versions under a particular schema,
	// or list all model versions in the current metastore.
	//
	// The returned models are filtered based on the privileges of the calling user.
	// For example, the metastore admin is able to list all the model versions. A
	// regular user needs to be the owner or have the **EXECUTE** privilege on the
	// parent registered model to receive the model versions in the response. For
	// the latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	//
	// There is no guarantee of a specific ordering of the elements in the response.
	// The elements in the response will not contain any aliases or tags.
	ListByFullName(ctx context.Context, fullName string) (*ListModelVersionsResponse, error)

	// Update a Model Version.
	//
	// Updates the specified model version.
	//
	// The caller must be a metastore admin or an owner of the parent registered
	// model. For the latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	//
	// Currently only the comment of the model version can be updated.
	Update(ctx context.Context, request UpdateModelVersionRequest) (*ModelVersionInfo, error)
}

type ModelVersionsService added in v0.18.0

type ModelVersionsService interface {

	// Delete a Model Version.
	//
	// Deletes a model version from the specified registered model. Any aliases
	// assigned to the model version will also be deleted.
	//
	// The caller must be a metastore admin or an owner of the parent registered
	// model. For the latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	Delete(ctx context.Context, request DeleteModelVersionRequest) error

	// Get a Model Version.
	//
	// Get a model version.
	//
	// The caller must be a metastore admin or an owner of (or have the
	// **EXECUTE** privilege on) the parent registered model. For the latter
	// case, the caller must also be the owner or have the **USE_CATALOG**
	// privilege on the parent catalog and the **USE_SCHEMA** privilege on the
	// parent schema.
	Get(ctx context.Context, request GetModelVersionRequest) (*RegisteredModelInfo, error)

	// Get Model Version By Alias.
	//
	// Get a model version by alias.
	//
	// The caller must be a metastore admin or an owner of (or have the
	// **EXECUTE** privilege on) the registered model. For the latter case, the
	// caller must also be the owner or have the **USE_CATALOG** privilege on
	// the parent catalog and the **USE_SCHEMA** privilege on the parent schema.
	GetByAlias(ctx context.Context, request GetByAliasRequest) (*ModelVersionInfo, error)

	// List Model Versions.
	//
	// List model versions. You can list model versions under a particular
	// schema, or list all model versions in the current metastore.
	//
	// The returned models are filtered based on the privileges of the calling
	// user. For example, the metastore admin is able to list all the model
	// versions. A regular user needs to be the owner or have the **EXECUTE**
	// privilege on the parent registered model to receive the model versions in
	// the response. For the latter case, the caller must also be the owner or
	// have the **USE_CATALOG** privilege on the parent catalog and the
	// **USE_SCHEMA** privilege on the parent schema.
	//
	// There is no guarantee of a specific ordering of the elements in the
	// response. The elements in the response will not contain any aliases or
	// tags.
	//
	// Use ListAll() to get all ModelVersionInfo instances, which will iterate over every result page.
	List(ctx context.Context, request ListModelVersionsRequest) (*ListModelVersionsResponse, error)

	// Update a Model Version.
	//
	// Updates the specified model version.
	//
	// The caller must be a metastore admin or an owner of the parent registered
	// model. For the latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	//
	// Currently only the comment of the model version can be updated.
	Update(ctx context.Context, request UpdateModelVersionRequest) (*ModelVersionInfo, error)
}

Databricks provides a hosted version of MLflow Model Registry in Unity Catalog. Models in Unity Catalog provide centralized access control, auditing, lineage, and discovery of ML models across Databricks workspaces.

This API reference documents the REST endpoints for managing model versions in Unity Catalog. For more details, see the [registered models API docs](/api/workspace/registeredmodels).

type MonitorCronSchedule added in v0.30.0

type MonitorCronSchedule struct {
	// Read only field that indicates whether a schedule is paused or not.
	PauseStatus MonitorCronSchedulePauseStatus `json:"pause_status,omitempty"`
	// The expression that determines when to run the monitor. See [examples].
	//
	// [examples]: https://www.quartz-scheduler.org/documentation/quartz-2.3.0/tutorials/crontrigger.html
	QuartzCronExpression string `json:"quartz_cron_expression"`
	// The timezone id (e.g., `"PST"`) in which to evaluate the quartz
	// expression.
	TimezoneId string `json:"timezone_id"`
}
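A minimal sketch of a schedule value, assuming a daily run at midnight UTC; the Quartz expression follows the seconds-minutes-hours-day-month-weekday layout described in the linked Quartz documentation.

package main

import (
	"fmt"

	"github.com/databricks/databricks-sdk-go/service/catalog"
)

func main() {
	// Run every day at midnight, evaluated in UTC.
	sched := catalog.MonitorCronSchedule{
		QuartzCronExpression: "0 0 0 * * ?",
		TimezoneId:           "UTC",
	}
	fmt.Printf("%+v\n", sched)
}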

type MonitorCronSchedulePauseStatus added in v0.30.0

type MonitorCronSchedulePauseStatus string

Read only field that indicates whether a schedule is paused or not.

const MonitorCronSchedulePauseStatusPaused MonitorCronSchedulePauseStatus = `PAUSED`
const MonitorCronSchedulePauseStatusUnpaused MonitorCronSchedulePauseStatus = `UNPAUSED`

func (*MonitorCronSchedulePauseStatus) Set added in v0.30.0

Set raw string value and validate it against allowed values

func (*MonitorCronSchedulePauseStatus) String added in v0.30.0

String representation for fmt.Print

func (*MonitorCronSchedulePauseStatus) Type added in v0.30.0

Type always returns MonitorCronSchedulePauseStatus to satisfy [pflag.Value] interface

type MonitorDataClassificationConfig added in v0.30.0

type MonitorDataClassificationConfig struct {
	// Whether data classification is enabled.
	Enabled bool `json:"enabled,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (MonitorDataClassificationConfig) MarshalJSON added in v0.30.0

func (s MonitorDataClassificationConfig) MarshalJSON() ([]byte, error)

func (*MonitorDataClassificationConfig) UnmarshalJSON added in v0.30.0

func (s *MonitorDataClassificationConfig) UnmarshalJSON(b []byte) error

type MonitorDestination added in v0.38.0

type MonitorDestination struct {
	// The list of email addresses to send the notification to. A maximum of 5
	// email addresses is supported.
	EmailAddresses []string `json:"email_addresses,omitempty"`
}

type MonitorInferenceLog added in v0.38.0

type MonitorInferenceLog struct {
	// Granularities for aggregating data into time windows based on their
	// timestamp. Currently the following static granularities are supported:
	// {`"5 minutes"`, `"30 minutes"`, `"1 hour"`, `"1 day"`, `"<n>
	// week(s)"`, `"1 month"`, `"1 year"`}.
	Granularities []string `json:"granularities"`
	// Optional column that contains the ground truth for the prediction.
	LabelCol string `json:"label_col,omitempty"`
	// Column that contains the id of the model generating the predictions.
	// Metrics will be computed per model id by default, and also across all
	// model ids.
	ModelIdCol string `json:"model_id_col"`
	// Column that contains the output/prediction from the model.
	PredictionCol string `json:"prediction_col"`
	// Optional column that contains the prediction probabilities for each class
	// in a classification problem type. The values in this column should be a
	// map, mapping each class label to the prediction probability for a given
	// sample. The map should be of PySpark MapType().
	PredictionProbaCol string `json:"prediction_proba_col,omitempty"`
	// Problem type the model aims to solve. Determines the type of
	// model-quality metrics that will be computed.
	ProblemType MonitorInferenceLogProblemType `json:"problem_type"`
	// Column that contains the timestamps of requests. The column must be one
	// of the following: - A `TimestampType` column - A column whose values
	// can be converted to timestamps through the pyspark `to_timestamp`
	// [function].
	//
	// [function]: https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.functions.to_timestamp.html
	TimestampCol string `json:"timestamp_col"`

	ForceSendFields []string `json:"-"`
}

func (MonitorInferenceLog) MarshalJSON added in v0.38.0

func (s MonitorInferenceLog) MarshalJSON() ([]byte, error)

func (*MonitorInferenceLog) UnmarshalJSON added in v0.38.0

func (s *MonitorInferenceLog) UnmarshalJSON(b []byte) error
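A hedged sketch of an inference-log configuration for a classification model; every column name below is a placeholder for whatever the monitored inference table actually contains.

package main

import (
	"fmt"

	"github.com/databricks/databricks-sdk-go/service/catalog"
)

func main() {
	// Hypothetical column names in the monitored inference table.
	infLog := catalog.MonitorInferenceLog{
		Granularities: []string{"1 day"},
		TimestampCol:  "request_ts",
		ModelIdCol:    "model_id",
		PredictionCol: "prediction",
		LabelCol:      "label",
		ProblemType:   catalog.MonitorInferenceLogProblemTypeProblemTypeClassification,
	}
	fmt.Printf("%+v\n", infLog)
}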

type MonitorInferenceLogProblemType added in v0.38.0

type MonitorInferenceLogProblemType string

Problem type the model aims to solve. Determines the type of model-quality metrics that will be computed.

const MonitorInferenceLogProblemTypeProblemTypeClassification MonitorInferenceLogProblemType = `PROBLEM_TYPE_CLASSIFICATION`
const MonitorInferenceLogProblemTypeProblemTypeRegression MonitorInferenceLogProblemType = `PROBLEM_TYPE_REGRESSION`

func (*MonitorInferenceLogProblemType) Set added in v0.38.0

Set raw string value and validate it against allowed values

func (*MonitorInferenceLogProblemType) String added in v0.38.0

String representation for fmt.Print

func (*MonitorInferenceLogProblemType) Type added in v0.38.0

Type always returns MonitorInferenceLogProblemType to satisfy [pflag.Value] interface

type MonitorInfo added in v0.30.0

type MonitorInfo struct {
	// The directory to store monitoring assets (e.g. dashboard, metric tables).
	AssetsDir string `json:"assets_dir,omitempty"`
	// Name of the baseline table from which drift metrics are computed.
	// Columns in the monitored table should also be present in the baseline
	// table.
	BaselineTableName string `json:"baseline_table_name,omitempty"`
	// Custom metrics to compute on the monitored table. These can be aggregate
	// metrics, derived metrics (from already computed aggregate metrics), or
	// drift metrics (comparing metrics across time windows).
	CustomMetrics []MonitorMetric `json:"custom_metrics,omitempty"`
	// Id of dashboard that visualizes the computed metrics. This can be empty
	// if the monitor is in PENDING state.
	DashboardId string `json:"dashboard_id,omitempty"`
	// The data classification config for the monitor.
	DataClassificationConfig *MonitorDataClassificationConfig `json:"data_classification_config,omitempty"`
	// The full name of the drift metrics table. Format:
	// __catalog_name__.__schema_name__.__table_name__.
	DriftMetricsTableName string `json:"drift_metrics_table_name"`
	// Configuration for monitoring inference logs.
	InferenceLog *MonitorInferenceLog `json:"inference_log,omitempty"`
	// The latest failure message of the monitor (if any).
	LatestMonitorFailureMsg string `json:"latest_monitor_failure_msg,omitempty"`
	// The version of the monitor config (e.g. 1,2,3). If negative, the monitor
	// may be corrupted.
	MonitorVersion string `json:"monitor_version"`
	// The notification settings for the monitor.
	Notifications *MonitorNotifications `json:"notifications,omitempty"`
	// Schema where output metric tables are created.
	OutputSchemaName string `json:"output_schema_name,omitempty"`
	// The full name of the profile metrics table. Format:
	// __catalog_name__.__schema_name__.__table_name__.
	ProfileMetricsTableName string `json:"profile_metrics_table_name"`
	// The schedule for automatically updating and refreshing metric tables.
	Schedule *MonitorCronSchedule `json:"schedule,omitempty"`
	// List of column expressions to slice data with for targeted analysis. The
	// data is grouped by each expression independently, resulting in a separate
	// slice for each predicate and its complements. For high-cardinality
	// columns, only the top 100 unique values by frequency will generate
	// slices.
	SlicingExprs []string `json:"slicing_exprs,omitempty"`
	// Configuration for monitoring snapshot tables.
	Snapshot *MonitorSnapshot `json:"snapshot,omitempty"`
	// The status of the monitor.
	Status MonitorInfoStatus `json:"status"`
	// The full name of the table to monitor. Format:
	// __catalog_name__.__schema_name__.__table_name__.
	TableName string `json:"table_name"`
	// Configuration for monitoring time series tables.
	TimeSeries *MonitorTimeSeries `json:"time_series,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (MonitorInfo) MarshalJSON added in v0.30.0

func (s MonitorInfo) MarshalJSON() ([]byte, error)

func (*MonitorInfo) UnmarshalJSON added in v0.30.0

func (s *MonitorInfo) UnmarshalJSON(b []byte) error

type MonitorInfoStatus added in v0.30.0

type MonitorInfoStatus string

The status of the monitor.

const MonitorInfoStatusMonitorStatusActive MonitorInfoStatus = `MONITOR_STATUS_ACTIVE`
const MonitorInfoStatusMonitorStatusDeletePending MonitorInfoStatus = `MONITOR_STATUS_DELETE_PENDING`
const MonitorInfoStatusMonitorStatusError MonitorInfoStatus = `MONITOR_STATUS_ERROR`
const MonitorInfoStatusMonitorStatusFailed MonitorInfoStatus = `MONITOR_STATUS_FAILED`
const MonitorInfoStatusMonitorStatusPending MonitorInfoStatus = `MONITOR_STATUS_PENDING`

func (*MonitorInfoStatus) Set added in v0.30.0

func (f *MonitorInfoStatus) Set(v string) error

Set raw string value and validate it against allowed values

func (*MonitorInfoStatus) String added in v0.30.0

func (f *MonitorInfoStatus) String() string

String representation for fmt.Print

func (*MonitorInfoStatus) Type added in v0.30.0

func (f *MonitorInfoStatus) Type() string

Type always returns MonitorInfoStatus to satisfy [pflag.Value] interface

type MonitorMetric added in v0.38.0

type MonitorMetric struct {
	// Jinja template for a SQL expression that specifies how to compute the
	// metric. See [create metric definition].
	//
	// [create metric definition]: https://docs.databricks.com/en/lakehouse-monitoring/custom-metrics.html#create-definition
	Definition string `json:"definition"`
	// A list of column names in the input table the metric should be computed
	// for. Can use `":table"` to indicate that the metric needs information
	// from multiple columns.
	InputColumns []string `json:"input_columns"`
	// Name of the metric in the output tables.
	Name string `json:"name"`
	// The output type of the custom metric.
	OutputDataType string `json:"output_data_type"`
	// Can only be one of `"CUSTOM_METRIC_TYPE_AGGREGATE"`,
	// `"CUSTOM_METRIC_TYPE_DERIVED"`, or `"CUSTOM_METRIC_TYPE_DRIFT"`. The
	// `"CUSTOM_METRIC_TYPE_AGGREGATE"` and `"CUSTOM_METRIC_TYPE_DERIVED"`
	// metrics are computed on a single table, whereas
	// `"CUSTOM_METRIC_TYPE_DRIFT"` compares metrics across a baseline and an
	// input table, or across two consecutive time windows. -
	// CUSTOM_METRIC_TYPE_AGGREGATE: depends only on the existing columns in
	// your table - CUSTOM_METRIC_TYPE_DERIVED: depends on previously computed
	// aggregate metrics - CUSTOM_METRIC_TYPE_DRIFT: depends on previously
	// computed aggregate or derived metrics
	Type MonitorMetricType `json:"type"`
}
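As a hedged sketch, an aggregate custom metric could look like the following; the metric name, input column, and the exact shape of the Jinja definition and output type are assumptions, so see the linked custom-metrics documentation for the authoritative format.

package main

import (
	"fmt"

	"github.com/databricks/databricks-sdk-go/service/catalog"
)

func main() {
	// A hypothetical aggregate metric: average of an "order_value" column.
	metric := catalog.MonitorMetric{
		Name:           "avg_order_value",
		Definition:     "avg({{input_column}})", // Jinja template over the input column
		InputColumns:   []string{"order_value"},
		OutputDataType: "double",
		Type:           catalog.MonitorMetricTypeCustomMetricTypeAggregate,
	}
	fmt.Printf("%+v\n", metric)
}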

type MonitorMetricType added in v0.38.0

type MonitorMetricType string

Can only be one of `"CUSTOM_METRIC_TYPE_AGGREGATE"`, `"CUSTOM_METRIC_TYPE_DERIVED"`, or `"CUSTOM_METRIC_TYPE_DRIFT"`. The `"CUSTOM_METRIC_TYPE_AGGREGATE"` and `"CUSTOM_METRIC_TYPE_DERIVED"` metrics are computed on a single table, whereas `"CUSTOM_METRIC_TYPE_DRIFT"` compares metrics across a baseline and an input table, or across two consecutive time windows. - CUSTOM_METRIC_TYPE_AGGREGATE: depends only on the existing columns in your table - CUSTOM_METRIC_TYPE_DERIVED: depends on previously computed aggregate metrics - CUSTOM_METRIC_TYPE_DRIFT: depends on previously computed aggregate or derived metrics

const MonitorMetricTypeCustomMetricTypeAggregate MonitorMetricType = `CUSTOM_METRIC_TYPE_AGGREGATE`
const MonitorMetricTypeCustomMetricTypeDerived MonitorMetricType = `CUSTOM_METRIC_TYPE_DERIVED`
const MonitorMetricTypeCustomMetricTypeDrift MonitorMetricType = `CUSTOM_METRIC_TYPE_DRIFT`

func (*MonitorMetricType) Set added in v0.38.0

func (f *MonitorMetricType) Set(v string) error

Set raw string value and validate it against allowed values

func (*MonitorMetricType) String added in v0.38.0

func (f *MonitorMetricType) String() string

String representation for fmt.Print

func (*MonitorMetricType) Type added in v0.38.0

func (f *MonitorMetricType) Type() string

Type always returns MonitorMetricType to satisfy [pflag.Value] interface

type MonitorNotifications added in v0.38.0

type MonitorNotifications struct {
	// Who to send notifications to on monitor failure.
	OnFailure *MonitorDestination `json:"on_failure,omitempty"`
	// Who to send notifications to when new data classification tags are
	// detected.
	OnNewClassificationTagDetected *MonitorDestination `json:"on_new_classification_tag_detected,omitempty"`
}

type MonitorRefreshInfo added in v0.31.0

type MonitorRefreshInfo struct {
	// Time at which refresh operation completed (milliseconds since 1/1/1970
	// UTC).
	EndTimeMs int64 `json:"end_time_ms,omitempty"`
	// An optional message to give insight into the current state of the job
	// (e.g. FAILURE messages).
	Message string `json:"message,omitempty"`
	// Unique id of the refresh operation.
	RefreshId int64 `json:"refresh_id"`
	// Time at which refresh operation was initiated (milliseconds since
	// 1/1/1970 UTC).
	StartTimeMs int64 `json:"start_time_ms"`
	// The current state of the refresh.
	State MonitorRefreshInfoState `json:"state"`
	// The method by which the refresh was triggered.
	Trigger MonitorRefreshInfoTrigger `json:"trigger,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (MonitorRefreshInfo) MarshalJSON added in v0.31.0

func (s MonitorRefreshInfo) MarshalJSON() ([]byte, error)

func (*MonitorRefreshInfo) UnmarshalJSON added in v0.31.0

func (s *MonitorRefreshInfo) UnmarshalJSON(b []byte) error

type MonitorRefreshInfoState added in v0.31.0

type MonitorRefreshInfoState string

The current state of the refresh.

const MonitorRefreshInfoStateCanceled MonitorRefreshInfoState = `CANCELED`
const MonitorRefreshInfoStateFailed MonitorRefreshInfoState = `FAILED`
const MonitorRefreshInfoStatePending MonitorRefreshInfoState = `PENDING`
const MonitorRefreshInfoStateRunning MonitorRefreshInfoState = `RUNNING`
const MonitorRefreshInfoStateSuccess MonitorRefreshInfoState = `SUCCESS`

func (*MonitorRefreshInfoState) Set added in v0.31.0

Set raw string value and validate it against allowed values

func (*MonitorRefreshInfoState) String added in v0.31.0

func (f *MonitorRefreshInfoState) String() string

String representation for fmt.Print

func (*MonitorRefreshInfoState) Type added in v0.31.0

func (f *MonitorRefreshInfoState) Type() string

Type always returns MonitorRefreshInfoState to satisfy [pflag.Value] interface

type MonitorRefreshInfoTrigger added in v0.38.0

type MonitorRefreshInfoTrigger string

The method by which the refresh was triggered.

const MonitorRefreshInfoTriggerManual MonitorRefreshInfoTrigger = `MANUAL`
const MonitorRefreshInfoTriggerSchedule MonitorRefreshInfoTrigger = `SCHEDULE`

func (*MonitorRefreshInfoTrigger) Set added in v0.38.0

Set raw string value and validate it against allowed values

func (*MonitorRefreshInfoTrigger) String added in v0.38.0

func (f *MonitorRefreshInfoTrigger) String() string

String representation for fmt.Print

func (*MonitorRefreshInfoTrigger) Type added in v0.38.0

Type always returns MonitorRefreshInfoTrigger to satisfy [pflag.Value] interface

type MonitorRefreshListResponse added in v0.41.0

type MonitorRefreshListResponse struct {
	// List of refreshes.
	Refreshes []MonitorRefreshInfo `json:"refreshes,omitempty"`
}

type MonitorSnapshot added in v0.38.0

type MonitorSnapshot struct {
}

type MonitorTimeSeries added in v0.38.0

type MonitorTimeSeries struct {
	// Granularities for aggregating data into time windows based on their
	// timestamp. Currently the following static granularities are supported:
	// {`"5 minutes"`, `"30 minutes"`, `"1 hour"`, `"1 day"`, `"<n>
	// week(s)"`, `"1 month"`, `"1 year"`}.
	Granularities []string `json:"granularities"`
	// Column that contains the timestamps of requests. The column must be one
	// of the following: - A `TimestampType` column - A column whose values
	// can be converted to timestamps through the pyspark `to_timestamp`
	// [function].
	//
	// [function]: https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.functions.to_timestamp.html
	TimestampCol string `json:"timestamp_col"`
}

type NamedTableConstraint

type NamedTableConstraint struct {
	// The name of the constraint.
	Name string `json:"name"`
}

type OnlineTable added in v0.33.0

type OnlineTable struct {
	// Full three-part (catalog, schema, table) name of the table.
	Name string `json:"name,omitempty"`
	// Specification of the online table.
	Spec *OnlineTableSpec `json:"spec,omitempty"`
	// Online Table status
	Status *OnlineTableStatus `json:"status,omitempty"`

	ForceSendFields []string `json:"-"`
}

Online Table information.

func (OnlineTable) MarshalJSON added in v0.33.0

func (s OnlineTable) MarshalJSON() ([]byte, error)

func (*OnlineTable) UnmarshalJSON added in v0.33.0

func (s *OnlineTable) UnmarshalJSON(b []byte) error

type OnlineTableSpec added in v0.33.0

type OnlineTableSpec struct {
	// Whether to create a full-copy pipeline -- a pipeline that stops after it
	// creates a full copy of the source table upon initialization and does not
	// process any change data feeds (CDFs) afterwards. The pipeline can still
	// be triggered manually afterwards, but it always performs a full copy of
	// the source table and there are no incremental updates. This mode is
	// useful for syncing views or tables without CDFs to online tables. Note
	// that the full-copy pipeline only supports the "triggered" scheduling policy.
	PerformFullCopy bool `json:"perform_full_copy,omitempty"`
	// ID of the associated pipeline. Generated by the server - cannot be set by
	// the caller.
	PipelineId string `json:"pipeline_id,omitempty"`
	// Primary Key columns to be used for data insert/update in the destination.
	PrimaryKeyColumns []string `json:"primary_key_columns,omitempty"`
	// Pipeline runs continuously after generating the initial data.
	RunContinuously *OnlineTableSpecContinuousSchedulingPolicy `json:"run_continuously,omitempty"`
	// Pipeline stops after generating the initial data and can be triggered
	// later (manually, through a cron job or through data triggers)
	RunTriggered *OnlineTableSpecTriggeredSchedulingPolicy `json:"run_triggered,omitempty"`
	// Three-part (catalog, schema, table) name of the source Delta table.
	SourceTableFullName string `json:"source_table_full_name,omitempty"`
	// Time series key to deduplicate (tie-break) rows with the same primary
	// key.
	TimeseriesKey string `json:"timeseries_key,omitempty"`

	ForceSendFields []string `json:"-"`
}

Specification of an online table.

func (OnlineTableSpec) MarshalJSON added in v0.33.0

func (s OnlineTableSpec) MarshalJSON() ([]byte, error)

func (*OnlineTableSpec) UnmarshalJSON added in v0.33.0

func (s *OnlineTableSpec) UnmarshalJSON(b []byte) error
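A hedged sketch of a triggered-refresh spec; the source table, primary key column, and time series key are placeholders.

package main

import (
	"fmt"

	"github.com/databricks/databricks-sdk-go/service/catalog"
)

func main() {
	// A spec that seeds the online table from a hypothetical Delta table and
	// refreshes only when triggered.
	spec := catalog.OnlineTableSpec{
		SourceTableFullName: "main.default.user_features",
		PrimaryKeyColumns:   []string{"user_id"},
		TimeseriesKey:       "updated_at",
		RunTriggered:        &catalog.OnlineTableSpecTriggeredSchedulingPolicy{},
	}
	fmt.Printf("%+v\n", spec)
}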

type OnlineTableSpecContinuousSchedulingPolicy added in v0.34.0

type OnlineTableSpecContinuousSchedulingPolicy struct {
}

type OnlineTableSpecTriggeredSchedulingPolicy added in v0.34.0

type OnlineTableSpecTriggeredSchedulingPolicy struct {
}

type OnlineTableState added in v0.33.0

type OnlineTableState string

The state of an online table.

const OnlineTableStateOffline OnlineTableState = `OFFLINE`
const OnlineTableStateOfflineFailed OnlineTableState = `OFFLINE_FAILED`
const OnlineTableStateOnline OnlineTableState = `ONLINE`
const OnlineTableStateOnlineContinuousUpdate OnlineTableState = `ONLINE_CONTINUOUS_UPDATE`
const OnlineTableStateOnlineNoPendingUpdate OnlineTableState = `ONLINE_NO_PENDING_UPDATE`
const OnlineTableStateOnlinePipelineFailed OnlineTableState = `ONLINE_PIPELINE_FAILED`
const OnlineTableStateOnlineTableStateUnspecified OnlineTableState = `ONLINE_TABLE_STATE_UNSPECIFIED`
const OnlineTableStateOnlineTriggeredUpdate OnlineTableState = `ONLINE_TRIGGERED_UPDATE`
const OnlineTableStateOnlineUpdatingPipelineResources OnlineTableState = `ONLINE_UPDATING_PIPELINE_RESOURCES`
const OnlineTableStateProvisioning OnlineTableState = `PROVISIONING`
const OnlineTableStateProvisioningInitialSnapshot OnlineTableState = `PROVISIONING_INITIAL_SNAPSHOT`
const OnlineTableStateProvisioningPipelineResources OnlineTableState = `PROVISIONING_PIPELINE_RESOURCES`

func (*OnlineTableState) Set added in v0.33.0

func (f *OnlineTableState) Set(v string) error

Set raw string value and validate it against allowed values

func (*OnlineTableState) String added in v0.33.0

func (f *OnlineTableState) String() string

String representation for fmt.Print

func (*OnlineTableState) Type added in v0.33.0

func (f *OnlineTableState) Type() string

Type always returns OnlineTableState to satisfy [pflag.Value] interface

type OnlineTableStatus added in v0.33.0

type OnlineTableStatus struct {
	// Detailed status of an online table. Shown if the online table is in the
	// ONLINE_CONTINUOUS_UPDATE or the ONLINE_UPDATING_PIPELINE_RESOURCES state.
	ContinuousUpdateStatus *ContinuousUpdateStatus `json:"continuous_update_status,omitempty"`
	// The state of the online table.
	DetailedState OnlineTableState `json:"detailed_state,omitempty"`
	// Detailed status of an online table. Shown if the online table is in the
	// OFFLINE_FAILED or the ONLINE_PIPELINE_FAILED state.
	FailedStatus *FailedStatus `json:"failed_status,omitempty"`
	// A text description of the current state of the online table.
	Message string `json:"message,omitempty"`
	// Detailed status of an online table. Shown if the online table is in the
	// PROVISIONING_PIPELINE_RESOURCES or the PROVISIONING_INITIAL_SNAPSHOT
	// state.
	ProvisioningStatus *ProvisioningStatus `json:"provisioning_status,omitempty"`
	// Detailed status of an online table. Shown if the online table is in the
	// ONLINE_TRIGGERED_UPDATE or the ONLINE_NO_PENDING_UPDATE state.
	TriggeredUpdateStatus *TriggeredUpdateStatus `json:"triggered_update_status,omitempty"`

	ForceSendFields []string `json:"-"`
}

Status of an online table.

func (OnlineTableStatus) MarshalJSON added in v0.33.0

func (s OnlineTableStatus) MarshalJSON() ([]byte, error)

func (*OnlineTableStatus) UnmarshalJSON added in v0.33.0

func (s *OnlineTableStatus) UnmarshalJSON(b []byte) error

type OnlineTablesAPI added in v0.33.0

type OnlineTablesAPI struct {
	// contains filtered or unexported fields
}

Online tables provide lower latency and higher QPS access to data from Delta tables.

func NewOnlineTables added in v0.33.0

func NewOnlineTables(client *client.DatabricksClient) *OnlineTablesAPI

func (*OnlineTablesAPI) Create added in v0.33.0

Create an Online Table.

Create a new Online Table.

func (*OnlineTablesAPI) Delete added in v0.33.0

Delete an Online Table.

Delete an online table. Warning: This will delete all the data in the online table. If the source Delta table was deleted or modified since this Online Table was created, the data will be lost forever!

func (*OnlineTablesAPI) DeleteByName added in v0.33.0

func (a *OnlineTablesAPI) DeleteByName(ctx context.Context, name string) error

Delete an Online Table.

Delete an online table. Warning: This will delete all the data in the online table. If the source Delta table was deleted or modified since this Online Table was created, the data will be lost forever!

func (*OnlineTablesAPI) Get added in v0.33.0

Get an Online Table.

Get information about an existing online table and its status.

func (*OnlineTablesAPI) GetByName added in v0.33.0

func (a *OnlineTablesAPI) GetByName(ctx context.Context, name string) (*OnlineTable, error)

Get an Online Table.

Get information about an existing online table and its status.
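A hedged sketch of checking an online table's state by name; the table name and the `w.OnlineTables` accessor are assumptions, and the status fields are optional, so the nil check matters.

package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
)

func main() {
	ctx := context.Background()
	w, err := databricks.NewWorkspaceClient()
	if err != nil {
		panic(err)
	}

	// Look up a hypothetical online table and report its current state.
	ot, err := w.OnlineTables.GetByName(ctx, "main.default.user_features_online")
	if err != nil {
		panic(err)
	}
	if ot.Status != nil {
		fmt.Println(ot.Status.DetailedState, ot.Status.Message)
	}
}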

func (*OnlineTablesAPI) Impl added in v0.33.0

Impl returns low-level OnlineTables API implementation. Deprecated: use MockOnlineTablesInterface instead.

func (*OnlineTablesAPI) WithImpl added in v0.33.0

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockOnlineTablesInterface instead.

type OnlineTablesInterface added in v0.33.0

type OnlineTablesInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockOnlineTablesInterface instead.
	WithImpl(impl OnlineTablesService) OnlineTablesInterface

	// Impl returns low-level OnlineTables API implementation
	// Deprecated: use MockOnlineTablesInterface instead.
	Impl() OnlineTablesService

	// Create an Online Table.
	//
	// Create a new Online Table.
	Create(ctx context.Context, request CreateOnlineTableRequest) (*OnlineTable, error)

	// Delete an Online Table.
	//
	// Delete an online table. Warning: This will delete all the data in the online
	// table. If the source Delta table was deleted or modified since this Online
	// Table was created, the data will be lost forever!
	Delete(ctx context.Context, request DeleteOnlineTableRequest) error

	// Delete an Online Table.
	//
	// Delete an online table. Warning: This will delete all the data in the online
	// table. If the source Delta table was deleted or modified since this Online
	// Table was created, the data will be lost forever!
	DeleteByName(ctx context.Context, name string) error

	// Get an Online Table.
	//
	// Get information about an existing online table and its status.
	Get(ctx context.Context, request GetOnlineTableRequest) (*OnlineTable, error)

	// Get an Online Table.
	//
	// Get information about an existing online table and its status.
	GetByName(ctx context.Context, name string) (*OnlineTable, error)
}

type OnlineTablesService added in v0.33.0

type OnlineTablesService interface {

	// Create an Online Table.
	//
	// Create a new Online Table.
	Create(ctx context.Context, request CreateOnlineTableRequest) (*OnlineTable, error)

	// Delete an Online Table.
	//
	// Delete an online table. Warning: This will delete all the data in the
	// online table. If the source Delta table was deleted or modified since
	// this Online Table was created, the data will be lost forever!
	Delete(ctx context.Context, request DeleteOnlineTableRequest) error

	// Get an Online Table.
	//
	// Get information about an existing online table and its status.
	Get(ctx context.Context, request GetOnlineTableRequest) (*OnlineTable, error)
}

Online tables provide lower latency and higher QPS access to data from Delta tables.

type PermissionsChange

type PermissionsChange struct {
	// The set of privileges to add.
	Add []Privilege `json:"add,omitempty"`
	// The principal whose privileges we are changing.
	Principal string `json:"principal,omitempty"`
	// The set of privileges to remove.
	Remove []Privilege `json:"remove,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (PermissionsChange) MarshalJSON added in v0.23.0

func (s PermissionsChange) MarshalJSON() ([]byte, error)

func (*PermissionsChange) UnmarshalJSON added in v0.23.0

func (s *PermissionsChange) UnmarshalJSON(b []byte) error
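PermissionsChange values are what the Grants API's update call consumes; a minimal sketch, with a hypothetical group name, might look like this.

package main

import (
	"fmt"

	"github.com/databricks/databricks-sdk-go/service/catalog"
)

func main() {
	// Grant USE_SCHEMA and SELECT to a hypothetical group while revoking MODIFY.
	change := catalog.PermissionsChange{
		Principal: "data-engineers",
		Add:       []catalog.Privilege{catalog.PrivilegeUseSchema, catalog.PrivilegeSelect},
		Remove:    []catalog.Privilege{catalog.PrivilegeModify},
	}
	fmt.Printf("%+v\n", change)
}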

type PermissionsList

type PermissionsList struct {
	// The privileges assigned to each principal
	PrivilegeAssignments []PrivilegeAssignment `json:"privilege_assignments,omitempty"`
}

type PipelineProgress added in v0.33.0

type PipelineProgress struct {
	// The estimated time remaining to complete this update in seconds.
	EstimatedCompletionTimeSeconds float64 `json:"estimated_completion_time_seconds,omitempty"`
	// The source table Delta version that was last processed by the pipeline.
	// The pipeline may not have completely processed this version yet.
	LatestVersionCurrentlyProcessing int64 `json:"latest_version_currently_processing,omitempty"`
	// The completion ratio of this update. This is a number between 0 and 1.
	SyncProgressCompletion float64 `json:"sync_progress_completion,omitempty"`
	// The number of rows that have been synced in this update.
	SyncedRowCount int64 `json:"synced_row_count,omitempty"`
	// The total number of rows that need to be synced in this update. This
	// number may be an estimate.
	TotalRowCount int64 `json:"total_row_count,omitempty"`

	ForceSendFields []string `json:"-"`
}

Progress information of the Online Table data synchronization pipeline.

func (PipelineProgress) MarshalJSON added in v0.33.0

func (s PipelineProgress) MarshalJSON() ([]byte, error)

func (*PipelineProgress) UnmarshalJSON added in v0.33.0

func (s *PipelineProgress) UnmarshalJSON(b []byte) error

type PrimaryKeyConstraint

type PrimaryKeyConstraint struct {
	// Column names for this constraint.
	ChildColumns []string `json:"child_columns"`
	// The name of the constraint.
	Name string `json:"name"`
}

type Privilege

type Privilege string
const PrivilegeAccess Privilege = `ACCESS`
const PrivilegeAllPrivileges Privilege = `ALL_PRIVILEGES`
const PrivilegeApplyTag Privilege = `APPLY_TAG`
const PrivilegeCreate Privilege = `CREATE`
const PrivilegeCreateCatalog Privilege = `CREATE_CATALOG`
const PrivilegeCreateConnection Privilege = `CREATE_CONNECTION`
const PrivilegeCreateExternalLocation Privilege = `CREATE_EXTERNAL_LOCATION`
const PrivilegeCreateExternalTable Privilege = `CREATE_EXTERNAL_TABLE`
const PrivilegeCreateExternalVolume Privilege = `CREATE_EXTERNAL_VOLUME`
const PrivilegeCreateForeignCatalog Privilege = `CREATE_FOREIGN_CATALOG`
const PrivilegeCreateFunction Privilege = `CREATE_FUNCTION`
const PrivilegeCreateManagedStorage Privilege = `CREATE_MANAGED_STORAGE`
const PrivilegeCreateMaterializedView Privilege = `CREATE_MATERIALIZED_VIEW`
const PrivilegeCreateModel Privilege = `CREATE_MODEL`
const PrivilegeCreateProvider Privilege = `CREATE_PROVIDER`
const PrivilegeCreateRecipient Privilege = `CREATE_RECIPIENT`
const PrivilegeCreateSchema Privilege = `CREATE_SCHEMA`
const PrivilegeCreateServiceCredential Privilege = `CREATE_SERVICE_CREDENTIAL`
const PrivilegeCreateShare Privilege = `CREATE_SHARE`
const PrivilegeCreateStorageCredential Privilege = `CREATE_STORAGE_CREDENTIAL`
const PrivilegeCreateTable Privilege = `CREATE_TABLE`
const PrivilegeCreateView Privilege = `CREATE_VIEW`
const PrivilegeCreateVolume Privilege = `CREATE_VOLUME`
const PrivilegeExecute Privilege = `EXECUTE`
const PrivilegeManageAllowlist Privilege = `MANAGE_ALLOWLIST`
const PrivilegeModify Privilege = `MODIFY`
const PrivilegeReadFiles Privilege = `READ_FILES`
const PrivilegeReadPrivateFiles Privilege = `READ_PRIVATE_FILES`
const PrivilegeReadVolume Privilege = `READ_VOLUME`
const PrivilegeRefresh Privilege = `REFRESH`
const PrivilegeSelect Privilege = `SELECT`
const PrivilegeSetSharePermission Privilege = `SET_SHARE_PERMISSION`
const PrivilegeSingleUserAccess Privilege = `SINGLE_USER_ACCESS`
const PrivilegeUsage Privilege = `USAGE`
const PrivilegeUseCatalog Privilege = `USE_CATALOG`
const PrivilegeUseConnection Privilege = `USE_CONNECTION`
const PrivilegeUseMarketplaceAssets Privilege = `USE_MARKETPLACE_ASSETS`
const PrivilegeUseProvider Privilege = `USE_PROVIDER`
const PrivilegeUseRecipient Privilege = `USE_RECIPIENT`
const PrivilegeUseSchema Privilege = `USE_SCHEMA`
const PrivilegeUseShare Privilege = `USE_SHARE`
const PrivilegeWriteFiles Privilege = `WRITE_FILES`
const PrivilegeWritePrivateFiles Privilege = `WRITE_PRIVATE_FILES`
const PrivilegeWriteVolume Privilege = `WRITE_VOLUME`

func (*Privilege) Set

func (f *Privilege) Set(v string) error

Set raw string value and validate it against allowed values

func (*Privilege) String

func (f *Privilege) String() string

String representation for fmt.Print

func (*Privilege) Type

func (f *Privilege) Type() string

Type always returns Privilege to satisfy [pflag.Value] interface

type PrivilegeAssignment

type PrivilegeAssignment struct {
	// The principal (user email address or group name).
	Principal string `json:"principal,omitempty"`
	// The privileges assigned to the principal.
	Privileges []Privilege `json:"privileges,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (PrivilegeAssignment) MarshalJSON added in v0.23.0

func (s PrivilegeAssignment) MarshalJSON() ([]byte, error)

func (*PrivilegeAssignment) UnmarshalJSON added in v0.23.0

func (s *PrivilegeAssignment) UnmarshalJSON(b []byte) error

type PropertiesKvPairs added in v0.10.0

type PropertiesKvPairs map[string]string

An object containing map of key-value properties attached to the connection.

type ProvisioningInfo added in v0.18.0

type ProvisioningInfo struct {
	State ProvisioningInfoState `json:"state,omitempty"`
}

Status of an asynchronously provisioned resource.

type ProvisioningInfoState added in v0.18.0

type ProvisioningInfoState string
const ProvisioningInfoStateActive ProvisioningInfoState = `ACTIVE`
const ProvisioningInfoStateDeleting ProvisioningInfoState = `DELETING`
const ProvisioningInfoStateFailed ProvisioningInfoState = `FAILED`
const ProvisioningInfoStateProvisioning ProvisioningInfoState = `PROVISIONING`
const ProvisioningInfoStateStateUnspecified ProvisioningInfoState = `STATE_UNSPECIFIED`

func (*ProvisioningInfoState) Set added in v0.18.0

Set raw string value and validate it against allowed values

func (*ProvisioningInfoState) String added in v0.18.0

func (f *ProvisioningInfoState) String() string

String representation for fmt.Print

func (*ProvisioningInfoState) Type added in v0.18.0

func (f *ProvisioningInfoState) Type() string

Type always returns ProvisioningInfoState to satisfy [pflag.Value] interface

type ProvisioningStatus added in v0.33.0

type ProvisioningStatus struct {
	// Details about initial data synchronization. Only populated when in the
	// PROVISIONING_INITIAL_SNAPSHOT state.
	InitialPipelineSyncProgress *PipelineProgress `json:"initial_pipeline_sync_progress,omitempty"`
}

Detailed status of an online table. Shown if the online table is in the PROVISIONING_PIPELINE_RESOURCES or the PROVISIONING_INITIAL_SNAPSHOT state.

type QualityMonitorsAPI added in v0.41.0

type QualityMonitorsAPI struct {
	// contains filtered or unexported fields
}

A monitor computes and monitors data or model quality metrics for a table over time. It generates metrics tables and a dashboard that you can use to monitor table health and set alerts.

Most write operations require the user to be the owner of the table (or its parent schema or parent catalog). Viewing the dashboard, computed metrics, or monitor configuration only requires the user to have **SELECT** privileges on the table (along with **USE_SCHEMA** and **USE_CATALOG**).
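A hedged sketch of creating a time-series monitor follows: the table name, assets directory, output schema, and timestamp column are placeholders, and the field names on CreateMonitor and the `w.QualityMonitors` accessor are assumed from the current SDK, so treat this as orientation rather than a recipe.

package main

import (
	"context"
	"fmt"

	"github.com/databricks/databricks-sdk-go"
	"github.com/databricks/databricks-sdk-go/service/catalog"
)

func main() {
	ctx := context.Background()
	w, err := databricks.NewWorkspaceClient()
	if err != nil {
		panic(err)
	}

	// Create a daily time-series monitor on a hypothetical table.
	monitor, err := w.QualityMonitors.Create(ctx, catalog.CreateMonitor{
		TableName:        "main.default.orders",
		AssetsDir:        "/Workspace/Users/someone@example.com/monitoring",
		OutputSchemaName: "main.monitoring",
		TimeSeries: &catalog.MonitorTimeSeries{
			TimestampCol:  "event_ts",
			Granularities: []string{"1 day"},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(monitor.Status, monitor.DashboardId)
}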

func NewQualityMonitors added in v0.41.0

func NewQualityMonitors(client *client.DatabricksClient) *QualityMonitorsAPI

func (*QualityMonitorsAPI) CancelRefresh added in v0.41.0

func (a *QualityMonitorsAPI) CancelRefresh(ctx context.Context, request CancelRefreshRequest) error

Cancel refresh.

Cancel an active monitor refresh for the given refresh ID.

The caller must either: 1. be an owner of the table's parent catalog 2. have **USE_CATALOG** on the table's parent catalog and be an owner of the table's parent schema 3. have the following permissions: - **USE_CATALOG** on the table's parent catalog - **USE_SCHEMA** on the table's parent schema - be an owner of the table

Additionally, the call must be made from the workspace where the monitor was created.

func (*QualityMonitorsAPI) Create added in v0.41.0

func (a *QualityMonitorsAPI) Create(ctx context.Context, request CreateMonitor) (*MonitorInfo, error)

Create a table monitor.

Creates a new monitor for the specified table.

The caller must either: 1. be an owner of the table's parent catalog, have **USE_SCHEMA** on the table's parent schema, and have **SELECT** access on the table 2. have **USE_CATALOG** on the table's parent catalog, be an owner of the table's parent schema, and have **SELECT** access on the table. 3. have the following permissions: - **USE_CATALOG** on the table's parent catalog - **USE_SCHEMA** on the table's parent schema - be an owner of the table.

Workspace assets, such as the dashboard, will be created in the workspace where this call was made.

func (*QualityMonitorsAPI) Delete added in v0.41.0

Delete a table monitor.

Deletes a monitor for the specified table.

The caller must either: 1. be an owner of the table's parent catalog 2. have **USE_CATALOG** on the table's parent catalog and be an owner of the table's parent schema 3. have the following permissions: - **USE_CATALOG** on the table's parent catalog - **USE_SCHEMA** on the table's parent schema - be an owner of the table.

Additionally, the call must be made from the workspace where the monitor was created.

Note that the metric tables and dashboard will not be deleted as part of this call; those assets must be manually cleaned up (if desired).

func (*QualityMonitorsAPI) DeleteByTableName added in v0.41.0

func (a *QualityMonitorsAPI) DeleteByTableName(ctx context.Context, tableName string) error

Delete a table monitor.

Deletes a monitor for the specified table.

The caller must either: 1. be an owner of the table's parent catalog 2. have **USE_CATALOG** on the table's parent catalog and be an owner of the table's parent schema 3. have the following permissions: - **USE_CATALOG** on the table's parent catalog - **USE_SCHEMA** on the table's parent schema - be an owner of the table.

Additionally, the call must be made from the workspace where the monitor was created.

Note that the metric tables and dashboard will not be deleted as part of this call; those assets must be manually cleaned up (if desired).

func (*QualityMonitorsAPI) Get added in v0.41.0

Get a table monitor.

Gets a monitor for the specified table.

The caller must either: 1. be an owner of the table's parent catalog 2. have **USE_CATALOG** on the table's parent catalog and be an owner of the table's parent schema. 3. have the following permissions: - **USE_CATALOG** on the table's parent catalog - **USE_SCHEMA** on the table's parent schema - **SELECT** privilege on the table.

The returned information includes configuration values, as well as information on assets created by the monitor. Some information (e.g., dashboard) may be filtered out if the caller is in a different workspace than where the monitor was created.

func (*QualityMonitorsAPI) GetByTableName added in v0.41.0

func (a *QualityMonitorsAPI) GetByTableName(ctx context.Context, tableName string) (*MonitorInfo, error)

Get a table monitor.

Gets a monitor for the specified table.

The caller must either: 1. be an owner of the table's parent catalog 2. have **USE_CATALOG** on the table's parent catalog and be an owner of the table's parent schema. 3. have the following permissions: - **USE_CATALOG** on the table's parent catalog - **USE_SCHEMA** on the table's parent schema - **SELECT** privilege on the table.

The returned information includes configuration values, as well as information on assets created by the monitor. Some information (e.g., dashboard) may be filtered out if the caller is in a different workspace than where the monitor was created.

func (*QualityMonitorsAPI) GetRefresh added in v0.41.0

Get refresh.

Gets info about a specific monitor refresh using the given refresh ID.

The caller must either: 1. be an owner of the table's parent catalog 2. have **USE_CATALOG** on the table's parent catalog and be an owner of the table's parent schema 3. have the following permissions: - **USE_CATALOG** on the table's parent catalog - **USE_SCHEMA** on the table's parent schema - **SELECT** privilege on the table.

Additionally, the call must be made from the workspace where the monitor was created.

func (*QualityMonitorsAPI) GetRefreshByTableNameAndRefreshId added in v0.41.0

func (a *QualityMonitorsAPI) GetRefreshByTableNameAndRefreshId(ctx context.Context, tableName string, refreshId string) (*MonitorRefreshInfo, error)

Get refresh.

Gets info about a specific monitor refresh using the given refresh ID.

The caller must either: 1. be an owner of the table's parent catalog 2. have **USE_CATALOG** on the table's parent catalog and be an owner of the table's parent schema 3. have the following permissions: - **USE_CATALOG** on the table's parent catalog - **USE_SCHEMA** on the table's parent schema - **SELECT** privilege on the table.

Additionally, the call must be made from the workspace where the monitor was created.
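
A minimal sketch of looking up a single refresh (not a generated example; both arguments are placeholders, and a real refresh ID would typically come from RunRefresh or ListRefreshes):

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

// "main.default.my_table" and "12345" are placeholders for a monitored
// table and an existing refresh ID.
refresh, err := w.QualityMonitors.GetRefreshByTableNameAndRefreshId(ctx, "main.default.my_table", "12345")
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", refresh)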

func (*QualityMonitorsAPI) Impl added in v0.41.0

Impl returns low-level QualityMonitors API implementation. Deprecated: use MockQualityMonitorsInterface instead.

func (*QualityMonitorsAPI) ListRefreshes added in v0.41.0

List refreshes.

Gets an array containing the history of the most recent refreshes (up to 25) for this table.

The caller must either: 1. be an owner of the table's parent catalog 2. have **USE_CATALOG** on the table's parent catalog and be an owner of the table's parent schema 3. have the following permissions: - **USE_CATALOG** on the table's parent catalog - **USE_SCHEMA** on the table's parent schema - **SELECT** privilege on the table.

Additionally, the call must be made from the workspace where the monitor was created.

func (*QualityMonitorsAPI) ListRefreshesByTableName added in v0.41.0

func (a *QualityMonitorsAPI) ListRefreshesByTableName(ctx context.Context, tableName string) (*MonitorRefreshListResponse, error)

List refreshes.

Gets an array containing the history of the most recent refreshes (up to 25) for this table.

The caller must either: 1. be an owner of the table's parent catalog 2. have **USE_CATALOG** on the table's parent catalog and be an owner of the table's parent schema 3. have the following permissions: - **USE_CATALOG** on the table's parent catalog - **USE_SCHEMA** on the table's parent schema - **SELECT** privilege on the table.

Additionally, the call must be made from the workspace where the monitor was created.
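
A minimal sketch of listing the refresh history for a monitored table (not a generated example; the table name is a placeholder):

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

// "main.default.my_table" is a placeholder for a monitored table.
refreshes, err := w.QualityMonitors.ListRefreshesByTableName(ctx, "main.default.my_table")
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", refreshes)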

func (*QualityMonitorsAPI) RunRefresh added in v0.41.0

Queue a metric refresh for a monitor.

Queues a metric refresh on the monitor for the specified table. The refresh will execute in the background.

The caller must either: 1. be an owner of the table's parent catalog 2. have **USE_CATALOG** on the table's parent catalog and be an owner of the table's parent schema 3. have the following permissions: - **USE_CATALOG** on the table's parent catalog - **USE_SCHEMA** on the table's parent schema - be an owner of the table

Additionally, the call must be made from the workspace where the monitor was created.
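
A minimal sketch of queueing a refresh via RunRefreshRequest (not a generated example; the table name is a placeholder):

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

// "main.default.my_table" is a placeholder for a monitored table. The
// refresh runs asynchronously; the returned info describes the queued run.
run, err := w.QualityMonitors.RunRefresh(ctx, catalog.RunRefreshRequest{
	TableName: "main.default.my_table",
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "queued %v", run)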

func (*QualityMonitorsAPI) Update added in v0.41.0

func (a *QualityMonitorsAPI) Update(ctx context.Context, request UpdateMonitor) (*MonitorInfo, error)

Update a table monitor.

Updates a monitor for the specified table.

The caller must either: 1. be an owner of the table's parent catalog 2. have **USE_CATALOG** on the table's parent catalog and be an owner of the table's parent schema 3. have the following permissions: - **USE_CATALOG** on the table's parent catalog - **USE_SCHEMA** on the table's parent schema - be an owner of the table.

Additionally, the call must be made from the workspace where the monitor was created, and the caller must be the original creator of the monitor.

Certain configuration fields, such as output asset identifiers, cannot be updated.

func (*QualityMonitorsAPI) WithImpl added in v0.41.0

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockQualityMonitorsInterface instead.

type QualityMonitorsInterface added in v0.41.0

type QualityMonitorsInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockQualityMonitorsInterface instead.
	WithImpl(impl QualityMonitorsService) QualityMonitorsInterface

	// Impl returns low-level QualityMonitors API implementation
	// Deprecated: use MockQualityMonitorsInterface instead.
	Impl() QualityMonitorsService

	// Cancel refresh.
	//
	// Cancel an active monitor refresh for the given refresh ID.
	//
	// The caller must either: 1. be an owner of the table's parent catalog 2. have
	// **USE_CATALOG** on the table's parent catalog and be an owner of the table's
	// parent schema 3. have the following permissions: - **USE_CATALOG** on the
	// table's parent catalog - **USE_SCHEMA** on the table's parent schema - be an
	// owner of the table
	//
	// Additionally, the call must be made from the workspace where the monitor was
	// created.
	CancelRefresh(ctx context.Context, request CancelRefreshRequest) error

	// Create a table monitor.
	//
	// Creates a new monitor for the specified table.
	//
	// The caller must either: 1. be an owner of the table's parent catalog, have
	// **USE_SCHEMA** on the table's parent schema, and have **SELECT** access on
	// the table 2. have **USE_CATALOG** on the table's parent catalog, be an owner
	// of the table's parent schema, and have **SELECT** access on the table. 3.
	// have the following permissions: - **USE_CATALOG** on the table's parent
	// catalog - **USE_SCHEMA** on the table's parent schema - be an owner of the
	// table.
	//
	// Workspace assets, such as the dashboard, will be created in the workspace
	// where this call was made.
	Create(ctx context.Context, request CreateMonitor) (*MonitorInfo, error)

	// Delete a table monitor.
	//
	// Deletes a monitor for the specified table.
	//
	// The caller must either: 1. be an owner of the table's parent catalog 2. have
	// **USE_CATALOG** on the table's parent catalog and be an owner of the table's
	// parent schema 3. have the following permissions: - **USE_CATALOG** on the
	// table's parent catalog - **USE_SCHEMA** on the table's parent schema - be an
	// owner of the table.
	//
	// Additionally, the call must be made from the workspace where the monitor was
	// created.
	//
	// Note that the metric tables and dashboard will not be deleted as part of this
	// call; those assets must be manually cleaned up (if desired).
	Delete(ctx context.Context, request DeleteQualityMonitorRequest) error

	// Delete a table monitor.
	//
	// Deletes a monitor for the specified table.
	//
	// The caller must either: 1. be an owner of the table's parent catalog 2. have
	// **USE_CATALOG** on the table's parent catalog and be an owner of the table's
	// parent schema 3. have the following permissions: - **USE_CATALOG** on the
	// table's parent catalog - **USE_SCHEMA** on the table's parent schema - be an
	// owner of the table.
	//
	// Additionally, the call must be made from the workspace where the monitor was
	// created.
	//
	// Note that the metric tables and dashboard will not be deleted as part of this
	// call; those assets must be manually cleaned up (if desired).
	DeleteByTableName(ctx context.Context, tableName string) error

	// Get a table monitor.
	//
	// Gets a monitor for the specified table.
	//
	// The caller must either: 1. be an owner of the table's parent catalog 2. have
	// **USE_CATALOG** on the table's parent catalog and be an owner of the table's
	// parent schema. 3. have the following permissions: - **USE_CATALOG** on the
	// table's parent catalog - **USE_SCHEMA** on the table's parent schema -
	// **SELECT** privilege on the table.
	//
	// The returned information includes configuration values, as well as
	// information on assets created by the monitor. Some information (e.g.,
	// dashboard) may be filtered out if the caller is in a different workspace than
	// where the monitor was created.
	Get(ctx context.Context, request GetQualityMonitorRequest) (*MonitorInfo, error)

	// Get a table monitor.
	//
	// Gets a monitor for the specified table.
	//
	// The caller must either: 1. be an owner of the table's parent catalog 2. have
	// **USE_CATALOG** on the table's parent catalog and be an owner of the table's
	// parent schema. 3. have the following permissions: - **USE_CATALOG** on the
	// table's parent catalog - **USE_SCHEMA** on the table's parent schema -
	// **SELECT** privilege on the table.
	//
	// The returned information includes configuration values, as well as
	// information on assets created by the monitor. Some information (e.g.,
	// dashboard) may be filtered out if the caller is in a different workspace than
	// where the monitor was created.
	GetByTableName(ctx context.Context, tableName string) (*MonitorInfo, error)

	// Get refresh.
	//
	// Gets info about a specific monitor refresh using the given refresh ID.
	//
	// The caller must either: 1. be an owner of the table's parent catalog 2. have
	// **USE_CATALOG** on the table's parent catalog and be an owner of the table's
	// parent schema 3. have the following permissions: - **USE_CATALOG** on the
	// table's parent catalog - **USE_SCHEMA** on the table's parent schema -
	// **SELECT** privilege on the table.
	//
	// Additionally, the call must be made from the workspace where the monitor was
	// created.
	GetRefresh(ctx context.Context, request GetRefreshRequest) (*MonitorRefreshInfo, error)

	// Get refresh.
	//
	// Gets info about a specific monitor refresh using the given refresh ID.
	//
	// The caller must either: 1. be an owner of the table's parent catalog 2. have
	// **USE_CATALOG** on the table's parent catalog and be an owner of the table's
	// parent schema 3. have the following permissions: - **USE_CATALOG** on the
	// table's parent catalog - **USE_SCHEMA** on the table's parent schema -
	// **SELECT** privilege on the table.
	//
	// Additionally, the call must be made from the workspace where the monitor was
	// created.
	GetRefreshByTableNameAndRefreshId(ctx context.Context, tableName string, refreshId string) (*MonitorRefreshInfo, error)

	// List refreshes.
	//
	// Gets an array containing the history of the most recent refreshes (up to 25)
	// for this table.
	//
	// The caller must either: 1. be an owner of the table's parent catalog 2. have
	// **USE_CATALOG** on the table's parent catalog and be an owner of the table's
	// parent schema 3. have the following permissions: - **USE_CATALOG** on the
	// table's parent catalog - **USE_SCHEMA** on the table's parent schema -
	// **SELECT** privilege on the table.
	//
	// Additionally, the call must be made from the workspace where the monitor was
	// created.
	ListRefreshes(ctx context.Context, request ListRefreshesRequest) (*MonitorRefreshListResponse, error)

	// List refreshes.
	//
	// Gets an array containing the history of the most recent refreshes (up to 25)
	// for this table.
	//
	// The caller must either: 1. be an owner of the table's parent catalog 2. have
	// **USE_CATALOG** on the table's parent catalog and be an owner of the table's
	// parent schema 3. have the following permissions: - **USE_CATALOG** on the
	// table's parent catalog - **USE_SCHEMA** on the table's parent schema -
	// **SELECT** privilege on the table.
	//
	// Additionally, the call must be made from the workspace where the monitor was
	// created.
	ListRefreshesByTableName(ctx context.Context, tableName string) (*MonitorRefreshListResponse, error)

	// Queue a metric refresh for a monitor.
	//
	// Queues a metric refresh on the monitor for the specified table. The refresh
	// will execute in the background.
	//
	// The caller must either: 1. be an owner of the table's parent catalog 2. have
	// **USE_CATALOG** on the table's parent catalog and be an owner of the table's
	// parent schema 3. have the following permissions: - **USE_CATALOG** on the
	// table's parent catalog - **USE_SCHEMA** on the table's parent schema - be an
	// owner of the table
	//
	// Additionally, the call must be made from the workspace where the monitor was
	// created.
	RunRefresh(ctx context.Context, request RunRefreshRequest) (*MonitorRefreshInfo, error)

	// Update a table monitor.
	//
	// Updates a monitor for the specified table.
	//
	// The caller must either: 1. be an owner of the table's parent catalog 2. have
	// **USE_CATALOG** on the table's parent catalog and be an owner of the table's
	// parent schema 3. have the following permissions: - **USE_CATALOG** on the
	// table's parent catalog - **USE_SCHEMA** on the table's parent schema - be an
	// owner of the table.
	//
	// Additionally, the call must be made from the workspace where the monitor was
	// created, and the caller must be the original creator of the monitor.
	//
	// Certain configuration fields, such as output asset identifiers, cannot be
	// updated.
	Update(ctx context.Context, request UpdateMonitor) (*MonitorInfo, error)
}

type QualityMonitorsService added in v0.41.0

type QualityMonitorsService interface {

	// Cancel refresh.
	//
	// Cancel an active monitor refresh for the given refresh ID.
	//
	// The caller must either: 1. be an owner of the table's parent catalog 2.
	// have **USE_CATALOG** on the table's parent catalog and be an owner of the
	// table's parent schema 3. have the following permissions: -
	// **USE_CATALOG** on the table's parent catalog - **USE_SCHEMA** on the
	// table's parent schema - be an owner of the table
	//
	// Additionally, the call must be made from the workspace where the monitor
	// was created.
	CancelRefresh(ctx context.Context, request CancelRefreshRequest) error

	// Create a table monitor.
	//
	// Creates a new monitor for the specified table.
	//
	// The caller must either: 1. be an owner of the table's parent catalog,
	// have **USE_SCHEMA** on the table's parent schema, and have **SELECT**
	// access on the table 2. have **USE_CATALOG** on the table's parent
	// catalog, be an owner of the table's parent schema, and have **SELECT**
	// access on the table. 3. have the following permissions: - **USE_CATALOG**
	// on the table's parent catalog - **USE_SCHEMA** on the table's parent
	// schema - be an owner of the table.
	//
	// Workspace assets, such as the dashboard, will be created in the workspace
	// where this call was made.
	Create(ctx context.Context, request CreateMonitor) (*MonitorInfo, error)

	// Delete a table monitor.
	//
	// Deletes a monitor for the specified table.
	//
	// The caller must either: 1. be an owner of the table's parent catalog 2.
	// have **USE_CATALOG** on the table's parent catalog and be an owner of the
	// table's parent schema 3. have the following permissions: -
	// **USE_CATALOG** on the table's parent catalog - **USE_SCHEMA** on the
	// table's parent schema - be an owner of the table.
	//
	// Additionally, the call must be made from the workspace where the monitor
	// was created.
	//
	// Note that the metric tables and dashboard will not be deleted as part of
	// this call; those assets must be manually cleaned up (if desired).
	Delete(ctx context.Context, request DeleteQualityMonitorRequest) error

	// Get a table monitor.
	//
	// Gets a monitor for the specified table.
	//
	// The caller must either: 1. be an owner of the table's parent catalog 2.
	// have **USE_CATALOG** on the table's parent catalog and be an owner of the
	// table's parent schema. 3. have the following permissions: -
	// **USE_CATALOG** on the table's parent catalog - **USE_SCHEMA** on the
	// table's parent schema - **SELECT** privilege on the table.
	//
	// The returned information includes configuration values, as well as
	// information on assets created by the monitor. Some information (e.g.,
	// dashboard) may be filtered out if the caller is in a different workspace
	// than where the monitor was created.
	Get(ctx context.Context, request GetQualityMonitorRequest) (*MonitorInfo, error)

	// Get refresh.
	//
	// Gets info about a specific monitor refresh using the given refresh ID.
	//
	// The caller must either: 1. be an owner of the table's parent catalog 2.
	// have **USE_CATALOG** on the table's parent catalog and be an owner of the
	// table's parent schema 3. have the following permissions: -
	// **USE_CATALOG** on the table's parent catalog - **USE_SCHEMA** on the
	// table's parent schema - **SELECT** privilege on the table.
	//
	// Additionally, the call must be made from the workspace where the monitor
	// was created.
	GetRefresh(ctx context.Context, request GetRefreshRequest) (*MonitorRefreshInfo, error)

	// List refreshes.
	//
	// Gets an array containing the history of the most recent refreshes (up to
	// 25) for this table.
	//
	// The caller must either: 1. be an owner of the table's parent catalog 2.
	// have **USE_CATALOG** on the table's parent catalog and be an owner of the
	// table's parent schema 3. have the following permissions: -
	// **USE_CATALOG** on the table's parent catalog - **USE_SCHEMA** on the
	// table's parent schema - **SELECT** privilege on the table.
	//
	// Additionally, the call must be made from the workspace where the monitor
	// was created.
	ListRefreshes(ctx context.Context, request ListRefreshesRequest) (*MonitorRefreshListResponse, error)

	// Queue a metric refresh for a monitor.
	//
	// Queues a metric refresh on the monitor for the specified table. The
	// refresh will execute in the background.
	//
	// The caller must either: 1. be an owner of the table's parent catalog 2.
	// have **USE_CATALOG** on the table's parent catalog and be an owner of the
	// table's parent schema 3. have the following permissions: -
	// **USE_CATALOG** on the table's parent catalog - **USE_SCHEMA** on the
	// table's parent schema - be an owner of the table
	//
	// Additionally, the call must be made from the workspace where the monitor
	// was created.
	RunRefresh(ctx context.Context, request RunRefreshRequest) (*MonitorRefreshInfo, error)

	// Update a table monitor.
	//
	// Updates a monitor for the specified table.
	//
	// The caller must either: 1. be an owner of the table's parent catalog 2.
	// have **USE_CATALOG** on the table's parent catalog and be an owner of the
	// table's parent schema 3. have the following permissions: -
	// **USE_CATALOG** on the table's parent catalog - **USE_SCHEMA** on the
	// table's parent schema - be an owner of the table.
	//
	// Additionally, the call must be made from the workspace where the monitor
	// was created, and the caller must be the original creator of the monitor.
	//
	// Certain configuration fields, such as output asset identifiers, cannot be
	// updated.
	Update(ctx context.Context, request UpdateMonitor) (*MonitorInfo, error)
}

A monitor computes and monitors data or model quality metrics for a table over time. It generates metrics tables and a dashboard that you can use to monitor table health and set alerts.

Most write operations require the user to be the owner of the table (or its parent schema or parent catalog). Viewing the dashboard, computed metrics, or monitor configuration only requires the user to have **SELECT** privileges on the table (along with **USE_SCHEMA** and **USE_CATALOG**).

type ReadVolumeRequest

type ReadVolumeRequest struct {
	// Whether to include volumes in the response for which the principal can
	// only access selective metadata.
	IncludeBrowse bool `json:"-" url:"include_browse,omitempty"`
	// The three-level (fully qualified) name of the volume
	Name string `json:"-" url:"-"`

	ForceSendFields []string `json:"-"`
}

Get a Volume

func (ReadVolumeRequest) MarshalJSON added in v0.35.0

func (s ReadVolumeRequest) MarshalJSON() ([]byte, error)

func (*ReadVolumeRequest) UnmarshalJSON added in v0.35.0

func (s *ReadVolumeRequest) UnmarshalJSON(b []byte) error

type RegisteredModelAlias added in v0.18.0

type RegisteredModelAlias struct {
	// Name of the alias, e.g. 'champion' or 'latest_stable'
	AliasName string `json:"alias_name,omitempty"`
	// Integer version number of the model version to which this alias points.
	VersionNum int `json:"version_num,omitempty"`

	ForceSendFields []string `json:"-"`
}

Registered model alias.

func (RegisteredModelAlias) MarshalJSON added in v0.23.0

func (s RegisteredModelAlias) MarshalJSON() ([]byte, error)

func (*RegisteredModelAlias) UnmarshalJSON added in v0.23.0

func (s *RegisteredModelAlias) UnmarshalJSON(b []byte) error

type RegisteredModelInfo added in v0.18.0

type RegisteredModelInfo struct {
	// List of aliases associated with the registered model
	Aliases []RegisteredModelAlias `json:"aliases,omitempty"`
	// Indicates whether the principal is limited to retrieving metadata for the
	// associated object through the BROWSE privilege when include_browse is
	// enabled in the request.
	BrowseOnly bool `json:"browse_only,omitempty"`
	// The name of the catalog where the schema and the registered model reside
	CatalogName string `json:"catalog_name,omitempty"`
	// The comment attached to the registered model
	Comment string `json:"comment,omitempty"`
	// Creation timestamp of the registered model in milliseconds since the Unix
	// epoch
	CreatedAt int64 `json:"created_at,omitempty"`
	// The identifier of the user who created the registered model
	CreatedBy string `json:"created_by,omitempty"`
	// The three-level (fully qualified) name of the registered model
	FullName string `json:"full_name,omitempty"`
	// The unique identifier of the metastore
	MetastoreId string `json:"metastore_id,omitempty"`
	// The name of the registered model
	Name string `json:"name,omitempty"`
	// The identifier of the user who owns the registered model
	Owner string `json:"owner,omitempty"`
	// The name of the schema where the registered model resides
	SchemaName string `json:"schema_name,omitempty"`
	// The storage location on the cloud under which model version data files
	// are stored
	StorageLocation string `json:"storage_location,omitempty"`
	// Last-update timestamp of the registered model in milliseconds since the
	// Unix epoch
	UpdatedAt int64 `json:"updated_at,omitempty"`
	// The identifier of the user who updated the registered model last time
	UpdatedBy string `json:"updated_by,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (RegisteredModelInfo) MarshalJSON added in v0.23.0

func (s RegisteredModelInfo) MarshalJSON() ([]byte, error)

func (*RegisteredModelInfo) UnmarshalJSON added in v0.23.0

func (s *RegisteredModelInfo) UnmarshalJSON(b []byte) error

type RegisteredModelsAPI added in v0.18.0

type RegisteredModelsAPI struct {
	// contains filtered or unexported fields
}

Databricks provides a hosted version of MLflow Model Registry in Unity Catalog. Models in Unity Catalog provide centralized access control, auditing, lineage, and discovery of ML models across Databricks workspaces.

An MLflow registered model resides in the third layer of Unity Catalog’s three-level namespace. Registered models contain model versions, which correspond to actual ML models (MLflow models). Creating new model versions currently requires use of the MLflow Python client. Once model versions are created, you can load them for batch inference using MLflow Python client APIs, or deploy them for real-time serving using Databricks Model Serving.

All operations on registered models and model versions require USE_CATALOG permissions on the enclosing catalog and USE_SCHEMA permissions on the enclosing schema. In addition, the following privileges are required for various operations:

* To create a registered model, users must additionally have the CREATE_MODEL permission on the target schema.
* To view registered model or model version metadata, model version data files, or invoke a model version, users must additionally have the EXECUTE permission on the registered model.
* To update registered model or model version tags, users must additionally have APPLY TAG permissions on the registered model.
* To update other registered model or model version metadata (comments, aliases), create a new model version, or update permissions on the registered model, users must be owners of the registered model.

Note: The securable type for models is "FUNCTION". When using REST APIs (e.g. tagging, grants) that specify a securable type, use "FUNCTION" as the securable type.

func NewRegisteredModels added in v0.18.0

func NewRegisteredModels(client *client.DatabricksClient) *RegisteredModelsAPI

func (*RegisteredModelsAPI) Create added in v0.18.0

Create a Registered Model.

Creates a new registered model in Unity Catalog.

File storage for model versions in the registered model will be located in the default location which is specified by the parent schema, or the parent catalog, or the Metastore.

For registered model creation to succeed, the user must satisfy the following conditions: - The caller must be a metastore admin, or be the owner of the parent catalog and schema, or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema. - The caller must have the **CREATE MODEL** or **CREATE FUNCTION** privilege on the parent schema.
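
A minimal sketch of creating a registered model (not a generated example; the catalog and schema names are placeholders, and the request field names are assumed to mirror the Unity Catalog createRegisteredModel payload):

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

// "main" and "default" are placeholder catalog and schema names; the
// CatalogName, SchemaName and Name fields are assumed here.
model, err := w.RegisteredModels.Create(ctx, catalog.CreateRegisteredModelRequest{
	CatalogName: "main",
	SchemaName:  "default",
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "created %v", model)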

func (*RegisteredModelsAPI) Delete added in v0.18.0

Delete a Registered Model.

Deletes a registered model and all its model versions from the specified parent catalog and schema.

The caller must be a metastore admin or an owner of the registered model. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

func (*RegisteredModelsAPI) DeleteAlias added in v0.18.0

func (a *RegisteredModelsAPI) DeleteAlias(ctx context.Context, request DeleteAliasRequest) error

Delete a Registered Model Alias.

Deletes a registered model alias.

The caller must be a metastore admin or an owner of the registered model. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

func (*RegisteredModelsAPI) DeleteAliasByFullNameAndAlias added in v0.18.0

func (a *RegisteredModelsAPI) DeleteAliasByFullNameAndAlias(ctx context.Context, fullName string, alias string) error

Delete a Registered Model Alias.

Deletes a registered model alias.

The caller must be a metastore admin or an owner of the registered model. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.
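
A minimal sketch of removing an alias by model name and alias (not a generated example; both arguments are placeholders):

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

// "main.default.my_model" and "champion" are placeholders for an existing
// registered model and one of its aliases.
err = w.RegisteredModels.DeleteAliasByFullNameAndAlias(ctx, "main.default.my_model", "champion")
if err != nil {
	panic(err)
}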

func (*RegisteredModelsAPI) DeleteByFullName added in v0.18.0

func (a *RegisteredModelsAPI) DeleteByFullName(ctx context.Context, fullName string) error

Delete a Registered Model.

Deletes a registered model and all its model versions from the specified parent catalog and schema.

The caller must be a metastore admin or an owner of the registered model. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

func (*RegisteredModelsAPI) Get added in v0.18.0

Get a Registered Model.

Get a registered model.

The caller must be a metastore admin or an owner of (or have the **EXECUTE** privilege on) the registered model. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

func (*RegisteredModelsAPI) GetByFullName added in v0.18.0

func (a *RegisteredModelsAPI) GetByFullName(ctx context.Context, fullName string) (*RegisteredModelInfo, error)

Get a Registered Model.

Get a registered model.

The caller must be a metastore admin or an owner of (or have the **EXECUTE** privilege on) the registered model. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.
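
A minimal sketch of fetching a registered model by its three-level name (not a generated example; the name is a placeholder):

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

// "main.default.my_model" is a placeholder three-level model name.
model, err := w.RegisteredModels.GetByFullName(ctx, "main.default.my_model")
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", model)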

func (*RegisteredModelsAPI) GetByName added in v0.18.0

func (a *RegisteredModelsAPI) GetByName(ctx context.Context, name string) (*RegisteredModelInfo, error)

GetByName calls RegisteredModelsAPI.RegisteredModelInfoNameToFullNameMap and returns a single RegisteredModelInfo.

Returns an error if there's more than one RegisteredModelInfo with the same .Name.

Note: All RegisteredModelInfo instances are loaded into memory before returning matching by name.

This method is generated by Databricks SDK Code Generator.

func (*RegisteredModelsAPI) Impl added in v0.18.0

Impl returns low-level RegisteredModels API implementation. Deprecated: use MockRegisteredModelsInterface instead.

func (*RegisteredModelsAPI) List added in v0.24.0

List Registered Models.

List registered models. You can list registered models under a particular schema, or list all registered models in the current metastore.

The returned models are filtered based on the privileges of the calling user. For example, the metastore admin is able to list all the registered models. A regular user needs to be the owner or have the **EXECUTE** privilege on the registered model to receive the registered models in the response. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

There is no guarantee of a specific ordering of the elements in the response.

This method is generated by Databricks SDK Code Generator.

func (*RegisteredModelsAPI) ListAll added in v0.18.0

List Registered Models.

List registered models. You can list registered models under a particular schema, or list all registered models in the current metastore.

The returned models are filtered based on the privileges of the calling user. For example, the metastore admin is able to list all the registered models. A regular user needs to be the owner or have the **EXECUTE** privilege on the registered model to receive the registered models in the response. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

There is no guarantee of a specific ordering of the elements in the response.

This method is generated by Databricks SDK Code Generator.
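
A minimal sketch of listing models under one schema with ListAll (not a generated example; the catalog and schema filters are placeholders, and the CatalogName/SchemaName field names are assumed):

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

// "main" and "default" are placeholders; CatalogName and SchemaName are
// assumed filter fields of ListRegisteredModelsRequest.
all, err := w.RegisteredModels.ListAll(ctx, catalog.ListRegisteredModelsRequest{
	CatalogName: "main",
	SchemaName:  "default",
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", all)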

func (*RegisteredModelsAPI) RegisteredModelInfoNameToFullNameMap added in v0.18.0

func (a *RegisteredModelsAPI) RegisteredModelInfoNameToFullNameMap(ctx context.Context, request ListRegisteredModelsRequest) (map[string]string, error)

RegisteredModelInfoNameToFullNameMap calls RegisteredModelsAPI.ListAll and creates a map of results with RegisteredModelInfo.Name as key and RegisteredModelInfo.FullName as value.

Returns an error if there's more than one RegisteredModelInfo with the same .Name.

Note: All RegisteredModelInfo instances are loaded into memory before creating a map.

This method is generated by Databricks SDK Code Generator.

func (*RegisteredModelsAPI) SetAlias added in v0.18.0

Set a Registered Model Alias.

Set an alias on the specified registered model.

The caller must be a metastore admin or an owner of the registered model. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.
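
A minimal sketch of setting an alias (not a generated example; the model name, alias and version are placeholders, and the FullName/Alias/VersionNum field names are assumed to mirror the REST payload):

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

// Placeholders: an existing model, an alias name, and a model version
// number for the alias to point at.
alias, err := w.RegisteredModels.SetAlias(ctx, catalog.SetRegisteredModelAliasRequest{
	FullName:   "main.default.my_model",
	Alias:      "champion",
	VersionNum: 1,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "set %v", alias)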

func (*RegisteredModelsAPI) Update added in v0.18.0

Update a Registered Model.

Updates the specified registered model.

The caller must be a metastore admin or an owner of the registered model. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

Currently only the name, the owner or the comment of the registered model can be updated.
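
A minimal sketch of updating a registered model's comment (not a generated example; the model name is a placeholder, and the FullName/Comment field names of UpdateRegisteredModelRequest are assumed):

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

// "main.default.my_model" is a placeholder three-level model name.
updated, err := w.RegisteredModels.Update(ctx, catalog.UpdateRegisteredModelRequest{
	FullName: "main.default.my_model",
	Comment:  fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "updated %v", updated)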

func (*RegisteredModelsAPI) WithImpl added in v0.18.0

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockRegisteredModelsInterface instead.

type RegisteredModelsInterface added in v0.29.0

type RegisteredModelsInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockRegisteredModelsInterface instead.
	WithImpl(impl RegisteredModelsService) RegisteredModelsInterface

	// Impl returns low-level RegisteredModels API implementation
	// Deprecated: use MockRegisteredModelsInterface instead.
	Impl() RegisteredModelsService

	// Create a Registered Model.
	//
	// Creates a new registered model in Unity Catalog.
	//
	// File storage for model versions in the registered model will be located in
	// the default location which is specified by the parent schema, or the parent
	// catalog, or the Metastore.
	//
	// For registered model creation to succeed, the user must satisfy the following
	// conditions: - The caller must be a metastore admin, or be the owner of the
	// parent catalog and schema, or have the **USE_CATALOG** privilege on the
	// parent catalog and the **USE_SCHEMA** privilege on the parent schema. - The
	// caller must have the **CREATE MODEL** or **CREATE FUNCTION** privilege on the
	// parent schema.
	Create(ctx context.Context, request CreateRegisteredModelRequest) (*RegisteredModelInfo, error)

	// Delete a Registered Model.
	//
	// Deletes a registered model and all its model versions from the specified
	// parent catalog and schema.
	//
	// The caller must be a metastore admin or an owner of the registered model. For
	// the latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	Delete(ctx context.Context, request DeleteRegisteredModelRequest) error

	// Delete a Registered Model.
	//
	// Deletes a registered model and all its model versions from the specified
	// parent catalog and schema.
	//
	// The caller must be a metastore admin or an owner of the registered model. For
	// the latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	DeleteByFullName(ctx context.Context, fullName string) error

	// Delete a Registered Model Alias.
	//
	// Deletes a registered model alias.
	//
	// The caller must be a metastore admin or an owner of the registered model. For
	// the latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	DeleteAlias(ctx context.Context, request DeleteAliasRequest) error

	// Delete a Registered Model Alias.
	//
	// Deletes a registered model alias.
	//
	// The caller must be a metastore admin or an owner of the registered model. For
	// the latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	DeleteAliasByFullNameAndAlias(ctx context.Context, fullName string, alias string) error

	// Get a Registered Model.
	//
	// Get a registered model.
	//
	// The caller must be a metastore admin or an owner of (or have the **EXECUTE**
	// privilege on) the registered model. For the latter case, the caller must also
	// be the owner or have the **USE_CATALOG** privilege on the parent catalog and
	// the **USE_SCHEMA** privilege on the parent schema.
	Get(ctx context.Context, request GetRegisteredModelRequest) (*RegisteredModelInfo, error)

	// Get a Registered Model.
	//
	// Get a registered model.
	//
	// The caller must be a metastore admin or an owner of (or have the **EXECUTE**
	// privilege on) the registered model. For the latter case, the caller must also
	// be the owner or have the **USE_CATALOG** privilege on the parent catalog and
	// the **USE_SCHEMA** privilege on the parent schema.
	GetByFullName(ctx context.Context, fullName string) (*RegisteredModelInfo, error)

	// List Registered Models.
	//
	// List registered models. You can list registered models under a particular
	// schema, or list all registered models in the current metastore.
	//
	// The returned models are filtered based on the privileges of the calling user.
	// For example, the metastore admin is able to list all the registered models. A
	// regular user needs to be the owner or have the **EXECUTE** privilege on the
	// registered model to receive the registered models in the response. For the
	// latter case, the caller must also be the owner or have the **USE_CATALOG**
	// privilege on the parent catalog and the **USE_SCHEMA** privilege on the
	// parent schema.
	//
	// There is no guarantee of a specific ordering of the elements in the response.
	//
	// This method is generated by Databricks SDK Code Generator.
	List(ctx context.Context, request ListRegisteredModelsRequest) listing.Iterator[RegisteredModelInfo]

	// List Registered Models.
	//
	// List registered models. You can list registered models under a particular
	// schema, or list all registered models in the current metastore.
	//
	// The returned models are filtered based on the privileges of the calling user.
	// For example, the metastore admin is able to list all the registered models. A
	// regular user needs to be the owner or have the **EXECUTE** privilege on the
	// registered model to receive the registered models in the response. For the
	// latter case, the caller must also be the owner or have the **USE_CATALOG**
	// privilege on the parent catalog and the **USE_SCHEMA** privilege on the
	// parent schema.
	//
	// There is no guarantee of a specific ordering of the elements in the response.
	//
	// This method is generated by Databricks SDK Code Generator.
	ListAll(ctx context.Context, request ListRegisteredModelsRequest) ([]RegisteredModelInfo, error)

	// RegisteredModelInfoNameToFullNameMap calls [RegisteredModelsAPI.ListAll] and creates a map of results with [RegisteredModelInfo].Name as key and [RegisteredModelInfo].FullName as value.
	//
	// Returns an error if there's more than one [RegisteredModelInfo] with the same .Name.
	//
	// Note: All [RegisteredModelInfo] instances are loaded into memory before creating a map.
	//
	// This method is generated by Databricks SDK Code Generator.
	RegisteredModelInfoNameToFullNameMap(ctx context.Context, request ListRegisteredModelsRequest) (map[string]string, error)

	// GetByName calls [RegisteredModelsAPI.RegisteredModelInfoNameToFullNameMap] and returns a single [RegisteredModelInfo].
	//
	// Returns an error if there's more than one [RegisteredModelInfo] with the same .Name.
	//
	// Note: All [RegisteredModelInfo] instances are loaded into memory before returning matching by name.
	//
	// This method is generated by Databricks SDK Code Generator.
	GetByName(ctx context.Context, name string) (*RegisteredModelInfo, error)

	// Set a Registered Model Alias.
	//
	// Set an alias on the specified registered model.
	//
	// The caller must be a metastore admin or an owner of the registered model. For
	// the latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	SetAlias(ctx context.Context, request SetRegisteredModelAliasRequest) (*RegisteredModelAlias, error)

	// Update a Registered Model.
	//
	// Updates the specified registered model.
	//
	// The caller must be a metastore admin or an owner of the registered model. For
	// the latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	//
	// Currently only the name, the owner or the comment of the registered model can
	// be updated.
	Update(ctx context.Context, request UpdateRegisteredModelRequest) (*RegisteredModelInfo, error)
}

type RegisteredModelsService added in v0.18.0

type RegisteredModelsService interface {

	// Create a Registered Model.
	//
	// Creates a new registered model in Unity Catalog.
	//
	// File storage for model versions in the registered model will be located
	// in the default location which is specified by the parent schema, or the
	// parent catalog, or the Metastore.
	//
	// For registered model creation to succeed, the user must satisfy the
	// following conditions: - The caller must be a metastore admin, or be the
	// owner of the parent catalog and schema, or have the **USE_CATALOG**
	// privilege on the parent catalog and the **USE_SCHEMA** privilege on the
	// parent schema. - The caller must have the **CREATE MODEL** or **CREATE
	// FUNCTION** privilege on the parent schema.
	Create(ctx context.Context, request CreateRegisteredModelRequest) (*RegisteredModelInfo, error)

	// Delete a Registered Model.
	//
	// Deletes a registered model and all its model versions from the specified
	// parent catalog and schema.
	//
	// The caller must be a metastore admin or an owner of the registered model.
	// For the latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	Delete(ctx context.Context, request DeleteRegisteredModelRequest) error

	// Delete a Registered Model Alias.
	//
	// Deletes a registered model alias.
	//
	// The caller must be a metastore admin or an owner of the registered model.
	// For the latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	DeleteAlias(ctx context.Context, request DeleteAliasRequest) error

	// Get a Registered Model.
	//
	// Get a registered model.
	//
	// The caller must be a metastore admin or an owner of (or have the
	// **EXECUTE** privilege on) the registered model. For the latter case, the
	// caller must also be the owner or have the **USE_CATALOG** privilege on
	// the parent catalog and the **USE_SCHEMA** privilege on the parent schema.
	Get(ctx context.Context, request GetRegisteredModelRequest) (*RegisteredModelInfo, error)

	// List Registered Models.
	//
	// List registered models. You can list registered models under a particular
	// schema, or list all registered models in the current metastore.
	//
	// The returned models are filtered based on the privileges of the calling
	// user. For example, the metastore admin is able to list all the registered
	// models. A regular user needs to be the owner or have the **EXECUTE**
	// privilege on the registered model to receive the registered models in the
	// response. For the latter case, the caller must also be the owner or have
	// the **USE_CATALOG** privilege on the parent catalog and the
	// **USE_SCHEMA** privilege on the parent schema.
	//
	// There is no guarantee of a specific ordering of the elements in the
	// response.
	//
	// Use ListAll() to get all RegisteredModelInfo instances, which will iterate over every result page.
	List(ctx context.Context, request ListRegisteredModelsRequest) (*ListRegisteredModelsResponse, error)

	// Set a Registered Model Alias.
	//
	// Set an alias on the specified registered model.
	//
	// The caller must be a metastore admin or an owner of the registered model.
	// For the latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	SetAlias(ctx context.Context, request SetRegisteredModelAliasRequest) (*RegisteredModelAlias, error)

	// Update a Registered Model.
	//
	// Updates the specified registered model.
	//
	// The caller must be a metastore admin or an owner of the registered model.
	// For the latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	//
	// Currently only the name, the owner or the comment of the registered model
	// can be updated.
	Update(ctx context.Context, request UpdateRegisteredModelRequest) (*RegisteredModelInfo, error)
}

Databricks provides a hosted version of MLflow Model Registry in Unity Catalog. Models in Unity Catalog provide centralized access control, auditing, lineage, and discovery of ML models across Databricks workspaces.

An MLflow registered model resides in the third layer of Unity Catalog’s three-level namespace. Registered models contain model versions, which correspond to actual ML models (MLflow models). Creating new model versions currently requires use of the MLflow Python client. Once model versions are created, you can load them for batch inference using MLflow Python client APIs, or deploy them for real-time serving using Databricks Model Serving.

All operations on registered models and model versions require USE_CATALOG permissions on the enclosing catalog and USE_SCHEMA permissions on the enclosing schema. In addition, the following privileges are required for various operations:

* To create a registered model, users must additionally have the CREATE_MODEL permission on the target schema.
* To view registered model or model version metadata, model version data files, or invoke a model version, users must additionally have the EXECUTE permission on the registered model.
* To update registered model or model version tags, users must additionally have APPLY TAG permissions on the registered model.
* To update other registered model or model version metadata (comments, aliases), create a new model version, or update permissions on the registered model, users must be owners of the registered model.

Note: The securable type for models is "FUNCTION". When using REST APIs (e.g. tagging, grants) that specify a securable type, use "FUNCTION" as the securable type.

type RunRefreshRequest added in v0.31.0

type RunRefreshRequest struct {
	// Full name of the table.
	TableName string `json:"-" url:"-"`
}

Queue a metric refresh for a monitor

type SchemaInfo

type SchemaInfo struct {
	// Indicates whether the principal is limited to retrieving metadata for the
	// associated object through the BROWSE privilege when include_browse is
	// enabled in the request.
	BrowseOnly bool `json:"browse_only,omitempty"`
	// Name of parent catalog.
	CatalogName string `json:"catalog_name,omitempty"`
	// The type of the parent catalog.
	CatalogType string `json:"catalog_type,omitempty"`
	// User-provided free-form text description.
	Comment string `json:"comment,omitempty"`
	// Time at which this schema was created, in epoch milliseconds.
	CreatedAt int64 `json:"created_at,omitempty"`
	// Username of schema creator.
	CreatedBy string `json:"created_by,omitempty"`

	EffectivePredictiveOptimizationFlag *EffectivePredictiveOptimizationFlag `json:"effective_predictive_optimization_flag,omitempty"`
	// Whether predictive optimization should be enabled for this object and
	// objects under it.
	EnablePredictiveOptimization EnablePredictiveOptimization `json:"enable_predictive_optimization,omitempty"`
	// Full name of schema, in form of __catalog_name__.__schema_name__.
	FullName string `json:"full_name,omitempty"`
	// Unique identifier of parent metastore.
	MetastoreId string `json:"metastore_id,omitempty"`
	// Name of schema, relative to parent catalog.
	Name string `json:"name,omitempty"`
	// Username of current owner of schema.
	Owner string `json:"owner,omitempty"`
	// A map of key-value properties attached to the securable.
	Properties map[string]string `json:"properties,omitempty"`
	// The unique identifier of the schema.
	SchemaId string `json:"schema_id,omitempty"`
	// Storage location for managed tables within schema.
	StorageLocation string `json:"storage_location,omitempty"`
	// Storage root URL for managed tables within schema.
	StorageRoot string `json:"storage_root,omitempty"`
	// Time at which this schema was last updated, in epoch milliseconds.
	UpdatedAt int64 `json:"updated_at,omitempty"`
	// Username of user who last modified schema.
	UpdatedBy string `json:"updated_by,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (SchemaInfo) MarshalJSON added in v0.23.0

func (s SchemaInfo) MarshalJSON() ([]byte, error)

func (*SchemaInfo) UnmarshalJSON added in v0.23.0

func (s *SchemaInfo) UnmarshalJSON(b []byte) error

type SchemasAPI

type SchemasAPI struct {
	// contains filtered or unexported fields
}

A schema (also called a database) is the second layer of Unity Catalog’s three-level namespace. A schema organizes tables, views and functions. To access (or list) a table or view in a schema, users must have the USE_SCHEMA data permission on the schema and its parent catalog, and they must have the SELECT permission on the table or view.

func NewSchemas

func NewSchemas(client *client.DatabricksClient) *SchemasAPI

func (*SchemasAPI) Create

func (a *SchemasAPI) Create(ctx context.Context, request CreateSchema) (*SchemaInfo, error)

Create a schema.

Creates a new schema for a catalog in the metastore. The caller must be a metastore admin, or have the **CREATE_SCHEMA** privilege in the parent catalog.

Example (Schemas)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

newCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", newCatalog)

created, err := w.Schemas.Create(ctx, catalog.CreateSchema{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CatalogName: newCatalog.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

// cleanup

err = w.Schemas.DeleteByFullName(ctx, created.FullName)
if err != nil {
	panic(err)
}
err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  newCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

Example (Shares)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

createdCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdCatalog)

createdSchema, err := w.Schemas.Create(ctx, catalog.CreateSchema{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CatalogName: createdCatalog.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdSchema)

// cleanup

err = w.Schemas.DeleteByFullName(ctx, createdSchema.FullName)
if err != nil {
	panic(err)
}
err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  createdCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

Example (Tables)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

createdCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdCatalog)

createdSchema, err := w.Schemas.Create(ctx, catalog.CreateSchema{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CatalogName: createdCatalog.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdSchema)

// cleanup

err = w.Schemas.DeleteByFullName(ctx, createdSchema.FullName)
if err != nil {
	panic(err)
}
err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  createdCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

Example (Volumes)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

createdCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdCatalog)

createdSchema, err := w.Schemas.Create(ctx, catalog.CreateSchema{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CatalogName: createdCatalog.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdSchema)

// cleanup

err = w.Schemas.DeleteByFullName(ctx, createdSchema.FullName)
if err != nil {
	panic(err)
}
err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  createdCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

func (*SchemasAPI) Delete

func (a *SchemasAPI) Delete(ctx context.Context, request DeleteSchemaRequest) error

Delete a schema.

Deletes the specified schema from the parent catalog. The caller must be the owner of the schema or an owner of the parent catalog.

func (*SchemasAPI) DeleteByFullName

func (a *SchemasAPI) DeleteByFullName(ctx context.Context, fullName string) error

Delete a schema.

Deletes the specified schema from the parent catalog. The caller must be the owner of the schema or an owner of the parent catalog.

func (*SchemasAPI) Get

func (a *SchemasAPI) Get(ctx context.Context, request GetSchemaRequest) (*SchemaInfo, error)

Get a schema.

Gets the specified schema within the metastore. The caller must be a metastore admin, the owner of the schema, or a user that has the **USE_SCHEMA** privilege on the schema.

Example (Schemas)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

newCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", newCatalog)

created, err := w.Schemas.Create(ctx, catalog.CreateSchema{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CatalogName: newCatalog.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

_, err = w.Schemas.GetByFullName(ctx, created.FullName)
if err != nil {
	panic(err)
}

// cleanup

err = w.Schemas.DeleteByFullName(ctx, created.FullName)
if err != nil {
	panic(err)
}
err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  newCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

func (*SchemasAPI) GetByFullName

func (a *SchemasAPI) GetByFullName(ctx context.Context, fullName string) (*SchemaInfo, error)

Get a schema.

Gets the specified schema within the metastore. The caller must be a metastore admin, the owner of the schema, or a user that has the **USE_SCHEMA** privilege on the schema.

func (*SchemasAPI) GetByName

func (a *SchemasAPI) GetByName(ctx context.Context, name string) (*SchemaInfo, error)

GetByName calls SchemasAPI.SchemaInfoNameToFullNameMap and returns a single SchemaInfo.

Returns an error if there's more than one SchemaInfo with the same .Name.

Note: All SchemaInfo instances are loaded into memory before returning matching by name.

This method is generated by Databricks SDK Code Generator.

func (*SchemasAPI) Impl

func (a *SchemasAPI) Impl() SchemasService

Impl returns low-level Schemas API implementation. Deprecated: use MockSchemasInterface instead.

func (*SchemasAPI) List added in v0.24.0

List schemas.

Gets an array of schemas for a catalog in the metastore. If the caller is the metastore admin or the owner of the parent catalog, all schemas for the catalog will be retrieved. Otherwise, only schemas owned by the caller (or for which the caller has the **USE_SCHEMA** privilege) will be retrieved. There is no guarantee of a specific ordering of the elements in the array.

This method is generated by Databricks SDK Code Generator.

func (*SchemasAPI) ListAll

func (a *SchemasAPI) ListAll(ctx context.Context, request ListSchemasRequest) ([]SchemaInfo, error)

List schemas.

Gets an array of schemas for a catalog in the metastore. If the caller is the metastore admin or the owner of the parent catalog, all schemas for the catalog will be retrieved. Otherwise, only schemas owned by the caller (or for which the caller has the **USE_SCHEMA** privilege) will be retrieved. There is no guarantee of a specific ordering of the elements in the array.

This method is generated by Databricks SDK Code Generator.

Example (Schemas)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

newCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", newCatalog)

all, err := w.Schemas.ListAll(ctx, catalog.ListSchemasRequest{
	CatalogName: newCatalog.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", all)

// cleanup

err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  newCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

func (*SchemasAPI) SchemaInfoNameToFullNameMap

func (a *SchemasAPI) SchemaInfoNameToFullNameMap(ctx context.Context, request ListSchemasRequest) (map[string]string, error)

SchemaInfoNameToFullNameMap calls SchemasAPI.ListAll and creates a map of results with SchemaInfo.Name as key and SchemaInfo.FullName as value.

Returns an error if there's more than one SchemaInfo with the same .Name.

Note: All SchemaInfo instances are loaded into memory before creating a map.

This method is generated by Databricks SDK Code Generator.
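
As an editorial sketch (not one of the generated examples), this shows resolving schemas' full names from their short names with the map above; the catalog name "main" and the schema name "default" are placeholders:

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

// Map of schema name -> full name for a single catalog; "main" is a placeholder.
nameToFullName, err := w.Schemas.SchemaInfoNameToFullNameMap(ctx, catalog.ListSchemasRequest{
	CatalogName: "main",
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "full name of default schema: %v", nameToFullName["default"])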

func (*SchemasAPI) Update

func (a *SchemasAPI) Update(ctx context.Context, request UpdateSchema) (*SchemaInfo, error)

Update a schema.

Updates a schema for a catalog. The caller must be the owner of the schema or a metastore admin. If the caller is a metastore admin, only the __owner__ field can be changed in the update. If the __name__ field must be updated, the caller must be a metastore admin or have the **CREATE_SCHEMA** privilege on the parent catalog.

Example (Schemas)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

newCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", newCatalog)

created, err := w.Schemas.Create(ctx, catalog.CreateSchema{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CatalogName: newCatalog.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

_, err = w.Schemas.Update(ctx, catalog.UpdateSchema{
	FullName: created.FullName,
	Comment:  fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}

// cleanup

err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  newCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
err = w.Schemas.DeleteByFullName(ctx, created.FullName)
if err != nil {
	panic(err)
}
Output:

func (*SchemasAPI) WithImpl

func (a *SchemasAPI) WithImpl(impl SchemasService) SchemasInterface

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockSchemasInterface instead.

type SchemasInterface added in v0.29.0

type SchemasInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockSchemasInterface instead.
	WithImpl(impl SchemasService) SchemasInterface

	// Impl returns low-level Schemas API implementation
	// Deprecated: use MockSchemasInterface instead.
	Impl() SchemasService

	// Create a schema.
	//
	// Creates a new schema for a catalog in the metastore. The caller must be a
	// metastore admin, or have the **CREATE_SCHEMA** privilege in the parent
	// catalog.
	Create(ctx context.Context, request CreateSchema) (*SchemaInfo, error)

	// Delete a schema.
	//
	// Deletes the specified schema from the parent catalog. The caller must be the
	// owner of the schema or an owner of the parent catalog.
	Delete(ctx context.Context, request DeleteSchemaRequest) error

	// Delete a schema.
	//
	// Deletes the specified schema from the parent catalog. The caller must be the
	// owner of the schema or an owner of the parent catalog.
	DeleteByFullName(ctx context.Context, fullName string) error

	// Get a schema.
	//
	// Gets the specified schema within the metastore. The caller must be a
	// metastore admin, the owner of the schema, or a user that has the
	// **USE_SCHEMA** privilege on the schema.
	Get(ctx context.Context, request GetSchemaRequest) (*SchemaInfo, error)

	// Get a schema.
	//
	// Gets the specified schema within the metastore. The caller must be a
	// metastore admin, the owner of the schema, or a user that has the
	// **USE_SCHEMA** privilege on the schema.
	GetByFullName(ctx context.Context, fullName string) (*SchemaInfo, error)

	// List schemas.
	//
	// Gets an array of schemas for a catalog in the metastore. If the caller is the
	// metastore admin or the owner of the parent catalog, all schemas for the
	// catalog will be retrieved. Otherwise, only schemas owned by the caller (or
	// for which the caller has the **USE_SCHEMA** privilege) will be retrieved.
	// There is no guarantee of a specific ordering of the elements in the array.
	//
	// This method is generated by Databricks SDK Code Generator.
	List(ctx context.Context, request ListSchemasRequest) listing.Iterator[SchemaInfo]

	// List schemas.
	//
	// Gets an array of schemas for a catalog in the metastore. If the caller is the
	// metastore admin or the owner of the parent catalog, all schemas for the
	// catalog will be retrieved. Otherwise, only schemas owned by the caller (or
	// for which the caller has the **USE_SCHEMA** privilege) will be retrieved.
	// There is no guarantee of a specific ordering of the elements in the array.
	//
	// This method is generated by Databricks SDK Code Generator.
	ListAll(ctx context.Context, request ListSchemasRequest) ([]SchemaInfo, error)

	// SchemaInfoNameToFullNameMap calls [SchemasAPI.ListAll] and creates a map of results with [SchemaInfo].Name as key and [SchemaInfo].FullName as value.
	//
	// Returns an error if there's more than one [SchemaInfo] with the same .Name.
	//
	// Note: All [SchemaInfo] instances are loaded into memory before creating a map.
	//
	// This method is generated by Databricks SDK Code Generator.
	SchemaInfoNameToFullNameMap(ctx context.Context, request ListSchemasRequest) (map[string]string, error)

	// GetByName calls [SchemasAPI.SchemaInfoNameToFullNameMap] and returns a single [SchemaInfo].
	//
	// Returns an error if there's more than one [SchemaInfo] with the same .Name.
	//
	// Note: All [SchemaInfo] instances are loaded into memory before returning matching by name.
	//
	// This method is generated by Databricks SDK Code Generator.
	GetByName(ctx context.Context, name string) (*SchemaInfo, error)

	// Update a schema.
	//
	// Updates a schema for a catalog. The caller must be the owner of the schema or
	// a metastore admin. If the caller is a metastore admin, only the __owner__
	// field can be changed in the update. If the __name__ field must be updated,
	// the caller must be a metastore admin or have the **CREATE_SCHEMA** privilege
	// on the parent catalog.
	Update(ctx context.Context, request UpdateSchema) (*SchemaInfo, error)
}

type SchemasService

type SchemasService interface {

	// Create a schema.
	//
	// Creates a new schema for a catalog in the metastore. The caller must be a
	// metastore admin, or have the **CREATE_SCHEMA** privilege in the parent
	// catalog.
	Create(ctx context.Context, request CreateSchema) (*SchemaInfo, error)

	// Delete a schema.
	//
	// Deletes the specified schema from the parent catalog. The caller must be
	// the owner of the schema or an owner of the parent catalog.
	Delete(ctx context.Context, request DeleteSchemaRequest) error

	// Get a schema.
	//
	// Gets the specified schema within the metastore. The caller must be a
	// metastore admin, the owner of the schema, or a user that has the
	// **USE_SCHEMA** privilege on the schema.
	Get(ctx context.Context, request GetSchemaRequest) (*SchemaInfo, error)

	// List schemas.
	//
	// Gets an array of schemas for a catalog in the metastore. If the caller is
	// the metastore admin or the owner of the parent catalog, all schemas for
	// the catalog will be retrieved. Otherwise, only schemas owned by the
	// caller (or for which the caller has the **USE_SCHEMA** privilege) will be
	// retrieved. There is no guarantee of a specific ordering of the elements
	// in the array.
	//
	// Use ListAll() to get all SchemaInfo instances, which will iterate over every result page.
	List(ctx context.Context, request ListSchemasRequest) (*ListSchemasResponse, error)

	// Update a schema.
	//
	// Updates a schema for a catalog. The caller must be the owner of the
	// schema or a metastore admin. If the caller is a metastore admin, only the
	// __owner__ field can be changed in the update. If the __name__ field must
	// be updated, the caller must be a metastore admin or have the
	// **CREATE_SCHEMA** privilege on the parent catalog.
	Update(ctx context.Context, request UpdateSchema) (*SchemaInfo, error)
}

A schema (also called a database) is the second layer of Unity Catalog’s three-level namespace. A schema organizes tables, views and functions. To access (or list) a table or view in a schema, users must have the USE_SCHEMA data permission on the schema and its parent catalog, and they must have the SELECT permission on the table or view.

type SecurableOptionsMap added in v0.11.0

type SecurableOptionsMap map[string]string

A map of key-value properties attached to the securable.

type SecurablePropertiesMap

type SecurablePropertiesMap map[string]string

A map of key-value properties attached to the securable.

type SecurableType

type SecurableType string

The type of Unity Catalog securable

const SecurableTypeCatalog SecurableType = `catalog`
const SecurableTypeConnection SecurableType = `connection`
const SecurableTypeExternalLocation SecurableType = `external_location`
const SecurableTypeFunction SecurableType = `function`
const SecurableTypeMetastore SecurableType = `metastore`
const SecurableTypePipeline SecurableType = `pipeline`
const SecurableTypeProvider SecurableType = `provider`
const SecurableTypeRecipient SecurableType = `recipient`
const SecurableTypeSchema SecurableType = `schema`
const SecurableTypeShare SecurableType = `share`
const SecurableTypeStorageCredential SecurableType = `storage_credential`
const SecurableTypeTable SecurableType = `table`
const SecurableTypeVolume SecurableType = `volume`

func (*SecurableType) Set

func (f *SecurableType) Set(v string) error

Set raw string value and validate it against allowed values

func (*SecurableType) String

func (f *SecurableType) String() string

String representation for fmt.Print

func (*SecurableType) Type

func (f *SecurableType) Type() string

Type always returns SecurableType to satisfy [pflag.Value] interface
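
A small editorial sketch of the pflag-style helpers above: Set rejects values outside the allowed list, while String and Type return the raw value and the type name, as documented above.

var securable catalog.SecurableType
if err := securable.Set("not-a-securable"); err != nil {
	fmt.Println("rejected:", err) // values outside the allowed list return an error
}
if err := securable.Set("table"); err == nil {
	fmt.Println("accepted:", securable.String(), securable.Type())
}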

type SetArtifactAllowlist added in v0.17.0

type SetArtifactAllowlist struct {
	// A list of allowed artifact match patterns.
	ArtifactMatchers []ArtifactMatcher `json:"artifact_matchers"`
	// The artifact type of the allowlist.
	ArtifactType ArtifactType `json:"-" url:"-"`
}

type SetRegisteredModelAliasRequest added in v0.18.0

type SetRegisteredModelAliasRequest struct {
	// The name of the alias
	Alias string `json:"alias" url:"-"`
	// Full name of the registered model
	FullName string `json:"full_name" url:"-"`
	// The version number of the model version to which the alias points
	VersionNum int `json:"version_num"`
}

type SseEncryptionDetails added in v0.14.0

type SseEncryptionDetails struct {
	// The type of key encryption to use (affects headers from s3 client).
	Algorithm SseEncryptionDetailsAlgorithm `json:"algorithm,omitempty"`
	// When algorithm is **AWS_SSE_KMS** this field specifies the ARN of the SSE
	// key to use.
	AwsKmsKeyArn string `json:"aws_kms_key_arn,omitempty"`

	ForceSendFields []string `json:"-"`
}

Server-Side Encryption properties for clients communicating with AWS s3.
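
A brief editorial sketch of populating these fields for SSE-KMS; the KMS key ARN is a placeholder. A value like this is typically supplied through an EncryptionDetails field on storage-related requests in this package.

details := catalog.SseEncryptionDetails{
	// Use KMS-managed keys; AWS_SSE_S3 is the other allowed algorithm.
	Algorithm: catalog.SseEncryptionDetailsAlgorithmAwsSseKms,
	// Required when the algorithm is AWS_SSE_KMS; this ARN is a placeholder.
	AwsKmsKeyArn: "arn:aws:kms:us-west-2:123456789012:key/00000000-0000-0000-0000-000000000000",
}
_ = details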

func (SseEncryptionDetails) MarshalJSON added in v0.23.0

func (s SseEncryptionDetails) MarshalJSON() ([]byte, error)

func (*SseEncryptionDetails) UnmarshalJSON added in v0.23.0

func (s *SseEncryptionDetails) UnmarshalJSON(b []byte) error

type SseEncryptionDetailsAlgorithm added in v0.14.0

type SseEncryptionDetailsAlgorithm string

The type of key encryption to use (affects headers from s3 client).

const SseEncryptionDetailsAlgorithmAwsSseKms SseEncryptionDetailsAlgorithm = `AWS_SSE_KMS`
const SseEncryptionDetailsAlgorithmAwsSseS3 SseEncryptionDetailsAlgorithm = `AWS_SSE_S3`

func (*SseEncryptionDetailsAlgorithm) Set added in v0.14.0

Set raw string value and validate it against allowed values

func (*SseEncryptionDetailsAlgorithm) String added in v0.14.0

String representation for fmt.Print

func (*SseEncryptionDetailsAlgorithm) Type added in v0.14.0

Type always returns SseEncryptionDetailsAlgorithm to satisfy [pflag.Value] interface

type StorageCredentialInfo

type StorageCredentialInfo struct {
	// The AWS IAM role configuration.
	AwsIamRole *AwsIamRoleResponse `json:"aws_iam_role,omitempty"`
	// The Azure managed identity configuration.
	AzureManagedIdentity *AzureManagedIdentityResponse `json:"azure_managed_identity,omitempty"`
	// The Azure service principal configuration.
	AzureServicePrincipal *AzureServicePrincipal `json:"azure_service_principal,omitempty"`
	// The Cloudflare API token configuration.
	CloudflareApiToken *CloudflareApiToken `json:"cloudflare_api_token,omitempty"`
	// Comment associated with the credential.
	Comment string `json:"comment,omitempty"`
	// Time at which this Credential was created, in epoch milliseconds.
	CreatedAt int64 `json:"created_at,omitempty"`
	// Username of credential creator.
	CreatedBy string `json:"created_by,omitempty"`
	// The Databricks-managed GCP service account configuration.
	DatabricksGcpServiceAccount *DatabricksGcpServiceAccountResponse `json:"databricks_gcp_service_account,omitempty"`
	// The unique identifier of the credential.
	Id string `json:"id,omitempty"`
	// Unique identifier of parent metastore.
	MetastoreId string `json:"metastore_id,omitempty"`
	// The credential name. The name must be unique within the metastore.
	Name string `json:"name,omitempty"`
	// Username of current owner of credential.
	Owner string `json:"owner,omitempty"`
	// Whether the storage credential is only usable for read operations.
	ReadOnly bool `json:"read_only,omitempty"`
	// Time at which this credential was last modified, in epoch milliseconds.
	UpdatedAt int64 `json:"updated_at,omitempty"`
	// Username of user who last modified the credential.
	UpdatedBy string `json:"updated_by,omitempty"`
	// Whether this credential is the current metastore's root storage
	// credential.
	UsedForManagedStorage bool `json:"used_for_managed_storage,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (StorageCredentialInfo) MarshalJSON added in v0.23.0

func (s StorageCredentialInfo) MarshalJSON() ([]byte, error)

func (*StorageCredentialInfo) UnmarshalJSON added in v0.23.0

func (s *StorageCredentialInfo) UnmarshalJSON(b []byte) error

type StorageCredentialsAPI

type StorageCredentialsAPI struct {
	// contains filtered or unexported fields
}

A storage credential represents an authentication and authorization mechanism for accessing data stored on your cloud tenant. Each storage credential is subject to Unity Catalog access-control policies that control which users and groups can access the credential. If a user does not have access to a storage credential in Unity Catalog, the request fails and Unity Catalog does not attempt to authenticate to your cloud tenant on the user’s behalf.

Databricks recommends using external locations rather than using storage credentials directly.

To create storage credentials, you must be a Databricks account admin. The account admin who creates the storage credential can delegate ownership to another user or group to manage permissions on it.

func NewStorageCredentials

func NewStorageCredentials(client *client.DatabricksClient) *StorageCredentialsAPI

func (*StorageCredentialsAPI) Create

Create a storage credential.

Creates a new storage credential.

Example (ExternalLocationsOnAws)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

credential, err := w.StorageCredentials.Create(ctx, catalog.CreateStorageCredential{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	AwsIamRole: &catalog.AwsIamRoleRequest{
		RoleArn: os.Getenv("TEST_METASTORE_DATA_ACCESS_ARN"),
	},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", credential)

// cleanup

err = w.StorageCredentials.DeleteByName(ctx, credential.Name)
if err != nil {
	panic(err)
}
Output:

Example (StorageCredentialsOnAws)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

created, err := w.StorageCredentials.Create(ctx, catalog.CreateStorageCredential{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	AwsIamRole: &catalog.AwsIamRoleRequest{
		RoleArn: os.Getenv("TEST_METASTORE_DATA_ACCESS_ARN"),
	},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

// cleanup

err = w.StorageCredentials.DeleteByName(ctx, created.Name)
if err != nil {
	panic(err)
}
Output:

Example (Volumes)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

storageCredential, err := w.StorageCredentials.Create(ctx, catalog.CreateStorageCredential{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	AwsIamRole: &catalog.AwsIamRoleRequest{
		RoleArn: os.Getenv("TEST_METASTORE_DATA_ACCESS_ARN"),
	},
	Comment: "created via SDK",
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", storageCredential)

// cleanup

err = w.StorageCredentials.DeleteByName(ctx, storageCredential.Name)
if err != nil {
	panic(err)
}
Output:

func (*StorageCredentialsAPI) Delete

Delete a credential.

Deletes a storage credential from the metastore. The caller must be an owner of the storage credential.

func (*StorageCredentialsAPI) DeleteByName

func (a *StorageCredentialsAPI) DeleteByName(ctx context.Context, name string) error

Delete a credential.

Deletes a storage credential from the metastore. The caller must be an owner of the storage credential.

func (*StorageCredentialsAPI) Get

Get a credential.

Gets a storage credential from the metastore. The caller must be a metastore admin, the owner of the storage credential, or have some permission on the storage credential.

Example (StorageCredentialsOnAws)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

created, err := w.StorageCredentials.Create(ctx, catalog.CreateStorageCredential{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	AwsIamRole: &catalog.AwsIamRoleRequest{
		RoleArn: os.Getenv("TEST_METASTORE_DATA_ACCESS_ARN"),
	},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

byName, err := w.StorageCredentials.GetByName(ctx, created.Name)
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", byName)

// cleanup

err = w.StorageCredentials.DeleteByName(ctx, created.Name)
if err != nil {
	panic(err)
}
Output:

func (*StorageCredentialsAPI) GetByName

Get a credential.

Gets a storage credential from the metastore. The caller must be a metastore admin, the owner of the storage credential, or have some permission on the storage credential.

func (*StorageCredentialsAPI) Impl

Impl returns low-level StorageCredentials API implementation. Deprecated: use MockStorageCredentialsInterface instead.

func (*StorageCredentialsAPI) List

List credentials.

Gets an array of storage credentials (as __StorageCredentialInfo__ objects). The array is limited to only those storage credentials the caller has permission to access. If the caller is a metastore admin, retrieval of credentials is unrestricted. There is no guarantee of a specific ordering of the elements in the array.

This method is generated by Databricks SDK Code Generator.

func (*StorageCredentialsAPI) ListAll added in v0.9.0

List credentials.

Gets an array of storage credentials (as __StorageCredentialInfo__ objects). The array is limited to only those storage credentials the caller has permission to access. If the caller is a metastore admin, retrieval of credentials is unrestricted. There is no guarantee of a specific ordering of the elements in the array.

This method is generated by Databricks SDK Code Generator.

Example (StorageCredentialsOnAws)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

all, err := w.StorageCredentials.ListAll(ctx, catalog.ListStorageCredentialsRequest{})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", all)
Output:

func (*StorageCredentialsAPI) StorageCredentialInfoNameToIdMap

func (a *StorageCredentialsAPI) StorageCredentialInfoNameToIdMap(ctx context.Context, request ListStorageCredentialsRequest) (map[string]string, error)

StorageCredentialInfoNameToIdMap calls StorageCredentialsAPI.ListAll and creates a map of results with StorageCredentialInfo.Name as key and StorageCredentialInfo.Id as value.

Returns an error if there's more than one StorageCredentialInfo with the same .Name.

Note: All StorageCredentialInfo instances are loaded into memory before creating a map.

This method is generated by Databricks SDK Code Generator.

func (*StorageCredentialsAPI) Update

Update a credential.

Updates a storage credential on the metastore.

Example (StorageCredentialsOnAws)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

created, err := w.StorageCredentials.Create(ctx, catalog.CreateStorageCredential{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	AwsIamRole: &catalog.AwsIamRoleRequest{
		RoleArn: os.Getenv("TEST_METASTORE_DATA_ACCESS_ARN"),
	},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

_, err = w.StorageCredentials.Update(ctx, catalog.UpdateStorageCredential{
	Name:    created.Name,
	Comment: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	AwsIamRole: &catalog.AwsIamRoleRequest{
		RoleArn: os.Getenv("TEST_METASTORE_DATA_ACCESS_ARN"),
	},
})
if err != nil {
	panic(err)
}

// cleanup

err = w.StorageCredentials.DeleteByName(ctx, created.Name)
if err != nil {
	panic(err)
}
Output:

func (*StorageCredentialsAPI) Validate

Validate a storage credential.

Validates a storage credential. At least one of __external_location_name__ and __url__ needs to be provided. If only one of them is provided, it will be used for validation. If both are provided, the __url__ will be used for validation, and __external_location_name__ will be ignored when checking for overlapping URLs.

Either the __storage_credential_name__ or the cloud-specific credential must be provided.

The caller must be a metastore admin or the storage credential owner or have the **CREATE_EXTERNAL_LOCATION** privilege on the metastore and the storage credential.
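
An editorial sketch of a validation call, not a generated example: the catalog.ValidateStorageCredential field names StorageCredentialName and Url are assumed from the __storage_credential_name__ and __url__ parameters above, and both values are placeholders.

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

result, err := w.StorageCredentials.Validate(ctx, catalog.ValidateStorageCredential{
	StorageCredentialName: "my-credential",         // assumed field name, placeholder value
	Url:                   "s3://my-bucket/prefix", // assumed field name, placeholder value
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "validation result: %v", result)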

func (*StorageCredentialsAPI) WithImpl

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockStorageCredentialsInterface instead.

type StorageCredentialsInterface added in v0.29.0

type StorageCredentialsInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockStorageCredentialsInterface instead.
	WithImpl(impl StorageCredentialsService) StorageCredentialsInterface

	// Impl returns low-level StorageCredentials API implementation
	// Deprecated: use MockStorageCredentialsInterface instead.
	Impl() StorageCredentialsService

	// Create a storage credential.
	//
	// Creates a new storage credential.
	Create(ctx context.Context, request CreateStorageCredential) (*StorageCredentialInfo, error)

	// Delete a credential.
	//
	// Deletes a storage credential from the metastore. The caller must be an owner
	// of the storage credential.
	Delete(ctx context.Context, request DeleteStorageCredentialRequest) error

	// Delete a credential.
	//
	// Deletes a storage credential from the metastore. The caller must be an owner
	// of the storage credential.
	DeleteByName(ctx context.Context, name string) error

	// Get a credential.
	//
	// Gets a storage credential from the metastore. The caller must be a metastore
	// admin, the owner of the storage credential, or have some permission on the
	// storage credential.
	Get(ctx context.Context, request GetStorageCredentialRequest) (*StorageCredentialInfo, error)

	// Get a credential.
	//
	// Gets a storage credential from the metastore. The caller must be a metastore
	// admin, the owner of the storage credential, or have some permission on the
	// storage credential.
	GetByName(ctx context.Context, name string) (*StorageCredentialInfo, error)

	// List credentials.
	//
	// Gets an array of storage credentials (as __StorageCredentialInfo__ objects).
	// The array is limited to only those storage credentials the caller has
	// permission to access. If the caller is a metastore admin, retrieval of
	// credentials is unrestricted. There is no guarantee of a specific ordering of
	// the elements in the array.
	//
	// This method is generated by Databricks SDK Code Generator.
	List(ctx context.Context, request ListStorageCredentialsRequest) listing.Iterator[StorageCredentialInfo]

	// List credentials.
	//
	// Gets an array of storage credentials (as __StorageCredentialInfo__ objects).
	// The array is limited to only those storage credentials the caller has
	// permission to access. If the caller is a metastore admin, retrieval of
	// credentials is unrestricted. There is no guarantee of a specific ordering of
	// the elements in the array.
	//
	// This method is generated by Databricks SDK Code Generator.
	ListAll(ctx context.Context, request ListStorageCredentialsRequest) ([]StorageCredentialInfo, error)

	// StorageCredentialInfoNameToIdMap calls [StorageCredentialsAPI.ListAll] and creates a map of results with [StorageCredentialInfo].Name as key and [StorageCredentialInfo].Id as value.
	//
	// Returns an error if there's more than one [StorageCredentialInfo] with the same .Name.
	//
	// Note: All [StorageCredentialInfo] instances are loaded into memory before creating a map.
	//
	// This method is generated by Databricks SDK Code Generator.
	StorageCredentialInfoNameToIdMap(ctx context.Context, request ListStorageCredentialsRequest) (map[string]string, error)

	// Update a credential.
	//
	// Updates a storage credential on the metastore.
	Update(ctx context.Context, request UpdateStorageCredential) (*StorageCredentialInfo, error)

	// Validate a storage credential.
	//
	// Validates a storage credential. At least one of __external_location_name__
	// and __url__ needs to be provided. If only one of them is provided, it will
	// be used for validation. If both are provided, the __url__ will be used for
	// validation, and __external_location_name__ will be ignored when checking
	// for overlapping URLs.
	//
	// Either the __storage_credential_name__ or the cloud-specific credential must
	// be provided.
	//
	// The caller must be a metastore admin or the storage credential owner or have
	// the **CREATE_EXTERNAL_LOCATION** privilege on the metastore and the storage
	// credential.
	Validate(ctx context.Context, request ValidateStorageCredential) (*ValidateStorageCredentialResponse, error)
}

type StorageCredentialsService

type StorageCredentialsService interface {

	// Create a storage credential.
	//
	// Creates a new storage credential.
	Create(ctx context.Context, request CreateStorageCredential) (*StorageCredentialInfo, error)

	// Delete a credential.
	//
	// Deletes a storage credential from the metastore. The caller must be an
	// owner of the storage credential.
	Delete(ctx context.Context, request DeleteStorageCredentialRequest) error

	// Get a credential.
	//
	// Gets a storage credential from the metastore. The caller must be a
	// metastore admin, the owner of the storage credential, or have some
	// permission on the storage credential.
	Get(ctx context.Context, request GetStorageCredentialRequest) (*StorageCredentialInfo, error)

	// List credentials.
	//
	// Gets an array of storage credentials (as __StorageCredentialInfo__
	// objects). The array is limited to only those storage credentials the
	// caller has permission to access. If the caller is a metastore admin,
	// retrieval of credentials is unrestricted. There is no guarantee of a
	// specific ordering of the elements in the array.
	//
	// Use ListAll() to get all StorageCredentialInfo instances, which will iterate over every result page.
	List(ctx context.Context, request ListStorageCredentialsRequest) (*ListStorageCredentialsResponse, error)

	// Update a credential.
	//
	// Updates a storage credential on the metastore.
	Update(ctx context.Context, request UpdateStorageCredential) (*StorageCredentialInfo, error)

	// Validate a storage credential.
	//
	// Validates a storage credential. At least one of
	// __external_location_name__ and __url__ needs to be provided. If only one
	// of them is provided, it will be used for validation. If both are
	// provided, the __url__ will be used for validation, and
	// __external_location_name__ will be ignored when checking for overlapping
	// URLs.
	//
	// Either the __storage_credential_name__ or the cloud-specific credential
	// must be provided.
	//
	// The caller must be a metastore admin or the storage credential owner or
	// have the **CREATE_EXTERNAL_LOCATION** privilege on the metastore and the
	// storage credential.
	Validate(ctx context.Context, request ValidateStorageCredential) (*ValidateStorageCredentialResponse, error)
}

A storage credential represents an authentication and authorization mechanism for accessing data stored on your cloud tenant. Each storage credential is subject to Unity Catalog access-control policies that control which users and groups can access the credential. If a user does not have access to a storage credential in Unity Catalog, the request fails and Unity Catalog does not attempt to authenticate to your cloud tenant on the user’s behalf.

Databricks recommends using external locations rather than using storage credentials directly.

To create storage credentials, you must be a Databricks account admin. The account admin who creates the storage credential can delegate ownership to another user or group to manage permissions on it.

type SystemSchemaInfo added in v0.10.0

type SystemSchemaInfo struct {
	// Name of the system schema.
	Schema string `json:"schema,omitempty"`
	// The current state of enablement for the system schema. An empty string
	// means the system schema is available and ready for opt-in.
	State SystemSchemaInfoState `json:"state,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (SystemSchemaInfo) MarshalJSON added in v0.23.0

func (s SystemSchemaInfo) MarshalJSON() ([]byte, error)

func (*SystemSchemaInfo) UnmarshalJSON added in v0.23.0

func (s *SystemSchemaInfo) UnmarshalJSON(b []byte) error

type SystemSchemaInfoState added in v0.10.0

type SystemSchemaInfoState string

The current state of enablement for the system schema. An empty string means the system schema is available and ready for opt-in.

const SystemSchemaInfoStateAvailable SystemSchemaInfoState = `AVAILABLE`
const SystemSchemaInfoStateDisableInitialized SystemSchemaInfoState = `DISABLE_INITIALIZED`
const SystemSchemaInfoStateEnableCompleted SystemSchemaInfoState = `ENABLE_COMPLETED`
const SystemSchemaInfoStateEnableInitialized SystemSchemaInfoState = `ENABLE_INITIALIZED`
const SystemSchemaInfoStateUnavailable SystemSchemaInfoState = `UNAVAILABLE`

func (*SystemSchemaInfoState) Set added in v0.10.0

Set raw string value and validate it against allowed values

func (*SystemSchemaInfoState) String added in v0.10.0

func (f *SystemSchemaInfoState) String() string

String representation for fmt.Print

func (*SystemSchemaInfoState) Type added in v0.10.0

func (f *SystemSchemaInfoState) Type() string

Type always returns SystemSchemaInfoState to satisfy [pflag.Value] interface

type SystemSchemasAPI added in v0.10.0

type SystemSchemasAPI struct {
	// contains filtered or unexported fields
}

A system schema is a schema that lives within the system catalog. A system schema may contain information about customer usage of Unity Catalog such as audit logs, billing logs, lineage information, etc.

func NewSystemSchemas added in v0.10.0

func NewSystemSchemas(client *client.DatabricksClient) *SystemSchemasAPI

func (*SystemSchemasAPI) Disable added in v0.10.0

func (a *SystemSchemasAPI) Disable(ctx context.Context, request DisableRequest) error

Disable a system schema.

Disables the system schema and removes it from the system catalog. The caller must be an account admin or a metastore admin.

func (*SystemSchemasAPI) DisableByMetastoreIdAndSchemaName added in v0.10.0

func (a *SystemSchemasAPI) DisableByMetastoreIdAndSchemaName(ctx context.Context, metastoreId string, schemaName string) error

Disable a system schema.

Disables the system schema and removes it from the system catalog. The caller must be an account admin or a metastore admin.

func (*SystemSchemasAPI) Enable added in v0.10.0

func (a *SystemSchemasAPI) Enable(ctx context.Context, request EnableRequest) error

Enable a system schema.

Enables the system schema and adds it to the system catalog. The caller must be an account admin or a metastore admin.
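
An editorial sketch of enabling a system schema; the catalog.EnableRequest field names MetastoreId and SchemaName are assumed from the request parameters, and both values are placeholders.

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

err = w.SystemSchemas.Enable(ctx, catalog.EnableRequest{
	MetastoreId: "12345678-1234-1234-1234-123456789012", // assumed field name, placeholder value
	SchemaName:  "access",                               // assumed field name, placeholder value
})
if err != nil {
	panic(err)
}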

func (*SystemSchemasAPI) Impl added in v0.10.0

Impl returns low-level SystemSchemas API implementation. Deprecated: use MockSystemSchemasInterface instead.

func (*SystemSchemasAPI) List added in v0.24.0

List system schemas.

Gets an array of system schemas for a metastore. The caller must be an account admin or a metastore admin.

This method is generated by Databricks SDK Code Generator.

func (*SystemSchemasAPI) ListAll added in v0.10.0

List system schemas.

Gets an array of system schemas for a metastore. The caller must be an account admin or a metastore admin.

This method is generated by Databricks SDK Code Generator.

func (*SystemSchemasAPI) ListByMetastoreId added in v0.10.0

func (a *SystemSchemasAPI) ListByMetastoreId(ctx context.Context, metastoreId string) (*ListSystemSchemasResponse, error)

List system schemas.

Gets an array of system schemas for a metastore. The caller must be an account admin or a metastore admin.

func (*SystemSchemasAPI) WithImpl added in v0.10.0

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockSystemSchemasInterface instead.

type SystemSchemasInterface added in v0.29.0

type SystemSchemasInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockSystemSchemasInterface instead.
	WithImpl(impl SystemSchemasService) SystemSchemasInterface

	// Impl returns low-level SystemSchemas API implementation
	// Deprecated: use MockSystemSchemasInterface instead.
	Impl() SystemSchemasService

	// Disable a system schema.
	//
	// Disables the system schema and removes it from the system catalog. The caller
	// must be an account admin or a metastore admin.
	Disable(ctx context.Context, request DisableRequest) error

	// Disable a system schema.
	//
	// Disables the system schema and removes it from the system catalog. The caller
	// must be an account admin or a metastore admin.
	DisableByMetastoreIdAndSchemaName(ctx context.Context, metastoreId string, schemaName string) error

	// Enable a system schema.
	//
	// Enables the system schema and adds it to the system catalog. The caller must
	// be an account admin or a metastore admin.
	Enable(ctx context.Context, request EnableRequest) error

	// List system schemas.
	//
	// Gets an array of system schemas for a metastore. The caller must be an
	// account admin or a metastore admin.
	//
	// This method is generated by Databricks SDK Code Generator.
	List(ctx context.Context, request ListSystemSchemasRequest) listing.Iterator[SystemSchemaInfo]

	// List system schemas.
	//
	// Gets an array of system schemas for a metastore. The caller must be an
	// account admin or a metastore admin.
	//
	// This method is generated by Databricks SDK Code Generator.
	ListAll(ctx context.Context, request ListSystemSchemasRequest) ([]SystemSchemaInfo, error)

	// List system schemas.
	//
	// Gets an array of system schemas for a metastore. The caller must be an
	// account admin or a metastore admin.
	ListByMetastoreId(ctx context.Context, metastoreId string) (*ListSystemSchemasResponse, error)
}

type SystemSchemasService added in v0.10.0

type SystemSchemasService interface {

	// Disable a system schema.
	//
	// Disables the system schema and removes it from the system catalog. The
	// caller must be an account admin or a metastore admin.
	Disable(ctx context.Context, request DisableRequest) error

	// Enable a system schema.
	//
	// Enables the system schema and adds it to the system catalog. The caller
	// must be an account admin or a metastore admin.
	Enable(ctx context.Context, request EnableRequest) error

	// List system schemas.
	//
	// Gets an array of system schemas for a metastore. The caller must be an
	// account admin or a metastore admin.
	//
	// Use ListAll() to get all SystemSchemaInfo instances
	List(ctx context.Context, request ListSystemSchemasRequest) (*ListSystemSchemasResponse, error)
}

A system schema is a schema that lives within the system catalog. A system schema may contain information about customer usage of Unity Catalog such as audit logs, billing logs, lineage information, etc.

type TableConstraint

type TableConstraint struct {
	ForeignKeyConstraint *ForeignKeyConstraint `json:"foreign_key_constraint,omitempty"`

	NamedTableConstraint *NamedTableConstraint `json:"named_table_constraint,omitempty"`

	PrimaryKeyConstraint *PrimaryKeyConstraint `json:"primary_key_constraint,omitempty"`
}

A table constraint, as defined by *one* of the following fields being set: __primary_key_constraint__, __foreign_key_constraint__, __named_table_constraint__.

type TableConstraintsAPI

type TableConstraintsAPI struct {
	// contains filtered or unexported fields
}

Primary key and foreign key constraints encode relationships between fields in tables.

Primary and foreign keys are informational only and are not enforced. Foreign keys must reference a primary key in another table. This primary key is the parent constraint of the foreign key and the table this primary key is on is the parent table of the foreign key. Similarly, the foreign key is the child constraint of its referenced primary key; the table of the foreign key is the child table of the primary key.

You can declare primary keys and foreign keys as part of the table specification during table creation. You can also add or drop constraints on existing tables.

func NewTableConstraints

func NewTableConstraints(client *client.DatabricksClient) *TableConstraintsAPI

func (*TableConstraintsAPI) Create

Create a table constraint.

Creates a new table constraint.

For the table constraint creation to succeed, the user must satisfy both of these conditions: - the user must have the **USE_CATALOG** privilege on the table's parent catalog, the **USE_SCHEMA** privilege on the table's parent schema, and be the owner of the table. - if the new constraint is a __ForeignKeyConstraint__, the user must have the **USE_CATALOG** privilege on the referenced parent table's catalog, the **USE_SCHEMA** privilege on the referenced parent table's schema, and be the owner of the referenced parent table.
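
An editorial sketch of adding a primary-key constraint to an existing table; the catalog.CreateTableConstraint fields (FullNameArg, Constraint) and the catalog.PrimaryKeyConstraint fields (Name, ChildColumns) are assumptions about types not shown in this excerpt, and the table name is a placeholder.

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

constraint, err := w.TableConstraints.Create(ctx, catalog.CreateTableConstraint{
	FullNameArg: "main.default.my_table", // assumed field name, placeholder table
	Constraint: catalog.TableConstraint{
		// Exactly one of the three constraint fields should be set.
		PrimaryKeyConstraint: &catalog.PrimaryKeyConstraint{
			Name:         "pk_my_table",
			ChildColumns: []string{"id"},
		},
	},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "created %v", constraint)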

func (*TableConstraintsAPI) Delete

Delete a table constraint.

Deletes a table constraint.

For the table constraint deletion to succeed, the user must satisfy both of these conditions: - the user must have the **USE_CATALOG** privilege on the table's parent catalog, the **USE_SCHEMA** privilege on the table's parent schema, and be the owner of the table. - if __cascade__ argument is **true**, the user must have the following permissions on all of the child tables: the **USE_CATALOG** privilege on the table's catalog, the **USE_SCHEMA** privilege on the table's schema, and be the owner of the table.

func (*TableConstraintsAPI) DeleteByFullName

func (a *TableConstraintsAPI) DeleteByFullName(ctx context.Context, fullName string) error

Delete a table constraint.

Deletes a table constraint.

For the table constraint deletion to succeed, the user must satisfy both of these conditions: - the user must have the **USE_CATALOG** privilege on the table's parent catalog, the **USE_SCHEMA** privilege on the table's parent schema, and be the owner of the table. - if __cascade__ argument is **true**, the user must have the following permissions on all of the child tables: the **USE_CATALOG** privilege on the table's catalog, the **USE_SCHEMA** privilege on the table's schema, and be the owner of the table.

func (*TableConstraintsAPI) Impl

Impl returns low-level TableConstraints API implementation. Deprecated: use MockTableConstraintsInterface instead.

func (*TableConstraintsAPI) WithImpl

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockTableConstraintsInterface instead.

type TableConstraintsInterface added in v0.29.0

type TableConstraintsInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockTableConstraintsInterface instead.
	WithImpl(impl TableConstraintsService) TableConstraintsInterface

	// Impl returns low-level TableConstraints API implementation
	// Deprecated: use MockTableConstraintsInterface instead.
	Impl() TableConstraintsService

	// Create a table constraint.
	//
	// Creates a new table constraint.
	//
	// For the table constraint creation to succeed, the user must satisfy both of
	// these conditions: - the user must have the **USE_CATALOG** privilege on the
	// table's parent catalog, the **USE_SCHEMA** privilege on the table's parent
	// schema, and be the owner of the table. - if the new constraint is a
	// __ForeignKeyConstraint__, the user must have the **USE_CATALOG** privilege on
	// the referenced parent table's catalog, the **USE_SCHEMA** privilege on the
	// referenced parent table's schema, and be the owner of the referenced parent
	// table.
	Create(ctx context.Context, request CreateTableConstraint) (*TableConstraint, error)

	// Delete a table constraint.
	//
	// Deletes a table constraint.
	//
	// For the table constraint deletion to succeed, the user must satisfy both of
	// these conditions: - the user must have the **USE_CATALOG** privilege on the
	// table's parent catalog, the **USE_SCHEMA** privilege on the table's parent
	// schema, and be the owner of the table. - if __cascade__ argument is **true**,
	// the user must have the following permissions on all of the child tables: the
	// **USE_CATALOG** privilege on the table's catalog, the **USE_SCHEMA**
	// privilege on the table's schema, and be the owner of the table.
	Delete(ctx context.Context, request DeleteTableConstraintRequest) error

	// Delete a table constraint.
	//
	// Deletes a table constraint.
	//
	// For the table constraint deletion to succeed, the user must satisfy both of
	// these conditions: - the user must have the **USE_CATALOG** privilege on the
	// table's parent catalog, the **USE_SCHEMA** privilege on the table's parent
	// schema, and be the owner of the table. - if __cascade__ argument is **true**,
	// the user must have the following permissions on all of the child tables: the
	// **USE_CATALOG** privilege on the table's catalog, the **USE_SCHEMA**
	// privilege on the table's schema, and be the owner of the table.
	DeleteByFullName(ctx context.Context, fullName string) error
}

type TableConstraintsService

type TableConstraintsService interface {

	// Create a table constraint.
	//
	// Creates a new table constraint.
	//
	// For the table constraint creation to succeed, the user must satisfy both
	// of these conditions: - the user must have the **USE_CATALOG** privilege
	// on the table's parent catalog, the **USE_SCHEMA** privilege on the
	// table's parent schema, and be the owner of the table. - if the new
	// constraint is a __ForeignKeyConstraint__, the user must have the
	// **USE_CATALOG** privilege on the referenced parent table's catalog, the
	// **USE_SCHEMA** privilege on the referenced parent table's schema, and be
	// the owner of the referenced parent table.
	Create(ctx context.Context, request CreateTableConstraint) (*TableConstraint, error)

	// Delete a table constraint.
	//
	// Deletes a table constraint.
	//
	// For the table constraint deletion to succeed, the user must satisfy both
	// of these conditions: - the user must have the **USE_CATALOG** privilege
	// on the table's parent catalog, the **USE_SCHEMA** privilege on the
	// table's parent schema, and be the owner of the table. - if __cascade__
	// argument is **true**, the user must have the following permissions on all
	// of the child tables: the **USE_CATALOG** privilege on the table's
	// catalog, the **USE_SCHEMA** privilege on the table's schema, and be the
	// owner of the table.
	Delete(ctx context.Context, request DeleteTableConstraintRequest) error
}

Primary key and foreign key constraints encode relationships between fields in tables.

Primary and foreign keys are informational only and are not enforced. Foreign keys must reference a primary key in another table. This primary key is the parent constraint of the foreign key and the table this primary key is on is the parent table of the foreign key. Similarly, the foreign key is the child constraint of its referenced primary key; the table of the foreign key is the child table of the primary key.

You can declare primary keys and foreign keys as part of the table specification during table creation. You can also add or drop constraints on existing tables.

type TableDependency

type TableDependency struct {
	// Full name of the dependent table, in the form of
	// __catalog_name__.__schema_name__.__table_name__.
	TableFullName string `json:"table_full_name"`
}

A table that is dependent on a SQL object.

type TableExistsResponse added in v0.30.0

type TableExistsResponse struct {
	// Whether the table exists or not.
	TableExists bool `json:"table_exists,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (TableExistsResponse) MarshalJSON added in v0.30.0

func (s TableExistsResponse) MarshalJSON() ([]byte, error)

func (*TableExistsResponse) UnmarshalJSON added in v0.30.0

func (s *TableExistsResponse) UnmarshalJSON(b []byte) error

type TableInfo

type TableInfo struct {
	// The AWS access point to use when accessing s3 for this external location.
	AccessPoint string `json:"access_point,omitempty"`
	// Indicates whether the principal is limited to retrieving metadata for the
	// associated object through the BROWSE privilege when include_browse is
	// enabled in the request.
	BrowseOnly bool `json:"browse_only,omitempty"`
	// Name of parent catalog.
	CatalogName string `json:"catalog_name,omitempty"`
	// The array of __ColumnInfo__ definitions of the table's columns.
	Columns []ColumnInfo `json:"columns,omitempty"`
	// User-provided free-form text description.
	Comment string `json:"comment,omitempty"`
	// Time at which this table was created, in epoch milliseconds.
	CreatedAt int64 `json:"created_at,omitempty"`
	// Username of table creator.
	CreatedBy string `json:"created_by,omitempty"`
	// Unique ID of the Data Access Configuration to use with the table data.
	DataAccessConfigurationId string `json:"data_access_configuration_id,omitempty"`
	// Data source format
	DataSourceFormat DataSourceFormat `json:"data_source_format,omitempty"`
	// Time at which this table was deleted, in epoch milliseconds. Field is
	// omitted if table is not deleted.
	DeletedAt int64 `json:"deleted_at,omitempty"`
	// Information pertaining to current state of the delta table.
	DeltaRuntimePropertiesKvpairs *DeltaRuntimePropertiesKvPairs `json:"delta_runtime_properties_kvpairs,omitempty"`

	EffectivePredictiveOptimizationFlag *EffectivePredictiveOptimizationFlag `json:"effective_predictive_optimization_flag,omitempty"`
	// Whether predictive optimization should be enabled for this object and
	// objects under it.
	EnablePredictiveOptimization EnablePredictiveOptimization `json:"enable_predictive_optimization,omitempty"`
	// Encryption options that apply to clients connecting to cloud storage.
	EncryptionDetails *EncryptionDetails `json:"encryption_details,omitempty"`
	// Full name of table, in form of
	// __catalog_name__.__schema_name__.__table_name__
	FullName string `json:"full_name,omitempty"`
	// Unique identifier of parent metastore.
	MetastoreId string `json:"metastore_id,omitempty"`
	// Name of table, relative to parent schema.
	Name string `json:"name,omitempty"`
	// Username of current owner of table.
	Owner string `json:"owner,omitempty"`
	// The pipeline ID of the table. Applicable for tables created by pipelines
	// (Materialized View, Streaming Table, etc.).
	PipelineId string `json:"pipeline_id,omitempty"`
	// A map of key-value properties attached to the securable.
	Properties map[string]string `json:"properties,omitempty"`

	RowFilter *TableRowFilter `json:"row_filter,omitempty"`
	// Name of parent schema relative to its parent catalog.
	SchemaName string `json:"schema_name,omitempty"`
	// List of schemes whose objects can be referenced without qualification.
	SqlPath string `json:"sql_path,omitempty"`
	// Name of the storage credential, when a storage credential is configured
	// for use with this table.
	StorageCredentialName string `json:"storage_credential_name,omitempty"`
	// Storage root URL for table (for **MANAGED**, **EXTERNAL** tables)
	StorageLocation string `json:"storage_location,omitempty"`
	// List of table constraints. Note: this field is not set in the output of
	// the __listTables__ API.
	TableConstraints []TableConstraint `json:"table_constraints,omitempty"`
	// The unique identifier of the table.
	TableId string `json:"table_id,omitempty"`

	TableType TableType `json:"table_type,omitempty"`
	// Time at which this table was last modified, in epoch milliseconds.
	UpdatedAt int64 `json:"updated_at,omitempty"`
	// Username of user who last modified the table.
	UpdatedBy string `json:"updated_by,omitempty"`
	// View definition SQL (when __table_type__ is **VIEW**,
	// **MATERIALIZED_VIEW**, or **STREAMING_TABLE**)
	ViewDefinition string `json:"view_definition,omitempty"`
	// View dependencies (when table_type == **VIEW** or **MATERIALIZED_VIEW**,
	// **STREAMING_TABLE**) - when DependencyList is None, the dependency is not
	// provided; - when DependencyList is an empty list, the dependency is
	// provided but is empty; - when DependencyList is not an empty list,
	// dependencies are provided and recorded.
	ViewDependencies *DependencyList `json:"view_dependencies,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (TableInfo) MarshalJSON added in v0.23.0

func (s TableInfo) MarshalJSON() ([]byte, error)

func (*TableInfo) UnmarshalJSON added in v0.23.0

func (s *TableInfo) UnmarshalJSON(b []byte) error

type TableRowFilter

type TableRowFilter struct {
	// The full name of the row filter SQL UDF.
	FunctionName string `json:"function_name"`
	// The list of table columns to be passed as input to the row filter
	// function. The column types should match the types of the filter function
	// arguments.
	InputColumnNames []string `json:"input_column_names"`
}

type TableSummary

type TableSummary struct {
	// The full name of the table.
	FullName string `json:"full_name,omitempty"`

	TableType TableType `json:"table_type,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (TableSummary) MarshalJSON added in v0.23.0

func (s TableSummary) MarshalJSON() ([]byte, error)

func (*TableSummary) UnmarshalJSON added in v0.23.0

func (s *TableSummary) UnmarshalJSON(b []byte) error

type TableType

type TableType string
const TableTypeExternal TableType = `EXTERNAL`
const TableTypeManaged TableType = `MANAGED`
const TableTypeMaterializedView TableType = `MATERIALIZED_VIEW`
const TableTypeStreamingTable TableType = `STREAMING_TABLE`
const TableTypeView TableType = `VIEW`

func (*TableType) Set

func (f *TableType) Set(v string) error

Set raw string value and validate it against allowed values

func (*TableType) String

func (f *TableType) String() string

String representation for fmt.Print

func (*TableType) Type

func (f *TableType) Type() string

Type always returns TableType to satisfy [pflag.Value] interface

type TablesAPI

type TablesAPI struct {
	// contains filtered or unexported fields
}

A table resides in the third layer of Unity Catalog’s three-level namespace. It contains rows of data. To create a table, users must have CREATE_TABLE and USE_SCHEMA permissions on the schema, and they must have the USE_CATALOG permission on its parent catalog. To query a table, users must have the SELECT permission on the table, and they must have the USE_CATALOG permission on its parent catalog and the USE_SCHEMA permission on its parent schema.

A table can be managed or external. From an API perspective, a __VIEW__ is a particular kind of table (rather than a managed or external table).

func NewTables

func NewTables(client *client.DatabricksClient) *TablesAPI

func (*TablesAPI) Delete

func (a *TablesAPI) Delete(ctx context.Context, request DeleteTableRequest) error

Delete a table.

Deletes a table from the specified parent catalog and schema. The caller must be the owner of the parent catalog, have the **USE_CATALOG** privilege on the parent catalog and be the owner of the parent schema, or be the owner of the table and have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

func (*TablesAPI) DeleteByFullName

func (a *TablesAPI) DeleteByFullName(ctx context.Context, fullName string) error

Delete a table.

Deletes a table from the specified parent catalog and schema. The caller must be the owner of the parent catalog, have the **USE_CATALOG** privilege on the parent catalog and be the owner of the parent schema, or be the owner of the table and have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

func (*TablesAPI) Exists added in v0.30.0

func (a *TablesAPI) Exists(ctx context.Context, request ExistsRequest) (*TableExistsResponse, error)

Check whether a table exists.

Checks whether a table exists in the metastore for a specific catalog and schema. The caller must satisfy one of the following requirements: * Be a metastore admin * Be the owner of the parent catalog * Be the owner of the parent schema and have the USE_CATALOG privilege on the parent catalog * Have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema, and either be the table owner or have the SELECT privilege on the table. * Have BROWSE privilege on the parent catalog * Have BROWSE privilege on the parent schema.

func (*TablesAPI) ExistsByFullName added in v0.30.0

func (a *TablesAPI) ExistsByFullName(ctx context.Context, fullName string) (*TableExistsResponse, error)

Check whether a table exists.

Checks whether a table exists in the metastore for a specific catalog and schema. The caller must satisfy one of the following requirements: * Be a metastore admin * Be the owner of the parent catalog * Be the owner of the parent schema and have the USE_CATALOG privilege on the parent catalog * Have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema, and either be the table owner or have the SELECT privilege on the table. * Have BROWSE privilege on the parent catalog * Have BROWSE privilege on the parent schema.
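
An editorial sketch of the by-full-name variant; the table name is a placeholder.

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

exists, err := w.Tables.ExistsByFullName(ctx, "main.default.my_table") // placeholder full name
if err != nil {
	panic(err)
}
logger.Infof(ctx, "table exists: %v", exists.TableExists)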

func (*TablesAPI) Get

func (a *TablesAPI) Get(ctx context.Context, request GetTableRequest) (*TableInfo, error)

Get a table.

Gets a table from the metastore for a specific catalog and schema. The caller must satisfy one of the following requirements: * Be a metastore admin * Be the owner of the parent catalog * Be the owner of the parent schema and have the USE_CATALOG privilege on the parent catalog * Have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema, and either be the table owner or have the SELECT privilege on the table.

Example (Tables)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

tableName := fmt.Sprintf("sdk-%x", time.Now().UnixNano())

createdCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdCatalog)

createdSchema, err := w.Schemas.Create(ctx, catalog.CreateSchema{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CatalogName: createdCatalog.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdSchema)

_, err = w.StatementExecution.ExecuteAndWait(ctx, sql.ExecuteStatementRequest{
	WarehouseId: os.Getenv("TEST_DEFAULT_WAREHOUSE_ID"),
	Catalog:     createdCatalog.Name,
	Schema:      createdSchema.Name,
	Statement:   fmt.Sprintf("CREATE TABLE %s AS SELECT 2+2 as four", tableName),
})
if err != nil {
	panic(err)
}

tableFullName := fmt.Sprintf("%s.%s.%s", createdCatalog.Name, createdSchema.Name, tableName)

createdTable, err := w.Tables.GetByFullName(ctx, tableFullName)
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdTable)

// cleanup: delete the table first, then its parent schema and catalog

err = w.Tables.DeleteByFullName(ctx, tableFullName)
if err != nil {
	panic(err)
}
err = w.Schemas.DeleteByFullName(ctx, createdSchema.FullName)
if err != nil {
	panic(err)
}
err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  createdCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

func (*TablesAPI) GetByFullName

func (a *TablesAPI) GetByFullName(ctx context.Context, fullName string) (*TableInfo, error)

Get a table.

Gets a table from the metastore for a specific catalog and schema. The caller must satisfy one of the following requirements: * Be a metastore admin * Be the owner of the parent catalog * Be the owner of the parent schema and have the USE_CATALOG privilege on the parent catalog * Have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema, and either be the table owner or have the SELECT privilege on the table.

func (*TablesAPI) GetByName

func (a *TablesAPI) GetByName(ctx context.Context, name string) (*TableInfo, error)

GetByName calls TablesAPI.TableInfoNameToTableIdMap and returns a single TableInfo.

Returns an error if there's more than one TableInfo with the same .Name.

Note: All TableInfo instances are loaded into memory before returning matching by name.

This method is generated by Databricks SDK Code Generator.

func (*TablesAPI) Impl

func (a *TablesAPI) Impl() TablesService

Impl returns low-level Tables API implementation Deprecated: use MockTablesInterface instead.

func (*TablesAPI) List added in v0.24.0

func (a *TablesAPI) List(ctx context.Context, request ListTablesRequest) listing.Iterator[TableInfo]

List tables.

Gets an array of all tables for the current metastore under the parent catalog and schema. The caller must be a metastore admin or an owner of (or have the **SELECT** privilege on) the table. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema. There is no guarantee of a specific ordering of the elements in the array.

This method is generated by Databricks SDK Code Generator.

func (*TablesAPI) ListAll

func (a *TablesAPI) ListAll(ctx context.Context, request ListTablesRequest) ([]TableInfo, error)

List tables.

Gets an array of all tables for the current metastore under the parent catalog and schema. The caller must be a metastore admin or an owner of (or have the **SELECT** privilege on) the table. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema. There is no guarantee of a specific ordering of the elements in the array.

This method is generated by Databricks SDK Code Generator.

Example (Tables)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

createdCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdCatalog)

createdSchema, err := w.Schemas.Create(ctx, catalog.CreateSchema{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CatalogName: createdCatalog.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdSchema)

allTables, err := w.Tables.ListAll(ctx, catalog.ListTablesRequest{
	CatalogName: createdCatalog.Name,
	SchemaName:  createdSchema.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", allTables)

// cleanup

err = w.Schemas.DeleteByFullName(ctx, createdSchema.FullName)
if err != nil {
	panic(err)
}
err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  createdCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

func (*TablesAPI) ListSummaries

func (a *TablesAPI) ListSummaries(ctx context.Context, request ListSummariesRequest) listing.Iterator[TableSummary]

List table summaries.

Gets an array of summaries for tables for a schema and catalog within the metastore. The table summaries returned are either:

* summaries for tables (within the current metastore and parent catalog and schema), when the user is a metastore admin, or: * summaries for tables and schemas (within the current metastore and parent catalog) for which the user has ownership or the **SELECT** privilege on the table and ownership or **USE_SCHEMA** privilege on the schema, provided that the user also has ownership or the **USE_CATALOG** privilege on the parent catalog.

There is no guarantee of a specific ordering of the elements in the array.

This method is generated by Databricks SDK Code Generator.

Example (Tables)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

createdCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdCatalog)

createdSchema, err := w.Schemas.Create(ctx, catalog.CreateSchema{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CatalogName: createdCatalog.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdSchema)

summaries, err := w.Tables.ListSummariesAll(ctx, catalog.ListSummariesRequest{
	CatalogName:       createdCatalog.Name,
	SchemaNamePattern: createdSchema.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", summaries)

// cleanup

err = w.Schemas.DeleteByFullName(ctx, createdSchema.FullName)
if err != nil {
	panic(err)
}
err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  createdCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

func (*TablesAPI) ListSummariesAll added in v0.10.0

func (a *TablesAPI) ListSummariesAll(ctx context.Context, request ListSummariesRequest) ([]TableSummary, error)

List table summaries.

Gets an array of summaries for tables for a schema and catalog within the metastore. The table summaries returned are either:

* summaries for tables (within the current metastore and parent catalog and schema), when the user is a metastore admin, or: * summaries for tables and schemas (within the current metastore and parent catalog) for which the user has ownership or the **SELECT** privilege on the table and ownership or **USE_SCHEMA** privilege on the schema, provided that the user also has ownership or the **USE_CATALOG** privilege on the parent catalog.

There is no guarantee of a specific ordering of the elements in the array.

This method is generated by Databricks SDK Code Generator.

func (*TablesAPI) TableInfoNameToTableIdMap

func (a *TablesAPI) TableInfoNameToTableIdMap(ctx context.Context, request ListTablesRequest) (map[string]string, error)

TableInfoNameToTableIdMap calls TablesAPI.ListAll and creates a map of results with TableInfo.Name as key and TableInfo.TableId as value.

Returns an error if there's more than one TableInfo with the same .Name.

Note: All TableInfo instances are loaded into memory before creating a map.

This method is generated by Databricks SDK Code Generator.
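
A minimal usage sketch, assuming ctx, w (*databricks.WorkspaceClient) and logger are set up as in the examples above; the catalog and schema names are illustrative:

nameToId, err := w.Tables.TableInfoNameToTableIdMap(ctx, catalog.ListTablesRequest{
	CatalogName: "main",    // illustrative catalog name
	SchemaName:  "default", // illustrative schema name
})
if err != nil {
	panic(err)
}
for name, id := range nameToId {
	logger.Infof(ctx, "table %s has id %s", name, id)
}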

func (*TablesAPI) Update added in v0.13.0

func (a *TablesAPI) Update(ctx context.Context, request UpdateTableRequest) error

Update a table owner.

Change the owner of the table. The caller must be the owner of the parent catalog, have the **USE_CATALOG** privilege on the parent catalog and be the owner of the parent schema, or be the owner of the table and have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.
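
A minimal sketch of an ownership change, assuming ctx and w (*databricks.WorkspaceClient) are set up as in the examples above; the table name and new owner principal are illustrative:

err := w.Tables.Update(ctx, catalog.UpdateTableRequest{
	FullName: "main.default.my_table", // illustrative table
	Owner:    "data-engineers",        // illustrative new owner principal
})
if err != nil {
	panic(err)
}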

func (*TablesAPI) WithImpl

func (a *TablesAPI) WithImpl(impl TablesService) TablesInterface

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockTablesInterface instead.

type TablesInterface added in v0.29.0

type TablesInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockTablesInterface instead.
	WithImpl(impl TablesService) TablesInterface

	// Impl returns low-level Tables API implementation
	// Deprecated: use MockTablesInterface instead.
	Impl() TablesService

	// Delete a table.
	//
	// Deletes a table from the specified parent catalog and schema. The caller must
	// be the owner of the parent catalog, have the **USE_CATALOG** privilege on the
	// parent catalog and be the owner of the parent schema, or be the owner of the
	// table and have the **USE_CATALOG** privilege on the parent catalog and the
	// **USE_SCHEMA** privilege on the parent schema.
	Delete(ctx context.Context, request DeleteTableRequest) error

	// Delete a table.
	//
	// Deletes a table from the specified parent catalog and schema. The caller must
	// be the owner of the parent catalog, have the **USE_CATALOG** privilege on the
	// parent catalog and be the owner of the parent schema, or be the owner of the
	// table and have the **USE_CATALOG** privilege on the parent catalog and the
	// **USE_SCHEMA** privilege on the parent schema.
	DeleteByFullName(ctx context.Context, fullName string) error

	// Get boolean reflecting if table exists.
	//
	// Gets if a table exists in the metastore for a specific catalog and schema.
	// The caller must satisfy one of the following requirements: * Be a metastore
	// admin * Be the owner of the parent catalog * Be the owner of the parent
	// schema and have the USE_CATALOG privilege on the parent catalog * Have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema, and either be the table owner or have the
	// SELECT privilege on the table. * Have BROWSE privilege on the parent catalog
	// * Have BROWSE privilege on the parent schema.
	Exists(ctx context.Context, request ExistsRequest) (*TableExistsResponse, error)

	// Get boolean reflecting if table exists.
	//
	// Gets if a table exists in the metastore for a specific catalog and schema.
	// The caller must satisfy one of the following requirements: * Be a metastore
	// admin * Be the owner of the parent catalog * Be the owner of the parent
	// schema and have the USE_CATALOG privilege on the parent catalog * Have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema, and either be the table owner or have the
	// SELECT privilege on the table. * Have BROWSE privilege on the parent catalog
	// * Have BROWSE privilege on the parent schema.
	ExistsByFullName(ctx context.Context, fullName string) (*TableExistsResponse, error)

	// Get a table.
	//
	// Gets a table from the metastore for a specific catalog and schema. The caller
	// must satisfy one of the following requirements: * Be a metastore admin * Be
	// the owner of the parent catalog * Be the owner of the parent schema and have
	// the USE_CATALOG privilege on the parent catalog * Have the **USE_CATALOG**
	// privilege on the parent catalog and the **USE_SCHEMA** privilege on the
	// parent schema, and either be the table owner or have the SELECT privilege on
	// the table.
	Get(ctx context.Context, request GetTableRequest) (*TableInfo, error)

	// Get a table.
	//
	// Gets a table from the metastore for a specific catalog and schema. The caller
	// must satisfy one of the following requirements: * Be a metastore admin * Be
	// the owner of the parent catalog * Be the owner of the parent schema and have
	// the USE_CATALOG privilege on the parent catalog * Have the **USE_CATALOG**
	// privilege on the parent catalog and the **USE_SCHEMA** privilege on the
	// parent schema, and either be the table owner or have the SELECT privilege on
	// the table.
	GetByFullName(ctx context.Context, fullName string) (*TableInfo, error)

	// List tables.
	//
	// Gets an array of all tables for the current metastore under the parent
	// catalog and schema. The caller must be a metastore admin or an owner of (or
	// have the **SELECT** privilege on) the table. For the latter case, the caller
	// must also be the owner or have the **USE_CATALOG** privilege on the parent
	// catalog and the **USE_SCHEMA** privilege on the parent schema. There is no
	// guarantee of a specific ordering of the elements in the array.
	//
	// This method is generated by Databricks SDK Code Generator.
	List(ctx context.Context, request ListTablesRequest) listing.Iterator[TableInfo]

	// List tables.
	//
	// Gets an array of all tables for the current metastore under the parent
	// catalog and schema. The caller must be a metastore admin or an owner of (or
	// have the **SELECT** privilege on) the table. For the latter case, the caller
	// must also be the owner or have the **USE_CATALOG** privilege on the parent
	// catalog and the **USE_SCHEMA** privilege on the parent schema. There is no
	// guarantee of a specific ordering of the elements in the array.
	//
	// This method is generated by Databricks SDK Code Generator.
	ListAll(ctx context.Context, request ListTablesRequest) ([]TableInfo, error)

	// TableInfoNameToTableIdMap calls [TablesAPI.ListAll] and creates a map of results with [TableInfo].Name as key and [TableInfo].TableId as value.
	//
	// Returns an error if there's more than one [TableInfo] with the same .Name.
	//
	// Note: All [TableInfo] instances are loaded into memory before creating a map.
	//
	// This method is generated by Databricks SDK Code Generator.
	TableInfoNameToTableIdMap(ctx context.Context, request ListTablesRequest) (map[string]string, error)

	// GetByName calls [TablesAPI.TableInfoNameToTableIdMap] and returns a single [TableInfo].
	//
	// Returns an error if there's more than one [TableInfo] with the same .Name.
	//
	// Note: All [TableInfo] instances are loaded into memory before returning matching by name.
	//
	// This method is generated by Databricks SDK Code Generator.
	GetByName(ctx context.Context, name string) (*TableInfo, error)

	// List table summaries.
	//
	// Gets an array of summaries for tables for a schema and catalog within the
	// metastore. The table summaries returned are either:
	//
	// * summaries for tables (within the current metastore and parent catalog and
	// schema), when the user is a metastore admin, or: * summaries for tables and
	// schemas (within the current metastore and parent catalog) for which the user
	// has ownership or the **SELECT** privilege on the table and ownership or
	// **USE_SCHEMA** privilege on the schema, provided that the user also has
	// ownership or the **USE_CATALOG** privilege on the parent catalog.
	//
	// There is no guarantee of a specific ordering of the elements in the array.
	//
	// This method is generated by Databricks SDK Code Generator.
	ListSummaries(ctx context.Context, request ListSummariesRequest) listing.Iterator[TableSummary]

	// List table summaries.
	//
	// Gets an array of summaries for tables for a schema and catalog within the
	// metastore. The table summaries returned are either:
	//
	// * summaries for tables (within the current metastore and parent catalog and
	// schema), when the user is a metastore admin, or: * summaries for tables and
	// schemas (within the current metastore and parent catalog) for which the user
	// has ownership or the **SELECT** privilege on the table and ownership or
	// **USE_SCHEMA** privilege on the schema, provided that the user also has
	// ownership or the **USE_CATALOG** privilege on the parent catalog.
	//
	// There is no guarantee of a specific ordering of the elements in the array.
	//
	// This method is generated by Databricks SDK Code Generator.
	ListSummariesAll(ctx context.Context, request ListSummariesRequest) ([]TableSummary, error)

	// Update a table owner.
	//
	// Change the owner of the table. The caller must be the owner of the parent
	// catalog, have the **USE_CATALOG** privilege on the parent catalog and be the
	// owner of the parent schema, or be the owner of the table and have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	Update(ctx context.Context, request UpdateTableRequest) error
}

type TablesService

type TablesService interface {

	// Delete a table.
	//
	// Deletes a table from the specified parent catalog and schema. The caller
	// must be the owner of the parent catalog, have the **USE_CATALOG**
	// privilege on the parent catalog and be the owner of the parent schema, or
	// be the owner of the table and have the **USE_CATALOG** privilege on the
	// parent catalog and the **USE_SCHEMA** privilege on the parent schema.
	Delete(ctx context.Context, request DeleteTableRequest) error

	// Get boolean reflecting if table exists.
	//
	// Gets if a table exists in the metastore for a specific catalog and
	// schema. The caller must satisfy one of the following requirements: * Be a
	// metastore admin * Be the owner of the parent catalog * Be the owner of
	// the parent schema and have the USE_CATALOG privilege on the parent
	// catalog * Have the **USE_CATALOG** privilege on the parent catalog and
	// the **USE_SCHEMA** privilege on the parent schema, and either be the
	// table owner or have the SELECT privilege on the table. * Have BROWSE
	// privilege on the parent catalog * Have BROWSE privilege on the parent
	// schema.
	Exists(ctx context.Context, request ExistsRequest) (*TableExistsResponse, error)

	// Get a table.
	//
	// Gets a table from the metastore for a specific catalog and schema. The
	// caller must satisfy one of the following requirements: * Be a metastore
	// admin * Be the owner of the parent catalog * Be the owner of the parent
	// schema and have the USE_CATALOG privilege on the parent catalog * Have
	// the **USE_CATALOG** privilege on the parent catalog and the
	// **USE_SCHEMA** privilege on the parent schema, and either be the table
	// owner or have the SELECT privilege on the table.
	Get(ctx context.Context, request GetTableRequest) (*TableInfo, error)

	// List tables.
	//
	// Gets an array of all tables for the current metastore under the parent
	// catalog and schema. The caller must be a metastore admin or an owner of
	// (or have the **SELECT** privilege on) the table. For the latter case, the
	// caller must also be the owner or have the **USE_CATALOG** privilege on
	// the parent catalog and the **USE_SCHEMA** privilege on the parent schema.
	// There is no guarantee of a specific ordering of the elements in the
	// array.
	//
	// Use ListAll() to get all TableInfo instances, which will iterate over every result page.
	List(ctx context.Context, request ListTablesRequest) (*ListTablesResponse, error)

	// List table summaries.
	//
	// Gets an array of summaries for tables for a schema and catalog within the
	// metastore. The table summaries returned are either:
	//
	// * summaries for tables (within the current metastore and parent catalog
	// and schema), when the user is a metastore admin, or: * summaries for
	// tables and schemas (within the current metastore and parent catalog) for
	// which the user has ownership or the **SELECT** privilege on the table and
	// ownership or **USE_SCHEMA** privilege on the schema, provided that the
	// user also has ownership or the **USE_CATALOG** privilege on the parent
	// catalog.
	//
	// There is no guarantee of a specific ordering of the elements in the
	// array.
	//
	// Use ListSummariesAll() to get all TableSummary instances, which will iterate over every result page.
	ListSummaries(ctx context.Context, request ListSummariesRequest) (*ListTableSummariesResponse, error)

	// Update a table owner.
	//
	// Change the owner of the table. The caller must be the owner of the parent
	// catalog, have the **USE_CATALOG** privilege on the parent catalog and be
	// the owner of the parent schema, or be the owner of the table and have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	Update(ctx context.Context, request UpdateTableRequest) error
}

A table resides in the third layer of Unity Catalog’s three-level namespace. It contains rows of data. To create a table, users must have CREATE_TABLE and USE_SCHEMA permissions on the schema, and they must have the USE_CATALOG permission on its parent catalog. To query a table, users must have the SELECT permission on the table, and they must have the USE_CATALOG permission on its parent catalog and the USE_SCHEMA permission on its parent schema.

A table can be managed or external. From an API perspective, a __VIEW__ is a particular kind of table (rather than a managed or external table).

type TriggeredUpdateStatus added in v0.33.0

type TriggeredUpdateStatus struct {
	// The last source table Delta version that was synced to the online table.
	// Note that this Delta version may not be completely synced to the online
	// table yet.
	LastProcessedCommitVersion int64 `json:"last_processed_commit_version,omitempty"`
	// The timestamp of the last time any data was synchronized from the source
	// table to the online table.
	Timestamp string `json:"timestamp,omitempty"`
	// Progress of the active data synchronization pipeline.
	TriggeredUpdateProgress *PipelineProgress `json:"triggered_update_progress,omitempty"`

	ForceSendFields []string `json:"-"`
}

Detailed status of an online table. Shown if the online table is in the ONLINE_TRIGGERED_UPDATE or the ONLINE_NO_PENDING_UPDATE state.

func (TriggeredUpdateStatus) MarshalJSON added in v0.33.0

func (s TriggeredUpdateStatus) MarshalJSON() ([]byte, error)

func (*TriggeredUpdateStatus) UnmarshalJSON added in v0.33.0

func (s *TriggeredUpdateStatus) UnmarshalJSON(b []byte) error

type UnassignRequest

type UnassignRequest struct {
	// Query for the ID of the metastore to delete.
	MetastoreId string `json:"-" url:"metastore_id"`
	// A workspace ID.
	WorkspaceId int64 `json:"-" url:"-"`
}

Delete an assignment

type UnassignResponse added in v0.34.0

type UnassignResponse struct {
}

type UpdateAssignmentResponse added in v0.34.0

type UpdateAssignmentResponse struct {
}

type UpdateCatalog

type UpdateCatalog struct {
	// User-provided free-form text description.
	Comment string `json:"comment,omitempty"`
	// Whether predictive optimization should be enabled for this object and
	// objects under it.
	EnablePredictiveOptimization EnablePredictiveOptimization `json:"enable_predictive_optimization,omitempty"`
	// Whether the current securable is accessible from all workspaces or a
	// specific set of workspaces.
	IsolationMode IsolationMode `json:"isolation_mode,omitempty"`
	// The name of the catalog.
	Name string `json:"-" url:"-"`
	// New name for the catalog.
	NewName string `json:"new_name,omitempty"`
	// Username of current owner of catalog.
	Owner string `json:"owner,omitempty"`
	// A map of key-value properties attached to the securable.
	Properties map[string]string `json:"properties,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (UpdateCatalog) MarshalJSON added in v0.23.0

func (s UpdateCatalog) MarshalJSON() ([]byte, error)

func (*UpdateCatalog) UnmarshalJSON added in v0.23.0

func (s *UpdateCatalog) UnmarshalJSON(b []byte) error
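
As a usage sketch (not a verbatim example from the test suite), assuming ctx, w (*databricks.WorkspaceClient) and logger as in the examples above and an illustrative catalog name, this request type is passed to w.Catalogs.Update:

updatedCatalog, err := w.Catalogs.Update(ctx, catalog.UpdateCatalog{
	Name:    "my_catalog", // illustrative catalog name
	Comment: "updated via SDK",
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", updatedCatalog)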

type UpdateConnection added in v0.10.0

type UpdateConnection struct {
	// Name of the connection.
	Name string `json:"-" url:"-"`
	// New name for the connection.
	NewName string `json:"new_name,omitempty"`
	// A map of key-value properties attached to the securable.
	Options map[string]string `json:"options"`
	// Username of current owner of the connection.
	Owner string `json:"owner,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (UpdateConnection) MarshalJSON added in v0.23.0

func (s UpdateConnection) MarshalJSON() ([]byte, error)

func (*UpdateConnection) UnmarshalJSON added in v0.23.0

func (s *UpdateConnection) UnmarshalJSON(b []byte) error

type UpdateExternalLocation

type UpdateExternalLocation struct {
	// The AWS access point to use when accessing S3 for this external location.
	AccessPoint string `json:"access_point,omitempty"`
	// User-provided free-form text description.
	Comment string `json:"comment,omitempty"`
	// Name of the storage credential used with this location.
	CredentialName string `json:"credential_name,omitempty"`
	// Encryption options that apply to clients connecting to cloud storage.
	EncryptionDetails *EncryptionDetails `json:"encryption_details,omitempty"`
	// Force update even if changing url invalidates dependent external tables
	// or mounts.
	Force bool `json:"force,omitempty"`
	// Name of the external location.
	Name string `json:"-" url:"-"`
	// New name for the external location.
	NewName string `json:"new_name,omitempty"`
	// The owner of the external location.
	Owner string `json:"owner,omitempty"`
	// Indicates whether the external location is read-only.
	ReadOnly bool `json:"read_only,omitempty"`
	// Skips validation of the storage credential associated with the external
	// location.
	SkipValidation bool `json:"skip_validation,omitempty"`
	// Path URL of the external location.
	Url string `json:"url,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (UpdateExternalLocation) MarshalJSON added in v0.23.0

func (s UpdateExternalLocation) MarshalJSON() ([]byte, error)

func (*UpdateExternalLocation) UnmarshalJSON added in v0.23.0

func (s *UpdateExternalLocation) UnmarshalJSON(b []byte) error
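
A usage sketch, assuming ctx, w and logger as in the examples above and an illustrative external location name; the request is passed to w.ExternalLocations.Update:

updatedLocation, err := w.ExternalLocations.Update(ctx, catalog.UpdateExternalLocation{
	Name:    "my_external_location", // illustrative external location name
	Comment: "updated via SDK",
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", updatedLocation)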

type UpdateFunction

type UpdateFunction struct {
	// The fully-qualified name of the function (of the form
	// __catalog_name__.__schema_name__.__function_name__).
	Name string `json:"-" url:"-"`
	// Username of current owner of function.
	Owner string `json:"owner,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (UpdateFunction) MarshalJSON added in v0.23.0

func (s UpdateFunction) MarshalJSON() ([]byte, error)

func (*UpdateFunction) UnmarshalJSON added in v0.23.0

func (s *UpdateFunction) UnmarshalJSON(b []byte) error

type UpdateMetastore

type UpdateMetastore struct {
	// The organization name of a Delta Sharing entity, to be used in
	// Databricks-to-Databricks Delta Sharing as the official name.
	DeltaSharingOrganizationName string `json:"delta_sharing_organization_name,omitempty"`
	// The lifetime of delta sharing recipient token in seconds.
	DeltaSharingRecipientTokenLifetimeInSeconds int64 `json:"delta_sharing_recipient_token_lifetime_in_seconds,omitempty"`
	// The scope of Delta Sharing enabled for the metastore.
	DeltaSharingScope UpdateMetastoreDeltaSharingScope `json:"delta_sharing_scope,omitempty"`
	// Unique ID of the metastore.
	Id string `json:"-" url:"-"`
	// New name for the metastore.
	NewName string `json:"new_name,omitempty"`
	// The owner of the metastore.
	Owner string `json:"owner,omitempty"`
	// Privilege model version of the metastore, of the form `major.minor`
	// (e.g., `1.0`).
	PrivilegeModelVersion string `json:"privilege_model_version,omitempty"`
	// UUID of storage credential to access the metastore storage_root.
	StorageRootCredentialId string `json:"storage_root_credential_id,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (UpdateMetastore) MarshalJSON added in v0.23.0

func (s UpdateMetastore) MarshalJSON() ([]byte, error)

func (*UpdateMetastore) UnmarshalJSON added in v0.23.0

func (s *UpdateMetastore) UnmarshalJSON(b []byte) error
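
A usage sketch, assuming ctx, w and logger as in the examples above; the metastore ID is a placeholder, and the request is passed to w.Metastores.Update:

updatedMetastore, err := w.Metastores.Update(ctx, catalog.UpdateMetastore{
	Id:      "<metastore-id>", // placeholder metastore ID
	NewName: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", updatedMetastore)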

type UpdateMetastoreAssignment

type UpdateMetastoreAssignment struct {
	// The name of the default catalog for the metastore.
	DefaultCatalogName string `json:"default_catalog_name,omitempty"`
	// The unique ID of the metastore.
	MetastoreId string `json:"metastore_id,omitempty"`
	// A workspace ID.
	WorkspaceId int64 `json:"-" url:"-"`

	ForceSendFields []string `json:"-"`
}

func (UpdateMetastoreAssignment) MarshalJSON added in v0.23.0

func (s UpdateMetastoreAssignment) MarshalJSON() ([]byte, error)

func (*UpdateMetastoreAssignment) UnmarshalJSON added in v0.23.0

func (s *UpdateMetastoreAssignment) UnmarshalJSON(b []byte) error

type UpdateMetastoreDeltaSharingScope

type UpdateMetastoreDeltaSharingScope string

The scope of Delta Sharing enabled for the metastore.

const UpdateMetastoreDeltaSharingScopeInternal UpdateMetastoreDeltaSharingScope = `INTERNAL`
const UpdateMetastoreDeltaSharingScopeInternalAndExternal UpdateMetastoreDeltaSharingScope = `INTERNAL_AND_EXTERNAL`

func (*UpdateMetastoreDeltaSharingScope) Set

func (f *UpdateMetastoreDeltaSharingScope) Set(v string) error

Set raw string value and validate it against allowed values

func (*UpdateMetastoreDeltaSharingScope) String

func (f *UpdateMetastoreDeltaSharingScope) String() string

String representation for fmt.Print

func (*UpdateMetastoreDeltaSharingScope) Type

func (f *UpdateMetastoreDeltaSharingScope) Type() string

Type always returns UpdateMetastoreDeltaSharingScope to satisfy [pflag.Value] interface

type UpdateModelVersionRequest added in v0.18.0

type UpdateModelVersionRequest struct {
	// The comment attached to the model version
	Comment string `json:"comment,omitempty"`
	// The three-level (fully qualified) name of the model version
	FullName string `json:"-" url:"-"`
	// The integer version number of the model version
	Version int `json:"-" url:"-"`

	ForceSendFields []string `json:"-"`
}

func (UpdateModelVersionRequest) MarshalJSON added in v0.23.0

func (s UpdateModelVersionRequest) MarshalJSON() ([]byte, error)

func (*UpdateModelVersionRequest) UnmarshalJSON added in v0.23.0

func (s *UpdateModelVersionRequest) UnmarshalJSON(b []byte) error
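
A usage sketch, assuming ctx, w and logger as in the examples above and an illustrative registered model with a version 1; the request is passed to w.ModelVersions.Update:

updatedVersion, err := w.ModelVersions.Update(ctx, catalog.UpdateModelVersionRequest{
	FullName: "main.default.my_model", // illustrative registered model
	Version:  1,
	Comment:  "updated via SDK",
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", updatedVersion)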

type UpdateMonitor added in v0.30.0

type UpdateMonitor struct {
	// Name of the baseline table from which drift metrics are computed from.
	// Columns in the monitored table should also be present in the baseline
	// table.
	BaselineTableName string `json:"baseline_table_name,omitempty"`
	// Custom metrics to compute on the monitored table. These can be aggregate
	// metrics, derived metrics (from already computed aggregate metrics), or
	// drift metrics (comparing metrics across time windows).
	CustomMetrics []MonitorMetric `json:"custom_metrics,omitempty"`
	// Id of dashboard that visualizes the computed metrics. This can be empty
	// if the monitor is in PENDING state.
	DashboardId string `json:"dashboard_id,omitempty"`
	// The data classification config for the monitor.
	DataClassificationConfig *MonitorDataClassificationConfig `json:"data_classification_config,omitempty"`
	// Configuration for monitoring inference logs.
	InferenceLog *MonitorInferenceLog `json:"inference_log,omitempty"`
	// The notification settings for the monitor.
	Notifications *MonitorNotifications `json:"notifications,omitempty"`
	// Schema where output metric tables are created.
	OutputSchemaName string `json:"output_schema_name"`
	// The schedule for automatically updating and refreshing metric tables.
	Schedule *MonitorCronSchedule `json:"schedule,omitempty"`
	// List of column expressions to slice data with for targeted analysis. The
	// data is grouped by each expression independently, resulting in a separate
	// slice for each predicate and its complements. For high-cardinality
	// columns, only the top 100 unique values by frequency will generate
	// slices.
	SlicingExprs []string `json:"slicing_exprs,omitempty"`
	// Configuration for monitoring snapshot tables.
	Snapshot *MonitorSnapshot `json:"snapshot,omitempty"`
	// Full name of the table.
	TableName string `json:"-" url:"-"`
	// Configuration for monitoring time series tables.
	TimeSeries *MonitorTimeSeries `json:"time_series,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (UpdateMonitor) MarshalJSON added in v0.30.0

func (s UpdateMonitor) MarshalJSON() ([]byte, error)

func (*UpdateMonitor) UnmarshalJSON added in v0.30.0

func (s *UpdateMonitor) UnmarshalJSON(b []byte) error

type UpdatePermissions

type UpdatePermissions struct {
	// Array of permissions change objects.
	Changes []PermissionsChange `json:"changes,omitempty"`
	// Full name of securable.
	FullName string `json:"-" url:"-"`
	// Type of securable.
	SecurableType SecurableType `json:"-" url:"-"`
}
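
A usage sketch, assuming ctx, w and logger as in the examples above, an illustrative table and group name, and the Privilege and SecurableType constants defined in this package; the request is passed to w.Grants.Update:

permissions, err := w.Grants.Update(ctx, catalog.UpdatePermissions{
	SecurableType: catalog.SecurableTypeTable,
	FullName:      "main.default.my_table", // illustrative securable
	Changes: []catalog.PermissionsChange{
		{
			Principal: "data-engineers", // illustrative group
			Add:       []catalog.Privilege{catalog.PrivilegeSelect},
		},
	},
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", permissions)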

type UpdateRegisteredModelRequest added in v0.18.0

type UpdateRegisteredModelRequest struct {
	// The comment attached to the registered model
	Comment string `json:"comment,omitempty"`
	// The three-level (fully qualified) name of the registered model
	FullName string `json:"-" url:"-"`
	// New name for the registered model.
	NewName string `json:"new_name,omitempty"`
	// The identifier of the user who owns the registered model
	Owner string `json:"owner,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (UpdateRegisteredModelRequest) MarshalJSON added in v0.23.0

func (s UpdateRegisteredModelRequest) MarshalJSON() ([]byte, error)

func (*UpdateRegisteredModelRequest) UnmarshalJSON added in v0.23.0

func (s *UpdateRegisteredModelRequest) UnmarshalJSON(b []byte) error
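
A usage sketch, assuming ctx, w and logger as in the examples above and an illustrative registered model name; the request is passed to w.RegisteredModels.Update:

updatedModel, err := w.RegisteredModels.Update(ctx, catalog.UpdateRegisteredModelRequest{
	FullName: "main.default.my_model", // illustrative registered model
	Comment:  "updated via SDK",
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", updatedModel)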

type UpdateResponse added in v0.34.0

type UpdateResponse struct {
}

type UpdateSchema

type UpdateSchema struct {
	// User-provided free-form text description.
	Comment string `json:"comment,omitempty"`
	// Whether predictive optimization should be enabled for this object and
	// objects under it.
	EnablePredictiveOptimization EnablePredictiveOptimization `json:"enable_predictive_optimization,omitempty"`
	// Full name of the schema.
	FullName string `json:"-" url:"-"`
	// New name for the schema.
	NewName string `json:"new_name,omitempty"`
	// Username of current owner of schema.
	Owner string `json:"owner,omitempty"`
	// A map of key-value properties attached to the securable.
	Properties map[string]string `json:"properties,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (UpdateSchema) MarshalJSON added in v0.23.0

func (s UpdateSchema) MarshalJSON() ([]byte, error)

func (*UpdateSchema) UnmarshalJSON added in v0.23.0

func (s *UpdateSchema) UnmarshalJSON(b []byte) error
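
A usage sketch, assuming ctx, w and logger as in the examples above and an illustrative schema full name; the request is passed to w.Schemas.Update:

updatedSchema, err := w.Schemas.Update(ctx, catalog.UpdateSchema{
	FullName: "main.default", // illustrative schema full name
	Comment:  "updated via SDK",
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", updatedSchema)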

type UpdateStorageCredential

type UpdateStorageCredential struct {
	// The AWS IAM role configuration.
	AwsIamRole *AwsIamRoleRequest `json:"aws_iam_role,omitempty"`
	// The Azure managed identity configuration.
	AzureManagedIdentity *AzureManagedIdentityResponse `json:"azure_managed_identity,omitempty"`
	// The Azure service principal configuration.
	AzureServicePrincipal *AzureServicePrincipal `json:"azure_service_principal,omitempty"`
	// The Cloudflare API token configuration.
	CloudflareApiToken *CloudflareApiToken `json:"cloudflare_api_token,omitempty"`
	// Comment associated with the credential.
	Comment string `json:"comment,omitempty"`
	// The <Databricks> managed GCP service account configuration.
	DatabricksGcpServiceAccount *DatabricksGcpServiceAccountRequest `json:"databricks_gcp_service_account,omitempty"`
	// Force update even if there are dependent external locations or external
	// tables.
	Force bool `json:"force,omitempty"`
	// Name of the storage credential.
	Name string `json:"-" url:"-"`
	// New name for the storage credential.
	NewName string `json:"new_name,omitempty"`
	// Username of current owner of credential.
	Owner string `json:"owner,omitempty"`
	// Whether the storage credential is only usable for read operations.
	ReadOnly bool `json:"read_only,omitempty"`
	// Supplying true to this argument skips validation of the updated
	// credential.
	SkipValidation bool `json:"skip_validation,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (UpdateStorageCredential) MarshalJSON added in v0.23.0

func (s UpdateStorageCredential) MarshalJSON() ([]byte, error)

func (*UpdateStorageCredential) UnmarshalJSON added in v0.23.0

func (s *UpdateStorageCredential) UnmarshalJSON(b []byte) error
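
A usage sketch, assuming ctx, w and logger as in the examples above and an illustrative credential name; the request is passed to w.StorageCredentials.Update:

updatedCredential, err := w.StorageCredentials.Update(ctx, catalog.UpdateStorageCredential{
	Name:    "my_credential", // illustrative credential name
	Comment: "updated via SDK",
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", updatedCredential)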

type UpdateTableRequest added in v0.13.0

type UpdateTableRequest struct {
	// Full name of the table.
	FullName string `json:"-" url:"-"`

	Owner string `json:"owner,omitempty"`

	ForceSendFields []string `json:"-"`
}

Update a table owner.

func (UpdateTableRequest) MarshalJSON added in v0.23.0

func (s UpdateTableRequest) MarshalJSON() ([]byte, error)

func (*UpdateTableRequest) UnmarshalJSON added in v0.23.0

func (s *UpdateTableRequest) UnmarshalJSON(b []byte) error

type UpdateVolumeRequestContent

type UpdateVolumeRequestContent struct {
	// The comment attached to the volume
	Comment string `json:"comment,omitempty"`
	// The three-level (fully qualified) name of the volume
	Name string `json:"-" url:"-"`
	// New name for the volume.
	NewName string `json:"new_name,omitempty"`
	// The identifier of the user who owns the volume
	Owner string `json:"owner,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (UpdateVolumeRequestContent) MarshalJSON added in v0.23.0

func (s UpdateVolumeRequestContent) MarshalJSON() ([]byte, error)

func (*UpdateVolumeRequestContent) UnmarshalJSON added in v0.23.0

func (s *UpdateVolumeRequestContent) UnmarshalJSON(b []byte) error

type UpdateWorkspaceBindings added in v0.9.0

type UpdateWorkspaceBindings struct {
	// A list of workspace IDs.
	AssignWorkspaces []int64 `json:"assign_workspaces,omitempty"`
	// The name of the catalog.
	Name string `json:"-" url:"-"`
	// A list of workspace IDs.
	UnassignWorkspaces []int64 `json:"unassign_workspaces,omitempty"`
}

type UpdateWorkspaceBindingsParameters added in v0.23.0

type UpdateWorkspaceBindingsParameters struct {
	// List of workspace bindings
	Add []WorkspaceBinding `json:"add,omitempty"`
	// List of workspace bindings
	Remove []WorkspaceBinding `json:"remove,omitempty"`
	// The name of the securable.
	SecurableName string `json:"-" url:"-"`
	// The type of the securable.
	SecurableType string `json:"-" url:"-"`
}

type ValidateStorageCredential

type ValidateStorageCredential struct {
	// The AWS IAM role configuration.
	AwsIamRole *AwsIamRoleRequest `json:"aws_iam_role,omitempty"`
	// The Azure managed identity configuration.
	AzureManagedIdentity *AzureManagedIdentityRequest `json:"azure_managed_identity,omitempty"`
	// The Azure service principal configuration.
	AzureServicePrincipal *AzureServicePrincipal `json:"azure_service_principal,omitempty"`
	// The Cloudflare API token configuration.
	CloudflareApiToken *CloudflareApiToken `json:"cloudflare_api_token,omitempty"`
	// The Databricks created GCP service account configuration.
	DatabricksGcpServiceAccount *DatabricksGcpServiceAccountRequest `json:"databricks_gcp_service_account,omitempty"`
	// The name of an existing external location to validate.
	ExternalLocationName string `json:"external_location_name,omitempty"`
	// Whether the storage credential is only usable for read operations.
	ReadOnly bool `json:"read_only,omitempty"`
	// The name of the storage credential to validate.
	StorageCredentialName string `json:"storage_credential_name,omitempty"`
	// The external location url to validate.
	Url string `json:"url,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ValidateStorageCredential) MarshalJSON added in v0.23.0

func (s ValidateStorageCredential) MarshalJSON() ([]byte, error)

func (*ValidateStorageCredential) UnmarshalJSON added in v0.23.0

func (s *ValidateStorageCredential) UnmarshalJSON(b []byte) error
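
A usage sketch, assuming ctx, w and logger as in the examples above, with an illustrative credential name and bucket path; the request is passed to w.StorageCredentials.Validate and each ValidationResult is logged:

validation, err := w.StorageCredentials.Validate(ctx, catalog.ValidateStorageCredential{
	StorageCredentialName: "my_credential",                   // illustrative credential
	Url:                   "s3://my-bucket/path-to-validate", // illustrative location
})
if err != nil {
	panic(err)
}
for _, result := range validation.Results {
	logger.Infof(ctx, "%s: %s", result.Operation, result.Result)
}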

type ValidateStorageCredentialResponse

type ValidateStorageCredentialResponse struct {
	// Whether the tested location is a directory in cloud storage.
	IsDir bool `json:"isDir,omitempty"`
	// The results of the validation check.
	Results []ValidationResult `json:"results,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ValidateStorageCredentialResponse) MarshalJSON added in v0.23.0

func (s ValidateStorageCredentialResponse) MarshalJSON() ([]byte, error)

func (*ValidateStorageCredentialResponse) UnmarshalJSON added in v0.23.0

func (s *ValidateStorageCredentialResponse) UnmarshalJSON(b []byte) error

type ValidationResult

type ValidationResult struct {
	// Error message, populated when the result is not **PASS**.
	Message string `json:"message,omitempty"`
	// The operation tested.
	Operation ValidationResultOperation `json:"operation,omitempty"`
	// The results of the tested operation.
	Result ValidationResultResult `json:"result,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (ValidationResult) MarshalJSON added in v0.23.0

func (s ValidationResult) MarshalJSON() ([]byte, error)

func (*ValidationResult) UnmarshalJSON added in v0.23.0

func (s *ValidationResult) UnmarshalJSON(b []byte) error

type ValidationResultOperation

type ValidationResultOperation string

The operation tested.

const ValidationResultOperationDelete ValidationResultOperation = `DELETE`
const ValidationResultOperationList ValidationResultOperation = `LIST`
const ValidationResultOperationPathExists ValidationResultOperation = `PATH_EXISTS`
const ValidationResultOperationRead ValidationResultOperation = `READ`
const ValidationResultOperationWrite ValidationResultOperation = `WRITE`

func (*ValidationResultOperation) Set

func (f *ValidationResultOperation) Set(v string) error

Set raw string value and validate it against allowed values

func (*ValidationResultOperation) String

func (f *ValidationResultOperation) String() string

String representation for fmt.Print

func (*ValidationResultOperation) Type

func (f *ValidationResultOperation) Type() string

Type always returns ValidationResultOperation to satisfy [pflag.Value] interface

type ValidationResultResult

type ValidationResultResult string

The results of the tested operation.

const ValidationResultResultFail ValidationResultResult = `FAIL`
const ValidationResultResultPass ValidationResultResult = `PASS`
const ValidationResultResultSkip ValidationResultResult = `SKIP`

func (*ValidationResultResult) Set

func (f *ValidationResultResult) Set(v string) error

Set raw string value and validate it against allowed values

func (*ValidationResultResult) String

func (f *ValidationResultResult) String() string

String representation for fmt.Print

func (*ValidationResultResult) Type

func (f *ValidationResultResult) Type() string

Type always returns ValidationResultResult to satisfy [pflag.Value] interface

type VolumeInfo

type VolumeInfo struct {
	// The AWS access point to use when accessing S3 for this volume.
	AccessPoint string `json:"access_point,omitempty"`
	// Indicates whether the principal is limited to retrieving metadata for the
	// associated object through the BROWSE privilege when include_browse is
	// enabled in the request.
	BrowseOnly bool `json:"browse_only,omitempty"`
	// The name of the catalog where the schema and the volume are
	CatalogName string `json:"catalog_name,omitempty"`
	// The comment attached to the volume
	Comment string `json:"comment,omitempty"`

	CreatedAt int64 `json:"created_at,omitempty"`
	// The identifier of the user who created the volume
	CreatedBy string `json:"created_by,omitempty"`
	// Encryption options that apply to clients connecting to cloud storage.
	EncryptionDetails *EncryptionDetails `json:"encryption_details,omitempty"`
	// The three-level (fully qualified) name of the volume
	FullName string `json:"full_name,omitempty"`
	// The unique identifier of the metastore
	MetastoreId string `json:"metastore_id,omitempty"`
	// The name of the volume
	Name string `json:"name,omitempty"`
	// The identifier of the user who owns the volume
	Owner string `json:"owner,omitempty"`
	// The name of the schema where the volume is
	SchemaName string `json:"schema_name,omitempty"`
	// The storage location on the cloud
	StorageLocation string `json:"storage_location,omitempty"`

	UpdatedAt int64 `json:"updated_at,omitempty"`
	// The identifier of the user who updated the volume last time
	UpdatedBy string `json:"updated_by,omitempty"`
	// The unique identifier of the volume
	VolumeId string `json:"volume_id,omitempty"`

	VolumeType VolumeType `json:"volume_type,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (VolumeInfo) MarshalJSON added in v0.23.0

func (s VolumeInfo) MarshalJSON() ([]byte, error)

func (*VolumeInfo) UnmarshalJSON added in v0.23.0

func (s *VolumeInfo) UnmarshalJSON(b []byte) error

type VolumeType

type VolumeType string
const VolumeTypeExternal VolumeType = `EXTERNAL`
const VolumeTypeManaged VolumeType = `MANAGED`

func (*VolumeType) Set

func (f *VolumeType) Set(v string) error

Set raw string value and validate it against allowed values

func (*VolumeType) String

func (f *VolumeType) String() string

String representation for fmt.Print

func (*VolumeType) Type

func (f *VolumeType) Type() string

Type always returns VolumeType to satisfy [pflag.Value] interface

type VolumesAPI

type VolumesAPI struct {
	// contains filtered or unexported fields
}

Volumes are a Unity Catalog (UC) capability for accessing, storing, governing, organizing and processing files. Use cases include running machine learning on unstructured data such as image, audio, video, or PDF files, organizing data sets during the data exploration stages in data science, working with libraries that require access to the local file system on cluster machines, storing library and config files of arbitrary formats such as .whl or .txt centrally and providing secure access to them across workspaces, or transforming and querying non-tabular data files in ETL.

func NewVolumes

func NewVolumes(client *client.DatabricksClient) *VolumesAPI

func (*VolumesAPI) Create

func (a *VolumesAPI) Create(ctx context.Context, request CreateVolumeRequestContent) (*VolumeInfo, error)

Create a Volume.

Creates a new volume.

The user can create either an external volume or a managed volume. An external volume is created in the specified external location, while a managed volume is located in the default location specified by the parent schema, the parent catalog, or the metastore.

For the volume creation to succeed, the user must satisfy the following conditions: - The caller must be a metastore admin, or be the owner of the parent catalog and schema, or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema. - The caller must have the **CREATE VOLUME** privilege on the parent schema.

For an external volume, the following conditions must also be satisfied: - The caller must have the **CREATE EXTERNAL VOLUME** privilege on the external location. - There must be no other tables or volumes in the specified storage location. - The specified storage location must not be under the location of other tables, volumes, catalogs, or schemas.

Example (Volumes)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

storageCredential, err := w.StorageCredentials.Create(ctx, catalog.CreateStorageCredential{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	AwsIamRole: &catalog.AwsIamRoleRequest{
		RoleArn: os.Getenv("TEST_METASTORE_DATA_ACCESS_ARN"),
	},
	Comment: "created via SDK",
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", storageCredential)

externalLocation, err := w.ExternalLocations.Create(ctx, catalog.CreateExternalLocation{
	Name:           fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CredentialName: storageCredential.Name,
	Comment:        "created via SDK",
	Url:            "s3://" + os.Getenv("TEST_BUCKET") + "/" + fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", externalLocation)

createdCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdCatalog)

createdSchema, err := w.Schemas.Create(ctx, catalog.CreateSchema{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CatalogName: createdCatalog.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdSchema)

createdVolume, err := w.Volumes.Create(ctx, catalog.CreateVolumeRequestContent{
	CatalogName:     createdCatalog.Name,
	SchemaName:      createdSchema.Name,
	Name:            fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	StorageLocation: externalLocation.Url,
	VolumeType:      catalog.VolumeTypeExternal,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdVolume)

// cleanup: delete resources in reverse order of creation

err = w.Volumes.DeleteByName(ctx, createdVolume.FullName)
if err != nil {
	panic(err)
}
err = w.Schemas.DeleteByFullName(ctx, createdSchema.FullName)
if err != nil {
	panic(err)
}
err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  createdCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
err = w.ExternalLocations.DeleteByName(ctx, externalLocation.Name)
if err != nil {
	panic(err)
}
err = w.StorageCredentials.DeleteByName(ctx, storageCredential.Name)
if err != nil {
	panic(err)
}
Output:

func (*VolumesAPI) Delete

func (a *VolumesAPI) Delete(ctx context.Context, request DeleteVolumeRequest) error

Delete a Volume.

Deletes a volume from the specified parent catalog and schema.

The caller must be a metastore admin or an owner of the volume. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

func (*VolumesAPI) DeleteByName added in v0.32.0

func (a *VolumesAPI) DeleteByName(ctx context.Context, name string) error

Delete a Volume.

Deletes a volume from the specified parent catalog and schema.

The caller must be a metastore admin or an owner of the volume. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

func (*VolumesAPI) GetByName

func (a *VolumesAPI) GetByName(ctx context.Context, name string) (*VolumeInfo, error)

GetByName calls VolumesAPI.VolumeInfoNameToVolumeIdMap and returns a single VolumeInfo.

Returns an error if there's more than one VolumeInfo with the same .Name.

Note: All VolumeInfo instances are loaded into memory before returning matching by name.

This method is generated by Databricks SDK Code Generator.

func (*VolumesAPI) Impl

func (a *VolumesAPI) Impl() VolumesService

Impl returns low-level Volumes API implementation Deprecated: use MockVolumesInterface instead.

func (*VolumesAPI) List added in v0.24.0

func (a *VolumesAPI) List(ctx context.Context, request ListVolumesRequest) listing.Iterator[VolumeInfo]

List Volumes.

Gets an array of volumes for the current metastore under the parent catalog and schema.

The returned volumes are filtered based on the privileges of the calling user. For example, the metastore admin is able to list all the volumes. A regular user needs to be the owner or have the **READ VOLUME** privilege on the volume to receive the volumes in the response. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

There is no guarantee of a specific ordering of the elements in the array.

This method is generated by Databricks SDK Code Generator.

func (*VolumesAPI) ListAll

func (a *VolumesAPI) ListAll(ctx context.Context, request ListVolumesRequest) ([]VolumeInfo, error)

List Volumes.

Gets an array of volumes for the current metastore under the parent catalog and schema.

The returned volumes are filtered based on the privileges of the calling user. For example, the metastore admin is able to list all the volumes. A regular user needs to be the owner or have the **READ VOLUME** privilege on the volume to receive the volumes in the response. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

There is no guarantee of a specific ordering of the elements in the array.

This method is generated by Databricks SDK Code Generator.

Example (Volumes)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

createdCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdCatalog)

createdSchema, err := w.Schemas.Create(ctx, catalog.CreateSchema{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CatalogName: createdCatalog.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdSchema)

allVolumes, err := w.Volumes.ListAll(ctx, catalog.ListVolumesRequest{
	CatalogName: createdCatalog.Name,
	SchemaName:  createdSchema.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", allVolumes)

// cleanup

err = w.Schemas.DeleteByFullName(ctx, createdSchema.FullName)
if err != nil {
	panic(err)
}
err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  createdCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

func (*VolumesAPI) Read

func (a *VolumesAPI) Read(ctx context.Context, request ReadVolumeRequest) (*VolumeInfo, error)

Get a Volume.

Gets a volume from the metastore for a specific catalog and schema.

The caller must be a metastore admin or an owner of (or have the **READ VOLUME** privilege on) the volume. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

Example (Volumes)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

storageCredential, err := w.StorageCredentials.Create(ctx, catalog.CreateStorageCredential{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	AwsIamRole: &catalog.AwsIamRoleRequest{
		RoleArn: os.Getenv("TEST_METASTORE_DATA_ACCESS_ARN"),
	},
	Comment: "created via SDK",
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", storageCredential)

externalLocation, err := w.ExternalLocations.Create(ctx, catalog.CreateExternalLocation{
	Name:           fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CredentialName: storageCredential.Name,
	Comment:        "created via SDK",
	Url:            "s3://" + os.Getenv("TEST_BUCKET") + "/" + fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", externalLocation)

createdCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdCatalog)

createdSchema, err := w.Schemas.Create(ctx, catalog.CreateSchema{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CatalogName: createdCatalog.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdSchema)

createdVolume, err := w.Volumes.Create(ctx, catalog.CreateVolumeRequestContent{
	CatalogName:     createdCatalog.Name,
	SchemaName:      createdSchema.Name,
	Name:            fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	StorageLocation: externalLocation.Url,
	VolumeType:      catalog.VolumeTypeExternal,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdVolume)

loadedVolume, err := w.Volumes.ReadByName(ctx, createdVolume.FullName)
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", loadedVolume)

// cleanup: delete resources in reverse order of creation

err = w.Volumes.DeleteByName(ctx, createdVolume.FullName)
if err != nil {
	panic(err)
}
err = w.Schemas.DeleteByFullName(ctx, createdSchema.FullName)
if err != nil {
	panic(err)
}
err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  createdCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
err = w.ExternalLocations.DeleteByName(ctx, externalLocation.Name)
if err != nil {
	panic(err)
}
err = w.StorageCredentials.DeleteByName(ctx, storageCredential.Name)
if err != nil {
	panic(err)
}
Output:

func (*VolumesAPI) ReadByName added in v0.32.0

func (a *VolumesAPI) ReadByName(ctx context.Context, name string) (*VolumeInfo, error)

Get a Volume.

Gets a volume from the metastore for a specific catalog and schema.

The caller must be a metastore admin or an owner of (or have the **READ VOLUME** privilege on) the volume. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

func (*VolumesAPI) Update

func (a *VolumesAPI) Update(ctx context.Context, request UpdateVolumeRequestContent) (*VolumeInfo, error)

Update a Volume.

Updates the specified volume under the specified parent catalog and schema.

The caller must be a metastore admin or an owner of the volume. For the latter case, the caller must also be the owner or have the **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA** privilege on the parent schema.

Currently, only the name, the owner, or the comment of the volume can be updated.

Example (Volumes)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

storageCredential, err := w.StorageCredentials.Create(ctx, catalog.CreateStorageCredential{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	AwsIamRole: &catalog.AwsIamRoleRequest{
		RoleArn: os.Getenv("TEST_METASTORE_DATA_ACCESS_ARN"),
	},
	Comment: "created via SDK",
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", storageCredential)

externalLocation, err := w.ExternalLocations.Create(ctx, catalog.CreateExternalLocation{
	Name:           fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CredentialName: storageCredential.Name,
	Comment:        "created via SDK",
	Url:            "s3://" + os.Getenv("TEST_BUCKET") + "/" + fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", externalLocation)

createdCatalog, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdCatalog)

createdSchema, err := w.Schemas.Create(ctx, catalog.CreateSchema{
	Name:        fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	CatalogName: createdCatalog.Name,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdSchema)

createdVolume, err := w.Volumes.Create(ctx, catalog.CreateVolumeRequestContent{
	CatalogName:     createdCatalog.Name,
	SchemaName:      createdSchema.Name,
	Name:            fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
	StorageLocation: externalLocation.Url,
	VolumeType:      catalog.VolumeTypeExternal,
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", createdVolume)

loadedVolume, err := w.Volumes.ReadByName(ctx, createdVolume.FullName)
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", loadedVolume)

_, err = w.Volumes.Update(ctx, catalog.UpdateVolumeRequestContent{
	Name:    loadedVolume.FullName,
	Comment: "Updated volume comment",
})
if err != nil {
	panic(err)
}

// cleanup

err = w.StorageCredentials.DeleteByName(ctx, storageCredential.Name)
if err != nil {
	panic(err)
}
err = w.ExternalLocations.DeleteByName(ctx, externalLocation.Name)
if err != nil {
	panic(err)
}
err = w.Schemas.DeleteByFullName(ctx, createdSchema.FullName)
if err != nil {
	panic(err)
}
err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  createdCatalog.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
err = w.Volumes.DeleteByName(ctx, createdVolume.FullName)
if err != nil {
	panic(err)
}
Output:

func (*VolumesAPI) VolumeInfoNameToVolumeIdMap

func (a *VolumesAPI) VolumeInfoNameToVolumeIdMap(ctx context.Context, request ListVolumesRequest) (map[string]string, error)

VolumeInfoNameToVolumeIdMap calls VolumesAPI.ListAll and creates a map of results with VolumeInfo.Name as key and VolumeInfo.VolumeId as value.

Returns an error if there's more than one VolumeInfo with the same .Name.

Note: All VolumeInfo instances are loaded into memory before creating a map.

This method is generated by Databricks SDK Code Generator.
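
A minimal sketch of building the name-to-ID map for one schema; the catalog and schema names are placeholders, and scoping the listing via CatalogName and SchemaName fields of ListVolumesRequest is an assumption mirroring the volume examples above:

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

// CatalogName and SchemaName are assumed to define the listing scope;
// "main" and "default" are hypothetical names.
nameToId, err := w.Volumes.VolumeInfoNameToVolumeIdMap(ctx, catalog.ListVolumesRequest{
	CatalogName: "main",
	SchemaName:  "default",
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "volume IDs by name: %v", nameToId)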

func (*VolumesAPI) WithImpl

func (a *VolumesAPI) WithImpl(impl VolumesService) VolumesInterface

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockVolumesInterface instead.

type VolumesInterface added in v0.29.0

type VolumesInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockVolumesInterface instead.
	WithImpl(impl VolumesService) VolumesInterface

	// Impl returns low-level Volumes API implementation
	// Deprecated: use MockVolumesInterface instead.
	Impl() VolumesService

	// Create a Volume.
	//
	// Creates a new volume.
	//
	// The user can create either an external volume or a managed volume. An
	// external volume is created in the specified external location, while a
	// managed volume is located in the default location specified by the parent
	// schema, the parent catalog, or the metastore.
	//
	// For the volume creation to succeed, the user must satisfy the following
	// conditions: - The caller must be a metastore admin, or be the owner of the
	// parent catalog and schema, or have the **USE_CATALOG** privilege on the
	// parent catalog and the **USE_SCHEMA** privilege on the parent schema. - The
	// caller must have the **CREATE VOLUME** privilege on the parent schema.
	//
	// For an external volume, the following conditions must also be satisfied:
	// - The caller must have the **CREATE EXTERNAL VOLUME** privilege on the
	// external location. - There are no other tables or volumes in the specified
	// storage location. - The specified storage location is not under the
	// location of other tables, volumes, catalogs, or schemas.
	Create(ctx context.Context, request CreateVolumeRequestContent) (*VolumeInfo, error)

	// Delete a Volume.
	//
	// Deletes a volume from the specified parent catalog and schema.
	//
	// The caller must be a metastore admin or an owner of the volume. For the
	// latter case, the caller must also be the owner or have the **USE_CATALOG**
	// privilege on the parent catalog and the **USE_SCHEMA** privilege on the
	// parent schema.
	Delete(ctx context.Context, request DeleteVolumeRequest) error

	// Delete a Volume.
	//
	// Deletes a volume from the specified parent catalog and schema.
	//
	// The caller must be a metastore admin or an owner of the volume. For the
	// latter case, the caller must also be the owner or have the **USE_CATALOG**
	// privilege on the parent catalog and the **USE_SCHEMA** privilege on the
	// parent schema.
	DeleteByName(ctx context.Context, name string) error

	// List Volumes.
	//
	// Gets an array of volumes for the current metastore under the parent catalog
	// and schema.
	//
	// The returned volumes are filtered based on the privileges of the calling
	// user. For example, the metastore admin is able to list all the volumes. A
	// regular user needs to be the owner or have the **READ VOLUME** privilege on
	// the volume to receive the volumes in the response. For the latter case, the
	// caller must also be the owner or have the **USE_CATALOG** privilege on the
	// parent catalog and the **USE_SCHEMA** privilege on the parent schema.
	//
	// There is no guarantee of a specific ordering of the elements in the array.
	//
	// This method is generated by Databricks SDK Code Generator.
	List(ctx context.Context, request ListVolumesRequest) listing.Iterator[VolumeInfo]

	// List Volumes.
	//
	// Gets an array of volumes for the current metastore under the parent catalog
	// and schema.
	//
	// The returned volumes are filtered based on the privileges of the calling
	// user. For example, the metastore admin is able to list all the volumes. A
	// regular user needs to be the owner or have the **READ VOLUME** privilege on
	// the volume to receive the volumes in the response. For the latter case, the
	// caller must also be the owner or have the **USE_CATALOG** privilege on the
	// parent catalog and the **USE_SCHEMA** privilege on the parent schema.
	//
	// There is no guarantee of a specific ordering of the elements in the array.
	//
	// This method is generated by Databricks SDK Code Generator.
	ListAll(ctx context.Context, request ListVolumesRequest) ([]VolumeInfo, error)

	// VolumeInfoNameToVolumeIdMap calls [VolumesAPI.ListAll] and creates a map of results with [VolumeInfo].Name as key and [VolumeInfo].VolumeId as value.
	//
	// Returns an error if there's more than one [VolumeInfo] with the same .Name.
	//
	// Note: All [VolumeInfo] instances are loaded into memory before creating a map.
	//
	// This method is generated by Databricks SDK Code Generator.
	VolumeInfoNameToVolumeIdMap(ctx context.Context, request ListVolumesRequest) (map[string]string, error)

	// GetByName calls [VolumesAPI.VolumeInfoNameToVolumeIdMap] and returns a single [VolumeInfo].
	//
	// Returns an error if there's more than one [VolumeInfo] with the same .Name.
	//
	// Note: All [VolumeInfo] instances are loaded into memory before returning matching by name.
	//
	// This method is generated by Databricks SDK Code Generator.
	GetByName(ctx context.Context, name string) (*VolumeInfo, error)

	// Get a Volume.
	//
	// Gets a volume from the metastore for a specific catalog and schema.
	//
	// The caller must be a metastore admin or an owner of (or have the **READ
	// VOLUME** privilege on) the volume. For the latter case, the caller must also
	// be the owner or have the **USE_CATALOG** privilege on the parent catalog and
	// the **USE_SCHEMA** privilege on the parent schema.
	Read(ctx context.Context, request ReadVolumeRequest) (*VolumeInfo, error)

	// Get a Volume.
	//
	// Gets a volume from the metastore for a specific catalog and schema.
	//
	// The caller must be a metastore admin or an owner of (or have the **READ
	// VOLUME** privilege on) the volume. For the latter case, the caller must also
	// be the owner or have the **USE_CATALOG** privilege on the parent catalog and
	// the **USE_SCHEMA** privilege on the parent schema.
	ReadByName(ctx context.Context, name string) (*VolumeInfo, error)

	// Update a Volume.
	//
	// Updates the specified volume under the specified parent catalog and schema.
	//
	// The caller must be a metastore admin or an owner of the volume. For the
	// latter case, the caller must also be the owner or have the **USE_CATALOG**
	// privilege on the parent catalog and the **USE_SCHEMA** privilege on the
	// parent schema.
	//
	// Currently, only the name, the owner, or the comment of the volume can be
	// updated.
	Update(ctx context.Context, request UpdateVolumeRequestContent) (*VolumeInfo, error)
}

type VolumesService

type VolumesService interface {

	// Create a Volume.
	//
	// Creates a new volume.
	//
	// The user can create either an external volume or a managed volume. An
	// external volume is created in the specified external location, while a
	// managed volume is located in the default location specified by the
	// parent schema, the parent catalog, or the metastore.
	//
	// For the volume creation to succeed, the user must satisfy the following
	// conditions: - The caller must be a metastore admin, or be the owner of
	// the parent catalog and schema, or have the **USE_CATALOG** privilege on
	// the parent catalog and the **USE_SCHEMA** privilege on the parent schema.
	// - The caller must have the **CREATE VOLUME** privilege on the parent
	// schema.
	//
	// For an external volume, the following conditions must also be satisfied:
	// - The caller must have the **CREATE EXTERNAL VOLUME** privilege on the
	// external location. - There are no other tables or volumes in the
	// specified storage location. - The specified storage location is not under
	// the location of other tables, volumes, catalogs, or schemas.
	Create(ctx context.Context, request CreateVolumeRequestContent) (*VolumeInfo, error)

	// Delete a Volume.
	//
	// Deletes a volume from the specified parent catalog and schema.
	//
	// The caller must be a metastore admin or an owner of the volume. For the
	// latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	Delete(ctx context.Context, request DeleteVolumeRequest) error

	// List Volumes.
	//
	// Gets an array of volumes for the current metastore under the parent
	// catalog and schema.
	//
	// The returned volumes are filtered based on the privileges of the calling
	// user. For example, the metastore admin is able to list all the volumes. A
	// regular user needs to be the owner or have the **READ VOLUME** privilege
	// on the volume to receive the volumes in the response. For the latter
	// case, the caller must also be the owner or have the **USE_CATALOG**
	// privilege on the parent catalog and the **USE_SCHEMA** privilege on the
	// parent schema.
	//
	// There is no guarantee of a specific ordering of the elements in the
	// array.
	//
	// Use ListAll() to get all VolumeInfo instances, which will iterate over every result page.
	List(ctx context.Context, request ListVolumesRequest) (*ListVolumesResponseContent, error)

	// Get a Volume.
	//
	// Gets a volume from the metastore for a specific catalog and schema.
	//
	// The caller must be a metastore admin or an owner of (or have the **READ
	// VOLUME** privilege on) the volume. For the latter case, the caller must
	// also be the owner or have the **USE_CATALOG** privilege on the parent
	// catalog and the **USE_SCHEMA** privilege on the parent schema.
	Read(ctx context.Context, request ReadVolumeRequest) (*VolumeInfo, error)

	// Update a Volume.
	//
	// Updates the specified volume under the specified parent catalog and
	// schema.
	//
	// The caller must be a metastore admin or an owner of the volume. For the
	// latter case, the caller must also be the owner or have the
	// **USE_CATALOG** privilege on the parent catalog and the **USE_SCHEMA**
	// privilege on the parent schema.
	//
	// Currently, only the name, the owner, or the comment of the volume can be
	// updated.
	Update(ctx context.Context, request UpdateVolumeRequestContent) (*VolumeInfo, error)
}

Volumes are a Unity Catalog (UC) capability for accessing, storing, governing, organizing, and processing files. Use cases include: running machine learning on unstructured data such as image, audio, video, or PDF files; organizing data sets during the data exploration stages in data science; working with libraries that require access to the local file system on cluster machines; storing library and config files of arbitrary formats such as .whl or .txt centrally and providing secure access to them across workspaces; and transforming and querying non-tabular data files in ETL.

type WorkspaceBinding added in v0.23.0

type WorkspaceBinding struct {
	BindingType WorkspaceBindingBindingType `json:"binding_type,omitempty"`

	WorkspaceId int64 `json:"workspace_id,omitempty"`

	ForceSendFields []string `json:"-"`
}

func (WorkspaceBinding) MarshalJSON added in v0.23.0

func (s WorkspaceBinding) MarshalJSON() ([]byte, error)

func (*WorkspaceBinding) UnmarshalJSON added in v0.23.0

func (s *WorkspaceBinding) UnmarshalJSON(b []byte) error
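
A minimal sketch of constructing and marshaling a binding. It assumes the usual SDK convention that ForceSendFields forces zero-valued fields (such as a workspace ID of 0) into the JSON despite their omitempty tags, and that encoding/json and fmt are imported; the workspace ID is hypothetical:

wb := catalog.WorkspaceBinding{
	BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly,
	WorkspaceId: 1234567890, // hypothetical workspace ID
}
b, err := json.Marshal(wb) // dispatches to the custom MarshalJSON above
if err != nil {
	panic(err)
}
fmt.Println(string(b))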

type WorkspaceBindingBindingType added in v0.23.0

type WorkspaceBindingBindingType string
const WorkspaceBindingBindingTypeBindingTypeReadOnly WorkspaceBindingBindingType = `BINDING_TYPE_READ_ONLY`
const WorkspaceBindingBindingTypeBindingTypeReadWrite WorkspaceBindingBindingType = `BINDING_TYPE_READ_WRITE`

func (*WorkspaceBindingBindingType) Set added in v0.23.0

Set raw string value and validate it against allowed values

func (*WorkspaceBindingBindingType) String added in v0.23.0

func (f *WorkspaceBindingBindingType) String() string

String representation for fmt.Print

func (*WorkspaceBindingBindingType) Type added in v0.23.0

Type always returns WorkspaceBindingBindingType to satisfy the [pflag.Value] interface
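
Because the type satisfies the pflag.Value interface, a raw string can be parsed and validated with Set. A minimal sketch, assuming Set follows the standard Set(string) error shape of pflag.Value and that fmt is imported:

var bindingType catalog.WorkspaceBindingBindingType
if err := bindingType.Set("BINDING_TYPE_READ_ONLY"); err != nil {
	panic(err) // an unrecognized value is expected to fail validation here
}
fmt.Println(bindingType.String()) // prints the raw value, BINDING_TYPE_READ_ONLY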

type WorkspaceBindingsAPI added in v0.9.0

type WorkspaceBindingsAPI struct {
	// contains filtered or unexported fields
}

A securable in Databricks can be configured as __OPEN__ or __ISOLATED__. An __OPEN__ securable can be accessed from any workspace, while an __ISOLATED__ securable can only be accessed from a configured list of workspaces. This API allows you to configure (bind) securables to workspaces.

NOTE: The __isolation_mode__ is configured for the securable itself (using its Update method) and the workspace bindings are only consulted when the securable's __isolation_mode__ is set to __ISOLATED__.

A securable's workspace bindings can be configured by a metastore admin or the owner of the securable.

The original path (/api/2.1/unity-catalog/workspace-bindings/catalogs/{name}) is deprecated. Please use the new path (/api/2.1/unity-catalog/bindings/{securable_type}/{securable_name}) which introduces the ability to bind a securable in READ_ONLY mode (catalogs only).

Securables that support binding: - catalog

func NewWorkspaceBindings added in v0.9.0

func NewWorkspaceBindings(client *client.DatabricksClient) *WorkspaceBindingsAPI

func (*WorkspaceBindingsAPI) Get added in v0.9.0

Get catalog workspace bindings.

Gets workspace bindings of the catalog. The caller must be a metastore admin or an owner of the catalog.

Example (CatalogWorkspaceBindings)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

created, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

bindings, err := w.WorkspaceBindings.GetByName(ctx, created.Name)
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", bindings)

// cleanup

err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  created.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

func (*WorkspaceBindingsAPI) GetBindings added in v0.23.0

Get securable workspace bindings.

Gets workspace bindings of the securable. The caller must be a metastore admin or an owner of the securable.

func (*WorkspaceBindingsAPI) GetBindingsBySecurableTypeAndSecurableName added in v0.23.0

func (a *WorkspaceBindingsAPI) GetBindingsBySecurableTypeAndSecurableName(ctx context.Context, securableType string, securableName string) (*WorkspaceBindingsResponse, error)

Get securable workspace bindings.

Gets workspace bindings of the securable. The caller must be a metastore admin or an owner of the securable.
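
A minimal sketch of fetching bindings by securable type and name; "catalog" is the securable type and "main" is a hypothetical catalog name:

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

bindings, err := w.WorkspaceBindings.GetBindingsBySecurableTypeAndSecurableName(ctx, "catalog", "main")
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", bindings)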

func (*WorkspaceBindingsAPI) GetByName added in v0.9.0

Get catalog workspace bindings.

Gets workspace bindings of the catalog. The caller must be a metastore admin or an owner of the catalog.

func (*WorkspaceBindingsAPI) Impl added in v0.9.0

Impl returns the low-level WorkspaceBindings API implementation. Deprecated: use MockWorkspaceBindingsInterface instead.

func (*WorkspaceBindingsAPI) Update added in v0.9.0

Update catalog workspace bindings.

Updates workspace bindings of the catalog. The caller must be a metastore admin or an owner of the catalog.

Example (CatalogWorkspaceBindings)
ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

thisWorkspaceId := func(v string) int64 {
	i, err := strconv.ParseInt(v, 10, 64)
	if err != nil {
		panic(fmt.Sprintf("`%s` is not int64: %s", v, err))
	}
	return i
}(os.Getenv("THIS_WORKSPACE_ID"))

created, err := w.Catalogs.Create(ctx, catalog.CreateCatalog{
	Name: fmt.Sprintf("sdk-%x", time.Now().UnixNano()),
})
if err != nil {
	panic(err)
}
logger.Infof(ctx, "found %v", created)

_, err = w.WorkspaceBindings.Update(ctx, catalog.UpdateWorkspaceBindings{
	Name:             created.Name,
	AssignWorkspaces: []int64{thisWorkspaceId},
})
if err != nil {
	panic(err)
}

// cleanup

err = w.Catalogs.Delete(ctx, catalog.DeleteCatalogRequest{
	Name:  created.Name,
	Force: true,
})
if err != nil {
	panic(err)
}
Output:

func (*WorkspaceBindingsAPI) UpdateBindings added in v0.23.0

Update securable workspace bindings.

Updates workspace bindings of the securable. The caller must be a metastore admin or an owner of the securable.
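
A minimal sketch of adding a read-only workspace binding to a catalog. The UpdateWorkspaceBindingsParameters field names shown here (SecurableType, SecurableName, Add) are assumptions based on the request type, and the catalog name and workspace ID are hypothetical:

ctx := context.Background()
w, err := databricks.NewWorkspaceClient()
if err != nil {
	panic(err)
}

_, err = w.WorkspaceBindings.UpdateBindings(ctx, catalog.UpdateWorkspaceBindingsParameters{
	SecurableType: "catalog",
	SecurableName: "main", // hypothetical catalog name
	Add: []catalog.WorkspaceBinding{
		{
			BindingType: catalog.WorkspaceBindingBindingTypeBindingTypeReadOnly,
			WorkspaceId: 1234567890, // hypothetical workspace ID
		},
	},
})
if err != nil {
	panic(err)
}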

func (*WorkspaceBindingsAPI) WithImpl added in v0.9.0

WithImpl could be used to override low-level API implementations for unit testing purposes with github.com/golang/mock or other mocking frameworks. Deprecated: use MockWorkspaceBindingsInterface instead.

type WorkspaceBindingsInterface added in v0.29.0

type WorkspaceBindingsInterface interface {
	// WithImpl could be used to override low-level API implementations for unit
	// testing purposes with [github.com/golang/mock] or other mocking frameworks.
	// Deprecated: use MockWorkspaceBindingsInterface instead.
	WithImpl(impl WorkspaceBindingsService) WorkspaceBindingsInterface

	// Impl returns low-level WorkspaceBindings API implementation
	// Deprecated: use MockWorkspaceBindingsInterface instead.
	Impl() WorkspaceBindingsService

	// Get catalog workspace bindings.
	//
	// Gets workspace bindings of the catalog. The caller must be a metastore admin
	// or an owner of the catalog.
	Get(ctx context.Context, request GetWorkspaceBindingRequest) (*CurrentWorkspaceBindings, error)

	// Get catalog workspace bindings.
	//
	// Gets workspace bindings of the catalog. The caller must be a metastore admin
	// or an owner of the catalog.
	GetByName(ctx context.Context, name string) (*CurrentWorkspaceBindings, error)

	// Get securable workspace bindings.
	//
	// Gets workspace bindings of the securable. The caller must be a metastore
	// admin or an owner of the securable.
	GetBindings(ctx context.Context, request GetBindingsRequest) (*WorkspaceBindingsResponse, error)

	// Get securable workspace bindings.
	//
	// Gets workspace bindings of the securable. The caller must be a metastore
	// admin or an owner of the securable.
	GetBindingsBySecurableTypeAndSecurableName(ctx context.Context, securableType string, securableName string) (*WorkspaceBindingsResponse, error)

	// Update catalog workspace bindings.
	//
	// Updates workspace bindings of the catalog. The caller must be a metastore
	// admin or an owner of the catalog.
	Update(ctx context.Context, request UpdateWorkspaceBindings) (*CurrentWorkspaceBindings, error)

	// Update securable workspace bindings.
	//
	// Updates workspace bindings of the securable. The caller must be a metastore
	// admin or an owner of the securable.
	UpdateBindings(ctx context.Context, request UpdateWorkspaceBindingsParameters) (*WorkspaceBindingsResponse, error)
}

type WorkspaceBindingsResponse added in v0.23.0

type WorkspaceBindingsResponse struct {
	// List of workspace bindings
	Bindings []WorkspaceBinding `json:"bindings,omitempty"`
}

Currently assigned workspace bindings

type WorkspaceBindingsService added in v0.9.0

type WorkspaceBindingsService interface {

	// Get catalog workspace bindings.
	//
	// Gets workspace bindings of the catalog. The caller must be a metastore
	// admin or an owner of the catalog.
	Get(ctx context.Context, request GetWorkspaceBindingRequest) (*CurrentWorkspaceBindings, error)

	// Get securable workspace bindings.
	//
	// Gets workspace bindings of the securable. The caller must be a metastore
	// admin or an owner of the securable.
	GetBindings(ctx context.Context, request GetBindingsRequest) (*WorkspaceBindingsResponse, error)

	// Update catalog workspace bindings.
	//
	// Updates workspace bindings of the catalog. The caller must be a metastore
	// admin or an owner of the catalog.
	Update(ctx context.Context, request UpdateWorkspaceBindings) (*CurrentWorkspaceBindings, error)

	// Update securable workspace bindings.
	//
	// Updates workspace bindings of the securable. The caller must be a
	// metastore admin or an owner of the securable.
	UpdateBindings(ctx context.Context, request UpdateWorkspaceBindingsParameters) (*WorkspaceBindingsResponse, error)
}

A securable in Databricks can be configured as __OPEN__ or __ISOLATED__. An __OPEN__ securable can be accessed from any workspace, while an __ISOLATED__ securable can only be accessed from a configured list of workspaces. This API allows you to configure (bind) securables to workspaces.

NOTE: The __isolation_mode__ is configured for the securable itself (using its Update method) and the workspace bindings are only consulted when the securable's __isolation_mode__ is set to __ISOLATED__.

A securable's workspace bindings can be configured by a metastore admin or the owner of the securable.

The original path (/api/2.1/unity-catalog/workspace-bindings/catalogs/{name}) is deprecated. Please use the new path (/api/2.1/unity-catalog/bindings/{securable_type}/{securable_name}) which introduces the ability to bind a securable in READ_ONLY mode (catalogs only).

Securables that support binding: - catalog
