job

package
v12.4.0-beta+incompatible
Published: Feb 3, 2018 License: Apache-2.0 Imports: 9 Imported by: 0

Documentation

Overview

Package job implements the Azure ARM Job service API version 2017-09-01-preview.

Creates an Azure Data Lake Analytics job client.
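
A minimal sketch of constructing and authorizing a client follows. The tenant, application credentials, and token resource are placeholders, and the import paths assume the SDK and go-autorest layout current at this release; verify both for your environment.

package job_test

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/datalake/analytics/2017-09-01-preview/job"
	"github.com/Azure/go-autorest/autorest"
	"github.com/Azure/go-autorest/autorest/adal"
)

func ExampleNewClient() {
	// Placeholder service-principal credentials; supply your own.
	const (
		tenantID     = "00000000-0000-0000-0000-000000000000"
		clientID     = "00000000-0000-0000-0000-000000000000"
		clientSecret = "secret"
		// Assumed token audience for the Data Lake data plane; confirm the value for your cloud.
		resource = "https://datalake.azure.net/"
	)

	oauthConfig, err := adal.NewOAuthConfig("https://login.microsoftonline.com/", tenantID)
	if err != nil {
		fmt.Println(err)
		return
	}
	token, err := adal.NewServicePrincipalToken(*oauthConfig, clientID, clientSecret, resource)
	if err != nil {
		fmt.Println(err)
		return
	}

	// NewClient targets the default DNS suffix, azuredatalakeanalytics.net.
	client := job.NewClient()
	client.Authorizer = autorest.NewBearerAuthorizer(token)
	_ = client
}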

Index

Constants

const (
	// DefaultAdlaJobDNSSuffix is the default value for the ADLA job DNS suffix
	DefaultAdlaJobDNSSuffix = "azuredatalakeanalytics.net"
)

Variables

This section is empty.

Functions

func UserAgent

func UserAgent() string

UserAgent returns the UserAgent string to use when sending http.Requests.

func Version

func Version() string

Version returns the semantic version (see http://semver.org) of the client.

Types

type BaseClient

type BaseClient struct {
	autorest.Client
	AdlaJobDNSSuffix string
}

BaseClient is the base client for Job.

func New

func New() BaseClient

New creates an instance of the BaseClient client.

func NewWithoutDefaults

func NewWithoutDefaults(adlaJobDNSSuffix string) BaseClient

NewWithoutDefaults creates an instance of the BaseClient client.
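
Because Client embeds BaseClient, a client that targets a non-default DNS suffix can be assembled from NewWithoutDefaults. A minimal sketch, assuming an autorest.Authorizer obtained as in the Overview example and an invented suffix:

func newCustomSuffixClient(authorizer autorest.Authorizer) job.Client {
	// Hypothetical suffix; substitute the ADLA job DNS suffix for your cloud.
	c := job.Client{BaseClient: job.NewWithoutDefaults("azuredatalakeanalytics.example.net")}
	c.Authorizer = authorizer
	return c
}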

type BaseJobParameters

type BaseJobParameters struct {
	// Type - the job type of the current job (Hive, USql, or Scope (for internal use only)). Possible values include: 'USQL', 'Hive', 'Scope'
	Type Type `json:"type,omitempty"`
	// Properties - the job specific properties.
	Properties BasicCreateJobProperties `json:"properties,omitempty"`
}

BaseJobParameters Data Lake Analytics Job Parameters base class for build and submit.

func (*BaseJobParameters) UnmarshalJSON

func (bjp *BaseJobParameters) UnmarshalJSON(body []byte) error

UnmarshalJSON is the custom unmarshaler for BaseJobParameters struct.

type BasicCreateJobProperties

type BasicCreateJobProperties interface {
	AsCreateUSQLJobProperties() (*CreateUSQLJobProperties, bool)
	AsCreateScopeJobProperties() (*CreateScopeJobProperties, bool)
	AsCreateJobProperties() (*CreateJobProperties, bool)
}

BasicCreateJobProperties the common Data Lake Analytics job properties for job submission.

type BasicProperties

type BasicProperties interface {
	AsUSQLJobProperties() (*USQLJobProperties, bool)
	AsScopeJobProperties() (*ScopeJobProperties, bool)
	AsHiveJobProperties() (*HiveJobProperties, bool)
	AsProperties() (*Properties, bool)
}

BasicProperties the common Data Lake Analytics job properties.

type BuildJobParameters

type BuildJobParameters struct {
	// Type - the job type of the current job (Hive, USql, or Scope (for internal use only)). Possible values include: 'USQL', 'Hive', 'Scope'
	Type Type `json:"type,omitempty"`
	// Properties - the job specific properties.
	Properties BasicCreateJobProperties `json:"properties,omitempty"`
	// Name - the friendly name of the job to build.
	Name *string `json:"name,omitempty"`
}

BuildJobParameters the parameters used to build a new Data Lake Analytics job.

func (*BuildJobParameters) UnmarshalJSON

func (bjp *BuildJobParameters) UnmarshalJSON(body []byte) error

UnmarshalJSON is the custom unmarshaler for BuildJobParameters struct.

type Client

type Client struct {
	BaseClient
}

Client is the Azure Data Lake Analytics job client.

func NewClient

func NewClient() Client

NewClient creates an instance of the Client client.

func (Client) Build

func (client Client) Build(ctx context.Context, accountName string, parameters BuildJobParameters) (result Information, err error)

Build builds (compiles) the specified job in the specified Data Lake Analytics account for job correctness and validation.

accountName is the Azure Data Lake Analytics account to execute job operations on. parameters is the parameters to build a job.
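
A minimal sketch of a syntax-only check, assuming an authorized client as in the Overview example, imports of context and fmt plus the go-autorest to helper package, and that the Type constants carry the names listed in the comments above (USQL, Hive, Scope); the account name and script are illustrative.

func buildExample(ctx context.Context, client job.Client) error {
	params := job.BuildJobParameters{
		Type: job.USQL,
		Name: to.StringPtr("syntax-check"),
		Properties: job.CreateUSQLJobProperties{
			Script: to.StringPtr(`@rows = SELECT * FROM (VALUES (1)) AS T(x); OUTPUT @rows TO "/out/rows.csv" USING Outputters.Csv();`),
		},
	}
	// Build compiles the script without running it; compile problems, if any,
	// surface in the returned Information (for example in ErrorMessage).
	info, err := client.Build(ctx, "myadlaaccount", params)
	if err != nil {
		return err
	}
	if info.ErrorMessage != nil {
		for _, e := range *info.ErrorMessage {
			if e.Message != nil {
				fmt.Println(*e.Message)
			}
		}
	}
	return nil
}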

func (Client) BuildPreparer

func (client Client) BuildPreparer(ctx context.Context, accountName string, parameters BuildJobParameters) (*http.Request, error)

BuildPreparer prepares the Build request.

func (Client) BuildResponder

func (client Client) BuildResponder(resp *http.Response) (result Information, err error)

BuildResponder handles the response to the Build request. The method always closes the http.Response Body.

func (Client) BuildSender

func (client Client) BuildSender(req *http.Request) (*http.Response, error)

BuildSender sends the Build request. The method will close the http.Response Body if it receives an error.

func (Client) Cancel

func (client Client) Cancel(ctx context.Context, accountName string, jobIdentity uuid.UUID) (result JobCancelFuture, err error)

Cancel cancels the running job specified by the job ID.

accountName is the Azure Data Lake Analytics account to execute job operations on. jobIdentity is job identifier. Uniquely identifies the job across all jobs submitted to the service.

func (Client) CancelPreparer

func (client Client) CancelPreparer(ctx context.Context, accountName string, jobIdentity uuid.UUID) (*http.Request, error)

CancelPreparer prepares the Cancel request.

func (Client) CancelResponder

func (client Client) CancelResponder(resp *http.Response) (result autorest.Response, err error)

CancelResponder handles the response to the Cancel request. The method always closes the http.Response Body.

func (Client) CancelSender

func (client Client) CancelSender(req *http.Request) (future JobCancelFuture, err error)

CancelSender sends the Cancel request. The method will close the http.Response Body if it receives an error.

func (Client) Create

func (client Client) Create(ctx context.Context, accountName string, jobIdentity uuid.UUID, parameters CreateJobParameters) (result Information, err error)

Create submits a job to the specified Data Lake Analytics account.

accountName is the Azure Data Lake Analytics account to execute job operations on. jobIdentity is job identifier. Uniquely identifies the job across all jobs submitted to the service. parameters is the parameters to submit a job.
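
A minimal sketch of submitting a U-SQL job under a caller-supplied job ID, with the same assumptions as the Build sketch plus the uuid package used by this SDK; the name, parallelism, and priority values are illustrative.

func submitExample(ctx context.Context, client job.Client, jobID uuid.UUID) (job.Information, error) {
	params := job.CreateJobParameters{
		Type:                job.USQL,
		Name:                to.StringPtr("daily-aggregation"),
		DegreeOfParallelism: to.Int32Ptr(1),
		Priority:            to.Int32Ptr(1000),
		Properties: job.CreateUSQLJobProperties{
			Script: to.StringPtr(`@rows = SELECT * FROM (VALUES (1)) AS T(x); OUTPUT @rows TO "/output/rows.csv" USING Outputters.Csv();`),
		},
	}
	// The caller chooses the job ID; it must be unique across all jobs submitted to the service.
	return client.Create(ctx, "myadlaaccount", jobID, params)
}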

func (Client) CreatePreparer

func (client Client) CreatePreparer(ctx context.Context, accountName string, jobIdentity uuid.UUID, parameters CreateJobParameters) (*http.Request, error)

CreatePreparer prepares the Create request.

func (Client) CreateResponder

func (client Client) CreateResponder(resp *http.Response) (result Information, err error)

CreateResponder handles the response to the Create request. The method always closes the http.Response Body.

func (Client) CreateSender

func (client Client) CreateSender(req *http.Request) (*http.Response, error)

CreateSender sends the Create request. The method will close the http.Response Body if it receives an error.

func (Client) Get

func (client Client) Get(ctx context.Context, accountName string, jobIdentity uuid.UUID) (result Information, err error)

Get gets the job information for the specified job ID.

accountName is the Azure Data Lake Analytics account to execute job operations on. jobIdentity is jobInfo ID.
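
A minimal sketch that polls Get until the job reaches the Ended state, assuming the State constants carry the names listed in the Information field comments below (StateEnded and so on) and imports of context, time, and the uuid package; the 15-second interval is arbitrary.

func waitForJob(ctx context.Context, client job.Client, jobID uuid.UUID) (job.Information, error) {
	for {
		info, err := client.Get(ctx, "myadlaaccount", jobID)
		if err != nil {
			return job.Information{}, err
		}
		// Once the job has Ended, Result and ErrorMessage describe the outcome.
		if info.State == job.StateEnded {
			return info, nil
		}
		select {
		case <-ctx.Done():
			return job.Information{}, ctx.Err()
		case <-time.After(15 * time.Second):
		}
	}
}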

func (Client) GetDebugDataPath

func (client Client) GetDebugDataPath(ctx context.Context, accountName string, jobIdentity uuid.UUID) (result DataPath, err error)

GetDebugDataPath gets the job debug data information specified by the job ID.

accountName is the Azure Data Lake Analytics account to execute job operations on. jobIdentity is job identifier. Uniquely identifies the job across all jobs submitted to the service.

func (Client) GetDebugDataPathPreparer

func (client Client) GetDebugDataPathPreparer(ctx context.Context, accountName string, jobIdentity uuid.UUID) (*http.Request, error)

GetDebugDataPathPreparer prepares the GetDebugDataPath request.

func (Client) GetDebugDataPathResponder

func (client Client) GetDebugDataPathResponder(resp *http.Response) (result DataPath, err error)

GetDebugDataPathResponder handles the response to the GetDebugDataPath request. The method always closes the http.Response Body.

func (Client) GetDebugDataPathSender

func (client Client) GetDebugDataPathSender(req *http.Request) (*http.Response, error)

GetDebugDataPathSender sends the GetDebugDataPath request. The method will close the http.Response Body if it receives an error.

func (Client) GetPreparer

func (client Client) GetPreparer(ctx context.Context, accountName string, jobIdentity uuid.UUID) (*http.Request, error)

GetPreparer prepares the Get request.

func (Client) GetResponder

func (client Client) GetResponder(resp *http.Response) (result Information, err error)

GetResponder handles the response to the Get request. The method always closes the http.Response Body.

func (Client) GetSender

func (client Client) GetSender(req *http.Request) (*http.Response, error)

GetSender sends the Get request. The method will close the http.Response Body if it receives an error.

func (Client) GetStatistics

func (client Client) GetStatistics(ctx context.Context, accountName string, jobIdentity uuid.UUID) (result Statistics, err error)

GetStatistics gets statistics of the specified job.

accountName is the Azure Data Lake Analytics account to execute job operations on. jobIdentity is job Information ID.

func (Client) GetStatisticsPreparer

func (client Client) GetStatisticsPreparer(ctx context.Context, accountName string, jobIdentity uuid.UUID) (*http.Request, error)

GetStatisticsPreparer prepares the GetStatistics request.

func (Client) GetStatisticsResponder

func (client Client) GetStatisticsResponder(resp *http.Response) (result Statistics, err error)

GetStatisticsResponder handles the response to the GetStatistics request. The method always closes the http.Response Body.

func (Client) GetStatisticsSender

func (client Client) GetStatisticsSender(req *http.Request) (*http.Response, error)

GetStatisticsSender sends the GetStatistics request. The method will close the http.Response Body if it receives an error.

func (Client) List

func (client Client) List(ctx context.Context, accountName string, filter string, top *int32, skip *int32, selectParameter string, orderby string, count *bool) (result InfoListResultPage, err error)

List lists the jobs, if any, associated with the specified Data Lake Analytics account. The response includes a link to the next page of results, if any.

accountName is the Azure Data Lake Analytics account to execute job operations on. filter is oData filter. Optional. top is the number of items to return. Optional. skip is the number of items to skip over before returning elements. Optional. selectParameter is oData Select statement. Limits the properties on each entry to just those requested, e.g. Categories?$select=CategoryName,Description. Optional. orderby is orderBy clause. One or more comma-separated expressions with an optional "asc" (the default) or "desc" depending on the order you'd like the values sorted, e.g. Categories?$orderby=CategoryName desc. Optional. count is the Boolean value of true or false to request a count of the matching resources included with the resources in the response, e.g. Categories?$count=true. Optional.

func (Client) ListComplete

func (client Client) ListComplete(ctx context.Context, accountName string, filter string, top *int32, skip *int32, selectParameter string, orderby string, count *bool) (result InfoListResultIterator, err error)

ListComplete enumerates all values, automatically crossing page boundaries as required.
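
A minimal sketch that walks every job in an account with the iterator form, assuming imports of context and fmt and an authorized client as above; the account name is illustrative.

func printJobs(ctx context.Context, client job.Client) error {
	// Leave the OData filter, select, orderby and count parameters unset.
	iter, err := client.ListComplete(ctx, "myadlaaccount", "", nil, nil, "", "", nil)
	if err != nil {
		return err
	}
	// NotDone/Value/Next follow the nextLink across pages transparently.
	for iter.NotDone() {
		ji := iter.Value()
		if ji.JobID != nil && ji.Name != nil {
			fmt.Printf("%s  %s  %s\n", *ji.JobID, *ji.Name, ji.State)
		}
		if err := iter.Next(); err != nil {
			return err
		}
	}
	return nil
}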

func (Client) ListPreparer

func (client Client) ListPreparer(ctx context.Context, accountName string, filter string, top *int32, skip *int32, selectParameter string, orderby string, count *bool) (*http.Request, error)

ListPreparer prepares the List request.

func (Client) ListResponder

func (client Client) ListResponder(resp *http.Response) (result InfoListResult, err error)

ListResponder handles the response to the List request. The method always closes the http.Response Body.

func (Client) ListSender

func (client Client) ListSender(req *http.Request) (*http.Response, error)

ListSender sends the List request. The method will close the http.Response Body if it receives an error.

func (Client) Update

func (client Client) Update(ctx context.Context, accountName string, jobIdentity uuid.UUID, parameters *UpdateJobParameters) (result JobUpdateFuture, err error)

Update updates the job information for the specified job ID. (Only for use internally with Scope job type.)

accountName is the Azure Data Lake Analytics account to execute job operations on. jobIdentity is job identifier. Uniquely identifies the job across all jobs submitted to the service. parameters is the parameters to update a job.

func (Client) UpdatePreparer

func (client Client) UpdatePreparer(ctx context.Context, accountName string, jobIdentity uuid.UUID, parameters *UpdateJobParameters) (*http.Request, error)

UpdatePreparer prepares the Update request.

func (Client) UpdateResponder

func (client Client) UpdateResponder(resp *http.Response) (result Information, err error)

UpdateResponder handles the response to the Update request. The method always closes the http.Response Body.

func (Client) UpdateSender

func (client Client) UpdateSender(req *http.Request) (future JobUpdateFuture, err error)

UpdateSender sends the Update request. The method will close the http.Response Body if it receives an error.

func (Client) Yield

func (client Client) Yield(ctx context.Context, accountName string, jobIdentity uuid.UUID) (result JobYieldFuture, err error)

Yield pauses the specified job and places it back in the job queue, behind other jobs of equal or higher importance, based on priority. (Only for use internally with Scope job type.)

accountName is the Azure Data Lake Analytics account to execute job operations on. jobIdentity is job identifier. Uniquely identifies the job across all jobs submitted to the service.

func (Client) YieldPreparer

func (client Client) YieldPreparer(ctx context.Context, accountName string, jobIdentity uuid.UUID) (*http.Request, error)

YieldPreparer prepares the Yield request.

func (Client) YieldResponder

func (client Client) YieldResponder(resp *http.Response) (result autorest.Response, err error)

YieldResponder handles the response to the Yield request. The method always closes the http.Response Body.

func (Client) YieldSender

func (client Client) YieldSender(req *http.Request) (future JobYieldFuture, err error)

YieldSender sends the Yield request. The method will close the http.Response Body if it receives an error.

type CompileMode

type CompileMode string

CompileMode enumerates the values for compile mode.

const (
	// Full ...
	Full CompileMode = "Full"
	// Semantic ...
	Semantic CompileMode = "Semantic"
	// SingleBox ...
	SingleBox CompileMode = "SingleBox"
)

type CreateJobParameters

type CreateJobParameters struct {
	// Type - the job type of the current job (Hive, USql, or Scope (for internal use only)). Possible values include: 'USQL', 'Hive', 'Scope'
	Type Type `json:"type,omitempty"`
	// Properties - the job specific properties.
	Properties BasicCreateJobProperties `json:"properties,omitempty"`
	// Name - the friendly name of the job to submit.
	Name *string `json:"name,omitempty"`
	// DegreeOfParallelism - the degree of parallelism to use for this job. This must be greater than 0; if set to less than 0, it will default to 1.
	DegreeOfParallelism *int32 `json:"degreeOfParallelism,omitempty"`
	// Priority - the priority value to use for the current job. Lower numbers have a higher priority. By default, a job has a priority of 1000. This must be greater than 0.
	Priority *int32 `json:"priority,omitempty"`
	// LogFilePatterns - the list of log file name patterns to find in the logFolder. '*' is the only matching character allowed. Example format: jobExecution*.log or *mylog*.txt
	LogFilePatterns *[]string `json:"logFilePatterns,omitempty"`
	// Related - the recurring job relationship information properties.
	Related *RelationshipProperties `json:"related,omitempty"`
}

CreateJobParameters the parameters used to submit a new Data Lake Analytics job.

func (*CreateJobParameters) UnmarshalJSON

func (cjp *CreateJobParameters) UnmarshalJSON(body []byte) error

UnmarshalJSON is the custom unmarshaler for CreateJobParameters struct.

type CreateJobProperties

type CreateJobProperties struct {
	// RuntimeVersion - the runtime version of the Data Lake Analytics engine to use for the specific type of job being run.
	RuntimeVersion *string `json:"runtimeVersion,omitempty"`
	// Script - the script to run. Please note that the maximum script size is 3 MB.
	Script *string `json:"script,omitempty"`
	// Type - Possible values include: 'TypeCreateJobProperties', 'TypeUSQL', 'TypeScope'
	Type TypeBasicCreateJobProperties `json:"type,omitempty"`
}

CreateJobProperties the common Data Lake Analytics job properties for job submission.

func (CreateJobProperties) AsBasicCreateJobProperties

func (cjp CreateJobProperties) AsBasicCreateJobProperties() (BasicCreateJobProperties, bool)

AsBasicCreateJobProperties is the BasicCreateJobProperties implementation for CreateJobProperties.

func (CreateJobProperties) AsCreateJobProperties

func (cjp CreateJobProperties) AsCreateJobProperties() (*CreateJobProperties, bool)

AsCreateJobProperties is the BasicCreateJobProperties implementation for CreateJobProperties.

func (CreateJobProperties) AsCreateScopeJobProperties

func (cjp CreateJobProperties) AsCreateScopeJobProperties() (*CreateScopeJobProperties, bool)

AsCreateScopeJobProperties is the BasicCreateJobProperties implementation for CreateJobProperties.

func (CreateJobProperties) AsCreateUSQLJobProperties

func (cjp CreateJobProperties) AsCreateUSQLJobProperties() (*CreateUSQLJobProperties, bool)

AsCreateUSQLJobProperties is the BasicCreateJobProperties implementation for CreateJobProperties.

func (CreateJobProperties) MarshalJSON

func (cjp CreateJobProperties) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for CreateJobProperties.

type CreateScopeJobParameters

type CreateScopeJobParameters struct {
	// Type - the job type of the current job (Hive, USql, or Scope (for internal use only)). Possible values include: 'USQL', 'Hive', 'Scope'
	Type Type `json:"type,omitempty"`
	// Properties - the job specific properties.
	Properties BasicCreateJobProperties `json:"properties,omitempty"`
	// Name - the friendly name of the job to submit.
	Name *string `json:"name,omitempty"`
	// DegreeOfParallelism - the degree of parallelism to use for this job. This must be greater than 0; if set to less than 0, it will default to 1.
	DegreeOfParallelism *int32 `json:"degreeOfParallelism,omitempty"`
	// Priority - the priority value to use for the current job. Lower numbers have a higher priority. By default, a job has a priority of 1000. This must be greater than 0.
	Priority *int32 `json:"priority,omitempty"`
	// LogFilePatterns - the list of log file name patterns to find in the logFolder. '*' is the only matching character allowed. Example format: jobExecution*.log or *mylog*.txt
	LogFilePatterns *[]string `json:"logFilePatterns,omitempty"`
	// Related - the recurring job relationship information properties.
	Related *RelationshipProperties `json:"related,omitempty"`
	// Tags - the key-value pairs used to add additional metadata to the job information. (Only for use internally with Scope job type.)
	Tags *map[string]*string `json:"tags,omitempty"`
}

CreateScopeJobParameters the parameters used to submit a new Data Lake Analytics Scope job. (Only for use internally with Scope job type.)

func (*CreateScopeJobParameters) UnmarshalJSON

func (csjp *CreateScopeJobParameters) UnmarshalJSON(body []byte) error

UnmarshalJSON is the custom unmarshaler for CreateScopeJobParameters struct.

type CreateScopeJobProperties

type CreateScopeJobProperties struct {
	// RuntimeVersion - the runtime version of the Data Lake Analytics engine to use for the specific type of job being run.
	RuntimeVersion *string `json:"runtimeVersion,omitempty"`
	// Script - the script to run. Please note that the maximum script size is 3 MB.
	Script *string `json:"script,omitempty"`
	// Type - Possible values include: 'TypeCreateJobProperties', 'TypeUSQL', 'TypeScope'
	Type TypeBasicCreateJobProperties `json:"type,omitempty"`
	// Resources - the list of resources that are required by the job.
	Resources *[]ScopeJobResource `json:"resources,omitempty"`
	// Notifier - the list of email addresses, separated by semi-colons, to notify when the job reaches a terminal state.
	Notifier *string `json:"notifier,omitempty"`
}

CreateScopeJobProperties Scope job properties used when submitting Scope jobs.

func (CreateScopeJobProperties) AsBasicCreateJobProperties

func (csjp CreateScopeJobProperties) AsBasicCreateJobProperties() (BasicCreateJobProperties, bool)

AsBasicCreateJobProperties is the BasicCreateJobProperties implementation for CreateScopeJobProperties.

func (CreateScopeJobProperties) AsCreateJobProperties

func (csjp CreateScopeJobProperties) AsCreateJobProperties() (*CreateJobProperties, bool)

AsCreateJobProperties is the BasicCreateJobProperties implementation for CreateScopeJobProperties.

func (CreateScopeJobProperties) AsCreateScopeJobProperties

func (csjp CreateScopeJobProperties) AsCreateScopeJobProperties() (*CreateScopeJobProperties, bool)

AsCreateScopeJobProperties is the BasicCreateJobProperties implementation for CreateScopeJobProperties.

func (CreateScopeJobProperties) AsCreateUSQLJobProperties

func (csjp CreateScopeJobProperties) AsCreateUSQLJobProperties() (*CreateUSQLJobProperties, bool)

AsCreateUSQLJobProperties is the BasicCreateJobProperties implementation for CreateScopeJobProperties.

func (CreateScopeJobProperties) MarshalJSON

func (csjp CreateScopeJobProperties) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for CreateScopeJobProperties.

type CreateUSQLJobProperties

type CreateUSQLJobProperties struct {
	// RuntimeVersion - the runtime version of the Data Lake Analytics engine to use for the specific type of job being run.
	RuntimeVersion *string `json:"runtimeVersion,omitempty"`
	// Script - the script to run. Please note that the maximum script size is 3 MB.
	Script *string `json:"script,omitempty"`
	// Type - Possible values include: 'TypeCreateJobProperties', 'TypeUSQL', 'TypeScope'
	Type TypeBasicCreateJobProperties `json:"type,omitempty"`
	// CompileMode - the specific compilation mode for the job used during execution. If this is not specified during submission, the server will determine the optimal compilation mode. Possible values include: 'Semantic', 'Full', 'SingleBox'
	CompileMode CompileMode `json:"compileMode,omitempty"`
}

CreateUSQLJobProperties U-SQL job properties used when submitting U-SQL jobs.

func (CreateUSQLJobProperties) AsBasicCreateJobProperties

func (cusjp CreateUSQLJobProperties) AsBasicCreateJobProperties() (BasicCreateJobProperties, bool)

AsBasicCreateJobProperties is the BasicCreateJobProperties implementation for CreateUSQLJobProperties.

func (CreateUSQLJobProperties) AsCreateJobProperties

func (cusjp CreateUSQLJobProperties) AsCreateJobProperties() (*CreateJobProperties, bool)

AsCreateJobProperties is the BasicCreateJobProperties implementation for CreateUSQLJobProperties.

func (CreateUSQLJobProperties) AsCreateScopeJobProperties

func (cusjp CreateUSQLJobProperties) AsCreateScopeJobProperties() (*CreateScopeJobProperties, bool)

AsCreateScopeJobProperties is the BasicCreateJobProperties implementation for CreateUSQLJobProperties.

func (CreateUSQLJobProperties) AsCreateUSQLJobProperties

func (cusjp CreateUSQLJobProperties) AsCreateUSQLJobProperties() (*CreateUSQLJobProperties, bool)

AsCreateUSQLJobProperties is the BasicCreateJobProperties implementation for CreateUSQLJobProperties.

func (CreateUSQLJobProperties) MarshalJSON

func (cusjp CreateUSQLJobProperties) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for CreateUSQLJobProperties.

type DataPath

type DataPath struct {
	autorest.Response `json:"-"`
	// JobID - the id of the job this data is for.
	JobID *uuid.UUID `json:"jobId,omitempty"`
	// Command - the command that this job data relates to.
	Command *string `json:"command,omitempty"`
	// Paths - the list of paths to all of the job data.
	Paths *[]string `json:"paths,omitempty"`
}

DataPath a Data Lake Analytics job data path item.

type Diagnostics

type Diagnostics struct {
	// ColumnNumber - the column where the error occurred.
	ColumnNumber *int32 `json:"columnNumber,omitempty"`
	// End - the ending index of the error.
	End *int32 `json:"end,omitempty"`
	// LineNumber - the line number the error occurred on.
	LineNumber *int32 `json:"lineNumber,omitempty"`
	// Message - the error message.
	Message *string `json:"message,omitempty"`
	// Severity - the severity of the error. Possible values include: 'Warning', 'Error', 'Info', 'SevereWarning', 'Deprecated', 'UserWarning'
	Severity SeverityTypes `json:"severity,omitempty"`
	// Start - the starting index of the error.
	Start *int32 `json:"start,omitempty"`
}

Diagnostics error diagnostic information for failed jobs.

type ErrorDetails

type ErrorDetails struct {
	// Description - the error message description
	Description *string `json:"description,omitempty"`
	// Details - the details of the error message.
	Details *string `json:"details,omitempty"`
	// EndOffset - the end offset in the job where the error was found.
	EndOffset *int32 `json:"endOffset,omitempty"`
	// ErrorID - the specific identifier for the type of error encountered in the job.
	ErrorID *string `json:"errorId,omitempty"`
	// FilePath - the path to any supplemental error files, if any.
	FilePath *string `json:"filePath,omitempty"`
	// HelpLink - the link to MSDN or Azure help for this type of error, if any.
	HelpLink *string `json:"helpLink,omitempty"`
	// InternalDiagnostics - the internal diagnostic stack trace; it is returned only if the user requesting the job error details has sufficient permissions, otherwise it is empty.
	InternalDiagnostics *string `json:"internalDiagnostics,omitempty"`
	// LineNumber - the specific line number in the job where the error occurred.
	LineNumber *int32 `json:"lineNumber,omitempty"`
	// Message - the user friendly error message for the failure.
	Message *string `json:"message,omitempty"`
	// Resolution - the recommended resolution for the failure, if any.
	Resolution *string `json:"resolution,omitempty"`
	// InnerError - the inner error of this specific job error message, if any.
	InnerError *InnerError `json:"innerError,omitempty"`
	// Severity - the severity level of the failure. Possible values include: 'Warning', 'Error', 'Info', 'SevereWarning', 'Deprecated', 'UserWarning'
	Severity SeverityTypes `json:"severity,omitempty"`
	// Source - the ultimate source of the failure (usually either SYSTEM or USER).
	Source *string `json:"source,omitempty"`
	// StartOffset - the start offset in the job where the error was found
	StartOffset *int32 `json:"startOffset,omitempty"`
}

ErrorDetails the Data Lake Analytics job error details.

type HiveJobProperties

type HiveJobProperties struct {
	// RuntimeVersion - the runtime version of the Data Lake Analytics engine to use for the specific type of job being run.
	RuntimeVersion *string `json:"runtimeVersion,omitempty"`
	// Script - the script to run. Please note that the maximum script size is 3 MB.
	Script *string `json:"script,omitempty"`
	// Type - Possible values include: 'TypeBasicPropertiesTypeJobProperties', 'TypeBasicPropertiesTypeUSQL', 'TypeBasicPropertiesTypeScope', 'TypeBasicPropertiesTypeHive'
	Type TypeBasicProperties `json:"type,omitempty"`
	// LogsLocation - the Hive logs location
	LogsLocation *string `json:"logsLocation,omitempty"`
	// OutputLocation - the location of Hive job output files (both execution output and results)
	OutputLocation *string `json:"outputLocation,omitempty"`
	// StatementCount - the number of statements that will be run based on the script
	StatementCount *int32 `json:"statementCount,omitempty"`
	// ExecutedStatementCount - the number of statements that have been run based on the script
	ExecutedStatementCount *int32 `json:"executedStatementCount,omitempty"`
}

HiveJobProperties Hive job properties used when retrieving Hive jobs.

func (HiveJobProperties) AsBasicProperties

func (hjp HiveJobProperties) AsBasicProperties() (BasicProperties, bool)

AsBasicProperties is the BasicProperties implementation for HiveJobProperties.

func (HiveJobProperties) AsHiveJobProperties

func (hjp HiveJobProperties) AsHiveJobProperties() (*HiveJobProperties, bool)

AsHiveJobProperties is the BasicProperties implementation for HiveJobProperties.

func (HiveJobProperties) AsProperties

func (hjp HiveJobProperties) AsProperties() (*Properties, bool)

AsProperties is the BasicProperties implementation for HiveJobProperties.

func (HiveJobProperties) AsScopeJobProperties

func (hjp HiveJobProperties) AsScopeJobProperties() (*ScopeJobProperties, bool)

AsScopeJobProperties is the BasicProperties implementation for HiveJobProperties.

func (HiveJobProperties) AsUSQLJobProperties

func (hjp HiveJobProperties) AsUSQLJobProperties() (*USQLJobProperties, bool)

AsUSQLJobProperties is the BasicProperties implementation for HiveJobProperties.

func (HiveJobProperties) MarshalJSON

func (hjp HiveJobProperties) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for HiveJobProperties.

type InfoListResult

type InfoListResult struct {
	autorest.Response `json:"-"`
	// Value - the list of JobInfo items.
	Value *[]InformationBasic `json:"value,omitempty"`
	// NextLink - the link (url) to the next page of results.
	NextLink *string `json:"nextLink,omitempty"`
}

InfoListResult list of JobInfo items.

func (InfoListResult) IsEmpty

func (ilr InfoListResult) IsEmpty() bool

IsEmpty returns true if the ListResult contains no values.

type InfoListResultIterator

type InfoListResultIterator struct {
	// contains filtered or unexported fields
}

InfoListResultIterator provides access to a complete listing of InformationBasic values.

func (*InfoListResultIterator) Next

func (iter *InfoListResultIterator) Next() error

Next advances to the next value. If there was an error making the request the iterator does not advance and the error is returned.

func (InfoListResultIterator) NotDone

func (iter InfoListResultIterator) NotDone() bool

NotDone returns true if the enumeration should be started or is not yet complete.

func (InfoListResultIterator) Response

func (iter InfoListResultIterator) Response() InfoListResult

Response returns the raw server response from the last page request.

func (InfoListResultIterator) Value

func (iter InfoListResultIterator) Value() InformationBasic

Value returns the current value or a zero-initialized value if the iterator has advanced beyond the end of the collection.

type InfoListResultPage

type InfoListResultPage struct {
	// contains filtered or unexported fields
}

InfoListResultPage contains a page of InformationBasic values.

func (*InfoListResultPage) Next

func (page *InfoListResultPage) Next() error

Next advances to the next page of values. If there was an error making the request the page does not advance and the error is returned.

func (InfoListResultPage) NotDone

func (page InfoListResultPage) NotDone() bool

NotDone returns true if the page enumeration should be started or is not yet complete.

func (InfoListResultPage) Response

func (page InfoListResultPage) Response() InfoListResult

Response returns the raw server response from the last page request.

func (InfoListResultPage) Values

func (page InfoListResultPage) Values() []InformationBasic

Values returns the slice of values for the current page or nil if there are no values.

type Information

type Information struct {
	autorest.Response `json:"-"`
	// JobID - the job's unique identifier (a GUID).
	JobID *uuid.UUID `json:"jobId,omitempty"`
	// Name - the friendly name of the job.
	Name *string `json:"name,omitempty"`
	// Type - the job type of the current job (Hive, USql, or Scope (for internal use only)). Possible values include: 'USQL', 'Hive', 'Scope'
	Type Type `json:"type,omitempty"`
	// Submitter - the user or account that submitted the job.
	Submitter *string `json:"submitter,omitempty"`
	// DegreeOfParallelism - the degree of parallelism used for this job. This must be greater than 0; if set to less than 0, it will default to 1.
	DegreeOfParallelism *int32 `json:"degreeOfParallelism,omitempty"`
	// Priority - the priority value for the current job. Lower numbers have a higher priority. By default, a job has a priority of 1000. This must be greater than 0.
	Priority *int32 `json:"priority,omitempty"`
	// SubmitTime - the time the job was submitted to the service.
	SubmitTime *date.Time `json:"submitTime,omitempty"`
	// StartTime - the start time of the job.
	StartTime *date.Time `json:"startTime,omitempty"`
	// EndTime - the completion time of the job.
	EndTime *date.Time `json:"endTime,omitempty"`
	// State - the job state. When the job is in the Ended state, refer to Result and ErrorMessage for details. Possible values include: 'StateAccepted', 'StateCompiling', 'StateEnded', 'StateNew', 'StateQueued', 'StateRunning', 'StateScheduling', 'StateStarting', 'StatePaused', 'StateWaitingForCapacity'
	State State `json:"state,omitempty"`
	// Result - the result of job execution or the current result of the running job. Possible values include: 'None', 'Succeeded', 'Cancelled', 'Failed'
	Result Result `json:"result,omitempty"`
	// LogFolder - the log folder path to use in the following format: adl://<accountName>.azuredatalakestore.net/system/jobservice/jobs/Usql/2016/03/13/17/18/5fe51957-93bc-4de0-8ddc-c5a4753b068b/logs/.
	LogFolder *string `json:"logFolder,omitempty"`
	// LogFilePatterns - the list of log file name patterns to find in the logFolder. '*' is the only matching character allowed. Example format: jobExecution*.log or *mylog*.txt
	LogFilePatterns *[]string `json:"logFilePatterns,omitempty"`
	// Related - the recurring job relationship information properties.
	Related *RelationshipProperties `json:"related,omitempty"`
	// Tags - the key-value pairs used to add additional metadata to the job information. (Only for use internally with Scope job type.)
	Tags *map[string]*string `json:"tags,omitempty"`
	// ErrorMessage - the error message details for the job, if the job failed.
	ErrorMessage *[]ErrorDetails `json:"errorMessage,omitempty"`
	// StateAuditRecords - the job state audit records, indicating when various operations have been performed on this job.
	StateAuditRecords *[]StateAuditRecord `json:"stateAuditRecords,omitempty"`
	// Properties - the job specific properties.
	Properties BasicProperties `json:"properties,omitempty"`
}

Information the extended Data Lake Analytics job information properties returned when retrieving a specific job.

func (*Information) UnmarshalJSON

func (i *Information) UnmarshalJSON(body []byte) error

UnmarshalJSON is the custom unmarshaler for Information struct.

type InformationBasic

type InformationBasic struct {
	// JobID - the job's unique identifier (a GUID).
	JobID *uuid.UUID `json:"jobId,omitempty"`
	// Name - the friendly name of the job.
	Name *string `json:"name,omitempty"`
	// Type - the job type of the current job (Hive, USql, or Scope (for internal use only)). Possible values include: 'USQL', 'Hive', 'Scope'
	Type Type `json:"type,omitempty"`
	// Submitter - the user or account that submitted the job.
	Submitter *string `json:"submitter,omitempty"`
	// DegreeOfParallelism - the degree of parallelism used for this job. This must be greater than 0; if set to less than 0, it will default to 1.
	DegreeOfParallelism *int32 `json:"degreeOfParallelism,omitempty"`
	// Priority - the priority value for the current job. Lower numbers have a higher priority. By default, a job has a priority of 1000. This must be greater than 0.
	Priority *int32 `json:"priority,omitempty"`
	// SubmitTime - the time the job was submitted to the service.
	SubmitTime *date.Time `json:"submitTime,omitempty"`
	// StartTime - the start time of the job.
	StartTime *date.Time `json:"startTime,omitempty"`
	// EndTime - the completion time of the job.
	EndTime *date.Time `json:"endTime,omitempty"`
	// State - the job state. When the job is in the Ended state, refer to Result and ErrorMessage for details. Possible values include: 'StateAccepted', 'StateCompiling', 'StateEnded', 'StateNew', 'StateQueued', 'StateRunning', 'StateScheduling', 'StateStarting', 'StatePaused', 'StateWaitingForCapacity'
	State State `json:"state,omitempty"`
	// Result - the result of job execution or the current result of the running job. Possible values include: 'None', 'Succeeded', 'Cancelled', 'Failed'
	Result Result `json:"result,omitempty"`
	// LogFolder - the log folder path to use in the following format: adl://<accountName>.azuredatalakestore.net/system/jobservice/jobs/Usql/2016/03/13/17/18/5fe51957-93bc-4de0-8ddc-c5a4753b068b/logs/.
	LogFolder *string `json:"logFolder,omitempty"`
	// LogFilePatterns - the list of log file name patterns to find in the logFolder. '*' is the only matching character allowed. Example format: jobExecution*.log or *mylog*.txt
	LogFilePatterns *[]string `json:"logFilePatterns,omitempty"`
	// Related - the recurring job relationship information properties.
	Related *RelationshipProperties `json:"related,omitempty"`
	// Tags - the key-value pairs used to add additional metadata to the job information. (Only for use internally with Scope job type.)
	Tags *map[string]*string `json:"tags,omitempty"`
}

InformationBasic the common Data Lake Analytics job information properties.

type InnerError

type InnerError struct {
	// DiagnosticCode - the diagnostic error code.
	DiagnosticCode *int32 `json:"diagnosticCode,omitempty"`
	// Severity - the severity level of the failure. Possible values include: 'Warning', 'Error', 'Info', 'SevereWarning', 'Deprecated', 'UserWarning'
	Severity SeverityTypes `json:"severity,omitempty"`
	// Details - the details of the error message.
	Details *string `json:"details,omitempty"`
	// Component - the component that failed.
	Component *string `json:"component,omitempty"`
	// ErrorID - the specific identifier for the type of error encountered in the job.
	ErrorID *string `json:"errorId,omitempty"`
	// HelpLink - the link to MSDN or Azure help for this type of error, if any.
	HelpLink *string `json:"helpLink,omitempty"`
	// InternalDiagnostics - the internal diagnostic stack trace; it is returned only if the user requesting the job error details has sufficient permissions, otherwise it is empty.
	InternalDiagnostics *string `json:"internalDiagnostics,omitempty"`
	// Message - the user friendly error message for the failure.
	Message *string `json:"message,omitempty"`
	// Resolution - the recommended resolution for the failure, if any.
	Resolution *string `json:"resolution,omitempty"`
	// Source - the ultimate source of the failure (usually either SYSTEM or USER).
	Source *string `json:"source,omitempty"`
	// Description - the error message description
	Description *string `json:"description,omitempty"`
	// InnerError - the inner error of this specific job error message, if any.
	InnerError *InnerError `json:"innerError,omitempty"`
}

InnerError the Data Lake Analytics job error details.

type JobCancelFuture

type JobCancelFuture struct {
	azure.Future
	// contains filtered or unexported fields
}

JobCancelFuture an abstraction for monitoring and retrieving the results of a long-running operation.

func (JobCancelFuture) Result

func (future JobCancelFuture) Result(client Client) (ar autorest.Response, err error)

Result returns the result of the asynchronous operation. If the operation has not completed it will return an error.

type JobUpdateFuture

type JobUpdateFuture struct {
	azure.Future
	// contains filtered or unexported fields
}

JobUpdateFuture an abstraction for monitoring and retrieving the results of a long-running operation.

func (JobUpdateFuture) Result

func (future JobUpdateFuture) Result(client Client) (i Information, err error)

Result returns the result of the asynchronous operation. If the operation has not completed it will return an error.

type JobYieldFuture

type JobYieldFuture struct {
	azure.Future
	// contains filtered or unexported fields
}

JobYieldFuture an abstraction for monitoring and retrieving the results of a long-running operation.

func (JobYieldFuture) Result

func (future JobYieldFuture) Result(client Client) (ar autorest.Response, err error)

Result returns the result of the asynchronous operation. If the operation has not completed it will return an error.

type PipelineClient

type PipelineClient struct {
	BaseClient
}

PipelineClient is the Azure Data Lake Analytics job client.

func NewPipelineClient

func NewPipelineClient() PipelineClient

NewPipelineClient creates an instance of the PipelineClient client.

func (PipelineClient) Get

func (client PipelineClient) Get(ctx context.Context, accountName string, pipelineIdentity uuid.UUID, startDateTime *date.Time, endDateTime *date.Time) (result PipelineInformation, err error)

Get gets the Pipeline information for the specified pipeline ID.

accountName is the Azure Data Lake Analytics account to execute job operations on. pipelineIdentity is pipeline ID. startDateTime is the start date for when to get the pipeline and aggregate its data. The startDateTime and endDateTime can be no more than 30 days apart. endDateTime is the end date for when to get the pipeline and aggregate its data. The startDateTime and endDateTime can be no more than 30 days apart.

func (PipelineClient) GetPreparer

func (client PipelineClient) GetPreparer(ctx context.Context, accountName string, pipelineIdentity uuid.UUID, startDateTime *date.Time, endDateTime *date.Time) (*http.Request, error)

GetPreparer prepares the Get request.

func (PipelineClient) GetResponder

func (client PipelineClient) GetResponder(resp *http.Response) (result PipelineInformation, err error)

GetResponder handles the response to the Get request. The method always closes the http.Response Body.

func (PipelineClient) GetSender

func (client PipelineClient) GetSender(req *http.Request) (*http.Response, error)

GetSender sends the Get request. The method will close the http.Response Body if it receives an error.

func (PipelineClient) List

func (client PipelineClient) List(ctx context.Context, accountName string, startDateTime *date.Time, endDateTime *date.Time) (result PipelineInformationListResultPage, err error)

List lists all pipelines.

accountName is the Azure Data Lake Analytics account to execute job operations on. startDateTime is the start date for when to get the list of pipelines. The startDateTime and endDateTime can be no more than 30 days apart. endDateTime is the end date for when to get the list of pipelines. The startDateTime and endDateTime can be no more than 30 days apart.
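
A minimal sketch that lists pipeline aggregates for a recent window using the page form, assuming imports of context, fmt, and time plus the go-autorest date package; the account name and seven-day window are illustrative.

func printPipelines(ctx context.Context, client job.PipelineClient) error {
	// List aggregates for the last seven days; the window may span at most 30 days.
	end := date.Time{Time: time.Now().UTC()}
	start := date.Time{Time: end.AddDate(0, 0, -7)}
	page, err := client.List(ctx, "myadlaaccount", &start, &end)
	if err != nil {
		return err
	}
	// Values returns the current page of results; Next fetches the following one.
	for page.NotDone() {
		for _, p := range page.Values() {
			if p.PipelineName != nil && p.NumJobsSucceeded != nil {
				fmt.Printf("%s: %d jobs succeeded\n", *p.PipelineName, *p.NumJobsSucceeded)
			}
		}
		if err := page.Next(); err != nil {
			return err
		}
	}
	return nil
}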

func (PipelineClient) ListComplete

func (client PipelineClient) ListComplete(ctx context.Context, accountName string, startDateTime *date.Time, endDateTime *date.Time) (result PipelineInformationListResultIterator, err error)

ListComplete enumerates all values, automatically crossing page boundaries as required.

func (PipelineClient) ListPreparer

func (client PipelineClient) ListPreparer(ctx context.Context, accountName string, startDateTime *date.Time, endDateTime *date.Time) (*http.Request, error)

ListPreparer prepares the List request.

func (PipelineClient) ListResponder

func (client PipelineClient) ListResponder(resp *http.Response) (result PipelineInformationListResult, err error)

ListResponder handles the response to the List request. The method always closes the http.Response Body.

func (PipelineClient) ListSender

func (client PipelineClient) ListSender(req *http.Request) (*http.Response, error)

ListSender sends the List request. The method will close the http.Response Body if it receives an error.

type PipelineInformation

type PipelineInformation struct {
	autorest.Response `json:"-"`
	// PipelineID - the job relationship pipeline identifier (a GUID).
	PipelineID *uuid.UUID `json:"pipelineId,omitempty"`
	// PipelineName - the friendly name of the job relationship pipeline, which does not need to be unique.
	PipelineName *string `json:"pipelineName,omitempty"`
	// PipelineURI - the unique pipeline URI, which links to the originating service for this pipeline.
	PipelineURI *string `json:"pipelineUri,omitempty"`
	// NumJobsFailed - the number of jobs in this pipeline that have failed.
	NumJobsFailed *int32 `json:"numJobsFailed,omitempty"`
	// NumJobsCanceled - the number of jobs in this pipeline that have been canceled.
	NumJobsCanceled *int32 `json:"numJobsCanceled,omitempty"`
	// NumJobsSucceeded - the number of jobs in this pipeline that have succeeded.
	NumJobsSucceeded *int32 `json:"numJobsSucceeded,omitempty"`
	// AuHoursFailed - the number of job execution hours that resulted in failed jobs.
	AuHoursFailed *float64 `json:"auHoursFailed,omitempty"`
	// AuHoursCanceled - the number of job execution hours that resulted in canceled jobs.
	AuHoursCanceled *float64 `json:"auHoursCanceled,omitempty"`
	// AuHoursSucceeded - the number of job execution hours that resulted in successful jobs.
	AuHoursSucceeded *float64 `json:"auHoursSucceeded,omitempty"`
	// LastSubmitTime - the last time a job in this pipeline was submitted.
	LastSubmitTime *date.Time `json:"lastSubmitTime,omitempty"`
	// Runs - the list of run identifiers representing each run of this pipeline.
	Runs *[]PipelineRunInformation `json:"runs,omitempty"`
	// Recurrences - the list of recurrence identifiers representing each run of this pipeline.
	Recurrences *[]uuid.UUID `json:"recurrences,omitempty"`
}

PipelineInformation job Pipeline Information, showing the relationship of jobs and recurrences of those jobs in a pipeline.

type PipelineInformationListResult

type PipelineInformationListResult struct {
	autorest.Response `json:"-"`
	// Value - the list of job pipeline information items.
	Value *[]PipelineInformation `json:"value,omitempty"`
	// NextLink - the link (url) to the next page of results.
	NextLink *string `json:"nextLink,omitempty"`
}

PipelineInformationListResult list of job pipeline information items.

func (PipelineInformationListResult) IsEmpty

func (pilr PipelineInformationListResult) IsEmpty() bool

IsEmpty returns true if the ListResult contains no values.

type PipelineInformationListResultIterator

type PipelineInformationListResultIterator struct {
	// contains filtered or unexported fields
}

PipelineInformationListResultIterator provides access to a complete listing of PipelineInformation values.

func (*PipelineInformationListResultIterator) Next

func (iter *PipelineInformationListResultIterator) Next() error

Next advances to the next value. If there was an error making the request the iterator does not advance and the error is returned.

func (PipelineInformationListResultIterator) NotDone

func (iter PipelineInformationListResultIterator) NotDone() bool

NotDone returns true if the enumeration should be started or is not yet complete.

func (PipelineInformationListResultIterator) Response

func (iter PipelineInformationListResultIterator) Response() PipelineInformationListResult

Response returns the raw server response from the last page request.

func (PipelineInformationListResultIterator) Value

func (iter PipelineInformationListResultIterator) Value() PipelineInformation

Value returns the current value or a zero-initialized value if the iterator has advanced beyond the end of the collection.

type PipelineInformationListResultPage

type PipelineInformationListResultPage struct {
	// contains filtered or unexported fields
}

PipelineInformationListResultPage contains a page of PipelineInformation values.

func (*PipelineInformationListResultPage) Next

func (page *PipelineInformationListResultPage) Next() error

Next advances to the next page of values. If there was an error making the request the page does not advance and the error is returned.

func (PipelineInformationListResultPage) NotDone

func (page PipelineInformationListResultPage) NotDone() bool

NotDone returns true if the page enumeration should be started or is not yet complete.

func (PipelineInformationListResultPage) Response

func (page PipelineInformationListResultPage) Response() PipelineInformationListResult

Response returns the raw server response from the last page request.

func (PipelineInformationListResultPage) Values

func (page PipelineInformationListResultPage) Values() []PipelineInformation

Values returns the slice of values for the current page or nil if there are no values.

type PipelineRunInformation

type PipelineRunInformation struct {
	// RunID - the run identifier of an instance of pipeline executions (a GUID).
	RunID *uuid.UUID `json:"runId,omitempty"`
	// LastSubmitTime - the time this instance was last submitted.
	LastSubmitTime *date.Time `json:"lastSubmitTime,omitempty"`
}

PipelineRunInformation run info for a specific job pipeline.

type Properties

type Properties struct {
	// RuntimeVersion - the runtime version of the Data Lake Analytics engine to use for the specific type of job being run.
	RuntimeVersion *string `json:"runtimeVersion,omitempty"`
	// Script - the script to run. Please note that the maximum script size is 3 MB.
	Script *string `json:"script,omitempty"`
	// Type - Possible values include: 'TypeBasicPropertiesTypeJobProperties', 'TypeBasicPropertiesTypeUSQL', 'TypeBasicPropertiesTypeScope', 'TypeBasicPropertiesTypeHive'
	Type TypeBasicProperties `json:"type,omitempty"`
}

Properties the common Data Lake Analytics job properties.

func (Properties) AsBasicProperties

func (p Properties) AsBasicProperties() (BasicProperties, bool)

AsBasicProperties is the BasicProperties implementation for Properties.

func (Properties) AsHiveJobProperties

func (p Properties) AsHiveJobProperties() (*HiveJobProperties, bool)

AsHiveJobProperties is the BasicProperties implementation for Properties.

func (Properties) AsProperties

func (p Properties) AsProperties() (*Properties, bool)

AsProperties is the BasicProperties implementation for Properties.

func (Properties) AsScopeJobProperties

func (p Properties) AsScopeJobProperties() (*ScopeJobProperties, bool)

AsScopeJobProperties is the BasicProperties implementation for Properties.

func (Properties) AsUSQLJobProperties

func (p Properties) AsUSQLJobProperties() (*USQLJobProperties, bool)

AsUSQLJobProperties is the BasicProperties implementation for Properties.

func (Properties) MarshalJSON

func (p Properties) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for Properties.

type RecurrenceClient

type RecurrenceClient struct {
	BaseClient
}

RecurrenceClient is the Azure Data Lake Analytics job client.

func NewRecurrenceClient

func NewRecurrenceClient() RecurrenceClient

NewRecurrenceClient creates an instance of the RecurrenceClient client.

func (RecurrenceClient) Get

func (client RecurrenceClient) Get(ctx context.Context, accountName string, recurrenceIdentity uuid.UUID, startDateTime *date.Time, endDateTime *date.Time) (result RecurrenceInformation, err error)

Get gets the recurrence information for the specified recurrence ID.

accountName is the Azure Data Lake Analytics account to execute job operations on. recurrenceIdentity is recurrence ID. startDateTime is the start date for when to get the recurrence and aggregate its data. The startDateTime and endDateTime can be no more than 30 days apart. endDateTime is the end date for when to get recurrence and aggregate its data. The startDateTime and endDateTime can be no more than 30 days apart.
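
A minimal sketch that summarizes one recurrence over the last 30 days, with the same assumptions as the pipeline sketch plus the uuid package; the account name is illustrative.

func recurrenceSummary(ctx context.Context, client job.RecurrenceClient, recurrenceID uuid.UUID) error {
	// Aggregate the recurrence over the last 30 days, the widest allowed window.
	end := date.Time{Time: time.Now().UTC()}
	start := date.Time{Time: end.AddDate(0, 0, -30)}
	info, err := client.Get(ctx, "myadlaaccount", recurrenceID, &start, &end)
	if err != nil {
		return err
	}
	if info.RecurrenceName != nil && info.AuHoursSucceeded != nil {
		fmt.Printf("%s: %.1f AU hours spent in successful jobs\n", *info.RecurrenceName, *info.AuHoursSucceeded)
	}
	return nil
}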

func (RecurrenceClient) GetPreparer

func (client RecurrenceClient) GetPreparer(ctx context.Context, accountName string, recurrenceIdentity uuid.UUID, startDateTime *date.Time, endDateTime *date.Time) (*http.Request, error)

GetPreparer prepares the Get request.

func (RecurrenceClient) GetResponder

func (client RecurrenceClient) GetResponder(resp *http.Response) (result RecurrenceInformation, err error)

GetResponder handles the response to the Get request. The method always closes the http.Response Body.

func (RecurrenceClient) GetSender

func (client RecurrenceClient) GetSender(req *http.Request) (*http.Response, error)

GetSender sends the Get request. The method will close the http.Response Body if it receives an error.

func (RecurrenceClient) List

func (client RecurrenceClient) List(ctx context.Context, accountName string, startDateTime *date.Time, endDateTime *date.Time) (result RecurrenceInformationListResultPage, err error)

List lists all recurrences.

accountName is the Azure Data Lake Analytics account to execute job operations on. startDateTime is the start date for when to get the list of recurrences. The startDateTime and endDateTime can be no more than 30 days apart. endDateTime is the end date for when to get the list of recurrences. The startDateTime and endDateTime can be no more than 30 days apart.

func (RecurrenceClient) ListComplete

func (client RecurrenceClient) ListComplete(ctx context.Context, accountName string, startDateTime *date.Time, endDateTime *date.Time) (result RecurrenceInformationListResultIterator, err error)

ListComplete enumerates all values, automatically crossing page boundaries as required.

func (RecurrenceClient) ListPreparer

func (client RecurrenceClient) ListPreparer(ctx context.Context, accountName string, startDateTime *date.Time, endDateTime *date.Time) (*http.Request, error)

ListPreparer prepares the List request.

func (RecurrenceClient) ListResponder

func (client RecurrenceClient) ListResponder(resp *http.Response) (result RecurrenceInformationListResult, err error)

ListResponder handles the response to the List request. The method always closes the http.Response Body.

func (RecurrenceClient) ListSender

func (client RecurrenceClient) ListSender(req *http.Request) (*http.Response, error)

ListSender sends the List request. The method will close the http.Response Body if it receives an error.

type RecurrenceInformation

type RecurrenceInformation struct {
	autorest.Response `json:"-"`
	// RecurrenceID - the recurrence identifier (a GUID), unique per activity/script, regardless of iterations. It is used to link different occurrences of the same job together.
	RecurrenceID *uuid.UUID `json:"recurrenceId,omitempty"`
	// RecurrenceName - the recurrence name, user friendly name for the correlation between jobs.
	RecurrenceName *string `json:"recurrenceName,omitempty"`
	// NumJobsFailed - the number of jobs in this recurrence that have failed.
	NumJobsFailed *int32 `json:"numJobsFailed,omitempty"`
	// NumJobsCanceled - the number of jobs in this recurrence that have been canceled.
	NumJobsCanceled *int32 `json:"numJobsCanceled,omitempty"`
	// NumJobsSucceeded - the number of jobs in this recurrence that have succeeded.
	NumJobsSucceeded *int32 `json:"numJobsSucceeded,omitempty"`
	// AuHoursFailed - the number of job execution hours that resulted in failed jobs.
	AuHoursFailed *float64 `json:"auHoursFailed,omitempty"`
	// AuHoursCanceled - the number of job execution hours that resulted in canceled jobs.
	AuHoursCanceled *float64 `json:"auHoursCanceled,omitempty"`
	// AuHoursSucceeded - the number of job execution hours that resulted in successful jobs.
	AuHoursSucceeded *float64 `json:"auHoursSucceeded,omitempty"`
	// LastSubmitTime - the last time a job in this recurrence was submitted.
	LastSubmitTime *date.Time `json:"lastSubmitTime,omitempty"`
}

RecurrenceInformation recurrence job information for a specific recurrence.

type RecurrenceInformationListResult

type RecurrenceInformationListResult struct {
	autorest.Response `json:"-"`
	// Value - the list of job recurrence information items.
	Value *[]RecurrenceInformation `json:"value,omitempty"`
	// NextLink - the link (url) to the next page of results.
	NextLink *string `json:"nextLink,omitempty"`
}

RecurrenceInformationListResult list of job recurrence information items.

func (RecurrenceInformationListResult) IsEmpty

func (rilr RecurrenceInformationListResult) IsEmpty() bool

IsEmpty returns true if the ListResult contains no values.

type RecurrenceInformationListResultIterator

type RecurrenceInformationListResultIterator struct {
	// contains filtered or unexported fields
}

RecurrenceInformationListResultIterator provides access to a complete listing of RecurrenceInformation values.

func (*RecurrenceInformationListResultIterator) Next

func (iter *RecurrenceInformationListResultIterator) Next() error

Next advances to the next value. If there was an error making the request the iterator does not advance and the error is returned.

func (RecurrenceInformationListResultIterator) NotDone

func (iter RecurrenceInformationListResultIterator) NotDone() bool

NotDone returns true if the enumeration should be started or is not yet complete.

func (RecurrenceInformationListResultIterator) Response

func (iter RecurrenceInformationListResultIterator) Response() RecurrenceInformationListResult

Response returns the raw server response from the last page request.

func (RecurrenceInformationListResultIterator) Value

func (iter RecurrenceInformationListResultIterator) Value() RecurrenceInformation

Value returns the current value or a zero-initialized value if the iterator has advanced beyond the end of the collection.

type RecurrenceInformationListResultPage

type RecurrenceInformationListResultPage struct {
	// contains filtered or unexported fields
}

RecurrenceInformationListResultPage contains a page of RecurrenceInformation values.

func (*RecurrenceInformationListResultPage) Next

Next advances to the next page of values. If there was an error making the request the page does not advance and the error is returned.

func (RecurrenceInformationListResultPage) NotDone

func (page RecurrenceInformationListResultPage) NotDone() bool

NotDone returns true if the page enumeration should be started or is not yet complete.

func (RecurrenceInformationListResultPage) Response

func (page RecurrenceInformationListResultPage) Response() RecurrenceInformationListResult

Response returns the raw server response from the last page request.

func (RecurrenceInformationListResultPage) Values

func (page RecurrenceInformationListResultPage) Values() []RecurrenceInformation

Values returns the slice of values for the current page or nil if there are no values.
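
The iterator and page types above are consumed with a NotDone/Value/Next (or NotDone/Values/Next) loop. A minimal sketch, assuming the iterator has already been obtained from a client List call and that Next takes no arguments in this SDK version; the import path and helper name are illustrative assumptions:

package example

import (
	"github.com/Azure/azure-sdk-for-go/services/datalake/analytics/2017-09-01-preview/job" // assumed import path
)

// collectRecurrences drains an iterator into a slice, advancing through all
// pages. How the iterator is obtained (e.g. from a client List call) is not
// shown here; this helper is illustrative and not part of the SDK.
func collectRecurrences(iter job.RecurrenceInformationListResultIterator) ([]job.RecurrenceInformation, error) {
	var all []job.RecurrenceInformation
	for iter.NotDone() {
		all = append(all, iter.Value())
		if err := iter.Next(); err != nil {
			return nil, err
		}
	}
	return all, nil
}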

type RelationshipProperties

type RelationshipProperties struct {
	// PipelineID - the job relationship pipeline identifier (a GUID).
	PipelineID *uuid.UUID `json:"pipelineId,omitempty"`
	// PipelineName - the friendly name of the job relationship pipeline, which does not need to be unique.
	PipelineName *string `json:"pipelineName,omitempty"`
	// PipelineURI - the pipeline URI, which is unique and links to the originating service for this pipeline.
	PipelineURI *string `json:"pipelineUri,omitempty"`
	// RunID - the run identifier (a GUID), the unique identifier of this iteration of the pipeline.
	RunID *uuid.UUID `json:"runId,omitempty"`
	// RecurrenceID - the recurrence identifier (a GUID), unique per activity/script, regardless of iterations. It is used to link different occurrences of the same job together.
	RecurrenceID *uuid.UUID `json:"recurrenceId,omitempty"`
	// RecurrenceName - the recurrence name, a user-friendly name used to correlate jobs.
	RecurrenceName *string `json:"recurrenceName,omitempty"`
}

RelationshipProperties job relationship information properties including pipeline information, correlation information, etc.
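
A minimal sketch of populating RelationshipProperties so that related submissions share pipeline and recurrence identifiers. It assumes the github.com/satori/go.uuid package used by this SDK generation and an assumed import path; the GUIDs and names are placeholders:

package main

import (
	"fmt"

	uuid "github.com/satori/go.uuid" // uuid dependency assumed for this SDK version

	"github.com/Azure/azure-sdk-for-go/services/datalake/analytics/2017-09-01-preview/job" // assumed import path
)

func main() {
	// Placeholder identifiers; in practice these come from your scheduler.
	pipelineID := uuid.FromStringOrNil("11111111-2222-3333-4444-555555555555")
	recurrenceID := uuid.FromStringOrNil("aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee")
	pipelineName := "nightly-aggregation"
	recurrenceName := "nightly-aggregation-recurrence"

	rel := job.RelationshipProperties{
		PipelineID:     &pipelineID,
		PipelineName:   &pipelineName,
		RecurrenceID:   &recurrenceID,
		RecurrenceName: &recurrenceName,
	}
	fmt.Printf("correlating under pipeline %s, recurrence %s\n", rel.PipelineID, rel.RecurrenceID)
}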

type Resource

type Resource struct {
	// Name - the name of the resource.
	Name *string `json:"name,omitempty"`
	// ResourcePath - the path to the resource.
	ResourcePath *string `json:"resourcePath,omitempty"`
	// Type - the job resource type. Possible values include: 'VertexResource', 'JobManagerResource', 'StatisticsResource', 'VertexResourceInUserFolder', 'JobManagerResourceInUserFolder', 'StatisticsResourceInUserFolder'
	Type ResourceType `json:"type,omitempty"`
}

Resource the Data Lake Analytics job resources.

type ResourceType

type ResourceType string

ResourceType enumerates the values for resource type.

const (
	// JobManagerResource ...
	JobManagerResource ResourceType = "JobManagerResource"
	// JobManagerResourceInUserFolder ...
	JobManagerResourceInUserFolder ResourceType = "JobManagerResourceInUserFolder"
	// StatisticsResource ...
	StatisticsResource ResourceType = "StatisticsResource"
	// StatisticsResourceInUserFolder ...
	StatisticsResourceInUserFolder ResourceType = "StatisticsResourceInUserFolder"
	// VertexResource ...
	VertexResource ResourceType = "VertexResource"
	// VertexResourceInUserFolder ...
	VertexResourceInUserFolder ResourceType = "VertexResourceInUserFolder"
)

type ResourceUsageStatistics

type ResourceUsageStatistics struct {
	// Average - the average value.
	Average *float64 `json:"average,omitempty"`
	// Minimum - the minimum value.
	Minimum *int64 `json:"minimum,omitempty"`
	// Maximum - the maximum value.
	Maximum *int64 `json:"maximum,omitempty"`
}

ResourceUsageStatistics the statistics information for resource usage.

type Result

type Result string

Result enumerates the values for result.

const (
	// Cancelled ...
	Cancelled Result = "Cancelled"
	// Failed ...
	Failed Result = "Failed"
	// None ...
	None Result = "None"
	// Succeeded ...
	Succeeded Result = "Succeeded"
)

type ScopeJobProperties

type ScopeJobProperties struct {
	// RuntimeVersion - the runtime version of the Data Lake Analytics engine to use for the specific type of job being run.
	RuntimeVersion *string `json:"runtimeVersion,omitempty"`
	// Script - the script to run. Please note that the maximum script size is 3 MB.
	Script *string `json:"script,omitempty"`
	// Type - Possible values include: 'TypeBasicPropertiesTypeJobProperties', 'TypeBasicPropertiesTypeUSQL', 'TypeBasicPropertiesTypeScope', 'TypeBasicPropertiesTypeHive'
	Type TypeBasicProperties `json:"type,omitempty"`
	// Resources - the list of resources that are required by the job
	Resources *[]ScopeJobResource `json:"resources,omitempty"`
	// UserAlgebraPath - the algebra file path after the job has completed
	UserAlgebraPath *string `json:"userAlgebraPath,omitempty"`
	// Notifier - the list of email addresses, separated by semi-colons, to notify when the job reaches a terminal state.
	Notifier *string `json:"notifier,omitempty"`
	// TotalCompilationTime - the total time this job spent compiling. This value should not be set by the user and will be ignored if it is.
	TotalCompilationTime *string `json:"totalCompilationTime,omitempty"`
	// TotalPausedTime - the total time this job spent paused. This value should not be set by the user and will be ignored if it is.
	TotalPausedTime *string `json:"totalPausedTime,omitempty"`
	// TotalQueuedTime - the total time this job spent queued. This value should not be set by the user and will be ignored if it is.
	TotalQueuedTime *string `json:"totalQueuedTime,omitempty"`
	// TotalRunningTime - the total time this job spent executing. This value should not be set by the user and will be ignored if it is.
	TotalRunningTime *string `json:"totalRunningTime,omitempty"`
	// RootProcessNodeID - the ID used to identify the job manager coordinating job execution. This value should not be set by the user and will be ignored if it is.
	RootProcessNodeID *string `json:"rootProcessNodeId,omitempty"`
	// YarnApplicationID - the ID used to identify the yarn application executing the job. This value should not be set by the user and will be ignored if it is.
	YarnApplicationID *string `json:"yarnApplicationId,omitempty"`
}

ScopeJobProperties scope job properties used when submitting and retrieving Scope jobs. (Only for use internally with Scope job type.)

func (ScopeJobProperties) AsBasicProperties

func (sjp ScopeJobProperties) AsBasicProperties() (BasicProperties, bool)

AsBasicProperties is the BasicProperties implementation for ScopeJobProperties.

func (ScopeJobProperties) AsHiveJobProperties

func (sjp ScopeJobProperties) AsHiveJobProperties() (*HiveJobProperties, bool)

AsHiveJobProperties is the BasicProperties implementation for ScopeJobProperties.

func (ScopeJobProperties) AsProperties

func (sjp ScopeJobProperties) AsProperties() (*Properties, bool)

AsProperties is the BasicProperties implementation for ScopeJobProperties.

func (ScopeJobProperties) AsScopeJobProperties

func (sjp ScopeJobProperties) AsScopeJobProperties() (*ScopeJobProperties, bool)

AsScopeJobProperties is the BasicProperties implementation for ScopeJobProperties.

func (ScopeJobProperties) AsUSQLJobProperties

func (sjp ScopeJobProperties) AsUSQLJobProperties() (*USQLJobProperties, bool)

AsUSQLJobProperties is the BasicProperties implementation for ScopeJobProperties.

func (ScopeJobProperties) MarshalJSON

func (sjp ScopeJobProperties) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for ScopeJobProperties.

type ScopeJobResource

type ScopeJobResource struct {
	// Name - the name of the resource.
	Name *string `json:"name,omitempty"`
	// Path - the path to the resource.
	Path *string `json:"path,omitempty"`
}

ScopeJobResource the Scope job resources. (Only for use internally with Scope job type.)

type SeverityTypes

type SeverityTypes string

SeverityTypes enumerates the values for severity types.

const (
	// Deprecated ...
	Deprecated SeverityTypes = "Deprecated"
	// Error ...
	Error SeverityTypes = "Error"
	// Info ...
	Info SeverityTypes = "Info"
	// SevereWarning ...
	SevereWarning SeverityTypes = "SevereWarning"
	// UserWarning ...
	UserWarning SeverityTypes = "UserWarning"
	// Warning ...
	Warning SeverityTypes = "Warning"
)

type State

type State string

State enumerates the values for state.

const (
	// StateAccepted ...
	StateAccepted State = "Accepted"
	// StateCompiling ...
	StateCompiling State = "Compiling"
	// StateEnded ...
	StateEnded State = "Ended"
	// StateNew ...
	StateNew State = "New"
	// StatePaused ...
	StatePaused State = "Paused"
	// StateQueued ...
	StateQueued State = "Queued"
	// StateRunning ...
	StateRunning State = "Running"
	// StateScheduling ...
	StateScheduling State = "Scheduling"
	// StateStarting ...
	StateStarting State = "Starting"
	// StateWaitingForCapacity ...
	StateWaitingForCapacity State = "WaitingForCapacity"
)
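
A job that has reached StateEnded no longer changes state, and its outcome is carried separately by Result. A minimal sketch of interpreting the pair; treating StateEnded as the only terminal state is an assumption based on the values listed above, and the import path is assumed:

package example

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/datalake/analytics/2017-09-01-preview/job" // assumed import path
)

// describeOutcome renders a short summary from a job's state and result.
// Illustrative helper, not part of the SDK.
func describeOutcome(state job.State, result job.Result) string {
	if state != job.StateEnded {
		return fmt.Sprintf("job still in progress (state %q)", state)
	}
	switch result {
	case job.Succeeded:
		return "job ended successfully"
	case job.Failed:
		return "job ended with failures"
	case job.Cancelled:
		return "job was cancelled"
	default:
		return fmt.Sprintf("job ended with result %q", result)
	}
}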

type StateAuditRecord

type StateAuditRecord struct {
	// NewState - the new state the job is in.
	NewState *string `json:"newState,omitempty"`
	// TimeStamp - the time stamp that the state change took place.
	TimeStamp *date.Time `json:"timeStamp,omitempty"`
	// RequestedByUser - the user who requested the change.
	RequestedByUser *string `json:"requestedByUser,omitempty"`
	// Details - the details of the audit log.
	Details *string `json:"details,omitempty"`
}

StateAuditRecord the Data Lake Analytics job state audit records for tracking the lifecycle of a job.

type Statistics

type Statistics struct {
	autorest.Response `json:"-"`
	// LastUpdateTimeUtc - the last update time for the statistics.
	LastUpdateTimeUtc *date.Time `json:"lastUpdateTimeUtc,omitempty"`
	// FinalizingTimeUtc - the job finalizing start time.
	FinalizingTimeUtc *date.Time `json:"finalizingTimeUtc,omitempty"`
	// Stages - the list of stages for the job.
	Stages *[]StatisticsVertexStage `json:"stages,omitempty"`
}

Statistics the Data Lake Analytics job execution statistics.

type StatisticsVertex

type StatisticsVertex struct {
	// Name - the name of the vertex.
	Name *string `json:"name,omitempty"`
	// VertexID - the id of the vertex.
	VertexID *uuid.UUID `json:"vertexId,omitempty"`
	// ExecutionTime - the amount of execution time of the vertex.
	ExecutionTime *string `json:"executionTime,omitempty"`
	// DataRead - the amount of data read of the vertex, in bytes.
	DataRead *int64 `json:"dataRead,omitempty"`
	// PeakMemUsage - the amount of peak memory usage of the vertex, in bytes.
	PeakMemUsage *int64 `json:"peakMemUsage,omitempty"`
}

StatisticsVertex the detailed information for a vertex.

type StatisticsVertexStage

type StatisticsVertexStage struct {
	// DataRead - the amount of data read, in bytes.
	DataRead *int64 `json:"dataRead,omitempty"`
	// DataReadCrossPod - the amount of data read across multiple pods, in bytes.
	DataReadCrossPod *int64 `json:"dataReadCrossPod,omitempty"`
	// DataReadIntraPod - the amount of data read in one pod, in bytes.
	DataReadIntraPod *int64 `json:"dataReadIntraPod,omitempty"`
	// DataToRead - the amount of data remaining to be read, in bytes.
	DataToRead *int64 `json:"dataToRead,omitempty"`
	// DataWritten - the amount of data written, in bytes.
	DataWritten *int64 `json:"dataWritten,omitempty"`
	// DuplicateDiscardCount - the number of duplicates that were discarded.
	DuplicateDiscardCount *int32 `json:"duplicateDiscardCount,omitempty"`
	// FailedCount - the number of failures that occurred in this stage.
	FailedCount *int32 `json:"failedCount,omitempty"`
	// MaxVertexDataRead - the maximum amount of data read in a single vertex, in bytes.
	MaxVertexDataRead *int64 `json:"maxVertexDataRead,omitempty"`
	// MinVertexDataRead - the minimum amount of data read in a single vertex, in bytes.
	MinVertexDataRead *int64 `json:"minVertexDataRead,omitempty"`
	// ReadFailureCount - the number of read failures in this stage.
	ReadFailureCount *int32 `json:"readFailureCount,omitempty"`
	// RevocationCount - the number of vertices that were revoked during this stage.
	RevocationCount *int32 `json:"revocationCount,omitempty"`
	// RunningCount - the number of currently running vertices in this stage.
	RunningCount *int32 `json:"runningCount,omitempty"`
	// ScheduledCount - the number of currently scheduled vertices in this stage
	ScheduledCount *int32 `json:"scheduledCount,omitempty"`
	// StageName - the name of this stage in job execution.
	StageName *string `json:"stageName,omitempty"`
	// SucceededCount - the number of vertices that succeeded in this stage.
	SucceededCount *int32 `json:"succeededCount,omitempty"`
	// TempDataWritten - the amount of temporary data written, in bytes.
	TempDataWritten *int64 `json:"tempDataWritten,omitempty"`
	// TotalCount - the total vertex count for this stage.
	TotalCount *int32 `json:"totalCount,omitempty"`
	// TotalFailedTime - the amount of time that failed vertices took up in this stage.
	TotalFailedTime *string `json:"totalFailedTime,omitempty"`
	// TotalProgress - the current progress of this stage, as a percentage.
	TotalProgress *int32 `json:"totalProgress,omitempty"`
	// TotalSucceededTime - the amount of time all successful vertices took in this stage.
	TotalSucceededTime *string `json:"totalSucceededTime,omitempty"`
	// TotalPeakMemUsage - the sum of the peak memory usage of all the vertices in the stage, in bytes.
	TotalPeakMemUsage *int64 `json:"totalPeakMemUsage,omitempty"`
	// TotalExecutionTime - the sum of the total execution time of all the vertices in the stage.
	TotalExecutionTime *string `json:"totalExecutionTime,omitempty"`
	// MaxDataReadVertex - the vertex with the maximum amount of data read.
	MaxDataReadVertex *StatisticsVertex `json:"maxDataReadVertex,omitempty"`
	// MaxExecutionTimeVertex - the vertex with the maximum execution time.
	MaxExecutionTimeVertex *StatisticsVertex `json:"maxExecutionTimeVertex,omitempty"`
	// MaxPeakMemUsageVertex - the vertex with the maximum peak memory usage.
	MaxPeakMemUsageVertex *StatisticsVertex `json:"maxPeakMemUsageVertex,omitempty"`
	// EstimatedVertexCPUCoreCount - the estimated vertex CPU core count.
	EstimatedVertexCPUCoreCount *int32 `json:"estimatedVertexCpuCoreCount,omitempty"`
	// EstimatedVertexPeakCPUCoreCount - the estimated vertex peak CPU core count.
	EstimatedVertexPeakCPUCoreCount *int32 `json:"estimatedVertexPeakCpuCoreCount,omitempty"`
	// EstimatedVertexMemSize - the estimated vertex memory size, in bytes.
	EstimatedVertexMemSize *int64 `json:"estimatedVertexMemSize,omitempty"`
	// AllocatedContainerCPUCoreCount - the statistics information for the allocated container CPU core count.
	AllocatedContainerCPUCoreCount *ResourceUsageStatistics `json:"allocatedContainerCpuCoreCount,omitempty"`
	// AllocatedContainerMemSize - the statistics information for the allocated container memory size.
	AllocatedContainerMemSize *ResourceUsageStatistics `json:"allocatedContainerMemSize,omitempty"`
	// UsedVertexCPUCoreCount - the statistics information for the used vertex CPU core count.
	UsedVertexCPUCoreCount *ResourceUsageStatistics `json:"usedVertexCpuCoreCount,omitempty"`
	// UsedVertexPeakMemSize - the statistics information for the used vertex peak memory size.
	UsedVertexPeakMemSize *ResourceUsageStatistics `json:"usedVertexPeakMemSize,omitempty"`
}

StatisticsVertexStage the Data Lake Analytics job statistics vertex stage information.
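
Every counter in Statistics and StatisticsVertexStage is a pointer, so aggregate calculations need nil checks. A minimal sketch that totals the bytes read across all stages; the helper name and import path are assumptions for illustration:

package example

import (
	"github.com/Azure/azure-sdk-for-go/services/datalake/analytics/2017-09-01-preview/job" // assumed import path
)

// totalDataRead sums DataRead over all stages of a job's execution
// statistics, treating missing values as zero. Illustrative helper.
func totalDataRead(stats job.Statistics) int64 {
	if stats.Stages == nil {
		return 0
	}
	var total int64
	for _, stage := range *stats.Stages {
		if stage.DataRead != nil {
			total += *stage.DataRead
		}
	}
	return total
}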

type Type

type Type string

Type enumerates the values for type.

const (
	// Hive ...
	Hive Type = "Hive"
	// Scope ...
	Scope Type = "Scope"
	// USQL ...
	USQL Type = "USql"
)

type TypeBasicCreateJobProperties

type TypeBasicCreateJobProperties string

TypeBasicCreateJobProperties enumerates the values for type basic create job properties.

const (
	// TypeCreateJobProperties ...
	TypeCreateJobProperties TypeBasicCreateJobProperties = "CreateJobProperties"
	// TypeScope ...
	TypeScope TypeBasicCreateJobProperties = "Scope"
	// TypeUSQL ...
	TypeUSQL TypeBasicCreateJobProperties = "USql"
)

type TypeBasicProperties

type TypeBasicProperties string

TypeBasicProperties enumerates the values for type basic properties.

const (
	// TypeBasicPropertiesTypeHive ...
	TypeBasicPropertiesTypeHive TypeBasicProperties = "Hive"
	// TypeBasicPropertiesTypeJobProperties ...
	TypeBasicPropertiesTypeJobProperties TypeBasicProperties = "JobProperties"
	// TypeBasicPropertiesTypeScope ...
	TypeBasicPropertiesTypeScope TypeBasicProperties = "Scope"
	// TypeBasicPropertiesTypeUSQL ...
	TypeBasicPropertiesTypeUSQL TypeBasicProperties = "USql"
)

type USQLJobProperties

type USQLJobProperties struct {
	// RuntimeVersion - the runtime version of the Data Lake Analytics engine to use for the specific type of job being run.
	RuntimeVersion *string `json:"runtimeVersion,omitempty"`
	// Script - the script to run. Please note that the maximum script size is 3 MB.
	Script *string `json:"script,omitempty"`
	// Type - Possible values include: 'TypeBasicPropertiesTypeJobProperties', 'TypeBasicPropertiesTypeUSQL', 'TypeBasicPropertiesTypeScope', 'TypeBasicPropertiesTypeHive'
	Type TypeBasicProperties `json:"type,omitempty"`
	// Resources - the list of resources that are required by the job
	Resources *[]Resource `json:"resources,omitempty"`
	// Statistics - the job specific statistics.
	Statistics *Statistics `json:"statistics,omitempty"`
	// DebugData - the job specific debug data locations.
	DebugData *DataPath `json:"debugData,omitempty"`
	// Diagnostics - the diagnostics for the job.
	Diagnostics *[]Diagnostics `json:"diagnostics,omitempty"`
	// AlgebraFilePath - the algebra file path after the job has completed
	AlgebraFilePath *string `json:"algebraFilePath,omitempty"`
	// TotalCompilationTime - the total time this job spent compiling. This value should not be set by the user and will be ignored if it is.
	TotalCompilationTime *string `json:"totalCompilationTime,omitempty"`
	// TotalPausedTime - the total time this job spent paused. This value should not be set by the user and will be ignored if it is.
	TotalPausedTime *string `json:"totalPausedTime,omitempty"`
	// TotalQueuedTime - the total time this job spent queued. This value should not be set by the user and will be ignored if it is.
	TotalQueuedTime *string `json:"totalQueuedTime,omitempty"`
	// TotalRunningTime - the total time this job spent executing. This value should not be set by the user and will be ignored if it is.
	TotalRunningTime *string `json:"totalRunningTime,omitempty"`
	// RootProcessNodeID - the ID used to identify the job manager coordinating job execution. This value should not be set by the user and will be ignored if it is.
	RootProcessNodeID *string `json:"rootProcessNodeId,omitempty"`
	// YarnApplicationID - the ID used to identify the yarn application executing the job. This value should not be set by the user and will be ignored if it is.
	YarnApplicationID *string `json:"yarnApplicationId,omitempty"`
	// YarnApplicationTimeStamp - the timestamp (in ticks) for the yarn application executing the job. This value should not be set by the user and will be ignored if it is.
	YarnApplicationTimeStamp *int64 `json:"yarnApplicationTimeStamp,omitempty"`
	// CompileMode - the specific compilation mode for the job used during execution. If this is not specified during submission, the server will determine the optimal compilation mode. Possible values include: 'Semantic', 'Full', 'SingleBox'
	CompileMode CompileMode `json:"compileMode,omitempty"`
}

USQLJobProperties U-SQL job properties used when retrieving U-SQL jobs.

func (USQLJobProperties) AsBasicProperties

func (usjp USQLJobProperties) AsBasicProperties() (BasicProperties, bool)

AsBasicProperties is the BasicProperties implementation for USQLJobProperties.

func (USQLJobProperties) AsHiveJobProperties

func (usjp USQLJobProperties) AsHiveJobProperties() (*HiveJobProperties, bool)

AsHiveJobProperties is the BasicProperties implementation for USQLJobProperties.

func (USQLJobProperties) AsProperties

func (usjp USQLJobProperties) AsProperties() (*Properties, bool)

AsProperties is the BasicProperties implementation for USQLJobProperties.

func (USQLJobProperties) AsScopeJobProperties

func (usjp USQLJobProperties) AsScopeJobProperties() (*ScopeJobProperties, bool)

AsScopeJobProperties is the BasicProperties implementation for USQLJobProperties.

func (USQLJobProperties) AsUSQLJobProperties

func (usjp USQLJobProperties) AsUSQLJobProperties() (*USQLJobProperties, bool)

AsUSQLJobProperties is the BasicProperties implementation for USQLJobProperties.

func (USQLJobProperties) MarshalJSON

func (usjp USQLJobProperties) MarshalJSON() ([]byte, error)

MarshalJSON is the custom marshaler for USQLJobProperties.
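
The As* helpers above let callers recover the concrete job properties from a polymorphic BasicProperties value without a manual type switch. A minimal sketch, with an assumed import path; the helper name is illustrative:

package example

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/datalake/analytics/2017-09-01-preview/job" // assumed import path
)

// printUSQLDetails extracts U-SQL-specific fields from a BasicProperties
// value returned by the service. Illustrative helper, not part of the SDK.
func printUSQLDetails(props job.BasicProperties) {
	usql, ok := props.AsUSQLJobProperties()
	if !ok || usql == nil {
		fmt.Println("not a U-SQL job")
		return
	}
	if usql.Script != nil {
		fmt.Println("script:", *usql.Script)
	}
	if usql.CompileMode != "" {
		fmt.Println("compile mode:", usql.CompileMode)
	}
}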

type UpdateJobParameters

type UpdateJobParameters struct {
	// DegreeOfParallelism - the degree of parallelism used for this job. This must be greater than 0; if set to less than 0, it will default to 1.
	DegreeOfParallelism *int32 `json:"degreeOfParallelism,omitempty"`
	// Priority - the priority value for the current job. Lower numbers have a higher priority. By default, a job has a priority of 1000. This must be greater than 0.
	Priority *int32 `json:"priority,omitempty"`
	// Tags - the key-value pairs used to add additional metadata to the job information. (Only for use internally with Scope job type.)
	Tags *map[string]*string `json:"tags,omitempty"`
}

UpdateJobParameters the parameters that can be used to update existing Data Lake Analytics job information properties. (Only for use internally with Scope job type.)
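
A minimal sketch of building UpdateJobParameters to change a job's parallelism and priority; the values are placeholders, the import path is assumed, and the client Update call that consumes this struct is documented elsewhere in this package:

package main

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/services/datalake/analytics/2017-09-01-preview/job" // assumed import path
)

func main() {
	dop := int32(4)        // degree of parallelism; must be greater than 0
	priority := int32(500) // lower numbers run with higher priority
	params := job.UpdateJobParameters{
		DegreeOfParallelism: &dop,
		Priority:            &priority,
	}
	fmt.Printf("update request: degreeOfParallelism=%d priority=%d\n",
		*params.DegreeOfParallelism, *params.Priority)
}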
