Documentation ¶
Index ¶
- Constants
- func FailResult(err error) (*models.RunCommandResult, error)
- func NewExecutorError(code models.ErrorCode, message string) *models.BaseError
- func NewFailedResult(reason string) *models.RunCommandResult
- func WriteJobResults(resultsDir string, stdout, stderr io.Reader, exitcode int, err error, ...) *models.RunCommandResult
- type Executor
- type ExecutorProvider
- type LogStreamRequest
- type OutputLimits
- type RunCommandRequest
Constants ¶
const (
	ExecutionAlreadyStarted   models.ErrorCode = "ExecutionAlreadyStarted"
	ExecutionAlreadyCancelled models.ErrorCode = "ExecutionAlreadyCancelled"
	ExecutionAlreadyComplete  models.ErrorCode = "ExecutionAlreadyComplete"
	ExecutionNotFound         models.ErrorCode = "ExecutionNotFound"
	ExecutorSpecValidationErr models.ErrorCode = "ExecutorSpecValidationErr"
)
Common error codes for the Executor component.
const EXECUTOR_COMPONENT = "Executor"
Variables ¶
This section is empty.
Functions ¶
func FailResult ¶
func FailResult(err error) (*models.RunCommandResult, error)
func NewExecutorError ¶ added in v1.5.0
func NewExecutorError(code models.ErrorCode, message string) *models.BaseError
func NewFailedResult ¶ added in v1.0.4
func NewFailedResult(reason string) *models.RunCommandResult
func WriteJobResults ¶
func WriteJobResults(
	resultsDir string,
	stdout, stderr io.Reader,
	exitcode int,
	err error,
	limits OutputLimits,
) *models.RunCommandResult
WriteJobResults produces files and a models.RunCommandResult in the standard format, including truncating the contents of both where necessary to fit within system-defined limits.
It consumes only the bytes from the passed io.Readers that it needs to correctly form the job outputs; any remaining bytes are left unread. Once the function returns, the readers can safely be closed.
Types ¶
type Executor ¶
type Executor interface {
	// A Providable is something that a Provider can check for installation status
	provider.Providable

	bidstrategy.SemanticBidStrategy
	bidstrategy.ResourceBidStrategy

	// Start initiates an execution for the given RunCommandRequest.
	// It returns an error if the execution already exists and is in a started or terminal state.
	// Implementations may also return other errors based on resource limitations or internal faults.
	Start(ctx context.Context, request *RunCommandRequest) error

	// Run initiates and waits for the completion of an execution for the given RunCommandRequest.
	// It returns a RunCommandResult and an error if any part of the operation fails.
	// Specifically, it will return an error if the execution already exists and is in a started or terminal state.
	Run(ctx context.Context, args *RunCommandRequest) (*models.RunCommandResult, error)

	// Wait monitors the completion of an execution identified by its executionID.
	// It returns two channels:
	// 1. A channel that emits the execution result once the task is complete.
	// 2. An error channel that relays any issues encountered, such as when the
	//    execution is non-existent or has already concluded.
	Wait(ctx context.Context, executionID string) (<-chan *models.RunCommandResult, <-chan error)

	// Cancel attempts to cancel an ongoing execution identified by its executionID.
	// Returns an error if the execution does not exist or is already in a terminal state.
	Cancel(ctx context.Context, executionID string) error

	// GetLogStream provides a stream of output for an ongoing or completed execution identified by its executionID.
	// The 'withHistory' flag indicates whether to include historical data in the stream.
	// The 'follow' flag indicates whether the stream should continue to send data as it is produced.
	// Returns an io.ReadCloser to read the output stream and an error if the operation fails.
	// Specifically, it will return an error if the execution does not exist.
	GetLogStream(ctx context.Context, request LogStreamRequest) (io.ReadCloser, error)
}
Executor serves as an execution manager for running jobs on a specific backend, such as a Docker daemon. It provides a comprehensive set of methods to initiate, monitor, terminate, and retrieve output streams for executions.
type ExecutorProvider ¶
ExecutorProvider returns an executor for the given engine type.
type LogStreamRequest ¶ added in v1.2.2
LogStreamRequest encapsulates the parameters required to retrieve a log stream.
type OutputLimits ¶ added in v1.0.4
type RunCommandRequest ¶ added in v1.0.4
type RunCommandRequest struct {
	JobID        string                    // Unique identifier for the job.
	ExecutionID  string                    // Unique identifier for a specific execution of the job.
	Resources    *models.Resources         // Resource requirements like CPU, Memory, GPU, Disk.
	Network      *models.NetworkConfig     // Network configuration for the execution.
	Outputs      []*models.ResultPath      // Paths where the execution should store its outputs.
	Inputs       []storage.PreparedStorage // Prepared storage elements that are used as inputs.
	ResultsDir   string                    // Directory where results should be stored.
	EngineParams *models.SpecConfig        // Engine-specific configuration parameters.
	OutputLimits OutputLimits              // Output size limits for the execution.
}
RunCommandRequest encapsulates the parameters required to initiate a job execution. It includes identifiers, resource requirements, network configurations, and various other settings.