worker

package
v0.6.0
Published: Jul 23, 2015 · License: MIT · Imports: 24 · Imported by: 0

Documentation

Constants

This section is empty.

Variables

var (
	// LogWriterTick is how often the buffer should be flushed out and sent to
	// travis-logs.
	LogWriterTick = 500 * time.Millisecond

	// LogChunkSize is a bit of a magic number, calculated like this: The
	// maximum Pusher payload is 10 kB (or 10 KiB, who knows, but let's go with
	// 10 kB since that is smaller). Looking at the travis-logs source, the
	// current message overhead (i.e. the part of the payload that isn't
	// the content of the log part) is 42 bytes + the length of the JSON-
	// encoded ID and the length of the JSON-encoded sequence number. A 64-
	// bit number is up to 20 digits long, so that means (assuming we don't
	// go over 64-bit numbers) the overhead is up to 82 bytes. That means
	// we can send up to 9918 bytes of content. However, the JSON-encoded
	// version of a string can be significantly longer than the raw bytes.
	// Worst case that I could find is "<", which with the Go JSON encoder
	// becomes "\u003c" (i.e. six bytes long). So, given a string of just
	// left angle brackets, the string would become six times as long,
	// meaning that the longest string we can take is 1653. We could still
	// get errors if we go over 64-bit numbers, but I find it quite
	// unlikely that the sequence number, the ID, and a worst-case log
	// chunk would all hit their maximums at once, so I'm willing to live
	// with that. --Henrik
	LogChunkSize = 1653
)
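
As a quick illustration of the arithmetic above, this sketch (not part of the package) reproduces the worst-case JSON expansion and the resulting chunk size, assuming the 10 kB Pusher limit and 82-byte overhead described in the comment:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Go's JSON encoder escapes "<" as "\u003c": one byte of content
	// becomes six bytes of JSON (ignoring the surrounding quotes).
	escaped, _ := json.Marshal("<")
	fmt.Println(string(escaped)) // "\u003c"

	// Worst-case chunk size: (10 kB payload - 82 bytes of overhead) / 6.
	const maxPayload = 10000
	const overhead = 42 + 20 + 20
	fmt.Println((maxPayload - overhead) / 6) // 1653
}
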
var (
	// VersionString is the git describe version set at build time
	VersionString = "?"
	// RevisionString is the git revision set at build time
	RevisionString = "?"
	// GeneratedString is the build date set at build time
	GeneratedString = "?"
)

Functions

This section is empty.

Types

type BuildPayload

type BuildPayload struct {
	ID     uint64 `json:"id"`
	Number string `json:"number"`
}

BuildPayload contains information about the build.

type BuildScriptGenerator

type BuildScriptGenerator interface {
	Generate(gocontext.Context, *simplejson.Json) ([]byte, error)
}

A BuildScriptGenerator generates a build script for a given job payload.
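For tests it can be handy to swap in a trivial generator. The stub below is hypothetical and assumes gocontext and simplejson refer to golang.org/x/net/context and github.com/bitly/go-simplejson; those import paths are guesses, not taken from this page.

package example

import (
	simplejson "github.com/bitly/go-simplejson" // assumed import path
	gocontext "golang.org/x/net/context"        // assumed import path
)

// staticGenerator always returns the same build script, regardless of
// the job payload. It is a test stand-in, not part of the package.
type staticGenerator struct {
	script []byte
}

func (g staticGenerator) Generate(ctx gocontext.Context, payload *simplejson.Json) ([]byte, error) {
	return g.script, nil
}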

func NewBuildScriptGenerator

func NewBuildScriptGenerator(cfg *config.Config) BuildScriptGenerator

NewBuildScriptGenerator creates a generator backed by an HTTP API.

type BuildScriptGeneratorError

type BuildScriptGeneratorError struct {

	// true when this error can be recovered by retrying later
	Recover bool
	// contains filtered or unexported fields
}

A BuildScriptGeneratorError is sometimes used by the Generate method on a BuildScriptGenerator to return more metadata about an error.

type Canceller

type Canceller interface {
	// Subscribe will set up a subscription for cancellation messages for the
	// given job ID. When a cancellation message comes in, the channel will be
	// closed. Only one subscription per job ID is valid; if a
	// subscription is already set up for the given ID, an error will be
	// returned.
	Subscribe(id uint64, ch chan<- struct{}) error

	// Unsubscribe removes the existing subscription for the given job ID.
	Unsubscribe(id uint64)
}

A Canceller allows you to subscribe to and unsubscribe from cancellation messages for a given job ID.
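The toy implementation below (not the package's own, and not safe for concurrent use) illustrates that contract: one subscription per job ID, and the subscribed channel is closed when a cancellation arrives.

package main

import "fmt"

type memoryCanceller struct {
	subs map[uint64]chan<- struct{}
}

func (c *memoryCanceller) Subscribe(id uint64, ch chan<- struct{}) error {
	if _, ok := c.subs[id]; ok {
		return fmt.Errorf("job %d already has a subscription", id)
	}
	c.subs[id] = ch
	return nil
}

func (c *memoryCanceller) Unsubscribe(id uint64) {
	delete(c.subs, id)
}

// cancel simulates an incoming cancellation message for the given job ID.
func (c *memoryCanceller) cancel(id uint64) {
	if ch, ok := c.subs[id]; ok {
		close(ch)
		delete(c.subs, id)
	}
}

func main() {
	c := &memoryCanceller{subs: map[uint64]chan<- struct{}{}}

	cancelled := make(chan struct{})
	if err := c.Subscribe(42, cancelled); err != nil {
		panic(err)
	}

	c.cancel(42)
	<-cancelled // the closed channel makes this receive return immediately
	fmt.Println("job 42 was cancelled")
}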

type CommandDispatcher

type CommandDispatcher struct {
	// contains filtered or unexported fields
}

CommandDispatcher is responsible for listening to a command queue on AMQP and dispatching the commands to the right place. Currently the only valid command is the 'cancel job' command.

func NewCommandDispatcher

func NewCommandDispatcher(ctx gocontext.Context, conn *amqp.Connection) *CommandDispatcher

NewCommandDispatcher creates a new CommandDispatcher. No network traffic occurs until you call Run().

func (*CommandDispatcher) Run

func (d *CommandDispatcher) Run()

Run will make the CommandDispatcher listen to the worker command queue and start dispatching any incoming commands.

func (*CommandDispatcher) Subscribe

func (d *CommandDispatcher) Subscribe(id uint64, ch chan<- struct{}) error

Subscribe is an implementation of Canceller.Subscribe.

func (*CommandDispatcher) Unsubscribe

func (d *CommandDispatcher) Unsubscribe(id uint64)

Unsubscribe is an implementation of Canceller.Unsubscribe.
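The hypothetical helper below wires a dispatcher up to a single job; the import paths for the AMQP client, the context package, and this package are assumptions.

package example

import (
	"github.com/streadway/amqp"          // assumed import path
	gocontext "golang.org/x/net/context" // assumed import path

	worker "github.com/travis-ci/worker" // assumed import path
)

// listenForCancellations starts a dispatcher and subscribes one job ID.
func listenForCancellations(ctx gocontext.Context, conn *amqp.Connection, jobID uint64) (<-chan struct{}, error) {
	d := worker.NewCommandDispatcher(ctx, conn)

	// Run blocks while it listens to the worker command queue, so start
	// it in its own goroutine.
	go d.Run()

	cancelled := make(chan struct{})
	if err := d.Subscribe(jobID, cancelled); err != nil {
		return nil, err
	}
	return cancelled, nil
}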

type FinishState

type FinishState string

FinishState is the state that a job finished with (such as pass/fail/etc.). You should not provide a string directly, but use one of the FinishStateX constants defined in this package.

const (
	FinishStatePassed    FinishState = "passed"
	FinishStateFailed    FinishState = "failed"
	FinishStateErrored   FinishState = "errored"
	FinishStateCancelled FinishState = "cancelled"
)

Valid finish states for the FinishState type.
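A small hypothetical helper shows how the constants might be picked from a build script result; it is not part of the package, and the worker import path is an assumption.

package example

import worker "github.com/travis-ci/worker" // assumed import path

// finishStateFor maps a build script exit code and error to a FinishState.
func finishStateFor(exitCode int, err error) worker.FinishState {
	switch {
	case err != nil:
		return worker.FinishStateErrored
	case exitCode == 0:
		return worker.FinishStatePassed
	default:
		return worker.FinishStateFailed
	}
}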

type Job

type Job interface {
	Payload() *JobPayload
	RawPayload() *simplejson.Json
	StartAttributes() *backend.StartAttributes

	Received() error
	Started() error
	Error(context.Context, string) error
	Requeue() error
	Finish(FinishState) error

	LogWriter(context.Context) (LogWriter, error)
}

A Job ties together all the elements required for a build job.
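The sketch below drives a single Job by hand (normally a Processor does this). The call order is only inferred from the method names, and the import paths are assumptions, with gocontext taken to be the same context package used by the interface.

package example

import (
	"fmt"

	gocontext "golang.org/x/net/context" // assumed import path

	worker "github.com/travis-ci/worker" // assumed import path
)

func runJob(ctx gocontext.Context, job worker.Job) error {
	if err := job.Received(); err != nil {
		return err
	}

	w, err := job.LogWriter(ctx)
	if err != nil {
		return job.Requeue()
	}
	defer w.Close()

	if err := job.Started(); err != nil {
		return err
	}

	payload := job.Payload()
	fmt.Fprintf(w, "running %s #%s\n", payload.Repository.Slug, payload.Job.Number)

	// ... run the build script produced by a BuildScriptGenerator here ...

	return job.Finish(worker.FinishStatePassed)
}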

type JobJobPayload

type JobJobPayload struct {
	ID     uint64 `json:"id"`
	Number string `json:"number"`
}

JobJobPayload contains information about the job.

type JobPayload

type JobPayload struct {
	Type       string                 `json:"type"`
	Job        JobJobPayload          `json:"job"`
	Build      BuildPayload           `json:"source"`
	Repository RepositoryPayload      `json:"repository"`
	UUID       string                 `json:"uuid"`
	Config     map[string]interface{} `json:"config"`
	Timeouts   TimeoutsPayload        `json:"timeouts,omitempty"`
}

JobPayload is the payload we receive over RabbitMQ.
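A hypothetical payload with made-up values, matching the struct tags above, can be decoded like this (the worker import path is an assumption):

package main

import (
	"encoding/json"
	"fmt"
	"log"

	worker "github.com/travis-ci/worker" // assumed import path
)

func main() {
	// The values below are made up for illustration.
	raw := []byte(`{
		"type": "test",
		"job": {"id": 3, "number": "1.1"},
		"source": {"id": 2, "number": "1"},
		"repository": {"id": 1, "slug": "example/repo"},
		"uuid": "00000000-0000-0000-0000-000000000000",
		"config": {"language": "go"},
		"timeouts": {"hard_limit": 3600, "log_silence": 600}
	}`)

	var payload worker.JobPayload
	if err := json.Unmarshal(raw, &payload); err != nil {
		log.Fatal(err)
	}
	fmt.Println(payload.Repository.Slug, payload.Job.Number)
}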

type JobQueue

type JobQueue struct {
	// contains filtered or unexported fields
}

A JobQueue allows getting Jobs out of an AMQP queue.

func NewJobQueue

func NewJobQueue(conn *amqp.Connection, queue string) (*JobQueue, error)

NewJobQueue creates a JobQueue backed by the given AMQP connection and connects to the AMQP queue with the given name. The queue is declared when this function is called, so an error may be returned if the queue already exists but with different attributes than we expect.

func (*JobQueue) Jobs

func (q *JobQueue) Jobs(ctx gocontext.Context) (outChan <-chan Job, err error)

Jobs creates a new consumer on the queue and returns a receive-only channel. Every Job received from AMQP is sent on the returned channel.
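A hedged sketch of consuming that channel follows; the queue name is made up and the import paths are assumptions.

package example

import (
	"fmt"

	"github.com/streadway/amqp"          // assumed import path
	gocontext "golang.org/x/net/context" // assumed import path

	worker "github.com/travis-ci/worker" // assumed import path
)

func consume(ctx gocontext.Context, conn *amqp.Connection) error {
	q, err := worker.NewJobQueue(conn, "builds.test")
	if err != nil {
		return err
	}

	jobs, err := q.Jobs(ctx)
	if err != nil {
		return err
	}

	// Handle each job as it arrives; the loop ends when the channel closes.
	for job := range jobs {
		fmt.Println("received job", job.Payload().Job.ID)
	}
	return nil
}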

type LogWriter

type LogWriter interface {
	io.WriteCloser
	WriteAndClose([]byte) (int, error)
	SetTimeout(time.Duration)
	Timeout() <-chan time.Time
	SetMaxLogLength(int)
}

A LogWriter is primarily an io.Writer that sends all bytes to travis-logs for processing, with additional utility methods for timeouts and log length limiting. Each LogWriter is tied to a given job and can be obtained by calling the LogWriter() method on that Job.
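The sketch below exercises the timeout and length-limit methods; the values and import paths are assumptions, and reading Timeout() as a log-silence signal is inferred from the log timeout settings elsewhere in this package.

package example

import (
	"fmt"
	"time"

	gocontext "golang.org/x/net/context" // assumed import path

	worker "github.com/travis-ci/worker" // assumed import path
)

func writeLogs(ctx gocontext.Context, job worker.Job) error {
	w, err := job.LogWriter(ctx)
	if err != nil {
		return err
	}

	// Made-up values for illustration.
	w.SetTimeout(10 * time.Minute)
	w.SetMaxLogLength(4 * 1024 * 1024)

	// Timeout() fires if no log output arrives within the configured
	// duration; a processor could treat that as a stalled job.
	go func() {
		<-w.Timeout()
		fmt.Println("no log output for 10 minutes")
	}()

	fmt.Fprintln(w, "Hello from the build")

	// WriteAndClose writes one final chunk and closes the writer.
	_, err = w.WriteAndClose([]byte("Done.\n"))
	return err
}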

func NewLogWriter

func NewLogWriter(ctx gocontext.Context, conn *amqp.Connection, jobID uint64) (LogWriter, error)

NewLogWriter creates a new AMQP-backed log writer for the given job ID. An error is returned if declaring the necessary AMQP queues fails.

type Processor

type Processor struct {
	ID uuid.UUID

	CurrentJob     Job
	ProcessedCount int

	SkipShutdownOnLogTimeout bool
	// contains filtered or unexported fields
}

A Processor will process build jobs on a channel, one by one, until it is told to shut down or the channel of build jobs closes.

func NewProcessor

func NewProcessor(ctx gocontext.Context, hostname string, buildJobsQueue *JobQueue,
	provider backend.Provider, generator BuildScriptGenerator, canceller Canceller,
	hardTimeout time.Duration, logTimeout time.Duration) (*Processor, error)

NewProcessor creates a new processor that runs build jobs from the given queue using the given provider, fetching build scripts from the generator.

func (*Processor) GracefulShutdown

func (p *Processor) GracefulShutdown()

GracefulShutdown tells the processor to finish the job it is currently processing, but not to pick up any new jobs. This method returns immediately; the processor is done when Run() returns.

func (*Processor) Run

func (p *Processor) Run()

Run starts the processor. This method will not return until the processor is terminated, either by calling the GracefulShutdown or Terminate methods, or if the build jobs channel is closed.

func (*Processor) Terminate

func (p *Processor) Terminate()

Terminate tells the processor to stop working on the current job as soon as possible.

type ProcessorPool

type ProcessorPool struct {
	Context     gocontext.Context
	Conn        *amqp.Connection
	Provider    backend.Provider
	Generator   BuildScriptGenerator
	Canceller   Canceller
	Hostname    string
	HardTimeout time.Duration
	LogTimeout  time.Duration

	SkipShutdownOnLogTimeout bool
	// contains filtered or unexported fields
}

A ProcessorPool spins up multiple Processors handling build jobs from the same queue.

func NewProcessorPool

func NewProcessorPool(hostname string, ctx gocontext.Context, hardTimeout time.Duration,
	logTimeout time.Duration, amqpConn *amqp.Connection, provider backend.Provider,
	generator BuildScriptGenerator, canceller Canceller) *ProcessorPool

NewProcessorPool creates a new processor pool using the given arguments.

func (*ProcessorPool) Decr added in v0.5.1

func (p *ProcessorPool) Decr()

Decr pops a processor out of the pool and issues a graceful shutdown.

func (*ProcessorPool) Each added in v0.4.4

func (p *ProcessorPool) Each(f func(int, *Processor))

Each loops through all the processors in the pool and calls the given function for each of them, passing in the index and the processor. The iteration order is stable for a given set of processors.

func (*ProcessorPool) GracefulShutdown

func (p *ProcessorPool) GracefulShutdown()

GracefulShutdown causes each processor in the pool to start its graceful shutdown.

func (*ProcessorPool) Incr added in v0.5.1

func (p *ProcessorPool) Incr()

Incr adds a single running processor to the pool.

func (*ProcessorPool) Run

func (p *ProcessorPool) Run(poolSize int, queueName string) error

Run starts up the given number of processors and connects them to the given queue. This method blocks until all processors have finished.
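The sketch below wires a pool together; the hostname, timeouts, pool size, queue name, and import paths are all made up for illustration.

package example

import (
	"time"

	"github.com/streadway/amqp"          // assumed import path
	gocontext "golang.org/x/net/context" // assumed import path

	worker "github.com/travis-ci/worker"  // assumed import path
	"github.com/travis-ci/worker/backend" // assumed import path
)

func runPool(ctx gocontext.Context, conn *amqp.Connection, provider backend.Provider,
	gen worker.BuildScriptGenerator, canceller worker.Canceller) error {

	pool := worker.NewProcessorPool("worker-host-1", ctx,
		50*time.Minute, 10*time.Minute,
		conn, provider, gen, canceller)

	// Run blocks until every processor in the pool has finished.
	return pool.Run(2, "builds.test")
}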

func (*ProcessorPool) Size added in v0.5.1

func (p *ProcessorPool) Size() int

Size returns the number of processors in the pool.

type RepositoryPayload

type RepositoryPayload struct {
	ID   uint64 `json:"id"`
	Slug string `json:"slug"`
}

RepositoryPayload contains information about the repository.

type TimeoutsPayload added in v0.5.0

type TimeoutsPayload struct {
	HardLimit  uint64 `json:"hard_limit"`
	LogSilence uint64 `json:"log_silence"`
}

TimeoutsPayload contains information about any custom timeouts. The timeouts are given in seconds, and a value of 0 means no custom timeout is set.
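One way to handle the zero-means-unset convention is to fall back to a default, as in this hypothetical helper (the worker import path is an assumption):

package example

import (
	"time"

	worker "github.com/travis-ci/worker" // assumed import path
)

// hardTimeout treats a zero hard_limit as "no custom timeout" and falls
// back to the caller-supplied default.
func hardTimeout(t worker.TimeoutsPayload, fallback time.Duration) time.Duration {
	if t.HardLimit == 0 {
		return fallback
	}
	return time.Duration(t.HardLimit) * time.Second
}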

Directories

Path Synopsis
cmd
Package metrics provides easy methods to send metrics
Package workerintegration contains various integration tests for Worker.
