Documentation ¶
Index ¶
- type Scheduler
- type Server
- type ServerOption
- func BaseContext(ctx func() context.Context) ServerOption
- func Concurrency(v int) ServerOption
- func ErrorHandler(f asynq.ErrorHandlerFunc) ServerOption
- func IsFailure(f func(error) bool) ServerOption
- func Logger(logger asynq.Logger) ServerOption
- func Middleware(m ...asynq.MiddlewareFunc) ServerOption
- func Queues(v map[string]int) ServerOption
- func RedisConnOpt(v asynq.RedisConnOpt) ServerOption
- func RetryDelayFunc(f asynq.RetryDelayFunc) ServerOption
- func StrictPriority(v bool) ServerOption
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Scheduler ¶ added in v1.0.5
func NewScheduler ¶ added in v1.0.5
func NewScheduler(r asynq.RedisConnOpt, opts *asynq.SchedulerOpts) *Scheduler
func (*Scheduler) Unregister ¶ added in v1.0.5
type Server ¶
func NewServer ¶
func NewServer(opts ...ServerOption) *Server
func (*Server) Handle ¶
Handle registers the handler for the given pattern. If a handler already exists for pattern, Handle panics.
func (*Server) HandleFunc ¶
func (s *Server) HandleFunc(pattern string, handler asynq.HandlerFunc)
HandleFunc registers the handler function for the given pattern.
type ServerOption ¶
type ServerOption func(*Server)
ServerOption is an Asynq server option.
func BaseContext ¶
func BaseContext(ctx func() context.Context) ServerOption
BaseContext optionally specifies a function that returns the base context for Handler invocations on this server.
If BaseContext is nil, the default is context.Background(). If this is defined, then it MUST return a non-nil context.
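A minimal, stdlib-only sketch of a function that could be passed to the BaseContext option is shown below; the ctxKey type and the "worker" value are hypothetical illustrations, not part of this package. The only hard requirement from the documentation is that the function return a non-nil context:

```go
package main

import (
	"context"
	"fmt"
)

// ctxKey is a hypothetical key type for request-scoped values,
// avoiding collisions with keys from other packages.
type ctxKey string

// baseContext returns a fresh base context carrying a service name.
// A function with this shape could be supplied via BaseContext.
func baseContext() context.Context {
	// The returned context must be non-nil.
	return context.WithValue(context.Background(), ctxKey("service"), "worker")
}

func main() {
	ctx := baseContext()
	fmt.Println(ctx.Value(ctxKey("service")))
}
```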
func Concurrency ¶
func Concurrency(v int) ServerOption
Concurrency sets the maximum number of tasks to process concurrently.
func ErrorHandler ¶
func ErrorHandler(f asynq.ErrorHandlerFunc) ServerOption
ErrorHandler sets the handler for errors returned by the task handler.
It is invoked only if the task handler returns a non-nil error.
Example:

    func reportError(ctx context.Context, task *asynq.Task, err error) {
        retried, _ := asynq.GetRetryCount(ctx)
        maxRetry, _ := asynq.GetMaxRetry(ctx)
        if retried >= maxRetry {
            err = fmt.Errorf("retry exhausted for task %s: %w", task.Type, err)
        }
        errorReportingService.Notify(err)
    }

    ErrorHandler: asynq.ErrorHandlerFunc(reportError)
func IsFailure ¶
func IsFailure(f func(error) bool) ServerOption
IsFailure sets the predicate function used to determine whether the error returned from the Handler counts as a failure. If the function returns false, the Server will not increment the retried counter for the task, and will not record queue stats (processed and failed) to avoid skewing the error rate of the queue.
By default, if the given error is non-nil the function returns true.
func Middleware ¶
func Middleware(m ...asynq.MiddlewareFunc) ServerOption
Middleware sets the service middleware for the server.
func Queues ¶
func Queues(v map[string]int) ServerOption
Queues sets the list of queues to process, with a given priority value for each. Keys are queue names and values are the associated priority values.
If set to nil or not specified, the server will process only the "default" queue.
Priority is treated as follows to avoid starving low priority queues.
Example:
    Queues: map[string]int{
        "critical": 6,
        "default":  3,
        "low":      1,
    }
With the above config, and provided that all queues are non-empty, the tasks in "critical", "default" and "low" should be processed 60%, 30% and 10% of the time, respectively.
If a queue has a zero or negative priority value, the queue will be ignored.
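The weighting above can be checked with a small stdlib-only calculation. The shares helper below is hypothetical, not part of this package; it simply mirrors the documented rules (positive priorities are normalized into percentages, zero or negative priorities are ignored):

```go
package main

import "fmt"

// shares converts queue priority values into approximate processing
// percentages, mirroring how the Queues option weights queues.
// Queues with zero or negative priority are ignored.
func shares(queues map[string]int) map[string]float64 {
	total := 0
	for _, p := range queues {
		if p > 0 {
			total += p
		}
	}
	out := make(map[string]float64)
	for name, p := range queues {
		if p > 0 {
			out[name] = float64(p) / float64(total) * 100
		}
	}
	return out
}

func main() {
	s := shares(map[string]int{"critical": 6, "default": 3, "low": 1})
	fmt.Printf("critical=%.0f%% default=%.0f%% low=%.0f%%\n",
		s["critical"], s["default"], s["low"])
}
```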
func RedisConnOpt ¶
func RedisConnOpt(v asynq.RedisConnOpt) ServerOption
func RetryDelayFunc ¶
func RetryDelayFunc(f asynq.RetryDelayFunc) ServerOption
RetryDelayFunc sets the function used to calculate the retry delay for a failed task.
By default, it uses an exponential backoff algorithm to calculate the delay.
func StrictPriority ¶
func StrictPriority(v bool) ServerOption
StrictPriority indicates whether the queue priority should be treated strictly.
If set to true, tasks in the queue with the highest priority are processed first. Tasks in lower priority queues are processed only when all higher priority queues are empty.