Documentation ¶
Index ¶
- func AmqpExecutor(fn GraphiteReturner, consumer rabbitmq.Consumer, cache *lru.Cache)
- func ChanExecutor(fn GraphiteReturner, jobQueue JobQueue, cache *lru.Cache)
- func Construct()
- func Dispatcher(jobQueue JobQueue)
- func GraphiteAuthContextReturner(org_id int64) (bgraphite.Context, error)
- func Init(metrics met.Backend)
- func LoadOrSetOffset() int
- type Check
- type CheckDef
- type CheckEvaluator
- type GraphiteCheckEvaluator
- type GraphiteReturner
- type Job
- type JobQueue
- type PreAMQPJobQueue
- type Ticker
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func AmqpExecutor ¶
func AmqpExecutor(fn GraphiteReturner, consumer rabbitmq.Consumer, cache *lru.Cache)
AmqpExecutor reads jobs from rabbitmq, executes them, and acknowledges them if they processed successfully or encountered a fatal error (i.e. an error that we know won't recover on future retries, so there is no point in retrying).
func ChanExecutor ¶
func ChanExecutor(fn GraphiteReturner, jobQueue JobQueue, cache *lru.Cache)
func Dispatcher ¶
func Dispatcher(jobQueue JobQueue)
Dispatcher dispatches, every second, all jobs that should run in that second. Every job has an id so that you can run multiple dispatchers (for HA) while still processing each job only once (provided jobs get consistently routed to executors).
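The "consistently routed" requirement can be sketched with a stable hash of the job id: every dispatcher computes the same route for the same job, so running several dispatchers for HA does not double-process jobs. The modulo scheme and the `routeJob` name are a hypothetical illustration, not the package's actual routing code.

```go
package main

import "fmt"

// routeJob picks which executor should handle a job. Because the mapping
// depends only on the job id and the executor count, every dispatcher
// instance agrees on the route without any coordination.
func routeJob(jobID int64, numExecutors int) int {
	return int(jobID % int64(numExecutors))
}

func main() {
	// Two independent dispatchers computing the same route:
	fmt.Println(routeJob(42, 4)) // both pick executor 2
	fmt.Println(routeJob(43, 4)) // both pick executor 3
}
```

A real deployment would want a scheme that is also stable under executor churn (e.g. consistent hashing), but plain modulo shows the invariant.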
func Init ¶
func Init(metrics met.Backend)
Init initializes all metrics. Run this function when statsd is ready, so we can create the series.
func LoadOrSetOffset ¶
func LoadOrSetOffset() int
Types ¶
type Check ¶
type Check struct {
	// do we need these members here?
	//Id           int64
	//OrgId        int64
	//DataSourceId int64
	Definition CheckDef
}
type CheckEvaluator ¶
type CheckEvaluator interface {
Eval() (*m.CheckEvalResult, error)
}
type GraphiteCheckEvaluator ¶
type GraphiteCheckEvaluator struct {
	Context graphite.Context
	Check   CheckDef
	// contains filtered or unexported fields
}
func NewGraphiteCheckEvaluator ¶
func NewGraphiteCheckEvaluator(c graphite.Context, check CheckDef) (*GraphiteCheckEvaluator, error)
func (*GraphiteCheckEvaluator) Eval ¶
func (ce *GraphiteCheckEvaluator) Eval(ts time.Time) (m.CheckEvalResult, error)
Eval evaluates the crit/warn expressions and returns the result, along with any non-fatal error (implying the query should be retried later, once a temporary infra problem resolves) as well as fatal errors. TODO: instrument error scenarios.
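A caller honoring this error contract retries on temporary errors and gives up immediately on fatal ones. The sketch below uses a hypothetical `eval` callback and `errTemporary` sentinel to stand in for the evaluator; it is not the package's actual retry code.

```go
package main

import (
	"errors"
	"fmt"
)

// errTemporary is a hypothetical sentinel standing in for a non-fatal
// error (e.g. a transient graphite outage) that should be retried.
var errTemporary = errors.New("temporary: graphite unreachable")

// evalWithRetry retries eval up to maxTries times while it keeps
// returning temporary errors; success or a fatal error stops the loop.
func evalWithRetry(eval func() error, maxTries int) error {
	var err error
	for i := 0; i < maxTries; i++ {
		err = eval()
		if err == nil || !errors.Is(err, errTemporary) {
			return err // success, or fatal: stop retrying
		}
	}
	return err // still failing after maxTries temporary errors
}

func main() {
	tries := 0
	err := evalWithRetry(func() error {
		tries++
		if tries < 3 {
			return errTemporary // fails twice, then recovers
		}
		return nil
	}, 5)
	fmt.Println(err == nil, tries) // succeeds on the third attempt
}
```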
type Job ¶
type Job struct {
	OrgId           int64
	MonitorId       int64
	EndpointId      int64
	EndpointName    string
	EndpointSlug    string
	Settings        map[string]string
	MonitorTypeName string
	Notifications   m.MonitorNotificationSetting
	Freq            int64
	Offset          int64 // offset on top of "even" minute/10s/... intervals
	Definition      CheckDef
	GeneratedAt     time.Time
	LastPointTs     time.Time
	AssertMinSeries int       // to verify during execution that at least this many series are returned (would be nice at some point to include the actual number of collectors)
	AssertStart     time.Time // to verify timestamps in the response
	AssertStep      int       // to verify step duration
	AssertSteps     int       // to verify during execution that this many points are included
}
Job is a job for an alert execution. Note that LastPointTs denotes the timestamp of the last point to run against; this way the check always runs on the right data, irrespective of execution delays. That said, for convenience, we also track the GeneratedAt timestamp.
func (Job) StoreResult ¶
func (job Job) StoreResult(res m.CheckEvalResult)
type PreAMQPJobQueue ¶
type PreAMQPJobQueue struct {
// contains filtered or unexported fields
}
func (PreAMQPJobQueue) Put ¶
func (jq PreAMQPJobQueue) Put(job *Job)
type Ticker ¶
Ticker is a ticker to power the alerting scheduler. It's like a time.Ticker, except:
- it doesn't drop ticks for slow receivers; rather, it queues them up, so that callers stay in control and can instrument what's going on.
- it automatically ticks every second, which is the right thing in our current design
- it ticks on second marks, or very shortly after. This provides a predictable load pattern (it shouldn't cause much load contention, because the next steps in the pipeline process at their own pace)
- the timestamps are used to mark "last datapoint to query for" and as such, are a configurable amount of seconds in the past
- because we want to allow:
- a clean "resume where we left off" and "don't yield ticks we already did"
- adjusting the offset over time, to compensate for storage backing up, or getting faster and providing lower latency
you specify a lastProcessed timestamp as well as an offset, at creation time or at runtime