Documentation ¶
Overview ¶
Package expvastic can read from any endpoint that provides expvar data and ship it to Elasticsearch. You can inspect the metrics with Kibana.
A dashboard is provided here: https://github.com/arsham/expvastic/blob/master/bin/dashboard.json
Please refer to Go's expvar documentation for more information.
Here are a couple of screenshots: http://i.imgur.com/6kB88g4.png and http://i.imgur.com/0ROSWsM.png
Installation guides can be found on the GitHub page: https://github.com/arsham/expvastic
At the heart of this package is the Engine. It acts as the glue between a Reader and a Recorder. Messages are transferred in a package called DataContainer, which is a list of DataType objects.
Here is an example configuration; save it somewhere (let's call it expvastic.yml for now):
    settings:
        debug_level: info

    readers:
        FirstApp: # service name
            type: expvar
            type_name: my_app1 # this is the elasticsearch type name
            endpoint: localhost:1234
            routepath: /debug/vars
            interval: 500ms
            timeout: 3s
            log_level: debug
            backoff: 10
        SomeApplication:
            type: expvar
            type_name: SomeApplication
            endpoint: localhost:1235
            routepath: /debug/vars
            interval: 500ms
            timeout: 13s
            log_level: debug
            backoff: 10

    recorders:
        main_elasticsearch:
            type: elasticsearch
            endpoint: 127.0.0.1:9200
            index_name: expvastic
            timeout: 8s
            backoff: 10
        the_other_elasticsearch:
            type: elasticsearch
            endpoint: 127.0.0.1:9200
            index_name: expvastic
            timeout: 18s
            backoff: 10

    routes:
        route1:
            readers:
                - FirstApp
            recorders:
                - main_elasticsearch
        route2:
            readers:
                - FirstApp
                - SomeApplication
            recorders:
                - main_elasticsearch
        route3:
            readers:
                - SomeApplication
            recorders:
                - main_elasticsearch
                - the_other_elasticsearch
Then run the application:
    expvastic -c expvastic.yml
You can mix and match the routes, but the engine will choose the best setup to achieve your goal without duplicating the results. For instance, assume you set the routes like this:
    readers:
        app_0:
        app_1:
        app_2:

    recorders:
        elastic_0:
        elastic_1:
        elastic_2:
        elastic_3:

    routes:
        route1:
            readers:
                - app_0
                - app_2
            recorders:
                - elastic_1
        route2:
            readers:
                - app_0
            recorders:
                - elastic_1
                - elastic_2
                - elastic_3
        route3:
            readers:
                - app_1
                - app_2
            recorders:
                - elastic_1
                - elastic_0
Expvastic creates three engines like so:
    Data from app_0 will be shipped to: elastic_1, elastic_2 and elastic_3
    Data from app_1 will be shipped to: elastic_0 and elastic_1
    Data from app_2 will be shipped to: elastic_0 and elastic_1
For running tests:
go test $(glide nv)
To get test coverage, use this gist: https://gist.github.com/arsham/f45f7e7eea7e18796bc1ed5ced9f9f4a. Then run:
goverall
To run the benchmarks:
go test $(glide nv) -run=^$ -bench=.
To see the memory and CPU profiles, run in each folder:
    BASENAME=$(basename $(pwd))
    go test -run=^$ -bench=. -cpuprofile=cpu.out -benchmem -memprofile=mem.out
    go tool pprof -pdf $BASENAME.test cpu.out > cpu.pdf && open cpu.pdf
    go tool pprof -pdf $BASENAME.test mem.out > mem.pdf && open mem.pdf
Use of this source code is governed by the Apache 2.0 license that can be found in the LICENSE file.
Index ¶
Examples ¶
Constants ¶
This section is empty.
Variables ¶
var (
	// ErrEmptyRecName is for when the recorder's name is an empty string.
	ErrEmptyRecName = fmt.Errorf("recorder name empty")

	// ErrDupRecName is for when there are two recorders with the same name.
	ErrDupRecName = fmt.Errorf("recorder name duplicated")
)
Functions ¶
func StartEngines ¶ added in v0.1.0
func StartEngines(ctx context.Context, log logrus.FieldLogger, confMap *config.ConfMap) (chan struct{}, error)
StartEngines creates the Engines and returns a channel that is closed when they have done their work. For each route, we need one engine that has one reader and writes to multiple recorders. This is because:
    1 - Readers should not interfere with each other by engaging the recorders.
    2 - When a reader goes out of scope, we can safely stop its recorders.
When a recorder goes out of scope, the Engine stops sending to that recorder.
Types ¶
type Engine ¶ added in v0.0.4
type Engine struct {
// contains filtered or unexported fields
}
Engine represents an engine that receives information from readers and ships it to recorders. The Engine is allowed to change the index and type names at will. When the context times out or is cancelled, the engine closes the job channels by calling the Stop method. Note that we could create a channel and distribute the recorders' payload, but we don't, because there would be no way to find out which recorder errored right after the payload was sent. IMPORTANT: the readers should not close their streams; the Engine closes them.
Example (SendingJobs) ¶
package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"time"

	"github.com/arsham/expvastic"
	"github.com/arsham/expvastic/lib"
	"github.com/arsham/expvastic/reader"
	"github.com/arsham/expvastic/recorder"
)

func main() {
	var res *reader.ReadJobResult
	log := lib.DiscardLogger()
	ctx, cancel := context.WithCancel(context.Background())
	desire := `{"the key": "is the value!"}`
	recorded := make(chan string)

	redTs := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		io.WriteString(w, desire)
	}))
	defer redTs.Close()

	recTs := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		recorded <- "Job was recorded"
	}))
	defer recTs.Close()

	jobChan := make(chan context.Context)
	resultChan := make(chan *reader.ReadJobResult)
	payloadChan := make(chan *recorder.RecordJob)

	ctxReader := reader.NewCtxReader(redTs.URL)
	red, _ := reader.NewSimpleReader(log, ctxReader, jobChan, resultChan, "reader_example", "typeName", 10*time.Millisecond, 10*time.Millisecond)
	rec, _ := recorder.NewSimpleRecorder(ctx, log, payloadChan, "reader_example", recTs.URL, "indexName", 10*time.Millisecond)
	redDone := red.Start(ctx)
	recDone := rec.Start(ctx)

	cl, err := expvastic.NewWithReadRecorder(ctx, log, 0, red, rec)
	fmt.Println("Engine creation success:", err == nil)
	clDone := cl.Start()

	select {
	case red.JobChan() <- ctx:
		fmt.Println("Just sent a job request")
	case <-time.After(time.Second):
		panic("expected the reader to receive the job, but it blocked")
	}
	fmt.Println(<-recorded)

	select {
	case res = <-red.ResultChan():
		fmt.Println("Job operation success:", res.Err == nil)
	case <-time.After(5 * time.Second): // should be longer than the interval, otherwise the response is not ready yet
		panic("expected to receive the data back, nothing received")
	}

	buf := new(bytes.Buffer)
	buf.ReadFrom(res.Res)
	fmt.Println("Reader just received payload:", buf.String())

	cancel()
	_, open := <-redDone
	fmt.Println("Reader closure:", !open)
	_, open = <-recDone
	fmt.Println("Recorder closure:", !open)
	_, open = <-clDone
	fmt.Println("Client closure:", !open)
}
Output:

    Engine creation success: true
    Just sent a job request
    Job was recorded
    Job operation success: true
    Reader just received payload: {"the key": "is the value!"}
    Reader closure: true
    Recorder closure: true
    Client closure: true
func NewWithConfig ¶ added in v0.1.1
func NewWithConfig(
	ctx context.Context,
	log logrus.FieldLogger,
	readChanBuff, readResChanBuff, recChanBuff, recResChan int,
	red config.ReaderConf,
	recorders ...config.RecorderConf,
) (*Engine, error)
NewWithConfig instantiates a reader and recorders from the configurations. readChanBuff, readResChanBuff, recChanBuff and recResChan are the channel buffer sizes. Please refer to the benchmarks for how to choose the best values.
func NewWithReadRecorder ¶ added in v0.1.1
func NewWithReadRecorder(ctx context.Context, logger logrus.FieldLogger, recResChan int, red reader.DataReader, recs ...recorder.DataRecorder) (*Engine, error)
NewWithReadRecorder creates an instance with an already-made reader and recorders. It spawns one reader and streams its payload to all recorders. It returns an error if any recorders share the same name, or if any of them has no name.
Directories ¶
Path | Synopsis
---|---
cmd |
config | Package config reads the configurations from a yaml file and produces necessary configuration for instantiating readers and recorders.
datatype | Package datatype contains necessary logic to sanitise a JSON object coming from a reader.
lib | Package lib contains some functionalities needed for expvastic.
reader | Package reader contains logic for reading from a provider.
reader/expvar | Package expvar contains logic to read from an expvar provider.
reader/self | Package self contains code for recording expvastic's own metrics.
recorder | Package recorder contains logic to record data into a database.
recorder/elasticsearch | Package elasticsearch contains logic to record data to an elasticsearch index.