expvastic

package module
v0.5.0
Published: Dec 30, 2016 License: Apache-2.0 Imports: 13 Imported by: 0

README

Expvastic


Expvastic records your application's metrics in Elasticsearch, and you can view them with Kibana. It can read from any application (written in any language) that provides its metrics in JSON format.

  1. Features
  2. Installation
  3. Kibana
  4. Usage
  5. LICENSE

Features

  • Very lightweight and fast.
  • Can read from multiple inputs.
  • Can ship the metrics to multiple databases.
  • Shows memory usage and GC pauses of the apps.
  • Metrics can be aggregated for different apps (with Elasticsearch's type system).
  • A Kibana dashboard is also provided here.
  • Maps values the way you define them. For example, you can change bytes to megabytes.
  • Benchmarks are included.

There are TODO items in the issues section. Feature requests are welcome!

Please refer to Go's expvar documentation for more information.

Screenshots can be found in this document.

Installation

A Docker image will be provided soon, but for now it needs to be installed manually. You need Go 1.7 and glide installed. Simply run:

go get github.com/arsham/expvastic
cd $GOPATH/src/github.com/arsham/expvastic
glide install
go install ./cmd/expvastic

You also need Elasticsearch and Kibana; here are a couple of Docker images you can start with:

docker run -d --restart always --name expvastic -p 9200:9200 --ulimit nofile=98304:98304 -v "/path/to/somewhere/expvastic":/usr/share/elasticsearch/data elasticsearch
docker run -d --restart always --name kibana -p 80:5601 --link expvastic:elasticsearch -p 5601:5601 kibana

Kibana

Access the dashboard on port 80 (or any other port you have exposed Kibana on; notice the -p 80:5601 above), and enter expvastic as the Index name or pattern in the Management section.

Select @timestamp for the Time-field name. In case it doesn't show up, click Index contains time-based events twice and it will provide you with the timestamp. Then click the Create button.

Importing Dashboard

Go to the Saved Objects section of Management, click the Import button, upload this file, and you're done!

One of the provided dashboards shows expvastic's own metrics; you can use the other one for everything you have defined in the configuration file.

Usage

With Flags

With this method you can only have one reader shipping to one recorder. See the next section for a more flexible setup. The defaults are sensible; you only need to point the app to the two endpoints and it does the rest for you:

expvastic -reader="localhost:1234/debug/vars" -recorder="localhost:9200"

For more flags run:

expvastic -h

Advanced

Please refer to this document for advanced configuration and mappings.

LICENSE

Use of this source code is governed by the Apache 2.0 license that can be found in the LICENSE file.

Enjoy!

Documentation

Overview

Package expvastic can read from any endpoint that provides expvar data and ship it to Elasticsearch. You can inspect the metrics with Kibana.

Please refer to Go's expvar documentation for more information. Installation guides can be found on the GitHub page: https://github.com/arsham/expvastic

At the heart of this package is the Engine. It acts as the glue between a Reader and a Recorder. Messages are transferred in a DataContainer, which is a list of DataType objects.

Here is an example configuration; save it somewhere (let's call it expvastic.yml for now):

settings:
    log_level: info

readers:                           # Specify the applications you want to collect metrics from
    FirstApp:                      # service name
        type: expvar               # the type of the reader. More to come soon!
        type_name: AppVastic       # this will be the _type in Elasticsearch
        endpoint: localhost:1234   # the application's address
        routepath: /debug/vars     # the path at which the app provides the metrics
        interval: 500ms            # collect the metrics every half a second
        timeout: 3s                # give up if the application doesn't respond within 3 seconds
        backoff: 10                # stop reading from the application after 10 failed attempts
    AnotherApplication:
        type: expvar
        type_name: this_is_awesome
        endpoint: localhost:1235
        routepath: /metrics
        interval: 500ms
        timeout: 13s
        backoff: 10

recorders:                         # This section is where the data will be shipped to
    main_elasticsearch:
        type: elasticsearch        # the type of recorder. More to come soon!
        endpoint: 127.0.0.1:9200
        index_name: expvastic
        timeout: 8s
        backoff: 10
    the_other_elasticsearch:
        type: elasticsearch
        endpoint: 127.0.0.1:9201
        index_name: expvastic
        timeout: 18s
        backoff: 10

routes:                            # Specify which applications' metrics will be recorded in which targets
    route1:
        readers:
            - FirstApp
        recorders:
            - main_elasticsearch
    route2:
        readers:
            - FirstApp
            - AnotherApplication
        recorders:
            - main_elasticsearch
    route3:                      # Yes, you can have multiple!
        readers:
            - AnotherApplication
        recorders:
            - main_elasticsearch
            - the_other_elasticsearch

Then run the application:

expvastic -c expvastic.yml

You can mix and match the routes, and the engine will choose the best setup to achieve your goal without duplicating the results. For instance, assume you set the routes like this:

readers:
    app_0: type: expvar
    app_1: type: expvar
    app_2: type: expvar
    app_3: type: expvar
    app_4: type: expvar
    app_5: type: expvar
    not_used_app: type: expvar # note that this one is not specified in the routes, therefore it is ignored
recorders:
    elastic_0: type: elasticsearch
    elastic_1: type: elasticsearch
    elastic_2: type: elasticsearch
    elastic_3: type: elasticsearch
routes:
    route1:
        readers:
            - app_0
            - app_2
            - app_4
        recorders:
            - elastic_1
    route2:
        readers:
            - app_0
            - app_5
        recorders:
            - elastic_2
            - elastic_3
    route3:
        readers:
            - app_1
            - app_2
        recorders:
            - elastic_0
            - elastic_1

Expvastic creates four engines like so:

elastic_0 records data from app_1, app_2
elastic_1 records data from app_0, app_1, app_2, app_4
elastic_2 records data from app_0, app_5
elastic_3 records data from app_0, app_5

You can change the value mappings to your liking:

gc_types:                      # These inputs will be collected into one list, and zero values will be removed
    memstats.PauseEnd
    memstats.PauseNs

memory_bytes:                   # These values will be transformed from bytes
    StackInuse: mb              # to MB
    memstats.Alloc: gb          # to GB

To run the tests for the code, run in the root of the application:

go test $(glide nv)

Or for testing readers:

go test ./readers

To show the coverage, see this gist: https://gist.github.com/arsham/f45f7e7eea7e18796bc1ed5ced9f9f4a. Then run:

goverall

It will open a browser tab and show you the coverage.

To run all benchmarks:

go test $(glide nv) -run=^$ -bench=.

To show the memory and CPU profiles, run in each folder:

BASENAME=$(basename $(pwd))
go test -run=^$ -bench=. -cpuprofile=cpu.out -benchmem -memprofile=mem.out
go tool pprof -pdf $BASENAME.test cpu.out > cpu.pdf && open cpu.pdf
go tool pprof -pdf $BASENAME.test mem.out > mem.pdf && open mem.pdf

Use of this source code is governed by the Apache 2.0 license that can be found in the LICENSE file.

Index

Examples

Constants

This section is empty.

Variables

View Source
var (
	// ErrDuplicateRecorderName is for when there are two recorders with the same name.
	ErrDuplicateRecorderName = fmt.Errorf("recorder name cannot be reused")
)

Functions

func StartEngines added in v0.1.0

func StartEngines(ctx context.Context, log logrus.FieldLogger, confMap *config.ConfMap) (chan struct{}, error)

StartEngines creates some Engines and returns a channel that closes when it has done its work. For each route, we need one engine that has multiple readers and writes to one recorder. When all recorders of one reader go out of scope, the Engine stops that reader because there is no destination left.

Types

type Engine added in v0.0.4

type Engine struct {
	// contains filtered or unexported fields
}

Engine represents an engine that receives information from readers and ships it to a recorder. The Engine is allowed to change the index and type names at will. When the context times out or is cancelled, the engine will close the job channels by calling its stop method. It will send a stop signal to the readers and recorders, asking them to finish their jobs, and will time out the stop signals if it doesn't receive a response. Note that we could create a channel and distribute the recorder's payload, but we didn't, because there is no way to find out which recorder errors right after the payload has been sent. IMPORTANT: the readers should not close their streams; the Engine closes them.

Example (SendingJobs)
package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"time"

	"github.com/arsham/expvastic"
	"github.com/arsham/expvastic/communication"
	"github.com/arsham/expvastic/lib"
	"github.com/arsham/expvastic/reader"
	"github.com/arsham/expvastic/recorder"
)

func main() {
	log := lib.DiscardLogger()
	ctx, cancel := context.WithCancel(context.Background())
	recorded := make(chan string)

	redTs := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		desire := `{"the key": "is the value!"}`
		io.WriteString(w, desire)
	}))
	defer redTs.Close()

	recTs := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		recorded <- "Job was recorded"
	}))
	defer recTs.Close()

	jobChan := make(chan context.Context)
	errorChan := make(chan communication.ErrorMessage, 10)
	resultChan := make(chan *reader.ReadJobResult)
	payloadChan := make(chan *recorder.RecordJob)

	red, _ := reader.NewSimpleReader(log, redTs.URL, jobChan, resultChan, errorChan, "reader_example", "typeName", time.Hour, time.Hour) // We want to issue manually
	rec, _ := recorder.NewSimpleRecorder(ctx, log, payloadChan, errorChan, "reader_example", recTs.URL, "indexName", time.Hour)

	e, err := expvastic.NewWithReadRecorder(ctx, log, errorChan, resultChan, rec, red)
	done := make(chan struct{})
	go func() {
		e.Start()
		done <- struct{}{}
	}()
	fmt.Println("Engine creation success:", err == nil)

	select {
	case jobChan <- communication.NewReadJob(ctx):
		fmt.Println("Just sent a job request")
	case <-time.After(1 * time.Second):
		panic("expected the reader to receive the job, but it blocked")
	}

	fmt.Println(<-recorded)

	select {
	case <-errorChan:
		panic("expected no errors")
	case <-time.After(10 * time.Millisecond):
		fmt.Println("No errors reported!")
	}
	// We can check again
	// Both readers and recorders produce errors if they need to
	select {
	case <-errorChan:
		panic("expected no errors")
	case <-time.After(10 * time.Millisecond):
		fmt.Println("No errors reported!")
	}

	cancel()
	<-done
	fmt.Println("Client closed gracefully")

}
Output:

Engine creation success: true
Just sent a job request
Job was recorded
No errors reported!
No errors reported!
Client closed gracefully

func NewWithConfig added in v0.1.1

func NewWithConfig(ctx context.Context, log logrus.FieldLogger,
	readChanBuff, readResChanBuff, recChanBuff int,
	recorderConf config.RecorderConf, readers ...config.ReaderConf) (*Engine, error)

NewWithConfig instantiates the readers and recorders from the configurations and passes them to NewWithReadRecorder. The engine's work starts from there. readChanBuff, readResChanBuff and recChanBuff are the channel buffer sizes. Please refer to the benchmarks for how to choose the best values.

func NewWithReadRecorder added in v0.1.1

func NewWithReadRecorder(ctx context.Context, log logrus.FieldLogger, errorChan <-chan communication.ErrorMessage,
	readerResChan <-chan *reader.ReadJobResult, rec recorder.DataRecorder, reds ...reader.DataReader) (*Engine, error)

NewWithReadRecorder creates an instance of Engine with already-made readers and a recorder. It streams all readers' payloads to the recorder. It returns an error if there are recorders with the same name, or if any of them have no name.

func (*Engine) Start added in v0.0.4

func (e *Engine) Start()

Start begins pulling the data from the DataReaders and ships it to the DataRecorder. When the context is cancelled or timed out, the engine closes all job channels and sends them a stop signal.

Directories

Path Synopsis
cmd
communication
Package communication contains the necessary logic for passing messages, returning errors and stop signals.
config
Package config reads the configurations from a yaml file and produces the necessary configuration for instantiating readers and recorders.
datatype
Package datatype contains the necessary logic to sanitise a JSON object coming from a reader.
lib
Package lib contains some functionality needed for expvastic.
reader
Package reader contains logic for reading from a provider.
expvar
Package expvar contains logic to read from an expvar provider.
self
Package self contains code for recording expvastic's own metrics.
recorder
Package recorder contains logic to record data into a database.
elasticsearch
Package elasticsearch contains logic to record data to an elasticsearch index.
