Documentation ¶
Index ¶
- func CollectJSONStream(ctx context.Context, opts CollectJSONOptions) error
- func ConvertFromCSV(ctx context.Context, bucketSize int, input io.Reader, output io.Writer) error
- func DumpCSV(ctx context.Context, iter *ChunkIterator, prefix string) error
- func FlushCollector(c Collector, writer io.Writer) error
- func NewWriterCollector(chunkSize int, writer io.WriteCloser) io.WriteCloser
- func WriteCSV(ctx context.Context, iter *ChunkIterator, writer io.Writer) error
- type Chunk
- type ChunkIterator
- type CollectJSONOptions
- type Collector
- func NewBaseCollector(maxSize int) Collector
- func NewBatchCollector(maxSamples int) Collector
- func NewDynamicCollector(maxSamples int) Collector
- func NewSamplingCollector(minimumInterval time.Duration, collector Collector) Collector
- func NewStreamingCollector(maxSamples int, writer io.Writer) Collector
- func NewStreamingDynamicCollector(max int, writer io.Writer) Collector
- func NewStreamingDynamicUncompressedCollectorBSON(maxSamples int, writer io.Writer) Collector
- func NewStreamingDynamicUncompressedCollectorJSON(maxSamples int, writer io.Writer) Collector
- func NewStreamingUncompressedCollectorBSON(maxSamples int, writer io.Writer) Collector
- func NewStreamingUncompressedCollectorJSON(maxSamples int, writer io.Writer) Collector
- func NewUncompressedCollectorBSON(maxSamples int) Collector
- func NewUncompressedCollectorJSON(maxSamples int) Collector
- type CollectorInfo
- type Iterator
- type Metric
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func CollectJSONStream ¶
func CollectJSONStream(ctx context.Context, opts CollectJSONOptions) error
CollectJSONStream provides a blocking process that reads newline-separated JSON documents from a source and creates FTDC data from them.
The CollectJSONOptions structure allows you to define the collection intervals and also specify the source. The collector supports reading directly from an arbitrary io.Reader, or from a file. The "follow" option allows you to watch the end of a file for new JSON documents, a la "tail -f".
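For example, a minimal sketch that tails a file of newline-separated JSON documents (the prefix, sample count, and interval below are illustrative values, not defaults):

    opts := CollectJSONOptions{
        OutputFilePrefix: "metrics",        // output files use this prefix
        SampleCount:      100,              // samples collected per flush (illustrative)
        FlushInterval:    10 * time.Second, // how often data is flushed (illustrative)
        FileName:         "events.json",    // newline-separated JSON source
        Follow:           true,             // keep watching the file, a la "tail -f"
    }

    if err := CollectJSONStream(ctx, opts); err != nil {
        return err
    }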
func ConvertFromCSV ¶
ConvertFromCSV takes an input stream and writes ftdc compressed data to the provided output writer.
If the number of fields in a CSV record changes, the first record with the new number of fields becomes the header for the subsequent documents in the stream.
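For example, a minimal sketch of converting a CSV file into FTDC data (the bucketSize of 1000 is illustrative, and is assumed to control how many records are grouped into each chunk):

    in, err := os.Open("metrics.csv")
    if err != nil {
        return err
    }
    defer in.Close()

    out, err := os.Create("metrics.ftdc")
    if err != nil {
        return err
    }
    defer out.Close()

    return ConvertFromCSV(ctx, 1000, in, out)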
func DumpCSV ¶
func DumpCSV(ctx context.Context, iter *ChunkIterator, prefix string) error
DumpCSV writes a sequence of chunks to CSV files, creating a new file whenever the iterator detects a schema change (using only the number of fields in a chunk to detect changes). DumpCSV writes a header row to each file.
The file names are constructed as "prefix.<count>.csv".
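For example, a minimal sketch (the "metrics" prefix is illustrative):

    iter := ReadChunks(ctx, file)
    defer iter.Close()

    // produces files named "metrics.<count>.csv"
    if err := DumpCSV(ctx, iter, "metrics"); err != nil {
        return err
    }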
func FlushCollector ¶
FlushCollector writes the contents of a collector out to an io.Writer. This is useful in the context of any collector, but is particularly useful in the context of streaming collectors, which flush data periodically and may have cached data.
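For example, a sketch that writes the contents of a basic collector to an open file (the documents variable is a placeholder for your data source):

    collector := NewBaseCollector(1000)

    for _, doc := range documents {
        if err := collector.Add(doc); err != nil {
            return err
        }
    }

    // write the collected samples to the file
    if err := FlushCollector(collector, file); err != nil {
        return err
    }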
func NewWriterCollector ¶
func NewWriterCollector(chunkSize int, writer io.WriteCloser) io.WriteCloser
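NewWriterCollector has no doc comment; the sketch below assumes that each Write call receives a single marshaled BSON document and that Close flushes any remaining buffered data to the wrapped writer:

    wc := NewWriterCollector(1000, file)

    for _, payload := range payloads {
        // each payload is assumed to be one marshaled BSON document
        if _, err := wc.Write(payload); err != nil {
            return err
        }
    }

    // assumed to flush buffered data and close the underlying writer
    return wc.Close()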
Types ¶
type Chunk ¶
type Chunk struct {
    Metrics []Metric
    // contains filtered or unexported fields
}
Chunk represents a 'metric chunk' of data in the FTDC.
func (*Chunk) GetMetadata ¶
func (*Chunk) Iterator ¶
Iterator returns an iterator that you can use to read documents for each sample period in the chunk. Documents are returned in collection order, with keys flattened into dot-separated, fully qualified paths.
The documents are constructed from the metrics data lazily.
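For example, a minimal sketch (the method is assumed to take a context, which is not shown in this listing):

    iter := chunk.Iterator(ctx)
    defer iter.Close()

    for iter.Next() {
        doc := iter.Document() // one flattened document per sample period
        _ = doc
    }

    if err := iter.Err(); err != nil {
        return err
    }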
func (*Chunk) StructuredIterator ¶
StructuredIterator returns the contents of the chunk as a sequence of documents that (mostly) resemble the original source documents (with the non-metrics fields omitted). The output documents mirror the structure of the input documents.
type ChunkIterator ¶
type ChunkIterator struct {
// contains filtered or unexported fields
}
ChunkIterator is a simple iterator for reading off of an FTDC data source (e.g. file). The iterator lazily processes chunks, which are batches of metrics, reading from the io.Reader each time the iterator is advanced.
Use the iterator as follows:
    iter := ReadChunks(ctx, file)

    for iter.Next() {
        chunk := iter.Chunk()
        // <manipulate chunk>
    }

    if err := iter.Err(); err != nil {
        return err
    }
You MUST call the Chunk() method no more than once per iteration.
You should check the Err() method when the iterator is complete to see whether there were any issues encountered while decoding chunks.
func ReadChunks ¶
func ReadChunks(ctx context.Context, r io.Reader) *ChunkIterator
ReadChunks creates a ChunkIterator from an underlying FTDC data source.
func (*ChunkIterator) Chunk ¶
func (iter *ChunkIterator) Chunk() *Chunk
Chunk returns a copy of the chunk processed by the iterator. You must call Chunk no more than once per iteration. Additional accesses to Chunk will panic.
func (*ChunkIterator) Close ¶
func (iter *ChunkIterator) Close()
Close releases resources of the iterator. Use this method to release those resources if you stop iterating before the iterator is exhausted. Canceling the context that you used to create the iterator has the same effect.
func (*ChunkIterator) Err ¶
func (iter *ChunkIterator) Err() error
Err returns a non-nil error if the iterator encountered any errors during iteration.
func (*ChunkIterator) Next ¶
func (iter *ChunkIterator) Next() bool
Next advances the iterator and returns true if the iterator has a chunk that is unprocessed. Use the Chunk() method to access the current chunk.
type CollectJSONOptions ¶
type CollectJSONOptions struct {
    OutputFilePrefix string
    SampleCount      int
    FlushInterval    time.Duration
    InputSource      io.Reader `json:"-"`
    FileName         string
    Follow           bool
}
CollectJSONOptions specifies options for a JSON2FTDC collector. You must specify either an InputSource reader or a FileName.
type Collector ¶
type Collector interface {
    // SetMetadata sets the metadata document for the collector or
    // chunk. This document is optional. Pass a nil to unset it,
    // or a different document to override a previous operation.
    SetMetadata(interface{}) error

    // Add extracts metrics from a document and appends it to the
    // current collector. These documents MUST all be
    // identical including field order. Returns an error if there
    // is a problem parsing the document or if the number of
    // metrics collected changes.
    Add(interface{}) error

    // Resolve renders the existing documents and outputs the full
    // FTDC chunk as a byte slice to be written out to storage.
    Resolve() ([]byte, error)

    // Reset clears the collector for future use.
    Reset()

    // Info reports on the current state of the collector for
    // introspection and to support schema change and payload
    // size.
    Info() CollectorInfo
}
Collector describes the interface for collecting and constructing FTDC data series. Implementations may have different efficiencies and handling of schema changes.
The SetMetadata and Add methods both take interface{} values. These are converted to bson documents; however, it is an error to pass a type based on a map.
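A typical collection loop, sketched with the base collector (the metadata document, the documents slice, and the output file are placeholders):

    collector := NewBaseCollector(1000)

    if err := collector.SetMetadata(metadata); err != nil { // optional
        return err
    }

    for _, doc := range documents {
        if err := collector.Add(doc); err != nil {
            return err
        }
    }

    payload, err := collector.Resolve() // the rendered FTDC chunk
    if err != nil {
        return err
    }

    _, err = file.Write(payload)
    return err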
func NewBaseCollector ¶
NewBaseCollector provides a basic FTDC data collector that mirrors the server's implementation. The Add method will error if you attempt to add more than the specified number of records (plus one, as the reference/schema document doesn't count).
func NewBatchCollector ¶
NewBatchCollector constructs a collector implementation that builds data chunks with payloads of the specified number of samples. This implementation allows you to break data into smaller components for more efficient read operations.
func NewDynamicCollector ¶
NewDynamicCollector constructs a Collector that records metrics from documents, creating new chunks when either the number of samples collected exceeds the specified max sample count OR the schema changes.
There is some overhead associated with detecting schema changes, particularly for documents with more complex schemas, so you may wish to opt for a simpler collector in some cases.
func NewSamplingCollector ¶
NewSamplingCollector wraps a different collector implementation and provides an implementation of the Add method that skips collection of results if the specified minimumInterval has not elapsed since the last collection.
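For example, a sketch that keeps at most one sample per second regardless of how often Add is called (the wrapped dynamic collector and sample count are illustrative):

    collector := NewSamplingCollector(time.Second, NewDynamicCollector(1000))

    for _, doc := range documents {
        if err := collector.Add(doc); err != nil { // calls within a second of the last sample are skipped
            return err
        }
    }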
func NewStreamingCollector ¶
NewStreamingCollector wraps the underlying collector, writing the data to the underlying writer after the underlying collector is filled. This is similar to the batch collector, but allows the collector to drop FTDC data from memory. Chunks are flushed to disk when the collector has collected the "maxSamples" number of samples during the Add operation.
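For example, a sketch where full chunks are written out during Add and the remaining cached samples are written by FlushCollector at the end:

    collector := NewStreamingCollector(1000, file)

    for _, doc := range documents {
        if err := collector.Add(doc); err != nil { // full chunks are flushed to file here
            return err
        }
    }

    // write whatever samples the collector still holds in memory
    return FlushCollector(collector, file)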
func NewStreamingDynamicCollector ¶
NewStreamingDynamicCollector has the same semantics as the dynamic collector but wraps the streaming collector rather than the batch collector. Chunks are flushed during the Add() operation when the schema changes or the chunk is full.
func NewStreamingDynamicUncompressedCollectorBSON ¶
NewStreamingDynamicUncompressedCollectorBSON constructs a collector that resolves data into a stream of BSON documents. The output of these uncompressed collectors does not use the FTDC encoding for data, and can be read with bsondump and other related utilities.
The metadata for this collector is rendered as the first document in the stream. Additionally, the collector will automatically handle schema changes by flushing the previous batch.
All data is written to the writer when the underlying collector has captured its target number of samples, and is automatically flushed to the writer during the write operation or when you call Close. You can also use the FlushCollector helper.
func NewStreamingDynamicUncompressedCollectorJSON ¶
NewStreamingDynamicUncompressedCollectorJSON constructs a collector that resolves data into a stream of JSON documents. The output of these uncompressed collectors does not use the FTDC encoding for data, and can be read as newline-separated JSON.
The metadata for this collector is rendered as the first document in the stream. Additionally, the collector will automatically handle schema changes by flushing the previous batch.
All data is written to the writer when the underlying collector has captured its target number of samples, and is automatically flushed to the writer during the write operation or when you call Close. You can also use the FlushCollector helper.
func NewStreamingUncompressedCollectorBSON ¶
NewStreamingUncompressedCollectorBSON constructs a collector that resolves data into a stream of BSON documents. The output of these uncompressed collectors does not use the FTDC encoding for data, and can be read with bsondump and other related utilities.
This collector will not allow you to collect documents with different schema (determined by the number of top-level fields.)
The metadata for this collector is rendered as the first document in the stream.
All data is written to the writer when the underlying collector has captured its target number of samples, and is automatically flushed to the writer during the write operation or when you call Close. You can also use the FlushCollector helper.
func NewStreamingUncompressedCollectorJSON ¶
NewStreamingUncompressedCollectorJSON constructs a collector that resolves data into a stream of JSON documents. The output of these uncompressed collectors does not use the FTDC encoding for data, and can be read as newline-separated JSON.
This collector will not allow you to collect documents with different schema (determined by the number of top-level fields.)
The metadata for this collector is rendered as the first document in the stream.
All data is written to the writer when the underlying collector has captured its target number of samples, and is automatically flushed to the writer during the write operation or when you call Close. You can also use the FlushCollector helper.
func NewUncompressedCollectorBSON ¶
NewUncompressedCollectorBSON constructs a collector that resolves data into a stream of BSON documents. The output of these uncompressed collectors does not use the FTDC encoding for data, and can be read with bsondump and other related utilities.
This collector will not allow you to collect documents with different schema (determined by the number of top-level fields.)
If you do not resolve the collector after receiving the maximum number of samples, then additional Add operations will fail.
The metadata for this collector is rendered as the first document in the stream.
func NewUncompressedCollectorJSON ¶
NewUncompressedCollectorJSON constructs a collector that resolves data into a stream of JSON documents. The output of these uncompressed collectors does not use the FTDC encoding for data, and can be read as newline-separated JSON.
This collector will not allow you to collect documents with different schema (determined by the number of top-level fields.)
If you do not resolve the collector after receiving the maximum number of samples, then additional Add operations will fail.
The metadata for this collector is rendered as the first document in the stream.
type CollectorInfo ¶
CollectorInfo reports on the current state of the collector and provides introspection into that state for testing, transparency, and to support more complex collector features, including payload size controls and schema change handling.
type Iterator ¶
type Iterator interface {
    Next() bool
    Document() *bsonx.Document
    Metadata() *bsonx.Document
    Err() error
    Close()
}
func ReadMatrix ¶
ReadMatrix returns a "matrix format" for the data in a chunk. The documents returned by the iterator represent the entire chunk, in flattened form, with each field representing a single metric as an array of all values for the event.
The matrix documents have full type fidelity, but are not substantially less expensive to produce than full iteration.
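A sketch of consuming the matrix iterator (the signature is not shown here and is assumed to mirror ReadChunks, taking a context and an io.Reader):

    iter := ReadMatrix(ctx, file)
    defer iter.Close()

    for iter.Next() {
        doc := iter.Document() // one document per chunk; each field holds an array of values
        _ = doc
    }

    if err := iter.Err(); err != nil {
        return err
    }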
func ReadMetrics ¶
ReadMetrics returns a standard document iterator that reads FTDC chunks. The Documents returned by the iterator are flattened.
func ReadSeries ¶
ReadSeries is similar to the ReadMatrix format, producing a single document per chunk that contains the flattened keys for that chunk, mapped to arrays of all the values in the chunk.
The matrix documents have better type fidelity than raw chunks but do not properly collapse the bson timestamp type. To use the values produced by the iterator, consider marshaling them directly to a map[string]interface{} and using a type switch on the values in the map, such as:
    switch v.(type) {
    case []int32:
        // ...
    case []int64:
        // ...
    case []bool:
        // ...
    case []time.Time:
        // ...
    case []float64:
        // ...
    }
The *bsonx.Document type does, however, support iteration directly.
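Putting this together, a sketch of consuming the series iterator (the signature is assumed to mirror ReadChunks, taking a context and an io.Reader):

    iter := ReadSeries(ctx, file)
    defer iter.Close()

    for iter.Next() {
        doc := iter.Document()
        // convert doc into a map[string]interface{} as suggested above, then
        // dispatch on the value types with a switch like the one shown earlier
        _ = doc
    }

    if err := iter.Err(); err != nil {
        return err
    }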
type Metric ¶
type Metric struct {
    // For metrics that were derived from nested BSON documents,
    // this preserves the path to the field, in support of being
    // able to reconstitute metrics/chunks as a stream of BSON
    // documents.
    ParentPath []string

    // KeyName is the specific field name of a metric. It is
    // *not* fully qualified with its parent document path; use
    // the Key() method to access a value with more appropriate
    // user-facing context.
    KeyName string

    // Values is an array of each value collected for this metric.
    // While decoding, this attribute stores delta-encoded values,
    // but those are expanded during the decoding process and
    // should never be visible to the user.
    Values []int64

    // contains filtered or unexported fields
}
Metric represents an item in a chunk.
Source Files ¶
- bson_extract.go
- bson_hash.go
- bson_matrix.go
- bson_metric.go
- bson_restore.go
- collector.go
- collector_batch.go
- collector_better.go
- collector_dynamic.go
- collector_sample.go
- collector_streaming.go
- collector_uncompressed.go
- csv.go
- ftdc.go
- iterator.go
- iterator_chunk.go
- iterator_combined.go
- iterator_matrix.go
- iterator_sample.go
- json.go
- read.go
- util.go
- writer.go
Directories ¶
Path | Synopsis
---|---
bsonx | Package bsonx is a bson library built on top of the legacy bson.Document type for building and interacting with bson documents.
bsonx/bsontype | Package bsontype is a utility package that contains types for each BSON type and a stringifier for the Type to enable easier debugging when working with BSON.
bsonx/elements | Package elements holds the logic to encode and decode the BSON element types from native Go to BSON binary and vice versa.
events | Package events contains a number of different data types and formats that you can use to populate ftdc metrics series.
hdrhistogram | Package hdrhistogram provides an implementation of Gil Tene's HDR Histogram data structure.
metrics | Package metrics includes data types used for Golang runtime and system metrics collection.