Documentation ¶
Index ¶
- Constants
- Variables
- func Unmarshal(data []Result, v interface{}) error
- type CloudWatchLogsAction
- type CloudWatchLogsActions
- type Config
- type InvalidUnmarshalError
- type Logger
- type QueryManager
- type QuerySpec
- type Result
- type ResultField
- type StartQueryError
- type Stats
- type StatsGetter
- type Stream
- type TerminalQueryStatusError
- type UnexpectedQueryError
- type UnmarshalResultFieldValueError
Examples ¶
Constants ¶
const (
	// QueryConcurrencyQuotaLimit contains the CloudWatch Logs Query
	// Concurrency service quota limit as documented at
	// https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/cloudwatch_limits_cwl.html.
	//
	// The documented service quota may increase over time, in which case
	// this value should be updated to match the documentation.
	QueryConcurrencyQuotaLimit = 10

	// DefaultParallel is the default maximum number of parallel
	// CloudWatch Logs Insights queries a QueryManager will attempt to
	// run at any one time.
	//
	// The default value is set to slightly less than the service quota
	// limit to leave some concurrency available for other users even if
	// the QueryManager is at maximum capacity.
	DefaultParallel = QueryConcurrencyQuotaLimit - 2

	// DefaultLimit is the default result count limit used if the Limit
	// field of a QuerySpec is zero or negative.
	DefaultLimit = 1000

	// MaxLimit is the maximum value the result count limit field in a
	// QuerySpec may be set to.
	MaxLimit = 10000
)
const TimeLayout = "2006-01-02 15:04:05.000"
TimeLayout is a Go time layout which documents the format of the time values returned within the timestamp fields of CloudWatch Logs Insights queries.
TimeLayout defines the format by showing how the Go reference time of
Mon Jan 2 15:04:05 -0700 MST 2006
would be formatted if it were the value. TimeLayout can be used with time.Parse to parse timestamp fields, such as @timestamp and @ingestionTime, which are returned within CloudWatch Logs Insights query results.
Variables ¶
var (
	// ErrClosed is the error returned by a read or query operation
	// when the underlying stream or query manager has been closed.
	ErrClosed = errors.New("incite: operation on a closed object")
)
var NopLogger = nopLogger(0)
NopLogger is a Logger that ignores any messages sent to it.
var RPSDefaults = map[CloudWatchLogsAction]int{
	StartQuery:      RPSQuotaLimits[StartQuery] - 2,
	StopQuery:       RPSQuotaLimits[StopQuery] - 2,
	GetQueryResults: RPSQuotaLimits[GetQueryResults] - 2,
}
RPSDefaults specifies the default maximum number of requests per second which the QueryManager will make to the CloudWatch Logs web service for each CloudWatch Logs action. These default values are operative unless explicitly overridden in the Config structure passed to NewQueryManager.
var RPSQuotaLimits = map[CloudWatchLogsAction]int{
	StartQuery:      5,
	StopQuery:       5,
	GetQueryResults: 5,
}
RPSQuotaLimits contains the CloudWatch Logs service quota limits on the number of requests per second for each CloudWatch Logs API action before requests fail due to a throttling error, as documented in the AWS service quotas documentation.
The documented service quotas may increase over time, in which case the map values should be updated to match the increases.
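For instance, a Config could raise one action's request rate while keeping the other defaults. This is a sketch of a configuration fragment, not a complete program; the `actions` variable stands for an already-configured value satisfying CloudWatchLogsActions:

```go
// Sketch: override one action's RPS while keeping the other defaults.
// `actions` is assumed to be a value satisfying
// incite.CloudWatchLogsActions, e.g. an AWS SDK CloudWatch Logs client.
m := incite.NewQueryManager(incite.Config{
	Actions: actions,
	RPS: map[incite.CloudWatchLogsAction]int{
		// Use the full documented quota for GetQueryResults instead
		// of the default quota-minus-2.
		incite.GetQueryResults: incite.RPSQuotaLimits[incite.GetQueryResults],
	},
})
defer m.Close()
```

Actions not listed in the map continue to use the values from RPSDefaults.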
Functions ¶
func Unmarshal ¶
func Unmarshal(data []Result, v interface{}) error
Unmarshal converts CloudWatch Logs Insights result data into the user-defined type indicated by v, and stores the result in the value pointed to by v.
The argument v must contain a non-nil pointer whose ultimate target is a slice, array, or interface value. If v ultimately targets an interface{}, it is treated as if it targets a []map[string]string.
The element type of the array or slice must target a map type, struct type, or one of two special cases. The two special cases allow array or slice elements of type interface{} and Result. If the element type targets a map, the map's keys must be strings and its value type must target a string type, interface{}, or any type that implements encoding.TextUnmarshaler.
To unmarshal data into an array or slice of maps, Unmarshal uses the ResultField name as the map key and the ResultField value as its value. If the map value targets an encoding.TextUnmarshaler, the value's UnmarshalText method is used to unmarshal the ResultField value. If the map value targets a string type, the ResultField's value is directly inserted as the field value in the map. As a special case, if the map value targets interface{}, Unmarshal first tries to unmarshal the ResultField value as JSON using json.Unmarshal, and falls back to the plain string value if JSON unmarshaling fails.
To unmarshal data into a struct type, Unmarshal uses the following top-level rules:
• A struct field with an "incite" tag receives the value of the ResultField field named in the tag. Unmarshaling of the field value is done according to additional rules discussed below. If the tag is "-" the field is ignored. If the field type does not ultimately target a struct field unmarshallable type, an InvalidUnmarshalError is returned.
• A struct field with a "json" tag receives the value of the ResultField field named in the tag using the json.Unmarshal function from the encoding/json package with the ResultField value as the input JSON and the struct field address as the target. If the tag is "-" the field is ignored. The field type is not checked for validity.
• An "incite" tag takes precedence over a "json" tag so there is no point using both tags on the same struct field.
• A struct field with no "incite" or "json" tag receives the value of the ResultField field sharing the same case-sensitive name as the struct field, but only if the field type ultimately targets a struct field unmarshallable type. Otherwise the field is ignored.
The following types are considered struct field unmarshallable types:
bool
int, int8, int16, int32, int64
uint, uint8, uint16, uint32, uint64
float32, float64
interface{}
Any map, struct, slice, or array type
A struct field targeting interface{} or any map, struct, slice, or array type is assumed to contain valid JSON and unmarshalled using json.Unmarshal. Any other field is decoded from its string representation using the intuitive approach. As a special case, if a CloudWatch Logs timestamp field (@timestamp or @ingestionTime) is named in an "incite" tag, it may only target a time.Time or string value. If it targets a time.Time, the value is decoded using TimeLayout with the time.Parse function. As a further special case, Incite's intermediate result deletion field (@deleted) may only target a bool or string field.
If a target type rule is violated, Unmarshal returns InvalidUnmarshalError.
If a result field value cannot be decoded, Unmarshal continues decoding the remaining input data on a best effort basis, and after processing all the data, returns an UnmarshalResultFieldValueError describing the first such decoding problem encountered.
The value pointed to by v may have changed even if Unmarshal returns an error.
Example (Interface) ¶
package main

import (
	"fmt"

	"github.com/gogama/incite"
)

func main() {
	// An interface{} is treated as []map[string]string. The Object
	// key's value is not deserialized from JSON, it remains a string.
	data := []incite.Result{
		{{"@ptr", "abc123"}, {"Object", `{"key":"value"}`}},
	}
	var v interface{}
	_ = incite.Unmarshal(data, &v) // Error ignored for simplicity.
	fmt.Println(v)
}
Output: [map[@ptr:abc123 Object:{"key":"value"}]]
Example (MapStringInterface) ¶
package main

import (
	"fmt"

	"github.com/gogama/incite"
)

func main() {
	// As a special case, the data are unmarshalled fuzzily if the
	// target is a map[string]interface{}. If a value is valid JSON it
	// is unmarshalled as JSON, otherwise it is kept as a string. Here
	// the Object and QuotedString fields contain valid JSON so they
	// unmarshal as a map and string, respectively. UnquotedString is
	// not valid JSON and stays as a string.
	data := []incite.Result{
		{
			{"Object", `{"key":"value"}`},
			{"QuotedString", `"hello"`},
			{"UnquotedString", `world`},
		},
	}
	var v []map[string]interface{}
	_ = incite.Unmarshal(data, &v) // Error ignored for simplicity.
	fmt.Println(v)
}
Output: [map[Object:map[key:value] QuotedString:hello UnquotedString:world]]
Example (MapStringString) ¶
package main

import (
	"fmt"

	"github.com/gogama/incite"
)

func main() {
	data := []incite.Result{
		{{"@ptr", "foo"}, {"@message", "bar"}},
	}
	var v []map[string]string
	_ = incite.Unmarshal(data, &v) // Error ignored for simplicity.
	fmt.Println(v)
}
Output: [map[@message:bar @ptr:foo]]
Example (Pointer) ¶
package main

import (
	"fmt"

	"github.com/gogama/incite"
)

func main() {
	// Pointers are followed in the intuitive manner.
	data := []incite.Result{{{"hello", "world"}}}
	var v *[]map[string]**string
	_ = incite.Unmarshal(data, &v) // Error ignored for simplicity.
	fmt.Println(**(*v)[0]["hello"])
}
Output: world
Example (Struct) ¶
package main

import (
	"fmt"
	"time"

	"github.com/gogama/incite"
)

func main() {
	data := []incite.Result{
		{
			{"@ptr", "row1"},
			{"@timestamp", "2021-07-17 01:00:01.012"},
			{"@message", `{}`},
			{"DiscoveredField", "1234.5"},
		},
		{
			{"@ptr", "row2"},
			{"@timestamp", "2021-07-17 01:00:03.999"},
			{"@message", `{"foo":"bar","ham":"eggs"}`},
		},
	}
	var v []struct {
		Timestamp       time.Time              `incite:"@timestamp"`
		Message         map[string]interface{} `json:"@message"`
		DiscoveredField float64
		// Many other mappings are possible. See Unmarshal documentation
		// for details.
	}
	_ = incite.Unmarshal(data, &v) // Error ignored for simplicity.
	fmt.Println(v)
}
Output: [{2021-07-17 01:00:01.012 +0000 UTC map[] 1234.5} {2021-07-17 01:00:03.999 +0000 UTC map[foo:bar ham:eggs] 0}]
Types ¶
type CloudWatchLogsAction ¶
type CloudWatchLogsAction int
CloudWatchLogsAction represents a single enumerated CloudWatch Logs action.
const (
	// StartQuery indicates the CloudWatch Logs StartQuery action.
	StartQuery CloudWatchLogsAction = iota
	// StopQuery indicates the CloudWatch Logs StopQuery action.
	StopQuery
	// GetQueryResults indicates the CloudWatch Logs GetQueryResults action.
	GetQueryResults
)
type CloudWatchLogsActions ¶
type CloudWatchLogsActions interface {
	StartQueryWithContext(context.Context, *cloudwatchlogs.StartQueryInput, ...request.Option) (*cloudwatchlogs.StartQueryOutput, error)
	StopQueryWithContext(context.Context, *cloudwatchlogs.StopQueryInput, ...request.Option) (*cloudwatchlogs.StopQueryOutput, error)
	GetQueryResultsWithContext(context.Context, *cloudwatchlogs.GetQueryResultsInput, ...request.Option) (*cloudwatchlogs.GetQueryResultsOutput, error)
}
CloudWatchLogsActions provides access to the CloudWatch Logs actions which QueryManager needs in order to run Insights queries using the CloudWatch Logs service.
This interface is compatible with the AWS SDK for Go (v1)'s cloudwatchlogsiface.CloudWatchLogsAPI interface and *cloudwatchlogs.CloudWatchLogs type, so you may use either of these AWS SDK types to provide the CloudWatch Logs action capabilities.
For example:
import (
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatchlogs"
)

var myActions incite.CloudWatchLogsActions = cloudwatchlogs.New(session.Must(session.NewSession()))

// Now you can use myActions with NewQueryManager to construct a new
// QueryManager.
type Config ¶
type Config struct {
	// Actions provides the CloudWatch Logs capabilities the QueryManager
	// needs to execute Insights queries against the CloudWatch Logs
	// service. If this value is nil then NewQueryManager panics.
	//
	// Normally Actions should be set to the value of an AWS SDK for Go
	// (v1) CloudWatch Logs client: both the cloudwatchlogsiface.CloudWatchLogsAPI
	// interface and the *cloudwatchlogs.CloudWatchLogs type are
	// compatible with the CloudWatchLogsActions interface. Use a
	// properly configured instance of one of these types to set the
	// value of the Actions field.
	Actions CloudWatchLogsActions

	// Parallel optionally specifies the maximum number of parallel
	// CloudWatch Logs Insights queries which the QueryManager may run
	// at one time. The purpose of Parallel is to avoid starving other
	// humans or systems using CloudWatch Logs Insights in the same AWS
	// account and region.
	//
	// If set to a positive number then that exact number is used as the
	// parallelism factor. If set to zero or a negative number then
	// DefaultParallel is used instead.
	//
	// Parallel gives the upper limit on the number of Insights queries
	// the QueryManager may have open at any one time. The actual number
	// of Insights queries may be lower either because of throttling or
	// service limit exceptions from the CloudWatch Logs web service, or
	// because the QueryManager simply doesn't need all the parallel
	// capacity.
	//
	// Note that an Insights query is not necessarily one-to-one with a
	// Query operation on a QueryManager. If the Query operation is
	// chunked, the QueryManager may create multiple Insights queries in
	// the CloudWatch Logs web service to fulfil the chunked Query
	// operation.
	Parallel int

	// RPS optionally specifies the maximum number of requests to the
	// CloudWatch Logs web service which the QueryManager may make in
	// each one-second period for each CloudWatch Logs action. The
	// purpose of RPS is to prevent the QueryManager or other humans or
	// systems using CloudWatch Logs in the same AWS account and region
	// from being throttled by the web service.
	//
	// If RPS has a missing, zero, or negative number for any required
	// CloudWatch Logs action, the value specified in RPSDefaults is
	// used instead. The default behavior should be adequate for many
	// use cases, so you typically will not need to set this field
	// explicitly.
	//
	// The values in RPS should ideally not exceed the corresponding
	// values in RPSQuotaLimits as this will almost certainly result in
	// throttling, worse performance, and your application being a "bad
	// citizen" affecting other users of the same AWS account.
	RPS map[CloudWatchLogsAction]int

	// Logger optionally specifies a logging object to which the
	// QueryManager can send log messages about queries it is managing.
	// This value may be left nil to skip logging altogether.
	Logger Logger

	// Name optionally gives the new QueryManager a friendly name, which
	// will be included in log messages the QueryManager emits to the
	// Logger.
	//
	// Other than being used in logging, this field has no effect on the
	// QueryManager's behavior.
	Name string
}
Config provides the NewQueryManager function with the information it needs to construct a new QueryManager.
type InvalidUnmarshalError ¶
type InvalidUnmarshalError struct {
	Type      reflect.Type
	RowType   reflect.Type
	Field     string
	FieldType reflect.Type
	Message   string
}
An InvalidUnmarshalError occurs when a value with an invalid type is passed to Unmarshal.
func (*InvalidUnmarshalError) Error ¶
func (e *InvalidUnmarshalError) Error() string
type Logger ¶
type Logger interface {
	// Printf sends a line of output to the logger. Arguments are
	// handled in the manner of fmt.Printf. If the formatted string
	// does not end in a newline, the logger implementation should
	// append a final newline.
	Printf(format string, v ...interface{})
}
A Logger represents a logging object which can receive log messages from a QueryManager and send them to an output sink.
Logger is compatible with *log.Logger. Therefore you may, for example, use log.Default(), or any other *log.Logger value, as the Logger field of a Config structure when constructing a new QueryManager using the NewQueryManager function.
type QueryManager ¶
type QueryManager interface {
	io.Closer
	StatsGetter

	// Query submits a CloudWatch Logs Insights query for execution,
	// returning a Stream from which the results may be read.
	Query(q QuerySpec) (Stream, error)
}
QueryManager executes one or more CloudWatch Logs Insights queries, optionally executing simultaneous queries in parallel.
QueryManager's job is to hide the complexity of the CloudWatch Logs Insights API, taking care of mundane details such as starting and polling Insights query jobs in the CloudWatch Logs service, breaking queries into smaller time chunks (if desired), de-duplicating and providing preview results (if desired), retrying transient request failures, and managing resources to try to stay within the CloudWatch Logs service quota limits.
Use NewQueryManager to create a QueryManager, and be sure to close it when you no longer need its services, since every QueryManager consumes a tiny amount of system resources just by existing.
Calling the Query method will return a result Stream from which the query results can be read as they become available. Use the Unmarshal function to unmarshal the bare results into other structured types.
Calling the Close method will immediately cancel all running queries started with the QueryManager, as if each query's Stream had been explicitly closed.
Calling the GetStats method will return the running sum of all statistics for all queries run within the QueryManager since it was created.
Example ¶
package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatchlogs"

	"github.com/gogama/incite"
)

func main() {
	s := session.Must(session.NewSession())
	a := cloudwatchlogs.New(s)
	m := incite.NewQueryManager(incite.Config{
		Actions: a,
	})
	defer func() {
		_ = m.Close()
	}()
	end := time.Now().Truncate(time.Second)
	str, err := m.Query(incite.QuerySpec{
		Text:   "fields @timestamp, @message | filter @message =~ /foo/ | sort @timestamp desc",
		Start:  end.Add(-15 * time.Minute),
		End:    end,
		Groups: []string{"/my/log/group"},
		Limit:  100,
	})
	if err != nil {
		fmt.Println("ERROR", err)
		return
	}
	data, err := incite.ReadAll(str)
	if err != nil {
		fmt.Println("ERROR", err)
	}
	fmt.Println("RESULTS", data)
}
Output:
func NewQueryManager ¶
func NewQueryManager(cfg Config) QueryManager
NewQueryManager returns a new query manager with the given configuration.
type QuerySpec ¶
type QuerySpec struct {
	// Text contains the actual text of the CloudWatch Insights query.
	//
	// Text must not contain an empty or blank string. Beyond checking
	// for blank text, Incite does not attempt to parse Text and simply
	// forwards it to the CloudWatch Logs service. Care must be taken to
	// specify query text compatible with the Chunk, Preview, and Split
	// fields, or the results may be confusing.
	//
	// To limit the number of results returned by the query, use the
	// Limit field, since the Insights API seems to ignore the `limit`
	// command when specified in the query text. Note that if the
	// QuerySpec specifies a chunked query, then Limit will apply to the
	// results obtained from each chunk, not to the global query.
	//
	// To learn the Insights query syntax, please see the official
	// documentation at:
	// https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_QuerySyntax.html.
	Text string

	// Groups lists the names of the CloudWatch Logs log groups to be
	// queried. It may not be empty.
	Groups []string

	// Start specifies the beginning of the time range to query,
	// inclusive of Start itself.
	//
	// Start must be strictly before End, and must represent a whole
	// number of seconds (it cannot have sub-second granularity).
	Start time.Time

	// End specifies the end of the time range to query, exclusive of
	// End itself.
	//
	// End must be strictly after Start, and must represent a whole
	// number of seconds (it cannot have sub-second granularity).
	End time.Time

	// Limit optionally specifies the maximum number of results to be
	// returned by the query.
	//
	// If Limit is zero or negative, the value DefaultLimit is used
	// instead. If Limit exceeds MaxLimit, the query operation will fail
	// with an error.
	//
	// In a chunked query, Limit applies to each chunk separately, so
	// up to (n × Limit) final results may be returned, where n is the
	// number of chunks.
	//
	// Note that as of 2021-07-15, the CloudWatch Logs StartQuery API
	// seems to ignore the `limit` command in the query text, so if you
	// want to apply a limit you must use the Limit field.
	Limit int64

	// Chunk optionally requests a chunked query and indicates the chunk
	// size.
	//
	// If Chunk is zero, negative, or greater than the difference
	// between End and Start, the query is not chunked. If Chunk is
	// positive and less than the difference between End and Start, the
	// query is broken into n chunks, where n is (End-Start)/Chunk,
	// rounded up to the nearest integer value. If Chunk is positive, it
	// must represent a whole number of seconds (cannot have sub-second
	// granularity).
	//
	// In a chunked query, each chunk is sent to the CloudWatch Logs
	// service as a separate Insights query. This can help large queries
	// complete before the CloudWatch Logs query timeout of 15 minutes,
	// and can increase performance because chunks can be run in
	// parallel. However, the following considerations should be taken
	// into account:
	//
	// • In a chunked query, Limit applies separately to each chunk. So
	// a query with 50 chunks and a limit of 50 could produce up to 2500
	// final results.
	//
	// • If Text contains a sort command, the sort will only apply
	// within each individual chunk. If the QuerySpec is executed by a
	// QueryManager configured with a parallelism factor above 1, then
	// the results may appear to be out of order since the order of
	// completion of chunks is not guaranteed.
	//
	// • If Text contains a stats command, the statistical aggregation
	// will be applied to each chunk in a chunked query, meaning up to n
	// versions of each aggregate data point may be returned, one per
	// chunk, necessitating further aggregation in your application
	// logic.
	//
	// • In general if you use chunking with query text which implies
	// any kind of server-side aggregation, you may need to perform
	// custom post-processing on the results.
	Chunk time.Duration

	// Preview optionally requests preview results from a running query.
	//
	// If Preview is true, intermediate results for the query are
	// sent to the result Stream as soon as they are available. This
	// can improve your application's responsiveness for the end user
	// but requires care since not all intermediate results are valid
	// members of the final result set.
	//
	// When Preview is true, the query result Stream may produce some
	// intermediate results which it later determines are invalid
	// because they shouldn't be final members of the result set. For
	// each such invalid result, an extra dummy Result will be sent to
	// the result Stream with the following structure:
	//
	//	incite.Result{
	//		{ Field: "@ptr", Value: "<Unique @ptr of the earlier invalid result>" },
	//		{ Field: "@deleted", Value: "true" },
	//	}
	//
	// The presence of the "@deleted" field can be used to identify and
	// delete the earlier invalid result sharing the same "@ptr" field.
	//
	// If the results from CloudWatch Logs do not contain an @ptr field,
	// the Preview option does not detect invalidated results and
	// consequently does not create dummy @deleted items. Since the
	// `stats` command creates results that do not contain @ptr,
	// applications should either avoid combining Preview mode with
	// `stats` or apply their own custom logic to eliminate obsolete
	// intermediate results.
	Preview bool

	// Priority optionally allows a query operation to be given a higher
	// or lower priority with regard to other query operations managed
	// by the same QueryManager. This allows your application to ensure
	// its higher priority work runs before lower priority work,
	// making the most efficient use of finite CloudWatch Logs service
	// resources.
	//
	// A lower number indicates a higher priority. The default zero
	// value is appropriate for many cases.
	//
	// The Priority field may be set to any valid int value. A query
	// whose Priority number is lower is allocated CloudWatch Logs
	// query capacity in preference to a query whose Priority number is
	// higher, but only within the same QueryManager.
	Priority int

	// SplitUntil specifies if, and how, the query time range, or the
	// query chunks, will be dynamically split into sub-chunks when
	// they produce the maximum number of results that CloudWatch Logs
	// Insights will return (MaxLimit).
	//
	// If SplitUntil is zero or negative, then splitting is disabled.
	// If positive, then splitting is enabled and SplitUntil must
	// represent a whole number of seconds (cannot have sub-second
	// granularity).
	//
	// To use splitting, you must also set Limit to MaxLimit and
	// Preview must be false.
	//
	// When splitting is enabled and a time range produces MaxLimit
	// results, the range is split into sub-chunks no larger than
	// SplitUntil. If a sub-chunk produces MaxLimit results, it is
	// recursively split into smaller sub-sub-chunks again no larger
	// than SplitUntil. The splitting process continues until either
	// the time range cannot be split into at least two chunks no
	// smaller than SplitUntil or the time range produces fewer than
	// MaxLimit results.
	SplitUntil time.Duration
}
QuerySpec specifies the parameters for a query operation either using the global Query function or a QueryManager's Query method.
type Result ¶
type Result []ResultField
Result represents a single result row from a CloudWatch Logs Insights query.
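As an illustration of how an application might apply the @deleted tombstones described under QuerySpec.Preview, the sketch below mirrors Result and ResultField locally so it runs without the incite module. applyDeletions and get are hypothetical helpers, not part of the incite API:

```go
package main

import "fmt"

// ResultField and Result mirror incite's types so this sketch is
// self-contained.
type ResultField struct{ Field, Value string }
type Result []ResultField

// get returns the value of the named field within a result, if present.
func get(r Result, name string) (string, bool) {
	for _, f := range r {
		if f.Field == name {
			return f.Value, true
		}
	}
	return "", false
}

// applyDeletions drops results whose @ptr was later tombstoned by a
// preview @deleted marker, along with the tombstones themselves.
func applyDeletions(rs []Result) []Result {
	deleted := map[string]bool{}
	for _, r := range rs {
		if d, ok := get(r, "@deleted"); ok && d == "true" {
			if ptr, ok := get(r, "@ptr"); ok {
				deleted[ptr] = true
			}
		}
	}
	var out []Result
	for _, r := range rs {
		if _, ok := get(r, "@deleted"); ok {
			continue // drop the tombstone itself
		}
		if ptr, ok := get(r, "@ptr"); ok && deleted[ptr] {
			continue // drop the invalidated preview result
		}
		out = append(out, r)
	}
	return out
}

func main() {
	rs := []Result{
		{{"@ptr", "a"}, {"@message", "keep"}},
		{{"@ptr", "b"}, {"@message", "stale"}},
		{{"@ptr", "b"}, {"@deleted", "true"}},
	}
	fmt.Println(len(applyDeletions(rs))) // 1
}
```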
func Query ¶
func Query(ctx context.Context, a CloudWatchLogsActions, q QuerySpec) ([]Result, error)
Query is sweet sweet sugar to perform a synchronous CloudWatch Logs Insights query and get back all the results without needing to construct a QueryManager. Query runs the query indicated by q, using the CloudWatch Logs actions provided by a, and returns all the query results (or the error if the query failed).
The context ctx controls the lifetime of the query. If ctx expires or is cancelled, the query is cancelled and an error is returned. If you don't need the ability to set a timeout or cancel the request, use context.Background().
Unlike NewQueryManager, which supports a configurable level of parallel query execution, Query uses a parallelism factor of 1. This means that if q represents a chunked query, the chunks will be run serially, in ascending order of time.
This function is intended for quick prototyping and simple scripting and command-line interface use cases. More complex applications, especially applications running concurrent queries against the same region from multiple goroutines, should construct and configure a QueryManager explicitly.
Example ¶
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatchlogs"

	"github.com/gogama/incite"
)

func main() {
	s := session.Must(session.NewSession())
	a := cloudwatchlogs.New(s)
	end := time.Now().Truncate(time.Second)
	data, err := incite.Query(context.Background(), a, incite.QuerySpec{
		Text:   "fields @timestamp, @message | filter @message =~ /foo/ | sort @timestamp desc",
		Start:  end.Add(-15 * time.Minute),
		End:    end,
		Groups: []string{"/my/log/group"},
		Limit:  100,
	})
	if err != nil {
		fmt.Println("ERROR", err)
		return
	}
	fmt.Println("RESULTS", data)
}
Output:
type ResultField ¶
type ResultField struct {
	Field string
	Value string
}
ResultField represents a single field name/field value pair within a Result.
type StartQueryError ¶ added in v0.9.1
type StartQueryError struct {
	// Text is the text of the query that could not be started.
	Text string

	// Start is the start time of the query chunk that could not be
	// started. If the query has more than one chunk, this could differ
	// from the value originally set in the QuerySpec.
	Start time.Time

	// End is the end time of the query chunk that could not be started.
	// If the query has more than one chunk, this could differ from the
	// value originally set in the QuerySpec.
	End time.Time

	// Cause is the causing error, which will typically be an AWS SDK
	// for Go error type.
	Cause error
}
StartQueryError is returned by Stream.Read to indicate that the CloudWatch Logs service API returned a fatal error when attempting to start a chunk of the stream's query.
When StartQueryError is returned by Stream.Read, the stream's query is considered failed and all subsequent reads on the stream will return an error.
func (*StartQueryError) Error ¶ added in v0.9.1
func (err *StartQueryError) Error() string
func (*StartQueryError) Unwrap ¶ added in v0.9.1
func (err *StartQueryError) Unwrap() error
type Stats ¶
type Stats struct {
	// BytesScanned is a metric returned by CloudWatch Logs Insights
	// which represents the total number of bytes of log events
	// scanned.
	BytesScanned float64

	// RecordsMatched is a metric returned by CloudWatch Logs Insights
	// which tallies the number of log events that matched the query or
	// queries.
	RecordsMatched float64

	// RecordsScanned is a metric returned by CloudWatch Logs Insights
	// which tallies the number of log events scanned during the query
	// or queries.
	RecordsScanned float64

	// RangeRequested is a metric collected by Incite which tallies the
	// accumulated time range requested in the query or queries.
	RangeRequested time.Duration

	// RangeStarted is a metric collected by Incite which tallies the
	// aggregate amount of query time for which the query has been
	// initiated in the CloudWatch Logs Insights web service. The value
	// in this field is always less than or equal to RangeRequested, and
	// it never decreases.
	//
	// For a non-chunked query, this field is either zero or the total
	// time range covered by the QuerySpec. For a chunked query, this
	// field reflects the accumulated time of all the chunks which have
	// been started. For a QueryManager, this field represents the
	// accumulated time of all started query chunks from all queries
	// submitted to the query manager.
	RangeStarted time.Duration

	// RangeDone is a metric collected by Incite which tallies the
	// aggregate amount of query time which the CloudWatch Logs Insights
	// service has successfully finished querying so far. The value in
	// this field is always less than or equal to RangeStarted, and it
	// never decreases.
	RangeDone time.Duration

	// RangeFailed is a metric collected by Incite which tallies the
	// aggregate amount of query time which was started in the
	// CloudWatch Logs Insights service but which ended with an error,
	// either because the Stream or QueryManager was closed, or because
	// the CloudWatch Logs service returned a non-retryable error.
	RangeFailed time.Duration

	// RangeMaxed is a metric collected by Incite which tallies the
	// aggregate amount of query time which the CloudWatch Logs Insights
	// service has successfully finished querying so far but for which
	// Insights returned the maximum number of results requested in the
	// QuerySpec Limit field, indicating that more results may be
	// available than were returned. The value in this field is always
	// less than or equal to RangeDone, and it never decreases.
	//
	// If RangeMaxed is zero, this indicates that no query chunks
	// produced the maximum number of results requested.
	//
	// RangeMaxed will usually not be increased by queries that use
	// dynamic chunk splitting. This is because chunk splitting will
	// continue recursively splitting until sub-chunks do not produce
	// the maximum number of results. However, when a chunk which
	// produces MaxLimit results is too small to split further, its
	// duration will be added to RangeMaxed.
	RangeMaxed time.Duration
}
Stats records metadata about query execution. When returned from a Stream, Stats contains metadata about the stream's query. When returned from a QueryManager, Stats contains accumulated metadata about all queries executed by the query manager.
The Stats structure contains two types of metadata.
The first type of metadata in Stats are returned from the CloudWatch Logs Insights web service and consist of metrics about the amount of data scanned by the query or queries. These metadata are contained within the fields BytesScanned, RecordsMatched, and RecordsScanned.
The second type of metadata in Stats are collected by Incite and consist of metrics about the size of the time range or ranges queried and how much progress has been made on the queries. These metadata can be useful, for example, for showing progress bars or other work in progress indicators. They are contained within the fields RangeRequested, RangeStarted, RangeDone, RangeFailed, and RangeMaxed.
type StatsGetter ¶
type StatsGetter interface {
GetStats() Stats
}
StatsGetter provides access to the Insights query statistics returned by the CloudWatch Logs API.
Both Stream and QueryManager contain the StatsGetter interface. Call the GetStats method on a Stream to get the query statistics for the stream's query. Call the GetStats method on a QueryManager to get the query statistics for all queries run within the QueryManager.
type Stream ¶
type Stream interface {
	io.Closer
	StatsGetter

	// Read reads up to len(p) CloudWatch Logs Insights results into p.
	// It returns the number of results read (0 <= n <= len(p)) and any
	// error encountered. Even if Read returns n < len(p), it may use
	// all of p as scratch space during the call. If some, but fewer
	// than len(p), results are available, Read conventionally
	// returns what is available instead of waiting for more.
	//
	// When Read encounters an error or end-of-file condition after
	// successfully reading n > 0 results, it returns the number of
	// results read. It may return the (non-nil) error from the same
	// call or return the error (and n == 0) from a subsequent call. An
	// instance of this general case is that a Stream returning a
	// non-zero number of results at the end of the input stream may
	// return either err == EOF or err == nil. The next Read should
	// return 0, EOF.
	//
	// Callers should always process the n > 0 results returned before
	// considering the error err. Doing so correctly handles I/O errors
	// that happen after reading some results and also both of the
	// allowed EOF behaviors.
	//
	// Implementations of Read are discouraged from returning a zero
	// result count with a nil error, except when len(p) == 0. Callers
	// should treat a return of 0 and nil as indicating that nothing
	// happened; in particular it does not indicate EOF.
	//
	// Implementations must not retain p.
	//
	// As a convenience, the ReadAll function may be used to read all
	// remaining results available in a Stream.
	//
	// If the query underlying the Stream failed permanently, then err
	// may be one of:
	//
	//	• StartQueryError
	//	• TerminalQueryStatusError
	//	• UnexpectedQueryError
	Read(p []Result) (n int, err error)
}
Stream provides access to the result stream from a query operation either using a QueryManager or the global Query function.
Use the Close method if you need to prematurely cancel the query operation, releasing the local (in-process) and remote (in the CloudWatch Logs service) resources it consumes.
Use the Read method to read query results from the stream. The Read method returns io.EOF when the entire results stream has been consumed. At this point the query is over and all local and remote resources have been released, so it is not necessary to close the Stream explicitly.
Use the GetStats method to obtain the Insights statistics pertaining to the query. Note that the results from the GetStats method may change over time as new results are pulled from the CloudWatch Logs web service, but will stop changing after the Read method returns io.EOF or any other error. If the query was chunked, the stats will be summed across multiple chunks.
type TerminalQueryStatusError ¶ added in v0.9.1
type TerminalQueryStatusError struct {
	// QueryID is the CloudWatch Logs Insights query ID of the chunk
	// that was reported in a terminal status.
	QueryID string
	// Status is the status string returned by CloudWatch Logs via the
	// GetQueryResults API action.
	Status string
	// Text is the text of the query that was reported in terminal
	// status.
	Text string
}
TerminalQueryStatusError is returned by Stream.Read when CloudWatch Logs Insights indicated that a chunk of the stream's query is in a failed status, such as Cancelled, Failed, or Timeout.
When TerminalQueryStatusError is returned by Stream.Read, the stream's query is considered failed and all subsequent reads on the stream will return an error.
func (*TerminalQueryStatusError) Error ¶ added in v0.9.1
func (err *TerminalQueryStatusError) Error() string
type UnexpectedQueryError ¶ added in v0.9.1
type UnexpectedQueryError struct {
	// QueryID is the CloudWatch Logs Insights query ID of the chunk
	// that experienced an unexpected event.
	QueryID string
	// Text is the text of the query for the chunk.
	Text string
	// Cause is the causing error.
	Cause error
}
UnexpectedQueryError is returned by Stream.Read when the CloudWatch Logs Insights API behaved unexpectedly while Incite was polling a chunk status via the CloudWatch Logs GetQueryResults API action.
When UnexpectedQueryError is returned by Stream.Read, the stream's query is considered failed and all subsequent reads on the stream will return an error.
func (*UnexpectedQueryError) Error ¶ added in v0.9.1
func (err *UnexpectedQueryError) Error() string
func (*UnexpectedQueryError) Unwrap ¶ added in v0.9.1
func (err *UnexpectedQueryError) Unwrap() error
type UnmarshalResultFieldValueError ¶
type UnmarshalResultFieldValueError struct {
	ResultField
	Cause       error
	ResultIndex int
	FieldIndex  int
}
An UnmarshalResultFieldValueError describes a failure to unmarshal a specific ResultField value within a specific Result.
func (*UnmarshalResultFieldValueError) Error ¶
func (e *UnmarshalResultFieldValueError) Error() string