Documentation ¶
Overview ¶
Package appdash provides a Go app performance tracing suite.
Appdash allows you to trace the end-to-end performance of hierarchically structured applications. You can, for example, measure the time and see the detailed information of each HTTP request and SQL query made by an entire distributed web application.
Web Front-end ¶
The cmd/appdash tool launches a web front-end that displays a UI for viewing collected app traces. It is effectively a remote collector to which your application can connect and send events.
Timing and application-specific metadata can be viewed in a timeline view for each span (e.g. an HTTP request) and its children.
The web front-end can also be embedded in your own Go HTTP server by utilizing the traceapp sub-package, which is effectively what cmd/appdash serves internally.
HTTP and SQL tracing ¶
Sub-packages for HTTP and SQL event tracing are provided for use with appdash, allowing it to function similarly to Google's Dapper and Twitter's Zipkin performance tracing suites.
Appdash Structure ¶
The most high-level structure is a Trace, which represents the performance of an application from start to finish (in an HTTP application, for example, the loading of a web page).
A Trace is a tree structure made up of several spans, which are just IDs (in an HTTP application, these IDs are passed through the stack via a few special headers).
Each span ID has a set of Events that directly correspond to it inside a Collector. These events can be any combination of message, log, time-span, or time-stamped events (the cmd/appdash web UI displays these events as appropriate).
Inside your application, a Recorder is used to send events to a Collector, which can be a remote HTTP(S) collector, a local in-memory or persistent collector, etc. Additionally, you can implement the Collector interface yourself and store events however you like.
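The sketch below ties these pieces together using MemoryStore, NewLocalCollector, NewRootSpanID and Recorder, all documented later on this page. It is an illustrative sketch, not taken from the package documentation; the span names and messages are arbitrary.

package main

import (
    "fmt"
    "log"

    "sourcegraph.com/sourcegraph/appdash"
)

func main() {
    // Collect spans into an in-memory store via a local collector.
    store := appdash.NewMemoryStore()
    collector := appdash.NewLocalCollector(store)

    // Record a root span (e.g. one HTTP request) and a child span.
    span := appdash.NewRootSpanID()
    rec := appdash.NewRecorder(span, collector)
    rec.Name("GET /home")
    rec.Log("handling request")

    child := rec.Child()
    child.Name("SELECT users")
    child.Finish()

    rec.Finish()

    // Read the collected trace back out of the store.
    trace, err := store.Trace(span.Trace)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(trace.TreeString())
}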
Index ¶
- Constants
- Variables
- func LogWithTimestamp(msg string, timestamp time.Time) logEvent
- func PersistEvery(s PersistentStore, interval time.Duration, file string) error
- func RegisterEvent(e Event)
- func UnmarshalEvent(as Annotations, e Event) error
- func UnmarshalEvents(anns Annotations, events *[]Event) error
- type AggregatedResult
- type Aggregator
- type Annotation
- type Annotations
- type ChunkedCollector
- type Collector
- type CollectorServer
- type DeleteStore
- type Event
- type EventMarshaler
- type EventSchemaUnmarshalError
- type EventUnmarshaler
- type ID
- type ImportantEvent
- type InfluxDBAdminUser
- type InfluxDBConfig
- type InfluxDBRetentionPolicy
- type InfluxDBStore
- func (in *InfluxDBStore) Aggregate(start, end time.Duration) ([]*AggregatedResult, error)
- func (in *InfluxDBStore) Close() error
- func (in *InfluxDBStore) Collect(id SpanID, anns ...Annotation) error
- func (in *InfluxDBStore) Trace(id ID) (*Trace, error)
- func (in *InfluxDBStore) Traces(opts TracesOpts) ([]*Trace, error)
- type LimitStore
- type MemoryStore
- func (ms *MemoryStore) Collect(id SpanID, as ...Annotation) error
- func (ms *MemoryStore) Delete(traces ...ID) error
- func (ms *MemoryStore) ReadFrom(r io.Reader) (int64, error)
- func (ms *MemoryStore) Trace(id ID) (*Trace, error)
- func (ms *MemoryStore) Traces(opts TracesOpts) ([]*Trace, error)
- func (ms *MemoryStore) Write(w io.Writer) error
- type PersistentStore
- type Queryer
- type RecentStore
- type Recorder
- func (r *Recorder) Annotation(as ...Annotation)
- func (r *Recorder) Child() *Recorder
- func (r *Recorder) Errors() []error
- func (r *Recorder) Event(e Event)
- func (r *Recorder) Finish()
- func (r *Recorder) Log(msg string)
- func (r *Recorder) LogWithTimestamp(msg string, timestamp time.Time)
- func (r *Recorder) Msg(msg string)
- func (r *Recorder) Name(name string)
- type RemoteCollector
- type Span
- type SpanID
- type Store
- type Timespan
- type TimespanEvent
- type TimestampedEvent
- type Trace
- type TracesOpts
Examples ¶
Constants ¶
const (
    // SpanIDDelimiter is the delimiter used to concatenate an
    // SpanID's components.
    SpanIDDelimiter = "/"
)
Variables ¶
var (
    // ErrBadSpanID is returned when the span ID cannot be parsed.
    ErrBadSpanID = errors.New("bad span ID")
)
var ErrQueueDropped = errors.New("ChunkedCollector queue entirely dropped (trace data will be missing)")
ErrQueueDropped is the error returned by ChunkedCollector.Flush and ChunkedCollector.Collect when the internal queue has grown too large and has subsequently been dropped.
var (
    // ErrTraceNotFound is returned by Store.Trace when no trace is
    // found with the given ID.
    ErrTraceNotFound = errors.New("trace not found")
)
Functions ¶
func LogWithTimestamp ¶
LogWithTimestamp returns an Event with an explicit timestamp that contains only a human readable message.
func PersistEvery ¶
func PersistEvery(s PersistentStore, interval time.Duration, file string) error
PersistEvery persists s's data to a file periodically.
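For example, a minimal sketch (the interval and file path are arbitrary) that snapshots a MemoryStore in the background; it assumes PersistEvery blocks and returns only if persisting fails:

store := appdash.NewMemoryStore()

// Persist the store to disk every 30 seconds. PersistEvery is assumed to
// block, so run it in its own goroutine.
go func() {
    if err := appdash.PersistEvery(store, 30*time.Second, "/tmp/appdash.gob"); err != nil {
        log.Println("appdash: persist failed:", err)
    }
}()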
func RegisterEvent ¶
func RegisterEvent(e Event)
RegisterEvent registers an event type for use with UnmarshalEvents.
Events must be registered with this package in order for unmarshaling to work. Much like the image package, sometimes blank imports will be used for packages that register Appdash events with this package:
import (
    _ "sourcegraph.com/sourcegraph/appdash/httptrace"
    _ "sourcegraph.com/sourcegraph/appdash/sqltrace"
)
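As an illustration, a hypothetical application-defined event (not part of appdash) that registers itself so its annotations can later be unmarshaled:

// CacheEvent is a hypothetical application-defined event.
type CacheEvent struct {
    Key string
    Hit bool
}

// Schema implements appdash.Event.
func (CacheEvent) Schema() string { return "Cache" }

func init() {
    // Register the event so UnmarshalEvents can decode its annotations.
    appdash.RegisterEvent(CacheEvent{})
}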
func UnmarshalEvent ¶
func UnmarshalEvent(as Annotations, e Event) error
UnmarshalEvent unmarshals annotations into an event.
func UnmarshalEvents ¶
func UnmarshalEvents(anns Annotations, events *[]Event) error
UnmarshalEvents unmarshals all events found in anns into events. Any schemas found in anns that were not registered (using RegisterEvent) are ignored; missing a schema is not an error.
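Continuing the hypothetical CacheEvent defined above under RegisterEvent, a round trip through annotations might look like this sketch:

anns, err := appdash.MarshalEvent(CacheEvent{Key: "user:1", Hit: true})
if err != nil {
    log.Fatal(err)
}

var events []appdash.Event
if err := appdash.UnmarshalEvents(anns, &events); err != nil {
    log.Fatal(err)
}
// events now holds one CacheEvent reconstructed from its annotations
// (because CacheEvent's schema was registered via RegisterEvent).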
Types ¶
type AggregatedResult ¶
type AggregatedResult struct {
    // RootSpanName is the name of the root span of the traces that were
    // aggregated to form this result.
    RootSpanName string

    // Average, Minimum, Maximum, and standard deviation of the total trace
    // times (earliest span start time, latest span end time) of all traces
    // that were aggregated to produce this result, respectively.
    Average, Min, Max, StdDev time.Duration

    // Samples is the number of traces that were sampled in order to produce
    // this result.
    Samples int64

    // Slowest is the N-slowest trace IDs that were part of this group, such
    // that these are the most valuable/slowest traces for inspection.
    Slowest []ID
}
AggregatedResult represents a set of traces that were aggregated together by root span name to produce some useful metrics (average trace time, minimum time, a link to the slowest traces, etc).
type Aggregator ¶
type Aggregator interface {
    // Aggregate should return the aggregated data for all traces within the
    // past 72 hours, such that:
    //
    //  Aggregate(-72 * time.Hour, 0)
    //
    // would return all possible results.
    Aggregate(start, end time.Duration) ([]*AggregatedResult, error)
}
Aggregator is a type of store that can aggregate its trace data and return results about it.
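For example, assuming agg is a store that implements Aggregator (such as InfluxDBStore), the last 72 hours can be summarized with a sketch like this:

results, err := agg.Aggregate(-72*time.Hour, 0)
if err != nil {
    log.Fatal(err)
}
for _, r := range results {
    fmt.Printf("%s: avg=%v min=%v max=%v samples=%d\n",
        r.RootSpanName, r.Average, r.Min, r.Max, r.Samples)
}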
type Annotation ¶
type Annotation struct {
    // Key is the annotation's key.
    Key string

    // Value is the annotation's value, which may be either human or
    // machine readable, depending on the schema of the event that
    // generated it.
    Value []byte
}
An Annotation is an arbitrary key-value property on a span.
func (Annotation) Important ¶
func (a Annotation) Important() bool
Important determines if this annotation's key is considered important to any of the registered event types.
type Annotations ¶
type Annotations []Annotation
Annotations is a list of annotations (on a span).
func MarshalEvent ¶
func MarshalEvent(e Event) (Annotations, error)
MarshalEvent marshals an event into annotations.
func (Annotations) String ¶
func (as Annotations) String() string
String returns a formatted list of annotations.
func (Annotations) StringMap ¶
func (as Annotations) StringMap() map[string]string
StringMap returns the annotations as a key-value map. Only one annotation for a key appears in the map, and it is chosen arbitrarily among the annotations with the same key.
type ChunkedCollector ¶
type ChunkedCollector struct {
    // Collector is the underlying collector that spans are sent to.
    Collector

    // MinInterval specifies the minimum interval at which to call Flush
    // automatically (in a separate goroutine, so as not to block the caller,
    // who may be recording time-sensitive operations).
    //
    // Default MinInterval = 500 * time.Millisecond (500ms).
    MinInterval time.Duration

    // FlushTimeout, if non-zero, specifies the time after which a flush
    // operation is considered timed out. If a timeout occurs, the pending
    // queue is entirely dropped (trace data lost) and ErrQueueDropped is
    // returned by Flush.
    //
    // Default FlushTimeout = 50 * time.Millisecond (50ms).
    FlushTimeout time.Duration

    // MaxQueueSize, if non-zero, is the maximum size in bytes that the
    // pending queue of collections may grow to before being entirely dropped
    // (trace data lost). In the event that the queue is dropped, Collect will
    // return ErrQueueDropped.
    //
    // Default MaxQueueSize = 32 * 1024 * 1024 (32 MB).
    MaxQueueSize uint64

    // Log, if non-nil, is used to log warnings like when the queue is
    // entirely dropped (and hence trace data was lost).
    Log *log.Logger

    // OnFlush, if non-nil, is invoked directly at the start of each Flush
    // operation performed by this collector. queueSize is the number of
    // entries in the queue (i.e. the number of underlying collections that
    // will occur).
    //
    // It is primarily used for debugging purposes.
    OnFlush func(queueSize int)
    // contains filtered or unexported fields
}
ChunkedCollector groups annotations together that have the same span and calls its underlying collector's Collect method with the chunked data periodically, instead of immediately. This is more efficient, especially in the case of the underlying Collector being a RemoteCollector, because whole spans are collected at once (rather than in parts). It also prevents the caller, usually a Recorder, from blocking time-sensitive operations on collection (which, in the case of RemoteCollector, may involve connecting to a remote server).
Inherently, ChunkedCollector queues all collections prior to them being flushed out to the underlying Collector. Because of this it's important to understand the various boundaries that are imposed to avoid any sort of queue backlogging or perceived memory leaks.
The flow of a ChunkedCollector is that:
- It receives a collection.
- If the queue size exceeds MaxQueueSize in bytes, the pending queue is entirely dropped and ErrQueueDropped is returned.
- Otherwise, if the queue would not exceed that size, the collection is added to the queue.
- After MinInterval (or if Flush is called manually), all queued collections are passed off to the underlying collector. If the overall Flush time measured after each underlying Collect call exceeds FlushTimeout, the pending queue is entirely dropped and ErrQueueDropped is returned.
- If the queue has been entirely dropped as a result of one of the above cases, entire traces and/or parts of their data will be missing. For this reason, you may specify a Log for debugging purposes.
func NewChunkedCollector ¶
func NewChunkedCollector(c Collector) *ChunkedCollector
NewChunkedCollector is shorthand for:
c := &ChunkedCollector{
    Collector:    c,
    MinInterval:  500 * time.Millisecond,
    FlushTimeout: 2 * time.Second,
    MaxQueueSize: 32 * 1024 * 1024, // 32 MB
    Log:          log.New(os.Stderr, "appdash: ", log.LstdFlags),
}
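A sketch of wiring a ChunkedCollector in front of a RemoteCollector; the server address is an assumption, and only the fields shown are set explicitly:

remote := appdash.NewRemoteCollector("localhost:7701")
cc := &appdash.ChunkedCollector{
    Collector:   remote,
    MinInterval: 500 * time.Millisecond,
}
defer cc.Stop()

// Recorders send to the ChunkedCollector, which batches by span before
// forwarding to the remote collector server.
rec := appdash.NewRecorder(appdash.NewRootSpanID(), cc)
rec.Name("background job")
rec.Finish()

// Force any pending collections out before shutdown.
if err := cc.Flush(); err != nil {
    log.Println("appdash: flush failed:", err)
}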
func (*ChunkedCollector) Collect ¶
func (cc *ChunkedCollector) Collect(span SpanID, anns ...Annotation) error
Collect adds the span and annotations to a local buffer until the next call to Flush (or when MinInterval elapses), at which point they are sent (grouped by span) to the underlying collector.
func (*ChunkedCollector) Flush ¶
func (cc *ChunkedCollector) Flush() error
Flush immediately sends all pending spans to the underlying collector.
func (*ChunkedCollector) Stop ¶
func (cc *ChunkedCollector) Stop()
Stop stops the collector. After stopping, no more data will be sent to the underlying collector and calls to Collect will fail.
type Collector ¶
type Collector interface {
Collect(SpanID, ...Annotation) error
}
A Collector collects events that occur in spans.
func NewLocalCollector ¶
NewLocalCollector returns a Collector that writes directly to a Store.
type CollectorServer ¶
type CollectorServer struct {
    // Log is the logger to use for errors and warnings. If nil, a new
    // logger is created.
    Log *log.Logger

    // Debug is whether to log debug messages.
    Debug bool

    // Trace is whether to log all data that is received.
    Trace bool
    // contains filtered or unexported fields
}
A CollectorServer listens for spans and annotations and adds them to a local collector.
type DeleteStore ¶
type DeleteStore interface {
    Store

    // Delete deletes traces given their trace IDs.
    Delete(...ID) error
}
A DeleteStore is a Store that can delete traces.
type Event ¶
type Event interface {
    // Schema should return the event's schema, a constant string; for
    // example, the sqltrace package defines SQLEvent, which returns just
    // "SQL".
    Schema() string
}
An Event is a record of the occurrence of something.
func Log ¶
Log returns an Event whose timestamp is the current time that contains only a human-readable message.
type EventMarshaler ¶
type EventMarshaler interface {
    // MarshalEvent should marshal this event itself into a set of
    // annotations, or return an error.
    MarshalEvent() (Annotations, error)
}
EventMarshaler is the interface implemented by an event that can marshal a representation of itself into annotations.
type EventSchemaUnmarshalError ¶
type EventSchemaUnmarshalError struct {
    Found  []string // schemas found in the annotations
    Target string   // schema of the target event
}
An EventSchemaUnmarshalError is returned when annotations are unmarshaled into an event object whose schema does not match any of the schemas present in the annotations.
func (*EventSchemaUnmarshalError) Error ¶
func (e *EventSchemaUnmarshalError) Error() string
type EventUnmarshaler ¶
type EventUnmarshaler interface {
    // UnmarshalEvent should unmarshal the given annotations into an event of
    // the same type, or return an error.
    UnmarshalEvent(Annotations) (Event, error)
}
EventUnmarshaler is the interface implemented by an event that can unmarshal an annotation representation of itself.
type ID ¶
type ID uint64
An ID is a unique, uniformly distributed 64-bit ID.
func (ID) MarshalJSON ¶
MarshalJSON encodes the ID as a hex string.
func (*ID) UnmarshalJSON ¶
UnmarshalJSON decodes the given data as either a hexadecimal string or JSON integer.
type ImportantEvent ¶
type ImportantEvent interface {
Important() []string
}
ImportantEvent is an event that can describe in particular which annotation keys it finds important. Only important annotation keys are displayed in the web UI by default.
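For instance, the hypothetical CacheEvent defined earlier (under RegisterEvent) could limit the default UI display to two keys; this sketch assumes the annotation keys match the struct field names produced by MarshalEvent:

// Important implements appdash.ImportantEvent.
func (CacheEvent) Important() []string {
    return []string{"Key", "Hit"}
}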
type InfluxDBAdminUser ¶
type InfluxDBConfig ¶
type InfluxDBConfig struct {
    AdminUser InfluxDBAdminUser
    BuildInfo *influxDBServer.BuildInfo
    DefaultRP InfluxDBRetentionPolicy
    Mode      mode
    Server    *influxDBServer.Config

    // LogOutput, if specified, controls where all InfluxDB logs are written to.
    LogOutput io.Writer

    // MaxBatchSizeBytes specifies the maximum size (estimated memory usage) in
    // bytes that a batch may grow to become before being entirely dropped (and
    // inherently, trace data lost). This prevents any potential memory leak in
    // the case of an unresponsive or too slow InfluxDB server / pending flush
    // operation.
    //
    // The default value used by NewInfluxDBConfig is 128*1024*1024 (128 MB).
    MaxBatchSizeBytes int

    // BatchFlushInterval specifies the minimum interval between flush calls by
    // the background goroutine in order to flush point batches out to
    // InfluxDB. That is, after each batch flush the goroutine will sleep for
    // this amount of time to prevent CPU overutilization.
    //
    // The default value used by NewInfluxDBConfig is 500 * time.Millisecond.
    BatchFlushInterval time.Duration
}
func NewInfluxDBConfig ¶
func NewInfluxDBConfig() (*InfluxDBConfig, error)
NewInfluxDBConfig returns a new InfluxDBConfig with the default values.
type InfluxDBRetentionPolicy ¶
type InfluxDBStore ¶
type InfluxDBStore struct {
// contains filtered or unexported fields
}
func NewInfluxDBStore ¶
func NewInfluxDBStore(config *InfluxDBConfig) (*InfluxDBStore, error)
NewInfluxDBStore returns a new InfluxDB-backed store. It starts an in-process / embedded InfluxDB server.
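A sketch of bringing up the embedded server with the defaults from NewInfluxDBConfig:

conf, err := appdash.NewInfluxDBConfig()
if err != nil {
    log.Fatal(err)
}
store, err := appdash.NewInfluxDBStore(conf)
if err != nil {
    log.Fatal(err)
}
defer store.Close()

// InfluxDBStore is a Collector, a Store, a Queryer and an Aggregator, so it
// can back both collection and the web UI.
collector := appdash.NewLocalCollector(store)
_ = collector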
func (*InfluxDBStore) Aggregate ¶
func (in *InfluxDBStore) Aggregate(start, end time.Duration) ([]*AggregatedResult, error)
Aggregate implements the Aggregator interface.
func (*InfluxDBStore) Close ¶
func (in *InfluxDBStore) Close() error
Close flushes the last batch to InfluxDB and shuts down the InfluxDBStore.
func (*InfluxDBStore) Collect ¶
func (in *InfluxDBStore) Collect(id SpanID, anns ...Annotation) error
func (*InfluxDBStore) Traces ¶
func (in *InfluxDBStore) Traces(opts TracesOpts) ([]*Trace, error)
type LimitStore ¶
type LimitStore struct {
    // Max is the maximum number of traces that the store should keep.
    Max int

    // DeleteStore is the underlying store that spans are saved to and
    // deleted from.
    DeleteStore
    // contains filtered or unexported fields
}
A LimitStore wraps another store and deletes the oldest trace when the number of traces reaches the capacity (Max).
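A sketch keeping at most 1000 traces in memory (the limit is arbitrary):

ls := &appdash.LimitStore{
    Max:         1000,
    DeleteStore: appdash.NewMemoryStore(),
}
collector := appdash.NewLocalCollector(ls)
_ = collector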
func (*LimitStore) Collect ¶
func (ls *LimitStore) Collect(id SpanID, anns ...Annotation) error
Collect calls the underlying store's Collect, deleting the oldest trace if the capacity has been reached.
type MemoryStore ¶
A MemoryStore is an in-memory Store that also implements the PersistentStore interface.
func NewMemoryStore ¶
func NewMemoryStore() *MemoryStore
NewMemoryStore creates a new in-memory store.
func (*MemoryStore) Collect ¶
func (ms *MemoryStore) Collect(id SpanID, as ...Annotation) error
Collect implements the Collector interface by collecting the events that occurred in the span in-memory.
func (*MemoryStore) Delete ¶
func (ms *MemoryStore) Delete(traces ...ID) error
Delete implements the DeleteStore interface by deleting the traces given by their trace IDs from this in-memory store.
func (*MemoryStore) ReadFrom ¶
func (ms *MemoryStore) ReadFrom(r io.Reader) (int64, error)
ReadFrom implements the PersistentStore interface by using gob-decoding to load ms's internal data structures from the reader r.
func (*MemoryStore) Trace ¶
func (ms *MemoryStore) Trace(id ID) (*Trace, error)
Trace implements the Store interface by returning the Trace (a tree of spans) for the given trace span ID or, if no such trace exists, by returning ErrTraceNotFound.
func (*MemoryStore) Traces ¶
func (ms *MemoryStore) Traces(opts TracesOpts) ([]*Trace, error)
Traces implements the Queryer interface.
type PersistentStore ¶
PersistentStore is a Store that can persist its data and read it back in.
type Queryer ¶
type Queryer interface {
    // Traces returns an implementation-defined list of traces according to
    // the options.
    Traces(opts TracesOpts) ([]*Trace, error)
}
A Queryer indexes spans and makes them queryable.
func MultiQueryer ¶
MultiQueryer returns a Queryer whose Traces method returns a union of all traces across each queryer.
type RecentStore ¶
type RecentStore struct {
    // MinEvictAge is the minimum age of a trace before it is evicted.
    MinEvictAge time.Duration

    // DeleteStore is the underlying store that spans are saved to and
    // deleted from.
    DeleteStore

    // Debug is whether to log debug messages.
    Debug bool
    // contains filtered or unexported fields
}
A RecentStore wraps another store and deletes old traces after a specified amount of time.
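A sketch evicting traces older than 30 minutes (the age is arbitrary):

rs := &appdash.RecentStore{
    MinEvictAge: 30 * time.Minute,
    DeleteStore: appdash.NewMemoryStore(),
}
collector := appdash.NewLocalCollector(rs)
_ = collector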
func (*RecentStore) Collect ¶
func (rs *RecentStore) Collect(id SpanID, anns ...Annotation) error
Collect calls the underlying store's Collect and records the time that this trace was first seen.
type Recorder ¶
type Recorder struct {
    // Logger, if non-nil, causes errors to be written to this logger directly
    // instead of being manually checked via the Errors method.
    Logger *log.Logger

    SpanID // the span ID that annotations are about
    // contains filtered or unexported fields
}
A Recorder is associated with a span and records annotations on the span by sending them to a collector.
func NewRecorder ¶
NewRecorder creates a new recorder for the given span and collector. If c is nil, NewRecorder panics.
func (*Recorder) Annotation ¶
func (r *Recorder) Annotation(as ...Annotation)
Annotation records raw annotations on the span.
func (*Recorder) Child ¶
Child creates a new Recorder with the same collector and a new child SpanID whose parent is this recorder's SpanID.
func (*Recorder) Errors ¶
Errors returns all errors encountered by the Recorder since the last call to Errors. After calling Errors, the Recorder's list of errors is emptied.
func (*Recorder) Event ¶
Event records any event that implements the Event, TimespanEvent, or TimestampedEvent interfaces.
func (*Recorder) Finish ¶
func (r *Recorder) Finish()
Finish finishes recording and saves the recorded information to the underlying collector. If Finish is not called, no data will be written to the underlying collector. Finish must be called exactly once; otherwise r.error is called. This constraint ensures that the collector is invoked only once per Recorder, which, for performance reasons, avoids extra operations within the collector (span lookup and updating the span's annotations).
func (*Recorder) Log ¶
Log records a Log event (an event with the current timestamp and a human-readable message) on the span.
func (*Recorder) LogWithTimestamp ¶
LogWithTimestamp records a Log event with an explicit timestamp.
type RemoteCollector ¶
type RemoteCollector struct {
    // Log is the logger to use for errors and warnings. If nil, a new
    // logger is created.
    Log *log.Logger

    // Debug is whether to log debug messages.
    Debug bool
    // contains filtered or unexported fields
}
A RemoteCollector sends data to a collector server (created with NewServer).
func NewRemoteCollector ¶
func NewRemoteCollector(addr string) *RemoteCollector
NewRemoteCollector creates a collector that sends data to a collector server (created with NewServer). It sends data immediately when Collect is called. To send data in chunks, use a ChunkedCollector.
func NewTLSRemoteCollector ¶
func NewTLSRemoteCollector(addr string, tlsConfig *tls.Config) *RemoteCollector
NewTLSRemoteCollector creates a RemoteCollector that uses TLS.
func (*RemoteCollector) Close ¶
func (rc *RemoteCollector) Close() error
Close closes the connection to the server.
func (*RemoteCollector) Collect ¶
func (rc *RemoteCollector) Collect(span SpanID, anns ...Annotation) error
Collect implements the Collector interface by sending the events that occurred in the span to the remote collector server (see CollectorServer).
type Span ¶
type Span struct {
    // ID probabilistically uniquely identifies this span.
    ID SpanID

    Annotations
}
Span is a span ID and its annotations.
type SpanID ¶
type SpanID struct {
    // Trace is the root ID of the tree that contains all of the spans
    // related to this one.
    Trace ID

    // Span is an ID that probabilistically uniquely identifies this
    // span.
    Span ID

    // Parent is the ID of the parent span, if any.
    Parent ID
}
A SpanID refers to a single span.
func NewRootSpanID ¶
func NewRootSpanID() SpanID
NewRootSpanID generates a new span ID for a root span. This should only be used to generate entries for spans caused exclusively by spans which are outside of your system as a whole (e.g., a root span for the first time you see a user request).
func NewSpanID ¶
NewSpanID returns a new ID for a span that is the child of the given parent ID. This should be used to track causal relationships between spans.
func ParseSpanID ¶
ParseSpanID parses the given string as a slash-separated set of parameters.
func (SpanID) Format ¶
Format formats according to a format specifier and returns the resulting string. The receiver's string representation is the first argument.
Example ¶
// Assume we're connected to a database.
var (
    event  SpanID
    db     *sql.DB
    userID int
)

// This passes the root ID and the parent event ID to the database, which
// allows us to correlate, for example, slow queries with the web requests
// which caused them.
query := event.Format(`/* %s/%s */ %s`, `SELECT email FROM users WHERE id = ?`)
r := db.QueryRow(query, userID)
if r == nil {
    panic("user not found")
}
var email string
if err := r.Scan(&email); err != nil {
    panic("couldn't read email")
}
fmt.Printf("User's email: %s\n", email)
Output:
type Store ¶
type Store interface {
    Collector

    // Trace gets a trace (a tree of spans) given its trace ID. If no
    // such trace exists, ErrTraceNotFound is returned.
    Trace(ID) (*Trace, error)
}
A Store stores and retrieves spans.
func MultiStore ¶
MultiStore returns a Store whose operations occur on the multiple given stores.
type Timespan ¶
Timespan is an event that satisfies the appdash.TimespanEvent interface. It is used to represent the beginning and end times of a span.
type TimespanEvent ¶
A TimespanEvent is an Event with a start and an end time.
type TimestampedEvent ¶
A TimestampedEvent is an Event with a timestamp.
type Trace ¶
A Trace is a tree of spans.
func (*Trace) FindSpan ¶
FindSpan recursively searches for a span whose Span ID is spanID in t and its descendants. If no such span is found, nil is returned.
func (*Trace) TimespanEvent ¶
func (t *Trace) TimespanEvent() (TimespanEvent, error)
func (*Trace) TreeString ¶
TreeString returns the Trace as a formatted string that visually represents the trace's tree.
type TracesOpts ¶
type TracesOpts struct {
    // Timespan specifies a time range which can be used as input for
    // filtering traces.
    Timespan Timespan

    // TraceIDs filters the returned traces to just the ones with the given
    // IDs.
    TraceIDs []ID
}
TracesOpts bundles the options used when listing traces.
Source Files ¶
Directories ¶
Path | Synopsis
---|---
cmd |
cmd/appdash | Command appdash runs the Appdash web UI from the command-line.
examples |
examples/cmd/webapp | webapp: a standalone example Negroni / Gorilla based webapp.
examples/cmd/webapp-opentracing | webapp: a standalone example Negroni / Gorilla based webapp.
httptrace | Package httptrace implements support for tracing HTTP applications.
internal |
internal/wire | Package wire is a generated protocol buffer package.
opentracing | Package opentracing provides an Appdash implementation of the OpenTracing API.
sqltrace | Package sqltrace implements utility types for tracing SQL queries.
traceapp | Package traceapp implements the Appdash web UI.
traceapp/tmpl | Package tmpl contains template data used by Appdash.