Documentation ¶
Overview ¶
Package stats tracks the statistics associated with benchmark runs.
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type BenchResults ¶
type BenchResults struct {
    // GoVersion is the version of the compiler the benchmark was compiled
    // with.
    GoVersion string
    // GrpcVersion is the gRPC version being benchmarked.
    GrpcVersion string
    // RunMode is the workload mode for this benchmark run. This could be
    // unary, stream or unconstrained.
    RunMode string
    // Features represents the configured feature options for this run.
    Features Features
    // SharedFeatures represents the features which are shared across all
    // benchmark runs during one execution. It is a slice indexed by
    // 'FeatureIndex' and a value of true indicates that the associated
    // feature is shared across all runs.
    SharedFeatures []bool
    // Data contains the statistical data of interest from the benchmark run.
    Data RunData
}
BenchResults records features and results of a benchmark run. A collection of these structs is usually serialized and written to a file after a benchmark execution, and could later be read for pretty-printing or comparison with other benchmark results.
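The listing here does not specify which encoding is used to write BenchResults to disk. The following is a minimal sketch assuming encoding/gob (the PayloadCurve documentation below suggests gob is used elsewhere in the package); the file name and the field values are hypothetical and for illustration only.

package main

import (
    "encoding/gob"
    "fmt"
    "log"
    "os"

    "google.golang.org/grpc/benchmark/stats"
)

func main() {
    // Hypothetical results from one benchmark execution.
    results := []stats.BenchResults{{GoVersion: "go1.22", GrpcVersion: "1.60.0", RunMode: "unary"}}

    // Serialize the results to a file. The choice of gob and the file name
    // "results.bin" are assumptions for illustration.
    f, err := os.Create("results.bin")
    if err != nil {
        log.Fatal(err)
    }
    if err := gob.NewEncoder(f).Encode(results); err != nil {
        log.Fatal(err)
    }
    f.Close()

    // Read the results back, e.g. for pretty-printing or comparison.
    f, err = os.Open("results.bin")
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    var decoded []stats.BenchResults
    if err := gob.NewDecoder(f).Decode(&decoded); err != nil {
        log.Fatal(err)
    }
    fmt.Println(decoded[0].RunMode)
}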
type FeatureIndex ¶
type FeatureIndex int
FeatureIndex is an enum for features that usually differ across individual benchmark runs in a single execution. These are usually configured by the user through command line flags.
const (
    EnableTraceIndex FeatureIndex = iota
    ReadLatenciesIndex
    ReadKbpsIndex
    ReadMTUIndex
    MaxConcurrentCallsIndex
    ReqSizeBytesIndex
    RespSizeBytesIndex
    ReqPayloadCurveIndex
    RespPayloadCurveIndex
    CompModesIndex
    EnableChannelzIndex
    EnablePreloaderIndex

    // MaxFeatureIndex is a placeholder to indicate the total number of
    // feature indices we have. Any new feature indices should be added above
    // this.
    MaxFeatureIndex
)
FeatureIndex enum values corresponding to individually settable features.
type Features ¶
type Features struct {
    // NetworkMode is the network mode used for this benchmark run. Could be
    // one of Local, LAN, WAN or Longhaul.
    NetworkMode string
    // UseBufConn indicates whether an in-memory connection was used for this
    // benchmark run instead of system network I/O.
    UseBufConn bool
    // EnableKeepalive indicates if keepalives were enabled on the connections
    // used in this benchmark run.
    EnableKeepalive bool
    // BenchTime indicates the duration of the benchmark run.
    BenchTime time.Duration
    // EnableTrace indicates if tracing was enabled.
    EnableTrace bool
    // Latency is the simulated one-way network latency used.
    Latency time.Duration
    // Kbps is the simulated network throughput used.
    Kbps int
    // MTU is the simulated network MTU used.
    MTU int
    // MaxConcurrentCalls is the number of concurrent RPCs made during this
    // benchmark run.
    MaxConcurrentCalls int
    // ReqSizeBytes is the request size in bytes used in this benchmark run.
    // Unused if ReqPayloadCurve is non-nil.
    ReqSizeBytes int
    // RespSizeBytes is the response size in bytes used in this benchmark run.
    // Unused if RespPayloadCurve is non-nil.
    RespSizeBytes int
    // ReqPayloadCurve is a histogram representing the shape of the random
    // distribution that request payload sizes should follow.
    ReqPayloadCurve *PayloadCurve
    // RespPayloadCurve is a histogram representing the shape of the random
    // distribution that response payload sizes should follow.
    RespPayloadCurve *PayloadCurve
    // ModeCompressor represents the compressor mode used.
    ModeCompressor string
    // EnableChannelz indicates if channelz was turned on.
    EnableChannelz bool
    // EnablePreloader indicates if preloading was turned on.
    EnablePreloader bool
}
Features represents the configured options for a specific benchmark run. It is usually constructed from command line arguments passed by the caller. See benchmark/benchmain/main.go for the defined command line flags. It is also part of the BenchResults struct which is serialized and written to a file.
func (Features) PrintableName ¶
PrintableName returns a one-line name which includes the features specified by 'wantFeatures', a bitmask of wanted features indexed by FeatureIndex.
func (Features) SharedFeatures ¶
SharedFeatures returns the shared features as a pretty-printable string. 'wantFeatures' is a bitmask of wanted features, indexed by FeatureIndex.
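A minimal sketch of how the 'wantFeatures' bitmask might be built and used. It assumes PrintableName and SharedFeatures take a []bool indexed by FeatureIndex and return a string, as the descriptions above suggest; the Features values are hypothetical.

package main

import (
    "fmt"

    "google.golang.org/grpc/benchmark/stats"
)

func main() {
    // Hypothetical features for a single run.
    f := stats.Features{
        MaxConcurrentCalls: 64,
        ReqSizeBytes:       1024,
        RespSizeBytes:      1024,
    }

    // wantFeatures marks the features we care about, indexed by FeatureIndex.
    // MaxFeatureIndex gives the total number of feature indices.
    wantFeatures := make([]bool, stats.MaxFeatureIndex)
    wantFeatures[stats.MaxConcurrentCallsIndex] = true
    wantFeatures[stats.ReqSizeBytesIndex] = true
    wantFeatures[stats.RespSizeBytesIndex] = true

    // Assumed signature: PrintableName(wantFeatures []bool) string.
    fmt.Println(f.PrintableName(wantFeatures))
}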
type Histogram ¶
type Histogram struct {
    // Count is the total number of values added to the histogram.
    Count int64
    // Sum is the sum of all the values added to the histogram.
    Sum int64
    // SumOfSquares is the sum of squares of all values.
    SumOfSquares int64
    // Min is the minimum of all the values added to the histogram.
    Min int64
    // Max is the maximum of all the values added to the histogram.
    Max int64
    // Buckets contains all the buckets of the histogram.
    Buckets []HistogramBucket
    // contains filtered or unexported fields
}
Histogram accumulates values in the form of a histogram with exponentially increasing bucket sizes.
func NewHistogram ¶
func NewHistogram(opts HistogramOptions) *Histogram
NewHistogram returns a pointer to a new Histogram object that was created with the provided options.
func (*Histogram) Merge ¶
Merge takes another histogram h2 and merges its content into h. The two histograms must have been created with equivalent HistogramOptions.
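A sketch of creating two histograms with the same options and merging one into the other. The Merge and PrintWithUnit signatures are inferred from the descriptions in this listing, and the Add method used to record values is assumed (it is not part of this listing); treat all of these as assumptions.

package main

import (
    "os"
    "time"

    "google.golang.org/grpc/benchmark/stats"
)

func main() {
    opts := stats.HistogramOptions{
        NumBuckets:     16,
        GrowthFactor:   0.2,  // each bucket is 20% larger than the previous one
        BaseBucketSize: 1000, // first bucket covers [0, 1000)
        MinValue:       0,
    }

    h1 := stats.NewHistogram(opts)
    h2 := stats.NewHistogram(opts) // must use equivalent options for Merge

    // Record a couple of latency samples in nanoseconds (Add is assumed).
    h1.Add(int64(2 * time.Millisecond))
    h2.Add(int64(5 * time.Millisecond))

    // Fold h2's contents into h1, then print the combined histogram with
    // values scaled to milliseconds.
    h1.Merge(h2)
    h1.PrintWithUnit(os.Stdout, float64(time.Millisecond))
}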
func (*Histogram) Opts ¶
func (h *Histogram) Opts() HistogramOptions
Opts returns a copy of the options used to create the Histogram.
func (*Histogram) PrintWithUnit ¶
PrintWithUnit writes textual output of the histogram values. Data in the histogram is divided by the given unit before being printed.
type HistogramBucket ¶
type HistogramBucket struct {
    // LowBound is the lower bound of the bucket.
    LowBound float64
    // Count is the number of values in the bucket.
    Count int64
}
HistogramBucket represents one histogram bucket.
type HistogramOptions ¶
type HistogramOptions struct {
    // NumBuckets is the number of buckets.
    NumBuckets int
    // GrowthFactor is the growth factor of the buckets. A value of 0.1
    // indicates that bucket N+1 will be 10% larger than bucket N.
    GrowthFactor float64
    // BaseBucketSize is the size of the first bucket.
    BaseBucketSize float64
    // MinValue is the lower bound of the first bucket.
    MinValue int64
}
HistogramOptions contains the parameters that define the histogram's buckets. The first bucket of the created histogram (with index 0) contains [min, min+n) where n = BaseBucketSize, min = MinValue. Bucket i (i>=1) contains [min + n * m^(i-1), min + n * m^i), where m = 1+GrowthFactor. The type of the values is int64.
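As a concrete illustration of the bucket layout described above, the sketch below computes the bucket bounds implied by a given set of options. This is plain arithmetic derived from the formula above, not an API call; the chosen option values are arbitrary.

package main

import (
    "fmt"
    "math"

    "google.golang.org/grpc/benchmark/stats"
)

func main() {
    opts := stats.HistogramOptions{
        NumBuckets:     4,
        GrowthFactor:   0.5, // m = 1.5
        BaseBucketSize: 10,  // n = 10
        MinValue:       0,   // min = 0
    }

    // Bucket 0 covers [min, min+n); bucket i (i >= 1) covers
    // [min + n*m^(i-1), min + n*m^i). With the options above that is
    // [0,10), [10,15), [15,22.5), [22.5,33.75).
    m := 1 + opts.GrowthFactor
    for i := 0; i < opts.NumBuckets; i++ {
        low := float64(opts.MinValue)
        if i > 0 {
            low += opts.BaseBucketSize * math.Pow(m, float64(i-1))
        }
        high := float64(opts.MinValue) + opts.BaseBucketSize*math.Pow(m, float64(i))
        fmt.Printf("bucket %d: [%g, %g)\n", i, low, high)
    }
}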
type PayloadCurve ¶
type PayloadCurve struct {
    // Sha256 must be a public field so that the gob encoder can write it to
    // disk. This will be needed at decode-time by the Hash function.
    Sha256 string
    // contains filtered or unexported fields
}
PayloadCurve is an internal representation of a weighted random distribution CSV file. Once a *PayloadCurve is created with NewPayloadCurve, the ChooseRandom function should be called to generate random payload sizes.
func NewPayloadCurve ¶
func NewPayloadCurve(file string) (*PayloadCurve, error)
NewPayloadCurve parses a .csv file and returns a *PayloadCurve if no errors were encountered in parsing and initialization.
func (*PayloadCurve) ChooseRandom ¶
func (pc *PayloadCurve) ChooseRandom() int
ChooseRandom picks a random payload size (in bytes) that follows the underlying weighted random distribution.
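A short sketch of the intended flow: parse a payload-curve CSV with NewPayloadCurve and then draw random payload sizes with ChooseRandom. The file name is hypothetical.

package main

import (
    "fmt"
    "log"

    "google.golang.org/grpc/benchmark/stats"
)

func main() {
    // "req_curve.csv" is a hypothetical weighted-distribution CSV file.
    pc, err := stats.NewPayloadCurve("req_curve.csv")
    if err != nil {
        log.Fatalf("failed to parse payload curve: %v", err)
    }

    // Draw a few payload sizes (in bytes) following the weighted
    // distribution described by the file.
    for i := 0; i < 3; i++ {
        fmt.Println(pc.ChooseRandom())
    }
}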
func (*PayloadCurve) Hash ¶
func (pc *PayloadCurve) Hash() string
Hash returns a string uniquely identifying a payload curve file for feature matching purposes.
func (*PayloadCurve) ShortHash ¶
func (pc *PayloadCurve) ShortHash() string
ShortHash returns a shortened version of Hash for display purposes.
type RunData ¶
type RunData struct {
    // TotalOps is the number of operations executed during this benchmark
    // run. Only makes sense for unary and streaming workloads.
    TotalOps uint64
    // SendOps is the number of send operations executed during this
    // benchmark run. Only makes sense for unconstrained workloads.
    SendOps uint64
    // RecvOps is the number of receive operations executed during this
    // benchmark run. Only makes sense for unconstrained workloads.
    RecvOps uint64
    // AllocedBytes is the average memory allocation in bytes per operation.
    AllocedBytes float64
    // Allocs is the average number of memory allocations per operation.
    Allocs float64
    // ReqT is the average request throughput associated with this run.
    ReqT float64
    // RespT is the average response throughput associated with this run.
    RespT float64
    // Fiftieth is the 50th percentile latency.
    Fiftieth time.Duration
    // Ninetieth is the 90th percentile latency.
    Ninetieth time.Duration
    // NinetyNinth is the 99th percentile latency.
    NinetyNinth time.Duration
    // Average is the average latency.
    Average time.Duration
}
RunData contains statistical data of interest from a benchmark run.
type Stats ¶
type Stats struct {
// contains filtered or unexported fields
}
Stats is a helper for gathering statistics about individual benchmark runs.
func NewStats ¶
NewStats creates a new Stats instance. If numBuckets is not positive, the default value (16) will be used.
func (*Stats) AddDuration ¶
AddDuration adds an elapsed duration per operation to the stats. This is used by unary and stream modes where request and response stats are equal.
func (*Stats) EndRun ¶
EndRun is to be invoked to indicate the end of the ongoing benchmark run. It computes the resulting statistics and dumps them to stdout.
func (*Stats) EndUnconstrainedRun ¶
EndUnconstrainedRun is similar to EndRun, but is to be used for unconstrained workloads.
func (*Stats) GetResults ¶
func (s *Stats) GetResults() []BenchResults
GetResults returns the results from all benchmark runs.
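A sketch of the overall Stats workflow for a unary or streaming run. The NewStats and GetResults signatures are shown above; the AddDuration, EndRun and StartRun calls below use signatures assumed from the descriptions in this listing (StartRun itself is not part of this listing), and the Features values are hypothetical.

package main

import (
    "fmt"
    "time"

    "google.golang.org/grpc/benchmark/stats"
)

func main() {
    // A non-positive bucket count requests the default histogram size (16).
    s := stats.NewStats(0)

    f := stats.Features{MaxConcurrentCalls: 1, ReqSizeBytes: 1024, RespSizeBytes: 1024}
    shared := make([]bool, stats.MaxFeatureIndex)

    // StartRun is assumed to associate the run with its mode, Features and
    // shared-feature bitmask before any durations are recorded.
    s.StartRun("unary", f, shared)

    // Record per-operation latencies, then end the run with the operation
    // count (assumed signatures).
    const ops = 3
    for i := 0; i < ops; i++ {
        s.AddDuration(2 * time.Millisecond)
    }
    s.EndRun(ops)

    // Inspect the accumulated results for all runs.
    for _, r := range s.GetResults() {
        fmt.Printf("%s: %d ops, p50=%v\n", r.RunMode, r.Data.TotalOps, r.Data.Fiftieth)
    }
}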