package eval
v0.0.0-...-26f42d8
Published: Jan 12, 2024 License: MIT Imports: 11 Imported by: 0

README

DaisyNFS evaluation

The AWS evaluation setup is documented in aws/README.md. These instructions document a more manual setup.

The evaluation scripts use some environment variables that point to a few repositories:

export GO_NFSD_PATH=~/go-nfsd
export DAISY_NFSD_PATH=~/daisy-nfsd
export XV6_PATH=~/xv6-public
export LTP_PATH=~/ltp

You'll need to clone mit-pdos/go-nfsd, mit-pdos/xv6-public, and linux-test-project/ltp (this last one is only needed to run the stress tests).

This repo is mit-pdos/daisy-nfsd.

These instructions assume you've compiled the evaluation driver with go build ./cmd/daisy-eval.

smallfile, largefile, and app benchmarks

Run ./daisy-eval -i eval/data bench. Then ./eval/eval.py -i eval/data bench will produce a file eval/data/bench.data.

smallfile scalability on a disk

Run ./daisy-eval -i eval/data scale. Then ./eval/eval.py -i eval/data scale will produce files for each system in eval/data/{daisy-nfsd,go-nfsd,linux}.data.

The daisy-eval driver has an argument to set the disk file.

Plotting

You can run ./plot.sh to run the Python post-processing and gnuplot all at once.

stress tests

Run ./tests.sh to run the fsstress and fsx-linux tests. You'll need to clone ltp and compile it; running ./tests.sh --help will give you the right commands.

The default number of iterations runs each suite for about 10 seconds. To run longer, run something like ./tests.sh 10, which will scale the default iteration counts by 10.

Documentation

Index

Constants

This section is empty.

Variables

var LargefileSuite = []Benchmark{
	LargefileBench(300),
}

Functions

func PrepareBenchmarks

func PrepareBenchmarks()

func WriteObservation

func WriteObservation(w io.Writer, o Observation)

Types

type Benchmark

type Benchmark struct {

	// Config has configuration related to the benchmark workload under test
	// "bench" is a map with benchmark options
	Config KeyValue
	// contains filtered or unexported fields
}

func AppBench

func AppBench() Benchmark

func BenchSuite

func BenchSuite(smallfileDuration string) []Benchmark

func ExtendedBenchSuite

func ExtendedBenchSuite(smallfileDuration string, par int) []Benchmark

func LargefileBench

func LargefileBench(fileSizeMb int) Benchmark

func ScaleSuite

func ScaleSuite(benchtime string, threads int) []Benchmark

func SmallfileBench

func SmallfileBench(benchtime string, threads int) Benchmark

func (Benchmark) Command

func (b Benchmark) Command() []string

func (Benchmark) Name

func (b Benchmark) Name() string

func (Benchmark) ParseOutput

func (b Benchmark) ParseOutput(lines []string) []Observation

type BenchmarkSuite

type BenchmarkSuite struct {
	Iters       int
	Randomize   bool
	Filesystems []KeyValue
	Benches     []Benchmark
}

func (*BenchmarkSuite) Workloads

func (bs *BenchmarkSuite) Workloads() []Workload

type Fs

type Fs struct {
	// contains filtered or unexported fields
}

func GetFilesys

func GetFilesys(conf KeyValue) Fs

func (Fs) Name

func (fs Fs) Name() string

func (Fs) Run

func (fs Fs) Run(command []string) []string

type KeyValue

type KeyValue map[string]interface{}

KeyValue is a generic set of key-value pairs

Values are expected to be string, float64, or bool (or recursively another KeyValue).
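To make that shape concrete, here is a minimal sketch of a KeyValue-style configuration. The specific keys used below ("bench", "threads", and so on) are illustrative assumptions, not part of the documented API:

```go
package main

import "fmt"

// KeyValue mirrors the documented type: a generic set of key-value pairs
// whose values are string, float64, bool, or a nested KeyValue.
type KeyValue map[string]interface{}

func main() {
	// Hypothetical benchmark configuration for illustration only.
	conf := KeyValue{
		"name": "daisy-nfsd",
		"bench": KeyValue{
			"benchtime": "10s",
			"threads":   float64(4),
		},
		"unstable": false,
	}
	fmt.Println(conf["name"])
	fmt.Println(conf["bench"].(KeyValue)["threads"])
}
```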

func BasicFilesystem

func BasicFilesystem(name string, disk string, unstable bool) KeyValue

func LinuxDurabilityFilesystems

func LinuxDurabilityFilesystems(disk string) []KeyValue

LinuxDurabilityFilesystems returns many Linux filesystems, varying durability options

func ManyDurabilityFilesystems

func ManyDurabilityFilesystems(disk string) []KeyValue

func (KeyValue) Clone

func (kv KeyValue) Clone() KeyValue

func (KeyValue) Delete

func (kv KeyValue) Delete(key string) KeyValue

Delete returns a new KeyValue with key removed

func (KeyValue) Extend

func (kv KeyValue) Extend(kv2 KeyValue) KeyValue

Extend adds all key-value pairs from kv2 to kv

modifies kv in-place and returns kv (for chaining)
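The documented behavior (merge in place, return the receiver for chaining) could be implemented along these lines. This is a sketch against a stand-alone copy of the type, not the package's actual code, and the keys below are made up for illustration:

```go
package main

import "fmt"

type KeyValue map[string]interface{}

// Extend copies every pair from kv2 into kv, overwriting existing keys,
// and returns kv so calls can be chained.
func (kv KeyValue) Extend(kv2 KeyValue) KeyValue {
	for k, v := range kv2 {
		kv[k] = v
	}
	return kv
}

func main() {
	base := KeyValue{"fs": "daisy-nfsd"}
	merged := base.
		Extend(KeyValue{"disk": "/dev/shm/disk.img"}).
		Extend(KeyValue{"unstable": true})
	fmt.Println(len(merged)) // 3 keys after chaining
}
```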

func (KeyValue) Flatten

func (kv KeyValue) Flatten() KeyValue

func (KeyValue) Pairs

func (kv KeyValue) Pairs() []KeyValuePair

Pairs returns the key-value pairs in kv, sorted by key

func (KeyValue) Product

func (kv KeyValue) Product() []KeyValue

Product takes the cross product of any fields that are slices
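The cross-product semantics can be sketched as follows: every slice-valued field is expanded into one configuration per element, and the expansions multiply. The real implementation may differ, and the "fs"/"journal" keys are hypothetical:

```go
package main

import "fmt"

type KeyValue map[string]interface{}

// Product expands each slice-valued field into one KeyValue per element,
// taking the cross product across all such fields. Scalar fields are
// copied into every resulting configuration unchanged.
func (kv KeyValue) Product() []KeyValue {
	configs := []KeyValue{{}}
	for k, v := range kv {
		vals, ok := v.([]interface{})
		if !ok {
			vals = []interface{}{v}
		}
		var next []KeyValue
		for _, c := range configs {
			for _, val := range vals {
				c2 := KeyValue{}
				for ck, cv := range c {
					c2[ck] = cv
				}
				c2[k] = val
				next = append(next, c2)
			}
		}
		configs = next
	}
	return configs
}

func main() {
	kv := KeyValue{
		"fs":      "linux",
		"journal": []interface{}{"data", "ordered"},
	}
	fmt.Println(len(kv.Product())) // one config per journal mode: 2
}
```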

func (KeyValue) Validate

func (kv KeyValue) Validate() error

type KeyValuePair

type KeyValuePair struct {
	Key string
	Val interface{}
}

type Observation

type Observation struct {
	Values KeyValue `json:"values"`
	Config KeyValue `json:"config"`
}

func ReadObservation

func ReadObservation(r io.Reader) (o Observation, err error)

ReadObservation gets the next observation in r

func (Observation) Write

func (o Observation) Write(w io.Writer) error

Write appends the serialized observation to w

type Workload

type Workload struct {
	Fs    Fs
	Bench Benchmark
}

func (Workload) Run

func (w Workload) Run() []Observation
