store

package
v0.7.0
Published: Dec 8, 2016 License: Apache-2.0 Imports: 3 Imported by: 0

README

Results of benchmarking

Using RocksDB

Read times are on the order of a few microseconds, while write times with Sync set to true are ~30 milliseconds.

$ go test -run BenchmarkSet -v -bench .
PASS
BenchmarkGet_valsize100-6  	  500000	      2850 ns/op
--- BENCH: BenchmarkGet_valsize100-6
	store_test.go:85: Wrote 100 keys.
	store_test.go:85: Wrote 100 keys.
	store_test.go:85: Wrote 100 keys.
	store_test.go:85: Wrote 100 keys.
BenchmarkGet_valsize1000-6 	  500000	      3565 ns/op
--- BENCH: BenchmarkGet_valsize1000-6
	store_test.go:85: Wrote 100 keys.
	store_test.go:85: Wrote 100 keys.
	store_test.go:85: Wrote 100 keys.
	store_test.go:85: Wrote 100 keys.
BenchmarkGet_valsize10000-6	  200000	      8541 ns/op
--- BENCH: BenchmarkGet_valsize10000-6
	store_test.go:85: Wrote 100 keys.
	store_test.go:85: Wrote 100 keys.
	store_test.go:85: Wrote 100 keys.
	store_test.go:85: Wrote 100 keys.
	store_test.go:85: Wrote 100 keys.
BenchmarkSet_valsize100-6  	      50	  32932578 ns/op
BenchmarkSet_valsize1000-6 	      50	  28066678 ns/op
BenchmarkSet_valsize10000-6	      50	  28736228 ns/op
ok  	github.com/dgraph-io/dgraph/store	48.393s
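For reference, a write benchmark along these lines could look like the sketch below, written as if inside this package. It is not the actual store_test.go code: the key naming, value size, and path are illustrative, and NewSyncStore is assumed to be the constructor that turns Sync on.

package store

import (
	"fmt"
	"math/rand"
	"testing"
)

// BenchmarkSetSketch is an illustrative write benchmark, not the actual
// store_test.go code. It writes b.N values of a fixed size through SetOne;
// with a sync'd store each write waits for disk, which is where the
// ~30 ms/op figures above come from.
func BenchmarkSetSketch(b *testing.B) {
	s, err := NewSyncStore("/tmp/store-bench") // illustrative path
	if err != nil {
		b.Fatal(err)
	}
	defer s.Close()

	val := make([]byte, 1000)
	rand.Read(val)

	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		key := []byte(fmt.Sprintf("key_%d", i))
		if err := s.SetOne(key, val); err != nil {
			b.Fatal(err)
		}
	}
}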

The following results are based on dgraph-io/experiments/db:

BoltDB

Without copying the resulting byte slice from Bolt (unsafe):

$ go test -bench BenchmarkRead .
testing: warning: no tests to run
PASS
BenchmarkReadBolt_1024	  500000	      3858 ns/op
BenchmarkReadBolt_10KB	  500000	      3738 ns/op
BenchmarkReadBolt_500KB	 1000000	      3141 ns/op
BenchmarkReadBolt_1MB	 1000000	      3026 ns/op
ok  	github.com/dgraph-io/experiments/db	102.513s

Copying the resulting byte slice (safe):

$ go test -bench BenchmarkRead .
testing: warning: no tests to run
PASS
BenchmarkReadBolt_1024	  200000	      6760 ns/op
BenchmarkReadBolt_10KB	  100000	     21249 ns/op
BenchmarkReadBolt_500KB	   10000	    214449 ns/op
BenchmarkReadBolt_1MB	    3000	    350712 ns/op
ok  	github.com/dgraph-io/experiments/db	80.890s
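The safe and unsafe numbers above differ only in what is done with the byte slice BoltDB returns: Bucket.Get hands back memory that is only valid for the lifetime of the read transaction, so a caller that needs the value afterwards has to copy it out. A sketch of the two variants being compared (the bucket and key names are illustrative):

package main

import (
	"log"

	"github.com/boltdb/bolt"
)

func main() {
	db, err := bolt.Open("bench.db", 0600, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	key := []byte("key_1") // illustrative key
	var safeVal []byte

	err = db.View(func(tx *bolt.Tx) error {
		b := tx.Bucket([]byte("bench")) // illustrative bucket name
		if b == nil {
			return nil
		}
		v := b.Get(key)
		// "Unsafe": use v directly. It aliases Bolt's mmap'd page and is only
		// valid inside this transaction; this is what the cheaper numbers measure.
		_ = v
		// "Safe": copy the value out so it stays valid after the transaction.
		// This copy is the extra cost visible in the second set of numbers.
		safeVal = append([]byte(nil), v...)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
	_ = safeVal
}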

RocksDB

$ go test -bench BenchmarkGet .
PASS
BenchmarkGet_valsize1024	  300000	      5715 ns/op
BenchmarkGet_valsize10KB	   50000	     27619 ns/op
BenchmarkGet_valsize500KB	    2000	    604185 ns/op
BenchmarkGet_valsize1MB	    2000	   1064685 ns/op
ok  	github.com/dgraph-io/dgraph/store	55.029s

Thoughts

Dgraph uses an append-only commit log to sync new mutations to disk before returning. Every time a posting list is initialized, it reads both the stored posting list and the mutations committed after that posting list was written. Hence, our access pattern for the store is largely read-heavy, with relatively few writes, irrespective of how many writes the end user commits.
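A rough sketch of the read path this describes, written as if inside this package; the posting-list and commit-log types and the decode step are illustrative placeholders, not dgraph's actual code:

// Illustrative types only; dgraph's real posting-list and commit-log types differ.
type Mutation struct{ Ts int64 }

type PostingList struct {
	CommitTs  int64
	Mutations []Mutation
}

type CommitLog interface {
	// MutationsSince returns mutations committed after the given timestamp.
	MutationsSince(ts int64) []Mutation
}

// initPostingList sketches the access pattern described above: one read from
// the store for the persisted posting list, plus an in-memory replay of the
// commit-log entries written after it.
func initPostingList(s *Store, key []byte, clog CommitLog) (*PostingList, error) {
	val, err := s.Get(key) // the common case: a single read from RocksDB
	if err != nil {
		return nil, err
	}
	pl := &PostingList{}
	_ = val // decoding the stored bytes into pl is elided in this sketch

	// Replay mutations committed after the stored posting list was written.
	pl.Mutations = append(pl.Mutations, clog.MutationsSince(pl.CommitTs)...)
	return pl, nil
}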

Hence, BoltDB is the better choice: it performs better for reads/seeks, even though Dgraph needs to copy the value. Writes are somewhat slower, but that shouldn't be a problem for the reasons mentioned above.

Update: Just realized that BoltDB only allows a SINGLE writer at any point in time. This is equivalent to a global mutex lock. That'd essentially kill Dgraph's performance. So, BoltDB is out!

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Store

type Store struct {
	// contains filtered or unexported fields
}

Store contains some handles to RocksDB.

func NewReadOnlyStore added in v0.4.4

func NewReadOnlyStore(filepath string) (*Store, error)

NewReadOnlyStore constructs a read-only Store object at filepath.

func NewStore added in v0.4.4

func NewStore(filepath string) (*Store, error)

NewStore constructs a Store object at filepath.
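A minimal usage sketch of the Store API. The Data and Free methods on the returned rdb.Slice are assumptions here, patterned on gorocksdb-style slice APIs:

package main

import (
	"log"

	"github.com/dgraph-io/dgraph/store"
)

func main() {
	s, err := store.NewStore("/tmp/example-store") // directory for the RocksDB files
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()

	if err := s.SetOne([]byte("name"), []byte("dgraph")); err != nil {
		log.Fatal(err)
	}

	val, err := s.Get([]byte("name"))
	if err != nil {
		log.Fatal(err)
	}
	// Data()/Free() are assumed to follow a gorocksdb-style Slice API;
	// copy the bytes out if they need to outlive the slice.
	log.Printf("name = %s", val.Data())
	val.Free()
}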

func NewSyncStore added in v0.7.0

func NewSyncStore(filepath string) (*Store, error)

func (*Store) Close

func (s *Store) Close()

Close closes our data store.

func (*Store) Delete

func (s *Store) Delete(k []byte) error

Delete deletes a key from the data store.

func (*Store) Get

func (s *Store) Get(key []byte) (*rdb.Slice, error)

Get returns the value for a given key from RocksDB.

func (*Store) GetStats added in v0.7.0

func (s *Store) GetStats() string

GetStats returns stats of our data store.

func (*Store) IndexFilterblockSize added in v0.4.3

func (s *Store) IndexFilterblockSize() uint64

IndexFilterblockSize returns the filter block size.

func (*Store) MemtableSize added in v0.4.3

func (s *Store) MemtableSize() uint64

MemtableSize returns the memtable size.

func (*Store) NewCheckpoint added in v0.7.0

func (s *Store) NewCheckpoint() (*rdb.Checkpoint, error)

NewCheckpoint creates a new checkpoint from the current store.

func (*Store) NewIterator added in v0.4.4

func (s *Store) NewIterator() *rdb.Iterator

NewIterator initializes a new iterator and returns it.
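A sketch of walking all key-value pairs, written as if inside this package. The iterator methods used (SeekToFirst, Valid, Next, Key, Value, Close) and the slice Data/Free calls are assumptions patterned on gorocksdb-style iterators:

// dumpAll is an illustrative sketch that visits every key-value pair in the store.
func dumpAll(s *Store, visit func(key, val []byte)) {
	it := s.NewIterator()
	defer it.Close()
	for it.SeekToFirst(); it.Valid(); it.Next() {
		k, v := it.Key(), it.Value()
		visit(k.Data(), v.Data())
		k.Free()
		v.Free()
	}
}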

func (*Store) NewSnapshot added in v0.7.0

func (s *Store) NewSnapshot() *rdb.Snapshot

NewSnapshot creates a new snapshot from the current store.

func (*Store) NewWriteBatch added in v0.4.4

func (s *Store) NewWriteBatch() *rdb.WriteBatch

NewWriteBatch creates a new WriteBatch object and returns a pointer to it.

func (*Store) SetOne

func (s *Store) SetOne(k []byte, val []byte) error

SetOne adds a key-value pair to the data store.

func (*Store) SetSnapshot added in v0.7.0

func (s *Store) SetSnapshot(snapshot *rdb.Snapshot)

SetSnapshot updates default read options to use the given snapshot.

func (*Store) WriteBatch added in v0.4.4

func (s *Store) WriteBatch(wb *rdb.WriteBatch) error

WriteBatch does a batch write to RocksDB from the data in the WriteBatch object.
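A sketch of batching several writes into a single RocksDB write, written as if inside this package. The Put and Destroy methods on rdb.WriteBatch are assumptions patterned on gorocksdb's batch API:

// setMany is an illustrative sketch that writes several key-value pairs in one batch.
func setMany(s *Store, kvs map[string][]byte) error {
	wb := s.NewWriteBatch()
	defer wb.Destroy()
	for k, v := range kvs {
		wb.Put([]byte(k), v)
	}
	return s.WriteBatch(wb) // one RocksDB write for the whole batch
}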
