Documentation ¶
Overview ¶
Package badger implements an embeddable, simple and fast key-value database, written in pure Go. It is designed to be highly performant for both reads and writes simultaneously. Badger uses Multi-Version Concurrency Control (MVCC), and supports transactions. It runs transactions concurrently, with serializable snapshot isolation guarantees.
Badger uses an LSM tree along with a value log to separate keys from values, hence reducing both write amplification and the size of the LSM tree. This allows the LSM tree to be served entirely from RAM, while the values are served from the SSD.
Usage ¶
Badger has the following main types: DB, Txn, Item and Iterator. DB contains keys that are associated with values. It must be opened with the appropriate options before it can be accessed.
All operations happen inside a Txn. Txn represents a transaction, which can be read-only or read-write. Read-only transactions can read values for a given key (which are returned inside an Item), or iterate over a set of key-value pairs using an Iterator (which are returned as Item type values as well). Read-write transactions can also update and delete keys from the DB.
See the examples for more usage details.
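As a quick orientation, here is a minimal sketch of the View/Update pattern described above. The directory path is an assumption for illustration, and error handling is abbreviated.

opts := DefaultOptions
opts.Dir = "/tmp/badger" // assumed path; must exist and be writable
opts.ValueDir = "/tmp/badger"
db, err := Open(opts)
if err != nil {
	log.Fatal(err)
}
defer db.Close()

// Read-write transaction, managed by Update.
err = db.Update(func(txn *Txn) error {
	return txn.Set([]byte("answer"), []byte("42"))
})
if err != nil {
	log.Fatal(err)
}

// Read-only transaction, managed by View.
err = db.View(func(txn *Txn) error {
	item, err := txn.Get([]byte("answer"))
	if err != nil {
		return err
	}
	val, err := item.Value()
	if err != nil {
		return err
	}
	fmt.Printf("answer = %s\n", val)
	return nil
})
if err != nil {
	log.Fatal(err)
}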
Index ¶
- Constants
- Variables
- type DB
- func (db *DB) Backup(w io.Writer, since uint64) (uint64, error)
- func (db *DB) Close() (err error)
- func (db *DB) GetMergeOperator(key []byte, f MergeFunc, dur time.Duration) *MergeOperator
- func (db *DB) GetSequence(key []byte, bandwidth uint64) (*Sequence, error)
- func (db *DB) Load(r io.Reader) error
- func (db *DB) MaxBatchCount() int64
- func (db *DB) MaxBatchSize() int64
- func (db *DB) NewTransaction(update bool) *Txn
- func (db *DB) RunValueLogGC(discardRatio float64) error
- func (db *DB) Size() (lsm int64, vlog int64)
- func (db *DB) Tables() []TableInfo
- func (db *DB) Update(fn func(txn *Txn) error) error
- func (db *DB) View(fn func(txn *Txn) error) error
- type Entry
- type Item
- func (item *Item) DiscardEarlierVersions() bool
- func (item *Item) EstimatedSize() int64
- func (item *Item) ExpiresAt() uint64
- func (item *Item) IsDeletedOrExpired() bool
- func (item *Item) Key() []byte
- func (item *Item) KeyCopy(dst []byte) []byte
- func (item *Item) String() string
- func (item *Item) ToString() string
- func (item *Item) UserMeta() byte
- func (item *Item) Value() ([]byte, error)
- func (item *Item) ValueCopy(dst []byte) ([]byte, error)
- func (item *Item) Version() uint64
- type Iterator
- type IteratorOptions
- type ManagedDB
- type Manifest
- type MergeFunc
- type MergeOperator
- type Options
- type Sequence
- type TableInfo
- type Txn
- func (txn *Txn) Commit(callback func(error)) error
- func (txn *Txn) CommitAt(commitTs uint64, callback func(error)) error
- func (txn *Txn) Delete(key []byte) error
- func (txn *Txn) Discard()
- func (txn *Txn) Get(key []byte) (item *Item, rerr error)
- func (txn *Txn) NewIterator(opt IteratorOptions) *Iterator
- func (txn *Txn) Set(key, val []byte) error
- func (txn *Txn) SetEntry(e *Entry) error
- func (txn *Txn) SetWithDiscard(key, val []byte, meta byte) error
- func (txn *Txn) SetWithMeta(key, val []byte, meta byte) error
- func (txn *Txn) SetWithTTL(key, val []byte, dur time.Duration) error
Examples ¶
Constants ¶
const (
// ManifestFilename is the filename for the manifest file.
ManifestFilename = "MANIFEST"
)
Variables ¶
var (
	// ErrValueLogSize is returned when opt.ValueLogFileSize option is not within the valid
	// range.
	ErrValueLogSize = errors.New("Invalid ValueLogFileSize, must be between 1MB and 2GB")

	// ErrValueThreshold is returned when ValueThreshold is set to a value close to or greater than
	// uint16.
	ErrValueThreshold = errors.New("Invalid ValueThreshold, must be lower than uint16.")

	// ErrKeyNotFound is returned when key isn't found on a txn.Get.
	ErrKeyNotFound = errors.New("Key not found")

	// ErrTxnTooBig is returned if too many writes are fit into a single transaction.
	ErrTxnTooBig = errors.New("Txn is too big to fit into one request")

	// ErrConflict is returned when a transaction conflicts with another transaction. This can
	// happen if the read rows had been updated concurrently by another transaction.
	ErrConflict = errors.New("Transaction Conflict. Please retry")

	// ErrReadOnlyTxn is returned if an update function is called on a read-only transaction.
	ErrReadOnlyTxn = errors.New("No sets or deletes are allowed in a read-only transaction")

	// ErrDiscardedTxn is returned if a previously discarded transaction is re-used.
	ErrDiscardedTxn = errors.New("This transaction has been discarded. Create a new one")

	// ErrEmptyKey is returned if an empty key is passed on an update function.
	ErrEmptyKey = errors.New("Key cannot be empty")

	// ErrRetry is returned when a log file containing the value is not found.
	// This usually indicates that it may have been garbage collected, and the
	// operation needs to be retried.
	ErrRetry = errors.New("Unable to find log file. Please retry")

	// ErrThresholdZero is returned if threshold is set to zero, and value log GC is called.
	// In such a case, GC can't be run.
	ErrThresholdZero = errors.New(
		"Value log GC can't run because threshold is set to zero")

	// ErrNoRewrite is returned if a call for value log GC doesn't result in a log file rewrite.
	ErrNoRewrite = errors.New(
		"Value log GC attempt didn't result in any cleanup")

	// ErrRejected is returned if a value log GC is called either while another GC is running, or
	// after DB::Close has been called.
	ErrRejected = errors.New("Value log GC request rejected")

	// ErrInvalidRequest is returned if the user request is invalid.
	ErrInvalidRequest = errors.New("Invalid request")

	// ErrManagedTxn is returned if the user tries to use an API which isn't
	// allowed due to external management of transactions, when using ManagedDB.
	ErrManagedTxn = errors.New(
		"Invalid API request. Not allowed to perform this action using ManagedDB")

	// ErrInvalidDump if a data dump made previously cannot be loaded into the database.
	ErrInvalidDump = errors.New("Data dump cannot be read")

	// ErrZeroBandwidth is returned if the user passes in zero bandwidth for sequence.
	ErrZeroBandwidth = errors.New("Bandwidth must be greater than zero")

	// ErrInvalidLoadingMode is returned when opt.ValueLogLoadingMode option is not
	// within the valid range.
	ErrInvalidLoadingMode = errors.New("Invalid ValueLogLoadingMode, must be FileIO or MemoryMap")

	// ErrReplayNeeded is returned when opt.ReadOnly is set but the
	// database requires a value log replay.
	ErrReplayNeeded = errors.New("Database was not properly closed, cannot open read-only")

	// ErrWindowsNotSupported is returned when opt.ReadOnly is used on Windows.
	ErrWindowsNotSupported = errors.New("Read-only mode is not supported on Windows")

	// ErrTruncateNeeded is returned when the value log gets corrupt, and requires truncation of
	// corrupt data to allow Badger to run properly.
	ErrTruncateNeeded = errors.New("Value log truncate required to run DB. This might result in data loss.")

	// ErrBlockedWrites is returned if the user called DropAll. During the process of dropping all
	// data from Badger, we stop accepting new writes, by returning this error.
	ErrBlockedWrites = errors.New("Writes are blocked possibly due to DropAll")
)
var DefaultIteratorOptions = IteratorOptions{
	PrefetchValues: true,
	PrefetchSize:   100,
	Reverse:        false,
	AllVersions:    false,
}
DefaultIteratorOptions contains default options when iterating over Badger key-value stores.
var DefaultOptions = Options{
	DoNotCompact:            false,
	LevelOneSize:            256 << 20,
	LevelSizeMultiplier:     10,
	TableLoadingMode:        options.LoadToRAM,
	ValueLogLoadingMode:     options.MemoryMap,
	MaxLevels:               7,
	MaxTableSize:            64 << 20,
	NumCompactors:           3,
	NumLevelZeroTables:      5,
	NumLevelZeroTablesStall: 10,
	NumMemtables:            5,
	SyncWrites:              true,
	NumVersionsToKeep:       1,
	ValueLogFileSize:        1<<30 - 1,
	ValueLogMaxEntries:      1000000,
	ValueThreshold:          32,
	Truncate:                false,
}
DefaultOptions sets a list of recommended options for good performance. Feel free to modify these to suit your needs.
var LSMOnlyOptions = Options{}
LSMOnlyOptions follows from DefaultOptions, but sets a higher ValueThreshold so values would be colocated with the LSM tree, with value log largely acting as a write-ahead log only. These options would reduce the disk usage of value log, and make Badger act like a typical LSM tree.
Functions ¶
This section is empty.
Types ¶
type DB ¶
type DB struct {
	sync.RWMutex // Guards list of inmemory tables, not individual reads and writes.
	// contains filtered or unexported fields
}
DB provides the various functions required to interact with Badger. DB is thread-safe.
func Open ¶
Open returns a new DB object.
Example ¶
dir, err := ioutil.TempDir("", "badger")
if err != nil {
	log.Fatal(err)
}
defer os.RemoveAll(dir)

opts := DefaultOptions
opts.Dir = dir
opts.ValueDir = dir
db, err := Open(opts)
if err != nil {
	log.Fatal(err)
}
defer db.Close()

err = db.View(func(txn *Txn) error {
	_, err := txn.Get([]byte("key")) // We expect ErrKeyNotFound
	fmt.Println(err)
	return nil
})
if err != nil {
	log.Fatal(err)
}

txn := db.NewTransaction(true) // Read-write txn
err = txn.Set([]byte("key"), []byte("value"))
if err != nil {
	log.Fatal(err)
}
err = txn.Commit(nil)
if err != nil {
	log.Fatal(err)
}

err = db.View(func(txn *Txn) error {
	item, err := txn.Get([]byte("key"))
	if err != nil {
		return err
	}
	val, err := item.Value()
	if err != nil {
		return err
	}
	fmt.Printf("%s\n", string(val))
	return nil
})
if err != nil {
	log.Fatal(err)
}
Output:

Key not found
value
func (*DB) Backup ¶
Backup dumps a protobuf-encoded list of all entries in the database that are newer than the specified version into the given writer. It returns a timestamp indicating when the entries were dumped, which can be passed into a later invocation to generate an incremental dump of entries that have been added or modified since the last invocation of DB.Backup().
This can be used to backup the data in a database at a given point in time.
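As an illustrative sketch (the file names are hypothetical), a full backup followed by a later incremental one:

f, err := os.Create("badger-full.bak") // hypothetical file name
if err != nil {
	log.Fatal(err)
}
since, err := db.Backup(f, 0) // since=0 dumps every entry
if err != nil {
	log.Fatal(err)
}
f.Close()

// Later: dump only entries added/modified after the previous backup.
f2, err := os.Create("badger-incr.bak")
if err != nil {
	log.Fatal(err)
}
if _, err := db.Backup(f2, since); err != nil {
	log.Fatal(err)
}
f2.Close()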
func (*DB) Close ¶
Close closes a DB. It's crucial to call it to ensure all the pending updates make their way to disk. Calling DB.Close() multiple times is not safe and would cause a panic.
func (*DB) GetMergeOperator ¶
GetMergeOperator creates a new MergeOperator for a given key and returns a pointer to it. It also fires off a goroutine that runs periodically, as specified by dur, compacting the recorded values using the merge function.
func (*DB) GetSequence ¶
GetSequence initiates a new Sequence object, generating it from the lease stored in the database, if one is available. A Sequence can be used to get a list of monotonically increasing integers. Multiple sequences can be created by providing different keys. Bandwidth sets the size of the lease, determining how many Next() requests can be served from memory.
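A minimal sketch, assuming a key reserved for the sequence and a bandwidth of 1000 leased integers at a time:

seq, err := db.GetSequence([]byte("user-ids"), 1000) // "user-ids" is an assumed key
if err != nil {
	log.Fatal(err)
}
num, err := seq.Next() // monotonically increasing; served from the in-memory lease
if err != nil {
	log.Fatal(err)
}
fmt.Println(num)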
func (*DB) Load ¶
Load reads a protobuf-encoded list of all entries from a reader and writes them to the database. This can be used to restore the database from a backup made by calling DB.Backup().
DB.Load() should be called on a database that is not running any other concurrent transactions while the load is in progress.
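A corresponding restore sketch, assuming the hypothetical backup file from the Backup example above and no other concurrent transactions:

f, err := os.Open("badger-full.bak") // hypothetical backup file
if err != nil {
	log.Fatal(err)
}
defer f.Close()
if err := db.Load(f); err != nil {
	log.Fatal(err)
}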
func (*DB) MaxBatchCount ¶
MaxBatchCount returns the maximum possible number of entries in a batch.
func (*DB) MaxBatchSize ¶
MaxBatchSize returns the maximum possible batch size.
func (*DB) NewTransaction ¶
NewTransaction creates a new transaction. Badger supports concurrent execution of transactions, providing serializable snapshot isolation and avoiding write skews. Badger achieves this by tracking the keys read, and ensuring at Commit time that these keys weren't concurrently modified by another transaction.
For read-only transactions, set update to false. In this mode, we don't track the rows read for any changes. Thus, any long running iterations done in this mode wouldn't pay this overhead.
Running transactions concurrently is OK. However, a transaction itself isn't thread-safe, and should only be run serially. It doesn't matter if a transaction is created by one goroutine and passed down to another, as long as the Txn APIs are called serially.
When you create a new transaction, it is absolutely essential to call Discard(). This should be done irrespective of what the update param is set to. Commit API internally runs Discard, but running it twice wouldn't cause any issues.
txn := db.NewTransaction(false)
defer txn.Discard()
// Call various APIs.
func (*DB) RunValueLogGC ¶
RunValueLogGC triggers a value log garbage collection.
It picks value log files to perform GC based on statistics that are collected during compactions. If no such statistics are available, then log files are picked in random order. The process stops as soon as it encounters a log file that does not result in garbage collection.

When a log file is picked, it is first sampled. If the sample shows that at least a discardRatio fraction of the file's space can be discarded, the file is rewritten.

If a call to RunValueLogGC results in no rewrites, ErrNoRewrite is returned.

We recommend setting discardRatio to 0.5, indicating that a file should be rewritten if half of its space can be discarded. This results in a lifetime value log write amplification of 2 (1 from the original write + 0.5 rewrite + 0.25 + 0.125 + ... = 2). Setting it to a higher value would result in fewer space reclaims, while setting it to a lower value would result in more space reclaims at the cost of increased activity on the LSM tree. discardRatio must be in the range (0.0, 1.0), both endpoints excluded; otherwise ErrInvalidRequest is returned.
Only one GC is allowed at a time. If another value log GC is running, or DB has been closed, this would return an ErrRejected.
Note: Every time GC is run, it would produce a spike of activity on the LSM tree.
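A common pattern, sketched below with an arbitrary interval, is to run GC periodically and keep going until a run yields no rewrite:

ticker := time.NewTicker(10 * time.Minute) // interval is an arbitrary choice
defer ticker.Stop()
for range ticker.C {
	for {
		if err := db.RunValueLogGC(0.5); err != nil {
			break // ErrNoRewrite or ErrRejected: nothing more to reclaim now
		}
	}
}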
func (*DB) Size ¶
Size returns the size of lsm and value log files in bytes. It can be used to decide how often to call RunValueLogGC.
type Entry ¶
type Entry struct {
	Key       []byte
	Value     []byte
	UserMeta  byte
	ExpiresAt uint64 // time.Unix
	// contains filtered or unexported fields
}
Entry provides Key, Value, UserMeta and ExpiresAt. This struct can be used by the user to set data.
type Item ¶
type Item struct {
// contains filtered or unexported fields
}
Item is returned during iteration. The output of both Key() and Value() is valid only until iterator.Next() is called.
func (*Item) DiscardEarlierVersions ¶
func (*Item) EstimatedSize ¶
EstimatedSize returns approximate size of the key-value pair.
This can be called while iterating through a store to quickly estimate the size of a range of key-value pairs (without fetching the corresponding values).
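For instance, the following sketch estimates the on-disk size of a hypothetical key range without fetching any values:

var total int64
err := db.View(func(txn *Txn) error {
	opt := DefaultIteratorOptions
	opt.PrefetchValues = false // key-only iteration; values are never read
	it := txn.NewIterator(opt)
	defer it.Close()
	prefix := []byte("logs/") // hypothetical key range
	for it.Seek(prefix); it.ValidForPrefix(prefix); it.Next() {
		total += it.Item().EstimatedSize()
	}
	return nil
})
if err != nil {
	log.Fatal(err)
}
fmt.Printf("~%d bytes\n", total)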
func (*Item) ExpiresAt ¶
ExpiresAt returns a Unix time value indicating when the item will be considered expired. 0 indicates that the item will never expire.
func (*Item) IsDeletedOrExpired ¶
IsDeletedOrExpired returns true if the item contains a deleted or expired value.
func (*Item) Key ¶
Key returns the key.
Key is valid only as long as the item or the transaction is valid. If you need to use it outside its validity, please use KeyCopy.
func (*Item) KeyCopy ¶
KeyCopy returns a copy of the key of the item, writing it to dst slice. If nil is passed, or capacity of dst isn't sufficient, a new slice would be allocated and returned.
func (*Item) UserMeta ¶
UserMeta returns the userMeta set by the user. This byte, optionally set by the user, is typically used to interpret the value.
func (*Item) Value ¶
Value retrieves the value of the item from the value log.
This method must be called within a transaction. Calling it outside a transaction is considered undefined behavior. If an iterator is being used, then Item.Value() is defined in the current iteration only, because items are reused.
If you need to use a value outside a transaction, please use Item.ValueCopy instead, or copy it yourself. Value might change once discard or commit is called. Use ValueCopy if you want to do a Set after Get.
func (*Item) ValueCopy ¶
ValueCopy returns a copy of the value of the item from the value log, writing it to dst slice. If nil is passed, or capacity of dst isn't sufficient, a new slice would be allocated and returned. Tip: It might make sense to reuse the returned slice as dst argument for the next call.
This function is useful in long running iterate/update transactions to avoid a write deadlock. See Github issue: https://github.com/dgraph-io/badger/issues/315
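A sketch of copying a value out so it remains usable after the transaction ends:

var valCopy []byte
err := db.View(func(txn *Txn) error {
	item, err := txn.Get([]byte("key"))
	if err != nil {
		return err
	}
	valCopy, err = item.ValueCopy(nil) // nil dst: a new slice is allocated
	return err
})
if err != nil {
	log.Fatal(err)
}
fmt.Printf("%s\n", valCopy) // still valid here, outside the transaction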
type Iterator ¶
type Iterator struct {
// contains filtered or unexported fields
}
Iterator helps iterating over the KV pairs in a lexicographically sorted order.
func (*Iterator) Close ¶
func (it *Iterator) Close()
Close would close the iterator. It is important to call this when you're done with iteration.
func (*Iterator) Item ¶
Item returns pointer to the current key-value pair. This item is only valid until it.Next() gets called.
func (*Iterator) Next ¶
func (it *Iterator) Next()
Next would advance the iterator by one. Always check it.Valid() after a Next() to ensure you have access to a valid it.Item().
func (*Iterator) Rewind ¶
func (it *Iterator) Rewind()
Rewind would rewind the iterator cursor all the way to zero-th position, which would be the smallest key if iterating forward, and largest if iterating backward. It does not keep track of whether the cursor started with a Seek().
func (*Iterator) Seek ¶
Seek would seek to the provided key if present. If absent, it would seek to the next smallest key greater than the provided key if iterating in the forward direction. Behavior would be reversed if iterating backwards.
func (*Iterator) ValidForPrefix ¶
ValidForPrefix returns false when iteration is done or when the current key is not prefixed by the specified prefix.
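Together with Seek, this supports the usual prefix-scan idiom. A sketch, with a hypothetical prefix:

err := db.View(func(txn *Txn) error {
	it := txn.NewIterator(DefaultIteratorOptions)
	defer it.Close()
	prefix := []byte("user:") // hypothetical prefix
	for it.Seek(prefix); it.ValidForPrefix(prefix); it.Next() {
		item := it.Item()
		val, err := item.Value()
		if err != nil {
			return err
		}
		fmt.Printf("%s => %s\n", item.Key(), val)
	}
	return nil
})
if err != nil {
	log.Fatal(err)
}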
type IteratorOptions ¶
type IteratorOptions struct {
	// Indicates whether we should prefetch values during iteration and store them.
	PrefetchValues bool
	// How many KV pairs to prefetch while iterating. Valid only if PrefetchValues is true.
	PrefetchSize int
	Reverse      bool // Direction of iteration. False is forward, true is backward.
	AllVersions  bool // Fetch all valid versions of the same key.
	// contains filtered or unexported fields
}
IteratorOptions is used to set options when iterating over Badger key-value stores.
This package provides DefaultIteratorOptions which contains options that should work for most applications. Consider using that as a starting point before customizing it for your own needs.
type ManagedDB ¶
type ManagedDB struct {
*DB
}
ManagedDB allows end users to manage the transactions themselves. Transaction start and commit timestamps are set by end-user.
This is only useful for databases built on top of Badger (like Dgraph), and can be ignored by most users.
WARNING: This is an experimental feature and may be changed significantly in a future release. So please proceed with caution.
func OpenManaged ¶
OpenManaged returns a new ManagedDB, which allows more control over setting transaction timestamps.
This is only useful for databases built on top of Badger (like Dgraph), and can be ignored by most users.
func (*ManagedDB) DropAll ¶
DropAll would drop all the data stored in Badger. It does this in the following way:
- Stop accepting new writes.
- Pause the compactions.
- Pick all tables from all levels, create a changeset to delete all these tables and apply it to the manifest. Do not pick up the latest table from level 0, to preserve the (persistent) badgerHead key.
- Iterate over the KVs in Level 0, and run deletes on them via transactions.
- The deletions are done at the same timestamp as the latest version of the key. Thus, we could write the keys back at the same timestamp as before.
func (*ManagedDB) GetSequence ¶
GetSequence is not supported on ManagedDB. Calling this would result in a panic.
func (*ManagedDB) NewTransaction ¶
NewTransaction overrides DB.NewTransaction() and panics when invoked. Use NewTransactionAt() instead.
func (*ManagedDB) NewTransactionAt ¶
NewTransactionAt follows the same logic as DB.NewTransaction(), but uses the provided read timestamp.
This is only useful for databases built on top of Badger (like Dgraph), and can be ignored by most users.
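A sketch of the managed flow, assuming the timestamps come from an external oracle (the values here are placeholders):

mdb, err := OpenManaged(opts) // opts as for Open
if err != nil {
	log.Fatal(err)
}
defer mdb.Close()

var readTs, commitTs uint64 = 10, 11      // placeholder timestamps
txn := mdb.NewTransactionAt(readTs, true) // true: read-write
defer txn.Discard()
if err := txn.Set([]byte("key"), []byte("value")); err != nil {
	log.Fatal(err)
}
if err := txn.CommitAt(commitTs, nil); err != nil {
	log.Fatal(err)
}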
func (*ManagedDB) SetDiscardTs ¶
SetDiscardTs sets a timestamp at or below which, any invalid or deleted versions can be discarded from the LSM tree, and thence from the value log to reclaim disk space.
type Manifest ¶
type Manifest struct {
	Levels []levelManifest
	Tables map[uint64]tableManifest

	// Contains total number of creation and deletion changes in the manifest -- used to compute
	// whether it'd be useful to rewrite the manifest.
	Creations int
	Deletions int
}
Manifest represents the contents of the MANIFEST file in a Badger store.
The MANIFEST file describes the startup state of the db -- all LSM files and what level they're at.
It consists of a sequence of ManifestChangeSet objects. Each of these is treated atomically, and contains a sequence of ManifestChange objects (file creations/deletions) which we use to reconstruct the manifest at startup.
func ReplayManifestFile ¶
ReplayManifestFile reads the manifest file and constructs two manifest objects. (We need one immutable copy and one mutable copy of the manifest. Easiest way is to construct two of them.) Also, returns the last offset after a completely read manifest entry -- the file must be truncated at that point before further appends are made (if there is a partial entry after that). In normal conditions, truncOffset is the file size.
type MergeFunc ¶
MergeFunc accepts two byte slices, one representing an existing value, and another representing a new value that needs to be ‘merged’ into it. MergeFunc contains the logic to perform the ‘merge’ and return an updated value. MergeFunc could perform operations like integer addition, list appends etc. Note that the ordering of the operands is unspecified, so the merge func should either be agnostic to ordering or do additional handling if ordering is required.
type MergeOperator ¶
MergeOperator represents a Badger merge operator.
func (*MergeOperator) Add ¶
func (op *MergeOperator) Add(val []byte) error
Add records a value in Badger which will eventually be merged by a background routine into the values that were recorded by previous invocations to Add().
func (*MergeOperator) Get ¶
func (op *MergeOperator) Get() ([]byte, error)
Get returns the latest value for the merge operator, which is derived by applying the merge function to all the values added so far.
If Add has not been called even once, Get will return ErrKeyNotFound.
func (*MergeOperator) Stop ¶
func (op *MergeOperator) Stop()
Stop waits for any pending merge to complete and then stops the background goroutine.
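Putting the pieces together, a sketch of a merge operator whose merge function concatenates values. Per the MergeFunc note above, operand ordering is unspecified, so concatenation here is purely illustrative; the key and duration are assumptions.

// add concatenates the new value onto the existing one.
add := func(existing, val []byte) []byte {
	return append(existing, val...)
}

m := db.GetMergeOperator([]byte("merge-key"), add, 200*time.Millisecond)
defer m.Stop()

m.Add([]byte("A"))
m.Add([]byte("B"))
m.Add([]byte("C"))

res, err := m.Get()
if err != nil {
	log.Fatal(err)
}
fmt.Printf("%s\n", res) // some ordering of A, B and C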
type Options ¶
type Options struct {
	// 1. Mandatory flags
	// -------------------
	// Directory to store the data in. Should exist and be writable.
	Dir string
	// Directory to store the value log in. Can be the same as Dir. Should
	// exist and be writable.
	ValueDir string

	// 2. Frequently modified flags
	// -----------------------------
	// Sync all writes to disk. Setting this to true would slow down data
	// loading significantly.
	SyncWrites bool
	// How should LSM tree be accessed.
	TableLoadingMode options.FileLoadingMode
	// How should value log be accessed.
	ValueLogLoadingMode options.FileLoadingMode
	// How many versions to keep per key.
	NumVersionsToKeep int

	// 3. Flags that user might want to review
	// ----------------------------------------
	// The following affect all levels of LSM tree.
	MaxTableSize        int64 // Each table (or file) is at most this size.
	LevelSizeMultiplier int   // Equals SizeOf(Li+1)/SizeOf(Li).
	MaxLevels           int   // Maximum number of levels of compaction.
	// If value size >= this threshold, only store value offsets in tree.
	ValueThreshold int
	// Maximum number of tables to keep in memory, before stalling.
	NumMemtables int
	// The following affect how we handle LSM tree L0.
	// Maximum number of Level 0 tables before we start compacting.
	NumLevelZeroTables int
	// If we hit this number of Level 0 tables, we will stall until L0 is
	// compacted away.
	NumLevelZeroTablesStall int
	// Maximum total size for L1.
	LevelOneSize int64
	// Size of single value log file.
	ValueLogFileSize int64
	// Max number of entries a value log file can hold (approximately). A value log file would be
	// determined by the smaller of its file size and max entries.
	ValueLogMaxEntries uint32
	// Number of compaction workers to run concurrently.
	NumCompactors int

	// 4. Flags for testing purposes
	// ------------------------------
	DoNotCompact bool // Stops LSM tree from compactions.
	// Open the DB as read-only. With this set, multiple processes can
	// open the same Badger DB. Note: if the DB being opened had crashed
	// before and has vlog data to be replayed, ReadOnly will cause Open
	// to fail with an appropriate message.
	ReadOnly bool
	// Truncate value log to delete corrupt data, if any. Would not truncate if ReadOnly is set.
	Truncate bool
	// contains filtered or unexported fields
}
Options are params for creating a DB object.
This package provides DefaultOptions which contains options that should work for most applications. Consider using that as a starting point before customizing it for your own needs.
type Sequence ¶
Sequence represents a Badger sequence.
type Txn ¶
type Txn struct {
// contains filtered or unexported fields
}
Txn represents a Badger transaction.
func (*Txn) Commit ¶
Commit commits the transaction, following these steps:
1. If there are no writes, return immediately.
2. Check if read rows were updated since txn started. If so, return ErrConflict.
3. If no conflict, generate a commit timestamp and update written rows' commit ts.
4. Batch up all writes, write them to value log and LSM tree.
5. If callback is provided, Badger will return immediately after checking for conflicts. Writes to the database will happen in the background. If there is a conflict, an error will be returned and the callback will not run. If there are no conflicts, the callback will be called in the background upon successful completion of writes or any error during write.
If error is nil, the transaction is successfully committed. In case of a non-nil error, the LSM tree won't be updated, so there's no need for any rollback.
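The ErrTxnTooBig case is typically handled by committing what has accumulated and retrying in a fresh transaction, as in this sketch (the keys are illustrative):

txn := db.NewTransaction(true)
defer func() { txn.Discard() }() // discards whichever txn is current on return
for i := 0; i < 100000; i++ {
	key := []byte(fmt.Sprintf("key-%d", i))
	if err := txn.Set(key, []byte("value")); err == ErrTxnTooBig {
		// Commit what we have, then retry the write in a new transaction.
		if err := txn.Commit(nil); err != nil {
			log.Fatal(err)
		}
		txn = db.NewTransaction(true)
		if err := txn.Set(key, []byte("value")); err != nil {
			log.Fatal(err)
		}
	} else if err != nil {
		log.Fatal(err)
	}
}
if err := txn.Commit(nil); err != nil {
	log.Fatal(err)
}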
func (*Txn) CommitAt ¶
CommitAt commits the transaction, following the same logic as Commit(), but at the given commit timestamp. This will panic if not used with ManagedDB.
This is only useful for databases built on top of Badger (like Dgraph), and can be ignored by most users.
func (*Txn) Delete ¶
Delete deletes a key.
This is done by adding a delete marker for the key at commit timestamp. Any reads happening before this timestamp would be unaffected. Any reads after this commit would see the deletion.
The current transaction keeps a reference to the key byte slice argument. Users must not modify the key until the end of the transaction.
func (*Txn) Discard ¶
func (txn *Txn) Discard()
Discard discards a created transaction. This method is very important and must be called. The Commit method calls this internally; however, calling it multiple times doesn't cause any issues. So, this can safely be called via a defer right when the transaction is created.
NOTE: If any operations are run on a discarded transaction, ErrDiscardedTxn is returned.
func (*Txn) Get ¶
Get looks for key and returns corresponding Item. If key is not found, ErrKeyNotFound is returned.
func (*Txn) NewIterator ¶
func (txn *Txn) NewIterator(opt IteratorOptions) *Iterator
NewIterator returns a new iterator. Depending upon the options, either only keys, or both key-value pairs would be fetched. The keys are returned in lexicographically sorted order. Using prefetch is highly recommended if you're doing a long running iteration. Avoid long running iterations in update transactions.
Example ¶
dir, err := ioutil.TempDir("", "badger")
if err != nil {
	log.Fatal(err)
}
defer os.RemoveAll(dir)

opts := DefaultOptions
opts.Dir = dir
opts.ValueDir = dir
db, err := Open(opts)
if err != nil {
	log.Fatal(err)
}
defer db.Close()

bkey := func(i int) []byte {
	return []byte(fmt.Sprintf("%09d", i))
}
bval := func(i int) []byte {
	return []byte(fmt.Sprintf("%025d", i))
}

txn := db.NewTransaction(true)

// Fill in 1000 items
n := 1000
for i := 0; i < n; i++ {
	err := txn.Set(bkey(i), bval(i))
	if err != nil {
		log.Fatal(err)
	}
}

err = txn.Commit(nil)
if err != nil {
	log.Fatal(err)
}

opt := DefaultIteratorOptions
opt.PrefetchSize = 10

// Iterate over 1000 items
var count int
err = db.View(func(txn *Txn) error {
	it := txn.NewIterator(opt)
	defer it.Close()
	for it.Rewind(); it.Valid(); it.Next() {
		count++
	}
	return nil
})
if err != nil {
	log.Fatal(err)
}
fmt.Printf("Counted %d elements", count)
Output:

Counted 1000 elements
func (*Txn) Set ¶
Set adds a key-value pair to the database.
It will return ErrReadOnlyTxn if update flag was set to false when creating the transaction.
The current transaction keeps a reference to the key and val byte slice arguments. Users must not modify key and val until the end of the transaction.
func (*Txn) SetEntry ¶
SetEntry takes an Entry struct and adds the key-value pair in the struct, along with other metadata, to the database.
The current transaction keeps a reference to the entry passed in argument. Users must not modify the entry until the end of the transaction.
func (*Txn) SetWithDiscard ¶
SetWithDiscard acts like SetWithMeta, but adds a marker to discard earlier versions of the key.
This method is only useful if you have set a higher limit for options.NumVersionsToKeep. The default setting is 1, in which case, this function doesn't add any more benefit than just calling the normal SetWithMeta (or Set) function. If however, you have a higher setting for NumVersionsToKeep (in Dgraph, we set it to infinity), you can use this method to indicate that all the older versions can be discarded and removed during compactions.
The current transaction keeps a reference to the key and val byte slice arguments. Users must not modify key and val until the end of the transaction.
func (*Txn) SetWithMeta ¶
SetWithMeta adds a key-value pair to the database, along with a metadata byte.
This byte is stored alongside the key, and can be used as an aid to interpret the value or store other contextual bits corresponding to the key-value pair.
The current transaction keeps a reference to the key and val byte slice arguments. Users must not modify key and val until the end of the transaction.
func (*Txn) SetWithTTL ¶
SetWithTTL adds a key-value pair to the database, along with a time-to-live (TTL) setting. A key stored with a TTL would automatically expire after the time has elapsed, and be eligible for garbage collection.
The current transaction keeps a reference to the key and val byte slice arguments. Users must not modify key and val until the end of the transaction.
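A sketch, assuming a session token that should expire an hour after being written:

err := db.Update(func(txn *Txn) error {
	// The entry expires, and becomes eligible for GC, one hour from now.
	return txn.SetWithTTL([]byte("session:42"), []byte("token-bytes"), time.Hour)
})
if err != nil {
	log.Fatal(err)
}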