BadgerHold

BadgerHold is a simple querying and indexing layer on top of a Badger instance. The goal is to create a simple, higher-level interface on top of Badger DB that simplifies dealing with Go types and finding data, while still exposing the underlying Badger DB for any customization you wish. By default the encoding used is Gob, so feel free to use the GobEncoder/GobDecoder interfaces for faster serialization. Alternatively, you can use any serialization you want by supplying encode / decode funcs to the Options struct on Open.

Each Go type will be prefixed with its type name, so you can store multiple types in a single Badger database without conflicts.

This project is a rewrite of the BoltHold project on top of the Badger KV database instead of Bolt. For a performance comparison between Bolt and Badger, see https://blog.dgraph.io/post/badger-lmdb-boltdb/. I've written up my own comparison of the two, focusing on characteristics other than performance, here: https://tech.townsourced.com/post/boltdb-vs-badger/.

Indexes

Indexes allow you to skip checking any records that don't meet your index criteria. If you have 1000 records and only 10 of them are in the Division you want to deal with, you don't need to check whether the other 990 records match your query criteria if you create an index on the Division field. The downside of an index is added disk reads and writes on every write operation. For read-heavy datasets, indexes can be very useful.

In every BadgerHold store, there will be a reserved bucket _indexes which will be used to hold indexes that point back to another bucket's Key system. Indexes will be defined by setting the badgerhold:"index" struct tag on a field in a type.

type Person struct {
	Name string
	Division string `badgerhold:"index"`
}

// alternate struct tag if you wish to specify the index name
type Person struct {
	Name string
	Division string `badgerholdIndex:"IdxDivision"`
}

This means that there will be an index created for Division that will contain the set of unique divisions, and the main record keys they refer to.

Optionally, you can implement the Storer interface to specify your own indexes, rather than using the badgerholdIndex struct tag.
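
As a rough sketch (not taken from the library's docs), satisfying Storer for the Person type above might look like the following. It assumes the index function receives the stored record itself, and that fmt and the badgerhold package are imported:

func (p *Person) Type() string { return "Person" }

func (p *Person) Indexes() map[string]badgerhold.Index {
	return map[string]badgerhold.Index{
		"Division": {
			IndexFunc: func(name string, value interface{}) ([]byte, error) {
				// assumption: value is the record being indexed
				person, ok := value.(*Person)
				if !ok {
					return nil, fmt.Errorf("expected *Person, got %T", value)
				}
				return badgerhold.DefaultEncode(person.Division)
			},
		},
	}
}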

Queries

Queries are chainable constructs that filter out any data that doesn't match their criteria. An index will be used if the .Index() chain is called; otherwise BadgerHold won't use any index.

Queries will look like this:

s.Find(&result, badgerhold.Where("FieldName").Eq(value).And("AnotherField").Lt(AnotherValue).Or(badgerhold.Where("FieldName").Eq(anotherValue)))

Fields must be exported, and thus always need to start with an upper-case letter. Available operators include:

  • Equal - Where("field").Eq(value)
  • Not Equal - Where("field").Ne(value)
  • Greater Than - Where("field").Gt(value)
  • Less Than - Where("field").Lt(value)
  • Less than or Equal To - Where("field").Le(value)
  • Greater Than or Equal To - Where("field").Ge(value)
  • In - Where("field").In(val1, val2, val3)
  • IsNil - Where("field").IsNil()
  • Regular Expression - Where("field").RegExp(regexp.MustCompile("ea"))
  • Matches Function - Where("field").MatchFunc(func(ra *RecordAccess) (bool, error)) - see the sketch after this list
  • Skip - Where("field").Eq(value).Skip(10)
  • Limit - Where("field").Eq(value).Limit(10)
  • SortBy - Where("field").Eq(value).SortBy("field1", "field2")
  • Reverse - Where("field").Eq(value).SortBy("field").Reverse()
  • Index - Where("field").Eq(value).Index("indexName")
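
As a sketch of how a MatchFunc criterion might be used (the string prefix check is purely illustrative, and fmt and strings are assumed to be imported):

query := badgerhold.Where("Division").MatchFunc(func(ra *badgerhold.RecordAccess) (bool, error) {
	// ra.Field() gives the current field being queried
	division, ok := ra.Field().(string)
	if !ok {
		return false, fmt.Errorf("expected string, got %T", ra.Field())
	}
	return strings.HasPrefix(division, "Eng"), nil
})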

If you want to run a query's criteria against the Key value, you can use the badgerhold.Key constant:

store.Find(&result, badgerhold.Where(badgerhold.Key).Ne(value))

You can access nested structure fields in queries like this:

type Repo struct {
  Name string
  Contact ContactPerson
}

type ContactPerson struct {
  Name string
}

store.Find(&repo, badgerhold.Where("Contact.Name").Eq("some-name"))

Instead of passing in a specific value to compare against in a query, you can compare against another field in the same struct. Consider the following struct:

type Person struct {
	Name string
	Birth time.Time
	Death time.Time
}

If you wanted to find any invalid records where a Person's death was before their birth, you could do the following:

store.Find(&result, badgerhold.Where("Death").Lt(badgerhold.Field("Birth")))

Queries can be used in more than just selecting data. You can delete or update data that matches a query.

Using the example above, if you wanted to remove all of the invalid records where Death < Birth:

// you must pass in a sample type, so BadgerHold knows which bucket to use and what indexes to update
store.DeleteMatching(&Person{}, badgerhold.Where("Death").Lt(badgerhold.Field("Birth")))

Or if you wanted to update all the invalid records to flip/flop the Birth and Death dates:


store.UpdateMatching(&Person{}, badgerhold.Where("Death").Lt(badgerhold.Field("Birth")), func(record interface{}) error {
	update, ok := record.(*Person) // record will always be a pointer
	if !ok {
		return fmt.Errorf("Record isn't the correct type!  Wanted Person, got %T", record)
	}

	update.Birth, update.Death = update.Death, update.Birth

	return nil
})

Keys in Structs

A common scenario is to store the badgerhold Key in the same struct that is stored in the badgerDB value. You can automatically populate a record's Key in a struct by using the badgerhold:"key" struct tag when running Find queries.

Another common scenario is to insert data with an auto-incrementing key assigned by the database. When performing an Insert, if the type of the key matches the type of the badgerhold:"key" tagged field, the data is passed in by reference, and the field's current value is the zero value for that type, then the field is set to the key before insertion.

type Employee struct {
	ID uint64 `badgerhold:"key"`
	FirstName string
	LastName string
	Division string
	Hired time.Time
}

// old struct tag, currently still supported but may be deprecated in the future
type Employee struct {
	ID uint64 `badgerholdKey`
	FirstName string
	LastName string
	Division string
	Hired time.Time
}

Badgerhold assumes only one such struct tag exists. If a value already exists in the key field, it will be overwritten.

If you want to insert an auto-incrementing Key you can pass the badgerhold.NextSequence() func as the Key value.

err := store.Insert(badgerhold.NextSequence(), data)

The key value will be a uint64.

If you want to know the value of the auto-incrementing Key that was generated using badgerhold.NextSequence(), then make sure to pass a pointer to your data and that the badgerholdKey tagged field is of type uint64.

err := store.Insert(badgerhold.NextSequence(), &data)
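
For example, a small sketch of that flow using the Employee type above (the field values are illustrative):

employee := &Employee{FirstName: "Jane", LastName: "Doe", Division: "Sales", Hired: time.Now()}

// pass a pointer so the generated key can be written back into the ID field
err := store.Insert(badgerhold.NextSequence(), employee)
if err != nil {
	// handle error
}

fmt.Println(employee.ID) // now holds the auto-generated uint64 key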

Unique Constraints

You can create a unique constraint on a given field by using the badgerhold:"unique" struct tag:

type User struct {
  Name string
  Email string `badgerhold:"unique"` // this field will be indexed with a unique constraint
}

The example above will only allow one record of type User to exist with a given Email field. Any insert, update or upsert that would violate that constraint will fail and return the badgerhold.ErrUniqueExists error.
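
A sketch of checking for that error on insert (the User values are illustrative):

err := store.Insert(badgerhold.NextSequence(), &User{Name: "Jane", Email: "jane@example.com"})
if err == badgerhold.ErrUniqueExists {
	// another User with this Email already exists
}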

Aggregate Queries

Aggregate queries are queries that group results by a field. For example, let's say you had a collection of employees:

type Employee struct {
	FirstName string
	LastName string
	Division string
	Hired time.Time
}

And you wanted to find the most senior (first hired) employee in each division:

result, err := store.FindAggregate(&Employee{}, nil, "Division") //nil query matches against all records

This will return a slice of AggregateResult from which you can extract your groups and find Min, Max, Avg, Count, etc.

for i := range result {
	var division string
	employee := &Employee{}

	result[i].Group(&division)
	result[i].Min("Hired", employee)

	fmt.Printf("The most senior employee in the %s division is %s.\n",
		division, employee.FirstName + " " + employee.LastName)
}

Aggregate queries become especially powerful when combined with the sub-querying capability of MatchFunc.

Many more examples of queries can be found in the find_test.go file in this repository.

Comparing

Just like with Go, types must be the same in order to be compared with each other. You cannot compare an int to an int32. The built-in Go comparable types (ints, floats, strings, etc.) will work as expected. Other types from the standard library can also be compared, such as time.Time, big.Rat, big.Int, and big.Float. If there are other standard library types that I missed, let me know.

You can compare any custom type either by using the MatchFunc criteria, or by satisfying the Comparer interface with your type by adding the Compare method: Compare(other interface{}) (int, error).

If a type doesn't have a predefined comparer and doesn't satisfy the Comparer interface, then the type's value is converted to a string and compared lexicographically.
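
As a sketch, a custom type could satisfy Comparer like this (the Currency type is purely illustrative; per the interface docs below, the concrete type, not a pointer, is passed in, and fmt is assumed to be imported):

type Currency struct {
	Cents int64
}

// Compare returns 0 if equal, -1 if c is less than other, and +1 if greater
func (c Currency) Compare(other interface{}) (int, error) {
	o, ok := other.(Currency)
	if !ok {
		return 0, fmt.Errorf("cannot compare Currency with %T", other)
	}
	switch {
	case c.Cents < o.Cents:
		return -1, nil
	case c.Cents > o.Cents:
		return 1, nil
	}
	return 0, nil
}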

Behavior Changes

Since BadgerHold is a higher level interface than Badger DB, there are some added helpers. Instead of Put, you have the options of:

  • Insert - Fails if key already exists.
  • Update - Fails if key doesn't exist ErrNotFound.
  • Upsert - If key doesn't exist, it inserts the data, otherwise it updates the existing record.

When getting data, instead of returning nil when a value doesn't exist, BadgerHold returns badgerhold.ErrNotFound. Similarly, when deleting data, instead of silently continuing when the value to delete isn't found, BadgerHold returns badgerhold.ErrNotFound. The exception is the query-based functions such as Find (which returns an empty slice), DeleteMatching, and UpdateMatching, where no error is returned when nothing matches.
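
For example, a quick sketch of checking for a missing key on Get, using the Employee type from above (the key value is illustrative):

var employee Employee
err := store.Get(uint64(42), &employee)
if err == badgerhold.ErrNotFound {
	// no Employee record exists with this key
}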

When should I use BadgerHold?

BadgerHold will be useful in the same scenarios where BadgerDB is useful, with the added benefit of being able to retire some of your data filtering code and possibly improved performance.

You can also use it instead of SQLite for many scenarios. BadgerHold's main benefit over SQLite is its simplicity when working with Go types. There is no need for an ORM layer to translate records to types; simply put types in and get types out. You also don't have to deal with database initialization. Usually with SQLite you'll need several scripts to create the database, create the tables you expect, and create any indexes. With BadgerHold you simply open a new file and put any type of data you want in it.

options := badgerhold.DefaultOptions
options.Dir = "data"
options.ValueDir = "data"

store, err := badgerhold.Open(options)
if err != nil {
	// handle error
	log.Fatal(err)
}
defer store.Close()

err = store.Insert("key", &Item{
	Name:    "Test Name",
	Created: time.Now(),
})

That's it!

Badgerhold currently has over 80% coverage in unit tests, and it's backed by BadgerDB, which is a very solid and well-built piece of software, so I encourage you to give it a try.

If you end up using BadgerHold, I'd love to hear about it.

Documentation

Overview

Package badgerhold is an indexing and querying layer on top of a Badger DB. The goal is to allow easy, persistent storage and retrieval of Go types. BadgerDB is an embedded key-value store, and badgerhold serves a similar use case, but with a higher-level interface for common uses of Badger.

Go Types

BadgerHold deals directly with Go Types. When inserting data, you pass in your structure directly. When querying data you pass in a pointer to a slice of the type you want to return. By default Gob encoding is used. You can put multiple different types into the same DB file and they (and their indexes) will be stored separately.

err := store.Insert(1234, Item{
	Name:    "Test Name",
	Created: time.Now(),
})

var result []Item

err := store.Find(&result, query)

Indexes

BadgerHold will automatically create an index for any struct field tagged with "badgerholdIndex":

type Item struct {
	ID       int
	Name     string
	Category string `badgerholdIndex:"Category"`
	Created  time.Time
}

The first field specified in the query will be used as the index (if one exists).

Queries are chained together criteria that applies to a set of fields:

badgerhold.Where("Name").Eq("John Doe").And("DOB").Lt(time.Now())

Example

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"github.com/dgraph-io/badger/v2"
)

type Item struct {
	ID       int
	Category string `badgerholdIndex:"Category"`
	Created  time.Time
}

// Note: this example is rendered from within the badgerhold package, so
// DefaultOptions, Open, Where, and TxInsert appear without a package qualifier.
func main() {
	data := []Item{
		{
			ID:       0,
			Category: "blue",
			Created:  time.Now().Add(-4 * time.Hour),
		},
		{
			ID:       1,
			Category: "red",
			Created:  time.Now().Add(-3 * time.Hour),
		},
		{
			ID:       2,
			Category: "blue",
			Created:  time.Now().Add(-2 * time.Hour),
		},
		{
			ID:       3,
			Category: "blue",
			Created:  time.Now().Add(-20 * time.Minute),
		},
	}

	dir := tempdir() // tempdir is a package test helper that creates a temporary directory
	defer os.RemoveAll(dir)

	options := DefaultOptions
	options.Dir = dir
	options.ValueDir = dir
	store, err := Open(options)
	if err != nil {
		// handle error
		log.Fatal(err)
	}
	defer store.Close()

	// insert the data in one transaction

	err = store.Badger().Update(func(tx *badger.Txn) error {
		for i := range data {
			err := store.TxInsert(tx, data[i].ID, data[i])
			if err != nil {
				return err
			}
		}
		return nil
	})

	if err != nil {
		// handle error
		log.Fatal(err)
	}

	// Find all items in the blue category that have been created in the past hour
	var result []Item

	err = store.Find(&result, Where("Category").Eq("blue").And("Created").Ge(time.Now().Add(-1*time.Hour)))

	if err != nil {
		// handle error
		log.Fatal(err)
	}

	fmt.Println(result[0].ID)
}

Output:

3

Constants

const (
	// BadgerHoldIndexTag is the struct tag used to define a field as indexable for a badgerhold
	BadgerHoldIndexTag = "badgerholdIndex"

	// BadgerholdKeyTag is the struct tag used to define a field as a key for use in a Find query
	BadgerholdKeyTag = "badgerholdKey"
)

const Key = ""

Key is shorthand for specifying a query to run against the Key in a badgerhold; it is simply the empty string "". For example: Where(badgerhold.Key).Eq("testkey")

Variables

var DefaultOptions = Options{
	Options:          badger.DefaultOptions(""),
	Encoder:          DefaultEncode,
	Decoder:          DefaultDecode,
	SequenceBandwith: 100,
}

DefaultOptions is a default set of options for opening a BadgerHold database. It includes Badger's own default options.

var ErrKeyExists = errors.New("This Key already exists in badgerhold for this type")

ErrKeyExists is the error returned when data is being Inserted for a Key that already exists

var ErrNotFound = errors.New("No data found for this key")

ErrNotFound is returned when no data is found for the given key

var ErrUniqueExists = errors.New("This value cannot be written due to the unique constraint on the field")

ErrUniqueExists is the error thrown when data is being inserted for a unique constraint value that already exists

Functions

func DefaultDecode

func DefaultDecode(data []byte, value interface{}) error

DefaultDecode is the default decoding func for badgerhold (Gob)

func DefaultEncode

func DefaultEncode(value interface{}) ([]byte, error)

DefaultEncode is the default encoding func for badgerhold (Gob)

func NextSequence

func NextSequence() interface{}

NextSequence is used to create a sequential key for inserts. It inserts a uint64 as the key: store.Insert(badgerhold.NextSequence(), data)

Types

type AggregateResult

type AggregateResult struct {
	// contains filtered or unexported fields
}

AggregateResult allows you to access the results of an aggregate query

func (*AggregateResult) Avg

func (a *AggregateResult) Avg(field string) float64

Avg returns the average float value of the aggregate grouping. It panics if the field cannot be converted to a float64.

func (*AggregateResult) Count

func (a *AggregateResult) Count() int

Count returns the number of records in the aggregate grouping

func (*AggregateResult) Group

func (a *AggregateResult) Group(result ...interface{})

Group returns the field grouped by in the query

func (*AggregateResult) Max

func (a *AggregateResult) Max(field string, result interface{})

Max returns the maximum value of the aggregate grouping, using the Comparer interface

func (*AggregateResult) Min

func (a *AggregateResult) Min(field string, result interface{})

Min returns the minimum value of the aggregate grouping, using the Comparer interface

func (*AggregateResult) Reduction

func (a *AggregateResult) Reduction(result interface{})

Reduction is the collection of records that are part of the AggregateResult Group

func (*AggregateResult) Sort

func (a *AggregateResult) Sort(field string)

Sort sorts the aggregate reduction by the passed in field in ascending order. Sort is called automatically by calls to Min / Max to get the min and max values.

func (*AggregateResult) Sum

func (a *AggregateResult) Sum(field string) float64

Sum returns the sum of the aggregate grouping. It panics if the field cannot be converted to a float64.

type Comparer

type Comparer interface {
	Compare(other interface{}) (int, error)
}

Comparer compares a type against the encoded value in the store. The result should be 0 if current == other, -1 if current < other, and +1 if current > other. If a field in a struct doesn't specify a comparer, then the default comparison is used (convert to string and compare). This interface is already handled for standard Go types as well as more complex ones such as those in time and big. An error is returned if the type cannot be compared. The concrete type will always be passed in, not a pointer.

type Criterion

type Criterion struct {
	// contains filtered or unexported fields
}

Criterion is an operator and a value that a given field needs to match on

func Where

func Where(field string) *Criterion

Where starts a query for specifying the criteria that an object in the badgerhold needs to match to be returned in a Find result

Query API Example

s.Find(badgerhold.Where("FieldName").Eq(value).And("AnotherField").Lt(AnotherValue).
	Or(badgerhold.Where("FieldName").Eq(anotherValue)))

Since Gob only encodes exported fields, this will panic if you pass in a field with a lower-case first letter

func (*Criterion) Eq

func (c *Criterion) Eq(value interface{}) *Query

Eq tests if the current field is Equal to the passed in value

func (*Criterion) Ge

func (c *Criterion) Ge(value interface{}) *Query

Ge tests if the current field is Greater Than or Equal To the passed in value

func (*Criterion) Gt

func (c *Criterion) Gt(value interface{}) *Query

Gt tests if the current field is Greater Than the passed in value

func (*Criterion) HasPrefix

func (c *Criterion) HasPrefix(prefix string) *Query

HasPrefix will test if a field starts with the provided string

func (*Criterion) HasSuffix

func (c *Criterion) HasSuffix(suffix string) *Query

HasSuffix will test if a field ends with the provided string

func (*Criterion) In

func (c *Criterion) In(values ...interface{}) *Query

In tests if the current field is a member of the slice of values passed in

func (*Criterion) IsNil

func (c *Criterion) IsNil() *Query

IsNil will test if a field is equal to nil

func (*Criterion) Le

func (c *Criterion) Le(value interface{}) *Query

Le tests if the current field is Less Than or Equal To the passed in value

func (*Criterion) Lt

func (c *Criterion) Lt(value interface{}) *Query

Lt tests if the current field is Less Than the passed in value

func (*Criterion) MatchFunc

func (c *Criterion) MatchFunc(match MatchFunc) *Query

MatchFunc will test if a field matches the passed in function

func (*Criterion) Ne

func (c *Criterion) Ne(value interface{}) *Query

Ne tests if the current field is Not Equal to the passed in value

func (*Criterion) RegExp

func (c *Criterion) RegExp(expression *regexp.Regexp) *Query

RegExp will test if a field matches against the regular expression. The field value will be converted to a string (%s) before testing.

func (*Criterion) String

func (c *Criterion) String() string

type DecodeFunc

type DecodeFunc func(data []byte, value interface{}) error

DecodeFunc is a function for decoding a value from bytes

type EncodeFunc

type EncodeFunc func(value interface{}) ([]byte, error)

EncodeFunc is a function for encoding a value into bytes

type ErrTypeMismatch

type ErrTypeMismatch struct {
	Value interface{}
	Other interface{}
}

ErrTypeMismatch is the error thrown when two types cannot be compared

func (*ErrTypeMismatch) Error

func (e *ErrTypeMismatch) Error() string

type Field

type Field string

Field allows for referencing a field in the structure being compared

type Index

type Index struct {
	IndexFunc func(name string, value interface{}) ([]byte, error)
	Unique    bool
}

Index holds a function that returns the indexable, encoded bytes of the passed in value, and whether the index enforces a unique constraint

type MatchFunc

type MatchFunc func(ra *RecordAccess) (bool, error)

MatchFunc is a function used to test an arbitrary matching value in a query

type Options

type Options struct {
	Encoder          EncodeFunc
	Decoder          DecodeFunc
	SequenceBandwith uint64
	badger.Options
}

Options allows you to set different options from the defaults, for example the encoding and decoding funcs, which default to Gob.
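
For example, a sketch of swapping in JSON encoding from the standard library instead of the default Gob (assuming encoding/json is imported):

options := badgerhold.DefaultOptions
options.Encoder = func(value interface{}) ([]byte, error) {
	return json.Marshal(value)
}
options.Decoder = func(data []byte, value interface{}) error {
	return json.Unmarshal(data, value)
}

store, err := badgerhold.Open(options)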

type Query

type Query struct {
	// contains filtered or unexported fields
}

Query is a chained collection of criteria that an object in the badgerhold needs to match to be returned. An empty query matches against all records.

func (*Query) And

func (q *Query) And(field string) *Criterion

And creates another set of criteria that needs to apply to a query

func (*Query) Index

func (q *Query) Index(indexName string) *Query

Index specifies the index to use when running this query

func (*Query) IsEmpty

func (q *Query) IsEmpty() bool

IsEmpty returns true if the query is an empty query. An empty query matches against everything.

func (*Query) Limit

func (q *Query) Limit(amount int) *Query

Limit sets the maximum number of records that can be returned by a query. Setting Limit multiple times, or to a negative value, will panic.

func (*Query) Or

func (q *Query) Or(query *Query) *Query

Or creates another separate query that gets unioned with any other results in the query. Or will panic if the query passed in contains a limit or skip value, as they are only allowed on top level queries.

func (*Query) Reverse

func (q *Query) Reverse() *Query

Reverse will reverse the current result set; useful with SortBy

func (*Query) Skip

func (q *Query) Skip(amount int) *Query

Skip skips the number of records that match all the rest of the query criteria, and does not return them in the result set. Setting Skip multiple times, or to a negative value, will panic.

func (*Query) SortBy

func (q *Query) SortBy(fields ...string) *Query

SortBy sorts the results by the given field names. Multiple fields can be used.

func (*Query) String

func (q *Query) String() string

type RecordAccess

type RecordAccess struct {
	// contains filtered or unexported fields
}

RecordAccess allows access to the current record or field, and allows running a subquery within a MatchFunc

func (*RecordAccess) Field

func (r *RecordAccess) Field() interface{}

Field is the current field being queried

func (*RecordAccess) Record

func (r *RecordAccess) Record() interface{}

Record is the complete record for a given row in badgerhold

func (*RecordAccess) SubAggregateQuery

func (r *RecordAccess) SubAggregateQuery(query *Query, groupBy ...string) ([]*AggregateResult, error)

SubAggregateQuery allows you to run another aggregate query in the same transaction for each record in a parent query

func (*RecordAccess) SubQuery

func (r *RecordAccess) SubQuery(result interface{}, query *Query) error

SubQuery allows you to run another query in the same transaction for each record in a parent query

type Store

type Store struct {
	// contains filtered or unexported fields
}

Store is a badgerhold wrapper around a badger DB

func Open

func Open(options Options) (*Store, error)

Open opens or creates a badgerhold file.

func (*Store) Badger

func (s *Store) Badger() *badger.DB

Badger returns the underlying Badger DB the badgerhold is based on

func (*Store) Close

func (s *Store) Close() error

Close closes the badger db

func (*Store) Delete

func (s *Store) Delete(key, dataType interface{}) error

Delete deletes a record from the badgerhold. dataType just needs to be an example of the type stored so that the proper bucket and indexes are updated.

func (*Store) DeleteMatching

func (s *Store) DeleteMatching(dataType interface{}, query *Query) error

DeleteMatching deletes all of the records that match the passed in query

func (*Store) Find

func (s *Store) Find(result interface{}, query *Query) error

Find retrieves a set of values from the badgerhold that matches the passed in query. result must be a pointer to a slice. The result of the query will be appended to the passed in result slice, rather than the passed in slice being emptied.

func (*Store) FindAggregate

func (s *Store) FindAggregate(dataType interface{}, query *Query, groupBy ...string) ([]*AggregateResult, error)

FindAggregate returns an aggregate grouping for the passed in query. groupBy is optional.

func (*Store) Get

func (s *Store) Get(key, result interface{}) error

Get retrieves a value from badgerhold and puts it into result. Result must be a pointer

func (*Store) Insert

func (s *Store) Insert(key, data interface{}) error

Insert inserts the passed in data into the badgerhold.

If the key already exists in the badgerhold, then an ErrKeyExists is returned. If the data struct has a field tagged as `badgerholdKey` and it is the same type as the Insert key, AND the data struct is passed by reference, AND the key field is currently set to the zero-value for that type, then that field will be set to the value of the insert key.

To use this with badgerhold.NextSequence() use a type of `uint64` for the key field.

func (*Store) TxDelete

func (s *Store) TxDelete(tx *badger.Txn, key, dataType interface{}) error

TxDelete is the same as Delete except it allows you to specify your own transaction

func (*Store) TxDeleteMatching

func (s *Store) TxDeleteMatching(tx *badger.Txn, dataType interface{}, query *Query) error

TxDeleteMatching does the same as DeleteMatching, but allows you to specify your own transaction

func (*Store) TxFind

func (s *Store) TxFind(tx *badger.Txn, result interface{}, query *Query) error

TxFind allows you to pass in your own badger transaction to retrieve a set of values from the badgerhold

func (*Store) TxFindAggregate

func (s *Store) TxFindAggregate(tx *badger.Txn, dataType interface{}, query *Query,
	groupBy ...string) ([]*AggregateResult, error)

TxFindAggregate is the same as FindAggregate, but you specify your own transaction. groupBy is optional.

func (*Store) TxGet

func (s *Store) TxGet(tx *badger.Txn, key, result interface{}) error

TxGet allows you to pass in your own badger transaction to retrieve a value from the badgerhold and puts it into result

func (*Store) TxInsert

func (s *Store) TxInsert(tx *badger.Txn, key, data interface{}) error

TxInsert is the same as Insert except it allows you to specify your own transaction

func (*Store) TxUpdate

func (s *Store) TxUpdate(tx *badger.Txn, key interface{}, data interface{}) error

TxUpdate is the same as Update except it allows you to specify your own transaction

func (*Store) TxUpdateMatching

func (s *Store) TxUpdateMatching(tx *badger.Txn, dataType interface{}, query *Query,
	update func(record interface{}) error) error

TxUpdateMatching does the same as UpdateMatching, but allows you to specify your own transaction

func (*Store) TxUpsert

func (s *Store) TxUpsert(tx *badger.Txn, key interface{}, data interface{}) error

TxUpsert is the same as Upsert except it allows you to specify your own transaction

func (*Store) Update

func (s *Store) Update(key interface{}, data interface{}) error

Update updates an existing record in the badgerhold. If the Key doesn't already exist in the store, it fails with ErrNotFound.

func (*Store) UpdateMatching

func (s *Store) UpdateMatching(dataType interface{}, query *Query, update func(record interface{}) error) error

UpdateMatching runs the update function for every record that matches the passed in query. Note that the type of record in the update func always has to be a pointer.

func (*Store) Upsert

func (s *Store) Upsert(key interface{}, data interface{}) error

Upsert inserts the record into the badgerhold if it doesn't exist. If it does already exist, then it updates the existing record

type Storer

type Storer interface {
	Type() string              // used as the badgerdb index prefix
	Indexes() map[string]Index //[indexname]indexFunc
}

Storer is the Interface to implement to skip reflect calls on all data passed into the badgerhold
