Documentation ¶
Overview ¶
Package bstore is an in-process database with serializable transactions supporting referential/unique/nonzero constraints, (multikey) indices, automatic schema management based on Go types and struct tags, and a query API.
Bstore is a small, pure Go library that still provides most of the common data consistency requirements for modest database use cases. Bstore aims to make basic use of cgo-based libraries, such as sqlite, unnecessary.
Bstore implements autoincrementing primary keys, indices, default values, enforcement of nonzero, unique and referential integrity constraints, automatic schema updates and a query API for combining filters/sorting/limits. Queries are planned and executed using indices for speed where possible. Bstore works with Go types: you typically don't have to write any (un)marshal code for your types. Bstore is not an ORM, it plans and executes queries itself.
Field types ¶
Struct field types currently supported for storing, including pointers to these types, but not pointers to pointers:
- int (as int32), int8, int16, int32, int64
- uint (as uint32), uint8, uint16, uint32, uint64
- bool, float32, float64, string, []byte
- Maps, with keys and values of any supported type, except keys with pointer types.
- Slices and arrays, with elements of any supported type.
- time.Time
- Types that implement encoding.BinaryMarshaler and encoding.BinaryUnmarshaler, useful for struct types with state in private fields. Do not change the MarshalBinary/UnmarshalBinary encoding in an incompatible way without a data migration.
- Structs, with fields of any supported type.
Note: int and uint are stored as int32 and uint32, for compatibility of database files between 32bit and 64bit systems. Where possible, use explicit (u)int32 or (u)int64 types.
Cyclic types are supported, but cyclic data is not. Attempting to store cyclic data will likely result in a stack overflow panic.
Anonymous (embedded) struct fields are handled by treating each of the embedded struct's fields as the containing type's own fields. The named embedded type is not part of the type schema, and with a Query it can currently only be used with UpdateField and UpdateFields, not for filtering.
Bstore embraces the use of Go zero values. Use zero values, possibly pointers, where you would use NULL values in SQL.
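As an illustration, a sketch of a hypothetical struct combining several of the supported field types; all names are made up and "time" is assumed to be imported:

// Token holds private state and is stored through its
// MarshalBinary/UnmarshalBinary methods.
type Token struct {
	secret string
}

func (t Token) MarshalBinary() ([]byte, error)    { return []byte(t.secret), nil }
func (t *Token) UnmarshalBinary(buf []byte) error { t.secret = string(buf); return nil }

// Dimensions is a plain struct, stored as a nested value.
type Dimensions struct {
	Width  int32
	Height int32
}

type Item struct {
	ID       int64            // Primary key, assigned from a sequence when inserted as zero.
	Name     string
	Price    float64
	Tags     []string         // Slice of a supported element type.
	Attrs    map[string]int64 // Map with supported key and value types.
	Created  time.Time
	Modified *time.Time // Pointer: use nil where SQL would use NULL.
	Size     Dimensions // Nested struct with supported field types.
	Auth     Token      // Stored via MarshalBinary/UnmarshalBinary.
}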
Struct tags ¶
The typical Go struct can be stored in the database. The first field of a struct type is its primary key and must always be unique; for an integer-typed primary key, inserting a zero value automatically assigns the next sequence number by default. Additional behaviour can be configured through the "bstore" struct tag. Its values are comma-separated. Typically one word, but some take additional space-separated parameters (a combined sketch follows the list):
- "-" ignores the field entirely, not stored.
- "name <fieldname>", use "fieldname" instead of the Go type field name.
- "nonzero", enforces that field values are not the zero value.
- "noauto", only valid for integer types, and only for the primary key. By default, an integer-typed primary key will automatically get a next value assigned on insert when it is 0. With noauto inserting a 0 value results in an error. For primary keys of other types inserting the zero value always results in an error.
- "index" or "index <field1>+<field2>+<...> [<name>]", adds an index. In the first form, the index is on the field on which the tag is specified, and the index name is the same as the field name. In the second form multiple fields can be specified, and an optional name. The first field must be the field on which the tag is specified. The field names are +-separated. The default name for the second form is the same +-separated string but can be set explicitly with the second parameter. An index can only be set for basic integer types, bools, time and strings. A field of slice type can also have an index (but not a unique index, and only one slice field per index), allowing fast lookup of any single value in the slice with Query.FilterIn. Indices are automatically (re)created when registering a type. Fields with a pointer type cannot have an index. String values used in an index cannot contain a \0.
- "unique" or "unique <field1>+<field2>+<...> [<name>]", adds an index as with "index" and also enforces a unique constraint. For time.Time the timezone is ignored for the uniqueness check.
- "ref <type>", enforces that the value exists as primary key for "type". Field types must match exactly, e.g. you cannot reference an int with an int64. An index is automatically created and maintained for fields with a foreign key, for efficiently checking that removed records in the referenced type are not in use. If the field has the zero value, the reference is not checked. If you require a valid reference, add "nonzero".
- "default <value>", replaces a zero value with the specified value on record insert. Special value "now" is recognized for time.Time as the current time. Times are parsed as time.RFC3339 otherwise. Supported types: bool ("true"/"false"), integers, floats, strings. Value is not quoted and no escaping of special characters, like the comma that separates struct tag words, is possible. Defaults are also replaced on fields in nested structs, slices and arrays, but not in maps.
- "typename <name>", override name of the type. The name of the Go type is used by default. Can only be present on the first field (primary key). Useful for doing schema updates.
Schema updates ¶
Before using a Go type, you must register it for use with the open database by passing a (possibly zero) value of that type to the Open or Register functions. For each type, a type definition is stored in the database. If a type has an updated definition since the previous database open, a new type definition is added to the database automatically and any required modifications are made and checked: Indexes (re)created, fields added/removed, new nonzero/unique/reference constraints validated.
As a special case, you can change field types between pointer and non-pointer types, with one exception: changing from pointer to non-pointer is not allowed if the type has a field that must be nonzero. The on-disk encoding is not changed: nil pointers turn into zero values, and zero values into nil pointers. Also see the section Limitations about pointer types.
Because named embedded structs are not part of the type definition, you can wrap/unwrap fields into an embedded/anonymous struct field. No new type definition is created.
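For example, switching between these two hypothetical definitions of User requires no new type definition, because the embedded Contact fields are stored as if they were declared directly on User:

// Original definition with flat fields:
//
//	type User struct {
//		ID      int64
//		Address string
//		Zip     string
//	}

// Equivalent definition with the same schema: Address and Zip wrapped in a
// named embedded struct. The fields are still stored as "Address" and "Zip".
type Contact struct {
	Address string
	Zip     string
}

type User struct {
	ID int64
	Contact
}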
Some schema conversions are not allowed, in some cases due to architectural limitations, in others because the constraint checks haven't been implemented yet, or because the parsing code does not yet know how to parse the old on-disk values into the updated Go types. If you need a conversion that is not supported, you will have to write a manual conversion and keep track of whether it has been executed.
Changes that are allowed:
- From smaller to larger integer types (same signedness).
- Removal of "noauto" on primary keys (always integer types). This updates the "next sequence" counter automatically to continue after the current maximum value.
- Adding/removing/modifying an index, including a unique index. When a unique index is added, the current records are verified to be unique.
- Adding/removing a reference. When a reference is added, the current records are verified to be valid references.
- Adding/removing a nonzero constraint. Existing records are verified.
Conversions that are not currently allowed, but may be in the future:
- Signedness of integer types. With a one-time check that old values fit in the new type, this could be allowed in the future.
- Conversions between basic types: strings, []byte, integers, floats, boolean. Checks would have to be added for some of these conversions. For example, from string to integer: the on-disk string values would have to be valid integers.
Types of primary keys cannot be changed, not even from one integer type to a wider integer type of the same signedness.
Bolt and storage ¶
Bolt is used as the underlying storage through the bbolt library. Bolt stores key/values in a single file, allowing multiple/nested buckets (namespaces) in a B+tree, and provides ACID serializable transactions. A single write transaction can be active at a time, along with one or more read-only transactions. Do not start a blocking read-only transaction in a goroutine while holding a writable transaction, or vice versa: this can cause deadlock.
Bolt returns Go values that are memory mapped to the database file. This means Bolt/bstore database files cannot be transferred between machines with different endianness. Bolt uses explicit widths for its types, so files can be transferred between 32bit and 64bit machines of same endianness. While Bolt returns read-only memory mapped byte slices, bstore only ever returns parsed/copied regular writable Go values that require no special programmer attention.
For each Go type opened for a database file, bstore ensures a Bolt bucket exists with two subbuckets:
- "types", with type descriptions of the stored records. Each time the database file is opened with a modified Go type (add/removed/modified field/type/bstore struct tag), a new type description is automatically added, identified by sequence number.
- "records", containing all data, with the type's primary key as Bolt key, and the encoded remaining fields as value. The encoding starts with a reference to a type description.
For each index, another subbucket is created, its name starting with "index.". The stored keys consist of the index fields followed by the primary key, and an empty value. See format.md for details.
Limitations ¶
Bstore has limitations, not all of which are architectural; some may be fixed in the future.
Bstore does not implement the equivalent of SQL joins, aggregates, and many other concepts.
Filtering/comparing/sorting on pointer fields is not allowed. Pointer fields cannot have a (unique) index. Use non-pointer values with the zero value as the equivalent of a nil pointer.
The first field of a stored struct is always the primary key. Autoincrement is only available for the primary key.
Bolt opens the database file with a lock. Only one process can have the database open at a time.
An index stored on disk in Bolt can consume more disk space than other database systems would: For each record, the indexed field(s) and primary key are stored in full. Because bstore uses Bolt as key/value store, and doesn't manage disk pages itself, it cannot as efficiently pack an index page with many records.
Interface values cannot be stored. This would require storing the type along with the value. Instead, use a type that is a BinaryMarshaler.
Values of builtin type "complex" cannot be stored.
Bstore inherits limitations from Bolt, see https://pkg.go.dev/go.etcd.io/bbolt#readme-caveats-amp-limitations.
Comparison with sqlite ¶
Sqlite is a great library, but Go applications that require cgo are hard to cross-compile. With bstore, cross-compiling to most Go-supported platforms stays trivial (though not plan9, unfortunately). Although bstore is much more limited than sqlite in many aspects, it also offers some advantages. Some points of comparison:
- Cross-compilation and reproducibility: Trivial with bstore due to pure Go, much harder with sqlite because of cgo.
- Code complexity: low with bstore (7k lines including comments/docs), high with sqlite.
- Query language: mostly-type-checked function calls in bstore, free-form query strings only checked at runtime with sqlite.
- Functionality: very limited with bstore, much more full-featured with sqlite.
- Schema management: mostly automatic based on Go type definitions in bstore, manual with ALTER statements in sqlite.
- Types and packing/parsing: automatic/transparent in bstore based on Go types (including maps, slices, structs and custom MarshalBinary encoding), versus manual scanning and parameter passing with sqlite with limited set of SQL types.
- Performance: low to good performance with bstore, high performance with sqlite.
- Database files: single file with bstore, several files with sqlite (due to WAL or journal files).
- Test coverage: decent test coverage but limited real-world use for bstore, versus extremely thorough testing and enormous real-world use for sqlite.
Example ¶
package main

import (
	"context"
	"errors"
	"log"
	"os"
	"time"

	"github.com/mjl-/bstore"
)

func main() {
	// Msg and Mailbox are the types we are going to store.
	type Msg struct {
		// First field is always primary key (PK) and must be non-zero.
		// Integer types get their IDs assigned from a sequence when
		// inserted with the zero value.
		ID uint64

		// MailboxID must be nonzero, it references the PK of Mailbox
		// (enforced), a combination MailboxID+UID must be unique
		// (enforced) and we create an additional index on
		// MailboxID+Received for faster queries.
		MailboxID uint32 `bstore:"nonzero,ref Mailbox,unique MailboxID+UID,index MailboxID+Received"`

		// UID must be nonzero, for IMAP.
		UID uint32 `bstore:"nonzero"`

		// Received is nonzero too, and also gets its own index.
		Received time.Time `bstore:"nonzero,index"`

		From string
		To   string
		Seen bool
		Data []byte
		// ... an actual mailbox message would have more fields...
	}

	type Mailbox struct {
		ID   uint32
		Name string `bstore:"unique"`
	}

	// For tests.
	os.Mkdir("testdata", 0700)
	const path = "testdata/mail.db"
	os.Remove(path)

	ctx := context.Background() // Possibly replace with a request context.

	// Open or create database mail.db, and register types Msg and Mailbox.
	// Bstore automatically creates (unique) indices.
	// If you had previously opened this database with types of the same
	// name but not the exact field types, bstore checks the types are
	// compatible and makes any changes necessary, such as
	// creating/replacing indices, verifying new constraints (unique,
	// nonzero, references).
	db, err := bstore.Open(ctx, path, nil, Msg{}, Mailbox{})
	if err != nil {
		log.Fatalln("open:", err)
	}
	defer db.Close()

	// Insert mailboxes. Because the primary key is zero, the next
	// autoincrement/sequence is assigned to the ID field.
	var (
		inbox   = Mailbox{Name: "INBOX"}
		sent    = Mailbox{Name: "Sent"}
		archive = Mailbox{Name: "Archive"}
		trash   = Mailbox{Name: "Trash"}
	)
	if err := db.Insert(ctx, &inbox, &sent, &archive, &trash); err != nil {
		log.Fatalln("insert mailbox:", err)
	}

	// Insert messages, IDs are automatically assigned.
	now := time.Now()
	var (
		msg0 = Msg{MailboxID: inbox.ID, UID: 1, Received: now.Add(-time.Hour)}
		msg1 = Msg{MailboxID: inbox.ID, UID: 2, Received: now.Add(-time.Second), Seen: true}
		msg2 = Msg{MailboxID: inbox.ID, UID: 3, Received: now}
		msg3 = Msg{MailboxID: inbox.ID, UID: 4, Received: now.Add(-time.Minute)}
		msg4 = Msg{MailboxID: trash.ID, UID: 1, Received: now}
		msg5 = Msg{MailboxID: trash.ID, UID: 2, Received: now}
		msg6 = Msg{MailboxID: archive.ID, UID: 1, Received: now}
	)
	if err := db.Insert(ctx, &msg0, &msg1, &msg2, &msg3, &msg4, &msg5, &msg6); err != nil {
		log.Fatalln("insert messages:", err)
	}

	// Get a single record by ID using Get.
	nmsg0 := Msg{ID: msg0.ID}
	if err := db.Get(ctx, &nmsg0); err != nil {
		log.Fatalln("get:", err)
	}

	// ErrAbsent is returned if the record does not exist.
	if err := db.Get(ctx, &Msg{ID: msg0.ID + 999}); err != bstore.ErrAbsent {
		log.Fatalln("get did not return ErrAbsent:", err)
	}

	// Inserting duplicate values results in ErrUnique.
	if err := db.Insert(ctx, &Msg{MailboxID: trash.ID, UID: 1, Received: now}); err == nil || !errors.Is(err, bstore.ErrUnique) {
		log.Fatalln("inserting duplicate message did not return ErrUnique:", err)
	}

	// Inserting fields that reference non-existing records results in ErrReference.
	if err := db.Insert(ctx, &Msg{MailboxID: trash.ID + 999, UID: 1, Received: now}); err == nil || !errors.Is(err, bstore.ErrReference) {
		log.Fatalln("inserting reference to absent mailbox did not return ErrReference:", err)
	}

	// Deleting records that are still referenced results in ErrReference.
	if err := db.Delete(ctx, &Mailbox{ID: inbox.ID}); err == nil || !errors.Is(err, bstore.ErrReference) {
		log.Fatalln("deleting mailbox that is still referenced did not return ErrReference:", err)
	}

	// Updating a record checks constraints.
	nmsg0 = msg0
	nmsg0.UID = 2 // Not unique.
	if err := db.Update(ctx, &nmsg0); err == nil || !errors.Is(err, bstore.ErrUnique) {
		log.Fatalln("updating message to already present UID did not return ErrUnique:", err)
	}

	nmsg0 = msg0
	nmsg0.Received = time.Time{} // Zero value.
	if err := db.Update(ctx, &nmsg0); err == nil || !errors.Is(err, bstore.ErrZero) {
		log.Fatalln("updating message to zero Received did not return ErrZero:", err)
	}

	// Use a transaction with DB.Write or DB.Read for a consistent view.
	err = db.Write(ctx, func(tx *bstore.Tx) error {
		// tx also has Insert, Update, Delete, Get.
		// But we can compose and execute proper queries.
		//
		// We can call several Filter* and Sort* methods that all add
		// to the query. We end with an operation like Count, Get (a
		// single record), List (all selected records), Delete (delete
		// selected records), Update, etc.
		//
		// FilterNonzero filters on the nonzero field values of its
		// parameter. Since "false" is a zero value, we cannot use
		// FilterNonzero but use FilterEqual instead. We also want the
		// messages in "newest first" order.
		//
		// QueryTx and QueryDB must be called on the package, because
		// type parameters cannot be introduced on methods in Go.
		q := bstore.QueryTx[Msg](tx)
		q.FilterNonzero(Msg{MailboxID: inbox.ID})
		q.FilterEqual("Seen", false)
		q.SortDesc("Received")
		msgs, err := q.List()
		if err != nil {
			log.Fatalln("listing unseen inbox messages, newest first:", err)
		}
		if len(msgs) != 3 || msgs[0].ID != msg2.ID || msgs[1].ID != msg3.ID || msgs[2].ID != msg0.ID {
			log.Fatalf("listing unseen inbox messages, got %v, expected message ids %d,%d,%d", msgs, msg2.ID, msg3.ID, msg0.ID)
		}

		// The index on MailboxID,Received was used automatically to
		// retrieve the messages efficiently in sorted order without
		// requiring a fetch + in-memory sort.
		stats := tx.Stats()
		if stats.PlanIndexScan != 1 {
			log.Fatalf("index scan was not used (%d)", stats.PlanIndexScan)
		} else if stats.Sort != 0 {
			log.Fatalf("in-memory sort was performed (%d)", stats.Sort)
		}

		// We can use filters to select records to delete.
		// Note the chaining: filters return the same, modified query.
		// Operations like Delete finish the query. Don't put too many
		// filters in a single chained statement, for readability.
		n, err := bstore.QueryTx[Msg](tx).FilterNonzero(Msg{MailboxID: trash.ID}).Delete()
		if err != nil {
			log.Fatalln("deleting messages from trash:", err)
		} else if n != 2 {
			log.Fatalf("deleted %d messages from trash, expected 2", n)
		}

		// We can select messages to update, e.g. to mark all messages in inbox as seen.
		// We can also gather the records or their IDs that are removed, similar to SQL "returning".
		var updated []Msg
		q = bstore.QueryTx[Msg](tx)
		q.FilterNonzero(Msg{MailboxID: inbox.ID})
		q.FilterEqual("Seen", false)
		q.SortDesc("Received")
		q.Gather(&updated)
		n, err = q.UpdateNonzero(Msg{Seen: true})
		if err != nil {
			log.Fatalln("update messages in inbox to seen:", err)
		} else if n != 3 || len(updated) != 3 {
			log.Fatalf("updated %d messages %v, expected 3", n, updated)
		}

		// We can also iterate over the messages one by one. Below we
		// iterate over just the IDs efficiently, use .Next() for
		// iterating over the full messages.
		stats = tx.Stats()
		var ids []uint64
		q = bstore.QueryTx[Msg](tx).FilterNonzero(Msg{MailboxID: inbox.ID}).SortAsc("Received")
		for {
			var id uint64
			if err := q.NextID(&id); err == bstore.ErrAbsent {
				// No more messages.
				// Note: if we don't iterate until an error, Close must be called on the query for cleanup.
				break
			} else if err != nil {
				log.Fatalln("iterating over IDs:", err)
			}
			// The ID is fetched from the index. The full record is
			// never read from the database. Calling Next instead
			// of NextID does always fetch, parse and return the
			// full record.
			ids = append(ids, id)
		}
		if len(ids) != 4 || ids[0] != msg0.ID || ids[1] != msg3.ID || ids[2] != msg1.ID || ids[3] != msg2.ID {
			log.Fatalf("iterating over IDs, got %v, expected %d,%d,%d,%d", ids, msg0.ID, msg3.ID, msg1.ID, msg2.ID)
		}

		delta := tx.Stats().Sub(stats)
		if delta.Index.Cursor == 0 || delta.Records.Get != 0 {
			log.Fatalf("no index was scanned (%d), or records were fetched (%d)", delta.Index.Cursor, delta.Records.Get)
		}

		// Return success causing transaction to commit.
		return nil
	})
	if err != nil {
		log.Fatalln("write transaction:", err)
	}
}
Index ¶
- Variables
- type DB
- func (db *DB) Begin(ctx context.Context, writable bool) (*Tx, error)
- func (db *DB) Close() error
- func (db *DB) Delete(ctx context.Context, values ...any) error
- func (db *DB) Drop(ctx context.Context, name string) error
- func (db *DB) Get(ctx context.Context, values ...any) error
- func (db *DB) HintAppend(append bool, values ...any) error
- func (db *DB) Insert(ctx context.Context, values ...any) error
- func (db *DB) Read(ctx context.Context, fn func(*Tx) error) error
- func (db *DB) Register(ctx context.Context, typeValues ...any) error
- func (db *DB) Stats() Stats
- func (db *DB) Update(ctx context.Context, values ...any) error
- func (db *DB) Write(ctx context.Context, fn func(*Tx) error) error
- type Options
- type Query
- func (q *Query[T]) Close() error
- func (q *Query[T]) Count() (n int, rerr error)
- func (q *Query[T]) Delete() (deleted int, rerr error)
- func (q *Query[T]) Err() error
- func (q *Query[T]) Exists() (exists bool, rerr error)
- func (q *Query[T]) FilterEqual(fieldName string, values ...any) *Query[T]
- func (q *Query[T]) FilterFn(fn func(value T) bool) *Query[T]
- func (q *Query[T]) FilterGreater(fieldName string, value any) *Query[T]
- func (q *Query[T]) FilterGreaterEqual(fieldName string, value any) *Query[T]
- func (q *Query[T]) FilterID(id any) *Query[T]
- func (q *Query[T]) FilterIDs(ids any) *Query[T]
- func (q *Query[T]) FilterIn(fieldName string, value any) *Query[T]
- func (q *Query[T]) FilterLess(fieldName string, value any) *Query[T]
- func (q *Query[T]) FilterLessEqual(fieldName string, value any) *Query[T]
- func (q *Query[T]) FilterNonzero(value T) *Query[T]
- func (q *Query[T]) FilterNotEqual(fieldName string, values ...any) *Query[T]
- func (q *Query[T]) ForEach(fn func(value T) error) (rerr error)
- func (q *Query[T]) Gather(l *[]T) *Query[T]
- func (q *Query[T]) GatherIDs(ids any) *Query[T]
- func (q *Query[T]) Get() (value T, rerr error)
- func (q *Query[T]) IDs(idsptr any) (rerr error)
- func (q *Query[T]) Limit(n int) *Query[T]
- func (q *Query[T]) List() (list []T, rerr error)
- func (q *Query[T]) Next() (value T, rerr error)
- func (q *Query[T]) NextID(idptr any) (rerr error)
- func (q *Query[T]) SortAsc(fieldNames ...string) *Query[T]
- func (q *Query[T]) SortDesc(fieldNames ...string) *Query[T]
- func (q *Query[T]) Stats() Stats
- func (q *Query[T]) UpdateField(fieldName string, value any) (updated int, rerr error)
- func (q *Query[T]) UpdateFields(fieldValues map[string]any) (updated int, rerr error)
- func (q *Query[T]) UpdateNonzero(value T) (updated int, rerr error)
- type Stats
- type StatsKV
- type Tx
- func (tx *Tx) Commit() error
- func (tx *Tx) Delete(values ...any) error
- func (tx *Tx) Get(values ...any) error
- func (tx *Tx) Insert(values ...any) error
- func (tx *Tx) Keys(typeName string, fn func(pk any) error) error
- func (tx *Tx) Record(typeName, key string, fields *[]string) (map[string]any, error)
- func (tx *Tx) Records(typeName string, fields *[]string, fn func(map[string]any) error) error
- func (tx *Tx) Rollback() error
- func (tx *Tx) Stats() Stats
- func (tx *Tx) Types() ([]string, error)
- func (tx *Tx) Update(values ...any) error
- func (tx *Tx) WriteTo(w io.Writer) (n int64, err error)
Examples ¶
Constants ¶
This section is empty.
Variables ¶
var (
	ErrAbsent       = errors.New("absent") // If a function can return an ErrAbsent, it can be compared directly, without errors.Is.
	ErrZero         = errors.New("must be nonzero")
	ErrUnique       = errors.New("not unique")
	ErrReference    = errors.New("referential inconsistency")
	ErrMultiple     = errors.New("multiple results")
	ErrSeq          = errors.New("highest autoincrement sequence value reached")
	ErrType         = errors.New("unknown/bad type")
	ErrIncompatible = errors.New("incompatible types")
	ErrFinished     = errors.New("query finished")
	ErrStore        = errors.New("internal/storage error") // E.g. when buckets disappear, possibly by external users of the underlying BoltDB database.
	ErrParam        = errors.New("bad parameters")
	ErrTxBotched    = errors.New("botched transaction") // Set on transactions after failed and aborted write operations.
)
var StopForEach error = errors.New("stop foreach")
StopForEach is an error value that, if returned by the function passed to Query.ForEach, stops further iterations.
Functions ¶
This section is empty.
Types ¶
type DB ¶
type DB struct {
// contains filtered or unexported fields
}
DB is a database storing Go struct values in an underlying bolt database. DB is safe for concurrent use, unlike a Tx or a Query.
func Open ¶
Open opens a bstore database and registers types by calling Register.
If the file does not exist, a new database file is created, unless opts has MustExist set. Files are created with permission 0600, or with Perm from Options if nonzero.
Only one DB instance can be open for a file at a time. Use opts.Timeout to specify a timeout during open to prevent indefinite blocking.
The context is used for opening and initializing the database, not for further operations. If the context is canceled while waiting on the database file lock, the operation is not aborted other than when the deadline/timeout is reached.
See function Register for checks for changed/unchanged schema during open based on environment variable "bstore_schema_check".
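A sketch of opening a database with explicit Options; the path and the Account type are hypothetical, and "context", "log" and "time" are assumed to be imported:

opts := bstore.Options{
	Timeout:   5 * time.Second, // Don't wait on the file lock indefinitely.
	Perm:      0660,            // Used if a new database file is created.
	MustExist: false,           // Create the file if it does not exist yet.
}
db, err := bstore.Open(context.Background(), "data/app.db", &opts, Account{})
if err != nil {
	log.Fatalln("open:", err)
}
defer db.Close()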
func (*DB) Begin ¶
Begin starts a transaction.
If writable is true, the transaction allows modifications. Only one writable transaction can be active at a time on a DB. No read-only transactions can be active at the same time. Attempting to begin a read-only transaction from a writable transaction leads to deadlock.
A writable Tx can be committed or rolled back. A read-only transaction must always be rolled back.
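A sketch of an explicit read-modify-write transaction with Begin; for most code the DB.Write and DB.Read helpers are more convenient. Mailbox is the type from the Example; renameMailbox is a hypothetical helper:

func renameMailbox(ctx context.Context, db *bstore.DB, id uint32, name string) error {
	tx, err := db.Begin(ctx, true) // Writable transaction.
	if err != nil {
		return err
	}
	mb := Mailbox{ID: id}
	if err := tx.Get(&mb); err != nil {
		tx.Rollback()
		return err
	}
	mb.Name = name
	if err := tx.Update(&mb); err != nil {
		tx.Rollback()
		return err
	}
	return tx.Commit()
}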
func (*DB) Drop ¶
Drop removes a type and its data from the database. If the type is currently registered, it is unregistered and no longer available. If a type is still referenced by another type, e.g. through a "ref" struct tag, ErrReference is returned. If the type does not exist, ErrAbsent is returned.
func (*DB) HintAppend ¶
HintAppend sets a hint for whether changes to the types indicated by each struct in values are (mostly) append-only.
This currently sets the BoltDB bucket FillPercent to 1 for efficient use of storage space.
func (*DB) Read ¶
Read calls function fn with a new read-only transaction, ensuring transaction rollback.
func (*DB) Register ¶
Register registers the Go types of each value in typeValues for use with the database. Each value must be a struct, not a pointer.
Type definition versions (schema versions) are added to the database if they don't already exist or have changed. Existing type definitions are checked for compatibility. Unique indexes are created if they don't already exist. Creating a new unique index fails with ErrUnique on duplicate values. If a nonzero constraint is added, all records are verified to be nonzero. If a zero value is found, ErrZero is returned.
Register can be called multiple times, with different types. But types that reference each other must be registered in the same call to Register.
To help during development, if environment variable "bstore_schema_check" is set to "changed", an error is returned if there is no schema change. If it is set to "unchanged", an error is returned if there was a schema change.
func (*DB) Stats ¶
Stats returns usage statistics for the lifetime of DB. Stats are tracked first in a Query or a Tx. Stats from a Query are propagated to its Tx when the Query finishes. Stats from a Tx are propagated to its DB when the transaction ends.
type Options ¶
type Options struct {
	Timeout        time.Duration // Abort if opening DB takes longer than Timeout. If not set, the deadline from the context is used.
	Perm           fs.FileMode   // Permissions for new file if created. If zero, 0600 is used.
	MustExist      bool          // Before opening, check that file exists. If not, io/fs.ErrNotExist is returned.
	RegisterLogger *slog.Logger  // For debug logging about schema upgrades.
}
Options configure how a database should be opened or initialized.
type Query ¶
type Query[T any] struct {
	// contains filtered or unexported fields
}
Query selects data for Go struct T based on filters, sorting, limits. The query is completed by calling an operation, such as Count, Get, List, Update, Delete, etc.
Record selection functions like FilterEqual and Limit return the (modified) query itself, allowing chaining of calls.
Queries are automatically closed after their operation, with two exceptions: after using Next or NextID on a query that has not yet returned a non-nil error, you must call Close.
A Query is not safe for concurrent use.
func QueryDB ¶
QueryDB returns a new Query for type T. When an operation on the query is executed, a read-only/writable transaction is created as appropriate for the operation.
func QueryTx ¶
QueryTx returns a new Query that operates on type T using transaction tx. The context of the transaction is used for the query.
func (*Query[T]) Close ¶
Close closes a Query. Must always be called for Queries on which Next or NextID was called. Other operations call Close themselves.
func (*Query[T]) Delete ¶
Delete removes the selected records, returning how many were deleted.
See Gather and GatherIDs for collecting the deleted records or IDs.
func (*Query[T]) Err ¶
Err returns the error set on the query, if any. Errors can be set by invalid filters or canceled contexts. Finished queries return ErrFinished.
func (*Query[T]) FilterEqual ¶
FilterEqual selects records that have one of values for fieldName.
Note: Value must be a compatible type for comparison with fieldName. Go constant numbers become ints, which are not compatible with uint or float types.
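For example, filtering on the uint32 field MailboxID of the Msg type from the Example requires a value of exactly that type; a sketch assuming an open transaction tx and a hypothetical helper:

func countInMailbox(tx *bstore.Tx, mailboxID uint32) (int, error) {
	q := bstore.QueryTx[Msg](tx)
	// Passing an untyped constant like 123 would arrive as int and not
	// match the uint32 field; pass a value of the exact field type.
	q.FilterEqual("MailboxID", mailboxID)
	return q.Count()
}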
func (*Query[T]) FilterFn ¶
FilterFn calls fn for each record selected so far. If fn returns true, the record is kept for further filters and finally the operation.
func (*Query[T]) FilterGreater ¶
FilterGreater selects records that have fieldName > value.
Note: Value must be a compatible type for comparison with fieldName. Go constant numbers become ints, which are not compatible with uint or float types.
func (*Query[T]) FilterGreaterEqual ¶
FilterGreaterEqual selects records that have fieldName >= value.
func (*Query[T]) FilterID ¶
FilterID selects the records with primary key id, which must be of the type of T's primary key.
func (*Query[T]) FilterIDs ¶
FilterIDs selects the records with a primary key that is in ids. Ids must be a slice of T's primary key type.
func (*Query[T]) FilterIn ¶
FilterIn selects records whose slice field fieldName contains value as one of its elements.
If fieldName has an index, it is used to select rows.
Note: Value must be a compatible type for comparison with the elements of fieldName. Go constant numbers become ints, which are not compatible with uint or float types.
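A sketch of an indexed slice field and a lookup with FilterIn; the Note type, the helper and an open transaction tx are assumptions for illustration:

type Note struct {
	ID     int64
	Text   string
	Labels []string `bstore:"index"` // Index on a slice field, for lookups of single elements.
}

// notesWithLabel returns all notes whose Labels slice contains label,
// using the index on Labels.
func notesWithLabel(tx *bstore.Tx, label string) ([]Note, error) {
	q := bstore.QueryTx[Note](tx)
	q.FilterIn("Labels", label)
	return q.List()
}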
func (*Query[T]) FilterLess ¶
FilterLess selects records that have fieldName < value.
func (*Query[T]) FilterLessEqual ¶
FilterLessEqual selects records that have fieldName <= value.
func (*Query[T]) FilterNonzero ¶
FilterNonzero gathers the nonzero fields from value, and selects records that have equal values for those fields. At least one value must be nonzero. If a value comes from an external source, e.g. user input, make sure it is not the zero value.
Keep in mind that filtering on an embed/anonymous field looks at individual fields in the embedded field for non-zeroness, not at the embed field as a whole.
func (*Query[T]) FilterNotEqual ¶
FilterNotEqual selects records that do not have any of values for fieldName.
func (*Query[T]) ForEach ¶
ForEach calls fn on each selected record. If fn returns StopForEach, ForEach stops iterating, no longer calls fn, and returns nil. Fn must not update values: the internal cursor is not repositioned between invocations of fn, so updates would cause undefined behaviour (in practice, matching values could be skipped).
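A sketch that prints the senders of at most limit unseen messages, stopping early with StopForEach; assumes the Msg type from the Example, an open transaction tx and an imported "fmt":

func printUnseenFrom(tx *bstore.Tx, limit int) error {
	n := 0
	q := bstore.QueryTx[Msg](tx)
	q.FilterEqual("Seen", false)
	q.SortDesc("Received")
	return q.ForEach(func(m Msg) error {
		fmt.Println(m.From)
		n++
		if n >= limit {
			return bstore.StopForEach // Stop iterating; ForEach returns nil.
		}
		return nil
	})
}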
func (*Query[T]) Gather ¶
Gather causes an Update or Delete operation to return the values of the affected records into l. For Update, the updated records are returned.
func (*Query[T]) GatherIDs ¶
GatherIDs causes an Update or Delete operation to return the primary keys of affected records into ids, which must be a pointer to a slice of T's primary key.
func (*Query[T]) Get ¶
Get returns the single selected record.
ErrMultiple is returned if multiple records were selected. ErrAbsent is returned if no record was selected.
func (*Query[T]) IDs ¶
IDs sets idsptr to the primary keys of selected records. Idsptr must be a pointer to a slice of T's primary key type.
func (*Query[T]) Limit ¶
Limit stops selecting records after the first n records. Can only be called once. n must be > 1.
func (*Query[T]) List ¶
List returns all selected records. On success with zero selected records, List returns the empty list.
func (*Query[T]) Next ¶
Next fetches the next record, moving the cursor forward.
ErrAbsent is returned if no more records match.
Automatically created transactions are read-only.
Close must be called on a Query on which Next or NextID was called and that is not yet finished, i.e. has not yet returned an error (including ErrAbsent).
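A sketch that collects up to limit unseen messages with Next; because the loop can stop before Next has returned an error, the query must be closed explicitly (Limit would be an alternative here). Assumes the Msg type from the Example and an open transaction tx:

func collectUnseen(tx *bstore.Tx, limit int) ([]Msg, error) {
	q := bstore.QueryTx[Msg](tx)
	q.FilterEqual("Seen", false)
	q.SortAsc("Received")
	// The loop below may break before Next has returned an error
	// (including ErrAbsent), so always close the query.
	defer q.Close()

	var msgs []Msg
	for len(msgs) < limit {
		m, err := q.Next()
		if err == bstore.ErrAbsent {
			break // No more matching messages.
		} else if err != nil {
			return nil, err
		}
		msgs = append(msgs, m)
	}
	return msgs, nil
}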
func (*Query[T]) NextID ¶
NextID is like Next, but only fetches the primary key of the next matching record, storing it in idptr.
func (*Query[T]) SortAsc ¶
SortAsc sorts the selected records by fieldNames in ascending order. Additional orderings can be added by more calls to SortAsc or SortDesc.
func (*Query[T]) SortDesc ¶
SortDesc sorts the selected records by fieldNames in descending order. Additional orderings can be added by more calls to SortAsc or SortDesc.
func (*Query[T]) Stats ¶
Stats returns the current statistics for this query. When a query finishes, its stats are added to those of its transaction. When a transaction finishes, its stats are added to those of its database.
func (*Query[T]) UpdateField ¶
UpdateField calls UpdateFields for fieldName and value.
func (*Query[T]) UpdateFields ¶
UpdateFields updates all selected records, setting fields named by the map keys of fieldValues to the corresponding map value and returning the number of records updated.
See Gather and GatherIDs for collecting the updated records or IDs.
Entire embed fields can be updated, as well as their individual embedded fields.
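A sketch that marks all messages in a mailbox as not seen; UpdateNonzero cannot set a field to false, so UpdateFields is used. Assumes the Msg type from the Example and an open transaction tx:

func markUnseen(tx *bstore.Tx, mailboxID uint32) (int, error) {
	q := bstore.QueryTx[Msg](tx)
	q.FilterNonzero(Msg{MailboxID: mailboxID})
	// Keys are field names; values must have the exact field types.
	return q.UpdateFields(map[string]any{"Seen": false})
}

For a single field, q.UpdateField("Seen", false) would be equivalent.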
func (*Query[T]) UpdateNonzero ¶
UpdateNonzero updates all selected records with the non-zero fields from value, returning the number of records updated.
Recall that false, 0, "" are all zero values. Use UpdateField or UpdateFields to set fields to their zero value. This is especially relevant if the field value comes from an external source, e.g. user input.
See Gather and GatherIDs for collecting the updated records or IDs.
Keep in mind that updating on an embed/anonymous field looks at individual fields in the embedded field for non-zeroness, not at the embed field as a whole.
type Stats ¶
type Stats struct {
	// Number of read-only or writable transactions. Set for DB only.
	Reads  uint
	Writes uint

	Bucket  StatsKV // Use of buckets.
	Records StatsKV // Use of records bucket for types.
	Index   StatsKV // Use of index buckets for types.

	// Operations that modify the database. Each record is counted, e.g.
	// for a query that updates/deletes multiple records.
	Get    uint
	Insert uint
	Update uint
	Delete uint

	Queries       uint // Total queries executed.
	PlanTableScan uint // Full table scans.
	PlanPK        uint // Primary key get.
	PlanUnique    uint // Full key Unique index get.
	PlanPKScan    uint // Scan over primary keys.
	PlanIndexScan uint // Scan over index.
	Sort          uint // In-memory collect and sort.

	LastType    string // Last type queried.
	LastIndex   string // Last index for LastType used for a query, or empty.
	LastOrdered bool   // Whether last scan (PK or index) use was ordered, e.g. for sorting or because of a comparison filter.
	LastAsc     bool   // If ordered, whether last index scan was ascending.
}
Stats tracks DB/Tx/Query statistics, mostly counters.
type StatsKV ¶
type StatsKV struct {
	Get    uint
	Put    uint // For Stats.Bucket, this counts calls of CreateBucket.
	Delete uint
	Cursor uint // Any cursor operation: Seek/First/Last/Next/Prev.
}
StatsKV represent operations on the underlying BoltDB key/value store.
type Tx ¶
type Tx struct {
// contains filtered or unexported fields
}
Tx is a transaction on DB.
A Tx is not safe for concurrent use.
func (*Tx) Commit ¶
Commit commits changes made in the transaction to the database. Statistics are added to its DB. If the commit fails, or the transaction was botched, the transaction is rolled back.
func (*Tx) Delete ¶
Delete removes values by their primary key from the database. Each value must be a struct or pointer to a struct. Indices are automatically updated and referential integrity is maintained.
ErrAbsent is returned if the record does not exist. ErrReference is returned if another record still references this record.
func (*Tx) Get ¶
Get fetches records by their primary key from the database. Each value must be a pointer to a struct.
ErrAbsent is returned if the record does not exist.
func (*Tx) Insert ¶
Insert inserts values as new records into the database. Each value must be a pointer to a struct. If the primary key field is zero and autoincrement is not disabled, the next sequence is assigned. Indices are automatically updated.
ErrUnique is returned if the record already exists. ErrSeq is returned if no next autoincrement integer is available. ErrZero is returned if a nonzero constraint would be violated. ErrReference is returned if another record is referenced that does not exist.
func (*Tx) Keys ¶
Keys calls fn for each parsed primary key of the type "typeName". The type does not have to be registered with Open or Register. For use with Record(s) to export data.
func (*Tx) Record ¶
Record returns the record with primary key "key" for "typeName", parsed as a map. "Fields" is set to the fields of the type. The type does not have to be registered with Open or Register. Record parses the data without the Go type present. BinaryMarshal fields are returned as bytes.
func (*Tx) Records ¶
Records calls "fn" for each record of "typeName". Records sets "fields" to the fields of the type. The type does not have to be registered with Open or Register. Record parses the data without the Go type present. BinaryMarshal fields are returned as bytes.
func (*Tx) Rollback ¶
Rollback aborts and cancels any changes made in this transaction. Statistics are added to its DB.
func (*Tx) Stats ¶
Stats returns usage statistics for this transaction. When a transaction is rolled back or committed, its statistics are copied into its DB.
func (*Tx) Types ¶
Types returns the types present in the database, regardless of whether they are currently registered using Open or Register. Useful for exporting data with Keys and Records.
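As an illustration of exporting data without the Go types registered, a sketch of a small program that dumps all records from a database file as JSON using Types and Records; the path is hypothetical, and the database is opened without registering any types (the export functions do not require registration):

package main

import (
	"context"
	"encoding/json"
	"log"
	"os"

	"github.com/mjl-/bstore"
)

func main() {
	ctx := context.Background()
	db, err := bstore.Open(ctx, "data/app.db", nil)
	if err != nil {
		log.Fatalln("open:", err)
	}
	defer db.Close()

	err = db.Read(ctx, func(tx *bstore.Tx) error {
		types, err := tx.Types()
		if err != nil {
			return err
		}
		enc := json.NewEncoder(os.Stdout)
		for _, name := range types {
			var fields []string
			err := tx.Records(name, &fields, func(record map[string]any) error {
				// Write one JSON object per record, tagged with its type name.
				return enc.Encode(map[string]any{"type": name, "record": record})
			})
			if err != nil {
				return err
			}
		}
		return nil
	})
	if err != nil {
		log.Fatalln("export:", err)
	}
}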