datastore

Published: Nov 25, 2024 License: Apache-2.0 Imports: 35 Imported by: 129

Documentation

Overview

Package datastore contains APIs to handle datastore queries.

Index

Constants

This section is empty.

Variables

var (
	ErrNoSuchEntity          = datastore.ErrNoSuchEntity
	ErrConcurrentTransaction = datastore.ErrConcurrentTransaction

	// Stop is understood by various services to stop iterative processes. Examples
	// include datastore.Interface.Run's callback.
	Stop = stopErr{}

	// ErrLimitExceeded is used to indicate the iteration limit has been exceeded.
	ErrLimitExceeded = limitExceeded{}
)

These errors are returned by various datastore.Interface methods.

var (
	// ErrMultipleInequalityFilter is returned from Query.Finalize if you build a
	// query which has inequality filters on multiple fields.
	ErrMultipleInequalityFilter = errors.New(
		"inequality filters on multiple properties in the same Query is not allowed")

	// ErrNullQuery is returned from Query.Finalize if you build a query for which
	// there cannot possibly be any results.
	ErrNullQuery = errors.New(
		"the query is overconstrained and can never have results")
)
var ErrSkipProperty = errors.New("the property should be skipped")

ErrSkipProperty can be returned from ToProperty to instruct the default PropertyLoadSaver to silently skip storing the property.

var WritePropertyMapDeterministic = false

WritePropertyMapDeterministic allows tests to make Serializer.PropertyMap deterministic.

This should be set once at the top of your tests in an init() function.

Functions

func AddRawFilters

func AddRawFilters(c context.Context, filts ...RawFilter) context.Context

AddRawFilters adds RawInterface filters to the context.

func AllocateIDs

func AllocateIDs(c context.Context, ent ...any) error

AllocateIDs allows you to allocate IDs from the datastore without putting any data.

A partial valid key will be constructed from each entity's kind and parent, if present. An allocation will then be performed against the datastore for each key, and the partial key will be populated with a unique integer ID. The resulting keys will be applied to their objects using PopulateKey. If successful, any existing ID will be destroyed.

If an object is supplied that cannot accept an integer key, this method will panic.

ent must be one of:

  • *S where S is a struct
  • *P where *P is a concrete type implementing PropertyLoadSaver
  • []S or []*S where S is a struct
  • []P or []*P where *P is a concrete type implementing PropertyLoadSaver
  • []I, where I is some interface type. Each element of the slice must have either *S or *P as its underlying type.
  • []*Key, to populate a slice of partial-valid keys.

nil values (or interface-typed nils) are not allowed, neither as standalone arguments nor inside slices. Passing them will cause a panic.

If an error is encountered, the returned error value will depend on the input arguments. If one argument is supplied, the result will be the encountered error type. If multiple arguments are supplied, the result will be a MultiError whose error index corresponds to the argument in which the error was encountered.

If an ent argument is a slice, its error type will be a MultiError. Note that in the scenario where multiple slices are provided, this will return a MultiError containing a nested MultiError for each slice argument.

func Count

func Count(c context.Context, q *Query) (int64, error)

Count executes the given query and returns the number of entries which match it.

If the query is marked as eventually consistent via EventualConsistency(true), Count uses fast server-side aggregation, with the downside that such queries may return slightly stale results and can't be used inside transactions.

If the query is strongly consistent, Count essentially performs a full keys-only query and counts the number of matches locally.

func CountBatch

func CountBatch(c context.Context, batchSize int32, q *Query) (int64, error)

CountBatch is a batching version of Count. See RunBatch for more information about batching, and Count for more information about the parameters.

If the Context supplied to CountBatch is cancelled or reaches its deadline, CountBatch will terminate with the Context's error.

By default, datastore applies a short (~5s) timeout to queries. This can be increased, usually to around several minutes, by explicitly setting a deadline on the supplied Context.

If the specified `batchSize` is <= 0, no batching will be performed.

func CountMulti

func CountMulti(c context.Context, queries []*Query) (int64, error)

CountMulti runs multiple queries in parallel and counts the total number of unique entities produced by them.

Unlike Count, this method doesn't support server-side aggregation. It always does full keys-only queries. If you have only one query and don't care about strong consistency, use `Count(c, q.EventualConsistency(true))`: it will use the server-side aggregation which is orders of magnitude faster than the local counting.

func Delete

func Delete(c context.Context, ent ...any) error

Delete removes the supplied entities from the datastore.

ent must be one of:

  • *S, where S is a struct
  • *P, where *P is a concrete type implementing PropertyLoadSaver
  • []S or []*S, where S is a struct
  • []P or []*P, where *P is a concrete type implementing PropertyLoadSaver
  • []I, where I is some interface type. Each element of the slice must have either *S or *P as its underlying type.
  • *Key, to remove a specific key from the datastore.
  • []*Key, to remove a slice of keys from the datastore.

nil values (or interface-typed nils) are not allowed, neither as standalone arguments nor inside slices. Passing them will cause a panic.

If an error is encountered, the returned error value will depend on the input arguments. If one argument is supplied, the result will be the encountered error type. If multiple arguments are supplied, the result will be a MultiError whose error index corresponds to the argument in which the error was encountered.

If an ent argument is a slice, its error type will be a MultiError. Note that in the scenario where multiple slices are provided, this will return a MultiError containing a nested MultiError for each slice argument.

func Get

func Get(c context.Context, dst ...any) error

Get retrieves objects from the datastore.

Each element in dst must be one of:

  • *S, where S is a struct
  • *P, where *P is a concrete type implementing PropertyLoadSaver
  • []S or []*S, where S is a struct
  • []P or []*P, where *P is a concrete type implementing PropertyLoadSaver
  • []I, where I is some interface type. Each element of the slice must have either *S or *P as its underlying type.

nil values (or interface-typed nils) are not allowed, neither as standalone arguments nor inside slices. Passing them will cause a panic.

If an error is encountered, the returned error value will depend on the input arguments. If one argument is supplied, the result will be the encountered error type. If multiple arguments are supplied, the result will be a MultiError whose error index corresponds to the argument in which the error was encountered.

If a dst argument is a slice, its error type will be a MultiError. Note that in the scenario where multiple slices are provided, this will return a MultiError containing a nested MultiError for each slice argument.

If there was an issue retrieving the entity, the input `dst` objects will not be affected. This means that you can populate an object for dst with some values, do a Get, and on an ErrNoSuchEntity, do a Put (inside a transaction, of course :)).

func GetAll

func GetAll(c context.Context, q *Query, dst any) error

GetAll retrieves all of the Query results into dst.

By default, datastore applies a short (~5s) timeout to queries. This can be increased, usually to around several minutes, by explicitly setting a deadline on the supplied Context.

dst must be one of:

  • *[]S or *[]*S, where S is a struct
  • *[]P or *[]*P, where *P is a concrete type implementing PropertyLoadSaver
  • *[]*Key implies a keys-only query.

Deprecated: Use GetAllWithLimit instead. If the database happens to have many entities which match q, GetAll can easily exhaust the available memory before returning, leading to an OOM error. If you use GetAllWithLimit you can pick an 'impossible' limit, which will still be safer by default than GetAll, and easier to debug, too.

func GetAllWithLimit

func GetAllWithLimit(ctx context.Context, q *Query, dst any, lim int) error

GetAllWithLimit retrieves all of the Query results into dst up to a limit.

GetAllWithLimit is like GetAll, but it applies a limit. If the limit is negative, an error is returned. Additionally, if the number of results exceeds the limit, ErrLimitExceeded is returned to indicate that a truncation has occurred.

Note that GetAllWithLimit does NOT return the cursor. It is primarily intended as a way to migrate calls to GetAll to a version with more predictable behavior, so that you get a clean failed RPC when the result set is too big rather than a hard-to-debug OOM.

By default, datastore applies a short (~5s) timeout to queries. This can be increased, usually to around several minutes, by explicitly setting a deadline on the supplied Context.

dst must be one of:

  • *[]S or *[]*S, where S is a struct
  • *[]P or *[]*P, where *P is a concrete type implementing PropertyLoadSaver
  • *[]*Key implies a keys-only query.

func GetMetaDefault

func GetMetaDefault(getter MetaGetterSetter, key string, dflt any) any

GetMetaDefault is a helper for GetMeta, allowing a default value.

If the metadata key is not available, or its type doesn't equal the homogenized type of dflt, then dflt will be returned.

Type homogenization:

signed integer types -> int64
bool                 -> Toggle fields (bool)

Example:

pls.GetMetaDefault("foo", 100).(int64)

func GetPLS

func GetPLS(obj any) interface {
	PropertyLoadSaver
	MetaGetterSetter
}

GetPLS resolves obj into default struct PropertyLoadSaver and MetaGetterSetter implementation.

obj must be a non-nil pointer to a struct of some sort.

By default, exported fields will be serialized to/from the datastore. If the field is not exported, it will be skipped by the serialization routines.

If a field is of a non-supported type (see Property for the list of supported property types), this function will panic. Other problems include duplicate field names (due to tagging), recursively defined structs, nested structures with multiple slices (e.g. slices of slices, either directly `[][]type` or indirectly `[]Embedded` where Embedded contains a slice.)

The following field types are supported:

  • int64, int32, int16, int8, int
  • uint32, uint16, uint8, byte
  • float64, float32
  • string
  • []byte
  • bool
  • time.Time
  • GeoPoint
  • *Key
  • any Type whose underlying type is one of the above types
  • Types which implement PropertyConverter on (*Type)
  • Types which implement proto.Message
  • A struct composed of the above types (except for nested slices)
  • A slice of any of the above types

GetPLS supports the following struct tag syntax:

`gae:"[fieldName][,noindex]"` -- `fieldName`, if supplied, is an alternate
   datastore property name for an exportable field. By default this library
   uses the Go field name as the datastore property name, but sometimes
   this is undesirable (e.g. for datastore compatibility with another,
   likely python, application which named the field with a lowercase
   first letter).

   A fieldName of "-" means that gae will ignore the field for all
   serialization/deserialization.

   if noindex is specified, then this field will not be indexed in the
   datastore, even if it was an otherwise indexable type. If fieldName is
   blank, and noindex is specified, then fieldName will default to the
   field's actual name. Note that by default, all fields (with indexable
   types) are indexed.

`gae:"[fieldName][,nocompress|zstd|legacy]"` -- for fields of type
   `proto.Message`. Protobuf fields are _never_ indexed, but are stored
   as encoded blobs.

   Like for other fields, `fieldName` is optional, and defaults to the Go
   struct field name if omitted.

   By default (with no options), protos are stored with binary encoding
   without compression. This is the same as "nocompress".

   You may optionally use "zstd" compression by specifying this option.

   It is valid to switch between "nocompress" and "zstd"; the library
   knows how to decode and encode both, even when the in-datastore format
   doesn't match the tag.

   The "legacy" option will store the protobuf without compression, BUT this
   encoding doesn't have a "mode" bit. This is purely for compatibility with
   the deprecated `proto-gae` generator, and is not recommended. The format
   is a `[]byte` containing the binary serialization of the proto with no
   other metadata.

`gae:"[fieldName],lsp[,noindex]"` -- for nested struct-valued fields (structs
   specifically, not pointers to structs). "lsp" stands for "local
   structured property", since this feature is primarily used for
   compatibility with Python's ndb.LocalStructuredProperty. Fields that use
   this option are stored as nested entities inside the larger outer entity.

   By default fields of nested entities are indexed. E.g. if an entity
   property `nested` contains a nested entity with a property `prop`,
   there's a datastore index on a field called `nested.prop` that can be
   used to e.g. query for all entities that have a nested entity with `prop`
   property set to some value.

   Use "noindex" option to suppress indexing of *all* fields of the nested
   entity (recursively). Without "noindex" the indexing decision is done
   based on options set on the inner fields.

   NOTE: Python's ndb.LocalStructuredProperty doesn't support indexes, so
   any nested entities written from Python will be unindexed.

   Finally, nested entities can have keys (defined as usual via meta
   fields, see below). Semantically they are just indexed key-valued
   properties internally named `__key__`. In particular to query based on
   a nested property key use e.g. `nested.__key__` field name. "noindex"
   option on the "lsp" field will turn off indexing of the nested key.
   There's no way to index some inner property, and *not* index the inner
   key at the same time. Only complete keys are stored, e.g. if an inner
   entity has `$id` meta-field, but its value is 0 (indicating an incomplete
   key), this key won't be stored at all. This can have observable
   consequences when using `$id` and `$parent` meta fields together or just
   using partially populated key in `$key` meta field.

   Keys are round-tripped correctly when using Cloud Datastore APIs, but
   Python's ndb.LocalStructuredProperty would drop them, so better not to
   depend on them when doing interop with Python.

`gae:"$metaKey[,<value>]"` -- indicates a field is metadata. Metadata
   can be used to control filter behavior, or to store key data when using
   the Interface.KeyForObj* methods. The supported field types are:
     - *Key
     - int64, int32, int16, int8, uint32, uint16, uint8, byte
     - string
     - Toggle (GetMeta and SetMeta treat the field as if it were bool)
     - Any type which implements PropertyConverter
     - Any type which implements proto.Message
   Additionally, numeric, string and Toggle types allow setting a default
   value in the struct field tag (the "<value>" portion).

   Only exported fields allow SetMeta, but all fields of appropriate type
   allow tagged defaults for use with GetMeta. See Examples.

`gae:"[-],extra"` -- for fields of type PropertyMap. Indicates that any
   extra, unrecognized or mismatched property types (type in datastore
   doesn't match your struct's field type) should be loaded into and
   saved from this field. This form allows you to control the behavior
   of reads and writes when your schema changes, or to implement something
   like ndb.Expando with a mix of structured and unstructured fields.

   If the `-` is present, then datastore write operations will not put
   elements of this map into the datastore.

   If the field is non-exported, then read operations from the datastore
   will not populate the members of this map, but extra fields or
   structural differences encountered when reading into this struct will be
   silently ignored. This is useful if you want to just ignore old fields.

   If there is a conflict between a field in the struct and a same-named
   Property in the extra field, the field in the struct takes precedence.

   Recursive structs are supported, but all extra properties go to the
   topmost structure's Extra field. This is a bit non-intuitive, but the
   implementation complexity was deemed not worth it, since that sort of
   thing is generally only useful on schema changes, which should be
   transient.

   Examples:
     // "black hole": ignore mismatches, ignore on write
     _ PropertyMap `gae:"-,extra"`

     // "expando": full content is read/written
     Expando PropertyMap `gae:",extra"`

     // "convert": content is read from datastore, but lost on writes. This
     // is useful for doing conversions from an old schema to a new one,
     // since you can retrieve the old data and populate it into new fields,
     // for example. Probably should be used in conjunction with an
     // implementation of the PropertyLoadSaver interface so that you can
     // transparently upconvert to the new schema on load.
     Convert PropertyMap `gae:"-,extra"`

Example "special" structure. This is supposed to be some sort of datastore singleton object.

struct secretFoo {
  // _id and _kind are not exported, so setting their values will not be
  // reflected by GetMeta.
  _id   int64  `gae:"$id,1"`
  _kind string `gae:"$kind,InternalFooSingleton"`

  // Value is exported, so can be read and written by the PropertyLoadSaver,
  // but secretFoo is shared with a python appengine module which has
  // stored this field as 'value' instead of 'Value'.
  Value int64  `gae:"value"`
}

Example "normal" structure that you might use in a go-only appengine app.

struct User {
  ID string `gae:"$id"`
  // "kind" is automatically implied by the struct name: "User"
  // "parent" is nil... Users are root entities

  // 'Name' will be serialized to the datastore in the field 'Name'
  Name string
}

struct Comment {
  ID int64 `gae:"$id"`
  // "kind" is automatically implied by the struct name: "Comment"

  // Parent will be enforced by the application to be a User key.
  Parent *Key `gae:"$parent"`

  // 'Lines' will be serialized to the datastore in the field 'Lines'
  Lines []string
}

A pointer-to-struct may also implement MetaGetterSetter to provide more sophisticated metadata values. Explicitly defined fields (as shown above) always take precedence over fields manipulated by the MetaGetterSetter methods. So if your GetMeta handles "kind", but you explicitly have a $kind field, the $kind field will take precedence and your GetMeta implementation will not be called for "kind".

A struct overloading any of the PropertyLoadSaver or MetaGetterSetter interfaces may evoke the default struct behavior by using GetPLS on itself. For example:

struct Special {
  Name string

  foo string
}

func (s *Special) Load(props PropertyMap) error {
  if foo, ok := props["foo"]; ok && len(foo) == 1 {
    s.foo = foo[0].Value().(string)
    delete(props, "foo")
  }
  return GetPLS(s).Load(props)
}

func (s *Special) Save(withMeta bool) (PropertyMap, error) {
  props, err := GetPLS(s).Save(withMeta)
  if err != nil {
    return nil, err
  }
  props["foo"] = []Property{MkProperty(s.foo)}
  return props, nil
}

func (s *Special) Problem() error {
  return GetPLS(s).Problem()
}

Additionally, any field ptr-to-type may implement the PropertyConverter interface to allow a single field to, for example, implement some alternate encoding (json, gzip), or even just serialize to/from a simple string field. This applies to normal fields, as well as metadata fields. It can be useful for storing struct '$id's which have multi-field meanings. For example, the Person struct below could be initialized in go as `&Person{Name{"Jane", "Doe"}}`, retaining Jane's name as manipulable Go fields. However, in the datastore, it would have a key of `/Person,"Jane|Doe"`, and loading the struct from the datastore as part of a Query, for example, would correctly populate Person.Name.First and Person.Name.Last.

type Name struct {
  First string
  Last string
}

func (n *Name) ToProperty() (Property, error) {
  return MkProperty(fmt.Sprintf("%s|%s", n.First, n.Last)), nil
}

func (n *Name) FromProperty(p Property) error {
  s, ok := p.Value().(string) // check p to be a PTString
  i := strings.IndexByte(s, '|')
  if !ok || i < 0 {
    return errors.New("Name: expected a 'First|Last' string property")
  }
  n.First, n.Last = s[:i], s[i+1:] // assign to n.First, n.Last
  return nil
}

type Person struct {
  ID Name `gae:"$id"`
}

func IntToTime

func IntToTime(v int64) time.Time

IntToTime converts a datastore time integer into its time.Time value.

func IsErrInvalidKey

func IsErrInvalidKey(err error) bool

IsErrInvalidKey tests if a given error is a wrapped datastore.ErrInvalidKey error.

func IsErrNoSuchEntity

func IsErrNoSuchEntity(err error) (found bool)

IsErrNoSuchEntity tests if an error is ErrNoSuchEntity, or is a MultiError that contains ErrNoSuchEntity and no other errors.

func IsMultiCursor

func IsMultiCursor(cursor Cursor) bool

IsMultiCursor returns true if the cursor probably represents a multicursor returned by RunMulti, and false otherwise.

Note: there is a finite chance that some other cursor can be decoded as a valid multicursor.

func IsMultiCursorString

func IsMultiCursorString(cursor string) bool

IsMultiCursorString returns true if the cursor string is probably a valid representation of a multicursor returned by RunMulti, and false otherwise.

Note: there is a finite chance that some other cursor string can be decoded as a valid multicursor.

func MakeErrInvalidKey

func MakeErrInvalidKey(reason string, args ...any) *errors.Annotator

MakeErrInvalidKey returns an errors.Annotator instance that wraps an invalid key error. Calling IsErrInvalidKey on this Annotator or its derivatives will return true.

func PopulateKey

func PopulateKey(obj any, key *Key) bool

PopulateKey loads key into obj.

obj is any object that Interface.Get is able to accept.

Upon successful application, this method will return true. If the key could not be applied to the object, this method will return false. It will panic if obj is an invalid datastore model.

func Put

func Put(c context.Context, src ...any) error

Put writes objects into the datastore.

src must be one of:

  • *S, where S is a struct
  • *P, where *P is a concrete type implementing PropertyLoadSaver
  • []S or []*S, where S is a struct
  • []P or []*P, where *P is a concrete type implementing PropertyLoadSaver
  • []I, where I is some interface type. Each element of the slice must have either *S or *P as its underlying type.

nil values (or interface-typed nils) are not allowed, neither as standalone arguments nor inside slices. Passing them will cause a panic.

A *Key will be extracted from src via KeyForObj. If extractedKey.IsIncomplete() is true, and the object is put to the datastore successfully, then Put will write the resolved (datastore-generated) *Key back to src.

NOTE: The datastore only autogenerates *Keys with integer IDs. Only models which use a raw `$key` or integer-typed `$id` field are eligible for this. A model with a string-typed `$id` field will not accept an integer-ID'd *Key and will cause the Put to fail.

If an error is encountered, the returned error value will depend on the input arguments. If one argument is supplied, the result will be the encountered error type. If multiple arguments are supplied, the result will be a MultiError whose error index corresponds to the argument in which the error was encountered.

If a src argument is a slice, its error type will be a MultiError. Note that in the scenario where multiple slices are provided, this will return a MultiError containing a nested MultiError for each slice argument.

func RoundTime

func RoundTime(t time.Time) time.Time

RoundTime rounds a time.Time to microseconds, which is the (undocumented) way that the AppEngine SDK stores it.

func Run

func Run(c context.Context, q *Query, cb any) error

Run executes the given query, and calls `cb` for each successfully retrieved item.

By default, datastore applies a short (~5s) timeout to queries. This can be increased, usually to around several minutes, by explicitly setting a deadline on the supplied Context.

cb is a callback function whose signature is

func(obj TYPE[, getCursor CursorCB]) [error]

Where TYPE is one of:

  • S or *S, where S is a struct
  • P or *P, where *P is a concrete type implementing PropertyLoadSaver
  • *Key (implies a keys-only query)

If the error is omitted from the signature, this will run until the query returns all its results, or has an error/times out.

If error is in the signature, the query will continue as long as the callback returns nil. If it returns `Stop`, the query will stop and Run will return nil. Otherwise, the query will stop and Run will return the user's error.

Run may also stop on the first datastore error encountered, which can occur due to flakiness, timeout, etc. If it encounters such an error, it will be returned.

func RunBatch

func RunBatch(c context.Context, batchSize int32, q *Query, cb any) error

RunBatch is a batching version of Run. Like Run, executes a query and invokes the supplied callback for each returned result. RunBatch differs from Run in that it performs the query in batches, using a cursor to continue the query in between batches.

See Run for more information about the parameters.

Batching processes the supplied query in batches, buffering the full batch set locally before sending its results to the user. It will then proceed to the next batch until finished or cancelled. This is useful:

  • For efficiency, decoupling the processing of query data from the underlying datastore operation.
  • For very long-running queries, where the duration of the query would normally exceed datastore's maximum query timeout.
  • The caller may count return callbacks and perform processing at each `batchSize` interval with confidence that the underlying query will not timeout during that processing.

If the Context supplied to RunBatch is cancelled or reaches its deadline, RunBatch will terminate with the Context's error.

By default, datastore applies a short (~5s) timeout to queries. This can be increased, usually to around several minutes, by explicitly setting a deadline on the supplied Context.

If the specified `batchSize` is <= 0, no batching will be performed.

func RunInTransaction

func RunInTransaction(c context.Context, f func(c context.Context) error, opts *TransactionOptions) error

RunInTransaction runs f inside of a transaction. See the appengine SDK's documentation for full details on the behavior of transactions in the datastore.

Note that the behavior of transactions may change depending on what filters have been installed. It's possible that we'll end up implementing things like nested/buffered transactions as filters.

func RunMulti

func RunMulti(c context.Context, queries []*Query, cb any) error

RunMulti executes the logical OR of multiple queries, calling `cb` for each unique entity (by *Key) that it finds. Results will be returned in the order of the provided queries; all queries must have matching Orders.

cb is a callback function (see the `Run` function comments in this file for its accepted formats and restrictions).

The cursor returned by the callback cannot be applied to a single query via `query.Start(cursor)` (in some cases this may not even produce an error, but the results are undefined). Apply the cursor to the same list of queries using ApplyCursors.

Note: projection queries are not supported, as they are non-trivial in complexity and haven't been needed yet.

Note: The cb is called for every unique entity (by *Key) that is retrieved on the current run. It is possible to get the same entity twice over two calls to RunMulti with different cursors.

DANGER: Cursors are buggy when using the Cloud Datastore production backend. Paginated queries skip entities sitting on page boundaries. This doesn't happen when using `impl/memory` and is thus hard to spot in unit tests. See the queryIterator doc for more details.

func SetRaw

SetRaw sets the current Datastore object in the context. Useful for testing with a quick mock. This is just a shorthand for a SetRawFactory invocation that sets a factory which always returns the same object.

func SetRawFactory

func SetRawFactory(c context.Context, rdsf RawFactory) context.Context

SetRawFactory sets the function to produce Datastore instances, as returned by the Raw method.

func TimeToInt

func TimeToInt(t time.Time) int64

TimeToInt converts a time value to a datastore-appropriate integer value.

This method truncates the time to microseconds and drops the timezone, because that's the (undocumented) way that the appengine SDK does it.

func UpconvertUnderlyingType

func UpconvertUnderlyingType(o any) any

UpconvertUnderlyingType takes an object o, and attempts to convert it to its native datastore-compatible type. e.g. int16 will convert to int64, and `type Foo string` will convert to `string`.
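The two documented cases can be sketched with the reflect package (the function upconvert below is a simplified illustration covering only signed integers and strings, not the library's full implementation):

```go
package main

import (
	"fmt"
	"reflect"
)

type Foo string // a named type whose underlying type is string

// upconvert widens signed integers to int64 and collapses named types
// to their underlying kind, mirroring the documented examples.
func upconvert(o any) any {
	v := reflect.ValueOf(o)
	switch v.Kind() {
	case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
		return v.Int() // reflect.Value.Int always yields an int64
	case reflect.String:
		return v.String() // a plain string, even for named types like Foo
	}
	return o
}

func main() {
	fmt.Printf("%T\n", upconvert(int16(7)))  // int64
	fmt.Printf("%T\n", upconvert(Foo("hi"))) // string
}
```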

func WithBatching

func WithBatching(c context.Context, enabled bool) context.Context

WithBatching enables or disables automatic operation batching. Batching is enabled by default, and batch sizes are defined by the datastore's Constraints.

Datastore has built-in constraints that it applies to some operations:

  • For Get, there is a maximum number of elements that can be processed in a single RPC (see Constraints.MaxGetSize).
  • For Put, there is a maximum number of elements that can be processed in a single RPC (see Constraints.MaxPutSize).
  • For Delete, there is a maximum number of elements that can be processed in a single RPC (see Constraints.MaxDeleteSize).

Batching masks these limitations, providing an interface that meets user expectations. Behind the scenes, it splits large operations into a series of parallel smaller operations that fit within the datastore's constraints.
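The splitting step can be sketched as follows (a toy helper, not the library's batching code; the real version also runs the chunks in parallel and merges their MultiErrors):

```go
package main

import "fmt"

// batches splits n elements into half-open [lo, hi) index ranges of at
// most maxSize elements each. maxSize <= 0 means "no constraint", so the
// whole operation goes out as a single chunk.
func batches(n, maxSize int) [][2]int {
	if maxSize <= 0 {
		return [][2]int{{0, n}}
	}
	var out [][2]int
	for lo := 0; lo < n; lo += maxSize {
		hi := lo + maxSize
		if hi > n {
			hi = n
		}
		out = append(out, [2]int{lo, hi})
	}
	return out
}

func main() {
	fmt.Println(batches(10, 4)) // [[0 4] [4 8] [8 10]]
	fmt.Println(batches(10, 0)) // [[0 10]]
}
```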

func WithoutTransaction

func WithoutTransaction(c context.Context) context.Context

WithoutTransaction returns a Context that isn't bound to a transaction. This may be called even when outside of a transaction, in which case the input Context is a valid return value.

This can be useful to perform non-transactional tasks given only a Context that is bound to a transaction.

Types

type BoolList

type BoolList []bool

BoolList is a convenience wrapper for []bool that provides summary methods for working with the list in aggregate.

func (BoolList) All

func (bl BoolList) All() bool

All returns true iff all of the booleans in this list are true.

func (BoolList) Any

func (bl BoolList) Any() bool

Any returns true iff any of the booleans in this list are true.
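A sketch of both methods, highlighting the vacuous cases the "iff" wording implies (All of an empty list is true, Any of an empty list is false); this mirrors the documented semantics, not necessarily the library's source:

```go
package main

import "fmt"

type BoolList []bool

// All returns true iff every element is true (vacuously true when empty).
func (bl BoolList) All() bool {
	for _, b := range bl {
		if !b {
			return false
		}
	}
	return true
}

// Any returns true iff at least one element is true (false when empty).
func (bl BoolList) Any() bool {
	for _, b := range bl {
		if b {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(BoolList{true, true}.All(), BoolList{}.All())   // true true
	fmt.Println(BoolList{false, false}.Any(), BoolList{}.Any()) // false false
}
```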

type Constraints

type Constraints struct {
	// MaxGetSize is the maximum number of entities that can be referenced in a
	// single GetMulti call. If <= 0, no constraint is applied.
	MaxGetSize int
	// MaxPutSize is the maximum number of entities that can be referenced in a
	// single PutMulti call. If <= 0, no constraint is applied.
	MaxPutSize int
	// MaxDeleteSize is the maximum number of entities that can be referenced in a
	// single DeleteMulti call. If <= 0, no constraint is applied.
	MaxDeleteSize int
}

Constraints represent implementation constraints.

A zero-value Constraints is valid, and indicates that no constraints are present.

type Cursor

type Cursor interface {
	fmt.Stringer
}

Cursor wraps datastore.Cursor.

func DecodeCursor

func DecodeCursor(c context.Context, s string) (Cursor, error)

DecodeCursor converts a string returned by a Cursor into a Cursor instance. It will return an error if the supplied string is not valid, or could not be decoded by the implementation.

type CursorCB

type CursorCB func() (Cursor, error)

CursorCB is used to obtain a Cursor while Run'ing a query on either Interface or RawInterface.

It can be invoked to obtain the current cursor.

type DeleteMultiCB

type DeleteMultiCB func(idx int, err error)

DeleteMultiCB is the callback signature provided to RawInterface.DeleteMulti

  • idx is the index of the entity, ranging from 0 through len-1.
  • err is an error associated with deleting this entity.

The callback is called once per element. It may be called concurrently, and may be called out of order. The "idx" variable describes which element is being processed. If any callbacks are invoked, exactly one callback will be invoked for each supplied element.

type Deserializer

type Deserializer struct {
	// If empty, this Deserializer will use the appid and namespace encoded in
	// *Key objects (if any).
	//
	// If supplied, any encoded appid/namespace will be ignored and this will be
	// used to fill in the appid and namespace for all returned *Key objects.
	KeyContext KeyContext
}

Deserializer allows reading binary-encoded datastore types (like Properties, Keys, etc.)

See the `Deserialize` variable for a common shortcut.

var Deserialize Deserializer

Deserialize is a Deserializer without KeyContext (i.e. appid/namespace encoded in Keys will be returned). Useful for inline invocations like:

datastore.Deserialize.Time(...)

func (Deserializer) GeoPoint

func (d Deserializer) GeoPoint(buf cmpbin.ReadableBytesBuffer) (gp GeoPoint, err error)

GeoPoint reads a GeoPoint from the buffer.

func (Deserializer) IndexColumn

func (d Deserializer) IndexColumn(buf cmpbin.ReadableBytesBuffer) (c IndexColumn, err error)

IndexColumn reads an IndexColumn from the buffer.

func (Deserializer) IndexDefinition

func (d Deserializer) IndexDefinition(buf cmpbin.ReadableBytesBuffer) (i IndexDefinition, err error)

IndexDefinition reads an IndexDefinition from the buffer.

func (Deserializer) Key

func (d Deserializer) Key(buf cmpbin.ReadableBytesBuffer) (ret *Key, err error)

Key deserializes a key from the buffer. If the Deserializer's KeyContext is empty, the appid and namespace encoded in the buffer are used in the decoded Key. Otherwise, the encoded values are ignored and KeyContext supplies them.

func (Deserializer) KeyTok

func (d Deserializer) KeyTok(buf cmpbin.ReadableBytesBuffer) (ret KeyTok, err error)

KeyTok reads a KeyTok from the buffer. You usually want Key instead of this.

func (Deserializer) Property

func (d Deserializer) Property(buf cmpbin.ReadableBytesBuffer) (p Property, err error)

Property reads a Property from the buffer. The Deserializer's KeyContext behaves the same way it does for Key, but only has an effect if the decoded property has a Key value.

func (Deserializer) PropertyMap

PropertyMap reads a top-level PropertyMap from the buffer. The Deserializer's KeyContext behaves the same way it does for Key.

func (Deserializer) Time

Time reads a time.Time from the buffer.

type DroppedArgLookup

type DroppedArgLookup []idxPair

DroppedArgLookup is returned from using a DroppedArgTracker.

It can be used to recover the index from the original slice by providing the reduced slice index.

func (DroppedArgLookup) OriginalIndex

func (dal DroppedArgLookup) OriginalIndex(reducedIndex int) int

OriginalIndex maps from an index into the array(s) returned from MustDrop back to the corresponding index in the original arrays.

type DroppedArgTracker

type DroppedArgTracker []int

DroppedArgTracker is used to track dropping items from Keys as well as meta and/or PropertyMap arrays from one layer of the RawInterface to the next.

If you're not writing a datastore backend implementation (like "go.chromium.org/luci/gae/impl/*"), then you can ignore this type.

For example, say your GetMulti method was passed 4 arguments, but one of them was bad. DroppedArgTracker would allow you to "drop" the bad entry, and then synthesize new keys/meta/values arrays excluding the bad entry. You could then map from the new arrays back to the indexes of the original arrays.

This DroppedArgTracker will do no allocations if you don't end up dropping any arguments (so in the 'good' case, there are zero allocations).

Example:

  Say we're given a list of arguments which look like ("_" means a bad value
  that we drop):

      input: A B _ C D _ _ E
       Idxs: 0 1 2 3 4 5 6 7
    dropped:     2     5 6

    DropKeys(input): A B C D E
                     0 1 2 3 4

    OriginalIndex(0) -> 0
    OriginalIndex(1) -> 1
    OriginalIndex(2) -> 3
    OriginalIndex(3) -> 4
    OriginalIndex(4) -> 7

Methods on this type are NOT goroutine safe.
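The drop-and-recover flow from the example above can be sketched with plain strings standing in for *Key. The lowercase `droppedArgTracker` and the lookup slice are hypothetical stand-ins for DroppedArgTracker and DroppedArgLookup; the real implementation is more allocation-conscious, but the index mapping is the same:

```go
package main

import "fmt"

// droppedArgTracker records original indexes marked for removal.
type droppedArgTracker []int

func (dat *droppedArgTracker) markForRemoval(originalIndex int) {
	*dat = append(*dat, originalIndex)
}

// dropKeys returns the reduced slice plus a lookup mapping reduced
// indexes back to original ones (the role of OriginalIndex).
func (dat droppedArgTracker) dropKeys(keys []string) (reduced []string, lookup []int) {
	dropped := make(map[int]bool, len(dat))
	for _, i := range dat {
		dropped[i] = true
	}
	for i, k := range keys {
		if !dropped[i] {
			reduced = append(reduced, k)
			lookup = append(lookup, i)
		}
	}
	return
}

func main() {
	input := []string{"A", "B", "_", "C", "D", "_", "_", "E"}
	var dat droppedArgTracker
	for i, v := range input {
		if v == "_" { // a "bad" value we want to drop
			dat.markForRemoval(i)
		}
	}
	reduced, lookup := dat.dropKeys(input)
	fmt.Println(reduced) // [A B C D E]
	fmt.Println(lookup)  // [0 1 3 4 7]
}
```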

func (DroppedArgTracker) DropKeys

func (dat DroppedArgTracker) DropKeys(keys []*Key) ([]*Key, DroppedArgLookup)

DropKeys returns a compressed version of `keys`, dropping all elements which were marked with MarkForRemoval.

func (DroppedArgTracker) DropKeysAndMeta

func (dat DroppedArgTracker) DropKeysAndMeta(keys []*Key, meta MultiMetaGetter) ([]*Key, MultiMetaGetter, DroppedArgLookup)

DropKeysAndMeta returns a compressed version of `keys` and `meta`, dropping all elements which were marked with MarkForRemoval.

`keys` and `meta` must have the same lengths.

func (DroppedArgTracker) DropKeysAndVals

func (dat DroppedArgTracker) DropKeysAndVals(keys []*Key, vals []PropertyMap) ([]*Key, []PropertyMap, DroppedArgLookup)

DropKeysAndVals returns a compressed version of `keys` and `vals`, dropping all elements which were marked with MarkForRemoval.

`keys` and `vals` must have the same lengths.

func (*DroppedArgTracker) MarkForRemoval

func (dat *DroppedArgTracker) MarkForRemoval(originalIndex, N int)

MarkForRemoval tracks `originalIndex` for removal when `Drop*` methods are called.

N is a size hint for the maximum number of entries that `dat` could have. If `dat` has a capacity of < N, it will be allocated to N.

If called with N == len(args) and originalIndex is always increasing, then this will only do one allocation for the life of this DroppedArgTracker, and each MarkForRemoval will only cost a single slice append. If called out of order, or with a bad value of N, this will do more allocations and will do a binary search on each call.

func (*DroppedArgTracker) MarkNilKeys

func (dat *DroppedArgTracker) MarkNilKeys(keys []*Key)

MarkNilKeys is a helper method which calls MarkForRemoval for each nil key.

func (*DroppedArgTracker) MarkNilKeysMeta

func (dat *DroppedArgTracker) MarkNilKeysMeta(keys []*Key, meta MultiMetaGetter)

MarkNilKeysMeta is a helper method which calls MarkForRemoval for each nil key or meta.

func (*DroppedArgTracker) MarkNilKeysVals

func (dat *DroppedArgTracker) MarkNilKeysVals(keys []*Key, vals []PropertyMap)

MarkNilKeysVals is a helper method which calls MarkForRemoval for each nil key or value.

type Elementary

type Elementary interface {
	constraints.Integer | constraints.Float | ~bool | ~string | ~[]byte | time.Time | GeoPoint | *Key
}

Elementary is a type set with all "elementary" datastore types.

type ErrFieldMismatch

type ErrFieldMismatch struct {
	StructType reflect.Type
	FieldName  string
	Reason     string
}

ErrFieldMismatch is returned when a field is to be loaded into a different type than the one it was stored from, or when a field is missing or unexported in the destination struct. StructType is the type of the struct pointed to by the destination argument passed to Get or to Iterator.Next.

func (*ErrFieldMismatch) Error

func (e *ErrFieldMismatch) Error() string

type ExistsResult

type ExistsResult struct {
	// contains filtered or unexported fields
}

ExistsResult is a 2-dimensional boolean array that represents the existence of entries in the datastore. It is returned by the datastore Exists method. It is designed to accommodate the potentially-nested variadic arguments that can be passed to Exists.

The first dimension contains one entry for each Exists input index. If the argument is a single entry, the boolean value at this index will be true if that argument was present in the datastore and false otherwise. If the argument is a slice, it will contain an aggregate value that is true iff no values in that slice were missing from the datastore.

The second dimension presents a boolean slice for each input argument. Single arguments will have a slice of size 1 whose value corresponds to the first dimension value for that argument. Slice arguments have a slice of the same size. A given index in the second dimension slice is true iff the element at that index was present.
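The two-dimensional shape can be sketched with a local stand-in type (hypothetical names, not the package's source): one boolean slice per Exists argument, with single arguments occupying a slice of size 1.

```go
package main

// existsResult holds one boolean slice per Exists argument.
type existsResult struct {
	slices [][]bool
}

// get reports the first-dimension value for argument i: true iff no
// entry in that argument's slice was missing (as Get(i) does).
func (r *existsResult) get(i int) bool {
	for _, present := range r.slices[i] {
		if !present {
			return false
		}
	}
	return true
}

func main() {
	r := &existsResult{slices: [][]bool{
		{true},        // argument 0: a single entity, found
		{true, false}, // argument 1: a slice; its second entity is missing
	}}
	println(r.get(0), r.get(1)) // true false
}
```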

func Exists

func Exists(c context.Context, ent ...any) (*ExistsResult, error)

Exists tests if the supplied objects are present in the datastore.

ent must be one of:

  • *S, where S is a struct
  • *P, where *P is a concrete type implementing PropertyLoadSaver
  • []S or []*S, where S is a struct
  • []P or []*P, where *P is a concrete type implementing PropertyLoadSaver
  • []I, where I is some interface type. Each element of the slice must have either *S or *P as its underlying type.
  • *Key, to check a specific key from the datastore.
  • []*Key, to check a slice of keys from the datastore.

nil values (or interface-typed nils) are not allowed, either as standalone arguments or inside slices. Passing them will cause a panic.

If an error is encountered, the returned error value will depend on the input arguments. If one argument is supplied, the result will be the encountered error type. If multiple arguments are supplied, the result will be a MultiError whose error index corresponds to the argument in which the error was encountered.

If an ent argument is a slice, its error type will be a MultiError. Note that in the scenario where multiple slices are provided, this will return a MultiError containing a nested MultiError for each slice argument.

func (*ExistsResult) All

func (r *ExistsResult) All() bool

All returns true if all of the available boolean slots are true.

func (*ExistsResult) Any

func (r *ExistsResult) Any() bool

Any returns true if any of the boolean slots are true.

func (*ExistsResult) Get

func (r *ExistsResult) Get(i int, j ...int) bool

Get returns the boolean value at the specified index.

The one-argument form returns the first-dimension boolean. If i is a slice argument, this will be true iff all of the slice's booleans are true.

An optional second argument can be passed to access a specific boolean value in slice i. If the argument at i is a single argument, the only valid index, 0, will be the same as calling the single-argument Get.

Passing more than one additional argument will result in a panic.

func (*ExistsResult) Len

func (r *ExistsResult) Len(i ...int) int

Len returns the number of boolean results available.

The zero-argument form returns the first-dimension size, which will equal the total number of arguments passed to Exists.

The one-argument form returns the number of booleans in the slice for argument i.

Passing more than one argument will result in a panic.

func (*ExistsResult) List

func (r *ExistsResult) List(i ...int) BoolList

List returns the BoolList for the given argument index.

The zero-argument form returns the first-dimension boolean list.

An optional argument can be passed to access a specific argument's boolean slice. If the argument at i is a non-slice argument, the list will be a slice of size 1 containing i's first-dimension value.

Passing more than one argument will result in a panic.

type FinalizedQuery

type FinalizedQuery struct {
	// contains filtered or unexported fields
}

FinalizedQuery is the representation of a Query which has been normalized.

It contains only fully-specified, non-redundant, non-conflicting information pertaining to the Query to run. It can only represent a valid query.

func (*FinalizedQuery) Ancestor

func (q *FinalizedQuery) Ancestor() *Key

Ancestor returns the ancestor filter key, if any. This is a convenience function for getting the value from EqFilters()["__ancestor__"].

func (*FinalizedQuery) Bounds

func (q *FinalizedQuery) Bounds() (start, end Cursor)

Bounds returns the start and end Cursors. One or both may be nil. The Cursors returned are implementation-specific depending on the actual RawInterface implementation and the filters installed (if the filters interfere with Cursor production).

func (*FinalizedQuery) Distinct

func (q *FinalizedQuery) Distinct() bool

Distinct returns true iff this is a distinct projection query. It will never be true for non-projection queries.

func (*FinalizedQuery) EqFilters

func (q *FinalizedQuery) EqFilters() map[string]PropertySlice

EqFilters returns all the equality filters. The map key is the field name and the PropertySlice is the values that field should equal.

This includes a special equality filter on "__ancestor__". If "__ancestor__" is present in the result, it's guaranteed to have 1 value in the PropertySlice which is of type *Key.

func (*FinalizedQuery) EventuallyConsistent

func (q *FinalizedQuery) EventuallyConsistent() bool

EventuallyConsistent returns true iff this query will be eventually consistent. This is true when the query is a non-ancestor query, or when it's an ancestor query with the 'EventualConsistency(true)' option set.

func (*FinalizedQuery) GQL

func (q *FinalizedQuery) GQL() string

GQL returns a correctly formatted Cloud Datastore GQL expression which is equivalent to this query.

The flavor of GQL that this emits is defined here:

https://cloud.google.com/datastore/docs/apis/gql/gql_reference

NOTE: Cursors are omitted because there is currently no syntax for literal cursors.

NOTE: GeoPoint values are emitted with speculated future syntax. There is currently no syntax for literal GeoPoint values.

func (*FinalizedQuery) InFilters

func (q *FinalizedQuery) InFilters() map[string][]PropertySlice

InFilters returns all "in" equality filters. The map key is the field name and the value is a list of filters on that field's value that should be AND'ed together. Individual filters are represented by a non-empty PropertySlice with allowed values.

func (*FinalizedQuery) IneqFilterHigh

func (q *FinalizedQuery) IneqFilterHigh() (field, op string, val Property)

IneqFilterHigh returns the field name, operator and value for the high-side inequality filter. If the returned field name is "", it means that there's no upper inequality bound on this query.

If field is non-empty, op may have the values "<" or "<=".

func (*FinalizedQuery) IneqFilterLow

func (q *FinalizedQuery) IneqFilterLow() (field, op string, val Property)

IneqFilterLow returns the field name, operator and value for the low-side inequality filter. If the returned field name is "", it means that there's no lower inequality bound on this query.

If field is non-empty, op may have the values ">" or ">=".

func (*FinalizedQuery) IneqFilterProp

func (q *FinalizedQuery) IneqFilterProp() string

IneqFilterProp returns the inequality filter property name, if one is used for this filter. An empty return value means that this query does not contain any inequality filters.

func (*FinalizedQuery) KeysOnly

func (q *FinalizedQuery) KeysOnly() bool

KeysOnly returns true iff this query will only return keys (as opposed to a normal or projection query).

func (*FinalizedQuery) Kind

func (q *FinalizedQuery) Kind() string

Kind returns the datastore 'Kind' over which this query operates. It may be empty for a kindless query.

func (*FinalizedQuery) Limit

func (q *FinalizedQuery) Limit() (int32, bool)

Limit returns the maximum number of responses this query will retrieve, and a boolean indicating if the limit is set.

func (*FinalizedQuery) Offset

func (q *FinalizedQuery) Offset() (int32, bool)

Offset returns the number of responses this query will skip before returning data, and a boolean indicating if the offset is set.

func (*FinalizedQuery) Orders

func (q *FinalizedQuery) Orders() []IndexColumn

Orders returns the sort orders that this query will use, including all orders implied by the projections, and the implicit __key__ order at the end.

func (*FinalizedQuery) Original

func (q *FinalizedQuery) Original() *Query

Original returns the original Query object from which this FinalizedQuery was derived.

func (*FinalizedQuery) Project

func (q *FinalizedQuery) Project() []string

Project is the list of fields that this query projects on, or empty if this is not a projection query.

func (*FinalizedQuery) String

func (q *FinalizedQuery) String() string

func (*FinalizedQuery) Valid

func (q *FinalizedQuery) Valid(kc KeyContext) error

Valid returns true iff this FinalizedQuery is valid in the provided KeyContext's App ID and Namespace.

This checks the ancestor filter (if any), as well as the inequality filters if they filter on '__key__'.

In particular, it does NOT validate equality filters which happen to have values of type PTKey, nor does it validate inequality filters that happen to have values of type PTKey (but don't filter on the magic '__key__' field).

type GeoPoint

type GeoPoint struct {
	Lat, Lng float64
}

GeoPoint represents a location as latitude/longitude in degrees.

You probably shouldn't use these, but their inclusion here is so that the datastore service can interact (and round-trip) correctly with other datastore API implementations.

func (GeoPoint) Valid

func (g GeoPoint) Valid() bool

Valid returns whether a GeoPoint is within [-90, 90] latitude and [-180, 180] longitude.
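The documented range check, sketched with a local re-declaration of GeoPoint:

```go
package main

// GeoPoint re-declared locally to sketch the documented range check.
type GeoPoint struct {
	Lat, Lng float64
}

// Valid reports whether the point is within [-90, 90] latitude and
// [-180, 180] longitude.
func (g GeoPoint) Valid() bool {
	return -90 <= g.Lat && g.Lat <= 90 &&
		-180 <= g.Lng && g.Lng <= 180
}

func main() {
	println(GeoPoint{Lat: 47.6, Lng: -122.3}.Valid()) // within range
	println(GeoPoint{Lat: 91, Lng: 0}.Valid())        // latitude out of range
}
```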

type GetMultiCB

type GetMultiCB func(idx int, val PropertyMap, err error)

GetMultiCB is the callback signature provided to RawInterface.GetMulti

  • idx is the index of the entity, ranging from 0 through len-1.
  • val is the data of the entity. It may be nil if some of the keys passed to GetMulti were bad, since all keys are validated before the RPC occurs.
  • err is an error associated with this entity (e.g. ErrNoSuchEntity).

The callback is called once per element. It may be called concurrently, and may be called out of order. The "idx" variable describes which element is being processed. If any callbacks are invoked, exactly one callback will be invoked for each supplied element.

type IndexColumn

type IndexColumn struct {
	Property   string
	Descending bool
}

IndexColumn represents a sort order for a single entity field.

func ParseIndexColumn

func ParseIndexColumn(spec string) (IndexColumn, error)

ParseIndexColumn takes a spec in the form of /\s*-?\s*.+\s*/, and returns an IndexColumn. Examples are:

`- Field `:  IndexColumn{Property: "Field", Descending: true}
`Something`: IndexColumn{Property: "Something", Descending: false}

`+Field` is invalid. The empty string is invalid.
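The parsing rules can be sketched as follows. This is a local stand-in with hypothetical lowercase names; the real function returns the package's IndexColumn type.

```go
package main

import (
	"errors"
	"strings"
)

// indexColumn mirrors the documented struct shape.
type indexColumn struct {
	Property   string
	Descending bool
}

// parseIndexColumn sketches the documented rules: optional surrounding
// whitespace, an optional leading "-" meaning Descending, and rejection
// of "+Field" and the empty string.
func parseIndexColumn(spec string) (indexColumn, error) {
	spec = strings.TrimSpace(spec)
	ret := indexColumn{}
	if strings.HasPrefix(spec, "-") {
		ret.Descending = true
		spec = strings.TrimSpace(spec[1:])
	} else if strings.HasPrefix(spec, "+") {
		return ret, errors.New("invalid spec: unsupported '+' prefix")
	}
	if spec == "" {
		return ret, errors.New("invalid spec: empty property name")
	}
	ret.Property = spec
	return ret, nil
}

func main() {
	c, _ := parseIndexColumn("- Field")
	println(c.Property, c.Descending) // Field true
}
```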

func (IndexColumn) GQL

func (i IndexColumn) GQL() string

GQL returns a correctly formatted Cloud Datastore GQL literal which is valid for the `ORDER BY` clause.

The flavor of GQL that this emits is defined here:

https://cloud.google.com/datastore/docs/apis/gql/gql_reference

func (*IndexColumn) MarshalYAML

func (i *IndexColumn) MarshalYAML() (any, error)

MarshalYAML serializes an IndexColumn into an index.yml `property`.

func (IndexColumn) String

func (i IndexColumn) String() string

String returns a human-readable version of this IndexColumn which is compatible with ParseIndexColumn.

func (*IndexColumn) UnmarshalYAML

func (i *IndexColumn) UnmarshalYAML(unmarshal func(any) error) error

UnmarshalYAML deserializes an index.yml `property` into an IndexColumn.

type IndexDefinition

type IndexDefinition struct {
	Kind     string        `yaml:"kind"`
	Ancestor bool          `yaml:"ancestor"`
	SortBy   []IndexColumn `yaml:"properties"`
}

IndexDefinition holds the parsed definition of a datastore index.

func FindAndParseIndexYAML

func FindAndParseIndexYAML(path string) ([]*IndexDefinition, error)

FindAndParseIndexYAML walks up from the directory specified by path until it finds an `index.yaml` or `index.yml` file. If an index YAML file is found, it opens and parses the file, and returns all the indexes found. If path is a relative path, it is converted into an absolute path relative to the calling test file. To determine the path of the calling test file, FindAndParseIndexYAML walks up to a maximum of 100 call stack frames looking for a file ending with `_test.go`.

FindAndParseIndexYAML returns a non-nil error if the root of the drive is reached without finding an index YAML file, if there was an error reading the found index YAML file, or if the calling test file could not be located in the case of a relative path argument.

func ParseIndexYAML

func ParseIndexYAML(content io.Reader) ([]*IndexDefinition, error)

ParseIndexYAML parses the contents of an index YAML file into a list of IndexDefinitions.

func (*IndexDefinition) Builtin

func (id *IndexDefinition) Builtin() bool

Builtin returns true iff the IndexDefinition is one of the automatic built-in indexes.

func (*IndexDefinition) Compound

func (id *IndexDefinition) Compound() bool

Compound returns true iff this IndexDefinition is a valid compound index definition.

NOTE: !Builtin() does not imply Compound().

func (*IndexDefinition) Equal

func (id *IndexDefinition) Equal(o *IndexDefinition) bool

Equal returns true if the two IndexDefinitions are equivalent.

func (*IndexDefinition) Flip

func (id *IndexDefinition) Flip() *IndexDefinition

Flip returns an IndexDefinition with its SortBy field in reverse order.

func (*IndexDefinition) GetFullSortOrder

func (id *IndexDefinition) GetFullSortOrder() []IndexColumn

GetFullSortOrder gets the full sort order for this IndexDefinition, including an extra "__ancestor__" column at the front if this index has Ancestor set to true.

func (*IndexDefinition) Less

func (id *IndexDefinition) Less(o *IndexDefinition) bool

Less returns true iff id is ordered before o.

func (*IndexDefinition) MarshalYAML

func (id *IndexDefinition) MarshalYAML() (any, error)

MarshalYAML serializes an IndexDefinition into an index.yml `index`.

func (*IndexDefinition) Normalize

func (id *IndexDefinition) Normalize() *IndexDefinition

Normalize returns an IndexDefinition which has a normalized SortBy field.

This is just appending __key__ if it's not explicitly the last field in this IndexDefinition.

func (*IndexDefinition) PrepForIdxTable

func (id *IndexDefinition) PrepForIdxTable() *IndexDefinition

PrepForIdxTable normalizes and then flips the IndexDefinition.

func (*IndexDefinition) String

func (id *IndexDefinition) String() string

func (*IndexDefinition) YAMLString

func (id *IndexDefinition) YAMLString() (string, error)

YAMLString returns the YAML representation of this IndexDefinition.

If the index definition is Builtin() or not Compound(), this will return an error.

type IndexSetting

type IndexSetting bool

IndexSetting indicates whether or not a Property should be indexed by the datastore.

const (
	ShouldIndex IndexSetting = false
	NoIndex     IndexSetting = true
)

ShouldIndex is the default, which is why it must assume the zero value, even though it's weird :(.

func (IndexSetting) String

func (i IndexSetting) String() string

type Indexed

type Indexed struct{}

Indexed indicates to Optional or Nullable to produce indexed properties.

type IndexedProperties

type IndexedProperties map[string]IndexedPropertySlice

IndexedProperties maps from a property name to a set of its indexed values.

It includes the special values '__key__' and '__ancestor__'; the latter contains all of the ancestor entries for this key.

Map values are in some arbitrary order. If you need them sorted, call Sort explicitly.

func (IndexedProperties) Sort

func (sip IndexedProperties) Sort()

Sort sorts all values (useful in tests).

type IndexedPropertySlice

type IndexedPropertySlice [][]byte

IndexedPropertySlice is a set of properties serialized to their comparable index representations via Serializer.IndexedProperties(...).

Values are in some arbitrary order. If you need them sorted, call sort.Sort explicitly.

func (IndexedPropertySlice) Len

func (s IndexedPropertySlice) Len() int

func (IndexedPropertySlice) Less

func (s IndexedPropertySlice) Less(i, j int) bool

func (IndexedPropertySlice) Swap

func (s IndexedPropertySlice) Swap(i, j int)

type Indexing

type Indexing interface {
	// contains filtered or unexported methods
}

Indexing is implemented by Indexed and Unindexed.

type Key

type Key struct {
	// contains filtered or unexported fields
}

Key is the type used for all datastore operations.

func KeyForObj

func KeyForObj(c context.Context, src any) *Key

KeyForObj extracts a key from src.

It is the same as KeyForObjErr, except that if KeyForObjErr would have returned an error, this method panics. It's safe to use if you know that src statically meets the metadata constraints described by KeyForObjErr.

func KeyForObjErr

func KeyForObjErr(c context.Context, src any) (*Key, error)

KeyForObjErr extracts a key from src.

src must be one of:

  • *S, where S is a struct
  • a PropertyLoadSaver

It is expected that the struct exposes the following metadata (as retrieved by MetaGetter.GetMeta):

  • "key" (type: Key) - The full datastore key to use. Must not be nil. OR
  • "id" (type: int64 or string) - The id of the Key to create.
  • "kind" (optional, type: string) - The kind of the Key to create. If blank or not present, KeyForObjErr will extract the name of the src object's type.
  • "parent" (optional, type: Key) - The parent key to use.

By default, the metadata will be extracted from the struct and its tagged properties. However, if the struct implements MetaGetterSetter it is wholly responsible for exporting the required fields. A struct that implements GetMeta to make some minor tweaks can invoke the default behavior by using GetPLS(s).GetMeta.

If a required metadata item is missing or of the wrong type, then this will return an error.

func MakeKey

func MakeKey(c context.Context, elems ...any) *Key

MakeKey is a convenience method for manufacturing a *Key. It should only be used when elems... is known statically (e.g. in the code) to be correct.

elems is a series of (string, string|int|int32|int64) pairs, which correspond to Kind/id pairs. Example:

dstore.MakeKey("Parent", 1, "Child", "id")

Would create the key:

<current appID>:<current Namespace>:/Parent,1/Child,id

If elems is not parsable (e.g. wrong length, wrong types, etc.) this method will panic.

func NewIncompleteKeys

func NewIncompleteKeys(c context.Context, count int, kind string, parent *Key) (keys []*Key)

NewIncompleteKeys allocates count incomplete keys sharing the same kind and parent. It is useful as input to AllocateIDs.

func NewKey

func NewKey(c context.Context, kind, stringID string, intID int64, parent *Key) *Key

NewKey constructs a new key in the current appID/Namespace, using the specified parameters.

func NewKeyEncoded

func NewKeyEncoded(encoded string) (ret *Key, err error)

NewKeyEncoded decodes an encoded key string and returns the corresponding *Key.

func NewKeyToks

func NewKeyToks(c context.Context, toks []KeyTok) *Key

NewKeyToks constructs a new key in the current appID/Namespace, using the specified key tokens.

func (*Key) AppID

func (k *Key) AppID() string

AppID returns the application ID that this Key is for.

func (*Key) Encode

func (k *Key) Encode() string

Encode encodes the provided key as a base64-encoded protobuf.

This encoding is compatible with the SDK-provided encoding and is agnostic to the underlying implementation of the Key.

It's encoded with the urlsafe base64 table without padding.

func (*Key) Equal

func (k *Key) Equal(other *Key) bool

Equal returns true iff the two keys represent identical key values.

func (*Key) EstimateSize

func (k *Key) EstimateSize() int64

EstimateSize estimates the size of a Key.

It uses https://cloud.google.com/appengine/articles/storage_breakdown?csw=1 as a guide for these values.

func (*Key) GQL

func (k *Key) GQL() string

GQL returns a correctly formatted Cloud Datastore GQL key literal.

The flavor of GQL that this emits is defined here:

https://cloud.google.com/datastore/docs/apis/gql/gql_reference

func (*Key) GobDecode

func (k *Key) GobDecode(buf []byte) error

GobDecode allows the Key to be decoded in a Gob struct.

func (*Key) GobEncode

func (k *Key) GobEncode() ([]byte, error)

GobEncode allows the Key to be encoded in a Gob struct.

func (*Key) HasAncestor

func (k *Key) HasAncestor(other *Key) bool

HasAncestor returns true iff other is an ancestor of k (or if other == k).
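The ancestor relationship can be sketched with string path tokens standing in for KeyTok (the real method also requires matching appid and namespace):

```go
package main

// hasAncestor sketches the documented semantics: other is an ancestor
// of k iff other's token path is a prefix of k's (including other == k).
func hasAncestor(k, other []string) bool {
	if len(other) > len(k) {
		return false
	}
	for i, tok := range other {
		if k[i] != tok {
			return false
		}
	}
	return true
}

func main() {
	parent := []string{"Parent,1"}
	child := []string{"Parent,1", "Child,id"}
	println(hasAncestor(child, parent)) // parent is an ancestor of child
	println(hasAncestor(parent, child)) // but not the other way around
}
```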

func (*Key) Incomplete

func (k *Key) Incomplete() *Key

Incomplete returns an incomplete version of the key. The ID fields of the last token will be set to zero/empty.

func (*Key) IncompleteEqual

func (k *Key) IncompleteEqual(other *Key) (ret bool)

IncompleteEqual asserts that, were the two keys incomplete, they would be equal.

This asserts equality for the full lineage of the key, except for its last token ID.

func (*Key) IntID

func (k *Key) IntID() int64

IntID returns the IntID of the child KeyTok.

func (*Key) IsIncomplete

func (k *Key) IsIncomplete() bool

IsIncomplete returns true iff the last token of this Key doesn't define either a StringID or an IntID.

func (*Key) KeyContext

func (k *Key) KeyContext() *KeyContext

KeyContext returns the KeyContext that this Key is using.

func (*Key) Kind

func (k *Key) Kind() string

Kind returns the Kind of the child KeyTok.

func (*Key) LastTok

func (k *Key) LastTok() KeyTok

LastTok returns the last KeyTok in this Key. Non-nil Keys are always guaranteed to have at least one token.

func (*Key) Less

func (k *Key) Less(other *Key) bool

Less returns true iff k would sort before other.

func (*Key) MarshalJSON

func (k *Key) MarshalJSON() ([]byte, error)

MarshalJSON allows this key to be automatically marshaled by encoding/json.

func (*Key) Namespace

func (k *Key) Namespace() string

Namespace returns the namespace that this Key is for.

func (*Key) Parent

func (k *Key) Parent() *Key

Parent returns the parent Key of this *Key, or nil. The parent will always have the concrete type of *Key.

func (*Key) PartialValid

func (k *Key) PartialValid(kc KeyContext) bool

PartialValid returns true iff this key is suitable for use in a Put operation. This is the same as Valid(k, false, ...), but also allowing k to be IsIncomplete().

func (*Key) Root

func (k *Key) Root() *Key

Root returns the entity root for the given key.

func (*Key) Split

func (k *Key) Split() (appID, namespace string, toks []KeyTok)

Split componentizes the key into its pieces (AppID, Namespace, and tokens).

Each token represents one piece of the key's 'path'.

toks is guaranteed to be empty if and only if k is nil. If k is non-nil then it contains at least one token.

func (*Key) String

func (k *Key) String() string

String returns a human-readable representation of the key in the form of

AID:NS:/Kind,id/Kind,id/...

func (*Key) StringID

func (k *Key) StringID() string

StringID returns the StringID of the child KeyTok.

func (*Key) UnmarshalJSON

func (k *Key) UnmarshalJSON(buf []byte) error

UnmarshalJSON allows this key to be automatically unmarshaled by encoding/json.

func (*Key) Valid

func (k *Key) Valid(allowSpecial bool, kc KeyContext) bool

Valid determines if a key is valid, according to a couple of rules:

  • k is not nil
  • every token of k:
  • (if !allowSpecial) token's kind doesn't start with '__'
  • token's kind and appid are non-blank
  • token is not incomplete
  • all tokens have the same namespace and appid

func (*Key) WithID

func (k *Key) WithID(stringID string, intID int64) *Key

WithID returns the key generated by setting the ID of its last token to the specified value.

To generate this, k is reduced to its Incomplete form, then populated with a new ID. The resulting key will have the same token lineage as k (i.e., will be IncompleteEqual).

type KeyContext

type KeyContext struct {
	AppID     string
	Namespace string
}

KeyContext is the context in which a key is generated.

func GetKeyContext

func GetKeyContext(c context.Context) KeyContext

GetKeyContext returns the KeyContext whose AppID and Namespace match those installed in the supplied Context.

func MkKeyContext

func MkKeyContext(appID, namespace string) KeyContext

MkKeyContext is a helper function to create a new KeyContext.

It is preferable to field-based struct initialization because, as a function, it has the ability to enforce an exact number of parameters.

func (KeyContext) MakeKey

func (kc KeyContext) MakeKey(elems ...any) *Key

MakeKey is a convenience function for manufacturing a *Key. It should only be used when elems... is known statically (e.g. in the code) to be correct.

elems is pairs of (string, string|int|int32|int64) pairs, which correspond to Kind/id pairs. Example:

MkKeyContext("aid", "namespace").MakeKey("Parent", 1, "Child", "id")

Would create the key:

aid:namespace:/Parent,1/Child,id

If elems is not parsable (e.g. wrong length, wrong types, etc.) this method will panic.

See MakeKey for a version of this function which automatically provides aid and ns.

func (KeyContext) Matches

func (kc KeyContext) Matches(o KeyContext) bool

Matches returns true iff the AppID and Namespace parameters are the same for the two KeyContext instances.

func (KeyContext) NewKey

func (kc KeyContext) NewKey(kind, stringID string, intID int64, parent *Key) *Key

NewKey is a wrapper around NewToks which has an interface similar to NewKey in the SDK.

See NewKey for a version of this function which automatically provides aid and ns.

func (KeyContext) NewKeyFromMeta

func (kc KeyContext) NewKeyFromMeta(mgs MetaGetterSetter) (*Key, error)

NewKeyFromMeta constructs a key (potentially partial) based on meta fields.

Looks at `$key`, `$kind`, `$id`, `$parent`. Returns an error if necessary fields are missing.

func (KeyContext) NewKeyToks

func (kc KeyContext) NewKeyToks(toks []KeyTok) *Key

NewKeyToks creates a new Key. It is the Key implementation returned from the various PropertyMap serialization routines, as well as the native key implementation for the in-memory implementation of gae.

See NewKeyToks for a version of this function which automatically provides aid and ns.

type KeyTok

type KeyTok struct {
	Kind     string
	IntID    int64
	StringID string
}

KeyTok is a single token from a multi-part Key.

func (KeyTok) ID

func (k KeyTok) ID() Property

ID returns the 'active' id as a Property (either the StringID or the IntID).

func (KeyTok) IsIncomplete

func (k KeyTok) IsIncomplete() bool

IsIncomplete returns true iff this token doesn't define either a StringID or an IntID.

func (KeyTok) Less

func (k KeyTok) Less(other KeyTok) bool

Less returns true iff k would sort before other.

func (KeyTok) Special

func (k KeyTok) Special() bool

Special returns true iff this token's kind begins and ends with "__".

type MetaGetterSetter

type MetaGetterSetter interface {
	// GetMeta will get information about the field which has the struct tag in
	// the form of `gae:"$<key>[,<default>]?"`.
	//
	// It returns the value, if any, and true iff the value was retrieved.
	//
	// Supported metadata types are:
	//   int64  - may have default (ascii encoded base-10)
	//   string - may have default
	//   Toggle - MUST have default ("true" or "false")
	//   *Key    - NO default allowed
	//
	// Struct fields of type Toggle (which is an Auto/On/Off) require you to
	// specify a value of 'true' or 'false' for the default value of the struct
	// tag, and GetMeta will return the combined value as a regular boolean true
	// or false value.
	// Example:
	//   type MyStruct struct {
	//     CoolField int64 `gae:"$id,1"`
	//   }
	//   val, ok := helper.GetPLS(&MyStruct{}).GetMeta("id")
	//   // val == 1
	//   // ok == true
	//
	//   val, ok := helper.GetPLS(&MyStruct{10}).GetMeta("id")
	//   // val == 10
	//   // ok == true
	//
	//   type MyStruct struct {
	//     TFlag Toggle `gae:"$flag1,true"`  // defaults to true
	//     FFlag Toggle `gae:"$flag2,false"` // defaults to false
	//     // BadFlag  Toggle `gae:"$flag3"` // ILLEGAL
	//   }
	GetMeta(key string) (any, bool)

	// GetAllMeta returns a PropertyMap with all of the metadata in this
	// MetaGetterSetter. If a metadata field has an error during serialization,
	// it is skipped.
	//
	// If a *struct is implementing this, then it only needs to return the
	// metadata fields which would be returned by its GetMeta implementation, and
	// the `GetPLS` implementation will add any statically-defined metadata
	// fields. So if GetMeta provides $id, but there's a simple tagged field for
	// $kind, this method is only expected to return a PropertyMap with "$id".
	GetAllMeta() PropertyMap

	// SetMeta allows you to set the current value of the meta-keyed field.
	// It returns true iff the field was set.
	SetMeta(key string, val any) bool
}

MetaGetterSetter is the sister interface of PropertyLoadSaver, which pertains to getting and saving metadata.

Metadata are indicated on structs with the `gae:"$<keyname>[,default]"`. Typical examples include `$id`, `$kind`, `$parent` and `$key`, but some filters like dscache also use metadata to allow configurability for structs they interact with (such as `$dscache.enable` and `$dscache.expiration`).

A *struct may implement MetaGetterSetter to provide metadata directly, e.g. via computation over the contents of the struct (for example, generating "id" as a hash of various fields in the struct).

If you implement the MetaGetterSetter interface, your implementation must respond to ALL keys - there is no fallback to the default behavior. You can re-use the default behavior by doing the following. This example is for a struct whose ID is a pure function of the struct contents:

type MyStruct {  // $kind is derived from the public struct name
   // We want to optionally allow this struct to have a parent.
   OptionalParent *datastore.Key `gae:"$parent"`

   // other fields
}

func (s *MyStruct) GetMeta(key string) (any, bool) {
  // If the gae/datastore library is asking for the id, calculate it.
  if key == "id" {
    return s.calculateID(), true
  }
  // Otherwise use the datastore library default processing to read
  // tagged struct fields (e.g. $kind, $parent) using the normal
  // algorithm.
  return datastore.GetPLS(s).GetMeta(key)
}

func (s *MyStruct) GetAllMeta() datastore.PropertyMap {
  // Note that GetAllMeta, confusingly, only needs to return a map containing
  // the extra values returned by GetMeta.
  ret := datastore.PropertyMap{}
  ret.SetMeta("id", s.calculateID())
  return ret
}

func (s *MyStruct) SetMeta(key string, value any) bool {
  // We don't need to do any custom assignment, so just fallthrough to the
  // default.
  return datastore.GetPLS(s).SetMeta(key, value)
}

Note: Because of the nature of the underlying datastore library, `$id` can be SET even during Put operations. This will happen when gae calculates an incomplete key for an entity - when this happens, it means that the underlying datastore service will compute and attach a generated id (int64) to the entity it saves, and then return this back to the user.

type MultiMetaGetter

type MultiMetaGetter []MetaGetterSetter

MultiMetaGetter is a carrier for metadata, used with RawInterface.GetMulti

It's OK to default-construct this. GetMeta will just return (nil, false) for every index.

func NewMultiMetaGetter

func NewMultiMetaGetter(data []PropertyMap) MultiMetaGetter

NewMultiMetaGetter returns a new MultiMetaGetter object. data may be nil.

func (MultiMetaGetter) GetMeta

func (m MultiMetaGetter) GetMeta(idx int, key string) (any, bool)

GetMeta is like MetaGetterSetter.GetMeta, but it also takes an index indicating which slot you want metadata for. If idx isn't there, this returns (nil, false).

func (MultiMetaGetter) GetSingle

func (m MultiMetaGetter) GetSingle(idx int) MetaGetterSetter

GetSingle gets a single MetaGetter at the specified index.

type NewKeyCB

type NewKeyCB func(idx int, key *Key, err error)

NewKeyCB is the callback signature provided to RawInterface.PutMulti and RawInterface.AllocateIDs. It is invoked once for each positional key that was generated as the result of a call.

  • idx is the index of the entity, ranging from 0 through len-1.
  • key is the new key for the entity (if the original was incomplete)
  • It may be nil if some of the keys/vals to the PutMulti were bad, since all keys are validated before the RPC occurs!
  • err is an error associated with putting this entity.

The callback is called once per element. It may be called concurrently, and may be called out of order. The "idx" variable describes which element is being processed. If any callbacks are invoked, exactly one callback will be invoked for each supplied element.

type Nullable

type Nullable[T Elementary, I Indexing] struct {
	// contains filtered or unexported fields
}

Nullable is almost the same as Optional, except absent properties are stored as PTNull (instead of being skipped), which means absence of a property can be filtered on in queries.

Note that unindexed nullables are represented by unindexed PTNull in the datastore. APIs that work on a PropertyMap level can distinguish such properties from unindexed optionals (they will see the key in the property map). But when using the default PropertyLoadSaver, unindexed nullables and unindexed optionals are indistinguishable.

func NewIndexedNullable

func NewIndexedNullable[T Elementary](val T) Nullable[T, Indexed]

NewIndexedNullable creates a new, already set, indexed nullable.

To get an unset nullable, just use the zero of Nullable[T, I].

func NewUnindexedNullable

func NewUnindexedNullable[T Elementary](val T) Nullable[T, Unindexed]

NewUnindexedNullable creates a new, already set, unindexed nullable.

To get an unset nullable, just use the zero of Nullable[T, I].

func (*Nullable[T, I]) FromProperty

func (o *Nullable[T, I]) FromProperty(prop Property) error

FromProperty implements PropertyConverter.

func (Nullable[T, I]) Get

func (o Nullable[T, I]) Get() T

Get returns the value stored inside or a zero T if the value is unset.

func (Nullable[T, I]) IsSet

func (o Nullable[T, I]) IsSet() bool

IsSet returns true if the value is set.

func (*Nullable[T, I]) Set

func (o *Nullable[T, I]) Set(val T)

Set stores the value and marks the nullable as set.

func (*Nullable[T, I]) ToProperty

func (o *Nullable[T, I]) ToProperty() (prop Property, err error)

ToProperty implements PropertyConverter.

func (*Nullable[T, I]) Unset

func (o *Nullable[T, I]) Unset()

Unset flips the nullable into the unset state and clears the value to zero.

type Optional

type Optional[T Elementary, I Indexing] struct {
	// contains filtered or unexported fields
}

Optional wraps an elementary property type, adding "is set" flag to it.

A pointer to Optional[T, I] implements PropertyConverter, allowing values of Optional[T, I] to appear as fields in structs representing entities.

This is useful for rare cases when it is necessary to distinguish a zero value of T from an absent value. For example, a zero integer property ends up in indices, but an absent property doesn't.

A zero value of Optional[T, I] represents an unset property. Setting a value via Set(...) (even if this is a zero value of T) marks the property as set.

Unset properties are not stored into the datastore at all and they are totally invisible to all queries. Conversely, when an entity is being loaded, its Optional[T, I] fields that don't match any loaded properties will remain in unset state. Additionally, PTNull properties are treated as unset as well.

To store unset properties as PTNull, use Nullable[T, I] instead. The primary benefit is the ability to filter queries by null, i.e. absence of a property.

Type parameter I controls if the stored properties should be indexed or not. It should either be Indexed or Unindexed.

func NewIndexedOptional

func NewIndexedOptional[T Elementary](val T) Optional[T, Indexed]

NewIndexedOptional creates a new, already set, indexed optional.

To get an unset optional, just use the zero of Optional[T, I].

func NewUnindexedOptional

func NewUnindexedOptional[T Elementary](val T) Optional[T, Unindexed]

NewUnindexedOptional creates a new, already set, unindexed optional.

To get an unset optional, just use the zero of Optional[T, I].

func (*Optional[T, I]) FromProperty

func (o *Optional[T, I]) FromProperty(prop Property) error

FromProperty implements PropertyConverter.

func (Optional[T, I]) Get

func (o Optional[T, I]) Get() T

Get returns the value stored inside or a zero T if the value is unset.

func (Optional[T, I]) IsSet

func (o Optional[T, I]) IsSet() bool

IsSet returns true if the value is set.

func (*Optional[T, I]) Set

func (o *Optional[T, I]) Set(val T)

Set stores the value and marks the optional as set.

func (*Optional[T, I]) ToProperty

func (o *Optional[T, I]) ToProperty() (prop Property, err error)

ToProperty implements PropertyConverter.

func (*Optional[T, I]) Unset

func (o *Optional[T, I]) Unset()

Unset flips the optional into the unset state and clears the value to zero.

type Property

type Property struct {
	// contains filtered or unexported fields
}

Property is a value plus an indicator of whether the value should be indexed. Name and Multiple are stored in the PropertyMap object.

func MkProperty

func MkProperty(val any) Property

MkProperty makes a new indexed* Property and returns it. If val is an invalid value, this panics (so don't do it). If you want to handle the error normally, use SetValue(..., ShouldIndex) instead.

*indexed if val is not an unindexable type like []byte.

func MkPropertyNI

func MkPropertyNI(val any) Property

MkPropertyNI makes a new Property (with noindex set to true), and returns it. If val is an invalid value, this panics (so don't do it). If you want to handle the error normally, use SetValue(..., NoIndex) instead.

func (Property) Clone

func (p Property) Clone() PropertyData

Clone implements the PropertyData interface.

func (*Property) Compare

func (p *Property) Compare(other *Property) int

Compare compares this Property to another, returning a trinary value indicating where it would sort relative to the other in datastore.

It returns:

<0 if the Property would sort before `other`.
>0 if the Property would sort after `other`.
0 if the Property equals `other`.

This uses datastore's index rules for sorting (see IndexTypeAndValue). Panics if either of property types is non-Comparable(). Check for this in advance.

func (*Property) Equal

func (p *Property) Equal(other *Property) bool

Equal returns true iff p and other have identical index representations.

This uses datastore's index rules for sorting (see IndexTypeAndValue). Panics if either of property types is non-Comparable(). Check for this in advance.

func (*Property) EstimateSize

func (p *Property) EstimateSize() int64

EstimateSize estimates the amount of space that this Property would consume if it were committed as part of an entity in the real production datastore.

It uses https://cloud.google.com/appengine/articles/storage_breakdown?csw=1 as a guide for these values.

func (*Property) GQL

func (p *Property) GQL() string

GQL returns a correctly formatted Cloud Datastore GQL literal which is valid for a comparison value in the `WHERE` clause.

The flavor of GQL that this emits is defined here:

https://cloud.google.com/datastore/docs/apis/gql/gql_reference

NOTE: GeoPoint values are emitted with speculated future syntax. There is currently no syntax for literal GeoPoint values.

func (*Property) IndexSetting

func (p *Property) IndexSetting() IndexSetting

IndexSetting says whether or not the datastore should create indices for this value.

func (Property) IndexTypeAndValue

func (p Property) IndexTypeAndValue() (PropertyType, any)

IndexTypeAndValue returns the type and value of the Property as it would show up in a datastore index.

This is used to operate on the Property as it would be stored in a datastore index, specifically for serialization and comparison.

Panics if the property has a non-Comparable() type. Check for this in advance.

The returned type will be the PropertyType used in the index, in particular:

  • PTNull
  • PTInt
  • PTBool
  • PTFloat
  • PTGeoPoint
  • PTKey
  • PTString

The returned value will be one of:

  • nil
  • bool
  • int64
  • float64
  • string
  • []byte
  • GeoPoint
  • *Key

func (Property) IsZero

func (p Property) IsZero() bool

IsZero implements the PropertyData interface.

func (*Property) Less

func (p *Property) Less(other *Property) bool

Less returns true iff p would sort before other.

This uses datastore's index rules for sorting (see IndexTypeAndValue). Panics if either of property types is non-Comparable(). Check for this in advance.

func (*Property) Project

func (p *Property) Project(to PropertyType) (any, error)

Project can be used to project a Property retrieved from a Projection query into a different datatype. For example, if you have a PTInt property, you could Project(PTTime) to convert it to a time.Time. The following conversions are supported:

PTString <-> PTBlobKey
PTString <-> PTBytes
PTXXX <-> PTXXX (i.e. identity)
PTInt <-> PTTime
PTNull <-> Anything

func (*Property) SetValue

func (p *Property) SetValue(value any, is IndexSetting) (err error)

SetValue sets the Value field of a Property, and ensures that its value conforms to the permissible types. That way, you're guaranteed that if you have a Property, its value is valid.

value is the property value. The valid types are:

  • int64
  • time.Time
  • bool
  • string (only the first 1500 bytes are indexable)
  • []byte (only the first 1500 bytes are indexable)
  • blobstore.Key (only the first 1500 bytes are indexable)
  • float64
  • *Key
  • GeoPoint
  • PropertyMap

This set is smaller than the set of valid struct field types that the datastore can load and save. A Property Value cannot be a slice (apart from []byte); use multiple Properties instead. Also, a Value's type must be explicitly on the list above; it is not sufficient for the underlying type to be on that list. For example, a Value of "type myInt64 int64" is invalid. Smaller-width integers and floats are also invalid. Again, this is more restrictive than the set of valid struct field types.

A value may also be the nil interface value; this is equivalent to Python's None but not directly representable by a Go struct. Loading a nil-valued property into a struct will set that field to the zero value.

Values represented by references to mutable memory (such as []byte, *Key and PropertyMap) are stored as references: the referenced memory is *not* copied.

func (Property) Slice

func (p Property) Slice() PropertySlice

Slice implements the PropertyData interface.

func (Property) String

func (p Property) String() string

func (*Property) Type

func (p *Property) Type() PropertyType

Type is the PT* type of the data contained in Value().

func (*Property) Value

func (p *Property) Value() any

Value returns the current value held by this property. It's guaranteed to be a valid value type (i.e. `p.SetValue(p.Value(), ShouldIndex)` will never return an error).

type PropertyConverter

type PropertyConverter interface {
	ToProperty() (Property, error)
	FromProperty(Property) error
}

PropertyConverter may be implemented by the pointer-to a struct field which is serialized by the struct PropertyLoadSaver from GetPLS. Its ToProperty will be called on save, and its FromProperty will be called on load (from datastore). The method may do arbitrary computation, and if it encounters an error, may return it. If ToProperty returns an error that is or wraps ErrSkipProperty, then the property will silently be omitted. Other errors will be treated as fatal struct conversion errors (as defined by PropertyLoadSaver).

Example:

type Complex complex128
func (c *Complex) ToProperty() (ret Property, err error) {
  // something like:
  err = ret.SetValue(fmt.Sprint(*c), ShouldIndex)
  return
}
func (c *Complex) FromProperty(p Property) (err error) {
  ... load *c from p ...
}

type MyStruct struct {
  Complexity []Complex // acts like []complex, but can be serialized to DS
}

type PropertyData

type PropertyData interface {

	// Slice returns a PropertySlice representation of this PropertyData.
	//
	// The returned PropertySlice is a clone of the original data. Consequently,
	// Property-modifying methods such as SetValue should NOT be
	// called on the results.
	Slice() PropertySlice

	// Clone creates a duplicate copy of this PropertyData.
	Clone() PropertyData

	// IsZero returns true if the property has the zero value or is an empty PropertySlice.
	IsZero() bool
	// contains filtered or unexported methods
}

PropertyData is an interface implemented by Property and PropertySlice to identify themselves as valid PropertyMap values.

type PropertyLoadSaver

type PropertyLoadSaver interface {
	// Load takes the values from the given map and attempts to save them into
	// the underlying object (usually a struct or a PropertyMap). If a fatal
	// error occurs, it's returned via error. If non-fatal conversion errors
	// occur, error will be a MultiError containing one or more ErrFieldMismatch
	// objects.
	Load(PropertyMap) error

	// Save returns the current property as a PropertyMap. if withMeta is true,
	// then the PropertyMap contains all the metadata (e.g. '$meta' fields)
	// which was held by this PropertyLoadSaver.
	Save(withMeta bool) (PropertyMap, error)
}

PropertyLoadSaver may be implemented by a user type, and Interface will use this interface to serialize the type instead of trying to automatically create a serialization codec for it with helper.GetPLS.

type PropertyMap

type PropertyMap map[string]PropertyData

PropertyMap represents the contents of a datastore entity in a generic way. It maps from property name to a list of property values which correspond to that property name. It is the spiritual successor to PropertyList from the original SDK.

PropertyMap may contain "meta" values, which are keyed with a '$' prefix. Technically the datastore allows arbitrary property names, but all of the SDKs go out of their way to try to make all property names valid programming language tokens. Meta values must correspond to exactly one Property: zero Properties are equivalent to unset, and more than one is an error. So:

{
  "$id": {MkProperty(1)}, // GetProperty("id") -> 1, nil
  "$foo": {}, // GetProperty("foo") -> nil, ErrMetaFieldUnset
  // GetProperty("bar") -> nil, ErrMetaFieldUnset
  "$meep": {
    MkProperty("hi"),
    MkProperty("there")}, // GetProperty("meep") -> nil, error!
}

Additionally, Save returns a copy of the map with the meta keys omitted (e.g. these keys are not going to be serialized to the datastore).

func (PropertyMap) Clone

func (pm PropertyMap) Clone() PropertyMap

Clone returns a deep copy of this PropertyMap.

If `pm` is nil, returns nil as well.

func (PropertyMap) EstimateSize

func (pm PropertyMap) EstimateSize() int64

EstimateSize estimates the size that it would take to encode this PropertyMap in the production Appengine datastore. The calculation excludes metadata fields in the map.

It uses https://cloud.google.com/appengine/articles/storage_breakdown?csw=1 as a guide for sizes.

func (PropertyMap) GetAllMeta

func (pm PropertyMap) GetAllMeta() PropertyMap

GetAllMeta implements PropertyLoadSaver.GetAllMeta.

func (PropertyMap) GetMeta

func (pm PropertyMap) GetMeta(key string) (any, bool)

GetMeta implements PropertyLoadSaver.GetMeta, and returns the current value associated with the metadata key.

func (PropertyMap) Load

func (pm PropertyMap) Load(props PropertyMap) error

Load implements PropertyLoadSaver.Load

func (PropertyMap) Problem

func (pm PropertyMap) Problem() error

Problem implements PropertyLoadSaver.Problem. It ALWAYS returns nil.

func (PropertyMap) Save

func (pm PropertyMap) Save(withMeta bool) (PropertyMap, error)

Save implements PropertyLoadSaver.Save by returning a copy of the current map data.

func (PropertyMap) SetMeta

func (pm PropertyMap) SetMeta(key string, val any) bool

SetMeta implements PropertyLoadSaver.SetMeta. It will only return an error if `val` has an invalid type (e.g. not one supported by Property).

func (PropertyMap) Slice

func (pm PropertyMap) Slice(key string) PropertySlice

Slice returns a PropertySlice for the given key.

If the value associated with that key is nil, an empty slice will be returned. If the value is a single Property, a slice of size 1 with that Property in it will be returned.

func (PropertyMap) TurnOffIdx

func (pm PropertyMap) TurnOffIdx()

TurnOffIdx sets NoIndex for all properties in the map. This method modifies the map in-place.

type PropertySlice

type PropertySlice []Property

PropertySlice is a slice of Properties. It implements sort.Interface.

PropertySlice holds multiple Properties. Writing a PropertySlice to datastore implicitly marks the property as "multiple", even if it only has one element.

func (PropertySlice) Clone

func (s PropertySlice) Clone() PropertyData

Clone implements the PropertyData interface.

func (PropertySlice) IsZero

func (s PropertySlice) IsZero() bool

IsZero implements the PropertyData interface.

func (PropertySlice) Len

func (s PropertySlice) Len() int

func (PropertySlice) Less

func (s PropertySlice) Less(i, j int) bool

func (PropertySlice) Slice

func (s PropertySlice) Slice() PropertySlice

Slice implements the PropertyData interface.

func (PropertySlice) Swap

func (s PropertySlice) Swap(i, j int)

type PropertyType

type PropertyType byte

PropertyType is a single-byte representation of the type of data contained in a Property. The specific values of this type information are chosen so that the types sort according to the order of types as sorted by the datastore.

Note that indexes may only contain values of the following types:

PTNull
PTInt
PTBool
PTFloat
PTString
PTGeoPoint
PTKey

The biggest impact of this is that if you do a Projection query, you'll only get back Properties with the above types (e.g. if you store a PTTime value, then Project on it, you'll get back a PTInt value). For convenience, Property has a Project(PropertyType) method which will side-cast to your intended type. If you project into a structure with the high-level Interface implementation, or use StructPLS, this conversion will be done for you automatically, using the type of the destination field to cast.

const (
	// PTNull represents the 'nil' value. This is only directly visible when
	// reading/writing a PropertyMap. If a PTNull value is loaded into a struct
	// field, the field will be initialized with its zero value. If a zero-valued
	// field is saved from a struct, it will still retain the field's type, not
	// the 'nil' type. This is in contrast to other GAE languages such as
	// Python, where 'None' is a value distinct from the 'zero' value (e.g. a
	// StringProperty can have the value "" OR None).
	//
	// PTNull is a Projection-query type
	PTNull PropertyType = iota

	// PTInt is always an int64.
	//
	// This is a Projection-query type, and may be projected to PTTime.
	PTInt
	PTTime

	// PTBool represents true or false
	//
	// This is a Projection-query type.
	PTBool

	// PTBytes represents []byte
	PTBytes

	// PTString is used to represent all strings (text).
	//
	// PTString is a Projection-query type and may be projected to PTBytes or
	// PTBlobKey.
	PTString

	// PTFloat is always a float64.
	//
	// This is a Projection-query type.
	PTFloat

	// PTGeoPoint is a Projection-query type.
	PTGeoPoint

	// PTKey represents a *Key object.
	//
	// PTKey is a Projection-query type.
	PTKey

	// PTBlobKey represents a blobstore.Key
	PTBlobKey

	// PTPropertyMap represents a PropertyMap object.
	//
	// This is typically used to represent GAE *datastore.Entity objects.
	PTPropertyMap

	// PTUnknown is a placeholder value which should never show up in reality.
	//
	// NOTE: THIS MUST BE LAST VALUE FOR THE init() ASSERTION BELOW TO WORK.
	PTUnknown
)

These constants are in the order described by

https://cloud.google.com/appengine/docs/go/datastore/entities#Go_Value_type_ordering

with a slight divergence for the Int/Time split.

NOTE: this enum can only occupy 7 bits, because we use the high bit to encode indexed/non-indexed, and we additionally require that all valid values and all INVERTED valid values must never equal 0xFF or 0x00. The reason for this constraint is that we must always be able to create a byte that sorts before and after it.

See "./serialize".WriteProperty and "impl/memory".increment for more info.

func PropertyTypeOf

func PropertyTypeOf(v any, checkValid bool) (PropertyType, error)

PropertyTypeOf returns the PT* type of the given Property-compatible value v. If checkValid is true, this method will also ensure that time.Time and GeoPoint have valid values.

func (PropertyType) Comparable

func (pt PropertyType) Comparable() bool

Comparable is true if properties of this type can be compared to one another.

Only comparable properties can be used in query filters. Non-comparable properties have no datastore indexes.

Comparison operations include: Compare, Less, Equal. They must not be called with properties of non-comparable types.

Additionally IndexTypeAndValue must not be called either, since non-comparable types have no index representation.

func (PropertyType) String

func (i PropertyType) String() string

type Query

type Query struct {
	// contains filtered or unexported fields
}

Query is a builder-object for building a datastore query. It may represent an invalid query, but the error will only be observable when you call Finalize.

Fields like "$id" are technically usable at the datastore level, but using them through the non-raw interface is likely a mistake.

For example, instead of:

datastore.NewQuery(...).Lte("$id", ...)

one should use:

datastore.NewQuery(...).Lte("__key__", ...)

func ApplyCursorString

func ApplyCursorString(ctx context.Context, queries []*Query, cursorToken string) ([]*Query, error)

ApplyCursorString applies the cursors represented by the string and returns the new list of queries. The cursor string should be generated from the cursor returned by RunMulti; this will not work on any other cursor. The queries must match the original list of queries that was used to generate the cursor. If the queries don't match, the behavior is undefined. The order of the queries is not important, as they will be sorted before use.

func ApplyCursors

func ApplyCursors(ctx context.Context, queries []*Query, cursor Cursor) ([]*Query, error)

ApplyCursors applies the cursors to the queries and returns the new list of queries. The cursor should be from RunMulti; this will not work on any other cursor. The queries should match the original list of queries that was used to generate the cursor. If the queries don't match, the behavior is undefined. The order of the queries is not important, as they will be sorted before use.

func NewQuery

func NewQuery(kind string) *Query

NewQuery returns a new Query for the given kind. kind may be empty to begin a kindless query.

func (*Query) Ancestor

func (q *Query) Ancestor(ancestor *Key) *Query

Ancestor sets the ancestor filter for this query.

If ancestor is nil, then this removes the Ancestor restriction from the query.

func (*Query) ClearFilters

func (q *Query) ClearFilters() *Query

ClearFilters clears all equality and inequality filters from the Query. It does not clear the Ancestor filter if one is defined.

func (*Query) ClearOrder

func (q *Query) ClearOrder() *Query

ClearOrder removes all orders from this Query.

func (*Query) ClearProject

func (q *Query) ClearProject() *Query

ClearProject removes all projected fields from this Query.

func (*Query) Distinct

func (q *Query) Distinct(on bool) *Query

Distinct makes a projection query only return distinct values. This has no effect on non-projection queries.

func (*Query) End

func (q *Query) End(c Cursor) *Query

End sets the ending cursor. The cursor is implementation-defined by the particular 'impl' you have installed.

func (*Query) Eq

func (q *Query) Eq(field string, values ...any) *Query

Eq adds one or more equality restrictions to the query.

Equality filters interact with multiply-defined properties by ensuring that the given field has /at least one/ value which is equal to the specified constraint.

So a query with `.Eq("thing", 1, 2)` will only return entities where the field "thing" is multiply defined and contains both a value of 1 and a value of 2. If the field is singular, such a check will never pass. To query for entities with a field matching any one of several values, use the `.In("thing", 1, 2)` filter instead.

`Eq("thing", 1).Eq("thing", 2)` and `.Eq("thing", 1, 2)` have identical meaning.
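The repeated-field semantics above can be modeled in a few lines of plain Go. This is an illustrative sketch of the matching rule, not the library's implementation; `matchesEq` is a hypothetical helper:

```go
package main

import "fmt"

// matchesEq models Eq(field, constraints...): every constraint value must
// appear somewhere among the field's (possibly repeated) values.
func matchesEq(fieldValues []int, constraints ...int) bool {
	for _, want := range constraints {
		found := false
		for _, have := range fieldValues {
			if have == want {
				found = true
				break
			}
		}
		if !found {
			return false
		}
	}
	return true
}

func main() {
	// A repeated field containing both 1 and 2 satisfies Eq("thing", 1, 2).
	fmt.Println(matchesEq([]int{1, 2, 7}, 1, 2)) // true
	// A singular field can never hold two distinct required values.
	fmt.Println(matchesEq([]int{1}, 1, 2)) // false
}
```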

func (*Query) EventualConsistency

func (q *Query) EventualConsistency(on bool) *Query

EventualConsistency changes the EventualConsistency setting for this query.

func (*Query) Finalize

func (q *Query) Finalize() (*FinalizedQuery, error)

Finalize converts this Query to a FinalizedQuery. If the Query has any inconsistencies or violates any of the query rules, that will be returned here.

func (*Query) FirestoreMode

func (q *Query) FirestoreMode(on bool) *Query

FirestoreMode sets the firestore mode. It removes internal checks for this Query which don't apply when using Firestore-in-Datastore mode.

In firestore mode all Datastore queries become strongly consistent by default, but still can be made eventually consistent via a call to EventualConsistency(true). In particular this is useful for aggregation queries like Count().

Note that firestore mode allows non-ancestor queries within a transaction.

func (*Query) GetFirestoreMode

func (q *Query) GetFirestoreMode() bool

GetFirestoreMode returns the firestore mode.

func (*Query) Gt

func (q *Query) Gt(field string, value any) *Query

Gt imposes a 'greater-than' inequality restriction on the Query.

Inequality filters interact with multiply-defined properties by ensuring that the given field has /exactly one/ value which matches /all/ of the inequality constraints.

So a query with `.Gt("thing", 5).Lt("thing", 10)` will only return entities where the field "thing" has a single value where `5 < val < 10`.
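The "exactly one value matches all constraints" rule can be sketched in plain Go (illustrative only; `matchesRange` is a hypothetical helper, not part of the library):

```go
package main

import "fmt"

// matchesRange models Gt(field, lo).Lt(field, hi) on a repeated field:
// some single value must satisfy ALL of the inequality constraints at once.
func matchesRange(fieldValues []int, lo, hi int) bool {
	for _, v := range fieldValues {
		if v > lo && v < hi {
			return true
		}
	}
	return false
}

func main() {
	// 7 satisfies 5 < v < 10, so the entity matches.
	fmt.Println(matchesRange([]int{3, 7}, 5, 10)) // true
	// 3 and 12 each satisfy one constraint, but no single value satisfies both.
	fmt.Println(matchesRange([]int{3, 12}, 5, 10)) // false
}
```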

func (*Query) Gte

func (q *Query) Gte(field string, value any) *Query

Gte imposes a 'greater-than-or-equal' inequality restriction on the Query.

Inequality filters interact with multiply-defined properties by ensuring that the given field has /exactly one/ value which matches /all/ of the inequality constraints.

So a query with `.Gt("thing", 5).Lt("thing", 10)` will only return entities where the field "thing" has a single value where `5 < val < 10`.

func (*Query) In

func (q *Query) In(field string, values ...any) *Query

In imposes a 'is-in-a-set' equality restriction on the Query.

Equality filters interact with multiply-defined properties by ensuring that the given field has /at least one/ value which is equal to the specified constraint. So a query with `.In("thing", 1, 2)` will return entities where at least one value of the field "thing" is either 1 or 2.

Multiple `In` filters on the same property are AND-ed together, e.g. `.In("thing", 1, 2).In("thing", 3, 4)` will return entities whose repeated "thing" field has a value equal to 1 or 2 AND another value equal to 3 or 4.
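The AND-ing of chained In filters can be demonstrated with a small stand-alone model (a sketch with a hypothetical `matchesIn` helper, not the library's code):

```go
package main

import "fmt"

// matchesIn models a single In(field, allowed...) filter: the repeated
// field needs at least one value in the allowed set. Chained In filters
// on the same field are AND-ed, so each must match independently.
func matchesIn(fieldValues []int, allowed ...int) bool {
	for _, v := range fieldValues {
		for _, a := range allowed {
			if v == a {
				return true
			}
		}
	}
	return false
}

func main() {
	thing := []int{2, 3}
	// .In("thing", 1, 2).In("thing", 3, 4): both filters must hold.
	fmt.Println(matchesIn(thing, 1, 2) && matchesIn(thing, 3, 4)) // true
}
```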

func (*Query) KeysOnly

func (q *Query) KeysOnly(on bool) *Query

KeysOnly makes this into a query which only returns keys (but doesn't fetch values). It's incompatible with projection queries.

func (*Query) Kind

func (q *Query) Kind(kind string) *Query

Kind alters the kind of this query.

func (*Query) Less

func (a *Query) Less(b *Query) bool

Less returns true if a < b. It is used for local sorting of lists of queries; there is nothing datastore-specific about it.

func (*Query) Limit

func (q *Query) Limit(limit int32) *Query

Limit sets the limit (max items to return) for this query. If limit < 0, this removes the limit from the query entirely.

func (*Query) Lt

func (q *Query) Lt(field string, value any) *Query

Lt imposes a 'less-than' inequality restriction on the Query.

Inequality filters interact with multiply-defined properties by ensuring that the given field has /exactly one/ value which matches /all/ of the inequality constraints.

So a query with `.Gt("thing", 5).Lt("thing", 10)` will only return entities where the field "thing" has a single value where `5 < val < 10`.

func (*Query) Lte

func (q *Query) Lte(field string, value any) *Query

Lte imposes a 'less-than-or-equal' inequality restriction on the Query.

Inequality filters interact with multiply-defined properties by ensuring that the given field has /exactly one/ value which matches /all/ of the inequality constraints.

So a query with `.Gt("thing", 5).Lt("thing", 10)` will only return entities where the field "thing" has a single value where `5 < val < 10`.

func (*Query) Offset

func (q *Query) Offset(offset int32) *Query

Offset sets the offset (number of items to skip) for this query. If offset < 0, this removes the offset from the query entirely.

func (*Query) Order

func (q *Query) Order(fieldNames ...string) *Query

Order sets one or more orders for this query.

func (*Query) Project

func (q *Query) Project(fieldNames ...string) *Query

Project lists one or more field names to project.

func (*Query) Start

func (q *Query) Start(c Cursor) *Query

Start sets a starting cursor. The cursor is implementation-defined by the particular 'impl' you have installed.

func (*Query) String

func (q *Query) String() string

type RawFactory

type RawFactory func(c context.Context) RawInterface

RawFactory is the function signature for factory methods compatible with SetRawFactory.

type RawFilter

type RawFilter func(context.Context, RawInterface) RawInterface

RawFilter is the function signature for a RawFilter implementation. It gets the current RDS implementation, and returns a new RDS implementation backed by the one passed in.
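The filter signature described here is the classic decorator pattern. A self-contained Go model of the idea, with hypothetical `Store`/`Filter` types standing in for RawInterface/RawFilter:

```go
package main

import "fmt"

// Store stands in for RawInterface; Filter mirrors RawFilter's shape:
// take the current implementation, return a new one backed by it.
type Store interface {
	Get(key string) string
}

type Filter func(Store) Store

// memStore is a trivial backing implementation.
type memStore map[string]string

func (m memStore) Get(key string) string { return m[key] }

// loggingStore wraps another Store, logging each call before delegating.
type loggingStore struct{ inner Store }

func (l loggingStore) Get(key string) string {
	fmt.Println("get:", key)
	return l.inner.Get(key)
}

func main() {
	var logging Filter = func(s Store) Store { return loggingStore{inner: s} }
	store := logging(memStore{"a": "1"})
	fmt.Println(store.Get("a")) // logs "get: a", then prints "1"
}
```

Each filter sees only the Store it wraps, so filters compose: applying several of them builds a chain of decorators in application order.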

type RawInterface

type RawInterface interface {
	// AllocateIDs allows you to allocate IDs from the datastore without putting
	// any data. The supplied keys must be PartialValid and share the same entity
	// type.
	//
	// If there's no error, the keys in the slice will be replaced with keys
	// containing integer IDs assigned to them.
	AllocateIDs(keys []*Key, cb NewKeyCB) error

	// RunInTransaction runs f in a transaction.
	//
	// opts may be nil.
	//
	// NOTE: Implementations and filters are guaranteed that:
	//   - f is not nil
	RunInTransaction(f func(c context.Context) error, opts *TransactionOptions) error

	// DecodeCursor converts a string returned by a Cursor into a Cursor instance.
	// It will return an error if the supplied string is not valid, or could not
	// be decoded by the implementation.
	DecodeCursor(s string) (Cursor, error)

	// Run executes the given query, and calls `cb` for each successfully
	// retrieved item.
	//
	// NOTE: Implementations and filters are guaranteed that:
	//   - query is not nil
	//   - cb is not nil
	Run(q *FinalizedQuery, cb RawRunCB) error

	// Count executes the given query and returns the number of entries which
	// match it.
	Count(q *FinalizedQuery) (int64, error)

	// GetMulti retrieves items from the datastore.
	//
	// If there was a server error, it will be returned directly. Otherwise,
	// callback will execute once per key/value pair, returning either the
	// operation result or individual error for each position. If the callback
	// receives an error, it will immediately forward that error and stop
	// subsequent callbacks.
	//
	// meta is used to propagate metadata from higher levels.
	//
	// NOTE: Implementations and filters are guaranteed that:
	//   - len(keys) > 0
	//   - all keys are Valid, !Incomplete, and in the current namespace
	//   - cb is not nil
	GetMulti(keys []*Key, meta MultiMetaGetter, cb GetMultiCB) error

	// PutMulti writes items to the datastore.
	//
	// If there was a server error, it will be returned directly. Otherwise,
	// callback will execute once per key/value pair, returning either the
	// operation result or individual error for each position. If the callback
	// receives an error, it will immediately forward that error and stop
	// subsequent callbacks.
	//
	// NOTE: Implementations and filters are guaranteed that:
	//   - len(keys) > 0
	//   - len(keys) == len(vals)
	//   - all keys are Valid and in the current namespace
	//   - cb is not nil
	PutMulti(keys []*Key, vals []PropertyMap, cb NewKeyCB) error

	// DeleteMulti removes items from the datastore.
	//
	// If there was a server error, it will be returned directly. Otherwise,
	// callback will execute once per key/value pair, returning either the
	// operation result or individual error for each position. If the callback
	// receives an error, it will immediately forward that error and stop
	// subsequent callbacks.
	//
	// NOTE: Implementations and filters are guaranteed that
	//   - len(keys) > 0
	//   - all keys are Valid, !Incomplete, and in the current namespace
	//   - none of the keys are 'special' (use a kind prefixed with '__')
	//   - cb is not nil
	DeleteMulti(keys []*Key, cb DeleteMultiCB) error

	// WithoutTransaction returns a derived Context without a transaction applied.
	// This may be called even when outside of a transaction, in which case the
	// input Context is a valid return value.
	WithoutTransaction() context.Context

	// CurrentTransaction returns a reference to the current Transaction, or nil
	// if the Context does not have a current Transaction.
	CurrentTransaction() Transaction

	// Constraints returns this implementation's constraints.
	Constraints() Constraints

	// GetTestable returns the Testable interface for the implementation, or nil
	// if there is none.
	GetTestable() Testable
}

RawInterface implements the datastore functionality without any of the fancy reflection stuff. This is so that Filters can avoid doing lots of redundant reflection work. See datastore.Interface for a more user-friendly interface.

func Raw

Raw gets the RawInterface implementation from context.

type RawRunCB

type RawRunCB func(key *Key, val PropertyMap, getCursor CursorCB) error

RawRunCB is the callback signature provided to RawInterface.Run

  • key is the Key of the entity
  • val is the data of the entity (or nil, if the query was keys-only)

Return nil to continue iterating through the query results, or an error to stop. If you return the error `Stop`, then Run will stop the query and return nil.
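The Stop convention is the familiar sentinel-error pattern; a minimal stand-alone sketch (with a hypothetical `run` helper and local `stop` sentinel, not the real Run and datastore.Stop):

```go
package main

import (
	"errors"
	"fmt"
)

// stop stands in for datastore.Stop: returning it from the callback ends
// iteration early, and the runner reports success rather than an error.
var stop = errors.New("stop iteration")

func run(items []int, cb func(int) error) error {
	for _, it := range items {
		if err := cb(it); err != nil {
			if errors.Is(err, stop) {
				return nil
			}
			return err
		}
	}
	return nil
}

func main() {
	var seen []int
	err := run([]int{1, 2, 3, 4}, func(v int) error {
		seen = append(seen, v)
		if v == 2 {
			return stop // stop after the second item
		}
		return nil
	})
	fmt.Println(seen, err) // [1 2] <nil>
}
```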

type Serializer

type Serializer struct {
	// If true, WithKeyContext controls whether bytes written with this Serializer
	// include the Key's appid and namespace.
	//
	// Frequently the appid and namespace of keys are known in advance and so
	// there's no reason to redundantly encode them.
	WithKeyContext bool
}

Serializer allows writing binary-encoded datastore types (like Properties, Keys, etc.)

See the `Serialize` and `SerializeKC` variables for common shortcuts.

var (
	// Serialize is a Serializer{WithKeyContext:false}, useful for inline
	// invocations like:
	//
	//   datastore.Serialize.Time(...)
	Serialize Serializer

	// SerializeKC is a Serializer{WithKeyContext:true}, useful for inline
	// invocations like:
	//
	//   datastore.SerializeKC.Key(...)
	SerializeKC = Serializer{true}
)

func (Serializer) GeoPoint

func (s Serializer) GeoPoint(buf cmpbin.WriteableBytesBuffer, gp GeoPoint) (err error)

GeoPoint writes a GeoPoint to the buffer.

func (Serializer) IndexColumn

func (s Serializer) IndexColumn(buf cmpbin.WriteableBytesBuffer, c IndexColumn) (err error)

IndexColumn writes an IndexColumn to the buffer.

func (Serializer) IndexDefinition

func (s Serializer) IndexDefinition(buf cmpbin.WriteableBytesBuffer, i IndexDefinition) (err error)

IndexDefinition writes an IndexDefinition to the buffer

func (Serializer) IndexedProperties

func (s Serializer) IndexedProperties(k *Key, pm PropertyMap) IndexedProperties

IndexedProperties turns a regular PropertyMap into an IndexedProperties.

Essentially all the []Property's become IndexedPropertySlice, using cmpbin and Serializer's encodings.

Keys are serialized without their context.

func (Serializer) IndexedPropertiesForIndicies

func (s Serializer) IndexedPropertiesForIndicies(k *Key, pm PropertyMap, idx []IndexColumn) (ret IndexedProperties)

IndexedPropertiesForIndicies is like IndexedProperties, but it returns only properties mentioned in the given indices as well as special '__key__' and '__ancestor__' properties.

It is just an optimization to avoid serializing indices that will never be checked.

func (Serializer) IndexedProperty

func (s Serializer) IndexedProperty(buf cmpbin.WriteableBytesBuffer, p Property) error

IndexedProperty writes a Property to the buffer as its native index type. `context` behaves the same way that it does for WriteKey, but only has an effect if `p` contains a Key as its IndexValue.

func (Serializer) Key

func (s Serializer) Key(buf cmpbin.WriteableBytesBuffer, k *Key) (err error)

Key encodes a key to the buffer. If context is WithContext, then this encoded value will include the appid and namespace of the key.

func (Serializer) KeyTok

func (s Serializer) KeyTok(buf cmpbin.WriteableBytesBuffer, tok KeyTok) (err error)

KeyTok writes a KeyTok to the buffer. You usually want Key instead of this.

func (Serializer) Property

Property writes a Property to the buffer. `context` behaves the same way that it does for WriteKey, but only has an effect if `p` contains a Key as its IndexValue.

func (Serializer) PropertyMap

func (s Serializer) PropertyMap(buf cmpbin.WriteableBytesBuffer, pm PropertyMap) error

PropertyMap writes an entire PropertyMap to the buffer. `context` behaves the same way that it does for WriteKey.

If WritePropertyMapDeterministic is true, then the rows will be sorted by property name before they're serialized to buf (mostly useful for testing, but also potentially useful if you need to make a hash of the property data).
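Determinism here just means sorting rows by property name before writing them; the idea in stand-alone Go (a sketch of the sorting step only, not the cmpbin encoding):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// serialize writes key=value rows. Go map iteration order is randomized,
// so sorting the keys first is what makes the output reproducible.
func serialize(pm map[string]string, deterministic bool) string {
	keys := make([]string, 0, len(pm))
	for k := range pm {
		keys = append(keys, k)
	}
	if deterministic {
		sort.Strings(keys)
	}
	var b strings.Builder
	for _, k := range keys {
		fmt.Fprintf(&b, "%s=%s;", k, pm[k])
	}
	return b.String()
}

func main() {
	pm := map[string]string{"b": "2", "a": "1"}
	fmt.Println(serialize(pm, true)) // always "a=1;b=2;"
}
```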

func (Serializer) Time

Time writes a time.Time to the buffer.

The supplied time is rounded via datastore.RoundTime and written as a microseconds-since-epoch integer to conform to datastore storage standards.

func (Serializer) ToBytes

func (s Serializer) ToBytes(i any) []byte

ToBytes serializes i to a byte slice, if it's one of the types supported by this library. If an error is encountered (e.g. `i` is not a supported type), this method panics.

func (Serializer) ToBytesErr

func (s Serializer) ToBytesErr(i any) (ret []byte, err error)

ToBytesErr serializes i to a byte slice, if it's one of the types supported by this library; otherwise it returns an error.

type Testable

type Testable interface {
	// AddIndexes adds the provided indexes.
	// Blocks all datastore access while the indexes are built.
	// Panics if any of the IndexDefinition objects are not Compound().
	AddIndexes(...*IndexDefinition)

	// TakeIndexSnapshot allows you to take a snapshot of the current index
	// tables, which can be used later with SetIndexSnapshot.
	TakeIndexSnapshot() TestingSnapshot

	// SetIndexSnapshot allows you to set the state of the current index tables.
	// Note that this would allow you to create 'non-linearities' in the
	// perceived index results (e.g. you could force the indexes to go back in time).
	//
	// SetIndexSnapshot takes a reference of the given TestingSnapshot. You're
	// still responsible for closing the snapshot after this call.
	SetIndexSnapshot(TestingSnapshot)

	// CatchupIndexes catches the index table up to the current state of the
	// datastore. This is equivalent to:
	//   idxSnap := TakeIndexSnapshot()
	//   SetIndexSnapshot(idxSnap)
	//
	// But depending on the implementation, it may be performed as a single
	// atomic operation.
	CatchupIndexes()

	// SetTransactionRetryCount sets how many times RunInTransaction will retry
	// the transaction body, pretending transaction conflicts happen. 0 (default)
	// means the commit succeeds on the first attempt (no retries).
	SetTransactionRetryCount(int)

	// Consistent controls the eventual consistency behavior of the testing
	// implementation. If it is called with true, then this datastore
	// implementation will be always-consistent, instead of eventually-consistent.
	//
	// By default the datastore is eventually consistent, and you must call
	// CatchupIndexes or use Take/SetIndexSnapshot to manipulate the index state.
	Consistent(always bool)

	// AutoIndex controls the index creation behavior. If it is set to true, then
	// any time the datastore encounters a missing index, it will silently create
	// one and allow the query to succeed. If it's false, then the query will
	// return an error describing the index which could be added with AddIndexes.
	//
	// By default this is false.
	AutoIndex(bool)

	// DisableSpecialEntities turns off maintenance of special __entity_group__
	// type entities. By default this maintenance is enabled, but it can be
	// disabled by calling this with true.
	//
	// If it's true:
	//   - AllocateIDs returns an error.
	//   - Put'ing incomplete Keys returns an error.
	//   - Transactions are disabled and will return an error.
	//
	// This is mainly only useful when using an embedded in-memory datastore as
	// a fully-consistent 'datastore-lite'. In particular, this is useful for the
	// txnBuf filter which uses it to fulfil queries in a buffered transaction,
	// but never wants the in-memory versions of these entities to bleed through
	// to the user code.
	DisableSpecialEntities(bool)

	// ShowSpecialProperties disables stripping of special properties added by
	// the datastore internally (like __scatter__) from result of Get calls.
	//
	// Normally such properties are used internally by the datastore or only for
	// queries. Returning them explicitly is useful for assertions in tests that
	// rely on queries over special properties.
	ShowSpecialProperties(bool)

	// SetConstraints sets this instance's constraints. If the supplied
	// constraints are invalid, an error will be returned.
	//
	// If c is nil, default constraints will be set.
	SetConstraints(c *Constraints) error
}

Testable is the testable interface for fake datastore implementations.

func GetTestable

func GetTestable(c context.Context) Testable

GetTestable returns the Testable interface for the implementation, or nil if there is none.

type TestingSnapshot

type TestingSnapshot interface {
	ImATestingSnapshot()
}

TestingSnapshot is an opaque implementation-defined snapshot type.

type Toggle

type Toggle byte

Toggle is a tri-state boolean (Auto/True/False), which allows structs to control boolean flags for metadata in a non-ambiguous way.

const (
	Auto Toggle = iota
	On
	Off
)

These are the allowed values for Toggle. Any other values are invalid.
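The point of a tri-state is that the zero value (Auto) is distinguishable from an explicit Off, which a plain bool cannot express. A self-contained sketch (the `Meta` struct is hypothetical, used only to show the zero-value behavior):

```go
package main

import "fmt"

type Toggle byte

const (
	Auto Toggle = iota
	On
	Off
)

func (t Toggle) String() string {
	switch t {
	case On:
		return "On"
	case Off:
		return "Off"
	default:
		return "Auto"
	}
}

// Meta is a hypothetical struct using Toggle for an indexing flag: an
// untouched field reads as Auto ("use the default"), not as false.
type Meta struct {
	Index Toggle
}

func main() {
	var m Meta
	fmt.Println(m.Index)                // Auto: the zero value means "unset"
	fmt.Println(Meta{Index: Off}.Index) // Off: explicitly disabled
}
```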

func (Toggle) String

func (i Toggle) String() string

type Transaction

type Transaction any

Transaction is a generic interface used to describe a Datastore transaction.

The nil Transaction represents no transaction context.

TODO: Add some functionality here. Ideas include:

  • Active() bool: is the transaction currently active?
  • AffectedGroups() []*ds.Key: list the groups that have been referenced in this Transaction so far.

func CurrentTransaction

func CurrentTransaction(c context.Context) Transaction

CurrentTransaction returns a reference to the current Transaction, or nil if the Context does not have a current Transaction.

type TransactionOptions

type TransactionOptions struct {
	// Attempts controls the number of retries to perform when commits fail
	// due to a conflicting transaction. If omitted, it defaults to 3.
	Attempts int
	// ReadOnly controls whether the transaction is a read only transaction.
	// Read only transactions are potentially more efficient.
	ReadOnly bool
}

TransactionOptions are the options for running a transaction.
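The Attempts semantics amount to a bounded retry loop on commit conflicts. A stand-alone sketch (the `runInTransaction` helper and `errConcurrent` sentinel are hypothetical stand-ins for RunInTransaction and ErrConcurrentTransaction):

```go
package main

import (
	"errors"
	"fmt"
)

// errConcurrent stands in for ErrConcurrentTransaction.
var errConcurrent = errors.New("concurrent transaction")

// runInTransaction retries f on conflict, up to attempts tries.
// attempts <= 0 picks the documented default of 3.
func runInTransaction(f func() error, attempts int) error {
	if attempts <= 0 {
		attempts = 3
	}
	var err error
	for i := 0; i < attempts; i++ {
		err = f()
		if !errors.Is(err, errConcurrent) {
			return err // success, or a non-retryable error
		}
	}
	return err // still conflicting after all attempts
}

func main() {
	calls := 0
	err := runInTransaction(func() error {
		calls++
		if calls < 3 {
			return errConcurrent // first two attempts conflict
		}
		return nil
	}, 0)
	fmt.Println(calls, err) // succeeds on the third (default-allowed) attempt
}
```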

type Unindexed

type Unindexed struct{}

Unindexed indicates to Optional or Nullable to produce unindexed properties.

Directories

Path Synopsis
Package dumper implements a very VERY dumb datastore-dumping debugging aid.
internal
protos/datastore
Package datastore contains protos used for compatibility with GAE datastore.
protos/multicursor
Package multicursor contains protos for Cursors
testprotos
Package testprotos contains protos for testing.
Package meta contains some methods for interacting with GAE's metadata APIs.
