snapcache

package
v3.2.7+incompatible
Published: Mar 6, 2019 License: Apache-2.0 Imports: 12 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Breadcrumb

type Breadcrumb struct {
	SequenceNumber uint64
	Timestamp      time.Time

	KVs        *ctrie.Ctrie
	Deltas     []syncproto.SerializedUpdate
	SyncStatus api.SyncStatus
	// contains filtered or unexported fields
}

func (*Breadcrumb) Next

func (b *Breadcrumb) Next(ctx context.Context) (*Breadcrumb, error)

Next returns the next Breadcrumb in the sequence, blocking until it is available if required.

type Cache

type Cache struct {
	// contains filtered or unexported fields
}

Cache consumes updates from the Syncer API and caches them in the form of a series of Breadcrumb objects. Each Breadcrumb (conceptually) contains a complete snapshot of the datastore at the revision at which it was created, as well as a list of deltas from the previous snapshot. A client that wants to keep in sync can get the current Breadcrumb, process the key-value pairs that it contains, and then walk forward through the linked list of Breadcrumb objects, processing only the deltas.

The Breadcrumb object provides a Next() method, which returns the next Breadcrumb in the sequence, blocking until it is available if required.

Keys and values are stored in their serialized form so that each request-handling thread has less work to do.
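For illustration, a client following the Breadcrumb trail might look roughly like the sketch below. The helper names processSnapshot and send are hypothetical, iteration over the Ctrie snapshot is elided (it depends on the ctrie API), and the imports of context, snapcache and syncproto are assumed.

// Hypothetical client loop: process the snapshot in the current Breadcrumb,
// then walk forward through the linked list, processing only the deltas.
func followBreadcrumbs(ctx context.Context, cache *snapcache.Cache, send func(syncproto.SerializedUpdate)) error {
	crumb := cache.CurrentBreadcrumb()

	// Process the complete snapshot held by the first Breadcrumb.  Iterating
	// over crumb.KVs needs the ctrie iterator API, which is elided here.
	processSnapshot(crumb.KVs)

	for {
		// Blocks until the next Breadcrumb has been published (or ctx is done).
		next, err := crumb.Next(ctx)
		if err != nil {
			return err
		}
		for _, upd := range next.Deltas {
			send(upd) // Each delta is already in serialized form.
		}
		crumb = next
	}
}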

Implementation

To avoid the overhead of taking a complete copy of the state for each Breadcrumb, we use a Ctrie, which supports efficient, concurrent-read-safe snapshots. The main thread of the Cache processes updates sequentially, updating the Ctrie. After processing a batch of updates, the main thread generates a new Breadcrumb object containing a read-only snapshot of the Ctrie along with the list of deltas.
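A rough sketch of that flow is shown below. The ctrie method names (Insert, Remove, ReadOnlySnapshot) follow the Workiva-style ctrie API that the KVs field suggests; the helpers isDeletion and keyOf, the Breadcrumb construction, and in-scope variables such as batch, liveKVs, prevCrumb and currentStatus are all illustrative assumptions rather than this package's internals.

// Illustrative only: apply one batch of updates to the live Ctrie, then
// publish a Breadcrumb wrapping a cheap, read-only snapshot of it.
for _, upd := range batch {
	if isDeletion(upd) {
		liveKVs.Remove(keyOf(upd)) // keyOf returns the serialized key ([]byte).
	} else {
		liveKVs.Insert(keyOf(upd), upd)
	}
}
newCrumb := &Breadcrumb{
	SequenceNumber: prevCrumb.SequenceNumber + 1,
	Timestamp:      time.Now(),
	KVs:            liveKVs.ReadOnlySnapshot(), // Safe for concurrent readers.
	Deltas:         batch,
	SyncStatus:     currentStatus,
}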

Each Breadcrumb object contains a pointer to the next Breadcrumb, which is filled in with an atomic write once it is available. This allows each client to follow the linked list of Breadcrumbs without blocking until it reaches the end of the list (i.e. until it has "caught up"). When it reaches the end of the list, the Next() method blocks on a global condition variable, which is Broadcast() by the main thread once the next snapshot is available.
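A minimal sketch of that pattern follows; the nextPtr field and the cond variable stand in for the package's unexported internals, and the imports of sync, sync/atomic and unsafe are assumed. (Presumably Config.WakeUpInterval is what lets a blocked Next() periodically re-check its context; that detail is omitted here.)

// Reader side: chase the linked list lock-free; only block once caught up.
func (b *Breadcrumb) nextBlocking(cond *sync.Cond) *Breadcrumb {
	// Fast path: the next Breadcrumb has already been published.
	if next := (*Breadcrumb)(atomic.LoadPointer(&b.nextPtr)); next != nil {
		return next
	}
	// Slow path: we've caught up; wait for the main thread's Broadcast().
	cond.L.Lock()
	defer cond.L.Unlock()
	for {
		if next := (*Breadcrumb)(atomic.LoadPointer(&b.nextPtr)); next != nil {
			return next
		}
		cond.Wait()
	}
}

// Writer side (main thread): store the pointer while holding the lock so a
// reader cannot check-then-Wait in between, then wake all blocked readers.
func publish(current, newCrumb *Breadcrumb, cond *sync.Cond) {
	cond.L.Lock()
	atomic.StorePointer(&current.nextPtr, unsafe.Pointer(newCrumb))
	cond.L.Unlock()
	cond.Broadcast()
}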

Why not use channels to fan out to the clients? I think it would be trickier to make robust and non-blocking: we'd need to keep a list of channels to send to (one per client); the book-keeping around adding to and removing from that list is a little fiddly, and we'd need to iterate over the list (which may be slow) to send the updates to each client. If any of the clients were blocked, we'd need to selectively skip their channels (otherwise one slow client would block all the others) and keep track of what we'd sent to each channel. All doable but, I think, more fiddly than using a non-blocking linked list and a condition variable and letting each client look after itself.

func New

func New(config Config) *Cache

func (*Cache) CurrentBreadcrumb

func (c *Cache) CurrentBreadcrumb() *Breadcrumb

CurrentBreadcrumb returns the current Breadcrumb, which contains a snapshot of the datastore at the time it was created and a method to wait for the next Breadcrumb to be dropped. It is safe to call from any goroutine.

func (*Cache) OnStatusUpdated

func (c *Cache) OnStatusUpdated(status api.SyncStatus)

OnStatusUpdated implements the SyncerCallbacks API. It shouldn't be called directly.

func (*Cache) OnUpdates

func (c *Cache) OnUpdates(updates []api.Update)

OnUpdates implements the SyncerCallbacks API. It shouldn't be called directly.

func (*Cache) Start

func (c *Cache) Start(ctx context.Context)

Start starts the cache's main loop in a background goroutine.

type Config

type Config struct {
	MaxBatchSize     int
	WakeUpInterval   time.Duration
	HealthAggregator healthAggregator
}

func (*Config) ApplyDefaults

func (config *Config) ApplyDefaults()
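
For illustration, wiring the cache up might look roughly like the sketch below. The field values are placeholders, whether New applies defaults itself is not documented (so the sketch calls ApplyDefaults explicitly), constructing the Syncer that feeds the cache belongs to a different package and is not shown, and imports of context, time and snapcache are assumed.

// Hypothetical wiring of the snapshot cache.
config := snapcache.Config{
	MaxBatchSize:   100,              // Placeholder values, not the defaults.
	WakeUpInterval: 10 * time.Second,
}
config.ApplyDefaults() // Fill in anything left at its zero value.

cache := snapcache.New(config)

ctx, cancel := context.WithCancel(context.Background())
defer cancel()
cache.Start(ctx) // Runs the cache's main loop in a background goroutine.

// Hand `cache` to a Syncer as its callbacks (it implements OnUpdates and
// OnStatusUpdated), then serve clients from cache.CurrentBreadcrumb().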
