package cache

Documentation

Overview

Package cache is a client-side caching mechanism. It is useful for reducing the number of server calls you'd otherwise need to make. Reflector watches a server and updates a Store. Two stores are provided; one that simply caches objects (for example, to allow a scheduler to list currently available nodes), and one that additionally acts as a FIFO queue (for example, to allow a scheduler to process incoming pods).

Example
// source simulates an apiserver object endpoint.
source := fcache.NewFakeControllerSource()

// This will hold the downstream state, as we know it.
downstream := NewStore(DeletionHandlingMetaNamespaceKeyFunc)

// This will hold incoming changes. Note how we pass downstream in as a
// KeyLister, that way resync operations will result in the correct set
// of update/delete deltas.
fifo := NewDeltaFIFO(MetaNamespaceKeyFunc, nil, downstream)

// Let's do threadsafe output to get predictable test results.
deletionCounter := make(chan string, 1000)

cfg := &Config{
	Queue:            fifo,
	ListerWatcher:    source,
	ObjectType:       &v1.Pod{},
	FullResyncPeriod: time.Millisecond * 100,
	RetryOnError:     false,

	// Let's implement a simple controller that just deletes
	// everything that comes in.
	Process: func(obj interface{}) error {
		// Obj is from the Pop method of the Queue we make above.
		newest := obj.(Deltas).Newest()

		if newest.Type != Deleted {
			// Update our downstream store.
			err := downstream.Add(newest.Object)
			if err != nil {
				return err
			}

			// Delete this object.
			source.Delete(newest.Object.(runtime.Object))
		} else {
			// Update our downstream store.
			err := downstream.Delete(newest.Object)
			if err != nil {
				return err
			}

			// fifo's KeyOf is easiest, because it handles
			// DeletedFinalStateUnknown markers.
			key, err := fifo.KeyOf(newest.Object)
			if err != nil {
				return err
			}

			// Report this deletion.
			deletionCounter <- key
		}
		return nil
	},
}

// Create the controller and run it until we close stop.
stop := make(chan struct{})
defer close(stop)
go New(cfg).Run(stop)

// Let's add a few objects to the source.
testIDs := []string{"a-hello", "b-controller", "c-framework"}
for _, name := range testIDs {
	// Note that these pods are not valid-- the fake source doesn't
	// call validation or anything.
	source.Add(&v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: name}})
}

// Let's wait for the controller to process the things we just added.
outputSet := sets.String{}
for i := 0; i < len(testIDs); i++ {
	outputSet.Insert(<-deletionCounter)
}

for _, key := range outputSet.List() {
	fmt.Println(key)
}
Output:

a-hello
b-controller
c-framework

Constants

const (
	NamespaceIndex string = "namespace"
)

Variables

var (
	// ErrZeroLengthDeltasObject is returned in a KeyError if a Deltas
	// object with zero length is encountered (should be impossible,
	// even if such an object is accidentally produced by a DeltaCompressor--
	// but included for completeness).
	ErrZeroLengthDeltasObject = errors.New("0 length Deltas object; can't get key")
)
var FIFOClosedError error = errors.New("DeltaFIFO: manipulating with closed queue")

Functions

func DeletionHandlingMetaNamespaceKeyFunc

func DeletionHandlingMetaNamespaceKeyFunc(obj interface{}) (string, error)

DeletionHandlingMetaNamespaceKeyFunc checks for DeletedFinalStateUnknown objects before calling MetaNamespaceKeyFunc.

func ListAll

func ListAll(store Store, selector labels.Selector, appendFn AppendFunc) error

func ListAllByNamespace

func ListAllByNamespace(indexer Indexer, namespace string, selector labels.Selector, appendFn AppendFunc) error

func ListWatchUntil

func ListWatchUntil(timeout time.Duration, lw ListerWatcher, conditions ...watch.ConditionFunc) (*watch.Event, error)

TODO: check for watch expired error and retry watch from latest point? Same issue exists for Until.

func MetaNamespaceIndexFunc

func MetaNamespaceIndexFunc(obj interface{}) ([]string, error)

MetaNamespaceIndexFunc is a default index function that indexes based on an object's namespace

func MetaNamespaceKeyFunc

func MetaNamespaceKeyFunc(obj interface{}) (string, error)

MetaNamespaceKeyFunc is a convenient default KeyFunc which knows how to make keys for API objects which implement meta.Interface. The key uses the format <namespace>/<name> unless <namespace> is empty, then it's just <name>.

TODO: replace key-as-string with a key-as-struct so that this packing/unpacking won't be necessary.
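
A minimal sketch of the key format (the pod below is hypothetical):

// Namespaced objects are keyed as <namespace>/<name>; cluster-scoped
// objects are keyed by <name> alone.
pod := &v1.Pod{ObjectMeta: metav1.ObjectMeta{Namespace: "default", Name: "my-pod"}}
key, err := MetaNamespaceKeyFunc(pod)
if err != nil {
	// handle the error
}
fmt.Println(key) // prints "default/my-pod"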

func NewIndexerInformer

func NewIndexerInformer(
	lw ListerWatcher,
	objType runtime.Object,
	resyncPeriod time.Duration,
	h ResourceEventHandler,
	indexers Indexers,
) (Indexer, Controller)

NewIndexerInformer returns an Indexer and a controller for populating the index while also providing event notifications. You should only use the returned Indexer for Get/List operations; Add/Modify/Delete will cause the event notifications to be faulty.

Parameters:

  • lw is list and watch functions for the source of the resource you want to be informed of.
  • objType is an object of the type that you expect to receive.
  • resyncPeriod: if non-zero, will re-list this often (you will get OnUpdate calls, even if nothing changed). Otherwise, re-list will be delayed as long as possible (until the upstream source closes the watch or times out, or you stop the controller).
  • h is the object you want notifications sent to.
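
A minimal sketch that wires these parameters together, reusing the fake source from the package example (the handler below only prints added keys):

source := fcache.NewFakeControllerSource()

indexer, controller := NewIndexerInformer(
	source,
	&v1.Pod{},
	time.Minute,
	ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if key, err := MetaNamespaceKeyFunc(obj); err == nil {
				fmt.Println("added:", key)
			}
		},
	},
	Indexers{NamespaceIndex: MetaNamespaceIndexFunc},
)

stop := make(chan struct{})
defer close(stop)
go controller.Run(stop)

// Use the returned indexer only for reads, e.g.:
// pods, _ := indexer.ByIndex(NamespaceIndex, "default")
_ = indexer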

func NewInformer

func NewInformer(
	lw ListerWatcher,
	objType runtime.Object,
	resyncPeriod time.Duration,
	h ResourceEventHandler,
) (Store, Controller)

NewInformer returns a Store and a controller for populating the store while also providing event notifications. You should only use the returned Store for Get/List operations; Add/Modify/Delete will cause the event notifications to be faulty.

Parameters:

  • lw is list and watch functions for the source of the resource you want to be informed of.
  • objType is an object of the type that you expect to receive.
  • resyncPeriod: if non-zero, will re-list this often (you will get OnUpdate calls, even if nothing changed). Otherwise, re-list will be delayed as long as possible (until the upstream source closes the watch or times out, or you stop the controller).
  • h is the object you want notifications sent to.
Example
// source simulates an apiserver object endpoint.
source := fcache.NewFakeControllerSource()

// Let's do threadsafe output to get predictable test results.
deletionCounter := make(chan string, 1000)

// Make a controller that immediately deletes anything added to it, and
// logs anything deleted.
_, controller := NewInformer(
	source,
	&v1.Pod{},
	time.Millisecond*100,
	ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			source.Delete(obj.(runtime.Object))
		},
		DeleteFunc: func(obj interface{}) {
			key, err := DeletionHandlingMetaNamespaceKeyFunc(obj)
			if err != nil {
				key = "oops something went wrong with the key"
			}

			// Report this deletion.
			deletionCounter <- key
		},
	},
)

// Run the controller until we close stop.
stop := make(chan struct{})
defer close(stop)
go controller.Run(stop)

// Let's add a few objects to the source.
testIDs := []string{"a-hello", "b-controller", "c-framework"}
for _, name := range testIDs {
	// Note that these pods are not valid-- the fake source doesn't
	// call validation or anything.
	source.Add(&v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: name}})
}

// Let's wait for the controller to process the things we just added.
outputSet := sets.String{}
for i := 0; i < len(testIDs); i++ {
	outputSet.Insert(<-deletionCounter)
}

for _, key := range outputSet.List() {
	fmt.Println(key)
}
Output:

a-hello
b-controller
c-framework

func NewNamespaceKeyedIndexerAndReflector

func NewNamespaceKeyedIndexerAndReflector(lw ListerWatcher, expectedType interface{}, resyncPeriod time.Duration) (indexer Indexer, reflector *Reflector)

NewNamespaceKeyedIndexerAndReflector creates an Indexer and a Reflector. The indexer is configured to key on namespace.

func Pop

func Pop(queue Queue) interface{}

Pop is a helper function for popping from a Queue. WARNING: Do NOT use this function in non-test code to avoid races unless you really really really really know what you are doing.

func SplitMetaNamespaceKey

func SplitMetaNamespaceKey(key string) (namespace, name string, err error)

SplitMetaNamespaceKey returns the namespace and name that MetaNamespaceKeyFunc encoded into key.

TODO: replace key-as-string with a key-as-struct so that this packing/unpacking won't be necessary.

func WaitForCacheSync

func WaitForCacheSync(stopCh <-chan struct{}, cacheSyncs ...InformerSynced) bool

WaitForCacheSync waits for caches to populate. It returns true if it was successful, false if the controller should shut down.
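
A minimal sketch, assuming `informer` is a running SharedInformer and `stop` is the channel used to stop it:

if !WaitForCacheSync(stop, informer.HasSynced) {
	// The stop channel was closed before the caches populated;
	// shut down instead of acting on a partial cache.
	return
}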

Types

type AppendFunc

type AppendFunc func(interface{})

AppendFunc is used to add a matching item to whatever list the caller is using
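
A minimal sketch of collecting matching items with ListAll; `store` is assumed to exist, and labels.Everything() matches all objects:

var pods []*v1.Pod
err := ListAll(store, labels.Everything(), func(obj interface{}) {
	pods = append(pods, obj.(*v1.Pod))
})
if err != nil {
	// handle the error
}
fmt.Println(len(pods))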

type CacheMutationDetector

type CacheMutationDetector interface {
	AddObject(obj interface{})
	Run(stopCh <-chan struct{})
}

func NewCacheMutationDetector

func NewCacheMutationDetector(name string) CacheMutationDetector

type Config

type Config struct {
	// The queue for your objects; either a FIFO or
	// a DeltaFIFO. Your Process() function should accept
	// the output of this Queue's Pop() method.
	Queue

	// Something that can list and watch your objects.
	ListerWatcher

	// Something that can process your objects.
	Process ProcessFunc

	// The type of your objects.
	ObjectType runtime.Object

	// Reprocess everything at least this often.
	// Note that if it takes longer for you to clear the queue than this
	// period, you will end up processing items in the order determined
	// by FIFO.Replace(). Currently, this is random. If this is a
	// problem, we can change that replacement policy to append new
	// things to the end of the queue instead of replacing the entire
	// queue.
	FullResyncPeriod time.Duration

	// ShouldResync, if specified, is invoked when the controller's reflector determines the next
	// periodic sync should occur. If this returns true, it means the reflector should proceed with
	// the resync.
	ShouldResync ShouldResyncFunc

	// If true, when Process() returns an error, re-enqueue the object.
	// TODO: add interface to let you inject a delay/backoff or drop
	//       the object completely if desired. Pass the object in
	//       question to this interface as a parameter.
	RetryOnError bool
}

Config contains all the settings for a Controller.

type Controller

type Controller interface {
	Run(stopCh <-chan struct{})
	HasSynced() bool
	LastSyncResourceVersion() string
}

func New

func New(c *Config) Controller

New makes a new Controller from the given Config.

type DeletedFinalStateUnknown

type DeletedFinalStateUnknown struct {
	Key string
	Obj interface{}
}

DeletedFinalStateUnknown is placed into a DeltaFIFO in the case where an object was deleted but the watch deletion event was missed. In this case we don't know the final "resting" state of the object, so there's a chance the included `Obj` is stale.
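
A minimal sketch of the usual unwrapping done in delete handlers (suitable as the DeleteFunc of a ResourceEventHandlerFuncs):

onDelete := func(obj interface{}) {
	pod, ok := obj.(*v1.Pod)
	if !ok {
		// The watch missed the delete event; obj is a tombstone and
		// its Obj field may be stale.
		tombstone, ok := obj.(DeletedFinalStateUnknown)
		if !ok {
			return
		}
		pod, ok = tombstone.Obj.(*v1.Pod)
		if !ok {
			return
		}
	}
	fmt.Println("deleted:", pod.Name)
}
_ = onDelete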

type Delta

type Delta struct {
	Type   DeltaType
	Object interface{}
}

Delta is the type stored by a DeltaFIFO. It tells you what change happened, and the object's state *after* that change.

[*] Unless the change is a deletion, and then you'll get the final state of the object before it was deleted.

type DeltaCompressor

type DeltaCompressor interface {
	Compress(Deltas) Deltas
}

DeltaCompressor is an algorithm that removes redundant changes.

type DeltaCompressorFunc

type DeltaCompressorFunc func(Deltas) Deltas

DeltaCompressorFunc should remove redundant changes; but changes that are redundant depend on one's desired semantics, so this is an injectable function.

DeltaCompressorFunc adapts a raw function to be a DeltaCompressor.
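
A minimal sketch of one possible compressor: it drops an Updated delta when it is immediately followed by another Updated delta, keeping only the most recent one. Whether this is safe depends on your semantics.

var dropOlderUpdates DeltaCompressorFunc = func(d Deltas) Deltas {
	out := Deltas{}
	for i, delta := range d {
		// Keep everything except an Updated that is superseded by
		// the next Updated.
		if delta.Type == Updated && i+1 < len(d) && d[i+1].Type == Updated {
			continue
		}
		out = append(out, delta)
	}
	return out
}

// Hypothetical wiring: pass it as the compressor argument of NewDeltaFIFO.
// fifo := NewDeltaFIFO(MetaNamespaceKeyFunc, dropOlderUpdates, nil)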

func (DeltaCompressorFunc) Compress

func (dc DeltaCompressorFunc) Compress(d Deltas) Deltas

Compress just calls dc(d).

type DeltaFIFO

type DeltaFIFO struct {
	// contains filtered or unexported fields
}

DeltaFIFO is like FIFO, but allows you to process deletes.

DeltaFIFO is a producer-consumer queue, where a Reflector is intended to be the producer, and the consumer is whatever calls the Pop() method.

DeltaFIFO solves this use case:

  • You want to process every object change (delta) at most once.
  • When you process an object, you want to see everything that's happened to it since you last processed it.
  • You want to process the deletion of objects.
  • You might want to periodically reprocess objects.

DeltaFIFO's Pop(), Get(), and GetByKey() methods return interface{} to satisfy the Store/Queue interfaces, but it will always return an object of type Deltas.

A note on threading: If you call Pop() in parallel from multiple threads, you could end up with multiple threads processing slightly different versions of the same object.

A note on the KeyLister used by the DeltaFIFO: Its main purpose is to list keys that are "known", for the purpose of figuring out which items have been deleted when Replace() or Delete() are called. The deleted object will be included in the DeletedFinalStateUnknown markers. These objects could be stale.

You may provide a function to compress deltas (e.g., represent a series of Updates as a single Update).

func NewDeltaFIFO

func NewDeltaFIFO(keyFunc KeyFunc, compressor DeltaCompressor, knownObjects KeyListerGetter) *DeltaFIFO

NewDeltaFIFO returns a Store which can be used to process changes to items.

keyFunc is used to figure out what key an object should have. (It's exposed in the returned DeltaFIFO's KeyOf() method, with bonus features.)

'compressor' may compress as many or as few items as it wants (including returning an empty slice), but it should do what it does quickly since it is called while the queue is locked. 'compressor' may be nil if you don't want any delta compression.

'keyLister' is expected to return a list of keys that the consumer of this queue "knows about". It is used to decide which items are missing when Replace() is called; 'Deleted' deltas are produced for these items. It may be nil if you don't need to detect all deletions.

TODO: consider merging keyLister with this object, tracking a list of "known" keys when Pop() is called. Have to think about how that affects error retrying.

TODO(lavalamp): I believe there is a possible race only when using an external known object source that the above TODO would fix.

Also see the comment on DeltaFIFO.

func (*DeltaFIFO) Add

func (f *DeltaFIFO) Add(obj interface{}) error

Add inserts an item, and puts it in the queue. The item is only enqueued if it doesn't already exist in the set.

func (*DeltaFIFO) AddIfNotPresent

func (f *DeltaFIFO) AddIfNotPresent(obj interface{}) error

AddIfNotPresent inserts an item, and puts it in the queue. If the item is already present in the set, it is neither enqueued nor added to the set.

This is useful in a single producer/consumer scenario so that the consumer can safely retry items without contending with the producer and potentially enqueueing stale items.

Important: obj must be a Deltas (the output of the Pop() function). Yes, this is different from the Add/Update/Delete functions.

func (*DeltaFIFO) Close

func (f *DeltaFIFO) Close()

Close the queue.

func (*DeltaFIFO) Delete

func (f *DeltaFIFO) Delete(obj interface{}) error

Delete is just like Add, but makes a Deleted Delta. If the item does not already exist, it will be ignored. (It may have already been deleted by a Replace (re-list), for example.)

func (*DeltaFIFO) Get

func (f *DeltaFIFO) Get(obj interface{}) (item interface{}, exists bool, err error)

Get returns the complete list of deltas for the requested item, or sets exists=false. You should treat the items returned inside the deltas as immutable.

func (*DeltaFIFO) GetByKey

func (f *DeltaFIFO) GetByKey(key string) (item interface{}, exists bool, err error)

GetByKey returns the complete list of deltas for the requested item, setting exists=false if that list is empty. You should treat the items returned inside the deltas as immutable.

func (*DeltaFIFO) HasSynced

func (f *DeltaFIFO) HasSynced() bool

HasSynced returns true if an Add/Update/Delete/AddIfNotPresent was called first, or if the first batch of items inserted by Replace() has been popped.

func (*DeltaFIFO) IsClosed

func (f *DeltaFIFO) IsClosed() bool

IsClosed checks if the queue is closed.

func (*DeltaFIFO) KeyOf

func (f *DeltaFIFO) KeyOf(obj interface{}) (string, error)

KeyOf exposes f's keyFunc, but also detects the key of a Deltas object or DeletedFinalStateUnknown objects.

func (*DeltaFIFO) List

func (f *DeltaFIFO) List() []interface{}

List returns a list of all the items; it returns the object from the most recent Delta. You should treat the items returned inside the deltas as immutable.

func (*DeltaFIFO) ListKeys

func (f *DeltaFIFO) ListKeys() []string

ListKeys returns a list of all the keys of the objects currently in the FIFO.

func (*DeltaFIFO) Pop

func (f *DeltaFIFO) Pop(process PopProcessFunc) (interface{}, error)

Pop blocks until an item is added to the queue, and then returns it. If multiple items are ready, they are returned in the order in which they were added/updated. The item is removed from the queue (and the store) before it is returned, so if you don't successfully process it, you need to add it back with AddIfNotPresent(). The process function is called under lock, so it is safe to update data structures in it that need to be in sync with the queue (e.g. knownKeys). The PopProcessFunc may return an instance of ErrRequeue with a nested error to indicate the current item should be requeued (equivalent to calling AddIfNotPresent under the lock).

Pop returns a 'Deltas', which has a complete list of all the things that happened to the object (deltas) while it was sitting in the queue.

func (*DeltaFIFO) Replace

func (f *DeltaFIFO) Replace(list []interface{}, resourceVersion string) error

Replace will delete the contents of 'f', using instead the given list. 'f' takes ownership of the list; you should not reference the list again after calling this function. f's queue is reset, too; upon return, it will contain the items in the list, in no particular order.

func (*DeltaFIFO) Resync

func (f *DeltaFIFO) Resync() error

Resync will send a sync event for each item

func (*DeltaFIFO) Update

func (f *DeltaFIFO) Update(obj interface{}) error

Update is just like Add, but makes an Updated Delta.

type DeltaType

type DeltaType string

DeltaType is the type of a change (addition, deletion, etc)

const (
	Added   DeltaType = "Added"
	Updated DeltaType = "Updated"
	Deleted DeltaType = "Deleted"
	// The other types are obvious. You'll get Sync deltas when:
	//  * A watch expires/errors out and a new list/watch cycle is started.
	//  * You've turned on periodic syncs.
	// (Anything that triggers DeltaFIFO's Replace() method.)
	Sync DeltaType = "Sync"
)

type Deltas

type Deltas []Delta

Deltas is a list of one or more 'Delta's to an individual object. The oldest delta is at index 0, the newest delta is the last one.

func (Deltas) Newest

func (d Deltas) Newest() *Delta

Newest is a convenience function that returns the newest delta, or nil if there are no deltas.

func (Deltas) Oldest

func (d Deltas) Oldest() *Delta

Oldest is a convenience function that returns the oldest delta, or nil if there are no deltas.

type ErrRequeue

type ErrRequeue struct {
	// Err is returned by the Pop function
	Err error
}

ErrRequeue may be returned by a PopProcessFunc to safely requeue the current item. The value of Err will be returned from Pop.
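
A minimal sketch of requeueing from a PopProcessFunc; `fifo` and `process` are assumed to exist:

if _, err := fifo.Pop(func(obj interface{}) error {
	if perr := process(obj); perr != nil {
		// Requeue the item under the queue's lock; Pop will return perr.
		return ErrRequeue{Err: perr}
	}
	return nil
}); err != nil {
	// err is the nested error carried inside ErrRequeue.
}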

func (ErrRequeue) Error

func (e ErrRequeue) Error() string

type ExpirationCache

type ExpirationCache struct {
	// contains filtered or unexported fields
}

ExpirationCache implements the store interface

  1. All entries are automatically time-stamped on insert:
     a. The key is computed based off the original item/keyFunc.
     b. The value inserted under that key is the timestamped item.
  2. Expiration happens lazily on read based on the expiration policy:
     a. No item can be inserted into the store while we're expiring *any* item in the cache.
  3. Time-stamps are stripped off unexpired entries before return.

Note that the ExpirationCache is inherently slower than a normal threadSafeStore because it takes a write lock every time it checks if an item has expired.

func (*ExpirationCache) Add

func (c *ExpirationCache) Add(obj interface{}) error

Add timestamps an item and inserts it into the cache, overwriting entries that might exist under the same key.

func (*ExpirationCache) Delete

func (c *ExpirationCache) Delete(obj interface{}) error

Delete removes an item from the cache.

func (*ExpirationCache) Get

func (c *ExpirationCache) Get(obj interface{}) (interface{}, bool, error)

Get returns unexpired items. It purges the cache of expired items in the process.

func (*ExpirationCache) GetByKey

func (c *ExpirationCache) GetByKey(key string) (interface{}, bool, error)

GetByKey returns the item stored under the key, or sets exists=false.

func (*ExpirationCache) List

func (c *ExpirationCache) List() []interface{}

List retrieves a list of unexpired items. It purges the cache of expired items in the process.

func (*ExpirationCache) ListKeys

func (c *ExpirationCache) ListKeys() []string

ListKeys returns a list of all keys in the expiration cache.

func (*ExpirationCache) Replace

func (c *ExpirationCache) Replace(list []interface{}, resourceVersion string) error

Replace will convert all items in the given list to TimestampedEntries before attempting the replace operation. The replace operation will delete the contents of the ExpirationCache `c`.

func (*ExpirationCache) Resync

func (c *ExpirationCache) Resync() error

Resync will touch all objects to put them into the processing queue

func (*ExpirationCache) Update

func (c *ExpirationCache) Update(obj interface{}) error

Update has not been implemented yet for lack of a use case, so this method simply calls `Add`. This effectively refreshes the timestamp.

type ExpirationPolicy

type ExpirationPolicy interface {
	IsExpired(obj *timestampedEntry) bool
}

ExpirationPolicy dictates when an object expires. Currently only abstracted out so unit tests don't rely on the system clock.

type ExplicitKey

type ExplicitKey string

ExplicitKey can be passed to MetaNamespaceKeyFunc if you have the key for the object but not the object itself.

type FIFO

type FIFO struct {
	// contains filtered or unexported fields
}

FIFO receives adds and updates from a Reflector, and puts them in a queue for FIFO order processing. If multiple adds/updates of a single item happen while an item is in the queue before it has been processed, it will only be processed once, and when it is processed, the most recent version will be processed. This can't be done with a channel.

FIFO solves this use case:

  • You want to process every object (exactly) once.
  • You want to process the most recent version of the object when you process it.
  • You do not want to process deleted objects, they should be removed from the queue.
  • You do not want to periodically reprocess objects.

Compare with DeltaFIFO for other use cases.

func NewFIFO

func NewFIFO(keyFunc KeyFunc) *FIFO

NewFIFO returns a Store which can be used to queue up items to process.

func (*FIFO) Add

func (f *FIFO) Add(obj interface{}) error

Add inserts an item, and puts it in the queue. The item is only enqueued if it doesn't already exist in the set.

func (*FIFO) AddIfNotPresent

func (f *FIFO) AddIfNotPresent(obj interface{}) error

AddIfNotPresent inserts an item, and puts it in the queue. If the item is already present in the set, it is neither enqueued nor added to the set.

This is useful in a single producer/consumer scenario so that the consumer can safely retry items without contending with the producer and potentially enqueueing stale items.

func (*FIFO) Close

func (f *FIFO) Close()

Close the queue.

func (*FIFO) Delete

func (f *FIFO) Delete(obj interface{}) error

Delete removes an item. It doesn't add it to the queue, because this implementation assumes the consumer only cares about the objects, not the order in which they were created/added.

func (*FIFO) Get

func (f *FIFO) Get(obj interface{}) (item interface{}, exists bool, err error)

Get returns the requested item, or sets exists=false.

func (*FIFO) GetByKey

func (f *FIFO) GetByKey(key string) (item interface{}, exists bool, err error)

GetByKey returns the requested item, or sets exists=false.

func (*FIFO) HasSynced

func (f *FIFO) HasSynced() bool

HasSynced returns true if an Add/Update/Delete/AddIfNotPresent was called first, or if the first batch of items inserted by Replace() has been popped.

func (*FIFO) IsClosed

func (f *FIFO) IsClosed() bool

IsClosed checks if the queue is closed.

func (*FIFO) List

func (f *FIFO) List() []interface{}

List returns a list of all the items.

func (*FIFO) ListKeys

func (f *FIFO) ListKeys() []string

ListKeys returns a list of all the keys of the objects currently in the FIFO.

func (*FIFO) Pop

func (f *FIFO) Pop(process PopProcessFunc) (interface{}, error)

Pop waits until an item is ready and processes it. If multiple items are ready, they are returned in the order in which they were added/updated. The item is removed from the queue (and the store) before it is processed, so if you don't successfully process it, it should be added back with AddIfNotPresent(). The process function is called under lock, so it is safe to update data structures in it that need to be in sync with the queue.

func (*FIFO) Replace

func (f *FIFO) Replace(list []interface{}, resourceVersion string) error

Replace will delete the contents of 'f', using instead the given list. 'f' takes ownership of the list; you should not reference the list again after calling this function. f's queue is reset, too; upon return, it will contain the items in the list, in no particular order.

func (*FIFO) Resync

func (f *FIFO) Resync() error

Resync will touch all objects to put them into the processing queue

func (*FIFO) Update

func (f *FIFO) Update(obj interface{}) error

Update is the same as Add in this implementation.

type FakeCustomStore

type FakeCustomStore struct {
	AddFunc      func(obj interface{}) error
	UpdateFunc   func(obj interface{}) error
	DeleteFunc   func(obj interface{}) error
	ListFunc     func() []interface{}
	ListKeysFunc func() []string
	GetFunc      func(obj interface{}) (item interface{}, exists bool, err error)
	GetByKeyFunc func(key string) (item interface{}, exists bool, err error)
	ReplaceFunc  func(list []interface{}, resourceVerion string) error
	ResyncFunc   func() error
}

FakeCustomStore lets you define custom functions for Store operations.
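
A minimal sketch of stubbing a single operation in a test; only the function set below is exercised:

added := []interface{}{}
fake := &FakeCustomStore{
	AddFunc: func(obj interface{}) error {
		added = append(added, obj)
		return nil
	},
}
_ = fake.Add(&v1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "x"}})
fmt.Println(len(added)) // prints 1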

func (*FakeCustomStore) Add

func (f *FakeCustomStore) Add(obj interface{}) error

Add calls the custom Add function if defined

func (*FakeCustomStore) Delete

func (f *FakeCustomStore) Delete(obj interface{}) error

Delete calls the custom Delete function if defined

func (*FakeCustomStore) Get

func (f *FakeCustomStore) Get(obj interface{}) (item interface{}, exists bool, err error)

Get calls the custom Get function if defined

func (*FakeCustomStore) GetByKey

func (f *FakeCustomStore) GetByKey(key string) (item interface{}, exists bool, err error)

GetByKey calls the custom GetByKey function if defined

func (*FakeCustomStore) List

func (f *FakeCustomStore) List() []interface{}

List calls the custom List function if defined

func (*FakeCustomStore) ListKeys

func (f *FakeCustomStore) ListKeys() []string

ListKeys calls the custom ListKeys function if defined

func (*FakeCustomStore) Replace

func (f *FakeCustomStore) Replace(list []interface{}, resourceVersion string) error

Replace calls the custom Replace function if defined

func (*FakeCustomStore) Resync

func (f *FakeCustomStore) Resync() error

Resync calls the custom Resync function if defined

func (*FakeCustomStore) Update

func (f *FakeCustomStore) Update(obj interface{}) error

Update calls the custom Update function if defined

type FakeExpirationPolicy

type FakeExpirationPolicy struct {
	NeverExpire     sets.String
	RetrieveKeyFunc KeyFunc
}

func (*FakeExpirationPolicy) IsExpired

func (p *FakeExpirationPolicy) IsExpired(obj *timestampedEntry) bool

type GenericLister

type GenericLister interface {
	// List will return all objects across namespaces
	List(selector labels.Selector) (ret []runtime.Object, err error)
	// Get will attempt to retrieve assuming that name==key
	Get(name string) (runtime.Object, error)
	// ByNamespace will give you a GenericNamespaceLister for one namespace
	ByNamespace(namespace string) GenericNamespaceLister
}

GenericLister is a lister skin on a generic Indexer

func NewGenericLister

func NewGenericLister(indexer Indexer, resource schema.GroupResource) GenericLister

type GenericNamespaceLister

type GenericNamespaceLister interface {
	// List will return all objects in this namespace
	List(selector labels.Selector) (ret []runtime.Object, err error)
	// Get will attempt to retrieve by namespace and name
	Get(name string) (runtime.Object, error)
}

GenericNamespaceLister is a lister skin on a generic Indexer

type Getter

type Getter interface {
	Get() *restclient.Request
}

Getter interface knows how to access Get method from RESTClient.

type Index

type Index map[string]sets.String

Index maps the indexed value to a set of keys in the store that match on that value

type IndexFunc

type IndexFunc func(obj interface{}) ([]string, error)

IndexFunc knows how to provide an indexed value for an object.

type Indexer

type Indexer interface {
	Store
	// Retrieve list of objects that match on the named indexing function
	Index(indexName string, obj interface{}) ([]interface{}, error)
	// ListIndexFuncValues returns the list of generated values of an Index func
	ListIndexFuncValues(indexName string) []string
	// ByIndex lists object that match on the named indexing function with the exact key
	ByIndex(indexName, indexKey string) ([]interface{}, error)
	// GetIndexers returns the indexers
	GetIndexers() Indexers

	// AddIndexers adds more indexers to this store.  If you call this after you already have data
	// in the store, the results are undefined.
	AddIndexers(newIndexers Indexers) error
}

Indexer is a storage interface that lets you list objects using multiple indexing functions

func NewIndexer

func NewIndexer(keyFunc KeyFunc, indexers Indexers) Indexer

NewIndexer returns an Indexer implemented simply with a map and a lock.
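
A minimal sketch of an Indexer keyed by namespace, using the package's NamespaceIndex name and MetaNamespaceIndexFunc:

indexer := NewIndexer(MetaNamespaceKeyFunc, Indexers{NamespaceIndex: MetaNamespaceIndexFunc})

_ = indexer.Add(&v1.Pod{ObjectMeta: metav1.ObjectMeta{Namespace: "default", Name: "a"}})
_ = indexer.Add(&v1.Pod{ObjectMeta: metav1.ObjectMeta{Namespace: "kube-system", Name: "b"}})

// ByIndex returns the objects whose index value matches "default".
pods, err := indexer.ByIndex(NamespaceIndex, "default")
if err != nil {
	// handle the error
}
fmt.Println(len(pods)) // prints 1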

type Indexers

type Indexers map[string]IndexFunc

Indexers maps a name to an IndexFunc

type Indices

type Indices map[string]Index

Indices maps a name to an Index

type InformerSynced

type InformerSynced func() bool

InformerSynced is a function that can be used to determine if an informer has synced. This is useful for determining if caches have synced.

type KeyError

type KeyError struct {
	Obj interface{}
	Err error
}

KeyError will be returned any time a KeyFunc gives an error; it includes the object at fault.

func (KeyError) Error

func (k KeyError) Error() string

Error gives a human-readable description of the error.

type KeyFunc

type KeyFunc func(obj interface{}) (string, error)

KeyFunc knows how to make a key from an object. Implementations should be deterministic.

func IndexFuncToKeyFuncAdapter

func IndexFuncToKeyFuncAdapter(indexFunc IndexFunc) KeyFunc

IndexFuncToKeyFuncAdapter adapts an indexFunc to a keyFunc. This is only useful if your index function returns unique values for every object. This conversion can create errors when more than one key is found. You should prefer to make proper key and index functions.

type KeyGetter

type KeyGetter interface {
	GetByKey(key string) (interface{}, bool, error)
}

A KeyGetter is anything that knows how to get the value stored under a given key.

type KeyLister

type KeyLister interface {
	ListKeys() []string
}

A KeyLister is anything that knows how to list its keys.

type KeyListerGetter

type KeyListerGetter interface {
	KeyLister
	KeyGetter
}

A KeyListerGetter is anything that knows how to list its keys and look up by key.

type ListFunc

type ListFunc func(options metav1.ListOptions) (runtime.Object, error)

ListFunc knows how to list resources

type ListWatch

type ListWatch struct {
	ListFunc  ListFunc
	WatchFunc WatchFunc
}

ListWatch knows how to list and watch a set of apiserver resources. It satisfies the ListerWatcher interface. It is a convenience for users of NewReflector, etc. ListFunc and WatchFunc must not be nil.

func NewListWatchFromClient

func NewListWatchFromClient(c Getter, resource string, namespace string, fieldSelector fields.Selector) *ListWatch

NewListWatchFromClient creates a new ListWatch from the specified client, resource, namespace and field selector.
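
A minimal sketch, assuming a typed clientset (e.g. a *kubernetes.Clientset named clientset) is in scope; its CoreV1 REST client satisfies the Getter interface:

lw := NewListWatchFromClient(
	clientset.CoreV1().RESTClient(), // Getter (assumed clientset)
	"pods",                          // resource
	"default",                       // namespace
	fields.Everything(),             // field selector
)

// lw can now be handed to NewReflector, NewInformer, etc.
_ = lw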

func (*ListWatch) List

func (lw *ListWatch) List(options metav1.ListOptions) (runtime.Object, error)

List a set of apiserver resources

func (*ListWatch) Watch

func (lw *ListWatch) Watch(options metav1.ListOptions) (watch.Interface, error)

Watch a set of apiserver resources

type ListerWatcher

type ListerWatcher interface {
	// List should return a list type object; the Items field will be extracted, and the
	// ResourceVersion field will be used to start the watch in the right place.
	List(options metav1.ListOptions) (runtime.Object, error)
	// Watch should begin a watch at the specified version.
	Watch(options metav1.ListOptions) (watch.Interface, error)
}

ListerWatcher is any object that knows how to perform an initial list and start a watch on a resource.

type PopProcessFunc

type PopProcessFunc func(interface{}) error

PopProcessFunc is passed to Pop() method of Queue interface. It is supposed to process the element popped from the queue.

type ProcessFunc

type ProcessFunc func(obj interface{}) error

ProcessFunc processes a single object.

type Queue

type Queue interface {
	Store

	// Pop blocks until it has something to process.
	// It returns the object that was processed and the result of processing.
	// The PopProcessFunc may return an ErrRequeue{...} to indicate the item
	// should be requeued before releasing the lock on the queue.
	Pop(PopProcessFunc) (interface{}, error)

	// AddIfNotPresent adds a value previously
	// returned by Pop back into the queue as long
	// as nothing else (presumably more recent)
	// has since been added.
	AddIfNotPresent(interface{}) error

	// Return true if the first batch of items has been popped
	HasSynced() bool

	// Close queue
	Close()
}

Queue is exactly like a Store, but has a Pop() method too.

type Reflector

type Reflector struct {
	ShouldResync func() bool
	// contains filtered or unexported fields
}

Reflector watches a specified resource and causes all changes to be reflected in the given store.

func NewNamedReflector

func NewNamedReflector(name string, lw ListerWatcher, expectedType interface{}, store Store, resyncPeriod time.Duration) *Reflector

NewNamedReflector is the same as NewReflector, but with a specified name for logging.

func NewReflector

func NewReflector(lw ListerWatcher, expectedType interface{}, store Store, resyncPeriod time.Duration) *Reflector

NewReflector creates a new Reflector object which will keep the given store up to date with the server's contents for the given resource. Reflector promises to only put things in the store that have the type of expectedType, unless expectedType is nil. If resyncPeriod is non-zero, then lists will be executed after every resyncPeriod, so that you can use reflectors to periodically process everything as well as incrementally processing the things that change.
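
A minimal sketch that keeps a plain Store in sync with the fake source used in the package examples:

source := fcache.NewFakeControllerSource()
store := NewStore(MetaNamespaceKeyFunc)

// Only *v1.Pod objects will be placed into the store.
r := NewReflector(source, &v1.Pod{}, store, 30*time.Second)

stop := make(chan struct{})
defer close(stop)
r.RunUntil(stop)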

func (*Reflector) LastSyncResourceVersion

func (r *Reflector) LastSyncResourceVersion() string

LastSyncResourceVersion is the resource version observed when last synced with the underlying store. The value returned is not synchronized with access to the underlying store and is not thread-safe.

func (*Reflector) ListAndWatch

func (r *Reflector) ListAndWatch(stopCh <-chan struct{}) error

ListAndWatch first lists all items and gets the resource version at the moment of the call, and then uses that resource version to watch. It returns an error if ListAndWatch didn't even try to initialize the watch.

func (*Reflector) Run

func (r *Reflector) Run()

Run starts a watch and handles watch events. Will restart the watch if it is closed. Run starts a goroutine and returns immediately.

func (*Reflector) RunUntil

func (r *Reflector) RunUntil(stopCh <-chan struct{})

RunUntil starts a watch and handles watch events. Will restart the watch if it is closed. RunUntil starts a goroutine and returns immediately. It will exit when stopCh is closed.

type ResourceEventHandler

type ResourceEventHandler interface {
	OnAdd(obj interface{})
	OnUpdate(oldObj, newObj interface{})
	OnDelete(obj interface{})
}

ResourceEventHandler can handle notifications for events that happen to a resource. The events are informational only, so you can't return an error.

  • OnAdd is called when an object is added.
  • OnUpdate is called when an object is modified. Note that oldObj is the last known state of the object-- it is possible that several changes were combined together, so you can't use this to see every single change. OnUpdate is also called when a re-list happens, and it will get called even if nothing changed. This is useful for periodically evaluating or syncing something.
  • OnDelete will get the final state of the item if it is known, otherwise it will get an object of type DeletedFinalStateUnknown. This can happen if the watch is closed and misses the delete event and we don't notice the deletion until the subsequent re-list.

type ResourceEventHandlerFuncs

type ResourceEventHandlerFuncs struct {
	AddFunc    func(obj interface{})
	UpdateFunc func(oldObj, newObj interface{})
	DeleteFunc func(obj interface{})
}

ResourceEventHandlerFuncs is an adaptor to let you easily specify as many or as few of the notification functions as you want while still implementing ResourceEventHandler.

func (ResourceEventHandlerFuncs) OnAdd

func (r ResourceEventHandlerFuncs) OnAdd(obj interface{})

OnAdd calls AddFunc if it's not nil.

func (ResourceEventHandlerFuncs) OnDelete

func (r ResourceEventHandlerFuncs) OnDelete(obj interface{})

OnDelete calls DeleteFunc if it's not nil.

func (ResourceEventHandlerFuncs) OnUpdate

func (r ResourceEventHandlerFuncs) OnUpdate(oldObj, newObj interface{})

OnUpdate calls UpdateFunc if it's not nil.

type SharedIndexInformer

type SharedIndexInformer interface {
	SharedInformer
	// AddIndexers add indexers to the informer before it starts.
	AddIndexers(indexers Indexers) error
	GetIndexer() Indexer
}

func NewSharedIndexInformer

func NewSharedIndexInformer(lw ListerWatcher, objType runtime.Object, defaultEventHandlerResyncPeriod time.Duration, indexers Indexers) SharedIndexInformer

NewSharedIndexInformer creates a new instance for the listwatcher.

type SharedInformer

type SharedInformer interface {
	// AddEventHandler adds an event handler to the shared informer using the shared informer's resync
	// period.  Events to a single handler are delivered sequentially, but there is no coordination
	// between different handlers.
	AddEventHandler(handler ResourceEventHandler)
	// AddEventHandlerWithResyncPeriod adds an event handler to the shared informer using the
	// specified resync period.  Events to a single handler are delivered sequentially, but there is
	// no coordination between different handlers.
	AddEventHandlerWithResyncPeriod(handler ResourceEventHandler, resyncPeriod time.Duration)
	// GetStore returns the Store.
	GetStore() Store
	// GetController gives back a synthetic interface that "votes" to start the informer
	GetController() Controller
	// Run starts the shared informer, which will be stopped when stopCh is closed.
	Run(stopCh <-chan struct{})
	// HasSynced returns true if the shared informer's store has synced.
	HasSynced() bool
	// LastSyncResourceVersion is the resource version observed when last synced with the underlying
	// store. The value returned is not synchronized with access to the underlying store and is not
	// thread-safe.
	LastSyncResourceVersion() string
}

SharedInformer has a shared data cache and is capable of distributing notifications for changes to the cache to multiple listeners who registered via AddEventHandler. If you use this, there is one behavior change compared to a standard Informer. When you receive a notification, the cache will be AT LEAST as fresh as the notification, but it MAY be more fresh. You should NOT depend on the contents of the cache exactly matching the notification you've received in handler functions. If there was a create, followed by a delete, the cache may NOT have your item. This has advantages over the broadcaster since it allows us to share a common cache across many controllers. Extending the broadcaster would have required us keep duplicate caches for each watch.
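
A minimal sketch of a shared index informer with one event handler; remember that the cache may be fresher than the notification your handler receives:

source := fcache.NewFakeControllerSource()

informer := NewSharedIndexInformer(
	source,
	&v1.Pod{},
	time.Minute, // default event handler resync period
	Indexers{NamespaceIndex: MetaNamespaceIndexFunc},
)

informer.AddEventHandler(ResourceEventHandlerFuncs{
	AddFunc: func(obj interface{}) {
		if key, err := MetaNamespaceKeyFunc(obj); err == nil {
			fmt.Println("added:", key)
		}
	},
})

stop := make(chan struct{})
defer close(stop)
go informer.Run(stop)

// Wait for the initial list to land in the shared cache before using it.
if !WaitForCacheSync(stop, informer.HasSynced) {
	return
}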

func NewSharedInformer

func NewSharedInformer(lw ListerWatcher, objType runtime.Object, resyncPeriod time.Duration) SharedInformer

NewSharedInformer creates a new instance for the listwatcher.

type ShouldResyncFunc

type ShouldResyncFunc func() bool

ShouldResyncFunc is a type of function that indicates if a reflector should perform a resync or not. It can be used by a shared informer to support multiple event handlers with custom resync periods.

type Store

type Store interface {
	Add(obj interface{}) error
	Update(obj interface{}) error
	Delete(obj interface{}) error
	List() []interface{}
	ListKeys() []string
	Get(obj interface{}) (item interface{}, exists bool, err error)
	GetByKey(key string) (item interface{}, exists bool, err error)

	// Replace will delete the contents of the store, using instead the
	// given list. Store takes ownership of the list, you should not reference
	// it after calling this function.
	Replace([]interface{}, string) error
	Resync() error
}

Store is a generic object storage interface. Reflector knows how to watch a server and update a store. A generic store is provided, which allows Reflector to be used as a local caching system, and an LRU store, which allows Reflector to work like a queue of items yet to be processed.

Store makes no assumptions about stored object identity; it is the responsibility of a Store implementation to provide a mechanism to correctly key objects and to define the contract for obtaining objects by some arbitrary key type.

func NewFakeExpirationStore

func NewFakeExpirationStore(keyFunc KeyFunc, deletedKeys chan<- string, expirationPolicy ExpirationPolicy, cacheClock clock.Clock) Store

func NewStore

func NewStore(keyFunc KeyFunc) Store

NewStore returns a Store implemented simply with a map and a lock.

func NewTTLStore

func NewTTLStore(keyFunc KeyFunc, ttl time.Duration) Store

NewTTLStore creates and returns an ExpirationCache with a TTLPolicy.
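
A minimal sketch; entries older than the TTL are dropped lazily on read:

// Entries expire ten minutes after they are added.
ttlStore := NewTTLStore(MetaNamespaceKeyFunc, 10*time.Minute)

_ = ttlStore.Add(&v1.Pod{ObjectMeta: metav1.ObjectMeta{Namespace: "default", Name: "a"}})

// Get/GetByKey/List purge expired entries as a side effect.
if item, exists, err := ttlStore.GetByKey("default/a"); err == nil && exists {
	fmt.Println(item.(*v1.Pod).Name)
}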

type TTLPolicy

type TTLPolicy struct {
	//	 >0: Expire entries with an age > ttl
	//	<=0: Don't expire any entry
	Ttl time.Duration

	// Clock used to calculate ttl expiration
	Clock clock.Clock
}

TTLPolicy implements a ttl based ExpirationPolicy.

func (*TTLPolicy) IsExpired

func (p *TTLPolicy) IsExpired(obj *timestampedEntry) bool

IsExpired returns true if the given object is older than the ttl, or it can't determine its age.

type ThreadSafeStore

type ThreadSafeStore interface {
	Add(key string, obj interface{})
	Update(key string, obj interface{})
	Delete(key string)
	Get(key string) (item interface{}, exists bool)
	List() []interface{}
	ListKeys() []string
	Replace(map[string]interface{}, string)
	Index(indexName string, obj interface{}) ([]interface{}, error)
	ListIndexFuncValues(name string) []string
	ByIndex(indexName, indexKey string) ([]interface{}, error)
	GetIndexers() Indexers

	// AddIndexers adds more indexers to this store.  If you call this after you already have data
	// in the store, the results are undefined.
	AddIndexers(newIndexers Indexers) error
	Resync() error
}

ThreadSafeStore is an interface that allows concurrent access to a storage backend. TL;DR caveats: you must not modify anything returned by Get or List as it will break the indexing feature in addition to not being thread safe.

The guarantees of thread safety provided by List/Get are only valid if the caller treats returned items as read-only. For example, a pointer inserted in the store through `Add` will be returned as is by `Get`. Multiple clients might invoke `Get` on the same key and modify the pointer in a non-thread-safe way. Also note that modifying objects stored by the indexers (if any) will *not* automatically lead to a re-index. So it's not a good idea to directly modify the objects returned by Get/List, in general.

func NewThreadSafeStore

func NewThreadSafeStore(indexers Indexers, indices Indices) ThreadSafeStore

type UndeltaStore

type UndeltaStore struct {
	Store
	PushFunc func([]interface{})
}

UndeltaStore listens to incremental updates and sends complete state on every change. It implements the Store interface so that it can receive a stream of mirrored objects from Reflector. Whenever it receives any complete (Store.Replace) or incremental change (Store.Add, Store.Update, Store.Delete), it sends the complete state by calling PushFunc. It is thread-safe. It guarantees that every change (Add, Update, Replace, Delete) results in one call to PushFunc, but sometimes PushFunc may be called twice with the same values. PushFunc should be thread safe.

func NewUndeltaStore

func NewUndeltaStore(pushFunc func([]interface{}), keyFunc KeyFunc) *UndeltaStore

NewUndeltaStore returns an UndeltaStore implemented with a Store.

func (*UndeltaStore) Add

func (u *UndeltaStore) Add(obj interface{}) error

func (*UndeltaStore) Delete

func (u *UndeltaStore) Delete(obj interface{}) error

func (*UndeltaStore) Replace

func (u *UndeltaStore) Replace(list []interface{}, resourceVersion string) error

func (*UndeltaStore) Update

func (u *UndeltaStore) Update(obj interface{}) error

type WatchFunc

type WatchFunc func(options metav1.ListOptions) (watch.Interface, error)

WatchFunc knows how to watch resources
