gqldataloader

package
v0.6.0
Published: Apr 8, 2022 License: Apache-2.0 Imports: 4 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type AssetLoader

type AssetLoader struct {
	// contains filtered or unexported fields
}

AssetLoader batches and caches requests

func NewAssetLoader

func NewAssetLoader(config AssetLoaderConfig) *AssetLoader

NewAssetLoader creates a new AssetLoader given a fetch, wait, and maxBatch

func (*AssetLoader) Clear

func (l *AssetLoader) Clear(key id.AssetID)

Clear the value at key from the cache, if it exists

func (*AssetLoader) Load

func (l *AssetLoader) Load(key id.AssetID) (*gqlmodel.Asset, error)

Load an Asset by key; batching and caching will be applied automatically
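
A minimal usage sketch (not part of this package's documentation), written as if inside the gqldataloader package: it assumes an *AssetLoader has already been built with NewAssetLoader (see the AssetLoaderConfig example further down), and the surrounding helper is hypothetical.

// resolveAsset is a hypothetical helper showing how Load is typically called.
// Concurrent calls within the Wait window are collected into one batch, and
// repeated calls for the same key are served from the cache.
func resolveAsset(loader *AssetLoader, aid id.AssetID) (*gqlmodel.Asset, error) {
	// Load blocks until the batch containing aid has been fetched.
	asset, err := loader.Load(aid)
	if err != nil {
		return nil, err
	}
	return asset, nil
}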

func (*AssetLoader) LoadAll

func (l *AssetLoader) LoadAll(keys []id.AssetID) ([]*gqlmodel.Asset, []error)

LoadAll fetches many keys at once. It will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*AssetLoader) LoadAllThunk

func (l *AssetLoader) LoadAllThunk(keys []id.AssetID) func() ([]*gqlmodel.Asset, []error)

LoadAllThunk returns a function that when called will block waiting for the Assets. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*AssetLoader) LoadThunk

func (l *AssetLoader) LoadThunk(key id.AssetID) func() (*gqlmodel.Asset, error)

LoadThunk returns a function that when called will block waiting for an Asset. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.
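
A sketch of the thunk pattern described above, again written as if inside the package and with the caller code assumed: every LoadThunk call registers its key first, so all the keys can land in the same batch before any call blocks.

// loadAssetsViaThunks is a hypothetical helper: it queues several loads
// without blocking, then resolves them one by one.
func loadAssetsViaThunks(loader *AssetLoader, keys []id.AssetID) ([]*gqlmodel.Asset, []error) {
	thunks := make([]func() (*gqlmodel.Asset, error), 0, len(keys))
	for _, key := range keys {
		// LoadThunk only schedules the key; nothing blocks yet.
		thunks = append(thunks, loader.LoadThunk(key))
	}

	assets := make([]*gqlmodel.Asset, len(thunks))
	errs := make([]error, len(thunks))
	for i, thunk := range thunks {
		// Calling the thunk blocks until its batch has been fetched.
		assets[i], errs[i] = thunk()
	}
	return assets, errs
}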

func (*AssetLoader) Prime

func (l *AssetLoader) Prime(key id.AssetID, value *gqlmodel.Asset) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)
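
A sketch of the Clear-then-Prime pattern mentioned above, e.g. for refreshing the cache after an update; the helper itself is hypothetical.

// refreshCachedAsset is a hypothetical helper that overwrites a cached Asset.
func refreshCachedAsset(loader *AssetLoader, key id.AssetID, updated *gqlmodel.Asset) {
	// Prime alone is a no-op when the key is already cached, so clear it first.
	loader.Clear(key)
	// Prime now stores the fresh value and returns true.
	_ = loader.Prime(key, updated)
}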

type AssetLoaderConfig

type AssetLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []id.AssetID) ([]*gqlmodel.Asset, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch, 0 = no limit
	MaxBatch int
}

AssetLoaderConfig captures the config to create a new AssetLoader
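
A construction sketch: the fetch function is an assumption standing in for whatever repository or database call the application uses, and the Wait and MaxBatch values are arbitrary examples, not recommendations from this package.

// newAssetLoader wires a hypothetical batch fetcher into an AssetLoader.
// fetchAssets is expected to return one result (or error) per key.
func newAssetLoader(fetchAssets func(keys []id.AssetID) ([]*gqlmodel.Asset, []error)) *AssetLoader {
	return NewAssetLoader(AssetLoaderConfig{
		Fetch:    fetchAssets,
		Wait:     1 * time.Millisecond, // collect keys for up to 1ms before sending a batch
		MaxBatch: 100,                  // 0 would mean no limit
	})
}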

type DatasetLoader

type DatasetLoader struct {
	// contains filtered or unexported fields
}

DatasetLoader batches and caches requests

func NewDatasetLoader

func NewDatasetLoader(config DatasetLoaderConfig) *DatasetLoader

NewDatasetLoader creates a new DatasetLoader given a fetch, wait, and maxBatch

func (*DatasetLoader) Clear

func (l *DatasetLoader) Clear(key id.DatasetID)

Clear the value at key from the cache, if it exists

func (*DatasetLoader) Load

func (l *DatasetLoader) Load(key id.DatasetID) (*gqlmodel.Dataset, error)

Load a Dataset by key; batching and caching will be applied automatically

func (*DatasetLoader) LoadAll

func (l *DatasetLoader) LoadAll(keys []id.DatasetID) ([]*gqlmodel.Dataset, []error)

LoadAll fetches many keys at once. It will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*DatasetLoader) LoadAllThunk

func (l *DatasetLoader) LoadAllThunk(keys []id.DatasetID) func() ([]*gqlmodel.Dataset, []error)

LoadAllThunk returns a function that when called will block waiting for the Datasets. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*DatasetLoader) LoadThunk

func (l *DatasetLoader) LoadThunk(key id.DatasetID) func() (*gqlmodel.Dataset, error)

LoadThunk returns a function that when called will block waiting for a Dataset. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*DatasetLoader) Prime

func (l *DatasetLoader) Prime(key id.DatasetID, value *gqlmodel.Dataset) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type DatasetLoaderConfig

type DatasetLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []id.DatasetID) ([]*gqlmodel.Dataset, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch, 0 = no limit
	MaxBatch int
}

DatasetLoaderConfig captures the config to create a new DatasetLoader

type DatasetSchemaLoader

type DatasetSchemaLoader struct {
	// contains filtered or unexported fields
}

DatasetSchemaLoader batches and caches requests

func NewDatasetSchemaLoader

func NewDatasetSchemaLoader(config DatasetSchemaLoaderConfig) *DatasetSchemaLoader

NewDatasetSchemaLoader creates a new DatasetSchemaLoader given a fetch, wait, and maxBatch

func (*DatasetSchemaLoader) Clear

func (l *DatasetSchemaLoader) Clear(key id.DatasetSchemaID)

Clear the value at key from the cache, if it exists

func (*DatasetSchemaLoader) Load

func (l *DatasetSchemaLoader) Load(key id.DatasetSchemaID) (*gqlmodel.DatasetSchema, error)

Load a DatasetSchema by key; batching and caching will be applied automatically

func (*DatasetSchemaLoader) LoadAll

func (l *DatasetSchemaLoader) LoadAll(keys []id.DatasetSchemaID) ([]*gqlmodel.DatasetSchema, []error)

LoadAll fetches many keys at once. It will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*DatasetSchemaLoader) LoadAllThunk

func (l *DatasetSchemaLoader) LoadAllThunk(keys []id.DatasetSchemaID) func() ([]*gqlmodel.DatasetSchema, []error)

LoadAllThunk returns a function that when called will block waiting for the DatasetSchemas. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*DatasetSchemaLoader) LoadThunk

func (l *DatasetSchemaLoader) LoadThunk(key id.DatasetSchemaID) func() (*gqlmodel.DatasetSchema, error)

LoadThunk returns a function that when called will block waiting for a DatasetSchema. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*DatasetSchemaLoader) Prime

func (l *DatasetSchemaLoader) Prime(key id.DatasetSchemaID, value *gqlmodel.DatasetSchema) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type DatasetSchemaLoaderConfig

type DatasetSchemaLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []id.DatasetSchemaID) ([]*gqlmodel.DatasetSchema, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch, 0 = no limit
	MaxBatch int
}

DatasetSchemaLoaderConfig captures the config to create a new DatasetSchemaLoader

type LayerGroupLoader

type LayerGroupLoader struct {
	// contains filtered or unexported fields
}

LayerGroupLoader batches and caches requests

func NewLayerGroupLoader

func NewLayerGroupLoader(config LayerGroupLoaderConfig) *LayerGroupLoader

NewLayerGroupLoader creates a new LayerGroupLoader given a fetch, wait, and maxBatch

func (*LayerGroupLoader) Clear

func (l *LayerGroupLoader) Clear(key id.LayerID)

Clear the value at key from the cache, if it exists

func (*LayerGroupLoader) Load

func (l *LayerGroupLoader) Load(key id.LayerID) (*gqlmodel.LayerGroup, error)

Load a LayerGroup by key; batching and caching will be applied automatically

func (*LayerGroupLoader) LoadAll

func (l *LayerGroupLoader) LoadAll(keys []id.LayerID) ([]*gqlmodel.LayerGroup, []error)

LoadAll fetches many keys at once. It will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*LayerGroupLoader) LoadAllThunk

func (l *LayerGroupLoader) LoadAllThunk(keys []id.LayerID) func() ([]*gqlmodel.LayerGroup, []error)

LoadAllThunk returns a function that when called will block waiting for the LayerGroups. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*LayerGroupLoader) LoadThunk

func (l *LayerGroupLoader) LoadThunk(key id.LayerID) func() (*gqlmodel.LayerGroup, error)

LoadThunk returns a function that when called will block waiting for a LayerGroup. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*LayerGroupLoader) Prime

func (l *LayerGroupLoader) Prime(key id.LayerID, value *gqlmodel.LayerGroup) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type LayerGroupLoaderConfig

type LayerGroupLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []id.LayerID) ([]*gqlmodel.LayerGroup, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch, 0 = no limit
	MaxBatch int
}

LayerGroupLoaderConfig captures the config to create a new LayerGroupLoader

type LayerItemLoader

type LayerItemLoader struct {
	// contains filtered or unexported fields
}

LayerItemLoader batches and caches requests

func NewLayerItemLoader

func NewLayerItemLoader(config LayerItemLoaderConfig) *LayerItemLoader

NewLayerItemLoader creates a new LayerItemLoader given a fetch, wait, and maxBatch

func (*LayerItemLoader) Clear

func (l *LayerItemLoader) Clear(key id.LayerID)

Clear the value at key from the cache, if it exists

func (*LayerItemLoader) Load

func (l *LayerItemLoader) Load(key id.LayerID) (*gqlmodel.LayerItem, error)

Load a LayerItem by key; batching and caching will be applied automatically

func (*LayerItemLoader) LoadAll

func (l *LayerItemLoader) LoadAll(keys []id.LayerID) ([]*gqlmodel.LayerItem, []error)

LoadAll fetches many keys at once. It will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*LayerItemLoader) LoadAllThunk

func (l *LayerItemLoader) LoadAllThunk(keys []id.LayerID) func() ([]*gqlmodel.LayerItem, []error)

LoadAllThunk returns a function that when called will block waiting for the LayerItems. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*LayerItemLoader) LoadThunk

func (l *LayerItemLoader) LoadThunk(key id.LayerID) func() (*gqlmodel.LayerItem, error)

LoadThunk returns a function that when called will block waiting for a LayerItem. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*LayerItemLoader) Prime

func (l *LayerItemLoader) Prime(key id.LayerID, value *gqlmodel.LayerItem) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type LayerItemLoaderConfig

type LayerItemLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []id.LayerID) ([]*gqlmodel.LayerItem, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch, 0 = no limit
	MaxBatch int
}

LayerItemLoaderConfig captures the config to create a new LayerItemLoader

type LayerLoader

type LayerLoader struct {
	// contains filtered or unexported fields
}

LayerLoader batches and caches requests

func NewLayerLoader

func NewLayerLoader(config LayerLoaderConfig) *LayerLoader

NewLayerLoader creates a new LayerLoader given a fetch, wait, and maxBatch

func (*LayerLoader) Clear

func (l *LayerLoader) Clear(key id.LayerID)

Clear the value at key from the cache, if it exists

func (*LayerLoader) Load

func (l *LayerLoader) Load(key id.LayerID) (*gqlmodel.Layer, error)

Load a Layer by key; batching and caching will be applied automatically

func (*LayerLoader) LoadAll

func (l *LayerLoader) LoadAll(keys []id.LayerID) ([]*gqlmodel.Layer, []error)

LoadAll fetches many keys at once. It will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*LayerLoader) LoadAllThunk

func (l *LayerLoader) LoadAllThunk(keys []id.LayerID) func() ([]*gqlmodel.Layer, []error)

LoadAllThunk returns a function that when called will block waiting for the Layers. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*LayerLoader) LoadThunk

func (l *LayerLoader) LoadThunk(key id.LayerID) func() (*gqlmodel.Layer, error)

LoadThunk returns a function that when called will block waiting for a Layer. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*LayerLoader) Prime

func (l *LayerLoader) Prime(key id.LayerID, value *gqlmodel.Layer) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type LayerLoaderConfig

type LayerLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []id.LayerID) ([]*gqlmodel.Layer, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch, 0 = no limit
	MaxBatch int
}

LayerLoaderConfig captures the config to create a new LayerLoader

type PluginLoader

type PluginLoader struct {
	// contains filtered or unexported fields
}

PluginLoader batches and caches requests

func NewPluginLoader

func NewPluginLoader(config PluginLoaderConfig) *PluginLoader

NewPluginLoader creates a new PluginLoader given a fetch, wait, and maxBatch

func (*PluginLoader) Clear

func (l *PluginLoader) Clear(key id.PluginID)

Clear the value at key from the cache, if it exists

func (*PluginLoader) Load

func (l *PluginLoader) Load(key id.PluginID) (*gqlmodel.Plugin, error)

Load a Plugin by key; batching and caching will be applied automatically

func (*PluginLoader) LoadAll

func (l *PluginLoader) LoadAll(keys []id.PluginID) ([]*gqlmodel.Plugin, []error)

LoadAll fetches many keys at once. It will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*PluginLoader) LoadAllThunk

func (l *PluginLoader) LoadAllThunk(keys []id.PluginID) func() ([]*gqlmodel.Plugin, []error)

LoadAllThunk returns a function that when called will block waiting for the Plugins. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*PluginLoader) LoadThunk

func (l *PluginLoader) LoadThunk(key id.PluginID) func() (*gqlmodel.Plugin, error)

LoadThunk returns a function that when called will block waiting for a Plugin. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*PluginLoader) Prime

func (l *PluginLoader) Prime(key id.PluginID, value *gqlmodel.Plugin) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type PluginLoaderConfig

type PluginLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []id.PluginID) ([]*gqlmodel.Plugin, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch, 0 = no limit
	MaxBatch int
}

PluginLoaderConfig captures the config to create a new PluginLoader

type ProjectLoader

type ProjectLoader struct {
	// contains filtered or unexported fields
}

ProjectLoader batches and caches requests

func NewProjectLoader

func NewProjectLoader(config ProjectLoaderConfig) *ProjectLoader

NewProjectLoader creates a new ProjectLoader given a fetch, wait, and maxBatch

func (*ProjectLoader) Clear

func (l *ProjectLoader) Clear(key id.ProjectID)

Clear the value at key from the cache, if it exists

func (*ProjectLoader) Load

func (l *ProjectLoader) Load(key id.ProjectID) (*gqlmodel.Project, error)

Load a Project by key; batching and caching will be applied automatically

func (*ProjectLoader) LoadAll

func (l *ProjectLoader) LoadAll(keys []id.ProjectID) ([]*gqlmodel.Project, []error)

LoadAll fetches many keys at once. It will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*ProjectLoader) LoadAllThunk

func (l *ProjectLoader) LoadAllThunk(keys []id.ProjectID) func() ([]*gqlmodel.Project, []error)

LoadAllThunk returns a function that when called will block waiting for the Projects. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*ProjectLoader) LoadThunk

func (l *ProjectLoader) LoadThunk(key id.ProjectID) func() (*gqlmodel.Project, error)

LoadThunk returns a function that when called will block waiting for a Project. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*ProjectLoader) Prime

func (l *ProjectLoader) Prime(key id.ProjectID, value *gqlmodel.Project) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type ProjectLoaderConfig

type ProjectLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []id.ProjectID) ([]*gqlmodel.Project, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch, 0 = no limit
	MaxBatch int
}

ProjectLoaderConfig captures the config to create a new ProjectLoader

type PropertyLoader

type PropertyLoader struct {
	// contains filtered or unexported fields
}

PropertyLoader batches and caches requests

func NewPropertyLoader

func NewPropertyLoader(config PropertyLoaderConfig) *PropertyLoader

NewPropertyLoader creates a new PropertyLoader given a fetch, wait, and maxBatch

func (*PropertyLoader) Clear

func (l *PropertyLoader) Clear(key id.PropertyID)

Clear the value at key from the cache, if it exists

func (*PropertyLoader) Load

func (l *PropertyLoader) Load(key id.PropertyID) (*gqlmodel.Property, error)

Load a Property by key; batching and caching will be applied automatically

func (*PropertyLoader) LoadAll

func (l *PropertyLoader) LoadAll(keys []id.PropertyID) ([]*gqlmodel.Property, []error)

LoadAll fetches many keys at once. It will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*PropertyLoader) LoadAllThunk

func (l *PropertyLoader) LoadAllThunk(keys []id.PropertyID) func() ([]*gqlmodel.Property, []error)

LoadAllThunk returns a function that when called will block waiting for the Properties. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*PropertyLoader) LoadThunk

func (l *PropertyLoader) LoadThunk(key id.PropertyID) func() (*gqlmodel.Property, error)

LoadThunk returns a function that when called will block waiting for a Property. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*PropertyLoader) Prime

func (l *PropertyLoader) Prime(key id.PropertyID, value *gqlmodel.Property) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type PropertyLoaderConfig

type PropertyLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []id.PropertyID) ([]*gqlmodel.Property, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch, 0 = no limit
	MaxBatch int
}

PropertyLoaderConfig captures the config to create a new PropertyLoader

type PropertySchemaLoader

type PropertySchemaLoader struct {
	// contains filtered or unexported fields
}

PropertySchemaLoader batches and caches requests

func NewPropertySchemaLoader

func NewPropertySchemaLoader(config PropertySchemaLoaderConfig) *PropertySchemaLoader

NewPropertySchemaLoader creates a new PropertySchemaLoader given a fetch, wait, and maxBatch

func (*PropertySchemaLoader) Clear

func (l *PropertySchemaLoader) Clear(key id.PropertySchemaID)

Clear the value at key from the cache, if it exists

func (*PropertySchemaLoader) Load

func (l *PropertySchemaLoader) Load(key id.PropertySchemaID) (*gqlmodel.PropertySchema, error)

Load a PropertySchema by key; batching and caching will be applied automatically

func (*PropertySchemaLoader) LoadAll

func (l *PropertySchemaLoader) LoadAll(keys []id.PropertySchemaID) ([]*gqlmodel.PropertySchema, []error)

LoadAll fetches many keys at once. It will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*PropertySchemaLoader) LoadAllThunk

func (l *PropertySchemaLoader) LoadAllThunk(keys []id.PropertySchemaID) func() ([]*gqlmodel.PropertySchema, []error)

LoadAllThunk returns a function that when called will block waiting for the PropertySchemas. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*PropertySchemaLoader) LoadThunk

func (l *PropertySchemaLoader) LoadThunk(key id.PropertySchemaID) func() (*gqlmodel.PropertySchema, error)

LoadThunk returns a function that when called will block waiting for a PropertySchema. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*PropertySchemaLoader) Prime

func (l *PropertySchemaLoader) Prime(key id.PropertySchemaID, value *gqlmodel.PropertySchema) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type PropertySchemaLoaderConfig

type PropertySchemaLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []id.PropertySchemaID) ([]*gqlmodel.PropertySchema, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch, 0 = no limit
	MaxBatch int
}

PropertySchemaLoaderConfig captures the config to create a new PropertySchemaLoader

type SceneLoader

type SceneLoader struct {
	// contains filtered or unexported fields
}

SceneLoader batches and caches requests

func NewSceneLoader

func NewSceneLoader(config SceneLoaderConfig) *SceneLoader

NewSceneLoader creates a new SceneLoader given a fetch, wait, and maxBatch

func (*SceneLoader) Clear

func (l *SceneLoader) Clear(key id.SceneID)

Clear the value at key from the cache, if it exists

func (*SceneLoader) Load

func (l *SceneLoader) Load(key id.SceneID) (*gqlmodel.Scene, error)

Load a Scene by key; batching and caching will be applied automatically

func (*SceneLoader) LoadAll

func (l *SceneLoader) LoadAll(keys []id.SceneID) ([]*gqlmodel.Scene, []error)

LoadAll fetches many keys at once. It will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*SceneLoader) LoadAllThunk

func (l *SceneLoader) LoadAllThunk(keys []id.SceneID) func() ([]*gqlmodel.Scene, []error)

LoadAllThunk returns a function that when called will block waiting for the Scenes. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*SceneLoader) LoadThunk

func (l *SceneLoader) LoadThunk(key id.SceneID) func() (*gqlmodel.Scene, error)

LoadThunk returns a function that when called will block waiting for a Scene. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*SceneLoader) Prime

func (l *SceneLoader) Prime(key id.SceneID, value *gqlmodel.Scene) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type SceneLoaderConfig

type SceneLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []id.SceneID) ([]*gqlmodel.Scene, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch, 0 = no limit
	MaxBatch int
}

SceneLoaderConfig captures the config to create a new SceneLoader

type TagGroupLoader

type TagGroupLoader struct {
	// contains filtered or unexported fields
}

TagGroupLoader batches and caches requests

func NewTagGroupLoader

func NewTagGroupLoader(config TagGroupLoaderConfig) *TagGroupLoader

NewTagGroupLoader creates a new TagGroupLoader given a fetch, wait, and maxBatch

func (*TagGroupLoader) Clear

func (l *TagGroupLoader) Clear(key id.TagID)

Clear the value at key from the cache, if it exists

func (*TagGroupLoader) Load

func (l *TagGroupLoader) Load(key id.TagID) (*gqlmodel.TagGroup, error)

Load a TagGroup by key; batching and caching will be applied automatically

func (*TagGroupLoader) LoadAll

func (l *TagGroupLoader) LoadAll(keys []id.TagID) ([]*gqlmodel.TagGroup, []error)

LoadAll fetches many keys at once. It will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*TagGroupLoader) LoadAllThunk

func (l *TagGroupLoader) LoadAllThunk(keys []id.TagID) func() ([]*gqlmodel.TagGroup, []error)

LoadAllThunk returns a function that when called will block waiting for the TagGroups. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TagGroupLoader) LoadThunk

func (l *TagGroupLoader) LoadThunk(key id.TagID) func() (*gqlmodel.TagGroup, error)

LoadThunk returns a function that when called will block waiting for a TagGroup. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TagGroupLoader) Prime

func (l *TagGroupLoader) Prime(key id.TagID, value *gqlmodel.TagGroup) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type TagGroupLoaderConfig

type TagGroupLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []id.TagID) ([]*gqlmodel.TagGroup, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch, 0 = no limit
	MaxBatch int
}

TagGroupLoaderConfig captures the config to create a new TagGroupLoader

type TagItemLoader

type TagItemLoader struct {
	// contains filtered or unexported fields
}

TagItemLoader batches and caches requests

func NewTagItemLoader

func NewTagItemLoader(config TagItemLoaderConfig) *TagItemLoader

NewTagItemLoader creates a new TagItemLoader given a fetch, wait, and maxBatch

func (*TagItemLoader) Clear

func (l *TagItemLoader) Clear(key id.TagID)

Clear the value at key from the cache, if it exists

func (*TagItemLoader) Load

func (l *TagItemLoader) Load(key id.TagID) (*gqlmodel.TagItem, error)

Load a TagItem by key; batching and caching will be applied automatically

func (*TagItemLoader) LoadAll

func (l *TagItemLoader) LoadAll(keys []id.TagID) ([]*gqlmodel.TagItem, []error)

LoadAll fetches many keys at once. It will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*TagItemLoader) LoadAllThunk

func (l *TagItemLoader) LoadAllThunk(keys []id.TagID) func() ([]*gqlmodel.TagItem, []error)

LoadAllThunk returns a function that when called will block waiting for the TagItems. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TagItemLoader) LoadThunk

func (l *TagItemLoader) LoadThunk(key id.TagID) func() (*gqlmodel.TagItem, error)

LoadThunk returns a function that when called will block waiting for a TagItem. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TagItemLoader) Prime

func (l *TagItemLoader) Prime(key id.TagID, value *gqlmodel.TagItem) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type TagItemLoaderConfig

type TagItemLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []id.TagID) ([]*gqlmodel.TagItem, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch, 0 = no limit
	MaxBatch int
}

TagItemLoaderConfig captures the config to create a new TagItemLoader

type TagLoader

type TagLoader struct {
	// contains filtered or unexported fields
}

TagLoader batches and caches requests

func NewTagLoader

func NewTagLoader(config TagLoaderConfig) *TagLoader

NewTagLoader creates a new TagLoader given a fetch, wait, and maxBatch

func (*TagLoader) Clear

func (l *TagLoader) Clear(key id.TagID)

Clear the value at key from the cache, if it exists

func (*TagLoader) Load

func (l *TagLoader) Load(key id.TagID) (*gqlmodel.Tag, error)

Load a Tag by key; batching and caching will be applied automatically

func (*TagLoader) LoadAll

func (l *TagLoader) LoadAll(keys []id.TagID) ([]*gqlmodel.Tag, []error)

LoadAll fetches many keys at once. It will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*TagLoader) LoadAllThunk

func (l *TagLoader) LoadAllThunk(keys []id.TagID) func() ([]*gqlmodel.Tag, []error)

LoadAllThunk returns a function that when called will block waiting for the Tags. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TagLoader) LoadThunk

func (l *TagLoader) LoadThunk(key id.TagID) func() (*gqlmodel.Tag, error)

LoadThunk returns a function that when called will block waiting for a Tag. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TagLoader) Prime

func (l *TagLoader) Prime(key id.TagID, value *gqlmodel.Tag) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type TagLoaderConfig

type TagLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []id.TagID) ([]*gqlmodel.Tag, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch, 0 = no limit
	MaxBatch int
}

TagLoaderConfig captures the config to create a new TagLoader

type TeamLoader

type TeamLoader struct {
	// contains filtered or unexported fields
}

TeamLoader batches and caches requests

func NewTeamLoader

func NewTeamLoader(config TeamLoaderConfig) *TeamLoader

NewTeamLoader creates a new TeamLoader given a fetch, wait, and maxBatch

func (*TeamLoader) Clear

func (l *TeamLoader) Clear(key id.TeamID)

Clear the value at key from the cache, if it exists

func (*TeamLoader) Load

func (l *TeamLoader) Load(key id.TeamID) (*gqlmodel.Team, error)

Load a Team by key; batching and caching will be applied automatically

func (*TeamLoader) LoadAll

func (l *TeamLoader) LoadAll(keys []id.TeamID) ([]*gqlmodel.Team, []error)

LoadAll fetches many keys at once. It will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*TeamLoader) LoadAllThunk

func (l *TeamLoader) LoadAllThunk(keys []id.TeamID) func() ([]*gqlmodel.Team, []error)

LoadAllThunk returns a function that when called will block waiting for the Teams. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TeamLoader) LoadThunk

func (l *TeamLoader) LoadThunk(key id.TeamID) func() (*gqlmodel.Team, error)

LoadThunk returns a function that when called will block waiting for a Team. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TeamLoader) Prime

func (l *TeamLoader) Prime(key id.TeamID, value *gqlmodel.Team) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type TeamLoaderConfig

type TeamLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []id.TeamID) ([]*gqlmodel.Team, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch, 0 = no limit
	MaxBatch int
}

TeamLoaderConfig captures the config to create a new TeamLoader

type UserLoader

type UserLoader struct {
	// contains filtered or unexported fields
}

UserLoader batches and caches requests

func NewUserLoader

func NewUserLoader(config UserLoaderConfig) *UserLoader

NewUserLoader creates a new UserLoader given a fetch, wait, and maxBatch

func (*UserLoader) Clear

func (l *UserLoader) Clear(key id.UserID)

Clear the value at key from the cache, if it exists

func (*UserLoader) Load

func (l *UserLoader) Load(key id.UserID) (*gqlmodel.User, error)

Load a User by key; batching and caching will be applied automatically

func (*UserLoader) LoadAll

func (l *UserLoader) LoadAll(keys []id.UserID) ([]*gqlmodel.User, []error)

LoadAll fetches many keys at once. It will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*UserLoader) LoadAllThunk

func (l *UserLoader) LoadAllThunk(keys []id.UserID) func() ([]*gqlmodel.User, []error)

LoadAllThunk returns a function that when called will block waiting for the Users. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*UserLoader) LoadThunk

func (l *UserLoader) LoadThunk(key id.UserID) func() (*gqlmodel.User, error)

LoadThunk returns a function that when called will block waiting for a User. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*UserLoader) Prime

func (l *UserLoader) Prime(key id.UserID, value *gqlmodel.User) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type UserLoaderConfig

type UserLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []id.UserID) ([]*gqlmodel.User, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch, 0 = no limit
	MaxBatch int
}

UserLoaderConfig captures the config to create a new UserLoader
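
All of the loaders above follow the same pattern, and they are typically created per request so that the cache does not outlive it. A hypothetical wiring sketch, written as if inside the package; the Loaders container and the fetch functions are assumptions, not part of this API.

// Loaders is a hypothetical per-request container for a few of the loaders.
type Loaders struct {
	Asset *AssetLoader
	User  *UserLoader
}

// newLoaders builds fresh loaders for one request; fetchAssets and fetchUsers
// stand in for the application's batch queries.
func newLoaders(
	fetchAssets func(keys []id.AssetID) ([]*gqlmodel.Asset, []error),
	fetchUsers func(keys []id.UserID) ([]*gqlmodel.User, []error),
) *Loaders {
	return &Loaders{
		Asset: NewAssetLoader(AssetLoaderConfig{Fetch: fetchAssets, Wait: time.Millisecond, MaxBatch: 100}),
		User:  NewUserLoader(UserLoaderConfig{Fetch: fetchUsers, Wait: time.Millisecond, MaxBatch: 100}),
	}
}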
