loaders

package v0.25.1
Published: Mar 13, 2024 License: AGPL-3.0 Imports: 5 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type FileLoader

type FileLoader struct {
	// contains filtered or unexported fields
}

FileLoader batches and caches requests

func NewFileLoader

func NewFileLoader(config FileLoaderConfig) *FileLoader

NewFileLoader creates a new FileLoader given a fetch, wait, and maxBatch

func (*FileLoader) Clear

func (l *FileLoader) Clear(key models.FileID)

Clear the value at key from the cache, if it exists

func (*FileLoader) Load

func (l *FileLoader) Load(key models.FileID) (models.File, error)

Load a File by key; batching and caching will be applied automatically

func (*FileLoader) LoadAll

func (l *FileLoader) LoadAll(keys []models.FileID) ([]models.File, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured
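
A minimal sketch of consuming the parallel result and error slices returned by LoadAll (loadFiles and ids are illustrative, not part of the package):

	func loadFiles(l *FileLoader, ids []models.FileID) map[models.FileID]models.File {
		// LoadAll returns one result and one error per key, in key order.
		files, errs := l.LoadAll(ids)

		out := make(map[models.FileID]models.File, len(ids))
		for i, id := range ids {
			if errs[i] != nil {
				continue // this key failed; the remaining keys are still usable
			}
			out[id] = files[i]
		}
		return out
	}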

func (*FileLoader) LoadAllThunk

func (l *FileLoader) LoadAllThunk(keys []models.FileID) func() ([]models.File, []error)

LoadAllThunk returns a function that when called will block waiting for the Files. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*FileLoader) LoadThunk

func (l *FileLoader) LoadThunk(key models.FileID) func() (models.File, error)

LoadThunk returns a function that when called will block waiting for a File. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.
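
A minimal sketch of the thunk pattern, queueing several keys before blocking on any of them (loadFilesViaThunks and fileIDs are illustrative, not part of the package):

	func loadFilesViaThunks(l *FileLoader, fileIDs []models.FileID) ([]models.File, []error) {
		// Queue all requests first; nothing blocks yet.
		thunks := make([]func() (models.File, error), 0, len(fileIDs))
		for _, id := range fileIDs {
			thunks = append(thunks, l.LoadThunk(id))
		}

		// Resolve afterwards; each call blocks until its batch has been fetched.
		files := make([]models.File, len(thunks))
		errs := make([]error, len(thunks))
		for i, thunk := range thunks {
			files[i], errs[i] = thunk()
		}
		return files, errs
	}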

func (*FileLoader) Prime

func (l *FileLoader) Prime(key models.FileID, value models.File) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)
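
Because Prime never overwrites an existing entry, forcing a value into the cache means clearing the key first. A minimal sketch (forcePrime is illustrative, not part of the package):

	func forcePrime(l *FileLoader, key models.FileID, value models.File) {
		l.Clear(key)        // drop any cached entry for key
		l.Prime(key, value) // now stores value and returns true
	}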

type FileLoaderConfig

type FileLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []models.FileID) ([]models.File, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

FileLoaderConfig captures the config to create a new FileLoader
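
A minimal sketch of constructing a FileLoader, assuming a fetchFiles function that returns one models.File (or error) per key, in the same order as the keys, and the usual models and time imports:

	func newFileLoader(fetchFiles func(keys []models.FileID) ([]models.File, []error)) *FileLoader {
		return NewFileLoader(FileLoaderConfig{
			Fetch:    fetchFiles,
			Wait:     2 * time.Millisecond, // collect keys for up to 2ms before fetching
			MaxBatch: 100,                  // at most 100 keys per batch; 0 would mean no limit
		})
	}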

type GalleryFileIDsLoader

type GalleryFileIDsLoader struct {
	// contains filtered or unexported fields
}

GalleryFileIDsLoader batches and caches requests

func NewGalleryFileIDsLoader

func NewGalleryFileIDsLoader(config GalleryFileIDsLoaderConfig) *GalleryFileIDsLoader

NewGalleryFileIDsLoader creates a new GalleryFileIDsLoader given a fetch, wait, and maxBatch

func (*GalleryFileIDsLoader) Clear

func (l *GalleryFileIDsLoader) Clear(key int)

Clear the value at key from the cache, if it exists

func (*GalleryFileIDsLoader) Load

func (l *GalleryFileIDsLoader) Load(key int) ([]models.FileID, error)

Load FileIDs by key; batching and caching will be applied automatically

func (*GalleryFileIDsLoader) LoadAll

func (l *GalleryFileIDsLoader) LoadAll(keys []int) ([][]models.FileID, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*GalleryFileIDsLoader) LoadAllThunk

func (l *GalleryFileIDsLoader) LoadAllThunk(keys []int) func() ([][]models.FileID, []error)

LoadAllThunk returns a function that when called will block waiting for the FileIDs. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*GalleryFileIDsLoader) LoadThunk

func (l *GalleryFileIDsLoader) LoadThunk(key int) func() ([]models.FileID, error)

LoadThunk returns a function that when called will block waiting for the FileIDs. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*GalleryFileIDsLoader) Prime

func (l *GalleryFileIDsLoader) Prime(key int, value []models.FileID) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type GalleryFileIDsLoaderConfig

type GalleryFileIDsLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []int) ([][]models.FileID, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

GalleryFileIDsLoaderConfig captures the config to create a new GalleryFileIDsLoader

type GalleryLoader

type GalleryLoader struct {
	// contains filtered or unexported fields
}

GalleryLoader batches and caches requests

func NewGalleryLoader

func NewGalleryLoader(config GalleryLoaderConfig) *GalleryLoader

NewGalleryLoader creates a new GalleryLoader given a fetch, wait, and maxBatch

func (*GalleryLoader) Clear

func (l *GalleryLoader) Clear(key int)

Clear the value at key from the cache, if it exists

func (*GalleryLoader) Load

func (l *GalleryLoader) Load(key int) (*models.Gallery, error)

Load a Gallery by key; batching and caching will be applied automatically

func (*GalleryLoader) LoadAll

func (l *GalleryLoader) LoadAll(keys []int) ([]*models.Gallery, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*GalleryLoader) LoadAllThunk

func (l *GalleryLoader) LoadAllThunk(keys []int) func() ([]*models.Gallery, []error)

LoadAllThunk returns a function that when called will block waiting for the Galleries. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*GalleryLoader) LoadThunk

func (l *GalleryLoader) LoadThunk(key int) func() (*models.Gallery, error)

LoadThunk returns a function that when called will block waiting for a Gallery. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*GalleryLoader) Prime

func (l *GalleryLoader) Prime(key int, value *models.Gallery) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type GalleryLoaderConfig

type GalleryLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []int) ([]*models.Gallery, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

GalleryLoaderConfig captures the config to create a new GalleryLoader

type ImageFileIDsLoader

type ImageFileIDsLoader struct {
	// contains filtered or unexported fields
}

ImageFileIDsLoader batches and caches requests

func NewImageFileIDsLoader

func NewImageFileIDsLoader(config ImageFileIDsLoaderConfig) *ImageFileIDsLoader

NewImageFileIDsLoader creates a new ImageFileIDsLoader given a fetch, wait, and maxBatch

func (*ImageFileIDsLoader) Clear

func (l *ImageFileIDsLoader) Clear(key int)

Clear the value at key from the cache, if it exists

func (*ImageFileIDsLoader) Load

func (l *ImageFileIDsLoader) Load(key int) ([]models.FileID, error)

Load FileIDs by key; batching and caching will be applied automatically

func (*ImageFileIDsLoader) LoadAll

func (l *ImageFileIDsLoader) LoadAll(keys []int) ([][]models.FileID, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*ImageFileIDsLoader) LoadAllThunk

func (l *ImageFileIDsLoader) LoadAllThunk(keys []int) func() ([][]models.FileID, []error)

LoadAllThunk returns a function that when called will block waiting for the FileIDs. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*ImageFileIDsLoader) LoadThunk

func (l *ImageFileIDsLoader) LoadThunk(key int) func() ([]models.FileID, error)

LoadThunk returns a function that when called will block waiting for the FileIDs. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*ImageFileIDsLoader) Prime

func (l *ImageFileIDsLoader) Prime(key int, value []models.FileID) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type ImageFileIDsLoaderConfig

type ImageFileIDsLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []int) ([][]models.FileID, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

ImageFileIDsLoaderConfig captures the config to create a new ImageFileIDsLoader

type ImageLoader

type ImageLoader struct {
	// contains filtered or unexported fields
}

ImageLoader batches and caches requests

func NewImageLoader

func NewImageLoader(config ImageLoaderConfig) *ImageLoader

NewImageLoader creates a new ImageLoader given a fetch, wait, and maxBatch

func (*ImageLoader) Clear

func (l *ImageLoader) Clear(key int)

Clear the value at key from the cache, if it exists

func (*ImageLoader) Load

func (l *ImageLoader) Load(key int) (*models.Image, error)

Load an Image by key; batching and caching will be applied automatically

func (*ImageLoader) LoadAll

func (l *ImageLoader) LoadAll(keys []int) ([]*models.Image, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*ImageLoader) LoadAllThunk

func (l *ImageLoader) LoadAllThunk(keys []int) func() ([]*models.Image, []error)

LoadAllThunk returns a function that when called will block waiting for the Images. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*ImageLoader) LoadThunk

func (l *ImageLoader) LoadThunk(key int) func() (*models.Image, error)

LoadThunk returns a function that when called will block waiting for an Image. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*ImageLoader) Prime

func (l *ImageLoader) Prime(key int, value *models.Image) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type ImageLoaderConfig

type ImageLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []int) ([]*models.Image, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

ImageLoaderConfig captures the config to create a new ImageLoader

type Loaders

type Loaders struct {
	SceneByID        *SceneLoader
	SceneFiles       *SceneFileIDsLoader
	ScenePlayCount   *ScenePlayCountLoader
	SceneOCount      *SceneOCountLoader
	ScenePlayHistory *ScenePlayHistoryLoader
	SceneOHistory    *SceneOHistoryLoader
	SceneLastPlayed  *SceneLastPlayedLoader

	ImageFiles   *ImageFileIDsLoader
	GalleryFiles *GalleryFileIDsLoader

	GalleryByID   *GalleryLoader
	ImageByID     *ImageLoader
	PerformerByID *PerformerLoader
	StudioByID    *StudioLoader
	TagByID       *TagLoader
	MovieByID     *MovieLoader
	FileByID      *FileLoader
}

func From

func From(ctx context.Context) Loaders
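
From presumably returns the request-scoped Loaders that Middleware installs into the request context. A minimal sketch of using it from a resolver (resolveSceneStudio and studioID are illustrative):

	func resolveSceneStudio(ctx context.Context, studioID int) (*models.Studio, error) {
		// Use the request-scoped StudioLoader so repeated lookups are batched and cached.
		return From(ctx).StudioByID.Load(studioID)
	}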

type Middleware

type Middleware struct {
	Repository models.Repository
}

func (Middleware) Middleware

func (m Middleware) Middleware(next http.Handler) http.Handler
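
Middleware presumably creates a fresh set of Loaders backed by Repository for each request and stores them in the request context for From to retrieve. A minimal wiring sketch (wrapWithLoaders, repo, and apiHandler are illustrative):

	func wrapWithLoaders(repo models.Repository, apiHandler http.Handler) http.Handler {
		mw := Middleware{Repository: repo}
		return mw.Middleware(apiHandler)
	}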

type MovieLoader

type MovieLoader struct {
	// contains filtered or unexported fields
}

MovieLoader batches and caches requests

func NewMovieLoader

func NewMovieLoader(config MovieLoaderConfig) *MovieLoader

NewMovieLoader creates a new MovieLoader given a fetch, wait, and maxBatch

func (*MovieLoader) Clear

func (l *MovieLoader) Clear(key int)

Clear the value at key from the cache, if it exists

func (*MovieLoader) Load

func (l *MovieLoader) Load(key int) (*models.Movie, error)

Load a Movie by key; batching and caching will be applied automatically

func (*MovieLoader) LoadAll

func (l *MovieLoader) LoadAll(keys []int) ([]*models.Movie, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*MovieLoader) LoadAllThunk

func (l *MovieLoader) LoadAllThunk(keys []int) func() ([]*models.Movie, []error)

LoadAllThunk returns a function that when called will block waiting for the Movies. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*MovieLoader) LoadThunk

func (l *MovieLoader) LoadThunk(key int) func() (*models.Movie, error)

LoadThunk returns a function that when called will block waiting for a Movie. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*MovieLoader) Prime

func (l *MovieLoader) Prime(key int, value *models.Movie) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type MovieLoaderConfig

type MovieLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []int) ([]*models.Movie, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

MovieLoaderConfig captures the config to create a new MovieLoader

type PerformerLoader

type PerformerLoader struct {
	// contains filtered or unexported fields
}

PerformerLoader batches and caches requests

func NewPerformerLoader

func NewPerformerLoader(config PerformerLoaderConfig) *PerformerLoader

NewPerformerLoader creates a new PerformerLoader given a fetch, wait, and maxBatch

func (*PerformerLoader) Clear

func (l *PerformerLoader) Clear(key int)

Clear the value at key from the cache, if it exists

func (*PerformerLoader) Load

func (l *PerformerLoader) Load(key int) (*models.Performer, error)

Load a Performer by key; batching and caching will be applied automatically

func (*PerformerLoader) LoadAll

func (l *PerformerLoader) LoadAll(keys []int) ([]*models.Performer, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*PerformerLoader) LoadAllThunk

func (l *PerformerLoader) LoadAllThunk(keys []int) func() ([]*models.Performer, []error)

LoadAllThunk returns a function that when called will block waiting for the Performers. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*PerformerLoader) LoadThunk

func (l *PerformerLoader) LoadThunk(key int) func() (*models.Performer, error)

LoadThunk returns a function that when called will block waiting for a Performer. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*PerformerLoader) Prime

func (l *PerformerLoader) Prime(key int, value *models.Performer) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type PerformerLoaderConfig

type PerformerLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []int) ([]*models.Performer, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

PerformerLoaderConfig captures the config to create a new PerformerLoader

type SceneFileIDsLoader

type SceneFileIDsLoader struct {
	// contains filtered or unexported fields
}

SceneFileIDsLoader batches and caches requests

func NewSceneFileIDsLoader

func NewSceneFileIDsLoader(config SceneFileIDsLoaderConfig) *SceneFileIDsLoader

NewSceneFileIDsLoader creates a new SceneFileIDsLoader given a fetch, wait, and maxBatch

func (*SceneFileIDsLoader) Clear

func (l *SceneFileIDsLoader) Clear(key int)

Clear the value at key from the cache, if it exists

func (*SceneFileIDsLoader) Load

func (l *SceneFileIDsLoader) Load(key int) ([]models.FileID, error)

Load FileIDs by key; batching and caching will be applied automatically

func (*SceneFileIDsLoader) LoadAll

func (l *SceneFileIDsLoader) LoadAll(keys []int) ([][]models.FileID, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*SceneFileIDsLoader) LoadAllThunk

func (l *SceneFileIDsLoader) LoadAllThunk(keys []int) func() ([][]models.FileID, []error)

LoadAllThunk returns a function that when called will block waiting for the FileIDs. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*SceneFileIDsLoader) LoadThunk

func (l *SceneFileIDsLoader) LoadThunk(key int) func() ([]models.FileID, error)

LoadThunk returns a function that when called will block waiting for the FileIDs. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*SceneFileIDsLoader) Prime

func (l *SceneFileIDsLoader) Prime(key int, value []models.FileID) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type SceneFileIDsLoaderConfig

type SceneFileIDsLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []int) ([][]models.FileID, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

SceneFileIDsLoaderConfig captures the config to create a new SceneFileIDsLoader

type SceneLastPlayedLoader added in v0.25.0

type SceneLastPlayedLoader struct {
	// contains filtered or unexported fields
}

SceneLastPlayedLoader batches and caches requests

func NewSceneLastPlayedLoader added in v0.25.0

func NewSceneLastPlayedLoader(config SceneLastPlayedLoaderConfig) *SceneLastPlayedLoader

NewSceneLastPlayedLoader creates a new SceneLastPlayedLoader given a fetch, wait, and maxBatch

func (*SceneLastPlayedLoader) Clear added in v0.25.0

func (l *SceneLastPlayedLoader) Clear(key int)

Clear the value at key from the cache, if it exists

func (*SceneLastPlayedLoader) Load added in v0.25.0

func (l *SceneLastPlayedLoader) Load(key int) (*time.Time, error)

Load a Time by key; batching and caching will be applied automatically

func (*SceneLastPlayedLoader) LoadAll added in v0.25.0

func (l *SceneLastPlayedLoader) LoadAll(keys []int) ([]*time.Time, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*SceneLastPlayedLoader) LoadAllThunk added in v0.25.0

func (l *SceneLastPlayedLoader) LoadAllThunk(keys []int) func() ([]*time.Time, []error)

LoadAllThunk returns a function that when called will block waiting for the Times. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*SceneLastPlayedLoader) LoadThunk added in v0.25.0

func (l *SceneLastPlayedLoader) LoadThunk(key int) func() (*time.Time, error)

LoadThunk returns a function that when called will block waiting for a Time. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*SceneLastPlayedLoader) Prime added in v0.25.0

func (l *SceneLastPlayedLoader) Prime(key int, value *time.Time) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type SceneLastPlayedLoaderConfig added in v0.25.0

type SceneLastPlayedLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []int) ([]*time.Time, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

SceneLastPlayedLoaderConfig captures the config to create a new SceneLastPlayedLoader

type SceneLoader

type SceneLoader struct {
	// contains filtered or unexported fields
}

SceneLoader batches and caches requests

func NewSceneLoader

func NewSceneLoader(config SceneLoaderConfig) *SceneLoader

NewSceneLoader creates a new SceneLoader given a fetch, wait, and maxBatch

func (*SceneLoader) Clear

func (l *SceneLoader) Clear(key int)

Clear the value at key from the cache, if it exists

func (*SceneLoader) Load

func (l *SceneLoader) Load(key int) (*models.Scene, error)

Load a Scene by key; batching and caching will be applied automatically

func (*SceneLoader) LoadAll

func (l *SceneLoader) LoadAll(keys []int) ([]*models.Scene, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*SceneLoader) LoadAllThunk

func (l *SceneLoader) LoadAllThunk(keys []int) func() ([]*models.Scene, []error)

LoadAllThunk returns a function that when called will block waiting for the Scenes. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*SceneLoader) LoadThunk

func (l *SceneLoader) LoadThunk(key int) func() (*models.Scene, error)

LoadThunk returns a function that when called will block waiting for a Scene. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*SceneLoader) Prime

func (l *SceneLoader) Prime(key int, value *models.Scene) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type SceneLoaderConfig

type SceneLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []int) ([]*models.Scene, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

SceneLoaderConfig captures the config to create a new SceneLoader

type SceneOCountLoader added in v0.25.0

type SceneOCountLoader struct {
	// contains filtered or unexported fields
}

SceneOCountLoader batches and caches requests

func NewSceneOCountLoader added in v0.25.0

func NewSceneOCountLoader(config SceneOCountLoaderConfig) *SceneOCountLoader

NewSceneOCountLoader creates a new SceneOCountLoader given a fetch, wait, and maxBatch

func (*SceneOCountLoader) Clear added in v0.25.0

func (l *SceneOCountLoader) Clear(key int)

Clear the value at key from the cache, if it exists

func (*SceneOCountLoader) Load added in v0.25.0

func (l *SceneOCountLoader) Load(key int) (int, error)

Load an int by key; batching and caching will be applied automatically

func (*SceneOCountLoader) LoadAll added in v0.25.0

func (l *SceneOCountLoader) LoadAll(keys []int) ([]int, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*SceneOCountLoader) LoadAllThunk added in v0.25.0

func (l *SceneOCountLoader) LoadAllThunk(keys []int) func() ([]int, []error)

LoadAllThunk returns a function that when called will block waiting for the ints. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*SceneOCountLoader) LoadThunk added in v0.25.0

func (l *SceneOCountLoader) LoadThunk(key int) func() (int, error)

LoadThunk returns a function that when called will block waiting for an int. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*SceneOCountLoader) Prime added in v0.25.0

func (l *SceneOCountLoader) Prime(key int, value int) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type SceneOCountLoaderConfig added in v0.25.0

type SceneOCountLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []int) ([]int, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

SceneOCountLoaderConfig captures the config to create a new SceneOCountLoader

type SceneOHistoryLoader added in v0.25.0

type SceneOHistoryLoader struct {
	// contains filtered or unexported fields
}

SceneOHistoryLoader batches and caches requests

func NewSceneOHistoryLoader added in v0.25.0

func NewSceneOHistoryLoader(config SceneOHistoryLoaderConfig) *SceneOHistoryLoader

NewSceneOHistoryLoader creates a new SceneOHistoryLoader given a fetch, wait, and maxBatch

func (*SceneOHistoryLoader) Clear added in v0.25.0

func (l *SceneOHistoryLoader) Clear(key int)

Clear the value at key from the cache, if it exists

func (*SceneOHistoryLoader) Load added in v0.25.0

func (l *SceneOHistoryLoader) Load(key int) ([]time.Time, error)

Load Times by key; batching and caching will be applied automatically

func (*SceneOHistoryLoader) LoadAll added in v0.25.0

func (l *SceneOHistoryLoader) LoadAll(keys []int) ([][]time.Time, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*SceneOHistoryLoader) LoadAllThunk added in v0.25.0

func (l *SceneOHistoryLoader) LoadAllThunk(keys []int) func() ([][]time.Time, []error)

LoadAllThunk returns a function that when called will block waiting for the Times. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*SceneOHistoryLoader) LoadThunk added in v0.25.0

func (l *SceneOHistoryLoader) LoadThunk(key int) func() ([]time.Time, error)

LoadThunk returns a function that when called will block waiting for the Times. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*SceneOHistoryLoader) Prime added in v0.25.0

func (l *SceneOHistoryLoader) Prime(key int, value []time.Time) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type SceneOHistoryLoaderConfig added in v0.25.0

type SceneOHistoryLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []int) ([][]time.Time, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

SceneOHistoryLoaderConfig captures the config to create a new SceneOHistoryLoader

type ScenePlayCountLoader added in v0.25.0

type ScenePlayCountLoader struct {
	// contains filtered or unexported fields
}

ScenePlayCountLoader batches and caches requests

func NewScenePlayCountLoader added in v0.25.0

func NewScenePlayCountLoader(config ScenePlayCountLoaderConfig) *ScenePlayCountLoader

NewScenePlayCountLoader creates a new ScenePlayCountLoader given a fetch, wait, and maxBatch

func (*ScenePlayCountLoader) Clear added in v0.25.0

func (l *ScenePlayCountLoader) Clear(key int)

Clear the value at key from the cache, if it exists

func (*ScenePlayCountLoader) Load added in v0.25.0

func (l *ScenePlayCountLoader) Load(key int) (int, error)

Load an int by key; batching and caching will be applied automatically

func (*ScenePlayCountLoader) LoadAll added in v0.25.0

func (l *ScenePlayCountLoader) LoadAll(keys []int) ([]int, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*ScenePlayCountLoader) LoadAllThunk added in v0.25.0

func (l *ScenePlayCountLoader) LoadAllThunk(keys []int) func() ([]int, []error)

LoadAllThunk returns a function that when called will block waiting for the ints. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*ScenePlayCountLoader) LoadThunk added in v0.25.0

func (l *ScenePlayCountLoader) LoadThunk(key int) func() (int, error)

LoadThunk returns a function that when called will block waiting for an int. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*ScenePlayCountLoader) Prime added in v0.25.0

func (l *ScenePlayCountLoader) Prime(key int, value int) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type ScenePlayCountLoaderConfig added in v0.25.0

type ScenePlayCountLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []int) ([]int, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

ScenePlayCountLoaderConfig captures the config to create a new ScenePlayCountLoader

type ScenePlayHistoryLoader added in v0.25.0

type ScenePlayHistoryLoader struct {
	// contains filtered or unexported fields
}

ScenePlayHistoryLoader batches and caches requests

func NewScenePlayHistoryLoader added in v0.25.0

func NewScenePlayHistoryLoader(config ScenePlayHistoryLoaderConfig) *ScenePlayHistoryLoader

NewScenePlayHistoryLoader creates a new ScenePlayHistoryLoader given a fetch, wait, and maxBatch

func (*ScenePlayHistoryLoader) Clear added in v0.25.0

func (l *ScenePlayHistoryLoader) Clear(key int)

Clear the value at key from the cache, if it exists

func (*ScenePlayHistoryLoader) Load added in v0.25.0

func (l *ScenePlayHistoryLoader) Load(key int) ([]time.Time, error)

Load Times by key; batching and caching will be applied automatically

func (*ScenePlayHistoryLoader) LoadAll added in v0.25.0

func (l *ScenePlayHistoryLoader) LoadAll(keys []int) ([][]time.Time, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*ScenePlayHistoryLoader) LoadAllThunk added in v0.25.0

func (l *ScenePlayHistoryLoader) LoadAllThunk(keys []int) func() ([][]time.Time, []error)

LoadAllThunk returns a function that when called will block waiting for the Times. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*ScenePlayHistoryLoader) LoadThunk added in v0.25.0

func (l *ScenePlayHistoryLoader) LoadThunk(key int) func() ([]time.Time, error)

LoadThunk returns a function that when called will block waiting for the Times. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*ScenePlayHistoryLoader) Prime added in v0.25.0

func (l *ScenePlayHistoryLoader) Prime(key int, value []time.Time) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type ScenePlayHistoryLoaderConfig added in v0.25.0

type ScenePlayHistoryLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []int) ([][]time.Time, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

ScenePlayHistoryLoaderConfig captures the config to create a new ScenePlayHistoryLoader

type StudioLoader

type StudioLoader struct {
	// contains filtered or unexported fields
}

StudioLoader batches and caches requests

func NewStudioLoader

func NewStudioLoader(config StudioLoaderConfig) *StudioLoader

NewStudioLoader creates a new StudioLoader given a fetch, wait, and maxBatch

func (*StudioLoader) Clear

func (l *StudioLoader) Clear(key int)

Clear the value at key from the cache, if it exists

func (*StudioLoader) Load

func (l *StudioLoader) Load(key int) (*models.Studio, error)

Load a Studio by key; batching and caching will be applied automatically

func (*StudioLoader) LoadAll

func (l *StudioLoader) LoadAll(keys []int) ([]*models.Studio, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*StudioLoader) LoadAllThunk

func (l *StudioLoader) LoadAllThunk(keys []int) func() ([]*models.Studio, []error)

LoadAllThunk returns a function that when called will block waiting for the Studios. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*StudioLoader) LoadThunk

func (l *StudioLoader) LoadThunk(key int) func() (*models.Studio, error)

LoadThunk returns a function that when called will block waiting for a Studio. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*StudioLoader) Prime

func (l *StudioLoader) Prime(key int, value *models.Studio) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type StudioLoaderConfig

type StudioLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []int) ([]*models.Studio, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

StudioLoaderConfig captures the config to create a new StudioLoader

type TagLoader

type TagLoader struct {
	// contains filtered or unexported fields
}

TagLoader batches and caches requests

func NewTagLoader

func NewTagLoader(config TagLoaderConfig) *TagLoader

NewTagLoader creates a new TagLoader given a fetch, wait, and maxBatch

func (*TagLoader) Clear

func (l *TagLoader) Clear(key int)

Clear the value at key from the cache, if it exists

func (*TagLoader) Load

func (l *TagLoader) Load(key int) (*models.Tag, error)

Load a Tag by key; batching and caching will be applied automatically

func (*TagLoader) LoadAll

func (l *TagLoader) LoadAll(keys []int) ([]*models.Tag, []error)

LoadAll fetches many keys at once. The keys will be broken into appropriately sized sub-batches depending on how the loader is configured

func (*TagLoader) LoadAllThunk

func (l *TagLoader) LoadAllThunk(keys []int) func() ([]*models.Tag, []error)

LoadAllThunk returns a function that when called will block waiting for the Tags. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TagLoader) LoadThunk

func (l *TagLoader) LoadThunk(key int) func() (*models.Tag, error)

LoadThunk returns a function that when called will block waiting for a Tag. This method should be used if you want one goroutine to make requests to many different data loaders without blocking until the thunk is called.

func (*TagLoader) Prime

func (l *TagLoader) Prime(key int, value *models.Tag) bool

Prime the cache with the provided key and value. If the key already exists, no change is made and false is returned. (To forcefully prime the cache, clear the key first with loader.Clear(key) and then call loader.Prime(key, value).)

type TagLoaderConfig

type TagLoaderConfig struct {
	// Fetch is a method that provides the data for the loader
	Fetch func(keys []int) ([]*models.Tag, []error)

	// Wait is how long to wait before sending a batch
	Wait time.Duration

	// MaxBatch will limit the maximum number of keys to send in one batch; 0 = no limit
	MaxBatch int
}

TagLoaderConfig captures the config to create a new TagLoader
