ucache

package
v1.16.7 Latest
Published: Jun 4, 2024 License: MIT Imports: 15 Imported by: 0

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Cache

type Cache[K Unique, T any] interface {
	// Set updates the cache value for the provided key. If the key already exists,
	// its previous value is removed before adding the new value. This method should be thread-safe.
	Set(key K, value T)

	// Get retrieves the value associated with the provided key from the cache.
	// It returns the value and a boolean indicating whether the key was found.
	// This method should be thread-safe. Get resets the item's change state, meaning the item is
	// considered current after a Get operation.
	Get(key K) (*T, bool)

	// Changes returns a slice of keys that have been modified in the cache.
	// This method provides a way to track changes made to the cache, useful for scenarios like cache syncing.
	// Changes are recorded only by modifying operations, so in effect the change list contains all currently present keys.
	Changes() []K

	// Drop completely clears the cache, removing all entries. This method should be thread-safe.
	Drop()

	// DropKey removes the value associated with the provided key from the cache. This method should be thread-safe.
	DropKey(key K)

	// Outdated checks if the provided key or the entire cache (if no key is provided)
	// is outdated based on the set TTL (time-to-live). Returns true if outdated, false otherwise.
	// This method should be thread-safe.
	// If the key was not found, it returns false.
	Outdated(key uopt.Opt[K]) bool

	// SetQuietly is an optimized method that adds a value to the cache for the provided key without
	// altering the change history. It is useful when modifications should not appear in the cache change diff.
	// This method should be thread-safe.
	// This operation is much faster and can be used to optimize cache performance when you don't need to track changes.
	SetQuietly(key K, value T)
}

The Cache interface defines a set of methods for a generic cache implementation. This interface supports setting, getting, and managing cache entries keyed by Unique keys. Unlike MultiCache, it is designed to handle only one value per key and does not support hierarchical composite keys.
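
The sketch below illustrates the change-tracking semantics described above. It is written as if it lived inside this package (with fmt imported) so no import path is assumed; demoCacheChanges is a hypothetical helper, and the already-constructed cache is taken as a parameter.

// demoCacheChanges is a hypothetical helper illustrating how Set, SetQuietly,
// Get and Changes interact; it is not part of the package API.
func demoCacheChanges(c Cache[IntKey, string]) {
	c.Set(IntKey(1), "tracked")          // recorded in the change history
	c.SetQuietly(IntKey(2), "untracked") // skips the change history

	if v, ok := c.Get(IntKey(1)); ok {
		fmt.Println(*v) // "tracked"
	}

	fmt.Println(c.Changes()) // keys touched by modifying operations
}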

func NewInMemoryHashMapCache

func NewInMemoryHashMapCache[K Unique, T any](ttl uopt.Opt[time.Duration]) Cache[K, T]

NewInMemoryHashMapCache creates a new instance of the InMemoryHashMapCache. It takes an optional time-to-live duration for the cache entries.
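
A minimal construction sketch, written as if inside this package with time and uopt imported. newStringCache is a hypothetical wrapper, and the ttl option is passed through unchanged so no uopt constructor names are assumed here.

// newStringCache is a hypothetical wrapper showing the constructor in use.
func newStringCache(ttl uopt.Opt[time.Duration]) Cache[StringKey, int] {
	return NewInMemoryHashMapCache[StringKey, int](ttl)
}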

type Comparable

type Comparable interface {
	Equals(other Comparable) bool
}

Comparable describes an entity that can be compared for equality via Equals.
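
As a sketch, implementing Comparable for your own type can be as simple as the hypothetical example below; the type name and comparison are illustrative only.

// userID is a hypothetical Comparable implementation used only for illustration.
type userID int64

// Equals reports whether the other value is a userID with the same numeric value.
func (u userID) Equals(other Comparable) bool {
	o, ok := other.(userID)
	return ok && o == u
}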

type ComparableKey

type ComparableKey[T comparable] struct {
	// contains filtered or unexported fields
}

func NewComparableKey

func NewComparableKey[T comparable](v T) ComparableKey[T]

func (ComparableKey[T]) Equals

func (k ComparableKey[T]) Equals(other Comparable) bool

func (ComparableKey[T]) Key added in v1.16.0

func (k ComparableKey[T]) Key() int64

func (ComparableKey[T]) String

func (k ComparableKey[T]) String() string

type ComparableSlice

type ComparableSlice[T Comparable] struct {
	Data []T
}

func (ComparableSlice[T]) Equals

func (c ComparableSlice[T]) Equals(other Comparable) bool

type CompositeKey

type CompositeKey interface {
	Comparable
	Keys() []Unique // Keys returns the keys ordered by priority (ascending), so the first element has the highest priority.
}

CompositeKey specifies an abstract key with the ability to provide an ordered list of its component keys.
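
For example, the package's own StrCompositeKey (documented below) yields its parts in priority order. The sketch below is written as if inside this package with fmt imported; demoCompositeKey is a hypothetical helper.

// demoCompositeKey is a hypothetical helper showing Keys() ordering.
func demoCompositeKey() {
	ck := NewStrCompositeKey("tenant", "user", "session")
	for _, part := range ck.Keys() {
		fmt.Println(part.Key()) // the int64 key of each part, highest priority first
	}
}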

type FarmHash64CompositeKey added in v1.16.6

type FarmHash64CompositeKey struct {
	// contains filtered or unexported fields
}

func NewFarmHashCompositeKey added in v1.16.6

func NewFarmHashCompositeKey(keys ...any) FarmHash64CompositeKey

NewFarmHashCompositeKey creates a FarmHash64CompositeKey (a composite key with Farm64 hashing support).

func (FarmHash64CompositeKey) Equals added in v1.16.6

func (k FarmHash64CompositeKey) Equals(other Comparable) bool

func (FarmHash64CompositeKey) Keys added in v1.16.6

func (k FarmHash64CompositeKey) Keys() []Unique

type FarmHash64Entity added in v1.16.5

type FarmHash64Entity struct {
	// contains filtered or unexported fields
}

FarmHash64Entity wraps any object and provides a Unique implementation using farm's 64-bit hash function, to be used in cache keys. The entity stores an internal hash to avoid redundant rehashing operations.

IMPORTANT: The object must have exported fields and only those fields will be considered for the hashing uniqueness operation. IMPORTANT: If the object is a pointer, the hash will compare pointer values. If the object is not a pointer, the hash will compare contents.

  • Equals method compares the hash values of the wrapped objects.
  • Key method uses farm.Hash64 to generate a 64-bit hash of the object and returns it as an int64 value for the key.

This can be used for uniquely identifying objects and comparing them based on their content rather than their memory address.
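
A sketch of the content-based comparison described above, assuming equal non-pointer contents produce equal hashes as stated; the helper and the payload type are illustrative only.

// demoHashedEquality is a hypothetical illustration of content-based hashing.
func demoHashedEquality() {
	type payload struct{ Name string } // exported field, as required above

	a := Hashed(payload{Name: "x"})
	b := Hashed(payload{Name: "x"})
	fmt.Println(a.Equals(b)) // true: equal (non-pointer) contents produce the same hash
}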

func Hashed

func Hashed(obj any) *FarmHash64Entity

Hashed is a constructor function that creates and returns a new instance of FarmHash64Entity, wrapping the provided object. This instance provides a Unique implementation using farm's 64-bit hash function.

Usage:

  • To uniquely identify objects based on their content rather than their memory address.
  • To generate a unique key for any object by hashing its content.

Example:

obj := Hashed("example object")
fmt.Println("Key:", obj.Key())
// Outputs the unique key generated by farm.Hash64 for the string "example object".

func (*FarmHash64Entity) Equals added in v1.16.5

func (e *FarmHash64Entity) Equals(other Comparable) bool

func (*FarmHash64Entity) Key added in v1.16.5

func (e *FarmHash64Entity) Key() int64

type GenericCompositeKey

type GenericCompositeKey struct {
	// contains filtered or unexported fields
}

func NewGenericCompositeKey

func NewGenericCompositeKey(keys ...any) GenericCompositeKey

NewGenericCompositeKey creates a GenericCompositeKey that supports only 'comparable' keys.
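
A quick sketch of building a generic composite key from mixed comparable values; the values are illustrative only and fmt is assumed to be imported.

// A composite key built from mixed comparable values (illustrative values only).
key := NewGenericCompositeKey(int64(42), "alice")
fmt.Println(key.String())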

func (GenericCompositeKey) Equals

func (k GenericCompositeKey) Equals(other Comparable) bool

func (GenericCompositeKey) Keys

func (k GenericCompositeKey) Keys() []Unique

func (GenericCompositeKey) String

func (k GenericCompositeKey) String() string

type InMemoryHashMapCache

type InMemoryHashMapCache[K Unique, T any] struct {
	// contains filtered or unexported fields
}

InMemoryHashMapCache provides an in-memory caching mechanism using hashmaps for single-value entries. Unlike InMemoryHashMapMultiCache, it stores only one value per key. The implementation uses linked-chain collision resolution, so lookups are O(n) in the worst case. Keys are translated into a hash value using a hashing function. Entries support an optional TTL, and concurrency-safe operations are ensured using a mutex. The TTL parameter does not automatically clean up entries; use the ManagedCache wrapper to automatically manage outdated keys.

func (*InMemoryHashMapCache[K, T]) Changes

func (c *InMemoryHashMapCache[K, T]) Changes() []K

Changes returns a slice of keys that have been modified in the cache. This method provides a way to track changes made to the cache, useful for scenarios like cache syncing.

func (*InMemoryHashMapCache[K, T]) Drop

func (c *InMemoryHashMapCache[K, T]) Drop()

Drop completely clears the cache, removing all entries. The operation is thread-safe.

func (*InMemoryHashMapCache[K, T]) DropKey

func (c *InMemoryHashMapCache[K, T]) DropKey(key K)

DropKey removes the value associated with the provided key from the cache. The operation is thread-safe.

func (*InMemoryHashMapCache[K, T]) Get

func (c *InMemoryHashMapCache[K, T]) Get(key K) (*T, bool)

Get retrieves the value associated with the provided key from the cache. The operation is thread-safe and does not alter the change history.

func (*InMemoryHashMapCache[K, T]) Outdated

func (c *InMemoryHashMapCache[K, T]) Outdated(key uopt.Opt[K]) bool

Outdated checks if the provided key or the entire cache (if no key is provided) is outdated based on the set TTL. Returns true if outdated, false otherwise.

func (*InMemoryHashMapCache[K, T]) Set

func (c *InMemoryHashMapCache[K, T]) Set(key K, value T)

Set updates the cache value for the provided key. If the key already exists, its previous value is removed before adding the new value. The operation is thread-safe.

func (*InMemoryHashMapCache[K, T]) SetQuietly added in v1.15.0

func (c *InMemoryHashMapCache[K, T]) SetQuietly(key K, value T)

SetQuietly is an optimized method that adds a value to the cache for the provided key without altering the change history. This operation can be used when modifications should not trigger the cache change diff. It is much faster than Set and can be used to optimize cache performance when you don't need to track changes.

type InMemoryHashMapMultiCache added in v1.15.0

type InMemoryHashMapMultiCache[K CompositeKey, T any, H comparable] struct {
	// contains filtered or unexported fields
}

InMemoryHashMapMultiCache provides an in-memory caching mechanism using hashmaps. Unlike InMemoryHashMapCache, it stores multiple values per key. This cache structure translates composite keys into a hash value using a user-provided hashing function. The cache supports optional TTL (time-to-live) for entries (please read below regarding clean up). Concurrency-safe operations are ensured through the use of a mutex. Unlike InMemoryTreeMultiCache, it doesn't support a hierarchy for keys, so each key is unique:

  • Setting a more specific key (e.g., [1, 2, 3, 4]) WILL NOT replace the value of a broader key (e.g., [1, 2, 3]).
  • Deleting a specific key or a parent key (e.g., [1, 2]) will not affect the other one.

Performance comparison with InMemoryTreeMultiCache:

  • Insertions: InMemoryTreeMultiCache is slightly faster for single-depth insertions and significantly faster for deeper depths.
  • Retrievals: InMemoryHashMapMultiCache (especially with FarmHash) is faster for both single-depth and deeper retrievals.
  • Setting values: performance is relatively close between the two, with minor variations.

The choice between InMemoryTreeMultiCache and InMemoryHashMapMultiCache depends on the specific use case, especially the depth of the keys and the frequency of retrieval operations. The TTL parameter does not automatically clean up entries; use the ManagedMultiCache wrapper to automatically manage outdated keys.
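
A sketch of the key independence described above, assuming c is backed by InMemoryHashMapMultiCache (for example via NewFarmHashMapMultiCache); demoFlatMultiCache is a hypothetical helper written as if inside this package with fmt imported.

// demoFlatMultiCache is a hypothetical helper showing that broader and more
// specific composite keys are independent in the hashmap-backed multi-cache.
func demoFlatMultiCache(c MultiCache[IntCompositeKey, StringValue]) {
	broad := NewIntCompositeKey(1, 2, 3)
	specific := NewIntCompositeKey(1, 2, 3, 4)

	c.Set(broad, NewStringValue("broad"))
	c.Set(specific, NewStringValue("specific")) // does NOT replace the broader key here

	fmt.Println(len(c.Get(broad)), len(c.Get(specific))) // 1 1
}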

func (*InMemoryHashMapMultiCache[K, T, H]) Changes added in v1.15.0

func (c *InMemoryHashMapMultiCache[K, T, H]) Changes() []K

Changes returns a list of keys that have experienced changes in the cache since the last reset.

func (*InMemoryHashMapMultiCache[K, T, H]) Drop added in v1.15.0

func (c *InMemoryHashMapMultiCache[K, T, H]) Drop()

Drop completely clears the cache, removing all entries. The operation is thread-safe.

func (*InMemoryHashMapMultiCache[K, T, H]) DropKey added in v1.15.0

func (c *InMemoryHashMapMultiCache[K, T, H]) DropKey(key K)

DropKey removes the values associated with the provided key from the cache. The operation is thread-safe.

func (*InMemoryHashMapMultiCache[K, T, H]) Get added in v1.15.0

func (c *InMemoryHashMapMultiCache[K, T, H]) Get(key K) []T

Get retrieves the values associated with the provided key from the cache. The operation is thread-safe and does not alter the change history.

func (*InMemoryHashMapMultiCache[K, T, H]) Outdated added in v1.15.0

func (c *InMemoryHashMapMultiCache[K, T, H]) Outdated(key uopt.Opt[K]) bool

Outdated checks if the provided key or the entire cache (if no key is provided) is outdated based on the set TTL. Returns true if outdated, false otherwise.

func (*InMemoryHashMapMultiCache[K, T, H]) Put added in v1.15.0

func (c *InMemoryHashMapMultiCache[K, T, H]) Put(key K, values ...T)

Put adds the given values to the cache associated with the provided key. If the key already exists, the new values are appended to the existing ones. The insertion is thread-safe.

func (*InMemoryHashMapMultiCache[K, T, H]) PutQuietly added in v1.15.0

func (c *InMemoryHashMapMultiCache[K, T, H]) PutQuietly(key K, values ...T)

PutQuietly adds values to the cache for the provided key but does so without altering the change history. This operation can be used when modifications should not trigger the cache change diff.

func (*InMemoryHashMapMultiCache[K, T, H]) Set added in v1.15.0

func (c *InMemoryHashMapMultiCache[K, T, H]) Set(key K, values ...T)

Set updates the cache values for the provided key. If the key already exists, its previous values are removed before adding the new values. The operation is thread-safe.

type InMemoryTreeMultiCache added in v1.15.0

type InMemoryTreeMultiCache[K CompositeKey, T Comparable] struct {
	// contains filtered or unexported fields
}

InMemoryTreeMultiCache provides an in-memory caching mechanism with support for compound keys. The cache leverages tree-like structures to store and organize data, allowing efficient operations even with composite keys. The cache supports optional TTL (time-to-live) for entries, ensuring that outdated entries can be identified and potentially purged. Concurrency-safe operations are ensured through the use of a mutex.

Benchmark insights:

  • Put operation performance is fast for shallow-depth keys but slows down as the depth increases.
  • Get operation is particularly efficient, especially for shallow-depth keys.
  • Set operation performance is consistent regardless of the depth of the key.

The TTL parameter does not automatically clean up entries; use the ManagedMultiCache wrapper to automatically manage outdated keys.

func (*InMemoryTreeMultiCache[K, T]) Changes added in v1.15.0

func (c *InMemoryTreeMultiCache[K, T]) Changes() []K

Changes returns a slice of keys that have been modified in the cache. This method provides a way to track changes made to the cache, useful for scenarios like cache syncing.

func (*InMemoryTreeMultiCache[K, T]) Drop added in v1.15.0

func (c *InMemoryTreeMultiCache[K, T]) Drop()

Drop removes all entries from the cache. This is a complete reset of the cache, useful when you want to clear the cache and start fresh.

func (*InMemoryTreeMultiCache[K, T]) DropKey added in v1.15.0

func (c *InMemoryTreeMultiCache[K, T]) DropKey(key K)

DropKey removes the value(s) associated with the given key from the cache.

func (*InMemoryTreeMultiCache[K, T]) Get added in v1.15.0

func (c *InMemoryTreeMultiCache[K, T]) Get(key K) []T

Get retrieves the value(s) associated with the given key from the cache. If the key is not found, it returns an empty slice. Retrieval is fast, especially for shallow depth keys.

func (*InMemoryTreeMultiCache[K, T]) Outdated added in v1.15.0

func (c *InMemoryTreeMultiCache[K, T]) Outdated(key uopt.Opt[K]) bool

Outdated checks if a given key or the entire cache is outdated based on the TTL. If no key is provided or the key was not found, it checks the last updated time of the entire cache. If a key is provided and found, it checks the last updated time of that specific key.

func (*InMemoryTreeMultiCache[K, T]) Put added in v1.15.0

func (c *InMemoryTreeMultiCache[K, T]) Put(key K, val ...T)

Put inserts a new value(s) into the cache associated with the given key. If the key already exists in the cache, it appends the new value(s) to the existing values. This operation is relatively fast for shallow depth keys, but becomes slower as the depth increases.

func (*InMemoryTreeMultiCache[K, T]) PutQuietly added in v1.15.0

func (c *InMemoryTreeMultiCache[K, T]) PutQuietly(key K, val ...T)

PutQuietly behaves like the Put method but does not update the cache state or record any changes, making it a much faster alternative to Put and Set. This method is useful when you want to add values to the cache without triggering any side effects.

func (*InMemoryTreeMultiCache[K, T]) Set added in v1.15.0

func (c *InMemoryTreeMultiCache[K, T]) Set(key K, val ...T)

Set inserts a new value(s) into the cache associated with the given key. If the key already exists in the cache, this method will overwrite the existing values.

type Int64Value

type Int64Value struct {
	// contains filtered or unexported fields
}

func NewInt64Value

func NewInt64Value(v int64) Int64Value

func (Int64Value) Equals

func (s Int64Value) Equals(other Comparable) bool

func (Int64Value) Value

func (s Int64Value) Value() string

type IntCompositeKey

type IntCompositeKey struct {
	// contains filtered or unexported fields
}

func NewIntCompositeKey

func NewIntCompositeKey(keys ...int64) IntCompositeKey

func (IntCompositeKey) Equals

func (k IntCompositeKey) Equals(other Comparable) bool

func (IntCompositeKey) Keys

func (k IntCompositeKey) Keys() []Unique

func (IntCompositeKey) String

func (k IntCompositeKey) String() string

type IntKey

type IntKey int64

func (IntKey) Equals

func (k IntKey) Equals(other Comparable) bool

func (IntKey) Key

func (k IntKey) Key() int64

func (IntKey) Keys added in v1.10.0

func (k IntKey) Keys() []Unique

func (IntKey) String added in v1.10.0

func (k IntKey) String() string

type ManagedCache added in v1.16.1

type ManagedCache[K Unique, T any] struct {
	// contains filtered or unexported fields
}

ManagedCache provides a wrapper around a Cache implementation to manage periodic cleanup of outdated cache entries. It uses a background goroutine to perform cleanup tasks based on the provided TTL (time-to-live) value. The Stop method must be called to clean up resources if you want to stop managing the cache.
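
A sketch of the managed wrapper's lifecycle, assuming the underlying Cache was built with a TTL. runManaged is a hypothetical helper written as if inside this package with time imported; the cleanup tick is an illustrative value.

// runManaged is a hypothetical helper showing the ManagedCache lifecycle.
func runManaged(base Cache[StringKey, int]) {
	managed := NewManagedCache(base, time.Minute) // cleanup tick; illustrative value
	defer managed.Stop()                          // releases the background cleanup goroutine

	managed.Set(StringKey("hits"), 1)
	managed.ForceCleanup() // trigger a cleanup pass immediately
}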

func NewManagedCache added in v1.16.1

func NewManagedCache[K Unique, T any](cache Cache[K, T], tick time.Duration) *ManagedCache[K, T]

func (*ManagedCache[K, T]) Changes added in v1.16.4

func (b *ManagedCache[K, T]) Changes() []K

func (*ManagedCache[K, T]) Drop added in v1.16.1

func (b *ManagedCache[K, T]) Drop()

func (*ManagedCache[K, T]) DropKey added in v1.16.1

func (b *ManagedCache[K, T]) DropKey(key K)

func (*ManagedCache[K, T]) ForceCleanup added in v1.16.3

func (b *ManagedCache[K, T]) ForceCleanup()

func (*ManagedCache[K, T]) Get added in v1.16.1

func (b *ManagedCache[K, T]) Get(key K) (*T, bool)

func (*ManagedCache[K, T]) Outdated added in v1.16.1

func (b *ManagedCache[K, T]) Outdated(key uopt.Opt[K]) bool

func (*ManagedCache[K, T]) Set added in v1.16.1

func (b *ManagedCache[K, T]) Set(key K, value T)

func (*ManagedCache[K, T]) SetQuietly added in v1.16.1

func (b *ManagedCache[K, T]) SetQuietly(key K, value T)

func (*ManagedCache[K, T]) Stop added in v1.16.1

func (b *ManagedCache[K, T]) Stop()

type ManagedMultiCache added in v1.16.1

type ManagedMultiCache[K CompositeKey, T Comparable] struct {
	// contains filtered or unexported fields
}

ManagedMultiCache provides a wrapper around a MultiCache implementation to manage periodic cleanup of outdated cache entries. It uses a background goroutine to perform cleanup tasks based on the provided TTL (time-to-live) value. The Stop method must be called to clean up resources if you want to stop managing the cache.

func NewManagedMultiCache added in v1.16.1

func NewManagedMultiCache[K CompositeKey, T Comparable](cache MultiCache[K, T], tick time.Duration) *ManagedMultiCache[K, T]

func (*ManagedMultiCache[K, T]) Changes added in v1.16.1

func (b *ManagedMultiCache[K, T]) Changes() []K

func (*ManagedMultiCache[K, T]) Drop added in v1.16.1

func (b *ManagedMultiCache[K, T]) Drop()

func (*ManagedMultiCache[K, T]) DropKey added in v1.16.1

func (b *ManagedMultiCache[K, T]) DropKey(key K)

func (*ManagedMultiCache[K, T]) Get added in v1.16.1

func (b *ManagedMultiCache[K, T]) Get(key K) []T

func (*ManagedMultiCache[K, T]) Outdated added in v1.16.1

func (b *ManagedMultiCache[K, T]) Outdated(key uopt.Opt[K]) bool

func (*ManagedMultiCache[K, T]) Put added in v1.16.1

func (b *ManagedMultiCache[K, T]) Put(key K, values ...T)

func (*ManagedMultiCache[K, T]) PutQuietly added in v1.16.1

func (b *ManagedMultiCache[K, T]) PutQuietly(key K, values ...T)

func (*ManagedMultiCache[K, T]) Set added in v1.16.1

func (b *ManagedMultiCache[K, T]) Set(key K, values ...T)

func (*ManagedMultiCache[K, T]) Stop added in v1.16.1

func (b *ManagedMultiCache[K, T]) Stop()

type MultiCache added in v1.15.0

type MultiCache[K CompositeKey, T any] interface {
	// Put inserts a new value(s) into the cache associated with the given key.
	// If the key already exists in the cache, it appends the new value(s) to the existing values.
	// This operation is relatively fast for shallow depth keys, but becomes slower as the depth increases.
	Put(key K, values ...T)

	// Set inserts a new value(s) into the cache associated with the given key.
	// If the key already exists in the cache, this method will overwrite the existing values.
	Set(key K, values ...T)

	// Get retrieves the value(s) associated with the given key from the cache.
	// If the key is not found, it returns an empty slice.
	// Retrieval is fast, especially for shallow depth keys.
	// Supports retrieving a value using a broader key (e.g., [1, 2]) or a full/shallow key (e.g., [1, 2, 3, 4])
	Get(key K) []T

	// Changes returns a slice of keys that have been modified in the cache.
	// This method provides a way to track changes made to the cache, useful for scenarios like cache syncing.
	// Changes are recorded only by modifying operations, so in effect the change list contains all currently present keys.
	Changes() []K

	// Drop removes all entries from the cache.
	// This is a complete reset of the cache, useful when you want to clear the cache and start fresh.
	Drop()

	// DropKey removes the value(s) associated with the given key from the cache.
	DropKey(key K)

	// Outdated checks if a given key or the entire cache is outdated based on the TTL.
	// If no key is provided, it checks the last updated time of the entire cache.
	// If a key is provided and found, it checks the last updated time of that specific key.
	// If the key was not found, it returns false.
	Outdated(key uopt.Opt[K]) bool

	// PutQuietly behaves like the Put method but does not update the cache state or record any changes,
	// making it a much faster alternative to Put and Set.
	// This method is useful when you want to add values to the cache without triggering any side effects.
	PutQuietly(key K, values ...T)
}

The MultiCache interface defines a set of methods for a generic cache implementation. This interface supports setting, getting, and managing cache entries with composite keys. Unlike Cache, it is designed to handle multiple values per key and supports hierarchical key handling.

Example:

  • Set([1, 2, 3], "Value1") => [1, 2, 3] is set with "Value1".
  • Set([1, 2, 3, 4], "Value2") => [1, 2, 3] is replaced, and [1, 2, 3, 4] is set with "Value2".
  • Get([1, 2, 3]) => returns nil, as it has been replaced by [1, 2, 3, 4].

This hierarchical key handling is useful for scenarios where more specific keys should override the values of their parent keys, providing a clear and structured way to manage cache entries.

func NewDefaultHashMapMultiCache added in v1.15.0

func NewDefaultHashMapMultiCache[K CompositeKey, T Comparable](ttl uopt.Opt[time.Duration]) MultiCache[K, T]

NewDefaultHashMapMultiCache creates a new instance of the InMemoryHashMapMultiCache using SHA256 as the hashing algorithm.

func NewFarmHashMapMultiCache added in v1.15.0

func NewFarmHashMapMultiCache[K CompositeKey, T Comparable](ttl uopt.Opt[time.Duration]) MultiCache[K, T]

func NewInMemoryHashMapMultiCache added in v1.15.0

func NewInMemoryHashMapMultiCache[K CompositeKey, T any, H comparable](toHash func(keys []Unique) H, ttl uopt.Opt[time.Duration]) MultiCache[K, T]

NewInMemoryHashMapMultiCache creates a new instance of the InMemoryHashMapMultiCache. It takes a hashing function to translate the composite keys to a desired hash type, and an optional time-to-live duration for the cache entries.

func NewInMemoryTreeMultiCache added in v1.15.0

func NewInMemoryTreeMultiCache[K CompositeKey, T Comparable](ttl uopt.Opt[time.Duration]) MultiCache[K, T]

NewInMemoryTreeMultiCache creates a new instance of the InMemoryTreeMultiCache. It takes an optional TTL (time-to-live) parameter to set the expiration time for cache entries. If the TTL is not provided, cache entries will not expire. Notes on hierarchical key handling (a code sketch follows the list below):

  • If a composite key (e.g., [1, 2, 3]) is already set, any broader keys that share the same prefix (e.g., [1, 2]) are considered "busy" as part of the hierarchy.
  • Setting a more specific key (e.g., [1, 2, 3, 4]) will replace the broader key's value (e.g., [1, 2, 3]).
  • This design ensures that more specific keys take precedence and can replace the values of their parent keys.
  • Additionally, retrieving a value using a broader key (e.g., [1, 2]) will return the values of the most specific key that shares the prefix (e.g., [1, 2, 3, 4]).
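
A sketch of the hierarchical behavior listed above, assuming c was created with NewInMemoryTreeMultiCache; demoTreeHierarchy is a hypothetical helper written as if inside this package with fmt imported.

// demoTreeHierarchy is a hypothetical helper illustrating the notes above.
func demoTreeHierarchy(c MultiCache[IntCompositeKey, StringValue]) {
	c.Set(NewIntCompositeKey(1, 2, 3), NewStringValue("old"))
	c.Set(NewIntCompositeKey(1, 2, 3, 4), NewStringValue("new")) // replaces the broader [1, 2, 3] value

	// Per the notes above, a broader key returns the values of the most
	// specific key sharing its prefix.
	fmt.Println(c.Get(NewIntCompositeKey(1, 2)))
}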

func NewSha256HashMapMultiCache added in v1.15.0

func NewSha256HashMapMultiCache[K CompositeKey, T Comparable](ttl uopt.Opt[time.Duration]) MultiCache[K, T]

type StrCompositeKey

type StrCompositeKey struct {
	// contains filtered or unexported fields
}

func NewStrCompositeKey

func NewStrCompositeKey(keys ...string) StrCompositeKey

func (StrCompositeKey) Equals

func (k StrCompositeKey) Equals(other Comparable) bool

func (StrCompositeKey) Keys

func (k StrCompositeKey) Keys() []Unique

func (StrCompositeKey) String

func (k StrCompositeKey) String() string

type StringKey

type StringKey string

func (StringKey) Equals

func (k StringKey) Equals(other Comparable) bool

func (StringKey) Key

func (k StringKey) Key() int64

func (StringKey) Keys added in v1.10.0

func (k StringKey) Keys() []Unique

func (StringKey) String

func (k StringKey) String() string

type StringSliceValue

type StringSliceValue struct {
	// contains filtered or unexported fields
}

func NewStringSliceValue

func NewStringSliceValue(v []string) StringSliceValue

func (StringSliceValue) Equals

func (s StringSliceValue) Equals(other Comparable) bool

func (StringSliceValue) Values

func (s StringSliceValue) Values() []string

type StringValue

type StringValue struct {
	// contains filtered or unexported fields
}

func NewStringValue

func NewStringValue(v string) StringValue

func (StringValue) Equals

func (s StringValue) Equals(other Comparable) bool

func (StringValue) Value

func (s StringValue) Value() string

type UIntCompositeKey

type UIntCompositeKey struct {
	// contains filtered or unexported fields
}

func NewUIntCompositeKey

func NewUIntCompositeKey(keys ...uint64) UIntCompositeKey

func (UIntCompositeKey) Equals

func (k UIntCompositeKey) Equals(other Comparable) bool

func (UIntCompositeKey) Keys

func (k UIntCompositeKey) Keys() []int64

func (UIntCompositeKey) String

func (k UIntCompositeKey) String() string

type UIntKey

type UIntKey uint64

func (UIntKey) Equals

func (k UIntKey) Equals(other Comparable) bool

func (UIntKey) Key

func (k UIntKey) Key() int64

func (UIntKey) Keys added in v1.10.0

func (k UIntKey) Keys() []Unique

func (UIntKey) String added in v1.10.0

func (k UIntKey) String() string

type Unique added in v1.16.0

type Unique interface {
	Comparable
	Key() int64 // Key should return a unique item key. It can be a hash or just an index.
}

Unique specifies an abstract key with the ability to provide a hash-like int64 key.
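
As a minimal sketch, a custom Unique can be as simple as the hypothetical type below; the package's own IntKey and StringKey already provide ready-made implementations.

// sessionID is a hypothetical Unique implementation used only for illustration.
type sessionID int64

// Key returns the value itself as the unique key.
func (s sessionID) Key() int64 { return int64(s) }

// Equals reports whether the other value is a sessionID with the same value.
func (s sessionID) Equals(other Comparable) bool {
	o, ok := other.(sessionID)
	return ok && o == s
}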
