package z

v0.4.1
Published: Mar 23, 2022 License: Apache-2.0, MIT Imports: 26 Imported by: 7

README

bbloom: a bitset Bloom filter for go/golang

===

This package implements a fast Bloom filter with a real 'bitset' and JSONMarshal/JSONUnmarshal to store and reload the filter.

NOTE: the package uses unsafe.Pointer to set and read the bits from the bitset. If you're uncomfortable with using the unsafe package, please consider using my bloom filter package at github.com/AndreasBriese/bloom

===

Changelog 11/2015: new thread-safe methods AddTS(), HasTS() and AddIfNotHasTS(), following a suggestion from Srdjan Marinovic (github @a-little-srdjan), who used this to build a Bloom filter cache.

This Bloom filter was developed to strengthen a website-log database and was tested and optimized for this log-entry mask: "2014/%02i/%02i %02i:%02i:%02i /info.html". Nonetheless, bbloom should work with any other form of entry.

The hash function is a modified Berkeley DB sdbm hash (optimized for smaller strings): http://www.cse.yorku.ca/~oz/hash.html

SipHash (SipHash-2-4, a fast short-input PRF created by Jean-Philippe Aumasson and Daniel J. Bernstein) was found to be about as fast. SipHash has been ported to Go by Dmitry Chestnykh (github.com/dchest/siphash).

The minimum hashset size is 512 ([4]uint64; set automatically).

### install

go get github.com/AndreasBriese/bbloom

### test

  • change to folder ../bbloom
  • create wordlist in file "words.txt" (you might use python permut.py)
  • run 'go test -bench=.' within the folder
go test -bench=.

If you've installed the GoConvey TDD framework (http://goconvey.co/), you can run the tests automatically.

The tests now use Go's testing framework (keep in mind that the per-operation timing refers to 65536 operations of Add, Has, and AddIfNotHas respectively).

usage

After installation, add

import (
	...
	"github.com/AndreasBriese/bbloom"
	...
	)

to your imports. In the program, use

// create a bloom filter for 65536 items and a 1% false-positive rate
bf := bbloom.New(float64(1<<16), float64(0.01))

// or 
// create a bloom filter with a 650000-bit bitset (for 65536 items) and 7 locs per hash explicitly
// bf = bbloom.New(float64(650000), float64(7))
// or
bf = bbloom.New(650000.0, 7.0)

// add one item
bf.Add([]byte("butter"))

// Number of elements added is exposed now 
// Note: ElemNum will not be included in JSON export (for compatibility with older versions)
nOfElementsInFilter := bf.ElemNum

// check if item is in the filter
isIn := bf.Has([]byte("butter"))    // should be true
isNotIn := bf.Has([]byte("Butter")) // should be false

// 'add only if item is new' to the bloomfilter
added := bf.AddIfNotHas([]byte("butter"))    // should be false because 'butter' is already in the set
added = bf.AddIfNotHas([]byte("buTTer"))    // should be true because 'buTTer' is new

// thread safe versions for concurrent use: AddTS, HasTS, AddIfNotHasTS
// add one item
bf.AddTS([]byte("peanutbutter"))
// check if item is in the filter
isIn = bf.HasTS([]byte("peanutbutter"))    // should be true
isNotIn = bf.HasTS([]byte("peanutButter")) // should be false
// 'add only if item is new' to the bloomfilter
added = bf.AddIfNotHasTS([]byte("butter"))    // should be false because 'butter' is already in the set
added = bf.AddIfNotHasTS([]byte("peanutbuTTer"))    // should be true because 'peanutbuTTer' is new

// convert to JSON ([]byte) 
Json := bf.JSONMarshal()

// the bloom filter's Mutex is exposed for external locking/unlocking,
// e.g. lock the mutex while doing the JSON conversion
bf.Mtx.Lock()
Json = bf.JSONMarshal()
bf.Mtx.Unlock()

// restore a bloom filter from storage 
bfNew := bbloom.JSONUnmarshal(Json)

isInNew := bfNew.Has([]byte("butter"))    // should be true
isNotInNew := bfNew.Has([]byte("Butter")) // should be false

to work with the bloom filter.

why 'fast'?

It's about 3 times faster than William Fitzgerald's bitset bloom filter (https://github.com/willf/bloom). It is about as fast as my []bool set variant for Bloom filters (see https://github.com/AndreasBriese/bloom), but it has an 8-times smaller memory footprint:

Bloom filter (filter size 524288, 7 hashlocs)
github.com/AndreasBriese/bbloom 'Add' 65536 items (10 repetitions): 6595800 ns (100 ns/op)
github.com/AndreasBriese/bbloom 'Has' 65536 items (10 repetitions): 5986600 ns (91 ns/op)
github.com/AndreasBriese/bloom 'Add' 65536 items (10 repetitions): 6304684 ns (96 ns/op)
github.com/AndreasBriese/bloom 'Has' 65536 items (10 repetitions): 6568663 ns (100 ns/op)

github.com/willf/bloom 'Add' 65536 items (10 repetitions): 24367224 ns (371 ns/op)
github.com/willf/bloom 'Test' 65536 items (10 repetitions): 21881142 ns (333 ns/op)
github.com/dataence/bloom/standard 'Add' 65536 items (10 repetitions): 23041644 ns (351 ns/op)
github.com/dataence/bloom/standard 'Check' 65536 items (10 repetitions): 19153133 ns (292 ns/op)
github.com/cabello/bloom 'Add' 65536 items (10 repetitions): 131921507 ns (2012 ns/op)
github.com/cabello/bloom 'Contains' 65536 items (10 repetitions): 131108962 ns (2000 ns/op)

(on MBPro15 OSX10.8.5 i7 4Core 2.4Ghz)

With 32-bit bloom filters (bloom32) using the modified sdbm, bloom32 does the hashing with only 2 bit shifts, one xor and one subtraction per byte. sdbm is about as fast as fnv64a but produces fewer collisions with the dataset (see mask above). bloom.New(float64(10 * 1<<16), float64(7)) populated with 1<<16 random items from the dataset (see above) and tested against the rest results in less than 0.05% collisions.
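For reference, the classic sdbm recurrence from the page linked above looks like the sketch below; the exact modified variant used by bbloom is not reproduced in this README, so treat the function as illustrative only.

package main

import "fmt"

// sdbm is the classic Berkeley DB string hash from
// http://www.cse.yorku.ca/~oz/hash.html. Per input byte it performs two bit
// shifts, two additions and one subtraction; bbloom's modified variant
// (tuned for short strings) is not shown here.
func sdbm(data []byte) (h uint64) {
	for _, c := range data {
		h = uint64(c) + (h << 6) + (h << 16) - h
	}
	return h
}

func main() {
	fmt.Printf("%#x\n", sdbm([]byte("2014/01/01 00:00:00 /info.html")))
}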

Documentation

Index

Constants

const (
	// MaxArrayLen is a safe maximum length for slices on this architecture.
	MaxArrayLen = 1<<50 - 1
	// MaxBufferSize is the size of virtually unlimited buffer on this architecture.
	MaxBufferSize = 256 << 30
)

Variables

var ErrNewFileCreateFailed = errors.New("create a new file")

ErrNewFileCreateFailed is the error returned by OpenMmapFile when the file did not already exist and a new file was created.

Functions

func Allocators

func Allocators() string

func BytesToUint64Slice

func BytesToUint64Slice(b []byte) []uint64

BytesToUint64Slice converts a byte slice to a uint64 slice.

func CPUTicks

func CPUTicks() int64

CPUTicks is a faster alternative to NanoTime to measure time duration.

func Calloc

func Calloc(n int, tag string) []byte

Calloc allocates a slice of size n.

func CallocNoRef

func CallocNoRef(n int, tag string) []byte

CallocNoRef will not give you memory back without jemalloc.

func FastRand

func FastRand() uint32

FastRand is a fast thread local random function.

func Fibonacci

func Fibonacci(num int) []float64

func Free

func Free(b []byte)

Free does not do anything in this mode.

func HistogramBounds

func HistogramBounds(minExponent, maxExponent uint32) []float64

HistogramBounds creates bounds for a histogram. The bounds are powers of two of the form [2^minExponent, ..., 2^maxExponent].

func KeyToHash

func KeyToHash(key interface{}) (uint64, uint64)

TODO: Figure out a way to re-use memhash for the second uint64 hash; we already know that appending bytes isn't reliable for generating a second hash (see Ristretto PR #88).

We also know that while the Go runtime has a runtime memhash128 function, it's not possible to use it to generate [2]uint64 or anything resembling a 128-bit hash, even though that's exactly what we need in this situation.
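A minimal usage sketch of KeyToHash; the import path in the snippet is an assumption, so substitute this module's actual path.

package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto/z" // assumed import path for this module's z package
)

func main() {
	// KeyToHash returns two 64-bit hashes for a key. Callers typically use the
	// first as the primary hash and the second as a conflict check/fingerprint.
	// The underlying memhash seed changes per process, so the values are not
	// stable across runs.
	hash, conflict := z.KeyToHash([]byte("some-key"))
	fmt.Printf("hash=%#x conflict=%#x\n", hash, conflict)
}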

func Leaks

func Leaks() string

func Madvise

func Madvise(b []byte, readahead bool) error

Madvise uses the madvise system call to give advice about the use of memory when using a slice that is memory-mapped to a file. Set the readahead flag to false if page references are expected in random order.

func MemHash

func MemHash(data []byte) uint64

MemHash is the hash function used by the Go map; it utilizes available hardware instructions (it behaves as aeshash if AES instructions are available). NOTE: The hash seed changes for every process, so this cannot be used as a persistent hash.

func MemHashString

func MemHashString(str string) uint64

MemHashString is the hash function used by the Go map; it utilizes available hardware instructions (it behaves as aeshash if AES instructions are available). NOTE: The hash seed changes for every process, so this cannot be used as a persistent hash.
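A small sketch illustrating the per-process nature of MemHash and MemHashString; the import path is an assumption.

package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto/z" // assumed import path for this module's z package
)

func main() {
	h1 := z.MemHash([]byte("hello"))
	h2 := z.MemHashString("hello")
	// Within a single process both calls hash the same bytes, so they should
	// agree. Across processes the values differ because the runtime hash seed
	// changes; never persist these hashes.
	fmt.Println(h1 == h2, h1)
}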

func Memclr

func Memclr(b []byte)

func Mmap

func Mmap(fd *os.File, writable bool, size int64) ([]byte, error)

Mmap uses the mmap system call to memory-map a file. If writable is true, memory protection of the pages is set so that they may be written to as well.

func Msync

func Msync(b []byte) error

Msync would call sync on the mmapped data.

func Munmap

func Munmap(b []byte) error

Munmap unmaps a previously mapped slice.
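A hedged sketch tying Mmap, Madvise, Msync and Munmap together for a file-backed slice; the file name, size and import path are illustrative assumptions. For most file-backed use cases the MmapFile type below is more convenient.

package main

import (
	"log"
	"os"

	"github.com/dgraph-io/ristretto/z" // assumed import path for this module's z package
)

func main() {
	f, err := os.OpenFile("data.bin", os.O_RDWR|os.O_CREATE, 0o644)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	const size = 1 << 20 // 1 MiB
	if err := f.Truncate(size); err != nil {
		log.Fatal(err)
	}

	// Map the file writable, hint random access, write, then flush and unmap.
	buf, err := z.Mmap(f, true, size)
	if err != nil {
		log.Fatal(err)
	}
	_ = z.Madvise(buf, false) // false: page references expected in random order
	copy(buf, "hello mmap")
	if err := z.Msync(buf); err != nil {
		log.Fatal(err)
	}
	if err := z.Munmap(buf); err != nil {
		log.Fatal(err)
	}
}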

func NanoTime

func NanoTime() int64

NanoTime returns the current time in nanoseconds from a monotonic clock.

func NumAllocBytes

func NumAllocBytes() int64

NumAllocBytes returns the number of bytes allocated using calls to z.Calloc. The allocations could be happening via either Go or jemalloc, depending upon the build flags.

func ReadMemStats

func ReadMemStats(_ *MemStats)

ReadMemStats doesn't do anything since all the memory is being managed by the Go runtime.

func SetTmpDir

func SetTmpDir(dir string)

SetTmpDir sets the temporary directory for the temporary buffers.

func StatsPrint

func StatsPrint()

func SyncDir

func SyncDir(dir string) error

func ZeroOut

func ZeroOut(dst []byte, start, end int)

ZeroOut zeroes out all the bytes in the range [start, end).

Types

type Allocator

type Allocator struct {
	Tag string

	Ref uint64
	sync.Mutex
	// contains filtered or unexported fields
}

Allocator amortizes the cost of small allocations by allocating memory in bigger chunks. Internally it uses z.Calloc to allocate memory. Once allocated, the memory is not moved, so it is safe to unsafe-cast the allocated bytes to Go struct pointers. Maintaining a freelist is slow; instead, Allocator only allocates memory, with the idea that eventually the entire Allocator is released at once.
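A minimal sketch of the allocate-then-release-everything pattern described above; the tag and sizes are arbitrary and the import path is an assumption.

package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto/z" // assumed import path for this module's z package
)

func main() {
	// Start with a small chunk; the allocator grows in bigger chunks as needed.
	a := z.NewAllocator(1024, "example")
	defer a.Release() // release everything at once instead of freeing pieces

	for i := 0; i < 10; i++ {
		b := a.Allocate(64) // 64-byte slice carved out of the current chunk
		copy(b, "payload")
	}
	fmt.Println("allocated bytes:", a.Allocated())
}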

func AllocatorFrom

func AllocatorFrom(ref uint64) *Allocator

AllocatorFrom would return the allocator corresponding to the ref.

func NewAllocator

func NewAllocator(sz int, tag string) *Allocator

NewAllocator creates an allocator starting with the given size.

func (*Allocator) Allocate

func (a *Allocator) Allocate(sz int) []byte

func (*Allocator) AllocateAligned

func (a *Allocator) AllocateAligned(sz int) []byte

func (*Allocator) Allocated

func (a *Allocator) Allocated() uint64

func (*Allocator) Copy

func (a *Allocator) Copy(buf []byte) []byte

func (*Allocator) MaxAlloc

func (a *Allocator) MaxAlloc() int

func (*Allocator) Release

func (a *Allocator) Release()

Release would release the memory back. Remember to make this call to avoid memory leaks.

func (*Allocator) Reset

func (a *Allocator) Reset()

func (*Allocator) Size

func (a *Allocator) Size() int

Size returns the size of the allocations so far.

func (*Allocator) String

func (a *Allocator) String() string

func (*Allocator) TrimTo

func (a *Allocator) TrimTo(max int)

type AllocatorPool

type AllocatorPool struct {
	// contains filtered or unexported fields
}

func NewAllocatorPool

func NewAllocatorPool(sz int) *AllocatorPool

func (*AllocatorPool) Get

func (p *AllocatorPool) Get(sz int, tag string) *Allocator

func (*AllocatorPool) Release

func (p *AllocatorPool) Release()

func (*AllocatorPool) Return

func (p *AllocatorPool) Return(a *Allocator)

type Bloom

type Bloom struct {
	ElemNum uint64
	// contains filtered or unexported fields
}

Bloom filter

func JSONUnmarshal

func JSONUnmarshal(dbData []byte) (*Bloom, error)

JSONUnmarshal takes a JSON object (type bloomJSONImExport) as []byte and returns the restored Bloom filter.

func NewBloomFilter

func NewBloomFilter(params ...float64) (bloomfilter *Bloom)

NewBloomFilter returns a new bloomfilter.
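A short sketch of this package's Bloom API, which (unlike the bbloom README above) takes precomputed uint64 hashes; KeyToHash is used here for convenience, and the import path is an assumption.

package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto/z" // assumed import path for this module's z package
)

func main() {
	// Capacity for ~65536 entries with a 1% false-positive ratio, as in the
	// README above; Add and Has take a precomputed uint64 hash.
	bf := z.NewBloomFilter(float64(1<<16), 0.01)

	h, _ := z.KeyToHash([]byte("butter"))
	bf.Add(h)
	fmt.Println(bf.Has(h))         // true
	fmt.Println(bf.AddIfNotHas(h)) // false: hash already registered
}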

func (*Bloom) Add

func (bl *Bloom) Add(hash uint64)

Add adds hash of a key to the bloomfilter.

func (*Bloom) AddIfNotHas

func (bl *Bloom) AddIfNotHas(hash uint64) bool

AddIfNotHas adds hash only if it is not yet present in the bloomfilter. It returns true if the hash was added, and false if the hash was already registered in the bloomfilter.

func (*Bloom) Clear

func (bl *Bloom) Clear()

Clear resets the Bloom filter.

func (Bloom) Has

func (bl Bloom) Has(hash uint64) bool

Has checks whether the bit(s) for the entry hash are set; it returns true if the hash was added to the Bloom filter.

func (*Bloom) IsSet

func (bl *Bloom) IsSet(idx uint64) bool

IsSet checks if bit[idx] of bitset is set, returns true/false.

func (Bloom) JSONMarshal

func (bl Bloom) JSONMarshal() []byte

JSONMarshal returns JSON-object (type bloomJSONImExport) as []byte.

func (*Bloom) Set

func (bl *Bloom) Set(idx uint64)

Set sets the bit[idx] of bitset.

func (*Bloom) Size

func (bl *Bloom) Size(sz uint64)

Size creates a Bloom filter with a bitset of size sz.

func (*Bloom) TotalSize

func (bl *Bloom) TotalSize() int

TotalSize returns the total size of the bloom filter.

type Buffer

type Buffer struct {
	// contains filtered or unexported fields
}

Buffer is the equivalent of bytes.Buffer without the ability to read. It is NOT thread-safe.

In UseCalloc mode, z.Calloc is used to allocate memory, which depending upon how the code is compiled could use jemalloc for allocations.

In UseMmap mode, Buffer uses file mmap to allocate memory. This allows us to store big data structures without using physical memory.

MaxSize can be set to limit the memory usage.
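A minimal sketch of writing to a calloc-backed Buffer and releasing it; tag, capacity and import path are assumptions.

package main

import (
	"fmt"
	"log"

	"github.com/dgraph-io/ristretto/z" // assumed import path for this module's z package
)

func main() {
	// A calloc/jemalloc-backed buffer; Release must be called to avoid leaks.
	buf := z.NewBuffer(1<<10, "example")
	defer buf.Release()

	if _, err := buf.Write([]byte("hello ")); err != nil {
		log.Fatal(err)
	}
	if _, err := buf.Write([]byte("world")); err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(buf.Bytes()), buf.LenNoPadding())
}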

func NewBuffer

func NewBuffer(capacity int, tag string) *Buffer

func NewBufferPersistent

func NewBufferPersistent(path string, capacity int) (*Buffer, error)

It is the caller's responsibility to set offset after this, because Buffer doesn't remember what it was.

func NewBufferSlice

func NewBufferSlice(slice []byte) *Buffer

func NewBufferTmp

func NewBufferTmp(dir string, capacity int) (*Buffer, error)

func (*Buffer) Allocate

func (b *Buffer) Allocate(n int) []byte

Allocate is a way to get a slice of size n back from the buffer. This slice can be directly written to. Warning: Allocate is not thread-safe. The byte slice returned MUST be used before further calls to Buffer.

func (*Buffer) AllocateOffset

func (b *Buffer) AllocateOffset(n int) int

AllocateOffset works the same way as Allocate, but instead of returning a byte slice, it returns the offset of the allocation.

func (*Buffer) Bytes

func (b *Buffer) Bytes() []byte

Bytes would return all the written bytes as a slice.

func (*Buffer) Data

func (b *Buffer) Data(offset int) []byte

func (*Buffer) Grow

func (b *Buffer) Grow(n int)

Grow would grow the buffer to have at least n more bytes. In case the buffer is at capacity, it would reallocate twice the size of current capacity + n, to ensure n bytes can be written to the buffer without further allocation. In UseMmap mode, this might result in underlying file expansion.

func (*Buffer) IsEmpty

func (b *Buffer) IsEmpty() bool

func (*Buffer) LenNoPadding

func (b *Buffer) LenNoPadding() int

LenNoPadding would return the number of bytes written to the buffer so far (without the padding).

func (*Buffer) LenWithPadding

func (b *Buffer) LenWithPadding() int

LenWithPadding would return the number of bytes written to the buffer so far plus the padding at the start of the buffer.

func (*Buffer) Release

func (b *Buffer) Release() error

Release would free up the memory allocated by the buffer. Once the usage of buffer is done, it is important to call Release, otherwise a memory leak can happen.

func (*Buffer) Reset

func (b *Buffer) Reset()

Reset would reset the buffer to be reused.

func (*Buffer) Slice

func (b *Buffer) Slice(offset int) ([]byte, int)

Slice would return the slice written at offset.

func (*Buffer) SliceAllocate

func (b *Buffer) SliceAllocate(sz int) []byte

SliceAllocate would encode the size provided into the buffer, followed by a call to Allocate, hence returning the slice of size sz. This can be used to allocate a lot of small buffers into this big buffer. Note that SliceAllocate should NOT be mixed with normal calls to Write.
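A sketch combining SliceAllocate with SliceIterate (documented below) to pack and then walk many small length-prefixed slices; the import path is an assumption.

package main

import (
	"fmt"
	"log"

	"github.com/dgraph-io/ristretto/z" // assumed import path for this module's z package
)

func main() {
	buf := z.NewBuffer(1<<10, "slices")
	defer buf.Release()

	// Pack many small, length-prefixed slices into one big buffer.
	for _, w := range []string{"alpha", "beta", "gamma"} {
		dst := buf.SliceAllocate(len(w)) // encodes the size, returns the slice
		copy(dst, w)
	}

	// Walk the length-prefixed slices back out of the buffer.
	err := buf.SliceIterate(func(s []byte) error {
		fmt.Println(string(s))
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}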

func (*Buffer) SliceIterate

func (b *Buffer) SliceIterate(f func(slice []byte) error) error

func (*Buffer) SliceOffsets

func (b *Buffer) SliceOffsets() []int

SliceOffsets is an expensive function. Use sparingly.

func (*Buffer) SortSlice

func (b *Buffer) SortSlice(less func(left, right []byte) bool)

SortSlice is like SortSliceBetween but sorting over the entire buffer.

func (*Buffer) SortSliceBetween

func (b *Buffer) SortSliceBetween(start, end int, less LessFunc)

func (*Buffer) StartOffset

func (b *Buffer) StartOffset() int

func (*Buffer) WithAutoMmap

func (b *Buffer) WithAutoMmap(threshold int, path string) *Buffer

func (*Buffer) WithMaxSize

func (b *Buffer) WithMaxSize(size int) *Buffer

func (*Buffer) Write

func (b *Buffer) Write(p []byte) (n int, err error)

Write would write p bytes to the buffer.

func (*Buffer) WriteSlice

func (b *Buffer) WriteSlice(slice []byte)

type BufferType

type BufferType int
const (
	UseCalloc BufferType = iota
	UseMmap
	UseInvalid
)

func (BufferType) String

func (t BufferType) String() string

type Closer

type Closer struct {
	// contains filtered or unexported fields
}

Closer holds the two things we need to close a goroutine and wait for it to finish: a chan to tell the goroutine to shut down, and a WaitGroup with which to wait for it to finish shutting down.
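A minimal sketch of the intended lifecycle: a worker goroutine watches HasBeenClosed and calls Done, while the owner calls SignalAndWait (all documented below); the import path is an assumption.

package main

import (
	"fmt"
	"time"

	"github.com/dgraph-io/ristretto/z" // assumed import path for this module's z package
)

func worker(c *z.Closer) {
	defer c.Done() // balance the count given to NewCloser/AddRunning
	ticker := time.NewTicker(10 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-c.HasBeenClosed(): // Signal() was called
			fmt.Println("worker shutting down")
			return
		case <-ticker.C:
			// periodic work would go here
		}
	}
}

func main() {
	c := z.NewCloser(1) // one goroutine to wait for
	go worker(c)

	time.Sleep(50 * time.Millisecond)
	c.SignalAndWait() // tell the worker to stop, then wait for Done()
}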

func NewCloser

func NewCloser(initial int) *Closer

NewCloser constructs a new Closer, with an initial count on the WaitGroup.

func (*Closer) AddRunning

func (lc *Closer) AddRunning(delta int)

AddRunning Add()'s delta to the WaitGroup.

func (*Closer) Ctx

func (lc *Closer) Ctx() context.Context

Ctx can be used to get a context, which would automatically get cancelled when Signal is called.

func (*Closer) Done

func (lc *Closer) Done()

Done calls Done() on the WaitGroup.

func (*Closer) HasBeenClosed

func (lc *Closer) HasBeenClosed() <-chan struct{}

HasBeenClosed gets signaled when Signal() is called.

func (*Closer) Signal

func (lc *Closer) Signal()

Signal signals the HasBeenClosed signal.

func (*Closer) SignalAndWait

func (lc *Closer) SignalAndWait()

SignalAndWait calls Signal(), then Wait().

func (*Closer) Wait

func (lc *Closer) Wait()

Wait waits on the WaitGroup. (It waits for NewCloser's initial value, AddRunning, and Done calls to balance out.)

type HistogramData

type HistogramData struct {
	Bounds         []float64
	CountPerBucket []int64
	Count          int64
	Min            int64
	Max            int64
	Sum            int64
}

HistogramData stores the information needed to represent the sizes of the keys and values as a histogram.
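A small sketch that builds power-of-two bounds with HistogramBounds, records a few sizes and reads back summary statistics; the exponents and values are arbitrary and the import path is an assumption.

package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto/z" // assumed import path for this module's z package
)

func main() {
	// Buckets at powers of two between 2^5 and 2^16 (32 B .. 64 KiB).
	bounds := z.HistogramBounds(5, 16)
	hist := z.NewHistogramData(bounds)

	for _, sz := range []int64{64, 128, 128, 4096, 512, 90} {
		hist.Update(sz)
	}

	fmt.Printf("mean=%.1f p95=%.1f\n", hist.Mean(), hist.Percentile(0.95))
	fmt.Println(hist.String())
}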

func NewHistogramData

func NewHistogramData(bounds []float64) *HistogramData

NewHistogramData returns a new instance of HistogramData with properly initialized fields.

func (*HistogramData) Clear

func (histogram *HistogramData) Clear()

Clear resets the histogram. Helpful in situations where we need to reset the metrics.

func (*HistogramData) Copy

func (histogram *HistogramData) Copy() *HistogramData

func (*HistogramData) Mean

func (histogram *HistogramData) Mean() float64

Mean returns the mean value for the histogram.

func (*HistogramData) Percentile

func (histogram *HistogramData) Percentile(p float64) float64

Percentile returns the percentile value for the histogram. The value of p should be in the range [0.0, 1.0].

func (*HistogramData) String

func (histogram *HistogramData) String() string

String converts the histogram data into human-readable string.

func (*HistogramData) Update

func (histogram *HistogramData) Update(value int64)

Update changes the Min and Max fields if value is less than or greater than the current values.

type LessFunc

type LessFunc func(a, b []byte) bool //nolint:revive //adopt fork, do not touch it

type MemStats

type MemStats struct {
	// Total number of bytes allocated by the application.
	// http://jemalloc.net/jemalloc.3.html#stats.allocated
	Allocated uint64
	// Total number of bytes in active pages allocated by the application. This
	// is a multiple of the page size, and greater than or equal to
	// Allocated.
	// http://jemalloc.net/jemalloc.3.html#stats.active
	Active uint64
	// Maximum number of bytes in physically resident data pages mapped by the
	// allocator, comprising all pages dedicated to allocator metadata, pages
	// backing active allocations, and unused dirty pages. This is a maximum
	// rather than precise because pages may not actually be physically
	// resident if they correspond to demand-zeroed virtual memory that has not
	// yet been touched. This is a multiple of the page size, and is larger
	// than stats.active.
	// http://jemalloc.net/jemalloc.3.html#stats.resident
	Resident uint64
	// Total number of bytes in virtual memory mappings that were retained
	// rather than being returned to the operating system via e.g. munmap(2) or
	// similar. Retained virtual memory is typically untouched, decommitted, or
	// purged, so it has no strongly associated physical memory (see extent
	// hooks http://jemalloc.net/jemalloc.3.html#arena.i.extent_hooks for
	// details). Retained memory is excluded from mapped memory statistics,
	// e.g. stats.mapped (http://jemalloc.net/jemalloc.3.html#stats.mapped).
	// http://jemalloc.net/jemalloc.3.html#stats.retained
	Retained uint64
}

MemStats is used to fetch JE Malloc Stats. The stats are fetched from the mallctl namespace http://jemalloc.net/jemalloc.3.html#mallctl_namespace.

type MmapFile

type MmapFile struct {
	Fd   *os.File
	Data []byte
}

MmapFile represents an mmapped file and includes both the buffer to the data and the file descriptor.

func OpenMmapFile

func OpenMmapFile(filename string, flag int, maxSz int) (*MmapFile, error)

OpenMmapFile opens an existing file or creates a new file. If the file is created, it would truncate the file to maxSz. In both cases, it would mmap the file to maxSz and return it. In case the file is created, z.ErrNewFileCreateFailed is returned.
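A hedged sketch of the open/write/sync/close cycle; it treats z.ErrNewFileCreateFailed as the "new file was created" signal described above. File name, size and import path are assumptions.

package main

import (
	"errors"
	"log"
	"os"

	"github.com/dgraph-io/ristretto/z" // assumed import path for this module's z package
)

func main() {
	// Open (or create) a file and map it to 1 MiB. Per the doc above, the
	// ErrNewFileCreateFailed sentinel only signals that the file is new.
	mf, err := z.OpenMmapFile("example.mmap", os.O_RDWR|os.O_CREATE, 1<<20)
	if err != nil && !errors.Is(err, z.ErrNewFileCreateFailed) {
		log.Fatal(err)
	}

	// mf.Data is the mmapped content; write into it directly, then flush.
	copy(mf.Data, "hello mmap file")
	if err := mf.Sync(); err != nil {
		log.Fatal(err)
	}

	// Close(-1): per the doc, truncation only happens when maxSz >= 0.
	if err := mf.Close(-1); err != nil {
		log.Fatal(err)
	}
}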

func OpenMmapFileUsing

func OpenMmapFileUsing(fd *os.File, sz int, writable bool) (*MmapFile, error)

func (*MmapFile) AllocateSlice

func (m *MmapFile) AllocateSlice(sz, offset int) ([]byte, int, error)

AllocateSlice allocates a slice of the given size at the given offset.

func (*MmapFile) Bytes

func (m *MmapFile) Bytes(off, sz int) ([]byte, error)

Bytes returns data starting from offset off of size sz. If there's not enough data, it would return nil slice and io.EOF.

func (*MmapFile) Close

func (m *MmapFile) Close(maxSz int64) error

Close would close the file. It would also truncate the file if maxSz >= 0.

func (*MmapFile) Delete

func (m *MmapFile) Delete() error

func (*MmapFile) NewReader

func (m *MmapFile) NewReader(offset int) io.Reader

func (*MmapFile) Slice

func (m *MmapFile) Slice(offset int) []byte

Slice returns the slice at the given offset.

func (*MmapFile) Sync

func (m *MmapFile) Sync() error

func (*MmapFile) Truncate

func (m *MmapFile) Truncate(maxSz int64) error

Truncate would truncate the mmapped file to the given size. On Linux, we truncate the underlying file and then call mremap, but on other systems, we unmap first, then truncate, then re-map.

type SuperFlag

type SuperFlag struct {
	// contains filtered or unexported fields
}

func NewSuperFlag

func NewSuperFlag(flag string) *SuperFlag

func (*SuperFlag) GetBool

func (sf *SuperFlag) GetBool(opt string) bool

func (*SuperFlag) GetDuration

func (sf *SuperFlag) GetDuration(opt string) time.Duration

func (*SuperFlag) GetFloat64

func (sf *SuperFlag) GetFloat64(opt string) float64

func (*SuperFlag) GetInt64

func (sf *SuperFlag) GetInt64(opt string) int64

func (*SuperFlag) GetPath

func (sf *SuperFlag) GetPath(opt string) string

func (*SuperFlag) GetString

func (sf *SuperFlag) GetString(opt string) string

func (*SuperFlag) GetUint32

func (sf *SuperFlag) GetUint32(opt string) uint32

func (*SuperFlag) GetUint64

func (sf *SuperFlag) GetUint64(opt string) uint64

func (*SuperFlag) Has

func (sf *SuperFlag) Has(opt string) bool

func (*SuperFlag) MergeAndCheckDefault

func (sf *SuperFlag) MergeAndCheckDefault(flag string) *SuperFlag

func (*SuperFlag) MergeWithDefault

func (sf *SuperFlag) MergeWithDefault(flag string) (*SuperFlag, error)

func (*SuperFlag) String

func (sf *SuperFlag) String() string

type SuperFlagHelp

type SuperFlagHelp struct {
	// contains filtered or unexported fields
}

SuperFlagHelp makes it really easy to generate command line `--help` output for a SuperFlag. For example:

const flagDefaults = `enabled=true; path=some/path;`

var help string = z.NewSuperFlagHelp(flagDefaults).
	Flag("enabled", "Turns on <something>.").
	Flag("path", "The path to <something>.").
	Flag("another", "Not present in defaults, but still included.").
	String()

The `help` string would then contain:

enabled=true; Turns on <something>.
path=some/path; The path to <something>.
another=; Not present in defaults, but still included.

All flags are sorted alphabetically for consistent `--help` output. Flags with default values are placed at the top, and everything else goes under.
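A small companion sketch for SuperFlag itself, assuming the semicolon-separated key=value format shown in the defaults string above; the import path and flag values are assumptions.

package main

import (
	"fmt"

	"github.com/dgraph-io/ristretto/z" // assumed import path for this module's z package
)

func main() {
	const defaults = `enabled=true; path=some/path;`

	// Parse a user-supplied flag string and fill in anything missing from the
	// defaults (format assumed from the SuperFlagHelp example above).
	sf := z.NewSuperFlag("path=/data/files").MergeAndCheckDefault(defaults)

	fmt.Println(sf.GetBool("enabled")) // true, taken from the defaults
	fmt.Println(sf.GetString("path"))  // /data/files, from the user string
}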

func NewSuperFlagHelp

func NewSuperFlagHelp(defaults string) *SuperFlagHelp

func (*SuperFlagHelp) Flag

func (h *SuperFlagHelp) Flag(name, description string) *SuperFlagHelp

func (*SuperFlagHelp) Head

func (h *SuperFlagHelp) Head(head string) *SuperFlagHelp

func (*SuperFlagHelp) String

func (h *SuperFlagHelp) String() string

type Tree

type Tree struct {
	// contains filtered or unexported fields
}

Tree represents the structure for a custom mmapped B+ tree. It supports keys in the range [1, math.MaxUint64-1] and values in the range [1, math.MaxUint64].
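A minimal sketch of an in-memory tree using Set, Get and DeleteBelow (documented below); the tag and the key/value numbers are arbitrary and the import path is an assumption.

package main

import (
	"fmt"
	"log"

	"github.com/dgraph-io/ristretto/z" // assumed import path for this module's z package
)

func main() {
	t := z.NewTree("example")
	defer func() {
		if err := t.Close(); err != nil {
			log.Fatal(err)
		}
	}()

	// Keys must stay in [1, math.MaxUint64-1]; values in [1, math.MaxUint64].
	t.Set(10, 100)
	t.Set(20, 200)
	t.Set(30, 300)

	fmt.Println(t.Get(20)) // 200
	fmt.Println(t.Get(99)) // 0: not found

	// Drop all keys whose value is below the given threshold.
	t.DeleteBelow(250)
	fmt.Println(t.Get(10)) // 0: deleted
	fmt.Println(t.Get(30)) // 300: still present
}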

func NewTree

func NewTree(tag string) *Tree

NewTree returns an in-memory B+ tree.

func NewTreePersistent

func NewTreePersistent(path string) (*Tree, error)

NewTreePersistent returns a persistent on-disk B+ tree.

func (*Tree) Close

func (t *Tree) Close() error

Close releases the memory used by the tree.

func (*Tree) DeleteBelow

func (t *Tree) DeleteBelow(ts uint64)

DeleteBelow deletes all keys with value under ts.

func (*Tree) Get

func (t *Tree) Get(k uint64) uint64

Get looks for key and returns the corresponding value. If key is not found, 0 is returned.

func (*Tree) Iterate

func (t *Tree) Iterate(fn func(node))

Iterate iterates over the tree and executes the fn on each node.

func (*Tree) IterateKV

func (t *Tree) IterateKV(f func(key, val uint64) (newVal uint64))

IterateKV iterates through all keys and values in the tree. If newVal is non-zero, it will be set in the tree.

func (*Tree) Print

func (t *Tree) Print()

Print iterates over the tree and prints all valid KVs.

func (*Tree) Reset

func (t *Tree) Reset()

Reset resets the tree and truncates it to maxSz.

func (*Tree) Set

func (t *Tree) Set(k, v uint64)

Set sets the key-value pair in the tree.

func (*Tree) Stats

func (t *Tree) Stats() TreeStats

Stats returns stats about the tree.

type TreeStats

type TreeStats struct {
	Allocated    int     // Derived.
	Bytes        int     // Derived.
	NumLeafKeys  int     // Calculated.
	NumPages     int     // Derived.
	NumPagesFree int     // Calculated.
	Occupancy    float64 // Derived.
	PageSize     int     // Derived.
}

