cbft

package module
v0.0.3
Published: Apr 20, 2015 License: Apache-2.0 Imports: 43 Imported by: 4

README

cbft

Couchbase Full Text server

This project integrates the bleve full-text search engine and Couchbase Server.

LICENSE: Apache 2.0

A cbft process creates and maintains connections to a Couchbase Server cluster and indexes any incoming streamed data (arriving via Couchbase's DCP protocol) using the bleve full-text search engine. Indexes can be partitioned amongst multiple cbft processes, and queries on an index are scatter/gathered across the distributed index partitions.

Getting started

Getting cbft

Download a pre-built cbft from the releases page. For example, for OSX...

wget https://github.com/couchbaselabs/cbft/releases/download/vX.Y.Z/vX.Y.Z-AAA_cbft.darwin.amd64.tar.gz
tar -xzvf vX.Y.Z-AAA_cbft.darwin.amd64.tar.gz
./cbft.darwin.amd64 --help

Or, to build cbft from source (requires Go 1.4)...

go get -u github.com/couchbaselabs/cbft/...
$GOPATH/bin/cbft --help

First time setup

Prerequisites: you should have a Couchbase Server (3.0+) already installed and running somewhere.

Create a directory where cbft will store its config and data files...

mkdir -p data

Running cbft

Start cbft, pointing it to the Couchbase Server as its datasource server...

./cbft -server http://localhost:8091

Next, point a web browser at cbft's web admin UI...

http://localhost:8095

Create a new full-text index, which will be powered by the bleve full-text engine; the index will be called "default" and will have the "default" bucket from Couchbase as its datasource...

curl -XPUT 'http://localhost:8095/api/index/default?indexType=bleve&sourceType=couchbase'

Check how many documents are indexed...

curl http://localhost:8095/api/index/default/count

Query the index...

curl -XPOST --header Content-Type:text/json \
     -d '{"query":{"size":10,"query":{"query":"your-search-term"}}}' \
     http://localhost:8095/api/index/default/query

Delete the index...

curl -XDELETE http://localhost:8095/api/index/default

Documentation

Index

Constants

View Source
const BLEVE_DEST_INITIAL_BUF_SIZE_BYTES = 40 * 1024 // 40K.
View Source
const FEED_BACKOFF_FACTOR = 1.5
View Source
const FEED_SLEEP_INIT_MS = 100
View Source
const FEED_SLEEP_MAX_MS = 10000

Default values for feed parameters.

View Source
const INDEX_DEFS_KEY = "indexDefs"
View Source
const INDEX_NAME_REGEXP = `^[A-Za-z][0-9A-Za-z_\-]*$`
View Source
const JANITOR_CLOSE_PINDEX = "janitor_close_pindex"
View Source
const JANITOR_REMOVE_PINDEX = "janitor_remove_pindex"
View Source
const MANAGER_MAX_EVENTS = 10
View Source
const NODE_DEFS_KEY = "nodeDefs"
View Source
const NODE_DEFS_KNOWN = "known"
View Source
const NODE_DEFS_WANTED = "wanted"
View Source
const PINDEX_META_FILENAME string = "PINDEX_META"
View Source
const PLAN_PINDEXES_KEY = "planPIndexes"
View Source
const VERSION = "3.0.0"

The cbft.VERSION tracks persistence versioning (the format of persisted data and configuration). The main.VERSION (see cmd/cbft/...), in contrast, is an overall "product" version. For example, we might introduce new features like web UI admin enhancements into the software project, in which case we'd bump the main.VERSION number; but, if the persisted data/config format was unchanged, then the cbft.VERSION number would remain unchanged.

NOTE: You *must* update VERSION if you change what's stored in the Cfg (such as the JSON/struct definitions or planning algorithms).

View Source
const VERSION_KEY = "version"
View Source
const WORK_KICK = "kick"
View Source
const WORK_NOOP = ""

Variables

View Source
var EMPTY_BYTES = []byte{}
View Source
var FeedTypes = make(map[string]*FeedType) // Key is sourceType.
View Source
var PINDEX_STORE_MAX_ERRORS = 40
View Source
var PIndexImplTypes = make(map[string]*PIndexImplType) // Keyed by indexType.

Functions

func Asset added in v0.0.1

func Asset(name string) ([]byte, error)

Asset loads and returns the asset for the given name. It returns an error if the asset could not be found or could not be loaded.

func AssetDir added in v0.0.1

func AssetDir(name string) ([]string, error)

AssetDir returns the file names below a certain directory embedded in the file by go-bindata. For example if you run go-bindata on data/... and data contains the following hierarchy:

data/
  foo.txt
  img/
    a.png
    b.png

then AssetDir("data") would return []string{"foo.txt", "img"},
AssetDir("data/img") would return []string{"a.png", "b.png"},
AssetDir("foo.txt") and AssetDir("notexist") would return an error, and
AssetDir("") will return []string{"data"}.

func AssetInfo added in v0.0.1

func AssetInfo(name string) (os.FileInfo, error)

AssetInfo loads and returns the asset info for the given name. It returns an error if the asset could not be found or could not be loaded.

func AssetNames added in v0.0.1

func AssetNames() []string

AssetNames returns the names of the assets.

func AtomicCopyMetrics added in v0.0.1

func AtomicCopyMetrics(s, r interface{},
	fn func(sv uint64, rv uint64) uint64)

AtomicCopyMetrics copies uint64 metrics from s to r (from source to result), and also applies an optional fn function to each metric. The fn is invoked with metrics from s and r, and can be used to compute additions, subtractions, etc. When fn is nil, AtomicCopyMetrics defaults to a straight copy.
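
For illustration, a minimal sketch (not part of the package itself) of using the fn callback to turn two ManagerStats snapshots into per-interval deltas, assuming delta already holds the previous snapshot and the cbft package is imported:

var curr, delta cbft.ManagerStats // delta initially holds the previous snapshot

cbft.AtomicCopyMetrics(&curr, &delta,
	func(sv, rv uint64) uint64 { return sv - rv }) // delta becomes curr minus previous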

func CalcPIndexesDelta

func CalcPIndexesDelta(mgrUUID string,
	currPIndexes map[string]*PIndex,
	wantedPlanPIndexes *PlanPIndexes) (
	addPlanPIndexes []*PlanPIndex,
	removePIndexes []*PIndex)

Functionally determine the delta of which pindexes need creation and which should be shut down on our local node (mgrUUID).

func CfgNodeDefsKey added in v0.0.1

func CfgNodeDefsKey(kind string) string

func CfgSetIndexDefs

func CfgSetIndexDefs(cfg Cfg, indexDefs *IndexDefs, cas uint64) (uint64, error)

func CfgSetNodeDefs

func CfgSetNodeDefs(cfg Cfg, kind string, nodeDefs *NodeDefs,
	cas uint64) (uint64, error)

func CfgSetPlanPIndexes

func CfgSetPlanPIndexes(cfg Cfg, planPIndexes *PlanPIndexes, cas uint64) (uint64, error)

func CheckVersion

func CheckVersion(cfg Cfg, myVersion string) (bool, error)

Returns true if a given version is modern enough to modify the Cfg. Older versions (which are running with older JSON/struct definitions or planning algorithms) will see false from their CheckVersion()'s.

func ConsistencyWaitDone added in v0.0.1

func ConsistencyWaitDone(partition string,
	cancelCh <-chan bool,
	doneCh chan error,
	currSeq func() uint64) error

func ConsistencyWaitGroup added in v0.0.1

func ConsistencyWaitGroup(indexName string,
	consistencyParams *ConsistencyParams, cancelCh <-chan bool,
	localPIndexes []*PIndex,
	addLocalPIndex func(*PIndex) error) error

func ConsistencyWaitPIndex added in v0.0.1

func ConsistencyWaitPIndex(pindex *PIndex, t ConsistencyWaiter,
	consistencyParams *ConsistencyParams, cancelCh <-chan bool) error

func ConsistencyWaitPartitions added in v0.0.1

func ConsistencyWaitPartitions(
	t ConsistencyWaiter,
	partitions map[string]bool,
	consistencyLevel string,
	consistencyVector map[string]uint64,
	cancelCh <-chan bool) error

func CouchbasePartitions added in v0.0.1

func CouchbasePartitions(sourceType, sourceName, sourceUUID, sourceParams,
	server string) (partitions []string, err error)

func CountAlias added in v0.0.1

func CountAlias(mgr *Manager, indexName, indexUUID string) (uint64, error)

func CountBlevePIndexImpl added in v0.0.1

func CountBlevePIndexImpl(mgr *Manager, indexName, indexUUID string) (
	uint64, error)

func DataSourcePartitions added in v0.0.1

func DataSourcePartitions(sourceType, sourceName, sourceUUID, sourceParams,
	server string) ([]string, error)

func ErrorToString added in v0.0.1

func ErrorToString(e error) string

func ExponentialBackoffLoop

func ExponentialBackoffLoop(name string,
	f func() int,
	startSleepMS int,
	backoffFactor float32,
	maxSleepMS int)

Calls f() in a loop, sleeping in an exponential backoff if needed. The provided f() function should return < 0 to stop the loop; >= 0 to continue the loop, where > 0 means there was progress which allows an immediate retry of f() with no sleeping. A return of < 0 is useful when f() will never make any future progress.
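
A hedged sketch of a caller retrying a connection using the default feed backoff constants listed above (connectOnce is a hypothetical helper, not part of this package):

cbft.ExponentialBackoffLoop("example-connect",
	func() int {
		if err := connectOnce(); err != nil {
			return 0 // no progress: sleep (with backoff), then retry
		}
		return -1 // done: stop the loop
	},
	cbft.FEED_SLEEP_INIT_MS,
	cbft.FEED_BACKOFF_FACTOR,
	cbft.FEED_SLEEP_MAX_MS)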

func FeedName

func FeedName(pindex *PIndex) string

func MustAsset added in v0.0.1

func MustAsset(name string) []byte

MustAsset is like Asset but panics when Asset would return an error. It simplifies safe initialization of global variables.

func NewBlackHolePIndexImpl added in v0.0.1

func NewBlackHolePIndexImpl(indexType, indexParams,
	path string, restart func()) (PIndexImpl, Dest, error)

func NewBlevePIndexImpl added in v0.0.1

func NewBlevePIndexImpl(indexType, indexParams, path string,
	restart func()) (PIndexImpl, Dest, error)

func NewManagerRESTRouter

func NewManagerRESTRouter(versionMain string, mgr *Manager,
	staticDir, staticETag string, mr *MsgRing) (
	*mux.Router, map[string]RESTMeta, error)

func NewPIndexImpl added in v0.0.1

func NewPIndexImpl(indexType, indexParams, path string, restart func()) (
	PIndexImpl, Dest, error)

func NewUUID

func NewUUID() string

func OpenBlackHolePIndexImpl added in v0.0.1

func OpenBlackHolePIndexImpl(indexType, path string, restart func()) (
	PIndexImpl, Dest, error)

func OpenBlevePIndexImpl added in v0.0.1

func OpenBlevePIndexImpl(indexType, path string,
	restart func()) (PIndexImpl, Dest, error)

func OpenPIndexImpl added in v0.0.1

func OpenPIndexImpl(indexType, path string, restart func()) (
	PIndexImpl, Dest, error)

func PIndexMatchesPlan

func PIndexMatchesPlan(pindex *PIndex, planPIndex *PlanPIndex) bool

Returns true if the PIndex matches the PlanPIndex, ignoring UUID.

func PIndexPath

func PIndexPath(dataDir, pindexName string) string

func ParsePIndexPath

func ParsePIndexPath(dataDir, pindexPath string) (string, bool)

func ParsePartitionsToVBucketIds added in v0.0.1

func ParsePartitionsToVBucketIds(dests map[string]Dest) ([]uint16, error)

func PlanPIndexName added in v0.0.1

func PlanPIndexName(indexDef *IndexDef, sourcePartitions string) string

NOTE: PlanPIndex.Name must be unique across the cluster and ideally functionally based off of the indexDef so that the SamePlanPIndex() comparison works even if concurrent planners are racing to calculate plans.

NOTE: We can't use sourcePartitions directly as part of a PlanPIndex.Name suffix because in vbucket/hash partitioning the string would be too long -- since PIndexes might use PlanPIndex.Name for filesystem paths.

func PlanPIndexNodeCanRead added in v0.0.1

func PlanPIndexNodeCanRead(p *PlanPIndexNode) bool

func PlanPIndexNodeCanWrite added in v0.0.1

func PlanPIndexNodeCanWrite(p *PlanPIndexNode) bool

func PlanPIndexNodeOk added in v0.0.3

func PlanPIndexNodeOk(p *PlanPIndexNode) bool

func PlannerCheckVersion added in v0.0.1

func PlannerCheckVersion(cfg Cfg, version string) error

func PrimaryFeedPartitions added in v0.0.1

func PrimaryFeedPartitions(sourceType, sourceName, sourceUUID, sourceParams,
	server string) ([]string, error)

func QueryAlias added in v0.0.1

func QueryAlias(mgr *Manager, indexName, indexUUID string,
	req []byte, res io.Writer) error

func QueryBlevePIndexImpl added in v0.0.1

func QueryBlevePIndexImpl(mgr *Manager, indexName, indexUUID string,
	req []byte, res io.Writer) error

func RegisterFeedType added in v0.0.1

func RegisterFeedType(sourceType string, f *FeedType)

func RegisterPIndexImplType added in v0.0.1

func RegisterPIndexImplType(indexType string, t *PIndexImplType)

func RestoreAsset added in v0.0.1

func RestoreAsset(dir, name string) error

Restore an asset under the given directory

func RestoreAssets added in v0.0.1

func RestoreAssets(dir, name string) error

Restore assets under the given directory recursively

func RewriteURL

func RewriteURL(to string, h http.Handler) http.Handler

func SamePlanPIndex

func SamePlanPIndex(a, b *PlanPIndex) bool

Returns true if both PlanPIndex are the same, ignoring PlanPIndex.UUID.

func SamePlanPIndexes

func SamePlanPIndexes(a, b *PlanPIndexes) bool

Returns true if both PlanPIndexes are the same, where we ignore any differences in UUID or ImplVersion.

func StartDCPFeed added in v0.0.1

func StartDCPFeed(mgr *Manager, feedName, indexName, indexUUID,
	sourceType, bucketName, bucketUUID, params string,
	dests map[string]Dest) error

func StartTAPFeed added in v0.0.1

func StartTAPFeed(mgr *Manager, feedName, indexName, indexUUID,
	sourceType, bucketName, bucketUUID, params string,
	dests map[string]Dest) error

func StringsIntersectStrings added in v0.0.1

func StringsIntersectStrings(a, b []string) []string

StringsIntersectStrings returns a brand new array that has the intersection of a and b.

func StringsRemoveStrings added in v0.0.1

func StringsRemoveStrings(stringArr, removeArr []string) []string

StringsRemoveStrings returns a copy of stringArr, but with some strings removed, keeping the same order as stringArr.

func StringsToMap added in v0.0.1

func StringsToMap(strsArr []string) map[string]bool

func SubsetPlanPIndexes

func SubsetPlanPIndexes(a, b *PlanPIndexes) bool

Returns true if the PlanPIndex children in a are a subset of those in b, using SamePlanPIndex() for the sameness comparison.

func SyncWorkReq added in v0.0.1

func SyncWorkReq(ch chan *WorkReq, op, msg string, obj interface{}) error

func Time added in v0.0.1

func Time(f func() error, totalDuration, totalCount, maxDuration *uint64) error

func TimeoutCancelChan added in v0.0.1

func TimeoutCancelChan(timeout int64) <-chan bool

func Timer added in v0.0.1

func Timer(f func() error, t metrics.Timer) error

func ValidateAlias added in v0.0.1

func ValidateAlias(indexType, indexName, indexParams string) error

func ValidateBlevePIndexImpl added in v0.0.1

func ValidateBlevePIndexImpl(indexType, indexName, indexParams string) error

func VersionGTE

func VersionGTE(x, y string) bool

Compares two dotted versioning strings, like "1.0.1" and "1.2.3". Returns true when x >= y.
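
For example:

cbft.VersionGTE("1.2.3", "1.0.1") // true
cbft.VersionGTE("1.0.1", "1.2.3") // false
cbft.VersionGTE("3.0.0", "3.0.0") // true, since x >= y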

func WriteTimerJSON added in v0.0.1

func WriteTimerJSON(w io.Writer, timer metrics.Timer)

Types

type AliasParams added in v0.0.1

type AliasParams struct {
	Targets map[string]*AliasParamsTarget `json:"targets"` // Keyed by indexName.
}

AliasParams holds the definition for a user-defined index alias. A user-defined index alias can be used as a level of indirection (the "LastQuartersSales" alias points currently to the "2014-Q3-Sales" index, but the administrator might repoint it in the future without changing the application) or to scatter-gather or fan-out a query across multiple real indexes (e.g., to query across customer records, product catalog, call-center records, etc, in one shot).
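
A minimal sketch of the params such an alias might carry, reusing the example index names from the paragraph above (assumes the encoding/json, fmt, and cbft imports):

aliasParams := cbft.AliasParams{
	Targets: map[string]*cbft.AliasParamsTarget{
		"2014-Q3-Sales": {}, // IndexUUID left "" to follow whatever UUID is current
	},
}
b, _ := json.Marshal(&aliasParams)
fmt.Printf("%s\n", b) // could be supplied as the params of an index whose type is "alias"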

type AliasParamsTarget added in v0.0.1

type AliasParamsTarget struct {
	IndexUUID string `json:"indexUUID"` // Optional.
}

type BlackHole added in v0.0.1

type BlackHole struct {
	// contains filtered or unexported fields
}

Implements both Dest and PIndexImpl interfaces.

func (*BlackHole) Close added in v0.0.1

func (t *BlackHole) Close() error

func (*BlackHole) ConsistencyWait added in v0.0.1

func (t *BlackHole) ConsistencyWait(partition, partitionUUID string,
	consistencyLevel string,
	consistencySeq uint64,
	cancelCh <-chan bool) error

func (*BlackHole) Count added in v0.0.1

func (t *BlackHole) Count(pindex *PIndex,
	cancelCh <-chan bool) (uint64, error)

func (*BlackHole) DataDelete added in v0.0.3

func (t *BlackHole) DataDelete(partition string,
	key []byte, seq uint64) error

func (*BlackHole) DataUpdate added in v0.0.3

func (t *BlackHole) DataUpdate(partition string,
	key []byte, seq uint64, val []byte) error

func (*BlackHole) OpaqueGet added in v0.0.3

func (t *BlackHole) OpaqueGet(partition string) (
	value []byte, lastSeq uint64, err error)

func (*BlackHole) OpaqueSet added in v0.0.3

func (t *BlackHole) OpaqueSet(partition string, value []byte) error

func (*BlackHole) Query added in v0.0.1

func (t *BlackHole) Query(pindex *PIndex, req []byte, w io.Writer,
	cancelCh <-chan bool) error

func (*BlackHole) Rollback added in v0.0.1

func (t *BlackHole) Rollback(partition string, rollbackSeq uint64) error

func (*BlackHole) SnapshotStart added in v0.0.3

func (t *BlackHole) SnapshotStart(partition string,
	snapStart, snapEnd uint64) error

func (*BlackHole) Stats added in v0.0.1

func (t *BlackHole) Stats(w io.Writer) error

type BleveDest added in v0.0.1

type BleveDest struct {
	// contains filtered or unexported fields
}

func NewBleveDest added in v0.0.1

func NewBleveDest(path string, bindex bleve.Index, restart func()) *BleveDest

func (*BleveDest) AddError added in v0.0.1

func (t *BleveDest) AddError(op, partition string,
	key []byte, seq uint64, val []byte, err error)

func (*BleveDest) Close added in v0.0.1

func (t *BleveDest) Close() error

func (*BleveDest) ConsistencyWait added in v0.0.1

func (t *BleveDest) ConsistencyWait(partition, partitionUUID string,
	consistencyLevel string,
	consistencySeq uint64,
	cancelCh <-chan bool) error

func (*BleveDest) Count added in v0.0.1

func (t *BleveDest) Count(pindex *PIndex, cancelCh <-chan bool) (uint64, error)

func (*BleveDest) Dest added in v0.0.1

func (t *BleveDest) Dest(partition string) (Dest, error)

func (*BleveDest) Query added in v0.0.1

func (t *BleveDest) Query(pindex *PIndex, req []byte, res io.Writer,
	cancelCh <-chan bool) error

func (*BleveDest) Rollback added in v0.0.1

func (t *BleveDest) Rollback(partition string, rollbackSeq uint64) error

func (*BleveDest) Stats added in v0.0.1

func (t *BleveDest) Stats(w io.Writer) (err error)

type BleveDestPartition added in v0.0.1

type BleveDestPartition struct {
	// contains filtered or unexported fields
}

Used to track state for a single partition.

func (*BleveDestPartition) Close added in v0.0.1

func (t *BleveDestPartition) Close() error

func (*BleveDestPartition) ConsistencyWait added in v0.0.1

func (t *BleveDestPartition) ConsistencyWait(partition, partitionUUID string,
	consistencyLevel string,
	consistencySeq uint64,
	cancelCh <-chan bool) error

func (*BleveDestPartition) Count added in v0.0.1

func (t *BleveDestPartition) Count(pindex *PIndex, cancelCh <-chan bool) (
	uint64, error)

func (*BleveDestPartition) DataDelete added in v0.0.3

func (t *BleveDestPartition) DataDelete(partition string,
	key []byte, seq uint64) error

func (*BleveDestPartition) DataUpdate added in v0.0.3

func (t *BleveDestPartition) DataUpdate(partition string,
	key []byte, seq uint64, val []byte) error

func (*BleveDestPartition) OpaqueGet added in v0.0.3

func (t *BleveDestPartition) OpaqueGet(partition string) ([]byte, uint64, error)

func (*BleveDestPartition) OpaqueSet added in v0.0.3

func (t *BleveDestPartition) OpaqueSet(partition string, value []byte) error

func (*BleveDestPartition) Query added in v0.0.1

func (t *BleveDestPartition) Query(pindex *PIndex, req []byte, res io.Writer,
	cancelCh <-chan bool) error

func (*BleveDestPartition) Rollback added in v0.0.1

func (t *BleveDestPartition) Rollback(partition string, rollbackSeq uint64) error

func (*BleveDestPartition) SnapshotStart added in v0.0.3

func (t *BleveDestPartition) SnapshotStart(partition string,
	snapStart, snapEnd uint64) error

func (*BleveDestPartition) Stats added in v0.0.1

func (t *BleveDestPartition) Stats(w io.Writer) error

type BleveParams added in v0.0.1

type BleveParams struct {
	Mapping bleve.IndexMapping     `json:"mapping"`
	Store   map[string]interface{} `json:"store"`
}

func NewBleveParams added in v0.0.1

func NewBleveParams() *BleveParams

type BleveQueryParams added in v0.0.1

type BleveQueryParams struct {
	Timeout     int64                `json:"timeout"`
	Consistency *ConsistencyParams   `json:"consistency"`
	Query       *bleve.SearchRequest `json:"query"`
}

type CBFeedParams added in v0.0.1

type CBFeedParams struct {
	AuthUser     string `json:"authUser"` // May be "" for no auth.
	AuthPassword string `json:"authPassword"`
}

type Cfg

type Cfg interface {
	// Get retrieves an entry from the Cfg.  A zero cas means don't do
	// a CAS match on Get(), and a non-zero cas value means the Get()
	// will succeed only if the CAS matches.
	Get(key string, cas uint64) (val []byte, casSuccess uint64, err error)

	// Set creates or updates an entry in the Cfg.  A non-zero cas
	// that does not match will result in an error.  A zero cas means
	// the Set() operation must be an entry creation, where a zero cas
	// Set() will error if the entry already exists.
	Set(key string, val []byte, cas uint64) (casSuccess uint64, err error)

	// Del removes an entry from the Cfg.  A non-zero cas that does
	// not match will result in an error.  A zero cas means a CAS
	// match will be skipped, so that clients can perform a
	// "don't-care, out-of-the-blue" deletion.
	Del(key string, cas uint64) error

	// Subscribe allows clients to receive events on changes to a key.
	// During a deletion event, the CfgEvent.CAS field will be 0.
	Subscribe(key string, ch chan CfgEvent) error

	// Refresh forces the Cfg implementation to reload from its
	// backend-specific data source, clearing any locally cached data.
	// Any subscribers will receive events on a Refresh, where it's up
	// to subscribers to detect if there were actual changes or not.
	Refresh() error
}

Cfg is the interface that configuration providers must implement.
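
A minimal sketch of the CAS contract using the in-memory CfgMem implementation (error handling abbreviated; assumes the fmt and cbft imports):

cfg := cbft.NewCfgMem()

// A zero cas on Set() means the entry must not already exist.
cas, _ := cfg.Set("exampleKey", []byte("v1"), 0)

// Updates must supply a matching cas; a mismatch yields a CfgCASError.
if _, err := cfg.Set("exampleKey", []byte("v2"), cas); err != nil {
	if _, ok := err.(*cbft.CfgCASError); ok {
		// another writer won the race; Get() the latest cas and retry
	}
}

// A zero cas on Get() skips the CAS match entirely.
val, _, _ := cfg.Get("exampleKey", 0)
fmt.Printf("%s\n", val)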

type CfgCASError

type CfgCASError struct{}

The error used on mismatches of CAS (compare and set/swap) values.

func (*CfgCASError) Error

func (e *CfgCASError) Error() string

type CfgCB added in v0.0.1

type CfgCB struct {
	// contains filtered or unexported fields
}

CfgCB is an implementation of Cfg that uses a couchbase bucket, and uses DCP to get change notifications.

TODO: This current implementation is race-y! Instead of storing everything as a single uber key/value, we should instead be storing individual key/value's on every get/set/del operation.

func NewCfgCB added in v0.0.1

func NewCfgCB(url, bucket string) (*CfgCB, error)

func (*CfgCB) DataDelete added in v0.0.1

func (r *CfgCB) DataDelete(vbucketId uint16, key []byte, seq uint64,
	req *gomemcached.MCRequest) error

func (*CfgCB) DataUpdate added in v0.0.1

func (r *CfgCB) DataUpdate(vbucketId uint16, key []byte, seq uint64,
	req *gomemcached.MCRequest) error

func (*CfgCB) Del added in v0.0.1

func (c *CfgCB) Del(key string, cas uint64) error

func (*CfgCB) Get added in v0.0.1

func (c *CfgCB) Get(key string, cas uint64) (
	[]byte, uint64, error)

func (*CfgCB) GetCredentials added in v0.0.1

func (a *CfgCB) GetCredentials() (string, string, string)

func (*CfgCB) GetMetaData added in v0.0.1

func (r *CfgCB) GetMetaData(vbucketId uint16) (
	value []byte, lastSeq uint64, err error)

func (*CfgCB) Load added in v0.0.1

func (c *CfgCB) Load() error

func (*CfgCB) OnError added in v0.0.1

func (r *CfgCB) OnError(err error)

func (*CfgCB) Refresh added in v0.0.1

func (c *CfgCB) Refresh() error

func (*CfgCB) Rollback added in v0.0.1

func (r *CfgCB) Rollback(vbucketId uint16, rollbackSeq uint64) error

func (*CfgCB) Set added in v0.0.1

func (c *CfgCB) Set(key string, val []byte, cas uint64) (
	uint64, error)

func (*CfgCB) SetMetaData added in v0.0.1

func (r *CfgCB) SetMetaData(vbucketId uint16, value []byte) error

func (*CfgCB) SnapshotStart added in v0.0.1

func (r *CfgCB) SnapshotStart(vbucketId uint16,
	snapStart, snapEnd uint64, snapType uint32) error

func (*CfgCB) Subscribe added in v0.0.1

func (c *CfgCB) Subscribe(key string, ch chan CfgEvent) error

type CfgEvent added in v0.0.1

type CfgEvent struct {
	Key string
	CAS uint64
}

See the Cfg.Subscribe() method.

type CfgGetHandler added in v0.0.1

type CfgGetHandler struct {
	// contains filtered or unexported fields
}

func NewCfgGetHandler added in v0.0.1

func NewCfgGetHandler(mgr *Manager) *CfgGetHandler

func (*CfgGetHandler) ServeHTTP added in v0.0.1

func (h *CfgGetHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)

type CfgMem

type CfgMem struct {
	CASNext uint64
	Entries map[string]*CfgMemEntry
	// contains filtered or unexported fields
}

func NewCfgMem

func NewCfgMem() *CfgMem

func (*CfgMem) Del

func (c *CfgMem) Del(key string, cas uint64) error

func (*CfgMem) Get

func (c *CfgMem) Get(key string, cas uint64) (
	[]byte, uint64, error)

func (*CfgMem) Refresh added in v0.0.1

func (c *CfgMem) Refresh() error

func (*CfgMem) Set

func (c *CfgMem) Set(key string, val []byte, cas uint64) (
	uint64, error)

func (*CfgMem) Subscribe added in v0.0.1

func (c *CfgMem) Subscribe(key string, ch chan CfgEvent) error

type CfgMemEntry

type CfgMemEntry struct {
	CAS uint64
	Val []byte
}

type CfgRefreshHandler added in v0.0.1

type CfgRefreshHandler struct {
	// contains filtered or unexported fields
}

func NewCfgRefreshHandler added in v0.0.1

func NewCfgRefreshHandler(mgr *Manager) *CfgRefreshHandler

func (*CfgRefreshHandler) ServeHTTP added in v0.0.1

func (h *CfgRefreshHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)

type CfgSimple

type CfgSimple struct {
	// contains filtered or unexported fields
}

func NewCfgSimple

func NewCfgSimple(path string) *CfgSimple

func (*CfgSimple) Del

func (c *CfgSimple) Del(key string, cas uint64) error

func (*CfgSimple) Get

func (c *CfgSimple) Get(key string, cas uint64) (
	[]byte, uint64, error)

func (*CfgSimple) Load

func (c *CfgSimple) Load() error

func (*CfgSimple) Refresh added in v0.0.1

func (c *CfgSimple) Refresh() error

func (*CfgSimple) Set

func (c *CfgSimple) Set(key string, val []byte, cas uint64) (
	uint64, error)

func (*CfgSimple) Subscribe added in v0.0.1

func (c *CfgSimple) Subscribe(key string, ch chan CfgEvent) error

type ConsistencyParams added in v0.0.1

type ConsistencyParams struct {
	// A Level value of "" means stale is ok; "at_plus" means we need
	// consistency at least at or beyond the consistency vector but
	// not before.
	Level string `json:"level"`

	// Keyed by indexName.
	Vectors map[string]ConsistencyVector `json:"vectors"`
}

type ConsistencyVector added in v0.0.1

type ConsistencyVector map[string]uint64

Key is partition or partition/partitionUUID. Value is seq. For example, a DCP data source might have the key as either "vbucketId" or "vbucketId/vbucketUUID".
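
A minimal sketch of an "at_plus" query body that waits for partition "0" of the "default" index to reach at least seq 100 (names and numbers are illustrative; assumes the bleve, encoding/json, and fmt imports):

q := cbft.BleveQueryParams{
	Consistency: &cbft.ConsistencyParams{
		Level: "at_plus",
		Vectors: map[string]cbft.ConsistencyVector{
			"default": {"0": 100}, // partition "0" must reach seq 100 or beyond
		},
	},
	Query: &bleve.SearchRequest{
		Query: bleve.NewQueryStringQuery("your-search-term"),
		Size:  10,
	},
}
body, _ := json.Marshal(&q)
fmt.Printf("%s\n", body) // could be POSTed to /api/index/default/query, as in the README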

type ConsistencyWaitReq added in v0.0.1

type ConsistencyWaitReq struct {
	PartitionUUID    string
	ConsistencyLevel string
	ConsistencySeq   uint64
	CancelCh         <-chan bool
	DoneCh           chan error
}

type ConsistencyWaiter added in v0.0.1

type ConsistencyWaiter interface {
	ConsistencyWait(partition, partitionUUID string,
		consistencyLevel string,
		consistencySeq uint64,
		cancelCh <-chan bool) error
}

type CountHandler added in v0.0.1

type CountHandler struct {
	// contains filtered or unexported fields
}

func NewCountHandler added in v0.0.1

func NewCountHandler(mgr *Manager) *CountHandler

func (*CountHandler) RESTOpts added in v0.0.1

func (h *CountHandler) RESTOpts(opts map[string]string)

func (*CountHandler) ServeHTTP added in v0.0.1

func (h *CountHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)

type CountPIndexHandler added in v0.0.1

type CountPIndexHandler struct {
	// contains filtered or unexported fields
}

func NewCountPIndexHandler added in v0.0.1

func NewCountPIndexHandler(mgr *Manager) *CountPIndexHandler

func (*CountPIndexHandler) ServeHTTP added in v0.0.1

func (h *CountPIndexHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)

type CreateIndexHandler

type CreateIndexHandler struct {
	// contains filtered or unexported fields
}

func NewCreateIndexHandler added in v0.0.1

func NewCreateIndexHandler(mgr *Manager) *CreateIndexHandler

func (*CreateIndexHandler) RESTOpts added in v0.0.1

func (h *CreateIndexHandler) RESTOpts(opts map[string]string)

func (*CreateIndexHandler) ServeHTTP

func (h *CreateIndexHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)

type DCPFeed

type DCPFeed struct {
	// contains filtered or unexported fields
}

A DCPFeed implements both Feed and cbdatasource.Receiver interfaces.

func NewDCPFeed

func NewDCPFeed(name, indexName, url, poolName,
	bucketName, bucketUUID, paramsStr string,
	pf DestPartitionFunc, dests map[string]Dest,
	disable bool) (*DCPFeed, error)

func (*DCPFeed) Close

func (t *DCPFeed) Close() error

func (*DCPFeed) DataDelete added in v0.0.1

func (r *DCPFeed) DataDelete(vbucketId uint16, key []byte, seq uint64,
	req *gomemcached.MCRequest) error

func (*DCPFeed) DataUpdate added in v0.0.1

func (r *DCPFeed) DataUpdate(vbucketId uint16, key []byte, seq uint64,
	req *gomemcached.MCRequest) error

func (*DCPFeed) Dests added in v0.0.1

func (t *DCPFeed) Dests() map[string]Dest

func (*DCPFeed) GetMetaData added in v0.0.1

func (r *DCPFeed) GetMetaData(vbucketId uint16) (
	value []byte, lastSeq uint64, err error)

func (*DCPFeed) IndexName added in v0.0.1

func (t *DCPFeed) IndexName() string

func (*DCPFeed) Name

func (t *DCPFeed) Name() string

func (*DCPFeed) OnError added in v0.0.1

func (r *DCPFeed) OnError(err error)

func (*DCPFeed) Rollback added in v0.0.1

func (r *DCPFeed) Rollback(vbucketId uint16, rollbackSeq uint64) error

func (*DCPFeed) SetMetaData added in v0.0.1

func (r *DCPFeed) SetMetaData(vbucketId uint16, value []byte) error

func (*DCPFeed) SnapshotStart added in v0.0.1

func (r *DCPFeed) SnapshotStart(vbucketId uint16,
	snapStart, snapEnd uint64, snapType uint32) error

func (*DCPFeed) Start

func (t *DCPFeed) Start() error

func (*DCPFeed) Stats added in v0.0.1

func (t *DCPFeed) Stats(w io.Writer) error

type DCPFeedParams added in v0.0.1

type DCPFeedParams struct {
	AuthUser     string `json:"authUser"` // May be "" for no auth.
	AuthPassword string `json:"authPassword"`

	// Factor (like 1.5) to increase sleep time between retries
	// in connecting to a cluster manager node.
	ClusterManagerBackoffFactor float32 `json:"clusterManagerBackoffFactor"`

	// Initial sleep time (millisecs) before first retry to cluster manager.
	ClusterManagerSleepInitMS int `json:"clusterManagerSleepInitMS"`

	// Maximum sleep time (millisecs) between retries to cluster manager.
	ClusterManagerSleepMaxMS int `json:"clusterManagerSleepMaxMS"`

	// Factor (like 1.5) to increase sleep time between retries
	// in connecting to a data manager node.
	DataManagerBackoffFactor float32 `json:"dataManagerBackoffFactor"`

	// Initial sleep time (millisecs) before first retry to data manager.
	DataManagerSleepInitMS int `json:"dataManagerSleepInitMS"`

	// Maximum sleep time (millisecs) between retries to data manager.
	DataManagerSleepMaxMS int `json:"dataManagerSleepMaxMS"`

	// Buffer size in bytes provided for UPR flow control.
	FeedBufferSizeBytes uint32 `json:"feedBufferSizeBytes"`

	// Used for UPR flow control and buffer-ack messages when this
	// percentage of FeedBufferSizeBytes is reached.
	FeedBufferAckThreshold float32 `json:"feedBufferAckThreshold"`
}

func NewDCPFeedParams added in v0.0.1

func NewDCPFeedParams() *DCPFeedParams

func (*DCPFeedParams) GetCredentials added in v0.0.1

func (d *DCPFeedParams) GetCredentials() (string, string, string)
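
A minimal sketch of building DCP sourceParams JSON for a password-protected bucket (values are examples only; assumes the encoding/json, fmt, and cbft imports):

p := cbft.NewDCPFeedParams()
p.AuthUser = "default" // example credentials only
p.AuthPassword = "password"
p.ClusterManagerBackoffFactor = 2.0
sourceParams, _ := json.Marshal(p)
fmt.Printf("%s\n", sourceParams) // might be used as an index definition's sourceParams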

type DeleteIndexHandler

type DeleteIndexHandler struct {
	// contains filtered or unexported fields
}

func NewDeleteIndexHandler

func NewDeleteIndexHandler(mgr *Manager) *DeleteIndexHandler

func (*DeleteIndexHandler) RESTOpts added in v0.0.1

func (h *DeleteIndexHandler) RESTOpts(opts map[string]string)

func (*DeleteIndexHandler) ServeHTTP

func (h *DeleteIndexHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)

type Dest added in v0.0.1

type Dest interface {
	// Invoked by PIndex.Close().
	Close() error

	// Invoked when there's a new mutation from a data source for a
	// partition.  Dest implementation is responsible for making its
	// own copies of the key and val data.
	DataUpdate(partition string, key []byte, seq uint64, val []byte) error

	// Invoked by the data source when there's a data deletion in a
	// partition.  Dest implementation is responsible for making its
	// own copies of the key data.
	DataDelete(partition string, key []byte, seq uint64) error

	// A callback invoked by the data source when there's a start of
	// a new snapshot for a partition.  The Receiver implementation,
	// for example, might choose to optimize persistence perhaps by
	// preparing a batch write to application-specific storage.
	SnapshotStart(partition string, snapStart, snapEnd uint64) error

	// OpaqueGet() should return the opaque value previously
	// provided by an earlier call to OpaqueSet().  If there was no
	// previous call to OpaqueSet(), such as in the case of a brand
	// new instance of a Dest (as opposed to a restarted or reloaded
	// Dest), the Dest should return (nil, 0, nil) for (value,
	// lastSeq, err), respectively.  The lastSeq should be the last
	// sequence number received and persisted during calls to the
	// Dest's DataUpdate() & DataDelete() methods.
	OpaqueGet(partition string) (value []byte, lastSeq uint64, err error)

	// The Dest implementation should persist the value parameter of
	// OpaqueSet() for retrieval during some future call to
	// OpaqueGet() by the system.  The metadata value should be
	// considered "in-stream", or as part of the sequence history of
	// mutations.  That is, a later Rollback() to some previous
	// sequence number for a particular partition should rollback
	// both persisted metadata and regular data.  The Dest
	// implementation should make its own copy of the value data.
	OpaqueSet(partition string, value []byte) error

	// Invoked when the data source signals a rollback during dest
	// initialization.  Note that both regular data and opaque data
	// should be rolled back to at most the rollbackSeq.  Of
	// note, the Dest is allowed to rollback even further, even all
	// the way back to the start or to zero.
	Rollback(partition string, rollbackSeq uint64) error

	// Blocks until the Dest has reached the desired consistency for
	// the partition or until the cancelCh is readable or closed by
	// some goroutine related to the calling goroutine.  The error
	// response might be a ErrorConsistencyWait instance, which has
	// StartEndSeqs information.  The seqStart is the seq number when
	// the operation started waiting and the seqEnd is the seq number
	// at the end of operation (even when cancelled or error), so that
	// the caller might get a rough idea of ingest velocity.
	ConsistencyWait(partition, partitionUUID string,
		consistencyLevel string,
		consistencySeq uint64,
		cancelCh <-chan bool) error

	// Counts the underlying pindex implementation.
	Count(pindex *PIndex, cancelCh <-chan bool) (uint64, error)

	// Queries the underlying pindex implementation, blocking if
	// needed for the Dest to reach the desired consistency.
	Query(pindex *PIndex, req []byte, w io.Writer,
		cancelCh <-chan bool) error

	Stats(io.Writer) error
}

func BasicPartitionFunc added in v0.0.1

func BasicPartitionFunc(partition string, key []byte,
	dests map[string]Dest) (Dest, error)

This basic partition func first tries a direct lookup by partition string, else it tries the "" partition.
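
A minimal sketch of that fallback behavior (assumes the fmt and cbft imports):

dests := map[string]cbft.Dest{"": &cbft.BlackHole{}}
d, _ := cbft.BasicPartitionFunc("123", nil, dests)
fmt.Println(d == dests[""]) // true: no entry for partition "123", so the "" Dest is used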

func VBucketIdToPartitionDest added in v0.0.1

func VBucketIdToPartitionDest(pf DestPartitionFunc,
	dests map[string]Dest, vbucketId uint16, key []byte) (
	partition string, dest Dest, err error)

type DestForwarder added in v0.0.1

type DestForwarder struct {
	DestProvider DestProvider
}

A DestForwarder forwards method calls on it to the Dest returned by the DestProvider.

func (*DestForwarder) Close added in v0.0.1

func (t *DestForwarder) Close() error

func (*DestForwarder) ConsistencyWait added in v0.0.1

func (t *DestForwarder) ConsistencyWait(partition, partitionUUID string,
	consistencyLevel string,
	consistencySeq uint64,
	cancelCh <-chan bool) error

func (*DestForwarder) Count added in v0.0.1

func (t *DestForwarder) Count(pindex *PIndex, cancelCh <-chan bool) (
	uint64, error)

func (*DestForwarder) DataDelete added in v0.0.3

func (t *DestForwarder) DataDelete(partition string,
	key []byte, seq uint64) error

func (*DestForwarder) DataUpdate added in v0.0.3

func (t *DestForwarder) DataUpdate(partition string,
	key []byte, seq uint64, val []byte) error

func (*DestForwarder) OpaqueGet added in v0.0.3

func (t *DestForwarder) OpaqueGet(partition string) (
	value []byte, lastSeq uint64, err error)

func (*DestForwarder) OpaqueSet added in v0.0.3

func (t *DestForwarder) OpaqueSet(partition string, value []byte) error

func (*DestForwarder) Query added in v0.0.1

func (t *DestForwarder) Query(pindex *PIndex, req []byte, res io.Writer,
	cancelCh <-chan bool) error

func (*DestForwarder) Rollback added in v0.0.1

func (t *DestForwarder) Rollback(partition string, rollbackSeq uint64) error

func (*DestForwarder) SnapshotStart added in v0.0.3

func (t *DestForwarder) SnapshotStart(partition string,
	snapStart, snapEnd uint64) error

func (*DestForwarder) Stats added in v0.0.1

func (t *DestForwarder) Stats(w io.Writer) error

type DestPartitionFunc added in v0.0.1

type DestPartitionFunc func(partition string, key []byte,
	dests map[string]Dest) (Dest, error)

type DestProvider added in v0.0.1

type DestProvider interface {
	Dest(partition string) (Dest, error)

	Count(pindex *PIndex, cancelCh <-chan bool) (uint64, error)

	Query(pindex *PIndex, req []byte, res io.Writer,
		cancelCh <-chan bool) error

	Stats(io.Writer) error

	Close() error
}

type DestSourceParams added in v0.0.1

type DestSourceParams struct {
	NumPartitions int `json:"numPartitions"`
}

type DestStats added in v0.0.1

type DestStats struct {
	TotError uint64

	TimerDataUpdate    metrics.Timer
	TimerDataDelete    metrics.Timer
	TimerSnapshotStart metrics.Timer
	TimerOpaqueGet     metrics.Timer
	TimerOpaqueSet     metrics.Timer
	TimerRollback      metrics.Timer
}

func NewDestStats added in v0.0.1

func NewDestStats() *DestStats

func (*DestStats) WriteJSON added in v0.0.1

func (d *DestStats) WriteJSON(w io.Writer)

type DiagGetHandler added in v0.0.1

type DiagGetHandler struct {
	// contains filtered or unexported fields
}

func NewDiagGetHandler added in v0.0.1

func NewDiagGetHandler(versionMain string,
	mgr *Manager, mr *MsgRing) *DiagGetHandler

func (*DiagGetHandler) ServeHTTP added in v0.0.1

func (h *DiagGetHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)

type ErrorConsistencyWait added in v0.0.1

type ErrorConsistencyWait struct {
	Err    error  // The underlying, wrapped error.
	Status string // Short status reason, like "timeout", "cancelled", etc.

	// Keyed by partitionId, value is pair of start/end seq's.
	StartEndSeqs map[string][]uint64
}

func (*ErrorConsistencyWait) Error added in v0.0.1

func (e *ErrorConsistencyWait) Error() string

type Feed

type Feed interface {
	Name() string
	IndexName() string
	Start() error
	Close() error
	Dests() map[string]Dest // Key is partition identifier.

	// Writes stats as JSON to the given writer.
	Stats(io.Writer) error
}

func CalcFeedsDelta

func CalcFeedsDelta(nodeUUID string, planPIndexes *PlanPIndexes,
	currFeeds map[string]Feed, pindexes map[string]*PIndex) (
	addFeeds [][]*PIndex, removeFeeds []Feed)

Functionally determine the delta of which feeds need creation and which should be shut down.

type FeedPartitionsFunc added in v0.0.1

type FeedPartitionsFunc func(sourceType, sourceName, sourceUUID, sourceParams,
	server string) ([]string, error)

type FeedStartFunc added in v0.0.1

type FeedStartFunc func(mgr *Manager, feedName, indexName, indexUUID string,
	sourceType, sourceName, sourceUUID, sourceParams string,
	dests map[string]Dest) error

type FeedType added in v0.0.1

type FeedType struct {
	Start           FeedStartFunc
	Partitions      FeedPartitionsFunc
	Public          bool
	Description     string
	StartSample     interface{}
	StartSampleDocs map[string]string
}

type FileLike added in v0.0.1

type FileLike interface {
	io.Closer
	io.ReaderAt
	io.WriterAt
	Stat() (os.FileInfo, error)
	Truncate(size int64) error
}

A FileLike does things kind of like a file.

type FileService added in v0.0.1

type FileService struct {
	// contains filtered or unexported fields
}

func NewFileService added in v0.0.1

func NewFileService(concurrency int) *FileService

func (*FileService) Close added in v0.0.1

func (f *FileService) Close() error

func (*FileService) Do added in v0.0.1

func (f *FileService) Do(path string, flags int, fn func(*os.File) error) error

func (*FileService) OpenFile added in v0.0.1

func (fs *FileService) OpenFile(path string, mode int) (FileLike, error)

Open a FileLike thing that works within this FileService.
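
A minimal sketch of running a file operation through a FileService (path and flags are examples only; assumes the os and log imports):

fs := cbft.NewFileService(4) // at most 4 concurrent file operations
defer fs.Close()

err := fs.Do("/tmp/example.dat", os.O_RDWR|os.O_CREATE, func(f *os.File) error {
	_, werr := f.WriteString("hello")
	return werr
})
if err != nil {
	log.Fatal(err)
}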

type GetIndexHandler added in v0.0.1

type GetIndexHandler struct {
	// contains filtered or unexported fields
}

func NewGetIndexHandler added in v0.0.1

func NewGetIndexHandler(mgr *Manager) *GetIndexHandler

func (*GetIndexHandler) RESTOpts added in v0.0.1

func (h *GetIndexHandler) RESTOpts(opts map[string]string)

func (*GetIndexHandler) ServeHTTP added in v0.0.1

func (h *GetIndexHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)

type GetPIndexHandler added in v0.0.1

type GetPIndexHandler struct {
	// contains filtered or unexported fields
}

func NewGetPIndexHandler added in v0.0.1

func NewGetPIndexHandler(mgr *Manager) *GetPIndexHandler

func (*GetPIndexHandler) ServeHTTP added in v0.0.1

func (h *GetPIndexHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)

type IndexClient added in v0.0.1

type IndexClient struct {
	QueryURL    string
	CountURL    string
	Consistency *ConsistencyParams
}

IndexClient implements the Search() and DocCount() subset of the bleve.Index interface by accessing a remote cbft server via its REST protocol. This allows callers to add an IndexClient as a target of a bleve.IndexAlias, and it supports cbft protocol features like query consistency and auth.

TODO: Implement propagating auth info in IndexClient.
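
A minimal sketch of scatter/gather across two remote cbft nodes by adding IndexClient's to a bleve.IndexAlias (the URLs are illustrative only; assumes the bleve, fmt, and cbft imports):

alias := bleve.NewIndexAlias(
	&cbft.IndexClient{
		QueryURL: "http://host1:8095/api/pindex/example/query",
		CountURL: "http://host1:8095/api/pindex/example/count",
	},
	&cbft.IndexClient{
		QueryURL: "http://host2:8095/api/pindex/example/query",
		CountURL: "http://host2:8095/api/pindex/example/count",
	},
)
req := bleve.NewSearchRequest(bleve.NewQueryStringQuery("your-search-term"))
res, err := alias.Search(req)
if err == nil {
	fmt.Println(res.Total) // total hits gathered from both targets
}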

func (*IndexClient) Advanced added in v0.0.1

func (r *IndexClient) Advanced() (index.Index, store.KVStore, error)

func (*IndexClient) Batch added in v0.0.1

func (r *IndexClient) Batch(b *bleve.Batch) error

func (*IndexClient) Close added in v0.0.1

func (r *IndexClient) Close() error

func (*IndexClient) Count added in v0.0.1

func (r *IndexClient) Count() (uint64, error)

func (*IndexClient) Delete added in v0.0.1

func (r *IndexClient) Delete(id string) error

func (*IndexClient) DeleteInternal added in v0.0.1

func (r *IndexClient) DeleteInternal(key []byte) error

func (*IndexClient) DocCount added in v0.0.1

func (r *IndexClient) DocCount() (uint64, error)

func (*IndexClient) Document added in v0.0.1

func (r *IndexClient) Document(id string) (*document.Document, error)

func (*IndexClient) DumpAll added in v0.0.1

func (r *IndexClient) DumpAll() chan interface{}

func (*IndexClient) DumpDoc added in v0.0.1

func (r *IndexClient) DumpDoc(id string) chan interface{}

func (*IndexClient) DumpFields added in v0.0.1

func (r *IndexClient) DumpFields() chan interface{}

func (*IndexClient) FieldDict added in v0.0.1

func (r *IndexClient) FieldDict(field string) (index.FieldDict, error)

func (*IndexClient) FieldDictPrefix added in v0.0.1

func (r *IndexClient) FieldDictPrefix(field string,
	termPrefix []byte) (index.FieldDict, error)

func (*IndexClient) FieldDictRange added in v0.0.1

func (r *IndexClient) FieldDictRange(field string,
	startTerm []byte, endTerm []byte) (index.FieldDict, error)

func (*IndexClient) Fields added in v0.0.1

func (r *IndexClient) Fields() ([]string, error)

func (*IndexClient) GetInternal added in v0.0.1

func (r *IndexClient) GetInternal(key []byte) ([]byte, error)

func (*IndexClient) Index added in v0.0.1

func (r *IndexClient) Index(id string, data interface{}) error

func (*IndexClient) Mapping added in v0.0.1

func (r *IndexClient) Mapping() *bleve.IndexMapping

func (*IndexClient) NewBatch added in v0.0.1

func (r *IndexClient) NewBatch() *bleve.Batch

func (*IndexClient) Query added in v0.0.1

func (r *IndexClient) Query(buf []byte) ([]byte, error)

func (*IndexClient) Search added in v0.0.1

func (r *IndexClient) Search(req *bleve.SearchRequest) (*bleve.SearchResult, error)

func (*IndexClient) SetInternal added in v0.0.1

func (r *IndexClient) SetInternal(key, val []byte) error

func (*IndexClient) Stats added in v0.0.1

func (r *IndexClient) Stats() *bleve.IndexStat

type IndexControlHandler added in v0.0.1

type IndexControlHandler struct {
	// contains filtered or unexported fields
}

func NewIndexControlHandler added in v0.0.1

func NewIndexControlHandler(mgr *Manager, control string,
	allowedOps map[string]bool) *IndexControlHandler

func (*IndexControlHandler) RESTOpts added in v0.0.1

func (h *IndexControlHandler) RESTOpts(opts map[string]string)

func (*IndexControlHandler) ServeHTTP added in v0.0.1

func (h *IndexControlHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)

type IndexDef

type IndexDef struct {
	Type         string     `json:"type"` // Ex: "bleve", "alias", "blackhole", etc.
	Name         string     `json:"name"`
	UUID         string     `json:"uuid"`
	Params       string     `json:"params"`
	SourceType   string     `json:"sourceType"`
	SourceName   string     `json:"sourceName"`
	SourceUUID   string     `json:"sourceUUID"`
	SourceParams string     `json:"sourceParams"` // Optional connection info.
	PlanParams   PlanParams `json:"planParams"`
}

type IndexDefs

type IndexDefs struct {
	// IndexDefs.UUID changes whenever any child IndexDef changes.
	UUID        string               `json:"uuid"`
	IndexDefs   map[string]*IndexDef `json:"indexDefs"`   // Key is IndexDef.Name.
	ImplVersion string               `json:"implVersion"` // See VERSION.
}

func CfgGetIndexDefs

func CfgGetIndexDefs(cfg Cfg) (*IndexDefs, uint64, error)

func NewIndexDefs

func NewIndexDefs(version string) *IndexDefs

func PlannerGetIndexDefs added in v0.0.1

func PlannerGetIndexDefs(cfg Cfg, version string) (*IndexDefs, error)

type JSONStatsWriter added in v0.0.1

type JSONStatsWriter interface {
	WriteJSON(w io.Writer)
}

type ListIndexHandler added in v0.0.1

type ListIndexHandler struct {
	// contains filtered or unexported fields
}

func NewListIndexHandler added in v0.0.1

func NewListIndexHandler(mgr *Manager) *ListIndexHandler

func (*ListIndexHandler) ServeHTTP added in v0.0.1

func (h *ListIndexHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)

type ListPIndexHandler added in v0.0.1

type ListPIndexHandler struct {
	// contains filtered or unexported fields
}

func NewListPIndexHandler added in v0.0.1

func NewListPIndexHandler(mgr *Manager) *ListPIndexHandler

func (*ListPIndexHandler) ServeHTTP added in v0.0.1

func (h *ListPIndexHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)

type LogGetHandler added in v0.0.1

type LogGetHandler struct {
	// contains filtered or unexported fields
}

func NewLogGetHandler added in v0.0.1

func NewLogGetHandler(mgr *Manager, mr *MsgRing) *LogGetHandler

func (*LogGetHandler) ServeHTTP added in v0.0.1

func (h *LogGetHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)

type Manager

type Manager struct {
	// contains filtered or unexported fields
}

func NewManager

func NewManager(version string, cfg Cfg, uuid string, tags []string,
	container string, weight int, bindHttp, dataDir string, server string,
	meh ManagerEventHandlers) *Manager

func (*Manager) Cfg added in v0.0.1

func (mgr *Manager) Cfg() Cfg

func (*Manager) ClosePIndex added in v0.0.1

func (mgr *Manager) ClosePIndex(pindex *PIndex) error

func (*Manager) CoveringPIndexes added in v0.0.1

func (mgr *Manager) CoveringPIndexes(indexName, indexUUID string,
	wantNode PlanPIndexFilter, wantKind string) (
	localPIndexes []*PIndex, remotePlanPIndexes []*RemotePlanPIndex, err error)

Returns a non-overlapping, disjoint set (or cut) of PIndexes (either local or remote) that cover all the partitions of an index so that the caller can perform scatter/gather queries, etc. Only PlanPIndexes on wanted nodes that pass the wantNode filter will be returned.

TODO: Perhaps need a tighter check around indexUUID, as the current implementation might have a race where old pindexes with a matching (but outdated) indexUUID might be chosen.

TODO: This implementation currently always favors the local node's pindex, but should it? Perhaps a remote node is more up-to-date than the local pindex?

TODO: We should favor the most up-to-date node rather than the first one that we run into here? But, perhaps the most up-to-date node is also the most overloaded? Or, perhaps the planner may be trying to rebalance away the most up-to-date node and hitting it with load just makes the rebalance take longer?

func (*Manager) CreateIndex

func (mgr *Manager) CreateIndex(sourceType, sourceName, sourceUUID, sourceParams,
	indexType, indexName, indexParams string, planParams PlanParams,
	prevIndexUUID string) error

Creates a logical index, which might be comprised of many PIndex objects. A non-"" prevIndexUUID means an update to an existing index.

func (*Manager) CurrentMaps

func (mgr *Manager) CurrentMaps() (map[string]Feed, map[string]*PIndex)

Returns a snapshot copy of the current feeds and pindexes.

func (*Manager) DataDir

func (mgr *Manager) DataDir() string

func (*Manager) DeleteIndex

func (mgr *Manager) DeleteIndex(indexName string) error

Deletes a logical index, which might be comprised of many PIndex objects.

TODO: DeleteIndex should also take index UUID?

func (*Manager) GetIndexDefs added in v0.0.1

func (mgr *Manager) GetIndexDefs(refresh bool) (
	*IndexDefs, map[string]*IndexDef, error)

Returns a read-only snapshot of the IndexDefs, also with the IndexDef's organized by name. Use refresh of true to force a read from Cfg.

func (*Manager) GetPIndex added in v0.0.1

func (mgr *Manager) GetPIndex(pindexName string) *PIndex

func (*Manager) GetPlanPIndexes added in v0.0.1

func (mgr *Manager) GetPlanPIndexes(refresh bool) (
	*PlanPIndexes, map[string][]*PlanPIndex, error)

Returns a read-only snapshot of the PlanPIndexes, also with the PlanPIndex's organized by IndexName. Use refresh of true to force a read from Cfg.

func (*Manager) IndexControl added in v0.0.1

func (mgr *Manager) IndexControl(indexName, indexUUID, readOp, writeOp,
	planFreezeOp string) error

IndexControl is used to change runtime properties of an index.

func (*Manager) JanitorKick

func (mgr *Manager) JanitorKick(msg string)

JanitorKick synchronously kicks the manager's janitor, if any.

func (*Manager) JanitorLoop

func (mgr *Manager) JanitorLoop()

JanitorLoop is the main loop for the janitor.

func (*Manager) JanitorNOOP added in v0.0.1

func (mgr *Manager) JanitorNOOP(msg string)

JanitorNOOP sends a synchronous NOOP request to the manager's janitor, if any.

func (*Manager) JanitorOnce

func (mgr *Manager) JanitorOnce(reason string) error

func (*Manager) Kick added in v0.0.1

func (mgr *Manager) Kick(msg string)

func (*Manager) LoadDataDir

func (mgr *Manager) LoadDataDir() error

Walk the data dir and register pindexes.

func (*Manager) PIndexPath

func (mgr *Manager) PIndexPath(pindexName string) string

func (*Manager) ParsePIndexPath

func (mgr *Manager) ParsePIndexPath(pindexPath string) (string, bool)

func (*Manager) PlannerKick

func (mgr *Manager) PlannerKick(msg string)

PlannerKick synchronously kicks the manager's planner, if any.

func (*Manager) PlannerLoop

func (mgr *Manager) PlannerLoop()

PlannerLoop is the main loop for the planner.

func (*Manager) PlannerNOOP added in v0.0.1

func (mgr *Manager) PlannerNOOP(msg string)

PlannerNOOP sends a synchronous NOOP request to the manager's planner, if any.

func (*Manager) PlannerOnce

func (mgr *Manager) PlannerOnce(reason string) (bool, error)

func (*Manager) RemoveNodeDef added in v0.0.1

func (mgr *Manager) RemoveNodeDef(kind string) error

func (*Manager) RemovePIndex added in v0.0.1

func (mgr *Manager) RemovePIndex(pindex *PIndex) error

func (*Manager) SaveNodeDef

func (mgr *Manager) SaveNodeDef(kind string, force bool) error

func (*Manager) Start

func (mgr *Manager) Start(register string) error

func (*Manager) StartRegister added in v0.0.1

func (mgr *Manager) StartRegister(register string) error

func (*Manager) UUID added in v0.0.1

func (mgr *Manager) UUID() string

type ManagerEventHandlers

type ManagerEventHandlers interface {
	OnRegisterPIndex(pindex *PIndex)
	OnUnregisterPIndex(pindex *PIndex)
}

type ManagerKickHandler added in v0.0.1

type ManagerKickHandler struct {
	// contains filtered or unexported fields
}

func NewManagerKickHandler added in v0.0.1

func NewManagerKickHandler(mgr *Manager) *ManagerKickHandler

func (*ManagerKickHandler) ServeHTTP added in v0.0.1

func (h *ManagerKickHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)

type ManagerMetaHandler added in v0.0.1

type ManagerMetaHandler struct {
	// contains filtered or unexported fields
}

func NewManagerMetaHandler added in v0.0.1

func NewManagerMetaHandler(mgr *Manager,
	meta map[string]RESTMeta) *ManagerMetaHandler

func (*ManagerMetaHandler) ServeHTTP added in v0.0.1

func (h *ManagerMetaHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)

type ManagerStats added in v0.0.1

type ManagerStats struct {
	TotKick uint64

	TotSaveNodeDef        uint64
	TotSaveNodeDefGetErr  uint64
	TotSaveNodeDefSetErr  uint64
	TotSaveNodeDefUUIDErr uint64
	TotSaveNodeDefOk      uint64

	TotCreateIndex    uint64
	TotCreateIndexOk  uint64
	TotDeleteIndex    uint64
	TotDeleteIndexOk  uint64
	TotIndexControl   uint64
	TotIndexControlOk uint64

	TotPlannerNOOP              uint64
	TotPlannerNOOPOk            uint64
	TotPlannerKick              uint64
	TotPlannerKickStart         uint64
	TotPlannerKickChanged       uint64
	TotPlannerKickErr           uint64
	TotPlannerKickOk            uint64
	TotPlannerUnknownErr        uint64
	TotPlannerSubscriptionEvent uint64

	TotJanitorNOOP              uint64
	TotJanitorNOOPOk            uint64
	TotJanitorKick              uint64
	TotJanitorKickStart         uint64
	TotJanitorKickErr           uint64
	TotJanitorKickOk            uint64
	TotJanitorClosePIndex       uint64
	TotJanitorRemovePIndex      uint64
	TotJanitorUnknownErr        uint64
	TotJanitorSubscriptionEvent uint64
}

func (*ManagerStats) AtomicCopyTo added in v0.0.1

func (s *ManagerStats) AtomicCopyTo(r *ManagerStats)

AtomicCopyTo copies metrics from s to r (from source to result).

type MetaDesc added in v0.0.1

type MetaDesc struct {
	Description     string            `json:"description"`
	StartSample     interface{}       `json:"startSample"`
	StartSampleDocs map[string]string `json:"startSampleDocs"`
}

type MetaDescIndex added in v0.0.3

type MetaDescIndex struct {
	MetaDesc

	QueryHelp string `json:"queryHelp"`
}

type MetaDescSource added in v0.0.3

type MetaDescSource MetaDesc

type MsgRing added in v0.0.1

type MsgRing struct {
	Next int      `json:"next"`
	Msgs [][]byte `json:"msgs"`
	// contains filtered or unexported fields
}

func NewMsgRing added in v0.0.1

func NewMsgRing(inner io.Writer, ringSize int) (*MsgRing, error)

func (*MsgRing) Messages added in v0.0.1

func (m *MsgRing) Messages() [][]byte

func (*MsgRing) Write added in v0.0.1

func (m *MsgRing) Write(p []byte) (n int, err error)

Implements the io.Writer interface.
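
A minimal sketch of capturing recent log output in a ring (assuming, as the inner parameter suggests, that writes are also passed through to it; the log and os imports are assumed):

mr, err := cbft.NewMsgRing(os.Stderr, 100) // retain the last 100 messages
if err != nil {
	log.Fatal(err)
}
log.SetOutput(mr) // MsgRing is an io.Writer, so it can back the standard logger
// mr.Messages() later returns the retained lines, e.g. for a diagnostics handler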

type NILFeed added in v0.0.1

type NILFeed struct {
	// contains filtered or unexported fields
}

A NILFeed never feeds any data to its dests. It's useful for testing and for pindexes that are actually primary data sources.

func NewNILFeed added in v0.0.1

func NewNILFeed(name, indexName string, dests map[string]Dest) *NILFeed

func (*NILFeed) Close added in v0.0.1

func (t *NILFeed) Close() error

func (*NILFeed) Dests added in v0.0.1

func (t *NILFeed) Dests() map[string]Dest

func (*NILFeed) IndexName added in v0.0.1

func (t *NILFeed) IndexName() string

func (*NILFeed) Name added in v0.0.1

func (t *NILFeed) Name() string

func (*NILFeed) Start added in v0.0.1

func (t *NILFeed) Start() error

func (*NILFeed) Stats added in v0.0.1

func (t *NILFeed) Stats(w io.Writer) error

type NodeDef

type NodeDef struct {
	HostPort    string   `json:"hostPort"`
	UUID        string   `json:"uuid"`
	ImplVersion string   `json:"implVersion"` // See VERSION.
	Tags        []string `json:"tags"`
	Container   string   `json:"container"`
	Weight      int      `json:"weight"`
}

type NodeDefs

type NodeDefs struct {
	// NodeDefs.UUID changes whenever any child NodeDef changes.
	UUID        string              `json:"uuid"`
	NodeDefs    map[string]*NodeDef `json:"nodeDefs"`    // Key is NodeDef.HostPort.
	ImplVersion string              `json:"implVersion"` // See VERSION.
}

func CfgGetNodeDefs

func CfgGetNodeDefs(cfg Cfg, kind string) (*NodeDefs, uint64, error)

func NewNodeDefs

func NewNodeDefs(version string) *NodeDefs

func PlannerGetNodeDefs added in v0.0.1

func PlannerGetNodeDefs(cfg Cfg, version, uuid, bindHttp string) (
	*NodeDefs, error)

type NodePlanParam added in v0.0.1

type NodePlanParam struct {
	CanRead  bool `json:"canRead"`
	CanWrite bool `json:"canWrite"`
}

func GetNodePlanParam added in v0.0.1

func GetNodePlanParam(nodePlanParams map[string]map[string]*NodePlanParam,
	nodeUUID, indexDefName, planPIndexName string) *NodePlanParam

type PIndex

type PIndex struct {
	Name             string     `json:"name"`
	UUID             string     `json:"uuid"`
	IndexType        string     `json:"indexType"`
	IndexName        string     `json:"indexName"`
	IndexUUID        string     `json:"indexUUID"`
	IndexParams      string     `json:"indexParams"`
	SourceType       string     `json:"sourceType"`
	SourceName       string     `json:"sourceName"`
	SourceUUID       string     `json:"sourceUUID"`
	SourceParams     string     `json:"sourceParams"`
	SourcePartitions string     `json:"sourcePartitions"`
	Path             string     `json:"-"` // Transient, not persisted.
	Impl             PIndexImpl `json:"-"` // Transient, not persisted.
	Dest             Dest       `json:"-"` // Transient, not persisted.
	// contains filtered or unexported fields
}

func NewPIndex

func NewPIndex(mgr *Manager, name, uuid,
	indexType, indexName, indexUUID, indexParams,
	sourceType, sourceName, sourceUUID, sourceParams, sourcePartitions string,
	path string) (*PIndex, error)

func OpenPIndex

func OpenPIndex(mgr *Manager, path string) (*PIndex, error)

NOTE: The path argument must be a directory.

func (*PIndex) Close added in v0.0.1

func (p *PIndex) Close(remove bool) error

type PIndexImpl added in v0.0.1

type PIndexImpl interface{}

type PIndexImplType added in v0.0.1

type PIndexImplType struct {
	Validate func(indexType, indexName, indexParams string) error

	New func(indexType, indexParams, path string, restart func()) (
		PIndexImpl, Dest, error)

	Open func(indexType, path string, restart func()) (
		PIndexImpl, Dest, error)

	Count func(mgr *Manager, indexName, indexUUID string) (
		uint64, error)

	Query func(mgr *Manager, indexName, indexUUID string,
		req []byte, res io.Writer) error

	Description string

	// A prototype instance of indexParams that is usable for
	// Validate() and New().
	StartSample interface{}

	QueryHelp string
}

func PIndexImplTypeForIndex added in v0.0.1

func PIndexImplTypeForIndex(cfg Cfg, indexName string) (
	*PIndexImplType, error)
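A sketch of the PIndexImplType extension point. The hook bodies below are hypothetical no-ops; a real index type would create or open its storage and return a working PIndexImpl and Dest, and would register the type so the Manager can find it (registration not shown here):

package main

import (
	"fmt"
	"io"

	"github.com/couchbaselabs/cbft"
)

// noopIndexType fills in every PIndexImplType hook with a placeholder.
var noopIndexType = &cbft.PIndexImplType{
	Validate: func(indexType, indexName, indexParams string) error {
		return nil // accept any index params in this sketch.
	},
	New: func(indexType, indexParams, path string, restart func()) (
		cbft.PIndexImpl, cbft.Dest, error) {
		return nil, nil, fmt.Errorf("noop: New not implemented")
	},
	Open: func(indexType, path string, restart func()) (
		cbft.PIndexImpl, cbft.Dest, error) {
		return nil, nil, fmt.Errorf("noop: Open not implemented")
	},
	Count: func(mgr *cbft.Manager, indexName, indexUUID string) (uint64, error) {
		return 0, nil
	},
	Query: func(mgr *cbft.Manager, indexName, indexUUID string,
		req []byte, res io.Writer) error {
		return fmt.Errorf("noop: Query not implemented")
	},
	Description: "noop - hypothetical example index type",
	StartSample: map[string]interface{}{},
	QueryHelp:   "no queries supported",
}

func main() {
	fmt.Println(noopIndexType.Description)
}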

type PIndexStoreStats added in v0.0.1

type PIndexStoreStats struct {
	TimerBatchStore metrics.Timer
	Errors          *list.List // Capped list of string (json).
}

func (*PIndexStoreStats) WriteJSON added in v0.0.1

func (d *PIndexStoreStats) WriteJSON(w io.Writer)

type PlanPIndex

type PlanPIndex struct {
	Name             string `json:"name"` // Stable & unique cluster wide.
	UUID             string `json:"uuid"`
	IndexType        string `json:"indexType"`   // See IndexDef.Type.
	IndexName        string `json:"indexName"`   // See IndexDef.Name.
	IndexUUID        string `json:"indexUUID"`   // See IndexDef.UUID.
	IndexParams      string `json:"indexParams"` // See IndexDef.Params.
	SourceType       string `json:"sourceType"`
	SourceName       string `json:"sourceName"`
	SourceUUID       string `json:"sourceUUID"`
	SourceParams     string `json:"sourceParams"` // Optional connection info.
	SourcePartitions string `json:"sourcePartitions"`

	Nodes map[string]*PlanPIndexNode `json:"nodes"` // Keyed by NodeDef.UUID.
}

type PlanPIndexFilter added in v0.0.3

type PlanPIndexFilter func(*PlanPIndexNode) bool

type PlanPIndexNode added in v0.0.1

type PlanPIndexNode struct {
	CanRead  bool `json:"canRead"`
	CanWrite bool `json:"canWrite"`
	Priority int  `json:"priority"`
}

type PlanPIndexNodeRef added in v0.0.1

type PlanPIndexNodeRef struct {
	UUID string
	Node *PlanPIndexNode
}

type PlanPIndexNodeRefs added in v0.0.1

type PlanPIndexNodeRefs []*PlanPIndexNodeRef

func (PlanPIndexNodeRefs) Len added in v0.0.1

func (pms PlanPIndexNodeRefs) Len() int

func (PlanPIndexNodeRefs) Less added in v0.0.1

func (pms PlanPIndexNodeRefs) Less(i, j int) bool

func (PlanPIndexNodeRefs) Swap added in v0.0.1

func (pms PlanPIndexNodeRefs) Swap(i, j int)

type PlanPIndexes

type PlanPIndexes struct {
	// PlanPIndexes.UUID changes whenever any child PlanPIndex changes.
	UUID         string                 `json:"uuid"`
	PlanPIndexes map[string]*PlanPIndex `json:"planPIndexes"` // Key is PlanPIndex.Name.
	ImplVersion  string                 `json:"implVersion"`  // See VERSION.
	Warnings     map[string][]string    `json:"warnings"`     // Key is IndexDef.Name.
}

func CalcPlan

func CalcPlan(indexDefs *IndexDefs, nodeDefs *NodeDefs,
	planPIndexesPrev *PlanPIndexes, version, server string) (
	*PlanPIndexes, error)

CalcPlan splits logical indexes into PIndexes and assigns those PIndexes to nodes.
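A minimal sketch of the call shape, using empty index and node definitions (a real caller loads both from the Cfg); with nothing to plan, the result is an empty or nil plan:

package main

import (
	"fmt"
	"log"

	"github.com/couchbaselabs/cbft"
)

func main() {
	indexDefs := &cbft.IndexDefs{}             // empty; a real caller loads index defs from the Cfg.
	nodeDefs := cbft.NewNodeDefs(cbft.VERSION) // empty; normally from CfgGetNodeDefs.

	planPIndexes, err := cbft.CalcPlan(indexDefs, nodeDefs, nil,
		cbft.VERSION, "http://localhost:8091")
	if err != nil {
		log.Fatal(err)
	}
	if planPIndexes != nil {
		fmt.Printf("planned %d pindexes\n", len(planPIndexes.PlanPIndexes))
	}
}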

func CfgGetPlanPIndexes

func CfgGetPlanPIndexes(cfg Cfg) (*PlanPIndexes, uint64, error)

func NewPlanPIndexes

func NewPlanPIndexes(version string) *PlanPIndexes

func PlannerGetPlanPIndexes added in v0.0.1

func PlannerGetPlanPIndexes(cfg Cfg, version string) (*PlanPIndexes, uint64, error)

type PlanParams added in v0.0.1

type PlanParams struct {
	// MaxPartitionsPerPIndex controls the maximum number of source
	// partitions the planner can assign to or clump into a PIndex (or
	// index partition).
	MaxPartitionsPerPIndex int `json:"maxPartitionsPerPIndex"`

	// NumReplicas controls the number of replicas for a PIndex, over
	// the first copy.  The first copy is not counted as a replica.
	// For example, a NumReplicas setting of 2 means there should be a
	// primary and 2 replicas... so 3 copies in total.  A NumReplicas
	// of 0 means just the first, primary copy only.
	NumReplicas int `json:"numReplicas"`

	// HierarchyRules defines the policy the planner should follow
	// when assigning PIndexes to nodes, especially for replica
	// placement.  Through the HierarchyRules, a user can specify, for
	// example, that the first replica should not be on the same rack
	// or zone as the first copy.  Some examples:
	// Try to put the first replica on the same rack...
	// {"replica":[{"includeLevel":1,"excludeLevel":0}]}
	// Try to put the first replica on a different rack...
	// {"replica":[{"includeLevel":2,"excludeLevel":1}]}
	HierarchyRules blance.HierarchyRules `json:"hierarchyRules"`

	// NodePlanParams allows users to specify per-node input to the
	// planner, such as whether PIndexes assigned to different nodes
	// can be readable or writable.  Keyed by node UUID.  Value is
	// keyed by planPIndex.Name or indexDef.Name.  The empty string
	// ("") is used to represent any node UUID and/or any planPIndex
	// and/or any indexDef.
	NodePlanParams map[string]map[string]*NodePlanParam `json:"nodePlanParams"`

	// PlanFrozen means the planner should not change the previous
	// plan for an index, even as nodes join or leave, and even if
	// there was no previous plan.  Defaults to false (allow
	// re-planning).
	PlanFrozen bool `json:"planFrozen"`
}
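For example, a plan that clumps up to 4 source partitions per PIndex, keeps one replica, and marks one (hypothetical) node read-only could be built and serialized like this (HierarchyRules omitted for brevity):

package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/couchbaselabs/cbft"
)

func main() {
	planParams := cbft.PlanParams{
		MaxPartitionsPerPIndex: 4, // clump up to 4 source partitions per PIndex.
		NumReplicas:            1, // primary + 1 replica = 2 copies in total.
		NodePlanParams: map[string]map[string]*cbft.NodePlanParam{
			// "node-uuid-0" is a hypothetical node UUID; the inner "" key
			// means "any index / any planPIndex" on that node.
			"node-uuid-0": map[string]*cbft.NodePlanParam{
				"": &cbft.NodePlanParam{CanRead: true, CanWrite: false},
			},
		},
		PlanFrozen: false, // allow re-planning as nodes join or leave.
	}

	b, err := json.MarshalIndent(planParams, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(b))
}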

type PrimaryFeed added in v0.0.1

type PrimaryFeed struct {
	// contains filtered or unexported fields
}

A PrimaryFeed implements both the Feed and Dest interfaces, for chainability, and is also useful for testing.

func NewPrimaryFeed added in v0.0.1

func NewPrimaryFeed(name, indexName string, pf DestPartitionFunc,
	dests map[string]Dest) *PrimaryFeed
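A minimal construction sketch; the partition function and dests map are left nil/empty here, which is assumed to be acceptable only because this inert feed never receives any data:

package main

import (
	"log"

	"github.com/couchbaselabs/cbft"
)

func main() {
	var pf cbft.DestPartitionFunc // nil routing func; fine only if no data arrives.
	feed := cbft.NewPrimaryFeed("feed-primary", "default", pf,
		map[string]cbft.Dest{})

	if err := feed.Start(); err != nil {
		log.Fatal(err)
	}
	if err := feed.Close(); err != nil {
		log.Fatal(err)
	}
}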

func (*PrimaryFeed) Close added in v0.0.1

func (t *PrimaryFeed) Close() error

func (*PrimaryFeed) ConsistencyWait added in v0.0.1

func (t *PrimaryFeed) ConsistencyWait(partition, partitionUUID string,
	consistencyLevel string,
	consistencySeq uint64,
	cancelCh <-chan bool) error

func (*PrimaryFeed) Count added in v0.0.1

func (t *PrimaryFeed) Count(pindex *PIndex, cancelCh <-chan bool) (
	uint64, error)

func (*PrimaryFeed) DataDelete added in v0.0.3

func (t *PrimaryFeed) DataDelete(partition string,
	key []byte, seq uint64) error

func (*PrimaryFeed) DataUpdate added in v0.0.3

func (t *PrimaryFeed) DataUpdate(partition string,
	key []byte, seq uint64, val []byte) error

func (*PrimaryFeed) Dests added in v0.0.1

func (t *PrimaryFeed) Dests() map[string]Dest

func (*PrimaryFeed) IndexName added in v0.0.1

func (t *PrimaryFeed) IndexName() string

func (*PrimaryFeed) Name added in v0.0.1

func (t *PrimaryFeed) Name() string

func (*PrimaryFeed) OpaqueGet added in v0.0.3

func (t *PrimaryFeed) OpaqueGet(partition string) (
	value []byte, lastSeq uint64, err error)

func (*PrimaryFeed) OpaqueSet added in v0.0.3

func (t *PrimaryFeed) OpaqueSet(partition string,
	value []byte) error

func (*PrimaryFeed) Query added in v0.0.1

func (t *PrimaryFeed) Query(pindex *PIndex, req []byte, w io.Writer,
	cancelCh <-chan bool) error

func (*PrimaryFeed) Rollback added in v0.0.1

func (t *PrimaryFeed) Rollback(partition string,
	rollbackSeq uint64) error

func (*PrimaryFeed) SnapshotStart added in v0.0.3

func (t *PrimaryFeed) SnapshotStart(partition string,
	snapStart, snapEnd uint64) error

func (*PrimaryFeed) Start added in v0.0.1

func (t *PrimaryFeed) Start() error

func (*PrimaryFeed) Stats added in v0.0.1

func (t *PrimaryFeed) Stats(w io.Writer) error

type QueryHandler added in v0.0.1

type QueryHandler struct {
	// contains filtered or unexported fields
}

func NewQueryHandler added in v0.0.1

func NewQueryHandler(mgr *Manager) *QueryHandler

func (*QueryHandler) RESTOpts added in v0.0.1

func (h *QueryHandler) RESTOpts(opts map[string]string)

func (*QueryHandler) ServeHTTP added in v0.0.1

func (h *QueryHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)

type QueryPIndexHandler added in v0.0.1

type QueryPIndexHandler struct {
	// contains filtered or unexported fields
}

func NewQueryPIndexHandler added in v0.0.1

func NewQueryPIndexHandler(mgr *Manager) *QueryPIndexHandler

func (*QueryPIndexHandler) ServeHTTP added in v0.0.1

func (h *QueryPIndexHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)

type RESTMeta added in v0.0.1

type RESTMeta struct {
	Path   string
	Method string
	Opts   map[string]string
}

type RESTOpts added in v0.0.1

type RESTOpts interface {
	RESTOpts(map[string]string)
}

type RemotePlanPIndex added in v0.0.1

type RemotePlanPIndex struct {
	PlanPIndex *PlanPIndex
	NodeDef    *NodeDef
}

type RuntimeGetHandler added in v0.0.1

type RuntimeGetHandler struct {
	// contains filtered or unexported fields
}

func NewRuntimeGetHandler added in v0.0.1

func NewRuntimeGetHandler(versionMain string, mgr *Manager) *RuntimeGetHandler

func (*RuntimeGetHandler) ServeHTTP added in v0.0.1

func (h *RuntimeGetHandler) ServeHTTP(w http.ResponseWriter, r *http.Request)

type ScanCursor added in v0.0.1

type ScanCursor interface {
	Done() bool
	Key() []byte
	Val() []byte
	Next() bool
}

type ScanCursors added in v0.0.1

type ScanCursors []ScanCursor

ScanCursors implements heap.Interface (from container/heap) for easy merging.

func (ScanCursors) Len added in v0.0.1

func (pq ScanCursors) Len() int

func (ScanCursors) Less added in v0.0.1

func (pq ScanCursors) Less(i, j int) bool

func (*ScanCursors) Pop added in v0.0.1

func (pq *ScanCursors) Pop() interface{}

func (*ScanCursors) Push added in v0.0.1

func (pq *ScanCursors) Push(x interface{})

func (ScanCursors) Swap added in v0.0.1

func (pq ScanCursors) Swap(i, j int)
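A merge sketch using container/heap, assuming Less orders cursors by their current Key; memScanCursor is a hypothetical in-memory ScanCursor used only for illustration:

package main

import (
	"container/heap"
	"fmt"

	"github.com/couchbaselabs/cbft"
)

// memScanCursor is a hypothetical ScanCursor over in-memory key/val pairs.
type memScanCursor struct {
	keys, vals [][]byte
	pos        int
}

func (c *memScanCursor) Done() bool  { return c.pos >= len(c.keys) }
func (c *memScanCursor) Key() []byte { return c.keys[c.pos] }
func (c *memScanCursor) Val() []byte { return c.vals[c.pos] }
func (c *memScanCursor) Next() bool  { c.pos++; return !c.Done() }

func main() {
	cursors := cbft.ScanCursors{
		&memScanCursor{keys: [][]byte{[]byte("a"), []byte("c")}, vals: [][]byte{nil, nil}},
		&memScanCursor{keys: [][]byte{[]byte("b")}, vals: [][]byte{nil}},
	}
	heap.Init(&cursors)

	for len(cursors) > 0 {
		top := cursors[0] // cursor with the smallest current key.
		fmt.Printf("%s\n", top.Key())
		if top.Next() {
			heap.Fix(&cursors, 0) // re-sift the advanced cursor.
		} else {
			heap.Pop(&cursors) // cursor exhausted; drop it from the heap.
		}
	}
}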

type StatsHandler added in v0.0.1

type StatsHandler struct {
	// contains filtered or unexported fields
}

func NewStatsHandler added in v0.0.1

func NewStatsHandler(mgr *Manager) *StatsHandler

func (*StatsHandler) ServeHTTP added in v0.0.1

func (h *StatsHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)

type TAPFeed

type TAPFeed struct {
	// contains filtered or unexported fields
}

A TAPFeed uses the TAP protocol to dump data from a Couchbase data source.

func NewTAPFeed

func NewTAPFeed(name, indexName, url, poolName, bucketName, bucketUUID,
	paramsStr string, pf DestPartitionFunc, dests map[string]Dest,
	disable bool) (*TAPFeed, error)

func (*TAPFeed) Close

func (t *TAPFeed) Close() error

func (*TAPFeed) Dests added in v0.0.1

func (t *TAPFeed) Dests() map[string]Dest

func (*TAPFeed) IndexName added in v0.0.1

func (t *TAPFeed) IndexName() string

func (*TAPFeed) Name

func (t *TAPFeed) Name() string

func (*TAPFeed) Start

func (t *TAPFeed) Start() error

func (*TAPFeed) Stats added in v0.0.1

func (t *TAPFeed) Stats(w io.Writer) error

type TAPFeedParams added in v0.0.1

type TAPFeedParams struct {
	BackoffFactor float32 `json:"backoffFactor"`
	SleepInitMS   int     `json:"sleepInitMS"`
	SleepMaxMS    int     `json:"sleepMaxMS"`
}
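These fields mirror the package's FEED_BACKOFF_FACTOR, FEED_SLEEP_INIT_MS, and FEED_SLEEP_MAX_MS defaults, and are presumably parsed from a feed's params string (see NewTAPFeed's paramsStr). A sketch of producing that JSON:

package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/couchbaselabs/cbft"
)

func main() {
	// Retry sleeps start at 100ms, grow by a factor of 1.5, and cap at 10s.
	params := cbft.TAPFeedParams{
		BackoffFactor: 1.5,
		SleepInitMS:   100,
		SleepMaxMS:    10000,
	}

	b, err := json.Marshal(params)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(b)) // {"backoffFactor":1.5,"sleepInitMS":100,"sleepMaxMS":10000}
}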

type WorkReq

type WorkReq struct {
	// contains filtered or unexported fields
}

Directories

Path Synopsis
cmd
