Documentation ¶
Overview ¶
Package kv provides a key-value API to an underlying cockroach datastore. Cockroach itself provides a single, monolithic, sorted key-value map, distributed over multiple nodes. Each node holds a set of key ranges. Package kv translates between the monolithic, logical map which Cockroach clients experience and the physically distributed key ranges which comprise the whole.
Package kv implements the logic necessary to locate appropriate nodes based on the keys being read or written. Requests that span a range of keys may require multiple RPCs to be sent out.
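The following is a toy illustration, not code from this package: it sketches how routing a key reduces to finding the range descriptor whose span contains the key, and why a scan that crosses a split key implicates more than one range (and hence more than one RPC). All names in it are hypothetical.

    package main

    import (
        "fmt"
        "sort"
    )

    // rangeDesc is a stand-in for a range descriptor: a half-open key
    // span [startKey, endKey) plus a node holding a replica.
    type rangeDesc struct {
        startKey, endKey string
        node             int
    }

    // rangeFor returns the descriptor whose span contains key, assuming
    // ranges is sorted by endKey and covers the whole keyspace.
    func rangeFor(ranges []rangeDesc, key string) rangeDesc {
        i := sort.Search(len(ranges), func(i int) bool { return key < ranges[i].endKey })
        return ranges[i]
    }

    func main() {
        ranges := []rangeDesc{
            {startKey: "", endKey: "b", node: 1},
            {startKey: "b", endKey: "m", node: 2},
            {startKey: "m", endKey: "\xff", node: 3},
        }
        // A scan over ["a", "c") crosses the split at "b", so it touches
        // two ranges and therefore requires at least two RPCs.
        fmt.Println(rangeFor(ranges, "a").node) // 1
        fmt.Println(rangeFor(ranges, "b").node) // 2
    }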
Index ¶
- type DBServer
- type DistSender
- type DistSenderContext
- type LocalSender
- func (ls *LocalSender) AddStore(s *storage.Store)
- func (ls *LocalSender) GetStore(storeID proto.StoreID) (*storage.Store, error)
- func (ls *LocalSender) GetStoreCount() int
- func (ls *LocalSender) GetStoreIDs() []proto.StoreID
- func (ls *LocalSender) HasStore(storeID proto.StoreID) bool
- func (ls *LocalSender) RemoveStore(s *storage.Store)
- func (ls *LocalSender) Send(ctx context.Context, call proto.Call)
- func (ls *LocalSender) SendBatch(ctx context.Context, ba proto.BatchRequest) (*proto.BatchResponse, error)
- func (ls *LocalSender) VisitStores(visitor func(s *storage.Store) error) error
- type LocalTestCluster
- type TxnCoordSender
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type DBServer ¶
type DBServer struct {
// contains filtered or unexported fields
}
A DBServer provides an HTTP server endpoint serving the key-value API. It accepts either JSON or serialized protobuf content types.
func NewDBServer ¶
NewDBServer allocates and returns a new DBServer.
type DistSender ¶
type DistSender struct {
// contains filtered or unexported fields
}
A DistSender provides methods to access Cockroach's monolithic, distributed key value store. Each method invocation triggers a lookup or lookups to find replica metadata for implicated key ranges. RPCs are sent to one or more of the replicas to satisfy the method invocation.
func NewDistSender ¶
func NewDistSender(ctx *DistSenderContext, gossip *gossip.Gossip) *DistSender
NewDistSender returns a batch.Sender instance which connects to the Cockroach cluster via the supplied gossip instance. Supplying a DistSenderContext or any of its fields is optional; sane defaults are used for omitted values.
func (*DistSender) SendBatch ¶
func (ds *DistSender) SendBatch(ctx context.Context, ba proto.BatchRequest) (*proto.BatchResponse, error)
SendBatch implements the batch.Sender interface. It subdivides the Batch into batches admissible for sending (preventing certain illegal mixtures of requests), executes each individual part (which may span multiple ranges), and recombines the response.
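A minimal sketch of constructing a DistSender and sending a batch, assuming clock (*hlc.Clock), g (*gossip.Gossip), and ctx (context.Context) already exist; the GetRequest, RequestHeader, and BatchRequest.Add used below are assumptions about the proto package, not confirmed by this page. Omitted DistSenderContext fields fall back to the documented defaults.

    ds := kv.NewDistSender(&kv.DistSenderContext{Clock: clock}, g)

    var ba proto.BatchRequest
    // Assumed proto API: add a point read to the batch. A batch whose
    // keys span multiple ranges is subdivided and its responses are
    // recombined by SendBatch.
    ba.Add(&proto.GetRequest{RequestHeader: proto.RequestHeader{Key: proto.Key("a")}})
    br, err := ds.SendBatch(ctx, ba)
    if err != nil {
        // handle range lookup or RPC errors
    }
    _ = br // the recombined BatchResponse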
type DistSenderContext ¶
type DistSenderContext struct {
    Clock                    *hlc.Clock
    RangeDescriptorCacheSize int32
    // RangeLookupMaxRanges sets how many ranges will be prefetched into the
    // range descriptor cache when dispatching a range lookup request.
    RangeLookupMaxRanges int32
    LeaderCacheSize      int32
    RPCRetryOptions      *retry.Options
    // The RPC dispatcher. Defaults to rpc.Send but can be changed here
    // for testing purposes.
    RPCSend           rpcSendFn
    RangeDescriptorDB rangeDescriptorDB
    // contains filtered or unexported fields
}
DistSenderContext holds auxiliary objects that can be passed to NewDistSender.
type LocalSender ¶
type LocalSender struct {
// contains filtered or unexported fields
}
A LocalSender provides methods to access a collection of local stores.
func NewLocalSender ¶
func NewLocalSender() *LocalSender
NewLocalSender returns a local-only sender which directly accesses a collection of stores.
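A brief usage sketch restricted to the methods documented on this page; store (*storage.Store) and storeID (proto.StoreID) are assumed to exist already.

    ls := kv.NewLocalSender()
    ls.AddStore(store)
    fmt.Println(ls.GetStoreCount()) // 1
    if s, err := ls.GetStore(storeID); err == nil {
        _ = s // the *storage.Store registered above
    }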
func (*LocalSender) AddStore ¶
func (ls *LocalSender) AddStore(s *storage.Store)
AddStore adds the specified store to the store map.
func (*LocalSender) GetStore ¶
func (ls *LocalSender) GetStore(storeID proto.StoreID) (*storage.Store, error)
GetStore looks up the store by store ID. Returns an error if not found.
func (*LocalSender) GetStoreCount ¶
func (ls *LocalSender) GetStoreCount() int
GetStoreCount returns the number of stores this node is exporting.
func (*LocalSender) GetStoreIDs ¶
func (ls *LocalSender) GetStoreIDs() []proto.StoreID
GetStoreIDs returns all current store IDs in random order.
func (*LocalSender) HasStore ¶
func (ls *LocalSender) HasStore(storeID proto.StoreID) bool
HasStore returns true if the specified store is owned by this LocalSender.
func (*LocalSender) RemoveStore ¶
func (ls *LocalSender) RemoveStore(s *storage.Store)
RemoveStore removes the specified store from the store map.
func (*LocalSender) Send ¶
func (ls *LocalSender) Send(ctx context.Context, call proto.Call)
Send implements the client.Sender interface. The store is looked up from the store map if specified by header.Replica; otherwise, the command is being executed locally, and the replica is determined via lookup through each store's LookupRange method.
func (*LocalSender) SendBatch ¶
func (ls *LocalSender) SendBatch(ctx context.Context, ba proto.BatchRequest) (*proto.BatchResponse, error)
SendBatch implements batch.Sender.
func (*LocalSender) VisitStores ¶
func (ls *LocalSender) VisitStores(visitor func(s *storage.Store) error) error
VisitStores implements a visitor pattern over stores in the storeMap. The specified function is invoked with each store in turn. Stores are visited in a random order.
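For example, a visitor that counts stores (a sketch; the closure always returns nil, since this page doesn't state whether a non-nil error aborts the walk):

    n := 0
    if err := ls.VisitStores(func(s *storage.Store) error {
        n++ // stores are visited in random order
        return nil
    }); err != nil {
        // handle a propagated visitor error
    }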
type LocalTestCluster ¶
type LocalTestCluster struct {
    Manual  *hlc.ManualClock
    Clock   *hlc.Clock
    Gossip  *gossip.Gossip
    Eng     engine.Engine
    Store   *storage.Store
    DB      *client.DB
    Sender  *TxnCoordSender
    Stopper *stop.Stopper
    // contains filtered or unexported fields
}
A LocalTestCluster encapsulates an in-memory instantiation of a cockroach node with a single store using a local sender. Example usage of a LocalTestCluster follows:
s := &kv.LocalTestCluster{}
s.Start(t)
defer s.Stop()
Note that the LocalTestCluster is different from server.TestCluster in that it doesn't use a distributed sender and doesn't start a server node. There is no RPC traffic.
func (*LocalTestCluster) Start ¶
func (ltc *LocalTestCluster) Start(t util.Tester)
Start starts the test cluster by bootstrapping an in-memory store (defaults to a maximum of 50M). No server node or RPC endpoints are started; see the LocalTestCluster description above. Use Stop() to shut down the cluster after the test completes.
type TxnCoordSender ¶
type TxnCoordSender struct {
    sync.Mutex // protects txns and txnStats
    // contains filtered or unexported fields
}
A TxnCoordSender is an implementation of client.Sender which wraps a lower-level Sender (either a LocalSender or a DistSender) to which it sends commands. It acts as a man-in-the-middle, coordinating transaction state for clients. After a transaction is started, the TxnCoordSender starts asynchronously sending heartbeat messages to that transaction's txn record, to keep it live. It also keeps track of each written key or key range over the course of the transaction. When the transaction is committed or aborted, it clears accumulated write intents for the transaction.
func NewTxnCoordSender ¶
func NewTxnCoordSender(wrapped client.BatchSender, clock *hlc.Clock, linearizable bool, tracer *tracer.Tracer, stopper *stop.Stopper) *TxnCoordSender
NewTxnCoordSender creates a new TxnCoordSender for use from a KV distributed DB instance.
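A construction sketch, assuming wrapped (a client.BatchSender, such as a DistSender built as above), clock (*hlc.Clock), trc (*tracer.Tracer), and stopper (*stop.Stopper) already exist; the comment on the linearizable flag is an assumption, as this page doesn't define it.

    // false: skip the extra commit-wait that (by assumption) the
    // linearizable mode would impose on transaction commits.
    tc := kv.NewTxnCoordSender(wrapped, clock, false, trc, stopper)
    _ = tc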
func (*TxnCoordSender) Send ¶
func (tc *TxnCoordSender) Send(ctx context.Context, call proto.Call)
Send implements the client.Sender interface. If the call is part of a transaction, the coordinator will initialize the transaction if it's not nil but has an empty ID. TODO(tschottdorf): remove in favor of SendBatch.
func (*TxnCoordSender) SendBatch ¶
func (tc *TxnCoordSender) SendBatch(ctx context.Context, ba proto.BatchRequest) (*proto.BatchResponse, error)
SendBatch implements the batch.Sender interface. If the request is part of a transaction, the TxnCoordSender adds the transaction to a map of active transactions and begins heartbeating it. Every subsequent request for the same transaction updates the lastUpdate timestamp to prevent live transactions from being considered abandoned and garbage collected. Read/write mutating requests have their key or key range added to the transaction's interval tree of key ranges for eventual cleanup via resolved write intents; they're tagged to an outgoing EndTransaction request, with the receiving replica in charge of resolving them.
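A sketch of the transactional path, assuming ba is a proto.BatchRequest whose header exposes a Txn field for the *proto.Transaction (an assumption about the proto package), with tc and ctx as above:

    // A transaction with an empty ID is treated as new: TxnCoordSender
    // registers it in its map of active transactions and starts
    // heartbeating its txn record.
    ba.Txn = txn
    br, err := tc.SendBatch(ctx, ba)
    if err != nil {
        // handle the error; intent cleanup on commit/abort is
        // described above
    }
    _ = br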