package datas

Overview

Package datas defines and implements the database layer used in Noms.

Constants

const (
	ParentsField = "parents"
	ValueField   = "value"
	MetaField    = "meta"
)

const (
	// NomsVersionHeader is the name of the header that Noms clients and
	// servers must set in every request/response.
	NomsVersionHeader = "x-noms-vers"
)

Variables

var (
	ErrOptimisticLockFailed = errors.New("Optimistic lock failed on database Root update")
	ErrMergeNeeded          = errors.New("Dataset head is not ancestor of commit")
)

var (
	// HandleWriteValue is meant to handle HTTP POST requests to the
	// writeValue/ server endpoint. The payload should be an appropriately-
	// ordered sequence of Chunks to be validated and stored on the server.
	// TODO: Nice comment about what headers it expects/honors, payload
	// format, and error responses.
	HandleWriteValue = createHandler(handleWriteValue, true)

	// HandleGetRefs is meant to handle HTTP POST requests to the getRefs/
	// server endpoint. Given a sequence of Chunk hashes, the server will
	// fetch and return them.
	// TODO: Nice comment about what headers it
	// expects/honors, payload format, and responses.
	HandleGetRefs = createHandler(handleGetRefs, true)

	// HandleGetBlob is a custom endpoint whose sole purpose is to directly
	// fetch the *bytes* contained in a Blob value. It expects a single query
	// param of `h` to be the ref of the Blob.
	// TODO: Support retrieving blob contents via a path.
	HandleGetBlob = createHandler(handleGetBlob, false)

	// HandleHasRefs is meant to handle HTTP POST requests to the hasRefs/
	// server endpoint. Given a sequence of Chunk hashes, the server checks
	// for their presence and returns a list of true/false responses.
	// TODO: Nice comment about what headers it expects/honors, payload
	// format, and responses.
	HandleHasRefs = createHandler(handleHasRefs, true)

	// HandleRootGet is meant to handle HTTP GET requests to the root/ server
	// endpoint. The server returns the hash of the Root as a string.
	// TODO: Nice comment about what headers it expects/honors, payload
	// format, and responses.
	HandleRootGet = createHandler(handleRootGet, true)

	// HandleRootPost is meant to handle HTTP POST requests to the root/
	// server endpoint. This is used to update the Root to point to a new
	// Chunk.
	// TODO: Nice comment about what headers it expects/honors, payload
	// format, and error responses.
	HandleRootPost = createHandler(handleRootPost, true)

	// HandleBaseGet is meant to handle HTTP GET requests to the / server
	// endpoint. This is used to give a friendly message to users.
	// TODO: Nice comment about what headers it expects/honors, payload
	// format, and error responses.
	HandleBaseGet = handleBaseGet

	HandleGraphQL = createHandler(handleGraphQL, false)
)
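
The exported handlers above have the package's Handler signature (see the Handler type below) rather than http.HandlerFunc, so they need a small adapter to be served with net/http. A hedged sketch, assuming a trivial map-backed URLParams implementation (not provided by this package as far as these docs show), an in-memory chunk store, and the usual Noms import paths:

```go
package main

import (
	"net/http"

	"github.com/attic-labs/noms/go/chunks"
	"github.com/attic-labs/noms/go/datas"
)

// mapParams is a hypothetical URLParams implementation used only for this sketch.
type mapParams map[string]string

func (m mapParams) ByName(k string) string { return m[k] }

func main() {
	cs := chunks.NewMemoryStore() // assumed constructor; any chunks.ChunkStore works

	// Adapt the package-level Handler to a plain http.HandlerFunc.
	http.HandleFunc("/root/", func(w http.ResponseWriter, r *http.Request) {
		datas.HandleRootGet(w, r, mapParams{}, cs)
	})

	http.ListenAndServe(":8000", nil)
}
```
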
var DatasetFullRe = regexp.MustCompile("^" + DatasetRe.String() + "$")

DatasetFullRe is a regexp that matches only a target string that is entirely a legal Dataset name.

var DatasetRe = regexp.MustCompile(`[a-zA-Z0-9\-_/]+`)

DatasetRe is a regexp that matches a legal Dataset name anywhere within the target string.
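
The difference between the two is anchoring. A short sketch (not from the original docs) showing how they behave, with IsValidDatasetName presumably applying the same anchored check:

```go
package main

import (
	"fmt"

	"github.com/attic-labs/noms/go/datas" // import path assumed
)

func main() {
	// DatasetRe matches anywhere in the string, so a string containing
	// illegal characters can still produce a match.
	fmt.Println(datas.DatasetRe.MatchString("bad name!")) // true ("bad" matches)

	// DatasetFullRe requires the entire string to be a legal name.
	fmt.Println(datas.DatasetFullRe.MatchString("bad name!")) // false
	fmt.Println(datas.DatasetFullRe.MatchString("foo/bar-1")) // true

	// IsValidDatasetName presumably mirrors the anchored check.
	fmt.Println(datas.IsValidDatasetName("foo/bar-1")) // true
}
```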

Functions

func FindCommonAncestor

func FindCommonAncestor(c1, c2 types.Ref, vr types.ValueReader) (a types.Ref, ok bool)

FindCommonAncestor returns the most recent common ancestor of c1 and c2, if one exists, setting ok to true. If there is no common ancestor, ok is set to false.
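
For example, a hedged sketch of checking whether the heads of two datasets in the same database share an ancestor (say, before attempting a merge); the helper and its imports (datas, types) are assumptions for illustration:

```go
// commonAncestor is an illustrative helper, not part of the package.
func commonAncestor(db datas.Database, id1, id2 string) (types.Ref, bool) {
	r1, ok1 := db.GetDataset(id1).MaybeHeadRef()
	r2, ok2 := db.GetDataset(id2).MaybeHeadRef()
	if !ok1 || !ok2 {
		return types.Ref{}, false
	}
	// Database embeds types.ValueReadWriter, so it satisfies the
	// types.ValueReader parameter.
	return datas.FindCommonAncestor(r1, r2, db)
}
```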

func IsCommitType

func IsCommitType(t *types.Type) bool

func IsRefOfCommitType

func IsRefOfCommitType(t *types.Type) bool

func IsValidDatasetName

func IsValidDatasetName(name string) bool

func NewCommit

func NewCommit(value types.Value, parents types.Set, meta types.Struct) types.Struct

NewCommit creates a new commit object. The type of the Commit is computed from the type of the value, the type of the meta info, and the types of the parents.

For the first commit we get:

```

struct Commit {
  meta: M,
  parents: Set<Ref<Cycle<0>>>,
  value: T,
}

```

As long as we continue to commit values of type T and meta of type M, that type stays the same.

When we later do a commit with value of type U and meta of type N we get:

```

struct Commit {
  meta: N,
  parents: Set<Ref<struct Commit {
    meta: M | N,
    parents: Set<Ref<Cycle<0>>>,
    value: T | U
  }>>,
  value: U,
}

```

Similarly, if we do a commit with a different type for the meta info, the new type gets combined as a union type for the value/meta of the inner commit struct.
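
A hedged sketch of building that first commit with NewCommit; the types-package constructors used here (NewStruct, StructData, NewSet) are assumptions and their signatures vary between Noms versions:

```go
package main

import (
	"github.com/attic-labs/noms/go/datas" // import paths assumed
	"github.com/attic-labs/noms/go/types"
)

func main() {
	value := types.String("first value") // value of type T
	meta := types.NewStruct("Meta", types.StructData{ // meta of type M
		"description": types.String("initial import"),
	})
	parents := types.NewSet() // empty set: this is the first commit

	commit := datas.NewCommit(value, parents, meta)
	// commit has type: struct Commit { meta: M, parents: Set<Ref<Cycle<0>>>, value: T }
	_ = commit
}
```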

func NewHTTPBatchStore

func NewHTTPBatchStore(baseURL, auth string) *httpBatchStore

func Pull

func Pull(srcDB, sinkDB Database, sourceRef, sinkHeadRef types.Ref, concurrency int, progressCh chan PullProgress)

Pull objects that descend from sourceRef from srcDB to sinkDB. sinkHeadRef should point to a Commit (in sinkDB) that's an ancestor of sourceRef. This allows the algorithm to figure out which portions of data are already present in sinkDB and skip copying them.

func PullWithFlush

func PullWithFlush(srcDB, sinkDB Database, sourceRef, sinkHeadRef types.Ref, concurrency int, progressCh chan PullProgress)

PullWithFlush calls Pull and then manually flushes data to sinkDB. This is an unfortunate current necessity. The Flush() can't happen at the end of regular Pull() because that breaks tests that try to ensure we're not reading more data from the sinkDB than expected. Flush() triggers validation, which triggers sinkDB reads, which means that the code can no longer tell which reads were caused by Pull() and which by Flush(). TODO: Get rid of this (BUG 2982)
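
A hedged sketch of pulling a dataset from one database into another while reporting progress. The helper, its concurrency value, and the channel-draining pattern are illustrative assumptions (the docs do not say whether Pull closes the progress channel), and it assumes the sink dataset already has a head that is an ancestor of the source head:

```go
// pullDataset is an illustrative helper, not part of the package.
func pullDataset(srcDB, sinkDB datas.Database, id string) error {
	src, sink := srcDB.GetDataset(id), sinkDB.GetDataset(id)

	srcRef, ok := src.MaybeHeadRef()
	if !ok {
		return fmt.Errorf("dataset %q has no head in the source database", id)
	}
	sinkRef, ok := sink.MaybeHeadRef()
	if !ok {
		return fmt.Errorf("dataset %q has no head in the sink database", id)
	}

	progressCh := make(chan datas.PullProgress)
	go func() {
		for p := range progressCh {
			fmt.Printf("pulled %d of %d chunks (~%d bytes written)\n",
				p.DoneCount, p.KnownCount, p.ApproxWrittenBytes)
		}
	}()

	datas.PullWithFlush(srcDB, sinkDB, srcRef, sinkRef, 4, progressCh)

	// Make the pulled commit the new head of the sink dataset.
	_, err := sinkDB.FastForward(sink, srcRef)
	return err
}
```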

Types

type CommitOptions

type CommitOptions struct {
	// Parents, if provided, are the parent commits of the commit we are
	// creating.
	Parents types.Set

	// Meta is a Struct that describes arbitrary metadata about this Commit,
	// e.g. a timestamp or descriptive text.
	Meta types.Struct

	// Policy will be called to attempt to merge this Commit with the current
	// Head, if this is not a fast-forward. If Policy is nil, no merging will
	// be attempted. Note that because Commit() retries in some cases, Policy
	// might also be called multiple times with different values.
	Policy merge.Policy
}

CommitOptions is used to pass options into Commit.
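
A hedged sketch of committing with explicit options; the helper, the shape of the "Meta" struct, and the types.NewStruct/StructData constructors are assumptions:

```go
// commitWithMeta is an illustrative helper, not part of the package.
func commitWithMeta(db datas.Database, ds datas.Dataset, v types.Value) (datas.Dataset, error) {
	opts := datas.CommitOptions{
		// Arbitrary descriptive metadata attached to the Commit.
		Meta: types.NewStruct("Meta", types.StructData{
			"description": types.String("nightly import"),
		}),
		// Parents and Policy are left as zero values: the current head
		// becomes the sole parent and no automatic merging is attempted.
	}
	// On a conflict the error is ErrMergeNeeded; the returned Dataset is
	// still the newest snapshot, so the caller can inspect it and retry.
	return db.Commit(ds, v, opts)
}
```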

type Database

type Database interface {
	// To implement types.ValueWriter, Database implementations provide
	// WriteValue(). WriteValue() writes v to this Database, though v is not
	// guaranteed to be persistent until after a subsequent Commit(). The
	// return value is the Ref of v.
	types.ValueReadWriter
	io.Closer

	// Datasets returns the root of the database, which is a
	// Map<String, Ref<Commit>> where the String key is a datasetID.
	Datasets() types.Map

	// GetDataset returns a Dataset struct containing the current mapping of
	// datasetID in the above Datasets Map.
	GetDataset(datasetID string) Dataset

	// Commit updates the Commit that ds.ID() points at in this database. All
	// Values that have been written to this Database are guaranteed to be
	// persistent after Commit() returns.
	// The new Commit struct is constructed using v, opts.Parents, and
	// opts.Meta. If opts.Parents is the zero value (types.Set{}) then
	// the current head is used. If opts.Meta is the zero value
	// (types.Struct{}) then a fully initialized empty Struct is passed to
	// NewCommit.
	// The returned Dataset is always the newest snapshot, regardless of
	// success or failure, and Datasets() is updated to match backing storage
	// upon return as well. If the update cannot be performed, e.g., because
	// of a conflict, Commit returns an 'ErrMergeNeeded' error.
	Commit(ds Dataset, v types.Value, opts CommitOptions) (Dataset, error)

	// CommitValue updates the Commit that ds.ID() points at in this database.
	// All Values that have been written to this Database are guaranteed to be
	// persistent after Commit().
	// The new Commit struct is constructed using `v`, and the current Head of
	// `ds` as the lone Parent.
	// The returned Dataset is always the newest snapshot, regardless of
	// success or failure, and Datasets() is updated to match backing storage
	// upon return as well. If the update cannot be performed, e.g., because
	// of a conflict, Commit returns an 'ErrMergeNeeded' error.
	CommitValue(ds Dataset, v types.Value) (Dataset, error)

	// Delete removes the Dataset named ds.ID() from the map at the root of
	// the Database. The Dataset data is not necessarily cleaned up at this
	// time, but may be garbage collected in the future.
	// The returned Dataset is always the newest snapshot, regardless of
	// success or failure, and Datasets() is updated to match backing storage
	// upon return as well. If the update cannot be performed, e.g., because
	// of a conflict, Delete returns an 'ErrMergeNeeded' error.
	Delete(ds Dataset) (Dataset, error)

	// SetHead ignores any lineage constraints (e.g. the current Head being in
	// commit’s Parent set) and force-sets a mapping from datasetID: commit in
	// this database.
	// All Values that have been written to this Database are guaranteed to be
	// persistent after SetHead(). If the update cannot be performed, e.g.,
	// because another process moved the current Head out from under you,
	// error will be non-nil.
	// The newest snapshot of the Dataset is always returned, so the caller can
	// easily retry using the latest.
	// Regardless, Datasets() is updated to match backing storage upon return.
	SetHead(ds Dataset, newHeadRef types.Ref) (Dataset, error)

	// FastForward takes a types.Ref to a Commit object and makes it the new
	// Head of ds iff it is a descendant of the current Head. Intended to be
	// used e.g. after a call to Pull(). If the update cannot be performed,
	// e.g., because another process moved the current Head out from under
	// you, err will be non-nil.
	// The newest snapshot of the Dataset is always returned, so the caller
	// can easily retry using the latest.
	// Regardless, Datasets() is updated to match backing storage upon return.
	FastForward(ds Dataset, newHeadRef types.Ref) (Dataset, error)
	// contains filtered or unexported methods
}

Database provides versioned storage for noms values. While Values can be read and written directly from a Database, it is generally more appropriate to read data by inspecting the Head of a Dataset and to write new data by updating the Head of a Dataset via Commit() or similar. In particular, new data is not guaranteed to be persistent until after a Commit() (or Delete(), SetHead(), or FastForward()) operation completes. The Database API is stateful, meaning that calls to GetDataset() or Datasets() occurring after a call to Commit() (et al.) will reflect the result of that Commit().

func NewDatabase

func NewDatabase(cs chunks.ChunkStore) Database
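
A minimal end-to-end sketch using NewDatabase with an in-memory chunk store; the chunks.NewMemoryStore constructor and the import paths are assumptions, and any chunks.ChunkStore would do:

```go
package main

import (
	"fmt"

	"github.com/attic-labs/noms/go/chunks"
	"github.com/attic-labs/noms/go/datas"
	"github.com/attic-labs/noms/go/types"
)

func main() {
	db := datas.NewDatabase(chunks.NewMemoryStore())
	defer db.Close()

	ds := db.GetDataset("people")

	// CommitValue makes the current head (if any) the sole parent.
	ds, err := db.CommitValue(ds, types.String("hello noms"))
	if err != nil {
		panic(err)
	}

	// Read the value back through the dataset's head commit.
	if v, ok := ds.MaybeHeadValue(); ok {
		fmt.Println(v) // hello noms
	}
}
```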

type Dataset

type Dataset struct {
	// contains filtered or unexported fields
}

Dataset is a named Commit within a Database.

func (Dataset) Database

func (ds Dataset) Database() Database

Database returns the Database object in which this Dataset is stored. WARNING: This method is under consideration for deprecation.

func (Dataset) HasHead

func (ds Dataset) HasHead() bool

HasHead() returns 'true' if this dataset has a Head Commit, false otherwise.

func (Dataset) Head

func (ds Dataset) Head() types.Struct

Head returns the current head Commit, which contains the current root of the Dataset's value tree.

func (Dataset) HeadRef

func (ds Dataset) HeadRef() types.Ref

HeadRef returns the Ref of the current head Commit, which contains the current root of the Dataset's value tree.

func (Dataset) HeadValue

func (ds Dataset) HeadValue() types.Value

HeadValue returns the Value field of the current head Commit.

func (Dataset) ID

func (ds Dataset) ID() string

ID returns the name of this Dataset.

func (Dataset) MaybeHead

func (ds Dataset) MaybeHead() (types.Struct, bool)

MaybeHead returns the current Head Commit of this Dataset, which contains the current root of the Dataset's value tree, if available. If not, it returns a new Commit and 'false'.

func (Dataset) MaybeHeadRef

func (ds Dataset) MaybeHeadRef() (types.Ref, bool)

MaybeHeadRef returns the Ref of the current Head Commit of this Dataset, which contains the current root of the Dataset's value tree, if available. If not, it returns an empty Ref and 'false'.

func (Dataset) MaybeHeadValue

func (ds Dataset) MaybeHeadValue() (types.Value, bool)

MaybeHeadValue returns the Value field of the current head Commit, if available. If not it returns nil and 'false'.
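
The Maybe* accessors are the safe way to read a dataset that may not have any commits yet, while the plain Head()/HeadRef()/HeadValue() accessors assume a head exists. A small illustrative helper (assumed, with fmt and datas imported):

```go
// describeDataset is an illustrative helper, not part of the package.
func describeDataset(ds datas.Dataset) string {
	if !ds.HasHead() {
		return fmt.Sprintf("dataset %q has no commits yet", ds.ID())
	}
	v, _ := ds.MaybeHeadValue() // same as ds.HeadValue() once HasHead() is true
	return fmt.Sprintf("dataset %q head value: %v", ds.ID(), v)
}
```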

type Factory

type Factory interface {
	Create(string) (Database, bool)

	// Shutter shuts down the factory. Subsequent calls to Create() will fail.
	Shutter()
}

Factory allows the creation of namespaced Database instances. The details of how namespaces are separated is left up to the particular implementation of Factory and Database.

func NewRemoteStoreFactory

func NewRemoteStoreFactory(host, auth string) Factory
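
A hedged sketch of using the remote factory to open independently namespaced databases against one server; the URL, the empty auth token, and the helper itself are illustrative:

```go
// namespacedDatabases is an illustrative helper, not part of the package.
func namespacedDatabases() {
	factory := datas.NewRemoteStoreFactory("http://localhost:8000", "")
	defer factory.Shutter() // runs last; subsequent Create() calls will fail

	users, ok := factory.Create("users")
	if !ok {
		panic("could not open the 'users' namespace")
	}
	defer users.Close()

	logs, ok := factory.Create("logs") // a second, independently namespaced Database
	if !ok {
		panic("could not open the 'logs' namespace")
	}
	defer logs.Close()

	// ... read and commit against users and logs independently ...
}
```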

type Handler

type Handler func(w http.ResponseWriter, req *http.Request, ps URLParams, cs chunks.ChunkStore)

type LocalDatabase

type LocalDatabase struct {
	// contains filtered or unexported fields
}

LocalDatabase is a Database implementation backed by a local chunks.ChunkStore.

func (*LocalDatabase) Close

func (ldb *LocalDatabase) Close() error

func (*LocalDatabase) Commit

func (ldb *LocalDatabase) Commit(ds Dataset, v types.Value, opts CommitOptions) (Dataset, error)

func (*LocalDatabase) CommitValue

func (ldb *LocalDatabase) CommitValue(ds Dataset, v types.Value) (Dataset, error)

func (*LocalDatabase) Datasets

func (dbc *LocalDatabase) Datasets() types.Map

func (*LocalDatabase) Delete

func (ldb *LocalDatabase) Delete(ds Dataset) (Dataset, error)

func (*LocalDatabase) FastForward

func (ldb *LocalDatabase) FastForward(ds Dataset, newHeadRef types.Ref) (Dataset, error)

func (*LocalDatabase) GetDataset

func (ldb *LocalDatabase) GetDataset(datasetID string) Dataset

func (*LocalDatabase) SetHead

func (ldb *LocalDatabase) SetHead(ds Dataset, newHeadRef types.Ref) (Dataset, error)

type LocalFactory

type LocalFactory struct {
	// contains filtered or unexported fields
}

func NewLocalFactory

func NewLocalFactory(cf chunks.Factory) *LocalFactory

func (*LocalFactory) Create

func (lf *LocalFactory) Create(ns string) (Database, bool)

func (*LocalFactory) Shutter

func (lf *LocalFactory) Shutter()

type PullProgress

type PullProgress struct {
	DoneCount, KnownCount, ApproxWrittenBytes uint64
}

type RemoteDatabaseClient

type RemoteDatabaseClient struct {
	// contains filtered or unexported fields
}

RemoteDatabaseClient is a Database implementation that talks to a remote Noms database server over HTTP.

func NewRemoteDatabase

func NewRemoteDatabase(baseURL, auth string) *RemoteDatabaseClient

func (*RemoteDatabaseClient) Close

func (dbc *RemoteDatabaseClient) Close() error

func (*RemoteDatabaseClient) Commit

func (rdb *RemoteDatabaseClient) Commit(ds Dataset, v types.Value, opts CommitOptions) (Dataset, error)

func (*RemoteDatabaseClient) CommitValue

func (rdb *RemoteDatabaseClient) CommitValue(ds Dataset, v types.Value) (Dataset, error)

func (*RemoteDatabaseClient) Datasets

func (dbc *RemoteDatabaseClient) Datasets() types.Map

func (*RemoteDatabaseClient) Delete

func (rdb *RemoteDatabaseClient) Delete(ds Dataset) (Dataset, error)

func (*RemoteDatabaseClient) FastForward

func (rdb *RemoteDatabaseClient) FastForward(ds Dataset, newHeadRef types.Ref) (Dataset, error)

func (*RemoteDatabaseClient) GetDataset

func (rdb *RemoteDatabaseClient) GetDataset(datasetID string) Dataset

func (*RemoteDatabaseClient) SetHead

func (rdb *RemoteDatabaseClient) SetHead(ds Dataset, newHeadRef types.Ref) (Dataset, error)

type RemoteDatabaseServer

type RemoteDatabaseServer struct {

	// Called just before the server is started.
	Ready func()
	// contains filtered or unexported fields
}

func NewRemoteDatabaseServer

func NewRemoteDatabaseServer(cs chunks.ChunkStore, port int) *RemoteDatabaseServer

func (*RemoteDatabaseServer) Port

func (s *RemoteDatabaseServer) Port() int

Port is the actual port used. This may be different from the port passed in to NewRemoteDatabaseServer.

func (*RemoteDatabaseServer) Run

func (s *RemoteDatabaseServer) Run()

Run blocks while the RemoteDatabaseServer is listening. Running it on a separate goroutine is supported.

func (*RemoteDatabaseServer) Stop

func (s *RemoteDatabaseServer) Stop()

Stop causes the RemoteDatabaseServer to stop listening and allows an existing call to Run() to return.
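
A hedged sketch of running the server on its own goroutine and shutting it down; the in-memory chunk store, the sleep placeholder, and the import paths are assumptions:

```go
package main

import (
	"fmt"
	"time"

	"github.com/attic-labs/noms/go/chunks"
	"github.com/attic-labs/noms/go/datas"
)

func main() {
	server := datas.NewRemoteDatabaseServer(chunks.NewMemoryStore(), 8000)
	server.Ready = func() {
		// Called just before the server is started; Port() reports the port
		// actually in use, which may differ from the one requested.
		fmt.Println("serving on port", server.Port())
	}

	go server.Run() // Run blocks, so start it on its own goroutine.

	// ... clients would connect here, e.g. via NewRemoteDatabase ...
	time.Sleep(2 * time.Second)

	server.Stop() // unblocks the pending Run()
}
```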

type RemoteStoreFactory

type RemoteStoreFactory struct {
	// contains filtered or unexported fields
}

func (RemoteStoreFactory) Create

func (f RemoteStoreFactory) Create(ns string) (Database, bool)

func (RemoteStoreFactory) CreateStore

func (f RemoteStoreFactory) CreateStore(ns string) Database

func (RemoteStoreFactory) Shutter

func (f RemoteStoreFactory) Shutter()

type URLParams

type URLParams interface {
	ByName(string) string
}
