Documentation ¶
Overview ¶
Package reflow implements the core data structures and (abstract) runtime for Reflow.
Reflow is a system for distributed program execution. The programs are described by Flows, which are an abstract specification of the program's execution. Each Flow node can take any number of other Flows as dependent inputs and perform some (local) execution over these inputs in order to compute some output value.
Reflow supports a limited form of dynamic dependencies: a Flow may evaluate to a list of values, each of which may be executed independently. This mechanism also provides parallelism.
The system orchestrates Flow execution by evaluating the flow in the manner of an abstract syntax tree; see Eval for more details.
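The evaluation model can be illustrated with a small sketch. The types below are illustrative stand-ins only (they are not the package's actual Flow or Eval definitions): each node depends on other nodes and computes its value from their results, and evaluation recurses bottom-up like evaluating an abstract syntax tree.

```go
package main

import "fmt"

// node is a simplified stand-in for a Flow: it takes other nodes
// as dependent inputs and computes an output from their values.
type node struct {
	deps []*node
	op   func(inputs []int) int
}

// eval walks the dependency graph bottom-up, in the manner of
// evaluating an abstract syntax tree: dependencies first, then
// the node's own operation over their results.
func eval(n *node) int {
	inputs := make([]int, len(n.deps))
	for i, d := range n.deps {
		inputs[i] = eval(d)
	}
	return n.op(inputs)
}

func main() {
	lit := func(v int) *node {
		return &node{op: func([]int) int { return v }}
	}
	sum := &node{
		deps: []*node{lit(1), lit(2), lit(3)},
		op: func(in []int) int {
			t := 0
			for _, v := range in {
				t += v
			}
			return t
		},
	}
	fmt.Println(eval(sum)) // 6
}
```

In the real system, each node's "operation" is typically a containerized exec over filesets rather than an in-process function, but the dependency-driven evaluation order is the same idea.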
Index ¶
- Constants
- Variables
- func AssertExact(_ context.Context, source, target []*Assertions) bool
- func AssertNever(_ context.Context, _, _ []*Assertions) bool
- func GetMaxResourcesMemoryBufferFactor() float64
- func PrettyDiff(lefts, rights []*Assertions) string
- func Runbase(id digest.Digest) (string, error)
- func Rundir() (string, error)
- func SetFilesetOpConcurrencyLimit(limit int)
- type Arg
- type Assert
- type AssertionGenerator
- type AssertionGeneratorMux
- type AssertionKey
- type Assertions
- func AssertionsFromEntry(k AssertionKey, v map[string]string) *Assertions
- func AssertionsFromMap(m map[AssertionKey]map[string]string) *Assertions
- func DistinctAssertions(list ...*Assertions) ([]*Assertions, int)
- func MergeAssertions(list ...*Assertions) (*Assertions, error)
- func NewAssertions() *Assertions
- type AssertionsGroupPart
- func (*AssertionsGroupPart) Descriptor() ([]byte, []int)
- func (m *AssertionsGroupPart) GetId() int32
- func (m *AssertionsGroupPart) GetKeyIds() []int32
- func (*AssertionsGroupPart) ProtoMessage()
- func (m *AssertionsGroupPart) Reset()
- func (m *AssertionsGroupPart) String() string
- func (m *AssertionsGroupPart) XXX_DiscardUnknown()
- func (m *AssertionsGroupPart) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *AssertionsGroupPart) XXX_Merge(src proto.Message)
- func (m *AssertionsGroupPart) XXX_Size() int
- func (m *AssertionsGroupPart) XXX_Unmarshal(b []byte) error
- type AssertionsKeyPart
- func (*AssertionsKeyPart) Descriptor() ([]byte, []int)
- func (m *AssertionsKeyPart) GetBp() *BlobProperties
- func (m *AssertionsKeyPart) GetId() int32
- func (m *AssertionsKeyPart) GetProperties() isAssertionsKeyPart_Properties
- func (m *AssertionsKeyPart) GetSubject() string
- func (*AssertionsKeyPart) ProtoMessage()
- func (m *AssertionsKeyPart) Reset()
- func (m *AssertionsKeyPart) String() string
- func (m *AssertionsKeyPart) XXX_DiscardUnknown()
- func (m *AssertionsKeyPart) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *AssertionsKeyPart) XXX_Merge(src proto.Message)
- func (*AssertionsKeyPart) XXX_OneofWrappers() []interface{}
- func (m *AssertionsKeyPart) XXX_Size() int
- func (m *AssertionsKeyPart) XXX_Unmarshal(b []byte) error
- type AssertionsKeyPart_Bp
- type BlobProperties
- func (*BlobProperties) Descriptor() ([]byte, []int)
- func (m *BlobProperties) GetEtag() string
- func (m *BlobProperties) GetLastModified() string
- func (m *BlobProperties) GetSize() string
- func (*BlobProperties) ProtoMessage()
- func (m *BlobProperties) Reset()
- func (m *BlobProperties) String() string
- func (m *BlobProperties) XXX_DiscardUnknown()
- func (m *BlobProperties) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *BlobProperties) XXX_Merge(src proto.Message)
- func (m *BlobProperties) XXX_Size() int
- func (m *BlobProperties) XXX_Unmarshal(b []byte) error
- type Cache
- type Exec
- type ExecConfig
- type ExecInspect
- type ExecRunInfo
- type Executor
- type File
- type FileMappingPart
- func (*FileMappingPart) Descriptor() ([]byte, []int)
- func (m *FileMappingPart) GetDepth() int32
- func (m *FileMappingPart) GetFileId() int32
- func (m *FileMappingPart) GetIndex() int32
- func (m *FileMappingPart) GetKey() string
- func (*FileMappingPart) ProtoMessage()
- func (m *FileMappingPart) Reset()
- func (m *FileMappingPart) String() string
- func (m *FileMappingPart) XXX_DiscardUnknown()
- func (m *FileMappingPart) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *FileMappingPart) XXX_Merge(src proto.Message)
- func (m *FileMappingPart) XXX_Size() int
- func (m *FileMappingPart) XXX_Unmarshal(b []byte) error
- type FileP
- func (*FileP) Descriptor() ([]byte, []int)
- func (m *FileP) GetContentHash() string
- func (m *FileP) GetEtag() string
- func (m *FileP) GetId() string
- func (m *FileP) GetLastModified() *Timestamp
- func (m *FileP) GetSize() int64
- func (m *FileP) GetSource() string
- func (*FileP) ProtoMessage()
- func (m *FileP) Reset()
- func (m *FileP) String() string
- func (m *FileP) XXX_DiscardUnknown()
- func (m *FileP) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *FileP) XXX_Merge(src proto.Message)
- func (m *FileP) XXX_Size() int
- func (m *FileP) XXX_Unmarshal(b []byte) error
- type FilePart
- func (*FilePart) Descriptor() ([]byte, []int)
- func (m *FilePart) GetAssertionsGroupId() int32
- func (m *FilePart) GetFile() *FileP
- func (m *FilePart) GetId() int32
- func (*FilePart) ProtoMessage()
- func (m *FilePart) Reset()
- func (m *FilePart) String() string
- func (m *FilePart) XXX_DiscardUnknown()
- func (m *FilePart) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *FilePart) XXX_Merge(src proto.Message)
- func (m *FilePart) XXX_Size() int
- func (m *FilePart) XXX_Unmarshal(b []byte) error
- type Fileset
- func (v *Fileset) AddAssertions(a *Assertions) error
- func (v Fileset) AnyEmpty() bool
- func (v Fileset) Assertions() *Assertions
- func (v Fileset) Diff(w Fileset) (string, bool)
- func (v Fileset) Digest() digest.Digest
- func (v Fileset) Empty() bool
- func (v Fileset) Equal(w Fileset) bool
- func (v Fileset) File() (File, error)
- func (v Fileset) Files() []File
- func (v Fileset) Flatten() []Fileset
- func (v *Fileset) MapAssertionsByFile(files []File)
- func (v Fileset) N() int
- func (v Fileset) Pullup() Fileset
- func (v *Fileset) Read(r io.Reader, kind assoc.Kind) error
- func (v Fileset) Short() string
- func (v Fileset) Size() int64
- func (v Fileset) String() string
- func (v Fileset) Subst(sub map[digest.Digest]File) (out Fileset, resolved bool)
- func (v *Fileset) Write(w io.Writer, kind assoc.Kind, includeFileRefFields, includeAssertions bool) error
- func (v Fileset) WriteDigest(w io.Writer)
- type FilesetLimiter
- type FilesetPart
- func (*FilesetPart) Descriptor() ([]byte, []int)
- func (m *FilesetPart) GetAgp() *AssertionsGroupPart
- func (m *FilesetPart) GetAkp() *AssertionsKeyPart
- func (m *FilesetPart) GetFmp() *FileMappingPart
- func (m *FilesetPart) GetFp() *FilePart
- func (m *FilesetPart) GetPart() isFilesetPart_Part
- func (*FilesetPart) ProtoMessage()
- func (m *FilesetPart) Reset()
- func (m *FilesetPart) String() string
- func (m *FilesetPart) XXX_DiscardUnknown()
- func (m *FilesetPart) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *FilesetPart) XXX_Merge(src proto.Message)
- func (*FilesetPart) XXX_OneofWrappers() []interface{}
- func (m *FilesetPart) XXX_Size() int
- func (m *FilesetPart) XXX_Unmarshal(b []byte) error
- type FilesetPart_Agp
- type FilesetPart_Akp
- type FilesetPart_Fmp
- type FilesetPart_Fp
- type Gauges
- type InspectResponse
- type Profile
- type RWAssertions
- type RemoteLogs
- type RemoteLogsType
- type RepoObjectRef
- type Repository
- type Requirements
- type Resources
- func (r *Resources) Add(x, y Resources) *Resources
- func (r Resources) Available(s Resources) bool
- func (r Resources) Div(s Resources) map[string]float64
- func (r Resources) Equal(s Resources) bool
- func (r *Resources) Max(x, y Resources) *Resources
- func (r Resources) MaxRatio(s Resources) (max float64)
- func (r *Resources) Min(x, y Resources) *Resources
- func (r *Resources) Scale(s Resources, factor float64) *Resources
- func (r Resources) ScaledDistance(u Resources) float64
- func (r *Resources) Set(s Resources) *Resources
- func (r Resources) String() string
- func (r *Resources) Sub(x, y Resources) *Resources
- type Result
- type StringDigest
- type Timestamp
- func (*Timestamp) Descriptor() ([]byte, []int)
- func (m *Timestamp) GetNanos() int64
- func (m *Timestamp) GetSeconds() int64
- func (*Timestamp) ProtoMessage()
- func (m *Timestamp) Reset()
- func (m *Timestamp) String() string
- func (m *Timestamp) XXX_DiscardUnknown()
- func (m *Timestamp) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
- func (m *Timestamp) XXX_Merge(src proto.Message)
- func (m *Timestamp) XXX_Size() int
- func (m *Timestamp) XXX_Unmarshal(b []byte) error
- type Transferer
Constants ¶
const (
	// BlobAssertionsNamespace defines the namespace for blob assertions generated by a blob.Mux.
	BlobAssertionsNamespace = "blob"
	// BlobAssertionPropertyETag defines the assertion property for the etag (opaque resource ID) of a blob object.
	BlobAssertionPropertyETag = "etag"
	// BlobAssertionPropertyLastModified defines the assertion property of when a blob object was last modified.
	BlobAssertionPropertyLastModified = "last-modified"
	// BlobAssertionPropertySize defines the assertion property for the size of a blob object.
	BlobAssertionPropertySize = "size"
)
Assertion namespace and property constants.
const DockerInspectTimeFormat = "2006-01-02T15:04:05.999999999Z"
DockerInspectTimeFormat is the format of the time fields in Docker.State retrieved using docker container inspect.
const (
	// ReflowletProcessMemoryReservationPct is the amount of memory that's reserved for the reflowlet process,
	// currently set to 5%. The EC2 overhead and this memory reservation for the reflowlet process,
	// will both be accounted for when we do per-instance verification (using `reflow ec2verify`).
	// NOTE: Changing this will warrant re-verification of all instance types.
	ReflowletProcessMemoryReservationPct = 0.05
)
Variables ¶
var Digester = digest.Digester(crypto.SHA256)
var ResourcesKeys = []string{mem, cpu}
ResourcesKeys are the types of labeled resources typically used. ResourcesKeys does not include disk because it is not updated/used for most practical purposes.
Functions ¶
func AssertExact ¶
func AssertExact(_ context.Context, source, target []*Assertions) bool
AssertExact implements Assert for an exact match. That is, for each key in target, the value must exactly match the corresponding value in source, and target cannot contain keys missing from source.
func AssertNever ¶
func AssertNever(_ context.Context, _, _ []*Assertions) bool
AssertNever implements Assert for an always-match (i.e., it never asserts).
func GetMaxResourcesMemoryBufferFactor ¶
func GetMaxResourcesMemoryBufferFactor() float64
GetMaxResourcesMemoryBufferFactor returns the buffer factor used to compute the max memory resources when determining whether a given resource requirement is within the threshold of resources an alloc can provide.
Since the max amount of memory available on an alloc is computed based on ReflowletProcessMemoryReservationPct, we simply invert that computation to determine the max amount of actual memory likely available on a reflowlet.
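Inverting the reservation works out as follows: if only (1 - pct) of an instance's memory is offered to allocs, then the actual memory behind an alloc's advertised maximum can be up to 1/(1 - pct) times larger. A sketch of that arithmetic (whether the package computes the factor exactly this way is an assumption):

```go
package main

import "fmt"

// ReflowletProcessMemoryReservationPct mirrors the package constant.
const ReflowletProcessMemoryReservationPct = 0.05

// maxMemoryBufferFactor inverts the reservation: an alloc sees
// (1 - pct) of the instance's memory, so the underlying memory is
// up to 1/(1 - pct) times the advertised amount.
func maxMemoryBufferFactor() float64 {
	return 1 / (1 - ReflowletProcessMemoryReservationPct)
}

func main() {
	fmt.Printf("%.4f\n", maxMemoryBufferFactor()) // 1.0526
}
```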
func PrettyDiff ¶
func PrettyDiff(lefts, rights []*Assertions) string
PrettyDiff returns a pretty-printable string representing the differences between the set of Assertions in lefts and rights. Specifically only these differences are relevant:
- any key present in any of the rights but not in lefts.
- any entry (in any of the rights) with a mismatching assertion (in any of the lefts).
TODO(swami): Add unit tests.
func Runbase ¶
Runbase returns a base path representing a run with the given id under `Rundir`. Runbase should be used to create run-specific artifacts, with an appropriate file suffix appended to the returned base path.
func Rundir ¶
Rundir returns the "runs" directory where reflow run artifacts can be found. Rundir returns an error if it cannot be found (or created).
func SetFilesetOpConcurrencyLimit ¶
func SetFilesetOpConcurrencyLimit(limit int)
SetFilesetOpConcurrencyLimit sets the limit on concurrent fileset operations. For the limit to be reset successfully, this must be called before any call to GetFilesetOpLimiter, i.e., before any reflow evaluations have started; otherwise it panics.
Types ¶
type Arg ¶
type Arg struct {
	// Out is true if this is an output argument.
	Out bool
	// Fileset is the fileset used as an input argument.
	Fileset *Fileset `json:",omitempty"`
	// Index is the output argument index.
	Index int
}
Arg represents an exec argument (either input or output).
type Assert ¶
type Assert func(ctx context.Context, source, target []*Assertions) bool
Assert asserts whether the target set of assertions is compatible with the source set. Compatibility is directional: Assert strictly determines whether target is compatible with source, and swapping the arguments may not yield the same result.
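The directionality can be modeled with plain maps (this is a simplified model of the AssertExact implementation's contract, not the package's Assertions representation; the key strings are made up):

```go
package main

import "fmt"

// assertionSet is a simplified stand-in for *Assertions: property
// values keyed by subject/property.
type assertionSet map[string]string

// assertExactModel models the directional exact-match check: every
// key in target must exist in source with an identical value.
// Extra keys in source are fine; extra keys in target are not.
func assertExactModel(source, target assertionSet) bool {
	for k, v := range target {
		sv, ok := source[k]
		if !ok || sv != v {
			return false
		}
	}
	return true
}

func main() {
	src := assertionSet{"s3://bucket/a:etag": "v1", "s3://bucket/b:etag": "v2"}
	tgt := assertionSet{"s3://bucket/a:etag": "v1"}
	fmt.Println(assertExactModel(src, tgt)) // true: target is a subset of source
	fmt.Println(assertExactModel(tgt, src)) // false: direction matters
}
```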
type AssertionGenerator ¶
type AssertionGenerator interface {
	// Generate computes assertions for a given AssertionKey.
	Generate(ctx context.Context, key AssertionKey) (*Assertions, error)
}
AssertionGenerator generates assertions based on an AssertionKey. Implementations are specific to a namespace and generate assertions for a given subject.
type AssertionGeneratorMux ¶
type AssertionGeneratorMux map[string]AssertionGenerator
AssertionGeneratorMux multiplexes a number of AssertionGenerator implementations based on the namespace.
func (AssertionGeneratorMux) Generate ¶
func (am AssertionGeneratorMux) Generate(ctx context.Context, key AssertionKey) (*Assertions, error)
Generate implements the AssertionGenerator interface for AssertionGeneratorMux.
type AssertionKey ¶
type AssertionKey struct {
Subject, Namespace string
}
AssertionKey represents a subject within a namespace whose properties can be asserted.
- Subject represents the unique entity within the Namespace to which this Assertion applies. (eg: full path to blob object, a Docker Image, etc)
- Namespace represents the namespace to which the subject of this Assertion belongs. (eg: "blob" for blob objects, "docker" for docker images, etc)
func (AssertionKey) Less ¶
func (a AssertionKey) Less(b AssertionKey) bool
Less returns whether this AssertionKey is lexicographically smaller than the given one.
type Assertions ¶
type Assertions struct {
// contains filtered or unexported fields
}
Assertions represent a collection of AssertionKeys with specific values for various of their properties. Assertions are immutable and constructed in one of the following ways:
- NewAssertions: creates an empty Assertions and is typically used when subsequent operations are to AddFrom.
- AssertionsFromEntry: creates Assertions from a single entry mapping an AssertionKey to various properties (within the key's Namespace) of the named Subject in the key.
- AssertionsFromMap: creates Assertions from a mapping of AssertionKey to properties.
- MergeAssertions: merges a list of Assertions into a single Assertions.
func AssertionsFromEntry ¶
func AssertionsFromEntry(k AssertionKey, v map[string]string) *Assertions
AssertionsFromEntry creates an Assertions from a single entry. It is similar to AssertionsFromMap and exists for convenience.
func AssertionsFromMap ¶
func AssertionsFromMap(m map[AssertionKey]map[string]string) *Assertions
AssertionsFromMap creates an Assertions from a given mapping of AssertionKey to a map representing its property names and corresponding values.
func DistinctAssertions ¶
func DistinctAssertions(list ...*Assertions) ([]*Assertions, int)
DistinctAssertions returns the distinct list of non-empty Assertions from the given list and a total size.
func MergeAssertions ¶
func MergeAssertions(list ...*Assertions) (*Assertions, error)
MergeAssertions merges a list of Assertions into a single Assertions. Returns an error if the same key maps to a conflicting value as a result of the merge.
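The merge-with-conflict-detection contract can be sketched over plain maps (a simplified model of the documented behavior, not the package's implementation; property names are illustrative):

```go
package main

import "fmt"

// mergeProps models MergeAssertions over plain property maps:
// entries from all inputs are combined, and the same key mapping
// to two different values is reported as a conflict.
func mergeProps(list ...map[string]string) (map[string]string, error) {
	out := make(map[string]string)
	for _, m := range list {
		for k, v := range m {
			if prev, ok := out[k]; ok && prev != v {
				return nil, fmt.Errorf("conflict for %q: %q vs %q", k, prev, v)
			}
			out[k] = v
		}
	}
	return out, nil
}

func main() {
	a := map[string]string{"etag": "v1"}
	b := map[string]string{"size": "42"}
	merged, err := mergeProps(a, b)
	fmt.Println(merged, err) // map[etag:v1 size:42] <nil>

	_, err = mergeProps(a, map[string]string{"etag": "v2"})
	fmt.Println(err != nil) // true: conflicting values for the same key
}
```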
func NewAssertions ¶
func NewAssertions() *Assertions
NewAssertions creates a new (empty) Assertions object.
func (*Assertions) Digest ¶
func (s *Assertions) Digest() digest.Digest
Digest returns the assertions' digest.
func (*Assertions) Equal ¶
func (s *Assertions) Equal(t *Assertions) bool
Equal returns whether the given Assertions is equal to this one.
func (*Assertions) IsEmpty ¶
func (s *Assertions) IsEmpty() bool
IsEmpty returns whether this is empty, which it is if it's a nil reference or has no entries.
func (*Assertions) PrettyDiff ¶
func (s *Assertions) PrettyDiff(t *Assertions) string
PrettyDiff returns a pretty-printable string representing the differences in the given Assertions that conflict with this one. Specifically only these differences are relevant:
- any key present in t but not in s.
- any entry with a mismatching assertion in t and s.
func (*Assertions) Short ¶
func (s *Assertions) Short() string
Short returns a short string representation of the assertions.
func (*Assertions) String ¶
func (s *Assertions) String() string
String returns a full, human-readable string representing the assertions.
type AssertionsGroupPart ¶
type AssertionsGroupPart struct {
	Id                   int32    `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
	KeyIds               []int32  `protobuf:"varint,2,rep,packed,name=keyIds,proto3" json:"keyIds,omitempty"`
	XXX_NoUnkeyedLiteral struct{} `json:"-"`
	XXX_unrecognized     []byte   `json:"-"`
	XXX_sizecache        int32    `json:"-"`
}
func (*AssertionsGroupPart) Descriptor ¶
func (*AssertionsGroupPart) Descriptor() ([]byte, []int)
func (*AssertionsGroupPart) GetId ¶
func (m *AssertionsGroupPart) GetId() int32
func (*AssertionsGroupPart) GetKeyIds ¶
func (m *AssertionsGroupPart) GetKeyIds() []int32
func (*AssertionsGroupPart) ProtoMessage ¶
func (*AssertionsGroupPart) ProtoMessage()
func (*AssertionsGroupPart) Reset ¶
func (m *AssertionsGroupPart) Reset()
func (*AssertionsGroupPart) String ¶
func (m *AssertionsGroupPart) String() string
func (*AssertionsGroupPart) XXX_DiscardUnknown ¶
func (m *AssertionsGroupPart) XXX_DiscardUnknown()
func (*AssertionsGroupPart) XXX_Marshal ¶
func (m *AssertionsGroupPart) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
func (*AssertionsGroupPart) XXX_Merge ¶
func (m *AssertionsGroupPart) XXX_Merge(src proto.Message)
func (*AssertionsGroupPart) XXX_Size ¶
func (m *AssertionsGroupPart) XXX_Size() int
func (*AssertionsGroupPart) XXX_Unmarshal ¶
func (m *AssertionsGroupPart) XXX_Unmarshal(b []byte) error
type AssertionsKeyPart ¶
type AssertionsKeyPart struct {
	Id      int32  `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
	Subject string `protobuf:"bytes,2,opt,name=subject,proto3" json:"subject,omitempty"`
	// Types that are valid to be assigned to Properties:
	//	*AssertionsKeyPart_Bp
	Properties           isAssertionsKeyPart_Properties `protobuf_oneof:"properties"`
	XXX_NoUnkeyedLiteral struct{}                       `json:"-"`
	XXX_unrecognized     []byte                         `json:"-"`
	XXX_sizecache        int32                          `json:"-"`
}
func (*AssertionsKeyPart) Descriptor ¶
func (*AssertionsKeyPart) Descriptor() ([]byte, []int)
func (*AssertionsKeyPart) GetBp ¶
func (m *AssertionsKeyPart) GetBp() *BlobProperties
func (*AssertionsKeyPart) GetId ¶
func (m *AssertionsKeyPart) GetId() int32
func (*AssertionsKeyPart) GetProperties ¶
func (m *AssertionsKeyPart) GetProperties() isAssertionsKeyPart_Properties
func (*AssertionsKeyPart) GetSubject ¶
func (m *AssertionsKeyPart) GetSubject() string
func (*AssertionsKeyPart) ProtoMessage ¶
func (*AssertionsKeyPart) ProtoMessage()
func (*AssertionsKeyPart) Reset ¶
func (m *AssertionsKeyPart) Reset()
func (*AssertionsKeyPart) String ¶
func (m *AssertionsKeyPart) String() string
func (*AssertionsKeyPart) XXX_DiscardUnknown ¶
func (m *AssertionsKeyPart) XXX_DiscardUnknown()
func (*AssertionsKeyPart) XXX_Marshal ¶
func (m *AssertionsKeyPart) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
func (*AssertionsKeyPart) XXX_Merge ¶
func (m *AssertionsKeyPart) XXX_Merge(src proto.Message)
func (*AssertionsKeyPart) XXX_OneofWrappers ¶
func (*AssertionsKeyPart) XXX_OneofWrappers() []interface{}
XXX_OneofWrappers is for the internal use of the proto package.
func (*AssertionsKeyPart) XXX_Size ¶
func (m *AssertionsKeyPart) XXX_Size() int
func (*AssertionsKeyPart) XXX_Unmarshal ¶
func (m *AssertionsKeyPart) XXX_Unmarshal(b []byte) error
type AssertionsKeyPart_Bp ¶
type AssertionsKeyPart_Bp struct {
Bp *BlobProperties `protobuf:"bytes,3,opt,name=bp,proto3,oneof"`
}
type BlobProperties ¶
type BlobProperties struct {
	Etag                 string   `protobuf:"bytes,1,opt,name=etag,proto3" json:"etag,omitempty"`
	LastModified         string   `protobuf:"bytes,2,opt,name=lastModified,proto3" json:"lastModified,omitempty"`
	Size                 string   `protobuf:"bytes,3,opt,name=size,proto3" json:"size,omitempty"`
	XXX_NoUnkeyedLiteral struct{} `json:"-"`
	XXX_unrecognized     []byte   `json:"-"`
	XXX_sizecache        int32    `json:"-"`
}
func (*BlobProperties) Descriptor ¶
func (*BlobProperties) Descriptor() ([]byte, []int)
func (*BlobProperties) GetEtag ¶
func (m *BlobProperties) GetEtag() string
func (*BlobProperties) GetLastModified ¶
func (m *BlobProperties) GetLastModified() string
func (*BlobProperties) GetSize ¶
func (m *BlobProperties) GetSize() string
func (*BlobProperties) ProtoMessage ¶
func (*BlobProperties) ProtoMessage()
func (*BlobProperties) Reset ¶
func (m *BlobProperties) Reset()
func (*BlobProperties) String ¶
func (m *BlobProperties) String() string
func (*BlobProperties) XXX_DiscardUnknown ¶
func (m *BlobProperties) XXX_DiscardUnknown()
func (*BlobProperties) XXX_Marshal ¶
func (m *BlobProperties) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
func (*BlobProperties) XXX_Merge ¶
func (m *BlobProperties) XXX_Merge(src proto.Message)
func (*BlobProperties) XXX_Size ¶
func (m *BlobProperties) XXX_Size() int
func (*BlobProperties) XXX_Unmarshal ¶
func (m *BlobProperties) XXX_Unmarshal(b []byte) error
type Cache ¶
type Cache interface {
	// Lookup returns the value associated with a (digest) key.
	// Lookup returns an error flagged errors.NotExist when there
	// is no such value.
	//
	// Lookup should also check to make sure that the objects
	// actually exist, and provide a reasonable guarantee that they'll
	// be available for transfer.
	//
	// TODO(marius): allow the caller to maintain a lease on the desired
	// objects so that garbage collection can (safely) be run
	// concurrently with flows. This isn't a correctness concern (the
	// flows may be restarted), but rather one of efficiency.
	Lookup(context.Context, digest.Digest) (Fileset, error)

	// Transfer transmits the file objects associated with value v
	// (usually retrieved by Lookup) to the repository dst. Transfer
	// should be used in place of direct (cache) repository access since
	// it may apply additional policies (e.g., rate limiting, etc.)
	Transfer(ctx context.Context, dst Repository, v Fileset) error

	// NeedTransfer returns the set of files in the Fileset v that are absent
	// in the provided repository.
	NeedTransfer(ctx context.Context, dst Repository, v Fileset) ([]File, error)

	// Write stores the Value v, whose file objects exist in Repository repo,
	// under the key id. If the repository is nil no objects are transferred.
	Write(ctx context.Context, id digest.Digest, v Fileset, repo Repository) error

	// Delete removes the value named by id from this cache.
	Delete(ctx context.Context, id digest.Digest) error

	// Repository returns this cache's underlying repository. It should
	// not be used for data transfer during the course of evaluation; see
	// Transfer.
	Repository() Repository
}
A Cache stores Values and their associated File objects for later retrieval. Caches may be temporary: objects are not guaranteed to persist.
type Exec ¶
type Exec interface {
	// ID returns the digest of the exec. This is equivalent to the Digest of the value computed
	// by the Exec.
	ID() digest.Digest

	// URI names execs in a process-agnostic fashion.
	URI() string

	// Result returns the exec's result after it has been completed.
	Result(ctx context.Context) (Result, error)

	// Inspect inspects the exec. It can be called at any point in the Exec's lifetime.
	// If a repo is provided, Inspect will also marshal this exec's inspect (but only if the exec is complete)
	// into the given repo and returns the digest of the marshaled contents.
	// If a repo is provided, the implementation may also put the exec's stdout and stderr logs and return their references.
	// Implementations may return zero values for digest fields in the case of transient errors.
	Inspect(ctx context.Context, repo *url.URL) (resp InspectResponse, err error)

	// Wait awaits completion of the Exec.
	Wait(ctx context.Context) error

	// Logs returns the standard error and/or standard output of the Exec.
	// If it is called during execution, and if follow is true, it follows
	// the logs until completion of execution.
	// Completed Execs return the full set of available logs.
	Logs(ctx context.Context, stdout, stderr, follow bool) (io.ReadCloser, error)

	// RemoteLogs returns the remote location of the logs (if available).
	// Returns the standard error (if 'stdout' is false) or standard output of the Exec.
	// The location is just a reference and the content must be retrieved as appropriate for the type.
	// RemoteLogs may (or may not) return a valid location during execution.
	RemoteLogs(ctx context.Context, stdout bool) (RemoteLogs, error)

	// Shell invokes /bin/bash inside an Exec. It can be invoked only when
	// the Exec is executing. r provides the shell input. The returned read
	// closer has the shell output. The caller has to close the read closer
	// once done.
	// TODO(pgopal) - Implement shell for zombie execs.
	Shell(ctx context.Context) (io.ReadWriteCloser, error)

	// Promote installs this exec's objects into the alloc's repository.
	// Promote assumes that the Exec is complete, i.e., Wait returned successfully.
	Promote(context.Context) error
}
An Exec computes a Value. It is created from an ExecConfig; the Exec interface permits waiting on completion, and inspection of results as well as ongoing execution.
type ExecConfig ¶
type ExecConfig struct {
	// The type of exec: "exec", "intern", "extern"
	Type string

	// A human-readable name for the exec.
	Ident string

	// intern, extern: the URL from which data is fetched or to which
	// data is pushed.
	URL string

	// exec: the docker image used to perform an exec
	Image string

	// The docker image that is specified by the user
	OriginalImage string

	// exec: the Sprintf-able command that is to be run inside of the
	// Docker image.
	Cmd string

	// exec: the set of arguments (one per %s in Cmd) passed to the command
	// extern: the single argument which is to be exported
	Args []Arg

	// exec: the resource requirements for the exec
	Resources

	// NeedAWSCreds indicates the exec needs AWS credentials defined in
	// its environment: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and
	// AWS_SESSION_TOKEN will be available with the user's default
	// credentials.
	NeedAWSCreds bool

	// NeedDockerAccess indicates that the exec needs access to the host docker daemon
	NeedDockerAccess bool

	// OutputIsDir tells whether an output argument (by index)
	// is a directory.
	OutputIsDir []bool `json:",omitempty"`
}
ExecConfig contains all the necessary information to perform an exec.
func (ExecConfig) String ¶
func (e ExecConfig) String() string
type ExecInspect ¶
type ExecInspect struct {
	Created time.Time
	Config  ExecConfig
	State   string        // "created", "waiting", "running", .., "zombie"
	Status  string        // human readable status
	Error   *errors.Error `json:",omitempty"` // non-nil runtime on error
	Profile Profile

	// Gauges are used to export realtime exec stats. They are used only
	// while the Exec is in running state.
	Gauges Gauges

	// Commands running from top, for live inspection.
	Commands []string

	// Docker inspect output.
	Docker types.ContainerJSON

	// ExecError stores exec result errors.
	ExecError *errors.Error `json:",omitempty"`
}
ExecInspect describes the current state of an Exec.
func (ExecInspect) RunInfo ¶
func (e ExecInspect) RunInfo() ExecRunInfo
RunInfo returns an ExecRunInfo object populated with the data from this inspect.
func (ExecInspect) Runtime ¶
func (e ExecInspect) Runtime() time.Duration
Runtime computes the exec's runtime based on Docker's timestamps.
type ExecRunInfo ¶
type ExecRunInfo struct {
	Runtime   time.Duration
	Profile   Profile
	Resources Resources

	// InspectDigest is the reference to the inspect object stored in the repository.
	InspectDigest RepoObjectRef

	// Stdout is the reference to the exec's stdout log.
	Stdout RepoObjectRef

	// Stderr is the reference to the exec's stderr log.
	Stderr RepoObjectRef
}
ExecRunInfo contains information associated with a completed Exec run.
type Executor ¶
type Executor interface {
	// Put creates a new Exec at id. It is idempotent.
	Put(ctx context.Context, id digest.Digest, exec ExecConfig) (Exec, error)

	// Get retrieves the Exec named id.
	Get(ctx context.Context, id digest.Digest) (Exec, error)

	// Remove deletes an Exec.
	Remove(ctx context.Context, id digest.Digest) error

	// Execs lists all Execs known to the Executor.
	Execs(ctx context.Context) ([]Exec, error)

	// Load fetches missing files into the executor's repository. Load fetches
	// resolved files from the specified backing repository and unresolved files
	// directly from the source. The resolved fileset is returned and is available
	// on the executor on successful return. The client has to explicitly unload the
	// files to free them.
	Load(ctx context.Context, repo *url.URL, fileset Fileset) (Fileset, error)

	// VerifyIntegrity verifies the integrity of the given set of files.
	VerifyIntegrity(ctx context.Context, fileset Fileset) error

	// Unload the data from the executor's repository. Any use of the unloaded files
	// after the successful return of Unload is undefined.
	Unload(ctx context.Context, fileset Fileset) error

	// Resources indicates the total amount of resources available at the Executor.
	Resources() Resources

	// Repository returns the Repository associated with this Executor.
	Repository() Repository
}
Executor manages Execs and their values.
type File ¶
type File struct {
	// The digest of the contents of the file.
	ID digest.Digest

	// The size of the file.
	Size int64

	// Source stores a URL for the file from which it may
	// be retrieved.
	Source string `json:",omitempty"`

	// ETag stores an optional entity tag for the Source file.
	ETag string `json:",omitempty"`

	// LastModified stores the file's last modified time.
	LastModified time.Time `json:",omitempty"`

	// ContentHash is the digest of the file contents and can be present
	// for unresolved (ie reference) files.
	// ContentHash is expected to equal ID once this file is resolved.
	ContentHash digest.Digest `json:",omitempty"`

	// Assertions are the set of assertions representing the state
	// of all the dependencies that went into producing this file.
	// Unlike Etag/Size etc which are properties of this File,
	// Assertions can include properties of other subjects that
	// contributed to producing this File.
	// In order to include Assertions when converting to/from JSON,
	// the custom Fileset.Write and Fileset.Read methods must be used.
	// The standard JSON library (and probably most third party ones) will ignore this field.
	Assertions *Assertions `json:"-"`
}
File represents a File inside of Reflow. A file is said to be resolved if it contains the digest of the file's contents (ID). Otherwise, a File is said to be a reference, in which case it must contain a source and etag and may contain a ContentHash. Any type of File (resolved or reference) can contain Assertions. TODO(swami): Split into resolved/reference files explicitly.
func (File) Digest ¶
Digest returns the file's digest: if the file is a reference and its ContentHash is unset, the digest comprises the reference's source, etag, and assertions. Reference files return ContentHash if set (which is assumed to be the digest of the file's contents). Resolved files return ID, which is the digest of the file's contents.
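The selection logic above can be sketched with simplified types (this models the documented precedence only; it is not the package's implementation, the real method folds assertions into the reference digest, and the field types are simplified to strings):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// file is a simplified stand-in for reflow.File.
type file struct {
	ID          string // digest of contents; empty for reference files
	Source      string
	ETag        string
	ContentHash string // optional content digest for references
}

// fileDigest sketches the documented precedence: resolved files
// return ID; reference files return ContentHash when set, and
// otherwise a digest computed from the reference fields.
func fileDigest(f file) string {
	if f.ID != "" { // resolved file
		return f.ID
	}
	if f.ContentHash != "" { // reference with a known content digest
		return f.ContentHash
	}
	// Reference without a content digest: hash the reference fields.
	sum := sha256.Sum256([]byte(f.Source + "\x00" + f.ETag))
	return fmt.Sprintf("%x", sum[:4])
}

func main() {
	ref := file{Source: "s3://bucket/key", ETag: "abc"}
	fmt.Println(fileDigest(ref) == fileDigest(ref)) // true: deterministic

	res := file{ID: "sha256:deadbeef"}
	fmt.Println(fileDigest(res)) // sha256:deadbeef
}
```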
type FileMappingPart ¶
type FileMappingPart struct {
	Depth                int32    `protobuf:"varint,1,opt,name=depth,proto3" json:"depth,omitempty"`
	Index                int32    `protobuf:"varint,2,opt,name=index,proto3" json:"index,omitempty"`
	Key                  string   `protobuf:"bytes,3,opt,name=key,proto3" json:"key,omitempty"`
	FileId               int32    `protobuf:"varint,4,opt,name=fileId,proto3" json:"fileId,omitempty"`
	XXX_NoUnkeyedLiteral struct{} `json:"-"`
	XXX_unrecognized     []byte   `json:"-"`
	XXX_sizecache        int32    `json:"-"`
}
func (*FileMappingPart) Descriptor ¶
func (*FileMappingPart) Descriptor() ([]byte, []int)
func (*FileMappingPart) GetDepth ¶
func (m *FileMappingPart) GetDepth() int32
func (*FileMappingPart) GetFileId ¶
func (m *FileMappingPart) GetFileId() int32
func (*FileMappingPart) GetIndex ¶
func (m *FileMappingPart) GetIndex() int32
func (*FileMappingPart) GetKey ¶
func (m *FileMappingPart) GetKey() string
func (*FileMappingPart) ProtoMessage ¶
func (*FileMappingPart) ProtoMessage()
func (*FileMappingPart) Reset ¶
func (m *FileMappingPart) Reset()
func (*FileMappingPart) String ¶
func (m *FileMappingPart) String() string
func (*FileMappingPart) XXX_DiscardUnknown ¶
func (m *FileMappingPart) XXX_DiscardUnknown()
func (*FileMappingPart) XXX_Marshal ¶
func (m *FileMappingPart) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
func (*FileMappingPart) XXX_Merge ¶
func (m *FileMappingPart) XXX_Merge(src proto.Message)
func (*FileMappingPart) XXX_Size ¶
func (m *FileMappingPart) XXX_Size() int
func (*FileMappingPart) XXX_Unmarshal ¶
func (m *FileMappingPart) XXX_Unmarshal(b []byte) error
type FileP ¶
type FileP struct {
	Id           string     `protobuf:"bytes,1,opt,name=id,proto3" json:"id,omitempty"`
	Size         int64      `protobuf:"varint,2,opt,name=size,proto3" json:"size,omitempty"`
	Source       string     `protobuf:"bytes,3,opt,name=source,proto3" json:"source,omitempty"`
	Etag         string     `protobuf:"bytes,4,opt,name=etag,proto3" json:"etag,omitempty"`
	LastModified *Timestamp `protobuf:"bytes,5,opt,name=lastModified,proto3" json:"lastModified,omitempty"`
	ContentHash  string     `protobuf:"bytes,6,opt,name=contentHash,proto3" json:"contentHash,omitempty"`
	XXX_NoUnkeyedLiteral struct{} `json:"-"`
	XXX_unrecognized     []byte   `json:"-"`
	XXX_sizecache        int32    `json:"-"`
}
func (*FileP) Descriptor ¶
func (*FileP) GetContentHash ¶
func (*FileP) GetLastModified ¶
func (*FileP) ProtoMessage ¶
func (*FileP) ProtoMessage()
func (*FileP) XXX_DiscardUnknown ¶
func (m *FileP) XXX_DiscardUnknown()
func (*FileP) XXX_Marshal ¶
func (*FileP) XXX_Unmarshal ¶
type FilePart ¶
type FilePart struct {
	Id                int32  `protobuf:"varint,1,opt,name=id,proto3" json:"id,omitempty"`
	File              *FileP `protobuf:"bytes,2,opt,name=file,proto3" json:"file,omitempty"`
	AssertionsGroupId int32  `protobuf:"varint,3,opt,name=assertionsGroupId,proto3" json:"assertionsGroupId,omitempty"`
	XXX_NoUnkeyedLiteral struct{} `json:"-"`
	XXX_unrecognized     []byte   `json:"-"`
	XXX_sizecache        int32    `json:"-"`
}
func (*FilePart) Descriptor ¶
func (*FilePart) GetAssertionsGroupId ¶
func (*FilePart) ProtoMessage ¶
func (*FilePart) ProtoMessage()
func (*FilePart) XXX_DiscardUnknown ¶
func (m *FilePart) XXX_DiscardUnknown()
func (*FilePart) XXX_Marshal ¶
func (*FilePart) XXX_Unmarshal ¶
type Fileset ¶
type Fileset struct {
	List []Fileset       `json:",omitempty"`
	Map  map[string]File `json:"Fileset,omitempty"`
}
Fileset is the result of an evaluated flow. A value is either a list of values or a fileset; a fileset is a map of paths to Files.
func (*Fileset) AddAssertions ¶
func (v *Fileset) AddAssertions(a *Assertions) error
AddAssertions adds the given assertions to all files in this Fileset.
func (Fileset) AnyEmpty ¶
AnyEmpty tells whether this value, or any of its constituent values, contains no files.
func (Fileset) Assertions ¶
func (v Fileset) Assertions() *Assertions
Assertions returns all the assertions across all the Files in this Fileset.
func (Fileset) Diff ¶
Diff deep-compares the values of two filesets, assuming they have the same structure, and returns a pretty-printed diff of the differences (if any) and a boolean indicating whether they differ.
func (Fileset) Digest ¶
Digest returns a digest representing the value. Digests preserve semantics: two values with the same digest are considered to be equivalent.
func (Fileset) File ¶
File returns the only file expected to be contained in this Fileset. It returns an error if the fileset does not contain exactly one file.
func (Fileset) Flatten ¶
Flatten is a convenience function to flatten (shallowly) the value v, returning a list of Values. If the value is a list value, the list is returned; otherwise a unary list of the value v is returned.
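The list/map recursion and Flatten's shallow behavior can be sketched with simplified local types (a toy stand-in for Fileset, with file digests in place of full File values):

```go
package main

import "fmt"

// fileset is a simplified stand-in for reflow.Fileset: either a list of
// filesets or a map of paths to file digests.
type fileset struct {
	List []fileset
	Map  map[string]string // path -> file digest (simplified)
}

// flatten shallowly flattens v, per the documented Flatten behavior: a list
// value yields its members; any other value yields a unary list of itself.
func flatten(v fileset) []fileset {
	if v.List != nil {
		return v.List
	}
	return []fileset{v}
}

func main() {
	leaf := fileset{Map: map[string]string{"sample.fastq.gz": "f2c59c40"}}
	list := fileset{List: []fileset{leaf, leaf}}
	fmt.Println(len(flatten(list))) // 2
	fmt.Println(len(flatten(leaf))) // 1
}
```

Because Flatten is shallow, nested lists are not recursively expanded; each member of the returned slice may itself be a list.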
func (*Fileset) MapAssertionsByFile ¶
MapAssertionsByFile copies the assertions from the given set of files to the corresponding files (matched by file.Digest()), if any, in this fileset.
func (*Fileset) Read ¶
Read reads (unmarshals) a serialized fileset from the given Reader into this Fileset.
func (Fileset) Short ¶
Short returns a short, human-readable string representing the value. Its intended use is for pretty-printed output. In particular, hashes are abbreviated, and lists display only the first member, followed by ellipsis. For example, a list of values is printed as:
list<val<sample.fastq.gz=f2c59c40>, ...50MB>
func (Fileset) String ¶
String returns a full, human-readable string representing the value v. Unlike Short, String is fully descriptive: it contains the full digest, and lists are complete. For example:
list<val<sample.fastq.gz=sha256:f2c59c40a1d71c0c2af12d38a2276d9df49073c08360d72320847efebc820160, sample2.fastq.gz=sha256:59eb82c49448e349486b29540ad71f4ddd7f53e5a204d50997f054d05c939adb>>
func (Fileset) Subst ¶
Subst substitutes the files in the fileset using the provided mapping of File object digests to Files: any unresolved file f in this fileset tree is substituted by sub[f.Digest()]. Subst returns whether the fileset is fully resolved after substitution.
func (*Fileset) Write ¶
func (v *Fileset) Write(w io.Writer, kind assoc.Kind, includeFileRefFields, includeAssertions bool) error
Write serializes and writes (marshals) this Fileset in the specified format to the given writer. If includeFileRefFields is true, the proto marshalled format will include reference file fields such as source and etag. If includeAssertions is true, the proto marshalled format will include assertions on files.
func (Fileset) WriteDigest ¶
WriteDigest writes the digestible material for v to w. The io.Writer is assumed to be produced by a Digester, and hence infallible. Errors are not checked.
type FilesetLimiter ¶
func GetFilesetOpLimiter ¶
func GetFilesetOpLimiter() *FilesetLimiter
func (*FilesetLimiter) Limit ¶
func (l *FilesetLimiter) Limit() int
type FilesetPart ¶
type FilesetPart struct {
	// Types that are valid to be assigned to Part:
	//	*FilesetPart_Fp
	//	*FilesetPart_Fmp
	//	*FilesetPart_Akp
	//	*FilesetPart_Agp
	Part isFilesetPart_Part `protobuf_oneof:"part"`
	XXX_NoUnkeyedLiteral struct{} `json:"-"`
	XXX_unrecognized     []byte   `json:"-"`
	XXX_sizecache        int32    `json:"-"`
}
func (*FilesetPart) Descriptor ¶
func (*FilesetPart) Descriptor() ([]byte, []int)
func (*FilesetPart) GetAgp ¶
func (m *FilesetPart) GetAgp() *AssertionsGroupPart
func (*FilesetPart) GetAkp ¶
func (m *FilesetPart) GetAkp() *AssertionsKeyPart
func (*FilesetPart) GetFmp ¶
func (m *FilesetPart) GetFmp() *FileMappingPart
func (*FilesetPart) GetFp ¶
func (m *FilesetPart) GetFp() *FilePart
func (*FilesetPart) GetPart ¶
func (m *FilesetPart) GetPart() isFilesetPart_Part
func (*FilesetPart) ProtoMessage ¶
func (*FilesetPart) ProtoMessage()
func (*FilesetPart) Reset ¶
func (m *FilesetPart) Reset()
func (*FilesetPart) String ¶
func (m *FilesetPart) String() string
func (*FilesetPart) XXX_DiscardUnknown ¶
func (m *FilesetPart) XXX_DiscardUnknown()
func (*FilesetPart) XXX_Marshal ¶
func (m *FilesetPart) XXX_Marshal(b []byte, deterministic bool) ([]byte, error)
func (*FilesetPart) XXX_Merge ¶
func (m *FilesetPart) XXX_Merge(src proto.Message)
func (*FilesetPart) XXX_OneofWrappers ¶
func (*FilesetPart) XXX_OneofWrappers() []interface{}
XXX_OneofWrappers is for the internal use of the proto package.
func (*FilesetPart) XXX_Size ¶
func (m *FilesetPart) XXX_Size() int
func (*FilesetPart) XXX_Unmarshal ¶
func (m *FilesetPart) XXX_Unmarshal(b []byte) error
type FilesetPart_Agp ¶
type FilesetPart_Agp struct {
Agp *AssertionsGroupPart `protobuf:"bytes,4,opt,name=agp,proto3,oneof"`
}
type FilesetPart_Akp ¶
type FilesetPart_Akp struct {
Akp *AssertionsKeyPart `protobuf:"bytes,3,opt,name=akp,proto3,oneof"`
}
type FilesetPart_Fmp ¶
type FilesetPart_Fmp struct {
Fmp *FileMappingPart `protobuf:"bytes,2,opt,name=fmp,proto3,oneof"`
}
type FilesetPart_Fp ¶
type FilesetPart_Fp struct {
Fp *FilePart `protobuf:"bytes,1,opt,name=fp,proto3,oneof"`
}
type InspectResponse ¶
type InspectResponse struct {
	// Inspect is the full inspect data for this exec.
	Inspect *ExecInspect `json:",omitempty"`
	// RunInfo contains useful information for the client derived from the ExecInspect.
	RunInfo *ExecRunInfo `json:",omitempty"`
}
InspectResponse is the value returned by a call to an exec's Inspect. Either Inspect or RunInfo will be populated, with the other field set to nil.
type RWAssertions ¶
type RWAssertions struct {
// contains filtered or unexported fields
}
RWAssertions are a mutable representation of Assertions.
func NewRWAssertions ¶
func NewRWAssertions(a *Assertions) *RWAssertions
NewRWAssertions creates a new RWAssertions with the given Assertions.
func (*RWAssertions) AddFrom ¶
func (s *RWAssertions) AddFrom(list ...*Assertions) error
AddFrom adds to this RWAssertions from the given list of Assertions. Returns an error if the same key maps to a conflicting value as a result of the adding. AddFrom panics if s is nil.
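The conflict rule AddFrom describes can be sketched with a hypothetical simplification (plain string maps in place of the real Assertions, whose keys are AssertionKeys mapping to property maps):

```go
package main

import (
	"errors"
	"fmt"
)

// rwAssertions is a toy stand-in for RWAssertions: a mutable map of
// assertion key to asserted value.
type rwAssertions map[string]string

// addFrom merges src into a. Per the documented AddFrom behavior, it
// returns an error if the same key would map to a conflicting value.
func (a rwAssertions) addFrom(src map[string]string) error {
	for k, v := range src {
		if old, ok := a[k]; ok && old != v {
			return errors.New("conflicting assertion for " + k)
		}
		a[k] = v
	}
	return nil
}

func main() {
	a := rwAssertions{"s3://bucket/key:etag": "e1"}
	// Re-asserting the same value is fine.
	fmt.Println(a.addFrom(map[string]string{"s3://bucket/key:etag": "e1"}))
	// A different value for the same key is a conflict.
	fmt.Println(a.addFrom(map[string]string{"s3://bucket/key:etag": "e2"}))
}
```

This is why merging assertions from two executions can fail: if one run saw etag e1 for an object and another saw e2, the combined result is not cacheable under a single consistent state.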
func (*RWAssertions) Filter ¶
func (s *RWAssertions) Filter(t *Assertions) (*Assertions, []AssertionKey)
Filter returns new Assertions mapping keys from t with values from s (panics if s is nil) and a list of AssertionKeys that exist in t but are missing in s.
type RemoteLogs ¶
type RemoteLogs struct {
	Type RemoteLogsType
	// LogGroupName is the log group name (applicable if Type is 'Cloudwatch').
	LogGroupName string
	// LogStreamName is the log stream name (applicable if Type is 'Cloudwatch').
	LogStreamName string
}
RemoteLogs is a description of remote logs primarily useful for storing a reference. It is expected to contain basic details necessary to retrieve the logs but does not provide the means to do so.
func (RemoteLogs) String ¶
func (r RemoteLogs) String() string
type RemoteLogsType ¶
type RemoteLogsType string
const (
	RemoteLogsTypeUnknown    RemoteLogsType = "Unknown"
	RemoteLogsTypeCloudwatch RemoteLogsType = "cloudwatch"
)
type RepoObjectRef ¶
type RepoObjectRef struct {
	// RepoURL is the URL of the repository where this object is located.
	RepoURL *url.URL
	// Digest is the reference to this object in the repository where it is located.
	Digest digest.Digest
}
RepoObjectRef is a reference to an object in a particular Repository.
func (RepoObjectRef) String ¶
func (ror RepoObjectRef) String() string
type Repository ¶
type Repository interface {
	// Collect removes from this repository any objects not in the Liveset.
	Collect(context.Context, liveset.Liveset) error
	// CollectWithThreshold removes from this repository any object that is not
	// in the live set and is either in the dead set or has a creation time no
	// more recent than the threshold time.
	CollectWithThreshold(ctx context.Context, live liveset.Liveset, dead liveset.Liveset, threshold time.Time, dryrun bool) error
	// Stat returns the File metadata for the blob with the given digest.
	// It returns errors.NotExist if the blob does not exist in this repository.
	Stat(context.Context, digest.Digest) (File, error)
	// Get streams the blob named by the given Digest. If it does not exist in
	// this repository, an error with code errors.NotFound is returned.
	Get(context.Context, digest.Digest) (io.ReadCloser, error)
	// Put streams a blob to the repository and returns its digest when completed.
	Put(context.Context, io.Reader) (digest.Digest, error)
	// WriteTo writes a blob identified by a Digest directly to a foreign
	// repository named by a URL. If the repository is unable to write directly
	// to the foreign repository, an error with flag errors.NotSupported is returned.
	WriteTo(context.Context, digest.Digest, *url.URL) error
	// ReadFrom reads a blob identified by a Digest directly from a foreign
	// repository named by a URL. If the repository is unable to read directly
	// from the foreign repository, an error with flag errors.NotSupported is returned.
	ReadFrom(context.Context, digest.Digest, *url.URL) error
	// URL returns the URL of this repository, or nil if it does not have one.
	// The returned URL may be used for direct transfers via WriteTo or ReadFrom.
	URL() *url.URL
}
Repository defines an interface used for servicing blobs of data that are named-by-hash.
type Requirements ¶
type Requirements struct {
	// Min is the smallest amount of resources that must be allocated
	// to satisfy the requirements.
	Min Resources
	// Width is the width of the requirements. A width of zero indicates
	// a "narrow" job: the minimum describes the exact resources needed.
	// Widths greater than zero require a multiple (ie, 1 + Width) of the
	// minimum requirement.
	Width int
}
Requirements stores resource requirements, comprising the minimum amount of acceptable resources and a width.
func (*Requirements) Add ¶
func (r *Requirements) Add(s Requirements)
Add adds the provided requirements s to the requirements r. R's minimum requirements are set to the larger of the two; the two widths are added.
func (*Requirements) AddParallel ¶
func (r *Requirements) AddParallel(s Resources)
AddParallel adds the provided resources s to the requirements, and also increases the requirement's width by one.
func (*Requirements) AddSerial ¶
func (r *Requirements) AddSerial(s Resources)
AddSerial adds the provided resources s to the requirements.
func (Requirements) Equal ¶
func (r Requirements) Equal(s Requirements) bool
Equal reports whether r and s represent the same requirements.
func (*Requirements) Max ¶
func (r *Requirements) Max() Resources
Max is the maximum amount of resources represented by this resource request.
func (Requirements) String ¶
func (r Requirements) String() string
String renders a human-readable representation of r.
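One plausible reading of the documented arithmetic, sketched with simplified local types (a toy Resources map, not the package's): AddSerial sums into Min, AddParallel takes the element-wise max and bumps Width, and Max scales Min by the 1 + Width multiple described above. This mirrors the documentation, not necessarily the exact implementation.

```go
package main

import "fmt"

type resources map[string]float64

type requirements struct {
	Min   resources
	Width int
}

// addSerial adds s into Min: serial work accumulates resource needs.
func (r *requirements) addSerial(s resources) {
	if r.Min == nil {
		r.Min = resources{}
	}
	for k, v := range s {
		r.Min[k] += v
	}
}

// addParallel raises Min to the element-wise max of Min and s, and
// increases Width by one: parallel work widens rather than grows Min.
func (r *requirements) addParallel(s resources) {
	if r.Min == nil {
		r.Min = resources{}
	}
	for k, v := range s {
		if v > r.Min[k] {
			r.Min[k] = v
		}
	}
	r.Width++
}

// max returns Min scaled by the multiple (1 + Width).
func (r *requirements) max() resources {
	m := resources{}
	for k, v := range r.Min {
		m[k] = v * float64(1+r.Width)
	}
	return m
}

func main() {
	var r requirements
	r.addSerial(resources{"mem": 4, "cpu": 1})
	r.addParallel(resources{"mem": 8, "cpu": 2})
	fmt.Println(r.Min["mem"], r.Width, r.max()["mem"]) // 8 1 16
}
```

The intent is that a scheduler can allocate anywhere between Min and Max: Min guarantees progress, while Max lets wide (parallel) jobs soak up spare capacity.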
type Resources ¶
Resources describes a set of labeled resources. Each resource is described by a string label and assigned a value. The zero value of Resources represents the resources with zeros for all labels.
func (Resources) Div ¶
Div returns a mapping of the intersection of keys in r and s to the fraction r[key]/s[key]. Since the returned value cannot be treated as Resources, Div simply returns a map.
func (Resources) Equal ¶
Equal tells whether the resources r and s are equal in all dimensions of both r and s.
func (Resources) MaxRatio ¶
MaxRatio computes the max across ratios of values in r to s for the intersection of keys in r and s.
func (*Resources) Scale ¶
Scale sets r to the scaled resources s[key]*factor for all keys and returns r.
func (Resources) ScaledDistance ¶
ScaledDistance returns the distance between two resources computed as a sum of the differences in memory, cpu and disk with some predefined scaling.
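Div and MaxRatio both operate over the intersection of keys in r and s, which the following self-contained sketch illustrates (using a simplified local map type, not the package's Resources):

```go
package main

import "fmt"

type resources map[string]float64

// div maps each key present in both r and s to r[key]/s[key], per the
// documented Div behavior; keys missing from either side are dropped.
func div(r, s resources) map[string]float64 {
	out := map[string]float64{}
	for k, rv := range r {
		if sv, ok := s[k]; ok && sv != 0 {
			out[k] = rv / sv
		}
	}
	return out
}

// maxRatio returns the largest of those per-key ratios.
func maxRatio(r, s resources) float64 {
	var max float64
	for _, v := range div(r, s) {
		if v > max {
			max = v
		}
	}
	return max
}

func main() {
	r := resources{"mem": 8, "cpu": 2, "disk": 100}
	s := resources{"mem": 16, "cpu": 1}
	fmt.Println(div(r, s))      // map[cpu:2 mem:0.5]
	fmt.Println(maxRatio(r, s)) // 2
}
```

MaxRatio is handy for bin-packing style questions: a result greater than 1 means r does not fit within s along at least one dimension.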
type Result ¶
type Result struct {
	// Fileset is the fileset produced by an exec.
	Fileset Fileset `json:",omitempty"`
	// Err is the error produced by an exec.
	Err *errors.Error `json:",omitempty"`
}
Result is the result of an exec.
type StringDigest ¶
type StringDigest struct {
// contains filtered or unexported fields
}
StringDigest holds any string and its digest.
func NewStringDigest ¶
func NewStringDigest(s string) StringDigest
NewStringDigest creates a StringDigest based on the given string.
func (StringDigest) Digest ¶
func (i StringDigest) Digest() digest.Digest
Digest returns the digest of the underlying string.
func (StringDigest) IsValid ¶
func (i StringDigest) IsValid() bool
IsValid returns whether this StringDigest is valid (ie, the underlying string is non-empty).
func (StringDigest) String ¶
func (i StringDigest) String() string
String returns the underlying string.
type Timestamp ¶
type Timestamp struct {
	Seconds int64 `protobuf:"varint,1,opt,name=Seconds,proto3" json:"Seconds,omitempty"`
	Nanos   int64 `protobuf:"varint,2,opt,name=Nanos,proto3" json:"Nanos,omitempty"`
	XXX_NoUnkeyedLiteral struct{} `json:"-"`
	XXX_unrecognized     []byte   `json:"-"`
	XXX_sizecache        int32    `json:"-"`
}
func (*Timestamp) Descriptor ¶
func (*Timestamp) GetSeconds ¶
func (*Timestamp) ProtoMessage ¶
func (*Timestamp) ProtoMessage()
func (*Timestamp) XXX_DiscardUnknown ¶
func (m *Timestamp) XXX_DiscardUnknown()
func (*Timestamp) XXX_Marshal ¶
func (*Timestamp) XXX_Unmarshal ¶
type Transferer ¶
type Transferer interface {
	// Transfer transfers a set of files from the src to the dst
	// repository. A transfer manager may apply policies (e.g., rate
	// limits and concurrency limits) to these transfers.
	Transfer(ctx context.Context, dst, src Repository, files ...File) error
	// NeedTransfer returns the subset of the given files that are not
	// already present in the dst repository.
	NeedTransfer(ctx context.Context, dst Repository, files ...File) ([]File, error)
}
Transferer defines an interface used for management of transfers between multiple repositories.
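The typical NeedTransfer/Transfer pattern — check what the destination already has, then copy only the missing blobs — can be sketched with a toy in-memory repository keyed by digest (standing in for the Repository interface; the real Transferer also applies rate and concurrency limits):

```go
package main

import "fmt"

// repo is a toy content-addressed repository: digest -> contents.
type repo map[string][]byte

// needTransfer returns the digests absent from dst, mirroring the
// documented NeedTransfer semantics.
func needTransfer(dst repo, digests ...string) []string {
	var missing []string
	for _, d := range digests {
		if _, ok := dst[d]; !ok {
			missing = append(missing, d)
		}
	}
	return missing
}

// transfer copies only the missing blobs from src to dst.
func transfer(dst, src repo, digests ...string) {
	for _, d := range needTransfer(dst, digests...) {
		dst[d] = src[d]
	}
}

func main() {
	src := repo{"d1": []byte("a"), "d2": []byte("b")}
	dst := repo{"d1": []byte("a")}
	fmt.Println(needTransfer(dst, "d1", "d2")) // [d2]
	transfer(dst, src, "d1", "d2")
	fmt.Println(needTransfer(dst, "d1", "d2")) // []
}
```

Because repositories are named-by-hash, presence of a digest is sufficient to skip a transfer: identical content always has an identical name.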
Source Files ¶
Directories ¶
Path | Synopsis
---|---
assoc | Package assoc defines data types for associative maps used within Reflow.
dydbassoc | Package dydbassoc implements an assoc.Assoc based on AWS's DynamoDB.
batch | Package batch implements support for running batches of reflow (stateful) evaluations.
blob | Package blob implements a set of generic interfaces used to implement blob storage implementations such as S3, GCS, and local file system implementations.
s3blob | Package s3blob implements the blob interfaces for S3.
testblob | Package testblob implements a blobstore appropriate for testing.
client | Package client implements a remote client for the reflow bootstrap.
cmd |
ec2authenticator | Package ec2authenticator implements Docker repository authentication for ECR using an AWS SDK session and a root.
ec2cluster | Package ec2cluster implements support for maintaining elastic clusters of Reflow instances on EC2.
volume | Package volume implements support for maintaining (EBS) volumes on an EC2 instance by watching the disk usage of the underlying disk device and resizing the EBS volumes whenever necessary based on provided parameters.
errors | Package errors provides a standard error definition for use in Reflow.
internal |
ecrauth | Package ecrauth provides an interface and utilities for authenticating AWS EC2 ECR Docker repositories.
scanner | Package scanner provides a scanner and tokenizer for UTF-8-encoded text.
lang | Package lang implements the reflow language.
pooltest | Package pooltest tests pools.
testutil | Package testutil provides utilities for testing code that involves pools.
localcluster | Package localcluster implements a runner.Cluster using the local machine's docker.
log | Package log implements leveling and teeing on top of Go's standard logs package.
pool | Package pool implements resource pools for reflow.
client | Package client implements a remoting client for reflow pools.
server | Package server exposes a pool implementation for remote access.
repository | Package repository provides common ways to dial reflow.Repository implementations; it also provides some common utilities for working with repositories.
blobrepo | Package blobrepo implements a generic reflow.Repository on top of a blob.Bucket.
client | Package client implements a repository REST client.
filerepo | Package filerepo implements a filesystem-backed repository.
server | Package server implements a Repository REST server.
rest | Package rest provides a framework for serving and accessing hierarchical resource-based APIs.
sched | Package sched implements task scheduling for Reflow.
syntax | Package syntax implements the Reflow language.
taskdb | Package taskdb defines interfaces and data types for storing and querying reflow runs, tasks, pools and allocs.
dynamodbtask | Package dynamodbtask implements the taskdb.TaskDB interface for AWS dynamodb backend.
test |
flow | Package flow contains a number of constructors for Flow nodes that are convenient for testing.
trace | Package trace provides a tracing system for Reflow events.
types | Package types contains data structures and algorithms for dealing with value types in Reflow.
values | Package values defines data structures for representing (runtime) values in Reflow.
wg | Package wg implements a channel-enabled WaitGroup.