Documentation ¶
Index ¶
- Variables
- type Executor
- func (e *Executor) Execs(ctx context.Context) ([]reflow.Exec, error)
- func (e *Executor) Get(ctx context.Context, id digest.Digest) (reflow.Exec, error)
- func (e *Executor) Kill(ctx context.Context) error
- func (e *Executor) Put(ctx context.Context, id digest.Digest, cfg reflow.ExecConfig) (reflow.Exec, error)
- func (e *Executor) Remove(ctx context.Context, id digest.Digest) error
- func (e *Executor) Repository() reflow.Repository
- func (e *Executor) Resources() reflow.Resources
- func (e *Executor) SetResources(r reflow.Resources)
- func (e *Executor) Start() error
- func (e *Executor) URI() string
- type Manifest
- type Pool
- func (p *Pool) Alloc(ctx context.Context, id string) (pool.Alloc, error)
- func (p *Pool) Allocs(ctx context.Context) ([]pool.Alloc, error)
- func (p *Pool) ID() string
- func (p *Pool) Offer(ctx context.Context, id string) (pool.Offer, error)
- func (p *Pool) Offers(ctx context.Context) ([]pool.Offer, error)
- func (p *Pool) Start() error
- func (p *Pool) StopIfIdleFor(d time.Duration) bool
Constants ¶
This section is empty.
Variables ¶
var DefaultS3Region = "us-east-1"
DefaultS3Region is the region used for S3 requests if a bucket's region is undiscoverable (e.g., lacking permissions for the GetBucketLocation API call).
Amazon generally defaults to us-east-1 when regions are unspecified (or undiscoverable), but this can be overridden if a different default is desired.
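For example, a process that operates primarily in another region can override the fallback before issuing any S3 operations. A minimal sketch; the import path is an assumption based on this package's identifiers:

	package main

	import "github.com/grailbio/reflow/local"

	func main() {
		// Fall back to eu-west-1 instead of us-east-1 when a bucket's
		// region cannot be discovered.
		local.DefaultS3Region = "eu-west-1"
	}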
Functions ¶
This section is empty.
Types ¶
type Executor ¶
type Executor struct {
	// RunID of the run - <username>@grailbio.com/<hash>
	RunID string
	// ID is the ID of the executor. It is the URI of the executor and also
	// the prefix used in any Docker containers whose execs are
	// children of this executor.
	ID string
	// Prefix is the filesystem prefix used to access paths on disk. This is
	// defined so that the executor can run inside of a Docker container
	// (which has the host's filesystem exported at this prefix).
	Prefix string
	// Dir is the root directory of this executor. All of its state is contained
	// within it.
	Dir string
	// Client is the Docker client used by this executor.
	Client *client.Client
	// Authenticator is used to pull images that are stored on Amazon's ECR
	// service.
	Authenticator ecrauth.Interface
	// AWSImage is a Docker image that contains the 'aws' tool.
	// This is used to implement S3 interns and externs.
	AWSImage string
	// AWSCreds is an AWS credentials provider, used for S3 operations
	// and "$aws" passthroughs.
	AWSCreds *credentials.Credentials
	// Log is this executor's logger where operational status is printed.
	Log *log.Logger
	// DigestLimiter limits the number of concurrent digest operations
	// performed while installing files into this executor's repository.
	DigestLimiter *limiter.Limiter
	// S3FileLimiter controls the number of S3 file downloads that may
	// proceed concurrently.
	S3FileLimiter *limiter.Limiter
	// ExternalS3 defines whether to use external processes (the AWS CLI tool
	// running in Docker) for S3 operations. At the moment, this flag only
	// works for interns.
	ExternalS3 bool
	// FileRepository is the (file-based) object repository used by this
	// Executor. It may be provided by the user, or else it is set to a
	// default implementation when (*Executor).Start is called.
	FileRepository *file.Repository
	// contains filtered or unexported fields
}
Executor is a small management layer on top of exec. It implements reflow.Executor. Executor assumes that it has local access to the file system (perhaps with a prefix).
Executor stores its state to disk and, when recovered, re-instantiates all execs (which in turn recover).
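A minimal sketch of constructing and starting an Executor follows. The import path, ID, and directory are illustrative assumptions, and only the fields shown above are set:

	package main

	import (
		"log"

		"github.com/docker/docker/client"
		"github.com/grailbio/reflow/local"
	)

	func main() {
		// The executor drives containers through a local Docker daemon.
		dockerClient, err := client.NewClientWithOpts(client.FromEnv)
		if err != nil {
			log.Fatal(err)
		}
		e := &local.Executor{
			ID:     "example",       // prefix for Docker containers owned by this executor
			Dir:    "/tmp/executor", // all executor state is kept under this directory
			Client: dockerClient,
		}
		// Start recovers any execs previously persisted under Dir.
		if err := e.Start(); err != nil {
			log.Fatal(err)
		}
	}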
func (*Executor) Get ¶
func (e *Executor) Get(ctx context.Context, id digest.Digest) (reflow.Exec, error)
Get returns the exec named by id, or an errors.NotExist if the exec does not exist.
func (*Executor) Kill ¶
func (e *Executor) Kill(ctx context.Context) error
Kill disposes of the executor and all of its execs. It also sets the executor's "dead" flag, so that all future operations on the executor return an error.
func (*Executor) Put ¶
func (e *Executor) Put(ctx context.Context, id digest.Digest, cfg reflow.ExecConfig) (reflow.Exec, error)
Put idempotently defines a new exec with a given ID and config. The exec may be (deterministically) rewritten.
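Callers that may retry can rely on this idempotency. A sketch, assuming reflow.ExecConfig carries Type, Ident, Image, and Cmd fields and that the package-level reflow.Digester can derive IDs; these are assumptions not documented on this page:

	import (
		"context"

		"github.com/grailbio/reflow"
		"github.com/grailbio/reflow/local"
	)

	// put idempotently defines (or re-attaches to) an exec on e.
	func put(ctx context.Context, e *local.Executor) (reflow.Exec, error) {
		cfg := reflow.ExecConfig{
			Type:  "exec",        // assumed field: the kind of exec to create
			Ident: "example",     // assumed field: identifier for the exec
			Image: "ubuntu:20.04", // assumed field: Docker image to run in
			Cmd:   "echo hello",  // assumed field: command to execute
		}
		id := reflow.Digester.FromString("example") // hypothetical way to derive an exec ID
		// A second Put with the same id and cfg returns the same exec.
		return e.Put(ctx, id, cfg)
	}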
func (*Executor) Repository ¶
func (e *Executor) Repository() reflow.Repository
Repository returns the repository attached to this executor.
func (*Executor) SetResources ¶
func (e *Executor) SetResources(r reflow.Resources)
SetResources sets the resources reported by Resources() to r.
type Manifest ¶
type Manifest struct {
	Type  execType
	State execState

	Created time.Time

	Result    reflow.Result
	Config    reflow.ExecConfig   // The object config used to create this object.
	Docker    types.ContainerJSON // Docker inspect output.
	Resources reflow.Resources
	Stats     stats
	Gauges    reflow.Gauges
}
Manifest stores the state of an exec. It is serialized to JSON and stored on disk so that executors are restartable, and can recover from crashes.
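For illustration, a persisted manifest can be decoded back with encoding/json; the on-disk path and file name below are made-up assumptions, not part of the documented API:

	package main

	import (
		"encoding/json"
		"log"
		"os"

		"github.com/grailbio/reflow/local"
	)

	func main() {
		// Read a persisted manifest; the path is a hypothetical example.
		data, err := os.ReadFile("/tmp/executor/execs/example/manifest.json")
		if err != nil {
			log.Fatal(err)
		}
		var m local.Manifest
		// Exported fields (Created, Result, Config, ...) are restored from JSON.
		if err := json.Unmarshal(data, &m); err != nil {
			log.Fatal(err)
		}
		log.Printf("exec created at %s", m.Created)
	}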
type Pool ¶
type Pool struct {
	// Dir is the filesystem root of the pool. Everything under this
	// path is assumed to be owned and managed by the pool.
	Dir string
	// Prefix is prepended to paths constructed by allocs. This is to
	// permit running the pool manager inside of a Docker container.
	Prefix string
	// Client is the Docker client. We assume that the Docker daemon
	// runs on the same host from which the pool is managed.
	Client *client.Client
	// Authenticator is used to authenticate ECR image pulls.
	Authenticator interface {
		Authenticates(ctx context.Context, image string) (bool, error)
		Authenticate(ctx context.Context, cfg *types.AuthConfig) error
	}
	// AWSImage is the name of the image that contains the 'aws' tool.
	// This is used to implement directory syncing via S3.
	AWSImage string
	// AWSCreds is a credentials provider used to mint AWS credentials.
	// They are used to access AWS services.
	AWSCreds *credentials.Credentials
	// Log is the pool's logger.
	Log *log.Logger
	// DigestLimiter controls the number of digest operations that may
	// proceed concurrently.
	DigestLimiter *limiter.Limiter
	// S3FileLimiter controls the number of S3 file downloads that may
	// proceed concurrently.
	S3FileLimiter *limiter.Limiter
	// contains filtered or unexported fields
}
Pool implements a resource pool on top of a Docker client. The pool itself must run on the same machine as the Docker instance as it performs local filesystem operations that must be reflected inside the container.
Pool keeps all state on disk, as follows:
Prefix/Dir/state.json
	Stores the set of currently active allocs, together with their
	resource requirements.
Prefix/Dir/allocs/<id>/
	The root directory for the alloc with id. The state under this
	directory is managed by an executor instance.
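A minimal sketch of standing up a pool over this layout; the import path and the directory value are assumptions:

	package main

	import (
		"log"

		"github.com/docker/docker/client"
		"github.com/grailbio/reflow/local"
	)

	func main() {
		dockerClient, err := client.NewClientWithOpts(client.FromEnv)
		if err != nil {
			log.Fatal(err)
		}
		p := &local.Pool{
			Dir:    "/var/reflow/pool", // pool-owned state root; state.json and allocs/ live here
			Client: dockerClient,
		}
		// Start recovers active allocs from Dir/state.json.
		if err := p.Start(); err != nil {
			log.Fatal(err)
		}
	}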
func (*Pool) Offers ¶
func (p *Pool) Offers(ctx context.Context) ([]pool.Offer, error)
Offers enumerates the current offers of this pool. The local pool always returns either no offers, when no resources are available, or a single offer comprising the entirety of available resources.
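Because of this all-or-nothing behavior, callers need only inspect the first offer, if any. A sketch under the contract above:

	import (
		"context"

		"github.com/grailbio/reflow/local"
		"github.com/grailbio/reflow/pool"
	)

	// availableOffer returns the pool's single outstanding offer, or nil
	// if the pool currently has no free resources.
	func availableOffer(ctx context.Context, p *local.Pool) (pool.Offer, error) {
		offers, err := p.Offers(ctx)
		if err != nil || len(offers) == 0 {
			return nil, err
		}
		// Per the contract above, at most one offer is returned,
		// comprising all of the pool's available resources.
		return offers[0], nil
	}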