package v2

v1.7.1, published May 10, 2023, Apache-2.0 license

README

Runtime v2

Runtime v2 introduces a first class shim API for runtime authors to integrate with containerd. The shim API is minimal and scoped to the execution lifecycle of a container.

Binary Naming

Users specify the runtime they wish to use when creating a container. The runtime can also be changed via a container update.

> ctr run --runtime io.containerd.runc.v1

When a user specifies a runtime name such as io.containerd.runc.v1, they are specifying both the name and the version of the runtime. containerd translates this into a binary name for the shim:

io.containerd.runc.v1 -> containerd-shim-runc-v1
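
The naming rule can be illustrated with a small Go sketch (this is not containerd's internal code, just the translation applied by hand):

package main

import (
	"fmt"
	"strings"
)

// shimBinaryName sketches the translation described above:
// "io.containerd.runc.v1" becomes "containerd-shim-runc-v1".
func shimBinaryName(runtimeName string) string {
	parts := strings.Split(runtimeName, ".")
	name := parts[len(parts)-2]    // e.g. "runc"
	version := parts[len(parts)-1] // e.g. "v1"
	return fmt.Sprintf("containerd-shim-%s-%s", name, version)
}

func main() {
	fmt.Println(shimBinaryName("io.containerd.runc.v1")) // containerd-shim-runc-v1
}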

Since the 1.6 release, it is also possible to specify an absolute path to the runtime binary:

> ctr run --runtime /usr/local/bin/containerd-shim-runc-v1

containerd keeps the containerd-shim-* prefix so that users can ps aux | grep containerd-shim to see running shims on their system.

Shim Authoring

This section is dedicated to runtime authors wishing to build a shim. It details how the API works and the different considerations to keep in mind when building a shim.

Commands

Container information is provided to a shim in two ways: via the OCI Runtime Bundle and via the Create rpc request.

start

Each shim MUST implement a start subcommand. This command will launch new shims. The start command MUST accept the following flags:

  • -namespace the namespace for the container
  • -address the address of containerd's main grpc socket
  • -publish-binary the binary path to publish events back to containerd
  • -id the id of the container

The start command, as well as all binary calls to the shim, has the bundle for the container set as the cwd.

The start command may have the following containerd specific environment variables set:

  • TTRPC_ADDRESS the address of containerd's ttrpc API socket
  • GRPC_ADDRESS the address of containerd's grpc API socket (1.7+)
  • MAX_SHIM_VERSION the maximum shim version supported by the client, always 2 for shim v2 (1.7+)
  • SCHED_CORE enable core scheduling if available (1.6+)
  • NAMESPACE an optional namespace the shim is operating in or inheriting (1.7+)

The start command MUST write to stdout either the ttrpc address that the shim is serving its API on, or (experimental) a JSON structure in the following format (where protocol can be either "ttrpc" or "grpc"):

{
	"version": 2,
	"address": "/address/of/task/service",
	"protocol": "grpc"
}

The address will be used by containerd to issue API requests for container operations.

The start command can either start a new shim or return an address to an existing shim based on the shim's logic.
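
As a rough sketch of how a start subcommand might accept these flags and report its address, the program below parses the required flags and writes the (experimental) JSON bootstrap structure to stdout. The socket path is a placeholder; a real shim would have started a long-running process serving its task API at that address before printing it.

package main

import (
	"encoding/json"
	"flag"
	"os"
)

// bootstrapParams mirrors the JSON structure described above.
type bootstrapParams struct {
	Version  int    `json:"version"`
	Address  string `json:"address"`
	Protocol string `json:"protocol"`
}

func main() {
	// Flags required by the start (and delete) commands.
	var namespace, address, publishBinary, id string
	flag.StringVar(&namespace, "namespace", "", "namespace for the container")
	flag.StringVar(&address, "address", "", "address of containerd's main socket")
	flag.StringVar(&publishBinary, "publish-binary", "", "binary to publish events back to containerd")
	flag.StringVar(&id, "id", "", "id of the container")
	flag.Parse()

	// The action ("start", "delete", ...) arrives as a positional argument.
	if flag.Arg(0) != "start" {
		os.Exit(1)
	}

	// Placeholder: a real shim would already be serving its task API here,
	// typically after forking a long-running process.
	const socketPath = "unix:///run/containerd/s/example.sock"

	// Report the address as the (experimental) JSON bootstrap structure.
	// Writing just the ttrpc address as plain text is also valid.
	json.NewEncoder(os.Stdout).Encode(bootstrapParams{
		Version:  2,
		Address:  socketPath,
		Protocol: "ttrpc",
	})
}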

delete

Each shim MUST implement a delete subcommand. This command allows containerd to delete any container resources created, mounted, and/or run by a shim when containerd can no longer communicate over rpc, for example when a shim is SIGKILL'd while it has a running container. These resources need to be cleaned up when containerd loses the connection to a shim. The command is also used when containerd boots and reconnects to shims: if a bundle is still on disk but containerd cannot connect to a shim, the delete command is invoked.

The delete command MUST accept the following flags:

  • -namespace the namespace for the container
  • -address the address of containerd's main socket
  • -publish-binary the binary path to publish events back to containerd
  • -id the id of the container
  • -bundle the path to the bundle to delete. On non-Windows and non-FreeBSD platforms this will match cwd

The delete command is executed with the container's bundle as its cwd, except on Windows and FreeBSD platforms.
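
A rough sketch of the cleanup a delete subcommand might perform on Linux, using containerd's mount package; the "work" directory is a hypothetical piece of shim state, and a real shim also reports the deleted task's exit information back on stdout.

package main

import (
	"log"
	"os"
	"path/filepath"

	"github.com/containerd/containerd/mount"
)

// cleanupBundle sketches the kind of cleanup a delete subcommand performs:
// unmount the rootfs the shim mounted and remove shim-created state.
func cleanupBundle(bundlePath string) error {
	rootfs := filepath.Join(bundlePath, "rootfs")
	if err := mount.UnmountAll(rootfs, 0); err != nil {
		return err
	}
	// "work" is a hypothetical shim-created directory inside the bundle.
	return os.RemoveAll(filepath.Join(bundlePath, "work"))
}

func main() {
	// On Linux the bundle path is also the cwd of the delete call.
	cwd, err := os.Getwd()
	if err != nil {
		log.Fatal(err)
	}
	if err := cleanupBundle(cwd); err != nil {
		log.Fatal(err)
	}
}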

Host Level Shim Configuration

containerd does not provide any host level configuration for shims via the API. If a shim needs host level configuration that applies across all of its instances, a shim specific configuration file can be set up.
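
For example, a shim author might load such a file from a path of their own choosing. The sketch below assumes a hypothetical /etc/example-shim/config.toml and the BurntSushi/toml library; none of the fields shown are part of the shim API.

package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

// hostConfig is a hypothetical, shim-defined configuration.
type hostConfig struct {
	DebugSocket      string `toml:"debug_socket"`
	IOTimeoutSeconds int    `toml:"io_timeout_seconds"`
}

func main() {
	var cfg hostConfig
	// The path and format are entirely up to the shim author.
	if _, err := toml.DecodeFile("/etc/example-shim/config.toml", &cfg); err != nil {
		fmt.Println("falling back to defaults:", err)
	}
	fmt.Printf("config: %+v\n", cfg)
}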

Container Level Shim Configuration

On the create request, there is a generic *protobuf.Any that allows a user to specify container level configuration for the shim.

message CreateTaskRequest {
	string id = 1;
	...
	google.protobuf.Any options = 10;
}

A shim author can create their own protobuf message for configuration, and clients can import and provide this information as needed.
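
On the client side, the containerd Go client can attach a runtime specific options message when creating a container. The sketch below uses the runc shim's options type; the socket path and container ID are placeholders.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/runtime/v2/runc/options"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "default")

	// WithRuntime records the runtime name and options on the container;
	// containerd forwards the options as the google.protobuf.Any on the
	// shim's CreateTaskRequest.
	_, err = client.NewContainer(ctx, "example",
		containerd.WithNewSpec(),
		containerd.WithRuntime("io.containerd.runc.v2", &options.Options{
			SystemdCgroup: true,
		}),
	)
	if err != nil {
		log.Fatal(err)
	}
}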

I/O

I/O for a container is provided by the client to the shim via fifo on Linux, named pipes on Windows, or log files on disk. The paths to these files are provided on the Create rpc for the initial creation and on the Exec rpc for additional processes.

message CreateTaskRequest {
	string id = 1;
	bool terminal = 4;
	string stdin = 5;
	string stdout = 6;
	string stderr = 7;
}
message ExecProcessRequest {
	string id = 1;
	string exec_id = 2;
	bool terminal = 3;
	string stdin = 4;
	string stdout = 5;
	string stderr = 6;
}

Containers that are to be launched with an interactive terminal will have the terminal field set to true. Data is still copied over the files (fifos, pipes) in the same way as for non-interactive containers.
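
On Linux, for example, a shim typically opens these paths with the containerd fifo package and copies process I/O through them. A minimal sketch (the function name is illustrative; the fifo path would come from the Create or Exec request):

package shimio

import (
	"context"
	"io"
	"syscall"

	"github.com/containerd/fifo"
)

// copyStdout sketches wiring a container process's stdout (processOut)
// into the fifo path received as CreateTaskRequest.Stdout on Linux.
func copyStdout(ctx context.Context, stdoutPath string, processOut io.Reader) error {
	// Open the write side; containerd (or the client) holds the read side.
	f, err := fifo.OpenFifo(ctx, stdoutPath, syscall.O_WRONLY, 0)
	if err != nil {
		return err
	}
	defer f.Close()

	_, err = io.Copy(f, processOut)
	return err
}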

Root Filesystems

The root filesystem for a container is provided on the Create rpc. Shims are responsible for managing the lifecycle of the filesystem mount during the lifecycle of a container.

message CreateTaskRequest {
	string id = 1;
	string bundle = 2;
	repeated containerd.types.Mount rootfs = 3;
	...
}

The mount protobuf message is:

message Mount {
	// Type defines the nature of the mount.
	string type = 1;
	// Source specifies the name of the mount. Depending on mount type, this
	// may be a volume name or a host path, or even ignored.
	string source = 2;
	// Target path in container
	string target = 3;
	// Options specifies zero or more fstab style mount options.
	repeated string options = 4;
}

Shims are responsible for mounting the filesystem into the rootfs/ directory of the bundle. Shims are also responsible for unmounting the filesystem. During a delete binary call, the shim MUST ensure that the filesystem is also unmounted. Filesystems are provided by the containerd snapshotters.
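
A sketch of what a shim might do with these mounts on Linux, using containerd's mount package; the function and package names are illustrative:

package shimrootfs

import (
	"path/filepath"

	"github.com/containerd/containerd/api/types"
	"github.com/containerd/containerd/mount"
)

// mountRootfs sketches consuming CreateTaskRequest.Rootfs: the Mount
// messages are converted and mounted onto <bundle>/rootfs. The shim
// must undo this (for example with mount.UnmountAll) in its delete path.
func mountRootfs(bundle string, rootfs []*types.Mount) error {
	target := filepath.Join(bundle, "rootfs")
	mounts := make([]mount.Mount, 0, len(rootfs))
	for _, m := range rootfs {
		mounts = append(mounts, mount.Mount{
			Type:    m.Type,
			Source:  m.Source,
			Options: m.Options,
		})
	}
	// mount.All applies the mounts in order onto the target directory.
	return mount.All(mounts, target)
}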

Events

Runtime v2 supports an async event model. In order for an upstream caller (such as Docker) to get these events in the correct order, a Runtime v2 shim MUST implement the events below where Compliance=MUST. This avoids race conditions between the shim and the shim client, where for example a call to Start could signal a TaskExitEventTopic before the results of the Start call are even returned. With these guarantees, a Runtime v2 shim's Start call is required to have published the async event TaskStartEventTopic before the shim may publish the TaskExitEventTopic.

Tasks

Topic | Compliance | Description
runtime.TaskCreateEventTopic | MUST | When a task is successfully created
runtime.TaskStartEventTopic | MUST (follow TaskCreateEventTopic) | When a task is successfully started
runtime.TaskExitEventTopic | MUST (follow TaskStartEventTopic) | When a task exits, expected or unexpected
runtime.TaskDeleteEventTopic | MUST (follow TaskExitEventTopic, or TaskCreateEventTopic if never started) | When a task is removed from a shim
runtime.TaskPausedEventTopic | SHOULD | When a task is successfully paused
runtime.TaskResumedEventTopic | SHOULD (follow TaskPausedEventTopic) | When a task is successfully resumed
runtime.TaskCheckpointedEventTopic | SHOULD | When a task is checkpointed
runtime.TaskOOMEventTopic | SHOULD | If the shim collects Out of Memory events
Execs

Topic | Compliance | Description
runtime.TaskExecAddedEventTopic | MUST (follow TaskCreateEventTopic) | When an exec is successfully added
runtime.TaskExecStartedEventTopic | MUST (follow TaskExecAddedEventTopic) | When an exec is successfully started
runtime.TaskExitEventTopic | MUST (follow TaskExecStartedEventTopic) | When an exec (other than the init exec) exits, expected or unexpected
runtime.TaskDeleteEventTopic | SHOULD (follow TaskExitEventTopic, or TaskExecAddedEventTopic if never started) | When an exec is removed from a shim
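
A sketch of publishing events from a shim in the required order, using the shim package's remote publisher and the topic constants from the runtime package. The container ID and PID are placeholders, and TTRPC_ADDRESS is the environment variable containerd sets for the shim (see the start command above).

package main

import (
	"context"
	"log"
	"os"

	"github.com/containerd/containerd/api/events"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/runtime"
	"github.com/containerd/containerd/runtime/v2/shim"
)

func main() {
	// TTRPC_ADDRESS is provided to the shim by containerd.
	publisher, err := shim.NewPublisher(os.Getenv("TTRPC_ADDRESS"))
	if err != nil {
		log.Fatal(err)
	}
	defer publisher.Close()

	ctx := namespaces.WithNamespace(context.Background(), "default")

	// TaskCreate first, then TaskStart, preserving the ordering required above.
	if err := publisher.Publish(ctx, runtime.TaskCreateEventTopic, &events.TaskCreate{
		ContainerID: "example",
	}); err != nil {
		log.Fatal(err)
	}
	if err := publisher.Publish(ctx, runtime.TaskStartEventTopic, &events.TaskStart{
		ContainerID: "example",
		Pid:         12345,
	}); err != nil {
		log.Fatal(err)
	}
}
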
Flow

The following sequence diagram shows the flow of actions when a ctr run command is executed.

sequenceDiagram
    participant ctr
    participant containerd
    participant shim

    autonumber

    ctr->>containerd: Create container
    Note right of containerd: Save container metadata
    containerd-->>ctr: Container ID

    ctr->>containerd: Create task

    %% Start shim
    containerd-->shim: Prepare bundle
    containerd->>shim: Execute binary: containerd-shim-runc-v1 start
    shim->shim: Start TTRPC server
    shim-->>containerd: Respond with address: unix://containerd/container.sock

    containerd-->>shim: Create TTRPC client

    %% Schedule task

    Note right of containerd: Schedule new task

    containerd->>shim: TaskService.CreateTaskRequest
    shim-->>containerd: Task PID

    containerd-->>ctr: Task ID

    %% Start task

    ctr->>containerd: Start task

    containerd->>shim: TaskService.StartRequest
    shim-->>containerd: OK

    %% Wait task

    ctr->>containerd: Wait task

    containerd->>shim: TaskService.WaitRequest
    Note right of shim: Block until task exits
    shim-->>containerd: Exit status

    containerd-->>ctr: OK

    Note over ctr,shim: Other task requests (Kill, Pause, Resume, CloseIO, Exec, etc)

    %% Kill signal

    opt Kill task

    ctr->>containerd: Kill task

    containerd->>shim: TaskService.KillRequest
    shim-->>containerd: OK

    containerd-->>ctr: OK

    end

    %% Delete task

    ctr->>containerd: Task Delete

    containerd->>shim: TaskService.DeleteRequest
    shim-->>containerd: Exit information

    containerd->>shim: TaskService.ShutdownRequest
    shim-->>containerd: OK

    containerd-->shim: Close client
    containerd->>shim: Execute binary: containerd-shim-runc-v1 delete
    containerd-->shim: Delete bundle

    containerd-->>ctr: Exit code

Logging

Shims may support pluggable logging via STDIO URIs. Current supported schemes for logging are:

  • fifo - Linux
  • binary - Linux & Windows
  • file - Linux & Windows
  • npipe - Windows

Binary logging has the ability to forward a container's STDIO to an external binary for consumption. A sample logging driver that forwards the container's STDOUT and STDERR to journald is:

package main

import (
	"bufio"
	"context"
	"fmt"
	"io"
	"sync"

	"github.com/containerd/containerd/runtime/v2/logging"
	"github.com/coreos/go-systemd/journal"
)

func main() {
	logging.Run(log)
}

func log(ctx context.Context, config *logging.Config, ready func() error) error {
	// construct any log metadata for the container
	vars := map[string]string{
		"SYSLOG_IDENTIFIER": fmt.Sprintf("%s:%s", config.Namespace, config.ID),
	}
	var wg sync.WaitGroup
	wg.Add(2)
	// forward both stdout and stderr to the journal
	go copy(&wg, config.Stdout, journal.PriInfo, vars)
	go copy(&wg, config.Stderr, journal.PriErr, vars)

	// signal that we are ready and setup for the container to be started
	if err := ready(); err != nil {
		return err
	}
	wg.Wait()
	return nil
}

func copy(wg *sync.WaitGroup, r io.Reader, pri journal.Priority, vars map[string]string) {
	defer wg.Done()
	s := bufio.NewScanner(r)
	for s.Scan() {
		journal.Send(s.Text(), pri, vars)
	}
}

Other
Unsupported rpcs

If a shim does not or cannot implement an rpc call, it MUST return a github.com/containerd/containerd/errdefs.ErrNotImplemented error.

Debugging and Shim Logs

A fifo on unix or a named pipe on Windows will be provided to the shim. It is located inside the cwd of the shim and named "log". Shims can use the existing github.com/containerd/containerd/log package to log debug messages. Messages will automatically appear in containerd's daemon logs with the correct fields and runtime set.
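
A minimal sketch of such a debug message; this assumes the shim was built with the shim helpers that redirect log output to the "log" fifo/pipe:

package shimlog

import (
	"context"

	"github.com/containerd/containerd/log"
)

// debugStartup sketches logging from a shim: entries written via the log
// package end up in the shim's "log" fifo/pipe and surface in containerd's
// daemon logs with the shim's fields attached.
func debugStartup(ctx context.Context, id string) {
	log.G(ctx).WithField("id", id).Debug("shim task service starting")
}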

ttrpc

ttrpc is one of the supported protocols for shims. It works with standard protobufs and GRPC services, as well as generating clients. The only difference between grpc and ttrpc is the wire protocol: ttrpc removes the http stack in order to save memory and binary size and to keep shims small. It is recommended to use ttrpc in your shim, but grpc support is currently an experimental feature.
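
To illustrate the server side, the sketch below serves a task service over ttrpc on a unix socket without the helpers in the shim package. svc is assumed to be the shim author's implementation of the generated TaskService interface, and the task API import path shown is the long-standing one; depending on the containerd release the same generated types may live under the api module instead.

package shimserve

import (
	"context"
	"net"

	taskAPI "github.com/containerd/containerd/runtime/v2/task"
	"github.com/containerd/ttrpc"
)

// serve sketches exposing a shim's task service over ttrpc on a unix socket.
func serve(ctx context.Context, socketPath string, svc taskAPI.TaskService) error {
	server, err := ttrpc.NewServer()
	if err != nil {
		return err
	}
	// Register the shim author's TaskService implementation.
	taskAPI.RegisterTaskService(server, svc)

	l, err := net.Listen("unix", socketPath)
	if err != nil {
		return err
	}
	// Serve blocks, handling requests until the listener is closed.
	return server.Serve(ctx, l)
}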

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func NewTaskClient added in v1.7.0

func NewTaskClient(client interface{}) (v2.TaskService, error)

NewTaskClient returns a new task client interface which handles both GRPC and TTRPC servers depending on the client object type passed in.

Supported client types are:

  • *ttrpc.Client
  • grpc.ClientConnInterface

In 1.7 we support TaskService v2 (for backward compatibility with existing shims) and GRPC TaskService v3. In 2.0 we'll switch to TaskService v3 only for both TTRPC and GRPC, which will remove the overhead of mapping v2 structs to v3 structs.
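
A sketch of using NewTaskClient with a ttrpc connection to a shim; the socket path is a placeholder for the address the shim reported from its start command.

package main

import (
	"log"
	"net"

	shimv2 "github.com/containerd/containerd/runtime/v2"
	"github.com/containerd/ttrpc"
)

func main() {
	// Dial the address the shim reported from its start command.
	conn, err := net.Dial("unix", "/run/containerd/s/example.sock")
	if err != nil {
		log.Fatal(err)
	}

	// A *ttrpc.Client is one supported input; a grpc.ClientConnInterface
	// is the other.
	taskClient, err := shimv2.NewTaskClient(ttrpc.NewClient(conn))
	if err != nil {
		log.Fatal(err)
	}
	_ = taskClient // ready for TaskService calls (Create, Start, Wait, ...)
}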

Types

type Bundle

type Bundle struct {
	// ID of the bundle
	ID string
	// Path to the bundle
	Path string
	// Namespace of the bundle
	Namespace string
}

Bundle represents an OCI bundle

func LoadBundle

func LoadBundle(ctx context.Context, root, id string) (*Bundle, error)

LoadBundle loads an existing bundle from disk

func NewBundle

func NewBundle(ctx context.Context, root, state, id string, spec typeurl.Any) (b *Bundle, err error)

NewBundle returns a new bundle on disk

func (*Bundle) Delete

func (b *Bundle) Delete() error

Delete a bundle atomically

type Config added in v1.3.0

type Config struct {
	// Supported platforms
	Platforms []string `toml:"platforms"`
	// SchedCore enables Linux core scheduling
	SchedCore bool `toml:"sched_core"`
}

Config for the v2 runtime

type ManagerConfig added in v1.6.0

type ManagerConfig struct {
	Root         string
	State        string
	Store        containers.Store
	Events       *exchange.Exchange
	Address      string
	TTRPCAddress string
	SchedCore    bool
	SandboxStore sandbox.Store
}

type ShimInstance added in v1.7.0

type ShimInstance interface {
	io.Closer

	// ID of the shim.
	ID() string
	// Namespace of this shim.
	Namespace() string
	// Bundle is a file system path to shim's bundle.
	Bundle() string
	// Client returns the underlying TTRPC or GRPC client object for this shim.
	// The underlying object can be either *ttrpc.Client or grpc.ClientConnInterface.
	Client() any
	// Delete will close the client and remove bundle from disk.
	Delete(ctx context.Context) error
}

ShimInstance represents a running shim process managed by ShimManager.

type ShimManager added in v1.6.0

type ShimManager struct {
	// contains filtered or unexported fields
}

ShimManager manages currently running shim processes. It is mainly responsible for launching new shims and for the proper shutdown and cleanup of existing instances. The manager is unaware of the underlying services a shim provides and lets higher level services consume them without caring about lifecycle management.

func NewShimManager added in v1.6.0

func NewShimManager(ctx context.Context, config *ManagerConfig) (*ShimManager, error)

NewShimManager creates a manager for v2 shims

func (*ShimManager) Delete added in v1.6.0

func (m *ShimManager) Delete(ctx context.Context, id string) error

Delete a runtime task

func (*ShimManager) Get added in v1.6.0

func (m *ShimManager) Get(ctx context.Context, id string) (ShimInstance, error)

func (*ShimManager) ID added in v1.6.0

func (m *ShimManager) ID() string

ID of the shim manager

func (*ShimManager) Start added in v1.6.0

func (m *ShimManager) Start(ctx context.Context, id string, opts runtime.CreateOpts) (_ ShimInstance, retErr error)

Start launches a new shim instance

type TaskManager

type TaskManager struct {
	// contains filtered or unexported fields
}

TaskManager wraps task service client on top of shim manager.

func NewTaskManager added in v1.6.0

func NewTaskManager(shims *ShimManager) *TaskManager

NewTaskManager creates a new task manager instance.

func (*TaskManager) Create

func (m *TaskManager) Create(ctx context.Context, taskID string, opts runtime.CreateOpts) (runtime.Task, error)

Create launches new shim instance and creates new task

func (*TaskManager) Delete added in v1.2.3

func (m *TaskManager) Delete(ctx context.Context, taskID string) (*runtime.Exit, error)

Delete deletes the task and shim instance

func (*TaskManager) Get

func (m *TaskManager) Get(ctx context.Context, id string) (runtime.Task, error)

Get a specific task

func (*TaskManager) ID

func (m *TaskManager) ID() string

ID of the task manager

func (*TaskManager) Tasks

func (m *TaskManager) Tasks(ctx context.Context, all bool) ([]runtime.Task, error)

Tasks lists all tasks

Directories

  • cmd
  • v1
  • v2
