package etl

v1.3.15

Published: Nov 13, 2022 License: MIT Imports: 30 Imported by: 0

README

ETL package

The etl package compiles into the aisnode executable to facilitate running custom ETL containers and communicating with those containers at runtime.

AIStore supports both on-the-fly (aka online) and offline user-defined dataset transformations. All the respective I/O-intensive (and expensive) operations are confined to the storage cluster, while computing clients retain all of their resources for computation over the transformed, filtered, and sorted data.

Popular use cases include - but are not limited to - dataset augmentation (of any kind) and filtering of AI datasets.

Please refer to the ETL readme for the prerequisites, the four supported ais <=> container communication mechanisms, and usage examples.

The ETL readme also contains an overview of the architecture, important technical details, and further guidance.

Architecture

The AIS-ETL extension is designed to maximize the effectiveness of the transformation process. In particular, AIS-ETL optimizes out the entire networking operation that would otherwise be required to move pre-transformed data between storage and compute nodes.

Based on the specification provided by the user, each target starts its own ETL container (worker) - one ETL container per storage target in the cluster. From then on, this "local" ETL container is responsible for transforming the objects stored on "its" AIS target. This approach runs custom transformations close to the data, and it ensures the performance and scalability of transformation workloads - scalability that, for all intents and purposes, can be considered unlimited.

The following figure illustrates a cluster of 3 AIS proxies (gateways) and 4 storage targets, with each target running user-defined ETL in parallel:

ETL architecture

Management and Benchmarking

  • AIS CLI includes commands to start, stop, and monitor ETL at runtime.
  • AIS Loader has been extended to benchmark and stress-test AIS clusters by running a number of pre-defined transformations that we include with the source code.

For more information and details, please refer to the ETL readme.

Documentation

Overview

Package etl provides utilities to initialize and use transformation pods.

  • Copyright (c) 2018-2022, NVIDIA CORPORATION. All rights reserved.

Index

Constants

const (
	// The target sends a POST request with the data to the ETL container.
	// The container must read the data and return a response to the target,
	// which is then transferred to the client.
	Hpush = "hpush://"
	// The target redirects the GET request to the ETL container. The ETL
	// container then contacts the target (via the `AIS_TARGET_URL` env
	// variable) to fetch the data, transforms it, and returns the result
	// to the client.
	Hpull = "hpull://"
	// Similar to the redirect strategy but using a reverse proxy.
	Hrev = "hrev://"
	// Stdin/stdout communication.
	HpushStdin = "io://"
)

Variables

This section is empty.

Functions

func CheckSecret

func CheckSecret(secret string) error

func InitCode

func InitCode(t cluster.Target, msg *InitCodeMsg) error

Given a user message `InitCodeMsg`, InitCode makes the corresponding substitutions in the pod spec template and runs the container. See also: etl/runtime/podspec.yaml
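
For illustration, a minimal sketch of building an `InitCodeMsg` and starting it with InitCode. This is not verbatim package usage: the runtime name, the "echo-etl" ID, and the import paths are assumptions, and `t` (the local cluster.Target) is only available inside the aisnode process.

import (
	"time"

	"github.com/NVIDIA/aistore/cluster"
	"github.com/NVIDIA/aistore/cmn/cos"
	"github.com/NVIDIA/aistore/etl"
)

// initEchoETL starts a code-based ETL that returns its input unchanged.
func initEchoETL(t cluster.Target) error {
	msg := &etl.InitCodeMsg{
		Code:    []byte("def transform(data):\n    return data\n"),
		Runtime: "python3.8v2", // pre-built runtime name (assumption; see etl/runtime/all.go)
		// ChunkSize left at 0: read the entire payload in memory, transform in one shot
	}
	msg.IDX = "echo-etl"              // user-chosen ETL ID (placeholder)
	msg.CommTypeX = etl.Hpush         // one of: Hpush, Hpull, Hrev, HpushStdin
	msg.Timeout = cos.Duration(time.Minute)
	msg.Funcs.Transform = "transform" // mandatory: name of the transforming function
	if err := msg.Validate(); err != nil {
		return err
	}
	return etl.InitCode(t, msg)
}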

func InitSpec

func InitSpec(t cluster.Target, msg *InitSpecMsg, opts StartOpts) (err error)
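
InitSpec starts an ETL from a user-provided Kubernetes Pod specification. A hedged sketch follows; the Pod spec is a trimmed placeholder (the image name, ETL ID, and import paths are all assumptions), and StartOpts.Env is used to inject extra environment variables into the container.

import (
	"time"

	"github.com/NVIDIA/aistore/cluster"
	"github.com/NVIDIA/aistore/cmn/cos"
	"github.com/NVIDIA/aistore/etl"
)

// initFromSpec deploys an ETL from a (placeholder) Pod spec.
func initFromSpec(t cluster.Target) error {
	spec := []byte(`apiVersion: v1
kind: Pod
metadata:
  name: my-transformer
spec:
  containers:
    - name: server
      image: example.com/my-transformer:latest
      ports:
        - name: default
          containerPort: 80
`)
	msg := &etl.InitSpecMsg{Spec: spec}
	msg.IDX = "my-transformer"
	msg.CommTypeX = etl.Hpull // the ETL container fetches data from the target
	msg.Timeout = cos.Duration(5 * time.Minute)
	if err := msg.Validate(); err != nil {
		return err
	}
	return etl.InitSpec(t, msg, etl.StartOpts{Env: map[string]string{"DEBUG": "1"}})
}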

func ParsePodSpec

func ParsePodSpec(errCtx *cmn.ETLErrorContext, spec []byte) (*corev1.Pod, error)

func Stop

func Stop(t cluster.Target, id string, errCause error) error

Stop deletes all resources occupied by the ETL, including Pods and Services, and unregisters the ETL's Smap listener.

func StopAll

func StopAll(t cluster.Target)

StopAll terminates all running ETLs.
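
A brief, hypothetical teardown sketch; the "echo-etl" ID is a placeholder, and passing a nil errCause to denote a regular (user-requested) stop is an assumption.

import (
	"log"

	"github.com/NVIDIA/aistore/cluster"
	"github.com/NVIDIA/aistore/etl"
)

// teardown stops a single ETL by its ID and then terminates any remaining ones.
func teardown(t cluster.Target) {
	// nil errCause: regular stop rather than an abort (assumption)
	if err := etl.Stop(t, "echo-etl", nil); err != nil {
		log.Println("stop:", err)
	}
	// terminate all remaining ETLs, e.g., during target shutdown
	etl.StopAll(t)
}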

Types

type Aborter

type Aborter struct {
	// contains filtered or unexported fields
}

Aborter listens to Smap changes and aborts the ETL on the target whenever the targets' membership changes. An Aborter is registered on ETL init and unregistered by the Stop function. There is no synchronization between aborters on different targets: it is assumed that if one target receives an Smap with changed membership, every other target will eventually receive it as well, and hence all ETL containers will be stopped.

func (*Aborter) ListenSmapChanged

func (e *Aborter) ListenSmapChanged()

func (*Aborter) String

func (e *Aborter) String() string

type CommStats

type CommStats interface {
	ObjCount() int64
	InBytes() int64
	OutBytes() int64
}

type Communicator

type Communicator interface {
	cluster.Slistener

	Name() string
	PodName() string
	SvcName() string

	// OnlineTransform uses one of the two ETL container endpoints:
	//  - Method "PUT", Path "/"
	//  - Method "GET", Path "/bucket/object"
	OnlineTransform(w http.ResponseWriter, r *http.Request, bck *cluster.Bck, objName string) error

	// OfflineTransform interface implementations realize offline ETL.
	// OfflineTransform is driven by `OfflineDataProvider` - not to be
	// confused with GET requests from users (such as training models and
	// apps) that perform on-the-fly transformation.
	OfflineTransform(bck *cluster.Bck, objName string, timeout time.Duration) (cos.ReadCloseSizer, error)
	Stop()

	CommStats
}

Communicator is responsible for managing communications with the local ETL container. It listens to cluster membership changes and terminates the ETL container if need be.

func GetCommunicator

func GetCommunicator(transformID string, lsnode *cluster.Snode) (Communicator, error)
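
A hedged usage sketch: look up a running ETL's Communicator and print its CommStats counters. It assumes that `t.Snode()` (via the embedded cluster.Node interface) yields the local target's Snode; the import paths are likewise assumptions.

import (
	"fmt"

	"github.com/NVIDIA/aistore/cluster"
	"github.com/NVIDIA/aistore/etl"
)

// printStats reports the I/O statistics of the ETL identified by id.
func printStats(t cluster.Target, id string) error {
	comm, err := etl.GetCommunicator(id, t.Snode())
	if err != nil {
		return err
	}
	// Communicator embeds CommStats
	fmt.Printf("%s: objects=%d, in=%dB, out=%dB\n",
		comm.Name(), comm.ObjCount(), comm.InBytes(), comm.OutBytes())
	return nil
}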

type ETLs

type ETLs map[string]InitMsg

type Info

type Info struct {
	ID string `json:"id"`

	ObjCount int64 `json:"obj_count"`
	InBytes  int64 `json:"in_bytes"`
	OutBytes int64 `json:"out_bytes"`
}

func List

func List() []Info

type InfoList

type InfoList []Info

func (InfoList) Len

func (il InfoList) Len() int

func (InfoList) Less

func (il InfoList) Less(i, j int) bool

func (InfoList) Swap

func (il InfoList) Swap(i, j int)
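
Since InfoList implements sort.Interface, the output of List can be sorted directly. A small sketch (that Less orders by ETL ID is an assumption based on the type's fields):

import (
	"fmt"
	"sort"

	"github.com/NVIDIA/aistore/etl"
)

// listETLs prints all running ETLs in sorted order.
func listETLs() {
	infos := etl.InfoList(etl.List())
	sort.Sort(infos)
	for _, info := range infos {
		fmt.Printf("%s: objects=%d, in=%dB, out=%dB\n",
			info.ID, info.ObjCount, info.InBytes, info.OutBytes)
	}
}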

type InitCodeMsg

type InitCodeMsg struct {
	InitMsgBase
	Code    []byte `json:"code"`
	Deps    []byte `json:"dependencies"`
	Runtime string `json:"runtime"`
	// `InitCodeMsg` carries the name of the transforming function;
	// the `Transform` function is mandatory and cannot be "" (empty) - it _will_ be called
	// by the `Runtime` container (see etl/runtime/all.go for all supported pre-built runtimes).
	// TODO -- FIXME: decide if we need to remove nested struct for funcs
	Funcs struct {
		Transform string `json:"transform"` // cannot be omitted
	}
	// 0 (zero) - read the entire payload in memory and then transform it in one shot;
	// > 0 - use chunk-size buffering and transform incrementally, one chunk at a time
	ChunkSize int64 `json:"chunk_size"`
	// bitwise flags: (streaming | debug | strict | ...)
	Flags int64 `json:"flags"`
}

func (*InitCodeMsg) InitType

func (*InitCodeMsg) InitType() string

func (*InitCodeMsg) Validate

func (m *InitCodeMsg) Validate() error

type InitMsg

type InitMsg interface {
	ID() string
	CommType() string
	InitType() string
	Validate() error
}

func UnmarshalInitMsg

func UnmarshalInitMsg(b []byte) (msg InitMsg, err error)
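
A sketch of decoding a raw JSON init message and dispatching on its concrete type; that the two concrete implementations are *InitCodeMsg and *InitSpecMsg is inferred from this page, and the import path is an assumption.

import (
	"fmt"

	"github.com/NVIDIA/aistore/etl"
)

// decodeInitMsg distinguishes code-based from spec-based initialization.
func decodeInitMsg(b []byte) (etl.InitMsg, error) {
	msg, err := etl.UnmarshalInitMsg(b)
	if err != nil {
		return nil, err
	}
	switch m := msg.(type) {
	case *etl.InitCodeMsg:
		fmt.Printf("code-based init: id=%s, runtime=%s\n", m.ID(), m.Runtime)
	case *etl.InitSpecMsg:
		fmt.Printf("spec-based init: id=%s, spec=%dB\n", m.ID(), len(m.Spec))
	}
	return msg, msg.Validate()
}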

type InitMsgBase

type InitMsgBase struct {
	IDX       string       `json:"id"`
	CommTypeX string       `json:"communication"`
	Timeout   cos.Duration `json:"timeout"`
}

func (InitMsgBase) CommType

func (m InitMsgBase) CommType() string

func (InitMsgBase) ID

func (m InitMsgBase) ID() string

type InitSpecMsg

type InitSpecMsg struct {
	InitMsgBase
	Spec []byte `json:"spec"`
}

func (*InitSpecMsg) InitType

func (*InitSpecMsg) InitType() string

func (*InitSpecMsg) Validate

func (m *InitSpecMsg) Validate() (err error)

type MD

type MD struct {
	Version int64
	ETLs    ETLs
	Ext     any
}

ETL metadata

func (*MD) Add

func (e *MD) Add(spec InitMsg)

func (*MD) Del

func (e *MD) Del(id string) (deleted bool)

func (*MD) Get

func (e *MD) Get(id string) (msg InitMsg, present bool)

func (*MD) Init

func (e *MD) Init(l int)

func (*MD) JspOpts

func (*MD) JspOpts() jsp.Options

func (*MD) MarshalJSON

func (e *MD) MarshalJSON() ([]byte, error)

func (*MD) String

func (e *MD) String() string

func (*MD) UnmarshalJSON

func (e *MD) UnmarshalJSON(data []byte) (err error)
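
A hypothetical sketch of maintaining ETL metadata with Init, Add, Get, and Del; the "echo-etl" ID is a placeholder and the import path an assumption.

import (
	"fmt"

	"github.com/NVIDIA/aistore/etl"
)

// trackETLs registers init messages in the ETL metadata and looks one up.
func trackETLs(msgs []etl.InitMsg) {
	md := &etl.MD{}
	md.Init(len(msgs)) // pre-size the underlying ETLs map
	for _, m := range msgs {
		md.Add(m)
	}
	if m, ok := md.Get("echo-etl"); ok {
		fmt.Println("registered:", m.ID(), m.CommType())
	}
	md.Del("echo-etl") // returns false if the ID is not present
}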

type OfflineDataProvider

type OfflineDataProvider struct {
	// contains filtered or unexported fields
}

func NewOfflineDataProvider

func NewOfflineDataProvider(msg *apc.TCBMsg, lsnode *cluster.Snode) (*OfflineDataProvider, error)

func (*OfflineDataProvider) Reader

Reader returns a reader resulting from the ETL transformation of the given LOM (local object metadata).

type OfflineMsg

type OfflineMsg struct {
	ID     string `json:"id"`      // ETL ID
	Prefix string `json:"prefix"`  // Prefix added to each resulting object.
	DryRun bool   `json:"dry_run"` // Don't perform any PUT

	// New object names will have this extension. Warning: if the source
	// bucket contains two objects with the same base name but different
	// extensions, specifying this field may cause one object to override
	// the other due to the resulting name conflict.
	Ext string `json:"ext"`
}

type PodHealthMsg

type PodHealthMsg struct {
	TargetID string  `json:"target_id"`
	CPU      float64 `json:"cpu"`
	Mem      int64   `json:"mem"`
}

func PodHealth

func PodHealth(t cluster.Target, etlID string) (stats *PodHealthMsg, err error)

type PodLogsMsg

type PodLogsMsg struct {
	TargetID string `json:"target_id"`
	Logs     []byte `json:"logs"`
}

func PodLogs

func PodLogs(t cluster.Target, transformID string) (logs PodLogsMsg, err error)

func (*PodLogsMsg) String

func (p *PodLogsMsg) String(maxLen ...int) string
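
A combined monitoring sketch using PodHealth and PodLogs. The ETL ID is a placeholder, the import paths are assumptions, and the notion that String's variadic maxLen caps the output length is inferred from the signature alone.

import (
	"fmt"

	"github.com/NVIDIA/aistore/cluster"
	"github.com/NVIDIA/aistore/etl"
)

// monitorETL prints the resource usage and recent logs of the ETL pod
// running on the local target.
func monitorETL(t cluster.Target, id string) error {
	health, err := etl.PodHealth(t, id)
	if err != nil {
		return err
	}
	fmt.Printf("target=%s cpu=%.2f mem=%d\n", health.TargetID, health.CPU, health.Mem)

	logs, err := etl.PodLogs(t, id)
	if err != nil {
		return err
	}
	fmt.Println(logs.String(1024)) // cap the printed log length (assumption)
	return nil
}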

type PodsHealthMsg

type PodsHealthMsg []*PodHealthMsg

type PodsLogsMsg

type PodsLogsMsg []PodLogsMsg

func (PodsLogsMsg) Len

func (p PodsLogsMsg) Len() int

func (PodsLogsMsg) Less

func (p PodsLogsMsg) Less(i, j int) bool

func (PodsLogsMsg) Swap

func (p PodsLogsMsg) Swap(i, j int)

type StartOpts

type StartOpts struct {
	Env map[string]string
}

Directories

Path      Synopsis
runtime   Package runtime provides skeletons and static specifications for building ETL from scratch.
