compile

package
v1.2.3-hotfix-20241016
Published: Oct 16, 2024 License: Apache-2.0 Imports: 141 Imported by: 0

Documentation

Index

Constants

const (
	DistributedThreshold   uint64 = 10 * mpool.MB
	SingleLineSizeEstimate uint64 = 300 * mpool.B
)

Note: the cost obtained from stats is currently just a row count, so the size of each row can only be estimated. Currently, an insertion of around 200,000 rows triggers the CN to write to S3 directly.
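
As a rough, standalone illustration of that per-row estimate, the sketch below mirrors the two constants locally (so it can run on its own) and converts a row count into a byte estimate. The package's real decision logic is not shown on this page and may differ.

package main

import "fmt"

const (
	mb                     uint64 = 1 << 20 // mirrors mpool.MB for this sketch
	distributedThreshold          = 10 * mb // mirrors DistributedThreshold (10 MB)
	singleLineSizeEstimate uint64 = 300     // mirrors SingleLineSizeEstimate (300 B per row)
)

func main() {
	// Stats only report a row count, so the total size is estimated per row.
	rows := uint64(50_000)
	estimatedBytes := rows * singleLineSizeEstimate
	fmt.Printf("estimated size: %d bytes, above distributed threshold: %v\n",
		estimatedBytes, estimatedBytes >= distributedThreshold)
	// With a 300 B estimate, the 10 MB threshold corresponds to roughly this many rows:
	fmt.Println("rows to reach threshold:", distributedThreshold/singleLineSizeEstimate)
}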

const (
	Merge magicType = iota
	Normal
	Remote
	Parallel
	CreateDatabase
	CreateTable
	CreateView
	CreateIndex
	DropDatabase
	DropTable
	DropIndex
	TruncateTable
	AlterView
	AlterTable
	MergeInsert
	MergeDelete
	CreateSequence
	DropSequence
	AlterSequence
	Replace
)

The magicType constants above describe the type of a Scope (see Scope.Magic).
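
A minimal sketch, written as if it lived inside this package, of how a caller might branch on a Scope's Magic value. Only a few of the constants above are shown, and the helper name describeScope is illustrative.

func describeScope(s *Scope) string {
	switch s.Magic {
	case Normal:
		return "local execution unit"
	case Merge:
		return "merge execution unit"
	case Remote:
		return "execution unit that requires a remote call"
	case CreateTable, DropTable, TruncateTable:
		return "table DDL"
	default:
		return "other"
	}
}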

const (
	INDEX_TYPE_PRIMARY  = "PRIMARY"
	INDEX_TYPE_UNIQUE   = "UNIQUE"
	INDEX_TYPE_MULTIPLE = "MULTIPLE"
)

const (
	INDEX_VISIBLE_YES = 1
	INDEX_VISIBLE_NO  = 0
)

const (
	INDEX_HIDDEN_YES = 1
	INDEX_HIDDEN_NO  = 0
)

const (
	NULL_VALUE   = "null"
	EMPTY_STRING = ""
)

const (
	ALLOCID_INDEX_KEY = "index_key"
)

const (

	// HandleNotifyTimeout
	// todo: this is a bad design.
	//  The waiting work should be done in the prepare stage of the dispatch operator, not in the exec stage.
	//  Doing the waiting work in the exec stage can save some execution time, but it causes an unstable waiting time
	//  (because we cannot control the execution time of the running sql,
	//  and the arrival time of the first batch of the result is not constant).
	//  See the code in pkg/sql/colexec/dispatch/dispatch.go:waitRemoteRegsReady()
	//
	// This needs to be fixed in the future. For now, the timeout is increased so that tpch1T can run on 3 CNs.
	HandleNotifyTimeout = 300 * time.Second
)

const MaxRpcTime = time.Hour * 24

MaxRpcTime is the default timeout applied to an rpc context if the user never sets a deadline. The value itself is arbitrary; the point is that every message sent through rpc needs a clear deadline.
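
A minimal sketch of that intent, assuming the surrounding compile package plus the standard context package; withRpcDeadline is an illustrative helper, not part of the API. It falls back to MaxRpcTime only when the caller has not already set a deadline.

func withRpcDeadline(ctx context.Context) (context.Context, context.CancelFunc) {
	if _, ok := ctx.Deadline(); ok {
		// The caller already set a deadline; keep it.
		return ctx, func() {}
	}
	// No deadline was set, so fall back to the package-wide maximum.
	return context.WithTimeout(ctx, MaxRpcTime)
}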

const StreamMaxInterval = 8192

Variables

This section is empty.

Functions

func AddChildTblIdToParentTable added in v1.1.2

func AddChildTblIdToParentTable(ctx context.Context, fkRelation engine.Relation, tblId uint64) error

func AddFkeyToRelation added in v1.1.2

func AddFkeyToRelation(ctx context.Context, fkRelation engine.Relation, fkey *plan.ForeignKeyDef) error

func ApplyRuntimeFilters added in v1.0.0

func ApplyRuntimeFilters(
	ctx context.Context,
	proc *process.Process,
	tableDef *plan.TableDef,
	blockInfos objectio.BlockInfoSlice,
	exprs []*plan.Expr,
	runtimeFilters []process.RuntimeFilterMessage,
) ([]byte, error)

func CnServerMessageHandler added in v0.6.0

func CnServerMessageHandler(
	ctx context.Context,
	serverAddress string,
	message morpc.Message,
	cs morpc.ClientSession,
	storageEngine engine.Engine, fileService fileservice.FileService, lockService lockservice.LockService,
	queryClient qclient.QueryClient,
	HaKeeper logservice.CNHAKeeperClient, udfService udf.Service, txnClient client.TxnClient,
	autoIncreaseCM *defines.AutoIncrCacheManager,
	messageAcquirer func() morpc.Message) (err error)

CnServerMessageHandler receives and handles messages from the cn-client.

The message should always be a *pipeline.Message here. There are two types of pipeline message now (a dispatch sketch follows the list):

  1. notify message : a message to tell the dispatch pipeline where its remote receivers are, so that this connection's write-back method can be used to send the data; or a message to stop the running pipeline.

  2. scope message : a message that contains the encoded pipeline; we decode it and run it locally.
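
The sketch below only illustrates that two-way dispatch. msgKind, notifyMsg, scopeMsg and the two helpers are hypothetical stand-ins, since the real *pipeline.Message layout is not documented on this page; the standard fmt package is assumed.

type msgKind int

const (
	notifyMsg msgKind = iota // where the remote receivers are / stop a running pipeline
	scopeMsg                 // an encoded pipeline to decode and run locally
)

func handlePipelineMessage(kind msgKind, payload []byte) error {
	switch kind {
	case notifyMsg:
		return registerReceiverOrStop(payload) // hypothetical helper
	case scopeMsg:
		return decodeAndRunScope(payload) // hypothetical helper
	default:
		return fmt.Errorf("unexpected pipeline message kind %d", kind)
	}
}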

func DebugShowScopes added in v0.6.0

func DebugShowScopes(ss []*Scope) string

DebugShowScopes shows information about a scope structure.

func DecodeMergeGroup added in v1.1.0

func DecodeMergeGroup(merge *mergegroup.Argument, pipe *pipeline.Group)

func DetermineRuntimeDOP added in v1.2.1

func DetermineRuntimeDOP(cpunum, blocks int) int

func EncodeMergeGroup added in v1.1.0

func EncodeMergeGroup(merge *mergegroup.Argument, pipe *pipeline.Group)

func GetConstraintDef added in v1.1.2

func GetConstraintDef(ctx context.Context, rel engine.Relation) (*engine.ConstraintDef, error)

func GetConstraintDefFromTableDefs added in v1.1.2

func GetConstraintDefFromTableDefs(defs []engine.TableDef) *engine.ConstraintDef

func MakeNewCreateConstraint added in v1.1.2

func MakeNewCreateConstraint(oldCt *engine.ConstraintDef, c engine.Constraint) (*engine.ConstraintDef, error)

func NewSQLExecutor added in v0.8.0

NewSQLExecutor returns an internally used sql executor. It can execute sql on the current CN.

func ReleaseScopes added in v1.1.0

func ReleaseScopes(ss []*Scope)

Types

type Col

type Col struct {
	Typ  types.T
	Name string
}

Col holds the information of an attribute (column): its type and name.

type Compile added in v0.5.0

type Compile struct {

	// TxnOffset is the read starting offset within the transaction while executing the current statement.
	TxnOffset int

	MessageBoard *process.MessageBoard
	// contains filtered or unexported fields
}

Compile contains all the information needed for compilation.

func NewCompile added in v1.1.0

func NewCompile(addr, db, sql, tenant, uid string, e engine.Engine, proc *process.Process, stmt tree.Statement, isInternal bool, cnLabel map[string]string, startAt time.Time) *Compile

NewCompile creates a new Compile object.

func (*Compile) Compile added in v0.5.0

func (c *Compile) Compile(ctx context.Context, pn *plan.Plan, fill func(*batch.Batch) error) (err error)

Compile is the entry point of the compute-execute layer. It generates a scope (logical pipeline) for a query plan.

func (*Compile) NeedInitTempEngine added in v0.7.0

func (c *Compile) NeedInitTempEngine() bool

NeedInitTempEngine is a helper function that reports whether a temporary engine needs to be initialized.

func (*Compile) Release added in v1.1.0

func (c *Compile) Release()

func (*Compile) Run added in v0.5.0

func (c *Compile) Run(_ uint64) (result *util2.RunResult, err error)

Run is an important function of the compute layer; it executes a single sql statement according to its scope. Release() must be called after calling this function.
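
A usage sketch based only on the signatures documented on this page. The addr/db/sql/tenant/uid strings, eng, proc, stmt, ctx and the plan pn are assumed to come from the caller's surrounding code, and error handling is abbreviated.

c := NewCompile(addr, db, sql, tenant, uid, eng, proc, stmt, false, nil, time.Now())
defer c.Release() // Run requires a matching Release

if err := c.Compile(ctx, pn, func(bat *batch.Batch) error {
	// consume one batch of the result set
	return nil
}); err != nil {
	return err
}

result, err := c.Run(0)
if err != nil {
	return err
}
_ = result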

func (*Compile) SetBuildPlanFunc added in v1.0.0

func (c *Compile) SetBuildPlanFunc(buildPlanFunc func() (*plan2.Plan, error))

func (*Compile) SetOriginSQL added in v1.2.0

func (c *Compile) SetOriginSQL(sql string)

func (*Compile) SetTempEngine added in v0.7.0

func (c *Compile) SetTempEngine(tempEngine engine.Engine, tempStorage *memorystorage.Storage)

func (Compile) TypeName added in v1.2.0

func (c Compile) TypeName() string

type MultiTableIndex added in v1.2.0

type MultiTableIndex struct {
	IndexAlgo string
	IndexDefs map[string]*plan.IndexDef
}

type RemoteReceivRegInfo added in v0.7.0

type RemoteReceivRegInfo struct {
	Idx      int
	Uuid     uuid.UUID
	FromAddr string
}

type RuntimeFilterEvaluator added in v1.0.0

type RuntimeFilterEvaluator interface {
	Evaluate(objectio.ZoneMap) bool
}

type RuntimeInFilter added in v1.0.0

type RuntimeInFilter struct {
	InList *vector.Vector
}

func (*RuntimeInFilter) Evaluate added in v1.0.0

func (f *RuntimeInFilter) Evaluate(zm objectio.ZoneMap) bool

type RuntimeZonemapFilter added in v1.0.0

type RuntimeZonemapFilter struct {
	Zm objectio.ZoneMap
}

func (*RuntimeZonemapFilter) Evaluate added in v1.0.0

func (f *RuntimeZonemapFilter) Evaluate(zm objectio.ZoneMap) bool
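
A minimal sketch, written as if inside this package, of how any RuntimeFilterEvaluator could be used to keep only the zonemaps it accepts. The []objectio.ZoneMap input is a simplification of the block metadata that ApplyRuntimeFilters actually works with.

func keepMatchingZoneMaps(zms []objectio.ZoneMap, eval RuntimeFilterEvaluator) []objectio.ZoneMap {
	kept := zms[:0]
	for _, zm := range zms {
		if eval.Evaluate(zm) {
			// The zonemap may contain matching rows, so the block is kept.
			kept = append(kept, zm)
		}
	}
	return kept
}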

type Scope

type Scope struct {
	// Magic specifies the type of Scope.
	// 0 -  execution unit for reading data.
	// 1 -  execution unit for processing intermediate results.
	// 2 -  execution unit that requires remote call.
	Magic magicType

	// IsJoin means the pipeline contains a join.
	IsJoin bool

	// IsEnd means the pipeline is an end pipeline.
	IsEnd bool

	// IsRemote means the pipeline is remote.
	IsRemote bool

	// IsLoad means the pipeline is a load pipeline.
	IsLoad bool

	Plan *plan.Plan
	// DataSource stores information about data source.
	DataSource *Source
	// PreScopes contains the child scopes that this scope will inherit and execute.
	PreScopes []*Scope
	// NodeInfo contains the information about the remote node.
	NodeInfo engine.Node
	// TxnOffset represents the transaction's write offset, specifying the starting position for reading data.
	TxnOffset int
	// Instructions contains the command list of this scope.
	Instructions vm.Instructions
	// Proc contains the execution context.
	Proc *process.Process

	Reg *process.WaitRegister

	RemoteReceivRegInfos []RemoteReceivRegInfo

	BuildIdx   int
	ShuffleCnt int

	PartialResults     []any
	PartialResultTypes []types.T
}

Scope is the output of the compile process. Each sql statement is compiled into one or more execution-unit scopes.
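
A tiny sketch, as if inside this package, that walks a scope tree through PreScopes; countScopes is illustrative only (DebugShowScopes is the package's own way to inspect scopes).

func countScopes(ss []*Scope) int {
	total := 0
	for _, s := range ss {
		total += 1 + countScopes(s.PreScopes)
	}
	return total
}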

func (*Scope) AlterSequence added in v1.0.0

func (s *Scope) AlterSequence(c *Compile) error

func (*Scope) AlterTable added in v0.8.0

func (s *Scope) AlterTable(c *Compile) (err error)

func (*Scope) AlterTableCopy added in v1.0.0

func (s *Scope) AlterTableCopy(c *Compile) error

func (*Scope) AlterTableInplace added in v1.0.0

func (s *Scope) AlterTableInplace(c *Compile) error

func (*Scope) AlterView added in v0.7.0

func (s *Scope) AlterView(c *Compile) error

AlterView drops the old view and creates the new one.

func (*Scope) CreateDatabase

func (s *Scope) CreateDatabase(c *Compile) error

func (*Scope) CreateIndex

func (s *Scope) CreateIndex(c *Compile) error

func (*Scope) CreateSequence added in v0.8.0

func (s *Scope) CreateSequence(c *Compile) error

func (*Scope) CreateTable

func (s *Scope) CreateTable(c *Compile) error

func (*Scope) CreateTempTable added in v0.7.0

func (s *Scope) CreateTempTable(c *Compile) error

func (*Scope) CreateView added in v1.2.1

func (s *Scope) CreateView(c *Compile) error

func (*Scope) DropDatabase

func (s *Scope) DropDatabase(c *Compile) error

func (*Scope) DropIndex

func (s *Scope) DropIndex(c *Compile) error

func (*Scope) DropSequence added in v0.8.0

func (s *Scope) DropSequence(c *Compile) error

func (*Scope) DropTable

func (s *Scope) DropTable(c *Compile) error

func (*Scope) InitAllDataSource added in v1.2.0

func (s *Scope) InitAllDataSource(c *Compile) error

func (*Scope) MergeRun

func (s *Scope) MergeRun(c *Compile) error

MergeRun runs the scope's pre-scopes in goroutines, and finally runs the scope itself to do the merge work.
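
A hedged sketch of the pattern MergeRun describes, using golang.org/x/sync/errgroup for brevity: each pre-scope runs in its own goroutine, and the merge work happens only after all of them finish. Which of Run, MergeRun or RemoteRun each pre-scope actually goes through depends on its Magic; that dispatch is omitted here.

func runPreScopesThenMerge(c *Compile, s *Scope) error {
	g := new(errgroup.Group)
	for _, pre := range s.PreScopes {
		pre := pre // capture the loop variable for the goroutine
		g.Go(func() error { return pre.Run(c) })
	}
	if err := g.Wait(); err != nil {
		return err
	}
	// ... the merge work on s itself would follow here.
	return nil
}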

func (*Scope) ParallelRun

func (s *Scope) ParallelRun(c *Compile) (err error)

ParallelRun runs a pipeline in parallel.

func (*Scope) RemoteRun

func (s *Scope) RemoteRun(c *Compile) error

RemoteRun sends the scope to a remote node for execution.

func (*Scope) Run

func (s *Scope) Run(c *Compile) (err error)

Run reads data from the storage engine and runs the scope's instructions.

func (*Scope) SetContextRecursively added in v0.8.0

func (s *Scope) SetContextRecursively(ctx context.Context)

func (*Scope) SetOperatorInfoRecursively added in v1.2.0

func (s *Scope) SetOperatorInfoRecursively(cb func() int32)

func (*Scope) TruncateTable added in v0.6.0

func (s *Scope) TruncateTable(c *Compile) error

Truncation operations cannot be performed if the session holds an active table lock.

func (Scope) TypeName added in v1.2.0

func (s Scope) TypeName() string

type ServiceOfCompile added in v1.2.1

type ServiceOfCompile struct {
	sync.RWMutex
	// contains filtered or unexported fields
}

ServiceOfCompile is used to manage the lifecycle of Compile structures, including their creation and deletion.

It also tracks the currently active Compile instances within a single CN.

func GetCompileService added in v1.2.1

func GetCompileService() *ServiceOfCompile

func InitCompileService added in v1.2.1

func InitCompileService() *ServiceOfCompile

func (*ServiceOfCompile) KillAllQueriesWithError added in v1.2.1

func (srv *ServiceOfCompile) KillAllQueriesWithError(err error)

func (*ServiceOfCompile) PauseService added in v1.2.1

func (srv *ServiceOfCompile) PauseService()

func (*ServiceOfCompile) ResumeService added in v1.2.1

func (srv *ServiceOfCompile) ResumeService()
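
A usage sketch based only on the functions listed above: pause the service, cancel the queries still in flight with a caller-supplied error, then resume. The error text is illustrative and errors comes from the standard library.

srv := GetCompileService()
srv.PauseService()
srv.KillAllQueriesWithError(errors.New("compile service is pausing")) // illustrative error
srv.ResumeService()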

type Source

type Source struct {
	PushdownId             uint64
	PushdownAddr           string
	SchemaName             string
	RelationName           string
	PartitionRelationNames []string
	Attributes             []string
	R                      engine.Reader
	Bat                    *batch.Batch
	FilterExpr             *plan.Expr // todo: change this to []*plan.Expr

	TableDef  *plan.TableDef
	Timestamp timestamp.Timestamp
	AccountId *plan.PubInfo

	RuntimeFilterSpecs []*plan.RuntimeFilterSpec
	OrderBy            []*plan.OrderBySpec // for ordered scan
	// contains filtered or unexported fields
}

Source contains information about a relation that will be used during execution.

type TxnOperator added in v0.6.0

type TxnOperator = client.TxnOperator
