worker

package
v1.2.9-prerelease5
Published: Apr 24, 2024 License: MIT Imports: 25 Imported by: 4

README

Cadence Worker

Cadence Worker is a new role in the Cadence service, responsible for hosting components that perform background processing on the Cadence cluster.

Replicator

Replicator is a background worker responsible for consuming replication tasks generated by remote Cadence clusters and passing them down to processors so they can be applied to the local Cadence cluster.
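
The consume-and-hand-off flow described above can be sketched in Go. The types and function names below are illustrative only, not the actual replicator API:

```go
package main

import "fmt"

// replicationTask stands in for a task produced by a remote cluster.
type replicationTask struct{ id int }

// consume drains tasks from a source channel and hands each one to a
// processor callback, mirroring the consumer -> processor flow described
// above. This is a sketch, not the real Cadence replicator.
func consume(tasks <-chan replicationTask, process func(replicationTask) error) error {
	for t := range tasks {
		if err := process(t); err != nil {
			return fmt.Errorf("apply task %d: %w", t.id, err)
		}
	}
	return nil
}

func main() {
	tasks := make(chan replicationTask, 3)
	for i := 1; i <= 3; i++ {
		tasks <- replicationTask{id: i}
	}
	close(tasks)

	applied := 0
	if err := consume(tasks, func(t replicationTask) error { applied++; return nil }); err != nil {
		panic(err)
	}
	fmt.Println(applied) // 3
}
```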

Quickstart for local development with multiple Cadence clusters and replication

  1. Start dependencies using Docker if you don't have them running:
docker-compose -f docker/dev/cassandra.yml up

Then install the schemas:

make install-schema-xdc
  2. Start Cadence development servers for cluster0, cluster1, and cluster2:
./cadence-server --zone xdc_cluster0 start
./cadence-server --zone xdc_cluster1 start
./cadence-server --zone xdc_cluster2 start
  3. Create a global Cadence domain that replicates data across clusters:
cadence --do samples-domain domain register --ac cluster0 --cl cluster0 cluster1 cluster2

Then run a helloworld workflow from the Go Client Sample or the Java Client Sample.

  4. Failover a domain between clusters:

Failover to cluster1:

cadence --do samples-domain domain update --ac cluster1

or failover to cluster2:

cadence --do samples-domain domain update --ac cluster2

Failback to cluster0:

cadence --do samples-domain domain update --ac cluster0

Multiple region setup

In a multiple-region setup, use a different set of configs instead:

./cadence-server --zone cross_region_cluster0 start
./cadence-server --zone cross_region_cluster1 start
./cadence-server --zone cross_region_cluster2 start

Right now the only difference is at clusterGroupMetadata.clusterRedirectionPolicy. In a multiple-region setup, the network communication overhead between clusters is high, so you should use "selected-apis-forwarding". Workflow/activity workers need to be connected to each cluster to maintain high availability.
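
As a sketch, the relevant stanza in the cluster config might look like the following. The exact file layout can differ between Cadence versions; only the clusterGroupMetadata.clusterRedirectionPolicy key and the "selected-apis-forwarding" value come from the text above:

```yaml
clusterGroupMetadata:
  clusterRedirectionPolicy:
    # Forward only selected APIs to the active cluster; suitable when
    # cross-cluster network overhead is high (multiple-region setup).
    policy: selected-apis-forwarding
```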

Archiver

Archiver handles archival of workflow execution histories. It does this by hosting a Cadence client worker and running an archival system workflow. The archival client is used to initiate archival by sending a signal to the workflow. The archiver shards work across several workflows.
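
The sharding idea above, routing each execution's archival request to one of N system workflows, can be sketched as a simple hash-based mapping. This is an illustrative sketch with a hypothetical helper name, not the actual Cadence implementation:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// shardWorkflowID deterministically picks one of numShards archival
// system workflow IDs for a given execution, so requests for the same
// execution always route to the same workflow.
func shardWorkflowID(domainID, workflowID string, numShards int) string {
	h := fnv.New32a()
	h.Write([]byte(domainID + "/" + workflowID))
	shard := int(h.Sum32()) % numShards
	return fmt.Sprintf("cadence-archival-%d", shard)
}

func main() {
	fmt.Println(shardWorkflowID("domA", "wf-1", 8))
}
```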

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

func NewService

func NewService(params *resource.Params) (resource.Resource, error)

NewService builds a new cadence-worker service

Types

type Config

type Config struct {
	ArchiverConfig *archiver.Config
	IndexerCfg     *indexer.Config
	ScannerCfg     *scanner.Config
	BatcherCfg     *batcher.Config
	ESAnalyzerCfg  *esanalyzer.Config

	ThrottledLogRPS                     dynamicconfig.IntPropertyFn
	PersistenceGlobalMaxQPS             dynamicconfig.IntPropertyFn
	PersistenceMaxQPS                   dynamicconfig.IntPropertyFn
	EnableBatcher                       dynamicconfig.BoolPropertyFn
	EnableParentClosePolicyWorker       dynamicconfig.BoolPropertyFn
	NumParentClosePolicySystemWorkflows dynamicconfig.IntPropertyFn
	EnableFailoverManager               dynamicconfig.BoolPropertyFn
	DomainReplicationMaxRetryDuration   dynamicconfig.DurationPropertyFn
	EnableESAnalyzer                    dynamicconfig.BoolPropertyFn
	EnableAsyncWorkflowConsumption      dynamicconfig.BoolPropertyFn
	HostName                            string
	// contains filtered or unexported fields
}

Config contains all the service config for worker
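
The dynamicconfig.IntPropertyFn and BoolPropertyFn fields above are functions rather than plain values: callers re-evaluate them on each use, so config can change at runtime without a restart. A minimal, self-contained sketch of that pattern (not the actual dynamicconfig package):

```go
package main

import "fmt"

// IntPropertyFn mimics the dynamicconfig pattern: the config struct
// holds a function that is re-evaluated on every call, rather than a
// value captured once at startup.
type IntPropertyFn func() int

func newIntProperty(get func() int) IntPropertyFn { return get }

func main() {
	limit := 100
	qps := newIntProperty(func() int { return limit })
	fmt.Println(qps()) // 100
	limit = 200        // simulate a dynamic config update
	fmt.Println(qps()) // 200
}
```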

func NewConfig

func NewConfig(params *resource.Params) *Config

NewConfig builds the new Config for cadence-worker service

type Service

type Service struct {
	resource.Resource
	// contains filtered or unexported fields
}

Service represents the cadence-worker service. This service hosts all background processing needed for the cadence cluster:

  1. Replicator: handles applying replication tasks generated by remote clusters.
  2. Indexer: handles uploading of visibility records to Elasticsearch.
  3. Archiver: handles archival of workflow histories.

func (*Service) Start

func (s *Service) Start()

Start is called to start the service

func (*Service) Stop

func (s *Service) Stop()

Stop is called to stop the service
