README
Cadence Worker
Cadence Worker is a role within the Cadence service used for hosting components responsible for performing background processing on the Cadence cluster.
Replicator
Replicator is a background worker responsible for consuming replication tasks generated by remote Cadence clusters and passing them down to a processor so they can be applied to the local Cadence cluster.
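As a rough illustration, the consume-and-apply loop looks conceptually like the sketch below. The interfaces here are hypothetical simplifications for illustration only; the real replicator consumes from Kafka and applies tasks through history service APIs.

```go
package replicator

// ReplicationTask is a hypothetical stand-in for the replication task
// payload produced by a remote cluster.
type ReplicationTask struct {
	DomainID   string
	WorkflowID string
	Events     []byte
}

// TaskConsumer and TaskProcessor are simplified stand-ins for the
// replicator's consumer and processor components.
type TaskConsumer interface {
	Next() (ReplicationTask, error)
}

type TaskProcessor interface {
	Apply(task ReplicationTask) error
}

// run drains tasks from the remote-cluster stream and applies each one
// to the local cluster; retry handling is omitted for brevity.
func run(consumer TaskConsumer, processor TaskProcessor) error {
	for {
		task, err := consumer.Next()
		if err != nil {
			return err
		}
		if err := processor.Apply(task); err != nil {
			return err
		}
	}
}
```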
Quickstart for local development with multiple Cadence clusters and replication
- Start the dependencies using Docker if you don't have them running:
docker-compose -f docker/dev/cassandra.yml up
Then install the schemas:
make install-schema-xdc
- Start the Cadence development server for cluster0, cluster1, and cluster2:
./cadence-server --zone xdc_cluster0 start
./cadence-server --zone xdc_cluster1 start
./cadence-server --zone xdc_cluster2 start
- Create a global Cadence domain that replicates data across clusters:
cadence --do samples-domain domain register --ac cluster0 --cl cluster0 cluster1 cluster2
Then run a helloworld from the Go Client Sample or Java Client Sample (see the workflow sketch after this list).
- Failover a domain between clusters:
Failover to cluster1:
cadence --do samples-domain domain update --ac cluster1
or failover to cluster2:
cadence --do samples-domain domain update --ac cluster2
Failback to cluster0:
cadence --do samples-domain domain update --ac cluster0
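For reference, a hello-world of the kind those samples run looks roughly like the following. This is a minimal sketch assuming the go.uber.org/cadence Go client; see the official samples for the real code, including the worker and service-client setup omitted here.

```go
package helloworld

import (
	"context"
	"time"

	"go.uber.org/cadence/activity"
	"go.uber.org/cadence/workflow"
)

// helloWorldActivity returns a greeting; activities hold the side effects.
func helloWorldActivity(ctx context.Context, name string) (string, error) {
	return "Hello " + name + "!", nil
}

// helloWorldWorkflow orchestrates the activity with basic timeouts.
func helloWorldWorkflow(ctx workflow.Context, name string) (string, error) {
	ao := workflow.ActivityOptions{
		ScheduleToStartTimeout: time.Minute,
		StartToCloseTimeout:    time.Minute,
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	var greeting string
	err := workflow.ExecuteActivity(ctx, helloWorldActivity, name).Get(ctx, &greeting)
	return greeting, err
}

func init() {
	// Registration makes the workflow and activity visible to a worker
	// polling the task list; starting that worker is omitted here.
	workflow.Register(helloWorldWorkflow)
	activity.Register(helloWorldActivity)
}
```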
Multiple region setup
In a multiple-region setup, use a different set of configs instead:
./cadence-server --zone cross_region_cluster0 start
./cadence-server --zone cross_region_cluster1 start
./cadence-server --zone cross_region_cluster2 start
Currently the only difference is in clusterGroupMetadata.clusterRedirectionPolicy. In a multiple-region setup, network communication overhead between clusters is high, so "selected-apis-forwarding" should be used. Workflow and activity workers need to be connected to each cluster to maintain high availability.
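For illustration, the relevant excerpt of the static server config would look roughly like this; treat the exact file layout as version-dependent rather than authoritative:

```yaml
clusterGroupMetadata:
  clusterRedirectionPolicy:
    # Forward only selected APIs to the active cluster, which suits the
    # high-latency, multi-region links described above.
    policy: selected-apis-forwarding
```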
Archiver
Archiver handles archival of workflow execution histories. It does this by hosting a Cadence client worker and running an archival system workflow. Archival is initiated by sending a signal to that workflow through the archival client. The archiver shards work across several workflows.
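To make "initiated by sending a signal" concrete, the sketch below shows the general shape of signaling a system workflow with the Go client. The request type, workflow ID, and signal name are hypothetical placeholders, not the archiver's actual internals:

```go
package archivalexample

import (
	"context"

	"go.uber.org/cadence/client"
)

// ArchiveRequest is a hypothetical signal payload; the real archiver
// defines its own request type internally.
type ArchiveRequest struct {
	DomainID   string
	WorkflowID string
	RunID      string
}

// requestArchival illustrates signal-based initiation: the caller signals
// a long-running system workflow instead of invoking an API directly.
func requestArchival(ctx context.Context, c client.Client, req ArchiveRequest) error {
	// Workflow ID and signal name below are placeholders for illustration.
	return c.SignalWorkflow(ctx, "cadence-archival-system-workflow-0", "", "archivalRequest", req)
}
```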
Documentation
Index
Constants
This section is empty.
Variables
This section is empty.
Functions
Types
type Config
type Config struct {
	KafkaCfg                            config.KafkaConfig
	ArchiverConfig                      *archiver.Config
	IndexerCfg                          *indexer.Config
	ScannerCfg                          *scanner.Config
	BatcherCfg                          *batcher.Config
	ESAnalyzerCfg                       *esanalyzer.Config
	ThrottledLogRPS                     dynamicconfig.IntPropertyFn
	PersistenceGlobalMaxQPS             dynamicconfig.IntPropertyFn
	PersistenceMaxQPS                   dynamicconfig.IntPropertyFn
	EnableBatcher                       dynamicconfig.BoolPropertyFn
	EnableParentClosePolicyWorker       dynamicconfig.BoolPropertyFn
	NumParentClosePolicySystemWorkflows dynamicconfig.IntPropertyFn
	EnableFailoverManager               dynamicconfig.BoolPropertyFn
	DomainReplicationMaxRetryDuration   dynamicconfig.DurationPropertyFn
	EnableESAnalyzer                    dynamicconfig.BoolPropertyFn
	EnableAsyncWorkflowConsumption      dynamicconfig.BoolPropertyFn
	HostName                            string
	// contains filtered or unexported fields
}
Config contains all the service config for worker
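Note that the dynamicconfig fields are property functions rather than plain values: they are evaluated on each read, so updated dynamic config takes effect without restarting the worker. A minimal sketch of the read pattern, with the field choice here being just an example:

```go
package worker

// throttleLimit re-reads the dynamic value on every call, so changes to
// dynamic config take effect without a worker restart.
func throttleLimit(cfg *Config) int {
	return cfg.PersistenceMaxQPS()
}
```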
type Service
Service represents the cadence-worker service. This service hosts all background processing needed for a Cadence cluster:
1. Replicator: handles applying replication tasks generated by remote clusters.
2. Indexer: handles uploading of visibility records to Elasticsearch.
3. Archiver: handles archival of workflow histories.
Directories
Path | Synopsis
---|---
indexer | Package indexer is a generated GoMock package.
worker | Package worker is a generated GoMock package.