Documentation ¶
Index ¶
- Constants
- func Load(env string, configDir string, zone string, config interface{}) error
- type AWSEnvironmentCredential
- type AWSSigning
- type AWSStaticCredential
- type Archival
- type ArchivalDomainDefaults
- type AsyncWorkflowQueueProvider
- type Authorization
- type AuthorizationProvider
- type Blobstore
- type Cassandra
- type ClusterConfig
- type ClusterGroupMetadata
- type ClusterInformation
- type ClusterRedirectionPolicy
- type Config
- type CustomDatastoreConfig
- type DBShardConnection
- type DataStore
- type DomainDefaults
- type DynamicConfig
- type ElasticSearchConfig
- type FileBlobstore
- type FilestoreArchiver
- type HTTP
- type HeaderRule
- type HistoryArchival
- type HistoryArchivalDomainDefaults
- type HistoryArchiverProvider
- type HistoryShardRange
- type JwtCredentials
- type KafkaConfig
- func (k *KafkaConfig) GetBrokersForKafkaCluster(kafkaCluster string) []string
- func (k *KafkaConfig) GetKafkaClusterForTopic(topic string) string
- func (k *KafkaConfig) GetKafkaPropertyForTopic(topic string, property string) any
- func (k *KafkaConfig) GetTopicsForApplication(app string) TopicList
- func (k *KafkaConfig) Validate(checkApp bool)
- type Logger
- type Membership
- type Metrics
- type MultipleDatabasesConfigEntry
- type NoSQL
- type NoopAuthorizer
- type OAuthAuthorizer
- type OAuthProvider
- type PProf
- type PProfInitializerImpl
- type PeerProvider
- type Persistence
- type PinotMigration
- type PinotVisibilityConfig
- type PublicClient
- type RPC
- type Replicator
- type S3Archiver
- type SASL
- type SQL
- type Service
- type ShardedNoSQL
- type ShardingPolicy
- type Statsd
- type TLS
- type TasklistHashing
- type TopicConfig
- type TopicList
- type VisibilityArchival
- type VisibilityArchivalDomainDefaults
- type VisibilityArchiverProvider
- type YamlNode
Constants ¶
const (
	// NonShardedStoreName is the shard name used for singular (non-sharded) stores
	NonShardedStoreName = "NonShardedStore"
	FilestoreConfig     = "filestore"
	S3storeConfig       = "s3store"
)
const (
	// EnvKeyRoot is the environment variable key for the runtime root dir
	EnvKeyRoot = "CADENCE_ROOT"
	// EnvKeyConfigDir is the environment variable key for the config dir
	EnvKeyConfigDir = "CADENCE_CONFIG_DIR"
	// EnvKeyEnvironment is the environment variable key for the environment
	EnvKeyEnvironment = "CADENCE_ENVIRONMENT"
	// EnvKeyAvailabilityZone is the environment variable key for the AZ
	EnvKeyAvailabilityZone = "CADENCE_AVAILABILTY_ZONE"
)
const (
	// StoreTypeSQL refers to sql based storage as persistence store
	StoreTypeSQL = "sql"
	// StoreTypeCassandra refers to cassandra as persistence store
	StoreTypeCassandra = "cassandra"
)
Variables ¶
This section is empty.
Functions ¶
func Load ¶
func Load(env string, configDir string, zone string, config interface{}) error
Load loads the configuration from a set of yaml config files found in the config directory.
The loader first fetches the set of files matching a pre-determined naming convention, then sorts them by hierarchy order, and finally loads the files one after another, with the key/values in later files overriding the key/values in earlier files.
The hierarchy is as follows, from lowest to highest:
base.yaml
	env.yaml    -- environment is one of the input params, e.g. development
		env_az.yaml -- zone is another input param
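A minimal usage sketch, assuming a ./config directory containing base.yaml, development.yaml, and development_az1.yaml (the directory layout and zone name are hypothetical):

package main

import (
	"log"

	"github.com/uber/cadence/common/config"
)

func main() {
	// Later files in the hierarchy override earlier ones:
	// base.yaml < development.yaml < development_az1.yaml
	var cfg config.Config
	if err := config.Load("development", "./config", "az1", &cfg); err != nil {
		log.Fatalf("failed to load config: %v", err)
	}
	log.Printf("loaded config for %d services", len(cfg.Services))
}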
Types ¶
type AWSEnvironmentCredential ¶
type AWSEnvironmentCredential struct {
Region string `yaml:"region"`
}
AWSEnvironmentCredential creates a new Session from SDK defaults, config files, environment, and user-provided config files. See more in https://github.com/aws/aws-sdk-go/blob/3974dd034387fbc7cf09c8cd2400787ce07f3285/aws/session/session.go#L147
type AWSSigning ¶
type AWSSigning struct {
	Enable                bool                      `yaml:"enable"`
	StaticCredential      *AWSStaticCredential      `yaml:"staticCredential"`
	EnvironmentCredential *AWSEnvironmentCredential `yaml:"environmentCredential"`
}
AWSSigning contains the config to enable signing. Either StaticCredential or EnvironmentCredential must be provided.
func (AWSSigning) GetCredentials ¶ added in v1.2.1
func (a AWSSigning) GetCredentials() (*credentials.Credentials, *string, error)
func (AWSSigning) Validate ¶ added in v1.2.1
func (a AWSSigning) Validate() error
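A minimal sketch of enabling signing with a static credential, then validating it (all credential values are placeholders; assumes the package is imported as config):

signing := config.AWSSigning{
	Enable: true,
	StaticCredential: &config.AWSStaticCredential{
		AccessKey: "placeholder-access-key",
		SecretKey: "placeholder-secret-key",
		Region:    "us-east-1",
	},
}
if err := signing.Validate(); err != nil {
	// handle invalid signing config, e.g. when neither credential is provided
	log.Fatalf("invalid AWS signing config: %v", err)
}
creds, region, err := signing.GetCredentials()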
type AWSStaticCredential ¶
type AWSStaticCredential struct {
	AccessKey    string `yaml:"accessKey"`
	Region       string `yaml:"region"`
	SecretKey    string `yaml:"secretKey"`
	SessionToken string `yaml:"sessionToken"`
}
AWSStaticCredential creates a static credentials value provider. SessionToken is only required for temporary security credentials retrieved via STS; otherwise an empty string can be passed for this parameter. See more in https://github.com/aws/aws-sdk-go/blob/master/aws/credentials/static_provider.go#L21
type Archival ¶
type Archival struct {
	// History is the config for the history archival
	History HistoryArchival `yaml:"history"`
	// Visibility is the config for visibility archival
	Visibility VisibilityArchival `yaml:"visibility"`
}
Archival contains the config for archival
func (*Archival) Validate ¶
func (a *Archival) Validate(domainDefaults *ArchivalDomainDefaults) error
Validate validates the archival config
type ArchivalDomainDefaults ¶
type ArchivalDomainDefaults struct {
	// History is the domain default history archival config for each domain
	History HistoryArchivalDomainDefaults `yaml:"history"`
	// Visibility is the domain default visibility archival config for each domain
	Visibility VisibilityArchivalDomainDefaults `yaml:"visibility"`
}
ArchivalDomainDefaults is the default archival config for each domain
type AsyncWorkflowQueueProvider ¶ added in v1.2.8
type AsyncWorkflowQueueProvider struct {
	Type   string    `yaml:"type"`
	Config *YamlNode `yaml:"config"`
}
AsyncWorkflowQueueProvider contains the config for an async workflow queue. Type is the implementation type of the queue provider, and Config is the configuration for that provider. Config types and structures expected in the main default binary include:
- type: "kafka", config: *github.com/uber/cadence/common/asyncworkflow/queue/kafka.QueueConfig
type Authorization ¶ added in v0.23.1
type Authorization struct {
	OAuthAuthorizer OAuthAuthorizer `yaml:"oauthAuthorizer"`
	NoopAuthorizer  NoopAuthorizer  `yaml:"noopAuthorizer"`
}
func (*Authorization) Validate ¶ added in v0.23.1
func (a *Authorization) Validate() error
Validate validates the authorization config
type AuthorizationProvider ¶ added in v0.24.0
type Blobstore ¶
type Blobstore struct {
Filestore *FileBlobstore `yaml:"filestore"`
}
Blobstore contains the config for blobstore
type Cassandra ¶
type Cassandra = NoSQL
Cassandra contains the configuration to connect to a Cassandra cluster.
Deprecated: please use NoSQL instead; the structure is backward-compatible.
type ClusterConfig ¶
type ClusterConfig struct {
Brokers []string `yaml:"brokers"`
}
ClusterConfig describes the configuration for a single Kafka cluster
type ClusterGroupMetadata ¶ added in v0.23.1
type ClusterGroupMetadata struct {
	// FailoverVersionIncrement is the increment of each cluster version when failover happens.
	// It decides the maximum number of clusters in this replication group
	FailoverVersionIncrement int64 `yaml:"failoverVersionIncrement"`
	// PrimaryClusterName is the primary cluster name; only the primary cluster can register / update domains.
	// All clusters can do domain failover
	PrimaryClusterName string `yaml:"primaryClusterName"`
	// Deprecated: please use PrimaryClusterName
	MasterClusterName string `yaml:"masterClusterName"`
	// CurrentClusterName is the name of the cluster of the current deployment
	CurrentClusterName string `yaml:"currentClusterName"`
	// ClusterRedirectionPolicy contains the cluster redirection policy for global domains
	ClusterRedirectionPolicy *ClusterRedirectionPolicy `yaml:"clusterRedirectionPolicy"`
	// ClusterGroup contains information for each cluster within the replication group.
	// Key is the clusterName
	ClusterGroup map[string]ClusterInformation `yaml:"clusterGroup"`
	// Deprecated: please use ClusterGroup
	ClusterInformation map[string]ClusterInformation `yaml:"clusterInformation"`
}
ClusterGroupMetadata contains all the clusters participating in a replication group (aka XDC/GlobalDomain)
func (*ClusterGroupMetadata) FillDefaults ¶ added in v0.23.1
func (m *ClusterGroupMetadata) FillDefaults()
FillDefaults populates default values for unspecified fields
func (*ClusterGroupMetadata) Validate ¶ added in v0.23.1
func (m *ClusterGroupMetadata) Validate() error
Validate validates ClusterGroupMetadata
type ClusterInformation ¶
type ClusterInformation struct {
	Enabled bool `yaml:"enabled"`
	// InitialFailoverVersion is the identifier of each cluster. 0 <= the value < failoverVersionIncrement
	InitialFailoverVersion int64 `yaml:"initialFailoverVersion"`
	// NewInitialFailoverVersion is a new failover version for an initialFailoverVersion migration,
	// for when it's necessary to migrate between two values.
	//
	// This is a pointer to imply optionality: it's an optional field and its absence
	// is indicated by a nil pointer. Zero is a valid value
	NewInitialFailoverVersion *int64 `yaml:"newInitialFailoverVersion"`
	// RPCName indicates the remote service name
	RPCName string `yaml:"rpcName"`
	// RPCAddress indicates the remote service address (Host:Port). Host can be a DNS name.
	// For the current cluster, it's usually the same as publicClient.hostPort
	RPCAddress string `yaml:"rpcAddress" validate:"nonzero"`
	// RPCTransport specifies the transport to use for replication traffic.
	// Allowed values: tchannel|grpc
	// Default: tchannel
	RPCTransport string `yaml:"rpcTransport"`
	// AuthorizationProvider contains the information to authorize the cluster
	AuthorizationProvider AuthorizationProvider `yaml:"authorizationProvider"`
	// TLS configures client TLS/SSL authentication for connections to this cluster
	TLS TLS `yaml:"tls"`
}
ClusterInformation contains the information about each cluster participating in cross DC
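A hedged sketch of a two-cluster replication group (cluster names, addresses, and the increment value are hypothetical):

metadata := &config.ClusterGroupMetadata{
	FailoverVersionIncrement: 10,
	PrimaryClusterName:       "cluster0",
	CurrentClusterName:       "cluster0",
	ClusterGroup: map[string]config.ClusterInformation{
		"cluster0": {
			Enabled:                true,
			InitialFailoverVersion: 0, // must satisfy 0 <= value < FailoverVersionIncrement
			RPCName:                "cadence-frontend",
			RPCAddress:             "cluster0.example.com:7833",
			RPCTransport:           "grpc",
		},
		"cluster1": {
			Enabled:                true,
			InitialFailoverVersion: 1,
			RPCName:                "cadence-frontend",
			RPCAddress:             "cluster1.example.com:7833",
			RPCTransport:           "grpc",
		},
	},
}
metadata.FillDefaults()
if err := metadata.Validate(); err != nil {
	// handle invalid cluster group metadata
}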
type ClusterRedirectionPolicy ¶ added in v0.24.0
type ClusterRedirectionPolicy struct {
	// Policy supports "noop", "selected-apis-forwarding" and "all-domain-apis-forwarding"; default (when empty) is "noop".
	//
	// 1) "noop" will not do any forwarding.
	//
	// 2) "all-domain-apis-forwarding" will forward all domain-specific APIs (worker and non-worker) if the current active domain is
	// the same as "allDomainApisForwardingTargetCluster" (or "allDomainApisForwardingTargetCluster" is empty); otherwise it falls back to "selected-apis-forwarding".
	//
	// 3) "selected-apis-forwarding" will forward all non-worker APIs, including:
	// 1. StartWorkflowExecution
	// 2. SignalWithStartWorkflowExecution
	// 3. SignalWorkflowExecution
	// 4. RequestCancelWorkflowExecution
	// 5. TerminateWorkflowExecution
	// 6. QueryWorkflow
	// 7. ResetWorkflow
	//
	// 4) "selected-apis-forwarding-v2" will forward everything "selected-apis-forwarding" does, plus activity responses
	// and heartbeats, but no other worker APIs.
	//
	// "selected-apis-forwarding(-v2)" and "all-domain-apis-forwarding" can work with the EnableDomainNotActiveAutoForwarding dynamic config to select certain domains using the policy.
	//
	// Usage recommendation: when enabling the XDC (global domain) feature, either "all-domain-apis-forwarding" or "selected-apis-forwarding(-v2)" should be used to ensure seamless domain failover (high availability).
	// Depending on the cost of cross-cluster calls:
	//
	// 1) If the network communication overhead is high (e.g. clusters are in remote datacenters in different regions), use "selected-apis-forwarding(-v2)".
	// But you must ensure a different set of workers with the same workflow & activity code is connected to each Cadence cluster.
	//
	// 2) If the network communication overhead is low (e.g. in the same datacenter, mostly for cluster migration usage), you can use "all-domain-apis-forwarding". Then one set of
	// workflow & activity workers connected to one of the Cadence clusters is enough, as all domain APIs are forwarded. See more details in the cluster migration section of the documentation.
	// Usually "allDomainApisForwardingTargetCluster" should be empty (the default) except for very rare cases: you have more than two clusters, and some are in a remote region while some are local.
	Policy string `yaml:"policy"`
	// AllDomainApisForwardingTargetCluster is a supplement for the "all-domain-apis-forwarding" policy. It decides how the policy falls back to the "selected-apis-forwarding" policy.
	// If this is not empty, and the current domain is not active in the cluster it names, then the policy falls back to "selected-apis-forwarding".
	// Default is empty, meaning that requests never fall back.
	AllDomainApisForwardingTargetCluster string `yaml:"allDomainApisForwardingTargetCluster"`
	// ToDC is not being used, but we have to keep it so that config loading is not broken
	ToDC string `yaml:"toDC"`
}
ClusterRedirectionPolicy contains the frontend datacenter redirection policy. When using the XDC (global domain) feature to failover a domain from one cluster to another, clients may call the passive cluster to start/signal workflows, etc. To provide a seamless failover experience, the cluster should use this forwarding option to forward those APIs to the active cluster.
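Continuing the sketch above, enabling full forwarding (e.g. for a cluster migration) could look like this:

metadata.ClusterRedirectionPolicy = &config.ClusterRedirectionPolicy{
	// Forward all domain APIs to the active cluster, so a single set of
	// workers connected to one cluster is sufficient.
	Policy: "all-domain-apis-forwarding",
}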
type Config ¶
type Config struct {
	// Ringpop is the ringpop related configuration
	Ringpop ringpopprovider.Config `yaml:"ringpop"`
	// Membership is used to configure the peer provider plugin
	Membership Membership `yaml:"membership"`
	// Persistence contains the configuration for cadence datastores
	Persistence Persistence `yaml:"persistence"`
	// Log is the logging config
	Log Logger `yaml:"log"`
	// ClusterGroupMetadata is the config containing all valid clusters and the active cluster
	ClusterGroupMetadata *ClusterGroupMetadata `yaml:"clusterGroupMetadata"`
	// Deprecated: please use ClusterGroupMetadata
	ClusterMetadata *ClusterGroupMetadata `yaml:"clusterMetadata"`
	// Deprecated: please use ClusterRedirectionPolicy under ClusterGroupMetadata
	DCRedirectionPolicy *ClusterRedirectionPolicy `yaml:"dcRedirectionPolicy"`
	// Services is a map of service name to service config items
	Services map[string]Service `yaml:"services"`
	// Kafka is the config for connecting to kafka
	Kafka KafkaConfig `yaml:"kafka"`
	// Archival is the config for archival
	Archival Archival `yaml:"archival"`
	// PublicClient is the config for the sys worker service connecting to the cadence frontend
	PublicClient PublicClient `yaml:"publicClient"`
	// DynamicConfigClient is the config for setting up the file-based dynamic config client.
	// Filepath is relative to the root directory unless the path is absolute.
	// Included for backwards compatibility; please transition to DynamicConfig.
	// If both are specified, DynamicConfig will be used.
	DynamicConfigClient dynamicconfig.FileBasedClientConfig `yaml:"dynamicConfigClient"`
	// DynamicConfig is the config for setting up all dynamic config clients.
	// Allows for changes in client without needing a code change
	DynamicConfig DynamicConfig `yaml:"dynamicconfig"`
	// DomainDefaults is the default config for every domain
	DomainDefaults DomainDefaults `yaml:"domainDefaults"`
	// Blobstore is the config for setting up the blobstore
	Blobstore Blobstore `yaml:"blobstore"`
	// Authorization is the config for setting up authorization
	Authorization Authorization `yaml:"authorization"`
	// HeaderForwardingRules defines which inbound headers to include or exclude on outbound calls
	HeaderForwardingRules []HeaderRule `yaml:"headerForwardingRules"`
	// AsyncWorkflowQueues is the config for predefining async workflow queue(s).
	// Note: this is not implemented yet; it's coming in the next release.
	// To use Async APIs for a domain, first specify the queue using the Admin API:
	// either refer to one of the predefined queues in this config, or alternatively specify the queue details inline in the API call.
	AsyncWorkflowQueues map[string]AsyncWorkflowQueueProvider `yaml:"asyncWorkflowQueues"`
}
Config contains the configuration for a set of cadence services
func (*Config) GetServiceConfig ¶ added in v0.24.0
func (*Config) ValidateAndFillDefaults ¶
ValidateAndFillDefaults validates this config and fills default values if needed
type CustomDatastoreConfig ¶
type CustomDatastoreConfig struct {
	// Name of the custom datastore
	Name string `yaml:"name"`
	// Options is a set of key-value attributes that can be used by an AbstractDatastoreFactory implementation
	Options map[string]string `yaml:"options"`
}
CustomDatastoreConfig is the configuration for connecting to a custom datastore that is not supported by cadence core.
TODO: can we remove it?
type DBShardConnection ¶ added in v1.2.1
type DBShardConnection struct {
	// NoSQLPlugin is the NoSQL plugin used for connecting to the DB shard
	NoSQLPlugin *NoSQL `yaml:"nosqlPlugin"`
}
DBShardConnection contains configuration for one NoSQL DB Shard
type DataStore ¶
type DataStore struct {
	// Cassandra contains the config for a cassandra datastore.
	// Deprecated: please use NoSQL instead, the structure is backward-compatible
	Cassandra *Cassandra `yaml:"cassandra"`
	// SQL contains the config for a SQL based datastore
	SQL *SQL `yaml:"sql"`
	// NoSQL contains the config for a NoSQL based datastore
	NoSQL *NoSQL `yaml:"nosql"`
	// ShardedNoSQL contains the config for a collection of NoSQL datastores that are used as a single datastore
	ShardedNoSQL *ShardedNoSQL `yaml:"shardedNosql"`
	// ElasticSearch contains the config for an ElasticSearch datastore
	ElasticSearch *ElasticSearchConfig `yaml:"elasticsearch"`
	// Pinot contains the config for a Pinot datastore
	Pinot *PinotVisibilityConfig `yaml:"pinot"`
}
DataStore is the configuration for a single datastore
type DomainDefaults ¶
type DomainDefaults struct {
	// Archival is the default archival config for each domain
	Archival ArchivalDomainDefaults `yaml:"archival"`
}
DomainDefaults is the default config for each domain
type DynamicConfig ¶ added in v0.23.1
type DynamicConfig struct {
	Client      string                              `yaml:"client"`
	ConfigStore c.ClientConfig                      `yaml:"configstore"`
	FileBased   dynamicconfig.FileBasedClientConfig `yaml:"filebased"`
}
type ElasticSearchConfig ¶
type ElasticSearchConfig struct {
	URL     url.URL           `yaml:"url"`     //nolint:govet
	Indices map[string]string `yaml:"indices"` //nolint:govet
	// Version supports v6 and v7. Defaults to v6 if empty.
	Version string `yaml:"version"` //nolint:govet
	// optional username to communicate with ElasticSearch
	Username string `yaml:"username"` //nolint:govet
	// optional password to communicate with ElasticSearch
	Password string `yaml:"password"` //nolint:govet
	// optional, disables sniffing; according to issues on Github,
	// sniffing can cause issues like "no Elasticsearch node available"
	DisableSniff bool `yaml:"disableSniff"`
	// optional, disables the health check
	DisableHealthCheck bool `yaml:"disableHealthCheck"`
	// optional, use the AWS signing client.
	// See more info at https://github.com/olivere/elastic/wiki/Using-with-AWS-Elasticsearch-Service
	AWSSigning AWSSigning `yaml:"awsSigning"`
	// optional, use signed certificates over https
	TLS TLS `yaml:"tls"`
}
ElasticSearchConfig for connecting to ElasticSearch
func (*ElasticSearchConfig) GetVisibilityIndex ¶
func (cfg *ElasticSearchConfig) GetVisibilityIndex() string
GetVisibilityIndex returns the visibility index name
func (*ElasticSearchConfig) SetUsernamePassword ¶
func (cfg *ElasticSearchConfig) SetUsernamePassword()
SetUsernamePassword sets the username/password into the URL. It is a bit tricky because url.URL doesn't expose the username/password in the struct, due to security concerns.
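A hedged sketch (endpoint, index name, and credentials are placeholders):

u, err := url.Parse("https://es.example.com:9200")
if err != nil {
	log.Fatal(err)
}
es := config.ElasticSearchConfig{
	URL:      *u,
	Indices:  map[string]string{"visibility": "cadence-visibility-dev"},
	Username: "cadence",
	Password: "placeholder-password",
}
es.SetUsernamePassword()    // embeds the credentials into es.URL
_ = es.GetVisibilityIndex() // returns the configured visibility index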
type FileBlobstore ¶
type FileBlobstore struct {
OutputDirectory string `yaml:"outputDirectory"`
}
FileBlobstore contains the config for a file backed blobstore
type FilestoreArchiver ¶
FilestoreArchiver contains the config for the filestore archiver
type HTTP ¶ added in v1.2.1
type HTTP struct {
	// Port for listening to HTTP requests
	Port uint16 `yaml:"port"`
	// Procedures is the list of RPC procedures available to call using HTTP
	Procedures []string `yaml:"procedures"`
	// TLS allows configuring TLS/SSL for HTTP requests
	TLS TLS `yaml:"tls"`
	// TLSMode represents the TLS mode of the transport.
	// Available modes: disabled, permissive, enforced
	TLSMode yarpctls.Mode `yaml:"TLSMode"`
}
HTTP API configuration
type HeaderRule ¶ added in v1.0.0
type HistoryArchival ¶
type HistoryArchival struct {
	// Status is the status of history archival: enabled, disabled, or paused
	Status string `yaml:"status"`
	// EnableRead indicates whether history can be read from archival
	EnableRead bool `yaml:"enableRead"`
	// Provider contains the config for all history archivers
	Provider HistoryArchiverProvider `yaml:"provider"`
}
HistoryArchival contains the config for history archival
type HistoryArchivalDomainDefaults ¶
type HistoryArchivalDomainDefaults struct {
	// Status is the domain default status of history archival: enabled or disabled
	Status string `yaml:"status"`
	// URI is the domain default URI for the history archiver
	URI string `yaml:"URI"`
}
HistoryArchivalDomainDefaults is the default history archival config for each domain
type HistoryArchiverProvider ¶
HistoryArchiverProvider contains the config for all history archivers.
Because archivers support external plugins, there is no fundamental structure expected; rather, a top-level key per named store plugin is required, and it will be used to select the config for a plugin as it is initialized.
Config keys and structures expected in the main default binary include:
- FilestoreConfig: *FilestoreArchiver, used with provider scheme github.com/uber/cadence/common/archiver/filestore.URIScheme
- S3storeConfig: *S3Archiver, used with provider scheme github.com/uber/cadence/common/archiver/s3store.URIScheme
- "gstorage" via github.com/uber/cadence/common/archiver/gcloud.ConfigKey: github.com/uber/cadence/common/archiver/gcloud.Config, used with provider scheme "gs" github.com/uber/cadence/common/archiver/gcloud.URIScheme
For handling hardcoded config, see ToYamlNode.
type HistoryShardRange ¶ added in v1.2.1
type HistoryShardRange struct { // Start defines the inclusive lower bound for the history shard range Start int `yaml:"start"` // End defines the exclusive upper bound for the history shard range End int `yaml:"end"` // Shard defines the shard that owns this range Shard string `yaml:"shard"` }
HistoryShardRange defines one continuous range of history shards owned by a DB shard
type JwtCredentials ¶ added in v0.23.1
type KafkaConfig ¶
type KafkaConfig struct {
	TLS      TLS                      `yaml:"tls"`
	SASL     SASL                     `yaml:"sasl"`
	Clusters map[string]ClusterConfig `yaml:"clusters"`
	Topics   map[string]TopicConfig   `yaml:"topics"`
	// Applications describes the applications that will use the Kafka topics
	Applications map[string]TopicList `yaml:"applications"`
	Version      string               `yaml:"version"`
}
KafkaConfig describes the configuration needed to connect to all kafka clusters
func (*KafkaConfig) GetBrokersForKafkaCluster ¶
func (k *KafkaConfig) GetBrokersForKafkaCluster(kafkaCluster string) []string
GetBrokersForKafkaCluster gets the brokers for a cluster
func (*KafkaConfig) GetKafkaClusterForTopic ¶
func (k *KafkaConfig) GetKafkaClusterForTopic(topic string) string
GetKafkaClusterForTopic gets the cluster for a topic
func (*KafkaConfig) GetKafkaPropertyForTopic ¶ added in v1.2.7
func (k *KafkaConfig) GetKafkaPropertyForTopic(topic string, property string) any
GetKafkaPropertyForTopic gets a property value for a topic
func (*KafkaConfig) GetTopicsForApplication ¶
func (k *KafkaConfig) GetTopicsForApplication(app string) TopicList
GetTopicsForApplication gets the topic list for an application
func (*KafkaConfig) Validate ¶
func (k *KafkaConfig) Validate(checkApp bool)
Validate will validate config for kafka
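A hedged sketch of wiring one local cluster and resolving topic metadata (broker address and topic names are hypothetical):

kafka := config.KafkaConfig{
	Clusters: map[string]config.ClusterConfig{
		"local": {Brokers: []string{"127.0.0.1:9092"}},
	},
	Topics: map[string]config.TopicConfig{
		"sample-topic": {
			Cluster:    "local",
			Properties: map[string]any{"secure": false},
		},
	},
}
kafka.Validate(false) // checkApp=false skips validating the Applications map
brokers := kafka.GetBrokersForKafkaCluster("local")                // ["127.0.0.1:9092"]
cluster := kafka.GetKafkaClusterForTopic("sample-topic")           // "local"
secure := kafka.GetKafkaPropertyForTopic("sample-topic", "secure") // false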
type Logger ¶
type Logger struct {
	// Stdout, if true, sends output to standard out.
	// By default this is false and output goes to standard error
	Stdout bool `yaml:"stdout"`
	// Level is the desired log level
	Level string `yaml:"level"`
	// OutputFile is the path to the log output file.
	// Stdout must be false, otherwise Stdout takes precedence
	OutputFile string `yaml:"outputFile"`
	// LevelKey is the desired log level key, defaults to "level"
	LevelKey string `yaml:"levelKey"`
	// Encoding decides the format; supports "console" and "json".
	// "json" prints the log in JSON format (better for machines), while "console" prints in plain-text format (more human friendly).
	// Default is "json"
	Encoding string `yaml:"encoding"`
}
Logger contains the config items for logger
type Membership ¶ added in v1.2.13
type Membership struct {
Provider PeerProvider `yaml:"provider"`
}
Membership holds peer provider configuration.
type Metrics ¶
type Metrics struct {
	// M3 is the configuration for the m3 metrics reporter
	M3 *m3.Configuration `yaml:"m3"`
	// Statsd is the configuration for the statsd reporter
	Statsd *Statsd `yaml:"statsd"`
	// Prometheus is the configuration for the prometheus reporter.
	// Some documentation below, because the tally library is missing it:
	// in this configuration, the default timerType is "histogram"; alternatively "summary" is also supported.
	// In some cases summary is better. Choose it wisely.
	// For histogram, default buckets are defined in https://github.com/uber/cadence/blob/master/common/metrics/tally/prometheus/buckets.go#L34
	// For summary, default objectives are defined in https://github.com/uber-go/tally/blob/137973e539cd3589f904c23d0b3a28c579fd0ae4/prometheus/reporter.go#L70
	// You can customize the buckets/objectives if the defaults are not good enough.
	Prometheus *prometheus.Configuration `yaml:"prometheus"`
	// Tags is the set of key-value pairs to be reported as part of every metric
	Tags map[string]string `yaml:"tags"`
	// Prefix sets the prefix for all outgoing metrics
	Prefix string `yaml:"prefix"`
	// ReportingInterval is the interval of the metrics reporter, defaults to 1s
	ReportingInterval time.Duration `yaml:"reportingInterval"`
}
Metrics contains the config items for metrics subsystem
type MultipleDatabasesConfigEntry ¶ added in v0.24.0
type MultipleDatabasesConfigEntry struct {
	// User is the username to be used for the conn
	User string `yaml:"user"`
	// Password is the password corresponding to the user name
	Password string `yaml:"password"`
	// DatabaseName is the name of the SQL database to connect to
	DatabaseName string `yaml:"databaseName" validate:"nonzero"`
	// ConnectAddr is the remote addr of the database
	ConnectAddr string `yaml:"connectAddr" validate:"nonzero"`
}
MultipleDatabasesConfigEntry is an entry for MultipleDatabasesConfig to connect to a single SQL database
type NoSQL ¶ added in v0.22.0
type NoSQL struct {
	// PluginName is the name of the NoSQL plugin, default is "cassandra". Supported values: cassandra
	PluginName string `yaml:"pluginName"`
	// Hosts is a csv of cassandra endpoints
	Hosts string `yaml:"hosts" validate:"nonzero"`
	// Port is the cassandra port used for connection by the gocql client
	Port int `yaml:"port"`
	// User is the cassandra user used for authentication by the gocql client
	User string `yaml:"user"`
	// Password is the cassandra password used for authentication by the gocql client
	Password string `yaml:"password"`
	// AllowedAuthenticators informs the cassandra client to expect a custom authenticator
	AllowedAuthenticators []string `yaml:"allowedAuthenticators"`
	// Keyspace is the cassandra keyspace
	Keyspace string `yaml:"keyspace"`
	// Region is the region filter arg for cassandra
	Region string `yaml:"region"`
	// Datacenter is the data center filter arg for cassandra
	Datacenter string `yaml:"datacenter"`
	// MaxConns is the max number of connections to this datastore for a single keyspace
	MaxConns int `yaml:"maxConns"`
	// TLS configuration
	TLS *TLS `yaml:"tls"`
	// ProtoVersion is the protocol version
	ProtoVersion int `yaml:"protoVersion"`
	// ConnectTimeout defines the duration for the initial dial
	ConnectTimeout time.Duration `yaml:"connectTimeout"`
	// Timeout is the connection timeout
	Timeout time.Duration `yaml:"timeout"`
	// Consistency defines the default consistency level
	Consistency string `yaml:"consistency"`
	// SerialConsistency sets the consistency for the serial part of queries
	SerialConsistency string `yaml:"serialConsistency"`
	// ConnectAttributes is a set of key-value attributes as a supplement/extension to the common fields above.
	// Use it ONLY when a setting is too specific to a particular NoSQL database to belong in the common struct;
	// otherwise please add new fields to the struct for better documentation.
	// If it is being used by any database, update this comment to make that clear
	ConnectAttributes map[string]string `yaml:"connectAttributes"`
}
NoSQL contains configuration to connect to NoSQL Database cluster
func (*NoSQL) ConvertToShardedNoSQLConfig ¶ added in v1.2.1
func (n *NoSQL) ConvertToShardedNoSQLConfig() *ShardedNoSQL
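A hedged sketch of converting a single-Cassandra config into its sharded equivalent (host and keyspace values are placeholders):

nosql := config.NoSQL{
	PluginName: "cassandra",
	Hosts:      "127.0.0.1",
	Port:       9042,
	Keyspace:   "cadence",
}
// Presumably wraps the single connection as the default (non-sharded) shard.
sharded := nosql.ConvertToShardedNoSQLConfig()
_ = sharded.DefaultShard // the shard holding the non-sharded tables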
type NoopAuthorizer ¶ added in v0.23.1
type NoopAuthorizer struct {
Enable bool `yaml:"enable"`
}
type OAuthAuthorizer ¶ added in v0.23.1
type OAuthAuthorizer struct {
	Enable bool `yaml:"enable"`
	// MaxJwtTTL is the max TTL in the claim
	MaxJwtTTL int64 `yaml:"maxJwtTTL"`
	// JwtCredentials are the credentials to verify/create the JWT using public/private keys
	JwtCredentials *JwtCredentials `yaml:"jwtCredentials"`
	// Provider is the OAuth provider
	Provider *OAuthProvider `yaml:"provider"`
}
type OAuthProvider ¶ added in v1.2.8
type OAuthProvider struct {
	JWKSURL             string `yaml:"jwksURL"`
	GroupsAttributePath string `yaml:"groupsAttributePath"`
	AdminAttributePath  string `yaml:"adminAttributePath"`
}
OAuthProvider is used to validate tokens provided by 3rd party Identity Provider service
type PProf ¶
type PProf struct {
	// Port is the port to which PProf will bind
	Port int `yaml:"port"`
	// Host is the host to which PProf will bind, defaults to `localhost`
	Host string `yaml:"host"`
}
PProf contains the config items for pprof
func (*PProf) NewInitializer ¶
func (cfg *PProf) NewInitializer(logger log.Logger) *PProfInitializerImpl
NewInitializer creates a new instance of PProf Initializer
type PProfInitializerImpl ¶
PProfInitializerImpl initialize the pprof based on config
func (*PProfInitializerImpl) Start ¶
func (initializer *PProfInitializerImpl) Start() error
Start the pprof based on config
type PeerProvider ¶ added in v1.2.13
PeerProvider is the provider config. Contents depend on the plugin in use.
type Persistence ¶
type Persistence struct {
	// DefaultStore is the name of the default data store to use
	DefaultStore string `yaml:"defaultStore" validate:"nonzero"`
	// VisibilityStore is the name of the datastore to be used for visibility records.
	// Must provide one of VisibilityStore and AdvancedVisibilityStore
	VisibilityStore string `yaml:"visibilityStore"`
	// AdvancedVisibilityStore is the name of the datastore to be used for visibility records.
	// Must provide one of VisibilityStore and AdvancedVisibilityStore
	AdvancedVisibilityStore string `yaml:"advancedVisibilityStore"`
	// HistoryMaxConns is the desired number of conns to the history store. A value specified
	// here overrides the MaxConns config specified as part of the datastore
	HistoryMaxConns int `yaml:"historyMaxConns"`
	// EnablePersistenceLatencyHistogramMetrics enables latency histogram metrics for the persistence layer
	EnablePersistenceLatencyHistogramMetrics bool `yaml:"enablePersistenceLatencyHistogramMetrics"`
	// NumHistoryShards is the desired number of history shards. It's for computing the historyShardID from the workflowID into [0, NumHistoryShards).
	// Therefore, the value cannot be changed once set.
	// TODO This config doesn't belong here, needs refactoring
	NumHistoryShards int `yaml:"numHistoryShards" validate:"nonzero"`
	// DataStores contains the configuration for all datastores
	DataStores map[string]DataStore `yaml:"datastores"`
	// TransactionSizeLimit is the largest allowed transaction size
	// TODO: move dynamic config out of static config
	TransactionSizeLimit dynamicconfig.IntPropertyFn `yaml:"-" json:"-"`
	// ErrorInjectionRate is the rate for injecting random errors
	// TODO: move dynamic config out of static config
	ErrorInjectionRate dynamicconfig.FloatPropertyFn `yaml:"-" json:"-"`
}
Persistence contains the configuration for data store / persistence layer
func (*Persistence) DefaultStoreType ¶
func (c *Persistence) DefaultStoreType() string
DefaultStoreType returns the storeType for the default persistence store
func (*Persistence) FillDefaults ¶ added in v0.23.1
func (c *Persistence) FillDefaults()
FillDefaults populates default values for unspecified fields in persistence config
func (*Persistence) IsAdvancedVisibilityConfigExist ¶
func (c *Persistence) IsAdvancedVisibilityConfigExist() bool
IsAdvancedVisibilityConfigExist returns whether user specified advancedVisibilityStore in config
func (*Persistence) Validate ¶
func (c *Persistence) Validate() error
Validate validates the persistence config
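A hedged sketch of a minimal persistence config backed by a single Cassandra datastore (store name and connection values are placeholders):

persistence := config.Persistence{
	DefaultStore:     "default",
	VisibilityStore:  "default",
	NumHistoryShards: 4, // cannot be changed once set
	DataStores: map[string]config.DataStore{
		"default": {
			NoSQL: &config.NoSQL{
				PluginName: "cassandra",
				Hosts:      "127.0.0.1",
				Keyspace:   "cadence",
			},
		},
	},
}
persistence.FillDefaults()
if err := persistence.Validate(); err != nil {
	// handle invalid persistence config
}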
type PinotMigration ¶ added in v1.2.11
type PinotMigration struct {
Enabled bool `yaml:"enabled"` //nolint:govet
}
PinotMigration contains the config for Pinot migration
type PinotVisibilityConfig ¶ added in v1.2.5
type PinotVisibilityConfig struct {
	Cluster     string         `yaml:"cluster"`     //nolint:govet
	Broker      string         `yaml:"broker"`      //nolint:govet
	Table       string         `yaml:"table"`       //nolint:govet
	ServiceName string         `yaml:"serviceName"` //nolint:govet
	Migration   PinotMigration `yaml:"migration"`   //nolint:govet
}
PinotVisibilityConfig for connecting to Pinot
type PublicClient ¶
type PublicClient struct {
	// HostPort is the host port to connect to. Host can be a DNS name.
	// Defaults to the current cluster's RPCAddress in ClusterInformation
	HostPort string `yaml:"hostPort"`
	// Transport is the transport to use when communicating using the SDK client.
	// Defaults to:
	// - the current cluster's RPCTransport in ClusterInformation (if HostPort is not provided)
	// - grpc (if HostPort is provided)
	Transport string `yaml:"transport"`
	// RefreshInterval is the interval to refresh DNS. Defaults to 10s
	RefreshInterval time.Duration `yaml:"RefreshInterval"`
}
PublicClient is config for connecting to cadence frontend
type RPC ¶
type RPC struct {
	// Port is the port to which the Thrift TChannel will bind
	Port uint16 `yaml:"port"`
	// GRPCPort is the port to which the grpc listener will bind
	GRPCPort uint16 `yaml:"grpcPort"`
	// BindOnLocalHost is true if localhost is the bind address
	BindOnLocalHost bool `yaml:"bindOnLocalHost"`
	// BindOnIP can be used to bind the service to a specific ip (eg. `0.0.0.0`);
	// check net.ParseIP for supported syntax. Only IPv4 is supported,
	// and the option is mutually exclusive with `BindOnLocalHost`
	BindOnIP string `yaml:"bindOnIP"`
	// DisableLogging disables all logging for rpc
	DisableLogging bool `yaml:"disableLogging"`
	// LogLevel is the desired log level
	LogLevel string `yaml:"logLevel"`
	// GRPCMaxMsgSize allows overriding the default (4MB) message size for gRPC
	GRPCMaxMsgSize int `yaml:"grpcMaxMsgSize"`
	// TLS allows configuring optional TLS/SSL authentication on the server (only on the gRPC port)
	TLS TLS `yaml:"tls"`
	// HTTP keeps the configuration for the exposed HTTP API
	HTTP *HTTP `yaml:"http"`
}
RPC contains the rpc config items
type Replicator ¶
type Replicator struct{}
Replicator describes the configuration of the replicator.
TODO: can we remove it?
type S3Archiver ¶
type S3Archiver struct {
	Region           string  `yaml:"region"`
	Endpoint         *string `yaml:"endpoint"`
	S3ForcePathStyle bool    `yaml:"s3ForcePathStyle"`
}
S3Archiver contains the config for S3 archiver
type SASL ¶
type SASL struct {
	Enabled   bool   `yaml:"enabled"` // false as default
	User      string `yaml:"user"`
	Password  string `yaml:"password"`
	Algorithm string `yaml:"algorithm"` // plain, sha512 or sha256
}
SASL describes the SASL configuration (for Kafka)
type SQL ¶
type SQL struct {
	// User is the username to be used for the conn.
	// If useMultipleDatabases, it must be empty and provided via multipleDatabasesConfig instead
	User string `yaml:"user"`
	// Password is the password corresponding to the user name.
	// If useMultipleDatabases, it must be empty and provided via multipleDatabasesConfig instead
	Password string `yaml:"password"`
	// PluginName is the name of the SQL plugin
	PluginName string `yaml:"pluginName" validate:"nonzero"`
	// DatabaseName is the name of the SQL database to connect to.
	// If useMultipleDatabases, it must be empty and provided via multipleDatabasesConfig instead;
	// required if not useMultipleDatabases
	DatabaseName string `yaml:"databaseName"`
	// ConnectAddr is the remote addr of the database.
	// If useMultipleDatabases, it must be empty and provided via multipleDatabasesConfig instead;
	// required if not useMultipleDatabases
	ConnectAddr string `yaml:"connectAddr"`
	// ConnectProtocol is the protocol that goes with the ConnectAddr, e.g. tcp, unix
	ConnectProtocol string `yaml:"connectProtocol" validate:"nonzero"`
	// ConnectAttributes is a set of key-value attributes to be sent as part of the connect data_source_name url
	ConnectAttributes map[string]string `yaml:"connectAttributes"`
	// MaxConns is the max number of connections to this datastore
	MaxConns int `yaml:"maxConns"`
	// MaxIdleConns is the max number of idle connections to this datastore
	MaxIdleConns int `yaml:"maxIdleConns"`
	// MaxConnLifetime is the maximum time a connection can be alive
	MaxConnLifetime time.Duration `yaml:"maxConnLifetime"`
	// NumShards is the number of DB shards in a sharded sql database; default is 1 for a single SQL database setup.
	// It's for computing a shardID value in [0, NumShards) to decide which DB shard to query.
	// Relationship with NumHistoryShards: both values cannot be changed once set in the same cluster,
	// and the historyShardID value calculated from NumHistoryShards will be combined with this NumShards to get a dbShardID
	NumShards int `yaml:"nShards"`
	// TLS is the configuration for TLS connections
	TLS *TLS `yaml:"tls"`
	// EncodingType is the configuration for the type of encoding used for sql blobs
	EncodingType string `yaml:"encodingType"`
	// DecodingTypes is the configuration for all the sql blob decoding types which need to be supported.
	// DecodingTypes should not be removed unless there are no blobs in the database with that encoding type
	DecodingTypes []string `yaml:"decodingTypes"`
	// UseMultipleDatabases enables using multiple databases as a sharded SQL database; default is false.
	// When enabled, the connection will be established using MultipleDatabasesConfig instead of the single values
	// of User, Password, DatabaseName, and ConnectAddr
	UseMultipleDatabases bool `yaml:"useMultipleDatabases"`
	// MultipleDatabasesConfig is required when UseMultipleDatabases is true;
	// the length of the list should be exactly the same as NumShards
	MultipleDatabasesConfig []MultipleDatabasesConfigEntry `yaml:"multipleDatabasesConfig"`
}
SQL is the configuration for connecting to a SQL backed datastore
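A hedged sketch of a single-database SQL config (the plugin name and connection values are assumptions for illustration):

sqlCfg := config.SQL{
	PluginName:      "mysql", // assumed plugin name; use whichever SQL plugin your build registers
	User:            "cadence",
	Password:        "placeholder-password",
	DatabaseName:    "cadence",
	ConnectAddr:     "127.0.0.1:3306",
	ConnectProtocol: "tcp",
	MaxConns:        20,
	MaxIdleConns:    20,
	MaxConnLifetime: time.Hour,
}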
type Service ¶
type Service struct {
	// RPC is the rpc configuration
	RPC RPC `yaml:"rpc"`
	// Metrics is the metrics subsystem configuration
	Metrics Metrics `yaml:"metrics"`
	// PProf is the PProf configuration
	PProf PProf `yaml:"pprof"`
}
Service contains the service specific config items
type ShardedNoSQL ¶ added in v1.2.1
type ShardedNoSQL struct {
	// DefaultShard is the DB shard where the non-sharded tables (i.e. cluster metadata) are stored
	DefaultShard string `yaml:"defaultShard"`
	// ShardingPolicy is the configuration for the sharding strategy used
	ShardingPolicy ShardingPolicy `yaml:"shardingPolicy"`
	// Connections is the collection of NoSQL DB plugins that are used to connect to the shards
	Connections map[string]DBShardConnection `yaml:"connections"`
}
ShardedNoSQL contains configuration to connect to a set of NoSQL Database clusters in a sharded manner
type ShardingPolicy ¶ added in v1.2.1
type ShardingPolicy struct {
	// HistoryShardMapping defines the ranges of history shards stored by each DB shard. Ranges listed here *MUST*
	// be continuous and non-overlapping, such that the first range in the list starts at shard 0, each following
	// range starts at the previous range's End (End is exclusive), and the last range's End equals NumHistoryShards.
	HistoryShardMapping []HistoryShardRange `yaml:"historyShardMapping"`
	// TaskListHashing defines the parameters needed for shard ownership calculation based on hashing
	TaskListHashing TasklistHashing `yaml:"taskListHashing"`
}
ShardingPolicy contains configuration for physical DB sharding
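A hedged sketch for 16 history shards split across two DB shards (shard names are hypothetical). Given the exclusive End bound on HistoryShardRange, these ranges cover history shards 0..7 and 8..15 with no gaps or overlaps:

policy := config.ShardingPolicy{
	HistoryShardMapping: []config.HistoryShardRange{
		{Start: 0, End: 8, Shard: "shard-a"},  // history shards 0..7
		{Start: 8, End: 16, Shard: "shard-b"}, // history shards 8..15
	},
}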
type Statsd ¶
type Statsd struct {
	// HostPort is the host and port of the statsd server
	HostPort string `yaml:"hostPort" validate:"nonzero"`
	// Prefix is the prefix to use in reporting to statsd
	Prefix string `yaml:"prefix" validate:"nonzero"`
	// FlushInterval is the maximum interval for sending packets.
	// If it is not specified, it defaults to 1 second.
	FlushInterval time.Duration `yaml:"flushInterval"`
	// FlushBytes specifies the maximum udp packet size you wish to send.
	// If FlushBytes is unspecified, it defaults to 1432 bytes, which is
	// considered safe for local traffic.
	FlushBytes int `yaml:"flushBytes"`
}
Statsd contains the config items for statsd metrics reporter
type TLS ¶
type TLS struct {
	Enabled bool `yaml:"enabled"`
	// SSLMode is for Postgres (https://www.postgresql.org/docs/9.1/libpq-ssl.html) and MySQL;
	// defaults to "require" if Enabled is true.
	// For MySQL (https://github.com/go-sql-driver/mysql) it can also be set in ConnectAttributes; the default is tls-custom
	SSLMode string `yaml:"sslmode"`
	// CertFile and KeyFile are optional depending on server
	// config, but both fields must be omitted to avoid using a
	// client certificate
	CertFile string `yaml:"certFile"`
	KeyFile  string `yaml:"keyFile"`
	// CaFile is optional depending on server config
	CaFile  string   `yaml:"caFile"`
	CaFiles []string `yaml:"caFiles"`
	// EnableHostVerification should be turned on if you want to verify the hostname and server cert (like a wildcard for a cass cluster).
	// This option is basically the inverse of InsecureSkipVerify;
	// see InsecureSkipVerify in http://golang.org/pkg/crypto/tls/ for more info
	EnableHostVerification bool `yaml:"enableHostVerification"`
	// Set RequireClientAuth to true if mutual TLS is desired.
	// In this mode, clients will need to present a certificate signed by a CA
	// that is specified with the server's "caFile"/"caFiles" or stored in the server's host-level CA store.
	RequireClientAuth bool `yaml:"requireClientAuth"`
	ServerName        string `yaml:"serverName"`
}
TLS describes the TLS configuration
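A hedged sketch of a mutual-TLS-capable client config (file paths are placeholders):

tlsCfg := config.TLS{
	Enabled:                true,
	CertFile:               "/etc/cadence/tls/client.pem",
	KeyFile:                "/etc/cadence/tls/client.key",
	CaFile:                 "/etc/cadence/tls/ca.pem",
	EnableHostVerification: true, // verify the server hostname against its certificate
}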
type TasklistHashing ¶ added in v1.2.1
type TasklistHashing struct {
	// ShardOrder defines the order of shards to be used when hashing tasklists to shards
	ShardOrder []string `yaml:"shardOrder"`
}
type TopicConfig ¶
type TopicConfig struct {
	Cluster string `yaml:"cluster"`
	// Properties describes the topic's properties, such as whether it is secure
	Properties map[string]any `yaml:"properties,omitempty"`
}
TopicConfig describes the mapping from topic to Kafka cluster
type VisibilityArchival ¶
type VisibilityArchival struct {
	// Status is the status of visibility archival: enabled, disabled, or paused
	Status string `yaml:"status"`
	// EnableRead indicates whether visibility can be read from archival
	EnableRead bool `yaml:"enableRead"`
	// Provider contains the config for all visibility archivers
	Provider VisibilityArchiverProvider `yaml:"provider"`
}
VisibilityArchival contains the config for visibility archival
type VisibilityArchivalDomainDefaults ¶
type VisibilityArchivalDomainDefaults struct {
	// Status is the domain default status of visibility archival: enabled or disabled
	Status string `yaml:"status"`
	// URI is the domain default URI for the visibility archiver
	URI string `yaml:"URI"`
}
VisibilityArchivalDomainDefaults is the default visibility archival config for each domain
type VisibilityArchiverProvider ¶
VisibilityArchiverProvider contains the config for all visibility archivers.
Because archivers support external plugins, there is no fundamental structure expected; rather, a top-level key per named store plugin is required, and it will be used to select the config for a plugin as it is initialized.
Config keys and structures expected in the main default binary include:
- FilestoreConfig: *FilestoreArchiver, used with provider scheme github.com/uber/cadence/common/archiver/filestore.URIScheme
- S3storeConfig: *S3Archiver, used with provider scheme github.com/uber/cadence/common/archiver/s3store.URIScheme
- "gstorage" via github.com/uber/cadence/common/archiver/gcloud.ConfigKey: github.com/uber/cadence/common/archiver/gcloud.Config, used with provider scheme "gs" github.com/uber/cadence/common/archiver/gcloud.URIScheme
For handling hardcoded config, see ToYamlNode.
type YamlNode ¶ added in v1.2.7
type YamlNode struct {
// contains filtered or unexported fields
}
YamlNode is a lazy-unmarshaler, because *yaml.Node only exists in gopkg.in/yaml.v3, not v2, and go.uber.org/config currently uses only v2.
func ToYamlNode ¶ added in v1.2.7
ToYamlNode is a bit of a hack to get a *yaml.Node for config-parsing compatibility purposes. There is probably a better way to achieve yaml-loading compatibility, but this is at least fairly simple.