Documentation ¶
Index ¶
- Constants
- Variables
- func CheckAWSSigningConfig(config AWSSigning) error
- func ListenIP() (net.IP, error)
- func Load(env string, configDir string, zone string, config interface{}) error
- type AWSEnvironmentCredential
- type AWSSigning
- type AWSStaticCredential
- type Archival
- type ArchivalDomainDefaults
- type Authorization
- type Blobstore
- type BootstrapMode
- type Cassandra
- type ClusterConfig
- type ClusterGroupMetadata
- type ClusterInformation
- type Config
- type CustomDatastoreConfig
- type DCRedirectionPolicy
- type DataStore
- type DomainDefaults
- type DynamicConfig
- type ElasticSearchConfig
- type FileBlobstore
- type FilestoreArchiver
- type GRPCPorts
- type GstorageArchiver
- type HistoryArchival
- type HistoryArchivalDomainDefaults
- type HistoryArchiverProvider
- type JwtCredentials
- type KafkaConfig
- type Logger
- type Metrics
- type NoSQL
- type NoopAuthorizer
- type OAuthAuthorizer
- type PProf
- type PProfInitializerImpl
- type Persistence
- type PublicClient
- type RPC
- type RPCFactory
- func (d *RPCFactory) CreateDispatcherForOutbound(callerName string, serviceName string, hostName string) (*yarpc.Dispatcher, error)
- func (d *RPCFactory) CreateGRPCDispatcherForOutbound(callerName string, serviceName string, hostName string) (*yarpc.Dispatcher, error)
- func (d *RPCFactory) GetDispatcher() *yarpc.Dispatcher
- func (d *RPCFactory) GetMaxMessageSize() int
- func (d *RPCFactory) ReplaceGRPCPort(serviceName, hostAddress string) (string, error)
- type Replicator
- type Ringpop
- type RingpopFactory
- type S3Archiver
- type SASL
- type SQL
- type Service
- type Statsd
- type TLS
- type TopicConfig
- type TopicList
- type VisibilityArchival
- type VisibilityArchivalDomainDefaults
- type VisibilityArchiverProvider
Constants ¶
const (
	// EnvKeyRoot the environment variable key for runtime root dir
	EnvKeyRoot = "CADENCE_ROOT"
	// EnvKeyConfigDir the environment variable key for config dir
	EnvKeyConfigDir = "CADENCE_CONFIG_DIR"
	// EnvKeyEnvironment is the environment variable key for environment
	EnvKeyEnvironment = "CADENCE_ENVIRONMENT"
	// EnvKeyAvailabilityZone is the environment variable key for AZ
	EnvKeyAvailabilityZone = "CADENCE_AVAILABILTY_ZONE"
)
const (
	// StoreTypeSQL refers to sql based storage as persistence store
	StoreTypeSQL = "sql"
	// StoreTypeCassandra refers to cassandra as persistence store
	StoreTypeCassandra = "cassandra"
)
Variables ¶
var CadenceServices = []string{
	common.FrontendServiceName,
	common.HistoryServiceName,
	common.MatchingServiceName,
	common.WorkerServiceName,
}
CadenceServices indicates the list of cadence services
Functions ¶
func CheckAWSSigningConfig ¶
func CheckAWSSigningConfig(config AWSSigning) error
CheckAWSSigningConfig checks if the AWSSigning configuration is valid
func ListenIP ¶
func ListenIP() (net.IP, error)
ListenIP returns the IP to bind to in Listen. It tries to find an IP that can be used by other machines to reach this machine.
func Load ¶
func Load(env string, configDir string, zone string, config interface{}) error
Load loads the configuration from a set of YAML config files found in the config directory.
The loader first fetches the set of files matching a pre-determined naming convention, sorts them by hierarchy order, and then loads them one after another, with the key/values in later files overriding those in earlier files.
The hierarchy is as follows, from lowest to highest:
base.yaml
env.yaml    -- environment is one of the input params, e.g. development
env_az.yaml -- zone is another input param
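For illustration, here is a minimal sketch of loading and validating a config, assuming this package is imported as config (e.g. from github.com/uber/cadence/common/config), that a ./config directory with the YAML files above exists, and that ValidateAndFillDefaults returns an error (an assumption; the signature is not shown on this page).

package main

import (
	"log"

	"github.com/uber/cadence/common/config"
)

func main() {
	var cfg config.Config
	// With env "development" and zone "az1", Load reads base.yaml,
	// then development.yaml, then development_az1.yaml, later files
	// overriding earlier ones.
	if err := config.Load("development", "./config", "az1", &cfg); err != nil {
		log.Fatalf("failed to load config: %v", err)
	}
	if err := cfg.ValidateAndFillDefaults(); err != nil {
		log.Fatalf("invalid config: %v", err)
	}
}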
Types ¶
type AWSEnvironmentCredential ¶
type AWSEnvironmentCredential struct {
Region string `yaml:"region"`
}
AWSEnvironmentCredential will make a new Session created from SDK defaults, config files, environment, and user provided config files. See more in https://github.com/aws/aws-sdk-go/blob/3974dd034387fbc7cf09c8cd2400787ce07f3285/aws/session/session.go#L147
type AWSSigning ¶
type AWSSigning struct {
	Enable                bool                      `yaml:"enable"`
	StaticCredential      *AWSStaticCredential      `yaml:"staticCredential"`
	EnvironmentCredential *AWSEnvironmentCredential `yaml:"environmentCredential"`
}
AWSSigning contains the config to enable signing. Either StaticCredential or EnvironmentCredential must be provided.
type AWSStaticCredential ¶
type AWSStaticCredential struct {
	AccessKey    string `yaml:"accessKey"`
	SecretKey    string `yaml:"secretKey"`
	Region       string `yaml:"region"`
	SessionToken string `yaml:"sessionToken"`
}
AWSStaticCredential is used to create a static credentials value provider. SessionToken is only required for temporary security credentials retrieved via STS; otherwise an empty string can be passed for this parameter. See more in https://github.com/aws/aws-sdk-go/blob/master/aws/credentials/static_provider.go#L21
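As a hedged sketch of how these pieces fit together, the snippet below builds an AWSSigning value with a static credential and validates it with CheckAWSSigningConfig; the key values are placeholders and the package is assumed to be imported as config.

func validateSigning() error {
	signing := config.AWSSigning{
		Enable: true,
		StaticCredential: &config.AWSStaticCredential{
			AccessKey: "<access-key>", // placeholder
			SecretKey: "<secret-key>", // placeholder
			Region:    "us-east-1",
			// SessionToken is only needed for temporary STS credentials.
		},
	}
	// CheckAWSSigningConfig fails if signing is enabled but neither
	// StaticCredential nor EnvironmentCredential is provided.
	return config.CheckAWSSigningConfig(signing)
}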
type Archival ¶
type Archival struct {
	// History is the config for the history archival
	History HistoryArchival `yaml:"history"`
	// Visibility is the config for visibility archival
	Visibility VisibilityArchival `yaml:"visibility"`
}
Archival contains the config for archival
func (*Archival) Validate ¶
func (a *Archival) Validate(domainDefaults *ArchivalDomainDefaults) error
Validate validates the archival config
type ArchivalDomainDefaults ¶
type ArchivalDomainDefaults struct {
	// History is the domain default history archival config for each domain
	History HistoryArchivalDomainDefaults `yaml:"history"`
	// Visibility is the domain default visibility archival config for each domain
	Visibility VisibilityArchivalDomainDefaults `yaml:"visibility"`
}
ArchivalDomainDefaults is the default archival config for each domain
type Authorization ¶ added in v0.23.1
type Authorization struct {
	OAuthAuthorizer OAuthAuthorizer `yaml:"oauthAuthorizer"`
	NoopAuthorizer  NoopAuthorizer  `yaml:"noopAuthorizer"`
}
func (*Authorization) Validate ¶ added in v0.23.1
func (a *Authorization) Validate() error
Validate validates the authorization config
type Blobstore ¶
type Blobstore struct {
Filestore *FileBlobstore `yaml:"filestore"`
}
Blobstore contains the config for blobstore
type BootstrapMode ¶
type BootstrapMode int
BootstrapMode is an enum type for ringpop bootstrap mode
const (
	// BootstrapModeNone represents a bootstrap mode set to nothing or invalid
	BootstrapModeNone BootstrapMode = iota
	// BootstrapModeFile represents a file-based bootstrap mode
	BootstrapModeFile
	// BootstrapModeHosts represents a list of hosts passed in the configuration
	BootstrapModeHosts
	// BootstrapModeCustom represents a custom bootstrap mode
	BootstrapModeCustom
	// BootstrapModeDNS represents a list of hosts passed in the configuration
	// to be resolved, and the resulting addresses are used for bootstrap
	BootstrapModeDNS
	// BootstrapModeDNSSRV represents a list of DNS hosts passed in the configuration
	// to resolve secondary addresses that DNS SRV record would return, resulting in
	// a host list that will contain multiple dynamic addresses and their unique ports
	BootstrapModeDNSSRV
)
func (*BootstrapMode) UnmarshalYAML ¶
func (m *BootstrapMode) UnmarshalYAML(unmarshal func(interface{}) error) error
UnmarshalYAML is called by the yaml package to convert the config YAML into a BootstrapMode.
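As a hedged sketch, the snippet below decodes a Ringpop YAML fragment, which exercises UnmarshalYAML to turn the bootstrapMode string into the enum. It assumes gopkg.in/yaml.v2 (whose Unmarshaler interface matches the signature above) and that the package is imported as config; the mode string "hosts" follows the list documented on the Ringpop type.

import yaml "gopkg.in/yaml.v2"

func parseRingpop(raw []byte) (config.Ringpop, error) {
	// Example input:
	//   name: cadence
	//   bootstrapMode: hosts
	//   bootstrapHosts: ["127.0.0.1:7933"]
	var rp config.Ringpop
	err := yaml.Unmarshal(raw, &rp) // bootstrapMode is decoded via BootstrapMode.UnmarshalYAML
	return rp, err
}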
type Cassandra ¶
type Cassandra = NoSQL
Cassandra contains the configuration to connect to a Cassandra cluster. Deprecated: please use NoSQL instead; the structure is backward-compatible.
type ClusterConfig ¶
type ClusterConfig struct {
Brokers []string `yaml:"brokers"`
}
ClusterConfig describes the configuration for a single Kafka cluster
type ClusterGroupMetadata ¶ added in v0.23.1
type ClusterGroupMetadata struct {
	EnableGlobalDomain bool `yaml:"enableGlobalDomain"`
	// FailoverVersionIncrement is the increment of each cluster version when failover happens.
	// It decides the maximum number of clusters in this replication group
	FailoverVersionIncrement int64 `yaml:"failoverVersionIncrement"`
	// PrimaryClusterName is the primary cluster name; only the primary cluster can register/update domains,
	// all clusters can do domain failover
	PrimaryClusterName string `yaml:"primaryClusterName"`
	// Deprecated: please use PrimaryClusterName
	MasterClusterName string `yaml:"masterClusterName"`
	// CurrentClusterName is the name of the cluster of the current deployment
	CurrentClusterName string `yaml:"currentClusterName"`
	// ClusterGroup contains information for each cluster within the replication group.
	// Key is the clusterName
	ClusterGroup map[string]ClusterInformation `yaml:"clusterGroup"`
	// Deprecated: please use ClusterGroup
	ClusterInformation map[string]ClusterInformation `yaml:"clusterInformation"`
}
ClusterGroupMetadata contains all the clusters participating in a replication group (aka XDC/GlobalDomain)
func (*ClusterGroupMetadata) FillDefaults ¶ added in v0.23.1
func (m *ClusterGroupMetadata) FillDefaults()
FillDefaults populates default values for unspecified fields
func (*ClusterGroupMetadata) Validate ¶ added in v0.23.1
func (m *ClusterGroupMetadata) Validate() error
Validate validates ClusterGroupMetadata
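For illustration, a minimal sketch of preparing cluster group metadata: fill defaults for unspecified fields, then validate. The field values are illustrative only, and the package is assumed to be imported as config.

func prepareClusterMetadata() (*config.ClusterGroupMetadata, error) {
	meta := &config.ClusterGroupMetadata{
		EnableGlobalDomain:       true,
		FailoverVersionIncrement: 10,
		PrimaryClusterName:       "cluster0",
		CurrentClusterName:       "cluster0",
		ClusterGroup: map[string]config.ClusterInformation{
			"cluster0": {
				Enabled:                true,
				InitialFailoverVersion: 0, // must satisfy 0 <= value < FailoverVersionIncrement
				RPCAddress:             "localhost:7933",
			},
		},
	}
	meta.FillDefaults()
	if err := meta.Validate(); err != nil {
		return nil, err
	}
	return meta, nil
}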
type ClusterInformation ¶
type ClusterInformation struct {
	Enabled bool `yaml:"enabled"`
	// InitialFailoverVersion is the identifier of each cluster. 0 <= the value < failoverVersionIncrement
	InitialFailoverVersion int64 `yaml:"initialFailoverVersion"`
	// RPCName indicates the remote service name
	RPCName string `yaml:"rpcName"`
	// RPCAddress indicates the remote service address (Host:Port). Host can be a DNS name.
	// For currentCluster, it's usually the same as publicClient.hostPort
	RPCAddress string `yaml:"rpcAddress" validate:"nonzero"`
	// RPCTransport specifies the transport to use for replication traffic.
	// Allowed values: tchannel|grpc
	// Default: tchannel
	RPCTransport string `yaml:"rpcTransport"`
}
ClusterInformation contains the information about each cluster participating in cross DC
type Config ¶
type Config struct {
	// Ringpop is the ringpop related configuration
	Ringpop Ringpop `yaml:"ringpop"`
	// Persistence contains the configuration for cadence datastores
	Persistence Persistence `yaml:"persistence"`
	// Log is the logging config
	Log Logger `yaml:"log"`
	// ClusterGroupMetadata is the config containing all valid clusters and the active cluster
	ClusterGroupMetadata *ClusterGroupMetadata `yaml:"clusterGroupMetadata"`
	// Deprecated: please use ClusterGroupMetadata
	ClusterMetadata *ClusterGroupMetadata `yaml:"clusterMetadata"`
	// DCRedirectionPolicy contains the frontend datacenter redirection policy
	DCRedirectionPolicy DCRedirectionPolicy `yaml:"dcRedirectionPolicy"`
	// Services is a map of service name to service config items
	Services map[string]Service `yaml:"services"`
	// Kafka is the config for connecting to kafka
	Kafka KafkaConfig `yaml:"kafka"`
	// Archival is the config for archival
	Archival Archival `yaml:"archival"`
	// PublicClient is the config for the sys worker service connecting to the cadence frontend
	PublicClient PublicClient `yaml:"publicClient"`
	// DynamicConfigClient is the config for setting up the file-based dynamic config client.
	// Filepath is relative to the root directory when the path isn't absolute.
	// Included for backwards compatibility; please transition to DynamicConfig.
	// If both are specified, DynamicConfig will be used.
	DynamicConfigClient dynamicconfig.FileBasedClientConfig `yaml:"dynamicConfigClient"`
	// DynamicConfig is the config for setting up all dynamic config clients.
	// Allows for changes in client without needing a code change
	DynamicConfig DynamicConfig `yaml:"dynamicconfig"`
	// DomainDefaults is the default config for every domain
	DomainDefaults DomainDefaults `yaml:"domainDefaults"`
	// Blobstore is the config for setting up blobstore
	Blobstore Blobstore `yaml:"blobstore"`
	// Authorization is the config for setting up authorization
	Authorization Authorization `yaml:"authorization"`
}
Config contains the configuration for a set of cadence services
func (*Config) NewGRPCPorts ¶
func (*Config) ValidateAndFillDefaults ¶
ValidateAndFillDefaults validates this config and fills default values if needed
type CustomDatastoreConfig ¶
type CustomDatastoreConfig struct {
	// Name of the custom datastore
	Name string `yaml:"name"`
	// Options is a set of key-value attributes that can be used by AbstractDatastoreFactory implementation
	Options map[string]string `yaml:"options"`
}
CustomDatastoreConfig is the configuration for connecting to a custom datastore that is not supported by cadence core
type DCRedirectionPolicy ¶
DCRedirectionPolicy contains the frontend datacenter redirection policy
type DataStore ¶
type DataStore struct {
	// Cassandra contains the config for a cassandra datastore
	// Deprecated: please use NoSQL instead, the structure is backward-compatible
	Cassandra *Cassandra `yaml:"cassandra"`
	// SQL contains the config for a SQL based datastore
	SQL *SQL `yaml:"sql"`
	// NoSQL contains the config for a NoSQL based datastore
	NoSQL *NoSQL `yaml:"nosql"`
	// ElasticSearch contains the config for an ElasticSearch datastore
	ElasticSearch *ElasticSearchConfig `yaml:"elasticsearch"`
}
DataStore is the configuration for a single datastore
type DomainDefaults ¶
type DomainDefaults struct {
	// Archival is the default archival config for each domain
	Archival ArchivalDomainDefaults `yaml:"archival"`
}
DomainDefaults is the default config for each domain
type DynamicConfig ¶ added in v0.23.1
type DynamicConfig struct {
	Client      string                              `yaml:"client"`
	ConfigStore c.ClientConfig                      `yaml:"configstore"`
	FileBased   dynamicconfig.FileBasedClientConfig `yaml:"filebased"`
}
type ElasticSearchConfig ¶
type ElasticSearchConfig struct {
	URL     url.URL           `yaml:"url"`     //nolint:govet
	Indices map[string]string `yaml:"indices"` //nolint:govet
	// supporting v6 and v7. Default to v6 if empty.
	Version string `yaml:"version"` //nolint:govet
	// optional username to communicate with ElasticSearch
	Username string `yaml:"username"` //nolint:govet
	// optional password to communicate with ElasticSearch
	Password string `yaml:"password"` //nolint:govet
	// optional to disable sniff; according to issues on Github,
	// Sniff could cause issues like "no Elasticsearch node available"
	DisableSniff bool `yaml:"disableSniff"`
	// optional to disable health check
	DisableHealthCheck bool `yaml:"disableHealthCheck"`
	// optional to use AWS signing client
	// See more info https://github.com/olivere/elastic/wiki/Using-with-AWS-Elasticsearch-Service
	AWSSigning AWSSigning `yaml:"awsSigning"`
	// optional to use Signed Certificates over https
	TLS TLS `yaml:"tls"`
}
ElasticSearchConfig for connecting to ElasticSearch
func (*ElasticSearchConfig) GetVisibilityIndex ¶
func (cfg *ElasticSearchConfig) GetVisibilityIndex() string
GetVisibilityIndex returns the visibility index name
func (*ElasticSearchConfig) SetUsernamePassword ¶
func (cfg *ElasticSearchConfig) SetUsernamePassword()
SetUsernamePassword sets the username/password into the URL. It is a bit tricky because url.URL doesn't expose the username/password as plain fields in the struct due to security concerns.
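For context, here is a hedged sketch of what such a helper has to do with the standard library: net/url keeps credentials in the User field (a *url.Userinfo), so they are attached via url.UserPassword rather than set as strings. This is an illustration of the stdlib mechanics, not the package's exact implementation.

import "net/url"

// setBasicAuth attaches a username/password to the URL's userinfo.
func setBasicAuth(u *url.URL, username, password string) {
	if username != "" {
		u.User = url.UserPassword(username, password)
	}
}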
type FileBlobstore ¶
type FileBlobstore struct {
OutputDirectory string `yaml:"outputDirectory"`
}
FileBlobstore contains the config for a file backed blobstore
type FilestoreArchiver ¶
FilestoreArchiver contains the config for the filestore archiver
type GstorageArchiver ¶
type GstorageArchiver struct {
CredentialsPath string `yaml:"credentialsPath"`
}
GstorageArchiver contains the config for the google storage archiver
type HistoryArchival ¶
type HistoryArchival struct {
	// Status is the status of history archival: enabled, disabled, or paused
	Status string `yaml:"status"`
	// EnableRead indicates whether history can be read from archival
	EnableRead bool `yaml:"enableRead"`
	// Provider contains the config for all history archivers
	Provider *HistoryArchiverProvider `yaml:"provider"`
}
HistoryArchival contains the config for history archival
type HistoryArchivalDomainDefaults ¶
type HistoryArchivalDomainDefaults struct {
	// Status is the domain default status of history archival: enabled or disabled
	Status string `yaml:"status"`
	// URI is the domain default URI for history archiver
	URI string `yaml:"URI"`
}
HistoryArchivalDomainDefaults is the default history archival config for each domain
type HistoryArchiverProvider ¶
type HistoryArchiverProvider struct {
	Filestore *FilestoreArchiver `yaml:"filestore"`
	Gstorage  *GstorageArchiver  `yaml:"gstorage"`
	S3store   *S3Archiver        `yaml:"s3store"`
}
HistoryArchiverProvider contains the config for all history archivers
type JwtCredentials ¶ added in v0.23.1
type KafkaConfig ¶
type KafkaConfig struct {
	TLS      TLS                      `yaml:"tls"`
	SASL     SASL                     `yaml:"sasl"`
	Clusters map[string]ClusterConfig `yaml:"clusters"`
	Topics   map[string]TopicConfig   `yaml:"topics"`
	// Applications describes the applications that will use the Kafka topics
	Applications map[string]TopicList `yaml:"applications"`
	Version      string               `yaml:"version"`
}
KafkaConfig describes the configuration needed to connect to all kafka clusters
func (*KafkaConfig) GetBrokersForKafkaCluster ¶
func (k *KafkaConfig) GetBrokersForKafkaCluster(kafkaCluster string) []string
GetBrokersForKafkaCluster returns the broker list for the given Kafka cluster
func (*KafkaConfig) GetKafkaClusterForTopic ¶
func (k *KafkaConfig) GetKafkaClusterForTopic(topic string) string
GetKafkaClusterForTopic returns the Kafka cluster for the given topic
func (*KafkaConfig) GetTopicsForApplication ¶
func (k *KafkaConfig) GetTopicsForApplication(app string) TopicList
GetTopicsForApplication returns the topic list for the given application
func (*KafkaConfig) Validate ¶
func (k *KafkaConfig) Validate(checkApp bool)
Validate validates the Kafka config
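A minimal usage sketch of the lookup helpers above, assuming the topic exists in the loaded configuration and that the package is imported as config:

// brokersForTopic resolves a topic to its Kafka cluster, then to that cluster's brokers.
func brokersForTopic(k *config.KafkaConfig, topic string) []string {
	cluster := k.GetKafkaClusterForTopic(topic) // topic -> cluster name
	return k.GetBrokersForKafkaCluster(cluster) // cluster name -> broker list
}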
type Logger ¶
type Logger struct {
	// Stdout, if true, sends output to standard out.
	// By default this is false and output goes to standard error
	Stdout bool `yaml:"stdout"`
	// Level is the desired log level
	Level string `yaml:"level"`
	// OutputFile is the path to the log output file.
	// Stdout must be false, otherwise Stdout takes precedence
	OutputFile string `yaml:"outputFile"`
	// LevelKey is the key used for the log level, defaults to "level"
	LevelKey string `yaml:"levelKey"`
	// Encoding decides the format, supports "console" and "json".
	// "json" prints the log in JSON format (better for machines), while "console" prints in plain-text format (more human friendly).
	// Default is "json"
	Encoding string `yaml:"encoding"`
}
Logger contains the config items for logger
type Metrics ¶
type Metrics struct {
	// M3 is the configuration for m3 metrics reporter
	M3 *m3.Configuration `yaml:"m3"`
	// Statsd is the configuration for statsd reporter
	Statsd *Statsd `yaml:"statsd"`
	// Prometheus is the configuration for prometheus reporter.
	// Some documentation below because the tally library is missing it:
	// In this configuration, default timerType is "histogram"; alternatively "summary" is also supported.
	// In some cases, summary is better. Choose it wisely.
	// For histogram, default buckets are defined in https://github.com/uber/cadence/blob/master/common/metrics/tally/prometheus/buckets.go#L34
	// For summary, default objectives are defined in https://github.com/uber-go/tally/blob/137973e539cd3589f904c23d0b3a28c579fd0ae4/prometheus/reporter.go#L70
	// You can customize the buckets/objectives if the default is not good enough.
	Prometheus *prometheus.Configuration `yaml:"prometheus"`
	// Tags is the set of key-value pairs to be reported as part of every metric
	Tags map[string]string `yaml:"tags"`
	// Prefix sets the prefix to all outgoing metrics
	Prefix string `yaml:"prefix"`
}
Metrics contains the config items for metrics subsystem
type NoSQL ¶ added in v0.22.0
type NoSQL struct {
	// PluginName is the name of the NoSQL plugin, default is "cassandra". Supported values: cassandra
	PluginName string `yaml:"pluginName"`
	// Hosts is a csv of cassandra endpoints
	Hosts string `yaml:"hosts" validate:"nonzero"`
	// Port is the cassandra port used for connection by gocql client
	Port int `yaml:"port"`
	// User is the cassandra user used for authentication by gocql client
	User string `yaml:"user"`
	// Password is the cassandra password used for authentication by gocql client
	Password string `yaml:"password"`
	// Keyspace is the cassandra keyspace
	Keyspace string `yaml:"keyspace"`
	// Region is the region filter arg for cassandra
	Region string `yaml:"region"`
	// Datacenter is the data center filter arg for cassandra
	Datacenter string `yaml:"datacenter"`
	// MaxConns is the max number of connections to this datastore for a single keyspace
	MaxConns int `yaml:"maxConns"`
	// TLS configuration
	TLS *TLS `yaml:"tls"`
	// ProtoVersion
	ProtoVersion int `yaml:"protoVersion"`
	// ConnectAttributes is a set of key-value attributes as a supplement/extension to the above common fields.
	// Use it ONLY when a configuration option is too specific to a particular NoSQL database and should not be in the common struct;
	// otherwise please add new fields to the struct for better documentation.
	// If it is being used by any database, update this comment to make that clear.
	ConnectAttributes map[string]string `yaml:"connectAttributes"`
}
NoSQL contains configuration to connect to NoSQL Database cluster
type NoopAuthorizer ¶ added in v0.23.1
type NoopAuthorizer struct {
Enable bool `yaml:"enable"`
}
type OAuthAuthorizer ¶ added in v0.23.1
type OAuthAuthorizer struct {
	Enable bool `yaml:"enable"`
	// JwtCredentials are the credentials to verify/create the JWT
	JwtCredentials JwtCredentials `yaml:"jwtCredentials"`
	// MaxJwtTTL is the max TTL of the claim
	MaxJwtTTL int64 `yaml:"maxJwtTTL"`
}
type PProf ¶
type PProf struct {
	// Port is the port on which the PProf will bind to
	Port int `yaml:"port"`
}
PProf contains the pprof config items
func (*PProf) NewInitializer ¶
func (cfg *PProf) NewInitializer(logger log.Logger) *PProfInitializerImpl
NewInitializer creates a new instance of PProf Initializer
type PProfInitializerImpl ¶
PProfInitializerImpl initializes pprof based on the config
func (*PProfInitializerImpl) Start ¶
func (initializer *PProfInitializerImpl) Start() error
Start starts pprof based on the config
type Persistence ¶
type Persistence struct {
	// DefaultStore is the name of the default data store to use
	DefaultStore string `yaml:"defaultStore" validate:"nonzero"`
	// VisibilityStore is the name of the datastore to be used for visibility records.
	// Must provide one of VisibilityStore and AdvancedVisibilityStore
	VisibilityStore string `yaml:"visibilityStore"`
	// AdvancedVisibilityStore is the name of the datastore to be used for visibility records.
	// Must provide one of VisibilityStore and AdvancedVisibilityStore
	AdvancedVisibilityStore string `yaml:"advancedVisibilityStore"`
	// HistoryMaxConns is the desired number of conns to history store. Value specified
	// here overrides the MaxConns config specified as part of datastore
	HistoryMaxConns int `yaml:"historyMaxConns"`
	// NumHistoryShards is the desired number of history shards. This config doesn't
	// belong here, needs refactoring
	NumHistoryShards int `yaml:"numHistoryShards" validate:"nonzero"`
	// DataStores contains the configuration for all datastores
	DataStores map[string]DataStore `yaml:"datastores"`
	// TODO: move dynamic config out of static config
	// TransactionSizeLimit is the largest allowed transaction size
	TransactionSizeLimit dynamicconfig.IntPropertyFn `yaml:"-" json:"-"`
	// TODO: move dynamic config out of static config
	// ErrorInjectionRate is the rate for injecting random errors
	ErrorInjectionRate dynamicconfig.FloatPropertyFn `yaml:"-" json:"-"`
}
Persistence contains the configuration for data store / persistence layer
func (*Persistence) DefaultStoreType ¶
func (c *Persistence) DefaultStoreType() string
DefaultStoreType returns the storeType for the default persistence store
func (*Persistence) FillDefaults ¶ added in v0.23.1
func (c *Persistence) FillDefaults()
FillDefaults populates default values for unspecified fields in persistence config
func (*Persistence) IsAdvancedVisibilityConfigExist ¶
func (c *Persistence) IsAdvancedVisibilityConfigExist() bool
IsAdvancedVisibilityConfigExist returns whether the user specified advancedVisibilityStore in the config
func (*Persistence) Validate ¶
func (c *Persistence) Validate() error
Validate validates the persistence config
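A hedged sketch of inspecting a persistence config after loading it: validate it, then branch on the default store type (using the StoreType* constants above) and on whether an advanced visibility store was configured. The package is assumed to be imported as config.

func describePersistence(p *config.Persistence) error {
	if err := p.Validate(); err != nil {
		return err
	}
	switch p.DefaultStoreType() {
	case config.StoreTypeCassandra:
		// NoSQL/Cassandra-backed default store
	case config.StoreTypeSQL:
		// SQL-backed default store
	}
	if p.IsAdvancedVisibilityConfigExist() {
		// advancedVisibilityStore was specified in the config
	}
	return nil
}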
type PublicClient ¶
type PublicClient struct {
	// HostPort is the host port to connect on. Host can be a DNS name.
	// Defaults to currentCluster's RPCAddress in ClusterInformation
	HostPort string `yaml:"hostPort"`
	// RefreshInterval is the interval to refresh DNS. Defaults to 10s
	RefreshInterval time.Duration `yaml:"RefreshInterval"`
}
PublicClient is config for connecting to cadence frontend
type RPC ¶
type RPC struct {
	// Port is the port on which the channel will bind to
	Port int `yaml:"port"`
	// GRPCPort is the port on which the grpc listener will bind to
	GRPCPort int `yaml:"grpcPort"`
	// BindOnLocalHost is true if localhost is the bind address
	BindOnLocalHost bool `yaml:"bindOnLocalHost"`
	// BindOnIP can be used to bind service on specific ip (eg. `0.0.0.0`) -
	// check net.ParseIP for supported syntax, only IPv4 is supported,
	// mutually exclusive with `BindOnLocalHost` option
	BindOnIP string `yaml:"bindOnIP"`
	// DisableLogging disables all logging for rpc
	DisableLogging bool `yaml:"disableLogging"`
	// LogLevel is the desired log level
	LogLevel string `yaml:"logLevel"`
	// GRPCMaxMsgSize allows overriding default (4MB) message size for gRPC
	GRPCMaxMsgSize int `yaml:"grpcMaxMsgSize"`
}
RPC contains the rpc config items
func (*RPC) NewFactory ¶
NewFactory builds a new RPCFactory conforming to the underlying configuration
type RPCFactory ¶
RPCFactory is an implementation of the service.RPCFactory interface
func (*RPCFactory) CreateDispatcherForOutbound ¶
func (d *RPCFactory) CreateDispatcherForOutbound(callerName string, serviceName string, hostName string) (*yarpc.Dispatcher, error)
CreateDispatcherForOutbound creates a dispatcher for outbound connection
func (*RPCFactory) CreateGRPCDispatcherForOutbound ¶
func (d *RPCFactory) CreateGRPCDispatcherForOutbound(callerName string, serviceName string, hostName string) (*yarpc.Dispatcher, error)
CreateGRPCDispatcherForOutbound creates a dispatcher for GRPC outbound connection
func (*RPCFactory) GetDispatcher ¶
func (d *RPCFactory) GetDispatcher() *yarpc.Dispatcher
GetDispatcher returns a cached dispatcher
func (*RPCFactory) GetMaxMessageSize ¶ added in v0.23.1
func (d *RPCFactory) GetMaxMessageSize() int
GetMaxMessageSize returns the max supported payload size
func (*RPCFactory) ReplaceGRPCPort ¶
func (d *RPCFactory) ReplaceGRPCPort(serviceName, hostAddress string) (string, error)
ReplaceGRPCPort replaces the port in the address with the gRPC port for a given service
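A minimal sketch combining the two methods above, assuming an already-constructed *config.RPCFactory (e.g. from RPC.NewFactory) and go.uber.org/yarpc; the caller name "cadence-client" is a hypothetical placeholder, and the exact wiring inside the server may differ.

// dialGRPC rewrites a member address to its gRPC port, then creates an outbound gRPC dispatcher for it.
func dialGRPC(f *config.RPCFactory, serviceName, hostAddress string) (*yarpc.Dispatcher, error) {
	grpcAddress, err := f.ReplaceGRPCPort(serviceName, hostAddress)
	if err != nil {
		return nil, err
	}
	return f.CreateGRPCDispatcherForOutbound("cadence-client", serviceName, grpcAddress)
}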
type Ringpop ¶
type Ringpop struct {
	// Name to be used in ringpop advertisement
	Name string `yaml:"name" validate:"nonzero"`
	// BootstrapMode is an enum that defines the ringpop bootstrap method; currently supports: hosts, files, custom, dns, and dns-srv
	BootstrapMode BootstrapMode `yaml:"bootstrapMode"`
	// BootstrapHosts is a list of seed hosts to be used for ringpop bootstrap
	BootstrapHosts []string `yaml:"bootstrapHosts"`
	// BootstrapFile is the file path to be used for ringpop bootstrap
	BootstrapFile string `yaml:"bootstrapFile"`
	// MaxJoinDuration is the max wait time to join the ring
	MaxJoinDuration time.Duration `yaml:"maxJoinDuration"`
	// DiscoveryProvider is a custom discovery provider; cannot be specified through yaml
	DiscoveryProvider discovery.DiscoverProvider `yaml:"-"`
}
Ringpop contains the ringpop config items
func (*Ringpop) NewFactory ¶
func (rpConfig *Ringpop) NewFactory(dispatcher *yarpc.Dispatcher, serviceName string, logger log.Logger) (*RingpopFactory, error)
NewFactory builds a ringpop factory conforming to the underlying configuration
type RingpopFactory ¶
RingpopFactory implements the RingpopFactory interface
func (*RingpopFactory) GetMembershipMonitor ¶
func (factory *RingpopFactory) GetMembershipMonitor() (membership.Monitor, error)
GetMembershipMonitor returns a membership monitor
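For illustration, a hedged sketch of building a ringpop factory from the Ringpop config and obtaining a membership monitor; it assumes the cadence common packages (config, log, membership) and go.uber.org/yarpc are imported, with the dispatcher, service name, and logger supplied by the surrounding server code.

func newMembershipMonitor(
	rp *config.Ringpop,
	dispatcher *yarpc.Dispatcher,
	serviceName string,
	logger log.Logger,
) (membership.Monitor, error) {
	factory, err := rp.NewFactory(dispatcher, serviceName, logger)
	if err != nil {
		return nil, err
	}
	return factory.GetMembershipMonitor()
}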
type S3Archiver ¶
type S3Archiver struct {
	Region           string  `yaml:"region"`
	Endpoint         *string `yaml:"endpoint"`
	S3ForcePathStyle bool    `yaml:"s3ForcePathStyle"`
}
S3Archiver contains the config for S3 archiver
type SASL ¶
type SASL struct {
	Enabled   bool   `yaml:"enabled"` // false as default
	User      string `yaml:"user"`
	Password  string `yaml:"password"`
	Algorithm string `yaml:"algorithm"` // plain, sha512 or sha256
}
SASL describes the SASL configuration (for Kafka)
type SQL ¶
type SQL struct {
	// User is the username to be used for the conn
	User string `yaml:"user"`
	// Password is the password corresponding to the user name
	Password string `yaml:"password"`
	// PluginName is the name of SQL plugin
	PluginName string `yaml:"pluginName" validate:"nonzero"`
	// DatabaseName is the name of SQL database to connect to
	DatabaseName string `yaml:"databaseName" validate:"nonzero"`
	// ConnectAddr is the remote addr of the database
	ConnectAddr string `yaml:"connectAddr" validate:"nonzero"`
	// ConnectProtocol is the protocol that goes with the ConnectAddr, e.g. tcp, unix
	ConnectProtocol string `yaml:"connectProtocol" validate:"nonzero"`
	// ConnectAttributes is a set of key-value attributes to be sent as part of connect data_source_name url
	ConnectAttributes map[string]string `yaml:"connectAttributes"`
	// MaxConns is the max number of connections to this datastore
	MaxConns int `yaml:"maxConns"`
	// MaxIdleConns is the max number of idle connections to this datastore
	MaxIdleConns int `yaml:"maxIdleConns"`
	// MaxConnLifetime is the maximum time a connection can be alive
	MaxConnLifetime time.Duration `yaml:"maxConnLifetime"`
	// NumShards is the number of storage shards to use for tables
	// in a sharded sql database. The default value for this param is 1
	NumShards int `yaml:"nShards"`
	// TLS is the configuration for TLS connections
	TLS *TLS `yaml:"tls"`
	// EncodingType is the configuration for the type of encoding used for sql blobs
	EncodingType string `yaml:"encodingType"`
	// DecodingTypes is the configuration for all the sql blob decoding types which need to be supported.
	// DecodingTypes should not be removed unless there are no blobs in database with the encoding type
	DecodingTypes []string `yaml:"decodingTypes"`
}
SQL is the configuration for connecting to a SQL backed datastore
type Service ¶
type Service struct {
	// RPC is the rpc (tchannel/grpc) configuration
	RPC RPC `yaml:"rpc"`
	// Metrics is the metrics subsystem configuration
	Metrics Metrics `yaml:"metrics"`
	// PProf is the PProf configuration
	PProf PProf `yaml:"pprof"`
}
Service contains the service specific config items
type Statsd ¶
type Statsd struct {
	// The host and port of the statsd server
	HostPort string `yaml:"hostPort" validate:"nonzero"`
	// The prefix to use in reporting to statsd
	Prefix string `yaml:"prefix" validate:"nonzero"`
	// FlushInterval is the maximum interval for sending packets.
	// If it is not specified, it defaults to 1 second.
	FlushInterval time.Duration `yaml:"flushInterval"`
	// FlushBytes specifies the maximum udp packet size you wish to send.
	// If FlushBytes is unspecified, it defaults to 1432 bytes, which is
	// considered safe for local traffic.
	FlushBytes int `yaml:"flushBytes"`
}
Statsd contains the config items for statsd metrics reporter
type TLS ¶
type TLS struct {
	Enabled bool `yaml:"enabled"`
	// SSLMode applies to Postgres (https://www.postgresql.org/docs/9.1/libpq-ssl.html) and MySQL;
	// defaults to require if Enabled is true.
	// For MySQL (https://github.com/go-sql-driver/mysql) it can also be set in ConnectAttributes; default is tls-custom
	SSLMode string `yaml:"sslmode"`
	// CertFile and KeyFile are optional depending on server
	// config, but both fields must be omitted to avoid using a
	// client certificate
	CertFile string `yaml:"certFile"`
	KeyFile  string `yaml:"keyFile"`
	// CaFile is optional depending on server config
	CaFile string `yaml:"caFile"`
	// EnableHostVerification should be turned on if you want to verify the hostname and server cert (like a wildcard for a cass cluster).
	// This option is basically the inverse of InsecureSkipVerify;
	// see InsecureSkipVerify in https://golang.org/pkg/crypto/tls/ for more info
	EnableHostVerification bool `yaml:"enableHostVerification"`
	ServerName             string `yaml:"serverName"`
}
TLS describes the TLS configuration (for Kafka, Cassandra, SQL)
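To illustrate the EnableHostVerification comment above, here is a hedged sketch of how the flag relates to Go's crypto/tls: it is the inverse of tls.Config.InsecureSkipVerify. This is an illustration only, not the package's actual TLS wiring, and certificate loading is omitted.

import "crypto/tls"

func toTLSConfig(cfg config.TLS) *tls.Config {
	if !cfg.Enabled {
		return nil
	}
	return &tls.Config{
		ServerName: cfg.ServerName,
		// Disabled host verification maps to skipping verification.
		InsecureSkipVerify: !cfg.EnableHostVerification,
		// Certificates and RootCAs would be loaded from CertFile, KeyFile,
		// and CaFile here; omitted for brevity.
	}
}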
type TopicConfig ¶
type TopicConfig struct {
Cluster string `yaml:"cluster"`
}
TopicConfig describes the mapping from topic to Kafka cluster
type VisibilityArchival ¶
type VisibilityArchival struct {
	// Status is the status of visibility archival: enabled, disabled, or paused
	Status string `yaml:"status"`
	// EnableRead indicates whether visibility can be read from archival
	EnableRead bool `yaml:"enableRead"`
	// Provider contains the config for all visibility archivers
	Provider *VisibilityArchiverProvider `yaml:"provider"`
}
VisibilityArchival contains the config for visibility archival
type VisibilityArchivalDomainDefaults ¶
type VisibilityArchivalDomainDefaults struct {
	// Status is the domain default status of visibility archival: enabled or disabled
	Status string `yaml:"status"`
	// URI is the domain default URI for visibility archiver
	URI string `yaml:"URI"`
}
VisibilityArchivalDomainDefaults is the default visibility archival config for each domain
type VisibilityArchiverProvider ¶
type VisibilityArchiverProvider struct {
	Filestore *FilestoreArchiver `yaml:"filestore"`
	S3store   *S3Archiver        `yaml:"s3store"`
	Gstorage  *GstorageArchiver  `yaml:"gstorage"`
}
VisibilityArchiverProvider contains the config for all visibility archivers