dynamicconfig

package
v1.24.0-m3.1
Warning

This package is not in the latest version of its module.

Published: Apr 22, 2024 License: MIT Imports: 15 Imported by: 13

Documentation

Overview

Package dynamicconfig is a generated GoMock package.

Index

Constants

const (

	// AdminEnableListHistoryTasks is the key for enabling listing history tasks
	AdminEnableListHistoryTasks = "admin.enableListHistoryTasks"
	// AdminMatchingNamespaceToPartitionDispatchRate is the max qps of any task queue partition for a given namespace
	AdminMatchingNamespaceToPartitionDispatchRate = "admin.matchingNamespaceToPartitionDispatchRate"
	// AdminMatchingNamespaceTaskqueueToPartitionDispatchRate is the max qps of a task queue partition for a given namespace & task queue
	AdminMatchingNamespaceTaskqueueToPartitionDispatchRate = "admin.matchingNamespaceTaskqueueToPartitionDispatchRate"

	// VisibilityPersistenceMaxReadQPS is the max QPS at which the system host can query the visibility DB for reads.
	VisibilityPersistenceMaxReadQPS = "system.visibilityPersistenceMaxReadQPS"
	// VisibilityPersistenceMaxWriteQPS is the max QPS at which the system host can query the visibility DB for writes.
	VisibilityPersistenceMaxWriteQPS = "system.visibilityPersistenceMaxWriteQPS"
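
	// Illustrative example (hypothetical values; assumes the file-based dynamic
	// config YAML format of per-key value/constraints entries):
	//
	//	system.visibilityPersistenceMaxReadQPS:
	//	  - value: 9000
	//	    constraints: {}
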
	// EnableReadFromSecondaryVisibility is the config to enable read from secondary visibility
	EnableReadFromSecondaryVisibility = "system.enableReadFromSecondaryVisibility"
	// SecondaryVisibilityWritingMode is key for how to write to secondary visibility
	SecondaryVisibilityWritingMode = "system.secondaryVisibilityWritingMode"
	// VisibilityDisableOrderByClause is the config to disable the ORDER BY clause for Elasticsearch
	VisibilityDisableOrderByClause = "system.visibilityDisableOrderByClause"
	// VisibilityEnableManualPagination is the config to enable manual pagination for Elasticsearch
	VisibilityEnableManualPagination = "system.visibilityEnableManualPagination"
	// VisibilityAllowList is the config to allow lists of values for regular types
	VisibilityAllowList = "system.visibilityAllowList"
	// SuppressErrorSetSystemSearchAttribute suppresses errors when trying to set
	// values in system search attributes.
	SuppressErrorSetSystemSearchAttribute = "system.suppressErrorSetSystemSearchAttribute"

	// HistoryArchivalState is key for the state of history archival
	HistoryArchivalState = "system.historyArchivalState"
	// EnableReadFromHistoryArchival is key for enabling reading history from archival store
	EnableReadFromHistoryArchival = "system.enableReadFromHistoryArchival"
	// VisibilityArchivalState is key for the state of visibility archival
	VisibilityArchivalState = "system.visibilityArchivalState"
	// EnableReadFromVisibilityArchival is key for enabling reading visibility from archival store
	EnableReadFromVisibilityArchival = "system.enableReadFromVisibilityArchival"
	// EnableNamespaceNotActiveAutoForwarding controls whether DC auto-forwarding to the active cluster is enabled
	// for the signal / start / signal-with-start APIs when the namespace is not active
	EnableNamespaceNotActiveAutoForwarding = "system.enableNamespaceNotActiveAutoForwarding"
	// TransactionSizeLimit is the largest allowed transaction size to persistence
	TransactionSizeLimit = "system.transactionSizeLimit"
	// DisallowQuery is the key to disallow query for a namespace
	DisallowQuery = "system.disallowQuery"
	// EnableCrossNamespaceCommands is the key to enable commands for external namespaces
	EnableCrossNamespaceCommands = "system.enableCrossNamespaceCommands"
	// ClusterMetadataRefreshInterval is config to manage cluster metadata table refresh interval
	ClusterMetadataRefreshInterval = "system.clusterMetadataRefreshInterval"
	// ForceSearchAttributesCacheRefreshOnRead forces refreshing search attributes cache on a read operation, so we always
	// get the latest data from DB. This effectively bypasses cache value and is used to facilitate testing of changes in
	// search attributes. This should not be turned on in production.
	ForceSearchAttributesCacheRefreshOnRead = "system.forceSearchAttributesCacheRefreshOnRead"
	// EnableRingpopTLS controls whether to use TLS for ringpop, using the same "internode" TLS
	// config as the other services.
	EnableRingpopTLS = "system.enableRingpopTLS"
	// RingpopApproximateMaxPropagationTime is used for timing certain startup and shutdown processes.
	// (It is not and doesn't have to be a guarantee.)
	RingpopApproximateMaxPropagationTime = "system.ringpopApproximateMaxPropagationTime"
	// EnableParentClosePolicyWorker decides whether to enable system workers for processing parent close policy tasks
	EnableParentClosePolicyWorker = "system.enableParentClosePolicyWorker"
	// EnableStickyQuery indicates if sticky query should be enabled per namespace
	EnableStickyQuery = "system.enableStickyQuery"
	// EnableActivityEagerExecution indicates if activity eager execution is enabled per namespace
	EnableActivityEagerExecution = "system.enableActivityEagerExecution"
	// EnableEagerWorkflowStart toggles "eager workflow start" - returning the first workflow task inline in the
	// response to a StartWorkflowExecution request and skipping the trip through matching.
	EnableEagerWorkflowStart = "system.enableEagerWorkflowStart"
	// NamespaceCacheRefreshInterval is the key for namespace cache refresh interval dynamic config
	NamespaceCacheRefreshInterval = "system.namespaceCacheRefreshInterval"
	// PersistenceHealthSignalMetricsEnabled determines whether persistence shard RPS metrics are emitted
	PersistenceHealthSignalMetricsEnabled = "system.persistenceHealthSignalMetricsEnabled"
	// PersistenceHealthSignalAggregationEnabled determines whether persistence latency and error averages are tracked
	PersistenceHealthSignalAggregationEnabled = "system.persistenceHealthSignalAggregationEnabled"
	// PersistenceHealthSignalWindowSize is the time window size in seconds for aggregating persistence signals
	PersistenceHealthSignalWindowSize = "system.persistenceHealthSignalWindowSize"
	// PersistenceHealthSignalBufferSize is the maximum number of persistence signals to buffer in memory per signal key
	PersistenceHealthSignalBufferSize = "system.persistenceHealthSignalBufferSize"
	// ShardRPSWarnLimit is the per-shard RPS limit for warning
	ShardRPSWarnLimit = "system.shardRPSWarnLimit"
	// ShardPerNsRPSWarnPercent is the per-shard per-namespace RPS limit for warning as a percentage of ShardRPSWarnLimit
	// these warnings are not emitted if the value is set to 0 or less
	ShardPerNsRPSWarnPercent = "system.shardPerNsRPSWarnPercent"
	// OperatorRPSRatio is the percentage of the rate limit provided to priority rate limiters that should be used for
	// operator API calls (highest priority). Should be >0.0 and <= 1.0 (defaults to 20% if not specified)
	OperatorRPSRatio = "system.operatorRPSRatio"
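
	// Illustrative example (hypothetical value; assumes the file-based dynamic
	// config YAML format): reserve 20% of the rate limit for operator calls:
	//
	//	system.operatorRPSRatio:
	//	  - value: 0.2
	//	    constraints: {}
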
	// PersistenceQPSBurstRatio is the burst ratio for persistence QPS.
	// This flag controls the burst ratio for all services.
	PersistenceQPSBurstRatio = "system.persistenceQPSBurstRatio"

	// Whether the deadlock detector should dump goroutines
	DeadlockDumpGoroutines = "system.deadlock.DumpGoroutines"
	// Whether the deadlock detector should cause the grpc server to fail health checks
	DeadlockFailHealthCheck = "system.deadlock.FailHealthCheck"
	// Whether the deadlock detector should abort the process
	DeadlockAbortProcess = "system.deadlock.AbortProcess"
	// How often the detector checks each root.
	DeadlockInterval = "system.deadlock.Interval"
	// How many extra goroutines can be created per root.
	DeadlockMaxWorkersPerRoot = "system.deadlock.MaxWorkersPerRoot"
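
	// Illustrative example (hypothetical values; assumes the file-based dynamic
	// config YAML format, with durations given as strings):
	//
	//	system.deadlock.DumpGoroutines:
	//	  - value: true
	//	    constraints: {}
	//	system.deadlock.Interval:
	//	  - value: "30s"
	//	    constraints: {}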

	// utf-8 validation
	// The *Sample* keys control the sample rate of messages to examine as a fraction in [0.0, 1.0].
	// The *Fail* keys control whether a validation failure causes an error (rpc error for rpc
	// request/response, [de]serialization error for persistence).
	ValidateUTF8SampleRPCRequest  = "system.validateUTF8.sample.rpcRequest"
	ValidateUTF8SampleRPCResponse = "system.validateUTF8.sample.rpcResponse"
	ValidateUTF8SamplePersistence = "system.validateUTF8.sample.persistence"
	ValidateUTF8FailRPCRequest    = "system.validateUTF8.fail.rpcRequest"
	ValidateUTF8FailRPCResponse   = "system.validateUTF8.fail.rpcResponse"
	ValidateUTF8FailPersistence   = "system.validateUTF8.fail.persistence"
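
	// Illustrative example (hypothetical values; assumes the file-based dynamic
	// config YAML format): sample 10% of RPC requests and fail persistence
	// operations on invalid UTF-8:
	//
	//	system.validateUTF8.sample.rpcRequest:
	//	  - value: 0.1
	//	    constraints: {}
	//	system.validateUTF8.fail.persistence:
	//	  - value: true
	//	    constraints: {}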

	// BlobSizeLimitError is the per event blob size limit
	BlobSizeLimitError = "limit.blobSize.error"
	// BlobSizeLimitWarn is the per event blob size limit for warning
	BlobSizeLimitWarn = "limit.blobSize.warn"
	// MemoSizeLimitError is the per event memo size limit
	MemoSizeLimitError = "limit.memoSize.error"
	// MemoSizeLimitWarn is the per event memo size limit for warning
	MemoSizeLimitWarn = "limit.memoSize.warn"
	// NumPendingChildExecutionsLimitError is the maximum number of pending child workflows a workflow can have before
	// StartChildWorkflowExecution commands will fail.
	NumPendingChildExecutionsLimitError = "limit.numPendingChildExecutions.error"
	// NumPendingActivitiesLimitError is the maximum number of pending activities a workflow can have before
	// ScheduleActivityTask will fail.
	NumPendingActivitiesLimitError = "limit.numPendingActivities.error"
	// NumPendingSignalsLimitError is the maximum number of pending signals a workflow can have before
	// SignalExternalWorkflowExecution commands from this workflow will fail.
	NumPendingSignalsLimitError = "limit.numPendingSignals.error"
	// NumPendingCancelRequestsLimitError is the maximum number of pending requests to cancel other workflows a workflow can have before
	// RequestCancelExternalWorkflowExecution commands will fail.
	NumPendingCancelRequestsLimitError = "limit.numPendingCancelRequests.error"
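
	// Illustrative example (hypothetical values; assumes the file-based dynamic
	// config YAML format with a per-namespace constraint):
	//
	//	limit.numPendingActivities.error:
	//	  - value: 5000
	//	    constraints:
	//	      namespace: "my-namespace"
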
	// HistorySizeLimitError is the per workflow execution history size limit
	HistorySizeLimitError = "limit.historySize.error"
	// HistorySizeLimitWarn is the per workflow execution history size limit for warning
	HistorySizeLimitWarn = "limit.historySize.warn"
	// HistorySizeSuggestContinueAsNew is the workflow execution history size limit to suggest
	// continue-as-new (in workflow task started event)
	HistorySizeSuggestContinueAsNew = "limit.historySize.suggestContinueAsNew"
	// HistoryCountLimitError is the per workflow execution history event count limit
	HistoryCountLimitError = "limit.historyCount.error"
	// HistoryCountLimitWarn is the per workflow execution history event count limit for warning
	HistoryCountLimitWarn = "limit.historyCount.warn"
	// MutableStateActivityFailureSizeLimitError is the per activity failure size limit for workflow mutable state.
	// If exceeded, failure will be truncated before being stored in mutable state.
	MutableStateActivityFailureSizeLimitError = "limit.mutableStateActivityFailureSize.error"
	// MutableStateActivityFailureSizeLimitWarn is the per activity failure size warning limit for workflow mutable state
	MutableStateActivityFailureSizeLimitWarn = "limit.mutableStateActivityFailureSize.warn"
	// MutableStateSizeLimitError is the per workflow execution mutable state size limit in bytes
	MutableStateSizeLimitError = "limit.mutableStateSize.error"
	// MutableStateSizeLimitWarn is the per workflow execution mutable state size limit in bytes for warning
	MutableStateSizeLimitWarn = "limit.mutableStateSize.warn"
	// HistoryCountSuggestContinueAsNew is the workflow execution history event count limit to
	// suggest continue-as-new (in workflow task started event)
	HistoryCountSuggestContinueAsNew = "limit.historyCount.suggestContinueAsNew"
	// HistoryMaxPageSize is default max size for GetWorkflowExecutionHistory in one page
	HistoryMaxPageSize = "limit.historyMaxPageSize"
	// MaxIDLengthLimit is the length limit for various IDs, including: Namespace, TaskQueue, WorkflowID, ActivityID, TimerID,
	// WorkflowType, ActivityType, SignalName, MarkerName, ErrorReason/FailureReason/CancelCause, Identity, RequestID
	MaxIDLengthLimit = "limit.maxIDLength"
	// WorkerBuildIdSizeLimit is the byte length limit for a worker build id as used in the rpc methods for updating
	// the version sets for a task queue.
	// Do not set this to a value higher than 255 for clusters using SQL based persistence due to predefined VARCHAR
	// column width.
	WorkerBuildIdSizeLimit = "limit.workerBuildIdSize"
	// VersionCompatibleSetLimitPerQueue is the max number of compatible sets allowed in the versioning data for a task
	// queue. Update requests which would cause the versioning data to exceed this number will fail with a
	// FailedPrecondition error.
	VersionCompatibleSetLimitPerQueue = "limit.versionCompatibleSetLimitPerQueue"
	// VersionBuildIdLimitPerQueue is the max number of build IDs allowed to be defined in the versioning data for a
	// task queue. Update requests which would cause the versioning data to exceed this number will fail with a
	// FailedPrecondition error.
	VersionBuildIdLimitPerQueue = "limit.versionBuildIdLimitPerQueue"
	// AssignmentRuleLimitPerQueue is the max number of Build ID assignment rules allowed to be defined in the
	// versioning data for a task queue. Update requests which would cause the versioning data to exceed this number
	// will fail with a FailedPrecondition error.
	AssignmentRuleLimitPerQueue = "limit.wv.AssignmentRuleLimitPerQueue"
	// RedirectRuleLimitPerQueue is the max number of compatible redirect rules allowed to be defined
	// in the versioning data for a task queue. Update requests which would cause the versioning data to exceed this
	// number will fail with a FailedPrecondition error.
	RedirectRuleLimitPerQueue = "limit.wv.RedirectRuleLimitPerQueue"
	// RedirectRuleChainLimitPerQueue is the max number of compatible redirect rules allowed to be connected
	// in one chain in the versioning data for a task queue. Update requests which would cause the versioning data
	// to exceed this number will fail with a FailedPrecondition error.
	RedirectRuleChainLimitPerQueue = "limit.wv.RedirectRuleChainLimitPerQueue"
	// MatchingDeletedRuleRetentionTime is the length of time that deleted Version Assignment Rules and
	// Deleted Redirect Rules will be kept in the DB (with DeleteTimestamp). After this time, the tombstones are deleted at the next update of versioning data for the task queue.
	MatchingDeletedRuleRetentionTime = "matching.wv.DeletedRuleRetentionTime"
	// ReachabilityBuildIdVisibilityGracePeriod is the time period for which deleted versioning rules are still considered active
	// to account for the delay in updating the build id field in visibility.
	ReachabilityBuildIdVisibilityGracePeriod = "matching.wv.ReachabilityBuildIdVisibilityGracePeriod"
	// ReachabilityTaskQueueScanLimit limits the number of task queues to scan when responding to a
	// GetWorkerTaskReachability query.
	ReachabilityTaskQueueScanLimit = "limit.reachabilityTaskQueueScan"
	// ReachabilityQueryBuildIdLimit limits the number of build ids that can be requested in a single call to the
	// DescribeTaskQueue API with ReportTaskQueueReachability==true, or to the GetWorkerTaskReachability API.
	ReachabilityQueryBuildIdLimit = "limit.reachabilityQueryBuildIds"
	// ReachabilityQuerySetDurationSinceDefault is the minimum period since a version set was demoted from being the
	// queue default before it is considered unreachable by new workflows.
	// This setting allows some propagation delay of versioning data for the reachability queries, which may happen for
	// the following reasons:
	// 1. There are no workflows currently marked as open in the visibility store but a worker for the demoted version
	// is currently processing a task.
	// 2. There are delays in the visibility task processor (which is asynchronous).
	// 3. There's propagation delay of the versioning data between matching nodes.
	ReachabilityQuerySetDurationSinceDefault = "frontend.reachabilityQuerySetDurationSinceDefault"
	// TaskQueuesPerBuildIdLimit limits the number of task queue names that can be mapped to a single build id.
	TaskQueuesPerBuildIdLimit = "limit.taskQueuesPerBuildId"

	// NexusIncomingServiceNameMaxLength is the maximum length of a Nexus incoming service name.
	NexusIncomingServiceNameMaxLength = "limit.incomingServiceNameMaxLength"
	// NexusIncomingServiceMaxSize is the maximum size of a Nexus incoming service in bytes.
	NexusIncomingServiceMaxSize = "limit.incomingServiceMaxSize"
	// NexusIncomingServiceListDefaultPageSize is the default page size for listing Nexus incoming services.
	NexusIncomingServiceListDefaultPageSize = "limit.incomingServiceListDefaultPageSize"
	// NexusIncomingServiceListMaxPageSize is the maximum page size for listing Nexus incoming services.
	NexusIncomingServiceListMaxPageSize = "limit.incomingServiceListMaxPageSize"
	// NexusOutgoingServiceURLMaxLength is the maximum length of an outgoing service URL.
	NexusOutgoingServiceURLMaxLength = "limit.outgoingServiceURLMaxLength"
	// NexusOutgoingServiceNameMaxLength is the maximum length of an outgoing service name.
	NexusOutgoingServiceNameMaxLength = "limit.outgoingServiceNameMaxLength"
	// NexusOutgoingServiceListDefaultPageSize is the default page size for listing outgoing services.
	NexusOutgoingServiceListDefaultPageSize = "limit.outgoingServiceListDefaultPageSize"
	// NexusOutgoingServiceListMaxPageSize is the maximum page size for listing outgoing services.
	NexusOutgoingServiceListMaxPageSize = "limit.outgoingServiceListMaxPageSize"

	// RemovableBuildIdDurationSinceDefault is the minimum duration since a build id was last default in its containing
	// set for it to be considered for removal, used by the build id scavenger.
	// This setting allows some propagation delay of versioning data, which may happen for the following reasons:
	// 1. There are no workflows currently marked as open in the visibility store but a worker for the demoted version
	// is currently processing a task.
	// 2. There are delays in the visibility task processor (which is asynchronous).
	// 3. There's propagation delay of the versioning data between matching nodes.
	RemovableBuildIdDurationSinceDefault = "worker.removableBuildIdDurationSinceDefault"
	// BuildIdScavengerVisibilityRPS is the rate limit for visibility calls from the build id scavenger
	BuildIdScavengerVisibilityRPS = "worker.buildIdScavengerVisibilityRPS"

	// FrontendPersistenceMaxQPS is the max qps frontend host can query DB
	FrontendPersistenceMaxQPS = "frontend.persistenceMaxQPS"
	// FrontendPersistenceGlobalMaxQPS is the max qps frontend cluster can query DB
	FrontendPersistenceGlobalMaxQPS = "frontend.persistenceGlobalMaxQPS"
	// FrontendPersistenceNamespaceMaxQPS is the max qps each namespace on frontend host can query DB
	FrontendPersistenceNamespaceMaxQPS = "frontend.persistenceNamespaceMaxQPS"
	// FrontendPersistenceGlobalNamespaceMaxQPS is the max qps each namespace in frontend cluster can query DB
	FrontendPersistenceGlobalNamespaceMaxQPS = "frontend.persistenceGlobalNamespaceMaxQPS"
	// FrontendEnablePersistencePriorityRateLimiting indicates if priority rate limiting is enabled in frontend persistence client
	FrontendEnablePersistencePriorityRateLimiting = "frontend.enablePersistencePriorityRateLimiting"
	// FrontendPersistenceDynamicRateLimitingParams is a map that contains all adjustable dynamic rate limiting params
	// see DefaultDynamicRateLimitingParams for available options and defaults
	FrontendPersistenceDynamicRateLimitingParams = "frontend.persistenceDynamicRateLimitingParams"
	// FrontendVisibilityMaxPageSize is default max size for ListWorkflowExecutions in one page
	FrontendVisibilityMaxPageSize = "frontend.visibilityMaxPageSize"
	// FrontendHistoryMaxPageSize is default max size for GetWorkflowExecutionHistory in one page
	FrontendHistoryMaxPageSize = "frontend.historyMaxPageSize"
	// FrontendRPS is workflow rate limit per second per-instance
	FrontendRPS = "frontend.rps"
	// FrontendGlobalRPS is workflow rate limit per second for the whole cluster
	FrontendGlobalRPS = "frontend.globalRPS"
	// FrontendNamespaceReplicationInducingAPIsRPS limits the per second request rate for namespace replication inducing
	// APIs (e.g. RegisterNamespace, UpdateNamespace, UpdateWorkerBuildIdCompatibility).
	// This config is EXPERIMENTAL and may be changed or removed in a later release.
	FrontendNamespaceReplicationInducingAPIsRPS = "frontend.rps.namespaceReplicationInducingAPIs"
	// FrontendMaxNamespaceRPSPerInstance is workflow namespace rate limit per second
	FrontendMaxNamespaceRPSPerInstance = "frontend.namespaceRPS"
	// FrontendMaxNamespaceBurstRatioPerInstance is workflow namespace burst limit as a ratio of namespace RPS. The RPS
	// used here will be the effective RPS from global and per-instance limits. The value must be 1 or higher.
	FrontendMaxNamespaceBurstRatioPerInstance = "frontend.namespaceBurstRatio"
	// FrontendMaxConcurrentLongRunningRequestsPerInstance limits concurrent long-running requests per-instance,
	// per-API. Example requests include long-poll requests, and `Query` requests (which need to wait for WFTs). The
	// limit is applied individually to each API method. This value is ignored if
	// FrontendGlobalMaxConcurrentLongRunningRequests is greater than zero. Warning: setting this to zero will cause all
	// long-running requests to fail. The name `frontend.namespaceCount` is kept for backwards compatibility with
	// existing deployments even though it is a bit of a misnomer. This does not limit the number of namespaces; it is a
	// per-_namespace_ limit on the _count_ of long-running requests. Requests are only throttled when the limit is
	// exceeded, not when it is only reached.
	FrontendMaxConcurrentLongRunningRequestsPerInstance = "frontend.namespaceCount"
	// FrontendGlobalMaxConcurrentLongRunningRequests limits concurrent long-running requests across all frontend
	// instances in the cluster, for a given namespace, per-API method. If this is set to 0 (the default), then it is
	// ignored. The name `frontend.globalNamespaceCount` is kept for consistency with the per-instance limit name,
	// `frontend.namespaceCount`.
	FrontendGlobalMaxConcurrentLongRunningRequests = "frontend.globalNamespaceCount"
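
	// Illustrative example (hypothetical value; assumes the file-based dynamic
	// config YAML format): cap concurrent long-running requests per namespace,
	// cluster-wide:
	//
	//	frontend.globalNamespaceCount:
	//	  - value: 1200
	//	    constraints: {}
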
	// FrontendMaxNamespaceVisibilityRPSPerInstance is namespace rate limit per second for visibility APIs.
	// This config is EXPERIMENTAL and may be changed or removed in a later release.
	FrontendMaxNamespaceVisibilityRPSPerInstance = "frontend.namespaceRPS.visibility"
	// FrontendMaxNamespaceNamespaceReplicationInducingAPIsRPSPerInstance is a per host/per namespace RPS limit for
	// namespace replication inducing APIs (e.g. RegisterNamespace, UpdateNamespace, UpdateWorkerBuildIdCompatibility).
	// This config is EXPERIMENTAL and may be changed or removed in a later release.
	FrontendMaxNamespaceNamespaceReplicationInducingAPIsRPSPerInstance = "frontend.namespaceRPS.namespaceReplicationInducingAPIs"
	// FrontendMaxNamespaceVisibilityBurstRatioPerInstance is namespace burst limit for visibility APIs as a ratio of
	// namespace visibility RPS. The RPS used here will be the effective RPS from global and per-instance limits. This
	// config is EXPERIMENTAL and may be changed or removed in a later release. The value must be 1 or higher.
	FrontendMaxNamespaceVisibilityBurstRatioPerInstance = "frontend.namespaceBurstRatio.visibility"
	// FrontendMaxNamespaceNamespaceReplicationInducingAPIsBurstRatioPerInstance is a per host/per namespace burst limit for
	// namespace replication inducing APIs (e.g. RegisterNamespace, UpdateNamespace, UpdateWorkerBuildIdCompatibility)
	// as a ratio of namespace ReplicationInducingAPIs RPS. The RPS used here will be the effective RPS from global and
	// per-instance limits. This config is EXPERIMENTAL and may be changed or removed in a later release. The value must
	// be 1 or higher.
	FrontendMaxNamespaceNamespaceReplicationInducingAPIsBurstRatioPerInstance = "frontend.namespaceBurstRatio.namespaceReplicationInducingAPIs"
	// FrontendGlobalNamespaceRPS is workflow namespace rate limit per second for the whole cluster.
	// The limit is evenly distributed among available frontend service instances.
	// If this is set, it overwrites per instance limit "frontend.namespaceRPS".
	FrontendGlobalNamespaceRPS = "frontend.globalNamespaceRPS"
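
	// Illustrative example (hypothetical values; assumes the file-based dynamic
	// config YAML format): a cluster-wide default plus a higher override for one
	// namespace (the more specific constraint is expected to win):
	//
	//	frontend.globalNamespaceRPS:
	//	  - value: 2400
	//	    constraints:
	//	      namespace: "busy-namespace"
	//	  - value: 1200
	//	    constraints: {}
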
	// InternalFrontendGlobalNamespaceRPS is workflow namespace rate limit per second across
	// all internal-frontends.
	InternalFrontendGlobalNamespaceRPS = "internal-frontend.globalNamespaceRPS"
	// FrontendGlobalNamespaceVisibilityRPS is workflow namespace rate limit per second for the whole cluster for visibility API.
	// The limit is evenly distributed among available frontend service instances.
	// If this is set, it overwrites per instance limit "frontend.namespaceRPS.visibility".
	// This config is EXPERIMENTAL and may be changed or removed in a later release.
	FrontendGlobalNamespaceVisibilityRPS = "frontend.globalNamespaceRPS.visibility"
	// FrontendGlobalNamespaceNamespaceReplicationInducingAPIsRPS is a cluster global, per namespace RPS limit for
	// namespace replication inducing APIs (e.g. RegisterNamespace, UpdateNamespace, UpdateWorkerBuildIdCompatibility).
	// The limit is evenly distributed among available frontend service instances.
	// If this is set, it overwrites the per instance limit configured with
	// "frontend.namespaceRPS.namespaceReplicationInducingAPIs".
	// This config is EXPERIMENTAL and may be changed or removed in a later release.
	FrontendGlobalNamespaceNamespaceReplicationInducingAPIsRPS = "frontend.globalNamespaceRPS.namespaceReplicationInducingAPIs"
	// InternalFrontendGlobalNamespaceVisibilityRPS is workflow namespace rate limit per second
	// across all internal-frontends.
	// This config is EXPERIMENTAL and may be changed or removed in a later release.
	InternalFrontendGlobalNamespaceVisibilityRPS = "internal-frontend.globalNamespaceRPS.visibility"
	// FrontendThrottledLogRPS is the rate limit on number of log messages emitted per second for throttled logger
	FrontendThrottledLogRPS = "frontend.throttledLogRPS"
	// FrontendShutdownDrainDuration is the duration of traffic drain during shutdown
	FrontendShutdownDrainDuration = "frontend.shutdownDrainDuration"
	// FrontendShutdownFailHealthCheckDuration is the duration of shutdown failure detection
	FrontendShutdownFailHealthCheckDuration = "frontend.shutdownFailHealthCheckDuration"
	// FrontendMaxBadBinaries is the max number of bad binaries in namespace config
	FrontendMaxBadBinaries = "frontend.maxBadBinaries"
	// SendRawWorkflowHistory is whether to enable raw history retrieval
	SendRawWorkflowHistory = "frontend.sendRawWorkflowHistory"
	// SearchAttributesNumberOfKeysLimit is the limit of number of keys
	SearchAttributesNumberOfKeysLimit = "frontend.searchAttributesNumberOfKeysLimit"
	// SearchAttributesSizeOfValueLimit is the size limit of each value
	SearchAttributesSizeOfValueLimit = "frontend.searchAttributesSizeOfValueLimit"
	// SearchAttributesTotalSizeLimit is the size limit of the whole map
	SearchAttributesTotalSizeLimit = "frontend.searchAttributesTotalSizeLimit"
	// VisibilityArchivalQueryMaxPageSize is the maximum page size for a visibility archival query
	VisibilityArchivalQueryMaxPageSize = "frontend.visibilityArchivalQueryMaxPageSize"
	// EnableServerVersionCheck is a flag that controls whether or not periodic version checking is enabled
	EnableServerVersionCheck = "frontend.enableServerVersionCheck"
	// EnableTokenNamespaceEnforcement enables enforcement that namespace in completion token matches namespace of the request
	EnableTokenNamespaceEnforcement = "frontend.enableTokenNamespaceEnforcement"
	// DisableListVisibilityByFilter is the config to disable listing open/closed workflows using filters
	DisableListVisibilityByFilter = "frontend.disableListVisibilityByFilter"
	// KeepAliveMinTime is the minimum amount of time a client should wait before sending a keepalive ping.
	KeepAliveMinTime = "frontend.keepAliveMinTime"
	// KeepAlivePermitWithoutStream, if true, allows the server to accept keepalive pings even when there are no
	// active streams (RPCs). If false and the client sends a ping when there are no active
	// streams, the server will send a GOAWAY and close the connection.
	KeepAlivePermitWithoutStream = "frontend.keepAlivePermitWithoutStream"
	// KeepAliveMaxConnectionIdle is a duration for the amount of time after which an
	// idle connection would be closed by sending a GoAway. Idleness duration is
	// defined since the most recent time the number of outstanding RPCs became
	// zero or the connection establishment.
	KeepAliveMaxConnectionIdle = "frontend.keepAliveMaxConnectionIdle"
	// KeepAliveMaxConnectionAge is a duration for the maximum amount of time a
	// connection may exist before it will be closed by sending a GoAway. A
	// random jitter of +/-10% will be added to MaxConnectionAge to spread out
	// connection storms.
	KeepAliveMaxConnectionAge = "frontend.keepAliveMaxConnectionAge"
	// KeepAliveMaxConnectionAgeGrace is an additive period after MaxConnectionAge after
	// which the connection will be forcibly closed.
	KeepAliveMaxConnectionAgeGrace = "frontend.keepAliveMaxConnectionAgeGrace"
	// KeepAliveTime is the duration after which, if the server doesn't see any activity,
	// it pings the client to check whether the transport is still alive.
	// If set below 1s, a minimum value of 1s will be used instead.
	KeepAliveTime = "frontend.keepAliveTime"
	// KeepAliveTimeout is the duration the server waits after having pinged for a keepalive check;
	// if no activity is seen even after that, the connection is closed.
	KeepAliveTimeout = "frontend.keepAliveTimeout"
	// FrontendEnableSchedules enables schedule-related RPCs in the frontend
	FrontendEnableSchedules = "frontend.enableSchedules"
	// FrontendEnableNexusAPIs enables serving Nexus HTTP requests in the frontend.
	FrontendEnableNexusAPIs = "frontend.enableNexusAPIs"
	// FrontendRefreshNexusIncomingServicesLongPollTimeout is the maximum duration of background long poll requests to update Nexus incoming services.
	FrontendRefreshNexusIncomingServicesLongPollTimeout = "frontend.refreshNexusIncomingServicesLongPollTimeout"
	// FrontendRefreshNexusIncomingServicesMinWait is the minimum wait time between background long poll requests to update Nexus incoming services.
	FrontendRefreshNexusIncomingServicesMinWait = "frontend.refreshNexusIncomingServicesMinWait"
	// FrontendEnableCallbackAttachment enables attaching callbacks to workflows.
	FrontendEnableCallbackAttachment = "frontend.enableCallbackAttachment"
	// FrontendCallbackURLMaxLength is the maximum length of callback URL
	FrontendCallbackURLMaxLength = "frontend.callbackURLMaxLength"
	// FrontendMaxConcurrentBatchOperationPerNamespace is the max concurrent batch operation job count per namespace
	FrontendMaxConcurrentBatchOperationPerNamespace = "frontend.MaxConcurrentBatchOperationPerNamespace"
	// FrontendMaxExecutionCountBatchOperationPerNamespace is the max execution count a batch operation supports per namespace
	FrontendMaxExecutionCountBatchOperationPerNamespace = "frontend.MaxExecutionCountBatchOperationPerNamespace"
	// FrontendEnableBatcher enables batcher-related RPCs in the frontend
	FrontendEnableBatcher = "frontend.enableBatcher"

	// FrontendEnableUpdateWorkflowExecution enables UpdateWorkflowExecution API in the frontend.
	// The UpdateWorkflowExecution API has gone through rigorous testing efforts but this config's default is `false` until the
	// feature gets more time in production.
	FrontendEnableUpdateWorkflowExecution = "frontend.enableUpdateWorkflowExecution"

	// FrontendEnableExecuteMultiOperation enables the ExecuteMultiOperation API in the frontend.
	// The API is under active development.
	FrontendEnableExecuteMultiOperation = "frontend.enableExecuteMultiOperation"

	// FrontendEnableUpdateWorkflowExecutionAsyncAccepted enables the form of
	// asynchronous workflow execution update that waits on the "Accepted"
	// lifecycle stage. Default value is `false`.
	FrontendEnableUpdateWorkflowExecutionAsyncAccepted = "frontend.enableUpdateWorkflowExecutionAsyncAccepted"

	// FrontendEnableWorkerVersioningDataAPIs enables worker versioning data read / write APIs.
	FrontendEnableWorkerVersioningDataAPIs = "frontend.workerVersioningDataAPIs"
	// FrontendEnableWorkerVersioningWorkflowAPIs enables worker versioning in workflow progress APIs.
	FrontendEnableWorkerVersioningWorkflowAPIs = "frontend.workerVersioningWorkflowAPIs"
	// FrontendEnableWorkerVersioningRuleAPIs enables worker versioning rule APIs.
	FrontendEnableWorkerVersioningRuleAPIs = "frontend.workerVersioningRuleAPIs"

	// DeleteNamespaceDeleteActivityRPS is the RPS for each parallel delete-executions activity.
	// Total RPS is equal to DeleteNamespaceDeleteActivityRPS * DeleteNamespaceConcurrentDeleteExecutionsActivities.
	// Default value is 100.
	DeleteNamespaceDeleteActivityRPS = "frontend.deleteNamespaceDeleteActivityRPS"
	// DeleteNamespacePageSize is a page size to read executions from visibility for delete executions activity.
	// Default value is 1000.
	DeleteNamespacePageSize = "frontend.deleteNamespaceDeletePageSize"
	// DeleteNamespacePagesPerExecution is a number of pages before returning ContinueAsNew from delete executions activity.
	// Default value is 256.
	DeleteNamespacePagesPerExecution = "frontend.deleteNamespacePagesPerExecution"
	// DeleteNamespaceConcurrentDeleteExecutionsActivities is a number of concurrent delete executions activities.
	// Must not be greater than 256 or the number of worker cores in the cluster.
	// Default is 4.
	DeleteNamespaceConcurrentDeleteExecutionsActivities = "frontend.deleteNamespaceConcurrentDeleteExecutionsActivities"
	// DeleteNamespaceNamespaceDeleteDelay is the duration for how long the namespace record stays in the database
	// after all namespace resources (i.e. workflow executions) are deleted.
	// Default is 0, meaning the namespace will be deleted immediately.
	DeleteNamespaceNamespaceDeleteDelay = "frontend.deleteNamespaceNamespaceDeleteDelay"
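
	// Illustrative example (hypothetical value; assumes the file-based dynamic
	// config YAML format): keep deleted namespace records around for an hour:
	//
	//	frontend.deleteNamespaceNamespaceDeleteDelay:
	//	  - value: "1h"
	//	    constraints: {}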

	// MatchingRPS is request rate per second for each matching host
	MatchingRPS = "matching.rps"
	// MatchingPersistenceMaxQPS is the max qps matching host can query DB
	MatchingPersistenceMaxQPS = "matching.persistenceMaxQPS"
	// MatchingPersistenceGlobalMaxQPS is the max qps matching cluster can query DB
	MatchingPersistenceGlobalMaxQPS = "matching.persistenceGlobalMaxQPS"
	// MatchingPersistenceNamespaceMaxQPS is the max qps each namespace on matching host can query DB
	MatchingPersistenceNamespaceMaxQPS = "matching.persistenceNamespaceMaxQPS"
	// MatchingPersistenceGlobalNamespaceMaxQPS is the max qps each namespace in the matching cluster can query DB
	MatchingPersistenceGlobalNamespaceMaxQPS = "matching.persistenceGlobalNamespaceMaxQPS"
	// MatchingEnablePersistencePriorityRateLimiting indicates if priority rate limiting is enabled in matching persistence client
	MatchingEnablePersistencePriorityRateLimiting = "matching.enablePersistencePriorityRateLimiting"
	// MatchingPersistenceDynamicRateLimitingParams is a map that contains all adjustable dynamic rate limiting params
	// see DefaultDynamicRateLimitingParams for available options and defaults
	MatchingPersistenceDynamicRateLimitingParams = "matching.persistenceDynamicRateLimitingParams"
	// MatchingMinTaskThrottlingBurstSize is the minimum burst size for task queue throttling
	MatchingMinTaskThrottlingBurstSize = "matching.minTaskThrottlingBurstSize"
	// MatchingGetTasksBatchSize is the maximum batch size to fetch from the task buffer
	MatchingGetTasksBatchSize = "matching.getTasksBatchSize"
	// MatchingLongPollExpirationInterval is the long poll expiration interval in the matching service
	MatchingLongPollExpirationInterval = "matching.longPollExpirationInterval"
	// MatchingSyncMatchWaitDuration is the wait time for sync match
	MatchingSyncMatchWaitDuration = "matching.syncMatchWaitDuration"
	// MatchingHistoryMaxPageSize is the maximum page size of history events returned on PollWorkflowTaskQueue requests
	MatchingHistoryMaxPageSize = "matching.historyMaxPageSize"
	// MatchingLoadUserData can be used to entirely disable loading user data from persistence (and the inter node RPCs
	// that propagate it). When turned off, features that rely on user data (e.g. worker versioning) will essentially
	// be disabled. When disabled, matching will drop tasks for versioned workflows and activities to avoid breaking
	// versioning semantics. Operator intervention will be required to reschedule the dropped tasks.
	MatchingLoadUserData = "matching.loadUserData"
	// MatchingUpdateAckInterval is the interval for update ack
	MatchingUpdateAckInterval = "matching.updateAckInterval"
	// MatchingMaxTaskQueueIdleTime is the time after which an idle task queue will be unloaded.
	// Note: this should be greater than matching.longPollExpirationInterval and matching.getUserDataLongPollTimeout.
	MatchingMaxTaskQueueIdleTime = "matching.maxTaskQueueIdleTime"
	// MatchingOutstandingTaskAppendsThreshold is the threshold for outstanding task appends
	MatchingOutstandingTaskAppendsThreshold = "matching.outstandingTaskAppendsThreshold"
	// MatchingMaxTaskBatchSize is max batch size for task writer
	MatchingMaxTaskBatchSize = "matching.maxTaskBatchSize"
	// MatchingMaxTaskDeleteBatchSize is the max batch size for range deletion of tasks
	MatchingMaxTaskDeleteBatchSize = "matching.maxTaskDeleteBatchSize"
	// MatchingThrottledLogRPS is the rate limit on number of log messages emitted per second for throttled logger
	MatchingThrottledLogRPS = "matching.throttledLogRPS"
	// MatchingNumTaskqueueWritePartitions is the number of write partitions for a task queue
	MatchingNumTaskqueueWritePartitions = "matching.numTaskqueueWritePartitions"
	// MatchingNumTaskqueueReadPartitions is the number of read partitions for a task queue
	MatchingNumTaskqueueReadPartitions = "matching.numTaskqueueReadPartitions"
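
	// Illustrative example (hypothetical values; assumes the file-based dynamic
	// config YAML format with namespace and task queue constraints):
	//
	//	matching.numTaskqueueReadPartitions:
	//	  - value: 8
	//	    constraints:
	//	      namespace: "my-namespace"
	//	      taskQueueName: "high-throughput-tq"
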
	// MatchingForwarderMaxOutstandingPolls is the max number of inflight polls from the forwarder
	MatchingForwarderMaxOutstandingPolls = "matching.forwarderMaxOutstandingPolls"
	// MatchingForwarderMaxOutstandingTasks is the max number of inflight addTask/queryTask from the forwarder
	MatchingForwarderMaxOutstandingTasks = "matching.forwarderMaxOutstandingTasks"
	// MatchingForwarderMaxRatePerSecond is the max rate at which add/query can be forwarded
	MatchingForwarderMaxRatePerSecond = "matching.forwarderMaxRatePerSecond"
	// MatchingForwarderMaxChildrenPerNode is the max number of children per node in the task queue partition tree
	MatchingForwarderMaxChildrenPerNode = "matching.forwarderMaxChildrenPerNode"
	// MatchingAlignMembershipChange is a duration to align matching's membership changes to.
	// This can help reduce effects of task queue movement.
	MatchingAlignMembershipChange = "matching.alignMembershipChange"
	// MatchingShutdownDrainDuration is the duration of traffic drain during shutdown
	MatchingShutdownDrainDuration = "matching.shutdownDrainDuration"
	// MatchingGetUserDataLongPollTimeout is the max length of long polls for GetUserData calls between partitions.
	MatchingGetUserDataLongPollTimeout = "matching.getUserDataLongPollTimeout"
	// MatchingBacklogNegligibleAge is the backlog age threshold: if the head of the backlog gets older than this, we stop
	// sync match and forwarding to ensure a more equal dispatch order among partitions.
	MatchingBacklogNegligibleAge = "matching.backlogNegligibleAge"
	// MatchingMaxWaitForPollerBeforeFwd: in the presence of a non-negligible backlog, we resume forwarding tasks if the
	// duration since the last poll exceeds this threshold.
	MatchingMaxWaitForPollerBeforeFwd = "matching.maxWaitForPollerBeforeFwd"
	// QueryPollerUnavailableWindow is the window after which workflow queries are rejected if no poller has been seen within it
	QueryPollerUnavailableWindow = "matching.queryPollerUnavailableWindow"
	// MatchingListNexusIncomingServicesLongPollTimeout is the max length of long polls for ListNexusIncomingServices calls.
	MatchingListNexusIncomingServicesLongPollTimeout = "matching.listNexusIncomingServicesLongPollTimeout"
	// MatchingMembershipUnloadDelay is how long to wait to re-confirm loss of ownership before unloading a task queue.
	// Set to zero to disable proactive unload.
	MatchingMembershipUnloadDelay = "matching.membershipUnloadDelay"
	// MatchingQueryWorkflowTaskTimeoutLogRate defines the sampling rate for logs when a query workflow task times out. Since
	// these log lines can be noisy, we want to be able to turn on and sample selectively for each affected namespace.
	MatchingQueryWorkflowTaskTimeoutLogRate = "matching.queryWorkflowTaskTimeoutLogRate"

	// TestMatchingDisableSyncMatch forces tasks to go through the db once
	TestMatchingDisableSyncMatch = "test.matching.disableSyncMatch"
	// TestMatchingLBForceReadPartition forces polls to go to a specific partition
	TestMatchingLBForceReadPartition = "test.matching.lbForceReadPartition"
	// TestMatchingLBForceWritePartition forces adds to go to a specific partition
	TestMatchingLBForceWritePartition = "test.matching.lbForceWritePartition"

	// EnableReplicationStream turns on the replication stream
	EnableReplicationStream = "history.enableReplicationStream"
	// EnableHistoryReplicationDLQV2 switches to the DLQ v2 implementation for history replication. See details in
	// [go.temporal.io/server/common/persistence.QueueV2]. This feature is currently in development. Do NOT use it in
	// production.
	EnableHistoryReplicationDLQV2 = "history.enableHistoryReplicationDLQV2"

	// HistoryRPS is request rate per second for each history host
	HistoryRPS = "history.rps"
	// HistoryPersistenceMaxQPS is the max qps history host can query DB
	HistoryPersistenceMaxQPS = "history.persistenceMaxQPS"
	// HistoryPersistenceGlobalMaxQPS is the max qps history cluster can query DB
	HistoryPersistenceGlobalMaxQPS = "history.persistenceGlobalMaxQPS"
	// HistoryPersistenceNamespaceMaxQPS is the max qps each namespace on history host can query DB
	// If the value is less than or equal to 0, it will fall back to HistoryPersistenceMaxQPS
	HistoryPersistenceNamespaceMaxQPS = "history.persistenceNamespaceMaxQPS"
	// HistoryPersistenceGlobalNamespaceMaxQPS is the max qps each namespace in the history cluster can query DB
	HistoryPersistenceGlobalNamespaceMaxQPS = "history.persistenceGlobalNamespaceMaxQPS"
	// HistoryPersistencePerShardNamespaceMaxQPS is the max qps each namespace on a shard can query DB
	HistoryPersistencePerShardNamespaceMaxQPS = "history.persistencePerShardNamespaceMaxQPS"
	// HistoryEnablePersistencePriorityRateLimiting indicates if priority rate limiting is enabled in history persistence client
	HistoryEnablePersistencePriorityRateLimiting = "history.enablePersistencePriorityRateLimiting"
	// HistoryPersistenceDynamicRateLimitingParams is a map that contains all adjustable dynamic rate limiting params
	// see DefaultDynamicRateLimitingParams for available options and defaults
	HistoryPersistenceDynamicRateLimitingParams = "history.persistenceDynamicRateLimitingParams"
	// HistoryLongPollExpirationInterval is the long poll expiration interval in the history service
	HistoryLongPollExpirationInterval = "history.longPollExpirationInterval"
	// HistoryCacheSizeBasedLimit if true, size of the history cache will be limited by HistoryCacheMaxSizeBytes
	// and HistoryCacheHostLevelMaxSizeBytes. Otherwise, entry count in the history cache will be limited by
	// HistoryCacheMaxSize and HistoryCacheHostLevelMaxSize.
	HistoryCacheSizeBasedLimit = "history.cacheSizeBasedLimit"
	// HistoryCacheInitialSize is initial size of history cache
	HistoryCacheInitialSize = "history.cacheInitialSize"
	// HistoryCacheMaxSize is the maximum number of entries in the shard level history cache
	HistoryCacheMaxSize = "history.cacheMaxSize"
	// HistoryCacheMaxSizeBytes is the maximum size of the shard level history cache in bytes. This is only used if
	// HistoryCacheSizeBasedLimit is set to true.
	HistoryCacheMaxSizeBytes = "history.cacheMaxSizeBytes"
	// HistoryCacheTTL is TTL of history cache
	HistoryCacheTTL = "history.cacheTTL"
	// HistoryCacheNonUserContextLockTimeout controls how long non-user call (callerType != API or Operator)
	// will wait on workflow lock acquisition. Requires service restart to take effect.
	HistoryCacheNonUserContextLockTimeout = "history.cacheNonUserContextLockTimeout"
	// EnableHostHistoryCache controls if the history cache is host level
	EnableHostHistoryCache = "history.enableHostHistoryCache"
	// HistoryCacheHostLevelMaxSize is the maximum number of entries in the host level history cache
	HistoryCacheHostLevelMaxSize = "history.hostLevelCacheMaxSize"
	// HistoryCacheHostLevelMaxSizeBytes is the maximum size of the host level history cache. This is only used if
	// HistoryCacheSizeBasedLimit is set to true.
	HistoryCacheHostLevelMaxSizeBytes = "history.hostLevelCacheMaxSizeBytes"
	// EnableMutableStateTransitionHistory controls whether to record state transition history in mutable state records.
	// The feature is used in the hierarchical state machine framework and is considered unstable as the structure may
	// change with the pending replication design.
	EnableMutableStateTransitionHistory = "history.enableMutableStateTransitionHistory"
	// EnableWorkflowExecutionTimeoutTimer controls whether to enable the new logic for generating a workflow execution
	// timeout timer when execution timeout is specified when starting a workflow.
	// For backward compatibility, this feature is disabled by default and should only be enabled after server version
	// containing this flag is deployed to all history service nodes in the cluster.
	EnableWorkflowExecutionTimeoutTimer = "history.enableWorkflowExecutionTimeoutTimer"
	// HistoryStartupMembershipJoinDelay is the duration a history instance waits
	// before joining membership after starting.
	HistoryStartupMembershipJoinDelay = "history.startupMembershipJoinDelay"
	// HistoryShutdownDrainDuration is the duration of traffic drain during shutdown
	HistoryShutdownDrainDuration = "history.shutdownDrainDuration"
	// XDCCacheMaxSizeBytes is max size of events cache in bytes
	XDCCacheMaxSizeBytes = "history.xdcCacheMaxSizeBytes"
	// EventsCacheMaxSizeBytes is max size of the shard level events cache in bytes
	EventsCacheMaxSizeBytes = "history.eventsCacheMaxSizeBytes"
	// EventsHostLevelCacheMaxSizeBytes is max size of the host level events cache in bytes
	EventsHostLevelCacheMaxSizeBytes = "history.eventsHostLevelCacheMaxSizeBytes"
	// EventsCacheTTL is TTL of events cache
	EventsCacheTTL = "history.eventsCacheTTL"
	// EnableHostLevelEventsCache controls if the events cache is host level
	EnableHostLevelEventsCache = "history.enableHostLevelEventsCache"
	// AcquireShardInterval is the interval of the timer used to acquire shards
	AcquireShardInterval = "history.acquireShardInterval"
	// AcquireShardConcurrency is the number of goroutines that can be used to acquire shards in the shard controller.
	AcquireShardConcurrency = "history.acquireShardConcurrency"
	// ShardLingerOwnershipCheckQPS is the frequency to perform shard ownership
	// checks while a shard is lingering.
	ShardLingerOwnershipCheckQPS = "history.shardLingerOwnershipCheckQPS"
	// ShardLingerTimeLimit configures if and for how long the shard controller
	// will temporarily delay closing shards after a membership update, awaiting a
	// shard ownership lost error from persistence. Not recommended with
	// persistence layers that are missing AssertShardOwnership support.
	// If set to zero, shards will not delay closing.
	ShardLingerTimeLimit = "history.shardLingerTimeLimit"
	// ShardOwnershipAssertionEnabled configures if the shard ownership is asserted
	// for API requests when a NotFound or NamespaceNotFound error is returned from
	// persistence.
	// NOTE: Shard ownership assertion is not implemented by any persistence implementation
	// in this codebase, because assertion is not needed for persistence implementation
	// that guarantees read after write consistency. As a result, even if this config is
	// enabled, it's a no-op.
	ShardOwnershipAssertionEnabled = "history.shardOwnershipAssertionEnabled"
	// HistoryClientOwnershipCachingEnabled configures if history clients try to cache
	// shard ownership information, instead of checking membership for each request.
	// Only inspected when an instance first creates a history client, so changes
	// to this require a restart to take effect.
	HistoryClientOwnershipCachingEnabled = "history.clientOwnershipCachingEnabled"
	// ShardIOConcurrency controls the concurrency of persistence operations in shard context
	ShardIOConcurrency = "history.shardIOConcurrency"
	// StandbyClusterDelay is the artificial delay added to standby cluster's view of active cluster's time
	StandbyClusterDelay = "history.standbyClusterDelay"
	// StandbyTaskMissingEventsResendDelay is the amount of time the standby cluster will wait (if events are missing)
	// before calling remote for missing events
	StandbyTaskMissingEventsResendDelay = "history.standbyTaskMissingEventsResendDelay"
	// StandbyTaskMissingEventsDiscardDelay is the amount of time the standby cluster will wait (if events are missing)
	// before discarding the task
	StandbyTaskMissingEventsDiscardDelay = "history.standbyTaskMissingEventsDiscardDelay"
	// QueuePendingTaskCriticalCount is the max number of pending tasks in one queue
	// before triggering queue slice splitting and unloading
	QueuePendingTaskCriticalCount = "history.queuePendingTaskCriticalCount"
	// QueueReaderStuckCriticalAttempts is the max number of task loading attempts for a certain task range
	// before that task range is split into a separate slice to unblock loading for later ranges.
	// Currently this only works for scheduled queues, and the task range is 1s.
	QueueReaderStuckCriticalAttempts = "history.queueReaderStuckCriticalAttempts"
	// QueueCriticalSlicesCount is the max number of slices in one queue
	// before force compacting slices
	QueueCriticalSlicesCount = "history.queueCriticalSlicesCount"
	// QueuePendingTaskMaxCount is the max number of pending tasks in one queue before it stops loading new
	// tasks into memory, whereas QueuePendingTaskCriticalCount doesn't stop task loading
	// for the entire queue but only triggers a queue action to unload tasks. Ideally this max count
	// limit should not be hit and task unloading should happen once the critical count is exceeded. But
	// since the queue action is async, we need this hard limit.
	QueuePendingTaskMaxCount = "history.queuePendingTasksMaxCount"
	// ContinueAsNewMinInterval is the minimal interval between continue_as_new executions.
	// This is needed to prevent tight loop continue_as_new spin. Default is 1s.
	ContinueAsNewMinInterval = "history.continueAsNewMinInterval"
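
	// Illustrative example (hypothetical value; assumes the file-based dynamic
	// config YAML format, with durations as strings): require at least 5s between
	// continue-as-new executions:
	//
	//	history.continueAsNewMinInterval:
	//	  - value: "5s"
	//	    constraints: {}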

	// TaskSchedulerEnableRateLimiter indicates if task scheduler rate limiter should be enabled
	TaskSchedulerEnableRateLimiter = "history.taskSchedulerEnableRateLimiter"
	// TaskSchedulerEnableRateLimiterShadowMode indicates if task scheduler rate limiter should run in shadow mode
	// i.e. through rate limiter and emit metrics but do not actually block/throttle task scheduling
	TaskSchedulerEnableRateLimiterShadowMode = "history.taskSchedulerEnableRateLimiterShadowMode"
	// TaskSchedulerRateLimiterStartupDelay is the duration to wait after startup before enforcing task scheduler rate limiting
	TaskSchedulerRateLimiterStartupDelay = "history.taskSchedulerRateLimiterStartupDelay"
	// TaskSchedulerGlobalMaxQPS is the max qps at which all task schedulers in the cluster can schedule tasks.
	// If the value is less than or equal to 0, it will fall back to TaskSchedulerMaxQPS
	TaskSchedulerGlobalMaxQPS = "history.taskSchedulerGlobalMaxQPS"
	// TaskSchedulerMaxQPS is the max qps at which task schedulers on a host can schedule tasks.
	// If the value is less than or equal to 0, it will fall back to HistoryPersistenceMaxQPS
	TaskSchedulerMaxQPS = "history.taskSchedulerMaxQPS"
	// TaskSchedulerGlobalNamespaceMaxQPS is the max qps at which all task schedulers in the cluster can schedule tasks for a given namespace.
	// If the value is less than or equal to 0, it will fall back to TaskSchedulerNamespaceMaxQPS
	TaskSchedulerGlobalNamespaceMaxQPS = "history.taskSchedulerGlobalNamespaceMaxQPS"
	// TaskSchedulerNamespaceMaxQPS is the max qps at which task schedulers on a host can schedule tasks for a given namespace.
	// If the value is less than or equal to 0, it will fall back to HistoryPersistenceNamespaceMaxQPS
	TaskSchedulerNamespaceMaxQPS = "history.taskSchedulerNamespaceMaxQPS"

	// TimerTaskBatchSize is batch size for timer processor to process tasks
	TimerTaskBatchSize = "history.timerTaskBatchSize"
	// TimerProcessorSchedulerWorkerCount is the number of workers in the host level task scheduler for timer processor
	TimerProcessorSchedulerWorkerCount = "history.timerProcessorSchedulerWorkerCount"
	// TimerProcessorSchedulerActiveRoundRobinWeights is the priority round robin weights used by timer task scheduler for active namespaces
	TimerProcessorSchedulerActiveRoundRobinWeights = "history.timerProcessorSchedulerActiveRoundRobinWeights"
	// TimerProcessorSchedulerStandbyRoundRobinWeights is the priority round robin weights used by timer task scheduler for standby namespaces
	TimerProcessorSchedulerStandbyRoundRobinWeights = "history.timerProcessorSchedulerStandbyRoundRobinWeights"
	// TimerProcessorUpdateAckInterval is update interval for timer processor
	TimerProcessorUpdateAckInterval = "history.timerProcessorUpdateAckInterval"
	// TimerProcessorUpdateAckIntervalJitterCoefficient is the update interval jitter coefficient
	TimerProcessorUpdateAckIntervalJitterCoefficient = "history.timerProcessorUpdateAckIntervalJitterCoefficient"
	// TimerProcessorMaxPollRPS is max poll rate per second for timer processor
	TimerProcessorMaxPollRPS = "history.timerProcessorMaxPollRPS"
	// TimerProcessorMaxPollHostRPS is max poll rate per second for all timer processor on a host
	TimerProcessorMaxPollHostRPS = "history.timerProcessorMaxPollHostRPS"
	// TimerProcessorMaxPollInterval is max poll interval for timer processor
	TimerProcessorMaxPollInterval = "history.timerProcessorMaxPollInterval"
	// TimerProcessorMaxPollIntervalJitterCoefficient is the max poll interval jitter coefficient
	TimerProcessorMaxPollIntervalJitterCoefficient = "history.timerProcessorMaxPollIntervalJitterCoefficient"
	// TimerProcessorPollBackoffInterval is the poll backoff interval if task redispatcher's size exceeds limit for timer processor
	TimerProcessorPollBackoffInterval = "history.timerProcessorPollBackoffInterval"
	// TimerProcessorMaxTimeShift is the max shift timer processor can have
	TimerProcessorMaxTimeShift = "history.timerProcessorMaxTimeShift"
	// TimerQueueMaxReaderCount is the max number of readers in one multi-cursor timer queue
	TimerQueueMaxReaderCount = "history.timerQueueMaxReaderCount"
	// RetentionTimerJitterDuration is a time duration jitter to distribute timer from T0 to T0 + jitter duration
	RetentionTimerJitterDuration = "history.retentionTimerJitterDuration"

	// MemoryTimerProcessorSchedulerWorkerCount is the number of workers in the task scheduler for in memory timer processor.
	MemoryTimerProcessorSchedulerWorkerCount = "history.memoryTimerProcessorSchedulerWorkerCount"

	// TransferTaskBatchSize is batch size for transferQueueProcessor
	TransferTaskBatchSize = "history.transferTaskBatchSize"
	// TransferProcessorMaxPollRPS is max poll rate per second for transferQueueProcessor
	TransferProcessorMaxPollRPS = "history.transferProcessorMaxPollRPS"
	// TransferProcessorMaxPollHostRPS is max poll rate per second for all transferQueueProcessor on a host
	TransferProcessorMaxPollHostRPS = "history.transferProcessorMaxPollHostRPS"
	// TransferProcessorSchedulerWorkerCount is the number of workers in the host level task scheduler for transferQueueProcessor
	TransferProcessorSchedulerWorkerCount = "history.transferProcessorSchedulerWorkerCount"
	// TransferProcessorSchedulerActiveRoundRobinWeights is the priority round robin weights used by transfer task scheduler for active namespaces
	TransferProcessorSchedulerActiveRoundRobinWeights = "history.transferProcessorSchedulerActiveRoundRobinWeights"
	// TransferProcessorSchedulerStandbyRoundRobinWeights is the priority round robin weights used by transfer task scheduler for standby namespaces
	TransferProcessorSchedulerStandbyRoundRobinWeights = "history.transferProcessorSchedulerStandbyRoundRobinWeights"
	// TransferProcessorMaxPollInterval is the max poll interval for transferQueueProcessor
	TransferProcessorMaxPollInterval = "history.transferProcessorMaxPollInterval"
	// TransferProcessorMaxPollIntervalJitterCoefficient is the max poll interval jitter coefficient
	TransferProcessorMaxPollIntervalJitterCoefficient = "history.transferProcessorMaxPollIntervalJitterCoefficient"
	// TransferProcessorUpdateAckInterval is update interval for transferQueueProcessor
	TransferProcessorUpdateAckInterval = "history.transferProcessorUpdateAckInterval"
	// TransferProcessorUpdateAckIntervalJitterCoefficient is the update interval jitter coefficient
	TransferProcessorUpdateAckIntervalJitterCoefficient = "history.transferProcessorUpdateAckIntervalJitterCoefficient"
	// TransferProcessorPollBackoffInterval is the poll backoff interval if task redispatcher's size exceeds limit for transferQueueProcessor
	TransferProcessorPollBackoffInterval = "history.transferProcessorPollBackoffInterval"
	// TransferProcessorEnsureCloseBeforeDelete means we ensure the execution is closed before we delete it
	TransferProcessorEnsureCloseBeforeDelete = "history.transferProcessorEnsureCloseBeforeDelete"
	// TransferQueueMaxReaderCount is the max number of readers in one multi-cursor transfer queue
	TransferQueueMaxReaderCount = "history.transferQueueMaxReaderCount"

	// OutboundProcessorEnabled enables starting the outbound queue processor.
	OutboundProcessorEnabled = "history.outboundProcessorEnabled"
	// OutboundTaskBatchSize is batch size for outboundQueueFactory
	OutboundTaskBatchSize = "history.outboundTaskBatchSize"
	// OutboundProcessorMaxPollRPS is max poll rate per second for outboundQueueFactory
	OutboundProcessorMaxPollRPS = "history.outboundProcessorMaxPollRPS"
	// OutboundProcessorMaxPollHostRPS is max poll rate per second for all outboundQueueFactory on a host
	OutboundProcessorMaxPollHostRPS = "history.outboundProcessorMaxPollHostRPS"
	// OutboundProcessorMaxPollInterval is the max poll interval for outboundQueueFactory
	OutboundProcessorMaxPollInterval = "history.outboundProcessorMaxPollInterval"
	// OutboundProcessorMaxPollIntervalJitterCoefficient is the max poll interval jitter coefficient
	OutboundProcessorMaxPollIntervalJitterCoefficient = "history.outboundProcessorMaxPollIntervalJitterCoefficient"
	// OutboundProcessorUpdateAckInterval is update interval for outboundQueueFactory
	OutboundProcessorUpdateAckInterval = "history.outboundProcessorUpdateAckInterval"
	// OutboundProcessorUpdateAckIntervalJitterCoefficient is the update interval jitter coefficient
	OutboundProcessorUpdateAckIntervalJitterCoefficient = "history.outboundProcessorUpdateAckIntervalJitterCoefficient"
	// OutboundProcessorPollBackoffInterval is the poll backoff interval if task redispatcher's size exceeds limit for outboundQueueFactory
	OutboundProcessorPollBackoffInterval = "history.outboundProcessorPollBackoffInterval"
	// OutboundQueueMaxReaderCount is the max number of readers in one multi-cursor outbound queue
	OutboundQueueMaxReaderCount = "history.outboundQueueMaxReaderCount"

	// VisibilityTaskBatchSize is batch size for visibilityQueueProcessor
	VisibilityTaskBatchSize = "history.visibilityTaskBatchSize"
	// VisibilityProcessorMaxPollRPS is max poll rate per second for visibilityQueueProcessor
	VisibilityProcessorMaxPollRPS = "history.visibilityProcessorMaxPollRPS"
	// VisibilityProcessorMaxPollHostRPS is max poll rate per second for all visibilityQueueProcessor on a host
	VisibilityProcessorMaxPollHostRPS = "history.visibilityProcessorMaxPollHostRPS"
	// VisibilityProcessorSchedulerWorkerCount is the number of workers in the host level task scheduler for visibilityQueueProcessor
	VisibilityProcessorSchedulerWorkerCount = "history.visibilityProcessorSchedulerWorkerCount"
	// VisibilityProcessorSchedulerActiveRoundRobinWeights is the priority round robin weights used by the visibility task scheduler for active namespaces
	VisibilityProcessorSchedulerActiveRoundRobinWeights = "history.visibilityProcessorSchedulerActiveRoundRobinWeights"
	// VisibilityProcessorSchedulerStandbyRoundRobinWeights is the priority round robin weights used by the visibility task scheduler for standby namespaces
	VisibilityProcessorSchedulerStandbyRoundRobinWeights = "history.visibilityProcessorSchedulerStandbyRoundRobinWeights"
	// VisibilityProcessorMaxPollInterval is the max poll interval for visibilityQueueProcessor
	VisibilityProcessorMaxPollInterval = "history.visibilityProcessorMaxPollInterval"
	// VisibilityProcessorMaxPollIntervalJitterCoefficient is the max poll interval jitter coefficient
	VisibilityProcessorMaxPollIntervalJitterCoefficient = "history.visibilityProcessorMaxPollIntervalJitterCoefficient"
	// VisibilityProcessorUpdateAckInterval is update interval for visibilityQueueProcessor
	VisibilityProcessorUpdateAckInterval = "history.visibilityProcessorUpdateAckInterval"
	// VisibilityProcessorUpdateAckIntervalJitterCoefficient is the update interval jitter coefficient
	VisibilityProcessorUpdateAckIntervalJitterCoefficient = "history.visibilityProcessorUpdateAckIntervalJitterCoefficient"
	// VisibilityProcessorPollBackoffInterval is the poll backoff interval if task redispatcher's size exceeds limit for visibilityQueueProcessor
	VisibilityProcessorPollBackoffInterval = "history.visibilityProcessorPollBackoffInterval"
	// VisibilityProcessorEnsureCloseBeforeDelete means we ensure the visibility of an execution is closed before we delete its visibility records
	VisibilityProcessorEnsureCloseBeforeDelete = "history.visibilityProcessorEnsureCloseBeforeDelete"
	// VisibilityProcessorEnableCloseWorkflowCleanup enables cleaning up the mutable state after the
	// visibility close task has been processed. Must use Elasticsearch as the visibility store,
	// otherwise workflow data (e.g. search attributes) will be lost after the workflow is closed.
	VisibilityProcessorEnableCloseWorkflowCleanup = "history.visibilityProcessorEnableCloseWorkflowCleanup"
	// VisibilityQueueMaxReaderCount is the max number of readers in one multi-cursor visibility queue
	VisibilityQueueMaxReaderCount = "history.visibilityQueueMaxReaderCount"

	// ArchivalTaskBatchSize is batch size for archivalQueueProcessor
	ArchivalTaskBatchSize = "history.archivalTaskBatchSize"
	// ArchivalProcessorMaxPollRPS is max poll rate per second for archivalQueueProcessor
	ArchivalProcessorMaxPollRPS = "history.archivalProcessorMaxPollRPS"
	// ArchivalProcessorMaxPollHostRPS is max poll rate per second for all archivalQueueProcessor on a host
	ArchivalProcessorMaxPollHostRPS = "history.archivalProcessorMaxPollHostRPS"
	// ArchivalProcessorSchedulerWorkerCount is the number of workers in the host level task scheduler for
	// archivalQueueProcessor
	ArchivalProcessorSchedulerWorkerCount = "history.archivalProcessorSchedulerWorkerCount"
	// ArchivalProcessorMaxPollInterval is the max poll interval for archivalQueueProcessor
	ArchivalProcessorMaxPollInterval = "history.archivalProcessorMaxPollInterval"
	// ArchivalProcessorMaxPollIntervalJitterCoefficient is the max poll interval jitter coefficient
	ArchivalProcessorMaxPollIntervalJitterCoefficient = "history.archivalProcessorMaxPollIntervalJitterCoefficient"
	// ArchivalProcessorUpdateAckInterval is update interval for archivalQueueProcessor
	ArchivalProcessorUpdateAckInterval = "history.archivalProcessorUpdateAckInterval"
	// ArchivalProcessorUpdateAckIntervalJitterCoefficient is the update interval jitter coefficient
	ArchivalProcessorUpdateAckIntervalJitterCoefficient = "history.archivalProcessorUpdateAckIntervalJitterCoefficient"
	// ArchivalProcessorPollBackoffInterval is the poll backoff interval if task redispatcher's size exceeds limit for
	// archivalQueueProcessor
	ArchivalProcessorPollBackoffInterval = "history.archivalProcessorPollBackoffInterval"
	// ArchivalProcessorArchiveDelay is the delay before archivalQueueProcessor starts to process archival tasks
	ArchivalProcessorArchiveDelay = "history.archivalProcessorArchiveDelay"
	// ArchivalBackendMaxRPS is the maximum rate of requests per second to the archival backend
	ArchivalBackendMaxRPS = "history.archivalBackendMaxRPS"
	// ArchivalQueueMaxReaderCount is the max number of readers in one multi-cursor archival queue
	ArchivalQueueMaxReaderCount = "history.archivalQueueMaxReaderCount"

	// WorkflowExecutionMaxInFlightUpdates is the max number of updates that can be in-flight (admitted but not yet completed) for any given workflow execution.
	WorkflowExecutionMaxInFlightUpdates = "history.maxInFlightUpdates"
	// WorkflowExecutionMaxTotalUpdates is the max number of updates that any given workflow execution can receive.
	WorkflowExecutionMaxTotalUpdates = "history.maxTotalUpdates"

	// ReplicatorTaskBatchSize is batch size for ReplicatorProcessor
	ReplicatorTaskBatchSize = "history.replicatorTaskBatchSize"
	// ReplicatorMaxSkipTaskCount is maximum number of tasks that can be skipped during tasks pagination due to not meeting filtering conditions (e.g. missed namespace).
	ReplicatorMaxSkipTaskCount = "history.replicatorMaxSkipTaskCount"
	// ReplicatorProcessorMaxPollInterval is max poll interval for ReplicatorProcessor
	ReplicatorProcessorMaxPollInterval = "history.replicatorProcessorMaxPollInterval"
	// ReplicatorProcessorMaxPollIntervalJitterCoefficient is the max poll interval jitter coefficient
	ReplicatorProcessorMaxPollIntervalJitterCoefficient = "history.replicatorProcessorMaxPollIntervalJitterCoefficient"
	// MaximumBufferedEventsBatch is the maximum permissible number of buffered events for any given mutable state.
	MaximumBufferedEventsBatch = "history.maximumBufferedEventsBatch"
	// MaximumBufferedEventsSizeInBytes is the maximum permissible size of all buffered events for any given mutable
	// state. The total size is determined by the sum of the size, in bytes, of each HistoryEvent proto.
	MaximumBufferedEventsSizeInBytes = "history.maximumBufferedEventsSizeInBytes"
	// MaximumSignalsPerExecution is max number of signals supported by single execution
	MaximumSignalsPerExecution = "history.maximumSignalsPerExecution"
	// ShardUpdateMinInterval is the minimal time interval at which the shard info can be updated
	ShardUpdateMinInterval = "history.shardUpdateMinInterval"
	// ShardUpdateMinTasksCompleted is the minimum number of tasks which must be completed (across all queues) before the shard info can be updated.
	// Note that once history.shardUpdateMinInterval amount of time has passed we'll update the shard info regardless of the number of tasks completed.
	// When this config is zero or lower, we will update shard info at most once every history.shardUpdateMinInterval.
	ShardUpdateMinTasksCompleted = "history.shardUpdateMinTasksCompleted"
	// ShardSyncMinInterval is the minimal time interval at which the shard info should be synced to the remote
	ShardSyncMinInterval = "history.shardSyncMinInterval"
	// EmitShardLagLog controls whether to emit the shard lag log
	EmitShardLagLog = "history.emitShardLagLog"
	// DefaultEventEncoding is the encoding type for history events
	DefaultEventEncoding = "history.defaultEventEncoding"
	// DefaultActivityRetryPolicy represents the out-of-box retry policy for activities where
	// the user has not specified an explicit RetryPolicy
	DefaultActivityRetryPolicy = "history.defaultActivityRetryPolicy"
	// DefaultWorkflowRetryPolicy represents the out-of-box retry policy for unset fields
	// where the user has set an explicit RetryPolicy, but not specified all the fields
	DefaultWorkflowRetryPolicy = "history.defaultWorkflowRetryPolicy"
	// HistoryMaxAutoResetPoints is the key for max number of auto reset points stored in mutableState
	HistoryMaxAutoResetPoints = "history.historyMaxAutoResetPoints"
	// EnableParentClosePolicy controls whether to process ParentClosePolicy
	EnableParentClosePolicy = "history.enableParentClosePolicy"
	// ParentClosePolicyThreshold decides that parent close policy will be processed by system workers (if enabled)
	// when the number of children is greater than or equal to this threshold
	ParentClosePolicyThreshold = "history.parentClosePolicyThreshold"
	// NumParentClosePolicySystemWorkflows is key for number of parentClosePolicy system workflows running in total
	NumParentClosePolicySystemWorkflows = "history.numParentClosePolicySystemWorkflows"
	// HistoryThrottledLogRPS is the rate limit on number of log messages emitted per second for throttled logger
	HistoryThrottledLogRPS = "history.throttledLogRPS"
	// StickyTTL is the duration after which a sticky task queue expires if it receives no updates
	StickyTTL = "history.stickyTTL"
	// WorkflowTaskHeartbeatTimeout is the timeout for workflow task heartbeat
	WorkflowTaskHeartbeatTimeout = "history.workflowTaskHeartbeatTimeout"
	// WorkflowTaskCriticalAttempts is the number of attempts for a workflow task that's regarded as critical
	WorkflowTaskCriticalAttempts = "history.workflowTaskCriticalAttempt"
	// WorkflowTaskRetryMaxInterval is the maximum interval added to a workflow task's startToClose timeout for slowing down retry
	WorkflowTaskRetryMaxInterval = "history.workflowTaskRetryMaxInterval"
	// DefaultWorkflowTaskTimeout for a workflow task
	DefaultWorkflowTaskTimeout = "history.defaultWorkflowTaskTimeout"
	// SkipReapplicationByNamespaceID controls whether to skip event re-application for a namespace
	SkipReapplicationByNamespaceID = "history.SkipReapplicationByNamespaceID"
	// StandbyTaskReReplicationContextTimeout is the context timeout for standby task re-replication
	StandbyTaskReReplicationContextTimeout = "history.standbyTaskReReplicationContextTimeout"
	// MaxBufferedQueryCount indicates the max buffered query count
	MaxBufferedQueryCount = "history.MaxBufferedQueryCount"
	// MutableStateChecksumGenProbability is the probability [0-100] that checksum will be generated for mutable state
	MutableStateChecksumGenProbability = "history.mutableStateChecksumGenProbability"
	// MutableStateChecksumVerifyProbability is the probability [0-100] that checksum will be verified for mutable state
	MutableStateChecksumVerifyProbability = "history.mutableStateChecksumVerifyProbability"
	// MutableStateChecksumInvalidateBefore is the epoch timestamp before which all checksums are to be discarded
	MutableStateChecksumInvalidateBefore = "history.mutableStateChecksumInvalidateBefore"

	// ReplicationTaskFetcherParallelism determines how many go routines we spin up for fetching tasks
	ReplicationTaskFetcherParallelism = "history.ReplicationTaskFetcherParallelism"
	// ReplicationTaskFetcherAggregationInterval determines how frequently the fetch requests are sent
	ReplicationTaskFetcherAggregationInterval = "history.ReplicationTaskFetcherAggregationInterval"
	// ReplicationTaskFetcherTimerJitterCoefficient is the jitter for fetcher timer
	ReplicationTaskFetcherTimerJitterCoefficient = "history.ReplicationTaskFetcherTimerJitterCoefficient"
	// ReplicationTaskFetcherErrorRetryWait is the wait time when fetcher encounters error
	ReplicationTaskFetcherErrorRetryWait = "history.ReplicationTaskFetcherErrorRetryWait"
	// ReplicationTaskProcessorErrorRetryWait is the initial retry wait when we see errors in applying replication tasks
	ReplicationTaskProcessorErrorRetryWait = "history.ReplicationTaskProcessorErrorRetryWait"
	// ReplicationTaskProcessorErrorRetryBackoffCoefficient is the retry wait backoff time coefficient
	ReplicationTaskProcessorErrorRetryBackoffCoefficient = "history.ReplicationTaskProcessorErrorRetryBackoffCoefficient"
	// ReplicationTaskProcessorErrorRetryMaxInterval is the retry wait backoff max duration
	ReplicationTaskProcessorErrorRetryMaxInterval = "history.ReplicationTaskProcessorErrorRetryMaxInterval"
	// ReplicationTaskProcessorErrorRetryMaxAttempts is the max retry attempts for applying replication tasks
	ReplicationTaskProcessorErrorRetryMaxAttempts = "history.ReplicationTaskProcessorErrorRetryMaxAttempts"
	// ReplicationTaskProcessorErrorRetryExpiration is the max retry duration for applying replication tasks
	ReplicationTaskProcessorErrorRetryExpiration = "history.ReplicationTaskProcessorErrorRetryExpiration"
	// ReplicationTaskProcessorNoTaskInitialWait is the wait time when no task is returned
	ReplicationTaskProcessorNoTaskInitialWait = "history.ReplicationTaskProcessorNoTaskInitialWait"
	// ReplicationTaskProcessorCleanupInterval determines how frequently to clean up the replication queue
	ReplicationTaskProcessorCleanupInterval = "history.ReplicationTaskProcessorCleanupInterval"
	// ReplicationTaskProcessorCleanupJitterCoefficient is the jitter for cleanup timer
	ReplicationTaskProcessorCleanupJitterCoefficient = "history.ReplicationTaskProcessorCleanupJitterCoefficient"
	// ReplicationTaskProcessorHostQPS is the qps of task processing rate limiter on host level
	ReplicationTaskProcessorHostQPS = "history.ReplicationTaskProcessorHostQPS"
	// ReplicationTaskProcessorShardQPS is the qps of task processing rate limiter on shard level
	ReplicationTaskProcessorShardQPS = "history.ReplicationTaskProcessorShardQPS"
	// ReplicationEnableDLQMetrics is the flag to emit DLQ metrics
	ReplicationEnableDLQMetrics = "history.ReplicationEnableDLQMetrics"
	// ReplicationEnableUpdateWithNewTaskMerge is the flag controlling whether replication task merging logic
	// should be enabled for non continuedAsNew workflow UpdateWithNew case.
	ReplicationEnableUpdateWithNewTaskMerge = "history.ReplicationEnableUpdateWithNewTaskMerge"
	// HistoryTaskDLQEnabled enables the history task DLQ. This applies to internal tasks like transfer and timer tasks.
	// Do not turn this on if you aren't using Cassandra as the history task DLQ is not implemented for other databases.
	HistoryTaskDLQEnabled = "history.TaskDLQEnabled"
	// HistoryTaskDLQUnexpectedErrorAttempts is the number of task execution attempts before sending the task to DLQ.
	HistoryTaskDLQUnexpectedErrorAttempts = "history.TaskDLQUnexpectedErrorAttempts"
	// HistoryTaskDLQInternalErrors causes history task processing to send tasks failing with serviceerror.Internal to
	// the DLQ (or drop them if the DLQ is not enabled)
	HistoryTaskDLQInternalErrors = "history.TaskDLQInternalErrors"
	// HistoryTaskDLQErrorPattern specifies a regular expression. If a task processing error matches with this regex,
	// that task will be sent to DLQ.
	HistoryTaskDLQErrorPattern = "history.TaskDLQErrorPattern"

	// ReplicationStreamSyncStatusDuration is the interval for syncing replication stream status
	ReplicationStreamSyncStatusDuration = "history.ReplicationStreamSyncStatusDuration"
	// ReplicationProcessorSchedulerQueueSize is the replication task executor queue size
	ReplicationProcessorSchedulerQueueSize = "history.ReplicationProcessorSchedulerQueueSize"
	// ReplicationProcessorSchedulerWorkerCount is the replication task executor worker count
	ReplicationProcessorSchedulerWorkerCount = "history.ReplicationProcessorSchedulerWorkerCount"
	// EnableEagerNamespaceRefresher is a feature flag for eagerly refreshing the namespace while processing replication tasks
	EnableEagerNamespaceRefresher = "history.EnableEagerNamespaceRefresher"
	// EnableReplicationTaskBatching is a feature flag for batching replicate history event tasks
	EnableReplicationTaskBatching = "history.EnableReplicationTaskBatching"
	// EnableReplicateLocalGeneratedEvents is a feature flag for replicating locally generated events
	EnableReplicateLocalGeneratedEvents = "history.EnableReplicateLocalGeneratedEvents"

	// WorkerPersistenceMaxQPS is the max qps worker host can query DB
	WorkerPersistenceMaxQPS = "worker.persistenceMaxQPS"
	// WorkerPersistenceGlobalMaxQPS is the max qps worker cluster can query DB
	WorkerPersistenceGlobalMaxQPS = "worker.persistenceGlobalMaxQPS"
	// WorkerPersistenceNamespaceMaxQPS is the max qps each namespace on worker host can query DB
	WorkerPersistenceNamespaceMaxQPS = "worker.persistenceNamespaceMaxQPS"
	// WorkerPersistenceGlobalNamespaceMaxQPS is the max qps each namespace in the worker cluster can query DB
	WorkerPersistenceGlobalNamespaceMaxQPS = "worker.persistenceGlobalNamespaceMaxQPS"
	// WorkerEnablePersistencePriorityRateLimiting indicates if priority rate limiting is enabled in worker persistence client
	WorkerEnablePersistencePriorityRateLimiting = "worker.enablePersistencePriorityRateLimiting"
	// WorkerPersistenceDynamicRateLimitingParams is a map that contains all adjustable dynamic rate limiting params
	// see DefaultDynamicRateLimitingParams for available options and defaults
	WorkerPersistenceDynamicRateLimitingParams = "worker.persistenceDynamicRateLimitingParams"
	// WorkerIndexerConcurrency is the max concurrent messages to be processed at any given time
	WorkerIndexerConcurrency = "worker.indexerConcurrency"
	// WorkerESProcessorNumOfWorkers is num of workers for esProcessor
	WorkerESProcessorNumOfWorkers = "worker.ESProcessorNumOfWorkers"
	// WorkerESProcessorBulkActions is max number of requests in bulk for esProcessor
	WorkerESProcessorBulkActions = "worker.ESProcessorBulkActions"
	// WorkerESProcessorBulkSize is max total size of bulk in bytes for esProcessor
	WorkerESProcessorBulkSize = "worker.ESProcessorBulkSize"
	// WorkerESProcessorFlushInterval is flush interval for esProcessor
	WorkerESProcessorFlushInterval = "worker.ESProcessorFlushInterval"
	// WorkerESProcessorAckTimeout is the timeout that store will wait to get ack signal from ES processor.
	// Should be at least WorkerESProcessorFlushInterval+<time to process request>.
	WorkerESProcessorAckTimeout = "worker.ESProcessorAckTimeout"
	// WorkerThrottledLogRPS is the rate limit on number of log messages emitted per second for throttled logger
	WorkerThrottledLogRPS = "worker.throttledLogRPS"
	// WorkerScannerMaxConcurrentActivityExecutionSize indicates worker scanner max concurrent activity execution size
	WorkerScannerMaxConcurrentActivityExecutionSize = "worker.ScannerMaxConcurrentActivityExecutionSize"
	// WorkerScannerMaxConcurrentWorkflowTaskExecutionSize indicates worker scanner max concurrent workflow execution size
	WorkerScannerMaxConcurrentWorkflowTaskExecutionSize = "worker.ScannerMaxConcurrentWorkflowTaskExecutionSize"
	// WorkerScannerMaxConcurrentActivityTaskPollers indicates worker scanner max concurrent activity pollers
	WorkerScannerMaxConcurrentActivityTaskPollers = "worker.ScannerMaxConcurrentActivityTaskPollers"
	// WorkerScannerMaxConcurrentWorkflowTaskPollers indicates worker scanner max concurrent workflow pollers
	WorkerScannerMaxConcurrentWorkflowTaskPollers = "worker.ScannerMaxConcurrentWorkflowTaskPollers"
	// ScannerPersistenceMaxQPS is the maximum rate of persistence calls from worker.Scanner
	ScannerPersistenceMaxQPS = "worker.scannerPersistenceMaxQPS"
	// ExecutionScannerPerHostQPS is the maximum rate of calls per host from executions.Scanner
	ExecutionScannerPerHostQPS = "worker.executionScannerPerHostQPS"
	// ExecutionScannerPerShardQPS is the maximum rate of calls per shard from executions.Scanner
	ExecutionScannerPerShardQPS = "worker.executionScannerPerShardQPS"
	// ExecutionDataDurationBuffer is the data TTL duration buffer of execution data
	ExecutionDataDurationBuffer = "worker.executionDataDurationBuffer"
	// ExecutionScannerWorkerCount is the execution scavenger worker count
	ExecutionScannerWorkerCount = "worker.executionScannerWorkerCount"
	// ExecutionScannerHistoryEventIdValidator is the flag to enable history event id validator
	ExecutionScannerHistoryEventIdValidator = "worker.executionEnableHistoryEventIdValidator"
	// TaskQueueScannerEnabled indicates if task queue scanner should be started as part of worker.Scanner
	TaskQueueScannerEnabled = "worker.taskQueueScannerEnabled"
	// BuildIdScavengerEnabled indicates if the build id scavenger should be started as part of worker.Scanner
	BuildIdScavengerEnabled = "worker.buildIdScavengerEnabled"
	// HistoryScannerEnabled indicates if history scanner should be started as part of worker.Scanner
	HistoryScannerEnabled = "worker.historyScannerEnabled"
	// ExecutionsScannerEnabled indicates if executions scanner should be started as part of worker.Scanner
	ExecutionsScannerEnabled = "worker.executionsScannerEnabled"
	// HistoryScannerDataMinAge indicates the history scanner cleanup minimum age.
	HistoryScannerDataMinAge = "worker.historyScannerDataMinAge"
	// HistoryScannerVerifyRetention indicates the history scanner verify data retention.
	// If the service is configured with the archival feature enabled, set worker.historyScannerVerifyRetention to double the data retention.
	HistoryScannerVerifyRetention = "worker.historyScannerVerifyRetention"
	// EnableBatcher decides whether to start the batcher in the worker
	EnableBatcher = "worker.enableBatcher"
	// BatcherRPS controls the RPS of batch operations
	BatcherRPS = "worker.batcherRPS"
	// BatcherConcurrency controls the concurrency of one batch operation
	BatcherConcurrency = "worker.batcherConcurrency"
	// WorkerParentCloseMaxConcurrentActivityExecutionSize indicates worker parent close worker max concurrent activity execution size
	WorkerParentCloseMaxConcurrentActivityExecutionSize = "worker.ParentCloseMaxConcurrentActivityExecutionSize"
	// WorkerParentCloseMaxConcurrentWorkflowTaskExecutionSize indicates worker parent close worker max concurrent workflow execution size
	WorkerParentCloseMaxConcurrentWorkflowTaskExecutionSize = "worker.ParentCloseMaxConcurrentWorkflowTaskExecutionSize"
	// WorkerParentCloseMaxConcurrentActivityTaskPollers indicates worker parent close worker max concurrent activity pollers
	WorkerParentCloseMaxConcurrentActivityTaskPollers = "worker.ParentCloseMaxConcurrentActivityTaskPollers"
	// WorkerParentCloseMaxConcurrentWorkflowTaskPollers indicates worker parent close worker max concurrent workflow pollers
	WorkerParentCloseMaxConcurrentWorkflowTaskPollers = "worker.ParentCloseMaxConcurrentWorkflowTaskPollers"
	// WorkerPerNamespaceWorkerCount controls number of per-ns (scheduler, batcher, etc.) workers to run per namespace
	WorkerPerNamespaceWorkerCount = "worker.perNamespaceWorkerCount"
	// WorkerPerNamespaceWorkerOptions are SDK worker options for per-namespace worker
	WorkerPerNamespaceWorkerOptions = "worker.perNamespaceWorkerOptions"
	// WorkerPerNamespaceWorkerStartRate controls how fast per-namespace workers can be started (workers/second).
	WorkerPerNamespaceWorkerStartRate = "worker.perNamespaceWorkerStartRate"
	// WorkerEnableScheduler controls whether to start the worker for scheduled workflows
	WorkerEnableScheduler = "worker.enableScheduler"
	// WorkerStickyCacheSize controls the sticky cache size for SDK workers on worker nodes
	// (shared between all workers in the process, cannot be changed after startup)
	WorkerStickyCacheSize = "worker.stickyCacheSize"
	// SchedulerNamespaceStartWorkflowRPS is the per-namespace limit for starting workflows by schedules
	SchedulerNamespaceStartWorkflowRPS = "worker.schedulerNamespaceStartWorkflowRPS"
	// WorkerDeleteNamespaceActivityLimitsConfig is a map that contains a copy of relevant sdkworker.Options
	// settings for controlling remote activity concurrency for delete namespace workflows.
	WorkerDeleteNamespaceActivityLimitsConfig = "worker.deleteNamespaceActivityLimitsConfig"
)
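
The keys above are plain string constants identifying dynamic config settings. As a rough sketch of how one of them can be overridden, for example in a test, a StaticClient can be paired with a Collection (both documented further down this page). The import paths, the no-op logger from the server's log package, and the value 5000 are assumptions made for illustration only:

package main

import (
	"fmt"

	"go.temporal.io/server/common/dynamicconfig"
	"go.temporal.io/server/common/log"
)

func main() {
	// Override a single dynamic config key; keys not present in the
	// StaticClient fall back to the default passed at lookup time.
	client := dynamicconfig.StaticClient{
		dynamicconfig.MaximumSignalsPerExecution: 5000, // illustrative value
	}
	col := dynamicconfig.NewCollection(client, log.NewNoopLogger())

	// Collection getters return functions; call them to read the current value.
	maxSignals := col.GetIntProperty(dynamicconfig.MaximumSignalsPerExecution, 10000)
	fmt.Println(maxSignals()) // 5000
}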
View Source
const GlobalDefaultNumTaskQueuePartitions = 4

Variables

View Source
var DefaultDynamicRateLimitingParams = map[string]interface{}{
	// contains filtered or unexported fields
}
View Source
var DefaultPerShardNamespaceRPSMax = GetIntPropertyFnFilteredByNamespace(0)

Functions

func GetBoolPropertyFn

func GetBoolPropertyFn(value bool) func() bool

GetBoolPropertyFn returns value as BoolPropertyFn
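
For example, fixed values can be wrapped as property functions, which is handy for defaults or tests; the values below are illustrative:

package main

import (
	"fmt"
	"time"

	"go.temporal.io/server/common/dynamicconfig"
)

func main() {
	// Wrap fixed values as property functions; each call returns the same value.
	enabled := dynamicconfig.GetBoolPropertyFn(true)
	timeout := dynamicconfig.GetDurationPropertyFn(30 * time.Second) // illustrative value

	fmt.Println(enabled()) // true
	fmt.Println(timeout()) // 30s
}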

func GetBoolPropertyFnFilteredByNamespace

func GetBoolPropertyFnFilteredByNamespace(value bool) func(namespace string) bool

GetBoolPropertyFnFilteredByNamespace returns value as BoolPropertyFnWithNamespaceFilter

func GetBoolPropertyFnFilteredByTaskQueueInfo added in v1.21.0

func GetBoolPropertyFnFilteredByTaskQueueInfo(value bool) func(namespace string, taskQueue string, taskType enumspb.TaskQueueType) bool

GetBoolPropertyFnFilteredByTaskQueueInfo returns value as BoolPropertyFnWithTaskQueueFilter

func GetDurationPropertyFn

func GetDurationPropertyFn(value time.Duration) func() time.Duration

GetDurationPropertyFn returns value as DurationPropertyFn

func GetDurationPropertyFnFilteredByNamespace

func GetDurationPropertyFnFilteredByNamespace(value time.Duration) func(namespace string) time.Duration

GetDurationPropertyFnFilteredByNamespace returns value as DurationPropertyFnFilteredByNamespace

func GetDurationPropertyFnFilteredByTaskQueueInfo

func GetDurationPropertyFnFilteredByTaskQueueInfo(value time.Duration) func(namespace string, taskQueue string, taskType enumspb.TaskQueueType) time.Duration

GetDurationPropertyFnFilteredByTaskQueueInfo returns value as DurationPropertyFnWithTaskQueueFilter

func GetFloatPropertyFn

func GetFloatPropertyFn(value float64) func() float64

GetFloatPropertyFn returns value as FloatPropertyFn

func GetIntPropertyFn

func GetIntPropertyFn(value int) func() int

GetIntPropertyFn returns value as IntPropertyFn

func GetIntPropertyFnFilteredByNamespace added in v1.24.0

func GetIntPropertyFnFilteredByNamespace(value int) func(namespace string) int

GetIntPropertyFnFilteredByNamespace returns value as IntPropertyFnWithNamespaceFilter

func GetIntPropertyFnFilteredByTaskQueue added in v1.24.0

func GetIntPropertyFnFilteredByTaskQueue(value int) func(namespace string, taskQueue string, taskType enumspb.TaskQueueType) int

GetIntPropertyFnFilteredByTaskQueue returns value as IntPropertyFnWithTaskQueueFilter

func GetMapPropertyFn

func GetMapPropertyFn(value map[string]interface{}) func() map[string]interface{}

GetMapPropertyFn returns value as MapPropertyFn

func GetMapPropertyFnFilteredByNamespace added in v1.24.0

func GetMapPropertyFnFilteredByNamespace(value map[string]interface{}) func(namespace string) map[string]interface{}

GetMapPropertyFnFilteredByNamespace returns value as MapPropertyFnWithNamespaceFilter

func GetStringPropertyFn

func GetStringPropertyFn(value string) func() string

GetStringPropertyFn returns value as StringPropertyFn

func NewFileBasedClient

func NewFileBasedClient(config *FileBasedClientConfig, logger log.Logger, doneCh <-chan interface{}) (*fileBasedClient, error)

NewFileBasedClient creates a file based client.
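
A minimal sketch of constructing the file based client; the file path, the poll interval, and the no-op logger from the server's log package are illustrative assumptions:

package main

import (
	"time"

	"go.temporal.io/server/common/dynamicconfig"
	"go.temporal.io/server/common/log"
)

func main() {
	doneCh := make(chan interface{})
	defer close(doneCh) // presumably signals the client to stop polling

	cfg := &dynamicconfig.FileBasedClientConfig{
		Filepath:     "config/dynamicconfig/development.yaml", // illustrative path
		PollInterval: 10 * time.Second,
	}
	client, err := dynamicconfig.NewFileBasedClient(cfg, log.NewNoopLogger(), doneCh)
	if err != nil {
		panic(err)
	}

	// Wrap the client in a Collection for typed lookups.
	_ = dynamicconfig.NewCollection(client, log.NewNoopLogger())
}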

func NewFileBasedClientWithReader added in v1.16.0

func NewFileBasedClientWithReader(reader fileReader, config *FileBasedClientConfig, logger log.Logger, doneCh <-chan interface{}) (*fileBasedClient, error)

Types

type BoolPropertyFn

type BoolPropertyFn func() bool

These function types follow a similar pattern:

{X}PropertyFn - returns a value of type X that is global (no filters)
{X}PropertyFnWith{Y}Filter - returns a value of type X with the given filters (see the sketch after the lists below)

Available value types:

Bool: bool
Duration: time.Duration
Float: float64
Int: int
Map: map[string]any
String: string

Available filters:

Namespace func(namespace string)
NamespaceID func(namespaceID string)
TaskQueue func(namespace string, taskQueue string, taskType enumspb.TaskQueueType)  (matching task queue)
TaskType func(taskType enumsspb.TaskType)  (history task type)
ShardID func(shardID int32)
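
A short sketch of this naming pattern, built from the fixed-value helpers documented above; the values and filter arguments are illustrative:

package main

import (
	"fmt"

	enumspb "go.temporal.io/api/enums/v1"
	"go.temporal.io/server/common/dynamicconfig"
)

func main() {
	// {Int}PropertyFn: global, no filter arguments.
	global := dynamicconfig.GetIntPropertyFn(10)
	fmt.Println(global())

	// {Int}PropertyFnWith{Namespace}Filter: keyed by namespace.
	perNS := dynamicconfig.GetIntPropertyFnFilteredByNamespace(20)
	fmt.Println(perNS("my-namespace"))

	// {Int}PropertyFnWith{TaskQueue}Filter: keyed by namespace, task queue name, and task queue type.
	perTQ := dynamicconfig.GetIntPropertyFnFilteredByTaskQueue(30)
	fmt.Println(perTQ("my-namespace", "my-task-queue", enumspb.TASK_QUEUE_TYPE_WORKFLOW))
}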

type BoolPropertyFnWithNamespaceFilter

type BoolPropertyFnWithNamespaceFilter func(namespace string) bool

type BoolPropertyFnWithNamespaceIDFilter

type BoolPropertyFnWithNamespaceIDFilter func(namespaceID string) bool

type BoolPropertyFnWithTaskQueueFilter added in v1.24.0

type BoolPropertyFnWithTaskQueueFilter func(namespace string, taskQueue string, taskType enumspb.TaskQueueType) bool

type Client

type Client interface {
	// GetValue returns a set of values and associated constraints for a key. Not all
	// constraints are valid for all keys.
	//
	// The returned slice of ConstrainedValues is treated as a set, and order does not
	// matter. The effective order of constraints is determined by server logic. See the
	// comment on Constraints below.
	//
	// If none of the ConstrainedValues match the constraints being used for the key, then
	// the server default value will be used.
	//
	// Note that GetValue is called very often! You should not synchronously call out to an
	// external system. Instead you should keep a set of all configured values, refresh it
	// periodically or when notified, and only do in-memory lookups inside of GetValue.
	GetValue(key Key) []ConstrainedValue
}

Client is a source of dynamic configuration. The default Client, fileBasedClient, reads from a file in the filesystem, and refreshes it periodically. You can extend the server with an alternate Client using ServerOptions.
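
A hedged sketch of an alternate Client following the advice above: it keeps a snapshot of values in memory so GetValue never calls out to an external system. The memoryClient type and its update method are hypothetical names invented for this illustration, and the no-op logger is an assumption:

package main

import (
	"fmt"
	"sync"

	"go.temporal.io/server/common/dynamicconfig"
	"go.temporal.io/server/common/log"
)

// memoryClient (a hypothetical name) keeps a snapshot of configured values in
// memory so that GetValue never blocks on an external system.
type memoryClient struct {
	mu     sync.RWMutex
	values map[dynamicconfig.Key][]dynamicconfig.ConstrainedValue
}

// GetValue does an in-memory lookup only, as the Client comment recommends.
func (c *memoryClient) GetValue(key dynamicconfig.Key) []dynamicconfig.ConstrainedValue {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.values[key]
}

// update swaps in a fresh snapshot; call it periodically or on notification
// from whatever external source holds the configuration.
func (c *memoryClient) update(snapshot map[dynamicconfig.Key][]dynamicconfig.ConstrainedValue) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.values = snapshot
}

var _ dynamicconfig.Client = (*memoryClient)(nil)

func main() {
	c := &memoryClient{}
	c.update(map[dynamicconfig.Key][]dynamicconfig.ConstrainedValue{
		dynamicconfig.MaximumSignalsPerExecution: {{Value: 5000}}, // illustrative
	})
	col := dynamicconfig.NewCollection(c, log.NewNoopLogger())
	fmt.Println(col.GetIntProperty(dynamicconfig.MaximumSignalsPerExecution, 10000)()) // 5000
}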

func NewNoopClient

func NewNoopClient() Client

NewNoopClient returns a Client that has no keys (a Collection using it will always return default values).

type Collection

type Collection struct {
	// contains filtered or unexported fields
}

Collection implements lookup and constraint logic on top of a Client. The rest of the server code should use Collection as the interface to dynamic config, instead of the low-level Client.

func NewCollection

func NewCollection(client Client, logger log.Logger) *Collection

NewCollection creates a new collection

func NewNoopCollection

func NewNoopCollection() *Collection

NewNoopCollection creates a new noop collection.

func (*Collection) GetBoolProperty

func (c *Collection) GetBoolProperty(key Key, defaultValue any) BoolPropertyFn

GetBoolProperty gets property and asserts that it's a bool

func (*Collection) GetBoolPropertyFilteredByTaskQueueInfo

func (c *Collection) GetBoolPropertyFilteredByTaskQueueInfo(key Key, defaultValue any) BoolPropertyFnWithTaskQueueFilter

GetBoolPropertyFilteredByTaskQueueInfo gets property with taskQueueInfo as filters and asserts that it's a bool

func (*Collection) GetBoolPropertyFnWithNamespaceFilter

func (c *Collection) GetBoolPropertyFnWithNamespaceFilter(key Key, defaultValue any) BoolPropertyFnWithNamespaceFilter

GetBoolPropertyFnWithNamespaceFilter gets property with namespace filter and asserts that it's a bool

func (*Collection) GetBoolPropertyFnWithNamespaceIDFilter

func (c *Collection) GetBoolPropertyFnWithNamespaceIDFilter(key Key, defaultValue any) BoolPropertyFnWithNamespaceIDFilter

GetBoolPropertyFnWithNamespaceIDFilter gets property with namespaceID filter and asserts that it's a bool

func (*Collection) GetDurationProperty

func (c *Collection) GetDurationProperty(key Key, defaultValue any) DurationPropertyFn

GetDurationProperty gets property and asserts that it's a duration

func (*Collection) GetDurationPropertyFilteredByNamespace

func (c *Collection) GetDurationPropertyFilteredByNamespace(key Key, defaultValue any) DurationPropertyFnWithNamespaceFilter

GetDurationPropertyFilteredByNamespace gets property with namespace filter and asserts that it's a duration

func (*Collection) GetDurationPropertyFilteredByNamespaceID

func (c *Collection) GetDurationPropertyFilteredByNamespaceID(key Key, defaultValue any) DurationPropertyFnWithNamespaceIDFilter

GetDurationPropertyFilteredByNamespaceID gets property with namespaceID filter and asserts that it's a duration

func (*Collection) GetDurationPropertyFilteredByShardID

func (c *Collection) GetDurationPropertyFilteredByShardID(key Key, defaultValue any) DurationPropertyFnWithShardIDFilter

GetDurationPropertyFilteredByShardID gets property with shardID id as filter and asserts that it's a duration

func (*Collection) GetDurationPropertyFilteredByTaskQueueInfo

func (c *Collection) GetDurationPropertyFilteredByTaskQueueInfo(key Key, defaultValue any) DurationPropertyFnWithTaskQueueFilter

GetDurationPropertyFilteredByTaskQueueInfo gets property with taskQueueInfo as filters and asserts that it's a duration

func (*Collection) GetDurationPropertyFilteredByTaskType added in v1.18.1

func (c *Collection) GetDurationPropertyFilteredByTaskType(key Key, defaultValue any) DurationPropertyFnWithTaskTypeFilter

GetDurationPropertyFilteredByTaskType gets property with task type as filters and asserts that it's a duration

func (*Collection) GetFloat64Property

func (c *Collection) GetFloat64Property(key Key, defaultValue any) FloatPropertyFn

GetFloat64Property gets property and asserts that it's a float64

func (*Collection) GetFloat64PropertyFilteredByShardID

func (c *Collection) GetFloat64PropertyFilteredByShardID(key Key, defaultValue any) FloatPropertyFnWithShardIDFilter

GetFloat64PropertyFilteredByShardID gets property with shardID filter and asserts that it's a float64

func (*Collection) GetFloatPropertyFilteredByNamespace

func (c *Collection) GetFloatPropertyFilteredByNamespace(key Key, defaultValue any) FloatPropertyFnWithNamespaceFilter

GetFloatPropertyFilteredByNamespace gets property with namespace filter and asserts that it's a float64

func (*Collection) GetFloatPropertyFilteredByTaskQueueInfo

func (c *Collection) GetFloatPropertyFilteredByTaskQueueInfo(key Key, defaultValue any) FloatPropertyFnWithTaskQueueFilter

GetFloatPropertyFilteredByTaskQueueInfo gets property with taskQueueInfo as filters and asserts that it's a float64

func (*Collection) GetIntProperty

func (c *Collection) GetIntProperty(key Key, defaultValue any) IntPropertyFn

GetIntProperty gets property and asserts that it's an integer

func (*Collection) GetIntPropertyFilteredByNamespace

func (c *Collection) GetIntPropertyFilteredByNamespace(key Key, defaultValue any) IntPropertyFnWithNamespaceFilter

GetIntPropertyFilteredByNamespace gets property with namespace filter and asserts that it's an integer

func (*Collection) GetIntPropertyFilteredByShardID

func (c *Collection) GetIntPropertyFilteredByShardID(key Key, defaultValue any) IntPropertyFnWithShardIDFilter

GetIntPropertyFilteredByShardID gets property with shardID as filter and asserts that it's an integer

func (*Collection) GetIntPropertyFilteredByTaskQueueInfo

func (c *Collection) GetIntPropertyFilteredByTaskQueueInfo(key Key, defaultValue any) IntPropertyFnWithTaskQueueFilter

GetIntPropertyFilteredByTaskQueueInfo gets property with taskQueueInfo as filters and asserts that it's an integer

func (*Collection) GetMapProperty

func (c *Collection) GetMapProperty(key Key, defaultValue any) MapPropertyFn

GetMapProperty gets property and asserts that it's a map

func (*Collection) GetMapPropertyFnWithNamespaceFilter

func (c *Collection) GetMapPropertyFnWithNamespaceFilter(key Key, defaultValue any) MapPropertyFnWithNamespaceFilter

GetMapPropertyFnWithNamespaceFilter gets property and asserts that it's a map

func (*Collection) GetStringProperty

func (c *Collection) GetStringProperty(key Key, defaultValue any) StringPropertyFn

GetStringProperty gets property and asserts that it's a string

func (*Collection) GetStringPropertyFnWithNamespaceFilter

func (c *Collection) GetStringPropertyFnWithNamespaceFilter(key Key, defaultValue any) StringPropertyFnWithNamespaceFilter

GetStringPropertyFnWithNamespaceFilter gets property with namespace filter and asserts that it's a string

func (*Collection) GetStringPropertyFnWithNamespaceIDFilter added in v1.21.0

func (c *Collection) GetStringPropertyFnWithNamespaceIDFilter(key Key, defaultValue any) StringPropertyFnWithNamespaceIDFilter

GetStringPropertyFnWithNamespaceIDFilter gets property with namespace ID filter and asserts that it's a string

func (*Collection) GetTaskQueuePartitionsProperty added in v1.17.3

func (c *Collection) GetTaskQueuePartitionsProperty(key Key) IntPropertyFnWithTaskQueueFilter

Task queue partitions use a dedicated function to handle defaults.

func (*Collection) HasKey added in v1.21.0

func (c *Collection) HasKey(key Key) bool

type ConstrainedValue added in v1.17.3

type ConstrainedValue struct {
	Constraints Constraints
	Value       any
}

ConstrainedValue is a value plus associated constraints.

The type of the Value field depends on the key. Acceptable types will be one of:

int, float64, bool, string, map[string]any, time.Duration

If time.Duration is expected, a string is also accepted, which will be converted using timestamp.ParseDurationDefaultDays. If float64 is expected, int is also accepted. In other cases, the exact type must be used. If a Value is returned with an unexpected type, it will be ignored.

type Constraints added in v1.18.0

type Constraints struct {
	Namespace     string
	NamespaceID   string
	TaskQueueName string
	TaskQueueType enumspb.TaskQueueType
	ShardID       int32
	TaskType      enumsspb.TaskType
}

Constraints describe under what conditions a ConstrainedValue should be used. There are a few standard "constraint precedence orders" that the server uses:

global precedence:
  no constraints
namespace precedence:
  Namespace
  no constraints
task queue precedence:
  Namespace+TaskQueueName+TaskQueueType
  Namespace+TaskQueueName
  TaskQueueName
  Namespace
  no constraints
shard id precedence:
  ShardID
  no constraints

In each case, the constraints that the server is checking and the constraints that apply to the value must match exactly, including the fields that are not set (zero values). That is, for keys that use namespace precedence, you must either return a ConstrainedValue with only Namespace set, or with no fields set. (Or return one of each.) If you return a ConstrainedValue with Namespace and ShardID set, for example, that value will never be used, even if the Namespace matches.
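
For instance, a key that uses namespace precedence might be served like this sketch; the namespace name, numbers, key choice, and no-op logger are illustrative assumptions:

package main

import (
	"fmt"

	"go.temporal.io/server/common/dynamicconfig"
	"go.temporal.io/server/common/log"
)

func main() {
	// Namespace precedence: one value scoped to a single namespace (only the
	// Namespace constraint set) and one unconstrained fallback (no fields set).
	client := dynamicconfig.StaticClient{
		dynamicconfig.WorkflowExecutionMaxTotalUpdates: []dynamicconfig.ConstrainedValue{
			{Constraints: dynamicconfig.Constraints{Namespace: "busy-namespace"}, Value: 5000},
			{Value: 2000},
		},
	}
	col := dynamicconfig.NewCollection(client, log.NewNoopLogger())
	maxUpdates := col.GetIntPropertyFilteredByNamespace(dynamicconfig.WorkflowExecutionMaxTotalUpdates, 1000)

	fmt.Println(maxUpdates("busy-namespace")) // 5000: the namespace-scoped value wins
	fmt.Println(maxUpdates("other"))          // 2000: the unconstrained fallback applies
}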

type DurationPropertyFn

type DurationPropertyFn func() time.Duration

type DurationPropertyFnWithNamespaceFilter

type DurationPropertyFnWithNamespaceFilter func(namespace string) time.Duration

type DurationPropertyFnWithNamespaceIDFilter

type DurationPropertyFnWithNamespaceIDFilter func(namespaceID string) time.Duration

type DurationPropertyFnWithShardIDFilter

type DurationPropertyFnWithShardIDFilter func(shardID int32) time.Duration

type DurationPropertyFnWithTaskQueueFilter added in v1.24.0

type DurationPropertyFnWithTaskQueueFilter func(namespace string, taskQueue string, taskType enumspb.TaskQueueType) time.Duration

type DurationPropertyFnWithTaskTypeFilter added in v1.18.1

type DurationPropertyFnWithTaskTypeFilter func(task enumsspb.TaskType) time.Duration

type FileBasedClientConfig

type FileBasedClientConfig struct {
	Filepath     string        `yaml:"filepath"`
	PollInterval time.Duration `yaml:"pollInterval"`
}

FileBasedClientConfig is the config for the file based dynamic config client. It specifies where the config file is stored and how often the config should be updated by checking the config file again.

type FloatPropertyFn

type FloatPropertyFn func() float64

type FloatPropertyFnWithNamespaceFilter

type FloatPropertyFnWithNamespaceFilter func(namespace string) float64

type FloatPropertyFnWithShardIDFilter

type FloatPropertyFnWithShardIDFilter func(shardID int32) float64

type FloatPropertyFnWithTaskQueueFilter added in v1.24.0

type FloatPropertyFnWithTaskQueueFilter func(namespace string, taskQueue string, taskType enumspb.TaskQueueType) float64

type IntPropertyFn

type IntPropertyFn func() int

type IntPropertyFnWithNamespaceFilter

type IntPropertyFnWithNamespaceFilter func(namespace string) int

type IntPropertyFnWithShardIDFilter

type IntPropertyFnWithShardIDFilter func(shardID int32) int

type IntPropertyFnWithTaskQueueFilter added in v1.24.0

type IntPropertyFnWithTaskQueueFilter func(namespace string, taskQueue string, taskType enumspb.TaskQueueType) int

type Key

type Key string

Key is a key/property stored in dynamic config. For convenience, it is recommended that you treat keys as case-insensitive.

func (Key) String

func (k Key) String() string

type MapPropertyFn

type MapPropertyFn func() map[string]any

type MapPropertyFnWithNamespaceFilter

type MapPropertyFnWithNamespaceFilter func(namespace string) map[string]any

type MockfileReader added in v1.16.0

type MockfileReader struct {
	// contains filtered or unexported fields
}

MockfileReader is a mock of fileReader interface.

func NewMockfileReader added in v1.16.0

func NewMockfileReader(ctrl *gomock.Controller) *MockfileReader

NewMockfileReader creates a new mock instance.

func (*MockfileReader) EXPECT added in v1.16.0

func (m *MockfileReader) EXPECT() *MockfileReaderMockRecorder

EXPECT returns an object that allows the caller to indicate expected use.

func (*MockfileReader) ReadFile added in v1.16.0

func (m *MockfileReader) ReadFile(src string) ([]byte, error)

ReadFile mocks base method.

func (*MockfileReader) Stat added in v1.16.0

func (m *MockfileReader) Stat(src string) (os.FileInfo, error)

Stat mocks base method.

type MockfileReaderMockRecorder added in v1.16.0

type MockfileReaderMockRecorder struct {
	// contains filtered or unexported fields
}

MockfileReaderMockRecorder is the mock recorder for MockfileReader.

func (*MockfileReaderMockRecorder) ReadFile added in v1.16.0

func (mr *MockfileReaderMockRecorder) ReadFile(src interface{}) *gomock.Call

ReadFile indicates an expected call of ReadFile.

func (*MockfileReaderMockRecorder) Stat added in v1.16.0

func (mr *MockfileReaderMockRecorder) Stat(src interface{}) *gomock.Call

Stat indicates an expected call of Stat.

type StaticClient added in v1.18.0

type StaticClient map[Key]any

StaticClient is a simple implementation of Client that just looks up in a map. Values can be either plain values or []ConstrainedValue for a constrained value.
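
A small sketch of both forms; the keys, values, filter choices, and no-op logger are illustrative. Note that, per the ConstrainedValue notes above, a duration-typed key may also be supplied as a string:

package main

import (
	"fmt"
	"time"

	"go.temporal.io/server/common/dynamicconfig"
	"go.temporal.io/server/common/log"
)

func main() {
	client := dynamicconfig.StaticClient{
		// A plain value applies unconditionally.
		dynamicconfig.MaximumBufferedEventsBatch: 200, // illustrative value
		// A duration-typed key may also be given as a string.
		dynamicconfig.StickyTTL: "12h",
	}
	col := dynamicconfig.NewCollection(client, log.NewNoopLogger())

	fmt.Println(col.GetIntProperty(dynamicconfig.MaximumBufferedEventsBatch, 100)())                      // 200
	fmt.Println(col.GetDurationPropertyFilteredByNamespace(dynamicconfig.StickyTTL, time.Hour)("my-ns")) // 12h0m0s
}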

func (StaticClient) GetValue added in v1.18.0

func (s StaticClient) GetValue(key Key) []ConstrainedValue

type StringPropertyFn

type StringPropertyFn func() string

type StringPropertyFnWithNamespaceFilter

type StringPropertyFnWithNamespaceFilter func(namespace string) string

type StringPropertyFnWithNamespaceIDFilter added in v1.21.0

type StringPropertyFnWithNamespaceIDFilter func(namespaceID string) string
