Documentation ¶
Index ¶
- Variables
- func Dial(ctx context.Context, node *config.Node, registry *protoregistry.Registry, ...) (*grpc.ClientConn, error)
- func GeneratePraefectName(c config.Config, log logrus.FieldLogger) string
- func PingAll(ctx context.Context, cfg config.Config, printer Printer, quiet bool) error
- type HealthClients
- type HealthManager
- type Manager
- type Mgr
- func (n *Mgr) GetPrimary(ctx context.Context, virtualStorage string, _ int64) (string, error)
- func (n *Mgr) GetShard(ctx context.Context, virtualStorageName string) (Shard, error)
- func (n *Mgr) GetSyncedNode(ctx context.Context, virtualStorageName, repoPath string) (Node, error)
- func (n *Mgr) HealthyNodes() map[string][]string
- func (n *Mgr) Nodes() map[string][]Node
- func (n *Mgr) Start(bootstrapInterval, monitorInterval time.Duration)
- func (n *Mgr) Stop()
- type MockManager
- type MockNode
- type Node
- type PerRepositoryElector
- type Ping
- type PingError
- type Printer
- type Shard
- type TextPrinter
Constants ¶
This section is empty.
Variables ¶
var ErrNoPrimary = errors.New("no primary")
ErrNoPrimary is returned if the repository does not have a primary.
var ErrPrimaryNotHealthy = errors.New("primary gitaly is not healthy")
ErrPrimaryNotHealthy indicates the primary of a shard is not in a healthy state and hence should not be used for a new request.
var ErrVirtualStorageNotExist = errors.New("virtual storage does not exist")
ErrVirtualStorageNotExist indicates the node manager is not aware of the virtual storage for which a shard is being requested.
Functions ¶
func Dial ¶
func Dial(ctx context.Context, node *config.Node, registry *protoregistry.Registry, errorTracker tracker.ErrorTracker, handshaker client.Handshaker, sidechannelRegistry *sidechannel.Registry) (*grpc.ClientConn, error)
Dial dials a node with the necessary interceptors configured.
func GeneratePraefectName ¶
func GeneratePraefectName(c config.Config, log logrus.FieldLogger) string
GeneratePraefectName generates a name so that each Praefect process can report node statuses independently. This will enable us to do a SQL election to determine which nodes are active. Ideally this name doesn't change across restarts since that may temporarily make it look like there are more Praefect processes active for determining a quorum.
Types ¶
type HealthClients ¶
type HealthClients map[string]map[string]grpc_health_v1.HealthClient
HealthClients contains HealthClients for every physical storage by virtual storage.
type HealthManager ¶
type HealthManager struct {
// contains filtered or unexported fields
}
HealthManager monitors the health status of the storage cluster. The monitoring frequency is controlled by the Ticker passed in to Run method. On each tick, the HealthManager:
- Runs health checks on configured physical storages by performing a gRPC call to the health checking endpoint. If an error tracker is configured, it also considers its view of the node's health.
- Stores its health check results in the `node_status` table.
- Checks whether the cluster's consensus of healthy nodes has changed by querying the `node_status` table for the results of the other Praefect instances. If so, it sends to the Updated channel to signal a change in the cluster status.
To determine the participants for the quorum, we use a lightweight service discovery protocol. A Praefect instance is deemed to be a voting member if it has a recent health check in the `node_status` table. Each Praefect node is identified by its host name and the provided stable ID. The stable ID should uniquely identify a Praefect instance on the host.
func NewHealthManager ¶
func NewHealthManager(log logrus.FieldLogger, db glsql.Querier, praefectName string, clients HealthClients) *HealthManager
NewHealthManager returns a new health manager that monitors which nodes in the cluster are healthy.
If db is nil, the HealthManager checks the connection health normally but doesn't persist any information about the nodes in the database.
func (*HealthManager) HealthyNodes ¶
func (hm *HealthManager) HealthyNodes() map[string][]string
HealthyNodes returns a map of healthy nodes in each virtual storage as seen by the latest local health check.
func (*HealthManager) Run ¶
Run runs the health check on every tick of the Ticker until the context is canceled. It returns the error from the context.
func (*HealthManager) Updated ¶
func (hm *HealthManager) Updated() <-chan struct{}
Updated returns a channel that is sent to when the set of healthy nodes is updated. An update is also sent on the first check even if no nodes are healthy. The channel is buffered so the HealthManager can proceed with cluster health monitoring even when the channel's consumer is slow.
type Manager ¶
type Manager interface {
	GetShard(ctx context.Context, virtualStorageName string) (Shard, error)
	// GetSyncedNode returns a random storage node based on the state of the replication.
	// It returns the primary in case there are no up-to-date secondaries or an error occurs.
	GetSyncedNode(ctx context.Context, virtualStorageName, repoPath string) (Node, error)
	// HealthyNodes returns healthy storages by virtual storage.
	HealthyNodes() map[string][]string
	// Nodes returns nodes by their virtual storages.
	Nodes() map[string][]Node
}
Manager is responsible for returning shards for virtual storages.
type Mgr ¶
type Mgr struct {
// contains filtered or unexported fields
}
Mgr is a concrete type that adheres to the Manager interface.
func NewManager ¶
func NewManager(log *logrus.Entry, c config.Config, db *sql.DB, csg datastore.ConsistentStoragesGetter, latencyHistogram prommetrics.HistogramVec, registry *protoregistry.Registry, errorTracker tracker.ErrorTracker, handshaker client.Handshaker, sidechannelRegistry *sidechannel.Registry) (*Mgr, error)
NewManager creates a new NodeMgr based on virtual storage configs.
func (*Mgr) GetPrimary ¶
func (n *Mgr) GetPrimary(ctx context.Context, virtualStorage string, _ int64) (string, error)
GetPrimary returns the current primary of a repository. This is an adapter so NodeManager can be used as a praefect.PrimaryGetter in newer code which is written to support repository-specific primaries.
func (*Mgr) GetSyncedNode ¶
func (n *Mgr) GetSyncedNode(ctx context.Context, virtualStorageName, repoPath string) (Node, error)
func (*Mgr) HealthyNodes ¶
func (n *Mgr) HealthyNodes() map[string][]string
type MockManager ¶
MockManager is a helper for tests that implements Manager and allows for parametrizing behavior.
func (*MockManager) HealthyNodes ¶
func (m *MockManager) HealthyNodes() map[string][]string
HealthyNodes returns healthy nodes. This is implemented similar to Nodes() and thus also requires setup of the MockManager's Storage field.
func (*MockManager) Nodes ¶
func (m *MockManager) Nodes() map[string][]Node
Nodes returns nodes contained by the GetShardFunc. Note that this mocking only works if the MockManager was set up with a Storage, as the GetShardFunc will be called with that storage as its parameter.
type MockNode ¶
type MockNode struct {
	Node
	GetStorageMethod func() string
	Conn             *grpc.ClientConn
	Healthy          bool
}
MockNode is a helper for tests that implements Node and allows for parametrizing behavior.
func (*MockNode) GetAddress ¶
func (*MockNode) GetConnection ¶
func (m *MockNode) GetConnection() *grpc.ClientConn
func (*MockNode) GetStorage ¶
type Node ¶
type Node interface {
	GetStorage() string
	GetAddress() string
	GetToken() string
	GetConnection() *grpc.ClientConn
	// IsHealthy reports if the node is healthy and can handle requests.
	// A node is considered healthy if the last 'healthcheckThreshold' checks were positive.
	IsHealthy() bool
	// CheckHealth executes a health check for the node and tracks the last 'healthcheckThreshold' checks for it.
	CheckHealth(context.Context) (bool, error)
}
Node represents some metadata of a node as well as a connection.
type PerRepositoryElector ¶
type PerRepositoryElector struct {
// contains filtered or unexported fields
}
PerRepositoryElector implements an elector that selects a primary for each repository. It elects a healthy node with the most recent generation as the primary. If all nodes are on the same generation, it picks one randomly to balance repositories in a simple fashion.
func NewPerRepositoryElector ¶
func NewPerRepositoryElector(db glsql.Querier) *PerRepositoryElector
NewPerRepositoryElector returns a new per repository primary elector.
func (*PerRepositoryElector) GetPrimary ¶
func (pr *PerRepositoryElector) GetPrimary(ctx context.Context, virtualStorage string, repositoryID int64) (string, error)
GetPrimary returns the primary storage of a repository. If the current primary is invalid, a new primary is elected if there are valid candidates for promotion.
type Ping ¶
type Ping struct {
// contains filtered or unexported fields
}
Ping is used to determine the health of a Gitaly node.
type PingError ¶
type PingError struct {
	// UnhealthyAddresses contains all addresses which could not be reached.
	UnhealthyAddresses []string
}
PingError is an error returned in case pinging a node failed.
type Printer ¶
type Printer interface {
	// Printf prints a message, taking into account whether
	// or not the verbose flag has been set.
	Printf(format string, args ...interface{})
}
Printer is an interface for Ping to print messages.
type Shard ¶
Shard is a primary with a set of secondaries
func (Shard) GetHealthySecondaries ¶
GetHealthySecondaries returns all secondaries of the shard which are currently known to be healthy.
type TextPrinter ¶
type TextPrinter struct {
// contains filtered or unexported fields
}
TextPrinter is a basic printer that writes to a writer.
func NewTextPrinter ¶
func NewTextPrinter(w io.Writer) *TextPrinter
NewTextPrinter creates a new TextPrinter instance
func (*TextPrinter) Printf ¶
func (t *TextPrinter) Printf(format string, args ...interface{})
Printf prints the message and adds a newline