leaderelection

package
v1.2.112
Published: Jan 2, 2024 License: MIT Imports: 8 Imported by: 0

Documentation

Overview

Package leaderelection implements leader election of a set of endpoints. It uses an annotation in the endpoints object to store the record of the election state. This implementation does not guarantee that only one client is acting as a leader (a.k.a. fencing).

A client only acts on timestamps captured locally to infer the state of the leader election. The client does not consider timestamps in the leader election record to be accurate because these timestamps may not have been produced by a local clock. The implementation does not depend on their accuracy and only uses their change to indicate that another client has renewed the leader lease. Thus, the implementation is tolerant to arbitrary clock skew, but is not tolerant to arbitrary clock skew rate.

However, the level of tolerance to skew rate can be configured by setting RenewTimeout and LeaseDuration appropriately. The tolerance, expressed as the maximum tolerated ratio of time passed on the fastest node to time passed on the slowest node, can be approximately achieved with a configuration that sets the same ratio of LeaseDuration to RenewTimeout. For example, if a user wanted to tolerate some nodes progressing forward in time twice as fast as other nodes, the user could set LeaseDuration to 60 seconds and RenewTimeout to 30 seconds.
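
As a minimal sketch of that 2x configuration, using the Config type documented below (the Lock and other required fields are omitted here):

	// Tolerate a 2x clock-skew rate between the fastest and slowest node:
	// LeaseDuration / RenewTimeout = 60s / 30s = 2.
	cfg := leaderelection.Config{
		LeaseDuration: 60 * time.Second,
		RenewTimeout:  30 * time.Second,
	}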

While not required, some method of clock synchronization between nodes in the cluster is highly recommended. It's important to keep in mind when configuring this client that the tolerance to skew rate varies inversely to master availability.

Larger clusters often have a more lenient SLA for API latency. This should be taken into account when configuring the client. The rate of leader transitions should be monitored and RetryPeriod and LeaseDuration should be increased until the rate is stable and acceptably low. It's important to keep in mind when configuring this client that the tolerance to API latency varies inversely to master availability.
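
As a rough sketch of how the pieces below fit together, here is a hedged end-to-end example. The import path is illustrative, and NewDummyLock (a locker that always follows) stands in for a real ResourceLocker implementation:

	package main

	import (
		"context"
		"log"
		"time"

		"example.com/yourmodule/leaderelection" // illustrative import path
	)

	func main() {
		ctx, cancel := context.WithCancel(context.Background())
		defer cancel()

		cfg := leaderelection.Config{
			// Stand-in lock for illustration; a real deployment supplies
			// a real ResourceLocker implementation.
			Lock:          leaderelection.NewDummyLock("candidate-1"),
			LeaseDuration: 15 * time.Second, // the core-client defaults noted below
			RenewTimeout:  10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// Do leader-only work here until ctx is cancelled.
					log.Println("started leading")
				},
				OnStoppedLeading: func() {
					log.Println("stopped leading")
				},
				OnNewLeader: func(identity string) {
					log.Printf("observed new leader: %s", identity)
				},
			},
			Name: "example-lock",
		}

		// Run blocks until the loop is stopped by ctx or the client has
		// stopped holding the leader lease.
		if _, err := leaderelection.Run(ctx, cfg); err != nil {
			log.Fatal(err)
		}
	}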

Index

Constants

const (
	JitterFactor        = 1.2
	EventBecameLeader   = "became leader"
	EventStoppedLeading = "stopped leading"
)

Variables

This section is empty.

Functions

func NewDummyLock

func NewDummyLock(identity string) *dummyLock

NewDummyLock returns a locker that always behaves as a follower and never acquires the lock.

Types

type Config

type Config struct {
	// Lock is the resource that will be used for locking
	Lock ResourceLocker

	// LeaseDuration is the duration that non-leader candidates will
	// wait to force acquire leadership. This is measured against time of
	// last observed ack.
	//
	// A client needs to wait a full LeaseDuration without observing a change to
	// the record before it can attempt to take over. When all clients are
	// shutdown and a new set of clients are started with different names against
	// the same leader record, they must wait the full LeaseDuration before
	// attempting to acquire the lease. Thus LeaseDuration should be as short as
	// possible (within your tolerance for clock skew rate) to avoid long waits
	// in this scenario.
	//
	// Core clients default this value to 15 seconds.
	LeaseDuration time.Duration
	// RenewTimeout is the duration that the acting master will retry
	// refreshing leadership before giving up.
	//
	// Core clients default this value to 10 seconds.
	RenewTimeout time.Duration
	// RetryPeriod is the duration the LeaderElector clients should wait
	// between tries of actions.
	//
	// Core clients default this value to 2 seconds.
	RetryPeriod time.Duration

	// Callbacks are callbacks that are triggered during certain lifecycle
	// events of the LeaderElector
	Callbacks LeaderCallbacks

	// ReleaseOnCancel should be set true if the lock should be released
	// when the run context is cancelled. If you set this to true, you must
	// ensure all code guarded by this lease has successfully completed
	// prior to cancelling the context, or you may have two processes
	// simultaneously acting on the critical path.
	ReleaseOnCancel bool

	// Name is the name of the resource lock for debugging
	Name string
}

func (*Config) Complete

func (c *Config) Complete()

func (*Config) New

func (c *Config) New() (*LeaderElector, error)

func (*Config) SetDefaults

func (c *Config) SetDefaults()
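
The intended call order of these helpers is not documented here. One plausible flow, assuming SetDefaults fills in the core-client defaults noted above and Complete finalizes the config before construction, is:

	cfg := leaderelection.Config{Lock: lock, Name: "example-lock"} // lock is any ResourceLocker
	cfg.SetDefaults() // assumed to fill LeaseDuration/RenewTimeout/RetryPeriod defaults
	cfg.Complete()
	le, err := cfg.New()
	if err != nil {
		log.Fatal(err)
	}
	le.Run(ctx)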

type LeaderCallbacks

type LeaderCallbacks struct {
	// OnStartedLeading is called when a LeaderElector client starts leading
	OnStartedLeading func(context.Context)
	// OnStoppedLeading is called when a LeaderElector client stops leading
	OnStoppedLeading func()
	// OnNewLeader is called when the client observes a leader that is
	// not the previously observed leader. This includes the first observed
	// leader when the client starts.
	OnNewLeader func(identity string)
}

LeaderCallbacks are callbacks that are triggered during certain lifecycle events of the LeaderElector. These are invoked asynchronously.

possible future callbacks:

  • OnChallenge()

type LeaderElector

type LeaderElector struct {
	// ErrorLog specifies an optional logger for errors encountered by the
	// leader election client.
	// If nil, logging is done via the log package's standard logger.
	ErrorLog *log.Logger
	// contains filtered or unexported fields
}

LeaderElector is a leader election client.

func NewLeaderElector

func NewLeaderElector(lec Config) (*LeaderElector, error)

NewLeaderElector creates a LeaderElector from a Config

func Run

func Run(ctx context.Context, lec Config) (*LeaderElector, error)

Run starts a client with the provided config, or returns an error if the config fails to validate. Run blocks until the leader election loop is stopped by ctx or the client has stopped holding the leader lease.

func (*LeaderElector) Check

func (le *LeaderElector) Check(maxTolerableExpiredLease time.Duration) error

Check will determine if the current lease is expired by more than maxTolerableExpiredLease.
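
In the analogous Kubernetes client, Check is typically wired into a liveness endpoint. A hedged sketch, where le is a *LeaderElector obtained as above and the 20-second tolerance is an arbitrary choice:

	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		// Fail the probe if the lease we believe we hold has been
		// expired for longer than the tolerated window.
		if err := le.Check(20 * time.Second); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusOK)
	})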

func (*LeaderElector) GetLeader

func (le *LeaderElector) GetLeader() string

GetLeader returns the identity of the last observed leader, or the empty string if no leader has yet been observed. This function is for informational purposes only (e.g. monitoring, logs, etc.).

func (*LeaderElector) IsLeader

func (le *LeaderElector) IsLeader() bool

IsLeader returns true if the last observed leader was this client, and false otherwise.

func (*LeaderElector) Run

func (le *LeaderElector) Run(ctx context.Context)

Run starts the leader election loop. Run will not return before the leader election loop is stopped by ctx or the client has stopped holding the leader lease.

type Record

type Record struct {
	// HolderIdentity is the ID that owns the lease. If empty, no one owns this lease and
	// all callers may acquire.
	// This value is set to empty when a client voluntarily steps down.
	HolderIdentity string
	// LeaseDuration is the duration that non-leader candidates will
	// wait to force acquire leadership. This is measured against time of
	// last observed ack.
	LeaseDuration     time.Duration
	AcquireTime       time.Time // when the current leader acquired the lease
	RenewTime         time.Time // when the lease was most recently renewed
	LeaderTransitions int       // incremented each time the lease changes hands
}

Record is the record that is stored in the leader election annotation. This information should be used for observational purposes only and could be replaced with a random string (e.g. UUID) with only slight modification of this code.

type ResourceLocker

type ResourceLocker interface {
	// Get returns the current Record and its raw serialized form,
	// reflecting the lock's state.
	Get(ctx context.Context) (record *Record, rawRecord []byte, err error)

	// Create attempts to create a Record, acquiring the lock.
	// It returns an error if the lock cannot be acquired.
	Create(ctx context.Context, ler Record) error

	// Update updates an existing Record. It is used both to renew the
	// lease (lock) and to release it (unlock), and returns an error if
	// the update fails.
	Update(ctx context.Context, ler Record) error

	// RecordEvent is used to record events
	RecordEvent(name, event string)

	// Identity returns the lock's identity.
	Identity() string

	// Describe returns a human-readable description of the current
	// resource lock.
	Describe() string
}

ResourceLocker offers a common interface for locking on arbitrary resources used in leader election. The ResourceLocker is used to hide the details of specific implementations in order to allow them to change over time. This interface is strictly for use by the leaderelection code.
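
To make the contract concrete, here is a minimal in-memory sketch of the interface, written as if it lived inside the leaderelection package. It is illustrative only: an in-process mutex defeats the purpose of distributed election, but it shows what each method is expected to do. The JSON encoding for the raw record is an assumption; real implementations choose their own serialization.

	import (
		"context"
		"encoding/json"
		"errors"
		"log"
		"sync"
	)

	type inMemoryLock struct {
		mu       sync.Mutex
		identity string
		record   *Record
	}

	// Get returns the current record and its JSON form; an error signals
	// that the record has not yet been created.
	func (l *inMemoryLock) Get(ctx context.Context) (*Record, []byte, error) {
		l.mu.Lock()
		defer l.mu.Unlock()
		if l.record == nil {
			return nil, nil, errors.New("record not found")
		}
		raw, err := json.Marshal(l.record)
		return l.record, raw, err
	}

	// Create acquires the lock by creating the record; it fails if the
	// record already exists.
	func (l *inMemoryLock) Create(ctx context.Context, ler Record) error {
		l.mu.Lock()
		defer l.mu.Unlock()
		if l.record != nil {
			return errors.New("record already exists")
		}
		l.record = &ler
		return nil
	}

	// Update renews or releases the lease by overwriting the existing record.
	func (l *inMemoryLock) Update(ctx context.Context, ler Record) error {
		l.mu.Lock()
		defer l.mu.Unlock()
		if l.record == nil {
			return errors.New("record does not exist")
		}
		l.record = &ler
		return nil
	}

	// RecordEvent just logs; real implementations may emit events elsewhere.
	func (l *inMemoryLock) RecordEvent(name, event string) {
		log.Printf("%s: %s", name, event)
	}

	func (l *inMemoryLock) Identity() string { return l.identity }

	func (l *inMemoryLock) Describe() string { return "in-memory/" + l.identity }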
