Documentation ¶
Index ¶
Constants ¶
const (
	// MetropolisControlAddress is the address of the current Metropolis leader as
	// accepted by the Resolver. Dialing a gRPC channel to this address while the
	// Resolver is used will open the channel to the current leader of the
	// Metropolis control plane.
	MetropolisControlAddress = "metropolis:///control"
)
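For illustration, a minimal sketch of dialing through this address. The import path and the way credentials are obtained are placeholders, not part of this package:

import (
	"context"

	"google.golang.org/grpc"

	resolver "example.com/metropolis/node/core/rpc/resolver" // placeholder import path; adjust to where this package lives
)

// dialControlPlane opens a gRPC channel to whichever node currently leads the
// control plane. creds carries the transport credentials that the resolver
// will also reuse for its own calls.
func dialControlPlane(ctx context.Context, r *resolver.Resolver, creds grpc.DialOption) (*grpc.ClientConn, error) {
	// The magic target is only meaningful when this Resolver is passed in via
	// grpc.WithResolvers; otherwise gRPC has no builder for the "metropolis" scheme.
	return grpc.DialContext(ctx, resolver.MetropolisControlAddress,
		grpc.WithResolvers(r),
		creds,
	)
}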
Variables ¶
var (
	// ErrResolverClosed will be returned by the resolver to gRPC machinery whenever a
	// resolver cannot be used anymore because it was Closed.
	ErrResolverClosed = errors.New("cluster resolver closed")
)
Functions ¶
This section is empty.
Types ¶
type NodeEndpoint ¶
type NodeEndpoint struct {
// contains filtered or unexported fields
}
NodeEndpoint is the gRPC endpoint (host+port) of a Metropolis control plane node.
func NodeAtAddressWithDefaultPort ¶
func NodeAtAddressWithDefaultPort(host string) *NodeEndpoint
NodeAtAddressWithDefaultPort returns a NodeEndpoint referencing the default control plane port (the Curator port) of a node at a given address.
func NodeByHostPort ¶
func NodeByHostPort(host string, port uint16) *NodeEndpoint
NodeByHostPort returns a NodeEndpoint for a fully specified host + port pair. The host can either be a hostname or an IP address.
func NodeWithDefaultPort ¶
func NodeWithDefaultPort(id string) (*NodeEndpoint, error)
NodeWithDefaultPort returns a NodeEndpoint referencing the default control plane port (the Curator port) of a node resolved by its ID over DNS. This is the easiest way to construct a NodeEndpoint provided DNS is fully set up.
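A short sketch of all three constructors; the package is assumed to be imported as resolver (as in the sketch above), and every address, port and ID below is a made-up placeholder:

func exampleEndpoints() ([]*resolver.NodeEndpoint, error) {
	// An IP address (or hostname) with the default Curator port.
	byAddress := resolver.NodeAtAddressWithDefaultPort("192.0.2.10")
	// A fully specified host and port pair.
	byHostPort := resolver.NodeByHostPort("curator-0.cluster.example.com", 7835)
	// A node ID, resolved over DNS; this can fail, hence the error return.
	byID, err := resolver.NodeWithDefaultPort("metropolis-0123456789abcdef")
	if err != nil {
		return nil, err
	}
	return []*resolver.NodeEndpoint{byAddress, byHostPort, byID}, nil
}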
type Resolver ¶
type Resolver struct {
// contains filtered or unexported fields
}
Resolver is a gRPC resolver Builder that can be passed to grpc.WithResolvers() when dialing a gRPC endpoint.
It's responsible for resolving the magic MetropolisControlAddress (metropolis:///control) into an address of the node that is currently the leader of the cluster's control plane.
To function, the Resolver needs to be provided with at least one control plane node address. It will use these seed addresses to find the node that is currently the leader of the control plane.
Then, having established communication with the leader, it will continuously update an internal set of control plane node endpoints (the curator map); these are the endpoints it will contact about the state of the leadership when the current leader fails over.
The resolver will wait for a first gRPC connection established through it to extract the transport credentials used, then use these credentials to call the Curator and CuratorLocal services on control plane nodes to perform its logic.
This resolver is designed to be used as a long-running object which multiple gRPC client connections can share. Usually one Resolver instance should be used per application.
[Diagram: inside the Resolver, a Leader Updater contacts the cluster's curators (followers and leader) to find the current leader, a Curator Updater asks that leader for the current set of curators, a Processor maintains the resulting curator map, and Watchers consume it to update the gRPC channels opened through the resolver.]
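A minimal sketch of that intended lifecycle, under the same placeholder assumptions as the sketches above (import path, credentials and the seed address are illustrative only):

func run(ctx context.Context, creds grpc.DialOption) error {
	// One long-lived Resolver per application; its background machinery runs
	// until ctx is canceled.
	r := resolver.New(ctx)

	// Seed it with at least one control plane node so it can find the current
	// leader. The address below is a placeholder.
	r.AddEndpoint(resolver.NodeAtAddressWithDefaultPort("192.0.2.10"))

	// Any number of client connections can share the same Resolver. Each one
	// dials the magic control address and is kept pointed at the current
	// control plane leader.
	conn, err := grpc.DialContext(ctx, resolver.MetropolisControlAddress,
		grpc.WithResolvers(r),
		creds,
	)
	if err != nil {
		return err
	}
	defer conn.Close()

	// ... build Curator / management gRPC clients on top of conn ...
	return nil
}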
func New ¶
func New(ctx context.Context, opts ...ResolverOption) *Resolver
New starts a new Resolver, ready to be passed to gRPC via grpc.WithResolvers(). However, it needs to be populated with at least one endpoint first (via AddEndpoint).
func (*Resolver) AddEndpoint ¶
func (r *Resolver) AddEndpoint(endpoint *NodeEndpoint)
AddEndpoint tells the resolver that it should attempt to reach the cluster through a node available at the given NodeEndpoint.
The resolver will use this endpoint the next time it attempts to find the control plane leader, but it may later be superseded once the resolver retrieves the current set of curators from that leader.
func (*Resolver) AddOverride ¶
func (r *Resolver) AddOverride(id string, ep *NodeEndpoint)
AddOverride adds a long-lived override which forces the resolver to assume that a given node (by ID) is available at the given endpoint, instead of at whatever endpoint is reported by the cluster. This should be used sparingly outside the cluster, and is mostly designed so that nodes which connect to themselves can do so over the loopback address instead of their (possibly changing) external address.
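As an illustrative sketch of the loopback case, assuming the node knows its own ID and Curator port (the helper name newLocalResolver, the ID format and the port are all hypothetical):

// newLocalResolver is a hypothetical helper for a node that serves the control
// plane itself: it forces connections to its own node ID to go over loopback
// instead of the node's (possibly changing) external address.
func newLocalResolver(ctx context.Context, nodeID string, curatorPort uint16) *resolver.Resolver {
	r := resolver.New(ctx)
	r.AddOverride(nodeID, resolver.NodeByHostPort("127.0.0.1", curatorPort))
	return r
}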
func (*Resolver) Build ¶
func (r *Resolver) Build(target resolver.Target, cc resolver.ClientConn, opts resolver.BuildOptions) (resolver.Resolver, error)
Build is called by gRPC on each Dial call. It spawns a new clientWatcher, whose goroutine receives information about currently available nodes from the parent Resolver and updates the given gRPC client connection with the current set of nodes.
type ResolverOption ¶
type ResolverOption func(r *Resolver)
ResolverOption are passed to a Resolver being created.
func WithLogger ¶
func WithLogger(logger logging.Leveled) ResolverOption
WithLogger sets the logger that the resolver will use. If not configured, the resolver will silently block on errors!
func WithoutCuratorUpdater ¶
func WithoutCuratorUpdater() ResolverOption
WithoutCuratorUpdater configures the Resolver to not attempt to update curators from the cluster. This is useful in one-shot resolvers, e.g. unauthenticated ones.
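A sketch combining both options for one-shot use; ctx and logger are assumed to already exist in the surrounding code, logger being any logging.Leveled implementation:

// ctx and logger are assumed to be provided by the surrounding code.
r := resolver.New(ctx,
	resolver.WithLogger(logger),      // log progress and errors instead of blocking silently on them
	resolver.WithoutCuratorUpdater(), // one-shot use: don't keep refreshing the curator map
)
// r can now be used for a single dial via grpc.WithResolvers(r).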