Documentation ¶
Overview ¶
Package balancer implements a custom gRPC load balancer.
Similarly to gRPC's built-in "pick_first" balancer, our balancer will pin the client to a single connection/server. However, it will switch servers as soon as an RPC error occurs (e.g. if the client has exhausted its rate limit on that server). It also provides a method that will be called periodically by the Consul router to randomize the connection priorities to rebalance load.
Our balancer aims to keep exactly one TCP connection (to the current server) open at a time. This is different to gRPC's "round_robin" and "base" balancers, which connect to *all* resolved addresses up-front so that you can quickly cycle between them - something we want to avoid because of the overhead it puts on the servers. It's also slightly different to gRPC's "pick_first" balancer, which will attempt to remain connected to the same server as long as its address is returned by the resolver - we previously had to work around this behavior in order to shuffle the servers, which had some unfortunate side effects as documented in this issue: https://github.com/hernad/consul/issues/10603.
If a server is in a perpetually bad state, the balancer's standard error handling will steer away from it but it will *not* be removed from the set and will remain in a TRANSIENT_FAILURE state to possibly be retried in the future. It is expected that Consul's router will remove servers from the resolver which have been network partitioned etc.
Quick primer on how gRPC's different components work together:
Targets (e.g. consul://.../server.dc1) represent endpoints/collections of hosts. They're what you pass as the first argument to grpc.Dial.
ClientConns represent logical connections to targets. Each ClientConn may have many SubConns (and therefore TCP connections to different hosts).
SubConns represent connections to a single host. They map 1:1 with TCP connections (that's actually a bit of a lie, but true for our purposes).
Resolvers are responsible for turning Targets into sets of addresses (e.g. via DNS resolution) and updating the ClientConn when they change. They map 1:1 with ClientConns. gRPC creates them for a ClientConn using the builder registered for the Target's scheme (i.e. the protocol part of the URL).
Balancers are responsible for turning resolved addresses into SubConns and a Picker. They're called whenever the Resolver updates the ClientConn's state (e.g. with new addresses) or when the SubConns change state.
Like Resolvers, they also map 1:1 with ClientConns and are created using a builder registered with a name that is specified in the "service config".
Pickers are responsible for deciding which SubConn will be used for an RPC.
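To make those relationships concrete, here is a hedged sketch of the client-side flow. The authority "agent-1", the target path, and the service config shape are illustrative assumptions (the real wiring lives elsewhere in Consul), and the sketch presumes a Builder has already been registered for that authority (see NewBuilder below):

package main

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Target: the "consul-internal" scheme selects the resolver builder; the
	// authority ("agent-1") and path identify which servers to resolve.
	target := "consul-internal://agent-1/server.dc1"

	// Service config: naming "consul-internal" as the LB policy makes gRPC
	// build this package's balancer instead of the default "pick_first".
	serviceConfig := `{"loadBalancingConfig": [{"consul-internal": {}}]}`

	conn, err := grpc.Dial(
		target,
		grpc.WithDefaultServiceConfig(serviceConfig),
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// From here: the Resolver turns the target into addresses, the Balancer
	// turns those addresses into SubConns plus a Picker, and the Picker
	// chooses the SubConn used for each RPC issued on conn.
}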
Index ¶
Constants ¶
const BuilderName = "consul-internal"
BuilderName should be given in gRPC service configuration to enable our custom balancer. It refers to this package's global registry, rather than an instance of Builder, so that we can add and remove builders at runtime (specifically, during tests).
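For instance, a dialer can build its service config from the constant rather than hard-coding the name. This is only a sketch: the import path is assumed from the repository layout, and the JSON below is one standard way to name an LB policy in a gRPC service config.

package example

import (
	"fmt"

	// Import path assumed from the repository layout.
	"github.com/hernad/consul/agent/grpc-internal/balancer"
)

// serviceConfig selects this package's balancer by its registered name; pass
// it to grpc.Dial via grpc.WithDefaultServiceConfig.
var serviceConfig = fmt.Sprintf(`{"loadBalancingConfig": [{"%s": {}}]}`, balancer.BuilderName)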
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Builder ¶
type Builder struct {
// contains filtered or unexported fields
}
Builder implements gRPC's balancer.Builder interface to construct balancers.
func NewBuilder ¶
NewBuilder constructs a new Builder. Calling Register will add the Builder to our global registry under the given "authority", such that it will be used when dialing targets of the form "consul-internal://<authority>/...". This allows us to add and remove balancers for different in-memory agents during tests.
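A hedged sketch of that lifecycle as it might look in a test. NewBuilder's parameters are not shown on this page, so the single authority argument below is an assumption, and "agent-1" is an illustrative authority:

func TestWithInMemoryAgent(t *testing.T) {
	// Assumption: NewBuilder takes (at least) the authority to register under.
	b := balancer.NewBuilder("agent-1")

	// After Register, dials to "consul-internal://agent-1/..." targets are
	// handled by this Builder via the package's global registry.
	b.Register()

	// Deregister removes the registry entry so another test can reuse the
	// authority without leaking state.
	t.Cleanup(b.Deregister)

	// ... dial "consul-internal://agent-1/server.dc1" and exercise RPCs ...
}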
func (*Builder) Build ¶
func (b *Builder) Build(cc gbalancer.ClientConn, opts gbalancer.BuildOptions) gbalancer.Balancer
Build is called by gRPC (e.g. on grpc.Dial) to construct a balancer for the given ClientConn.
func (*Builder) Deregister ¶
func (b *Builder) Deregister()
Deregister the Builder from our global registry to clean up state.