Documentation ¶
Index ¶
- Constants
- func CurrentPods(rcid fields.ID, labeler LabelMatcher) (types.PodLocations, error)
- type AuditLogStore
- type Farm
- type LabelMatcher
- type Labeler
- type RCNodeTransferLocker
- type ReplicationController
- type ReplicationControllerLocker
- type ReplicationControllerStore
- type ReplicationControllerWatcher
- type Scheduler
Constants ¶
const (
// This label is applied to pods owned by an RC.
RCIDLabel = "replication_controller_id"
)
Variables ¶
This section is empty.
Functions ¶
func CurrentPods ¶
func CurrentPods(rcid fields.ID, labeler LabelMatcher) (types.PodLocations, error)
CurrentPods returns all pods managed by an RC with the given ID.
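For illustration, a minimal sketch of calling CurrentPods from another package; the import paths, the rc package alias, and the PodLocation field names (PodID, Node) are assumptions based on the p2 repository layout rather than part of this documentation.

import (
	"fmt"

	"github.com/square/p2/pkg/rc"
	rcfields "github.com/square/p2/pkg/rc/fields"
)

// listRCPods prints the pods currently owned by the RC with the given ID.
// Any value satisfying LabelMatcher (for example a full labels applicator) works here.
func listRCPods(rcID rcfields.ID, labeler rc.LabelMatcher) error {
	pods, err := rc.CurrentPods(rcID, labeler)
	if err != nil {
		return err
	}
	for _, pod := range pods {
		fmt.Printf("pod %s is scheduled on node %s\n", pod.PodID, pod.Node)
	}
	return nil
}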
Types ¶
type AuditLogStore ¶
type Farm ¶
type Farm struct {
// contains filtered or unexported fields
}
The Farm is responsible for spawning and reaping replication controllers as they are added to and deleted from Consul. Multiple farms can exist simultaneously, but each one must hold a different Consul session. This ensures that the farms do not instantiate the same replication controller multiple times.
RC farms take an RC selector that is used to decide whether this farm should pick up a particular RC. This can be used to assist in RC partitioning of work or to create test environments. Note that this is _not_ required for RC farms to cooperatively schedule work.
func NewFarm ¶
func NewFarm(
	store consulStore,
	client consulutil.ConsulClient,
	rcStatusStore rcstatus.ConsulStore,
	auditLogStore AuditLogStore,
	rcs ReplicationControllerStore,
	rcLocker ReplicationControllerLocker,
	rcWatcher ReplicationControllerWatcher,
	txner transaction.Txner,
	healthChecker checker.HealthChecker,
	scheduler Scheduler,
	labeler Labeler,
	sessions <-chan string,
	logger logging.Logger,
	rcSelector klabels.Selector,
	alerter alerting.Alerter,
	rcWatchPauseTime time.Duration,
	artifactRegistry artifact.Registry,
) *Farm
func (*Farm) Start ¶
func (rcf *Farm) Start(quit <-chan struct{})
Start is a blocking function that monitors Consul for replication controllers. The Farm will attempt to claim replication controllers as they appear and, if successful, will start goroutines for those replication controllers to do their job. Closing the quit channel will cause this function to return, releasing all locks it holds.
Start is not safe for concurrent execution. Do not execute multiple concurrent instances of Start.
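A sketch of the intended lifecycle, assuming the farm was already built with NewFarm (its many dependencies are elided) and that this package is imported as rc; the signal wiring shown here is illustrative.

import (
	"os"
	"os/signal"
	"syscall"

	"github.com/square/p2/pkg/rc"
)

// runFarm runs a single Farm until the process is asked to shut down.
// Start blocks, so it runs in its own goroutine; closing quit makes it
// return and release every lock the farm holds.
func runFarm(farm *rc.Farm) {
	quit := make(chan struct{})
	done := make(chan struct{})

	go func() {
		farm.Start(quit)
		close(done)
	}()

	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM)
	<-sigCh

	close(quit)
	<-done
}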
type LabelMatcher ¶
type LabelMatcher interface {
GetMatches(selector klabels.Selector, labelType labels.Type) ([]labels.Labeled, error)
}
LabelMatcher is a subset of Labeler; its smaller surface makes it easier to call CurrentPods() in code where transactions are not available.
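Because the GetMatches signatures are identical, any Labeler also satisfies LabelMatcher; a compile-time assertion (a sketch, assuming this package is imported as rc) makes that explicit:

// Any Labeler can be passed where a LabelMatcher is expected, e.g. to CurrentPods.
var _ rc.LabelMatcher = rc.Labeler(nil)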
type Labeler ¶
type Labeler interface {
	SetLabelsTxn(ctx context.Context, labelType labels.Type, id string, labels map[string]string) error
	RemoveLabelsTxn(ctx context.Context, labelType labels.Type, id string, keysToRemove []string) error
	GetLabels(labelType labels.Type, id string) (labels.Labeled, error)
	GetMatches(selector klabels.Selector, labelType labels.Type) ([]labels.Labeled, error)
}
Labeler is a subset of labels.Applicator.
type RCNodeTransferLocker ¶
type ReplicationController ¶
type ReplicationController interface {
	// WatchDesires causes the replication controller to watch for any changes to its desired state.
	// It is expected that a replication controller is aware of a backing rcstore against which to perform this watch.
	// Upon seeing any changes, the replication controller schedules or unschedules pods to meet the desired state.
	// This spawns a goroutine that performs the watch and returns a channel on which errors are sent.
	// The caller must consume from the error channel; failure to do so blocks the replication controller from meeting desires.
	// Send a struct{} on the quit channel to stop the goroutine. The error channel will be closed in response.
	WatchDesires(quit <-chan struct{}) <-chan error

	// CurrentPods returns all pods managed by this replication controller.
	CurrentPods() (types.PodLocations, error)
}
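A sketch of driving WatchDesires, assuming an already-constructed ReplicationController; draining the error channel is required, and the stop-channel wiring and import path shown here are illustrative.

import (
	"log"

	"github.com/square/p2/pkg/rc"
)

// watchRC starts the RC's desire watch and drains its error channel so the
// controller is never blocked from meeting desires. Closing stop sends the
// quit signal; the RC then closes the error channel and the loop exits.
func watchRC(controller rc.ReplicationController, stop <-chan struct{}) {
	quit := make(chan struct{})
	errCh := controller.WatchDesires(quit)

	go func() {
		<-stop
		quit <- struct{}{}
	}()

	for err := range errCh {
		log.Printf("replication controller error: %v", err)
	}
}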
func New ¶
func New(
	rcID fields.ID,
	consulStore consulStore,
	consulClient consulutil.ConsulClient,
	rcLocker RCNodeTransferLocker,
	rcStatusStore rcstatus.ConsulStore,
	auditLogStore AuditLogStore,
	txner transaction.Txner,
	rcWatcher ReplicationControllerWatcher,
	scheduler Scheduler,
	podApplicator Labeler,
	logger logging.Logger,
	alerter alerting.Alerter,
	healthChecker checker.HealthChecker,
	artifactRegistry artifact.Registry,
) ReplicationController
type Scheduler ¶
type Scheduler interface {
	// EligibleNodes returns the nodes that this RC may schedule the manifest on.
	EligibleNodes(manifest.Manifest, klabels.Selector) ([]types.NodeName, error)

	// AllocateNodes can be called by the RC when it needs more nodes to
	// schedule on than EligibleNodes returns. It will return the newly
	// allocated nodes, which will also appear in subsequent EligibleNodes calls.
	AllocateNodes(manifest manifest.Manifest, nodeSelector klabels.Selector, allocationCount int) ([]types.NodeName, error)

	// DeallocateNodes indicates to the scheduler that the RC has unscheduled
	// the pod from these nodes, meaning the scheduler can free the
	// resource reservations.
	DeallocateNodes(nodeSelector klabels.Selector, nodes []types.NodeName) error
}
A Scheduler decides what nodes are appropriate for a pod to run on. It potentially takes into account considerations such as existing load on the nodes, label selectors, and more.
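To make the contract concrete, here is a hedged sketch of a trivial Scheduler over a fixed node set: every node is always eligible, allocation is refused, and deallocation is a no-op. The import paths (manifest, types, and the vendored Kubernetes labels package) are assumptions based on the p2 repository layout.

import (
	"fmt"

	"github.com/square/p2/pkg/manifest"
	"github.com/square/p2/pkg/types"
	klabels "k8s.io/kubernetes/pkg/labels"
)

// staticScheduler is a hypothetical Scheduler backed by a fixed node list.
type staticScheduler struct {
	nodes []types.NodeName
}

// EligibleNodes ignores the manifest and selector and offers every configured node.
func (s staticScheduler) EligibleNodes(manifest.Manifest, klabels.Selector) ([]types.NodeName, error) {
	return s.nodes, nil
}

// AllocateNodes fails because a static node list cannot grow on demand.
func (s staticScheduler) AllocateNodes(_ manifest.Manifest, _ klabels.Selector, allocationCount int) ([]types.NodeName, error) {
	return nil, fmt.Errorf("static scheduler cannot allocate %d additional nodes", allocationCount)
}

// DeallocateNodes is a no-op: nothing was reserved, so nothing needs to be freed.
func (s staticScheduler) DeallocateNodes(klabels.Selector, []types.NodeName) error {
	return nil
}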