Documentation ¶
Overview ¶
Package replication contains logic for watching and synchronizing replication controllers.
Index ¶
- Constants
- func GetCondition(status api.ReplicationControllerStatus, ...) *api.ReplicationControllerCondition
- func NewReplicationControllerCondition(condType api.ReplicationControllerConditionType, status api.ConditionStatus, ...) api.ReplicationControllerCondition
- func RemoveCondition(status *api.ReplicationControllerStatus, ...)
- func SetCondition(status *api.ReplicationControllerStatus, ...)
- type OverlappingControllers
- type ReplicationManager
- func NewReplicationManager(podInformer cache.SharedIndexInformer, kubeClient clientset.Interface, ...) *ReplicationManager
- func NewReplicationManagerFromClient(kubeClient clientset.Interface, resyncPeriod controller.ResyncPeriodFunc, ...) *ReplicationManager
- func NewReplicationManagerFromClientForIntegration(kubeClient clientset.Interface, resyncPeriod controller.ResyncPeriodFunc, ...) *ReplicationManager
Constants ¶
const (
	// We'll attempt to recompute the required replicas of all replication controllers
	// that have fulfilled their expectations at least this often. This recomputation
	// happens based on contents in local pod storage.
	// Full resync shouldn't be needed at all in a healthy system. This is a protection
	// against disappearing objects and watch notifications, which we believe should not
	// happen at all.
	// TODO: We should get rid of it completely in the fullness of time.
	FullControllerResyncPeriod = 10 * time.Minute

	// Realistic value of the burstReplica field for the replication manager based off
	// performance requirements for kubernetes 1.0.
	BurstReplicas = 500

	// We must avoid counting pods until the pod store has synced. If it hasn't synced, to
	// avoid a hot loop, we'll wait this long between checks.
	PodStoreSyncedPollPeriod = 100 * time.Millisecond
)
Variables ¶
This section is empty.
Functions ¶
func GetCondition ¶
func GetCondition(status api.ReplicationControllerStatus, condType api.ReplicationControllerConditionType) *api.ReplicationControllerCondition
GetCondition returns a replication controller condition with the provided type if it exists.
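A hedged sketch of looking up a condition on a controller's status. The ReplicaFailure condition type and the import paths are assumptions tied to the Kubernetes release this package ships with.

package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/controller/replication"
)

func main() {
	status := api.ReplicationControllerStatus{
		Conditions: []api.ReplicationControllerCondition{
			{Type: api.ReplicationControllerReplicaFailure, Status: api.ConditionTrue},
		},
	}

	// Returns the matching condition, or nil when no condition of that type is set.
	if cond := replication.GetCondition(status, api.ReplicationControllerReplicaFailure); cond != nil {
		fmt.Println("replica failure condition is", cond.Status)
	}
}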
func NewReplicationControllerCondition ¶
func NewReplicationControllerCondition(condType api.ReplicationControllerConditionType, status api.ConditionStatus, reason, msg string) api.ReplicationControllerCondition
NewReplicationControllerCondition creates a new replication controller condition.
func RemoveCondition ¶
func RemoveCondition(status *api.ReplicationControllerStatus, condType api.ReplicationControllerConditionType)
RemoveCondition removes the condition with the provided type from the replication controller status.
func SetCondition ¶
func SetCondition(status *api.ReplicationControllerStatus, condition api.ReplicationControllerCondition)
SetCondition adds/replaces the given condition in the replication controller status.
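Taken together, the condition helpers follow a build, set, clear pattern. A minimal sketch, assuming the ReplicaFailure condition type from the api package; the reason and message strings are illustrative.

package main

import (
	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/controller/replication"
)

func main() {
	var rc api.ReplicationController

	// Build and record a failure condition. SetCondition replaces any existing
	// condition of the same type, so calling it on every sync is safe.
	cond := replication.NewReplicationControllerCondition(
		api.ReplicationControllerReplicaFailure, api.ConditionTrue,
		"FailedCreate", "pod creation was rejected")
	replication.SetCondition(&rc.Status, cond)

	// Once the underlying problem clears, drop the condition again.
	replication.RemoveCondition(&rc.Status, api.ReplicationControllerReplicaFailure)
}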
Types ¶
type OverlappingControllers ¶ added in v1.2.0
type OverlappingControllers []*api.ReplicationController
OverlappingControllers sorts a list of controllers by creation timestamp, using their names as a tie breaker.
func (OverlappingControllers) Len ¶ added in v1.2.0
func (o OverlappingControllers) Len() int
func (OverlappingControllers) Less ¶ added in v1.2.0
func (o OverlappingControllers) Less(i, j int) bool
func (OverlappingControllers) Swap ¶ added in v1.2.0
func (o OverlappingControllers) Swap(i, j int)
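A hedged sketch of using this ordering to pick the controller that should win when selectors overlap; the oldestController helper is illustrative, not part of this package.

package main

import (
	"fmt"
	"sort"

	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/controller/replication"
)

// oldestController returns the controller created first; controllers with equal
// creation timestamps are ordered by name, per the Less implementation above.
func oldestController(rcs []*api.ReplicationController) *api.ReplicationController {
	if len(rcs) == 0 {
		return nil
	}
	sort.Sort(replication.OverlappingControllers(rcs))
	return rcs[0]
}

func main() {
	a := &api.ReplicationController{ObjectMeta: api.ObjectMeta{Name: "frontend-a"}}
	b := &api.ReplicationController{ObjectMeta: api.ObjectMeta{Name: "frontend-b"}}

	// Both have the zero creation timestamp, so the name breaks the tie.
	fmt.Println(oldestController([]*api.ReplicationController{b, a}).ObjectMeta.Name) // frontend-a
}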
type ReplicationManager ¶
type ReplicationManager struct {
// contains filtered or unexported fields
}
ReplicationManager is responsible for synchronizing ReplicationController objects stored in the system with actual running pods. TODO: this really should be called ReplicationController. The only reason it is called a Manager is to distinguish the type from the API object "ReplicationController". We should fix this.
func NewReplicationManager ¶
func NewReplicationManager(podInformer cache.SharedIndexInformer, kubeClient clientset.Interface, resyncPeriod controller.ResyncPeriodFunc, burstReplicas int, lookupCacheSize int, garbageCollectorEnabled bool) *ReplicationManager
NewReplicationManager creates a new ReplicationManager.
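A hedged fragment showing how the arguments line up; podInformer and kubeClient are placeholders the caller is assumed to have built elsewhere (their construction is omitted and version-dependent).

	rm := replication.NewReplicationManager(
		podInformer, // cache.SharedIndexInformer for pods, built and shared by the caller
		kubeClient,  // clientset.Interface
		func() time.Duration { return 12 * time.Hour }, // resyncPeriod (controller.ResyncPeriodFunc)
		replication.BurstReplicas, // 500, see Constants
		4096,                      // lookupCacheSize; illustrative value
		false,                     // garbageCollectorEnabled
	)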
func NewReplicationManagerFromClient ¶ added in v1.3.0
func NewReplicationManagerFromClient(kubeClient clientset.Interface, resyncPeriod controller.ResyncPeriodFunc, burstReplicas int, lookupCacheSize int) *ReplicationManager
NewReplicationManagerFromClient creates a new ReplicationManager that runs its own informer.
func NewReplicationManagerFromClientForIntegration ¶ added in v1.3.0
func NewReplicationManagerFromClientForIntegration(kubeClient clientset.Interface, resyncPeriod controller.ResyncPeriodFunc, burstReplicas int, lookupCacheSize int) *ReplicationManager
NewReplicationManagerFromClientForIntegration creates a new ReplicationManager that runs its own informer. It disables event recording for use in integration tests.
func (*ReplicationManager) Run ¶
func (rm *ReplicationManager) Run(workers int, stopCh <-chan struct{})
Run begins watching and syncing.
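Putting the client-only constructor and Run together, a minimal sketch. The restclient and internalclientset import paths are assumptions tied to the Kubernetes release this package ships with, and the host, worker count, and cache size are illustrative.

package main

import (
	"time"

	"k8s.io/kubernetes/pkg/client/clientset_generated/internalclientset"
	"k8s.io/kubernetes/pkg/client/restclient"
	"k8s.io/kubernetes/pkg/controller/replication"
)

func main() {
	// Talk to a locally running, unsecured apiserver (illustrative address).
	kubeClient, err := internalclientset.NewForConfig(&restclient.Config{Host: "http://127.0.0.1:8080"})
	if err != nil {
		panic(err)
	}

	rm := replication.NewReplicationManagerFromClient(
		kubeClient,
		func() time.Duration { return 12 * time.Hour }, // resyncPeriod
		replication.BurstReplicas,                      // 500, see Constants
		4096,                                           // lookupCacheSize; illustrative
	)

	// Run blocks until stopCh is closed, so it is started on its own goroutine.
	stopCh := make(chan struct{})
	go rm.Run(5, stopCh) // 5 concurrent sync workers

	time.Sleep(time.Minute) // stand-in for the program's real lifetime
	close(stopCh)
}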
func (*ReplicationManager) SetEventRecorder ¶
func (rm *ReplicationManager) SetEventRecorder(recorder record.EventRecorder)
SetEventRecorder replaces the event recorder used by the replication manager with the given recorder. Only used for testing.