Documentation ¶
Overview ¶
Package gameservers handles management of the GameServer Custom Resource Definition.
Index ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Controller ¶
type Controller struct {
// contains filtered or unexported fields
}
Controller is the main GameServer CRD controller
func NewController ¶
func NewController(
    wh *webhooks.WebHook,
    health healthcheck.Handler,
    minPort, maxPort int32,
    sidecarImage string,
    alwaysPullSidecarImage bool,
    sidecarCPURequest resource.Quantity,
    sidecarCPULimit resource.Quantity,
    sidecarMemoryRequest resource.Quantity,
    sidecarMemoryLimit resource.Quantity,
    sdkServiceAccount string,
    kubeClient kubernetes.Interface,
    kubeInformerFactory informers.SharedInformerFactory,
    extClient extclientset.Interface,
    agonesClient versioned.Interface,
    agonesInformerFactory externalversions.SharedInformerFactory) *Controller
NewController returns a new GameServer CRD controller
func (*Controller) Run ¶
func (c *Controller) Run(workers int, stop <-chan struct{}) error
Run the GameServer controller. Will block until stop is closed. Starts the given number of workers to process the rate limited queue
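The following is a minimal wiring sketch showing how the controller could be constructed and started. The import paths, the in-cluster client construction, the 30 second resync period, the port range and the sidecar resource values are assumptions for illustration only, and the WebHook construction is omitted since its constructor is not part of this package.

package main

import (
    "time"

    "agones.dev/agones/pkg/client/clientset/versioned"
    "agones.dev/agones/pkg/client/informers/externalversions"
    "agones.dev/agones/pkg/gameservers"
    "agones.dev/agones/pkg/util/webhooks"
    "github.com/heptiolabs/healthcheck"
    extclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    "k8s.io/apimachinery/pkg/api/resource"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    // In-cluster configuration; clients could equally be built from a kubeconfig.
    config, err := rest.InClusterConfig()
    if err != nil {
        panic(err)
    }
    kubeClient := kubernetes.NewForConfigOrDie(config)
    extClient := extclientset.NewForConfigOrDie(config)
    agonesClient := versioned.NewForConfigOrDie(config)

    // 30 second resync is an assumed value for this sketch.
    kubeInformerFactory := informers.NewSharedInformerFactory(kubeClient, 30*time.Second)
    agonesInformerFactory := externalversions.NewSharedInformerFactory(agonesClient, 30*time.Second)

    health := healthcheck.NewHandler()

    // A real controller needs a properly constructed WebHook; its constructor
    // lives in agones.dev/agones/pkg/util/webhooks and is omitted here.
    var wh *webhooks.WebHook

    c := gameservers.NewController(wh, health,
        7000, 8000, // minPort, maxPort (example values)
        "example.com/agones/agones-sdk:latest", false, // sidecarImage, alwaysPullSidecarImage (example values)
        resource.MustParse("30m"), resource.MustParse("0"), // sidecar CPU request/limit (example values)
        resource.MustParse("0"), resource.MustParse("0"), // sidecar memory request/limit (example values)
        "agones-sdk", // sdkServiceAccount (example value)
        kubeClient, kubeInformerFactory, extClient, agonesClient, agonesInformerFactory)

    stop := make(chan struct{})
    defer close(stop)

    // The shared informer factories must be started so the controller's caches can sync.
    kubeInformerFactory.Start(stop)
    agonesInformerFactory.Start(stop)

    // Run blocks until stop is closed; 2 workers is an arbitrary choice.
    if err := c.Run(2, stop); err != nil {
        panic(err)
    }
}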
type HealthController ¶
type HealthController struct {
// contains filtered or unexported fields
}
HealthController watches Pods, and applies an Unhealthy state if certain Pods crash, can't be assigned a port, or hit other similar conditions.
func NewHealthController ¶
func NewHealthController(health healthcheck.Handler, kubeClient kubernetes.Interface, agonesClient versioned.Interface, kubeInformerFactory informers.SharedInformerFactory, agonesInformerFactory externalversions.SharedInformerFactory) *HealthController
NewHealthController returns a HealthController
func (*HealthController) Run ¶
func (hc *HealthController) Run(stop <-chan struct{}) error
Run processes the rate limited queue. Will block until stop is closed
type MigrationController ¶ added in v1.3.0
type MigrationController struct {
// contains filtered or unexported fields
}
MigrationController watches for cases where a Pod is migrated or a maintenance event happens on a node, and the Pod is recreated with a new Address for a GameServer
func NewMigrationController ¶ added in v1.3.0
func NewMigrationController(health healthcheck.Handler, kubeClient kubernetes.Interface, agonesClient versioned.Interface, kubeInformerFactory informers.SharedInformerFactory, agonesInformerFactory externalversions.SharedInformerFactory) *MigrationController
NewMigrationController returns a MigrationController
func (*MigrationController) Run ¶ added in v1.3.0
func (mc *MigrationController) Run(stop <-chan struct{}) error
Run processes the rate limited queue. Will block until stop is closed
type MissingPodController ¶ added in v1.4.0
type MissingPodController struct {
// contains filtered or unexported fields
}
MissingPodController makes sure that any GameServer that isn't in a Scheduled or Unhealthy state and is missing a Pod is moved to Unhealthy.
It's possible for a GameServer to be missing its associated Pod due to unexpected controller downtime, or because the Pod was deleted with no subsequent Delete event being received.
Since resync on the controller is every 30 seconds, even if there is some time in which a GameServer is in a broken state, it will eventually move to Unhealthy, and get replaced (if in a Fleet).
func NewMissingPodController ¶ added in v1.4.0
func NewMissingPodController(health healthcheck.Handler, kubeClient kubernetes.Interface, agonesClient versioned.Interface, kubeInformerFactory informers.SharedInformerFactory, agonesInformerFactory externalversions.SharedInformerFactory) *MissingPodController
NewMissingPodController returns a MissingPodController
func (*MissingPodController) Run ¶ added in v1.4.0
func (c *MissingPodController) Run(stop <-chan struct{}) error
Run processes the rate limited queue. Will block until stop is closed
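HealthController, MigrationController and MissingPodController share the same constructor shape and the same blocking Run(stop) pattern. The sketch below is one way they could be started together; the runSupportControllers name is hypothetical, the arguments are assumed to be built as in the Controller sketch above, and running each controller in its own goroutine is an illustrative choice rather than a requirement. Imports are the same as in the earlier sketch.

// runSupportControllers is a hypothetical helper; all arguments are assumed to
// be constructed as in the Controller sketch above, and the shared informer
// factories are assumed to have been started already.
func runSupportControllers(
    health healthcheck.Handler,
    kubeClient kubernetes.Interface,
    agonesClient versioned.Interface,
    kubeInformerFactory informers.SharedInformerFactory,
    agonesInformerFactory externalversions.SharedInformerFactory,
    stop <-chan struct{},
) {
    hc := gameservers.NewHealthController(health, kubeClient, agonesClient, kubeInformerFactory, agonesInformerFactory)
    mc := gameservers.NewMigrationController(health, kubeClient, agonesClient, kubeInformerFactory, agonesInformerFactory)
    mpc := gameservers.NewMissingPodController(health, kubeClient, agonesClient, kubeInformerFactory, agonesInformerFactory)

    // Each Run blocks until stop is closed, so each controller gets its own goroutine.
    go func() { _ = hc.Run(stop) }()
    go func() { _ = mc.Run(stop) }()
    go func() { _ = mpc.Run(stop) }()
}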
type NodeCount ¶ added in v0.9.0
type NodeCount struct {
    // Ready is the count of Ready GameServers on the node
    Ready int64
    // Allocated is the count of Allocated GameServers on the node
    Allocated int64
}
NodeCount is just a convenience data structure for keeping relevant GameServer counts about Nodes
type PerNodeCounter ¶ added in v0.9.0
type PerNodeCounter struct {
// contains filtered or unexported fields
}
PerNodeCounter counts how many Allocated and Ready GameServers currently exist on each node. This is useful for scheduling allocations and fleet management, mostly under a Packed strategy
func NewPerNodeCounter ¶ added in v0.9.0
func NewPerNodeCounter(kubeInformerFactory informers.SharedInformerFactory, agonesInformerFactory externalversions.SharedInformerFactory) *PerNodeCounter
NewPerNodeCounter returns a new PerNodeCounter
func (*PerNodeCounter) Counts ¶ added in v0.9.0
func (pnc *PerNodeCounter) Counts() map[string]NodeCount
Counts returns the NodeCount map in a thread safe way
func (*PerNodeCounter) Run ¶ added in v0.9.0
func (pnc *PerNodeCounter) Run(_ int, stop <-chan struct{}) error
Run sets up the current state of GameServer counts across nodes. This is a non-blocking Run function.
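As a small illustration of how these counts might be consumed, the sketch below builds a PerNodeCounter, primes it with Run (which is non-blocking) and prints the per-node totals. The printNodeCounts name is hypothetical, the informer factories are assumed to be built and started as in the Controller sketch above, and fmt is the only additional import.

// printNodeCounts is a hypothetical helper; the informer factories are assumed
// to be built and started as in the Controller sketch above.
func printNodeCounts(
    kubeInformerFactory informers.SharedInformerFactory,
    agonesInformerFactory externalversions.SharedInformerFactory,
    stop <-chan struct{},
) error {
    pnc := gameservers.NewPerNodeCounter(kubeInformerFactory, agonesInformerFactory)

    // Run is non-blocking; the first (worker count) argument is ignored.
    if err := pnc.Run(0, stop); err != nil {
        return err
    }

    // Counts returns a thread safe snapshot of per node GameServer counts.
    for node, count := range pnc.Counts() {
        fmt.Printf("node %s: ready=%d allocated=%d\n", node, count.Ready, count.Allocated)
    }
    return nil
}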
type PortAllocator ¶
type PortAllocator struct {
// contains filtered or unexported fields
}
PortAllocator manages the dynamic port allocation strategy. Only use exposed methods to ensure appropriate locking is taken. The PortAllocator does not currently support mixing static portAllocations (or any pods with defined HostPort) within the dynamic port range other than the ones it coordinates.
func NewPortAllocator ¶
func NewPortAllocator(minPort, maxPort int32, kubeInformerFactory informers.SharedInformerFactory, agonesInformerFactory externalversions.SharedInformerFactory) *PortAllocator
NewPortAllocator returns a new dynamic port allocator. minPort and maxPort are the bottom and top of the range of ports that can be allocated to game servers
func (*PortAllocator) Allocate ¶
func (pa *PortAllocator) Allocate(gs *agonesv1.GameServer) *agonesv1.GameServer
Allocate assigns a port to the GameServer and returns it. Returns ErrPortNotFound if no port is allocatable
func (*PortAllocator) DeAllocate ¶
func (pa *PortAllocator) DeAllocate(gs *agonesv1.GameServer)
DeAllocate marks the given port as no longer allocated
func (*PortAllocator) Run ¶
func (pa *PortAllocator) Run(stop <-chan struct{}) error
Run sets up the current state of port allocations and starts tracking Pod and Node changes
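To show the intended call pattern end to end, here is a minimal sketch: build the allocator over the dynamic range, let Run prime it from the current cluster state, allocate a port for a GameServer, and release it again. The allocateExample name, the port range and the GameServer spec literal (a single Dynamic UDP port, with field names taken from the agonesv1 types as an assumption) are illustrative only; in practice the GameServer would come from the informer cache rather than being constructed by hand.

// allocateExample is a hypothetical helper. Assumed import aliases:
//   agonesv1 "agones.dev/agones/pkg/apis/agones/v1"
//   corev1   "k8s.io/api/core/v1"
// The informer factories are assumed to be built and started as in the
// Controller sketch above.
func allocateExample(
    kubeInformerFactory informers.SharedInformerFactory,
    agonesInformerFactory externalversions.SharedInformerFactory,
    stop <-chan struct{},
) error {
    // Use the same range as the controller's minPort/maxPort.
    pa := gameservers.NewPortAllocator(7000, 8000, kubeInformerFactory, agonesInformerFactory)

    // Run primes the allocator with the current state of port allocations
    // and starts tracking Pod and Node changes.
    if err := pa.Run(stop); err != nil {
        return err
    }

    // A GameServer requesting a single Dynamic port (field names assumed).
    gs := &agonesv1.GameServer{
        Spec: agonesv1.GameServerSpec{
            Ports: []agonesv1.GameServerPort{{
                Name:          "default",
                PortPolicy:    agonesv1.Dynamic,
                ContainerPort: 7654,
                Protocol:      corev1.ProtocolUDP,
            }},
        },
    }

    // Allocate assigns host ports from the dynamic range and returns the GameServer.
    gs = pa.Allocate(gs)

    // When the GameServer goes away, release its ports back to the pool.
    pa.DeAllocate(gs)
    return nil
}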