aws

package v0.0.0-...-6b2d771

Published: Dec 11, 2017 License: Apache-2.0 Imports: 23 Imported by: 0

README

Cluster Autoscaler on AWS

The cluster autoscaler on AWS scales worker nodes within any specified Auto Scaling group and runs as a Deployment in your cluster. This README covers the steps required to get the cluster autoscaler up and running.

Kubernetes Version

Cluster autoscaler must run on Kubernetes v1.3.0 or greater.

Permissions

The worker node running the cluster autoscaler will need access to certain AWS resources and actions.

A minimum IAM policy would look like:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*"
        }
    ]
}
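
If you manage the worker role with code rather than the console, the sketch below shows one way to attach the minimum policy above as an inline role policy using aws-sdk-go. It is only a sketch: the role name and policy name are hypothetical placeholders, and attaching the policy through the console or CLI works just as well.

// Minimal sketch: attach the minimum cluster-autoscaler policy inline to the
// worker node IAM role. Role and policy names are hypothetical placeholders.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/iam"
)

const policyDocument = `{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*"
        }
    ]
}`

func main() {
	sess := session.Must(session.NewSession())
	svc := iam.New(sess)

	// Attach the policy document above as an inline policy on the worker role.
	_, err := svc.PutRolePolicy(&iam.PutRolePolicyInput{
		RoleName:       aws.String("k8s-worker-role"),        // hypothetical role name
		PolicyName:     aws.String("cluster-autoscaler-min"), // hypothetical policy name
		PolicyDocument: aws.String(policyDocument),
	})
	if err != nil {
		log.Fatalf("failed to attach policy: %v", err)
	}
}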

If you'd like to auto-discover node groups by specifying the --node-group-auto-discovery flag, the autoscaling:DescribeTags permission is also required:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeTags",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*"
        }
    ]
}

Unfortunately, AWS does not yet support ARNs for Auto Scaling groups, so you must use "*" as the resource. More information is available in the AWS documentation on IAM resource-level permissions.

Deployment Specification

1 ASG Setup (min: 1, max: 10, ASG Name: k8s-worker-asg-1)
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      containers:
        - image: gcr.io/google_containers/cluster-autoscaler:v0.6.0
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --nodes=1:10:k8s-worker-asg-1
          env:
            - name: AWS_REGION
              value: us-east-1
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"

Multiple ASG Setup
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      containers:
        - image: gcr.io/google_containers/cluster-autoscaler:v0.6.0
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --expander=least-waste
            - --nodes=1:10:k8s-worker-asg-1
            - --nodes=1:3:k8s-worker-asg-2
          env:
            - name: AWS_REGION
              value: us-east-1
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"

Master Node Setup

To run the CA pod on a master node, the CA deployment should tolerate the master taint, and a nodeSelector should be used to schedule the pod on the master node.

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      nodeSelector:
        kubernetes.io/role: master
      containers:
        - image: gcr.io/google_containers/cluster-autoscaler:{{ ca_version }}
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --nodes={{ node_asg_min }}:{{ node_asg_max }}:{{ name }}
          env:
            - name: AWS_REGION
              value: {{ region }}
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"

Auto-Discovery Setup

As of version v0.5.1, Docker images including support for --node-group-auto-discovery have not yet been published to the official repository. Please check out the latest source of this project locally and run REGISTRY=<your docker repo> make release to build and push an image yourself. Then, a manifest like the one below will run a cluster-autoscaler that auto-discovers ASGs tagged with k8s.io/cluster-autoscaler/enabled and kubernetes.io/cluster/<YOUR CLUSTER NAME> as node groups. Note that:

  • kubernetes.io/cluster/<YOUR CLUSTER NAME> is required when k8s.io/cluster-autoscaler/enabled is used across many clusters, to prevent ASGs from different clusters from being recognized as node groups
  • There are no --nodes flags passed to cluster-autoscaler because the node groups are automatically discovered by tags
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      containers:
        - image: <your docker repo>/cluster-autoscaler:dev
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 300Mi
            requests:
              cpu: 100m
              memory: 300Mi
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --expander=least-waste
            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,kubernetes.io/cluster/<YOUR CLUSTER NAME>
          env:
            - name: AWS_REGION
              value: us-east-1
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-certificates.crt"
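
For reference, the sketch below shows one way the discovery tags could be applied to an existing ASG using aws-sdk-go. The ASG name, region and cluster name are hypothetical placeholders; tagging through the console or CLI works just as well.

// Minimal sketch: tag an ASG so that --node-group-auto-discovery picks it up
// as a node group. ASG name, region and cluster name are placeholders.
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := autoscaling.New(sess)

	// Helper to build a tag on the hypothetical ASG "k8s-worker-asg-1".
	tag := func(key string) *autoscaling.Tag {
		return &autoscaling.Tag{
			ResourceId:        aws.String("k8s-worker-asg-1"),
			ResourceType:      aws.String("auto-scaling-group"),
			Key:               aws.String(key),
			Value:             aws.String(""),
			PropagateAtLaunch: aws.Bool(true),
		}
	}

	_, err := svc.CreateOrUpdateTags(&autoscaling.CreateOrUpdateTagsInput{
		Tags: []*autoscaling.Tag{
			tag("k8s.io/cluster-autoscaler/enabled"),
			tag("kubernetes.io/cluster/mycluster"), // hypothetical cluster name
		},
	})
	if err != nil {
		log.Fatalf("failed to tag ASG: %v", err)
	}
}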

Scaling a node group to 0

From CA 0.6.1 onwards, it is possible to scale a node group to 0 (and, obviously, from 0), assuming that all scale-down conditions are met.

If you are using nodeSelector, you need to tag the ASG with the node-template key "k8s.io/cluster-autoscaler/node-template/label/"; if you are using taints, use "k8s.io/cluster-autoscaler/node-template/taint/".

For example, for a node label of foo=bar you would tag the ASG with:

{
    "ResourceType": "auto-scaling-group",
    "ResourceId": "foo.example.com",
    "PropagateAtLaunch": true,
    "Value": "bar",
    "Key": "k8s.io/cluster-autoscaler/node-template/label/foo"
}

And for a taint of "dedicated": "foo:NoSchedule", you would tag the ASG with:

{
    "ResourceType": "auto-scaling-group",
    "ResourceId": "foo.example.com",
    "PropagateAtLaunch": true,
    "Value": "foo:NoSchedule",
    "Key": "k8s.io/cluster-autoscaler/node-template/taint/dedicated"
}

If you'd like to scale node groups from 0, a DescribeLaunchConfigurations permission is also required:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeTags",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*"
        }
    ]
}

Common Notes and Gotchas:

  • The /etc/ssl/certs/ca-certificates.crt file should exist by default on your EC2 instance.
  • Cluster autoscaler is not zone aware (for now), so if you wish to span multiple availability zones in your autoscaling groups, beware that cluster autoscaler will not evenly distribute nodes across them. For more information, see https://github.com/kubernetes/contrib/pull/1552#r75532949.
  • By default, cluster autoscaler will not terminate nodes running pods in the kube-system namespace. You can override this default behaviour by passing in the --skip-nodes-with-system-pods=false flag.
  • By default, cluster autoscaler will wait 10 minutes between scale-down operations; you can adjust this using the --scale-down-delay flag, e.g. --scale-down-delay=5m to decrease the scale-down delay to 5 minutes.
  • If you're running multiple ASGs, the --expander flag supports three options: random, most-pods and least-waste. random expands a random ASG on scale-up. most-pods expands the ASG that can schedule the most pods. least-waste expands the ASG that will waste the least amount of CPU/memory resources. In the event of a tie, cluster autoscaler falls back to random.

Documentation

Constants

This section is empty.

Variables

var InstanceTypes = map[string]*instanceType{}/* 122 elements not displayed */

InstanceTypes is a map of EC2 instance resource descriptions keyed by instance type name.

Functions

func BuildAwsCloudProvider

func BuildAwsCloudProvider(awsManager *AwsManager, discoveryOpts cloudprovider.NodeGroupDiscoveryOptions, resourceLimiter *cloudprovider.ResourceLimiter) (cloudprovider.CloudProvider, error)

BuildAwsCloudProvider builds CloudProvider implementation for AWS.
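
For orientation, the sketch below shows how CreateAwsManager and BuildAwsCloudProvider are typically wired together. The import paths and the NodeGroupDiscoveryOptions field used here are assumptions based on the surrounding cluster-autoscaler code base and may differ between versions.

// Minimal sketch of wiring the AWS cloud provider together (assumed import paths).
package main

import (
	"log"

	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider"
	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws"
)

func main() {
	// A nil config reader makes the manager fall back to the environment /
	// instance metadata for credentials and region (assumption).
	manager, err := aws.CreateAwsManager(nil)
	if err != nil {
		log.Fatalf("failed to create AWS manager: %v", err)
	}

	// Static node group specs use the same min:max:name format as the --nodes flag.
	discoveryOpts := cloudprovider.NodeGroupDiscoveryOptions{
		NodeGroupSpecs: []string{"1:10:k8s-worker-asg-1"},
	}

	// No cluster-wide resource limits for this sketch.
	limiter := cloudprovider.NewResourceLimiter(nil, nil)

	provider, err := aws.BuildAwsCloudProvider(manager, discoveryOpts, limiter)
	if err != nil {
		log.Fatalf("failed to build AWS cloud provider: %v", err)
	}

	for _, ng := range provider.NodeGroups() {
		log.Printf("node group %s (min=%d, max=%d)", ng.Id(), ng.MinSize(), ng.MaxSize())
	}
}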

Types

type Asg

type Asg struct {
	AwsRef
	// contains filtered or unexported fields
}

Asg implements NodeGroup interface.

func (*Asg) Autoprovisioned

func (asg *Asg) Autoprovisioned() bool

Autoprovisioned returns true if the node group is autoprovisioned.

func (*Asg) Belongs

func (asg *Asg) Belongs(node *apiv1.Node) (bool, error)

Belongs returns true if the given node belongs to the NodeGroup.

func (*Asg) Create

func (asg *Asg) Create() error

Create creates the node group on the cloud provider side.

func (*Asg) Debug

func (asg *Asg) Debug() string

Debug returns a debug string for the Asg.

func (*Asg) DecreaseTargetSize

func (asg *Asg) DecreaseTargetSize(delta int) error

DecreaseTargetSize decreases the target size of the node group. This function doesn't permit deleting any existing node and can only be used to reduce the request for new nodes that have not yet been fulfilled. Delta should be negative. It is assumed that the cloud provider will not delete the existing nodes when there is an option to just decrease the target.

func (*Asg) Delete

func (asg *Asg) Delete() error

Delete deletes the node group on the cloud provider side. This will be executed only for autoprovisioned node groups, once their size drops to 0.

func (*Asg) DeleteNodes

func (asg *Asg) DeleteNodes(nodes []*apiv1.Node) error

DeleteNodes deletes the nodes from the group.

func (*Asg) Exist

func (asg *Asg) Exist() bool

Exist checks if the node group really exists on the cloud provider side. It allows distinguishing a theoretical node group from a real one.

func (*Asg) Id

func (asg *Asg) Id() string

Id returns asg id.

func (*Asg) IncreaseSize

func (asg *Asg) IncreaseSize(delta int) error

IncreaseSize increases Asg size

func (*Asg) MaxSize

func (asg *Asg) MaxSize() int

MaxSize returns maximum size of the node group.

func (*Asg) MinSize

func (asg *Asg) MinSize() int

MinSize returns minimum size of the node group.

func (*Asg) Nodes

func (asg *Asg) Nodes() ([]string, error)

Nodes returns a list of all nodes that belong to this node group.

func (*Asg) TargetSize

func (asg *Asg) TargetSize() (int, error)

TargetSize returns the current TARGET size of the node group. It is possible that the number is different from the number of nodes registered in Kubernetes.

func (*Asg) TemplateNodeInfo

func (asg *Asg) TemplateNodeInfo() (*schedulercache.NodeInfo, error)

TemplateNodeInfo returns a node template for this node group.

type AwsManager

type AwsManager struct {
	// contains filtered or unexported fields
}

AwsManager handles AWS communication and data caching.

func CreateAwsManager

func CreateAwsManager(configReader io.Reader) (*AwsManager, error)

CreateAwsManager constructs the AwsManager object.

func (*AwsManager) Cleanup

func (m *AwsManager) Cleanup()

Cleanup closes the channel to signal the goroutine handling the cache to stop.

func (*AwsManager) DeleteInstances

func (m *AwsManager) DeleteInstances(instances []*AwsRef) error

DeleteInstances deletes the given instances. All instances must be controlled by the same ASG.

func (*AwsManager) GetAsgForInstance

func (m *AwsManager) GetAsgForInstance(instance *AwsRef) (*Asg, error)

GetAsgForInstance returns the Asg of the given instance.

func (*AwsManager) GetAsgNodes

func (m *AwsManager) GetAsgNodes(asg *Asg) ([]string, error)

GetAsgNodes returns Asg nodes.

func (*AwsManager) GetAsgSize

func (m *AwsManager) GetAsgSize(asgConfig *Asg) (int64, error)

GetAsgSize gets ASG size.

func (*AwsManager) RegisterAsg

func (m *AwsManager) RegisterAsg(asg *Asg)

RegisterAsg registers asg in Aws Manager.

func (*AwsManager) SetAsgSize

func (m *AwsManager) SetAsgSize(asg *Asg, size int64) error

SetAsgSize sets ASG size.

type AwsRef

type AwsRef struct {
	Name string
}

AwsRef contains a reference to some entity in the AWS world.

func AwsRefFromProviderId

func AwsRefFromProviderId(id string) (*AwsRef, error)

AwsRefFromProviderId creates an AwsRef object from a provider id, which must be in the format: aws:///zone/name
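
A small usage sketch, assuming the package's import path within the cluster-autoscaler tree; the zone and instance id are made-up examples.

// Minimal sketch: parse a Kubernetes provider id into an AwsRef.
package main

import (
	"log"

	"k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws"
)

func main() {
	ref, err := aws.AwsRefFromProviderId("aws:///us-east-1a/i-0123456789abcdef0")
	if err != nil {
		log.Fatalf("failed to parse provider id: %v", err)
	}
	log.Printf("instance name: %s", ref.Name) // prints the instance id
}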
