pluton

package
v0.5.0
Published: Apr 19, 2017 License: Apache-2.0 Imports: 7 Imported by: 0

README

pluton

Pluton is a tool for testing Kubernetes clusters, built upon the kola testing primitives. Each test in pluton receives a working Kubernetes cluster to test against rather than a kola.TestCluster. The spawn package is the glue that uses the platform package to build a Kubernetes cluster from a bootstrapping tool. Right now, bootkube on GCE is the primary supported Kubernetes platform.

Examples

Building: ./build

Listing Available Tests: ./bin/pluton list

Running the main bootkube test suite:

./bin/pluton run \
--parallel 5 \
--platform=gce \
--gce-image=projects/coreos-cloud/global/images/coreos-stable-1235-12-0-v20170223 \
--bootkubeRepo=$IMAGE_REPO \
--bootkubeTag=$IMAGE_TAG \
bootkube*

Running a bootkube conformance test:

./bin/pluton run \
--parallel 5 \
--platform=gce \
--gce-image=projects/coreos-cloud/global/images/coreos-stable-1235-12-0-v20170223 \
--bootkubeRepo=$IMAGE_REPO \
--bootkubeTag=$IMAGE_TAG \
--hostKubeletTag=v1.5.3_coreos.0 \
--conformanceVersion=v1.5.3+coreos.0 \
conformance*

Getting Logs: By default, journal logs for each machine per test will be placed in _pluton_temp and overwritten on the next invocation of pluton in the same directory.

Roadmap

  • Directly use the new harness pkg such that a pluton.Cluster is passed to every test function
  • Begin to build out the ability of tests to register options in the test structure that customize use of the spawn package
  • Build a subcommand that looks like pluton daemon [options] ./custom_script, in which the custom script is passed the location of a temporary kubeconfig. This will enable use of pluton in other repositories that rely only on a kubeconfig and a single cluster and don't wish to integrate and register tests directly into the harness
  • Research allowing different implementations of the spawn package.
  • Collect docker logs automatically for each machine.
daemon plan

The goal of the daemon subcommand is to allow easier integration and use of pluton as a CI tool for creating and destroying bootkube-based clusters, for users who already have Kubernetes tests and don't need the direct harness integration that runs multiple clusters at once. It will look like:

pluton daemon [-options] ./test.sh

This would create a cluster and exec ./test.sh, which, for many people, would just call a Go test suite that only needs a KUBECONFIG. Pluton would export a KUBECONFIG environment variable to ./test.sh and tear down the cluster when the script is done.
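
For illustration (this test and its package are hypothetical, not part of pluton), a consumer's Go test suite invoked through ./test.sh would only need to read the exported variable:

package consumertests

import (
	"os"
	"testing"
)

// TestClusterReachable is a hypothetical external test that pluton daemon
// could invoke via ./test.sh; it depends only on the KUBECONFIG that
// pluton exports.
func TestClusterReachable(t *testing.T) {
	kubeconfig := os.Getenv("KUBECONFIG")
	if kubeconfig == "" {
		t.Fatal("KUBECONFIG not set; expected pluton daemon to export it")
	}
	if _, err := os.Stat(kubeconfig); err != nil {
		t.Fatalf("kubeconfig not readable: %v", err)
	}
	// A real suite would build a Kubernetes client from this path.
}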

The daemon would also have an interactive mode, pluton daemon [-options] --interactive, which would run a cluster and drop a kubeconfig (and eventually SSH keys as well). Issuing a SIGTERM would then gracefully shut down the cluster.

Right now the [-options] would generally specify GCE cloud platform options and the bootkube container to use, for example:

--platform=gce \
--gce-image=projects/coreos-cloud/global/images/coreos-stable-1235-12-0-v20170223 \
--bootkubeRepo=quay.io/coreos/bootkube \
--bootkubeTag=v0.3.9 \
--hostKubeletTag=v1.5.3+coreos.0

The main limitation right now is that you can only specify a bootkube container to use and you have no choice of cloud platform. Internally, the right interfaces exist to swap in something like a Terraform-based cloud platform. In the future we also want the ability to specify additional cluster assets. Over time we want better ways to specify all aspects of how the initial Kubernetes cluster is built and what versions of what software it runs.

Documentation

Index

Constants

This section is empty.

Variables

This section is empty.

Functions

This section is empty.

Types

type Cluster

type Cluster struct {
	Masters []platform.Machine
	Workers []platform.Machine
	Info    Info

	*harness.H
	// contains filtered or unexported fields
}

Cluster represents an interface to test Kubernetes clusters. The embedded harness object is used for logging, exiting, or skipping a test; it is nearly identical to the Go test harness. Creation is usually handled by a function in the 'spawn' subpackage that builds the Cluster from a kola TestCluster. Tests may be aware of the implementing function since not all clusters are expected to have the same components or properties.
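
As a non-authoritative sketch, a test body receiving a *Cluster might look like the following. The import path and the Fatalf/Logf methods on the embedded harness (mirroring the standard testing package) are assumptions here, not taken from this documentation.

package example

import (
	"strings"

	"github.com/coreos/pluton" // import path assumed for illustration
)

// checkNodeCount is a hypothetical test body: it lists nodes with kubectl
// and fails through the embedded harness if fewer nodes are registered
// than the Cluster struct says it has.
func checkNodeCount(c *pluton.Cluster) {
	out, err := c.Kubectl("get nodes --no-headers")
	if err != nil {
		c.Fatalf("listing nodes: %v", err)
	}
	nodes := strings.Split(strings.TrimSpace(out), "\n")
	want := len(c.Masters) + len(c.Workers)
	if len(nodes) < want {
		c.Fatalf("expected at least %d nodes, got %d", want, len(nodes))
	}
	c.Logf("cluster reports %d registered nodes", len(nodes))
}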

func NewCluster

func NewCluster(m Manager, masters, workers []platform.Machine, info Info) *Cluster

func (*Cluster) AddMasters

func (c *Cluster) AddMasters(n int) error

AddMasters creates new master nodes for a Cluster and blocks until ready.

func (*Cluster) Kubectl

func (c *Cluster) Kubectl(cmd string) (string, error)

Kubectl will run kubectl from /home/core on the master machine.
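
A small usage sketch, assuming the same setup as the example under Cluster above (the manifest path and kubectl arguments are illustrative):

// createNginx is a hypothetical helper; the whole string is handed to
// kubectl on the master node, stdout is returned, and failures surface
// as the error.
func createNginx(c *pluton.Cluster) error {
	if _, err := c.Kubectl("create -f /home/core/nginx.yaml"); err != nil {
		return err
	}
	out, err := c.Kubectl("get deployments --no-headers")
	if err != nil {
		return err
	}
	c.Logf("deployments:\n%s", out)
	return nil
}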

func (*Cluster) Ready

func (c *Cluster) Ready() error

Ready blocks until a Cluster is considered available. The current implementation only checks that all nodes are Registered. TODO: Use the manager interface to allow the implementor to determine when a Cluster is considered available, either by exposing enough information for this function to check that certain pods are running or by implementing its own `Ready() error` function that gets called after the nodeCheck in this function.

func (*Cluster) SSH

func (c *Cluster) SSH(cmd string) (stdout, stderr []byte, err error)

SSH is just a convenience function for running SSH commands when you don't care which machine the command runs on. The current implementation chooses the first master node. The signature is slightly different from the machine SSH command and doesn't automatically print stderr. I expect in the future that this will be more unified with the Machine.SSH signature, but for now this is useful for keeping all the retry loops from clogging up the test results while still giving the option to deal with stderr.
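
A usage sketch (the command is illustrative; it assumes fmt is imported and, as above, that the harness exposes Logf):

// kubeletActive is a hypothetical check that runs on the first master node.
// stderr is returned rather than printed, so include it in the error only
// when the command fails.
func kubeletActive(c *pluton.Cluster) error {
	stdout, stderr, err := c.SSH("systemctl is-active kubelet")
	if err != nil {
		return fmt.Errorf("kubelet check failed: %v (stderr: %s)", err, stderr)
	}
	c.Logf("kubelet: %s", stdout)
	return nil
}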

type Info added in v0.4.0

type Info struct {
	KubeletTag      string // e.g. v1.5.3_coreos.0
	Version         string // e.g. v1.5.3+coreos.0
	UpstreamVersion string // e.g. v1.5.3
}

Info contains information about how a Cluster is configured that may be useful for some tests.

type Manager

type Manager interface {
	AddMasters(n int) ([]platform.Machine, error)
	AddWorkers(n int) ([]platform.Machine, error)
}

A Manager provides higher level management of the underlying Cluster platform.
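
For illustration only (not from this documentation, and the platform import path is a guess), a trivial Manager that hands out machines from a fixed pool instead of provisioning new cloud instances might look like this:

package example

import (
	"fmt"

	"github.com/coreos/mantle/platform" // import path assumed for illustration
)

// staticManager is a hypothetical Manager backed by pre-provisioned machines.
type staticManager struct {
	masters, workers []platform.Machine
}

func (s *staticManager) AddMasters(n int) ([]platform.Machine, error) {
	if n > len(s.masters) {
		return nil, fmt.Errorf("only %d spare masters available", len(s.masters))
	}
	out := s.masters[:n]
	s.masters = s.masters[n:]
	return out, nil
}

func (s *staticManager) AddWorkers(n int) ([]platform.Machine, error) {
	if n > len(s.workers) {
		return nil, fmt.Errorf("only %d spare workers available", len(s.workers))
	}
	out := s.workers[:n]
	s.workers = s.workers[n:]
	return out, nil
}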

type Options added in v0.4.0

type Options struct {
	SelfHostEtcd   bool
	InitialWorkers int
	InitialMasters int
}

Options represents per-test options that control the initial creation of a Cluster. It is advised that all options are explicitly filled out for each registered test. TODO(pb): If this is too verbose to declare per test we may move to a default system but lose the nice struct declaration syntax. Eventually, we may allow overriding global options such as bootkube versions or cloud options.

type Test added in v0.4.0

type Test struct {
	Name    string
	Run     func(c *Cluster)
	Options Options
}

A Test defines a function that will test a running Cluster. The test is run if its Name matches the glob pattern specified on the pluton command line.
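
Putting the pieces together, a test might be declared roughly as follows (the name, Run body, and option values are illustrative, and the mechanism for registering the Test with the runner is not shown here):

// smokeTest is a hypothetical declaration; its Name must match the glob
// given on the pluton command line for it to run.
var smokeTest = pluton.Test{
	Name: "bootkube.smoke",
	Run: func(c *pluton.Cluster) {
		if err := c.Ready(); err != nil {
			c.Fatalf("cluster never became ready: %v", err)
		}
	},
	Options: pluton.Options{
		InitialMasters: 1,
		InitialWorkers: 2,
		SelfHostEtcd:   false,
	},
}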

Directories

Path Synopsis
tests
