e2e

package
v1.8.5
Published: Oct 31, 2024 License: Apache-2.0 Imports: 63 Imported by: 4

Documentation

Overview

Package e2e implements end-to-end testing.

Index

Constants

const (
	KubernetesVersionManagement     = "KUBERNETES_VERSION_MANAGEMENT"
	KubernetesVersion               = "KUBERNETES_VERSION"
	CNIPath                         = "CNI"
	CNIResources                    = "CNI_RESOURCES"
	KubernetesVersionUpgradeFrom    = "KUBERNETES_VERSION_UPGRADE_FROM"
	KubernetesVersionUpgradeTo      = "KUBERNETES_VERSION_UPGRADE_TO"
	CPMachineTemplateUpgradeTo      = "CONTROL_PLANE_MACHINE_TEMPLATE_UPGRADE_TO"
	WorkersMachineTemplateUpgradeTo = "WORKERS_MACHINE_TEMPLATE_UPGRADE_TO"
	EtcdVersionUpgradeTo            = "ETCD_VERSION_UPGRADE_TO"
	CoreDNSVersionUpgradeTo         = "COREDNS_VERSION_UPGRADE_TO"
	IPFamily                        = "IP_FAMILY"
)

Test suite constants for e2e config variables.

const AutoscalerWorkloadYAMLPath = "AUTOSCALER_WORKLOAD"

Variables

This section is empty.

Functions

func AutoscalerSpec added in v1.5.0

func AutoscalerSpec(ctx context.Context, inputGetter func() AutoscalerSpecInput)

AutoscalerSpec implements a test for the autoscaler, and more specifically for the autoscaler being deployed in the workload cluster.

func Byf

func Byf(format string, a ...interface{})

Byf logs a formatted step message, i.e. it calls Ginkgo's By with fmt.Sprintf(format, a...).

func ClusterClassChangesSpec added in v1.1.0

func ClusterClassChangesSpec(ctx context.Context, inputGetter func() ClusterClassChangesSpecInput)

ClusterClassChangesSpec implements a test that verifies that ClusterClass changes are rolled out successfully. Thus, the test consists of the following steps:

  • Deploy Cluster using a ClusterClass and wait until it is fully provisioned.
  • Modify the ControlPlaneTemplate of the ClusterClass by setting ModifyControlPlaneFields and wait until the change has been rolled out to the ControlPlane of the Cluster.
  • Modify the BootstrapTemplate of all MachineDeploymentClasses of the ClusterClass by setting ModifyMachineDeploymentBootstrapConfigTemplateFields and wait until the change has been rolled out to the MachineDeployments of the Cluster.
  • Modify the InfrastructureMachineTemplate of all MachineDeploymentClasses of the ClusterClass by setting ModifyMachineDeploymentInfrastructureMachineTemplateFields and wait until the change has been rolled out to the MachineDeployments of the Cluster.
  • Rebase the Cluster to a copy of the ClusterClass which has an additional worker label set. Then wait until the change has been rolled out to the MachineDeployments of the Cluster and verify the ControlPlane has not been changed.

NOTE: The ClusterClass can be changed in many ways (as documented in the ClusterClass Operations doc). This test verifies a subset of the possible operations and aims to test the most complicated rollouts (template changes, label propagation, rebase), everything else will be covered by unit or integration tests. NOTE: Changing the ClusterClass or rebasing to another ClusterClass is semantically equivalent from the point of view of a Cluster and the Cluster topology reconciler does not handle those cases differently. Thus we have indirect test coverage of this from other tests as well.

func ClusterClassRolloutSpec added in v1.4.0

func ClusterClassRolloutSpec(ctx context.Context, inputGetter func() ClusterClassRolloutSpecInput)

ClusterClassRolloutSpec implements a test that verifies the ClusterClass rollout behavior. It specifically covers the in-place propagation behavior from ClusterClass / Cluster topology to all objects of the Cluster topology (e.g. KCP, MD) and even tests label propagation to the Nodes of the workload cluster. Thus, the test consists of the following steps:

  • Deploy Cluster using a ClusterClass and wait until it is fully provisioned.
  • Assert cluster objects
  • Modify in-place mutable fields of KCP and the MachineDeployments
  • Verify that fields were mutated in-place and assert cluster objects
  • Modify fields in KCP and MachineDeployments to trigger a full rollout of all Machines
  • Verify that all Machines have been replaced and assert cluster objects
  • Set RolloutAfter on KCP and MachineDeployments to trigger a full rollout of all Machines
  • Verify that all Machines have been replaced and assert cluster objects

While asserting cluster objects we check that all objects have the right labels, annotations and selectors.

func ClusterUpgradeConformanceSpec

func ClusterUpgradeConformanceSpec(ctx context.Context, inputGetter func() ClusterUpgradeConformanceSpecInput)

ClusterUpgradeConformanceSpec implements a spec that upgrades a cluster and runs the Kubernetes conformance suite. Upgrading a cluster refers to upgrading the control-plane and worker nodes (managed by MD and machine pools). NOTE: This test only works with a KubeadmControlPlane. NOTE: This test works with Clusters with and without ClusterClass. When using ClusterClass the ClusterClass must have the variables "etcdImageTag" and "coreDNSImageTag" of type string. Those variables should have corresponding patches which set the etcd and CoreDNS tags in KCP.
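For a ClusterClass-based cluster, the required variable declarations would look roughly like the following sketch (the corresponding patches that apply these tags to KCP are omitted):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: ClusterClass
spec:
  variables:
  - name: etcdImageTag
    required: false
    schema:
      openAPIV3Schema:
        type: string
  - name: coreDNSImageTag
    required: false
    schema:
      openAPIV3Schema:
        type: string
```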

func ClusterUpgradeWithRuntimeSDKSpec added in v1.8.0

func ClusterUpgradeWithRuntimeSDKSpec(ctx context.Context, inputGetter func() ClusterUpgradeWithRuntimeSDKSpecInput)

ClusterUpgradeWithRuntimeSDKSpec implements a spec that upgrades a cluster and runs the Kubernetes conformance suite. Upgrading a cluster refers to upgrading the control-plane and worker nodes (managed by MD and machine pools). NOTE: This test only works with a KubeadmControlPlane. NOTE: This test works with Clusters with and without ClusterClass. When using ClusterClass the ClusterClass must have the variables "etcdImageTag" and "coreDNSImageTag" of type string. Those variables should have corresponding patches which set the etcd and CoreDNS tags in KCP.

func ClusterctlUpgradeSpec

func ClusterctlUpgradeSpec(ctx context.Context, inputGetter func() ClusterctlUpgradeSpecInput)

ClusterctlUpgradeSpec implements a test that verifies clusterctl upgrade of a management cluster.

NOTE: This test is designed to test upgrades from older versions of Cluster API to v1beta1. This spec will create a workload cluster, which will be converted into a new management cluster (henceforth called the secondary management cluster) with the older version of Cluster API and the infrastructure provider. It will then create an additional workload cluster (henceforth called the secondary workload cluster) from the new management cluster using the default cluster template of the old release, then run clusterctl upgrade to the latest version of Cluster API and ensure correct operation by scaling a MachineDeployment.

To use this spec the fields InitWithBinary and InitWithKubernetesVersion must be specified in the spec input. See ClusterctlUpgradeSpecInput for further information.

In order to get this to work, infrastructure providers need to implement a mechanism to stage the locally compiled OCI image of their infrastructure provider and have it downloaded and available on the secondary management cluster. It is recommended that infrastructure providers use `docker save` and output the local image to a tar file, upload it to object storage, and then use preKubeadmCommands to pre-load the image before Kubernetes starts.

For example, for Cluster API Provider AWS, the docker image is stored in an s3 bucket with a unique name for the account-region pair, so as to not clash with any other AWS user / account, with the object key being the sha256sum of the image digest.
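An illustrative staging sequence might look like the following (the image name is hypothetical; a stand-in empty tar replaces the real `docker save` output so the sketch is runnable, and here the object key is derived from the tar itself for simplicity):

```shell
# In a real pipeline this would be: docker save my-infra-provider:e2e -o image.tar
tar -cf image.tar -T /dev/null                  # stand-in for the saved provider image
KEY="$(sha256sum image.tar | cut -d' ' -f1)"    # object key derived from the tar's sha256
echo "aws s3 cp image.tar s3://${S3_BUCKET:-my-bucket}/${KEY}"  # upload command (echoed only)
```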

The following commands are then added to preKubeadmCommands:

preKubeadmCommands:
- mkdir -p /opt/cluster-api
- aws s3 cp "s3://${S3_BUCKET}/${E2E_IMAGE_SHA}" /opt/cluster-api/image.tar
- ctr -n k8s.io images import /opt/cluster-api/image.tar # The image must be imported into the k8s.io namespace

func GetStableReleaseOfMinor added in v1.7.0

func GetStableReleaseOfMinor(ctx context.Context, minorRelease string) (string, error)

GetStableReleaseOfMinor returns the latest stable version of the given minor release.

func HaveControllerRef

func HaveControllerRef(kind string, owner metav1.Object) types.GomegaMatcher

HaveControllerRef succeeds if the object has a controller owner reference of the given kind pointing to owner.

func HaveValidVersion

func HaveValidVersion(version string) types.GomegaMatcher

HaveValidVersion succeeds if version is a valid semver version.

func K8SConformanceSpec

func K8SConformanceSpec(ctx context.Context, inputGetter func() K8SConformanceSpecInput)

K8SConformanceSpec implements a spec that creates a cluster and runs the Kubernetes conformance suite.

func KCPAdoptionSpec

func KCPAdoptionSpec(ctx context.Context, inputGetter func() KCPAdoptionSpecInput)

KCPAdoptionSpec implements a test that verifies that KCP properly adopts pre-existing control plane Machines.

func KCPRemediationSpec added in v1.4.0

func KCPRemediationSpec(ctx context.Context, inputGetter func() KCPRemediationSpecInput)

KCPRemediationSpec implements a test that verifies that Machines are remediated by MHC during unhealthy conditions.

func MachineDeploymentRemediationSpec added in v1.4.0

func MachineDeploymentRemediationSpec(ctx context.Context, inputGetter func() MachineDeploymentRemediationSpecInput)

MachineDeploymentRemediationSpec implements a test that verifies that Machines are remediated by MHC during unhealthy conditions.

func MachineDeploymentRolloutSpec added in v0.4.1

func MachineDeploymentRolloutSpec(ctx context.Context, inputGetter func() MachineDeploymentRolloutSpecInput)

MachineDeploymentRolloutSpec implements a test that verifies that MachineDeployment rolling updates are successful.

func MachineDeploymentScaleSpec

func MachineDeploymentScaleSpec(ctx context.Context, inputGetter func() MachineDeploymentScaleSpecInput)

MachineDeploymentScaleSpec implements a test that verifies that MachineDeployment scale operations are successful.

func NodeDrainTimeoutSpec

func NodeDrainTimeoutSpec(ctx context.Context, inputGetter func() NodeDrainTimeoutSpecInput)

NodeDrainTimeoutSpec implements a test that verifies node draining during Machine deletion respects the configured nodeDrainTimeout.

func QuickStartSpec

func QuickStartSpec(ctx context.Context, inputGetter func() QuickStartSpecInput)

QuickStartSpec implements a spec that mimics the operation described in the Cluster API quick start, that is creating a workload cluster. This test is meant to provide a first, fast signal to detect regression; it is recommended to use it as a PR blocker test. NOTE: This test works with Clusters with and without ClusterClass.

func SelfHostedSpec

func SelfHostedSpec(ctx context.Context, inputGetter func() SelfHostedSpecInput)

SelfHostedSpec implements a test that verifies Cluster API can create a workload cluster and pivot to it, making it a self-hosted management cluster. NOTE: This test works with Clusters with and without ClusterClass.

Types

type AutoscalerSpecInput added in v1.5.0

type AutoscalerSpecInput struct {
	E2EConfig             *clusterctl.E2EConfig
	ClusterctlConfigPath  string
	BootstrapClusterProxy framework.ClusterProxy
	ArtifactFolder        string
	SkipCleanup           bool
	ControlPlaneWaiters   clusterctl.ControlPlaneWaiters

	// Flavor, if specified, must refer to a managed topology cluster template
	// which has exactly one MachineDeployment. The replicas should be nil on the MachineDeployment.
	// The MachineDeployment should have the autoscaler annotations set on it.
	// If not specified, it defaults to "topology-autoscaler".
	Flavor *string
	// InfrastructureProviders specifies the infrastructure to use for clusterctl
	// operations (Example: get cluster templates).
	// Note: In most cases this need not be specified. It only needs to be specified when
	// multiple infrastructure providers (ex: CAPD + Kubemark) are installed on the cluster as clusterctl will not be
	// able to identify the default.
	InfrastructureProvider *string
	// InfrastructureMachineTemplateKind should be the plural form of the InfraMachineTemplate kind.
	// It should be specified in lower case.
	// Example: dockermachinetemplates.
	InfrastructureMachineTemplateKind     string
	InfrastructureMachinePoolTemplateKind string
	InfrastructureMachinePoolKind         string
	InfrastructureAPIGroup                string
	AutoscalerVersion                     string

	// InstallOnManagementCluster controls whether the autoscaler is installed on the management or the workload cluster.
	// Depending on the CI environments, there may be no connectivity from the workload to the management cluster.
	InstallOnManagementCluster bool

	// ScaleToAndFromZero enables tests to scale to and from zero.
	// Note: This is only implemented for MachineDeployments.
	ScaleToAndFromZero bool

	// Allows to inject a function to be run after test namespace is created.
	// If not specified, this is a no-op.
	PostNamespaceCreated func(managementClusterProxy framework.ClusterProxy, workloadClusterNamespace string)
}

AutoscalerSpecInput is the input for AutoscalerSpec.
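As described for Flavor above, the MachineDeployment in the referenced template would carry the cluster-autoscaler node-group annotations and leave replicas unset, along the lines of this sketch (the min/max values are illustrative):

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  annotations:
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
spec:
  # replicas intentionally unset so the autoscaler owns scaling
```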

type ClusterClassChangesSpecInput added in v1.1.0

type ClusterClassChangesSpecInput struct {
	E2EConfig             *clusterctl.E2EConfig
	ClusterctlConfigPath  string
	BootstrapClusterProxy framework.ClusterProxy
	ArtifactFolder        string
	SkipCleanup           bool
	ControlPlaneWaiters   clusterctl.ControlPlaneWaiters

	// InfrastructureProviders specifies the infrastructure to use for clusterctl
	// operations (Example: get cluster templates).
	// Note: In most cases this need not be specified. It only needs to be specified when
	// multiple infrastructure providers (ex: CAPD + in-memory) are installed on the cluster as clusterctl will not be
	// able to identify the default.
	InfrastructureProvider *string
	// Flavor is the cluster-template flavor used to create the Cluster for testing.
	// NOTE: The template must be using a ClusterClass.
	Flavor string

	// ModifyControlPlaneFields are the ControlPlane fields which will be set on the
	// ControlPlaneTemplate of the ClusterClass after the initial Cluster creation.
	// The test verifies that these fields are rolled out to the ControlPlane.
	// NOTE: The fields are configured in the following format: (without ".spec.template")
	// map[string]interface{}{
	//   "spec.path.to.field": <value>,
	// }
	ModifyControlPlaneFields map[string]interface{}

	// ModifyMachineDeploymentBootstrapConfigTemplateFields are the fields which will be set on the
	// BootstrapConfigTemplate of all MachineDeploymentClasses of the ClusterClass after the initial Cluster creation.
	// The test verifies that these fields are rolled out to the MachineDeployments.
	// NOTE: The fields are configured in the following format:
	// map[string]interface{}{
	//   "spec.template.spec.path.to.field": <value>,
	// }
	ModifyMachineDeploymentBootstrapConfigTemplateFields map[string]interface{}

	// ModifyMachineDeploymentInfrastructureMachineTemplateFields are the fields which will be set on the
	// InfrastructureMachineTemplate of all MachineDeploymentClasses of the ClusterClass after the initial Cluster creation.
	// The test verifies that these fields are rolled out to the MachineDeployments.
	// NOTE: The fields are configured in the following format:
	// map[string]interface{}{
	//   "spec.template.spec.path.to.field": <value>,
	// }
	ModifyMachineDeploymentInfrastructureMachineTemplateFields map[string]interface{}

	// ModifyMachinePoolBootstrapConfigTemplateFields are the fields which will be set on the
	// BootstrapConfigTemplate of all MachinePoolClasses of the ClusterClass after the initial Cluster creation.
	// The test verifies that these fields are rolled out to the MachinePools.
	// NOTE: The fields are configured in the following format:
	// map[string]interface{}{
	//   "spec.template.spec.path.to.field": <value>,
	// }
	ModifyMachinePoolBootstrapConfigTemplateFields map[string]interface{}

	// ModifyMachinePoolInfrastructureMachinePoolTemplateFields are the fields which will be set on the
	// InfrastructureMachinePoolTemplate of all MachinePoolClasses of the ClusterClass after the initial Cluster creation.
	// The test verifies that these fields are rolled out to the MachinePools.
	// NOTE: The fields are configured in the following format:
	// map[string]interface{}{
	//   "spec.template.spec.path.to.field": <value>,
	// }
	ModifyMachinePoolInfrastructureMachinePoolTemplateFields map[string]interface{}

	// Allows to inject a function to be run after test namespace is created.
	// If not specified, this is a no-op.
	PostNamespaceCreated func(managementClusterProxy framework.ClusterProxy, workloadClusterNamespace string)
}

ClusterClassChangesSpecInput is the input for ClusterClassChangesSpec.

type ClusterClassRolloutSpecInput added in v1.4.0

type ClusterClassRolloutSpecInput struct {
	E2EConfig             *clusterctl.E2EConfig
	ClusterctlConfigPath  string
	BootstrapClusterProxy framework.ClusterProxy
	ArtifactFolder        string
	SkipCleanup           bool
	ControlPlaneWaiters   clusterctl.ControlPlaneWaiters

	// InfrastructureProviders specifies the infrastructure to use for clusterctl
	// operations (Example: get cluster templates).
	// Note: In most cases this need not be specified. It only needs to be specified when
	// multiple infrastructure providers (ex: CAPD + in-memory) are installed on the cluster as clusterctl will not be
	// able to identify the default.
	InfrastructureProvider *string

	// Flavor is the cluster-template flavor used to create the Cluster for testing.
	// NOTE: The template must be using ClusterClass, KCP and CABPK as this test is specifically
	// testing ClusterClass and KCP rollout behavior.
	Flavor string

	// Allows to inject a function to be run after test namespace is created.
	// If not specified, this is a no-op.
	PostNamespaceCreated func(managementClusterProxy framework.ClusterProxy, workloadClusterNamespace string)

	// FilterMetadataBeforeValidation allows filtering out labels and annotations of Machines, InfraMachines,
	// BootstrapConfigs and Nodes before we validate them.
	// This can be e.g. used to filter out additional infrastructure provider specific labels that would
	// otherwise lead to a failed test.
	FilterMetadataBeforeValidation func(object client.Object) clusterv1.ObjectMeta
}

ClusterClassRolloutSpecInput is the input for ClusterClassRolloutSpec.

type ClusterProxy

type ClusterProxy interface {
	framework.ClusterProxy

	ApplyWithArgs(context.Context, []byte, ...string) error
}

type ClusterUpgradeConformanceSpecInput

type ClusterUpgradeConformanceSpecInput struct {
	E2EConfig             *clusterctl.E2EConfig
	ClusterctlConfigPath  string
	BootstrapClusterProxy framework.ClusterProxy
	ArtifactFolder        string
	SkipCleanup           bool
	SkipConformanceTests  bool
	ControlPlaneWaiters   clusterctl.ControlPlaneWaiters

	// InfrastructureProviders specifies the infrastructure to use for clusterctl
	// operations (Example: get cluster templates).
	// Note: In most cases this need not be specified. It only needs to be specified when
	// multiple infrastructure providers (ex: CAPD + in-memory) are installed on the cluster as clusterctl will not be
	// able to identify the default.
	InfrastructureProvider *string

	// ControlPlaneMachineCount is used in `config cluster` to configure the count of the control plane machines used in the test.
	// Default is 1.
	ControlPlaneMachineCount *int64

	// WorkerMachineCount is used in `config cluster` to configure the count of the worker machines used in the test.
	// NOTE: If the WORKER_MACHINE_COUNT var is used multiple times in the cluster template, the absolute count of
	// worker machines is a multiple of WorkerMachineCount.
	// Default is 2.
	WorkerMachineCount *int64

	// Flavor to use when creating the cluster for testing, "upgrades" is used if not specified.
	Flavor *string

	// Allows to inject a function to be run after test namespace is created.
	// If not specified, this is a no-op.
	PostNamespaceCreated func(managementClusterProxy framework.ClusterProxy, workloadClusterNamespace string)

	// Allows to inject a function to be run before checking control-plane machines to be upgraded.
	// If not specified, this is a no-op.
	PreWaitForControlPlaneToBeUpgraded func(managementClusterProxy framework.ClusterProxy, workloadClusterNamespace, workloadClusterName string)
}

ClusterUpgradeConformanceSpecInput is the input for ClusterUpgradeConformanceSpec.

type ClusterUpgradeWithRuntimeSDKSpecInput added in v1.8.0

type ClusterUpgradeWithRuntimeSDKSpecInput struct {
	E2EConfig             *clusterctl.E2EConfig
	ClusterctlConfigPath  string
	BootstrapClusterProxy framework.ClusterProxy
	ArtifactFolder        string
	SkipCleanup           bool

	// InfrastructureProviders specifies the infrastructure to use for clusterctl
	// operations (Example: get cluster templates).
	// Note: In most cases this need not be specified. It only needs to be specified when
	// multiple infrastructure providers (ex: CAPD + in-memory) are installed on the cluster as clusterctl will not be
	// able to identify the default.
	InfrastructureProvider *string

	// ControlPlaneMachineCount is used in `config cluster` to configure the count of the control plane machines used in the test.
	// Default is 1.
	ControlPlaneMachineCount *int64

	// WorkerMachineCount is used in `config cluster` to configure the count of the worker machines used in the test.
	// NOTE: If the WORKER_MACHINE_COUNT var is used multiple times in the cluster template, the absolute count of
	// worker machines is a multiple of WorkerMachineCount.
	// Default is 2.
	WorkerMachineCount *int64

	// Flavor to use when creating the cluster for testing, "upgrades" is used if not specified.
	Flavor *string

	// Allows to inject a function to be run after test namespace is created.
	// If not specified, this is a no-op.
	PostNamespaceCreated func(managementClusterProxy framework.ClusterProxy, workloadClusterNamespace string)

	// Allows to inject a function to be run after the cluster is upgraded.
	// If not specified, this is a no-op.
	PostUpgrade func(managementClusterProxy framework.ClusterProxy, workloadClusterNamespace, workloadClusterName string)

	// ExtensionServiceNamespace is the namespace where the service for the Runtime SDK is located
	// and is used to configure in the test-namespace scoped ExtensionConfig.
	ExtensionServiceNamespace string
	// ExtensionServiceName is the name of the service to configure in the test-namespace scoped ExtensionConfig.
	ExtensionServiceName string
}

ClusterUpgradeWithRuntimeSDKSpecInput is the input for ClusterUpgradeWithRuntimeSDKSpec.

type ClusterctlUpgradeSpecInput

type ClusterctlUpgradeSpecInput struct {
	E2EConfig             *clusterctl.E2EConfig
	ClusterctlConfigPath  string
	BootstrapClusterProxy framework.ClusterProxy
	ArtifactFolder        string

	// UseKindForManagementCluster instructs the test to use kind for creating the management cluster (instead of using the actual infrastructure provider).
	// NOTE: Given that the bootstrap cluster could be shared by several tests, it is not practical to use it for testing clusterctl upgrades.
	// Therefore, we create a new management cluster on which to install the older version of providers.
	UseKindForManagementCluster bool
	// KindManagementClusterNewClusterProxyFunc is used to create the ClusterProxy used in the test after creating the kind based management cluster.
	// This allows to use a custom ClusterProxy implementation or create a ClusterProxy with a custom scheme and options.
	KindManagementClusterNewClusterProxyFunc func(name string, kubeconfigPath string) framework.ClusterProxy

	// InitWithBinary must be used to specify the URL of the clusterctl binary of the old version of Cluster API. The spec will interpolate the
	// strings `{OS}` and `{ARCH}` to `runtime.GOOS` and `runtime.GOARCH` respectively, e.g. https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.23/clusterctl-{OS}-{ARCH}
	InitWithBinary string
	// InitWithProvidersContract can be used to set the contract used to initialize the secondary management cluster, e.g. `v1alpha3`.
	InitWithProvidersContract string
	// InitWithKubernetesVersion must be used to set a Kubernetes version to use to create the secondary management cluster, e.g. `v1.25.0`
	InitWithKubernetesVersion string
	// InitWithCoreProvider specifies the core provider version to use when initializing the secondary management cluster, e.g. `cluster-api:v1.3.0`.
	// If not set, the core provider version is calculated based on the contract.
	InitWithCoreProvider string
	// InitWithBootstrapProviders specifies the bootstrap provider versions to use when initializing the secondary management cluster, e.g. `kubeadm:v1.3.0`.
	// If not set, the bootstrap provider version is calculated based on the contract.
	InitWithBootstrapProviders []string
	// InitWithControlPlaneProviders specifies the control plane provider versions to use when initializing the secondary management cluster, e.g. `kubeadm:v1.3.0`.
	// If not set, the control plane provider version is calculated based on the contract.
	InitWithControlPlaneProviders []string
	// InitWithInfrastructureProviders specifies the infrastructure provider versions to add to the secondary management cluster, e.g. `aws:v2.0.0`.
	// If not set, the infrastructure provider version is calculated based on the contract.
	InitWithInfrastructureProviders []string
	// InitWithIPAMProviders specifies the IPAM provider versions to add to the secondary management cluster, e.g. `infoblox:v0.0.1`.
	// If not set, the IPAM provider version is calculated based on the contract.
	InitWithIPAMProviders []string
	// InitWithRuntimeExtensionProviders specifies the runtime extension provider versions to add to the secondary management cluster, e.g. `test:v0.0.1`.
	// If not set, the runtime extension provider version is calculated based on the contract.
	InitWithRuntimeExtensionProviders []string
	// InitWithAddonProviders specifies the add-on provider versions to add to the secondary management cluster, e.g. `helm:v0.0.1`.
	// If not set, the add-on provider version is calculated based on the contract.
	InitWithAddonProviders []string
	// UpgradeClusterctlVariables can be used to set additional variables for clusterctl upgrade.
	UpgradeClusterctlVariables map[string]string
	SkipCleanup                bool

	// InfrastructureProviders specifies the infrastructure to use for clusterctl
	// operations (Example: get cluster templates).
	// Note: In most cases this need not be specified. It only needs to be specified when
	// multiple infrastructure providers (ex: CAPD + in-memory) are installed on the cluster as clusterctl will not be
	// able to identify the default.
	InfrastructureProvider *string
	// Allows to inject a function to be run after test namespace is created.
	// If not specified, this is a no-op.
	PostNamespaceCreated func(managementClusterProxy framework.ClusterProxy, workloadClusterNamespace string)
	// PreWaitForCluster is a function that can be used as a hook to apply extra resources (that cannot be part of the template) in the generated namespace hosting the cluster
	// This function is called after applying the cluster template and before waiting for the cluster resources.
	PreWaitForCluster   func(managementClusterProxy framework.ClusterProxy, workloadClusterNamespace string, workloadClusterName string)
	ControlPlaneWaiters clusterctl.ControlPlaneWaiters
	PreInit             func(managementClusterProxy framework.ClusterProxy)
	PreUpgrade          func(managementClusterProxy framework.ClusterProxy)
	PostUpgrade         func(managementClusterProxy framework.ClusterProxy, clusterNamespace, clusterName string)
	// PreCleanupManagementCluster hook can be used for extra steps that might be required from providers, for example, removing a conflicting service (such as DHCP)
	// running on the target management cluster and running it on the bootstrap cluster (before the latter resumes LCM) if both clusters share the same LAN.
	PreCleanupManagementCluster func(managementClusterProxy framework.ClusterProxy)
	MgmtFlavor                  string
	CNIManifestPath             string
	WorkloadFlavor              string
	// WorkloadKubernetesVersion is Kubernetes version used to create the workload cluster, e.g. `v1.25.0`
	WorkloadKubernetesVersion string

	// Upgrades allows to define upgrade sequences.
	// If not set, the test will upgrade once to the v1beta1 contract.
	// For some examples see clusterctl_upgrade_test.go
	Upgrades []ClusterctlUpgradeSpecInputUpgrade

	// ControlPlaneMachineCount specifies the number of control plane machines to create in the workload cluster.
	ControlPlaneMachineCount *int64
}

ClusterctlUpgradeSpecInput is the input for ClusterctlUpgradeSpec.

type ClusterctlUpgradeSpecInputUpgrade added in v1.6.2

type ClusterctlUpgradeSpecInputUpgrade struct {
	// UpgradeWithBinary can be used to set the clusterctl binary to use for the provider upgrade. The spec will interpolate the
	// strings `{OS}` and `{ARCH}` to `runtime.GOOS` and `runtime.GOARCH` respectively, e.g. https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.23/clusterctl-{OS}-{ARCH}
	// If not set, the test will use the ApplyUpgrade function of the clusterctl library.
	WithBinary string
	// Contract defines the contract to upgrade to.
	// Either the contract *or* the *Provider fields should be defined
	// For some examples see clusterctl_upgrade_test.go
	Contract                  string
	CoreProvider              string
	BootstrapProviders        []string
	ControlPlaneProviders     []string
	InfrastructureProviders   []string
	IPAMProviders             []string
	RuntimeExtensionProviders []string
	AddonProviders            []string
}

ClusterctlUpgradeSpecInputUpgrade defines an upgrade.

type K8SConformanceSpecInput

type K8SConformanceSpecInput struct {
	E2EConfig             *clusterctl.E2EConfig
	ClusterctlConfigPath  string
	BootstrapClusterProxy framework.ClusterProxy
	ArtifactFolder        string
	SkipCleanup           bool

	// InfrastructureProviders specifies the infrastructure to use for clusterctl
	// operations (Example: get cluster templates).
	// Note: In most cases this need not be specified. It only needs to be specified when
	// multiple infrastructure providers (ex: CAPD + in-memory) are installed on the cluster as clusterctl will not be
	// able to identify the default.
	InfrastructureProvider *string

	Flavor              string
	ControlPlaneWaiters clusterctl.ControlPlaneWaiters

	// Allows to inject a function to be run after test namespace is created.
	// If not specified, this is a no-op.
	PostNamespaceCreated func(managementClusterProxy framework.ClusterProxy, workloadClusterNamespace string)
}

K8SConformanceSpecInput is the input for K8SConformanceSpec.

type KCPAdoptionSpecInput

type KCPAdoptionSpecInput struct {
	E2EConfig             *clusterctl.E2EConfig
	ClusterctlConfigPath  string
	BootstrapClusterProxy framework.ClusterProxy
	ArtifactFolder        string
	SkipCleanup           bool

	// InfrastructureProviders specifies the infrastructure to use for clusterctl
	// operations (Example: get cluster templates).
	// Note: In most cases this need not be specified. It only needs to be specified when
	// multiple infrastructure providers (ex: CAPD + in-memory) are installed on the cluster as clusterctl will not be
	// able to identify the default.
	InfrastructureProvider *string

	// Flavor, if specified, must refer to a template that is
	// specially crafted with individual control plane machines
	// and a KubeadmControlPlane resource configured for adoption.
	// The initial Cluster, InfraCluster, Machine, InfraMachine,
	// KubeadmConfig, and any other resources that should exist
	// prior to adoption must have the kcp-adoption.step1: "" label
	// applied to them. The updated Cluster (with controlPlaneRef
	// configured), InfraMachineTemplate, and KubeadmControlPlane
	// resources must have the kcp-adoption.step2: "" applied to them.
	// If not specified, "kcp-adoption" is used.
	Flavor *string

	// Allows injecting a function to be run after the test namespace is created.
	// If not specified, this is a no-op.
	PostNamespaceCreated func(managementClusterProxy framework.ClusterProxy, workloadClusterNamespace string)
}

KCPAdoptionSpecInput is the input for KCPAdoptionSpec.
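To illustrate the two-step label convention described for Flavor, here is a minimal, self-contained Go sketch. The `object` type and helper are hypothetical stand-ins for the unstructured resources the real test reads from the template; only the `kcp-adoption.step1` / `kcp-adoption.step2` label keys come from the documentation above.

```go
package main

import "fmt"

// object is a minimal stand-in for a labeled Kubernetes resource; the real
// test works with full client-go objects.
type object struct {
	Name   string
	Labels map[string]string
}

// filterByLabel returns the objects carrying the given label key,
// regardless of its (typically empty) value.
func filterByLabel(objs []object, key string) []object {
	var out []object
	for _, o := range objs {
		if _, ok := o.Labels[key]; ok {
			out = append(out, o)
		}
	}
	return out
}

func main() {
	objs := []object{
		{Name: "cluster", Labels: map[string]string{"kcp-adoption.step1": ""}},
		{Name: "machine-0", Labels: map[string]string{"kcp-adoption.step1": ""}},
		{Name: "kcp", Labels: map[string]string{"kcp-adoption.step2": ""}},
	}
	step1 := filterByLabel(objs, "kcp-adoption.step1")
	step2 := filterByLabel(objs, "kcp-adoption.step2")
	fmt.Println(len(step1), len(step2)) // 2 1
}
```

The test creates the step1 resources first, then applies the step2 resources to trigger adoption.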

type KCPRemediationSpecInput added in v1.4.0

type KCPRemediationSpecInput struct {
	// This spec requires the following intervals to be defined in order to work:
	// - wait-cluster, used when waiting for the cluster infrastructure to be provisioned.
	// - wait-machines, used when waiting for an old machine to be remediated and a new one provisioned.
	// - check-machines-stable, used when checking that the current list of machines is stable.
	// - wait-machine-provisioned, used when waiting for a machine to be provisioned after unblocking bootstrap.
	E2EConfig             *clusterctl.E2EConfig
	ClusterctlConfigPath  string
	BootstrapClusterProxy framework.ClusterProxy
	ArtifactFolder        string
	SkipCleanup           bool

	// InfrastructureProvider specifies the infrastructure provider to use for clusterctl
	// operations (e.g. get cluster templates).
	// Note: In most cases this need not be specified. It is only required when multiple
	// infrastructure providers (e.g. CAPD + in-memory) are installed on the cluster, as
	// clusterctl will not be able to identify the default.
	InfrastructureProvider *string

	// Flavor, if specified, must refer to a template with:
	// - a 3-node control plane and no workers.
	// - control plane machines with a pre-kubeadm command that queries for a well-known ConfigMap
	//   on the management cluster, holding up bootstrap until a signal is passed via the ConfigMap.
	//   NOTE: For this to work, communication from the workload cluster to the management cluster must be enabled.
	// - a MachineHealthCheck targeting control plane machines with the mhc-test=fail label and
	//     nodeStartupTimeout: 30s
	//     unhealthyConditions:
	//     - type: e2e.remediation.condition
	//       status: "False"
	//       timeout: 10s
	// If not specified, "kcp-remediation" is used.
	Flavor *string

	// Allows injecting a function to be run after the test namespace is created.
	// If not specified, this is a no-op.
	PostNamespaceCreated func(managementClusterProxy framework.ClusterProxy, workloadClusterNamespace string)
}

KCPRemediationSpecInput is the input for KCPRemediationSpec.

type MachineDeploymentRemediationSpecInput added in v1.4.0

type MachineDeploymentRemediationSpecInput struct {
	E2EConfig             *clusterctl.E2EConfig
	ClusterctlConfigPath  string
	BootstrapClusterProxy framework.ClusterProxy
	ArtifactFolder        string
	SkipCleanup           bool
	ControlPlaneWaiters   clusterctl.ControlPlaneWaiters

	// InfrastructureProvider specifies the infrastructure provider to use for clusterctl
	// operations (e.g. get cluster templates).
	// Note: In most cases this need not be specified. It is only required when multiple
	// infrastructure providers (e.g. CAPD + in-memory) are installed on the cluster, as
	// clusterctl will not be able to identify the default.
	InfrastructureProvider *string

	// Flavor, if specified, must refer to a template that has a MachineHealthCheck
	// resource configured to match the MachineDeployment-managed Machines and to
	// treat the "e2e.remediation.condition" condition with status "False" as an
	// unhealthy condition with a short timeout.
	// If not specified, "md-remediation" is used.
	Flavor *string

	// Allows injecting a function to be run after the test namespace is created.
	// If not specified, this is a no-op.
	PostNamespaceCreated func(managementClusterProxy framework.ClusterProxy, workloadClusterNamespace string)
}

MachineDeploymentRemediationSpecInput is the input for MachineDeploymentRemediationSpec.

type MachineDeploymentRolloutSpecInput added in v0.4.1

type MachineDeploymentRolloutSpecInput struct {
	E2EConfig             *clusterctl.E2EConfig
	ClusterctlConfigPath  string
	BootstrapClusterProxy framework.ClusterProxy
	ArtifactFolder        string
	SkipCleanup           bool
	ControlPlaneWaiters   clusterctl.ControlPlaneWaiters
	Flavor                string

	// InfrastructureProvider specifies the infrastructure provider to use for clusterctl
	// operations (e.g. get cluster templates).
	// Note: In most cases this need not be specified. It is only required when multiple
	// infrastructure providers (e.g. CAPD + in-memory) are installed on the cluster, as
	// clusterctl will not be able to identify the default.
	InfrastructureProvider *string

	// Allows injecting a function to be run after the test namespace is created.
	// If not specified, this is a no-op.
	PostNamespaceCreated func(managementClusterProxy framework.ClusterProxy, workloadClusterNamespace string)
}

MachineDeploymentRolloutSpecInput is the input for MachineDeploymentRolloutSpec.

type MachineDeploymentScaleSpecInput

type MachineDeploymentScaleSpecInput struct {
	E2EConfig             *clusterctl.E2EConfig
	ClusterctlConfigPath  string
	BootstrapClusterProxy framework.ClusterProxy
	ArtifactFolder        string
	SkipCleanup           bool
	ControlPlaneWaiters   clusterctl.ControlPlaneWaiters
	Flavor                string

	// InfrastructureProvider specifies the infrastructure provider to use for clusterctl
	// operations (e.g. get cluster templates).
	// Note: In most cases this need not be specified. It is only required when multiple
	// infrastructure providers (e.g. CAPD + in-memory) are installed on the cluster, as
	// clusterctl will not be able to identify the default.
	InfrastructureProvider *string

	// Allows injecting a function to be run after the test namespace is created.
	// If not specified, this is a no-op.
	PostNamespaceCreated func(managementClusterProxy framework.ClusterProxy, workloadClusterNamespace string)
}

MachineDeploymentScaleSpecInput is the input for MachineDeploymentScaleSpec.

type NodeDrainTimeoutSpecInput

type NodeDrainTimeoutSpecInput struct {
	E2EConfig             *clusterctl.E2EConfig
	ClusterctlConfigPath  string
	BootstrapClusterProxy framework.ClusterProxy
	ArtifactFolder        string
	SkipCleanup           bool
	ControlPlaneWaiters   clusterctl.ControlPlaneWaiters

	// InfrastructureProvider specifies the infrastructure provider to use for clusterctl
	// operations (e.g. get cluster templates).
	// Note: In most cases this need not be specified. It is only required when multiple
	// infrastructure providers (e.g. CAPD + in-memory) are installed on the cluster, as
	// clusterctl will not be able to identify the default.
	InfrastructureProvider *string

	// Flavor, if specified, must refer to a template that contains
	// a KubeadmControlPlane resource with spec.machineTemplate.nodeDrainTimeout
	// configured and a MachineDeployment resource that has
	// spec.template.spec.nodeDrainTimeout configured.
	// If not specified, "node-drain" is used.
	Flavor *string

	// Allows injecting a function to be run after the test namespace is created.
	// If not specified, this is a no-op.
	PostNamespaceCreated func(managementClusterProxy framework.ClusterProxy, workloadClusterNamespace string)
}

NodeDrainTimeoutSpecInput is the input for NodeDrainTimeoutSpec.

type QuickStartSpecInput

type QuickStartSpecInput struct {
	E2EConfig             *clusterctl.E2EConfig
	ClusterctlConfigPath  string
	BootstrapClusterProxy framework.ClusterProxy
	ArtifactFolder        string
	SkipCleanup           bool

	// ClusterName allows specifying a deterministic cluster name.
	// If not set, a random one will be generated.
	ClusterName *string

	// InfrastructureProvider allows specifying the infrastructure provider to be used when looking for
	// cluster templates.
	// If not set, clusterctl will look at the infrastructure providers installed in the management cluster;
	// if exactly one infrastructure provider exists it will be used, otherwise the operation will fail.
	InfrastructureProvider *string

	// Flavor, if specified, is the template flavor used to create the cluster for testing.
	// If not specified, the default flavor for the selected infrastructure provider is used.
	Flavor *string

	// ControlPlaneMachineCount defines the number of control plane machines to be added to the workload cluster.
	// If not specified, 1 will be used.
	ControlPlaneMachineCount *int64

	// WorkerMachineCount defines the number of worker machines to be added to the workload cluster.
	// If not specified, 1 will be used.
	WorkerMachineCount *int64

	// Allows injecting functions to be run while waiting for the control plane to be initialized,
	// which unblocks CNI installation, and for the control plane machines to be ready (after CNI installation).
	ControlPlaneWaiters clusterctl.ControlPlaneWaiters

	// Allows injecting a function to be run after the test namespace is created.
	// If not specified, this is a no-op.
	PostNamespaceCreated func(managementClusterProxy framework.ClusterProxy, workloadClusterNamespace string)

	// Allows injecting a function to be run after machines are provisioned.
	// If not specified, this is a no-op.
	PostMachinesProvisioned func(managementClusterProxy framework.ClusterProxy, workloadClusterNamespace, workloadClusterName string)
}

QuickStartSpecInput is the input for QuickStartSpec.
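Several fields above (ClusterName, Flavor, ControlPlaneMachineCount, WorkerMachineCount) are pointers so that "not specified" is distinguishable from a zero value. A minimal, self-contained sketch of that defaulting pattern, with `valueOrDefault` as a hypothetical helper (the spec's own defaulting code is not shown here):

```go
package main

import "fmt"

// valueOrDefault returns *p when the pointer is set, or def when it is nil.
// A nil pointer means "not specified", which lets optional fields fall back
// to a default (e.g. 1 for ControlPlaneMachineCount) while still allowing
// an explicit zero.
func valueOrDefault[T any](p *T, def T) T {
	if p == nil {
		return def
	}
	return *p
}

func main() {
	var controlPlaneMachineCount *int64 // not specified
	workers := int64(3)

	fmt.Println(valueOrDefault(controlPlaneMachineCount, 1)) // 1 (default applied)
	fmt.Println(valueOrDefault(&workers, 1))                 // 3 (explicit value wins)
}
```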

type SelfHostedSpecInput

type SelfHostedSpecInput struct {
	E2EConfig             *clusterctl.E2EConfig
	ClusterctlConfigPath  string
	BootstrapClusterProxy framework.ClusterProxy
	ArtifactFolder        string
	SkipCleanup           bool
	ControlPlaneWaiters   clusterctl.ControlPlaneWaiters
	Flavor                string

	// InfrastructureProvider specifies the infrastructure provider to use for clusterctl
	// operations (e.g. get cluster templates).
	// Note: In most cases this need not be specified. It is only required when multiple
	// infrastructure providers (e.g. CAPD + in-memory) are installed on the cluster, as
	// clusterctl will not be able to identify the default.
	InfrastructureProvider *string

	// SkipUpgrade skips the upgrade of the self-hosted cluster's Kubernetes version.
	// If true, the variable KUBERNETES_VERSION is expected to be set.
	// If false, the variables KUBERNETES_VERSION_UPGRADE_FROM, KUBERNETES_VERSION_UPGRADE_TO,
	// ETCD_VERSION_UPGRADE_TO and COREDNS_VERSION_UPGRADE_TO are expected to be set.
	// There are also (optional) variables CONTROL_PLANE_MACHINE_TEMPLATE_UPGRADE_TO and
	// WORKERS_MACHINE_TEMPLATE_UPGRADE_TO to change the infrastructure machine template
	// during the upgrade. Note that these templates need to have the clusterctl.cluster.x-k8s.io/move
	// label in order to be moved to the self-hosted cluster (since they are not part of the owner chain).
	SkipUpgrade bool

	// ControlPlaneMachineCount is used in `config cluster` to configure the count of the control plane machines used in the test.
	// Default is 1.
	ControlPlaneMachineCount *int64

	// WorkerMachineCount is used in `config cluster` to configure the count of the worker machines used in the test.
	// NOTE: If the WORKER_MACHINE_COUNT var is used multiple times in the cluster template, the absolute count of
	// worker machines is a multiple of WorkerMachineCount.
	// Default is 1.
	WorkerMachineCount *int64

	// Allows injecting a function to be run after the test namespace is created.
	// If not specified, this is a no-op.
	PostNamespaceCreated func(managementClusterProxy framework.ClusterProxy, workloadClusterNamespace string)
}

SelfHostedSpecInput is the input for SelfHostedSpec.
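The NOTE on WorkerMachineCount (the absolute worker count is a multiple of WorkerMachineCount when WORKER_MACHINE_COUNT appears several times in the template) can be illustrated with a self-contained Go sketch. The template snippet and `expectedWorkers` helper are hypothetical illustrations, not real cluster-template content:

```go
package main

import (
	"fmt"
	"strings"
)

// expectedWorkers estimates the absolute number of worker machines: if the
// WORKER_MACHINE_COUNT variable appears n times in a cluster template
// (e.g. once per MachineDeployment), the total is n * workerMachineCount.
func expectedWorkers(template string, workerMachineCount int64) int64 {
	n := int64(strings.Count(template, "${WORKER_MACHINE_COUNT}"))
	return n * workerMachineCount
}

func main() {
	// Hypothetical template with two MachineDeployments, each using the variable.
	template := `
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
spec:
  replicas: ${WORKER_MACHINE_COUNT}
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
spec:
  replicas: ${WORKER_MACHINE_COUNT}
`
	fmt.Println(expectedWorkers(template, 2)) // 4: the variable appears twice, 2 * 2
}
```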

Directories

Path Synopsis
internal/log Package log implements log handling.
