Documentation ¶
Index ¶
- func CheckControlPlaneReady(ctx context.Context, client client.Client, log logr.Logger, ...) (controller.Result, error)
- func CleanupStatusAfterValidate(_ context.Context, _ logr.Logger, spec *cluster.Spec) (controller.Result, error)
- func FetchManagementEksaCluster(ctx context.Context, cli client.Client, cluster *v1alpha1.Cluster) (*v1alpha1.Cluster, error)
- func ReconcileControlPlane(ctx context.Context, log logr.Logger, c client.Client, cp *ControlPlane) (controller.Result, error)
- func ReconcileWorkers(ctx context.Context, c client.Client, cluster *clusterv1.Cluster, w *Workers) (controller.Result, error)
- func ReconcileWorkersForEKSA(ctx context.Context, log logr.Logger, c client.Client, ...) (controller.Result, error)
- func UpdateClusterStatusForCNI(ctx context.Context, cluster *anywherev1.Cluster)
- func UpdateClusterStatusForControlPlane(ctx context.Context, client client.Client, cluster *anywherev1.Cluster) error
- func UpdateClusterStatusForWorkers(ctx context.Context, client client.Client, cluster *anywherev1.Cluster) error
- type ClusterValidator
- type ControlPlane
- type IPUniquenessValidator
- type IPValidator
- type ProviderClusterReconciler
- type ProviderClusterReconcilerRegistry
- type ProviderClusterReconcilerRegistryBuilder
- type WorkerGroup
- type Workers
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
func CheckControlPlaneReady ¶ added in v0.12.0
func CheckControlPlaneReady(ctx context.Context, client client.Client, log logr.Logger, cluster *anywherev1.Cluster) (controller.Result, error)
CheckControlPlaneReady is a controller helper that checks whether the KubeadmControlPlane (KCP) object for the cluster is ready. It is intended to be used from cluster reconcilers due to its signature: it returns controller results with appropriate wait times whenever the cluster is not ready.
func CleanupStatusAfterValidate ¶ added in v0.13.0
func CleanupStatusAfterValidate(_ context.Context, _ logr.Logger, spec *cluster.Spec) (controller.Result, error)
CleanupStatusAfterValidate removes errors from the cluster status. Intended to be used as a reconciler phase after all validation phases have been executed.
func FetchManagementEksaCluster ¶ added in v0.16.4
func FetchManagementEksaCluster(ctx context.Context, cli client.Client, cluster *v1alpha1.Cluster) (*v1alpha1.Cluster, error)
FetchManagementEksaCluster returns the management cluster object for a given workload Cluster. If the management cluster cannot be found in the same namespace as the current cluster, it attempts to list clusters with that name across all namespaces. If multiple matches are found, it returns an error as well; this usually should not happen, because these clusters get mapped to a cluster-api Cluster object in the eksa-system namespace, and it is not possible to have multiple resources with the same name within a namespace.
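The lookup order described above (same namespace first, then a cross-namespace search by name, erroring on ambiguity) can be sketched with a self-contained example. The map-backed store and the `key` type are hypothetical simplifications standing in for the real client-based lookup:

```go
package main

import (
	"errors"
	"fmt"
)

// key identifies a cluster by namespace and name.
type key struct{ namespace, name string }

// fetchManagement sketches the FetchManagementEksaCluster flow: try the
// workload cluster's own namespace first, then fall back to searching all
// namespaces by name, failing if no match or more than one match is found.
func fetchManagement(store map[key]bool, namespace, mgmtName string) (key, error) {
	if k := (key{namespace, mgmtName}); store[k] {
		return k, nil
	}
	var matches []key
	for c := range store {
		if c.name == mgmtName {
			matches = append(matches, c)
		}
	}
	switch len(matches) {
	case 0:
		return key{}, errors.New("management cluster not found")
	case 1:
		return matches[0], nil
	default:
		return key{}, errors.New("found multiple clusters with the same name")
	}
}

func main() {
	store := map[key]bool{{"eksa-system", "mgmt"}: true}
	// Workload cluster lives in "default"; lookup falls back to the
	// cross-namespace search and finds the single match.
	k, err := fetchManagement(store, "default", "mgmt")
	fmt.Println(k, err)
}
```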
func ReconcileControlPlane ¶ added in v0.13.0
func ReconcileControlPlane(ctx context.Context, log logr.Logger, c client.Client, cp *ControlPlane) (controller.Result, error)
ReconcileControlPlane orchestrates the ControlPlane reconciliation logic.
func ReconcileWorkers ¶ added in v0.13.0
func ReconcileWorkers(ctx context.Context, c client.Client, cluster *clusterv1.Cluster, w *Workers) (controller.Result, error)
ReconcileWorkers orchestrates the worker node reconciliation logic. It takes care of applying all desired objects in the Workers spec and deleting the old MachineDeployments that are not in it.
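The apply-then-prune behavior described above can be sketched as a set difference. This is an illustrative simplification: plain names stand in for MachineDeployment objects, and the real function performs the apply and delete calls against the API server.

```go
package main

import "fmt"

// reconcileNames sketches the ReconcileWorkers cleanup logic: everything in
// desired is applied, and any existing deployment absent from desired is
// deleted.
func reconcileNames(existing, desired []string) (applied, deleted []string) {
	want := make(map[string]bool, len(desired))
	for _, d := range desired {
		want[d] = true
		applied = append(applied, d)
	}
	for _, e := range existing {
		if !want[e] {
			deleted = append(deleted, e)
		}
	}
	return applied, deleted
}

func main() {
	applied, deleted := reconcileNames(
		[]string{"md-0", "md-1"}, // existing MachineDeployments
		[]string{"md-1", "md-2"}, // desired Workers spec
	)
	fmt.Println(applied, deleted) // [md-1 md-2] [md-0]
}
```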
func ReconcileWorkersForEKSA ¶ added in v0.13.0
func ReconcileWorkersForEKSA(ctx context.Context, log logr.Logger, c client.Client, cluster *anywherev1.Cluster, w *Workers) (controller.Result, error)
ReconcileWorkersForEKSA orchestrates the worker node reconciliation logic for a particular EKS-A cluster. It takes care of applying all desired objects in the Workers spec and deleting the old MachineDeployments that are not in it.
func UpdateClusterStatusForCNI ¶ added in v0.17.0
func UpdateClusterStatusForCNI(ctx context.Context, cluster *anywherev1.Cluster)
UpdateClusterStatusForCNI updates the Cluster status for the default CNI before the control plane is ready. The CNI reconciler handles the rest of the logic for determining the condition and updating the status based on the current state of the cluster.
func UpdateClusterStatusForControlPlane ¶ added in v0.17.0
func UpdateClusterStatusForControlPlane(ctx context.Context, client client.Client, cluster *anywherev1.Cluster) error
UpdateClusterStatusForControlPlane checks the current state of the Cluster's control plane and updates the Cluster status information. It may not update the control plane status, especially in cases where it is still waiting for cluster objects to be created.
func UpdateClusterStatusForWorkers ¶ added in v0.17.0
func UpdateClusterStatusForWorkers(ctx context.Context, client client.Client, cluster *anywherev1.Cluster) error
UpdateClusterStatusForWorkers checks the current state of the Cluster's workers and updates the Cluster status information.
Types ¶
type ClusterValidator ¶ added in v0.15.0
type ClusterValidator struct {
// contains filtered or unexported fields
}
ClusterValidator runs cluster level validations.
func NewClusterValidator ¶ added in v0.15.0
func NewClusterValidator(client client.Client) *ClusterValidator
NewClusterValidator returns a validator that will run cluster level validations.
func (*ClusterValidator) ValidateManagementClusterName ¶ added in v0.15.0
func (v *ClusterValidator) ValidateManagementClusterName(ctx context.Context, log logr.Logger, cluster *anywherev1.Cluster) error
ValidateManagementClusterName checks if the management cluster specified in the workload cluster spec is valid.
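The shape of this validation can be sketched as follows. This is a hypothetical simplification: a map of known clusters stands in for API-server lookups, and the exact checks the real validator performs may differ.

```go
package main

import (
	"errors"
	"fmt"
)

// validateManagementClusterName sketches the check: the workload cluster's
// spec must name a cluster that exists and is itself a management cluster.
// managementClusters maps cluster name -> "is a management cluster".
func validateManagementClusterName(specName string, managementClusters map[string]bool) error {
	isMgmt, ok := managementClusters[specName]
	if !ok {
		return fmt.Errorf("management cluster %q not found", specName)
	}
	if !isMgmt {
		return errors.New("referenced cluster is not a management cluster")
	}
	return nil
}

func main() {
	clusters := map[string]bool{"mgmt": true, "workload-1": false}
	fmt.Println(validateManagementClusterName("mgmt", clusters))
	fmt.Println(validateManagementClusterName("workload-1", clusters))
}
```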
type ControlPlane ¶ added in v0.13.0
type ControlPlane struct {
	Cluster *clusterv1.Cluster

	// ProviderCluster is the provider-specific resource that holds the details
	// for provisioning the infrastructure, referenced in Cluster.Spec.InfrastructureRef
	ProviderCluster client.Object

	KubeadmControlPlane *controlplanev1.KubeadmControlPlane

	// ControlPlaneMachineTemplate is the provider-specific machine template referenced
	// in KubeadmControlPlane.Spec.MachineTemplate.InfrastructureRef
	ControlPlaneMachineTemplate client.Object

	EtcdCluster *etcdv1.EtcdadmCluster

	// EtcdMachineTemplate is the provider-specific machine template referenced
	// in EtcdCluster.Spec.InfrastructureTemplate
	EtcdMachineTemplate client.Object

	// Other includes any other provider-specific objects that need to be reconciled
	// as part of the control plane.
	Other []client.Object
}
ControlPlane represents a CAPI spec for a Kubernetes cluster.
func (*ControlPlane) AllObjects ¶ added in v0.13.0
func (cp *ControlPlane) AllObjects() []client.Object
AllObjects returns all the control plane objects.
type IPUniquenessValidator ¶ added in v0.13.0
type IPUniquenessValidator interface {
ValidateControlPlaneIPUniqueness(cluster *anywherev1.Cluster) error
}
IPUniquenessValidator defines an interface for the methods to validate the control plane IP.
type IPValidator ¶ added in v0.13.0
type IPValidator struct {
// contains filtered or unexported fields
}
IPValidator validates control plane IP.
func NewIPValidator ¶ added in v0.13.0
func NewIPValidator(ipUniquenessValidator IPUniquenessValidator, client client.Client) *IPValidator
NewIPValidator returns a new IPValidator.
func (*IPValidator) ValidateControlPlaneIP ¶ added in v0.13.0
func (i *IPValidator) ValidateControlPlaneIP(ctx context.Context, log logr.Logger, spec *cluster.Spec) (controller.Result, error)
ValidateControlPlaneIP only validates IP on cluster creation.
type ProviderClusterReconciler ¶
type ProviderClusterReconciler interface {
	// Reconcile handles the full cluster reconciliation.
	Reconcile(ctx context.Context, log logr.Logger, cluster *anywherev1.Cluster) (controller.Result, error)
}
ProviderClusterReconciler reconciles a provider specific eks-a cluster.
type ProviderClusterReconcilerRegistry ¶
type ProviderClusterReconcilerRegistry struct {
// contains filtered or unexported fields
}
ProviderClusterReconcilerRegistry holds a collection of cluster provider reconcilers and ties them to different provider Datacenter kinds.
func (*ProviderClusterReconcilerRegistry) Get ¶
func (r *ProviderClusterReconcilerRegistry) Get(datacenterKind string) ProviderClusterReconciler
Get returns ProviderClusterReconciler for a particular Datacenter kind.
type ProviderClusterReconcilerRegistryBuilder ¶
type ProviderClusterReconcilerRegistryBuilder struct {
// contains filtered or unexported fields
}
ProviderClusterReconcilerRegistryBuilder builds ProviderClusterReconcilerRegistry instances.
func NewProviderClusterReconcilerRegistryBuilder ¶
func NewProviderClusterReconcilerRegistryBuilder() *ProviderClusterReconcilerRegistryBuilder
NewProviderClusterReconcilerRegistryBuilder returns a new empty ProviderClusterReconcilerRegistryBuilder.
func (*ProviderClusterReconcilerRegistryBuilder) Add ¶
func (b *ProviderClusterReconcilerRegistryBuilder) Add(datacenterKind string, reconciler ProviderClusterReconciler) *ProviderClusterReconcilerRegistryBuilder
Add accumulates a pair of datacenter kind and reconciler to be included in the final registry.
func (*ProviderClusterReconcilerRegistryBuilder) Build ¶
func (b *ProviderClusterReconcilerRegistryBuilder) Build() ProviderClusterReconcilerRegistry
Build returns a registry with all the previously added reconcilers.
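The Add/Build/Get flow can be sketched with a self-contained version of the pattern. The types here are illustrative stand-ins (a `Reconciler` interface with a `Name` method instead of the real `ProviderClusterReconciler`, and string literals for datacenter kinds), but the chaining and map-backed lookup mirror the API above:

```go
package main

import "fmt"

// Reconciler stands in for ProviderClusterReconciler.
type Reconciler interface{ Name() string }

type named string

func (n named) Name() string { return string(n) }

// registryBuilder sketches ProviderClusterReconcilerRegistryBuilder:
// Add accumulates (datacenter kind, reconciler) pairs and returns the
// builder for chaining; Build produces the finished registry.
type registryBuilder struct {
	reconcilers map[string]Reconciler
}

func newBuilder() *registryBuilder {
	return &registryBuilder{reconcilers: map[string]Reconciler{}}
}

func (b *registryBuilder) Add(kind string, r Reconciler) *registryBuilder {
	b.reconcilers[kind] = r
	return b
}

type registry struct{ reconcilers map[string]Reconciler }

func (b *registryBuilder) Build() registry {
	return registry{reconcilers: b.reconcilers}
}

// Get returns the reconciler registered for a datacenter kind.
func (r registry) Get(kind string) Reconciler { return r.reconcilers[kind] }

func main() {
	reg := newBuilder().
		Add("VSphereDatacenterConfig", named("vsphere")).
		Add("DockerDatacenterConfig", named("docker")).
		Build()
	fmt.Println(reg.Get("DockerDatacenterConfig").Name()) // docker
}
```

The builder keeps registration chainable while the built registry exposes only the read-side Get, so reconcilers cannot be added after Build.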
type WorkerGroup ¶ added in v0.13.0
type WorkerGroup struct {
	KubeadmConfigTemplate   *kubeadmv1.KubeadmConfigTemplate
	MachineDeployment       *clusterv1.MachineDeployment
	ProviderMachineTemplate client.Object
}
WorkerGroup represents the CAPI spec for an eks-a worker group.
type Workers ¶ added in v0.13.0
type Workers struct {
	Groups []WorkerGroup

	// Other includes any other provider-specific objects that need to be reconciled
	// as part of the worker groups.
	Other []client.Object
}
Workers represents the CAPI spec for an eks-a cluster's workers.
func ToWorkers ¶ added in v0.13.0
func ToWorkers[M clusterapi.Object[M]](capiWorkers *clusterapi.Workers[M]) *Workers
ToWorkers converts the generic clusterapi Workers definition to the concrete one defined here. It is just a helper for callers that generate a workers spec using the clusterapi package.
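The generic-to-concrete conversion can be sketched with simplified types. This is a hypothetical illustration: `genericWorkers[M]` stands in for `clusterapi.Workers[M]`, and `fmt.Stringer` stands in for the `client.Object` interface behind which the provider machine template type M is erased.

```go
package main

import "fmt"

// genericWorkers mimics clusterapi.Workers[M]: worker data parameterized by
// a provider-specific machine template type M.
type genericWorkers[M any] struct {
	Groups []M
}

// workers mimics the concrete Workers type here: M is erased to a shared
// interface (client.Object in the real code, fmt.Stringer in this sketch).
type workers struct {
	Groups []fmt.Stringer
}

// toWorkers sketches the conversion: each provider-specific template is
// carried over behind the shared interface, so downstream reconciliation
// code does not need the type parameter.
func toWorkers[M fmt.Stringer](g *genericWorkers[M]) *workers {
	w := &workers{Groups: make([]fmt.Stringer, 0, len(g.Groups))}
	for _, m := range g.Groups {
		w.Groups = append(w.Groups, m)
	}
	return w
}

type tmpl string

func (t tmpl) String() string { return string(t) }

func main() {
	g := &genericWorkers[tmpl]{Groups: []tmpl{"md-0", "md-1"}}
	fmt.Println(len(toWorkers(g).Groups)) // 2
}
```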